1. 16 Oct, 2017 15 commits
  2. 15 Oct, 2017 3 commits
  3. 14 Oct, 2017 3 commits
    • image/gif: write fewer, bigger blocks · 8b220d8e
      Jed Denlea authored
      The indexed bitmap of a frame is encoded into a GIF by first applying LZW
      compression, and then packaging the result with a simple block mechanism.  Each block
      of up-to-256 bytes starts with one byte, which indicates the size of the
      block (0x01-0xff). The sequence of blocks is terminated by a 0x00.
      
      While the format supports it, there is no good reason why any particular
      image should be anything but a sequence of 255-byte blocks with one last
      block of fewer than 255 bytes.
      
      The old blockWriter implementation would not buffer between Write()s,
      meaning if the lzw Writer needs to flush more than one chunk of data via
      a Write, multiple short blocks might exist in the middle of a stream.
      
      Separate but related, the old implementation also forces lzw.NewWriter
      to allocate a bufio.Writer because the blockWriter is not an
      io.ByteWriter itself.  But, even though it doesn't effectively buffer
      data between Writes, it does make extra copies of sub-blocks during the
      course of writing them to the GIF's writer.
      
      Now, the blockWriter shall continue to use the encoder's [256]byte buf,
      but use it to effectively buffer a series of WriteByte calls from the
      lzw Writer.  Once a WriteByte fills the buffer, the staged block is
      Write()n to the underlying GIF writer.  After the lzw Writer is Closed,
      the blockWriter should also be closed, which will flush any remaining
      block along with the block terminator.
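
      A minimal sketch of the buffering scheme described above (illustrative
      names, not the actual image/gif source):

        // Sketch only; assumes "io" is imported.
        type blockBuffer struct {
            w   io.Writer
            buf [256]byte // buf[0] holds the block length, buf[1:] the payload
            n   int       // payload bytes currently staged in buf[1 : 1+n]
        }

        func (b *blockBuffer) WriteByte(c byte) error {
            b.buf[1+b.n] = c
            b.n++
            if b.n == 255 {
                return b.flush()
            }
            return nil
        }

        func (b *blockBuffer) flush() error {
            if b.n == 0 {
                return nil
            }
            b.buf[0] = byte(b.n) // one length byte, then up to 255 data bytes
            _, err := b.w.Write(b.buf[:1+b.n])
            b.n = 0
            return err
        }

        func (b *blockBuffer) Close() error {
            if err := b.flush(); err != nil {
                return err
            }
            _, err := b.w.Write([]byte{0x00}) // block terminator
            return err
        }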
      
      BenchmarkEncode indicates slight improvements:
      
      name      old time/op    new time/op    delta
      Encode-8    7.71ms ± 0%    7.38ms ± 0%   -4.27%  (p=0.008 n=5+5)
      
      name      old speed      new speed      delta
      Encode-8   159MB/s ± 0%   167MB/s ± 0%   +4.46%  (p=0.008 n=5+5)
      
      name      old alloc/op   new alloc/op   delta
      Encode-8    84.1kB ± 0%    80.0kB ± 0%   -4.94%  (p=0.008 n=5+5)
      
      name      old allocs/op  new allocs/op  delta
      Encode-8      9.00 ± 0%      7.00 ± 0%  -22.22%  (p=0.008 n=5+5)
      
      Change-Id: I9eb9367d41d7c3d4d7f0adc9b720fc24fb50006a
      Reviewed-on: https://go-review.googlesource.com/68351
      Reviewed-by: Nigel Tao <nigeltao@golang.org>
    • cmd/compile: omit ICE diagnostics after normal error messages · f3d4ff7d
      Matthew Dempsky authored
      After we detect errors, the AST is in a precarious state and more
      likely to trip useless ICE failures. Instead let the user fix any
      existing errors and see if the ICE persists.  This makes Fatalf more
      consistent with how panics are handled by hidePanic.
      
      While here, also fix detection for release versions: release version
      strings begin with "go" ("go1.8", "go1.9.1", etc), not "release".
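
      A rough sketch of the two behaviors (simplified; the compiler's actual
      error counters and exit paths differ):

        // Sketch only; assumes "fmt", "os", "runtime" and "strings" are imported.
        func fatalf(nerrors int, format string, args ...interface{}) {
            if nerrors > 0 {
                // Ordinary errors were already reported; skip the ICE banner
                // and let the user fix those first.
                os.Exit(2)
            }
            fmt.Fprintf(os.Stderr, "internal compiler error: "+format+"\n", args...)
            if strings.HasPrefix(runtime.Version(), "go") {
                // Release versions ("go1.8", "go1.9.1", ...): exit without a
                // stack trace and point the user at the issue tracker instead.
                os.Exit(2)
            }
            panic("internal compiler error") // development versions: crash loudly
        }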
      
      Fixes #22252.
      
      Change-Id: I1c400af62fb49dd979b96e1bf0fb295a81c8b336
      Reviewed-on: https://go-review.googlesource.com/70850
      Run-TryBot: Matthew Dempsky <mdempsky@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • cmd/compile: mark LoweredGetCallerPC rematerializeable · e01eac37
      Cherry Zhang authored
      The caller's PC is always available in the frame. We can just
      load it when needed, no need to spill.
      
      Change-Id: I9c0a525903e574bb4eec9fe53cbeb8c64321166a
      Reviewed-on: https://go-review.googlesource.com/70710
      Run-TryBot: Cherry Zhang <cherryyz@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: David Chase <drchase@google.com>
  4. 13 Oct, 2017 19 commits
    • crypto/tls: replace signatureAndHash by SignatureScheme. · d1bbdbe7
      Peter Wu authored
      Consolidate the signature and hash fields (SignatureAndHashAlgorithm in
      TLS 1.2) into a single uint16 (SignatureScheme in TLS 1.3 draft 21).
      This makes it easier to add RSASSA-PSS for TLS 1.2 in the future.
      
      Fields were named like "signatureAlgorithm" rather than
      "signatureScheme" since that name is also used throughout the 1.3 draft.
      
      The only new public symbol is ECDSAWithSHA1, other than that this is an
      internal change with no new functionality.
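
      For orientation, the packed representation looks like this (values from
      the TLS registries; a sketch rather than the package source):

        type SignatureScheme uint16

        const (
            // In TLS 1.2 terms the high byte is the hash and the low byte the
            // signature algorithm; TLS 1.3 treats the uint16 as opaque.
            PKCS1WithSHA256        SignatureScheme = 0x0401 // SHA-256 + RSA
            ECDSAWithP256AndSHA256 SignatureScheme = 0x0403 // SHA-256 + ECDSA
            ECDSAWithSHA1          SignatureScheme = 0x0203 // SHA-1 + ECDSA (the newly exported name)
        )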
      
      Change-Id: Iba63d262ab1af895420583ac9e302d9705a7e0f0
      Reviewed-on: https://go-review.googlesource.com/62210
      Reviewed-by: Adam Langley <agl@golang.org>
    • cmd/link: use the correct module data on ppc64le · c996d07f
      David Crawshaw authored
      Fixes #22250
      
      Change-Id: I0e39d10ff6f0785cd22b0105de2d839e569db4b7
      Reviewed-on: https://go-review.googlesource.com/70810
      Run-TryBot: David Crawshaw <crawshaw@golang.org>
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
    • runtime: schedule fractional workers on all Ps · e09dbaa1
      Austin Clements authored
      Currently only a single P can run a fractional mark worker at a time.
      This doesn't let us spread out the load, so it gets concentrated on
      whatever unlucky P picks up the token to run a fractional worker. This
      can significantly delay goroutines on that P.
      
      This commit changes this scheduling rule so each P separately
      schedules fractional workers. This can significantly reduce the load
      on any individual P and allows workers to self-preempt earlier. It
      does have the downside that it's possible for all Ps to be in
      fractional workers simultaneously (in effect a STW).
      
      Updates #21698.
      
      Change-Id: Ia1e300c422043fa62bb4e3dd23c6232d81e4419c
      Reviewed-on: https://go-review.googlesource.com/68574
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: preempt fractional worker after reaching utilization goal · 28e1a8e4
      Austin Clements authored
      Currently fractional workers run until preempted by the scheduler,
      which means they typically run for 20ms. During this time, all other
      goroutines on that P are blocked, which can introduce significant
      latency variance.
      
      This modifies fractional workers to self-preempt shortly after
      achieving the fractional utilization goal. In practice this means they
      preempt much sooner, and the scale of their preemption is on the order
      of how often the user goroutines block (so, if the application is
      compute-bound, the fractional workers will also run for long times,
      but if the application blocks frequently, the fractional workers will
      also preempt quickly).
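
      The self-preemption decision amounts to a check along these lines (a
      simplified sketch; the runtime's bookkeeping and names differ):

        // Sketch only; assumes "time" is imported.
        // shouldYield reports whether a fractional worker has already delivered
        // its share of CPU time since the current GC cycle started on this P.
        func shouldYield(workerTime, elapsed time.Duration, fractionalGoal float64) bool {
            if elapsed <= 0 {
                return false
            }
            // Yield as soon as the goal is met rather than waiting the ~20ms
            // until the scheduler preempts the worker.
            return float64(workerTime)/float64(elapsed) > fractionalGoal
        }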
      
      Fixes #21698.
      Updates #18534.
      
      Change-Id: I03a5ab195dae93154a46c32083c4bb52415d2017
      Reviewed-on: https://go-review.googlesource.com/68573
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: simplify fractional mark worker scheduler · b783930e
      Austin Clements authored
      We haven't used non-zero gcForcePreemptNS for ages. Remove it and
      declutter the code.
      
      Change-Id: Id5cc62f526d21ca394d2b6ca17d34a72959535da
      Reviewed-on: https://go-review.googlesource.com/68572
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: use only dedicated mark workers at reasonable GOMAXPROCS · 315c28b7
      Austin Clements authored
      When GOMAXPROCS is not small, fractional workers don't add much to
      throughput, but they do add to the latency of individual goroutines.
      In this case, it makes sense to just use dedicated workers, even if we
      can't exactly hit the 25% CPU goal with dedicated workers.
      
      This implements that logic by computing the number of dedicated mark
      workers that will get us closest to the 25% target. We only fall back to
      fractional workers if that would be more than 30% off of the target
      (less than 17.5% or more than 32.5%, which in practice happens for
      GOMAXPROCS <= 3 and GOMAXPROCS == 6).
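
      The selection can be sketched as follows (approximate; the real
      computation lives in the GC pacer's cycle setup):

        // markWorkerPlan picks the number of dedicated workers and whether a
        // fractional worker is still needed to approach the 25% CPU goal.
        func markWorkerPlan(gomaxprocs int) (dedicated int, needFractional bool) {
            const goal = 0.25        // background GC's target share of CPU
            const maxUtilError = 0.3 // tolerate dedicated-only plans within 30% of goal

            target := float64(gomaxprocs) * goal
            dedicated = int(target + 0.5) // round to the nearest whole worker
            utilError := float64(dedicated)/target - 1
            if utilError < -maxUtilError || utilError > maxUtilError {
                // Too far off (in practice GOMAXPROCS <= 3 or == 6): drop one
                // dedicated worker and cover the remainder fractionally.
                if dedicated > 0 {
                    dedicated--
                }
                needFractional = true
            }
            return dedicated, needFractional
        }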
      
      Updates #21698.
      
      Change-Id: I484063adeeaa1190200e4ef210193a20e635d552
      Reviewed-on: https://go-review.googlesource.com/68571
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: separate GC background utilization from goal utilization · 27923482
      Austin Clements authored
      Currently these are the same constant, but are separate concepts.
      Split them into two constants for easier experimentation and better
      documentation.
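
      In code form the split is roughly (constant names as in the runtime's
      GC pacer; the values remain equal for now):

        const (
            // gcBackgroundUtilization is the fixed CPU fraction for background marking.
            gcBackgroundUtilization = 0.25

            // gcGoalUtilization is the overall utilization the pacer aims for;
            // currently the same value, but now tunable independently.
            gcGoalUtilization = gcBackgroundUtilization
        )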
      
      Change-Id: I121854d4fd1a4a827f727c8e5153160c24aacda7
      Reviewed-on: https://go-review.googlesource.com/68570
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • crypto/x509: reformat test struct. · 504a305c
      Adam Langley authored
      https://golang.org/cl/67270 wasn't `go fmt`ed correctly, according to
      the current `go fmt`. However, what `go fmt` did looked odd, so this
      change tweaks the test to use a more standard layout.
      
      Whitespace-only; no semantic change.
      
      Change-Id: Id820352e7c9e68189ee485c8a9bfece75ca4f9cb
      Reviewed-on: https://go-review.googlesource.com/69031
      Run-TryBot: Adam Langley <agl@golang.org>
      Reviewed-by: Martin Kreichgauer <martinkr@google.com>
      Reviewed-by: Adam Langley <agl@golang.org>
    • net/http: HTTPS proxies support · f5cd3868
      Ben Schwartz authored
      net/http already supports http proxies. This CL allows it to establish
      a connection to the http proxy over https. See more at:
      https://www.chromium.org/developers/design-documents/secure-web-proxy
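
      A minimal usage sketch (the proxy URL is hypothetical):

        // Sketch only; assumes "log", "net/http" and "net/url" are imported.
        func clientViaHTTPSProxy() *http.Client {
            proxyURL, err := url.Parse("https://proxy.example.com:443") // hypothetical proxy
            if err != nil {
                log.Fatal(err)
            }
            return &http.Client{
                Transport: &http.Transport{
                    // The connection to the proxy itself is now made over TLS.
                    Proxy: http.ProxyURL(proxyURL),
                },
            }
        }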
      
      Fixes golang/go#11332
      
      Change-Id: If0e017df0e8f8c2c499a2ddcbbeb625c8fa2bb6b
      Reviewed-on: https://go-review.googlesource.com/68550
      Run-TryBot: Tom Bergan <tombergan@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Tom Bergan <tombergan@google.com>
    • database/sql: prevent race in driver by locking dc in Next · 897080d5
      Daniel Theophanes authored
      Database drivers should be called from a single goroutine to ease
      the driver's design. If a driver chooses to handle context
      cancels internally, it may do so.
      
      The sql package violated this agreement when calling Next or
      NextResultSet. It was possible for a concurrent rollback
      triggered from a context cancel to call a Tx.Rollback (which
      takes a driver connection lock) while a Rows.Next is in progress
      (which does not take the driver connection lock).
      
      The current internal design of the sql package is that each call takes
      roughly two locks: a closemu lock, which prevents disposing of
      internal resources (assigning nil or removing from lists),
      and a driver connection lock that prevents calling driver code from
      multiple goroutines.
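
      A stripped-down model of that two-lock discipline (not the real
      database/sql types):

        // Sketch only; assumes "sync" is imported.
        type conn struct{ mu sync.Mutex } // serializes all calls into the driver

        type rows struct {
            closemu sync.RWMutex // blocks disposal while a call is in flight
            dc      *conn
            next    func() error // stands in for driver.Rows.Next
        }

        func (r *rows) Next() error {
            r.closemu.RLock()
            defer r.closemu.RUnlock()

            // Previously Next called into the driver without the connection
            // lock, racing with a context-cancel rollback that does hold it.
            r.dc.mu.Lock()
            defer r.dc.mu.Unlock()
            return r.next()
        }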
      
      Fixes #21117
      
      Change-Id: Ie340dc752a503089c27f57ffd43e191534829360
      Reviewed-on: https://go-review.googlesource.com/65731
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
    • cmd/link: zero symtab fields correctly · 350b74bc
      David Crawshaw authored
      CL 69370 introduced a hasmain field to moduledata after the
      modulehashes slice. However that code was relying on the zeroing
      code after it to cover modulehashes if len(Shlibs) == 0. The
      hasmain field gets in the way of that. So clear modulehashes
      explicitly in that case.
      
      Found when looking at #22250. Not sure if it's related.
      
      Change-Id: I81050cb4554cd49e9f245d261ef422f97d026df4
      Reviewed-on: https://go-review.googlesource.com/70730
      Run-TryBot: David Crawshaw <crawshaw@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
    • net: fix data race in TestClosingListener · 0e4de78d
      Daniel Martí authored
      In https://golang.org/cl/66334, the test was changed so that the second
      Listen would also be closed. However, it shouldn't have reused the same
      ln variable, as that can lead to a data race with the background loop
      that accepts connections.
      
      Simply define a new Listener, since we don't need to overwrite the first
      variable.
      
      I was able to reproduce the data race report locally about 10% of the
      time by reducing the sleep from a millisecond to a nanosecond. After the
      fix, it's entirely gone after 1000 runs.
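
      Schematically, the fix looks like this (illustrative, not the test's
      actual code):

        // Sketch only; assumes "net" is imported.
        func listenTwice(network, addr string) error {
            ln, err := net.Listen(network, addr)
            if err != nil {
                return err
            }
            go func() {
                for {
                    c, err := ln.Accept() // background loop keeps reading ln
                    if err != nil {
                        return
                    }
                    c.Close()
                }
            }()
            ln.Close()
            // Assigning the second listener to ln would race with the Accept
            // loop above; a fresh variable avoids the unsynchronized write.
            ln2, err := net.Listen(network, ln.Addr().String())
            if err != nil {
                return err
            }
            return ln2.Close()
        }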
      
      Fixes #22226.
      
      Change-Id: I7c639f9f2ee5098eac951a45f42f97758654eacd
      Reviewed-on: https://go-review.googlesource.com/70230
      Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
    • cmd/compile: simplify slice/array range loops for some element sizes · 743117a8
      Martin Möhrmann authored
      In range loops over slices and arrays, besides a variable to track the
      index, an extra variable containing the address of the current element
      is used. To compute a pointer to the next element, the element's size is
      added to the address.
      
      On 386 and amd64, an element of size 1, 2, 4 or 8 bytes can be copied
      from an array using a MOV instruction with a suitable addressing mode
      that uses the start address of the array, the index of the element and
      the element size as scaling factor. Therefore, for arrays and slices with
      a suitable element size we can avoid keeping and incrementing an extra
      variable to compute the next element's address.
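
      An example of a loop that benefits (plain Go; the change is in how the
      compiler lowers it):

        func sum(s []int64) int64 {
            var total int64
            // The lowered loop used to keep both an index and a separate
            // "current element address" bumped by 8 each iteration; with
            // 1/2/4/8-byte elements it can now load via base+index*size.
            for _, v := range s {
                total += v
            }
            return total
        }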
      
      Shrinks cmd/go by 4 kilobytes.
      
      AMD64:
      name                   old time/op    new time/op    delta
      BinaryTree17              2.66s ± 7%     2.54s ± 0%  -4.53%  (p=0.000 n=10+8)
      Fannkuch11                3.02s ± 1%     3.02s ± 1%    ~     (p=0.579 n=10+10)
      FmtFprintfEmpty          45.6ns ± 1%    42.2ns ± 1%  -7.46%  (p=0.000 n=10+10)
      FmtFprintfString         69.8ns ± 1%    70.4ns ± 1%  +0.84%  (p=0.041 n=10+10)
      FmtFprintfInt            80.1ns ± 1%    79.0ns ± 1%  -1.35%  (p=0.000 n=10+10)
      FmtFprintfIntInt          127ns ± 1%     125ns ± 1%  -1.00%  (p=0.007 n=10+9)
      FmtFprintfPrefixedInt     158ns ± 2%     152ns ± 1%  -4.11%  (p=0.000 n=10+10)
      FmtFprintfFloat           218ns ± 1%     214ns ± 1%  -1.61%  (p=0.000 n=10+10)
      FmtManyArgs               508ns ± 1%     504ns ± 1%  -0.93%  (p=0.001 n=9+10)
      GobDecode                6.76ms ± 1%    6.78ms ± 1%    ~     (p=0.353 n=10+10)
      GobEncode                5.84ms ± 1%    5.77ms ± 1%  -1.31%  (p=0.000 n=10+9)
      Gzip                      223ms ± 1%     218ms ± 1%  -2.39%  (p=0.000 n=10+10)
      Gunzip                   40.3ms ± 1%    40.4ms ± 3%    ~     (p=0.796 n=10+10)
      HTTPClientServer         73.5µs ± 0%    73.3µs ± 0%  -0.28%  (p=0.000 n=10+9)
      JSONEncode               12.7ms ± 1%    12.6ms ± 8%    ~     (p=0.173 n=8+10)
      JSONDecode               57.5ms ± 1%    56.1ms ± 2%  -2.40%  (p=0.000 n=10+10)
      Mandelbrot200            3.80ms ± 1%    3.86ms ± 6%    ~     (p=0.579 n=10+10)
      GoParse                  3.25ms ± 1%    3.23ms ± 1%    ~     (p=0.052 n=10+10)
      RegexpMatchEasy0_32      74.4ns ± 1%    76.9ns ± 1%  +3.39%  (p=0.000 n=10+10)
      RegexpMatchEasy0_1K       243ns ± 2%     248ns ± 1%  +1.86%  (p=0.000 n=10+8)
      RegexpMatchEasy1_32      71.0ns ± 2%    72.8ns ± 1%  +2.55%  (p=0.000 n=10+10)
      RegexpMatchEasy1_1K       370ns ± 1%     383ns ± 0%  +3.39%  (p=0.000 n=10+9)
      RegexpMatchMedium_32      107ns ± 0%     113ns ± 1%  +5.33%  (p=0.000 n=6+10)
      RegexpMatchMedium_1K     35.0µs ± 1%    36.0µs ± 1%  +3.13%  (p=0.000 n=10+10)
      RegexpMatchHard_32       1.65µs ± 1%    1.69µs ± 1%  +2.23%  (p=0.000 n=10+9)
      RegexpMatchHard_1K       49.8µs ± 1%    50.6µs ± 1%  +1.59%  (p=0.000 n=10+10)
      Revcomp                   398ms ± 1%     396ms ± 1%  -0.51%  (p=0.043 n=10+10)
      Template                 63.4ms ± 1%    60.8ms ± 0%  -4.11%  (p=0.000 n=10+9)
      TimeParse                 318ns ± 1%     322ns ± 1%  +1.10%  (p=0.005 n=10+10)
      TimeFormat                323ns ± 1%     336ns ± 1%  +4.15%  (p=0.000 n=10+10)
      
      Updates: #15809.
      
      Change-Id: I55915aaf6d26768e12247f8a8edf14e7630726d1
      Reviewed-on: https://go-review.googlesource.com/38061
      Run-TryBot: Martin Möhrmann <moehrmann@google.com>
      Reviewed-by: Keith Randall <khr@golang.org>
    • runtime: use vDSO on linux/386 to improve time.Now performance · af40cbe8
      Frank Somers authored
      This change adds support for accelerating time.Now by using
      the __vdso_clock_gettime fast-path via the vDSO on linux/386
      if it is available.
      
      When the vDSO path to the clocks is available, it is typically
      5x-10x faster than the syscall path (see benchmark extract
      below).  Two such calls are made for each time.Now() call
      on most platforms as of go 1.9.
      
      - Add vdso_linux_386.go, containing the ELF32 definitions
        for use by vdso_linux.go, the maximum array size, and
        the symbols to be located in the vDSO.
      
      - Modify runtime.walltime and runtime.nanotime to check for
        and use the vDSO fast-path if available, or fall back to
        the existing syscall path.
      
      - Reduce the stack reservations for runtime.walltime and
        runtime.monotime from 32 to 16 bytes. It appears the syscall
        path actually only needed 8 bytes, but 16 is now needed to
        cover the syscall and vDSO paths.
      
      - Remove clearing DX from the syscall paths as clock_gettime
        only takes 2 args (BX, CX in syscall calling convention),
        so there should be no need to clear DX.
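
      The dispatch has roughly this shape (a Go-flavored sketch; the real fast
      path is assembly, and the symbol lookup happens while the runtime parses
      the vDSO at startup):

        // Sketch only; names are illustrative.
        var vdsoClockGettime uintptr // 0 if __vdso_clock_gettime was not found

        func clockGettimeViaVDSO() int64    { return 0 } // stands in for the vDSO call
        func clockGettimeViaSyscall() int64 { return 0 } // stands in for the INT $0x80 path

        func nanotimeSketch() int64 {
            if vdsoClockGettime != 0 {
                return clockGettimeViaVDSO() // typically 5x-10x faster
            }
            return clockGettimeViaSyscall()
        }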
      
      The included BenchmarkTimeNow was run with -cpu=1 -count=20
      on an "Intel(R) Celeron(R) CPU J1900 @ 1.99GHz", comparing
      released go 1.9.1 vs this change. This shows a gain in
      performance on linux/386 (6.89x), and that no regression
      occurred on linux/amd64 due to this change.
      
      Kernel: linux/i686, GOOS=linux GOARCH=386
         name      old time/op  new time/op  delta
         TimeNow   978ns ± 0%   142ns ± 0%  -85.48%  (p=0.000 n=16+20)
      
      Kernel: linux/x86_64, GOOS=linux GOARCH=amd64
         name      old time/op  new time/op  delta
         TimeNow   125ns ± 0%   125ns ± 0%   ~       (all equal)
      
      Gains are more dramatic in virtualized environments,
      presumably due to the overhead of virtualizing the syscall.
      
      Fixes #22190
      
      Change-Id: I2f83ce60cb1b8b310c9ced0706bb463c1b3aedf8
      Reviewed-on: https://go-review.googlesource.com/69390
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
    • syscall: correct type for timeout argument to Select on linux/{arm64,mips64x} · bf237f53
      Tobias Klauser authored
      syscall.Select uses SYS_PSELECT6 on arm64 and mips64x, however this
      syscall expects its 5th argument to be of type Timespec (with seconds
      and nanoseconds) instead of type Timeval (with seconds and microseconds).
      This leads to the timeout being too short by a factor of 1000.
      
      This CL fixes this by adjusting the timeout argument accordingly,
      similarly to how glibc does it for architectures where neither
      SYS_SELECT nor SYS__NEWSELECT are available.
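
      The adjustment is essentially a Timeval-to-Timespec conversion before
      the pselect6 call (a sketch of the idea, not the generated syscall code):

        // Sketch only; assumes "syscall" is imported.
        func timevalToTimespec(tv *syscall.Timeval) *syscall.Timespec {
            if tv == nil {
                return nil
            }
            return &syscall.Timespec{
                Sec:  tv.Sec,
                Nsec: tv.Usec * 1000, // microseconds -> nanoseconds
            }
        }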
      
      Fixes #22246
      
      Change-Id: I33a183b0b87c2dae4a77a2d00f8615169fad48dd
      Reviewed-on: https://go-review.googlesource.com/70590
      Run-TryBot: Ian Lance Taylor <iant@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
    • cmd/asm: refine Go assembly for ARM64 · 531e6c06
      Wei Xiao authored
      Some ARM64-specific instructions (such as SIMD instructions) are not supported.
      This patch adds support for the following:
      1. Extended register, e.g.:
           ADD	Rm.<ext>[<<amount], Rn, Rd
           <ext> can have the following values:
             UXTB, UXTH, UXTW, UXTX, SXTB, SXTH, SXTW and SXTX
      2. Arrangement for SIMD instructions, e.g.:
           VADDP	Vm.<T>, Vn.<T>, Vd.<T>
           <T> can have the following values:
             B8, B16, H4, H8, S2, S4 and D2
      3. Width specifier and element index for SIMD instructions, e.g.:
           VMOV	Vn.<T>[index], Rd // MOV(to general register)
           <T> can have the following values:
             S and D
      4. Register List, e.g.:
           VLD1	(Rn), [Vt1.<T>, Vt2.<T>, Vt3.<T>]
      5. Register offset variant, e.g.:
           VLD1.P	(Rn)(Rm), [Vt1.<T>, Vt2.<T>] // Rm is the post-index register
      6. Go assembly for ARM64 reference manual
            newly added instructions are required to have corresponding explanation
            items in the manual, and items for existing instructions will be added
            incrementally
      
      For more information about the refinement background, please refer to the
      discussion (https://groups.google.com/forum/#!topic/golang-dev/rWgDxCrL4GU)
      
      This patch only adds syntax and doesn't break any assembly that already exists.
      
      Change-Id: I34e90b7faae032820593a0e417022c354a882008
      Reviewed-on: https://go-review.googlesource.com/41654
      Run-TryBot: Cherry Zhang <cherryyz@google.com>
      Reviewed-by: Cherry Zhang <cherryyz@google.com>
    • image/gif: try harder to use global color table · 31cd20a7
      Jed Denlea authored
      The GIF format allows for an image to contain a global color table which
      might be used for some or every frame in an animated GIF.  This palette
      contains 24-bit opaque RGB values.  An individual frame may use the
      global palette and enable transparency by marking one palette index as
      transparent, regardless of that entry's color value in the palette.
      
      image/gif decodes a GIF, which contains an []*image.Paletted that holds
      each frame.  When decoded, if a frame has a transparent color and uses
      the global palette, a copy of the global []color.Color is made, and the
      transparency color index is replaced with color.RGBA{}.
      
      When encoding a GIF, each frame's palette is encoded to the form it
      might exist in a GIF, up to 768 bytes "RGBRGBRGBRGB...". If a frame's
      encoded palette is equal to the encoded global color table, the frame
      will be encoded with the flag set to use the global color table,
      otherwise the frame's palette will be included.
      
      So, if the color in the global color table that matches the transparent
      index of one frame wasn't black (and it frequently is not), reencoding a
      GIF will likely result in a larger file because each frame's palette
      will have to be encoded inline.
      
      This commit takes a frame's transparent color index into account when
      comparing an individual image.Paletted's encoded color table to the
      global color table.
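
      The comparison now skips the transparent slot, roughly like this
      (illustrative helper, not the encoder's code):

        // frame and global are encoded palettes ("RGBRGB..."); the RGB value
        // stored under the frame's transparent index is irrelevant, so ignore
        // it when deciding whether the global color table can be reused.
        func matchesGlobal(frame, global []byte, transparentIndex int) bool {
            if len(frame) != len(global) {
                return false
            }
            for i := 0; i+2 < len(frame); i += 3 {
                if i/3 == transparentIndex {
                    continue
                }
                if frame[i] != global[i] || frame[i+1] != global[i+1] || frame[i+2] != global[i+2] {
                    return false
                }
            }
            return true
        }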
      
      Fixes #22137
      
      Change-Id: I5460021da6e4d7ce19198d5f94a8ce714815bc08
      Reviewed-on: https://go-review.googlesource.com/68313
      Reviewed-by: Nigel Tao <nigeltao@golang.org>
    • cmd/compile: attempt to deflake debug_test.go · e45e4902
      David Chase authored
      The test is excluded when -short because it still runs relatively long,
      but it has been deflaked.
      
      Removed timeouts from normal path and ensured that they were
      not needed and that reference files did not change.
      
      Use "tbreak" instead of "break" with gdb to reduce chance
      of multiple hits on main.main.  (Seems not enough, but a
      move in the right direction).
      
      By default, testing ignores repeated lines that occur when
      nexting.  This appears to sometimes be timing-dependent and
      is the observed source of flakiness in testing so far.
      Note that these can also be signs of a bug in the generated
      debugging output, but it is one of the less-confusing bugs
      that can occur.
      
      By default, testing with gdb uses compilation with
      inlining disabled to prevent dependence on library code
      (it's a bug that library code is seen while Nexting, but
      the bug is current behavior).
      
      Also by default exclude all source files outside /testdata
      to prevent accidental dependence on library code.  Note that
      this is currently only applicable to dlv because (for the
      debugging information we produce) gdb does not indicate a
      change in the source file for inlined code.
      
      Added flags -i and -r to make gdb testing compile with
      inlining and be sensitive to repeats in the next stream.
      This is for developer-testing and so we can describe these
      problems in bug reports.
      
      Updates #22206.
      
      Change-Id: I9a30ebbc65aa0153fe77b1858cf19743bdc985e4
      Reviewed-on: https://go-review.googlesource.com/69930
      Run-TryBot: David Chase <drchase@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
    • reflect: allow Copy to a byte array or byte slice from a string · 245e386e
      Tim Cooper authored
      This somewhat mirrors the special case behavior of the copy built-in.
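
      Usage after this change, copying from a string into a byte slice:

        package main

        import (
            "fmt"
            "reflect"
        )

        func main() {
            dst := make([]byte, 3)
            n := reflect.Copy(reflect.ValueOf(dst), reflect.ValueOf("hello"))
            fmt.Println(n, string(dst)) // 3 hel
        }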
      
      Fixes #22215
      
      Change-Id: Ic353003ad3de659d3a6b4e9d97295b42510f3bf7
      Reviewed-on: https://go-review.googlesource.com/70431
      Reviewed-by: Ian Lance Taylor <iant@golang.org>