- 07 Jan, 2015 19 commits
-
-
Keith Randall authored
The equal algorithm used to take the size:

    equal(p, q *T, size uintptr) bool

With this change, it does not:

    equal(p, q *T) bool

Similarly for the hash algorithm. The size is rarely used, as most equal functions know the size of the thing they are comparing. For instance f32equal already knows its inputs are 4 bytes in size. For cases where the size is not known, we allocate a closure (one for each size needed) that points to an assembly stub that reads the size out of the closure and calls generic code that has a size argument.

Reduces the size of the go binary by 0.07%. Performance impact is not measurable.

Change-Id: I6e00adf3dde7ad2974adbcff0ee91e86d2194fec
Reviewed-on: https://go-review.googlesource.com/2392
Reviewed-by: Russ Cox <rsc@golang.org>
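A minimal Go sketch of the closure trick described above (equalForSize is an illustrative name, not the runtime's; the real wrapper is an assembly stub that reads the size out of the closure):

    package main

    import (
        "bytes"
        "fmt"
    )

    // equalForSize returns an equal function with no size parameter by
    // capturing the size in a closure, one closure per distinct size,
    // mirroring how the runtime wraps its generic sized routine.
    func equalForSize(size int) func(p, q []byte) bool {
        return func(p, q []byte) bool {
            return bytes.Equal(p[:size], q[:size])
        }
    }

    func main() {
        eq4 := equalForSize(4)
        fmt.Println(eq4([]byte("abcdX"), []byte("abcdY"))) // true: only 4 bytes compared
    }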
-
Josh Bleecher Snyder authored
It is unused as of e7173dfd.

Change-Id: I3e4ea3fc66cf0a768ff28172a151b244952eefc9
Reviewed-on: https://go-review.googlesource.com/2093
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Keith Randall authored
Use a lookup table to find the function which contains a pc. It is faster than the old binary search. findfunc is used primarily for stack copying and garbage collection.

    benchmark           old ns/op  new ns/op  delta
    BenchmarkStackCopy  294746596  255400980  -13.35%

(findfunc is one of several tasks done by stack copy; the findfunc time itself is about 2.5x faster.)

The lookup table is built at link time. The table grows the binary size by about 0.5% of the text segment. We impose a lower limit of 16 bytes on any function, which should not have much of an impact. (The real constraint required is <= 256 functions in every 4096 bytes, but 16 bytes/function is easier to implement.)

Change-Id: Ic315b7a2c83e1f7203cd2a50e5d21a822e18fdca
Reviewed-on: https://go-review.googlesource.com/2097
Reviewed-by: Russ Cox <rsc@golang.org>
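A rough Go sketch of the bucketed-lookup idea, under illustrative names and layout (the linker's real tables differ): each 4096-byte chunk of text records the function containing its first byte, so a lookup is one table read plus a short forward scan.

    package main

    import "fmt"

    const bucketSize = 4096

    type funcTab struct {
        textStart uintptr
        funcStart []uintptr // sorted function entry pcs
        buckets   []int     // index of the function containing each chunk's first byte
    }

    func (t *funcTab) findfunc(pc uintptr) int {
        i := t.buckets[(pc-t.textStart)/bucketSize]
        // Scan forward inside the chunk; with a 16-byte minimum function
        // size this is bounded by 256 steps, and in practice only a few.
        for i+1 < len(t.funcStart) && t.funcStart[i+1] <= pc {
            i++
        }
        return i
    }

    func main() {
        t := &funcTab{
            textStart: 0x1000,
            funcStart: []uintptr{0x1000, 0x1040, 0x2100},
            buckets:   []int{0, 1}, // chunk at 0x2000 starts inside func 1
        }
        fmt.Println(t.findfunc(0x1050)) // 1: inside the func starting at 0x1040
    }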
-
Austin Clements authored
This implements support for calls to and from C in the ppc64 C ABI, as well as supporting functionality such as an entry point from the dynamic linker.

Change-Id: I68da6df50d5638cb1a3d3fef773fb412d7bf631a
Reviewed-on: https://go-review.googlesource.com/2009
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
Cgo will need this for calls from C to Go and for handling signals that may occur in C code.

Change-Id: I50cc4caf17cd142bff501e7180a1e27721463ada
Reviewed-on: https://go-review.googlesource.com/2008
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
R13 is the C TLS pointer. Once we're calling to and from C code, if we clobber R13 in our code, sigtramp won't know whether to get the current g from REGG or from C TLS. The simplest solution is for Go code to preserve the C TLS pointer. This is equivalent to what other platforms do, except that on other platforms the TLS pointer is in a special register.

Change-Id: I076e9cb83fd78843eb68cb07c748c4705c9a4c82
Reviewed-on: https://go-review.googlesource.com/2007
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
This implements the ELF relocations and dynamic linking tables necessary to support internal linking on ppc64.

It also marks ppc64le ELF files as ABI v2; failing to do this doesn't seem to confuse the loader, but it does confuse libbfd (and hence gdb, objdump, etc.).

Change-Id: I559dddf89b39052e1b6288a4dd5e72693b5355e4
Reviewed-on: https://go-review.googlesource.com/2006
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
Most ppc64 relocations come in six or more variants where the basic relocation formula is the same, but which bits of the computed value are installed, and where, varies. Introduce the concept of "variants" for internal relocations to support this. Since this applies to architecture-independent relocation types like R_PCREL, we do this in relocsym.

Currently there is only an identity variant. A later CL that adds support for ppc64 ELF relocations will introduce more.

Change-Id: I0c5f0e7dbe5beece79cd24fe36267d37c52f1a0c
Reviewed-on: https://go-review.googlesource.com/2005
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
ppc64 has a bunch of these.

Change-Id: I3b93ed2bae378322a8dec036b1681e520b56ff53
Reviewed-on: https://go-review.googlesource.com/2003
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
-
Austin Clements authored
ppc64 function symbols have both a global entry point and a local entry point, where the difference is stashed in sym.other. We'll need this information to generate calls to ELF ABI functions.

Change-Id: Ibe343923f56801de7ebec29946c79690a9ffde57
Reviewed-on: https://go-review.googlesource.com/2002
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
-
Keith Randall authored
Update #9401

Change-Id: I634a772814e7cd066f631a68342e7c3dc9d27e72
Reviewed-on: https://go-review.googlesource.com/2370
Reviewed-by: Russ Cox <rsc@golang.org>
-
Keith Randall authored
Cache 2KB, 4KB, 8KB, and 16KB stacks. Larger stacks will be allocated directly. There is no point in caching 32KB+ stacks, as we ask for and return 32KB at a time from the allocator.

Note that the minimum stack is 8K on windows/64-bit and 4K on windows/32-bit and plan9. For these os/arch combinations, the number of stack orders is smaller, so that we have the same maximum cached size. A sketch of the size-to-order mapping follows below.

Fixes #9045

Change-Id: Ia4195dd1858fb79fc0e6a91ae29c374d28839e44
Reviewed-on: https://go-review.googlesource.com/2098
Reviewed-by: Russ Cox <rsc@golang.org>
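A sketch of the size-class mapping, with illustrative constants (the real minimum cached stack and order count vary by os/arch, as noted above):

    package main

    import "fmt"

    const (
        minStack       = 2 * 1024 // smallest cached stack; illustrative
        numStackOrders = 4        // caches for 2K, 4K, 8K, 16K
    )

    // stackOrder returns the cache order for a stack of n bytes, or -1
    // if n exceeds the largest cached size and is allocated directly.
    func stackOrder(n int) int {
        order := 0
        for s := minStack; s < n; s <<= 1 {
            order++
        }
        if order >= numStackOrders {
            return -1
        }
        return order
    }

    func main() {
        for _, n := range []int{2048, 4096, 16384, 32768} {
            fmt.Println(n, stackOrder(n)) // 0, 1, 3, -1
        }
    }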
-
Oling Cat authored
Change-Id: I7238ae84d637534a345e5d077b8c63466148bd75
Reviewed-on: https://go-review.googlesource.com/1521
Reviewed-by: Russ Cox <rsc@golang.org>
-
Keith Randall authored
The ones at the end of M and G are just used to compute their size for use in assembly. Generate the size explicitly. The one at the end of itab is variable-sized, and always has at least one entry. The ones at the end of interfacetype and uncommontype are not needed, as the preceding slice references them (the slice was originally added for use by reflect?). The one at the end of stackmap is already accessed correctly, and the runtime never allocates one.

Update #9401

Change-Id: Ia75e3aaee38425f038c506868a17105bd64c712f
Reviewed-on: https://go-review.googlesource.com/2420
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
-
Keith Randall authored
Fold in some startup randomness to make the hash vary across different runs. This helps prevent attackers from choosing keys that all map to the same bucket.

Also, reorganize the hash a bit. Move the *m1 multiply to after the xor of the current hash and the message. For hash quality it doesn't really matter, but for DDOS resistance it helps a lot (any processing done to the message before it is merged with the random seed is useless, as it is easily inverted by an attacker).

Update #9365

Change-Id: Ib19968168e1bbc541d1d28be2701bb83e53f1e24
Reviewed-on: https://go-review.googlesource.com/2344
Reviewed-by: Ian Lance Taylor <iant@golang.org>
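A toy illustration of the reordering, with a made-up multiplier (not the runtime's hash constants): the random seed is merged before any multiply, so no processing happens on attacker-controlled input alone.

    package main

    import "fmt"

    const m1 = 0x9e3779b97f4a7c15 // illustrative multiplier, not the runtime's

    // hashWord mixes one message word into the hash state. The xor with
    // the seeded state happens first; the multiply only operates on the
    // already-merged value, so it cannot be pre-inverted by an attacker.
    func hashWord(seed, word uint64) uint64 {
        h := seed ^ word
        h *= m1
        h ^= h >> 29 // final mixing step, also illustrative
        return h
    }

    func main() {
        fmt.Printf("%#x\n", hashWord(0xdeadbeef, 42))
    }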
-
Matthew Dempsky authored
The gc toolchain no longer includes a C compiler, so mentions of "6c" can be removed or replaced by 6g as appropriate. Similarly, some cgo functions that previously generated C source output no longer need to.

Change-Id: I1ae6b02630cff9eaadeae6f3176c0c7824e8fbe5
Reviewed-on: https://go-review.googlesource.com/2391
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Brad Fitzpatrick authored
Change-Id: I315b338968cb1d9298664d181de44a691b325bb8
Reviewed-on: https://go-review.googlesource.com/2450
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Brad Fitzpatrick authored
Reader.Discard is the complement to Peek. It discards the next n bytes of input.

We already have Reader.Buffered to see how many bytes of data are sitting available in memory, and Reader.Peek to get at that buffer directly. But once you're done with the peeked data, you can't get rid of it other than by reading it. Both Read and io.CopyN(ioutil.Discard, bufReader, N) are relatively slow. People instead resort to multiple blind ReadByte calls, just to advance the internal b.r variable.

I've wanted this previously, several people have asked for it in the past on golang-nuts/dev, and somebody just asked me for it again in a private email. There are a few places in the standard library we'd use it too.

Change-Id: I85dfad47704a58bd42f6867adbc9e4e1792bc3b0
Reviewed-on: https://go-review.googlesource.com/2260
Reviewed-by: Russ Cox <rsc@golang.org>
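A small usage example of the new method (Discard returns the number of bytes discarded and an error):

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "strings"
    )

    func main() {
        br := bufio.NewReader(strings.NewReader("HDR:payload"))
        if _, err := br.Discard(4); err != nil { // skip "HDR:" without copying it anywhere
            panic(err)
        }
        rest, _ := io.ReadAll(br)
        fmt.Println(string(rest)) // payload
    }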
-
Shenghou Ma authored
This CL only fixes the build; there are two failing tests, RaceMapBigValAccess1 and RaceMapBigValAccess2, in the runtime/race tests. I haven't investigated why yet.

Updates #9516.

Change-Id: If5bd2f0bee1ee45b1977990ab71e2917aada505f
Reviewed-on: https://go-review.googlesource.com/2401
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
- 06 Jan, 2015 21 commits
-
-
Martin Möhrmann authored
Use direct binary insertion instead of recursive calls to symMerge when one of the blocks has only one element.

    benchmark                old ns/op   new ns/op   delta
    BenchmarkStableString1K  421999      397629      -5.77%
    BenchmarkStableInt1K     123422      120592      -2.29%
    BenchmarkStableInt64K    9629094     9620200     -0.09%
    BenchmarkStable1e2       123089      120209      -2.34%
    BenchmarkStable1e4       39505228    36870029    -6.67%
    BenchmarkStable1e6       8196612367  7630840157  -6.90%

Change-Id: I49905a909e8595cfa05920ccf9aa00a8f3036110
Reviewed-on: https://go-review.googlesource.com/2219
Reviewed-by: Robert Griesemer <gri@golang.org>
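A sketch of the single-element case in Go (insertOne is an illustrative helper, not sort's internal code): binary-search the element's position in the adjacent sorted block, then shift instead of recursing. For stability, the element is placed before any equal elements to its right.

    package main

    import (
        "fmt"
        "sort"
    )

    // insertOne merges the single element data[a] with the sorted block
    // data[a+1:b], in place.
    func insertOne(data []int, a, b int) {
        v := data[a]
        // Lowest index whose element is >= v, preserving stability.
        i := a + 1 + sort.Search(b-a-1, func(i int) bool { return data[a+1+i] >= v })
        copy(data[a:], data[a+1:i]) // shift the smaller elements left
        data[i-1] = v
    }

    func main() {
        d := []int{5, 1, 2, 6, 8}
        insertOne(d, 0, 5)
        fmt.Println(d) // [1 2 5 6 8]
    }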
-
Russ Cox authored
sysReserve doesn't actually reserve the full amount requested on 64-bit systems, because of problems with ulimit. Instead it checks that it can get the first 64 kB and assumes it can grab the rest as needed. This doesn't work well with the "let the kernel pick an address" mode, so don't do that. Pick a high address instead.

Change-Id: I4de143a0e6fdeb467fa6ecf63dcd0c1c1618a31c
Reviewed-on: https://go-review.googlesource.com/2345
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Russ Cox authored
The line 'mp.schedlink = mnext' has an implicit write barrier call, which needs a valid g. Move it above the setg(nil).

Change-Id: If3e86c948e856e10032ad89f038bf569659300e0
Reviewed-on: https://go-review.googlesource.com/2347
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Russ Cox authored
This test is doing pointer graph manipulation from C, and we cannot support that with concurrent GC. The wbshadow mode correctly diagnoses missing write barriers. Disable the test in that mode for now.

There is a bigger issue behind it, namely SWIG, but for now we are focused on making all.bash pass with wbshadow enabled.

Change-Id: I55891596d4c763e39b74082191d4a5fac7161642
Reviewed-on: https://go-review.googlesource.com/2346
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Brad Fitzpatrick authored
It did tons of write syscalls before: https://www.youtube.com/watch?v=t60fhjAqBdw

This is the worst offender. It's not worth fixing all the cases of two consecutive prints.

Change-Id: I95860ef6a844d89b149528195182b191aad8731b
Reviewed-on: https://go-review.googlesource.com/2371
Reviewed-by: Rob Pike <r@golang.org>
-
Adam Langley authored
There are two methods by which TLS clients signal the renegotiation extension: either a special cipher suite value or a TLS extension. It appears that I left debugging code in when I landed support for the extension, because there's a "+ 1" in the switch statement that shouldn't be there.

The effect of this is very small, but it will break Firefox if security.ssl.require_safe_negotiation is enabled in about:config. (Although almost nobody does this.)

This change fixes the original bug and adds a test. Sadly the test is a little complex because there's no OpenSSL s_client option that mirrors the behaviour of require_safe_negotiation.

Change-Id: Ia6925c7d9bbc0713e7104228a57d2d61d537c07a
Reviewed-on: https://go-review.googlesource.com/1900
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
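A hedged sketch of the two signals (per RFC 5746, the SCSV pseudo cipher suite value is 0x00ff); the helper below is illustrative, not crypto/tls's actual code:

    package main

    import "fmt"

    const scsvRenegotiation uint16 = 0x00ff // TLS_EMPTY_RENEGOTIATION_INFO_SCSV

    // clientSignalsRenegotiation reports whether a ClientHello advertised
    // secure renegotiation, either via the SCSV pseudo cipher suite or
    // via the renegotiation_info extension. A stray "+ 1" in a comparison
    // like the one below is the kind of bug this CL fixes.
    func clientSignalsRenegotiation(cipherSuites []uint16, hasRenegotiationExt bool) bool {
        for _, s := range cipherSuites {
            if s == scsvRenegotiation {
                return true
            }
        }
        return hasRenegotiationExt
    }

    func main() {
        fmt.Println(clientSignalsRenegotiation([]uint16{0x002f, 0x00ff}, false)) // true
    }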
-
Adam Langley authored
SignPSS is documented as allowing opts to be nil, but actually crashes in that case. This change fixes that.

Change-Id: Ic48ff5f698c010a336e2bf720e0f44be1aecafa0
Reviewed-on: https://go-review.googlesource.com/2330
Reviewed-by: Minux Ma <minux@golang.org>
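A usage example of the now-supported nil opts (nil selects sensible defaults):

    package main

    import (
        "crypto"
        "crypto/rand"
        "crypto/rsa"
        "crypto/sha256"
        "fmt"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        digest := sha256.Sum256([]byte("hello"))
        // Passing nil for *PSSOptions is documented as valid and, after
        // this change, no longer crashes.
        sig, err := rsa.SignPSS(rand.Reader, key, crypto.SHA256, digest[:], nil)
        fmt.Println(len(sig), err)
    }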
-
Russ Cox authored
First, call clearcheckmarks immediately after changing checkmark, so that there is less time when the checkmark flag and the bitmap are inconsistent. The tiny gap between the two lines is fine, because the world is stopped. Before, the gap was much larger and included such code as "go bgsweep()", which allocated.

Second, modify gcphase only when the world is stopped. As written, gcscan_m was changing gcphase from 0 to GCscan and back to 0 while other goroutines were running. Another goroutine running at the same time might decide to sleep, see GCscan, call gcphasework, and start "helping" by scanning its stack. That's fine, except that if gcphase flips back to 0 as the goroutine calls scanblock, it will start draining the work buffers prematurely.

Both of these were found with wbshadow=2 (and a lot of hard work). Eventually that will run automatically, but right now it still doesn't quite work for all.bash, due to mmap conflicts with pthread-created threads.

Change-Id: I99aa8210cff9c6e7d0a1b62c75be32a23321897b
Reviewed-on: https://go-review.googlesource.com/2340
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Russ Cox authored
Found with GODEBUG=wbshadow=2 mode. Eventually that will run automatically, but right now it still detects other missing write barriers.

Change-Id: I5624b509a36650bce6834cf394b9da163abbf8c0
Reviewed-on: https://go-review.googlesource.com/2310
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Alex Brainman authored
Fixes #9121

Change-Id: Id6ca9f259260310c4c6cbdabbc8f2fead8414e6a
Reviewed-on: https://go-review.googlesource.com/2202
Reviewed-by: Minux Ma <minux@golang.org>
-
Shenghou Ma authored
Change-Id: Ia18b8411bebc47ea71ac1acd9ff9dc570ec15dea
Reviewed-on: https://go-review.googlesource.com/2341
Reviewed-by: Dave Cheney <dave@cheney.net>
-
Russ Cox authored
Use typedmemmove, typedslicecopy, and adjust reflect.call to execute the necessary write barriers.

Found with GODEBUG=wbshadow=2 mode. Eventually that will run automatically, but right now it still detects other missing write barriers.

Change-Id: Iec5b5b0c1be5589295e28e5228e37f1a92e07742
Reviewed-on: https://go-review.googlesource.com/2312
Reviewed-by: Keith Randall <khr@golang.org>
-
Russ Cox authored
These depend on storing arbitrary integer values using pointer atomics, and we can't support that anymore.

Change-Id: I8cadd6d462c3eebdbe7078f43fe7c779fa8f52b3
Reviewed-on: https://go-review.googlesource.com/2311
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Russ Cox authored
A side effect of this change is that when assertI2T writes to the memory for the T being extracted, it can use typedmemmove for write barriers. There are other ways we could have done this, but this one finishes a TODO in package runtime.

Found with GODEBUG=wbshadow=2 mode. Eventually that will run automatically, but right now it still detects other missing write barriers.

Change-Id: Icbc8aabfd8a9b1f00be2e421af0e3b29fa54d01e
Reviewed-on: https://go-review.googlesource.com/2279
Reviewed-by: Keith Randall <khr@golang.org>
-
Russ Cox authored
Found with GODEBUG=wbshadow=2 mode. Eventually that will run automatically, but right now it still detects other missing write barriers.

Change-Id: I1320d5340a9e421c779f24f3b170e33974e56e4f
Reviewed-on: https://go-review.googlesource.com/2278
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Russ Cox authored
Found with GODEBUG=wbshadow=2 mode. Eventually that will run automatically, but right now it still detects other missing write barriers.

Change-Id: Iea83d693480c2f3008b4e80d55821acff65970a6
Reviewed-on: https://go-review.googlesource.com/2277
Reviewed-by: Keith Randall <khr@golang.org>
-
Russ Cox authored
Preparation for replacing many memmove calls in runtime with typedmemmove, which is a clearer description of what the routine is doing. For the same reason, rename writebarriercopy to typedslicecopy.

Change-Id: I6f23bef2c2215509fefba175b16908f76dc7538c
Reviewed-on: https://go-review.googlesource.com/2276
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Russ Cox authored
Add write barrier to atomic operations manipulating pointers.

In general an atomic write of a pointer word may indicate racy accesses, so there is no strictly safe way to attempt to keep the shadow copy in sync with the real one. Instead, mark the shadow copy as not used.

Redirect sync/atomic pointer routines back to the runtime ones, so that there is only one copy of the write barrier and shadow logic. In time we might consider doing this for most of the sync/atomic functions, but for now only the pointer routines need that treatment.

Found with GODEBUG=wbshadow=1 mode. Eventually that will run automatically, but right now it still detects other missing write barriers.

Change-Id: I852936b9a111a6cb9079cfaf6bd78b43016c0242
Reviewed-on: https://go-review.googlesource.com/2066
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
-
Russ Cox authored
The Gobuf.g goroutine pointer is almost always updated by assembly code. In one of the few places it is updated by Go code - func save - it must be treated as a uintptr to avoid a write barrier being emitted at a bad time. Instead of figuring out how to emit the write barriers missing in the assembly manipulation, change the type of the field to uintptr, so that it does not require write barriers at all.

Goroutine structs are published in the allg list and never freed. That will keep the goroutine structs from being collected. There is never a time that Gobuf.g's contain the only references to a goroutine: the publishing of the goroutine in allg comes first. Goroutine pointers are also kept in non-GC-visible places like TLS, so I can't see them ever moving. If we did want to start moving data in the GC, we'd need to allocate the goroutine structs from an alternate arena. This CL doesn't make that problem any worse.

Found with GODEBUG=wbshadow=1 mode. Eventually that will run automatically, but right now it still detects other missing write barriers.

Change-Id: I85f91312ec3e0ef69ead0fff1a560b0cfb095e1a
Reviewed-on: https://go-review.googlesource.com/2065
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
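A sketch of the underlying technique, with hypothetical types: storing a pointer as a uintptr turns the assignment into a plain store with no write barrier, which is safe only when something else keeps the referent alive and it never moves, exactly the guarantees argued for above.

    package main

    import (
        "fmt"
        "unsafe"
    )

    type g struct{ id int }

    // gobuf mirrors the idea: the g field is really a *g, declared as
    // uintptr so assignments compile to plain stores with no barrier.
    // Safe here only because gp below stays reachable via a real pointer.
    type gobuf struct {
        g uintptr
    }

    func main() {
        gp := &g{id: 1}
        var buf gobuf
        buf.g = uintptr(unsafe.Pointer(gp)) // plain store, no write barrier
        fmt.Println((*g)(unsafe.Pointer(buf.g)).id)
    }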
-
Russ Cox authored
Found with GODEBUG=wbshadow=1 mode. Eventually that will run automatically, but right now it still detects other missing write barriers.

Change-Id: Ic8624401d7c8225a935f719f96f2675c6f5c0d7c
Reviewed-on: https://go-review.googlesource.com/2064
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
-
Russ Cox authored
This is the detection code. It works well enough that I know of a handful of missing write barriers. However, those are subtle enough that I'll address them in separate followup CLs.

GODEBUG=wbshadow=1 checks for a write that bypassed the write barrier at the next write barrier of the same word. If a bug can be detected in this mode it is typically easy to understand, since the crash says quite clearly what kind of word has missed a write barrier.

GODEBUG=wbshadow=2 adds a check of the write barrier shadow copy during garbage collection. Bugs detected at garbage collection can be difficult to understand, because there is no context for what the found word means. Typically you have to reproduce the problem with allocfreetrace=1 in order to understand the type of the badly updated word.

Change-Id: If863837308e7c50d96b5bdc7d65af4969bf53a6e
Reviewed-on: https://go-review.googlesource.com/2061
Reviewed-by: Austin Clements <austin@google.com>
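An illustrative model of the shadow-copy check (not the runtime's implementation): the write barrier updates both copies, so a store that bypassed the barrier shows up later as a mismatch between a word and its shadow.

    package main

    import "fmt"

    type shadowHeap struct {
        real, shadow []uintptr
    }

    // writeWithBarrier is the correct path: it keeps the shadow in sync.
    func (h *shadowHeap) writeWithBarrier(i int, v uintptr) {
        h.real[i] = v
        h.shadow[i] = v
    }

    // check reports words whose write barrier was bypassed.
    func (h *shadowHeap) check() {
        for i := range h.real {
            if h.real[i] != h.shadow[i] {
                fmt.Printf("missed write barrier at word %d\n", i)
            }
        }
    }

    func main() {
        h := &shadowHeap{real: make([]uintptr, 4), shadow: make([]uintptr, 4)}
        h.writeWithBarrier(0, 7)
        h.real[1] = 9 // a store that bypassed the barrier
        h.check()     // reports word 1
    }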
-