- 29 Jan, 2015 14 commits
-
Robert Griesemer authored
cmd/go doesn't complain (this is an open issue), but go/types does.

Change-Id: I2caec1f7aec991a9500d2c3504c29e4ab718c138
Reviewed-on: https://go-review.googlesource.com/3541
Reviewed-by: Alan Donovan <adonovan@google.com>
-
Rick Hudson authored
Set gcscanvalid=false after you have CASed to _Grunning. If you do it before the CAS and atomicstatus races to a scan state, the scan will set gcscanvalid=true, and we will end up _Grunning with gcscanvalid==true, which is not a good thing.

Change-Id: Ie53ea744a5600392b47da91159d985fe6fe75961
Reviewed-on: https://go-review.googlesource.com/3510
Reviewed-by: Austin Clements <austin@google.com>
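To make the ordering constraint concrete, here is a minimal sketch using sync/atomic in place of the runtime's internal cas; the type, constant, and field names are illustrative simplifications, not the runtime's:

	package sketch

	import "sync/atomic"

	const (
		gWaiting int32 = iota
		gRunning
	)

	type g struct {
		atomicstatus int32
		gcscanvalid  int32 // 1 means the last stack scan is still valid
	}

	// casToRunning transitions gp to running and only then invalidates
	// the previous scan. Clearing the flag before the CAS would let a
	// concurrent scanner set it back to 1, leaving gp running with a
	// stale "valid" scan.
	func casToRunning(gp *g) {
		for !atomic.CompareAndSwapInt32(&gp.atomicstatus, gWaiting, gRunning) {
			// another thread (e.g. a scanner) holds the status; retry
		}
		atomic.StoreInt32(&gp.gcscanvalid, 0)
	}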
-
Austin Clements authored
Yet another leftover from C: parfor took a func value for the callback, cast it to an unsafe.Pointer for storage, and then cast it back to a func value to call it. This is unnecessary, so just store the body as a func value. Beyond general cleanup, this also eliminates the last use of unsafe in parfor.

Change-Id: Ia904af7c6c443ba75e2699835aee8e9a39b26dd8
Reviewed-on: https://go-review.googlesource.com/3396
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
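A hedged sketch of the resulting shape (names abbreviated from the commit description, not the actual runtime declarations):

	package sketch

	// Before, the callback was stored as an unsafe.Pointer and cast
	// back to a func value at each call site. After this change the
	// body is stored as an ordinary func value.
	type parfor struct {
		body func(*parfor, uint32) // executed for each iteration
	}

	func (desc *parfor) run(i uint32) {
		desc.body(desc, i) // plain indirect call; no unsafe casts
	}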
-
Austin Clements authored
Prior to the conversion of the runtime to Go, this void* was necessary to get closure information into C callbacks. There are no more C callbacks, and parfor is perfectly capable of invoking a Go closure now, so eliminate ctx and all of its unsafe-ness. (Plus, the runtime currently doesn't use ctx for anything.)

Change-Id: I39fc53b7dd3d7f660710abc76b0d831bfc6296d8
Reviewed-on: https://go-review.googlesource.com/3395
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
-
Austin Clements authored
parfor originally used a tail array for its thread array. This got replaced with a slice allocation in the conversion to Go, but many of its gnarlier effects remained. Instead of keeping track of the pointer to the first element of the slice and using unsafe pointer math to get at the ith element, just keep the slice around and use regular slice indexing.

There is no longer any need for padding to 64-bit align the tail array (there hasn't been since the Go conversion), so remove this unnecessary padding from the parfor struct. Finally, since the slice tracks its own length, replace the nthrmax field with len(thr).

Change-Id: I0020a1815849bca53e3613a8fa46ae4fbae67576
Reviewed-on: https://go-review.googlesource.com/3394
Reviewed-by: Russ Cox <rsc@golang.org>
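Roughly, the struct change looks like this (a sketch based on the description above, not the exact runtime code; the pos field is illustrative):

	package sketch

	type parforthread struct {
		pos uint64 // packed iteration position
	}

	type parfor struct {
		thr []parforthread // worker state; replaces the saved first-element pointer
	}

	// Access to thread i is plain bounds-checked slice indexing instead
	// of unsafe pointer math, and the former nthrmax field is simply
	// len(desc.thr).
	func (desc *parfor) thread(i int) *parforthread {
		return &desc.thr[i]
	}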
-
Austin Clements authored
This cleanup was slated for after the conversion of the runtime to Go. Also improve type and function documentation.

Change-Id: I55a16b09e00cf701f246deb69e7ce7e3e04b26e7
Reviewed-on: https://go-review.googlesource.com/3393
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
-
Austin Clements authored
Currently, if we do an atomic{load,store}64 of an unaligned address on 386, we'll simply get a non-atomic load/store. This has been the source of myriad bugs, so add alignment checks to these two operations. These checks parallel the equivalent checks in sync/atomic.

The alignment check is not necessary in cas64 because it uses a locked instruction. The CPU will either execute this atomically or raise an alignment fault (#AC), depending on the alignment check flag; either behavior is fine.

This also fixes the two places in the runtime that trip the new checks. One is in the runtime self-test and shouldn't have caused real problems. The other is in tickspersecond and could, in principle, have caused a misread of the ticks per second during initialization.

Change-Id: If1796667012a6154f64f5e71d043c7f5fb3dd050
Reviewed-on: https://go-review.googlesource.com/3521
Reviewed-by: Russ Cox <rsc@golang.org>
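The kind of guard being added can be sketched like this (illustrative; the real checks live in the runtime, alongside its 386 atomics):

	package sketch

	import "unsafe"

	// check64 panics if addr is not 8-byte aligned, mirroring the
	// alignment checks in sync/atomic. On 386, an unaligned 64-bit
	// load or store would silently lose atomicity.
	func check64(addr *uint64) {
		if uintptr(unsafe.Pointer(addr))&7 != 0 {
			panic("unaligned 64-bit atomic operation")
		}
	}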
-
Russ Cox authored
Like the -exec flag, which specifies a program to use to run a built executable, the -toolexec flag specifies a program to use to run a tool like 5a, 5g, or 5l. This flag enables running the toolchain under common testing environments, such as valgrind. It also enables the use of custom testing environments or the substitution of alternate tools. See https://godoc.org/rsc.io/toolstash for one possibility.

Change-Id: I256aa7af2d96a4bc7911dc58151cc2155dbd4121
Reviewed-on: https://go-review.googlesource.com/3351
Reviewed-by: Rob Pike <r@golang.org>
-
Dmitry Vyukov authored
The language specification says that variables are captured by reference, and that is what the gc compiler does. However, in lots of cases it is possible to capture variables by value under the hood without affecting the visible behavior of programs. For example, consider the following typical pattern:

	func (o *Obj) requestMany(urls []string) []Result {
		wg := new(sync.WaitGroup)
		wg.Add(len(urls))
		res := make([]Result, len(urls))
		for i := range urls {
			i := i
			go func() {
				res[i] = o.requestOne(urls[i])
				wg.Done()
			}()
		}
		wg.Wait()
		return res
	}

Currently o, wg, res, and i are captured by reference, causing 3+len(urls) allocations (e.g. PPARAM o is promoted to PPARAMREF and moved to the heap). But all of them can be captured by value without changing behavior.

This change implements a simple strategy for capturing by value: if a captured variable is not addrtaken and never assigned to, then it is captured by value (it is effectively const). This simple strategy turned out to be very effective: ~80% of all captures in the std lib are turned into value captures. The remaining 20% are mostly in defers and non-escaping closures, that is, they do not cause allocations anyway.

benchmark                                  old allocs   new allocs   delta
BenchmarkCompressedZipGarbage              153          126          -17.65%
BenchmarkEncodeDigitsSpeed1e4              91           69           -24.18%
BenchmarkEncodeDigitsSpeed1e5              178          129          -27.53%
BenchmarkEncodeDigitsSpeed1e6              1510         1051         -30.40%
BenchmarkEncodeDigitsDefault1e4            100          75           -25.00%
BenchmarkEncodeDigitsDefault1e5            193          139          -27.98%
BenchmarkEncodeDigitsDefault1e6            1420         985          -30.63%
BenchmarkEncodeDigitsCompress1e4           100          75           -25.00%
BenchmarkEncodeDigitsCompress1e5           193          139          -27.98%
BenchmarkEncodeDigitsCompress1e6           1420         985          -30.63%
BenchmarkEncodeTwainSpeed1e4               109          81           -25.69%
BenchmarkEncodeTwainSpeed1e5               211          151          -28.44%
BenchmarkEncodeTwainSpeed1e6               1588         1097         -30.92%
BenchmarkEncodeTwainDefault1e4             103          77           -25.24%
BenchmarkEncodeTwainDefault1e5             199          143          -28.14%
BenchmarkEncodeTwainDefault1e6             1324         917          -30.74%
BenchmarkEncodeTwainCompress1e4            103          77           -25.24%
BenchmarkEncodeTwainCompress1e5            190          137          -27.89%
BenchmarkEncodeTwainCompress1e6            1327         919          -30.75%
BenchmarkConcurrentDBExec                  16223        16220        -0.02%
BenchmarkConcurrentStmtQuery               17687        16182        -8.51%
BenchmarkConcurrentStmtExec                5191         5186         -0.10%
BenchmarkConcurrentTxQuery                 17665        17661        -0.02%
BenchmarkConcurrentTxExec                  15154        15150        -0.03%
BenchmarkConcurrentTxStmtQuery             17661        16157        -8.52%
BenchmarkConcurrentTxStmtExec              3677         3673         -0.11%
BenchmarkConcurrentRandom                  14000        13614        -2.76%
BenchmarkManyConcurrentQueries             25           22           -12.00%
BenchmarkDecodeComplex128Slice             318          252          -20.75%
BenchmarkDecodeFloat64Slice                318          252          -20.75%
BenchmarkDecodeInt32Slice                  318          252          -20.75%
BenchmarkDecodeStringSlice                 2318         2252         -2.85%
BenchmarkDecode                            11           8            -27.27%
BenchmarkEncodeGray                        64           56           -12.50%
BenchmarkEncodeNRGBOpaque                  64           56           -12.50%
BenchmarkEncodeNRGBA                       67           58           -13.43%
BenchmarkEncodePaletted                    68           60           -11.76%
BenchmarkEncodeRGBOpaque                   64           56           -12.50%
BenchmarkGoLookupIP                        153          139          -9.15%
BenchmarkGoLookupIPNoSuchHost              508          466          -8.27%
BenchmarkGoLookupIPWithBrokenNameServer    245          226          -7.76%
BenchmarkClientServer                      62           59           -4.84%
BenchmarkClientServerParallel4             62           59           -4.84%
BenchmarkClientServerParallel64            62           59           -4.84%
BenchmarkClientServerParallelTLS4          79           76           -3.80%
BenchmarkClientServerParallelTLS64         112          109          -2.68%
BenchmarkCreateGoroutinesCapture           10           6            -40.00%
BenchmarkAfterFunc                         1006         1005         -0.10%

Fixes #6632.

Change-Id: I0cd51e4d356331d7f3c5f447669080cd19b0d2ca
Reviewed-on: https://go-review.googlesource.com/3166
Reviewed-by: Russ Cox <rsc@golang.org>
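For illustration, a small self-contained example of the rule (hypothetical code, not from the CL): x qualifies for by-value capture, y does not:

	package main

	import "fmt"

	func main() {
		x := 42
		f := func() { fmt.Println(x) } // x: not address-taken, never
		f()                            // reassigned => by-value capture

		y := 0
		g := func() { y++ } // y is assigned inside the closure,
		g()                 // so it must stay captured by reference
		fmt.Println(y)      // prints 1
	}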
-
Mikio Hara authored
A few packages that handle net.IPConn in the golang.org/x/net sub-repository already implement full stack test cases with more coverage than the net package. There is no need to keep duplicate code around here. This change removes full stack test cases for IPConn that require knowing how to speak with each protocol stack implementation on the supported platforms.

Change-Id: I871119a9746fc6a2b997b69cfd733463558f5816
Reviewed-on: https://go-review.googlesource.com/3404
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Mikio Hara authored
For now the solaris port does not support cgo. Moreover, its system calls and library interfaces are different from BSD.

Change-Id: Idb4fed889973368b35d38b361b23581abacfdeab
Reviewed-on: https://go-review.googlesource.com/3306
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
-
Alex Plugaru authored
Fixes #9693.

Change-Id: Ibf07199729bfc883b2a7e051cafd98185f912acd
Reviewed-on: https://go-review.googlesource.com/3283
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Evan Phoenix authored
Using a mutex to protect a single int operation is quite heavyweight. Using sync/atomic provides much better performance. This change was benchmarked as such:

	BenchmarkSync    10000000    139 ns/op
	BenchmarkAtomic  200000000   9.90 ns/op

	package blah

	import (
		"sync"
		"sync/atomic"
		"testing"
	)

	type Int struct {
		mu sync.RWMutex
		i  int64
	}

	func (v *Int) Add(delta int64) {
		v.mu.Lock()
		defer v.mu.Unlock()
		v.i += delta
	}

	type AtomicInt struct {
		i int64
	}

	func (v *AtomicInt) Add(delta int64) {
		atomic.AddInt64(&v.i, delta)
	}

	func BenchmarkSync(b *testing.B) {
		s := new(Int)
		for i := 0; i < b.N; i++ {
			s.Add(1)
		}
	}

	func BenchmarkAtomic(b *testing.B) {
		s := new(AtomicInt)
		for i := 0; i < b.N; i++ {
			s.Add(1)
		}
	}

Change-Id: I6998239c785967647351bbfe8533c38e4894543b
Reviewed-on: https://go-review.googlesource.com/3430
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
-
Dominik Vogt authored
This patch was previously sent for review using hg: golang.org/cl/173930043

Change-Id: I559a2f2ee07990d0c23d2580381e32f8e23077a5
Reviewed-on: https://go-review.googlesource.com/3033
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
- 28 Jan, 2015 25 commits
-
Robert Griesemer authored
Also:
- use io.ByteScanner rather than io.RuneScanner internally
- minor simplifications in Float.Add/Sub

Change-Id: Iae0e99384128dba9eccf68592c4fd389e2bd3b4f
Reviewed-on: https://go-review.googlesource.com/3380
Reviewed-by: Rob Pike <r@golang.org>
-
Rick Hudson authored
Set the minimum heap size to 4Mbytes, except when the hash table code wants to force a GC. In an unrelated change, when a mutator is asked to assist the GC by marking pointer workbufs, it will keep working until the requested number of pointers are processed, even if it means asking for additional workbufs.

Change-Id: I661cfc0a7f2efcf6286b5d37d73e593d9ecd04d5
Reviewed-on: https://go-review.googlesource.com/3392
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
-
Dmitry Vyukov authored
If the result of string(i) does not escape, allocate a [4]byte temp on the stack for it.

Change-Id: If31ce9447982929d5b3b963fd0830efae4247c37
Reviewed-on: https://go-review.googlesource.com/3411
Reviewed-by: Russ Cox <rsc@golang.org>
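As an illustration (hypothetical example, not from the CL), the temp in a use like this never outlives the expression, so its up-to-4-byte UTF-8 encoding fits in the [4]byte stack temp:

	package sketch

	func isComma(r rune) bool {
		// string(r) is consumed by the comparison and does not escape.
		return string(r) == ","
	}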
-
Dmitry Vyukov authored
Currently we always allocate string buffers in the heap. For example, in the following code we allocate a temp string just for comparison:

	if string(byteSlice) == "abc" { ... }

This change extends escape analysis to cover []byte->string conversions and string concatenation. If the result of the operation does not escape, the compiler allocates a small buffer on the stack and passes it to slicebytetostring and concatstrings. The runtime then uses the buffer if the result fits into it.

The size of the buffer is 32 bytes. There is no fundamental theory behind this number, just an observation that on std lib tests/benchmarks the frequency of string allocation is inversely proportional to string length, and there is a significant number of allocations up to length 32.

benchmark                                  old allocs   new allocs   delta
BenchmarkFprintfBytes                      2            1            -50.00%
BenchmarkDecodeComplex128Slice             318          316          -0.63%
BenchmarkDecodeFloat64Slice                318          316          -0.63%
BenchmarkDecodeInt32Slice                  318          316          -0.63%
BenchmarkDecodeStringSlice                 2318         2316         -0.09%
BenchmarkStripTags                         11           5            -54.55%
BenchmarkDecodeGray                        111          102          -8.11%
BenchmarkDecodeNRGBAGradient               200          188          -6.00%
BenchmarkDecodeNRGBAOpaque                 165          152          -7.88%
BenchmarkDecodePaletted                    319          309          -3.13%
BenchmarkDecodeRGB                         166          157          -5.42%
BenchmarkDecodeInterlacing                 279          268          -3.94%
BenchmarkGoLookupIP                        153          135          -11.76%
BenchmarkGoLookupIPNoSuchHost              508          466          -8.27%
BenchmarkGoLookupIPWithBrokenNameServer    245          226          -7.76%
BenchmarkClientServerParallel4             62           61           -1.61%
BenchmarkClientServerParallel64            62           61           -1.61%
BenchmarkClientServerParallelTLS4          79           78           -1.27%
BenchmarkClientServerParallelTLS64         112          111          -0.89%

benchmark                                  old ns/op    new ns/op    delta
BenchmarkFprintfBytes                      381          311          -18.37%
BenchmarkStripTags                         2615         2351         -10.10%
BenchmarkDecodeNRGBAGradient               3715887      3635096      -2.17%
BenchmarkDecodeNRGBAOpaque                 3047645      2928644      -3.90%
BenchmarkGoLookupIP                        153          135          -11.76%
BenchmarkGoLookupIPNoSuchHost              508          466          -8.27%

Change-Id: I9ec01da816945c3329d7be3c7794b520418c3f99
Reviewed-on: https://go-review.googlesource.com/3120
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
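Another hypothetical example of the pattern this helps, beyond the comparison above, is a short concatenation that is consumed immediately:

	package sketch

	func isTagged(name []byte, tag string) bool {
		// The concatenated temporary is only compared and never stored;
		// when it fits in the 32-byte buffer it can stay on the stack.
		return "v:"+string(name) == tag
	}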
-
Rick Hudson authored
During a concurrent GC, stacks are scanned in an initial scan phase informing the GC of all pointers on the stack. The GC only needs to rescan the stack if it has potentially changed, which can only happen if the goroutine has run. This CL tracks whether the goroutine has run since it was last scanned and thus may have changed its stack. If necessary, the stack is rescanned.

Change-Id: I5fb1c4338d42e3f61ab56c9beb63b7b2da25f4f1
Reviewed-on: https://go-review.googlesource.com/3275
Reviewed-by: Russ Cox <rsc@golang.org>
-
Robert Griesemer authored
Change-Id: I369723c7a65f9a72c60b55704cebf40d78cf4f75
Reviewed-on: https://go-review.googlesource.com/3444
Reviewed-by: Alan Donovan <adonovan@google.com>
-
Brad Fitzpatrick authored
This should fix the race builders.

Change-Id: I9c9e7393d5e29d64ab797e346b34b1fa1dfe6d96
Reviewed-on: https://go-review.googlesource.com/3441
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
-
Dmitry Vyukov authored
net/http/pprof part of tracing functionality:
https://docs.google.com/document/u/1/d/1FP5apqzBgr7ahCCgFO-yoVhk4YZrNIDNf9RybngBc14/pub
Full change: https://codereview.appspot.com/146920043

Change-Id: I9092028fcbd5e8f97a56f2c155889ccdfb494afb
Reviewed-on: https://go-review.googlesource.com/1453
Reviewed-by: Russ Cox <rsc@golang.org>
-
Dmitry Vyukov authored
Currently we allocate a new string during []byte->string conversion in string comparison expressions. The string allocation is unnecessary in this case, because the comparison does not memorize the strings for later use. This change uses slicebytetostringtmp to construct a temp string directly from the []byte buffer and passes it to runtime.eqstring.

Change-Id: If00f1faaee2076baa6f6724d245d5b5e0f59b563
Reviewed-on: https://go-review.googlesource.com/3410
Reviewed-by: Russ Cox <rsc@golang.org>
-
Dmitry Vyukov authored
Coarse-grained test skips to fix bots. Need to look closer at windows and nacl failures.

Change-Id: I767ef1707232918636b33f715459ee3c0349b45e
Reviewed-on: https://go-review.googlesource.com/3416
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Dmitry Vyukov authored
Escape analysis treats everything assigned to OIND/ODOTPTR as escaping. As a result, b escapes in the following code:

	func (b *Buffer) Foo() {
		n, m := ...
		b.buf = b.buf[n:m]
	}

This change recognizes such assignments and ignores them.

Update issue #9043.
Update issue #7921.

There are two similar cases in the std lib that benefit from this optimization. First is in archive/zip:

	type readBuf []byte

	func (b *readBuf) uint32() uint32 {
		v := binary.LittleEndian.Uint32(*b)
		*b = (*b)[4:]
		return v
	}

Second is in time:

	type data struct {
		p     []byte
		error bool
	}

	func (d *data) read(n int) []byte {
		if len(d.p) < n {
			d.p = nil
			d.error = true
			return nil
		}
		p := d.p[0:n]
		d.p = d.p[n:]
		return p
	}

benchmark                        old ns/op    new ns/op    delta
BenchmarkCompressedZipGarbage    32431724     32217851     -0.66%

benchmark                        old allocs   new allocs   delta
BenchmarkCompressedZipGarbage    153          143          -6.54%

Change-Id: Ia6cd32744e02e36d6d8c19f402f8451101711626
Reviewed-on: https://go-review.googlesource.com/3162
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
-
Dmitry Vyukov authored
Currently all PTRLIT element initializers escape. There is no reason for that. This change links STRUCTLIT to PTRLIT; STRUCTLIT element initializers are already linked to the STRUCTLIT. As a result, PTRLIT element initializers escape only when the PTRLIT itself escapes.

Change-Id: I89ecd8677cbf81addcfd469cd2fd461c0e9bf7dd
Reviewed-on: https://go-review.googlesource.com/3031
Reviewed-by: Russ Cox <rsc@golang.org>
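For illustration (a hypothetical example), this is the shape of code that benefits: the &T{...} literal does not escape, so its element initializers no longer need to either:

	package sketch

	type point struct{ x, y int }

	func dist2() int {
		p := &point{x: 3, y: 4} // PTRLIT stays local; initializers can too
		return p.x*p.x + p.y*p.y
	}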
-
Dmitry Vyukov authored
Change-Id: I832a433f0f2fc10b0a2fea0bfb003a988fc2c81b
Reviewed-on: https://go-review.googlesource.com/2039
Reviewed-by: Russ Cox <rsc@golang.org>
-
Dmitry Vyukov authored
cmd/go part of tracing functionality:
https://docs.google.com/document/u/1/d/1FP5apqzBgr7ahCCgFO-yoVhk4YZrNIDNf9RybngBc14/pub
Full change: https://codereview.appspot.com/146920043

Change-Id: If346e11b8029c475b01fbf7172ce1c88171fb1b2
Reviewed-on: https://go-review.googlesource.com/1460
Reviewed-by: Russ Cox <rsc@golang.org>
-
Dmitry Vyukov authored
testing part of tracing functionality:
https://docs.google.com/document/u/1/d/1FP5apqzBgr7ahCCgFO-yoVhk4YZrNIDNf9RybngBc14/pub
Full change: https://codereview.appspot.com/146920043

Change-Id: Ia3c2c4417106937d5775b0e7064db92c1fc36679
Reviewed-on: https://go-review.googlesource.com/1461
Reviewed-by: Russ Cox <rsc@golang.org>
-
Dmitry Vyukov authored
runtime/pprof part of tracing functionality:
https://docs.google.com/document/u/1/d/1FP5apqzBgr7ahCCgFO-yoVhk4YZrNIDNf9RybngBc14/pub
Full change: https://codereview.appspot.com/146920043

Change-Id: I3143a569cbd33576f19ca47308d1ff5200d8c955
Reviewed-on: https://go-review.googlesource.com/1452
Reviewed-by: Russ Cox <rsc@golang.org>
-
Dmitry Vyukov authored
Add actual tracing of interesting runtime events. Part of a larger tracing functionality:
https://docs.google.com/document/u/1/d/1FP5apqzBgr7ahCCgFO-yoVhk4YZrNIDNf9RybngBc14/pub
Full change: https://codereview.appspot.com/146920043

Change-Id: Icccf54aea54e09350bb698ba6bf11532f9fbe6d3
Reviewed-on: https://go-review.googlesource.com/1451
Reviewed-by: Russ Cox <rsc@golang.org>
-
Dmitry Vyukov authored
This is the first patch of a series of patches that implement tracing functionality. Design doc:
https://docs.google.com/document/u/1/d/1FP5apqzBgr7ahCCgFO-yoVhk4YZrNIDNf9RybngBc14/pub
Full change: https://codereview.appspot.com/146920043

Change-Id: I84588348bb05a6f6a102c230f3bca6380a3419fe
Reviewed-on: https://go-review.googlesource.com/1450
Reviewed-by: Russ Cox <rsc@golang.org>
-
Dmitry Vyukov authored
For some reason the current conditions require the type to be "uintptr-shaped". This cuts off structs and arrays with a pointer. isdirectiface and width==widthptr is a sufficient condition to enable the fast paths.

Change-Id: I11842531e7941365413606cfd6c34c202aa14786
Reviewed-on: https://go-review.googlesource.com/3414
Reviewed-by: Russ Cox <rsc@golang.org>
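In other words (a hedged illustration, not code from the CL), types like these are pointer-shaped without being literally uintptr-shaped, and now also qualify:

	package sketch

	type wrapped struct{ p *int } // one-pointer struct: direct-interface shaped
	type arr [1]*int              // one-pointer array: likewise

	// Storing values of such types in an interface can place the pointer
	// directly in the interface's data word, skipping an allocation.
	var (
		_ interface{} = wrapped{new(int)}
		_ interface{} = arr{new(int)}
	)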
-
Dmitry Vyukov authored
Fixes #9691.

Change-Id: I22bfc82e05497e91a7b18a668913aed6c723365d
Reviewed-on: https://go-review.googlesource.com/3282
Reviewed-by: Russ Cox <rsc@golang.org>
-
Dmitry Vyukov authored
Race builders report goroutine leaks after the addition of this benchmark:
http://build.golang.org/log/18e47f4cbc18ee8db125e1f1157573dd1e333c41
Close the idle connection in the default transport.

Change-Id: I86ff7b2e0972ed47c5ebcb9fce19e7f39d3ff530
Reviewed-on: https://go-review.googlesource.com/3412
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Dmitry Vyukov authored
Call frame allocations can account for a significant portion of all allocations in a program, if the call is executed in an inner loop (e.g. to process every line in a log). On the other hand, the allocation is easy to remove using sync.Pool, since the allocation is strictly scoped.

benchmark          old ns/op    new ns/op    delta
BenchmarkCall      634          338          -46.69%
BenchmarkCall-4    496          167          -66.33%

benchmark          old allocs   new allocs   delta
BenchmarkCall      1            0            -100.00%
BenchmarkCall-4    1            0            -100.00%

Update #7818.

Change-Id: Icf60cce0a9be82e6171f0c0bd80dee2393db54a7
Reviewed-on: https://go-review.googlesource.com/1954
Reviewed-by: Keith Randall <khr@golang.org>
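The general technique (a sketch of the pattern, not reflect's actual internals; the fixed buffer size is an assumption) is to pool the strictly scoped frame buffers:

	package sketch

	import "sync"

	var framePool = sync.Pool{
		New: func() interface{} { return new([256]byte) },
	}

	// withFrame borrows a frame for the duration of fn and returns it
	// afterwards; the buffer never outlives the call, which is what
	// makes pooling safe here.
	func withFrame(fn func(frame []byte)) {
		buf := framePool.Get().(*[256]byte)
		fn(buf[:])
		framePool.Put(buf)
	}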
-
Mikio Hara authored
This change extends the existing test case to Windows, to help fix golang.org/issue/5395.

Change-Id: Iff077fa98ede511981df513f48d84c19375b3e04
Reviewed-on: https://go-review.googlesource.com/3304
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
-
Russ Cox authored
Pointers change from run to run, making it hard to use the debug output to identify the reason for a changed object file.

Change-Id: I0c954da0943092c48686afc99ecf75eba516de6a
Reviewed-on: https://go-review.googlesource.com/3352
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Reviewed-by: Rob Pike <r@golang.org>
-
David Leon Gil authored
ECDSA is unsafe to use if an entropy source produces predictable output for the ephemeral nonces. E.g., [Nguyen]. A simple countermeasure is to hash the secret key, the message, and entropy together to seed a CSPRNG, from which the ephemeral key is derived.

Fixes #9452.

--

This is a minimalist (in terms of patch size) solution, though not the most parsimonious in its use of primitives:

- csprng_key = ChopMD-256(SHA2-512(priv.D||entropy||hash))
- reader = AES-256-CTR(k=csprng_key)

This, however, provides at most 128-bit collision-resistance, so that Adv will have a term related to the number of messages signed that is significantly worse than plain ECDSA. This does not seem to be of any practical importance.

ChopMD-256(SHA2-512(x)) is used, rather than SHA2-256(x), for two sets of reasons:

*Practical:* SHA2-512 has a larger state and 16 more rounds; it is likely non-generically stronger than SHA2-256. And, AFAIK, cryptanalysis backs this up. (E.g., [Biryukov] gives a distinguisher on 47-round SHA2-256 with cost < 2^85.) This is well below a reasonable security-strength target.

*Theoretical:* [Coron] and [Chang] show that Chop-MD(F(x)) is indifferentiable from a random oracle for slightly beyond the birthday barrier. It seems likely that this makes a generic security proof that this construction remains UF-CMA possible in the indifferentiability framework.

--

Many thanks to Payman Mohassel for reviewing this construction; any mistakes are mine, however. And, as he notes, reusing the private key in this way means that the generic-group (non-RO) proof of ECDSA's security given in [Brown] no longer directly applies.

--

[Brown]: http://www.cacr.math.uwaterloo.ca/techreports/2000/corr2000-54.ps
"Brown. The exact security of ECDSA. 2000"

[Coron]: https://www.cs.nyu.edu/~puniya/papers/merkle.pdf
"Coron et al. Merkle-Damgard revisited. 2005"

[Chang]: https://www.iacr.org/archive/fse2008/50860436/50860436.pdf
"Chang and Nandi. Improved indifferentiability security analysis of chopMD hash function. 2008"

[Biryukov]: http://www.iacr.org/archive/asiacrypt2011/70730269/70730269.pdf
"Biryukov et al. Second-order differential collisions for reduced SHA-256. 2011"

[Nguyen]: ftp://ftp.di.ens.fr/pub/users/pnguyen/PubECDSA.ps
"Nguyen and Shparlinski. The insecurity of the elliptic curve digital signature algorithm with partially known nonces. 2003"

New tests:
TestNonceSafety: Check that signatures are safe even with a broken entropy source.
TestINDCCA: Check that signatures remain non-deterministic with a functional entropy source.

Updated "golden" KATs in crypto/tls/testdata that use ECDSA suites.

Change-Id: I55337a2fbec2e42a36ce719bd2184793682d678a
Reviewed-on: https://go-review.googlesource.com/3340
Reviewed-by: Adam Langley <agl@golang.org>
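A sketch of the construction in Go (illustrative only; the fixed IV choice and the plain byte-slice arguments are assumptions, and this is not the crypto/ecdsa code itself):

	package sketch

	import (
		"crypto/aes"
		"crypto/cipher"
		"crypto/sha512"
		"io"
	)

	// nonceReader seeds AES-256-CTR with
	// ChopMD-256(SHA2-512(d || entropy || hash)) and exposes the key
	// stream as the CSPRNG from which the ephemeral nonce is drawn.
	func nonceReader(d, entropy, hash []byte) (io.Reader, error) {
		md := sha512.New()
		md.Write(d)       // private scalar
		md.Write(entropy) // fresh randomness
		md.Write(hash)    // message hash
		key := md.Sum(nil)[:32] // ChopMD: truncate SHA-512 output to 256 bits

		block, err := aes.NewCipher(key)
		if err != nil {
			return nil, err
		}
		iv := make([]byte, aes.BlockSize) // a fixed IV; the key is single-use
		return cipher.StreamReader{S: cipher.NewCTR(block, iv), R: zeroReader{}}, nil
	}

	// zeroReader supplies zeros, so the StreamReader yields the raw
	// AES-CTR key stream.
	type zeroReader struct{}

	func (zeroReader) Read(p []byte) (int, error) {
		for i := range p {
			p[i] = 0
		}
		return len(p), nil
	}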
-
- 27 Jan, 2015 1 commit
-
Robert Griesemer authored
- fixed Float.Add, Float.Sub
- fixed Float.PString to be platform independent
- fixed Float.Uint64
- fixed various test outputs

TBR: adonovan

Change-Id: I9d273b344d4786f1fed18862198b23285c358a39
Reviewed-on: https://go-review.googlesource.com/3321
Reviewed-by: Robert Griesemer <gri@golang.org>
-