- 06 Aug, 2013 8 commits
-
-
Brad Fitzpatrick authored
On 10.6, OS X's fcntl returns EBADF instead of EINVAL.

R=golang-dev, iant, dave
CC=golang-dev
https://golang.org/cl/12493043
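A minimal sketch of the portability concern (the helper below is hypothetical and Unix-only, not the actual CL): accept either errno, since 10.6 reports EBADF where other systems report EINVAL.

```go
package main

import (
	"fmt"
	"syscall"
)

// isBadFD reports whether err is one of the two errnos that an
// operation on an invalid descriptor can yield across platforms.
// Hypothetical helper for illustration only.
func isBadFD(err error) bool {
	return err == syscall.EBADF || err == syscall.EINVAL
}

func main() {
	_, err := syscall.Dup(12345) // a descriptor that was never opened
	fmt.Println(isBadFD(err))    // true
}
```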
-
Rob Pike authored
Update #6046.
This CL just does findnull and findnullw. There are other functions to
fix but doing them a few at a time will help isolate any (unlikely)
breakages these changes bring up in architectures I can't test myself.

R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/12520043
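For context, findnull computes the length of a NUL-terminated byte string for the runtime. A rough Go sketch of its contract (illustrative only; the real version is runtime-internal C/Go being converted here):

```go
package main

import (
	"fmt"
	"unsafe"
)

// findnull returns the number of bytes before the first NUL byte.
// A sketch of the runtime helper's contract, not the real code.
func findnull(s *byte) int {
	if s == nil {
		return 0
	}
	n := 0
	for *(*byte)(unsafe.Add(unsafe.Pointer(s), n)) != 0 {
		n++
	}
	return n
}

func main() {
	buf := []byte("hello\x00world")
	fmt.Println(findnull(&buf[0])) // 5
}
```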
-
Dmitriy Vyukov authored
R=alex.brainman
CC=golang-dev
https://golang.org/cl/12502044
-
Dmitriy Vyukov authored
Embed all data necessary for read/write operations directly into netFD.

benchmark                    old ns/op    new ns/op    delta
BenchmarkTCP4Persistent          27669        23341    -15.64%
BenchmarkTCP4Persistent-2        18173        12558    -30.90%
BenchmarkTCP4Persistent-4        10390         7319    -29.56%

This change will intentionally break all builders to see how many
allocations they do per read/write. This will be fixed soon afterwards.

R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/12413043
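The shape of the change, roughly (type and field names below are illustrative, not the real net package internals): keep one reusable operation record per direction inside the fd instead of allocating one on every call.

```go
package sketch

// Illustrative only: embedding the per-direction I/O state in the fd
// lets Read and Write reuse one record each rather than heap-allocating
// a fresh one per operation.
type operation struct {
	buf []byte // buffer for the in-flight request
	err error  // result of the last submission
}

type netFD struct {
	sysfd int
	rop   operation // reused by every Read on this fd
	wop   operation // reused by every Write on this fd
}
```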
-
Dmitriy Vyukov authored
gcpc/gcsp are used by the GC in a similar situation. gcpc/gcsp are also
more stable than gp->sched, because gp->sched is mutated by
entersyscall/exitsyscall in morestack and mcall, so it has a higher
chance of being inconsistent. Also, rename gcpc/gcsp to
syscallpc/syscallsp.

This is the same as the reverted change 12250043, but with save marked
as textflag 7. The problem was that if save calls morestack, the
subsequent lessstack spoils g->sched.pc/sp, and those bad values were
then remembered in g->syscallpc/sp. Entersyscallblock had the same
problem, but it was never triggered to date.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12478043
-
Kyle Lemons authored
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/12403043
-
Keith Randall authored
Basically a partial rollback of 12053043 until I can figure out what
is really going on.

Fixes #6051.

R=golang-dev
CC=golang-dev
https://golang.org/cl/12496043
-
Brad Fitzpatrick authored
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/12490043
-
- 05 Aug, 2013 23 commits
-
-
Russ Cox authored
This means that pprof will no longer report profiles on OS X. That's
unfortunate, but the profiles were often wrong and, worse, it was
difficult to tell whether the profile was wrong or not.

The workarounds were making the scheduler more complex, possibly caused
a deadlock (see issue 5519), and did not actually deliver reliable
results. It may be possible for adventurous users to apply a patch to
their kernels to get working results, or perhaps having no results will
encourage someone to do the work of creating a profiling thread like
on Windows. Issue 6047 has details.

Fixes #5519.
Fixes #6047.

R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/12429045
-
Brad Fitzpatrick authored
Uglier.

««« original CL description
all: use strings.IndexByte instead of Index where possible

R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/12486043
»»»

R=golang-dev
CC=golang-dev
https://golang.org/cl/12485044
-
Brad Fitzpatrick authored
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/12486043
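For reference, the two calls agree whenever the pattern is a single byte; IndexByte simply skips the general substring-search machinery:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	s := "golang.org/cl/12486043"
	// Equivalent results for a one-byte pattern.
	fmt.Println(strings.Index(s, "/"))      // 10
	fmt.Println(strings.IndexByte(s, '/')) // 10
}
```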
-
Pieter Droogendijk authored
Fixes #5372.
Fixes #5577.

R=gri, rsc, bradfitz, r
CC=golang-dev
https://golang.org/cl/12265043
-
Brad Fitzpatrick authored
This means that in the common case (modern kernel), we only make 1
system call to dup instead of two, and we also avoid grabbing the
syscall.ForkLock.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/12476043
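A hedged sketch of the approach (using the golang.org/x/sys/unix fcntl wrapper; the real code lives inside the standard library, and the function name here is hypothetical): try F_DUPFD_CLOEXEC first, and fall back to dup plus FD_CLOEXEC under the fork lock on older kernels.

```go
package main

import (
	"fmt"
	"syscall"

	"golang.org/x/sys/unix"
)

// dupCloseOnExec duplicates fd with close-on-exec set.
func dupCloseOnExec(fd int) (int, error) {
	// Fast path: one syscall that dups and sets CLOEXEC atomically.
	nfd, err := unix.FcntlInt(uintptr(fd), unix.F_DUPFD_CLOEXEC, 0)
	if err == nil {
		return nfd, nil
	}
	if err != unix.EINVAL {
		return -1, err
	}
	// Old kernel: two calls, holding ForkLock so the fd can't leak
	// into a forked child before FD_CLOEXEC is set.
	syscall.ForkLock.RLock()
	defer syscall.ForkLock.RUnlock()
	nfd, err = syscall.Dup(fd)
	if err != nil {
		return -1, err
	}
	syscall.CloseOnExec(nfd)
	return nfd, nil
}

func main() {
	nfd, err := dupCloseOnExec(1) // duplicate stdout
	fmt.Println(nfd, err)
}
```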
-
Keith Randall authored
you do reflect.call with too big an argument list. Not worth the hassle.

Fixes #6023
Fixes #6033

R=golang-dev, bradfitz, dave
CC=golang-dev
https://golang.org/cl/12485043
-
Brad Fitzpatrick authored
Fixes #3751

R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/12483043
-
Dave Cheney authored
Fixes #4963. Sets the append crossover to 0 on Intel platforms.

Results for linux/amd64 Core i5 SNB:

benchmark                     old ns/op    new ns/op    delta
BenchmarkAppend                     102          104    +1.96%
BenchmarkAppend1Byte                 10           11    +0.92%
BenchmarkAppend4Bytes                15           11   -28.10%
BenchmarkAppend7Bytes                17           12   -32.58%
BenchmarkAppend8Bytes                18           12   -36.17%
BenchmarkAppend15Bytes               24           11   -55.02%
BenchmarkAppend16Bytes               25           11   -56.03%
BenchmarkAppend32Bytes               11           12    +4.31%
BenchmarkAppendStr1Byte               8            9   +13.99%
BenchmarkAppendStr4Bytes             11            9   -17.52%
BenchmarkAppendStr8Bytes             14            9   -35.70%
BenchmarkAppendStr16Bytes            21            9   -55.19%
BenchmarkAppendStr32Bytes            10           10    -5.66%
BenchmarkAppendSpecialCase           49           52    +7.96%

Results for linux/386 Atom(TM) CPU 330 @ 1.60GHz:

benchmark                     old ns/op    new ns/op    delta
BenchmarkAppend                     219          218    -0.46%
BenchmarkAppend1Byte                 75           72    -3.44%
BenchmarkAppend4Bytes                92           73   -19.87%
BenchmarkAppend7Bytes               108           74   -31.20%
BenchmarkAppend8Bytes               116           74   -35.95%
BenchmarkAppend15Bytes              162           77   -52.22%
BenchmarkAppend16Bytes              169           77   -54.20%
BenchmarkAppend32Bytes               88           86    -2.38%
BenchmarkAppendStr1Byte              57           59    +3.32%
BenchmarkAppendStr4Bytes             72           59   -17.40%
BenchmarkAppendStr8Bytes             92           60   -34.70%
BenchmarkAppendStr16Bytes           141           63   -54.89%
BenchmarkAppendStr32Bytes            75           73    -2.64%
BenchmarkAppendSpecialCase          270          270    +0.00%

R=golang-dev, r
CC=golang-dev
https://golang.org/cl/12440044
-
Keith Randall authored
For normal slices a[i:j] we're generating 3 bounds checks:
j <= {len(string), cap(slice)}, j <= j (!), and i <= j. The useless
second check somehow snuck in as part of the [i:j:k] implementation,
where the analogous check does do something. Remove the second check
when we don't need it.

R=rsc, r
CC=golang-dev
https://golang.org/cl/12311046
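Concretely, for a two-index slice expression only two comparisons are needed, per the analysis above (a sketch, not compiler code):

```go
package sketch

// For s[i:j] the required checks are i <= j and j <= cap(s)
// (j <= len(s) when s is a string). The always-true j <= j
// comparison was the one this change removes.
func slice(s []int, i, j int) []int {
	return s[i:j]
}
```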
-
Rémy Oudompheng authored
Update #5910.

R=golang-dev, daniel.morsing, rsc
CC=golang-dev
https://golang.org/cl/11373044
-
Russ Cox authored
While we're here, add a test for the same functionality in gzip, which
was already implemented, and add bzip2 CRC checks.

Fixes #5772.

R=golang-dev, r
CC=golang-dev
https://golang.org/cl/12387044
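For example, a single gzip reader already decodes back-to-back members as one stream, which is the behavior this CL tests (and mirrors in bzip2):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

func main() {
	// Write two independent gzip members back to back.
	var buf bytes.Buffer
	for _, s := range []string{"hello, ", "world"} {
		zw := gzip.NewWriter(&buf)
		zw.Write([]byte(s))
		zw.Close()
	}
	// A single reader decodes across the member boundary.
	zr, err := gzip.NewReader(&buf)
	if err != nil {
		panic(err)
	}
	b, _ := io.ReadAll(zr)
	fmt.Printf("%s\n", b) // hello, world
}
```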
-
Russ Cox authored
It's still easy to turn off, but the builders are happy. Also document.

R=golang-dev, iant, dvyukov
CC=golang-dev
https://golang.org/cl/12371043
-
Dmitriy Vyukov authored
Breaks all 386 builders.

««« original CL description
runtime: use gcpc/gcsp during traceback of goroutines in syscalls

gcpc/gcsp are used by the GC in a similar situation. gcpc/gcsp are also
more stable than gp->sched, because gp->sched is mutated by
entersyscall/exitsyscall in morestack and mcall, so it has a higher
chance of being inconsistent. Also, rename gcpc/gcsp to
syscallpc/syscallsp.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12250043
»»»

R=rsc
CC=golang-dev
https://golang.org/cl/12424045
-
Brad Fitzpatrick authored
Fixes #4807

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12349044
-
Dmitriy Vyukov authored
It was needed for the old scheduler, because there could temporarily be
more threads than gomaxprocs. In the new scheduler, gomaxprocs is
always respected.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12438043
-
Dmitriy Vyukov authored
gcpc/gcsp are used by the GC in a similar situation. gcpc/gcsp are also
more stable than gp->sched, because gp->sched is mutated by
entersyscall/exitsyscall in morestack and mcall, so it has a higher
chance of being inconsistent. Also, rename gcpc/gcsp to
syscallpc/syscallsp.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12250043
-
Adam Langley authored
GCM is Galois Counter Mode, an authenticated encryption mode that is,
nearly always, used with AES.

R=rsc
CC=golang-dev
https://golang.org/cl/12375043
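The mode is exposed through crypto/cipher's AEAD interface; typical usage looks like this (all-zero key for brevity only, and a nonce must never repeat under a given key):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

func main() {
	key := make([]byte, 16) // AES-128; zero key for demonstration only
	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}

	ciphertext := aead.Seal(nil, nonce, []byte("hello"), nil)
	plaintext, err := aead.Open(nil, nonce, ciphertext, nil)
	if err != nil {
		panic(err) // authentication failure
	}
	fmt.Printf("%s\n", plaintext) // hello
}
```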
-
Adam Langley authored
In the event that code tries to use a hash function that isn't compiled
in and panics, give the developer a fighting chance of figuring out
which hash function it needed.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12420045
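For context, package crypto registers hash implementations as their packages are linked in; Available reports registration, and New panics (now with the missing hash named) when it is absent:

```go
package main

import (
	"crypto"
	"fmt"

	_ "crypto/sha256" // registers crypto.SHA256
)

func main() {
	fmt.Println(crypto.SHA256.Available()) // true: linked in above

	// Without the blank import, this New call would panic with a
	// message identifying the unavailable hash function.
	h := crypto.SHA256.New()
	h.Write([]byte("hello"))
	fmt.Printf("%x\n", h.Sum(nil))
}
```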
-
ChaiShushan authored
Fixes #6045.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12463043
-
Rob Pike authored
Fixes #6025.

R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/12387046
-
ChaiShushan authored
Fixes #5785.

R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/10587043
-
Rob Pike authored
Fixes #6003.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12387045
-
Rob Pike authored
Avoids seeing "Janet" as "Januaryet".

Fixes #6020.

R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/12448044
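The effect on layout scanning: a word that merely begins with a chunk name now stays literal, as in this example (modeled on the time package's own tests):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// "Janet" continues with a lower-case letter after "Jan", so it is
	// kept as literal text; "January" is still the long-month chunk.
	t := time.Date(2013, time.August, 6, 0, 0, 0, 0, time.UTC)
	fmt.Println(t.Format("Hi Janet, the Month is January"))
	// Output: Hi Janet, the Month is August
}
```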
-
- 04 Aug, 2013 5 commits
-
-
Dmitriy Vyukov authored
Blockingsyscall was used by the net package on Windows; it's not used
anymore.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12436043
-
Dmitriy Vyukov authored
Remove dead code related to allocation of type metadata with SysAlloc.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12311045
-
Dmitriy Vyukov authored
Runtime netpoll supports at most one read waiter and at most one write
waiter per fd. It's the responsibility of the net package to ensure
that. Currently the Windows implementation allows more than one waiter
in Accept, which leads to "fatal error: netpollblock: double wait".

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12400045
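An illustrative sketch of the invariant (names are hypothetical, not the real net package internals): serialize each direction so at most one goroutine per fd is ever parked in the poller.

```go
package sketch

import "sync"

// netpoll allows one parked reader and one parked writer per fd, so
// the fd must serialize each direction; two concurrent Accepts must
// never both reach the poller.
type pollFD struct {
	rmu sync.Mutex // held across any read-side wait, including Accept
	wmu sync.Mutex // held across any write-side wait
}

func (fd *pollFD) accept() error {
	fd.rmu.Lock()
	defer fd.rmu.Unlock()
	// ... submit the accept and, one goroutine at a time, block in
	// the poller waiting for readiness ...
	return nil
}
```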
-
Josh Bleecher Snyder authored
Whether the keys are concatenated or separate (or a mixture) depends
on the server.

Fixes #5979.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12433043
-
Dmitriy Vyukov authored
Windows dynamic priority boosting assumes that a process has different
types of dedicated threads -- GUI, IO, computational, etc. Go processes
use equivalent threads that all do a mix of GUI, IO, computations, etc.
In this context dynamic priority boosting does nothing but harm, so we
turn it off. In particular, if two goroutines do heavy IO on a
uniprocessor server machine, Windows refuses to schedule the timer
thread for 2+ seconds when priority boosting is enabled.

Fixes #5971.

R=alex.brainman
CC=golang-dev
https://golang.org/cl/12406043
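The runtime's fix is in its C code; an illustrative user-level equivalent of the Win32 call involved (SetProcessPriorityBoost, where TRUE disables boosting) might look like this:

```go
//go:build windows

package main

import (
	"fmt"

	"golang.org/x/sys/windows"
)

func main() {
	// kernel32!SetProcessPriorityBoost(hProcess, bDisablePriorityBoost).
	// Passing TRUE (1) disables dynamic boosting for the process.
	kernel32 := windows.NewLazySystemDLL("kernel32.dll")
	setBoost := kernel32.NewProc("SetProcessPriorityBoost")
	r, _, err := setBoost.Call(uintptr(windows.CurrentProcess()), 1)
	if r == 0 {
		fmt.Println("SetProcessPriorityBoost failed:", err)
	}
}
```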
-
- 03 Aug, 2013 4 commits
-
-
Rob Pike authored
The test isn't checking deliberate panics, so catching them just makes
the code longer.

R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/12420043
-
Josh Bleecher Snyder authored
Fixes #5982.

R=golang-dev, r
CC=golang-dev
https://golang.org/cl/12387043
-
Rob Pike authored
Generated by addca.

R=gobot
CC=golang-dev
https://golang.org/cl/12419043
-
Ian Lance Taylor authored
Update #5764

R=golang-dev, dave, rsc
CC=golang-dev
https://golang.org/cl/12388043
-