Commit 9a3112bc authored by Austin Clements

runtime: one more Map{Bits,Spans} before arena_used update

In order to avoid a race with a concurrent write barrier or garbage
collector thread, any update to arena_used must be preceded by mapping
the corresponding heap bitmap and spans array memory. Otherwise, the
concurrent access may observe that a pointer falls within the heap
arena, but then attempt to access unmapped memory to look up its span
or heap bits.
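
To make the required ordering concrete, here is a minimal, self-contained Go
sketch of the invariant. It is an illustration only, not the runtime's code:
growArena, lookup, metadataMapped, arenaUsed and pageSize are hypothetical
stand-ins for the heap grower, a write barrier / GC reader, the
mHeap_Map{Bits,Spans} progress, arena_used and _PageSize, and sync/atomic
stands in for the runtime's internal synchronization.

    // Illustration only: hypothetical stand-ins, not the runtime's real API.
    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    const pageSize = 8192 // stand-in for _PageSize

    var (
        metadataMapped atomic.Uintptr // how far the "bitmap/spans" metadata is prepared
        arenaUsed      atomic.Uintptr // published arena limit, consulted by readers
    )

    // growArena mirrors the fixed ordering: prepare the metadata for the
    // new range first, then publish the new limit.
    func growArena(newUsed uintptr) {
        metadataMapped.Store(newUsed) // analogous to mHeap_MapBits + mHeap_MapSpans
        arenaUsed.Store(newUsed)      // only now may concurrent readers see newUsed
    }

    // lookup models a write barrier or GC worker: any address below the
    // published limit must already have its metadata prepared, or the real
    // runtime would touch unmapped memory.
    func lookup(addr uintptr) {
        if addr < arenaUsed.Load() && addr >= metadataMapped.Load() {
            panic("race: address inside the arena but its metadata is not mapped")
        }
    }

    func main() {
        var wg sync.WaitGroup
        wg.Add(2)
        go func() { // the heap grower
            defer wg.Done()
            for used := uintptr(pageSize); used <= 1024*pageSize; used += pageSize {
                growArena(used)
            }
        }()
        go func() { // the concurrent reader
            defer wg.Done()
            for i := 0; i < 1000000; i++ {
                lookup(uintptr(i%1024) * pageSize)
            }
        }()
        wg.Wait()
        fmt.Println("map-before-publish ordering held")
    }

Swapping the two stores in growArena reproduces the window described above: a
reader can then observe the new limit while the metadata for the new range is
still unprepared.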

Commit d57c889a fixed all of the places where we updated arena_used
immediately before mapping the heap bitmap and spans, but it missed
the one place where we update arena_used and depend on later code to
update it again and map the bitmap and spans. This creates a window
where the original race can still happen. This commit fixes this by
mapping the heap bitmap and spans before this arena_used update as
well. This code path is only taken when expanding the heap reservation
on 32-bit over a hole in the address space, so these extra mmap calls
should have negligible impact.

Fixes #10212, #11324.

Change-Id: Id67795e6c7563eb551873bc401e5cc997aaa2bd8
Reviewed-on: https://go-review.googlesource.com/11340
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
parent 2a331ca8
@@ -405,7 +405,10 @@ func mHeap_SysAlloc(h *mheap, n uintptr) unsafe.Pointer {
 				// Keep everything page-aligned.
 				// Our pages are bigger than hardware pages.
 				h.arena_end = p + p_size
-				h.arena_used = p + (-uintptr(p) & (_PageSize - 1))
+				used := p + (-uintptr(p) & (_PageSize - 1))
+				mHeap_MapBits(h, used)
+				mHeap_MapSpans(h, used)
+				h.arena_used = used
 				h.arena_reserved = reserved
 			} else {
 				var stat uint64
@@ -41,7 +41,7 @@ type mheap struct {
 	bitmap         uintptr
 	bitmap_mapped  uintptr
 	arena_start    uintptr
-	arena_used     uintptr
+	arena_used     uintptr // always mHeap_Map{Bits,Spans} before updating
 	arena_end      uintptr
 	arena_reserved bool