Commit 955b4caa authored by Michael Hudson-Doyle

test: move allocation before munmap in recover4

recover4 allocates 16 pages of memory via mmap, makes a 4-page hole in it with
munmap, allocates another 16 pages of memory via normal allocation, and then
tries to copy from one to the other. For some reason on arm64 (but on no other
platform I have tested) the second allocation sometimes causes the runtime to
ask the kernel for 4 additional pages of memory -- which the kernel satisfies
by remapping the pages that were just unmapped!

Moving the second allocation before the munmap fixes this behaviour: with the
fix I can run recover4 tens of thousands of times without failure, versus a
failure rate of ~0.5% before.
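To make the failure mode concrete, the following is a minimal, self-contained sketch of the pattern the test uses, already with the allocation moved before the munmap. It assumes a unix-like platform (syscall.MAP_ANON, SYS_MUNMAP) and a helper named memcopy matching the snippet visible in the diff below; it is only a sketch, not the exact contents of test/recover4.go.

package main

import (
	"log"
	"runtime/debug"
	"syscall"
	"unsafe"
)

// memcopy copies src into dst until it faults on the unmapped hole,
// recovering from the fault and reporting how many bytes were copied.
func memcopy(dst, src []byte) (n int, err error) {
	defer func() {
		if r, ok := recover().(error); ok {
			err = r
		}
	}()
	for i := 0; i < len(dst) && i < len(src); i++ {
		dst[i] = src[i]
		n++
	}
	return
}

func main() {
	debug.SetPanicOnFault(true) // turn the memory fault into a recoverable panic

	size := syscall.Getpagesize()

	// Map 16 pages of anonymous memory.
	data, err := syscall.Mmap(-1, 0, 16*size,
		syscall.PROT_READ|syscall.PROT_WRITE,
		syscall.MAP_ANON|syscall.MAP_PRIVATE)
	if err != nil {
		log.Fatalf("mmap: %v", err)
	}

	// Allocate the destination before punching the hole (the fix): doing it
	// afterwards can make the runtime ask the kernel for more pages, which
	// the kernel may satisfy by remapping the hole we just created.
	other := make([]byte, 16*size)

	// Punch a 4-page hole in the middle. syscall.Munmap refuses to unmap
	// only part of a mapping, so call munmap directly.
	if _, _, errno := syscall.Syscall(syscall.SYS_MUNMAP,
		uintptr(unsafe.Pointer(&data[8*size])), uintptr(4*size), 0); errno != 0 {
		log.Fatalf("munmap: %v", errno)
	}

	// Copying into data must fault when the copy reaches the hole;
	// memcopy recovers the fault and reports how much was copied.
	n, err := memcopy(data[5:], other)
	if err == nil {
		log.Fatalf("no error from memcopy across unmapped hole")
	}
	if n != 8*size-5 {
		log.Fatalf("copied %d bytes, expected %d", n, 8*size-5)
	}
}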

Fixes #12549

Change-Id: I490b895b606897e4f7f25b1b51f5d485a366fffb
Reviewed-on: https://go-review.googlesource.com/14632
Reviewed-by: Dave Cheney <dave@cheney.net>
parent 7c61d24f
test/recover4.go
@@ -52,6 +52,8 @@ func main() {
 		log.Fatalf("mmap: %v", err)
 	}
 
+	other := make([]byte, 16*size)
+
 	// Note: Cannot call syscall.Munmap, because Munmap checks
 	// that you are unmapping a whole region returned by Mmap.
 	// We are trying to unmap just a hole in the middle.
@@ -59,8 +61,6 @@ func main() {
 		log.Fatalf("munmap: %v", err)
 	}
 
-	other := make([]byte, 16*size)
-
 	// Check that memcopy returns the actual amount copied
 	// before the fault (8*size - 5, the offset we skip in the argument).
 	n, err := memcopy(data[5:], other)