Commit f2d05389 authored by Austin Clements

runtime: perform write barriers on direct channel receive

Currently we have write barriers for direct channel sends, where the
receiver is blocked and the sender is writing directly to the
receiver's stack; but not for direct channel receives, where the
sender is blocked and the receiver is reading directly from the
sender's stack.
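
In user code, the direct-receive case is just an unbuffered send that blocks until a later receive; the runtime then copies the element straight out of the parked sender's stack frame. A minimal sketch of that interleaving (plain Go, no runtime internals):

    package main

    import "fmt"

    func main() {
        ch := make(chan *int) // unbuffered: the send below blocks until the receive

        go func() {
            v := 42
            ch <- &v // sender parks here; &v refers to a slot on its own stack
        }()

        // If the sender is already parked when this receive runs, the runtime
        // copies the element out of the sender's stack (the case this commit
        // covers); otherwise the send writes directly to this stack instead.
        p := <-ch
        fmt.Println(*p) // 42
    }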

This was okay with the old write barrier because either 1) the
receiver would write the received pointer into the heap (causing it to
be shaded), 2) the pointer would still be on the receiver's stack at
mark termination and we would rescan it, or 3) the receiver dropped
the pointer so it wasn't necessarily reachable anyway.
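
Concretely, the three cases describe what the receiving goroutine does with the pointer it just pulled off the channel. A hedged sketch of each (names here are illustrative, not from the runtime):

    package chancases // illustrative package, not part of the runtime

    var sink []*int // a heap location; storing a pointer here goes through the write barrier

    // Case 1: the received pointer is published to the heap, so the old
    // barrier shaded the object at that store.
    func storeInHeap(ch <-chan *int) { sink = append(sink, <-ch) }

    // Case 2: the pointer only ever lives in a stack slot, where the old
    // mark-termination stack rescan would still find it.
    func keepOnStack(ch <-chan *int) {
        p := <-ch
        _ = *p // read through it, but never write the pointer to the heap
    }

    // Case 3: the value is discarded, so the object never needed to stay
    // reachable through this goroutine at all.
    func drop(ch <-chan *int) { <-ch }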

This is not okay with the new write barrier because it lets a grey stack
send a white pointer to a black stack and then remove it from its own
stack. If the grey stack was the sole grey-protector of this pointer,
this hides the object from the garbage collector.

Fix this by making direct receives perform a stack-to-stack write
barrier just like direct sends do.
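
The diff below carries that fix: a recvDirect helper is added next to the existing sendDirect, and recv switches to it on the blocked-sender path. Stripped of comments, the two helpers are symmetric, each running the bulk pointer barrier over the slot before the raw memmove. A condensed sketch of the pair (the sendDirect body is abridged from surrounding runtime code that the diff shows only partially):

    // Send side (pre-existing): src is on our stack, dst is a slot on the
    // blocked receiver's stack.
    func sendDirect(t *_type, sg *sudog, src unsafe.Pointer) {
        dst := sg.elem
        typeBitsBulkBarrier(t, uintptr(dst), uintptr(src), t.size)
        memmove(dst, src, t.size)
    }

    // Receive side (added here): dst is on our stack or the heap, src is a
    // slot on the blocked sender's stack.
    func recvDirect(t *_type, sg *sudog, dst unsafe.Pointer) {
        src := sg.elem
        typeBitsBulkBarrier(t, uintptr(dst), uintptr(src), t.size)
        memmove(dst, src, t.size)
    }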

Fixes #17694.

Change-Id: I1a4cb904e4138d2ac22f96a3e986635534a5ae41
Reviewed-on: https://go-review.googlesource.com/32450
Reviewed-by: Rick Hudson <rlh@golang.org>
parent 9a8bf2d6
@@ -287,16 +287,18 @@ func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func()) {
 	goready(gp, 4)
 }
 
+// Sends and receives on unbuffered or empty-buffered channels are the
+// only operations where one running goroutine writes to the stack of
+// another running goroutine. The GC assumes that stack writes only
+// happen when the goroutine is running and are only done by that
+// goroutine. Using a write barrier is sufficient to make up for
+// violating that assumption, but the write barrier has to work.
+// typedmemmove will call bulkBarrierPreWrite, but the target bytes
+// are not in the heap, so that will not help. We arrange to call
+// memmove and typeBitsBulkBarrier instead.
+
 func sendDirect(t *_type, sg *sudog, src unsafe.Pointer) {
-	// Send on an unbuffered or empty-buffered channel is the only operation
-	// in the entire runtime where one goroutine
-	// writes to the stack of another goroutine. The GC assumes that
-	// stack writes only happen when the goroutine is running and are
-	// only done by that goroutine. Using a write barrier is sufficient to
-	// make up for violating that assumption, but the write barrier has to work.
-	// typedmemmove will call bulkBarrierPreWrite, but the target bytes
-	// are not in the heap, so that will not help. We arrange to call
-	// memmove and typeBitsBulkBarrier instead.
+	// src is on our stack, dst is a slot on another stack.
 
 	// Once we read sg.elem out of sg, it will no longer
 	// be updated if the destination's stack gets copied (shrunk).
@@ -306,6 +308,15 @@ func sendDirect(t *_type, sg *sudog, src unsafe.Pointer) {
 	memmove(dst, src, t.size)
 }
 
+func recvDirect(t *_type, sg *sudog, dst unsafe.Pointer) {
+	// dst is on our stack or the heap, src is on another stack.
+	// The channel is locked, so src will not move during this
+	// operation.
+	src := sg.elem
+	typeBitsBulkBarrier(t, uintptr(dst), uintptr(src), t.size)
+	memmove(dst, src, t.size)
+}
+
 func closechan(c *hchan) {
 	if c == nil {
 		panic(plainError("close of nil channel"))
@@ -536,9 +547,7 @@ func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func()) {
 		}
 		if ep != nil {
 			// copy data from sender
-			// ep points to our own stack or heap, so nothing
-			// special (ala sendDirect) needed here.
-			typedmemmove(c.elemtype, ep, sg.elem)
+			recvDirect(c.elemtype, sg, ep)
 		}
 	} else {
 		// Queue is full. Take the item at the