Commit 6e9f664b authored by Michael Anthony Knyszek, committed by Austin Clements

runtime: don't coalesce scavenged spans with unscavenged spans

As a result of changes earlier in Go 1.12, the scavenger became much
more aggressive. In particular, when scavenged and unscavenged spans
coalesced, they would always become scavenged. This resulted in most
spans becoming scavenged over time. While this is good for keeping the
RSS of the program low, it also caused many more undue page faults and
many more calls to madvise.

For most applications, the impact of this was negligible. But for
applications that repeatedly grow and shrink the heap by large amounts,
the overhead can be significant. The overhead was especially obvious on
older versions of Linux where MADV_FREE isn't available and
MADV_DONTNEED must be used.
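
(To make that cost difference concrete, here is a rough standalone sketch
using golang.org/x/sys/unix on Linux. This is not the runtime's own code
path, which issues madvise itself when it scavenges; it only illustrates
the two advice modes named above.)

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Map and touch 1 MiB of anonymous memory so it has physical backing.
	mem, err := unix.Mmap(-1, 0, 1<<20, unix.PROT_READ|unix.PROT_WRITE,
		unix.MAP_ANON|unix.MAP_PRIVATE)
	if err != nil {
		panic(err)
	}
	defer unix.Munmap(mem)
	for i := range mem {
		mem[i] = 1
	}

	// MADV_FREE (Linux 4.5+) lets the kernel reclaim the pages lazily, only
	// under memory pressure; MADV_DONTNEED drops them immediately, so every
	// later access takes a page fault. Older kernels only offer the latter.
	if err := unix.Madvise(mem, unix.MADV_FREE); err != nil {
		if err := unix.Madvise(mem, unix.MADV_DONTNEED); err != nil {
			panic(err)
		}
		fmt.Println("MADV_FREE unavailable; fell back to MADV_DONTNEED")
	}
}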

This change makes it so that scavenged spans will never coalesce with
unscavenged spans. This results in fewer page faults overall. Aside
from this, the expected impact of this change is more heap growths on
average, as span allocations will be less likely to be fulfilled by an
existing free span. To mitigate this slightly, this change also
coalesces spans eagerly after scavenging, to at least ensure that all
scavenged spans and all unscavenged spans are coalesced with each other.
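
(For illustration, a simplified model of that rule; the real logic lives
in mheap.coalesce in the diff below, and the names freeSpan and
canCoalesce are invented for this sketch only.)

package main

import "fmt"

type freeSpan struct {
	start, npages uintptr // start address and length in runtime pages
	scavenged     bool    // whether the span's pages were returned to the OS
}

// canCoalesce reports whether two adjacent free spans may be merged under
// the new policy: only spans with the same scavenged state coalesce, so a
// scavenged span never silently absorbs unscavenged pages, or vice versa.
func canCoalesce(a, b freeSpan) bool {
	return a.scavenged == b.scavenged
}

func main() {
	a := freeSpan{start: 0x10000, npages: 4, scavenged: true}
	b := freeSpan{start: 0x18000, npages: 2, scavenged: false} // adjacent, assuming 8 KiB pages
	fmt.Println(canCoalesce(a, b)) // false: states differ, spans stay separate
	b.scavenged = true             // e.g. after the scavenger releases b's pages
	fmt.Println(canCoalesce(a, b)) // true: eager coalescing can now merge them
}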

Also, this change adds additional logic for the case where two adjacent
spans cannot coalesce. In this case, on platforms where the physical
page size is larger than the runtime's page size, we realign the
boundary between the two adjacent spans to a physical page boundary. The
advantage of this approach is that "unscavengable" spans, that is, spans
which cannot be scavenged because they don't cover at least a single
physical page, are grown to a size where they have a higher likelihood of
being discovered by the runtime's scavenging mechanisms when they border
a scavenged span. This helps prevent the runtime from accruing pockets
of "unscavengable" memory in between scavenged spans, preventing them
from coalescing.
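
(As a standalone sketch of that rounding: the 64 KiB physical page size
below is an assumed value for illustration, and realignBoundary is a
made-up helper mirroring the arithmetic in the diff, not the runtime's
own function.)

package main

import "fmt"

const physPageSize = 64 << 10 // assumed 64 KiB physical pages; only matters when larger than the runtime page size

// realignBoundary rounds the boundary between two adjacent free spans to a
// physical page boundary, toward the scavenged span, so the unscavenged span
// grows to cover whole physical pages that can later be scavenged.
func realignBoundary(boundary uintptr, lowerIsScavenged bool) uintptr {
	if lowerIsScavenged {
		// The lower span is scavenged: round down, shrinking it.
		return boundary &^ (physPageSize - 1)
	}
	// The upper span is scavenged: round up, shrinking it instead.
	return (boundary + physPageSize - 1) &^ (physPageSize - 1)
}

func main() {
	// Two free spans meet at 0x14000, which is not 64 KiB-aligned.
	fmt.Printf("%#x\n", realignBoundary(0x14000, true))  // 0x10000
	fmt.Printf("%#x\n", realignBoundary(0x14000, false)) // 0x20000
}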

We specifically choose to apply this logic to all spans because it
simplifies the code, even though it isn't strictly necessary. The
expectation is that this change will result in a slight loss in
performance on platforms where the physical page size is larger than the
runtime page size.

Update #14045.

Change-Id: I64fd43eac1d6de6f51d7a2ecb72670f10bb12589
Reviewed-on: https://go-review.googlesource.com/c/158078
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
parent 2f99e889
src/runtime/mheap.go

@@ -426,14 +426,17 @@ func (h *mheap) coalesce(s *mspan) {
 	prescavenged := s.released() // number of bytes already scavenged.
 	// merge is a helper which merges other into s, deletes references to other
-	// in heap metadata, and then discards it.
+	// in heap metadata, and then discards it. other must be adjacent to s.
 	merge := func(other *mspan) {
-		// Adjust s via base and npages.
+		// Adjust s via base and npages and also in heap metadata.
+		s.npages += other.npages
+		s.needzero |= other.needzero
 		if other.startAddr < s.startAddr {
 			s.startAddr = other.startAddr
+			h.setSpan(s.base(), s)
+		} else {
+			h.setSpan(s.base()+s.npages*pageSize-1, s)
 		}
-		s.npages += other.npages
-		s.needzero |= other.needzero
 		// If before or s are scavenged, then we need to scavenge the final coalesced span.
 		needsScavenge = needsScavenge || other.scavenged || s.scavenged
@@ -450,16 +453,63 @@ func (h *mheap) coalesce(s *mspan) {
 		h.spanalloc.free(unsafe.Pointer(other))
 	}
+	// realign is a helper which shrinks other and grows s such that their
+	// boundary is on a physical page boundary.
+	realign := func(a, b, other *mspan) {
+		// Caller must ensure a.startAddr < b.startAddr and that either a or
+		// b is s. a and b must be adjacent. other is whichever of the two is
+		// not s.
+		// If pageSize <= physPageSize then spans are always aligned
+		// to physical page boundaries, so just exit.
+		if pageSize <= physPageSize {
+			return
+		}
+		// Since we're resizing other, we must remove it from the treap.
+		if other.scavenged {
+			h.scav.removeSpan(other)
+		} else {
+			h.free.removeSpan(other)
+		}
+		// Round boundary to the nearest physical page size, toward the
+		// scavenged span.
+		boundary := b.startAddr
+		if a.scavenged {
+			boundary &^= (physPageSize - 1)
+		} else {
+			boundary = (boundary + physPageSize - 1) &^ (physPageSize - 1)
+		}
+		a.npages = (boundary - a.startAddr) / pageSize
+		b.npages = (b.startAddr + b.npages*pageSize - boundary) / pageSize
+		b.startAddr = boundary
+		h.setSpan(boundary-1, a)
+		h.setSpan(boundary, b)
+		// Re-insert other now that it has a new size.
+		if other.scavenged {
+			h.scav.insert(other)
+		} else {
+			h.free.insert(other)
+		}
+	}
 	// Coalesce with earlier, later spans.
 	if before := spanOf(s.base() - 1); before != nil && before.state == mSpanFree {
-		merge(before)
-		h.setSpan(s.base(), s)
+		if s.scavenged == before.scavenged {
+			merge(before)
+		} else {
+			realign(before, s, before)
+		}
 	}
 	// Now check to see if next (greater addresses) span is free and can be coalesced.
 	if after := spanOf(s.base() + s.npages*pageSize); after != nil && after.state == mSpanFree {
-		merge(after)
-		h.setSpan(s.base()+s.npages*pageSize-1, s)
+		if s.scavenged == after.scavenged {
+			merge(after)
+		} else {
+			realign(s, after, after)
+		}
 	}
 	if needsScavenge {
@@ -1309,6 +1359,10 @@ func (h *mheap) scavengeLargest(nbytes uintptr) {
 		}
 		n := t.prev()
 		h.free.erase(t)
+		// Now that s is scavenged, we must eagerly coalesce it
+		// with its neighbors to prevent having two spans with
+		// the same scavenged state adjacent to each other.
+		h.coalesce(s)
 		t = n
 		h.scav.insert(s)
 		released += r
@@ -1328,6 +1382,10 @@ func (h *mheap) scavengeAll(now, limit uint64) uintptr {
 		r := s.scavenge()
 		if r != 0 {
 			h.free.erase(t)
+			// Now that s is scavenged, we must eagerly coalesce it
+			// with its neighbors to prevent having two spans with
+			// the same scavenged state adjacent to each other.
+			h.coalesce(s)
 			h.scav.insert(s)
 			released += r
 		}