Commit f864d89e authored by Iskander Sharipov, committed by Brad Fitzpatrick

runtime: remove TODO notes suggesting jump tables

For memmove/memclr, using jump tables only reduces overall
function performance on both amd64 and 386.

Benchmarks for 32-bit memclr:

	name            old time/op    new time/op    delta
	Memclr/5-8        8.01ns ± 0%    8.94ns ± 2%  +11.59%  (p=0.000 n=9+9)
	Memclr/16-8       9.05ns ± 0%    9.49ns ± 0%   +4.81%  (p=0.000 n=8+8)
	Memclr/64-8       9.15ns ± 0%    9.49ns ± 0%   +3.76%  (p=0.000 n=9+10)
	Memclr/256-8      16.6ns ± 0%    16.6ns ± 0%     ~     (p=1.140 n=10+9)
	Memclr/4096-8      179ns ± 0%     166ns ± 0%   -7.26%  (p=0.000 n=9+8)
	Memclr/65536-8    3.36µs ± 1%    3.31µs ± 1%   -1.48%  (p=0.000 n=10+9)
	Memclr/1M-8       59.5µs ± 3%    60.5µs ± 2%   +1.67%  (p=0.009 n=10+10)
	Memclr/4M-8        239µs ± 3%     245µs ± 0%   +2.49%  (p=0.004 n=10+8)
	Memclr/8M-8        618µs ± 2%     614µs ± 1%     ~     (p=0.315 n=10+8)
	Memclr/16M-8      1.49ms ± 2%    1.47ms ± 1%   -1.11%  (p=0.029 n=10+10)
	Memclr/64M-8      7.06ms ± 1%    7.05ms ± 0%     ~     (p=0.573 n=10+8)
	[Geo mean]        3.36µs         3.39µs        +1.14%
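
The numbers above come from fixed-size clears. For orientation only, a
comparable benchmark can be sketched in plain Go, relying on the compiler
lowering the clear-loop idiom into a runtime memclr call; the package name,
sizes and helper below are illustrative, not the runtime's own benchmark
harness:

	package memclr_test

	import (
		"fmt"
		"testing"
	)

	// A few of the sizes from the table above.
	var clearSizes = []int{5, 16, 64, 256, 4096, 65536}

	func BenchmarkClear(b *testing.B) {
		for _, n := range clearSizes {
			buf := make([]byte, n)
			b.Run(fmt.Sprint(n), func(b *testing.B) {
				b.SetBytes(int64(n))
				for i := 0; i < b.N; i++ {
					// The compiler lowers this clear-loop idiom to a
					// memclr call for pointer-free element types.
					for j := range buf {
						buf[j] = 0
					}
				}
			})
		}
	}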

For less predictable data, like loop-iteration-dependent sizes,
a branch table still shows 2-5% worse results.
It also makes the code slightly more complicated.
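
"Loop-iteration-dependent" here means the requested size changes on every
call, so the size dispatch inside memclr cannot be predicted well. A
benchmark of that shape might look like this (again only a sketch, in the
same hypothetical test file as above):

	// The cleared length varies on every iteration, defeating prediction
	// of the size dispatch (branches or a jump table) inside memclr.
	func BenchmarkClearVariableSize(b *testing.B) {
		buf := make([]byte, 256)
		for i := 0; i < b.N; i++ {
			n := 1 + (i*37)%256 // cheap varying size in [1,256]
			s := buf[:n]
			for j := range s {
				s[j] = 0
			}
		}
	}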

This CL removes the TODO notes that directly suggest trying this
optimization out, as they encourage people to spend their time
on a fairly hopeless endeavour.

The experimental branch-table implementation used a 32/64-entry table
of pointers to TEXT blocks, one per associated label's work.
Most of the last entries point to the "loop" code, which is the
fallthrough for all sizes that do not map to a specialized routine.
The only inefficiency is the extra MOVL/MOVQ required to fetch the
table pointer itself, since MOVL $sym<>(SB)(AX*4) is not valid
in Go asm (it works in other assemblers):

	TEXT ·memclrNew(SB), NOSPLIT, $0-8
		MOVL	ptr+0(FP), DI
		MOVL	n+4(FP), BX
		// Handle 0 separately.
		TESTL	BX, BX
		JEQ	_0
		LEAL	-1(BX), CX // n-1
		BSRL	CX, CX
		// AX or X0 zeroed inside every text block.
		MOVL	$memclrTable<>(SB), AX
		JMP	(AX)(CX*4)
	_0:
		RET
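
For reference, the LEAL/BSRL pair above computes the table index as the bit
position of the highest set bit of n-1, so every size in (2^k, 2^(k+1)]
shares one entry and everything past the specialized routines falls through
to the generic loop entry. In Go terms (an illustrative sketch only, valid
for n >= 2):

	package main

	import (
		"fmt"
		"math/bits"
	)

	// tableIndex mirrors LEAL -1(BX), CX; BSRL CX, CX:
	// for n >= 2 it returns the highest set bit position of n-1.
	func tableIndex(n uint32) int {
		return bits.Len32(n-1) - 1 // == BSR(n-1)
	}

	func main() {
		for _, n := range []uint32{2, 3, 4, 5, 8, 9, 16, 17, 256, 257} {
			fmt.Printf("n=%3d -> entry %d\n", n, tableIndex(n))
		}
	}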

Change-Id: I4f706931b8127f85a8439b95834d5c2485a5d1bf
Reviewed-on: https://go-review.googlesource.com/115678
Run-TryBot: Iskander Sharipov <iskander.sharipov@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
parent 92f8acd1
@@ -16,6 +16,7 @@ TEXT runtime·memclrNoHeapPointers(SB), NOSPLIT, $0-8
 	// MOVOU seems always faster than REP STOSL.
 tail:
+	// BSR+branch table make almost all memmove/memclr benchmarks worse. Not worth doing.
 	TESTL	BX, BX
 	JEQ	_0
 	CMPL	BX, $2
@@ -38,7 +39,6 @@ tail:
 	JBE	_65through128
 	CMPL	BX, $256
 	JBE	_129through256
-	// TODO: use branch table and BSR to make this just a single dispatch
 loop:
 	MOVOU	X0, 0(DI)
......
@@ -17,6 +17,7 @@ TEXT runtime·memclrNoHeapPointers(SB), NOSPLIT, $0-16
 	// MOVOU seems always faster than REP STOSQ.
 tail:
+	// BSR+branch table make almost all memmove/memclr benchmarks worse. Not worth doing.
 	TESTQ	BX, BX
 	JEQ	_0
 	CMPQ	BX, $2
@@ -39,7 +40,6 @@ tail:
 	JBE	_129through256
 	CMPB	internal∕cpu·X86+const_offsetX86HasAVX2(SB), $1
 	JE	loop_preheader_avx2
-	// TODO: use branch table and BSR to make this just a single dispatch
 	// TODO: for really big clears, use MOVNTDQ, even without AVX2.
 loop:
......
@@ -39,6 +39,7 @@ TEXT runtime·memmove(SB), NOSPLIT, $0-12
 	// 128 because that is the maximum SSE register load (loading all data
 	// into registers lets us ignore copy direction).
 tail:
+	// BSR+branch table make almost all memmove/memclr benchmarks worse. Not worth doing.
 	TESTL	BX, BX
 	JEQ	move_0
 	CMPL	BX, $2
@@ -58,7 +59,6 @@ tail:
 	JBE	move_33through64
 	CMPL	BX, $128
 	JBE	move_65through128
-	// TODO: use branch table and BSR to make this just a single dispatch
 nosse2:
 /*
......
@@ -43,6 +43,8 @@ tail:
 	// registers before writing it back. move_256through2048 on the other
 	// hand can be used only when the memory regions don't overlap or the copy
 	// direction is forward.
+	//
+	// BSR+branch table make almost all memmove/memclr benchmarks worse. Not worth doing.
 	TESTQ	BX, BX
 	JEQ	move_0
 	CMPQ	BX, $2
@@ -63,7 +65,6 @@ tail:
 	JBE	move_65through128
 	CMPQ	BX, $256
 	JBE	move_129through256
-	// TODO: use branch table and BSR to make this just a single dispatch
 	TESTB	$1, runtime·useAVXmemmove(SB)
 	JNZ	avxUnaligned
......