c++ code
#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    // Generate data
    const unsigned arraySize = 32768;
    int data[arraySize];

    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // !!! With this, the next loop runs faster
    std::sort(data, data + arraySize);

    // Test
    clock_t start = clock();
    long long sum = 0;
    for (unsigned i = 0; i < 100000; ++i)
    {
        // Primary loop
        for (unsigned c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }

    double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

    std::cout << elapsedTime << std::endl;
    std::cout << "sum = " << sum << std::endl;
}
Without the std::sort(data, data + arraySize); line, the code runs in 11.54 seconds.
With the sorted data, the code runs in 1.93 seconds.
Initially, this might seem like just a language or compiler anomaly, so here is the same trial in Java.
java code
import java.util.Arrays;
import java.util.Random;

public class Main
{
    public static void main(String[] args)
    {
        // Generate data
        int arraySize = 32768;
        int data[] = new int[arraySize];

        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c)
            data[c] = rnd.nextInt() % 256;

        // !!! With this, the next loop runs faster
        Arrays.sort(data);

        // Test
        long start = System.nanoTime();
        long sum = 0;

        for (int i = 0; i < 100000; ++i)
        {
            // Primary loop
            for (int c = 0; c < arraySize; ++c)
            {
                if (data[c] >= 128)
                    sum += data[c];
            }
        }

        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);
    }
}
Notice that the data is evenly distributed between 0 and 255.
When the data is sorted, roughly the first half of the iterations will not enter the if-statement.
After that, they will all enter the if-statement.
This is very friendly to the branch predictor since the branch consecutively goes the same direction many times.
Even a simple saturating counter will correctly predict the branch except for the few iterations after it switches direction.
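To make "consecutively goes the same direction many times" concrete, here is a small, self-contained C++ sketch (not part of the original benchmark) that simply counts how often the branch outcome flips between consecutive iterations. Sorted data produces a single flip; random data flips roughly half the time:

c++ code
#include <algorithm>
#include <cstdlib>
#include <iostream>

int main()
{
    const unsigned arraySize = 32768;
    int data[arraySize];
    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // Count how often the branch outcome differs from the previous iteration.
    auto countSwitches = [&]() {
        long switches = 0;
        bool prev = (data[0] >= 128);
        for (unsigned c = 1; c < arraySize; ++c) {
            bool taken = (data[c] >= 128);
            if (taken != prev) ++switches;
            prev = taken;
        }
        return switches;
    };

    std::cout << "unsorted: " << countSwitches() << " direction changes\n";
    std::sort(data, data + arraySize);
    std::cout << "sorted:   " << countSwitches() << " direction changes\n";
}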
Quick visualization:
T = branch taken
N = branch not taken

data[] = 0, 1, 2, 3, 4, ... 126, 127, 128, 129, 130, ... 250, 251, 252, ...
branch = N  N  N  N  N  ...   N    N    T    T    T  ...   T    T    T  ...

       = NNNNNNNNNNNN ... NNNNNNNTTTTTTTTT ... TTTTTTTTTT  (easy to predict)
However, when the data is completely random, the branch predictor is rendered useless because it can’t predict random data.
data[] = 226, 185, 125, 158, 198, 144, 217,  79, 202, 118,  14, 150, 177, 182, 133, ...
branch =   T,   T,   N,   T,   T,   T,   T,   N,   T,   N,   N,   T,   T,   T,   N  ...

       = TTNTTTTNTNNTTTN ...   (completely random – hard to predict)
If the compiler isn’t able to optimize the branch into a conditional move, you can try the following hack.
Replace:
c++ code
if (data[c] >= 128) sum += data[c];
with:
c++ code
int t = (data[c] - 128) >> 31;
sum += ~t & data[c];
This eliminates the branch and replaces it with some bitwise operations.
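Why this works: for a 32-bit int, (data[c] - 128) >> 31 is all one-bits when data[c] < 128 and all zero-bits otherwise (assuming the usual arithmetic right shift of negative values), so ~t & data[c] yields data[c] for values >= 128 and 0 for the rest. A tiny sketch of my own (not from the original post) verifying that for every byte value:

c++ code
#include <cassert>

// Check that the bitwise hack matches the branchy version for every possible
// byte value. Assumes the usual arithmetic right shift of negative ints
// (true on mainstream compilers and targets).
int main()
{
    for (int v = 0; v < 256; ++v) {
        int t = (v - 128) >> 31;        // all ones if v < 128, else all zeros
        int branchless = ~t & v;        // v if v >= 128, else 0
        int branchy = (v >= 128) ? v : 0;
        assert(branchless == branchy);
    }
}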
With the branch: there is a huge difference between the sorted and unsorted data.
With the hack: there is no difference between sorted and unsorted data.
In the C++ case, the hack is actually a tad slower than with the branch when the data is sorted.
c++ code
if (data[c] >= 128) sum += data[c];
The meaning of this particular if… else… branch is to add something when a condition is satisfied.
This type of branch can be easily transformed into a conditional move statement, which would be compiled into a conditional move instruction, cmovl, on an x86 system.
The branch and thus the potential branch prediction penalty is removed.
In C, and thus C++, the statement that would compile directly (without any optimization) into the conditional move instruction on x86 is the ternary operator … ? … : ….
So it can be rewritten as:
c++ code
sum += data[c] >=128 ? data[c] : 0;
x86 results:

// Branch - Random
seconds = 8.885

// Branch - Sorted
seconds = 1.528

// Branchless - Random
seconds = 3.716

// Branchless - Sorted
seconds = 3.71
max2 uses much less code due to the use of the cmovge instruction.
But the real gain is that max2 does not involve branch jumps (jmp), which would have a significant performance penalty if the prediction is not right. (The max1 and max2 functions are sketched below.)
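The max1 and max2 functions themselves are not shown in this excerpt; based on the description, they presumably look something like this minimal sketch:

c++ code
// Branchy version: typically compiled to a compare followed by a
// conditional jump.
int max1(int a, int b)
{
    if (a > b)
        return a;
    else
        return b;
}

// Branchless version: the ternary is easily turned into a conditional
// move (e.g. cmovge on x86), so there is no jump to mispredict.
int max2(int a, int b)
{
    return a > b ? a : b;
}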
In a typical x86 processor, the execution of an instruction is divided into several stages, and there is different hardware dedicated to each stage.
This means the processor does not have to wait for one instruction to finish before starting the next one; this is called pipelining.
Starting with the original loop:
c++ code
for (unsigned i = 0; i < 100000; ++i)
{
    for (unsigned j = 0; j < arraySize; ++j)
    {
        if (data[j] >= 128)
            sum += data[j];
    }
}
With loop interchange, we can change this loop to:
c++ code
for (unsigned j = 0; j < arraySize; ++j)
{
    for (unsigned i = 0; i < 100000; ++i)
    {
        if (data[j] >= 128)
            sum += data[j];
    }
}
The “if” conditional is constant throughout the execution of the “i” loop, so you can hoist the “if” out:
c++ code
for (unsigned j = 0; j < arraySize; ++j)
{
    if (data[j] >= 128)
    {
        for (unsigned i = 0; i < 100000; ++i)
        {
            sum += data[j];
        }
    }
}
Then the inner loop can be collapsed into a single expression, assuming the floating-point model allows it:
c++ code
for (unsigned j = 0; j < arraySize; ++j)
{
    if (data[j] >= 128)
    {
        sum += data[j] * 100000;
    }
}
The Valgrind tool cachegrind has a branch-predictor simulator, enabled by using the --branch-sim=yes flag.
Running it over the examples in this question, with the number of outer loops reduced to 10000 and compiled with g++, shows a dramatic difference in mispredicted branches between the sorted and unsorted versions.
Drilling down into the line-by-line output produced by cg_annotate, we see the following for the loop in question:
Sorted:
c++ code
          Bc    Bcm Bi Bim
      10,001      4  0   0      for (unsigned i = 0; i < 10000; ++i)
           .      .  .   .      {
           .      .  .   .          // primary loop
 327,690,000 10,016  0   0          for (unsigned c = 0; c < arraySize; ++c)
           .      .  .   .          {
 327,680,000 10,006  0   0              if (data[c] >= 128)
           0      0  0   0                  sum += data[c];
           .      .  .   .          }
           .      .  .   .      }
Unsorted:
c++ code
          Bc         Bcm Bi Bim
      10,001           4  0   0      for (unsigned i = 0; i < 10000; ++i)
           .           .  .   .      {
           .           .  .   .          // primary loop
 327,690,000      10,038  0   0          for (unsigned c = 0; c < arraySize; ++c)
           .           .  .   .          {
 327,680,000 164,050,007  0   0              if (data[c] >= 128)
           0           0  0   0                  sum += data[c];
           .           .  .   .          }
           .           .  .   .      }
This identifies the problematic line: in the unsorted version, the if (data[c] >= 128) line causes 164,050,007 mispredicted conditional branches (Bcm) under cachegrind’s branch-predictor model, whereas it causes only 10,006 in the sorted version.
Alternatively, on Linux you can use the performance counters subsystem to accomplish the same thing, but with native performance using CPU hardware counters.
A common way to eliminate the branch penalty, which also works in managed languages, is to use a table lookup instead of a branch:
c# code
// Generate data
int arraySize = 32768;
int[] data = new int[arraySize];

Random rnd = new Random(0);
for (int c = 0; c < arraySize; ++c)
    data[c] = rnd.Next(256);

// To keep the spirit of the code intact, we make a separate lookup table
// (assume we cannot modify 'data' or the number of loops)
int[] lookup = new int[256];

for (int c = 0; c < 256; ++c)
    lookup[c] = (c >= 128) ? c : 0;

// Test
DateTime startTime = System.DateTime.Now;
long sum = 0;

for (int i = 0; i < 100000; ++i)
{
    // Primary loop
    for (int j = 0; j < arraySize; ++j)
    {
        // Here you basically want to use simple operations - so no
        // random branches, but things like &, |, *, -, +, etc. are fine.
        sum += lookup[data[j]];
    }
}
To see how much the pattern of outcomes matters, we can measure the performance of this loop with different conditions:
java code
for (int i = 0; i < max; i++)
    if (condition)
        sum++;
Here are the timings of the loop with different True-False patterns:
Condition                Pattern               Time (ms)
---------------------------------------------------------
(i & 0x80000000) == 0    T repeated            322
(i & 0xffffffff) == 0    F repeated            276
(i & 1) == 0             TF alternating        760
(i & 3) == 0             TFFFTFFF…             513
(i & 2) == 0             TTFFTTFF…             1675
(i & 4) == 0             TTTTFFFFTTTTFFFF…     1275
(i & 8) == 0             8T 8F 8T 8F …         752
(i & 16) == 0            16T 16F 16T 16F …     490
A “bad” true-false pattern can make an if-statement up to six times slower than a “good” pattern!
Of course, which pattern is good and which is bad depends on the exact instructions generated by the compiler and on the specific processor.
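The table above comes from the original micro-benchmark, which is not reproduced here. A minimal C++ sketch of the same kind of experiment might look like the following (timings will differ with compiler, optimization level and CPU, and an aggressive optimizer may remove the branch entirely):

c++ code
#include <chrono>
#include <cstdio>

// Hypothetical harness (not the one that produced the table above):
// times "if (condition) sum++" for a few true/false patterns.
int main()
{
    const int max = 100000000;

    auto bench = [max](const char* name, auto condition) {
        // volatile discourages the compiler from deleting the loop
        volatile long long sum = 0;
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < max; i++)
            if (condition(i))
                sum = sum + 1;
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("%-16s %6lld ms  (sum = %lld)\n",
                    name, static_cast<long long>(ms),
                    static_cast<long long>(sum));
    };

    bench("T repeated",     [](int i) { return (i & 0x80000000) == 0; });
    bench("TF alternating", [](int i) { return (i & 1) == 0; });
    bench("TTFFTTFF...",    [](int i) { return (i & 2) == 0; });
    bench("16T 16F ...",    [](int i) { return (i & 16) == 0; });
}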
One way to avoid branch prediction errors is to build a lookup table and index it using the data.
By using the 0/1 value of the decision bit as an index into an array, we can make code that will be equally fast whether the data is sorted or not.
Our code will always add a value, but when the decision bit is 0, we will add the value somewhere we don’t care about.
c++ code
// Test
clock_t start = clock();
long long a[] = {0, 0};
long long sum;

for (unsigned i = 0; i < 100000; ++i)
{
    // Primary loop
    for (unsigned c = 0; c < arraySize; ++c)
    {
        // data[c] >> 7 is 0 for values < 128 and 1 for values >= 128,
        // so the values we don't want end up accumulated in a[0].
        int j = (data[c] >> 7);
        a[j] += data[c];
    }
}

sum = a[1];   // only the values >= 128 were collected in a[1]
This code wastes half of the adds, but never has a branch prediction failure.
The technique of indexing into an array, instead of using an if statement, can be used for deciding which pointer to use.
For example, a library implementing binary trees can, instead of having two named pointers (pLeft and pRight, or whatever), use a length-2 array of pointers and the “decision bit” technique to decide which one to follow.
For example, instead of:
c++ code
if (x < node->value)
    node = node->pLeft;
else
    node = node->pRight;

such a library would do something like:

c++ code
i = (x < node->value);
node = node->link[i];
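A minimal sketch of what such a node layout and lookup might look like (Node, link and find are illustrative names, not taken from any particular library):

c++ code
#include <iostream>

struct Node
{
    int   value;
    Node* link[2];   // link[1] is followed when x < value, link[0] otherwise
};

// Branch-free descent: the comparison result (0 or 1) selects the child.
Node* find(Node* node, int x)
{
    while (node != nullptr && node->value != x)
    {
        int i = (x < node->value);
        node = node->link[i];
    }
    return node;
}

int main()
{
    // Tiny three-node tree: 10 at the root, 5 below it on one side, 20 on the other.
    Node small = {  5, { nullptr, nullptr } };
    Node large = { 20, { nullptr, nullptr } };
    Node root  = { 10, { &large, &small } };   // link[0] = larger keys, link[1] = smaller keys

    std::cout << (find(&root, 20) != nullptr) << "\n";   // 1: found
    std::cout << (find(&root,  7) != nullptr) << "\n";   // 0: not found
}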
In the sorted case, you can do better than relying on successful branch prediction or any branchless comparison trick: completely remove the branch.
The array is partitioned in a contiguous zone with data < 128 and another with data >= 128. So you should find the partition point with a dichotomic search (using Lg(arraySize) = 15 comparisons), then do a straight accumulation from that point.
Something like this (unchecked):
c++ code
int i = 0, j, k = arraySize;
while (i < k)
{
    j = (i + k) >> 1;
    if (data[j] >= 128)
        k = j;
    else
        i = j + 1;   // j + 1 (not j), so the search always makes progress
}
sum = 0;
for (; i < arraySize; i++)
    sum += data[i];
Or, slightly more obfuscated:
c++ code
int i, j, k;
for (i = 0, k = arraySize; i < k; (data[j] >= 128 ? k : i) = j)
    j = (i + k) >> 1;
for (sum = 0; i < arraySize; i++)
    sum += data[i];
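For what it’s worth, the standard library already provides this dichotomic search. A sketch of the same idea using std::partition_point and std::accumulate (equivalent logic, not from the original answer):

c++ code
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    // Already sorted, so it is partitioned: all values < 128 come first.
    std::vector<int> data = {3, 40, 100, 127, 128, 130, 200, 255};

    // First element that is not < 128, i.e. the start of the ">= 128" zone.
    auto first = std::partition_point(data.begin(), data.end(),
                                      [](int v) { return v < 128; });

    long long sum = std::accumulate(first, data.end(), 0LL);
    std::cout << "sum = " << sum << "\n";   // 128 + 130 + 200 + 255 = 713
}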
To see why the branch predictor gets confused, remember that each element in the original code is a random value:
c++ code
data[c] = std::rand() % 256;
so the predictor will change sides essentially at random, following whatever std::rand() produces.
Once the data is sorted, the predictor first moves into the strongly-not-taken state, and when the values change to the high range, the predictor needs only three runs to change all the way from strongly not taken to strongly taken.
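To make that description concrete, here is a small sketch that simulates a 2-bit saturating counter (the simple predictor described above) over sorted and unsorted data and counts its mispredictions. It is an illustrative model, not how any particular CPU implements prediction:

c++ code
#include <algorithm>
#include <cstdlib>
#include <iostream>

// 2-bit saturating counter: states 0..1 predict "not taken", 2..3 "taken".
long countMispredicts(const int* data, unsigned n)
{
    int state = 0;          // start at "strongly not taken"
    long mispredicts = 0;
    for (unsigned c = 0; c < n; ++c) {
        bool taken = (data[c] >= 128);
        bool predicted = (state >= 2);
        if (taken != predicted) ++mispredicts;
        // Move the counter one step toward the actual outcome.
        if (taken)  state = std::min(state + 1, 3);
        else        state = std::max(state - 1, 0);
    }
    return mispredicts;
}

int main()
{
    const unsigned arraySize = 32768;
    int data[arraySize];
    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    std::cout << "unsorted mispredicts: " << countMispredicts(data, arraySize) << "\n";
    std::sort(data, data + arraySize);
    std::cout << "sorted mispredicts:   " << countMispredicts(data, arraySize) << "\n";
}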