Here is a piece of C++ code that seems very peculiar: for some reason, sorting the data first makes the loop almost six times faster.

c++ code
#include <algorithm>
#include <ctime>
#include <iostream>

int main()
{
    // Generate data
    const unsigned arraySize = 32768;
    int data[arraySize];

    for (unsigned c = 0; c < arraySize; ++c)
        data[c] = std::rand() % 256;

    // !!! With this, the next loop runs faster
    std::sort(data, data + arraySize);

    // Test
    clock_t start = clock();
    long long sum = 0;

    for (unsigned i = 0; i < 100000; ++i)
    {
        // Primary loop
        for (unsigned c = 0; c < arraySize; ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }

    double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;

    std::cout << elapsedTime << std::endl;
    std::cout << "sum = " << sum << std::endl;
}
  • Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds.
  • With the sorted data, the code runs in 1.93 seconds.

At first glance this might be just a language or compiler anomaly, so here is the same trial in Java.

java code
import java.util.Arrays;
import java.util.Random;

public class Main
{
    public static void main(String[] args)
    {
        // Generate data
        int arraySize = 32768;
        int data[] = new int[arraySize];

        Random rnd = new Random(0);
        for (int c = 0; c < arraySize; ++c)
            data[c] = rnd.nextInt() % 256;

        // !!! With this, the next loop runs faster
        Arrays.sort(data);

        // Test
        long start = System.nanoTime();
        long sum = 0;

        for (int i = 0; i < 100000; ++i)
        {
            // Primary loop
            for (int c = 0; c < arraySize; ++c)
            {
                if (data[c] >= 128)
                    sum += data[c];
            }
        }

        System.out.println((System.nanoTime() - start) / 1000000000.0);
        System.out.println("sum = " + sum);
    }
}

The result is somewhat similar, but less extreme. The culprit is this line:

c++ code
if (data[c] >= 128)
    sum += data[c];
  • Notice that the data is evenly distributed between 0 and 255.
  • When the data is sorted, roughly the first half of the iterations will not enter the if-statement.
  • After that, they will all enter the if-statement.
  • This is very friendly to the branch predictor since the branch consecutively goes the same direction many times.
  • Even a simple saturating counter will correctly predict the branch except for the few iterations after it switches direction.

Quick visualization:

T = branch taken, N = branch not taken

data[] = 0, 1, 2, 3, 4, … 126, 127, 128, 129, 130, … 250, 251, 252, …
branch = N  N  N  N  N  …   N    N    T    T    T  …   T    T    T  …

       = NNNNNNNNNNNN … NNNNNNNTTTTTTTTT … TTTTTTTTTT  (easy to predict)

  • However, when the data is completely random, the branch predictor is rendered useless because it can’t predict random data.

data[] = 226, 185, 125, 158, 198, 144, 217,  79, 202, 118,  14, 150, 177, 182, 133, …
branch =   T,   T,   N,   T,   T,   T,   T,   N,   T,   N,   N,   T,   T,   T,   N, …

       = TTNTTTTNTNNTTTN … (completely random – hard to predict)

If the compiler isn't able to optimize the branch into a conditional move, you can try this hack.

Replace:

c++ code
if (data[c] >= 128)
    sum += data[c];

with:

c++ code
int t = (data[c] - 128) >> 31;
sum += ~t & data[c];
  • This eliminates the branch and replaces it with some bitwise operations.

C++ – Visual Studio 2010 – x64 Release

c++ code
// Branch - Random
seconds = 11.777
// Branch - Sorted
seconds = 2.352
// Branchless - Random
seconds = 2.564
// Branchless - Sorted
seconds = 2.587

Java – Netbeans 7.1.1 JDK 7 – x64

java code
// Branch - Random
seconds = 10.93293813
// Branch - Sorted
seconds = 5.643797077
// Branchless - Random
seconds = 3.113581453
// Branchless - Sorted
seconds = 3.186068823
  • With the branch: there is a huge difference between the sorted and the unsorted data.
  • With the hack: there is no difference between sorted and unsorted data.
  • In the C++ case, the hack is actually a tad slower than the branch when the data is sorted.

c++ code
if (data[c] >= 128)
    sum += data[c];
  • The meaning of this particular if… else… branch is to add something when a condition is satisfied.
  • This type of branch can be easily transformed into a conditional move statement, which compiles into a conditional move instruction (cmovl) on an x86 system.
  • The branch, and thus the potential branch-prediction penalty, is removed.
  • In C, and thus in C++, the statement that compiles directly (without any optimization) into the conditional move instruction on x86 is the ternary operator … ? … : ….
  • So it can be rewritten as:
c++ code
sum += data[c] >= 128 ? data[c] : 0;
c++ code
x86
// Branch - Random
seconds = 8.885
// Branch - Sorted
seconds = 1.528
// Branchless - Random
seconds = 3.716
// Branchless - Sorted
seconds = 3.71
c++ code
x64
// Branch - Random
seconds = 11.302
// Branch - Sorted
seconds = 1.830
// Branchless - Random
seconds = 2.736
// Branchless - Sorted
seconds = 2.737

To see where the gain comes from, let's investigate the x86 assembly the two forms generate. For simplicity, we use two functions, max1 and max2.

max1 uses the conditional branch if… else…:

c++ code
int max1(int a, int b)
{
    if (a > b)
        return a;
    else
        return b;
}

max2 uses the ternary operator … ? … : …:

c++ code
int max2(int a, int b)
{
    return a > b ? a : b;
}
assembly code
max1:
    movl    %edi, -4(%rbp)
    movl    %esi, -8(%rbp)
    movl    -4(%rbp), %eax
    cmpl    -8(%rbp), %eax
    jle     .L2
    movl    -4(%rbp), %eax
    movl    %eax, -12(%rbp)
    jmp     .L4
.L2:
    movl    -8(%rbp), %eax
    movl    %eax, -12(%rbp)
.L4:
    movl    -12(%rbp), %eax
    leave
    ret

max2:
    movl    %edi, -4(%rbp)
    movl    %esi, -8(%rbp)
    movl    -4(%rbp), %eax
    cmpl    %eax, -8(%rbp)
    cmovge  -8(%rbp), %eax
    leave
    ret
  • max2 uses much less code due to the use of the cmovge instruction.
  • But the real gain is that max2 does not involve branch jumps (jmp), which would carry a significant performance penalty if the predicted result is not right.
  • In a typical x86 processor, the execution of an instruction is divided into several stages.
  • Different pieces of hardware handle the different stages.
  • So we do not have to wait for one instruction to finish before starting the next one.
  • This is called pipelining.

An optimizing compiler can sometimes remove the branch entirely via loop transformations. Starting with the original loop:

c++ code
for (unsigned i = 0; i < 100000; ++i)
{
    for (unsigned j = 0; j < arraySize; ++j)
    {
        if (data[j] >= 128)
            sum += data[j];
    }
}

With loop interchange, we can change this loop to:

c++ code
for (unsigned j = 0; j < arraySize; ++j)
{
    for (unsigned i = 0; i < 100000; ++i)
    {
        if (data[j] >= 128)
            sum += data[j];
    }
}

The "if" conditional is constant throughout the execution of the "i" loop, so you can hoist the "if" out:

c++ code
for (unsigned j = 0; j < arraySize; ++j)
{
    if (data[j] >= 128)
    {
        for (unsigned i = 0; i < 100000; ++i)
        {
            sum += data[j];
        }
    }
}

Then the inner loop can be collapsed into a single expression (trivially valid here because the arithmetic is integral; with floating-point data this would require a relaxed floating-point model):

c++ code
for (unsigned j = 0; j < arraySize; ++j)
{
    if (data[j] >= 128)
    {
        sum += data[j] * 100000;
    }
}

  • The Valgrind tool Cachegrind has a branch-predictor simulator, enabled with the --branch-sim=yes flag.
  • Running it over the examples in this question, with the number of outer loops reduced to 10000 and compiled with g++, gives these results:

Sorted:

output
==32551== Branches:     656,645,130  (656,609,208 cond +  35,922 ind)
==32551== Mispredicts:      169,556  (    169,095 cond +     461 ind)
==32551== Mispred rate:         0.0% (        0.0%     +     1.2%   )

Unsorted:

output
==32555== Branches:     655,996,082  (655,960,160 cond +  35,922 ind)
==32555== Mispredicts:  164,073,152  (164,072,692 cond +     460 ind)
==32555== Mispred rate:        25.0% (       25.0%     +     1.2%   )
  • Drilling down into the line-by-line output produced by cg_annotate we see for the loop in question:

Sorted:

output
         Bc         Bcm  Bi  Bim
     10,001           4   0    0  for (unsigned i = 0; i < 10000; ++i)
          .           .   .    .  {
          .           .   .    .      // primary loop
327,690,000      10,016   0    0      for (unsigned c = 0; c < arraySize; ++c)
          .           .   .    .      {
327,680,000      10,006   0    0          if (data[c] >= 128)
          0           0   0    0              sum += data[c];
          .           .   .    .      }
          .           .   .    .  }

Unsorted:

output
         Bc         Bcm  Bi  Bim
     10,001           4   0    0  for (unsigned i = 0; i < 10000; ++i)
          .           .   .    .  {
          .           .   .    .      // primary loop
327,690,000      10,038   0    0      for (unsigned c = 0; c < arraySize; ++c)
          .           .   .    .      {
327,680,000 164,050,007   0    0          if (data[c] >= 128)
          0           0   0    0              sum += data[c];
          .           .   .    .      }
          .           .   .    .  }
  • This pinpoints the problematic line: in the unsorted version, the if (data[c] >= 128) line causes 164,050,007 mispredicted conditional branches (Bcm) under Cachegrind's branch-predictor model, whereas it causes only 10,006 in the sorted version.

  • On Linux, the performance counters subsystem (perf) can accomplish the same measurement with native performance, using the CPU's own hardware counters instead of a simulator.

Sorted:

output
Performance counter stats for './sumtest_sorted':

    11808.095776 task-clock                #    0.998 CPUs utilized
           1,062 context-switches          #    0.090 K/sec
              14 CPU-migrations            #    0.001 K/sec
             337 page-faults               #    0.029 K/sec
  26,487,882,764 cycles                    #    2.243 GHz
  41,025,654,322 instructions              #    1.55  insns per cycle
   6,558,871,379 branches                  #  555.455 M/sec
         567,204 branch-misses             #    0.01% of all branches

    11.827228330 seconds time elapsed

Unsorted:

output
Performance counter stats for './sumtest_unsorted':

    28877.954344 task-clock                #    0.998 CPUs utilized
           2,584 context-switches          #    0.089 K/sec
              18 CPU-migrations            #    0.001 K/sec
             335 page-faults               #    0.012 K/sec
  65,076,127,595 cycles                    #    2.253 GHz
  41,032,528,741 instructions              #    0.63  insns per cycle
   6,560,579,013 branches                  #  227.183 M/sec
   1,646,394,749 branch-misses             #   25.10% of all branches

    28.935500947 seconds time elapsed
  • perf can also do source code annotation with disassembly:
  • perf record -e branch-misses ./sumtest_unsorted
  • perf annotate -d sumtest_unsorted

Percent |   Source code & Disassembly of sumtest_unsorted

output
...
         :                 sum += data[c];
    0.00 :        400a1a:  mov    -0x14(%rbp),%eax
   39.97 :        400a1d:  mov    %eax,%eax
    5.31 :        400a1f:  mov    -0x20040(%rbp,%rax,4),%eax
    4.60 :        400a26:  cltq
    0.00 :        400a28:  add    %rax,-0x30(%rbp)
...

A common way to avoid branch prediction penalties, which also works in managed languages, is a table lookup instead of a branch (here in C#):

c# code
// Generate data
int arraySize = 32768;
int[] data = new int[arraySize];

Random rnd = new Random(0);
for (int c = 0; c < arraySize; ++c)
    data[c] = rnd.Next(256);

// To keep the spirit of the code intact, we will make a separate lookup table
// (assume we cannot modify 'data' or the number of loops)
int[] lookup = new int[256];

for (int c = 0; c < 256; ++c)
    lookup[c] = (c >= 128) ? c : 0;

// Test
DateTime startTime = System.DateTime.Now;
long sum = 0;

for (int i = 0; i < 100000; ++i)
{
    // Primary loop
    for (int j = 0; j < arraySize; ++j)
    {
        // Here you basically want to use simple operations - so no
        // random branches, but things like &, |, *, -, +, etc. are fine.
        sum += lookup[data[j]];
    }
}

DateTime endTime = System.DateTime.Now;
Console.WriteLine(endTime - startTime);
Console.WriteLine("sum = " + sum);

Console.ReadLine();

  • To see how sensitive the predictor is to the pattern of outcomes, measure the performance of this loop with different conditions:

c++ code
for (int i = 0; i < max; i++)
    if (condition)
        sum++;

Here are the timings of the loop with different True-False patterns:

Condition               Pattern                 Time (ms)
(i & 0x80000000) == 0   T repeated              322
(i & 0xffffffff) == 0   F repeated              276
(i & 1) == 0            TF alternating          760
(i & 3) == 0            TFFFTFFF…               513
(i & 2) == 0            TTFFTTFF…               1675
(i & 4) == 0            TTTTFFFFTTTTFFFF…       1275
(i & 8) == 0            8T 8F 8T 8F …           752
(i & 16) == 0           16T 16F 16T 16F …       490

  • A "bad" true-false pattern can make an if-statement up to six times slower than a "good" pattern!
  • Which pattern is good and which is bad depends on the exact instructions generated by the compiler and on the specific processor.

  • Another way to avoid branch prediction errors is to build a lookup table and index it using the data.
  • By using the 0/1 value of the decision bit as an index into an array, we can write code that is equally fast whether the data is sorted or not.
  • The code will always add a value, but when the decision bit is 0 it adds the value somewhere we don't care about.
c++ code
// Test
clock_t start = clock();
long long a[] = {0, 0};
long long sum;

for (unsigned i = 0; i < 100000; ++i)
{
    // Primary loop
    for (unsigned c = 0; c < arraySize; ++c)
    {
        int j = (data[c] >> 7);
        a[j] += data[c];
    }
}

double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;
sum = a[1];

This code wastes half of the adds, but never has a branch prediction failure.

  • The technique of indexing into an array, instead of using an if statement, can also be used for deciding which pointer to use.
  • One library that implemented binary trees, instead of having two named pointers (pLeft and pRight or whatever), had a length-2 array of pointers and used the "decision bit" technique to decide which one to follow.
  • For example, instead of:

c++ code
if (x < node->value)
    node = node->pLeft;
else
    node = node->pRight;

this library would do something like:

c++ code
i = (x < node->value);
node = node->link[i];

  • In the sorted case, you can do better than relying on successful branch prediction or any branchless comparison trick: completely remove the branch.
  • The array is partitioned in a contiguous zone with data < 128 and another with data >= 128. So you should find the partition point with a dichotomic search (using Lg(arraySize) = 15 comparisons), then do a straight accumulation from that point.

Something like this (unchecked):

c++ code
int i = 0, j, k = arraySize;
while (i < k)
{
    j = (i + k) >> 1;
    if (data[j] >= 128)
        k = j;
    else
        i = j + 1;   // must advance past j, or the loop may never terminate
}
sum = 0;
for (; i < arraySize; i++)
    sum += data[i];

or, slightly more obfuscated:

c++ code
int i, j, k;
for (i = 0, k = arraySize; i < k; data[j] >= 128 ? k = j : i = j + 1)
    j = (i + k) >> 1;
for (sum = 0; i < arraySize; i++)
    sum += data[i];

A state diagram of a two-bit saturating branch predictor shows why it gets confused.

  • Each element in the original code is a random value
c++ code
data[c] = std::rand() % 256;
  • so the predictor will change sides as often as the std::rand() values flip.
  • Once the data is sorted, the predictor first settles into a "strongly not taken" state, and when the values change to the high range it takes only a few iterations to move all the way from strongly not taken to strongly taken.
