
The C/C++ Developer’s Guide – Part 1: ccache

Benjamin Cribb
Operations Leader
September 3, 2019 | 8:00 am CDT

This is the first of two posts in which we cover several handy open-source tools that can help decrease your compile times. The focus here is ccache.

As a software developer, much of your average workday consists of recurring develop–build–test cycles: you devise and write new code (or modify existing code), type make into your terminal or hit the "compile" button in your IDE, execute the resulting program and/or tests, then go back to working on your code base.

If you’re using a just-in-time-compiled or interpreted language such as JavaScript, Perl or Python, you are in the fortunate position that the build step is trivial or happens in the background. Maybe no such step even exists in the first place. But if your code is written in a compiled language such as C++ or C, you know the routine: the larger and more complex your project grows, the longer it will inevitably take to build, especially if your development machine is a regular laptop and not a monstrous workstation. So, your computer starts toiling away and, unless you have some other small task at hand to bridge the time gap [1], all you can do is wait for the compiler to finish its job.

Of course, you can tackle this like many performance issues by throwing additional hardware at it. You can place a more potent machine under your desk or set up a dedicated build server for the whole team. But hardware costs money, takes up space, consumes electricity and produces noise and heat, so why not try to make more efficient use of the resources that are already there?

Tools to the rescue!

Fortunately, there are several tools that can help. We can roughly separate them into two categories that attack the issue from different angles:

  1. Memorizing your build artifacts so the compiler does not have to do the same thing repeatedly.
  2. Harnessing idle processing power by turning the machines in your office into a distributed build cluster.

I believe in the open-source movement, so that is what we’ll be focusing on. The best-known representative of category 1 is ccache, which we cover in this article.

ccache: a C/C++ compiler cache

Originally, ccache was developed as a utility for the Samba open-source project. It is a compiler cache that keeps the artifacts produced by past compilation runs in order to speed up subsequent runs. Roughly speaking, if you recompile a source file with the same contents, the same compiler and the same flags, the product is retrieved from the cache instead of being compiled anew in a time-consuming step.

How ccache works

This section looks at the inner workings of ccache. If you want a more detailed look, the official documentation should have everything you need.

ccache works as a compiler wrapper — its external interface is very similar to that of your compiler, and it passes your commands to the latter. Unfortunately, since ccache needs to examine and interpret the command-line flags, it cannot be paired with arbitrary compilers; it is currently designed to be used in combination with either GCC or Clang, the two big open-source compiler projects.

You invoke ccache just like your regular compiler and pass your long list of command-line arguments as normal. It operates at the level of individual source files, so one call to ccache generally translates one .cpp or .c file into one .o file. For each input source file, the wrapper performs a lookup in its cache. If it encounters a hit, it simply delivers the cached item. In the case of a cache miss, it launches the actual compiler, passing on the litany of command-line arguments, and then inserts the resulting artifact into the cache.
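To make this concrete, a single invocation might look as follows (the file name is a hypothetical example); you simply prefix your usual compiler command with ccache:

$ ccache g++ -O2 -Wall -c foo.cpp -o foo.o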

The cache itself is a regular directory on your disk. It is assigned a quota and, if that quota is exceeded, old entries get evicted. You can tell ccache which directory to use for the current build. This way, you can maintain different caches for different projects, so compiling a new project B will not pollute the existing cache of project A.

A look inside

[Figure: schematic internal workflow of ccache.]

The lookup of cache entries is carried out with the help of a unique tag, which is a string consisting of two elements: a hash value and the size of the preprocessed source file. The hash value is computed by running the information that is relevant for producing the output file through the MD4 message-digest function. Among other things, this information includes the following items:

  • the identity of the compiler (name, size and modification time),
  • the compiler flags used,
  • the contents of the input source file,
  • the contents of the included header files (and, transitively, of the headers those include in turn).

The figure above shows what the approximate workflow looks like.
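As a rough illustration of the idea (this is not ccache’s actual implementation, md5sum merely stands in for MD4, and foo.cpp is a hypothetical source file), you could approximate such a tag in the shell:

$ g++ -E -O2 -Wall foo.cpp > foo.i    # run only the preprocessor
$ { stat -c '%s %Y' "$(which g++)"; echo '-O2 -Wall'; cat foo.i; } | md5sum    # hash compiler identity, flags and preprocessed source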

After computing the tag value, ccache checks if an entry with that tag already exists in the cache. If so, we have a hit: no recompilation is needed. Conveniently, ccache memorizes not only the artifact itself but also the console output the compiler printed when translating that artifact — hence, if you retrieve a cached file that has previously produced compiler warnings, ccache will print out these warnings again.
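You can observe this behavior with a small experiment (demo.c is a throwaway file created just for this test):

$ printf 'int main(void) { int unused; return 0; }\n' > demo.c
$ ccache gcc -Wall -c demo.c    # cache miss: the compiler runs and warns about the unused variable
$ ccache gcc -Wall -c demo.c    # cache hit: the same warning is replayed from the cache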

Additional cleverness with the direct mode

While that process already works well in practice, performance may suffer from the fact that every execution of ccache requires a full run of the preprocessor on the source file. This can potentially turn the preprocessor into a severe bottleneck. As a solution, ccache implements an alternative direct mode, which is somewhat more complex but renders the compulsory preprocessor run obsolete. In that mode, ccache computes MD4 hashes for each included header file individually and stores the results in a so-called manifest. A cache lookup is done by comparing the hashes of the source file and of all its includes with the contents of the manifest; if all hashes match pair-wise, we have a hit. In current versions of ccache, the direct mode is enabled by default.
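Should you ever want to rule out the manifest-based lookup while troubleshooting, you can disable direct mode; ccache provides the CCACHE_NODIRECT environment variable and the direct_mode configuration key for this purpose:

$ export CCACHE_NODIRECT=1    # fall back to the preprocessor-based mode for this shell session

Equivalently, you can put the line direct_mode = false into your ccache.conf.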

Is MD4 considered insecure?

If you are familiar with cryptography, you may be asking yourself why the severely outdated and cryptographically broken MD4 hash function is being used. The answer is simple: because MD4 is fast, and cryptographic strength is not relevant in our use case. After all, ccache is not about preventing the forgery of sensitive message contents by a malicious attacker. Rather, it is about detecting and avoiding redundant work. The combination of a 128-bit MD4 hash with a length suffix makes it sufficiently unlikely that you will ever hit a tag collision in practice [2].

How to use ccache in your project

The following assumes that your development machine is a Linux box running a 64-bit version of either Debian/Ubuntu or Fedora/CentOS/RHEL. If you are on another Linux distribution or another Unix-like operating system such as BSD or macOS, the instructions will be analogous but may differ in certain details such as paths. This content does not apply to Windows users.

Installing ccache

Simply install the package that comes with your Linux distribution. Done!

On Debian/Ubuntu:

$ sudo apt install ccache

On Fedora/CentOS/RHEL:

$ sudo dnf install ccache

or

$ sudo yum install ccache
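Either way, you can verify the installation afterwards by asking the wrapper for its version:

$ ccache --version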

Enabling ccache in your project

The ccache package ships a system directory containing a number of symbolic links; these links are named after the common compiler binaries (including gcc, clang and c++) but point to ccache’s compiler wrapper. To activate ccache, you prepend that directory to your PATH environment variable so the wrapper is called instead of the regular compiler binary. Paste the following line into your terminal or add it to your project configuration. If you are sure you want to use ccache for everything, you can also add it to your shell configuration file (~/.bashrc, ~/.zshrc etc.). Note that the ccache directory must come first because the system searches the PATH from left to right.

On Debian/Ubuntu:

$ export PATH="/usr/lib/ccache:$PATH"

On Fedora/CentOS/RHEL:

$ export PATH="/usr/lib64/ccache:$PATH"

Your next build will now be accelerated with ccache. Of course, you will not notice any speedups during the first run because the cache is initially empty. Subsequent builds will benefit from the cache.

By default, ccache will place the cache in ~/.ccache below your home directory and assign a maximum capacity of 5 GB.
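At any time, you can inspect how full the cache is and how often it has been hit:

$ ccache -s    # show statistics such as cache hits, cache misses and cache size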

Optional step: tell ccache which cache directory to use

You may want to set up a separate cache just for your project. This makes sense if you work on several projects and do not want builds of other projects to evict your precious cached compilation artifacts. Of course, your disk must be large enough to accommodate several caches [3]. To set up a dedicated cache for your project, simply create a new empty directory in the desired location (e.g., ~/Projects/myproject/ccache). Then tell ccache about that directory through the CCACHE_DIR environment variable:

$ export CCACHE_DIR="$HOME/Projects/myproject/ccache"
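You can confirm which directory is in effect by printing ccache’s current configuration (the -p flag is available in recent ccache versions):

$ ccache -p | grep cache_dir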

Optional step: configuring the cache

There is a wide variety of settings to configure or tweak your cache(s). For an extensive overview, refer to the official documentation.

Global preferences that should apply to all caches go into /etc/ccache.conf. You can override those global settings for an individual cache by defining new values in a file named ccache.conf inside the cache directory. For instance, if you wish to double the capacity of your main project’s cache, edit ~/Projects/myproject/ccache/ccache.conf (create it if it is not there) and add the following line:

max_size = 10.0G
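Alternatively, you can let ccache write the setting for you; with CCACHE_DIR pointing at the cache in question, the following command stores the new limit in that cache’s configuration:

$ ccache -M 10G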

Optional advanced-level step: using ccache with Docker

You may be using Docker as an easy way to provide a consistent predefined build environment. In such a configuration, the build is performed within the container.

To be effective, your compiler cache should be somewhat long-lived and persistent. Hence, you do not want to couple the state of the cache with the state of the container, which is why you should keep the cache directory outside the container and bind-mount it into the container’s file-system tree. In the following example, we will mount it in a fixed location as /ccache. Note that this is one way to do it, but there are several alternatives.

Edit your Dockerfile and have ccache installed in the container. If your container is based on Debian, Ubuntu or a derivative thereof, you can add something like this:

RUN apt-get update && apt-get install -y ccache

To define the environment variable that tells ccache which cache directory to use, add the following line to the Dockerfile:

ENV CCACHE_DIR "/ccache"

Afterwards, rebuild your container image.

To execute the build, run Docker as usual but tell it on the command line to bind your cache directory to /ccache:

$ docker run (...) --volume=$HOME/Projects/myproject/ccache:/ccache (...)
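Putting it all together, a complete invocation could look like the following sketch; the image name myproject-build and the source-tree mount are hypothetical placeholders for your actual setup:

$ docker run --rm \
      --volume="$HOME/Projects/myproject:/src" \
      --volume="$HOME/Projects/myproject/ccache:/ccache" \
      myproject-build make -C /src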

A real-world example

In a recent project, our team supported a customer in developing the control software for a product line of medical devices. Our project’s code base is written mostly in C++ and has accumulated well over half a million lines of code in total. Our build system uses CMake in combination with Ninja [4], which automatically parallelizes the build across the available CPU cores. The build environment is containerized with Docker.

On my laptop, the first compilation run after a checkout takes almost 45 minutes. Now, Ninja is smart and keeps track of the dependencies between source files and build artifacts, so it generally rebuilds only the artifacts affected by my modifications, along with everything that transitively depends on them. However, every time I switch between development branches in my working copy, substantial parts of the source tree are recompiled from scratch.

I ran the build on a clean source tree with three different configurations:

  • without ccache,
  • with ccache and an empty cache,
  • with ccache and a warm cache.

[Figure: build times without and with ccache.]

Each build configuration was measured ten times. The times shown are the respective arithmetic means; error bars represent one standard deviation. The results are depicted in the figure above.

The bad news is that ccache incurs a small overhead of approximately 5 % when the cache is cold. But there is good news: when the cache is warm, the time savings are significant. Of the mere three minutes that remain, the bulk of the time is taken by invocations of the linker, whose work cannot be cached by ccache.

Room for improvement

Although ccache is a great tool that can save time, it still leaves room for further improvement. One weakness of ccache is that it operates at the granularity of entire source files. Thus, it cannot properly handle cases such as changing only a comment in a widely included header, even though such a modification would require no recompilation at all.

A smart caching solution would base its decisions not on the raw source code but on the abstract syntax tree (AST) that results from parsing that code. A change in a line of code that does not affect the AST can safely be ignored. This has the potential to greatly reduce the number of false cache misses. There is ongoing research into this topic, resulting in a promising prototype named cHash. If you want to know more about it, read the paper.

Notes

  1. And as we all know, mental context switches are usually very expensive.
  2. I can think of a collision attack where a malicious actor poisons your compilation cache with a forged source file that happens to produce the exact same tag as one of your project files. If the cached artifact gets successfully linked into your binary (which means it must also provide the same symbols), then the attacker has managed to inject evil code into your program. But this is a really far-fetched scenario.
  3. You want both your build directory and the cache to reside on a fast SSD. If you don’t have an SSD in your machine already, go install one now.
  4. In case you are not familiar with Ninja: it is a build system that puts a focus on speed. Ninja assumes that control files are generated rather than hand-written.
