C++ Insights with Clang 18 and more

I'm happy to announce that C++ Insights is now driven by LLVM/Clang 18! This time, updating to the new Clang version was more straightforward than before. However, I had new obstacles to overcome.

I've been using an ARM Mac for some time now, meaning that my main build machine runs on ARM. I decided to take a bigger step and start building an ARM version as well. I offer self-hosted installations, for example, inside your company, which I use myself during all my classes and talks. Running Docker natively instead of translating between Intel and ARM seemed like a good idea.

I tested parts of that on my machine. The build itself did work. Getting LLVM/Clang for ARM was a different story. For a while now, I've been building my own LLVM/Clang distribution for C++ Insights. The reason is that the official macOS releases of LLVM/Clang aren't reliable: sometimes they get published after a new release, and sometimes they don't.

It took me a lot of build attempts, across different machines and macOS versions, to complete the build.

The next issue was making all the involved Docker images multi-architecture images. That part went okay, although it took way more time than expected.
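Building a multi-architecture image boils down to a docker buildx invocation along these lines (a sketch, not the project's actual build script; the platform list and tag are my assumptions):

```shell
# Build and push one image manifest covering both architectures.
# Requires a buildx builder with QEMU registered for the non-native platform.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag andreasfertig/cppinsights-builder \
  --push \
  .
```

With `--push`, buildx publishes a single manifest list, so pulling the image on either architecture transparently fetches the matching variant.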

New image names

Due to the new architecture, I changed the binary names on GitHub. Instead of insights-ubuntu-14.04, the binaries are now named:

  • insights-macos.tar.gz
  • insights-ubuntu-amd64.tar.gz
  • insights-ubuntu-arm64.tar.gz

That change was long overdue, as ubuntu-14.04 has been a lie for some time now. But of course, some parts of the chain expected the most recent version to be named ubuntu-14.04—another solvable task.

My next challenge was using the ARM Docker image during a GitHub Action run. Most articles I found online talked about how to build such an image, but I wanted to use it (I had already struggled with the building part).

As far as I can tell, there is no easy way. I was hoping to be able to use GitHub's container option and pass the desired platform (arm64), but that failed. At the moment, it looks like GitHub only supports native images out of the box.

With the help of a search engine, various cups of coffee, and multiple failed workflow runs, I now have a rather ugly but working solution. In each step of a GitHub Action where the command must run inside the C++ Insights build Docker, I run the command manually like this:

docker run \
  --rm \
  -v $(pwd):${{ github.workspace }} \
  -w ${{ github.workspace }} \
  --user $(id -u):$(id -g) \
  --platform linux/arm64 \
  andreasfertig/cppinsights-builder \
  cmake --build build

It's far from nice: there's a lot of code duplication, and at one point I even had to put the various commands in a file to execute them from there. I started thinking about creating my own GitHub Action, but that's for another day.
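Such a GitHub Action could fold the duplication into one reusable step. A hypothetical sketch of a composite action (the action path, input name, and platform are my assumptions, not the project's actual setup):

```yaml
# .github/actions/builder-run/action.yml (hypothetical)
name: Run in cppinsights-builder
description: Run a command inside the ARM build container
inputs:
  cmd:
    description: Command to run inside the container
    required: true
runs:
  using: composite
  steps:
    - shell: bash
      run: |
        docker run --rm \
          -v "$(pwd)":"${{ github.workspace }}" \
          -w "${{ github.workspace }}" \
          --user "$(id -u)":"$(id -g)" \
          --platform linux/arm64 \
          andreasfertig/cppinsights-builder \
          ${{ inputs.cmd }}
```

A workflow step would then shrink to `uses: ./.github/actions/builder-run` with `with: { cmd: cmake --build build }`.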

New things take longer

All right, at this point, the build was rolling... very, very slowly. The QEMU-emulated Docker isn't the fastest. Build times for C++ Insights are now up from ~14 min to ~40 min. It's not great, but I hope this improves in the future.

Now that I had the binaries for each platform, I had to prepare the cppinsights-container, which holds the binary and is used by the server. This was when things got interesting again. I updated the Makefile to download the two binaries and name them insights-arm64 and insights-amd64. So far, so good. But the Docker build, which ran for both platforms in parallel thanks to docker buildx, needed to know which binary to copy into the container.

Thankfully, Docker passes a build argument, TARGETARCH, which contains the target architecture. The rest was a piece of cake. Actually, I didn't have cake, only coffee and some Japanese sweets.
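In the Dockerfile, that looks roughly like this (a sketch; the base image and install path are my assumptions, but TARGETARCH is what buildx really provides):

```dockerfile
FROM ubuntu:22.04
# buildx sets TARGETARCH per platform: amd64 on x86-64, arm64 on ARM.
ARG TARGETARCH
# Picks insights-amd64 or insights-arm64, matching the Makefile naming above.
COPY insights-${TARGETARCH} /usr/local/bin/insights
```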

Unexpected difference between libc++ and libstdc++

I pushed the work to GitHub and was surprised by a failed workflow. You would think that after around 50 failed workflows, I should have expected that, but I didn't. The issue is that during its build, the container executes a few transformations to verify that everything is working. This check comes from the very beginning, when things were less robust, but I kept it around.

This test failed, or more precisely, the attempt to execute a test using libc++ failed. Doing the same with libstdc++ worked; all tests run with both combinations. Further investigation indicated that the Docker container gets killed during execution. I suspect this comes from the QEMU emulation. I'm not sure why libc++ causes trouble while libstdc++ works, but why not. You can see a similar pattern during the build of C++ Insights when the tests run with both libraries: ~17 min with libstdc++ vs. ~28 min with libc++.

I decided to save my resources and turned off the libc++ tests for ARM. Everything works fine on my Mac, so I'm fairly confident the failure comes from the emulation. The main container, the one used by the C++ Insights web server, runs the tests with both libraries as before.

One more issue to solve

Finally, I had one minor bump when I tried to update the cppinsights-webfrontend-container for multi-platform. Silly me, I expected the workflow to get green, but it didn't.

The issue here was, once again, the Dockerfile:

"deb [arch=amd64] https://download.docker.com/linux/ubuntu

The line above is required to make Docker available for installation on Ubuntu. Of course, the architecture had been hard-coded to amd64 for the last few years. Nothing a good search engine can't solve:

"deb [arch=$(dpkg --print-architecture)] https://download.docker.com/linux/ubuntu

The end

Believe it or not, that's it! The C++ Insights webserver runs fine despite all the changes, and I have an ARM version running locally.

Effort? Around three days, though I wasn't working exclusively on the various issues. I did other things in between while waiting for the next red workflow; sorry, I meant to say green. But it did consume many hours nonetheless.

So, I hope you enjoy using C++ Insights with the latest Clang additions from Clang 18. I'm also happy if you take advantage of the fact that you can now run C++ Insights on ARM.

Oh, I almost forgot: I also upgraded GCC from 12 to 13. That might be cutting it close, with GCC 14 about to be released, but it still brings new library features.

Support the project

You can support the project by becoming a GitHub sponsor or, of course, contributing with code.