How to run a local instance of C++ Insights

Now that C++ Insights has been available for more than one and a half years, I have received requests about how to run a local instance. I do it myself for my training classes and during conference talks, simply because I do not trust the Wi-Fi at conferences or training facilities. In this article, I will cover how you can run a local instance of the C++ Insights web frontend together with the same binary the website uses.

General overview

Let's first look behind the scenes at the website itself. It is powered by a Linux server running an Apache web server with Python. The source code editor uses JavaScript, specifically CodeMirror, for syntax highlighting and editing facilities.

When you request a transformation by pressing the play button or the equivalent shortcut, a REST request is sent to the web server. The Python part processes this request and, if it is valid, invokes a Docker container that contains the C++ Insights binary. There are at least two reasons for this setup. First, users have no access to the web server itself, and each invocation runs in isolation. The second reason is probably more important: when the C++ Insights binary is compiled, the include paths of the system it is built on are baked in. This makes it somewhat difficult to move the binary between systems. Keeping it in more or less the same environment it was compiled in makes things easier.
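For context, the binary inside that container is invoked like other Clang-based tools: the source file comes first, and compiler flags follow after a double dash. The exact flags the website passes are an assumption on my part, but an invocation looks roughly like this:

    # Illustrative invocation of the C++ Insights binary on a single translation unit.
    # Everything after "--" goes to the Clang tooling (standard version, include paths, ...).
    insights main.cpp -- -std=c++17 -I/usr/local/include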

Setting up the local environment

Now, to get things running on your local machine, you must have Docker installed. If you are happy with what is up and running at cppinsights.io, you can just clone this git repository:
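(The repository link is not reproduced here, so treat the URL below as a placeholder; the commands themselves are an ordinary git clone.)

    # Clone the web frontend repository (placeholder URL) and enter it.
    git clone https://github.com/<user>/<repository>.git
    cd <repository>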

In that repository, run make get. It will download the latest pre-built Docker images from Docker Hub:
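(The image names are handled by the Makefile, so you do not need to pull anything yourself; docker images is only an optional check.)

    # Fetch the pre-built images from Docker Hub.
    make get

    # Optional: verify that the images are now available locally.
    docker images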

The first image is the runtime environment for C++ Insights, which is exactly the same one the website uses.

The second image is the Docker image for the website itself. A careful reader may notice at this point that the website itself does not run in a Docker environment. However, packaging it this way seems to be the easiest way to distribute it.

After that, you can start a local instance with make start. C++ Insights should then be reachable at 127.0.0.1:5000. In case of trouble, you can run make logs to see what is happening inside the container. make stop shuts the instance down.
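Put together, a typical local session looks like this; the curl call is only a quick way to see whether the frontend answers on the port mentioned above:

    # Start the local instance and check that it responds.
    make start
    curl http://127.0.0.1:5000

    # Inspect the container output if something goes wrong.
    make logs

    # Shut the instance down again.
    make stop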

How it works

This all works because the second Docker container gets access to the host's Docker socket. With that, a process inside one container can start other containers that are available on the host system. It is not exactly what is sometimes referred to as a docker-in-docker setup, but it is close. There may be security issues I'm unaware of, so I advise you not to use this setup in a production-like environment.
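To illustrate the mechanism, here is a minimal sketch of that pattern with placeholder image names; the actual images and options used by the Makefile may differ:

    # Give the web frontend container access to the host's Docker socket.
    docker run --rm -p 5000:5000 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      example/cppinsights-web

    # A process inside that container can now start sibling containers on the host,
    # e.g. one short-lived container per transformation request.
    docker run --rm example/cppinsights-runtime insights main.cpp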

If you have any comments or questions, please reach out to me via X, LinkedIn, or, of course, GitHub.

You can support the project by becoming a GitHub Sponsor or, of course, with code contributions.

Andreas