Cloning and building a Beman library is simple: simple in that the process isn’t atypical, and simple in that there aren’t many (complicated) commands. Currently it is a two-step process: `git clone https://github.com/bemanproject/exemplar.git` followed by something like `cmake --workflow --preset gcc-debug`.
The basic instructions get you predictable results. Currently, if you use `cmake --workflow --preset gcc-debug`, it builds with specific versions of the project’s dependencies that are known to work.
The basic instructions get you high-quality configurations. The presets we provide have well-thought-out defaults for flags, sanitizers, optimization, etc.
The door isn’t closed to other configurations. If you want to use vcpkg or Conan for dependencies, fine. If you want to use a different generator or a different compiler, fine. All of this should be possible with regular CMake commands. Right now, if you run vanilla CMake against this repository, it uses `find_package` instead of `FetchContent`, so you can provision dependencies however you like (there’s a sketch of this pattern below).
We use standard CMake code. Here we’re not doing so well. We use custom toolchains, for one, but that doesn’t bother me as much as our complicated `use-fetch-content.cmake` script, which gets invoked from our `CMakeLists.txt` files.
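For concreteness, here is a rough sketch of the pattern the goals above imply: presets opt into pinned `FetchContent` provisioning, while vanilla CMake falls back to `find_package`. This is not beman.exemplar’s actual code; the option, package, and tag names are invented for illustration.

```cmake
# Illustrative sketch only -- the option, package, and tag names are hypothetical.
option(EXEMPLAR_FETCH_DEPENDENCIES "Provision dependencies with FetchContent" OFF)

if(EXEMPLAR_FETCH_DEPENDENCIES)
  # The preset path: pin a known-good version so builds are reproducible.
  include(FetchContent)
  FetchContent_Declare(
    GTest
    GIT_REPOSITORY https://github.com/google/googletest.git
    GIT_TAG        v1.14.0
  )
  FetchContent_MakeAvailable(GTest)
else()
  # The vanilla-CMake path: a plain `cmake -B build` takes this branch, so
  # vcpkg, Conan, or a system package manager can supply the dependency instead.
  find_package(GTest REQUIRED)
endif()
```

A preset like gcc-debug would then only need to set the cache variable that opts into the pinned path; both branches remain ordinary CMake.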
@purpleKarrot, I agree that all code that is useful for multiple CMake projects should ideally be upstreamed to CMake. In your expert opinion, what is the best way to achieve the above goals? What is the missing feature (if any) that we should attempt to upstream to CMake?
We aren’t doing magic in the toolchain files. We could technically move everything from the toolchain files out into the preset config and our CI scripts, but eh, it’s a lot cleaner to do it the toolchain way.
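For reference, nothing in such a toolchain file needs to be exotic. Here is a hedged sketch of roughly what a gcc-debug toolchain could contain; the compilers and flags are illustrative, not the project’s actual settings.

```cmake
# Hypothetical gcc-debug toolchain sketch; compilers and flags are illustrative.
set(CMAKE_C_COMPILER   gcc)
set(CMAKE_CXX_COMPILER g++)

# These flags could live in the preset or the CI script instead, but a
# toolchain file keeps presets short and CI scripts compiler-agnostic.
set(CMAKE_CXX_FLAGS_INIT "-fsanitize=address,undefined -fno-omit-frame-pointer")
```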
beman.exemplar does not require a toolchain file to be set up.
The core of everything is that there is tension between these two goals once dependencies get involved, even simple build-time dependencies like a test framework. Standard CMake does not provide a mechanism for provisioning dependencies, so any solution that is “simple” is by definition not “standard”.
Sidebar
The closest CMake ever came to providing something akin to dependency management was super projects, the original use case for ExternalProject and FetchContent. Super projects are distinct from traditional package management: in a super project, the composer of the project is responsible for figuring out the complete graph of all dependencies. CMake documents this usage in the “Complex Dependency Hierarchies” section of the FetchContent documentation.
/Sidebar
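To make the sidebar concrete, here is a hedged sketch of the super-project pattern; the project names and URLs are placeholders. The top-level composer declares every node of the graph, and because the first `FetchContent_Declare` for a given name wins, those pins override whatever the dependencies declare internally.

```cmake
# Hypothetical super project; names and URLs are placeholders.
cmake_minimum_required(VERSION 3.24)
project(my_super_project LANGUAGES CXX)

include(FetchContent)

# The composer pins the complete dependency graph up front.
FetchContent_Declare(libA GIT_REPOSITORY https://example.com/libA.git GIT_TAG v1.2.0)
FetchContent_Declare(libB GIT_REPOSITORY https://example.com/libB.git GIT_TAG v2.0.1)

# The first declaration for a name wins: even if libB also declares libA
# internally, the version pinned above is the one that gets built.
FetchContent_MakeAvailable(libA libB)
```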
CMake might one day provide a standard way to provision builds (there are a couple of proposals kicking around), but not today. Beman can either settle on one of the ecosystems that has solved this problem as its “first class” solution (while not shutting the door on the others), or it can use slightly convoluted solutions implemented in CMake.
There is no option that fully satisfies the triangle of CMake-only, standard CMake, and dependency provisioning. It is a “pick any two” dilemma.
I clearly see provisioning as the missing piece, but not as something that should be integrated into CMake. My requirements for a provisioning system would be:
There is a (standardized, build-system-agnostic) manifest in each project that describes the project’s direct dependencies. There is a distinction between build dependencies and runtime dependencies.
There may be a solver tool that recursively collects package manifests of dependencies and produces a lockfile.
There may be multiple (potentially platform-specific) provisioning approaches:
Create a CMake dependency provider based on FetchContent (see the sketch after this list).
Download binary packages into a prefix path.
Invoke the platform’s system packaging tool to install dependencies.
Create a container with dependencies preinstalled (build dependencies layered on top of runtime dependencies).
…
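As an example of the first of those approaches, a provisioning tool could hook into CMake’s dependency provider mechanism (available since CMake 3.24). The sketch below only shows the registration; the `lockfile_provide_dependency` name and the injected file are hypothetical, and the actual lockfile resolution is left as a comment.

```cmake
# lockfile_provider.cmake -- a hypothetical file that a provisioning tool would
# inject via: cmake -D CMAKE_PROJECT_TOP_LEVEL_INCLUDES=lockfile_provider.cmake
cmake_minimum_required(VERSION 3.24)
include(FetchContent)

macro(lockfile_provide_dependency method dep_name)
  # A real implementation would look up ${dep_name} in the solver-generated
  # lockfile and make it available at the pinned version; returning without
  # doing anything falls back to CMake's built-in behavior for that dependency.
endmacro()

cmake_language(
  SET_DEPENDENCY_PROVIDER lockfile_provide_dependency
  SUPPORTED_METHODS FIND_PACKAGE FETCHCONTENT_MAKEAVAILABLE_SERIAL
)
```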
There doesn’t need to be a provisioning tool or approach that satisfies everybody’s requirements. I think it would be perfectly fine to have a provisioning approach that works exclusively on Linux with rpm packages, for example. What is important is that there is a single manifest per project rather than one per provisioning tool (vcpkg.json plus conanfile.py plus more).
Apart from provisioning, I don’t see anything missing. In particular, I don’t see the need for sharing configurations between projects. Both @vito.gamberini and I made it clear in our C++Now presentations that certain things are none of your business as a project maintainer. Of course, we may want infrastructure (like toolchains, build scripts, sanitizer configurations) to be shared across projects, but this can be done without the project having a declared dependency on the infrastructure.
Packaging systems have non-overlapping manifest feature sets (consider Cargo’s dependency-override feature) and version conventions (consider Arch Linux’s “-n” suffixes). I’m having trouble imagining a world where such a system doesn’t devolve into either frustrating limitations or explosive complexity.