In Linux, Compiling Software from Source is the 'Right Thing to do'
I ended up building GCC 4.9 to build Python 2.7, then using both to build LLVM 3.7. To be honest, I have given up on trying to package binaries; instead I build them into my own $HOME/.local subdirectory.
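The recipe itself is nothing fancy: each package gets configured with a per-user prefix, built, and installed into $HOME/.local. Here is a minimal sketch of how such a build could be scripted; the source directories, versions and configure flags are illustrative assumptions rather than my exact recipe.

```python
#!/usr/bin/env python
# Minimal sketch: drive autotools-style builds into a per-user prefix.
# The source directories and configure flags are illustrative assumptions.
import os
import subprocess

PREFIX = os.path.expanduser("~/.local")

def build_and_install(src_dir, extra_flags=()):
    """Run ./configure --prefix=$HOME/.local && make && make install inside src_dir."""
    subprocess.check_call(["./configure", "--prefix=" + PREFIX] + list(extra_flags),
                          cwd=src_dir)
    subprocess.check_call(["make", "-j4"], cwd=src_dir)
    subprocess.check_call(["make", "install"], cwd=src_dir)

# Order matters: later packages get built with the freshly built toolchain,
# once $HOME/.local/bin has been put at the front of $PATH.
build_and_install("gcc-4.9.3", ["--disable-multilib"])
build_and_install("Python-2.7.10")
```

With $HOME/.local/bin prepended to PATH (and $HOME/.local/lib on LD_LIBRARY_PATH), everything downstream picks up the hand-rolled toolchain without touching the system packages.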
The reasons for hand-rolling my own are very simple: Linux, as an ecosystem, is pretty terrible for per-user installation of software. You, as the user, are largely at the mercy of the distribution packaging the software, and at the mercy of the distro not dropping support just because you are two releases out of date.
Linux is also saddled with the problem of competing packaging standards, be it .deb, .rpm or Arch Linux’s AUR. In addition, all of the packaging systems out there make a number of rigid base assumptions about the user:
- User has root access privileges to install any packages they want
- User has admins who will install any necessary (or unnecessary!) packages on request
- User has admins who are willing to take on the extra burden of looking for extra packages that fall outside the distro’s maintained set of software
- Distros will craft packages to suit the needs of an individual or an organisation
- User otherwise has the knowledge or capability to install software from source
- User has the computing resources to compile that software from source
- Packaging systems pay scant regard to whether the compiled software will be binary compatible with other distros (or with previous versions of the same distro!)
I’m lucky enough that my admins are generally well-versed enough to navigate third-party repos in order to install newer software that we need from time to time on systems considered ‘antiquated’. But that isn’t always possible - the best example in recent times would be when Chrome dropped support for RHEL6. The reason came down to a single library it depended on being too old and not providing a few new library calls. Imagine that happening on a commonly used, commercially supported platform!
The alternative was to build your own, or to take it from a non-commercially-supported source. This assumes that either you have the know-how and the time to build the whole software stack, or that you’re entrusting the security of your systems to the integrity and skills of whoever prebuilt the binaries for you.
Even if you are willing to do the dirty work, you can only do so if the software is open source; no individual aside from Google can build the commercial version of Chrome. At best, you’ll end up with Chromium. And thereafter, the burden of updating, patching and rebuilding the software falls on you or your organisation. That’s not an enviable position.
There is much to be said about running software on a non-fragmented ecosystem; you download an .exe from the Internet, and the expectation is that it’ll run. See how your Ubuntu instance would like it if you gave it an RPM file. Or, even if you drop a Fedora 22 RPM onto your CentOS6 instance, I’d bet my top dollar that yum is going to give you a hard time over package conflicts.
In that respect, there is much to like about the JVM’s promise of ‘write once, run everywhere’; that guarantee extends across all Linux distros. If you download the binary artifact from Java’s site, the software will run on any distro (circa 2009 and later). The insistence on not introducing new dependencies comes down to commercial realities - organisations are by nature conservative entities; people are there to do their jobs and to make upgrades that can be justified (say, security patches), not to chase after the latest and greatest. A new Wayland compositor that supports your latest graphics card might be in your own interest to upgrade to, but it won’t make sense for a commercial entity if the upgrade breaks its production system, disrupting the bread and butter of supporting its existing business logic.
Applying this to my own ‘business case’: I run a disparate set of Linux distributions, ranging from old and un-upgradable to the newfangled. Thankfully they are all x86_64, so that is one less concern with respect to machine architecture. Building from source against an old environment ensures binary compatibility on all the systems I have, and saves me a ton of hassle in rebuilding for each of the operating environments, or in dealing with disparate package managers.
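A quick way to sanity-check that claim is to look at which glibc symbol versions a binary actually pulls in; as long as none of them are newer than what the oldest target distro ships, the binary should load there. The snippet below is a rough sketch of that check using objdump from binutils; the default binary path is just an example.

```python
#!/usr/bin/env python
# Rough sketch: list the GLIBC symbol versions a dynamically linked binary
# requires, using objdump from binutils. The default path is just an example.
import re
import subprocess
import sys

def required_glibc_versions(path):
    out = subprocess.check_output(["objdump", "-T", path])
    if not isinstance(out, str):  # Python 3 hands back bytes
        out = out.decode("utf-8", "replace")
    # Versioned symbols appear as e.g. "GLIBC_2.14" in the dynamic symbol table.
    versions = set(re.findall(r"GLIBC_([0-9.]+)", out))
    return sorted(versions, key=lambda v: tuple(int(x) for x in v.split(".")))

if __name__ == "__main__":
    binary = sys.argv[1] if len(sys.argv) > 1 else "/usr/bin/python"
    versions = required_glibc_versions(binary)
    print("GLIBC versions required by %s: %s" % (binary, ", ".join(versions) or "(none)"))
```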
It’s liberating not having to worry about the different spec files or custom patches/quirks that each distro applies to its own customized version of a package, and that is one of the strongest justifications for why I keep compiling my own packages by hand. Organisationally, it just makes sense to reduce the overhead of customizing distro-specific packages in favour of a plain tarball that runs everywhere, which is also the philosophy we’ve adopted for the packaged software we release at work. (It’s great for reducing the workload on the build team too. Love it when it’s a win-win situation!)