The Dream - The Universal Desktop Application
Software virtualisation and sandboxing have been familiar concepts to software engineers since the venerable IBM mainframe era of the 1960s. Fast forward a few decades and now anyone can tap into the ever-growing pool of power their home computer offers to run virtual machines inside their own physical device. A decade ago, thanks to the widespread availability of ridiculously powerful servers, virtualisation at massive scale became a reality and gave birth to cloud computing as we understand it today. Amazon was the first to realise this and seize the opportunity to lead the way in this burgeoning industry when it launched Amazon Web Services and started renting out the masses of computing power that were so cheap for it to obtain. The idea was genius and revolutionary. Soon enough others followed suit, finding greater (Google, DigitalOcean) or smaller (Salesforce) success.
In 2013 things took another interesting turn when Docker Inc., formerly known as dotCloud, released Docker, once an internal tool used to run their in-house cloud infrastructure, as an open platform everybody could use to deploy their own systems in a repeatable, secure and highly efficient manner. Nowadays, Docker forms the backbone of much of the Internet’s backend server infrastructure and has become the de facto standard for the microservice paradigm.
Containers conveniently encase a trimmed-down version of an operating system alongside any number of userland applications and libraries needed to support the runtime environment of a particular application. This lets us run software in a completely isolated environment where it can be supplied with exactly the dependency versions it requires without affecting the host environment. Just like that, library versioning and mutating ABIs become a problem of the past; how great is that? This has been a much-celebrated advancement in the cloud computing sphere, but it left some people wondering: what about the desktop?
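To make the idea concrete, here is a minimal sketch of how an application and its dependencies get baked into a container image. It assumes Docker is installed on the host; the image tag, file names and Python base image are purely illustrative choices, not anything prescribed by Docker itself.

```sh
cat > Dockerfile <<'EOF'
# Trimmed-down OS image with a pinned Python runtime
FROM python:3.9-slim
# Our application, bundled into the image and isolated from the host
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
EOF

echo 'print("hello from inside a container")' > app.py
docker build -t my-app .   # bake the dependencies into the image
docker run --rm my-app     # runs the same on any host with a Docker engine
```

Whatever Python version the host itself ships with, the container always runs against the version pinned in the image.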
With all of these recent developments further fuelling the explosive growth of the Web 2.0, mobile apps and “the cloud” in general, we’ve seen traditional user-facing desktop applications take a backseat position in the modern software landscape, as fewer and fewer developers find it an attractive choice to bring their software products to their end users, and with good reason. Nonetheless, some of us certainly wondered if the level of portability and simplicity this shift in paradigms pioneered by the cloud engineering industry couldn’t be somehow replicated offline. That was the question. Several people in the Linux world stepped up with an answer.
The Year of the Linux Desktop Starts With a Single Desktop
But first, a bit of history
Note: I will elaborate on this in a future edit to this article. Stay tuned if you’re interested.
Like virtualisation, application sandboxing is an old concept that was not seriously revisited until fairly recently. The first implementation appeared in FreeBSD 4 in the form of “jails”: processes running under a chrooted root tree which have a virtualised view of the system’s resources, provided by the underlying operating system. This includes networking and pseudo-filesystems such as /proc.
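For a rough feel of the chroot half of that idea on a Linux box (FreeBSD jails layer resource and network virtualisation on top of it), something like the following works; the target directory and the use of debootstrap are just assumptions for the example.

```sh
# Build a minimal Debian root tree in an arbitrary directory
sudo debootstrap stable /srv/minijail http://deb.debian.org/debian
# Processes started from this shell now see /srv/minijail as /
sudo chroot /srv/minijail /bin/bash
```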
A few advancements and implementations were built during the mid noughties, such as Glick, released in 2007 by Alexander Larsson, a core contributor to GNOME who, a few years later, would go on to develop Flatpak, which we’ll talk about soon. Google also built sandboxing capabilities into their Chrome browser to increase its stability and security. Every web page was now strictly confined to the virtual limits of its tab: code running inside a tab could no longer access resources beyond the scope of the isolated process it ran in. Furthermore, if a given web page got stuck in an endless loop or hung in any other way, it couldn’t bring down all the other tabs running alongside it. You could attribute this to the fact that every page was now run in its own process rather than to the concept of sandboxing itself, but forking out a new process for every separate browsing session is certainly what enabled the idea.
Some years later, in 2014, Lennart Poettering, the man behind systemd, also wrote about this topic. It was around this time that the first implementations of the idea he had described came into existence, under the name of packaged apps.
Packaged apps in Linux
Packaged apps borrow several design goals from Docker itself: put simply, they’re standalone, self-contained software applications, bundled with all of the libraries and configuration files they need to run in a single, transparent runtime context. They provide the following benefits:
- Download and run: all it takes to use a packaged app is an address to get it from and a mouse (or keyboard) to launch it (see the sketch after this list).
- Security: packaged apps run in isolation inside their host and as such have no visibility of anything beyond their predefined runtime boundaries.†
- Consistent environment across distros: Ubuntu 14.04, Fedora 26, Arch Linux, Debian 9; neither the distro nor the release matters. It will run just the same!
- Seamless upgrades and no dependency hell: as your application is bundled with all of its dependencies, it can be upgraded without touching a single file in the host operating system. Furthermore, packaged apps will never interfere with, or be affected by, dependencies they share with other applications installed on your system.
- Language-agnostic: Python, C, Go, .NET Core… you can make a packaged app with almost any language or framework.
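As an illustration of the “download and run” point, this is all it takes to use an AppImage; the file name and URL are hypothetical stand-ins for whatever application you actually grab.

```sh
wget https://example.org/SomeApp.AppImage   # fetch the bundle
chmod +x SomeApp.AppImage                   # mark it executable
./SomeApp.AppImage                          # no installation, no root, no system packages touched
```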
As you’ll probably have realised by now, packaged apps bring to the table a set of advantages akin to what containers offer. This new way of distributing software to the desktop userbase can save a lot of time and headaches for developers and users alike, as applications become as easy to use as they are to develop and distribute. Most of the middlemen (i.e. package maintainers) are removed from the picture, and the people building the application can be certain their product will run the same on their end users’ machines as it did on their dev boxes at the time they decided to release it.
Major implementations
As with anything in the Linux world, every time something new comes up, everyone wants to make their own flavour of it. In this case I will briefly talk about the three major software bundling systems I know about.
- Snappy: developed by Canonical and pre-installed in Ubuntu.
- Flatpak: developed by the GNOME team (mainly Alexander Larsson); Red Hat soon adopted this as their preferred bundle system. A notable application supplied as a Flatpak is Inkscape, the vector graphics editing tool.
- AppImage: another take at the packaged app problem, designed to be simple and lightweight both for the end user and the developer. Krita (digital painting, similar to Adobe Photoshop) and Tusk (an Evernote client) are a couple of examples of software that are distributed in this format that I can name off the top of my head.
Both Snappy and Flatpak provide software repositories you can browse to discover and install new bundles. If you use Flatpak, visit http://flatpak.org/apps.html or, if you use Snappy, just run snap find <pkgname> from your command line.
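For instance, installing Inkscape through either system looks roughly like this (the Flathub remote shown here is the most common source of Flatpaks, but your distro may ship a different default):

```sh
# Flatpak: register the Flathub repository once, then install by application ID
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.inkscape.Inkscape
flatpak run org.inkscape.Inkscape

# Snappy: search the Snap store and install straight from it
snap find inkscape
sudo snap install inkscape
```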
It is true that having so many different software bundling implementations somewhat defeats their purpose, as end users may find their favourite applications are not packaged for the bundle system they’ve got installed and will thus have to keep track of which apps are provided by which system. Still, thanks to these technologies, application portability across Linux distributions is now a tangible goal, and the platform developers decide to target becomes less of a moving target. Better to ship your app as both a Flatpak and a Snap than to build and maintain its DEB, RPM, AUR and Slackware packages, no?
This is all well and good for us Linux users, but what about Windows and macOS?
Universal Desktop Applications
The Linux folks may have been busy building a number of powerful frameworks to standardise the distribution of software applications across their heterogeneous userbase, but this approach is useless to everyone else. Furthermore, none of the major implementations of the packaged app concept is completely “plug and play”, as they still require the user to install a number of packages and runtimes which, I dare say, don’t make for the best no-frills experience.
This is where the folks over at GitHub stepped into the picture when, in 2013, they unveiled Electron, a new application development framework built with total portability in mind.
Write a single app, deploy it everywhere.
The idea was compelling. Indeed, the concept worked so well that major players in the industry soon started shipping their desktop software as Electron apps. Microsoft did so with Visual Studio Code, and many others joined them: Slack, WordPress, Discord and, naturally, GitHub with their own text editor, Atom.
Under the hood, Electron is little more than a Node application running on a trimmed-down instance of the Chromium web browser. This makes developing local desktop applications no different from developing their web counterparts: an HTML/CSS frontend, with Node and JavaScript for everything else. Simple and accessible.
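To give a sense of just how small the gap from web development is, here is a minimal sketch of bootstrapping an Electron app from the command line. It assumes Node.js and npm are already installed; the project and file names are illustrative.

```sh
mkdir hello-electron && cd hello-electron
npm init -y                       # creates package.json with "main": "index.js"
npm install --save-dev electron   # pulls in the bundled Chromium + Node runtime

# Main process: open a Chromium window and load a local HTML file
cat > index.js <<'EOF'
const { app, BrowserWindow } = require('electron')
app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 800, height: 600 })
  win.loadFile('index.html')
})
EOF

echo '<h1>Hello from an embedded browser!</h1>' > index.html

npx electron .                    # launch the app in its own native window
```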
Two years later, Facebook took this concept and applied it to the mobile space; the result was a spin-off of their massively successful React JavaScript framework, christened React Native (https://facebook.github.io/react-native/). Skype, Airbnb, Instagram and Facebook’s own mobile app eventually adopted this framework. The argument for React Native does indeed sound compelling:
Are you telling me that I can write a single application using just JavaScript and the React API we all know and love, and then I’ll be able to ship it to Android and iOS devices alike with minimum effort? Sign me up for that!
This new wave of Electron-like apps or, as I have taken to calling them, embedded-browser applications, has positively disrupted the dwindling native desktop application ecosystem. Finally, the operating system barriers can be lifted and Windows, macOS and Linux users alike can share the same codebase. For application developers this means a few good things:
- Faster release cycles as there is almost no overhead in adapting the final binaries to every supported platform
- Less, simpler code to maintain: no more hacky shell scripts, no need to write Autoconf or cmake build definitions
- Fewer platform-specific bugs to squash
- A single framework to learn: before Electron came along, releasing the same application on all major operating systems meant writing most of it from scratch for every platform you wanted it to run on: C#/VB.NET on Windows, Objective-C/Swift on macOS and, if the devs were kind enough not to forsake Linux completely, perhaps some awful-looking GTK+ app for us.
For the end user this is very good too, because:
- It’s simple: they’re able to simply download the application and start using it immediately.
- They can upgrade their software whenever they want, knowing full well it won’t break something else
- OS lock-in is reduced, as the same application is able to run just the same across Windows, macOS and Linux.
I guess languages like Java and Python come close to offering a decent degree of cross-compatibility between OSs, although they still require the end user to install and update a JRE or Python runtime. And even so, these cross-platform apps don’t tend to have the nicest GUIs…
However, not everything is as rosy as I may have led you to believe. Embedded-browser apps like these come with their own set of caveats, which I’ll briefly consider in the next couple of paragraphs.
For starters, they’re not native desktop applications in the truest sense of the word: they’re merely web applications running on a web browser which is in turn chucked into a native OS window. This means your system has to allocate resources both to the application in question and to the browser engine pulling the strings behind it.
Embedded-browser applications like Slack have certainly made headlines due to their appalling use of system memory. Things can indeed get a bit bloated at times…
“How do you mean?”, you ask. Let’s have a look.
Here’s the Spotify Linux desktop application’s memory usage as reported by smem (figures are in KB):
```
# smem -t -P spotify
  PID User     Command                         Swap      USS      PSS      RSS
11919 root     sudo smem -t -P spotify            0     1892     2466     9744
11920 root     python /bin/smem -t -P spot        0     6260     6804    10688
16551 liam     /usr/lib64/spotify-client/s        0      440    14038    50792
16569 liam     /usr/lib64/spotify-client/s        0    20088    36736    92024
16533 liam     /usr/lib64/spotify-client/s        0   116192   138060   191008
16583 liam     /usr/lib64/spotify-client/s        0   571512   589078   625048
-------------------------------------------------------------------------------
    6 2                                            0   716384   787182   979304
```
If my first-grade maths is correct, that’s almost a gigabyte. Not bad, eh? Let’s do another one, for laughs’ sake. This is a glance at Atom’s memory usage at the time of writing this article (brand-new install, no plug-ins and just a couple of Markdown files open with their respective preview windows).
```
# smem -t -P atom-beta
  PID User     Command                         Swap      USS      PSS      RSS
11807 liam     /bin/bash /usr/bin/atom-bet        0      300      658     1880
12153 root     sudo smem -t -P atom-beta          0     1880     2460     9916
11819 liam     /usr/share/atom-beta/atom -        0      856     6028    31296
11864 liam     /usr/share/atom-beta/atom -        0    18592    28644    74052
12154 root     python /bin/smem -t -P atom        0    31224    31764    35584
12057 liam     /usr/share/atom-beta/atom -        0    99776   138058   216784
11810 liam     /usr/share/atom-beta/atom -        0   190776   225688   313544
11890 liam     /usr/share/atom-beta/atom -        0  1640300  1681404  1771060
-------------------------------------------------------------------------------
    8 2                                            0  1983704  2114704  2454116
```
That’s coming dangerously close to 2.5 GB of RAM (with or without smem’s entries on the table). Talk about Visual Studio being bloated…
So which is best then?
Depends, obviously!
Electron reduces development costs as well as the learning curve for your developers, who need to keep on top of fewer technologies and platform gotchas in order to reach the widest desktop audience possible. However, it is evident that, no matter how fast the V8 engine gets, applications compiled to run natively on your end users’ systems will always have the upper hand in terms of responsiveness and use of system resources. This is a trade-off you need to consider: what is most important to you, speed or portability? To the folks behind Spotify or Slack the answer is simple: hardware is very cheap nowadays, and most people have enough CPU and RAM to spare to cushion the impact these two-headed apps have on their systems.
Nevertheless, if speed or access to system-specific features is a must for you, and you happen to be a Linux software developer, perhaps you’d prefer to stick with whichever framework or language you use and simply deploy your application as a snap, a flatpak or an AppImage bundle.
† Packaged apps definitely offer a higher degree of security than a normal application running directly on your system, as they run in a highly constrained environment; but even though I can’t find any literature at the moment proving otherwise, I would still handle them with the same care I would a non-packaged application.