[personal profile] mjg59
I was at the OpenStack Summit this week. The overwhelming majority of OpenStack deployments are Linux-based, yet the most popular laptop vendor (by a long way) at the conference was Apple. People are writing code with the intention of deploying it on Linux, but they're doing so under an entirely different OS.

But what's really interesting is the tools they're using to do so. When I looked over people's shoulders, I saw terminals and a web browser. They're not using Macs because their development tools require them, they're using Macs because of what else they get - an aesthetically pleasing OS, iTunes and what's easily the best trackpad hardware/driver combination on the market. These are people who work on the same laptop that they use at home. They'll use it when they're commuting, either for playing videos or for getting a head start so they can leave early. They use an Apple because they don't want to use different hardware for work and pleasure.

The developers I was surrounded by aren't the same developers you'd find at a technical conference 10 years ago. They grew up in an era that's become increasingly focused on user experience, and the idea of migrating to Linux because it's more tweakable is no longer appealing. People who spend their working day making use of free software (and in many cases even contributing or maintaining free software) won't run a free software OS because doing so would require them to compromise on things that they care about. Linux would give them the same terminals and web browser, but Linux's poorer multitouch handling is enough on its own to disrupt their workflow. Moving to Linux would slow them down.

But even if we fixed all those things, why would somebody migrate? The best we'd be offering is a comparable experience with the added freedom to modify more of their software. We can probably assume that this isn't a hugely compelling advantage, because otherwise it'd probably be enough to overcome some of the functional disparity. Perhaps we need to be looking at this differently.

When we've been talking about developer experience we've tended to talk about the experience of people who are writing software targeted at our desktops, not people who are incidentally using Linux to do their development. These people don't need better API documentation. They don't need a nicer IDE. They need a desktop environment that gives them access to the services that they use on a daily basis. Right now if someone files an issue against one of their repositories, they'll get an email. They'll have to click through that in order to get to a webpage that lets them indicate that they've accepted the bug. If they know that the bug's already fixed in another branch, they'll probably need to switch to GitHub in order to find the commit that contains the bug number that fixed it, switch back to their issue tracker and then paste that in and mark it as a duplicate. It's tedious. It's annoying. It's distracting.

If the desktop had built-in awareness of the issue tracker then they could be presented with relevant information and options without having to click through two separate applications. If git commits were locally indexed, the developer could find the relevant commit without having to move back to a web browser or open a new terminal to find the local checkout. A simple task that currently involves multiple context switches could be made significantly faster.
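As a rough sketch of what a local commit index could enable, here is a minimal, purely illustrative example (the commit data and function names are hypothetical; a real integration would index output from `git log`):

```python
import re

# Hypothetical sample data standing in for a locally indexed `git log`.
COMMITS = [
    ("a1b2c3d", "Fix crash on resume (closes #1432)"),
    ("d4e5f6a", "Refactor suspend path"),
    ("b7c8d9e", "Work around firmware bug, see #1432 and #1501"),
]

BUG_RE = re.compile(r"#(\d+)")

def index_by_bug(commits):
    """Map each referenced bug number to the commits that mention it."""
    index = {}
    for sha, message in commits:
        for bug in BUG_RE.findall(message):
            index.setdefault(bug, []).append(sha)
    return index

index = index_by_bug(COMMITS)
print(index["1432"])  # → ['a1b2c3d', 'b7c8d9e']
```

With something like this maintained by the desktop, answering "which commit fixed bug #1432?" becomes a local lookup rather than a round trip through a web browser.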

That's a simple example. The problem goes deeper. The use of web services for managing various parts of the development process removes the need for companies to maintain their own infrastructure, but in the process it tends to force developers to bounce between multiple websites that have different UIs and no straightforward means of sharing information. Time is lost to this. It makes developers unhappy.

A combination of improved desktop polish and spending effort on optimising developer workflows would stand a real chance of luring these developers away from OS X with the promise that they'd spend less time fighting web browsers, leaving them more time to get on with development. It would also help differentiate Linux from proprietary alternatives - Apple and Microsoft may spend significant amounts of effort on improving developer tooling, but they're mostly doing so for developers who are targeting their platforms. A desktop environment that made it easier to perform generic development would be a unique selling point.

I spoke to various people about this during the Summit, and it was heartening to hear that there are people who are already thinking about this and hoping to improve things. I'm looking forward to that, but I also hope that there'll be wider interest in figuring out how we can make things easier for developers without compromising other users. It seems like an interesting challenge.
From: (Anonymous)
You raise an interesting question, but I think it's not as dreary as you make it sound. The same point could have been made *against* the Mac 10 years ago, yet they managed to do it. All that's needed is a slightly different mindset.

First, Alan Kay said "People who are really serious about software should make their own hardware". I don't see anyone in the free software camp being serious about making their own laptop computer (nope, not even them). That has a pretty high buy-in cost these days, so unless somebody like Intel is going to sponsor you, it's not really feasible. Still, since Apple is making the best laptops today, you can get 95% of the benefit by simply making free software work great on Apple laptops. And yet, while even Mr. Torvalds himself uses an Apple laptop, the support for free operating systems on MacBooks is only mediocre. The situation is not going to change until free operating systems run great on the best hardware of the day.

Second, all of these things take time. There's no one magic feature that, if we implemented it today, would cause everybody to switch operating systems. We need to attack each problem individually. That means we need to be able to set targets that take years to accomplish. Right now, everything changes from year to year in the free software world, and often has multiple competing standards that must exist at the same time. X11 didn't pan out, I guess, so we're switching to Wayland. If that's too low-level, you can use GTK+ -- or Qt. Sys V init is too limited, so we've replaced it with systemd, and also Upstart, and a couple others. For search, we had Beagle and Meta Tracker and GNOME Storage, which all seem to be dead today, so I honestly have no idea what I'd use.

It's impossible for serious end-user applications to target free operating systems today. Even when I was doing free software development full-time, I couldn't keep track of all the systems we were supposed to use. In the rare cases when I could (Beagle looks like what everyone is using!), things would get deprecated faster than I could implement them (Beagle is now dead!).

Apple and Microsoft have shown that the way to make progress is to pick something -- even something bad! -- and refine it. Free software makes one thing, and before it's finished throws it out and replaces it with something completely different, and then repeats the cycle. Nothing ever gets refined because nothing *can* be refined. Emacs on OS X is actually pretty decent because they've been developing against (more or less) the same platform since 2001. Emacs on Ubuntu looks like it came from 1991 because very little of a modern Ubuntu system, apart from the Linux kernel, has stayed stable for very long. None of us who use and hack on Emacs have wanted to commit to making it work well with GNOME because GNOME 2 was very different from GNOME 1, and GNOME 3 was more different, and now Ubuntu has switched to Unity, which is even more different.

In short, give us an API with a shelf life longer than 3 years, and we'll get started. We've been burned many times in the past, though, so don't expect developers to believe you for a while. Steve Jobs could be a jerk, but he also had the power to stand up and say "We're using Spotlight for search!", and then 10,000 people believed him and used the Spotlight API in their apps.

Free software can never maintain a feature advantage over proprietary software, because they can always just copy what we do, but the converse is also true. We need to stop thinking about success as being That One Feature (that OS X can't copy), and more about stability and focus. Even Microsoft is beating free software at that right now.
From: [identity profile] pjc50.livejournal.com
Building your own hardware: do not underestimate the Raspberry Pi, which is exactly that, and which has done an extremely good job of answering the question "how do we make a desktop for developers who are also small children?" (Answer: Minecraft/Python integration.)

APIs: people seem to have trouble distinguishing between the system they're building and the API. This seems to be particularly bad in GNOME land, which has gone for the Windows approach of tight integration and unpopular look-and-feel overhauls. As you say, all the search programs are dead ... except good old 'find' and 'locate'.

I note that X11 has lasted so long due to API persistence, and due to having features which are harder to duplicate than people think (the window manager protocol, network transparency). Even now, Wayland still feels like a "future" thing to me.

So, to get a Linux "search" platform, first define the API carefully, then build a rubbish client and server or two. Use this strawman to aggravate someone into building a more popular version *on the same API*.
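The "define the API carefully, then build a rubbish implementation" approach can be sketched as follows. This is purely illustrative (the `SearchProvider` interface and class names are invented for this example, not taken from any real project): a deliberately tiny search API that better back-ends could later implement unchanged.

```python
from abc import ABC, abstractmethod

class SearchProvider(ABC):
    """The carefully defined API: the part meant to outlive any implementation."""

    @abstractmethod
    def index(self, doc_id: str, text: str) -> None:
        """Add or replace a document in the index."""

    @abstractmethod
    def query(self, term: str) -> list:
        """Return the ids of documents containing the term."""

class NaiveSearch(SearchProvider):
    """The 'rubbish' strawman: a case-insensitive linear scan over a dict."""

    def __init__(self):
        self._docs = {}

    def index(self, doc_id, text):
        self._docs[doc_id] = text.lower()

    def query(self, term):
        term = term.lower()
        return [d for d, t in self._docs.items() if term in t]

s = NaiveSearch()
s.index("notes.txt", "Wayland replaces X11")
s.index("todo.txt", "fix trackpad driver")
print(s.query("wayland"))  # → ['notes.txt']
```

The strawman is slow on purpose; the point is that a replacement with a real inverted index could drop in behind the same `SearchProvider` interface without breaking any caller.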
From: (Anonymous)
This is spot on. The Emacs example is excellent. Now, of course, you realize that because refining, validating, fixing, polishing and maintaining compatibility is expensive and considered very boring by developers, all this requires a major corporation and major funds, which means the universal Linux desktop will never happen. In the very best case it will be something like "the Ubuntu desktop", never simply "the Linux desktop".

From: (Anonymous)
"Expensive" is (to a degree) fungible with "time", when the APIs in question are stable.

When I write free software, if it's against a stable API, I'll polish it. Maybe not entirely at the start, but eventually. That's true for Cocoa, POSIX shell, a Linux-specific daemon, etc. And I'll put more effort in at the start, too.

But when I write free software for GNU/Linux desktops using e.g. GTK+, what's the point? In a year, attention to small details (e.g. pixel-perfect menubar spacing) will break. In three years, changes break core usability features (e.g. windows in good default positions, or DnD / other IPC protocol changes). By six years, fundamental toolkit changes mean I'll have to rewrite most of my view code and throw out all the polish I did.

Refining, validating, fixing, and polishing isn't as exciting to most people as new features are, but it only becomes "very boring" when I find myself doing it for the third or fifth or tenth time.

The idea that it "requires a major corporation" to do this is an invention caused by the high rate of churn resulting in an ecosystem that needs constant re-polishing to avoid breakage. On my darker days, I think they're doing it on purpose, to make sure they're "required."

Matthew Garrett

About Matthew

Power management, mobile and firmware developer on Linux. Security developer at Google. Ex-biologist. @mjg59 on Twitter. Content here should not be interpreted as the opinion of my employer.
