When working on a codebase as a team, especially in a corporate environment, there are choices you can make that can have unexpected impacts. The tendency is towards a standardized build, and on the surface this can be appealing. It’s easier for procurement, because they can bulk-buy and stock standard parts. It’s arguably easier for build scripts and configuration, on the grounds that everyone has the same setup, and as long as you stick to the magic formula everything should “just work”. However, there’s a significant downside to this as well.
If you’ve been writing for the web for any time, you’ll probably be familiar with browser compatibility, and all the fun that this brings. Recently this has got a lot better, as browser vendors have embraced HTML standards and been less tempted to do their own thing, but there are still niggles. For years, Internet Explorer was the bad citizen, with all sorts of quirks that would only show up in certain browser versions. Part of the problem here was that developers had a fairly homogeneous choice of browser – Chrome and Firefox. Many browser compatibility issues were simply never seen by developers, because they always used the same browsers, as did the rest of their team – even though people knew it was a likely problem! It fell to testers to ask for machines with non-standard operating systems, so they could install non-standard browsers, test, and find these issues.
Of course, browser compatibility is perhaps one of the better-known examples. It’s particularly obvious, because when you write for the web you are writing code that clients will download and run on their machines, and you have very limited control over how they do that. One approach teams took was to simply track browser statistics, and make risk/value decisions by testing only the browser configurations that were seen in the wild.
If you’re writing for servers – code that runs on machines you control – this is much less obvious. This is where it can be more tempting to fall for a standardized build. The problem is that assuming there is, and will only ever be, one configuration is a fallacy. In technology, nothing stands still; the server operating systems will be patched, new versions will be released, and other components will be installed. We have to acknowledge that configuration can and will change, and be prepared for it.
By standardizing the configuration of developer machines, you are allowing compatibility problems to hide. It’s normally a good idea to bring problems forward, and solve them while they are small. If you allow yourself to rely on a “magic configuration”, you can fall further and further behind the rest of the world, to the point where eventually upgrading becomes a big challenge. A recent example showing the scale of this is Windows XP, released in 2001, where Microsoft eventually had to postpone the end-of-life and keep supporting it, because so many companies relied on it and could not update to newer versions. It’s far better to prevent this risk from building up.
If you allow your developers to run diverse configurations, they will find some of these problems. It’ll be annoying, it’ll take some time, but they will find fixes, and keep the codebase flexible. Be pragmatic about it; there’s no sense in adding friction for the sake of it, for example by trying to use particularly obscure configurations, or configurations that are far removed from the supported usage of the components of your system. This also has the benefit of letting developers choose the tooling they are most effective with. If people would be more effective using a particular operating system, or a particular IDE or editor, why not? I’ll stress the key word again – effective – we’re here to deliver value, not to play with tech, so getting stuff done has to be the tie-breaker at all times.
Some examples where this applies include:
- Mandating the use of a particular operating system, or particular versions without updating
- Custom builds of 3rd party code, or forks of open source libraries that diverge from the main supported version
- Relying on “magic” machine setup that is not configurable or scripted – such as “magic” source control folder paths, or magic local web server and database configuration, or magic hosts file settings
- Having heavily customized non-standard tooling, such as code formatting tools, that restrict choice of IDE
- Using testing libraries that need plug-ins to be installed, such that the choice and versions of IDEs and other tooling are restricted
- Expecting customers to only use your arbitrary list of “supported” things – browsers for example, where phones and tablets will have a completely different ecosystem
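One way to avoid the “magic machine setup” trap above is to make every machine-specific value configurable, with a sensible documented default, rather than hard-coding the path or host that happens to exist on the standard build. A minimal sketch in Python – the `APP_*` environment variable names and the connection-string format are hypothetical, not from the original text:

```python
import os

# Hypothetical settings: each value can be overridden per machine via an
# environment variable, and falls back to a documented default that works
# for a common local setup. No "magic" hard-coded paths.
DB_HOST = os.environ.get("APP_DB_HOST", "localhost")
DB_PORT = int(os.environ.get("APP_DB_PORT", "5432"))
DATA_DIR = os.environ.get("APP_DATA_DIR", os.path.join(os.getcwd(), "data"))

def connection_string(db_name: str = "app") -> str:
    """Build a database connection string from configurable settings."""
    return f"postgresql://{DB_HOST}:{DB_PORT}/{db_name}"
```

A developer on a non-standard machine then only needs to export a couple of variables to run the same scripts, rather than recreating a particular folder layout or hosts file by hand.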
There will be times when there is no choice but to homogenize an element of configuration – but this is actually pretty rare, if you put effort into avoiding it. Allow people to choose the tools that work for them, while driving out configuration problems early – and fixing them!