Archive | February 2017

On Hiring: Developers Are Like Stocks

This post is for those of you who hire developers, and also for junior developers who want to be hired. Let’s talk about how developers are just like individual stocks in the stock market. Time for a little role-playing: you’re now a stock market investor.

You’re a financial advisor, and your company has given you $2,000,000 USD to invest in the stock market. It’s been made very clear that the company’s future depends on the return on investment (herein called ROI) – the “gains” – that your investments bring in. Given that kind of pressure, what’s your investment strategy for success? Begin by reviewing the kinds of stocks available to invest in.

Let’s Review Some Stocks

You take a look at stock #1. It has been on the market for a decade, has yielded consistently high returns (with references you can investigate and verify), and is very reputable. Putting a good chunk of your money here is probably a reasonable call: this stock is vetted and has historically provided value over time. It’s unlikely to suddenly drop to nothing, and if you see it going south you can bail out before you lose it all.

Stock #2 is the interesting one. It’s brand new to the market: no history, no performance trends, no reputation, nothing. It’s a total wildcard with a reasonably low price tag – about a third of stock #1’s. Its value could skyrocket, resulting in incredible ROI, but it could just as easily turn out to be a dud and you’d lose it all. With so little information to go on, this stock is truly a gamble. Do you invest heavily in it?

Given the two options above, what’s the smart move? Putting all of your money into a single thing is very risky, so you’re likely to diversify your portfolio. It doesn’t make sense to invest heavily in stock #2, because it’s a major gamble, but there is real potential there and it might pay off. So why not put 80% into stocks that fit the archetype of stock #1 and the other 20% into stocks that fit #2’s profile? That gives you near-guaranteed returns on most of your money, plus a measured bet on gambles with high upside.

Stocks On Market == Devs On Market

Stock #1 is a senior developer with a proven track record and solid reputation. Stock #2 is a junior developer.

Hiring nothing but juniors is a recipe for high volatility and potential disaster, for reasons that become obvious given the stock market analogy above. Hiring nothing but seniors is one way to get reasonable gains, but you miss out on the chance to land an incredible up-and-coming junior if you never hire any at all. A good strategy incorporates both, with seniors given time to mentor the juniors and develop their skills.

Getting Hired As A Junior

As a junior developer, the fewer artifacts you can point to and show to companies, the bigger the risk they take by hiring you. You can mitigate some of that risk with a good interview, but if the interview doesn’t include coding tests that you ace, it might not be enough to get you in the door.

A junior should strive to create artifacts that reduce the risk of hiring them. These can take virtually any form, and given that everybody is different and we are not all afforded the same privileges and opportunities, you should create artifacts that suit your situation. Single parent with two children and little free time? Put a few hours each week into an open source project of your own, or contribute to existing ones. You’ll be amazed how quickly that adds up. Unemployed with tons of free time? Create a project that shows off your skills and stretches your knowledge, which in turn forces you to learn. Struggling with the whole “I need a job to get the experience to get a job” thing? I’ve been there myself, and while my situation was surely not identical to yours, I found that investing time in reading books and writing small applications to demonstrate my skills did wonders with potential employers.

The point is that every developer is going to sell themselves as hard as they can to a potential employer, and to that employer they may all look similar. Do what you can to stand out and reduce uncertainty: create evidence of your potential and abilities, and show them that instead. Talk is cheap; actions speak louder than words.

GitLab Data Loss: A Discussion

In case you missed the big news in the industry this week, a GitLab employee accidentally deleted a ton of production data and took the platform down for hours. It was only when everything was on fire and they were in deep trouble that they turned to their backup systems… and found that none of them actually worked.

Backup Prod Data Regularly

Not exactly a groundbreaking statement, right? Everybody knows this. If there was a “working in corporate IT 101” manual it would have a chapter on this concept. It’s common sense.

Even so, a lot of people and companies – like GitLab – tend to “set and forget” their backups. They probably created their backup mechanism years ago, tested it at the time, confirmed that it worked, and scheduled it to run every night at 1am EST or something. Then, out of sight and out of mind, they promptly forgot about it and moved on to other things. After all, they never had a reason to check on it, right? Nothing had broken down. Until yesterday.
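The sad part is that even a little bit of automation makes “set and forget” far less dangerous. Here’s a minimal sketch of a nightly job that refuses to fail silently – it assumes a PostgreSQL database dumped with pg_dump, and the paths and size threshold are made up for illustration – where the whole point is the non-zero exit code that your scheduler or monitoring can page someone on.

    #!/usr/bin/env python3
    """Nightly backup sketch that refuses to fail silently.

    Illustrative only: assumes a PostgreSQL database dumped with pg_dump,
    a hypothetical BACKUP_DIR, and some external alerting hooked up to a
    non-zero exit code (cron mail, monitoring, etc.).
    """
    import datetime
    import pathlib
    import subprocess
    import sys

    BACKUP_DIR = pathlib.Path("/var/backups/app")  # hypothetical location
    MIN_EXPECTED_BYTES = 10 * 1024 * 1024          # a dump smaller than this is suspicious

    def run_nightly_backup() -> pathlib.Path:
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S")
        dump_path = BACKUP_DIR / f"app-{stamp}.dump"
        # pg_dump exits non-zero on failure; check=True turns that into an exception
        subprocess.run(
            ["pg_dump", "--format=custom", "--file", str(dump_path), "app_production"],
            check=True,
        )
        return dump_path

    def verify(dump_path: pathlib.Path) -> None:
        # "It ran" is not the same as "it worked": insist on a plausible file size
        size = dump_path.stat().st_size
        if size < MIN_EXPECTED_BYTES:
            raise RuntimeError(f"backup {dump_path} is only {size} bytes; treating it as failed")

    if __name__ == "__main__":
        try:
            verify(run_nightly_backup())
        except Exception as exc:  # any failure here should wake somebody up
            print(f"BACKUP FAILED: {exc}", file=sys.stderr)
            sys.exit(1)
        print("backup completed and passed basic verification")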

A Guide To Good Backup Process

The secret to ensuring that your backup process is effective and functional is to integrate it into your daily work. One of the best ways to do this is to use it to set up every new dev’s local environment. Have them install and configure the IDE and related tools, then have them pull down the most recent backup and restore from it to set up their local database. What’s that, you say? It has PII and sensitive data? You’re probably right, which is why your backup process should, where appropriate, create two copies: one that strips the sensitive data (for local dev environments) and one that doesn’t (for production restores).
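To make that two-copy idea concrete, here’s a rough sketch assuming PostgreSQL and its standard command-line tools (pg_dump, createdb, pg_restore, psql, dropdb). The users table, its columns, and the file paths are hypothetical stand-ins for whatever PII your schema actually holds.

    #!/usr/bin/env python3
    """Sketch of a two-copy backup: a full dump for production restores and
    a sanitized dump for local developer environments.

    Assumptions (not from the post): PostgreSQL, its CLI tools on the PATH,
    and a hypothetical `users` table whose email/name columns count as PII.
    """
    import subprocess

    FULL_DUMP = "/var/backups/app/full.dump"      # restore this one in production
    SANITIZED_DUMP = "/var/backups/app/dev.dump"  # hand this one to new developers
    SCRATCH_DB = "backup_sanitize_scratch"        # throwaway database used for scrubbing

    SANITIZE_SQL = """
    UPDATE users
    SET email = 'user' || id || '@example.invalid',
        full_name = 'Test User ' || id;
    """

    def sh(*args: str) -> None:
        subprocess.run(args, check=True)

    def main() -> None:
        # 1. Full, unmodified dump of production
        sh("pg_dump", "--format=custom", "--file", FULL_DUMP, "app_production")

        # 2. Restore it into a scratch database, scrub the PII, and dump again
        sh("createdb", SCRATCH_DB)
        try:
            sh("pg_restore", "--no-owner", "--dbname", SCRATCH_DB, FULL_DUMP)
            sh("psql", "--dbname", SCRATCH_DB, "--command", SANITIZE_SQL)
            sh("pg_dump", "--format=custom", "--file", SANITIZED_DUMP, SCRATCH_DB)
        finally:
            sh("dropdb", SCRATCH_DB)

    if __name__ == "__main__":
        main()

The nice side effect: every new hire who restores the sanitized dump to build their local database is, in effect, a free test of your backup system.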

Great, so you’ve confirmed that your backups work for a local environment, but what about production? The next step in a good process is simple too: artificially destroy your production environment on a regular schedule. Run fail-over tests during off hours (and compensate your amazing site reliability / IT team appropriately for working those off hours). I recommend once per quarter as a starting point: at 2am on a Sunday, drop your production database (don’t delete it, just take it offline so you can bring it back if it turns out your backup system isn’t working). Then let your staff restore a recent backup and bring the site back online. Announce the outage to your users in advance, and update people on social media or via email when it begins and ends.
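And here’s what the restore-and-time portion of that quarterly drill could look like, sketched under the same PostgreSQL assumption. The “take production offline” step is deliberately stubbed out, because how you do that (maintenance mode, revoking credentials, whatever) is entirely specific to your infrastructure.

    #!/usr/bin/env python3
    """Sketch of the quarterly fail-over drill: take production offline
    (without deleting anything), restore the most recent backup, and time it.

    Illustrative only: the offline step is a placeholder, and the backup
    directory, naming scheme, and drill database are hypothetical.
    """
    import pathlib
    import subprocess
    import time

    BACKUP_DIR = pathlib.Path("/var/backups/app")  # hypothetical
    DRILL_DB = "app_drill_restore"                 # restore target; production data stays untouched

    def latest_backup() -> pathlib.Path:
        dumps = sorted(BACKUP_DIR.glob("app-*.dump"))
        if not dumps:
            raise RuntimeError("no backups found; the drill just uncovered your real problem")
        return dumps[-1]

    def take_production_offline() -> None:
        # Placeholder: a real drill might flip a maintenance-mode flag or
        # disable the app's database credentials. Deliberately NOT a delete.
        print("production taken offline (simulated)")

    def restore_into_drill_db(dump: pathlib.Path) -> None:
        subprocess.run(["createdb", DRILL_DB], check=True)
        subprocess.run(["pg_restore", "--no-owner", "--dbname", DRILL_DB, str(dump)], check=True)

    def main() -> None:
        dump = latest_backup()
        start = time.monotonic()
        take_production_offline()
        restore_into_drill_db(dump)
        elapsed = time.monotonic() - start
        # This number is your honest answer to "how long will a real outage last?"
        print(f"drill complete: restored {dump.name} in {elapsed / 60:.1f} minutes")

    if __name__ == "__main__":
        main()

Log that elapsed time somewhere visible; it becomes the baseline you compare against every quarter.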

There is much to be learned and gained from this intrusive and destructive process. For one, it forces your dev team to create a good “the site is down” experience, since your customers will otherwise see infinitely spinning pages or terrible error dumps. For another, you can time each outage and thus learn how long you’ll be down if your production database ever actually takes a spill. Finally, your disaster recovery staff will keep their skills fresh and be able to fix real outages quickly and predictably. There are many tangible and hidden benefits to be had from just a few hours of planned outage per year.

GitLab Did One Thing Right

The final step in your solid, functional backup process, which you test quarterly and use to spin up new dev hires, is to document the hell out of everything. When you run these planned outages, have the disaster recovery staff document, step by step, the actions they took to restore service. When you have real, live outages, document those too and share the knowledge with the public.

GitLab got this part right, and they are being heralded as a great example and learning experience for the industry instead of being scorned for mysterious downtime and silence. I promise you that this week, many disaster recovery people are running extra backup tests that they wouldn’t have thought to do otherwise – all as a direct result of the GitLab incident. Making your disasters and their recoveries public creates goodwill in the community, provides a learning experience, and shows people that you can be trusted.

GitLab took a bad situation and created the best possible outcome, both for themselves and the entire community. For that they should be thanked, not mocked. After all, we are all human and we all make mistakes. Knowing this, you’ll be really glad that you practice making mistakes every quarter when your production database actually goes down in flames.