The Trouble With Job Titles

I’ve had a good career so far. I began working full-time as a programmer in 2008. At that time my title was Junior Developer. I had a decent boss and cool co-workers, and I cut my teeth on Java and .NET. It was a good experience. After 2 years at that gig, I felt that it was time to move on.

I contacted recruiters, and one eventually found me a promotion: Systems Analyst. It came with a decent pay bump and so forth, as well as the luxury of dropping “Junior” from my job title. As this was a good deal all around, I took the offer.

A few more years went by and I found myself once again looking for “the next step” in my career. Lo and behold, the company I was at had taken notice of my skill as a Systems Analyst and wanted to bring me on as a Developer. Before I knew it I was a .NET Developer, a move which, like my previous job changes, came with a pay raise and better perks. Things were going great.

I did 3 years total with this company, during which I was promoted to .NET Development Team Lead – a terrific title in the realm of programming. I ran a small team of 4-5 people and things were going pretty well.

Eventually, having maxed out my skills and knowledge at this company, I moved on. I became a .NET Architect with appropriate pay and benefits. It was a reasonable gig.

This is where I’ll introduce a sideline fact: during my entire career to this point, recruiters (both in-house and third-party) had been cold-calling and contacting me with jobs that matched my current title (with more contact as I got further in my career). As a .NET Development Team Lead, I was offered jobs that ranged from Senior Developer to Team Lead to Architect. As a .NET Architect, I was offered titles from Team Lead to Architect to Manager. So far things made sense.

About 5 months ago I took a job with a company that I love, Stack Exchange. Things are going great so far, and I’ve never been treated better in my entire development career. My pay and benefits are top-notch, my job perks are incredible, and my boss is a genuinely competent, solid human being – as are all of my co-workers. It is the current high-bar of my career track record.

Where it becomes interesting is in my job title. At Stack Exchange I am a Web Developer. That’s the whole title. Not Manager, or Team Lead, or Architect, but Web Developer. The reason for this is that Stack Exchange is not a company that cares for credentialism. We hire smart people who get things done (their words, not mine) and developers here handle projects from start to finish. This means that we sometimes act as a Manager, sometimes as a Team Lead, and often as an Architect as we discuss and debate the best way to solve a particular problem or design a particular system.

It was not until I got this job that I realized how broken the current recruiting industry is. Just 6 months ago I was receiving cold calls and offers to interview for Team Lead, Architect, and Manager jobs (with expected pay). Can you guess what jobs they contact me with these days?

Junior Developer and Intermediate Developer. Complete with massive pay and benefits cut. Offering these positions to me is a complete waste of both my time and the recruiter’s.

It’s pretty obvious why this happens: recruiters are usually not contacting me personally based on researching my career and goals. They are contacting me as 1 of 10-50 others that they will contact via cold-call or e-mail on any given day. A shotgun approach of “fire it off and see what hits” as it were.

I am certainly not egotistical enough to be offended by being offered titles and pay well below my current skill set. What does bother me, however, is that I cannot convince most recruiters otherwise. I took it upon myself to explain to one recruiter that Web Developer at Stack Exchange is about the equivalent (in my opinion) of a Team Lead or Architect at many other companies. Their response was that if that were true, it would be my title here. I was dumbfounded.

Given that there are no current standards for developer titles, it is plain to see that a Developer at Company A could be anything from a Junior Developer to CIO at Company B. It’s a crap shoot. Especially when you consider the fact that some companies inflate the titles of their employees as a “free” form of making them feel valued (as opposed to, say, paying them properly).

I don’t know what the solution is to the recruiting industry’s problem, but it does make me realize the value of Stack Overflow Careers. We match candidates with jobs based on experience, interests, and open source projects, not via titles and credentialism. We explicitly ban companies that spam candidates with job offerings. This means that every single message written to you as a candidate is personalized and written by hand by the recruiter. Very few recruiting companies can make the same claim. We continue to evolve this product, and I hope that one day it sets a new standard for recruiting where people don’t blindly call you up with jobs that match your current title.

This lack of standards is a difficult problem to solve. I don’t pretend that I have the answers. It does show me precisely how you end up working with incompetent people though.

John Doe is an incompetent developer who works for a small dev shop that gives him low pay but a big title such as Senior Developer. A recruiter eventually offers this developer jobs of a similar title but that have the appropriate pay. John jumps at the giant raise and heads to the next company. What happens next is something we’ve surely all seen a few times… John can’t hack it, John gets 3-6 months in and gets let go. John was cheated by the system of credentialism. Who can blame him directly for grabbing the highest value offering he can get? We all would.

This broken system wastes time, tons of money, and screws up company culture. Nobody likes working with John as a peer when he knows 1/10th of what they do, does even less work, and gets paid the same or better. That causes your good developers to run for the hills, and your workplace culture to suck as another John fills the now-open role that a good dev left behind.

The current state of the recruiting industry is fairly toxic. Most recruiters will put anyone anywhere to make a buck and meet their draw or quota.

The question becomes: how do we, with Stack Overflow Careers, completely dismantle this system and rebuild it properly? Time will tell, and I’m excited to see the results.

iPhone 6: Style Over Substance

Like many of you, today I watched the Apple media event in which they announced both the iPhone 6 and Apple Watch. I’m not going to talk about the watch, but instead about the phone.

For years Apple has been a true cachet brand. They are a luxury item that is sought after for status and image. I don’t blame anyone for owning an iPhone: they’re reasonably sexy and you get to show off the Apple branding. Good on you.

My smartphone adventure started with an iPhone 3G in 2008. It was an amazing, groundbreaking piece of hardware back then; Android was nowhere near it and nothing else came close. I was able to put my iPod Nano in my dresser drawer and use the phone for both music and talk/text. It was amazingly innovative.

This morning I was very excited to see Apple finally innovate again under the direction of Tim Cook, after years of what seems like stagnation on the mobile front. Instead, I was disappointed. I’m going to explain why in terms of comparable hardware and cost, not in terms of opinion (because they’re cheap and everybody has one).

Let’s compare the bleeding-edge Apple iPhone 6 and 6+ offering with my Nexus 5, a phone that is nearly a year old (released October 31, 2013). Spec by spec:

Nexus 5 vs. iPhone 6/6+:

- Processor: 2.26 GHz quad-core Krait 400 “Snapdragon” vs. A8 chip (full specs not released; benchmarked at 1.4 GHz dual-core)
- RAM: 2 GB vs. benchmarked at 1 GB
- Battery: 2,300 mAh vs. 1,810 mAh (6) / 2,900 mAh (6+)
- Screen quality: 1920 x 1080 at 445 pixels per inch vs. 1334 x 750 at 326 pixels per inch (6) / 1920 x 1080 at 401 pixels per inch (6+)
- Screen size: 5″ vs. 4.7″ (6) / 5.5″ (6+)
- Storage: 16 GB vs. up to 128 GB
- Fingerprint sensor: nope vs. yup
- LTE bands: 6 vs. 20
- Front-facing camera: 1.3 MP vs. 3.2 MP
- Rear-facing camera: 8 MP vs. 8 MP
- NFC and Bluetooth: yup vs. yup
- Wireless: 802.11ac vs. 802.11ac
- Screen polarization/viewing range: sucky vs. sucky
- Price (16 GB, unlocked, no contract): $349 vs. $649 (6) / $749 (6+)

Thoughts: Apple wins on some specs, but they tend to be the cheaper hardware aspects (LTE bands, which are largely redundant anyway; storage space; the front-facing camera). The standard iPhone 6 has a surprisingly poor display resolution, given the strides Apple has made in display technology with its Retina MacBook Pros and iPads. The standard iPhone 6’s battery is also poor compared to the Nexus 5’s, though the 6+ gets a much larger battery along with its larger size. In terms of performance (CPU/RAM), the Nexus 5 dominates.
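A quick aside on where those pixels-per-inch figures come from: PPI is simply the diagonal resolution in pixels divided by the diagonal screen size in inches. Here is a minimal sketch in Python (language-agnostic math; note that the Nexus 5’s panel is actually 4.95″, commonly rounded up to 5″, which is why the figure works out to 445 rather than 441):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1920, 1080, 4.95)))  # Nexus 5: 445
print(round(ppi(1334, 750, 4.7)))    # iPhone 6: 326
print(round(ppi(1920, 1080, 5.5)))   # iPhone 6+: 401
```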

Conclusion: Apple cheaped out on this phone, big time. It BARELY competes technically with the Nexus 5, an Android phone released almost a year earlier. I cannot believe that for the cost of one iPhone 6+ I can buy two Nexus 5s. It truly must be a cachet phone if people are willing to pay the price for such mediocre hardware.

How I Got A Job At Stack Exchange

Almost exactly 1 month ago today I found myself on a video call with Joel Spolsky. It feels insane to write that, even now, as it was a banner moment in my career. For me it was the equivalent of meeting a movie star who I had idolized since I was old enough to know what movies were. There had always been this Joel Spolsky guy throughout my career that I regularly read about and whose opinions on software development agreed with mine, and suddenly I was talking with him face to face. It was awesome.

Reaching this conversation was not the easiest thing I’ve done in my life. It took a few weeks and, in all honesty, finding time for so many interviews was a bit trying. How many interviews, you ask? Prior to Joel I talked with 5 or 6 other amazing people (Marc Gravell and Nicholas Larsen included). Somehow I managed to impress each of them enough to reach the end boss: Joel Spolsky.

The conversation lasted about an hour. It felt like 5 minutes to me, probably because of the excitement. Joel and I discussed the pros and cons of various software methodologies, mistakes each of us had made in our careers, some of the challenges of Dache (my open source software), and a few other topics. Then he said something awesome: “We’d love for you to come and work with us at Stack Exchange!” So much adrenaline hit me at this moment that I could have lifted a car with my bare hands. It was surreal.

A few days from now I officially start working with Stack Exchange. I feel very fortunate and excited for the opportunity. So far the Stack Exchange team has proven itself an insanely skilled and professional organization, one that treats employees as human beings instead of expendable resources. I’m already loving the culture and interactions with my coworkers.

IMPORTANT NOTES: First and foremost, the commentary here consists of my views alone and not those of Stack Exchange or any other entity. This post is merely to reflect upon the interview process and discuss the aspects and traits of my career and knowledge which I feel helped me get the job. This is not a tell-all or any sort of shortcut or easy way out. If you want a job at Stack Exchange, you will have to endure the same technical and soft skills challenges that I did – the details of which I will NOT be disclosing. :)

So, with that disclaimer out of the way, here are my thoughts on how I got a job at Stack Exchange, and how you can too:

Ego is the mind killer, so kill the ego. Most developers that I’ve met (including some prior versions of myself) have massive egos. Egos so big that the room can barely hold them. Egos that even put the illustrious Kanye West’s attitude to shame. This is natural given that we spend all day creating things from scratch (which is a god-like quality) that often generate significant revenue for companies. We start to feel very powerful and even fawned over. We learn the entirety of the software and hardware vertical of our current job’s domain, and then make a Superman-like flying leap to conclude that we know EVERYTHING about ALL software and hardware.

Thinking you know everything is the easiest way to suck as a programmer. If you believe that you know everything, you stop trying to learn new things (since you already know them, duh). So, while you’ve mastered ASP.NET MVC 3 at your current gig, 4 and 5 came out… the catch is that your company never upgraded because it’s too risky, and so you never learned them (or cared to). Now a year or two later you’re so far behind the current development stack that you can’t even see it with binoculars. And did I mention those little things called Ruby and PHP and even Java that you’ve never written a single line of? And how about MongoDB, Couchbase, Azure, EC2, and the literally thousands of other platforms and programming languages? I should hope that by now you realize that you know a little bit about a handful of languages and hardware configurations amidst a sea of thousands… By percentage, you’ve conquered maybe 0.1% of all that there is to know about development. So have I, and that’s OK.

Don’t be a rock star developer. This is something that I did for a few years and it only hurt my career. Many companies employ a strategy of intentionally furthering the developer ego in order to make them feel valuable (often without handing out appropriate compensation). Being a rock star sounds cool, but really it’s a nasty strategy that can cultivate incredibly destructive developer attitudes. Companies seek out and hire rock stars, and rock stars have a sense of entitlement. They develop huge egos and run their mouths at meetings, interrupting and talking over others. They seem to love circular logic (they’re right because you’re wrong because they’re right). They feel that all other team members exist simply to serve their every whim… and everyone loathes working with them.

The thing is – while you may actually be a one-in-a-million bona fide rock star developer – nobody cares. Talk is cheap and in my experience people will say nearly anything and everything to portray an image of who they want you to believe they are. If you are really good at what you do, telling people doesn’t do anything other than make them despise you… Hearing about how amazing your proprietary code is gets annoying – especially when it’s the 5th time this week you’ve said it. A good developer doesn’t need to brag about how good they are: their work speaks louder than any boastful words could. People around them will naturally do the bragging for them. Hallway conversations to the tune of “wow, he’s really smart” or “she knocked that out in hours when we thought it’d take days” will be fostered, and that isn’t a bad thing.

Let people talk about you all they’d like, but maintain a sense of humility and reality. You might be the best developer on your team or even at your company, but you are still a human being and this is still a job. Nobody has been hired to serve your ego (even if their job is to serve you). Being a good developer doesn’t make you better than anyone as a person; it just makes you successful in your career. Never lose sight of the fact that thousands of other developers are great at their jobs too. What separates you from the pack is being a great developer AND modest. It is a very rare combination in my experience, and the complete package that many companies are striving to hire. So, while it’s cool that you’re the very best developer that there ever was, stop believing your own hype and telling everyone who will listen. They don’t care, and you shouldn’t either.

Know that you know enough to know that you don’t know enough. Know what you do know, and know what you don’t know, and never be afraid to say “I don’t know” when it’s the truth. A developer who isn’t afraid to say “I don’t know” in front of a room full of people is a rare gem. By being honest you create trust and credibility. You also foster positive relationships with your peers and company. Nobody will remember that you’ve never heard of Angular.js or Couchbase, though they’ll always appreciate that you didn’t waste their time or money by pretending that you did. You can’t trust a developer who doesn’t know what they don’t know.

Know your data structures and algorithms. High level programming languages such as C# abstract so much away from the modern developer that many of us have no idea what’s actually happening “under the hood” when it executes. It’s cool that you can sling LINQ everywhere, but do you know the computational complexity of what you’ve done? Do you know what a hash table is and why it is useful? Could you sort a list of things efficiently? Can you think of a scenario where a stack is the best option? Note that you don’t need to memorize things like sorting algorithms (hell, I couldn’t if I tried), but a working knowledge of data structures such as trees, hash buckets, lists, queues, and stacks combined with the rudimentary knowledge of things like sorting, searching, and caching is a very valuable skillset. It’s the difference between a good programmer and a great one. Anyone can write C#, but only those who understand even the low level operations of each deliberate method call will write good, clean, efficient code. You owe yourself a fundamental understanding of how data structures like stacks and heaps work, as well as by-value vs by-reference memory addressing. These core concepts apply to ALL programming languages. Too many developers ignore the complexity of their algorithms and just call pre-made methods without understanding the implications. Educate yourself on data structures and algorithms and suddenly you’ll be ahead of the pack.
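To make this concrete, here’s a small sketch (in Python purely for brevity; the reasoning applies equally to C# or any other language). It contrasts an O(n) list membership test with an O(1) average-case hash lookup, and shows a scenario where a stack is the natural choice. The bracket-matching example is my own illustration, not something from any interview:

```python
# Membership test: a list scans elements one by one (O(n)), while a
# hash-based set computes the key's hash and jumps straight to its
# bucket (O(1) on average).
names_list = ["ada", "grace", "alan", "edsger"]
names_set = set(names_list)

print("alan" in names_list)  # True, found by walking the list
print("alan" in names_set)   # True, found via a single hash lookup

# A stack shines when you must unwind things in reverse order, e.g.
# checking that brackets are balanced: push opener, pop on closer.
def balanced(text: str) -> bool:
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack  # leftover openers mean it's unbalanced

print(balanced("[a(b)c]{d}"))  # True
print(balanced("[a(b]c)"))     # False
```

With a handful of names the difference is invisible; with millions of lookups against a large collection, the list version dominates your runtime while the set version stays flat. That is exactly the kind of consequence a great developer sees before writing the code.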

Know why your code works and why your code doesn’t work. Have you seen this image circulating on sites like Reddit?

[Image: “Clueless”, the meme about having no idea why your code does or does not work]

Funny as it is, the popularity of this image pisses me off. It claims that the essence of programming is having no freaking clue why your code does or does not run. I feel that this is unacceptable. A great developer strives for the WHY in every single thing that they do, not just HOW to quick-fix it. Code doesn’t compile? WHY? Race condition across threads? WHY? In asking “why” you further your knowledge and expand your skillset with the functional, rational “how,” which allows you to become a better programmer. Most great programmers don’t repeat the same mistakes over and over, though they of course make mistakes… They just make new and interesting ones!

I remember the days of slinging shoddy code and then copy-pasting lines from blogs and sites like Stack Overflow until my code seemed to work (though I wasn’t sure if or why it did). Those days are long behind me. When my code doesn’t compile, 99% of the time I immediately know how to fix the issue. When my code has a bug, I usually know exactly how to track it down and resolve it, and in doing so I often learn how to avoid it in the future. Having no idea why your code works is like being a lawyer who has no idea why their client is not guilty: fairly useless, overpaid, and an amusing spectacle at times. Make sure that you know why your code works and why it doesn’t. In my opinion this is a basic competency of being a professional developer. It’s OK to create bugs and make mistakes – it’s not OK to make the same mistakes repeatedly.
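As a hypothetical illustration of HOW versus WHY (my own example, not one from any interview): the function below appears to work on the first call, and a copy-paste fix might stop there. Knowing why it misbehaves (Python evaluates the default list once, at function definition time, so every call shares the same list) is what prevents the entire class of bug:

```python
def append_item(item, items=[]):  # bug: the default list is created ONCE
    items.append(item)
    return items

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- surprise! The same list was reused.

# The WHY-informed fix: use a sentinel and build a fresh list per call.
def append_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [2]
```

Someone who only asks “how do I make the test pass?” might special-case around the shared list; someone who asks “why is the list shared?” fixes it once and never writes the bug again.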

At this point I feel that I’ve done a reasonable job of representing my skillset and core competencies. These are the things I showed to the Stack Exchange team in my interviews. I didn’t necessarily have the exact answer (or even most efficient answer) to each of their technical questions, but I was modest and never afraid to ask for help or say “I don’t know” when I got stuck. My answers involved efficient data structures and algorithms, and I was able to explain why the data structure was the best choice in solving the problem. When I created bugs I was mostly able to identify them and indicate how to fix them. In doing all of this, I demonstrated competency and confidence as a developer and fortunately ended up with my dream job.

If you think you’ve got the chops to join us at Stack Exchange, apply today!

How To Guarantee Dev Team Failure

The Problem

I think that most devs would agree when I state that the definition of success in the corporate world of development places less emphasis on “good” code and more emphasis on “working” code. Working code is code that can be released to production on or before the deadline, regardless of performance or even bugs in most cases. As a developer, you ultimately feel as if you’ve failed when you toil for nights on end to meet steep deadlines and churn out crappy code. As a business, however, you’ve succeeded when you hit the deadline. My experience tells me that the typical metric upon which development teams are measured is often not quality of code or unit tests or even performance, but instead ability to meet deadlines and deliver solutions to clients. You’ve failed when you do not meet the deadlines and thus piss off the clients/customers. Your job has become a veritable boolean result with the outcomes of true and false. Deadline met? True. Deadline missed? False.

Doesn’t it feel awful to be measured in such a binary way? All or nothing, success or failure, deliver or delay. These are the only outcomes according to the people who write and sign your paychecks.

The Conflict

Why does this happen? A little introspective thought brings light to the subject, at least for me. The reason for these types of metrics becomes obvious when you consider their source. You work for a company that pays you (often with I.T. seen as a cost-center or “money pit”) to accomplish things which the company can then sell to clients. You’re an expensive tool by which they accomplish their ends. Though these companies often see software as a profit source, they see the means by which they get the software as an expense. Kind of strange, really.

The problem begins at the very core of the organization; the structure of the company is the starting point for guaranteed failure. In my experience, the dichotomy that forms in most companies is “I.T.” versus “The Business” in a bout-to-knock-the-other-guy-out. The minute you create this relationship of opposing fronts, you’ve already guaranteed development failure. With competing and contrasting goals (the business wants to sell stuff ASAP while I.T. wants to build stuff properly which takes longer) it is not possible for trust to exist within the organization. The Business will not believe a word that I.T. says when it comes to estimates, deadlines, or things that need to happen to ensure stability of the product in the future (technical debt). I.T. will not trust The Business to make rational decisions when it comes to features, development timelines and ensuring product quality. The result is a boxing match where each side is trying to force the other into compliance. Now you have conflict. Conflict dismantles good companies.

The Measurables

The Business is used to tracking their sales teams by metrics like “how many calls did you make/receive today?” and “how many sales did you make?” and “did you make X sales by Y arbitrary date?” where Y could be the end of each month. These are things that they understand, and thus like to control. Ask your favourite sales person for their opinion on the metrics by which their success is measured, and I am confident you’ll find that most will sum it up as “the faster I sell things, and the more things I sell, the more successful I am.” This makes sense from an empirical, see-the-figures-on-paper-on-my-desk-in-my-executive-office point of view, but I bet that the sales person in this case is not loving their life. A constant push to sell more, make more money, and do more. Any success in the future just raises the bar for the success which must follow. It’s a losing scenario. Eventually, they either get promoted out of the trenches of sales or they move to another company, resetting the bar which has been set too high. This buys them another year or two of raising that bar, until they ultimately repeat the process again.

Sales people who are put under the gun in such situations often resort to employing any tactic that they can to reach their goals… One of these strategies is saying anything at all to sign the client up. “Sure, the software can create rainbows and unicorns, just sign on the dotted line!” they say. It’s unfortunate, because customers who are hooked into these contracts tend to be very unhappy with the product when they find out that the software does not, in fact, create rainbows or unicorns. Or even a colour wheel and horses. It doesn’t even come close.

In the above case, The Business fails to measure the things that, long-term, make you the most money: client satisfaction and relationships. A good sales person (they definitely exist) is one that keeps the client happy with rational discussions and promises, and who is very transparent about what can and cannot be done and why. A great sales person is one the client loves so much that they’ll keep using your product, even when a better product exists, simply because they fear losing the relationship. This client is a client for life (or at least a long while) and makes you a lot of money. But how do you measure “happiness” and “relationships” long term? It’s a hard problem. Dating sites have been trying to solve it for over a decade. The Business will likely not dedicate the time and resources to do so themselves. So, they measure phone calls, sales, and other crappy metrics to ensure that the sales team are doing their job.

Here’s where we get back to the topic: developers and failure. The Business, who in most cases pays I.T. to create things to sell, employs these same arbitrary measurements when grading development teams. They often only know how to see success as a measured outcome of facts, and so they create the only measurements that they can empirically apply: features and deadlines. Does the dev team build all of the features and hit the deadline? Great. Do they not? Not great. These measurements themselves are acceptable (even good), but the combination of them (lots of features on short deadlines) is the problem.

The Money Talks

Where it gets tricky is in the realization that “show me the money” is how business ultimately tends to run. The sales people very overtly make the money, so they are seen as successful and important people in the company. The dev team also makes money, but is perceived to cost money, and they are seen as a cost-center that must be carefully weighed and measured to avoid excessive spending. What this leads to is an unhealthy practice of allowing sales people the freedom to employ any tactics necessary to land sales and make the money. In a business such as The Business as described, your life as a developer begins to suck.

To close the deal, the sales person will often promise the client almost anything about the software that you develop. They may promise new feature X by the end of the month, they may even promise 10 new features by the middle of the month. Whatever makes the client sign on. Then, the client says let’s rock and your quality of life drops sharply.

The very next thing that happens is The Business casually tries to confirm what seems obvious and even mandatory to them. “So your team will have these 10 things ready to go by the 15th, right?” they say. “This is a million dollar client, and it would be horrible if we lost them because you couldn’t deliver!” and now the pressure is on to do the nearly-impossible in virtually no time.

The dev team might try to politely push back and say that this is practically impossible, but The Business sees the dollars on the dotted line and will not listen. Flip the kill switch: forgo the QA time; all developers must focus on all of these features, day and night, so that the deadline can be met. Why? Because that’s how the team is measured. If the team doesn’t hit that deadline, they’ve failed and the million dollar deal is lost with the dev team seemingly at fault. Developers don’t want to work extra? Order in pizzas and promise them time-in-lieu as soon as the deadline is over with. Note that they will likely never actually see this time-in-lieu because right after this deadline will be the next one, with similar outlandish expectations and even tighter deadlines. And after that, another one. And another one. And the cycle will probably never end.

The Mad Production Dash

So, as the developer, you develop it as fast as you can. The code starts to resemble Frankenstein as you tack on bits and pieces to make it work ASAP. You subdue your ego and uneasiness about the quality of code by commenting // HACK: undo this crap later everywhere. Somehow that makes you feel better as it creates the slight glimmer of hope that eventually you’ll have enough time to come back and undo this monstrous pile of garbage. But you never will get that time, because the next deal is coming down the pipe. And so the code becomes worse. Your development effort completes 1-2 days before the arbitrary sales deadline, and after your QA team flips their lids on having 48 hours to test 1000+ hours of work, they do “critical path testing” to make sure it at least does something correctly and certify it as “good enough.”

The team releases to production early in the morning of the deadline day, and though it takes 5 hours because there are 17 untested things to fix on-the-fly (and realistically they have no option to abort the release or roll back because the consequences will be dire), they eventually shove the hacked up code out the door and declare it done. The Business shows their appreciation in the form of a short, impersonal e-mail that doesn’t name any person of achievement specifically. The development team is feeling underappreciated and pissed off.

What does the future hold for such a company? The code will probably spiral into bug-filled oblivion until it can’t do anything correctly or in any reasonable amount of time. Despite the weeks and months during which the development team pleaded with the business for time to clean up the technical debt, they are brushed off because taking time off of features loses clients and thus money. Then, as it starts to come crashing down in production, they suddenly beg the developers for a quick fix. “Do whatever it is that needs to be done!” they plead as they see their sales going down the drain. And now, because it is on fire and burning to the ground, the dev team is finally given a moment to pay back some of the technical debt that has been accrued during this vicious cycle. Repeat.

The Solution

When a dev team has no say in the deadlines of the work they must do, they will usually fail. And when they are set up for failure from the start, they will likely get tired of being blamed for the problems without ever being given the time to devise the solutions. This leads to bad work culture, high turnover, and low productivity.

The way to guarantee dev team success is obvious at this point. It’s really as simple as trust between I.T. and The Business. They must keep each other in the loop as stakeholders. The Business has no product without I.T. and I.T. has no job without The Business’s clients. It’s a mutually beneficial relationship and it should be treated as such, rather than mutually parasitic.

A good company’s sales team will often consult with I.T. prior to promising any dates and deadlines when unknowns are involved. It is practical to ask the people responsible for a task how long it will take them to complete it. This is much like how you might ask a waitress how long it will take for the food to arrive or a painter how many days they need to paint your home. This is a positive and productive discussion. Hallway conversations should become commonplace: “Hey dev team, I’ve got a client who wants to sign on but not until we build X, how long will that take?” The reply is as easy as “We’ll discuss it as a team and send you an estimate with some assumptions to confirm with the client” and just like that there’s a great working relationship that practically guarantees success. The team knows what work is coming, and also knows how long they have to complete it.

The Correct Measurements

If a dev team continues to fail in an environment where trust exists, then that team is likely not competent. They either cannot estimate correctly or cannot deliver within their own estimates. Sometimes devs suck at estimating because they’ve been making estimates under the oppressive sales gun for so long that they’ve effectively forgotten how to give themselves a fair amount of time. Whatever the cause, the responsibility for fixing this rests with the dev team, and they (or The Business) must repair the situation quickly and effectively to maintain the mutually beneficial relationship based on trust. Just as The Business owes I.T. input into the deadlines, I.T. carries the burden of being fair, accurate, and responsible with those deadlines.

Assuming that The Business now has a competent, skilled dev team, the question turns to the customers. If the customers do not like the estimates given to them, this may cost the company sales. Perhaps the customer wanted the impossible and The Business is giving them a dose of reality. Perhaps The Business does not want such a needy customer and they’re in a situation to be able to afford to tell them no thanks. Perhaps The Business realizes that the client’s request is reasonable but the timeframe of the estimate feels a bit long. In that case they can ask I.T. why. If the answer is not sufficient and justifiable, then perhaps the dev team is still not competent. No dev team should be let loose without checks and measures on productivity, but those metrics should be reasonable.

Ultimately, if you want to guarantee the failure of a development team, simply promise features to clients and customers without ever asking for (or trusting) the input of the team that is actually going to build those features. It’s just like telling the waitress that your food must be on the table in 10 minutes, without first asking the cooks how long it takes to safely and properly make it.

If this situation sounds familiar, try talking with The Business about it. Try to help them see it from your point of view. Ask them “how successful would you be if I demanded that you sell 20 new clients by Friday?” and perhaps some light bulbs will start to go on. Ultimately, we as developers often know nothing about sales and have no business dictating their measurable work expectations. They similarly have no business dictating ours, but a relationship of trust can be built to allow us all to work together and accomplish great things.

To Node.js Or Not To Node.js

Node.js – it has rapidly become the “new hotness” in the tech start-up realm. With each passing day, the fan base of Node lovers grows larger, spreading their rhetoric like a religion. How do you spot a Node.js user? Don’t worry, they’ll let you know. ;)

One day you’re at a regular user group meeting, sipping soda and talking with some colleagues, when the subject turns to Node. “Have you guys tried Node.js?” asks one of the people in your group. “It’s all the rage. All of the cool kids in Silicon Valley are using it!” “What does it do?” you ask, only to be bombarded with a sales pitch worthy of the best of used car lots. “Oh, it’s amazing!” they reply, sipping their Diet Coke and adjusting their hipster fedora and backpack with MacBook Pro in it (or something like that), “It’s server-side JavaScript. It runs on a single thread and it can do 100,000 web requests a second!” They glance at the group for the oohs and ahhs, but most people just stare back with amazement in their eyes. Then, your hipster Node-loving friend drops the words that start wars: “It’s way better than .NET” – and just like that, your group is hooked. They go home, download the Node.js tools, write “Hello World”, and suddenly they’re on their way to the next user group meeting to talk about how great Node is.

Okay, so I might be exaggerating the appearance and demeanour of your average Node lover a little (read: a lot, almost entirely in fact). However, I have had this exact scenario happen repeatedly over the last six months, with ever-increasing intensity and frequency. Node users love Node. They want you to love Node. They’re excited about it.

Having given it some thought, why wouldn’t Node.js developers be excited? Want to fire up a “Hello World” web server in Node? It’s trivial:

// Load the http module to create an http server.
var http = require('http');

// Configure our HTTP server to respond with Hello World to all requests.
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World\n");
});

// Listen on port 8000, IP defaults to 127.0.0.1
server.listen(8000);

Want to do the same thing in .NET? Be prepared to learn about IIS, the Machine.config, the Web.config, the Process Model, how Global.asax works, either ASP.NET MVC or WebForms (huge paradigms in themselves), and how Visual Studio works. Don’t forget to learn how to create a solution, at least one web project, and how to deploy the application to IIS. Oh, and one little detail: go ahead and learn C# while you’re at it. All of that’s pretty much just as easy and intuitive as Node.js, right?

.NET is incredibly complicated. Node.js is incredibly simple. On the merits of that fact alone it’s no wonder that these .NET developers and fresh-out-of-college kids who have already dabbled in JavaScript are transferring these skills to the server side and standing up web servers in literally 5 minutes and 5 lines of code. How can you deny the sexiness of that? The bragging rights it gives you? The ease and comfort of a language you’re already familiar with?

This, in my opinion, is why Node.js is becoming huge. It has simplified and streamlined the development process and made programming very accessible to almost everyone (or at least anyone who has ever played with JavaScript).

However, to those who sell Node.js as single-threaded, and to those who sell Node.js as having significantly better performance than .NET, I say this: you are wrong.

With simplicity comes misunderstanding and the concept of “Leaky Abstractions.” As my good friend and colleague Thomas B. said to me during dinner last week: Node.js is opinionated. It has an opinion on how you should do things, and it forces you to do them a certain way.

Node.js is not single-threaded, though many Node developers in my experience believe it to be. Node’s creator believes that a single-threaded listener delegating I/O-bound work to a thread pool is the key to a highly available application. As a result, Node.js forces you into this paradigm of event-based asynchronous execution of I/O operations via a thread pool.

Node.js has a single thread listening for connections. All of the code which you as the Node developer write is executed on this single thread. This single thread is all that is exposed to you. As soon as a connection is received, Node’s listening thread executes your coded event on that same listener thread. The event either does quick, non-CPU-intensive work (like returning static content to a client), or long-running, I/O-bound operations (like reading data from a database). In the case of the former, the listener thread does in fact block for the duration of the request, but the request happens so quickly that the delay is trivial. In the case of the latter, Node uses libuv (the native library it is built upon) to delegate the I/O work to a thread from an underlying pool of native threads. The single listening thread kicks off the work to an I/O worker thread with a callback that says “tell me when you’re done” and immediately returns to listening for the next connection. It is thus plain to see that Node.js is indeed multi-threaded, though this functionality is not directly exposed to the Node developer.

An important note regarding Node.js is that any CPU-intensive code which you write will block the entire system and make your application scale poorly or become entirely unresponsive. As a result, you would not want to use Node.js when you need to write an application that will do CPU-intensive work such as performing calculations or creating reports.
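A quick sketch of why that is: even a zero-millisecond timer cannot fire while synchronous CPU work holds the one JavaScript thread (the loop size here is arbitrary):

```javascript
// A deliberately CPU-heavy synchronous function. While it runs, the
// single Node.js thread can do nothing else: no connections are
// accepted and no callbacks fire.
function busySum(n) {
  var total = 0;
  for (var i = 0; i < n; i++) {
    total += i;
  }
  return total;
}

var timerFired = false;
setTimeout(function () { timerFired = true; }, 0);

// The timer callback cannot run until this call releases the thread.
busySum(10000000);
console.log(timerFired); // still false: the event loop was starved
```

In a real server, every queued request waits exactly like that timer does.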

This is how a single thread can handle multiple requests at once; receiving a request and either serving static/simple content or delegating it to an I/O thread from a thread pool are both very cheap and quick operations. When the thread pool thread that is doing the long-running I/O work signals to the single listener thread that the work is done, the listener thread picks up the response and sends it back to the user; this is another very cheap operation. The core idea is that the single listener thread never blocks: it only does fast, cheap processing or delegation of requests to other threads and the serving of responses to clients. The diagram below (taken from Stack Overflow) explains this visually:

Node.js Processing Model

This is a very good, scalable, highly-available way to write code; Node.js nailed its approach, and developers benefit from it. However, as of .NET 4.5, you can easily create this exact paradigm/pattern in your .NET applications. The difference is that .NET does not force you to do so.

With the introduction of a very tidy wrapper around asynchronous programming in .NET 4.5 (async/await keywords), Microsoft made asynchronous, event-based programming quite a bit easier and more intuitive. And with recent conformance by Microsoft to the jointly-created OWIN specification, the web pipeline of .NET has become much simpler.

In fact, you can now write the “Hello World” asynchronous web server in .NET in about as few lines as Node.js! In this example, I host a web server in a console application which is terminated when a key is pressed:

/// <summary>
/// A simple program to show off an OWIN self-hosted web app.
/// </summary>
public class Program
{
    /// <summary>
    /// The entry point for the console application.
    /// </summary>
    /// <param name="args">The arguments to the execution of the console application. Ignored.</param>
    static void Main(string[] args)
    {
        // Start OWIN host
        using (WebApp.Start<Startup>(url: "http://localhost:8000"))
        {
            // Runs until a key is pressed
            Console.ReadKey();
        }
    }

    /// <summary>
    /// This code configures the OWIN web app. The Startup class is specified as a type parameter in the WebApp.Start method.
    /// </summary>
    private class Startup
    {
        /// <summary>
        /// Configures the web app.
        /// </summary>
        /// <param name="app">The app builder.</param>
        public void Configuration( IAppBuilder app )
        {
            // We ignore any rules here and just return the same response for any request
            app.Run( context =>
            {
                context.Response.ContentType = "text/plain";
                return context.Response.WriteAsync( "Hello World\n" );
            } );
        }
    }
}

One of the big positives of Node.js is that you opt in to complexity. You start very simply and add functionality as you need it. I’m a big fan of this approach and I feel that this is where Node really shines. Nothing bothers me more than starting an “Empty MVC 4 Web Application” from a template in Visual Studio only to have it install about 15 NuGet packages, one of which is Entity Framework. Great, I fired up a blank application and already my ORM has been decided for me. Who said I even needed one in the first place?!

The above OWIN-based approach to hosting a web app in .NET allows you to benefit from Node’s simple, minimal approach. I have started out simply, and can now add Dapper if I need an ORM, Newtonsoft.Json if I need to serialize to and from JSON, Unity if I care about dependency injection, etc. It’s a nice, clean slate upon which I can build any framework that I desire.

This approach in .NET is very comparable to Node.js, with a few differences:

  • Node.js uses 1 listener thread, while .NET uses N listener threads. If your Node.js application does any CPU-intensive work at all, it will block the entire system and potentially cause your application to become unresponsive. .NET, on the other hand, is designed to do CPU-intensive work. Tying up a thread to do some CPU work is not a concern because there are other threads available in the listener thread pool to take other requests while this is happening. However, both Node.js and .NET are limited by the server’s resources; in either case, if you max out the CPU or RAM, your app will perform horribly regardless of thread delegation. This is simple resource contention.
  • Node.js delegates I/O bound work to an I/O thread worker pool, and .NET implemented asynchronously (async methods and the async/await keywords) does the same.
  • Node.js uses an event-based paradigm, and .NET does also when implemented asynchronously.
  • Node.js offers high performance for I/O bound, low CPU operations, and .NET offers comparable performance when you skip the IIS pipeline. IIS tacks on a significant amount of performance overhead due to things like session state management, forms authentication, the process model, request lifecycle events, etc. These are not bad things to have and use, but if you don’t need IIS, session state, forms auth, request lifecycle events, or the process model, then don’t use them!
  • Node.js must parse/serialize to and from JSON, and .NET must serialize to and from JSON to interact with .NET objects. Parsing is going to be much cheaper in Node.js than serializing is in .NET, but .NET also enables you to serialize to XML, Atom RSS, and anything else that you desire. With Node, this is a bit trickier, and the serialization overhead comes back into play to even the field.
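On the JSON point, the reason parsing is so cheap in Node is that JSON deserializes directly into native JavaScript objects, with no class mapping or reflection layer in between:

```javascript
// JSON is JavaScript's native object notation, so "deserialization"
// is a single parse call that yields a plain object.
var payload = '{"id": 1, "name": "Dache"}';
var obj = JSON.parse(payload);

// Serializing back out is equally direct.
var roundTrip = JSON.stringify(obj);
```

In .NET, the equivalent step maps the JSON onto typed CLR objects (via a serializer such as Newtonsoft.Json), which is where the extra overhead comes from.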

When someone compares Node.js to .NET, I find that they often actually compare Node.js to IIS hosted frameworks such as ASP.NET MVC, ASP.NET WebForms, and ASP.NET Web API (in IIS hosted mode). These are all tools built on top of ASP.NET to simplify enterprise web development and to do CPU-intensive calculations. In these scenarios, Node.js will have an advantage, because it is designed specifically to NOT do CPU-intensive calculations. You are effectively comparing CPU-intensive Apples to low-CPU-usage Oranges. It is not a fair comparison.

When someone compares Node.js to a self-hosted .NET web app which does I/O-bound long-running operations via asynchronous thread pool delegation, they find that there is not much of a difference in performance between the two runtimes. In fact, comparing Node.js to self-hosted Web API (NOT using IIS) doing low-CPU work, the numbers are very close:

Web API vs Node.js

This image was taken from a benchmark done in 2012 with the Web API Release Candidate (not Web API 2, and NOT OWIN hosted). Given that Web API 2 now exists and can be self-hosted via OWIN, I’d love to see a more recent benchmark comparing the two. In fact, I will try to do this for my next blog post.

So, to Node.js or not to Node.js? I submit these final thoughts:

I guess the point of all of this has been that neither Node.js nor .NET is necessarily better/the best/the new hotness. They both serve a purpose, and while Node.js is much easier to use and more accessible to developers, .NET is very versatile and powerful as well. They are built to do different things: Node.js specializes in performing and scaling well for low-CPU, highly I/O-bound operations. .NET can perform well in this scenario too, but can also perform well with high-CPU operations. In fact, I would argue that .NET excels at CPU-intensive operations, especially when compared to Node.

There are many .NET developers in the current tech scene that are capable and competent. This means that it’s not too hard to find, hire, and retain good .NET talent. They can pick up self-hosted OWIN web apps in a matter of days, and begin to write very scalable, high-performance web apps and services based on it. They can even easily host Web API in an OWIN web app via console, a Windows service, or Azure. There’s a community that has existed for over a decade that evolves the .NET framework with amazing tools and add-ons like Dapper, Unity, and Newtonsoft.Json. This community is mature and there are many prominent voices within it that offer advice and help.

Relative to .NET, there aren’t as many Node.js developers in the current tech scene that are capable and competent. This is because fortune favours the bold, and only a few have been early adopters of Node.js. In my experience, few Node developers will truly understand what is going on in the Node.js processing model and how to exploit it for maximum performance, though the opinionated paradigm of Node.js will force developers to write good asynchronous code. It can be hard to find, hire, and retain good Node.js talent. This will become less of a concern as Node’s following grows. The Node.js community is budding and has created some amazing tools and add-ons for Node.js as well such as ORMs and DI frameworks. This community is not yet mature and I am not aware of many prominent voices within it that offer advice and help. As a result, it could be difficult to find support and tools for Node.js if you encounter a problem.

In conclusion, both Node.js and .NET are great. Which one to pick for a particular solution/application, however, depends on many factors; it is not black and white but a full colour spectrum. It would be very foolish and naive for a .NET developer to use .NET to solve every single problem just because “that’s what we use.” It would be similarly foolish for a Node.js developer to propose Node.js as a solution for every project or problem that he or she encounters. One must choose the right tool for a given job, and be open to different paradigms in order to truly excel.

In general, use Node.js when you have highly I/O-bound operations that don’t use much CPU. Use .NET when you need to calculate things and use a lot of CPU.

Don’t use Node.js solely on the reasoning that it’s much faster and performs way better than .NET: it depends on how you use .NET. And don’t use .NET if all you’re doing is heavily I/O-bound operations with low CPU usage: that’s where Node.js excels.

C# Probably Getting New “Safe Navigation” Operator “?.”

It looks as if the Visual Studio dev team may be implementing a new operator in a future .NET release. This is due in large part to community demand, which is pretty cool because it shows that the VS team is listening to their customer base – a key part of building a successful product.

This new operator is likely going to take the syntax of ?. and is known as the Safe Navigation Operator.

Its purpose is rather simple, and it eliminates a load of boiler-plate code by making the compiler do it for you.

Say you have these classes:

public class A
{
    public B B { get; set; }
}

public class B
{
    public C C { get; set; }
}

public class C
{
    public int D { get; set; }
}

It’s plain to see above that it’s possible that an instance of A may have a null B property, or even that an instance of B may have a null C property. If you want to get D from A safely (so as to avoid null reference exceptions), your code often ends up looking something like this:

int? result = A.B == null ? (int?)null : 
    (A.B.C == null ? (int?)null : A.B.C.D);

What an incredibly annoying pain to write out, and even to read. You could maybe make it clearer, at the cost of more verbosity and lines of code:

int? result = null;
if (A.B != null)
{
    if (A.B.C != null)
    {
        result = A.B.C.D;
    }
}

This is still a bit convoluted and boilerplate-heavy for my liking. Really, all we want to do in English is get the integer value of D if it exists on A. It shouldn’t be so challenging!

Enter the new Safe Navigation Operator and how it might function:

int? result = A.B?.C?.D;

Note the ?. placed at each point where a possibly-null instance is dereferenced: where B is asked for C, and where C is asked for D – exactly the two null checks from the verbose versions above. This is much more readable, and intuitive to look at. To me, it says “if A.B is not null and A.B.C is not null, return D, else return null,” which is pretty streamlined. Upon simple inspection, it also clearly says that A.B and A.B.C are nullable, which is a great side effect of the shortened syntax.

Note that the final product, if included in C#, may behave differently than I’ve demonstrated and hypothesized here. The code in this post is mostly made on assumptions of compiler rules for this operator, and could in the future prove to be wrong. However, even still, this operator will be a great addition to C# and .NET if it makes the cut!
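As an aside, JavaScript has since standardized an analogous “optional chaining” operator, also spelled ?., and it shows the same short-circuiting behaviour in action:

```javascript
var a = { b: { c: { d: 42 } } };
var broken = { b: null };

// ?. short-circuits to undefined the moment a link in the chain is
// null or undefined, instead of throwing a TypeError.
var result = a.b?.c?.d;
var missing = broken.b?.c?.d;

console.log(result, missing); // 42 undefined
```

The semantics differ slightly from C# (JavaScript yields undefined rather than null), but the boilerplate it eliminates is the same.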

Trigger IValidatableObject.Validate When ModelState.IsValid is false

I recently came across an ASP.NET MVC issue at work where the validation for my Model was not firing correctly. The Model implemented the IValidatableObject interface and thus the Validate method which did some specific logic to ensure the state of the Model (the ModelState). This Model also had some DataAnnotation attributes on it to validate basic input.

Long story short, the issue I encountered was that when ModelState.IsValid == false due to failure of the DataAnnotation validation, the IValidatableObject.Validate method is not fired, even though I needed it to be. This problem arose due to a rare situation in which ModelState.IsValid was initially false but was later set to true in the Controller’s Action Method by some logic that removed errors from the ModelState.

I did some research and learned that the DefaultModelBinder of ASP.NET MVC short-circuits its logic: if the ModelState is not valid (i.e., ModelState.IsValid is false), the IValidatableObject logic that runs the Validate method is never fired.

To thwart this, I created a custom Model Binder, a custom Model Binder Provider (to serve my custom Model Binder), and then registered the Model Binder Provider in the Application_Start method of Global.asax.cs. Here’s the code for a custom Model Binder that always fires the IValidatableObject.Validate method, even if ModelState.IsValid == false:

ForceValidationModelBinder:

/// <summary>
/// A custom model binder to force an IValidatableObject to execute the Validate method, even when the ModelState is not valid.
/// </summary>
public class ForceValidationModelBinder : DefaultModelBinder
{
    protected override void OnModelUpdated( ControllerContext controllerContext, ModelBindingContext bindingContext )
    {
        base.OnModelUpdated( controllerContext, bindingContext );

        ForceModelValidation( bindingContext );
    }

    private static void ForceModelValidation( ModelBindingContext bindingContext )
    {
        // Only run this code for an IValidatableObject model
        IValidatableObject model = bindingContext.Model as IValidatableObject;
        if( model == null )
        {
            // Nothing to do
            return;
        }

        // Get the model state
        ModelStateDictionary modelState = bindingContext.ModelState;

        // Get the errors
        IEnumerable<ValidationResult> errors = model.Validate( new ValidationContext( model, null, null ) );

        // Define the keys and values of the model state
        List<string> modelStateKeys = modelState.Keys.ToList();
        List<ModelState> modelStateValues = modelState.Values.ToList();

        foreach( ValidationResult error in errors )
        {
            // Account for errors that are not specific to a member name
            List<string> errorMemberNames = error.MemberNames.ToList();
            if( errorMemberNames.Count == 0 )
            {
                // Add empty string for errors that are not specific to a member name
                errorMemberNames.Add( string.Empty );
            }

            foreach( string memberName in errorMemberNames )
            {
                // Only add errors that haven't already been added.
                // (This can happen if the model's Validate(...) method is called more than once, which will happen when there are no property-level validation failures)
                int index = modelStateKeys.IndexOf( memberName );

                // Try and find an already existing error in the model state
                if( index == -1 || !modelStateValues[index].Errors.Any( i => i.ErrorMessage == error.ErrorMessage ) )
                {
                    // Add error
                    modelState.AddModelError( memberName, error.ErrorMessage );
                }
            }
        }
    }
}

ForceValidationModelBinderProvider:

/// <summary>
/// A custom model binder provider to provide a binder that forces an IValidatableObject to execute the Validate method, even when the ModelState is not valid.
/// </summary>
public class ForceValidationModelBinderProvider : IModelBinderProvider
{
    public IModelBinder GetBinder( Type modelType )
    {
        return new ForceValidationModelBinder();
    }
}

Global.asax.cs:

protected void Application_Start()
{
    // Register the force validation model binder provider
    ModelBinderProviders.BinderProviders.Clear();
    ModelBinderProviders.BinderProviders.Add( new ForceValidationModelBinderProvider() );
}

MVC4 Conditional HTML Attributes

MVC4 made one simple and yet awesome improvement to View rendering that I don’t think many people are aware of.

Have you ever had to conditionally add an attribute to an HTML element in your MVC View based on the presence of a variable? The typical use case is applying a CSS class to a div. Most of the time that code looks something like this:

<div @(myClass == null ? "" : "class=\"" + myClass + "\"")></div>

What a pain – not only to write but to read… This destroys the View’s readability and clutters the HTML up big time!

In MVC4, this process is much simpler. Just do this instead:

<div class="@myClass"></div>

That’s all you need. If myClass is null, it will ignore the class attribute and render without the class:

<div></div>

If myClass is not null, it will apply the class to the div:

<div class="test"></div>

This should work with other HTML element attributes also, though I haven’t tested them all myself. What a great little improvement!

Dache is now Open Source

In my spare time over the last year or so, I’ve been writing a distributed caching solution for .NET called Dache. I built it because after using both AppFabric and Couchbase in a corporate environment which had significant load (> 30,000 concurrent users), I felt that they both fell short of their potential. AppFabric requires installing PowerShell and a ton of other stuff, and reserves 50% of the server’s RAM for internal use, leaving only the other half of the memory available for caching. Couchbase operates reasonably well but doesn’t really keep up with load, offering lower operations per second than other solutions. Our company talked to NCache and felt that their system was overpriced and didn’t deliver the features that we wanted. As a result, I cranked out Dache in my spare time as a native .NET solution that out-performs the competition.

Though I was originally going to sell the solution, I recently chose to make Dache open source and released it to GitHub. At the same time, I overhauled the website with a new layout that I’m very fond of. Ultimately I just want the .NET community to have access to good, stable distributed caching software that doesn’t require Linux to host.

Tonight I launched version 1.1.0 of Dache, which offers performance increases over 1.0.0 and simplifies the solution quite a bit. In particular it adds some neat features around the bulk API and tagging system to maximize the throughput of these features.

If you have a need for a distributed caching solution, please give Dache a go! I love feedback, so feel free to contribute on GitHub, contact me, or leave comments!

http://www.getdache.net