Three Lessons That Product & Engineering Leaders Have Been Trying To Teach Me For Years, But I Never Understood.
by Erik Severinghaus
For the past 25 years, I have been working in the general field of “building software and software companies”. I consider myself to be one of the more technically astute business people in the room when it comes to building software. Although it has been a while since I last used vi or GCC, I can usually make sense of an architecture diagram.
And yet, as I try to help build a software company that is all about building software, I’m amazed by how many retrospectively obvious things I just didn’t understand. If it’s better to “keep your mouth shut and be thought a fool than to open it and remove all doubt” — I’m going to go all in on being a fool.
Somewhere, there are VPs of Engineering and Product who are thinking, “Yeah man, I’ve been telling you that for years.” Mea culpa. I guess I’m a slow learner.
I’m willing to bet that every executive suite is dominated by people who don’t understand many of these things. Ultimately, product development sits downstream of decisions made at the board and executive level regarding resource allocation, prioritization, and business strategy. When executives don’t understand (or misunderstand) these ideas, they unwittingly create waste, inefficiency, and frustration in the development of their products.
Lesson 1: Story Points Shouldn’t Be Numbers
Most agile teams use "points" to represent the difficulty of a task, issue, or card (whatever you call the unit of work) that needs to be done. This always made sense to me. How much work gets done seemed like a simple equation: the sum of the points accomplished in a sprint. It was fairly obvious to me that this metric could be manipulated depending on how many points were allocated to a card, but whatever.
I didn't understand that the numbers form an ordinal scale: they name a category rather than express a numerical relationship. Suppose the numbers represent the month of the year, with January being "1", February being "2", and so on. If you're asked for the expected temperature in any given month, there's a relationship there. You generally know that August (month 8) is going to be much hotter than January (month 1), but it's not 8 times hotter. And every once in a while, you'll have a hotter day in January than you will in August.
To make matters more complicated, “points” often represents even more amorphous concepts than just “level of effort.” They can refer to things like “level of risk” and “uncertainty.”
What I have come to realize is that items labeled with large point values are often cries for help. They are a way of saying, "I'm being asked to do a thing that I don't understand, and I don't know how to make it work." Usually, that's the result of an upstream process issue.
It could be that the business stakeholders haven’t effectively articulated what the military would describe as “commander’s intent” — or why something is being asked of an organization. It also could be that the user research into how to effectively solve a problem hasn’t been well done or communicated. It could even be that the requirements of one piece of work conflict with the requirements of another piece of work.
What have I done about it?
I’ve stopped thinking that the “number of points accomplished” is a useful metric for productivity in the way that “quota achieved” is useful for a salesperson. It simply doesn’t translate that way.
Now, I ask the team what the highest point tasks are in upcoming sprints and use that information to try to figure out where there might be ambiguity upstream of the development team. I start from the premise that if something has a high point total associated with it, there may be places where the work needs to be better defined if we want it to be done on time.
In my mental model, I now think of “points” as a “string” or category variable, rather than an “integer” or continuous variable.
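A minimal sketch of that mental model in Python (the task names, point labels, and threshold are all made up for illustration): an ordinal scale still supports comparison, so you can surface the riskiest items, but it deliberately refuses the arithmetic that a "velocity" sum would imply.

```python
# Story points as an ordered category (labels with a rank) rather than
# numbers you can add. All values here are hypothetical.

SCALE = ["1", "2", "3", "5", "8", "13"]          # labels, not integers
RANK = {label: i for i, label in enumerate(SCALE)}

sprint = {"login form": "3", "migrate billing": "13", "copy tweak": "1"}

def riskier_than(points: str, threshold: str) -> bool:
    """Comparison is meaningful on an ordinal scale."""
    return RANK[points] > RANK[threshold]

# So we can still surface the "cries for help"...
flags = [task for task, pts in sprint.items() if riskier_than(pts, "5")]
print(flags)  # ['migrate billing']

# ...but arithmetic is not meaningful. The labels are strings on
# purpose, so an accidental "velocity" total fails loudly instead of
# lying quietly:
# sum(sprint.values())  -> TypeError
```

Storing the points as strings is the whole trick: the type system now enforces the mental model.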
Lesson 2: TMTOWTDI
During the early development of the web, there was a significant debate between those who favored Python and those who favored Perl. The languages had a lot in common: both were interpreted (rather than compiled), which made them well-suited for rapid web development. Both open source from the start, they attracted rabid fans who built lots of web libraries and other modules that helped dramatically accelerate development.
Although there were some technical differences, the main dividing line between the two camps was stylistic: Python was a structured programming language where white space mattered, so one person's Python code looked much like another's. Perl, on the other hand, was more artistic. Larry Wall, the creator of the language and author of the Camel Book, joked that Perl stood for "Pathologically Eclectic Rubbish Lister." There was Perl poetry, and an obfuscated Perl competition to see who could write the most indecipherable code. The unofficial motto of the Perl movement was "There's More Than One Way To Do It" (TMTOWTDI). Perl enthusiasts considered themselves too hip for the stilted "sameness" of Python.
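To make the white-space point concrete, here's a throwaway Python function (the function itself is just an example): the indentation is the block structure, so deviating from it is a SyntaxError rather than a style choice, and two programmers' solutions end up looking alike.

```python
# In Python, indentation is syntax, not preference. Re-indenting the
# body of this (hypothetical) function inconsistently would not be a
# stylistic variation; it would fail to parse.
def classify(n):
    if n % 2 == 0:
        return "even"
    return "odd"

print(classify(4), classify(7))  # even odd
```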
Fast forward 20 years: Python is among the most in-demand programming languages, while Perl doesn't make the top 10. Standardization may have felt boring, but it drives consistency and interoperability, and some level of consistency is necessary to drive improvement.
Our current stage of product development reminds me of the old days of Perl-Love. Many product teams within companies take great pride in operating independently, using different tools, processes, languages, and metrics. Discovering the degree of variability within even relatively small development organizations has been an eye-opener for me.
If you are an executive who assumes that there is a level of standardization within your product development organization, you may be shocked by the reality.
What have I done about it?
I'm sure there are occasional reasons for variability (a skunkworks R&D org may run a different iteration cycle than Production Support, for instance), but we have standardized on both process and toolset, with variability as an exception rather than the rule. For the same reason that my sales teams use the same CRM and the same definition of a sales cycle, the product development team uses the same systems of record (inside a unified instance) with consistent process terminology. Furthermore, we maintain executive visibility into what's happening with that process, just as we do with sales, especially as the organization grows and scales.
I’m all for disruptive, creative innovation within the team. We encourage that. But we will do it within a framework that promotes standardization and consistency so that we can continuously improve. Our approach will feel more like Python than Perl. I expect the rest of the industry will eventually evolve in the same direction.
Lesson 3: Be really careful asking teams to context switch
As soon as we began measuring process efficiency, we discovered that the single biggest drag on efficiency was requests that forced the team to context switch. It's ironic, because the "Agile" process framework is built to accommodate rapid changes in direction. In practice, asking the team to work on lots of random, unrelated tasks rather than a structured progression of work results in significantly less work actually getting done.
Once again, this gets to the process of structuring work.
It makes sense. In the same way that a computer slows down when a cache miss forces it to fetch from slower memory, the human brain expends a tremendous amount of energy switching between tasks. It's particularly damaging to the flow state that drives maximal productivity.
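A toy model of that switch tax, with entirely made-up numbers (the half-hour ramp-up cost and the task lists are hypothetical): each switch charges a fixed re-loading cost before productive work resumes, so the same workday yields far less output when it's fragmented.

```python
# Toy illustration only: the cost figure is invented, and real
# switching costs vary widely. The point is the shape, not the numbers.

SWITCH_COST = 0.5   # hypothetical hours lost re-loading context per switch

def productive_hours(hours: float, tasks: list) -> float:
    """Hours of productive work left after paying the switch tax."""
    switches = max(len(tasks) - 1, 0)
    return max(hours - switches * SWITCH_COST, 0.0)

focused = productive_hours(8, ["feature A"])
scattered = productive_hours(8, ["feature A", "bug", "demo", "feature B",
                                 "hotfix", "spike", "report", "feature C"])
print(focused, scattered)  # 8.0 4.5 -- same day, far less output
```

The linear cost here is generous; the essay's point is that the real loss compounds, because each interruption also breaks the flow state that the uninterrupted hours depend on.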
The idea of a "two week agile sprint" is also deceptive. It leads to the belief that there's a clean slate every two weeks. Basically, as a business executive, I should be able to come up with whatever I think is important, and as long as I'm willing to wait until the next sprint, what's the harm?
The reality is that product development does not actually happen in two-week sprints. While development teams may be executing tasks in two-week intervals, weeks of work go into structuring those tasks. Requirements definition, user interviews, design, and similar activities don't disappear because an organization is running "agile" instead of "waterfall." That work still needs to be done, and all of it creates dependencies. Asking teams to switch focus and short-circuit this process dramatically reduces the amount of work that actually gets done.
What have I done about it?
Now, I spend a lot more time at the beginning of the process to ensure that the product team understands the requirements and intent from across the business, and that this understanding is incorporated into the work. Having learned that context switching later in the process is inefficient, I now put more energy into effectively gathering and articulating requirements early on to minimize the need for changes later.
CEOs, what lessons have you learned from your product and engineering leaders? Shoot us a line at email@example.com