When OKRs work well (and don’t)

In 2009, I was challenged to do something seemingly simple: recreate the scorecards of measures and metrics my colleagues at Apple were using. My colleagues used scorecards to track and monitor the impact of their decisions on key business measures. As it turned out, they wanted to know how my decisions also affected the outcomes they were reporting on. While I had been involved with big product decisions all along, I wasn't very good at articulating how my decisions correlated to those outcomes. So, challenge accepted.

My goal was to track and monitor the impact of my decisions on key business measures. Rather than starting from scratch, I began with the scorecard and measures my colleagues were already using and looked for other approaches designers were using to do this sort of work. I remixed a couple of different models and landed on a solution that worked pretty well!

And then, I left Apple.

What I found was that none of the other companies I've worked for since have tracked and monitored their decisions in the same way or with the same rigor. When I tried introducing my scorecard approach, it failed miserably because none of those companies were talking about scorecards; they were talking about OKRs. And they weren't just talking: they were genuinely excited about OKRs and thought they'd solve everything.

And yet… when I took a deeper look, there wasn't much there beyond the talk. OKRs showed up a lot in keynote presentations and town-hall events, but very few leaders and teams were really trying to put them in place with rigor. Instead, the high-level OKR talk was good enough, and when I came flying in on my high horse with measuring and scorecards, it freaked a whole bunch of people out.

And so, I had to start all over again.


Super quick summary of OKRs by me

I'm guessing that if you're reading this, you've heard about OKRs. There's a lot of information about OKRs out there, so rather than give a detailed summary of what they are, here's my take:

Objectives and Key Results (OKRs) is a goal-setting methodology. It's a structure that companies and teams use to define what they want to happen (the objective) and how they'll know it's happening (the key results). More simply put, it's how a lot of companies and teams try to prioritize what they're working on.

While the narrative around OKRs is quite familiar, do you really know the origin story beyond the narrative?


The part of the Google + OKR origin story that is often left out

FYI: I'm sharing a whole bunch of links in this section because I'm not focused on rewriting what's already been written.

If you don't know the history of OKRs and how they got famous, here's a brief summary. If that summary is just too much, here's the story as five bullet points.

OKRs haven't just worked at Google; they've worked at quite a few other startups as well. Seeing that success and reading those success stories has led a lot of other companies to adopt the framework. But lots of other companies and teams aren't seeing the same success with OKRs that Google has had.

Rather than focus on why they don't work, I want to talk about some of the factors that helped OKRs work at Google in 1999. These are factors that aren't included in the widely shared story:

  • At the time John Doerr introduced OKRs to Google, Google's leadership said yes because they didn't have any other framework to use.

  • Google, at its core, started as a company that was really comfortable with measuring and tracking data.

  • Google was very small (only like 10-12 people), so OKRs were there basically from the beginning.

  • Google separated OKRs from employee performance measurement and incentives. THIS IS A BIG DEAL.

As a reader, you may be wondering why I'm sharing any of this and why you should care. Here's my belief: Google succeeded with OKRs not because of OKRs, but because of the circumstances in which they were introduced. The factors above were just as critical to their success as the framework itself.

Over the last 20+ years, many other companies have implemented (or tried to implement) OKRs with varying degrees of success. From my own experience, and from working with hundreds of design leaders over the past three years through Second Wave Dive, some clear patterns have emerged around what is and isn't working.

My biased insights: when OKRs work well (and don’t)

To be clear, the insights below are based entirely on my experiences and observations. Others may, and do, experience things differently.

In my opinion, OKRs tend to work well for:

  • Startups laser-focused on one product

  • Organizations that do not structure themselves as multiple business units (Apple, for example)

  • Companies and teams with a deep-rooted culture of measuring, tracking, and monitoring decisions. Btw, if you notice designers who talk about measuring, they are often surrounded by other teams that measure well. This is really luck of the draw, not down to a framework.

In my opinion, OKRs do not work well when:

  • A company has a culture of people in power trusting their gut

  • There are multiple business units with separate budgets, strategies, outcomes, etc. Lots of enterprise companies fall into this category.

  • A company has more than one product

  • There is no strategy in place

  • The company ties OKRs to employee performance, bonuses, promotions, etc. This is just catastrophically bad and turns everything into a competition.

In the real world, there is just one person who can really drive the kind of changes needed to address all of the above: the CEO. I'm guessing the majority of you are not CEOs, and you deserve some practical guidance on how you might begin addressing some of these gaps.

