Why don't we take risks anymore?
People often overestimate the gravity of the decisions they make. We should embrace making risky bets—it'll make us all happier.
I was speaking to an acquaintance the other day about his master’s thesis, and he was describing the challenges of getting approval to go ahead with his research. He wants to set up an experiment to see whether an existing, non-invasive medical device could have applications in a different population of patients than the one the device was originally designed for.
As far as anyone can reasonably tell, there’s no possible way for the device to harm someone. It sits on the skin and can be removed at any time. On the other hand, if his hypothesis is correct, using it could significantly improve the welfare of this new population of patients and potentially even save lives.
To get clearance to start the experiment, he’s written many dozens of meticulously crafted pages for the proposal and has gone through multiple rounds of revisions with his supervisor—a process that’s taken months so far with more months to go.
I’m not wholly ignorant of the tales of bureaucracy within academia, but, coming from the world of fast-scaling startups, the sluggish pace of getting approval to run the experiment seemed crazy to me. Lives could be seriously improved, and there’s no real risk in trying it out to see if it works.
This isn’t isolated to academia, either. In my hometown of Vancouver, a major housing developer just filed for insolvency, stating that the local government’s slow and expensive approval processes for developing (desperately needed) housing were a major contributing factor in their demise. Due to this slow permitting process, over 2000 potential homes are simply not being built.
Noah Smith recently wrote a blog post touching on this topic within the context of American infrastructure and R&D. For example, TSMC, a major chip manufacturer, said that there was around a 5x greater cost to building a chip fab in Arizona compared to Taiwan, attributing at least some of that to onerous government processes.
Outside of the context of government, the Harvard Business Review published a 2017 survey of 7000 readers which found that 28% of respondents’ time at work was spent on bureaucratic chores with little or no value, that they spent 42% of their time dealing with internal issues, and that only 20% of respondents said unconventional ideas were greeted with interest or enthusiasm in their organization. The trajectory of these companies was headed in the wrong direction, too: nearly two-thirds of respondents said their organization had become more bureaucratic over the past few years, painting a picture that the natural evolution of a company tends towards more bureaucracy, not less.
There seems to be a general consensus that overbearing organizational processes slow down progress and frustrate the people working within those systems. Why isn’t this being addressed as a major issue? What’s the solution? More broadly, where is the urgency?
People are afraid of failure and institute processes to avoid failure
I think of bureaucratic processes as speed bumps in an organization. A mistake occurs (a car hits a pedestrian in this dark metaphor), and so, in a well-intentioned effort to avoid making a similar mistake in the future, a process is put into place (a speed bump is installed).
Cars now slow down before they go over the speed bump in the future, and future injury or death is avoided. Mission accomplished!
Two things can go wrong with this:
First, companies see the success of the speed bump at eliminating accidents, and they end up overbuilding processes throughout the organization. They get diminishing returns from the 10th speed bump compared to the first one, and overall traffic slows dramatically. People might even end up driving dangerously down side streets and alleyways to avoid the speed bumps altogether (which might materialize as skunkworks projects in a large company).
Second, the speed bumps affect all forms of traffic. Cars can cause dangerous accidents, but now cyclists, skateboarders, and people riding scooters are also affected by these additional processes, even though they are highly unlikely to hit pedestrians or cause any serious damage.
Within the context of an organization, the accidents being avoided could involve actual physical harm to people, or they could be budgetary, reputational, lost time, or some other harmful outcome.
Creating two lanes: the high-risk traffic and everything else
The reality is, within most organizations, the vast majority of initiatives and decisions are not high-risk. Nobody will die if you don’t choose the right stock photo for a brochure. The company won’t sink if you run an experiment on your website and conversion rates temporarily go down. The last season of Game of Thrones wasn’t ruined because there was a Starbucks cup visible in one of the shots (don’t get me wrong, that season was an abomination, but I don’t blame the Starbucks error for that).
If we treat low-risk work the same as high-risk work, the organization as a whole slows down. Your pace of iteration slows, and you don’t get as many at-bats to get something right.
Organizations should bifurcate any initiative or work into two categories. Paraphrasing from Jeff Bezos:
“Type 1” work is high-risk and should not be rushed into. “Type 2” work is low-risk; it can be iterated upon and decided on quickly. Most work falls into Type 2.
Take a fintech company as an example. Type 1 work would be your payments infrastructure or your authentication systems. You really don’t want to get those wrong and have people’s money going to the wrong place, people being double-charged, being hacked, etc. A big enough error there actually could sink the company and/or harm your customers.
Type 2 work would be pretty much everything else. Your website, your mobile app, your email marketing campaigns. As long as their money is safe, most people will put up with the occasional UI bug, an incoherent pricing scheme, or a new feature that doesn’t seem to fit. On the flip side, by virtue of having made these mistakes, it means you are trying new things rather than remaining static. You are unlocking learnings with each experiment that you can use to better improve your business.
Often, an easy way of deciding whether a task is Type 1 or Type 2 is to ask yourself: is this decision reversible? And if so, how easy is it to reverse?
Organizations will typically err towards treating too many things as Type 1 work. They know that experimentation is key to invention, but they aren’t willing to embrace the string of failures necessary to get there.
The key to not having everything treated as high-risk work is to create a culture of psychological safety
I know, this sounds cheesy, but stay with me.
Organizations are, fundamentally, groups of people. If you’ve hired well, they are smart and hardworking. But by and large, they work for a living; they aren’t independently wealthy. The average person isn’t going to want to risk their job. If they think that proposing or executing a risky initiative is going to put their livelihood in danger, they won’t do it. If they think they’ll lose social standing within the organization, they probably won’t do it.
You need to have people truly believe that taking risky bets on Type 2 work is encouraged. They need to have a sense of psychological safety. (As an aside, a book recommendation: Amy C. Edmondson wrote a fantastic book on this topic, The Fearless Organization.)
Within the context of all the tech layoffs and the labour market not being as tight as it was in 2021, it’s become a bit of a trend over the last year to push back against the cushy conditions of big tech. It’s commonly understood that it’s basically impossible to get fired from your job at these tech juggernauts unless you really, really screw up.
Psychological safety is not about having a low bar for performance.
In fact, your team should trust you to keep the team makeup healthy—nobody likes working with low performers. As Reed Hastings of Netflix famously said, “We’re a team, not a family.” You wouldn’t fire your kid for having bad grades, but you should fire a team member for consistently not delivering.
Instead, what psychological safety does mean is that you won’t face punitive action for taking risks, for questioning ideas coming from superiors, or for speaking out when you see something wrong. Toyota famously has the andon cord, which anyone can pull to stop the production line if they spot something they perceive to be a threat to vehicle quality.
When I worked at Brex, they often spoke of having a culture of transparency where feedback was solicited and celebrated in both directions: from report to manager and from manager to report. As a manager, taking critical feedback from a report and not responding punitively is a way to engender psychological safety.
Within the context of encouraging people to take risks, the playbook should be pretty obvious then. You need to tell people to take risks, celebrate when they try something risky even if it fails, and take risks yourself, publicly owning your own failures.
You need to empower your entire team to skip or remove processes when appropriate
Does your company have a skunkworks division? Do you have a “startup within the company” initiative? These are signs that processes are being overused within the company and that people are taking back alleys to escape the organizational speed bumps that have proliferated over time.
In the short term, introducing these shortcuts can be helpful to cut through a bureaucracy. Fundamentally though, these are symptoms of not enough risk being allowed within the organization as a whole. A startup within a company is essentially a “safe space” for experimentation—you need to expand that safe space as much as possible for the entire organization to benefit.
How to do that at a large company? It has to come from the top. The culture needs a shift, and executives and founders are best equipped to tackle this. As Brian Armstrong of Coinbase said, “The role of the founder is to inject risk into an organization”. You don’t have many levers to individually contribute to specific initiatives as an executive at a larger company, but you can shape culture with your words and the goals and metrics you set forth for the team.
As an example: I read a thread on Hacker News about a C-level exec who came into a company and wanted the team to run more experiments. So they set a simple rule: teams were judged on the number of experiments they ran per month. At first, people gamed the metric and ran nothingburger experiments to hit their numbers, but eventually the organizational muscle of running experiments grew stronger through repetition, and people started running genuinely useful experiments.
If you are an executive or a founder at a company and you are thinking that all this sounds well and good but that you can’t trust people to make decisions about whether something is a Type 1 or a Type 2 risk: you either need to train these team members better or you actually believe they are incompetent and you probably need to fire them. Obviously you can limit the scope of experimentation for certain roles: you don’t necessarily want an intern messing with your core business logic on their first week, but by and large, if you can’t trust your team to make decisions, your organization will be bottlenecked.
Now, do I think the salve for these issues is just slashing headcount, like what is happening at Meta with the great flattening, along with pretty much all the other big tech companies? I think that’s a fairly blunt instrument for communicating a desired cultural shift towards greater efficiency. It’s not necessarily the wrong call, but it seems to be more of a reactionary response to the macro environment, coupled with the creeping bloat most companies flush with cash experience, than a specific push to empower employees to take risks. If anything, it’s likely to make employees even more risk-averse as they fear for their jobs.
I don’t know whether anyone has directly studied the productivity losses associated with overzealous risk aversion, but the best I could find was a study that found that excess layers of management (which are certainly at least somewhat correlated with bureaucratic processes) are costing the U.S. $3 trillion per year. Can a dollar value in a spreadsheet really properly reflect the soul-crushingness of having an idea and not being able to give it a shot? Even if you set aside the cost of all the useless paperwork and meetings, consider the opportunity costs of not starting initiatives or running experiments that could materially impact your business.
Luckily for us, the solution to these problems is simple. It is to treat people more like intelligent human beings and to empower them to make decisions. If that also leads to better outcomes for the economy, I think it’s worth a try.