All These Worlds Are Yours

Transcript of my keynote talk for Mind The Product, 19 Nov 2020. Rerecorded HD video above (37 minutes).

1 · utopian visions

If you’ll indulge me, I’d like to begin with some shameless nostalgia. Let’s step back a decade, to the heyday of techno-utopia.

Back in 2010, the mobile revolution had reached full pace. The whole world, and all its information, were just a tap away. Social media meant we could connect with people in ways we’d never dreamt of: whoever you were, you could find thousands of like-minded people across the globe, and they were all in your pocket every day. People flocked to Twitter, reconnected with old friends on Facebook. Analysts and tech prophets of the age wrote breathless thinkpieces about our decentralised, networked age, and promised that connected tech would transform the power dynamics of modern life; that obsolete hierarchies of society were being eroded as we spoke, and would collapse around us very soon.

And sure enough, the statues really did start to topple. The Arab Spring was hailed, at least in the West, as a triumph of the connected age: not only did smartphones help protesters to share information and mobilise quickly, but this happened in – let’s be honest – places the West has never considered technically advanced: Egypt, Tunisia, Libya, the Middle East… What a victory for progress, we thought. What a brave affirmation of liberal democracy, enabled and amplified by technology!

Still, there were a few dissenting voices, or at least a few that were heard. Evgeny Morozov criticised the tech industry’s solutionism – its habit of seeing tech as the answer to any possible problem. Nicholas Carr asked whether technology was affecting how we think and remember. And growing numbers of users – mostly women or members of underrepresented minorities – complained they didn’t feel safe on these services we were otherwise so in love with.

But the industry wasn’t ready to listen. We were too enamoured with our successes and our maturing practices. Every recruiter asked candidates whether they wanted to change the world; every pitch deck talked about ‘democratising’ the technology of their choice.

We knew full well technology can have deep social impacts. We just assumed those impacts would always be positive.

Today, things look very different. The pace of innovation continues, but the utopian narrative is gone. You could argue the press has played a role in that; perhaps they saw an opportunity to land a few jabs at an industry that’s eroded a lot of their power. But mostly the tech industry only has itself to blame. We’ve shown ourselves to be largely undeserving of the power we hold and the trust we demand.

Over the last few years, we’ve served up repeated ethical missteps. Microsoft’s racist chatbot. Amazon’s surveillance doorbells that pipe data to local police forces. The semi-autonomous Uber that killed a pedestrian after classifying her as a bicycle. Twitter’s repeated failings on abuse that allowed the flourishing of Gamergate, a harassment campaign that wrote the playbook for the emerging alt-right. YouTube’s blind pursuit of engagement metrics that led millions down a rabbit hole of radicalisation. Facebook’s emotional contagion study, in which they manipulated the emotional state of 689,000 users without consent.

The promises of decentralisation never came true either. The independent web is dead in the water: instead, a few major players have soaked up all the power, thanks to the dynamics of Metcalfe’s law and continued under-regulation. The first four public companies ever to reach a $1 trillion valuation were, in order, Apple, Amazon, Microsoft, and Alphabet. Facebook’s getting close. Today, 8 of the 10 largest companies in the world by market cap are tech firms. In 2010, just 2 were.

It seems we’ve drifted some way from the promised course. Our technologies have had severe downsides as well as benefits; the decentralised dreams we were sold have somehow given way to centralised authority and power.

2 · slumping reputation

For some time, though, it wasn’t actually clear whether the public cared. Perhaps they were happy dealing with a smaller number of players; perhaps they viewed these ethical scandals as irrelevant compared to the power and convenience they got from their devices?

Well, it’s no longer unclear.

[Slide: Pew Research chart of US public attitudes toward the tech sector]

Here’s some data from Pew Research on public attitudes toward the tech sector. It’s worth noting just how high the bar has been historically. The public has had very positive views of the industry for many years: there’s been a sort of halo effect surrounding the tech sector. But in the last few years there’s been a strong shift, a growing feeling that things are sliding downhill.

An interesting feature of this shift is that we see this sentiment across both political parties. We’ve heard a lot recently, particularly in the US, about how social media is allegedly biased against conservative viewpoints, or how YouTube’s algorithms are sympathetic to the far-right. But it seems hostility to the industry is now coming from both sides: perhaps this isn’t the partisan issue we’re told it is.

A common theme behind this trend is that people are mostly concerned about technology’s effects on the fabric of society, rather than on individuals.

[Slide: doteveryone chart of perceived personal vs societal impacts of the internet]

This study from doteveryone captures the trend: the British public, in this case, feel the internet has been a good thing for them as individuals, but say the picture’s murkier when they’re asked about the impacts on collective wellbeing.

At the heart of this eroding confidence is an alarming lack of trust. In the same study, doteveryone found just 19% of the British public think tech companies design with their best interests in mind. Nineteen percent!

Frankly, this is an appalling finding, and one that should humble us all. But it also suggests a profound dissonance in how the public approaches technology. People are still buying technology, after all: the tech sector is doing well despite the Covid crash, and stocks are up significantly. The public clearly still finds technology useful and beneficial, but the data suggests people also feel disempowered, resigned to being exploited by their devices. It’s as if the general public loves technology despite our best efforts.

We’ve all witnessed this anecdotal distrust at first hand. We all have a friend who’s convinced that Facebook is listening through their phone, that apps are tracking their every move. And we all see the learned helplessness around us: there’s nothing I can do about it, so why fight it?

The other way this bubbles to the surface is through the dystopian media that’s sprung up around the topic. In particular, I’d point to two Netflix productions. Black Mirror is one, of course. It’s captured the public imagination through an almost universally grim depiction of technologised futures: as a collective work of dystopian design fiction it does its job admirably.

And then there’s The Social Dilemma, the recent documentary featuring Tristan Harris and other contrite Silicon Valley techies. It’s fair to say it’s not been well received in the tech ethics community: to be candid, the film’s guilty of the same manipulative hubris it accuses the industry of. But the fact a documentary like this got made – and has been widely successful – suggests the public’s starting to see technology as a threat, not just a saviour.

3 · dark futures

The risk is, of course, that things get even worse. The decade ahead of us could well unleash deeper technological dangers. Certainly we’ll see disinformation and conspiracy playing a deeper role in social media, particularly with the advent of synthetic media – deepfake computer-generated audio and video, in other words – that blur the line even more between what seems real and what is real.

Right now we think of facial recognition mostly as a tool for personal identification – unlocking our phones, focusing cameras, tagging friends on nights out. But facial recognition is already wriggling beyond this personal locus of control. It’s going to colonise whole cities, and in turn pose quite serious threats to human rights.

Once you can identify people at distance and without consent, you can also map out their friendships, and assemble a ‘hypermap’ of not just their present movements, but also their past actions, from video footage recorded maybe years ago. It’s a short enough step from there to an automated law enforcement dragnet: a list of anyone who fits a certain description in a certain time and place, issued to anyone with a badge and an algorithm.

There are already movements underway to ban police and governments from using facial recognition on the general public: these might be successful in some cities or countries, but that battle will have to be won over and over again with each new terrorist attack. And authoritarian regimes won’t show the same sort of restraint that liberal states might.

As more companies and states put faith in artificial intelligence, we’ll also see more algorithmic decisions. Although as a community we’re starting to wise up to the dangers of algorithmic bias, it’s still likely that the people commissioning these systems will see them as objective, neutral, infallible tellers of truth. Even if that were true – which, of course, it’s not – citizens will rightly start to demand that these systems explain the decisions they take. That’s tough luck for anyone relying on a deep learning system, which is computationally and mathematically opaque by design. I’m not sure we’ll be happy to sacrifice that power to satisfy what might seem like a pedantic request. But I’d argue perhaps we should: surely an important right within a democracy is to know why decisions are taken about you?

My biggest worry is when algorithmic decisions creep into military scenarios. Autonomous weapons systems are enormously appealing in theory: untiring, replaceable, scalable at low marginal cost. They could also cause carnage in ways obvious to anyone who’s watched a science fiction film since, what… 1968? There are some attempts to ban autonomous weapons too; the countries dragging their heels are pretty much the countries you’d expect.

2020 was, at last, the year the 21st century lived up to its threats. It was the first year that didn’t feel like an afterbirth of the 19-somethings; a year in which historic fires burned, racial tensions ignited, and an all-too-predictable and -predicted pandemic exposed just how ready some governments are to sacrifice their citizens for the good of the markets.

The coming decade might be worse still. I appreciate we’re all feeling temporarily buoyed by welcome uplifts at the tail-end of a dreadful year – vaccines around the corner, a disastrous head of state facing imminent removal – but the fundamental rot hasn’t been addressed. Deep inequality is still with us; automation still threatens to uproot our economies and livelihoods; vast climate disruption is now guaranteed: the only unknown is how severe that disruption will be.

But let’s take a breather: I don’t want to collapse into dismay. Let’s come back to the issues we control: the fate of the technology industry. How did we get here? What went wrong?

4 · responsibility as blame

I’ve been in the field of ethical and responsible technology for maybe four or five years now, after fifteen as a working designer in Silicon Valley and UK tech firms. If I may, I’d like to share one of the patterns I’m most confident about, having seen it repeatedly during that time.

Product managers are the primary cause of ethical harm in the tech industry.

It’s a blunt claim, and perhaps not a popular one to make at a huge product conference. I feel I need to offer some caveats, or partial excuses. Maybe that old classic – some of my best friends are product managers. I’m even married to one. But there’s a more important point: this pattern is not intentional.

I’d be lying if I said I’ve never met a PM who relishes acting unethically: sadly, I have met one or two. But the vast majority of PMs – the ones who don’t demonstrate borderline sociopathy – mean well, and want to do well. Many of them have been reliable, valuable partners of mine in flourishing teams. Many of them care deeply about ethics and responsibility and want to take these issues seriously.

But I still see teams taking irresponsible decisions with damaging consequences. I think it happens because these consequences are unfortunate by-products of the things the product community values, the way PMs take decisions, and the skewed loyalties I think this field has adopted.

5 · empirical ideologies

Let’s talk first about Lean. I remember shortly after Eric Ries’s book, The Lean Startup, came out: every company I spoke with thought they were unique in adopting Lean methods. Now everyone does. Lean isn’t just a set of methods any more: it’s become an ideology. And the problem with ideologies is they’re pretty hard to shift.

One of the central precepts behind Lean Startup is that we now live in a state of such constant flux and extreme uncertainty that prediction is an unreliable guide.

‘As the world becomes more uncertain, it gets harder and harder to predict the future. The old management methods are not up to the task.’ —Eric Ries.

Instead, we should put our faith in empirical methods: create a hypothesis, then build, measure, learn, build, measure, learn. It’s all about validating your assumptions through a tight, accelerated feedback loop.

Makes sense. As a way to reduce internal waste and to stagger your way to product-market fit, I can see the appeal. But I also think there’s a major flaw in this way of thinking: it leaves no space, no opportunity to anticipate the damage our decisions might cause. It abandons the idea of considering the potential unintended consequences of our work and mitigating any harms.

It brings to mind that phrase that’s come to haunt our industry: move fast and break things. Breaking things is fine if you’re only breaking a filter UI. It’s not fine if you’re breaking relationships, communities, democracy. These pieces do not fit back together again in the point release. This idea of validated learning and incremental shipping has caused teams to casually pump ethical harm into the world, to only care about wider social impacts when they have a post-launch effect on metrics.

6 · overquantification

Believers in Lean ideologies also tend to have a strong bias toward metrics, and the belief that the only things that matter are things that can be measured. Numbers are valuable advisers but tyrannical masters. This leads to a common PM illness of overquantification, a disease that’s particularly infectious in data-driven companies.

Overquantification is a narrow, blinkered view of the world, and again one that makes ethical mistakes more likely. Ethical impacts are hard to measure: they’re all about very human and social qualities like fairness, justice, or happiness. These things don’t yield easily to numerical analysis. That means they tend to fall outside the interests of overquantified, data-driven companies.

A lot of people think Lean offers us a robust scientific method for finding product-market fit. I’d say it’s more a pseudoscience, but ok. It makes sense, then, that this idea of validated learning lends itself very well to experimentation: in fact, Lean Startup says that all products are themselves experiments. So a lot of Lean adherents rely heavily on A/B and multivariate testing as a way to tighten that feedback loop.

Product experiments can be a useful input to the design process; they can help you learn more about which approaches are successful, they can help you optimise conversion rates, and all that. But I’ve also witnessed companies where the framing of experimentation shifts. Instead of A/B and multivariate tests providing data points for validated learning, they start to become tools of behaviour manipulation. Without realising it, teams start talking about users as experimental subjects, and experiments become about finding the best way to nudge or cajole users to behave in ways that make more money.

If you’re in this state – when you start thinking of users as masses, as aggregate red lines creeping up Tableau dashboards – you’re already beyond the ethical line. When you start thinking of people as not ends in their own right, but as means for you to achieve your own goals, you’re already in the jaws of unethical practice.

7 · business drift

I think we can trace the root of this tendency to a shift in the priorities of Product teams.

[Slide: Venn diagram of UX, tech, and business, with product management at the intersection]

I’m sure most of us are familiar with this Venn diagram: I knew of it long ago; it was only very recently I found out it was created by Martin Eriksson himself.

This diagram shows product managers sitting at the intersection of UX, tech, and business. I like it as a framing. It’s a hell of a lot better, for example, than that arrogant trope about PMs being the CEO of a product. What I’ve observed, though, is that this isn’t really what happens. Or perhaps it once used to, but there’s been a drift.

[Slide: the same Venn diagram, with product management drifted into the business circle]

Far too many product managers have drifted into the lower circle. They’ve become metrics-chasers, business-optimisers, wringing every last drop of value out of customers and losing sight of this more balanced worldview.

I can understand this drift. Being on the business’s side is a comfortable place to be. You’ll always feel you have air cover, always feel supported by the higher-ups, always feel safe in your role. And you’ll also be biasing your team to cut ethical corners to hit their OKRs.

8 · user-centricity causes externalities

One more root cause to mention, and this one isn’t just about product managers, but designers too. To realise lasting business value, we’re taught we have to focus with laser precision on the needs of users.

This, too, has to change. We’re starting to learn, later than we should have, that user-centred design doesn’t really work in the twenty-first century. Or at least, it has significant blindspots.

The problem with focusing on users is our work doesn’t just affect users. The biggest advantage digital businesses have is scale: they can grow to serve huge numbers of customers at very low marginal cost. It’s not that much more expensive to run a search engine with 1 billion users than 1 million users. Create a social platform that catches fire and you might find yourself with 100 million users in a matter of a few months.

We’re now talking about global-scale, human-scale impact. Technologies of this scale don’t just affect users; they also affect non-users, groups, and communities. If you live next door to an Airbnb, your life changes, likely for the worse. Your new neighbours won’t care so much about the wellbeing of your community; they’ll be more likely to spend their money in the tourist traps than in the local small businesses; and, of course, their presence pushes up rents throughout the neighbourhood.

From a UX and product point of view, Airbnb is a fantastic service. It’s a classic two-sided platform that connects user groups for mutual gain. But all the costs, the harms, the externalities fall on people who haven’t used Airbnb at all: neighbours, local businesses, taxpayers… User-centricity has failed these people. Product-market fit has caused them harm.

[Slide: large-scale technologies affect social goods, not just users]

Large-scale technologies don’t just affect groups of people. They also affect social goods: in other words, concepts we think are valuable in society. There’s been a lot of talk about how Facebook, for example, has torn the fabric of democracy. Some sociologists and psychologists say Instagram filters could damage young people’s self-image. These values simply aren’t accounted for in user-centred thinking: we see them as abstract concepts, unquantifiable, out of scope for tangible product work.

And then there’s non-human life, which our current economic models see just as a resource awaiting exploitation, as latent value ready for harvest. Humans have routinely exploited animals in the name of progress. Think of poor Laika, sent to die in orbit, or the stray animals Thomas Edison electrocuted to discredit his rivals’ alternating current.

Alongside this, there’s the very health of the planet. The news on climate crisis is so terrifyingly bad, so abject, that it’s immoral to continue to build businesses and to design services that overlook the importance of our shared commons. Climate is the moral issue of our century: there’s no such thing as minimum viable icecaps.

So even if product managers position themselves at the heart of UX, tech, and business, they may well be missing their moral duties to this broader set of stakeholders, to non-users, to groups and communities, to social structures, nonhuman life, and our planet itself.

9 · it doesn’t have to be this way

I’ve been talking about some dark futures for technology and for our world. The good news is that it doesn’t have to be this way. One of the things any good futurist will tell you is that the future is plural. It’s not a single road ahead: it’s a network of potential paths. Some are paved, some muddy; some are steep, some are downhill. But we get to choose which route we take.

Sometimes I’m sharply critical of our industry, but please don’t misunderstand me. I truly believe technology can improve our world, can improve the lot of our species. Technology can help bring about better worlds to come. If I didn’t believe that, I wouldn’t still be doing this work.

What it will take, however, is for us to reevaluate our impacts along new axes: to actively seek out a more ethical, more responsible course.

In retrospect, we’ll look back on 2020 as a pivotal year. I’m sceptical of some of the grand narratives people offer about the post-Covid world, but I do think it’s true that the deck of possible futures has been thoroughly shuffled this year. We now have the opportunity to choose new directions. We might not get a reset of this magnitude again.

And I want you – the product community – to lead this charge. I expect it’s hard to appreciate this from the inside, but you all hold an immense amount of power within technology companies, and within the world. You are the professionals whose decisions will shape our companies and products; your decisions will change how billions of future users interact with technologies and, by extension, with each other. I want you to exercise that power with thought and compassion.

If this is going to happen, you need to rethink what you value; what drives your processes and decisions. That’s, by necessity, a long journey. It’ll involve lots of learning, plenty of failure. For the rest of my time here, I’d like to suggest some first steps.

[Slide: a rough diagram of a responsible innovation process]

Here’s my fairly crude attempt to illustrate a responsible innovation process; let’s step through what it might mean for you.

10 · carving out space for ethical discussion

Perhaps most importantly, teams need to make space in their processes for ethical deliberations, to examine potential negative impacts, and look for ways to fix them before they happen. It doesn’t really matter when this takes place. You could adapt an existing ritual: a design critique, a sprint demo, a retro, a project pitch. Or maybe you ring-fence time some other way. But you must try.

When you do this, be prepared: someone will tell you it’s a waste of time. People use technologies in unpredictable ways, they’ll say: we can’t possibly foresee all of those. They might even quote the ‘law of unintended consequences’ at you, which says pretty much the same thing: you’ll always miss something.

Don’t listen. It is true that you’ll never foresee all the consequences of your decisions. But anticipating and mitigating even a few of those impacts is far better than doing nothing. And moral imagination, as we’d call it, isn’t a gift that only a few lucky souls have: it’s more like a muscle. It gets stronger with exercise. Maybe you only anticipate 30% of the potential harms this time around. Next time it might be 40%. 50%.

The easiest way to get started on this is a simple risk-mapping exercise: a set of prompt categories and questions you can run through to identify potential trouble spots. I recommend using the Ethical Explorer toolkit as a starting point; it’s been made by Omidyar Network for exactly this use case. It’s free and doesn’t need any special training to use.

11 · getting out of the building

Eventually you’ll realise this isn’t enough, of course. Trying to conjure up potential unintended impacts from inside a meeting room or a Zoom call is worth doing, but is always going to be limited. Soon enough you’ll want to get out of the building.

You know this idea from customer development, of course. You know that talking with real people opens your eyes to new perspectives like nothing else can – that it helps you move from empathy to insight. So talk to your researchers about ways to better understand ethical impacts. I promise they’ll be delighted to help you: anything that’s not another usability test will make their eyes light up.

But remember here, it’s not just about individual users: it’s worth broadening your research to hear from other stakeholders. So look for ways to give voice to members of communities and groups, particularly if they’re usually underheard in this space. Reach out to activists and advocates. Listen to their experiences, understand their fears; work with them to prototype new approaches that reduce the risks they foresee.

12 · learning about ethics

Spotting potential harms in advance is a great start, but you then need to assess and evaluate them. Which are the problems that really matter? What do we consider in scope, and what’s outside our control? How do we weigh up competing benefits and harms?

There’s plenty of existing work to help us answer these questions. Plenty of people in the tech industry share an infuriating habit: they believe they’re the first brave explorers on any new shore. But this isn’t a topic to invent from scratch. There’s so much we can, and should, learn from the people already in this space.

When I moved into the field of ethical tech, around five years ago, I was stunned by the depth and quality of work that was already going on behind the industry’s back. Ethics isn’t about dusty tomes and dead Greeks! It’s a vital, living topic, full of artists, writers, philosophers, and critics, all exploring the most important questions facing us today: How should we live? What is the right way to act?

There’s a lot happening in this movement, and a lot to learn. Some people think ethics is fuzzy, subjective, difficult. If you take the time to deepen your knowledge, you’ll realise this isn’t really true. There’s strong groundwork that’s already been laid: there are robust ways to work through ethical problems. You can use these to take defensible decisions that make your team proud.

13 · committing to action

You then have to commit to action. There’s a sadly common trend these days of ethics-washing – companies going through performative public steps to present a responsible image but unwilling to make the changes that really matter. An ethical company has to be willing to take decisions in new ways, to act according to different priorities, if it’s going to live up to these promises.

Maybe it’s something simple, like committing as a team to never ship another dark pattern. Maybe it’s ensuring your MVP includes basic user safety features, so you reduce the risk that vulnerable people are harassed or attacked.

I particularly want to mention product experimentation here. It astounds me how ethically lax our industry is when it comes to experimentation. Experimentation and A/B testing are participant research! In my view, this means you have a moral obligation to get informed consent for it. So tell customers that A/B tests are going on. Let people find out which buckets they’re in and, crucially, offer them a chance to opt out. Be sure that you remove children from experimental buckets.
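To make that concrete, here’s a minimal sketch of what consent-aware bucketing could look like. It’s an illustration, not a prescription: the names (ExperimentUser, assignVariant, and so on) are hypothetical, and any real implementation would depend on your own consent records and age-verification approach.

```typescript
// A minimal sketch of consent-aware experiment bucketing.
// Hypothetical names throughout; the point is the policy, not the API.

interface ExperimentUser {
  id: string;
  age?: number;                    // may be unknown
  consentedToResearch: boolean;    // explicit, informed opt-in
  optedOutOfExperiments: boolean;  // a user-controlled setting
}

type Assignment =
  | { enrolled: false; variant: "control"; reason: string }
  | { enrolled: true; variant: string };

function assignVariant(user: ExperimentUser, variants: string[]): Assignment {
  // Children are never enrolled; if age is unknown, err on the side of caution.
  if (user.age === undefined || user.age < 18) {
    return { enrolled: false, variant: "control", reason: "age unknown or under 18" };
  }
  // No informed consent, or an explicit opt-out: serve the default experience.
  if (!user.consentedToResearch || user.optedOutOfExperiments) {
    return { enrolled: false, variant: "control", reason: "no consent or opted out" };
  }
  // Deterministic bucketing, so the user can be told which variant they're in.
  const bucket = hashToBucket(user.id, variants.length);
  return { enrolled: true, variant: variants[bucket] };
}

// Simple stable hash, for illustration only.
function hashToBucket(id: string, buckets: number): number {
  let h = 0;
  for (const ch of id) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h % buckets;
}
```

Only enrolled users would be counted in your analysis; everyone else simply gets the control experience. Deterministic bucketing by user ID also means you can show someone which variant they’re in, and honour an opt-out without breaking your results.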

14 · positive opportunities

But this step isn’t just about mitigating harms. One welcome trend I’ve seen is that the conversation about ethics is moving beyond risk. Companies are starting to realise that responsible practices are also a commercial advantage.

Jonathan Haidt, who’s a professor of business ethics at NYU, has found that companies with a positive ethical reputation can command higher prices, pay less for capital, and land better talent.

[Infographic: research findings on the commercial benefits of an ethical reputation]

There’s plenty of evidence that customers want companies to show strong values. Salesforce Research found 86% of US consumers are more loyal to companies that demonstrate good ethics. 75% would consider not doing business with companies that don’t.

So this stage also means capitalising on the opportunities you spot in your anticipation work, which requires thinking differently about ethics.

I think too many people see ethics as a negative force: obstructive, abstract, something that can be a drag on innovation. I see it differently. Yes, ethical investment will help you avoid painful mistakes and dodge some risks. But ethics can be a seed of innovation, not just a constraint.

The analogy I keep returning to is a trellis. Your commitment to ethics builds a frame around which your products can grow. They’ll take on the shape of your values. Your thoughtfulness, your compassion, and your honesty will reveal themselves through the details of your products.

And that’s a powerful advantage. It’ll help you stand out in a wildly crowded marketplace. It can help you make better decisions, and build trust that keeps customers loyal for life.

As this way of thinking becomes more of a habit, you need to support its development. A one-off process isn’t much good to anyone: you need to build your team’s capacity, processes, and skills, so responsible innovation becomes an ethos, not just a checklist.

So you need to spend some time creating what I call ethical infrastructure. This might include publishing a set of responsible guidelines for your team or your company. Maybe it’s including ethical behaviours in career ladders. Perhaps in some cases you’ll want to create some ethical oversight – a committee, a team, or at least a documented process for working through tough ethical calls.

It’s easy to go too quickly with this stuff, and build infrastructure you don’t yet need. Keep it light. Match the infrastructure to your need, to your team’s maturity with these issues. If you keep your eyes open, it’ll soon become clear what support you need.

15 · close

These are still pretty early days for the responsible tech movement. We may be stumbling around in the gloom, but we’re starting to find our way around. Personally, I find it thrilling – and challenging – to be in this nascent space. Something exciting is building: maybe there’s hope for an era of responsible innovation to come.

Because we’re starting to realise we won’t survive the 21st century with the methods of the 20th. We’re beginning to understand how user-centricity has blinded us to our wider responsibilities; that the externalities of our work are actually liabilities. Businesses are learning they have to move past narrow profit-centred definitions of success, and actively embrace their broader social roles.

After all, we’re human beings, not just employees or directors. Our loyalties must be to the world, not just our OKRs.

Wherever it is we’re headed, we’ll need support, courage, and responsibility. My product friends, you have a power to influence our futures that very few other professions have. All these worlds are yours: you just have to choose which worlds you want.

Cennydd Bowles

Designer and futurist.

http://cennydd.com