Thoughts on harmful design

Last week, I was invited to be part of an ICO + CMA workshop on harmful / deceptive design, and gave a position statement for a panel with Sarah Gold and Google’s Abigail Gray. Here’s what I said, lightly edited:

The cause of ethics in tech has reached a difficult moment. There’s a backlash against the techlash. We’re told tech ethics, sustainability, and social responsibility are the enemies, preventing humankind from reaching ‘a far superior way of living and being’. This has coincided – it is just a coincidence, right? – with the tech crash, which has eroded the worker power that has driven the tech ethics movement. Meanwhile, an AI landrush is incentivising companies to cut ethical corners in favour of grabbing market opportunities. So there’s good cause for pessimism.

Harmful patterns are common because they’re the exact outcomes the system rewards. While we talk about harmful design, design culture isn’t really the problem: designers tend to be user-focused, empathetic people who typically try to do the right thing. The problem is metrics-driven product management; it’s growth teams given carte blanche to see users as faceless masses to be manipulated; it’s the twin altars of profit and scale; it’s the idea that externalities – that harms themselves – are someone else’s problem, something businesses needn’t worry about.

So these are entrenched problems, which is why progress is so hard. Nevertheless, we are making progress. The ICO/CMA joint paper is a landmark and, I think, a warning shot. Academics have done a good job taxonomising and highlighting deceptive patterns. And deceptive design is now a recognised topic in industry, the subject of conference talks, books, and the like.

But harmful and deceptive practices are still prevalent, and I think fighting them will only get harder in the AI era. We need more approaches at more levels. There’s still a role for promoting ethical practice inside companies despite the headwinds, to corral the support of people who are motivated to make tech more responsible. That’s where I come in. But we also need activists and political theorists who can discuss the structures and business models that would better promote ethical practice. We need regulators to enforce against bad practice, and lawmakers who can protect users as new harms emerge. We need academics who can investigate these practices and offer new ways of thinking about them. We also need dialogue with the public, particularly vulnerable people most at risk from the harms of technology. In short, we have a long way to go. That’s where you come in.

Cennydd Bowles

Designer and futurist.

http://cennydd.com