Cennydd Bowles

Our Future Health tech advisory board

I’ve joined the technology advisory board at Our Future Health, the charity partnering with the NHS (and other stakeholders) to develop new ways to prevent, detect and treat disease.

I’ll be offering them input on ethics and privacy across the innovation process, and I’m looking forward to learning from my fellow advisory board members on a range of topics from ML to security.

Cennydd Bowles

Thoughts on harmful design

Last week, I was invited to be part of an ICO + CMA workshop on harmful / deceptive design, and gave a position statement for a panel with Sarah Gold and Google’s Abigail Gray. Here’s what I said, lightly edited:

The cause of ethics in tech has reached a difficult moment. There’s a backlash against the techlash. We’re told tech ethics, sustainability, and social responsibility are the enemies, preventing humankind from reaching ‘a far superior way of living and being’. This has coincided – it is just a coincidence, right? – with the tech crash, which has eroded the worker power that has driven the tech ethics movement. Meanwhile, an AI landrush is incentivising companies to cut ethical corners in favour of grabbing market opportunities. So there’s good cause for pessimism.

Harmful patterns are common because they’re the exact outcomes the system rewards. While we talk about harmful design, design culture isn’t really the problem: designers tend to be user-focused, empathetic people who typically try to do the right thing. The problem is metrics-driven product management; it’s growth teams given carte blanche to see users as faceless masses to be manipulated; it’s the twin altars of profit and scale; it’s the idea that externalities – that harms themselves – are someone else’s problem, something businesses needn’t worry about.

So these are entrenched problems, which is why progress is so hard. Nevertheless, we are making progress. The ICO/CMA joint paper is a landmark and, I think, a warning shot. Academics have done a good job taxonomising and highlighting deceptive patterns. And deceptive design is now a recognised topic in industry, the subject of conference talks, books, and the like.

But harmful and deceptive practices are still prevalent, and I think fighting them will only get harder in the AI era. We need more approaches at more levels. There’s still a role for promoting ethical practice inside companies despite the headwinds, to corral the support of people who are motivated to make tech more responsible. That’s where I come in. But we also need activists and political theorists who can discuss the structures and business models that would better promote ethical practice. We need regulators to enforce against bad practice, and lawmakers who can protect users as new harms emerge. We need academics who can investigate these practices and offer new ways of thinking about them. We also need dialogue with the public, particularly vulnerable people most at risk from the harms of technology. In short, we have a long way to go. That’s where you come in.

Cennydd Bowles

World Interaction Design Day – London event

I’m helping out the IxDA London crew with an evening discussing ethics and responsibility, part of World Interaction Design Day, next Tue 26 Sep:

"We all want to be more responsible, ethical, and equitable in our design decisions, but it’s often hard to find the time or mutual support to develop these ideas. Please join us for a special World Interaction Design Day where we’ll dedicate an evening to diving into design ethics in practice.

We’ll begin with a discussion of the latest developments in the fast-changing ethical tech and design movements, before moving into three open conversation sessions. In the first, you’ll discuss a contemporary design ethics issue in detail, learning how to understand and argue ethical cases with compelling reasoning. Then, a chance to discuss your own professional ethical challenges with fellow designers in a private, confidential environment. Finally, we reconvene as a group to discuss how interaction, UX, and product designers can push for change in environments that don’t always prioritise ethics and responsibility."

Signups are open now: www.meetup.com/ixda-london/events/296203931

Cennydd Bowles

Taking aim: ICO & CMA on harmful design

Designers and product managers, I urge you to pay attention to this new publication on ‘harmful design’ aka deceptive patterns. It’s a joint position paper by the ICO and CMA, the UK’s privacy and competition regulators respectively.

I wasn’t heavily involved in this work – I had my hands full with the privacy design guidance – and I’m no longer at the ICO. So I have some leeway to give my own (strictly personal) interpretation of this paper in a way the authors and employees can’t.

Have no doubt: this is a warning shot.

Two powerful regulators have joined forces to put industry on notice over deceptive patterns. The language is carefully couched but IMO the implication is clear. This is step one. Step two will be robust. I won’t be surprised to see direct enforcement (i.e. legal action against companies that keep using deceptive patterns) or strict policy stances (essentially, outright prohibitions) in the near-ish future.

It’s rare and difficult for regulators to join forces like this. Two regulators expressing their joint disapproval of the same design patterns: that’s huge. Not one big stick but two.

Here are the five patterns the paper highlights:

  • Harmful nudges

  • Confirmshaming

  • Biased framing

  • Bundled consent

  • Default settings

The paper gives specific, mocked-up examples, and both regulators explain why they’re concerned about each pattern, pointing to UK laws already in effect.

So, my advice: if you work for a technology team in the UK or on a digital product with UK customers, act now. Read the document. Identify whether you’re using these deceptive patterns. If so, remove them now. If you don’t have that authority, show the document (and this post too, if you like) to your most senior product leader and your legal team.

This paper is the regulators cocking the gun. You don’t want the barrel pointing at you.

Cennydd Bowles

Ethics in Design course: edition 2

Announcing new dates for Ethics in Design, a three-week online course that Ariel Guersenzvaig and I ran with Service Design College earlier this year. Cohort two will begin 23 October. Over four 90-minute live sessions, we’ll explore:

  • why and how ethical issues permeate every design and technology decision;

  • how to transform moral hunches into more grounded, robust ways to think about ethics;

  • methods for kick-starting ethical conversations inside your organisation;

  • how to overcome common objections to ethical discussion;

  • how to navigate conflicts between your personal and professional spheres;

  • areas of emerging focus in responsible design.

The course costs $295, and it’s suited to anyone in a design-related role, including product, UX, and UI designers, DesignOps folks, researchers, and managers. Hope to see you there.

Cennydd Bowles

Fulbright Visiting Scholar 2024

Many of you know this already, but at last I’m formally allowed to announce that I’ve been awarded a Fulbright scholarship and will spend the first half of 2024 as a Fulbright Visiting Scholar at Elon University, North Carolina.

It’s one of the most prestigious scholarships in the world, with a rigorous selection process, so I’m delighted to be one of the lucky recipients.

I’ll be researching anticipatory ethics – think ways to foresee & evaluate potential harms of emerging tech – and teaching a postgraduate module on ethics in interactive media. I also expect to visit other US institutions across academia and industry to give guest lectures and help advance discussion of this important topic. Please drop me a line if you might be able to host me for a visit.

Since the Fulbright programme is also focused on cultural exchange, I’ll also be going all-in on college sports fandom, BBQ wars, community pop-up chess nights, making Welsh cakes for confused Americans, etc.

I’m excited about this chance to participate in a scholarship programme that delivers real impact, advancing human knowledge and tackling global challenges, and I’ll be sharing more about my experience as I go.

Cennydd Bowles

Do the benefits of AI outweigh the risks?

Stylised illustration of plant leaves. A section is zoomed in as if part of a computer vision system.

I was kindly invited to a Raspberry Pi Foundation offsite to debate ‘Do the benefits of AI outweigh the risks?’ Here’s the short statement I shared:

Sometimes the role of an ethicist is to ask distinguishing questions. One such question is ‘for whom?’. Do the benefits of AI outweigh the risks? Well, the benefits for whom? The risks to whom? These two questions will probably have very different answers, right? The benefits and burdens of AI, as with almost every other innovation, won’t fall equally.

Technologies are always imprinted with values. This goes for even the crudest objects, things we don’t even think of as technologies. Think of razor wire. Razor wire is a shockingly opinionated object: it argues that someone’s right to private property is so important that we should injure anyone who violates that right.

AI people love talking about the value alignment problem: the idea that if we create a superintelligence we’d better make sure it holds the same things dear that we do, otherwise it might destroy them. But what happens before that? What values can we see imprinted within the AI systems we’re building today? When I look at modern AI, I see plausibility trumping truth. I see speed galloping ahead of safety. I see disruption hailed as inevitable, as destiny. Now, these may not be intentional design decisions but nevertheless, they have real-world impact. And the choice not to engage with the values and ethics of our technology is itself an ethical choice: an affirmation of the status quo, a vote to stay on our current heading.

AI could well be the largest force multiplier we’ve ever made. But we already feel society’s invisible, systemic forces acutely. Some people are elevated and empowered by these forces. Some are crushed. If we keep fostering the same values in technology that we do today, then I think these injustices will only increase. People who lack power today will end up further robbed of their autonomy and dignity. Entire creative classes may also find themselves dragged down by the technological undercurrents. It’s not hard to imagine a world in which the tech giants collect handsome royalties for their AI’s creations, while painters and novelists have to collect the recycling.

But it doesn’t have to be this way. Technology doesn’t hold the reins. We do. If we can subvert the default values of today’s tech sector and instead build AIs that prioritise compassion, justice, and respect, then yes, I think the benefits of AI will far outweigh the risks. How we achieve this within the confines of growth and profit is perhaps another question.

Image by Alan Warburton / © BBC / Better Images of AI / Plant / CC-BY 4.0

Cennydd Bowles

New public course: ‘Ethics in Design’

I’ve spent a lot of time training teams in responsible and ethical innovation, but it’s always been a solo endeavour. So I’m happy to announce something new. I’m partnering with Ariel Guersenzvaig on a new online, public course called Ethics in Design.

The course is split over three weeks, featuring four 90-minute discussion-led sessions. Between sessions you’ll reflect on what we covered, apply it in your work, and read a few short pieces about the ideas we’ll review next. So while we’re not offering the in-depth theory of an academic course, it’ll be a more considered, reflective environment than a typical one-day workshop.

My co-instructor Ariel is a professor of design at Elisava, and author of The Goods of Design, one of the few ethics books I recommend to switched-on practitioners. Ariel has extensive teaching and academic experience, but also has a superb design and UX background himself.

So we’re aiming for the best of both worlds, discussing important real-world design and tech issues while backing up the learning with deep expertise and academic pedigree. We’re also hoping our differing perspectives as instructors will highlight the complexities of ethical design, so you can weigh up and come to your own conclusions.

Here’s a snippet from the description.

‘In this 3-week training, you will learn how to turn your best intentions into grounded, robust methods for acting more ethically and responsibly. Two experienced instructors will guide you beyond moral hunches towards a more profound understanding of ethical design.’

Hosted by the folks at Service Design College, this will be suitable for anyone in a design-related role, including product, UX, and UI designers, DesignOps folks, researchers, and managers.

The course starts on Tue 23 May and sign-ups are open now, starting at $295. It’s probably the only public training I’ll be running for a while, so grab a ticket while you can.

Cennydd Bowles

Privacy in the product design lifecycle

In the whirlwind that was the last fortnight, I never properly shared the big project I shipped at the ICO. Designers, PMs, and engineers: this is for you.

Under GDPR (article 25), a data controller has to consider privacy through their entire product development process – this is called Data Protection by Design and Default. Through kickoff, research, design, development, and launch, you need to be able to prove you’ve done this work. You can’t ignore it and leave your legal or privacy team to make excuses later; companies are now being fined heavily for failing to live up to this requirement. (€265 million in Meta’s case, for example.)

The ICO only wants to fine companies as a last resort. It’s better for everyone if companies comply with the law properly.

So, in collaboration with a ton of ICO colleagues, I wrote and published guidance on Privacy in the product design lifecycle. It’s written directly for designers, PMs, and engineers, stepping through each stage of product development and clarifying what you must, should, and could do at each stage to protect users and help you comply with GDPR. There’s also info about the case for privacy, so you can convince your teammates this isn’t just about legal compliance, but building trust and keeping people and societies safe.

I might share more about writing regulatory guidance later on: it’s rather more complex than you might expect. But if you’re building products and services that handle personal data, I strongly recommend you check the guidance out: Privacy in the product design lifecycle.

Cennydd Bowles

Back into self-employment

Yesterday I wrapped up my time at the ICO. There’ll be time later for proper reflection on the experience, but first: I’m heading back into private consulting and starting to book work in for spring and summer.

You know my angles by now: responsible design and innovation, technology ethics, anticipating potential harms of our work. I’m obviously pretty strong on privacy design too.

I’m open to training, talks (in-house and conference), consulting, and some hands-on product design as schedule permits. My profile’s up to date with my recent work and topics of interest. As we all know, it’s not a wildly fertile environment for niche solo consultants right now, so I’d welcome leads and shares alike. Thanks!

Cennydd Bowles

Announcing ‘Privacy, Seriously’

Logo for ICO event Privacy, Seriously.

Delighted to finally announce ‘Privacy, Seriously’, a free ICO mini-conference for product designers and PMs. It’s on 23 February, running 2–6pm (UK), online. Here’s the blurb:

‘In a changing technology landscape, privacy isn't just about legal compliance: it's about living up to your values through every feature and interaction. Get it right and privacy becomes a powerful differentiator, helping you forge trusted, respectful customer relationships that last for life.

Join us on 23 February for ‘Privacy, Seriously’, part of the ICO’s ongoing series of events for designers and product managers. At this free, online mini-conference, design and product leaders will reveal how they put privacy at the heart of responsible innovation. You’ll learn from the experts and organisations at the cutting edge of technology and regulation, and maybe even catch a glimpse of where the tech sector goes next.’

We’ve got keynotes from Robin Berjon and Eva PenzeyMoog, plus panels on real-world privacy design and deceptive designs. Also, an announcement or two from the ICO. More on those soon. It’s been a fair bit of work, so please share widely and recklessly, and don’t forget to sign up yourselves. See you there!

Details and sign-up link.

Cennydd Bowles

Book review: Deliberate Intervention

Cover of Alexandra Schmidt’s Deliberate Intervention. A smartphone’s outline is illustrated as a guillotine, blade primed to sever the unwary user’s finger.

I call them ‘the outflankers’.

For perhaps seven years now, I’ve argued for tech teams to prioritise ethics. But there are always some who insist I’m wasting my time: that companies will never change their capitalist spots, that exploitation is in the sector’s genes. What we need, the outflankers argue, is structural change. Regulation. Oversight. Policy.

Which, yes, sure. I harbour my suspicions that some outflankers are more interested in being seen to have seen further than in offering genuine advice, but I can’t disagree with their premise. Of course we need to pull those broader levers, even though I see it as a ‘yes and’. I still work with tech teams since I know how they think and behave, and I can use that knowledge to help teams act more responsibly.

I can’t deny, though, that product/UX designers are painfully ignorant about policy and regulation. I’ve always found these concepts obscure and abstract myself: I didn’t understand how policy works, how it gets formed, or how industry should parse it. Candidly, one reason I joined the ICO was to cover this hole in my knowledge. But designers who don’t want to make that drastic a leap are now in luck, thanks to Alexandra Schmidt’s new book Deliberate Intervention, an excellent introduction to the colliding design and policy worlds.

Deliberate Intervention begins by discussing what readers can do if, like the author, they suspect something’s not right in the world of technology design. The sections on anticipating harms are among the best I’ve read in a practitioner book, starting with historical examples from toy safety to seat belts, then turning to ways of spotting emergent harms (a particular focus of mine these days). Schmidt then turns to deceptive design patterns, elegantly classing them as an intentional subset of these harms.

From there, the book’s horizons expand further. Schmidt argues that even the contrasting communities of industry and civil society approach innovation in remarkably similar ways: define, design, implement, evaluate, repeat. The comparison doesn’t always hold, though. Product design and policy horizons are wildly out-of-sync – six months vs. a decade or more – and their value systems are different or even directly opposed: capitalist profit for private-sector UXers vs. public good for policy specialists. But Schmidt is optimistic about the possibilities for better integration, arguing that persistence and creativity can help bridge the gap between corporate ethics, regulatory oversight, and social good.

As I read the book, I was reminded of perhaps my favourite quote about design:

Always design a thing by considering it in its next larger context – a chair in a room, a room in a house, a house in an environment, an environment in a city plan.
— Eliel Saarinen

Maybe policy and governance are the larger contexts for UX and product design: if so, we owe them our attention. Deliberate Intervention, then, is a grown-up, skilfully written book that our industry might just be ready for.

Demystifying regulation and policy isn’t just useful for the outflankers. It’s important for anyone who wants technology’s power to be applied responsibly as its scope increases. So I think Deliberate Intervention is a useful read for senior practitioners, particularly strategic designers. In fact, it’s probably helpful for anyone designing in a regulated industry, or one that’s about to be. In other words, pretty much everyone.


Ethics note: I bought this book through my own company budget. Deliberate Intervention references my own work on occasion; I didn’t know this in advance. I consider Lou Rosenfeld, owner of the publisher Two Waves, a friend but this review is freely given and has not been solicited. There are no affiliate links on this post.
