Cennydd Bowles

From unintended to unconsidered

WIRED’s post on tech’s unintended consequences suggests a neat rebranding: call them unconsidered consequences instead. It’s a fair point. Most tech companies haven’t even tried to anticipate possible social and ethical impacts of their work.

The usual defence is that you can’t imagine impact at scale. But there’s a puzzling contradiction here. Teams seem entirely happy to imagine scale impacts on technical issues like server load, but when humanity enters the picture, there’s a collective shrug, an alarming unwillingness to consider what might happen next, whom it might benefit, and whom it might harm.

WIRED quotes Aza Raskin heavily, retreading the contrite techie narrative told by The Social Dilemma. I don’t think this is a great look: taking this stance undermines the piece’s arguments by suggesting we should let techies off the hook for failing to anticipate harms. But it’s a story the media loves, so for now we’re stuck with it.

Anyway, Raskin suggests three solutions to the hell he hath wrought. The first two – Hippocratic Oath-type clauses in open-source licenses, and progressive regulation that scales with adoption – have some merits. But his third is the most important: companies should simply try to anticipate harms.

Raskin suggests red teams isolated from typical product and leadership processes. It’s worth discussing whether these work best outside or inside product teams (both make some sense), but the idea is solid and entirely doable. True, the skills may be unfamiliar, but there are already disciplines that excel at drawing upon signals and trends to depict future states. Tech teams can and should learn from them, and can and must anticipate the harm they could do before it happens. Even better, they should use these skills to hear from vulnerable groups, since we are always hampered by our own perspectives.

This sort of anticipatory ethics is underexplored in tech and philosophy, but I’m sure it can help make tech safer and more beneficial. Expect more from me on anticipation in the future, particularly if certain academic plans come to fruition. For starters, my workshop What Could Go Wrong? is about precisely this idea: teams can learn practical anticipation tools and do this work themselves.

It’s true that when you try to anticipate future harms, you won’t spot them all. But as the muscle gets stronger, your success rate improves and you develop better foresight senses. And even spotting some harms is preferable to not looking in the first place.


A three-line WIP

Torn about this article: Welcome to the WIP. On one hand, it codifies some longstanding truths: our linear design narratives are fictions; regular crit makes better products, etc. So far, so solid. But as I read it, I liked its vibes less and less. I think there are two reasons.

1. The piece was written by the Figma CPO Yuhki Yamashita, and published on the Figma blog. The thrust is that Figma is highlighting emerging design trends, and supporting them through its product choices. But I question the direction of travel. Figma’s market dominance arguably means it gets to establish design trends. Figma has always prioritised showing WIP / cross-functional design collaboration / people poking their damn noses into incomplete work they lack the expertise to properly evaluate (delete as appropriate): this piece doesn’t interrogate the company’s role – and the role of tooling more generally – in shaping industry practice. Figma isn’t paving the cowpaths: it’s bulldozing the construction site.

2. Here’s where I indulge some industry-elder-type grumbling: these trends, whether emergent or engineered, contribute to the ongoing commoditisation and devaluation of design skill (pioneered by our uncritical embrace of design systems), and to a forced ideological commitment to incrementalism, faux empiricism, and the launch of mediocre products.

‘And yes, this means some imperfect launches. But customers aren’t judging our products based on that singular moment’ says Yamashita. Perhaps so for Figma, but elsewhere, customers absolutely do judge products on singular moments. With a thousand competitors, a botched launch means an instant install-and-delete, and customers lost to you forever.

There’s certainly a place for scrappiness in design, and a WIP-iterative way of working. But, whatever empirical dogmatists might have you believe, there’s also a role for polish and finesse, even before launch. Advanced design expertise involves matching the approach to the scenario. I’m not sure Figma understands or welcomes that fluidity.


The ethics of watching Qatar 2022

The World Cup shouldn’t be in Qatar. We all know why not: the human rights abuses, the suffering of workers, the FIFA corruption, the oppression of LGBTQ+ people… by now, it’s well-trodden ground. But should you watch the tournament on TV? Ethically, I think it’s ok. Here’s why.

First, the arguments against watching. These mostly concern the effect on aggregate viewing figures. Large TV audiences for Qatar 2022 will:

  • embolden FIFA to discount human rights when considering future bids;

  • successfully launder Qatar’s reputation, their main aim in hosting;

  • lead to profit for advertisers and sponsors.

This suggests there’s a case for opting out. But does individual action really make a difference? This question comes up a lot in climate ethics. There, I believe the answer is yes, but only for heavy emitters. An American who flies frequently for business could (and probably should) eliminate perhaps 20 tons of CO₂e by reducing this travel. A farmer in Lesotho has an annual impact nowhere near this, so he or she essentially has no footprint to reduce. The moral onus is almost entirely on the rich to change their harmful behaviours.

The other big impact of individual action is the social signal. By stopping needless flying, say, you send an ethical message – maybe we shouldn’t do this any more – that can encourage others to do the same. In climate action this is powerful: heavy emitters’ friends and peers are typically heavy emitters too, meaning this signal can have large collective effects if it changes others’ behaviours.

Watching the World Cup is different. The differences between individuals are negligible, which makes the case for individual action weaker. Other than the minor difference in value to advertisers, one fewer viewer is just one fewer viewer: there will still be 3.5bn people watching the final, whether you’re in that number or not. Same goes for the social signal. You may convince a few others to join your boycott, but there’s no opportunity for outsized impact as there is with climate. If you choose not to watch, the consequences will be very minor.

Now, the arguments for watching. The strongest is the most obvious: you enjoy it. Don’t underestimate how important that is. Pleasure is central to almost any ethical definition of well-being; some ethicists even say it’s the only thing that’s good for you, although these days that’s a minority view.

The other big benefit is cultural. For all the World Cup’s flaws, I think there is still something meaningful and culturally valuable about bringing the world together in competition. It’s a chance to learn more about other countries and cultures, even if it’s just whether they employ a high press or a low block. It’s a chance to explore shared loves amid our differences. Admittedly I’m flirting with misty-eyed idealism here, but in our era of isolation and nationalism, a world uniting around a simple game is surely a good thing.

Of course, it’s your decision. You may say it’s the principle of the thing that matters, and that you feel an obligation to boycott. Or perhaps you feel the climate impact of such a massive tournament is indefensible. And sure, if those arguments weigh heavily on you, I won’t tell you you’re wrong; I’ve ignored that perspective on ethics in favour of examining the consequences of the decision. I do think, however, that a TV boycott isn’t ethically required, and that insisting on one is probably being too hard on yourself.

Ethics shouldn’t be an act of self-flagellation. We should all stop doing really harmful things, of course, but moral perfection is asking too much. In practice, being an ethical person is about trying to live a bit better each day, making progress toward the values and aspirations you have of your future self.

The World Cup shouldn’t be in Qatar. We should recognise and speak out against the suffering it has caused. We should discuss the awful LGBTQ+ stance of the hosts, while recognising that British football has deep problems with homophobia too. But if you enjoy the World Cup and want to support your team at home, I think it’s ok to watch Qatar 2022.


Twitter and the void beyond

Some thoughts on Twitter. Perhaps I’ve not much to add beyond what others have said, but I want to say it anyway. (This was, after all, the point of Twitter.)

Twitter has been the most significant digital space of my life. I met my wife there. I met my community there. I made lifelong friends and a few mediocre enemies. I loved it so much I worked there. It was a mixed bag: pockets of excellent people stunted by woeful morale and exhausting leadership flux. The IPO was, I think, the reward these workers deserved for making Twitter a success despite everything.

And now, it belongs to one of the worst owners I could think of.

Everything we say on Twitter is now raw material for the world’s richest man to squeeze profit from. Every tweet is validation for his free-speech absolutism and teenage trolling, a vote for the weird-nerd Muskian cult.

I’m not going to leave entirely: there’s too much emotional geography there. Those walls hold memories. But I certainly won’t be around as much. I don’t think I’m going to Mastodon: from here, it looks like a nightmare. I will have to be on LinkedIn more, because I have a niche consulting business to run amid a grinding recession, and capitalism forces us to constantly pursue our own debasement.

Beyond that, I expect I’ll be screaming into the void on this here website, and hoping others see it somehow. We’ve always needed better indie-web connective infrastructure (readers solved only a marginal use-case); that need just became more urgent.

More than anything, though, I’ll just be elsewhere. I need to excise the social media brainworms, to unlearn the habit of thinking in short-form and seeking validation from numbers in blue dots. I hope to bump into you on those muddier roads.


Working for the ICO

Summer 2022 ground me into a fine paste and spread me thinly over some dry, tasteless psychological crackers, meaning I forgot to mention I started a new job. I’m now Principal Technology Adviser at the ICO, the UK’s privacy regulator, specialising in product design. The plan is to help the UK design community do better on privacy. Some promising stuff in the pipeline: watch this space.

It’s a part-time role, so I can still balance my academic duties and some light freelancing, assuming no conflicts of interest. At times it’s been a whiplash-inducing, Verdana-drenched challenge, but I’m starting to find my rhythm now, and hell: if you want impact on an entire sector, regulators offer huge opportunities. Hopefully I can make the most of them.


Echoing some home truths

Alexa’s got more pushy of late. Seems like almost every query invites some hostile FYI upsell: ‘by the way…’, ‘did you know…’. Presumably some Seattle product manager is on the hook for steep usage targets and is having to deal with the insurmountable issue that voice UIs are notoriously undiscoverable.

So… has anyone else taken to swearing at it as a way to influence the algorithm? I’m positive a company like Amazon is listening for abusive replies and coding them as strong negative feedback signals. (If they’re not, they’re really missing a trick.)

I’m generally sentimental about the prospect of treating machine partners well, at least in years to come (see Future Ethics, ch8 – I’m also writing an essay that’s warm on robot moral patiency for my AI Ethics module). But I don’t see that we have any other form of recourse or protest, other than throwing the things into a river.

My point, I guess: it might be okay to tell Alexa to fuck off, you know.


New workshop: What Could Go Wrong?

Want to design more responsible, ethical products? If so, you need to understand how your decisions might harm others.

I’m writing a new workshop, What Could Go Wrong?, which critics have labelled ‘wow, a really good title for a workshop’. It’ll introduce anticipatory techniques from strategic foresight, practical ethics, and even science fiction to explore the unintended consequences of design. Attendees will learn new ways to anticipate ethical risks so they can stop them before they happen.

It’ll debut at UX London on 28 June. Grab a ticket: https://2022.uxlondon.com/schedule/day-one/


The ethical risks of emotional mimicry

Picture of WALL-E anthropomorphic robot, made in Lego. It looks sad.

When might a robot or AI deserve ‘moral status’? In other words, how sophisticated would an AI have to get to, say, claim rights, or for us to have a moral duty to treat it well? Sci-fi writers love this question, of course, and it’s an ongoing research topic in AI ethics.

One view: we should base this decision on behaviour. If an AI acts like other beings – i.e. humans or animals – that already have moral status, maybe the AI deserves moral status too. So, does it (seem to) dislike and avoid pain? Does it (appear to) have preferences and intentions? Does it (pretend to) display emotions? Things like that might count.

I think some engineers and designers bristle at this idea. After all, we know mimicking this sort of thing isn’t theoretically too tough: we can imagine how we’d make a robot that seemed to flinch from pain, that lip-wobbled on demand, etc.

Nevertheless, this theory, known as ethical behaviourism, is still one some philosophers take seriously. In part that’s because, well… what other useful answers are there? We can’t see into other people’s minds, so can’t really know if they feel or suffer. And we can’t rely on physiology and biomechanics: it’s all silicon here, not nerves and brains. So what other options do we have, apart from observed behaviour?

And imagine if we ever got it wrong. If we made an AI that could suffer, without realising it – a false negative – we’d end up doing some awful things. So it seems reasonable to err on the side of caution.

Back to design. Designers love emotions. We try to engender them in humans (delight!), we talk about them ad nauseam (empathy!), and so we’re naturally tempted to employ them in our products and services. But I think emotional mimicry in tech – along with other forms of anthropomorphism – is risky, even dangerous. First, tech that fakes emotion can manipulate humans more effectively, meaning deceptive designs become even more powerful. 

Second, the idea of ethical behaviourism suggests that at some future point we might become so good at mimicry that we face all sorts of unintended moral and even legal consequences. A dystopia in which the Duolingo owl is so unhappy you skipped your vocab test that you could be prosecuted for cruelty. A chatbot so real we’re legitimately forced to worry whether it’s lonely. Is it even ethical to create something that can suffer? Have we, in effect, just spawned a million unwanted puppies?

Design is complicated enough already: I don’t think we want to sign up for that world in a hurry. I’d rather keep emotion out of it.


What it is I actually do, actually

Been doing a lot of introspection / abyss-gazing about what it is I do these days, and how I communicate it to others. Here’s where I’ve ended up. I’d love to know if these scenarios resonate with others.

If you’re a product, design, or engineering leader, you might feel the rise of tech ethics has made life even more complicated.

Your employees and job candidates are asking tough questions about your company’s impact on the world. You’re learning it’s not enough to just mean well: you need expert guidance on what ethics and responsibility should mean for your team.

You’ve seen toxic tech firms make ethical mistakes that harm the entire sector. You won’t let that happen to your team. So you need to understand the unintended harms your work could do, and reduce risks before they happen.

Perhaps your C-suite has made bold pledges of a sustainable, ethical, responsible future. Sounds great, but now it’s on you to make it happen. Where do you start? Your company’s CSR or ESG initiatives seem somewhat distant; it’s not clear how technology teams on the ground can align with these efforts.

So here’s my new pitch, my new value proposition if you like. I help tech teams build more responsible, ethical products. And I do that through a mix of design, training, speaking, and consulting. I’ve finally rebuilt my website to round all this up neatly and tightly: cennydd.com.

I have some availability from May, so if you’re a product, design, or engineering leader and this strikes a chord, let’s chat. (If you’re not one of those people, please feel free to share with someone who is. Thanks!)


No to masked FaceID

I told iOS 15.4 I don’t want it to recognise my face while I’m wearing a mask.

I trust  and their privacy-enhancing implementation of masked FaceID – using Secure Enclave, in this case – and I don’t love the often-tinfoilish surveillance capitalism rhetoric. However, I do think there are valid ethical reasons to reject facial recognition that bypasses masking.

The risk isn’t Apple leaking data: it’s us collectively accepting that machines should be able to recognise us from just half a face / our eyes / our gait / etc etc. That’s a premise worth challenging, at least. I won’t passively contribute to a norm that could draw us closer to real harm.


On strike

Today should be my first day as an associate lecturer at Manchester Metropolitan University, delivering my first session on design ethics to an apprentice group I’ve looked forward to meeting. Instead, I’m on strike.

The UCU union has called nationwide strikes over pay, workload, inequality, and casualisation. I’m not yet a member of the union, and as a practitioner working to an unpredictable schedule, casual contracts suit me better than regular commitments. Perhaps this doesn’t look like my fight.

But the minute I’m paid to prep, teach, and mark a module I become an educator and even, to my surprise, an academic. My loyalty has to be to my new colleagues. The cost of living is soaring, but our peers in academia find themselves undervalued, their opportunities squeezed, pensions slashed, and hostile policies devaluing their expertise and futures.

I’ve been absurdly fortunate to fall into a career that’s overpaid me and given me valuable knowledge. It means I have power. I want to engage with academia but I can do it on my terms: I don’t need the work, and to be candid the pay won’t make a difference to my life. I can choose my moments.

So I have to use that power wisely. I could easily, carelessly make things worse, swooping in and working when others won’t, further restricting opportunities for lecturers with better qualifications and a lifetime of pedagogical skills I don’t have. I can’t and won’t do that.

Of course, it’s a dreadful situation for students too. I feel for them. I hope they realise that academics are striking precisely because their working conditions don’t allow them to educate students properly. I should also point out the administrators on the MMU programme have been gracious and totally understanding. My quarrel isn’t with them.

Stepping into a unionised, precarious industry (academia) while also working in a non-unionised, in-demand industry (tech) has been a whiplash-inducing challenge. I’ve had to think deeply and quickly about the power I have and how I can use that to support my values. Showing solidarity and refusing to cross the picket line is my answer. Academics deserve better, as do students. Educating the next generation is perhaps the best way I can help my field, but I can’t lecture anyone about ethics without first standing up for the values I hold myself.


If you think all design is manipulation, please stop designing

I posted a question today which took off: when does design become manipulation? I have thoughts of my own, and I’m giving some short talks on it soon, but I wanted to survey the wider community’s opinion.

The most common response by far: all design is manipulation. I found this response surprising, let’s say, so I pressed for a few explanations. Mostly, people told me it’s a natural feature of design, but that’s ok because manipulation is an ethically neutral concept.

To which I say: bullshit.

Manipulation is bad. And unless you’re ethically trained and can argue convincingly about minor philosophical exceptions, I say you know full well it’s bad. If you manipulate someone, you use them as means to your own ends; you undermine their consent and ability to exercise free choice; you withhold your true intent. People would describe you as self-centred, controlling, and deceptive. If your spouse asked how work went today, would you feel proud to reply ‘Well, I manipulated a bunch of people’?

The negative ethical connotation is obvious and well-accepted in general parlance. So I don’t buy this neutrality excuse: it shows a blasé acceptance of harm that’s unbecoming of a professional, and I’m alarmed so many designers seem to believe it.

Design influences. It persuades. But if it manipulates, something’s wrong. The difference isn’t just semantic; it’s moral. A manipulative designer abuses their power and strips people of their agency, reducing them to mere pawns. I see almost no circumstances in which that’s ethically acceptable.

So if you think all design is manipulation, please stop designing.

Picture credit: ZioDave


Web3 and Lexit

(This deserves to be a longer post, but I don’t have the political knowledge to do it justice. So I’ll present it here as a throwaway thought and leave it to the theorists to tell me if I’m onto something.)

I can’t help seeing parallels between leftists embracing web3 and leftists who embraced Brexit (aka the ‘Lexit’ crowd). Sure, I can see the purist theoretical appeal, how there might be a better world ahead if certain steps unfold a certain way. But you’re also putting yourself on the same side as some dreadful people who hold antithetical visions for the future. If you can’t subsequently dispossess them of power – or at least compete for your vision to prevail instead – you’ve contributed to a dystopia.

In other words, ‘I’m into web3 because together we can topple the hegemony of Big Tech’ and ‘I’m into web3 because I can make shit-tons of untraceable, untaxable profit’ are going to come into direct conflict, and my money’s on the latter winning out, because money usually does.

I see this as a big problem for Universal Basic Income too. I’m deeply sympathetic to UBI, but I recognise that many libertarians love it too – only because they see it as a way to scrap other welfare, means testing, etc.

It’s a high, high risk strategy to pursue the same means as people who want opposing ends.


What writing is for

Here’s an idea I recognise and agree with: writing is networking for introverts. I’ve earned lasting connections and friendships – and plenty of work – thanks to something I wrote, Future Ethics in particular. And I’m always happier in conversations where each side has some idea of what interests the other. See also Leisa Reichelt’s insight that Twitter et al might lend us a sense of ‘ambient intimacy’, although wow, how long ago that hope seems now.

Another benefit of writing: it forces you to figure out what you actually think about something. Those mental plasma clouds have to coalesce into some sort of starlike objects; if you’re lucky, they sometimes form constellations others can see from afar.


Technological trespassers

Header image of No Trespassing sign outside an old gasworks

In 2018, philosopher Nathan Ballantyne coined the term epistemic trespassers to describe people who ‘have competence or expertise to make good judgments in one field, but move to another field where they lack competence—and pass judgment nevertheless.’

It’s a great label (for non-philosophy readers, epistemology is the study of knowledge), and an archetype we all recognise. Ballantyne calls out Richard Dawkins and Neil deGrasse Tyson, scientists who now proffer questionable opinions on theology and philosophy; recently we could point to discredited plagiarist Johann Hari’s writing on mental health, which has caused near-audible tooth-grinding from domain experts.

This is certainly a curse that afflicts the public intellectual. There are always new books to sell, and while the TV networks never linger on a single topic, they sure love a trusted talking head. Experts can grumble on the sidelines all they like; it’s audience that counts.

(As an aside, I also wonder about education’s role in this phenomenon. As a former bright kid who perhaps mildly underachieved since, I’ve been thinking about how certain education systems – particularly private schools – flatter intelligent, privileged children into believing they will naturally excel in whatever they do. Advice intended to boost self-assurance and ambition can easily instil arrogance instead, creating men – they’re almost always men, aren’t they? – who are, in Ballantyne’s words, ‘out of their league but highly confident nonetheless’. I can identify, shall we say.)

Within and without borders

Epistemic trespass is rampant in tech. The MBA-toting VC’s brainwormish threads on The Future of Art; the prominent Flash programmer who decides he’s a UX designer now. Social media has created thousands of niche tech microcelebrities, many of whom carry their audiences and clout to new topics without hesitation.

Within tech itself, this maybe isn’t a major crime. Dabbling got many of us here in the first place, and a field in flux will always invent new topics and trends that need diverse perspectives. But by definition, trespass happens on someone else’s property; it’s common to see a sideways disciplinary leap that puts a well-known figure ahead of existing practitioners in the attention queue.

This is certainly inefficient: rather than spending years figuring out the field, you could learn it in months by reading the right material or being mentored by an expert. But many techies hold a strange contradiction: they claim to hate inefficiency while insisting on solving every interesting problem from first principles. I think it’s an ingrained habit now, but if it’s restricted to purely technical domains I’m not overly worried.

Once they leave the safe haven of the technical, though, technologists need to be far more cautious. As our industry finally wises up to its impacts, we now need to learn that many neighbouring fields – politics, sociology, ethics – are also minefields. Bad opinions here aren’t just wasteful, but harmful. An uninformed but widely shared reckon on NFTs is annoying; an uninformed, widely shared reckon on vaccines or electoral rights is outright dangerous.

Epistemic humility

Ballantyne offers the conscientious trespasser two pieces of advice: 1. dial down the confidence, 2. gain the new expertise you need. In short, practise epistemic humility.

There’s a trap in point 2. It’s easy to confuse knowledge and skills, or to assume one will naturally engender the other in time. Software engineers, for example, develop critical thinking skills which are certainly useful elsewhere, but simply applying critical thinking alone in new areas, without foundational domain knowledge, easily leads to flawed conclusions. ‘Fake it until you make it’ is almost always ethically suspect, but it’s doubly irresponsible outside your comfort zone and in dangerous lands.

No one wants gatekeeping, or to be pestered to stay in their lane, and there are always boundary questions that span multiple disciplines. But let’s approach these cases with humility, and stop seeing ourselves as the first brave explorers on any undiscovered shore.

We should recognise that while we may be able to offer something useful, we’re also flawed actors, hampered by our own lack of knowledge. Let’s build opinions like sandcastles, with curiosity but no great attachment, realising the central argument we missed may just act as the looming wave. This means putting the insight of others ahead of our own, and declining work – or better, referring it to others who can do it to a higher standard – while we seek out the partnerships or training we need to build our own knowledge and skills.
