Cennydd Bowles

Major new work: harmful design in browser choice

For a few months I’ve been working with the esteemed Harry Brignull to investigate design patterns in the browser space. Finally we can reveal the fruits of our labour: a major report, ‘Over the Edge: How Microsoft’s Design Tactics Compromise Free Browser Choice’.

User behaviour has become a battleground. In pursuit of competitive advantage, many tech firms employ various design techniques to encourage users to act in certain ways. Some of these strategies are acceptable, such as persuasive designs that merely provide information and leave the user in full control of their decision. However, some design techniques are problematic or even harmful.

Examining these patterns first-hand, and referring to a harmful design taxonomy evolved over many years and supported by academic research, we find Microsoft repeatedly uses harmful design to steer users towards Edge. The report describes how in significant detail.

Of all Microsoft’s tactics, the most objectionable for me is their dissuasive messaging injected directly into the Chrome download page. As we write, ‘for a browser vendor to interfere with the contents of a competitor’s website – or indeed any website – with neither due cause nor user consent is highly irregular and ethically indefensible.’

Harry and I go way back. Aside from being erstwhile Clearleft colleagues, I was tech reviewer for his book Deceptive Patterns and contributed some thoughts for his first presentation on the topic. We work well together. He knows the domain and its design strategies like the back of his hand, and performed a comprehensive audit of the relevant software. I provided writing support, input on the ethical problems these patterns pose, and help connecting our findings to real-world harms.

Of course, we had to be mindful of ethics ourselves. The study was commissioned by Mozilla, who as a browser manufacturer clearly have interests in certain findings. But they were superb clients, and showed a deep respect for research integrity. They were understandably keen to share their previous findings with us, but there was never any doubt: as independent researchers Harry and I had full control of the report.

We’re proud of the result. I’m particularly pleased with how the report connects Microsoft’s behaviour patterns to harms, describes how these harms are likely to fall harder upon those already vulnerable, and provides deeper ethical grounding by describing these patterns’ reliance upon coercion, deception, or manipulation.

As far as I know, this is one of the most in-depth case studies of harmful and deceptive design to date. I hope it advances the debate on the topic and helps others do similar work. But more importantly, I hope Microsoft and regulators alike take note – and take action.


(Note that, whatever a journalist might write, I am not a Mozilla researcher, nor have I ever been. Nor do we use the outdated term ‘dark pattern’ in the report.)


Our Future Health tech advisory board

I’ve joined the technology advisory board at Our Future Health, the charity partnering with the NHS (and other stakeholders) to develop new ways to prevent, detect and treat disease.

I’ll be offering them input on ethics and privacy across the innovation process, and I’m looking forward to learning from my fellow advisory board members on a range of topics from ML to security.


Thoughts on harmful design

Last week, I was invited to be part of an ICO + CMA workshop on harmful / deceptive design, and gave a position statement for a panel with Sarah Gold and Google’s Abigail Gray. Here’s what I said, lightly edited:

The cause of ethics in tech has reached a difficult moment. There’s a backlash against the techlash. We’re told tech ethics, sustainability, and social responsibility are the enemies, preventing humankind from reaching ‘a far superior way of living and being’. This has coincided – it is just a coincidence, right? – with the tech crash, which has eroded the worker power that has driven the tech ethics movement. Meanwhile, an AI landrush is incentivising companies to cut ethical corners in favour of grabbing market opportunities. So there’s good cause for pessimism.

Harmful patterns are common because they’re the exact outcomes the system rewards. While we talk about harmful design, design culture isn’t really the problem: designers tend to be user-focused, empathetic people who typically try to do the right thing. The problem is metrics-driven product management; it’s growth teams given carte blanche to see users as faceless masses to be manipulated; it’s the twin altars of profit and scale; it’s the idea that externalities – that harms themselves – are someone else’s problem, something businesses needn’t worry about.

So these are entrenched problems, which is why progress is so hard. Nevertheless, we are making progress. The ICO/CMA joint paper is a landmark and, I think, a warning shot. Academics have done a good job taxonomising and highlighting deceptive patterns. And deceptive design is now a recognised topic in industry, the subject of conference talks, books, and the like.

But harmful and deceptive practices are still prevalent, and I think fighting them will only get harder in the AI era. We need more approaches at more levels. There’s still a role for promoting ethical approaches inside companies despite the headwinds, to corral the support of people who are motivated to make tech more responsible. That’s where I come in. But we also need activists and political theorists who can discuss the structures and business models that would better promote ethical practice. We need regulators to enforce against bad practice, and lawmakers who can protect users as new harms emerge. We need academics who can investigate these practices and offer new ways of thinking about them. We also need dialogue with the public, particularly vulnerable people most at risk from the harms of technology. In short, we have a long way to go. That’s where you come in.


World Interaction Design Day – London event

I’m helping out the IxDA London crew with an evening discussing ethics and responsibility, part of World Interaction Design Day, next Tue 26 Sep:

"We all want to be more responsible, ethical, and equitable in our design decisions, but it’s often hard to find the time or mutual support to develop these ideas. Please join us for a special World Interaction Design Day where we’ll dedicate an evening to diving into design ethics in practice.

We’ll begin with a discussion of the latest developments in the fast-changing ethical tech and design movements, before moving into three open conversation sessions. In the first, you’ll discuss a contemporary design ethics issue in detail, learning how to understand and argue ethical cases with compelling reasoning. Then, a chance to discuss your own professional ethical challenges with fellow designers in a private, confidential environment. Finally, we reconvene as a group to discuss how interaction, UX, and product designers can push for change in environments that don’t always prioritise ethics and responsibility."

Signups are open now: www.meetup.com/ixda-london/events/296203931


Taking aim: ICO & CMA on harmful design

Designers and product managers, I urge you to pay attention to this new publication on ‘harmful design’ aka deceptive patterns. It’s a joint position paper by the ICO and CMA, the UK’s privacy and competition regulators respectively.

I wasn’t heavily involved in this work – I had my hands full with the privacy design guidance – and I’m no longer at the ICO. So I have some leeway to give my own (strictly personal) interpretation of this paper in a way its authors and current employees can’t.

Have no doubt: this is a warning shot.

Two powerful regulators have joined forces to put industry on notice over deceptive patterns. The language is carefully couched but IMO the implication is clear. This is step one. Step two will be robust. I won’t be surprised to see direct enforcement (i.e. legal action against companies that keep using deceptive patterns) or strict policy stances (essentially, outright prohibitions) in the near-ish future.

It’s rare and difficult for regulators to join forces like this. Two regulators expressing their joint disapproval of the same design patterns: that’s huge. Not one big stick but two.

Here are the five patterns the paper highlights:

  • Harmful nudges

  • Confirmshaming

  • Biased framing

  • Bundled consent

  • Default settings

The paper gives specific, mocked-up examples, and both regulators explain why they’re concerned about each pattern, pointing to UK laws already in effect.
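To make a couple of these concrete, here’s a minimal, hypothetical sketch of how ‘bundled consent’ and ‘default settings’ tend to surface in a codebase, alongside a more respectful alternative. The names and structure are mine, not the paper’s:

```typescript
// Hypothetical illustration only – not taken from the ICO/CMA paper.
// Each purpose a user can consent to, modelled as a simple flag.
interface ConsentChoices {
  essential: boolean;        // strictly necessary for the service
  analytics: boolean;
  personalisedAds: boolean;
  thirdPartySharing: boolean;
}

// Problematic: unrelated purposes bundled behind one pre-ticked
// ‘accept all’ control, with invasive options on by default.
const bundledDefaults: ConsentChoices = {
  essential: true,
  analytics: true,           // the user never actively chose this
  personalisedAds: true,
  thirdPartySharing: true,
};

// Better: purposes are separated, and anything beyond the strictly
// necessary stays off until the user actively opts in.
const granularDefaults: ConsentChoices = {
  essential: true,
  analytics: false,
  personalisedAds: false,
  thirdPartySharing: false,
};
```

The point isn’t the code but the defaults it encodes: if your settings object looks more like the first than the second, the regulators are talking about you.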

So, my advice: if you work for a technology team in the UK or on a digital product with UK customers, act now. Read the document. Identify whether you’re using these deceptive patterns. If so, remove them now. If you don’t have that authority, show the document (and this post too, if you like) to your most senior product leader and your legal team.

This paper is the regulators cocking the gun. You don’t want the barrel pointing at you.


Ethics in Design course: edition 2

Announcing new dates for Ethics in Design, a three-week online course that Ariel Guersenzvaig and I ran with Service Design College earlier this year. Cohort two will begin 23 October. Over four 90-minute live sessions, we’ll explore:

  • why and how ethical issues permeate every design and technology decision;

  • how to transform moral hunches into more grounded, robust ways to think about ethics;

  • methods for kick-starting ethical conversations inside your organisation;

  • how to overcome common objections to ethical discussion;

  • how to navigate conflicts between your personal and professional spheres;

  • areas of emerging focus in responsible design.

The course costs $295, and it’s suited to anyone in a design-related role, including product, UX, and UI designers, DesignOps folks, researchers, and managers. Hope to see you there.


Fulbright Visiting Scholar 2024

Many of you know this already, but at last I’m formally allowed to announce that I’ve been awarded a Fulbright scholarship and will spend the first half of 2024 as a Fulbright Visiting Scholar at Elon University, North Carolina.

It’s one of the most prestigious scholarships in the world, with a rigorous selection process, so I’m delighted to be one of the lucky recipients.

I’ll be researching anticipatory ethics – think ways to foresee & evaluate potential harms of emerging tech – and teaching a postgraduate module on ethics in interactive media. I also expect to visit other US institutions across academia and industry to give guest lectures and help advance discussion of this important topic. Please drop me a line if you might be able to host me for a visit.

Since the Fulbright programme is also focused on cultural exchange, I’ll also be going all-in on college sports fandom, BBQ wars, community pop-up chess nights, making Welsh cakes for confused Americans, etc.

I’m excited about this chance to participate in a scholarship programme that delivers real impact, advancing human knowledge and tackling global challenges, and I’ll be sharing more about my experience as I go.


Do the benefits of AI outweigh the risks?

Stylised illustration of plant leaves. A section is zoomed in as if part of a computer vision system.

I was kindly invited to a Raspberry Pi Foundation offsite to debate ‘Do the benefits of AI outweigh the risks?’ Here’s the short statement I shared:

Sometimes the role of an ethicist is to ask distinguishing questions. One such question is ‘for whom?’. Do the benefits of AI outweigh the risks? Well, the benefits for whom? The risks to whom? These two questions will probably have very different answers, right? The benefits and burdens of AI, as with almost every other innovation, won’t fall equally.

Technologies are always imprinted with values. This goes for even the crudest objects, things we don’t even think of as technologies. Think of razor wire. Razor wire is a shockingly opinionated object: it argues that someone’s right to private property is so important that we should injure anyone who violates that right.

AI people love talking about the value alignment problem: the idea that if we create a superintelligence we’d better make sure it holds the same things dear that we do, otherwise it might destroy them. But what happens before that? What values can we see imprinted within the AI systems we’re building today? When I look at modern AI, I see plausibility trumping truth. I see speed galloping ahead of safety. I see disruption hailed as inevitable, as destiny. Now, these may not be intentional design decisions but nevertheless, they have real-world impact. And the choice not to engage with the values and ethics of our technology is itself an ethical choice: an affirmation of the status quo, a vote to stay on our current heading.

AI could well be the largest force multiplier we’ve ever made. But we already feel society’s invisible, systemic forces acutely. Some people are elevated and empowered by these forces. Some are crushed. If we keep fostering the same values in technology that we do today, then I think these injustices will only increase. People who lack power today will end up further robbed of their autonomy and dignity. Entire creative classes may also find themselves dragged down by the technological undercurrents. It’s not hard to imagine a world in which the tech giants collect handsome royalties for their AI’s creations, while painters and novelists have to collect the recycling.

But it doesn’t have to be this way. Technology doesn’t hold the reins. We do. If we can subvert the default values of today’s tech sector and instead build AIs that prioritise compassion, justice, and respect, then yes, I think the benefits of AI will far outweigh the risks. How we achieve this within the confines of growth and profit is perhaps another question.

Image by Alan Warburton / © BBC / Better Images of AI / Plant / CC-BY 4.0


New public course: ‘Ethics in Design’

I’ve spent a lot of time training teams in responsible and ethical innovation, but it’s always been a solo endeavour. So I’m happy to announce something new. I’m partnering with Ariel Guersenzvaig on a new online, public course called Ethics in Design.

The course is split over three weeks, featuring four 90-minute discussion-led sessions. Between sessions you’ll reflect on what we covered, apply it in your work, and read a few short pieces about the ideas we’ll review next. So while we’re not offering the in-depth theory of an academic course, it’ll be a more considered, reflective environment than a typical one-day workshop.

My co-instructor Ariel is a professor of design at Elisava, and author of The Goods of Design, one of the few ethics books I recommend to switched-on practitioners. Ariel has extensive teaching and academic experience, but also a superb design and UX background himself.

So we’re aiming for the best of both worlds, discussing important real-world design and tech issues while backing up the learning with deep expertise and academic pedigree. We’re also hoping our differing perspectives as instructors will highlight the complexities of ethical design, so you can weigh them up and come to your own conclusions.

Here’s a snippet from the description.

‘In this 3-week training, you will learn how to turn your best intentions into grounded, robust methods for acting more ethically and responsibly. Two experienced instructors will guide you beyond moral hunches towards a more profound understanding of ethical design.’

Hosted by the folks at Service Design College, this will be suitable for anyone in a design-related role, including product, UX, and UI designers, DesignOps folks, researchers, and managers.

The course starts on Tue 23 May and sign-ups are open now, starting at $295. It’s probably the only public training I’ll be running for a while, so grab a ticket while you can.


Privacy in the product design lifecycle

In the whirlwind that was the last fortnight, I never properly shared the big project I shipped at the ICO. Designers, PMs, and engineers: this is for you.

Under GDPR (article 25), a data controller has to consider privacy through their entire product development process – this is called Data Protection by Design and by Default. Through kickoff, research, design, development, and launch, you need to be able to prove you’ve done this work. You can’t ignore it and leave your legal or privacy team to make excuses later; companies are now being fined heavily for failing to live up to this requirement. (€265 million in Meta’s case, for example.)

The ICO only wants to fine companies as a last resort. It’s better for everyone if companies comply with the law properly.

So, in collaboration with a ton of ICO colleagues, I wrote and published guidance on Privacy in the product design lifecycle. It’s written directly for designers, PMs, and engineers, stepping through each stage of product development and clarifying what you must, should, and could do at each stage to protect users and help you comply with GDPR. There’s also info about the case for privacy, so you can convince your teammates this isn’t just about legal compliance, but building trust and keeping people and societies safe.

I might share more about writing regulatory guidance later on: it’s rather more complex than you might expect. But if you’re building products and services that handle personal data, I strongly recommend you check the guidance out: Privacy in the product design lifecycle.


Back into self-employment

Yesterday I wrapped up my time at the ICO. There’ll be time later for proper reflection on the experience, but first: I’m heading back into private consulting and starting to book work in for spring and summer.

You know my angles by now: responsible design and innovation, technology ethics, anticipating potential harms of our work. I’m obviously pretty strong on privacy design too.

I’m open to training, talks (in-house and conference), consulting, and some hands-on product design as schedule permits. My profile’s up-to-date with my recent work and topics of interest. As we all know, it’s not a wildly fertile environment for niche solo consultants right now, so I’d welcome leads and shares alike. Thanks!


Announcing ‘Privacy, Seriously’

Logo for ICO event Privacy, Seriously.

Delighted to finally announce ‘Privacy, Seriously’, a free ICO mini-conference for product designers and PMs. It’s on 23 February, running 2–6pm (UK), online. Here’s the blurb:

‘In a changing technology landscape, privacy isn't just about legal compliance: it's about living up to your values through every feature and interaction. Get it right and privacy becomes a powerful differentiator, helping you forge trusted, respectful customer relationships that last for life.

Join us on 23 February for ‘Privacy, Seriously’, part of the ICO’s ongoing series of events for designers and product managers. At this free, online mini-conference, design and product leaders will reveal how they put privacy at the heart of responsible innovation. You’ll learn from the experts and organisations at the cutting edge of technology and regulation, and maybe even catch a glimpse of where the tech sector goes next.’

We’ve got keynotes from Robin Berjon and Eva PenzeyMoog, plus panels on real-world privacy design and deceptive designs. Also, an announcement or two from the ICO. More on those soon. It’s been a fair bit of work, so please share widely and recklessly, and don’t forget to sign up yourselves. See you there!

Details and sign-up link.


Book review: Deliberate Intervention

Cover of Alexandra Schmidt’s Deliberate Intervention. A smartphone’s outline is illustrated as a guillotine, blade primed to sever the unwary user’s finger.

I call them ‘the outflankers’.

For perhaps seven years now, I’ve argued for tech teams to prioritise ethics. But there are always some who insist I’m wasting my time: that companies will never change their capitalist spots, that exploitation is in the sector’s genes. What we need, the outflankers argue, is structural change. Regulation. Oversight. Policy.

Which, yes, sure. I harbour my suspicions that some outflankers are more interested in being seen to have seen further than in offering genuine advice, but I can’t disagree with their premise. Of course we need to pull those broader levers, even though I see it as a ‘yes and’. I still work with tech teams since I know how they think and behave, and I can use that knowledge to help teams act more responsibly.

I can’t deny, though, that product/UX designers are painfully ignorant about policy and regulation. I’ve always found these concepts obscure and abstract myself: I didn’t understand how policy works, how it gets formed, or how industry should parse it. Candidly, one reason I joined the ICO was to fill this gap in my knowledge. But designers who don’t want to make that drastic a leap are now in luck, thanks to Alexandra Schmidt’s new book Deliberate Intervention, an excellent introduction to the colliding design and policy worlds.

Deliberate Intervention begins by discussing what readers can do if, like the author, they suspect something’s not right in the world of technology design. The sections on anticipating harms are among the best I’ve read in a practitioner book, starting with historical examples from toy safety to seat belts, then turning to ways of spotting emergent harms (a particular focus of mine these days). Schmidt then turns to deceptive design patterns, elegantly classing them as an intentional subset of these harms.

From there, the book’s horizons expand further. Schmidt argues that even the contrasting communities of industry and civil society approach innovation in remarkably similar ways: define, design, implement, evaluate, repeat. The comparison doesn’t always hold, though. Product design and policy horizons are wildly out-of-sync – six months vs. a decade or more – and their value systems are different or even directly opposed: capitalist profit for private-sector UXers vs. public good for policy specialists. But Schmidt is optimistic about the possibilities for better integration, arguing that persistence and creativity can help bridge the gap between corporate ethics, regulatory oversight, and social good.

As I read the book, I was reminded of perhaps my favourite quote about design:

Always design a thing by considering it in its next larger context – a chair in a room, a room in a house, a house in an environment, an environment in a city plan.
— Eliel Saarinen

Maybe policy and governance are the larger contexts for UX and product design: if so, we owe them our attention. Deliberate Intervention, then, is a grown-up, skilfully written book that our industry might just be ready for.

Demystifying regulation and policy isn’t just useful for the outflankers. It’s important for anyone who wants technology’s power to be applied responsibly as its scope increases. So I think Deliberate Intervention is a useful read for senior practitioners, particularly strategic designers. In fact, it’s probably helpful for anyone designing in a regulated industry, or one that’s about to be. In other words, pretty much everyone.


Ethics note: I bought this book through my own company budget. Deliberate Intervention references my own work on occasion; I didn’t know this in advance. I consider Lou Rosenfeld, owner of the publisher Two Waves, a friend but this review is freely given and has not been solicited. There are no affiliate links on this post.


From unintended to unconsidered

WIRED’s post on tech’s unintended consequences suggests a neat rebranding: call them unconsidered consequences instead. It’s a fair point. Most tech companies haven’t even tried to anticipate possible social and ethical impacts of their work.

The usual defence is that you can’t imagine impact at scale. But there’s a puzzling contradiction here. Teams seem entirely happy to imagine scale impacts on technical issues like server load, but when humanity enters the picture, there’s a collective shrug, an alarming unwillingness to consider what might happen next, whom it might benefit, and whom it might harm.

WIRED quotes Aza Raskin heavily, retreading the contrite techie narrative told by The Social Dilemma. I don’t think this is a great look: taking this stance undermines the piece’s arguments by suggesting we should let techies off the hook for failing to anticipate harms. But it’s a story the media loves, so for now we’re stuck with it.

Anyway, Raskin suggests three solutions to the hell he hath wrought. The first two – Hippocratic Oath-type clauses in open-source licenses, and progressive regulation that scales with adoption – have some merits. But his third is the most important: companies should simply try to anticipate harms.

Raskin suggests red teams isolated from typical product and leadership processes. It’s worth discussing whether these work best outside or inside product teams (both make some sense), but the idea is solid and entirely doable. True, the skills may be unfamiliar, but there are already disciplines that excel at drawing upon signals and trends to depict future states. Tech teams can and should learn from them, and can and must anticipate the harm they could do before it happens. Even better, they should use these skills to hear from vulnerable groups, since we are always hampered by our own perspectives.

This sort of anticipatory ethics is underexplored in tech and philosophy, but I’m sure it can help make tech safer and more beneficial. Expect more from me on anticipation in the future, particularly if certain academic plans come to fruition. For starters, my workshop What Could Go Wrong? is about precisely this idea: teams can learn practical anticipation tools and do this work themselves.

It’s true that when you try to anticipate future harms, you won’t spot them all. But as the muscle gets stronger, your success rate improves and your foresight sharpens. Even spotting some harms is preferable to not looking in the first place.


A three-line WIP

Torn about this article: Welcome to the WIP. On one hand, it codifies some longstanding truths: our linear design narratives are fictions; regular crit makes better products, etc. So far, so solid. But as I read it, I liked its vibes less and less. I think there are two reasons.

1. The piece was written by the Figma CPO Yuhki Yamashita, and published on the Figma blog. The thrust is that Figma is highlighting emerging design trends, and supporting them through its product choices. But I question the direction of travel. Figma’s market dominance arguably means it gets to establish design trends. Figma has always prioritised showing WIP / cross-functional design collaboration / people poking their damn noses into incomplete work they lack the expertise to properly evaluate (delete as appropriate): this piece doesn’t interrogate the company’s role – and the role of tooling more generally – in shaping industry practice. Figma isn’t paving the cowpaths: it’s bulldozing the construction site.

2. And this is where I indulge some industry-elder-type grumbling: these trends, whether emergent or engineered, contribute to the ongoing commoditisation and devaluation of design skill (pioneered by our uncritical embrace of design systems), and to a forced ideological commitment to incrementalism, faux empiricism, and launching mediocre products.

‘And yes, this means some imperfect launches. But customers aren’t judging our products based on that singular moment,’ says Yamashita. Perhaps so for Figma, but elsewhere, customers absolutely do judge products on singular moments. With a thousand competitors, a botched launch means an instant install-and-delete, and customers lost to you forever.

There’s certainly a place for scrappiness in design, and a WIP-iterative way of working. But, whatever empirical dogmatists might have you believe, there’s also a role for polish and finesse, even before launch. Advanced design expertise involves matching the approach to the scenario. I’m not sure Figma understands or welcomes that fluidity.


The ethics of watching Qatar 2022

The World Cup shouldn’t be in Qatar. We all know why not: the human rights abuses, the suffering of workers, the FIFA corruption, the oppression of LGBTQ+ people… by now, it’s well-trodden ground. But should you watch the tournament on TV? Ethically, I think it’s ok. Here’s why.

First, the arguments against watching. These mostly concern the effect on aggregate viewing figures. Large TV audiences for Qatar 2022 will:

  • embolden FIFA to discount human rights when considering future bids;

  • successfully launder Qatar’s reputation, their main aim in hosting;

  • lead to profit for advertisers and sponsors.

This suggests there’s a case for opting out. But does individual action really make a difference? This question comes up a lot in climate ethics. There, I believe the answer is yes, but only for heavy emitters. An American who flies frequently for business could (and probably should) eliminate perhaps 20 tons of CO₂e by reducing this travel: as a rough illustration, ten long-haul round trips a year at roughly two tons of CO₂e per passenger gets you there. A subsistence farmer in Lesotho has an annual impact nowhere near this, so he or she essentially has no footprint to reduce. The moral onus is almost entirely on the rich to change their harmful behaviours.

The other big impact of individual action is the social signal. By stopping needless flying, say, you send an ethical message – maybe we shouldn’t do this any more – that can encourage others to do the same. In climate action this is powerful: heavy emitters’ friends and peers are typically heavy emitters too, meaning this signal can have large collective effects if it changes others’ behaviours.

Watching the World Cup is different. The differences between individuals are negligible, which makes the case for individual action weaker. Other than the minor difference in value to advertisers, one fewer viewer is just one fewer viewer: there will still be 3.5bn people watching the final, whether you’re in that number or not. Same goes for the social signal. You may convince a few others to join your boycott, but there’s no opportunity for outsized impact as there is with climate. If you choose not to watch, the consequences will be very minor.

Now, the arguments for watching. The strongest is the most obvious: you enjoy it. Don’t underestimate how important that is. Pleasure is central to almost any ethical definition of well-being; some ethicists even say it’s the only thing that’s good for you, although these days that’s a minority view.

The other big benefit is cultural. For all the World Cup’s flaws, I think there is still something meaningful and culturally valuable about bringing the world together in competition. It’s a chance to learn more about other countries and cultures, even if it’s just whether they employ a high press or a low block. It’s a chance to explore shared loves amid our differences. Admittedly I’m flirting with misty-eyed idealism here, but in our era of isolation and nationalism, a world uniting around a simple game is surely a good thing.

Of course, it’s your decision. You may say it’s the principle of the thing that matters, and you feel an obligation to boycott. Or perhaps you feel the climate impact of such a massive tournament is indefensible. And sure, if those arguments weigh heavily on you, I won’t tell you you’re wrong; I’ve set that principles-based perspective aside in favour of examining the consequences of the decision. I do think, however, that a TV boycott certainly isn’t ethically required, and is probably being too hard on yourself.

Ethics shouldn’t be an act of self-flagellation. We should all stop doing really harmful things, of course, but moral perfection is asking too much. In practice, being an ethical person is about trying to live a bit better each day, making progress toward the values and aspirations you hold for your future self.

The World Cup shouldn’t be in Qatar. We should recognise and speak out against the suffering it has caused. We should discuss the awful LGBTQ+ stance of the hosts, while recognising that British football has deep problems with homophobia too. But if you enjoy the World Cup and want to support your team at home, I think it’s ok to watch Qatar 2022.
