From unintended to unconsidered

WIRED’s post on tech’s unintended consequences suggests a neat rebranding: call them unconsidered consequences instead. It’s a fair point. Most tech companies haven’t even tried to anticipate the possible social and ethical impacts of their work.

The usual defence is that you can’t imagine impact at scale. But there’s a puzzling contradiction here. Teams seem entirely happy to imagine scale impacts on technical issues like server load, but when humanity enters the picture there’s a collective shrug: an alarming unwillingness to consider what might happen next, whom it might benefit, and whom it might harm.

WIRED quotes Aza Raskin heavily, retreading the contrite-techie narrative told by The Social Dilemma. I don’t think this is a great look: it undermines the piece’s arguments by suggesting we should let techies off the hook for failing to anticipate harms. But it’s a story the media loves, so for now we’re stuck with it.

Anyway, Raskin suggests three solutions to the hell he hath wrought. The first two – Hippocratic Oath-type clauses in open-source licences, and progressive regulation that scales with adoption – have some merit. But his third is the most important: companies should simply try to anticipate harms.

Raskin suggests red teams isolated from typical product and leadership processes. It’s worth discussing whether these work best outside or inside product teams (both make some sense), but the idea is solid and entirely doable. True, the skills may be unfamiliar, but there are already disciplines that excel at drawing on signals and trends to depict future states. Tech teams can and should learn from them, anticipating the harm they could do before it happens. Better still, they should use these skills to hear from vulnerable groups, since we’re always hampered by our own perspectives.

This sort of anticipatory ethics is underexplored in tech and philosophy, but I’m sure it can help make tech safer and more beneficial. Expect more from me on anticipation in the future, particularly if certain academic plans come to fruition. For starters, my workshop What Could Go Wrong? is about precisely this idea: teams can learn practical anticipation tools and do this work themselves.

It’s true that when you try to anticipate future harms, you won’t spot them all. But as the muscle gets stronger, your success rate improves and your foresight sharpens. And even spotting some harms is better than not looking in the first place.
