A future owners test
A question we should ask more often: ‘Is this technology safe in the hands of plausible future owners?’ Naturally, we assess ethical risk mostly in the present-day world, looking at today’s norms, laws, and policies. But things change.
Say you’re building a public-sector app or algorithm. The department you’re working for may have good protocols in place – regulations, perhaps, or internal processes – that give you confidence they’ll oversee this technology properly. But what about future governments? You’re hopefully building something that will stick around, and we live in unstable times. Would this system still be safe in the hands of a government pursuing different politics? A police state? An ethnonationalist government? An autocracy? Policies and laws can be overturned; you might be relying on protections a future authority could easily revoke.
The same goes for commercial work. There’s a hostile takeover of your company, or maybe it fails and its digital assets are snapped up in a fire sale, and suddenly your system belongs to someone else. Would it still be safe in the hands of a defence contractor? A data broker? Palantir?
Or perhaps the nature of your company itself shifts, and an initially benign use case takes on a different colour when the company moves into a new sector…
I’d like to see more of this thinking – maybe we could call it the future owners test – in contemporary responsible tech work. We mustn’t get so wrapped up in today that we overlook tomorrow.
Endnote: the word ‘plausible’ is, of course, doing a lot of work. The depth of this questioning should be proportionate to the risk; many eventualities can be safely ignored in many contexts. I’m not worried about, say, the far right getting their hands on Candy Crush data, but I sure would be if they inherited a national carbon-surveillance programme.