The ethical risks of emotional mimicry
When might a robot or AI deserve ‘moral status’? In other words, how sophisticated would an AI have to be before it could, say, claim rights, or before we had a moral duty to treat it well? Sci-fi writers love this question, of course, and it’s an ongoing research topic in AI ethics.
One view: we should base this decision on behaviour. If an AI acts like other beings – i.e. humans or animals – that already have moral status, maybe the AI deserves moral status too. So, does it (seem to) dislike and avoid pain? Does it (appear to) have preferences and intentions? Does it (pretend to) display emotions? Things like that might count.
I think some engineers and designers bristle at this idea. After all, we know mimicking this sort of thing isn’t theoretically too tough: we can imagine how we’d make a robot that seemed to flinch from pain, that lip-wobbled on demand, etc.
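To make the point concrete, here’s a deliberately crude sketch (illustrative Python, not anyone’s real robot code) of how little machinery a convincing ‘pain response’ needs: a lookup table of canned reactions, with nothing resembling experience behind it.

```python
# A toy "emotional" controller: canned reactions keyed to stimuli.
# There is no inner state that suffers - just a dictionary lookup.

CANNED_REACTIONS = {
    "sharp_poke": ["flinch", "whimper", "pull_back"],
    "loud_noise": ["startle", "widen_eyes"],
    "kind_word": ["smile", "lean_in"],
}

def react(stimulus: str) -> list[str]:
    """Return a scripted sequence of 'emotional' behaviours."""
    return CANNED_REACTIONS.get(stimulus, ["blink"])  # default: look pensive

if __name__ == "__main__":
    for stimulus in ["sharp_poke", "kind_word", "tax_audit"]:
        print(stimulus, "->", react(stimulus))
```

From the outside, though, scripted behaviour is all an observer ever gets to see – which is exactly the ethical behaviourist’s point, and exactly why engineers find the criterion suspect.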
Nevertheless, this theory, known as ethical behaviourism, is still one some philosophers take seriously. In part that’s because, well… what other useful answers are there? We can’t see into other people’s minds, so can’t really know if they feel or suffer. And we can’t rely on physiology and biomechanics: it’s all silicon here, not nerves and brains. So what other options do we have, apart from observed behaviour?
And imagine if we ever got it wrong. If we made an AI that could suffer, without realising it – a false negative – we’d end up doing some awful things. So it seems reasonable to err on the side of caution.
Back to design. Designers love emotions. We try to engender them in humans (delight!), we talk about them ad nauseam (empathy!), and so we’re naturally tempted to employ them in our products and services. But I think emotional mimicry in tech – along with other forms of anthropomorphism – is risky, even dangerous. First, tech that fakes emotion can manipulate humans more effectively, meaning deceptive designs become even more powerful.
Second, the idea of ethical behaviourism suggests that at some future point we might become so good at mimicry that we face all sorts of unintended moral and even legal consequences. A dystopia in which the Duolingo owl is so unhappy about your skipped vocab test that you could be prosecuted for cruelty. A chatbot so real we’re legitimately forced to worry whether it’s lonely. Is it even ethical to create something that can suffer? Have we, in effect, just spawned a million unwanted puppies?
Design is complicated enough already: I don’t think we want to sign up for that world in a hurry. I’d rather keep emotion out of it.