Saturday, March 26, 2016

Tay bot

The most interesting thing about the Tay bot (the chatbot launched by Microsoft, which was promptly seduced by trolls and trained to be a Nazi), or at least the aspect of this story that I find most interesting and troubling, is how people immediately started talking about it as a person. If you google for "Microsoft Tay", you can find all kinds of headlines, from "Microsoft kills its first sentient AI" to "Microsoft deletes a teen girl for racist tweets". And even when the headline itself is more objective, it seems to me that the language tends to anthropomorphize this programming experiment a lot.

Which is actually not that surprising. Humans are really good at ascribing agency to everything, from earthworms, to cars, to weather. No wonder an AI bot that was marketed as a model of a "teenage girl", and was given an avatar-like userpic, registered in the collective subconscious as a kind-of-sentient "somebody" rather than "something".

And I think it's both cool and troubling. Cool because it means that humans are, in a way, ready for AI: they are ready to interact with AI as with another being. Which is good, as it means that human-robot interfaces are really easy to build: humans like being gullible; they jump at the opportunity. But it's also bad, as it means that the ethical nightmare may start much earlier than one could have expected. I may be overreacting, but judging from posts about Tay, it seems that people may be opposed to "unplugging robots" years before any AI passes a Turing test.

And that's a fun thought. How do you even troubleshoot a sentient AI, from an ethical point of view? How do you troubleshoot an AI that learns and "develops" psychologically, similar to a human child? It does not have to be exactly like a human child, and the process may be much faster, but there are almost bound to be some similarities. The only way to troubleshoot a program is to run it, look at its performance, kill it (stop it), change something, and then try again. Can this approach be applied to an AI? Or to a project like the "Blue Brain", where a human cortex will be modeled? Or to an "uploaded personality" (another recent fad)? At what point will troubleshooting "virtual humans" become unethical? Or, on a more practical note, at what point will the human community rebel against this troubleshooting?
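To make the dilemma concrete, here is a toy sketch of the run/observe/stop/change loop described above. All names here are hypothetical (this is not Tay's actual code or any real framework): a minimal "learning agent" that accumulates experience, and a troubleshooting loop that discards each instance wholesale before trying an adjusted one.

```python
class ToyLearningAgent:
    """A stand-in for an AI that accumulates 'experience' as it runs."""

    def __init__(self, bias=0.0):
        self.bias = bias
        self.experience = []

    def respond(self, stimulus):
        self.experience.append(stimulus)
        return stimulus + self.bias


def troubleshoot(make_agent, stimuli, target, max_rounds=10):
    """Run, evaluate, stop, adjust, retry -- the only debugging loop we have.

    Each round discards the agent along with all its accumulated
    experience, which is exactly the step that becomes ethically
    fraught if the agent is in any sense sentient.
    """
    bias = 0.0
    for _ in range(max_rounds):
        agent = make_agent(bias)                       # instantiate ("birth")
        outputs = [agent.respond(s) for s in stimuli]  # run and observe
        error = sum(o - t for o, t in zip(outputs, target)) / len(target)
        if abs(error) < 1e-9:
            return agent                               # behavior acceptable
        del agent                                      # "kill": discard it, experience and all
        bias -= error                                  # change something
    return None


agent = troubleshoot(ToyLearningAgent, [1, 2, 3], [2, 3, 4])
print(agent.bias)  # the loop settles on a bias of 1.0
```

The point of the sketch is the `del agent` line: for ordinary software it's free, but for anything child-like that "develops" through its experience, that is the step the post is worrying about.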

And here is a really nice YouTube video, also post-Tay, but with a twist. Still extremely relevant:
