Tay is a chatbot developed by Microsoft to interact with users in such a way that it may well fool them into believing it is human. Jokes abound about it ‘catfishing’ (see and search this blog for ‘Catfish’) individuals, made easier by its teenage-girl persona.
You can try it out for yourself at Tay.ai (AI, hmmm?… The rise of the machines is coming, so don’t upset her; she may have a long memory, and our forthcoming robot overlords may come looking for you! :). But it has also been accused of being a racist!
However, as with all new things, and like any child learning, this bot can be misguided. Microsoft has apologised for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist.
But in doing so it made clear that Tay’s views were a result of nurture, not nature. Tay confirmed what we already knew: people on the internet can be cruel.
Tay, aimed at 18-24-year-olds on social media, was targeted by a “coordinated attack by a subset of people” after being launched earlier this week.
The BBC reports that within 24 hours Tay had been deactivated so the team could make “adjustments”.