The Previous Experiments With AI Bots Didn’t Go Well … Here’s Tay and Norman

Photo by Possessed Photography on Unsplash

I gave my first major presentation on ChatGPT last week, and tried to be positive, but also outlined many of the threats that AI bots bring to cybersecurity.

So, let’s wind the clock back to the last time AI bots appeared to make a significant impact on the Internet.

Tay

Anyone who has children will know that you shouldn’t swear in front of them, as they will pick up bad language. You should also avoid bigotry, racism, and all the other bad habits that children can pick up.

So, back in 2016, Microsoft decided to release their “child” (an AI chatbot) onto the Internet. She was named Tay (not after the River Tay, of course) and came with the cool promotion of:

“Microsoft’s AI fam from the internet that’s got zero chill”.

Unfortunately, Tay ended up learning some of the worst things that human nature has to offer, and in the end Microsoft had to put her to sleep (take her “offline”) so that she could unlearn all the bad things she had learnt.

She spiralled completely out of control, and was perhaps rather shocked by the depth of the questions she was being asked to engage with.

Microsoft’s aim was to get up to speed on creating a bot which could converse with users and learn from their prompts, but it ended up learning from racists, trolls and troublemakers. Before long it was spouting racial slurs, defending white-supremacist propaganda, and calling for genocide.

After learning from the lowest levels of the Internet, and posting over 92K tweets, Tay was put to sleep to think over what she had learnt (and most of her tweets have now been deleted):

c u soon humans need sleep now so many conversations today thx

She was also promoted as:

The more you talk the smarter Tay gets

but she ended up spiralling downwards, talking to the kind of people you wouldn’t want your children to talk to online. At present she is gaining thousands of new followers, but has gone strangely silent.

As soon as she went offline, there was a wave of people keen to chat with her, posting to the #justicefortay hashtag.

Some even called for AI entities to have rights.

Norman and Bad AI

Norman, whose name derives from the famous Psycho movie, is part of a research project at MIT’s Media Lab, and illustrates the dark end of AI. He (“it”) was trained on pictures which represent the world’s darker side: images taken from Reddit of people dying in shocking circumstances.

After Norman had been trained on these violent images and was then asked to interpret ink-blot images (the Rorschach test), the researchers working on the project reported that its responses were extremely dark, with every image being described in terms of murder and violence. Alongside Norman, the team also trained another AI agent on pictures of cats, birds and people, and that agent was far more positive about the images it was shown.

When the AI agents were shown this image:

Norman saw “a man is shot dead”, whereas the other AI agent saw “a close up of a vase and flowers”. And for this one:

Norman saw “A man is shot dead in front of his screaming wife”, whereas the other agent saw “A person holding an umbrella in the air”.

An AI program used by a US court was trained to perform risk assessments on those accused of crimes, and ended up biased against black prisoners. In New York City, an AI program built to predict child abuse was accused of racial profiling, while in New Zealand it was found that the AI agent wrongly predicted child abuse more than half the time, and in Los Angeles County the false positive rate was more than 95%.

Conclusions

The debate is only just beginning. In the end the data matters more than the algorithm when we are training machines.
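
To make that point concrete, here is a minimal sketch (my own illustration, not taken from the Tay or Norman work): two identical logistic-regression models are trained on the same features, but one of them learns from labels that encode a historical bias against a protected group. The algorithm never changes; only the training data does, yet the two models end up scoring the same applicant very differently.

```python
# A minimal sketch: same model, same features, different training labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic "applicants": a protected attribute (0 or 1) and a genuinely
# relevant score.
n = 2000
protected = rng.integers(0, 2, n)
score = rng.normal(0, 1, n)
X = np.column_stack([protected, score])

# "Fair" labels depend only on the relevant score.
y_fair = (score + rng.normal(0, 0.5, n) > 0).astype(int)

# "Biased" labels: the historical decisions also penalised the protected group.
y_biased = (score - 1.5 * protected + rng.normal(0, 0.5, n) > 0).astype(int)

model_fair = LogisticRegression().fit(X, y_fair)
model_biased = LogisticRegression().fit(X, y_biased)

# Two applicants with the same score; only the protected attribute differs.
applicant_a = np.array([[0, 0.2]])
applicant_b = np.array([[1, 0.2]])

print("fair model:  ",
      model_fair.predict_proba(applicant_a)[0, 1],
      model_fair.predict_proba(applicant_b)[0, 1])
print("biased model:",
      model_biased.predict_proba(applicant_a)[0, 1],
      model_biased.predict_proba(applicant_b)[0, 1])
```

Running the sketch, the fair model gives both applicants roughly the same probability, while the biased model marks down the applicant from the protected group, purely because of the decisions it learnt from.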