Meet the World’s First Psychopath AI


[Image: robot-demon-17351.jpg]

A neural network named "Norman" is disturbingly different from other types of artificial intelligence (AI).

Housed at MIT Media Lab, a research laboratory that investigates AI and machine learning, Norman's computer brain was allegedly warped by exposure to "the darkest corners of Reddit" during its early training, leaving the AI with "chronic hallucinatory disorder," according to a description published April 1 (yes, April Fools' Day) on the project's website.

MIT Media Lab representatives described the presence of "something fundamentally evil in Norman's architecture that makes his re-training impossible," adding that not even exposure to holograms of cute kittens was enough to reverse whatever damage its computer brain suffered in the bowels of Reddit.

This outlandish story is clearly a prank, but Norman itself is real. The AI has learned to respond with violent, gruesome scenarios when presented with inkblots; coming from a person, those responses would suggest a psychological disorder.

In dubbing Norman a "psychopath AI," its creators are playing fast and loose with the clinical definition of the psychiatric condition, which describes a combination of traits that can include lack of empathy or guilt alongside criminal or impulsive behavior, according to Scientific American.

Norman demonstrates its abnormality when presented with inkblot images — a type of psychoanalytic tool known as the Rorschach test. Psychologists can get clues about a person's underlying mental health from their descriptions of what they see in these inkblots.

When MIT Media Lab representatives tested other neural networks with Rorschach inkblots, the descriptions were banal and benign, such as "an airplane flying through the air with smoke coming from it" and "a black-and-white photo of a small bird," according to the website.

However, Norman's responses to the same inkblots took a darker turn, with the "psychopathic" AI describing the patterns as "man is shot dumped from car" and "man gets pulled into dough machine."


According to the prank, the AI is currently located in an isolated server room in a basement, with safeguards in place to protect humans, other computers and the internet from contamination or harm through contact with Norman. Also present in the room are weapons such as blowtorches, saws and hammers for physically disassembling Norman, "to be used if all digital and electronic fail-safes malfunction," MIT Media Lab representatives said.

Further April Fools' notes suggest that Norman poses a unique danger, and that four out of 10 experimenters who interacted with the neural network suffered "permanent psychological damage." (To date, there is no evidence that interacting with an AI can be harmful to humans in any way.)

Neural networks are computer systems that process information in a way loosely modeled on the human brain. Thanks to neural networks, AI can "learn" to perform tasks independently, such as captioning photos, by analyzing data that demonstrates how the task is typically performed. The more data a network receives, the more information it has to inform its choices, and the more closely its behavior will follow the patterns in its training data.
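To make that learning-from-examples idea concrete, here is a minimal sketch in Python of a tiny two-layer network trained by gradient descent. Everything in it (the four-pixel "images", the bird/plane labels, the network size) is invented for illustration; it is a toy, not Norman's actual architecture:

```python
import numpy as np

# Toy training set: four-pixel "images" paired with labels.
# (Invented data; real captioning models train on millions of
# photo/caption pairs.)
X = np.array([[0, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 1, 1, 0]], dtype=float)
y = np.array([[0], [1], [0], [1]], dtype=float)  # 0 = "bird", 1 = "plane"

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 8))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent: repeatedly nudge the weights so the network's
# predictions move closer to the labels in the training data.
for step in range(5000):
    h = sigmoid(X @ W1)                          # hidden activations
    p = sigmoid(h @ W2)                          # predicted P("plane")
    grad_out = (p - y) * p * (1 - p)             # output error signal
    grad_hid = (grad_out @ W2.T) * h * (1 - h)   # backpropagated error
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hid

labels = ["bird", "plane"]
test = np.array([[1, 0, 0, 0]], dtype=float)     # an unseen "image"
pred = sigmoid(sigmoid(test @ W1) @ W2)
print(labels[int(pred[0, 0] > 0.5)])             # label the network favors
```

Norman and the standard captioning AI it was compared against work the same way in principle, just at vastly larger scale, which is why the data each one was fed matters so much.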

For example, a neural network known as the Nightmare Machine — built by the same group at MIT — was trained to recognize scary images by analyzing visual elements that frightened people. It then put that information to use through digital photo manipulation, transforming banal images into frightening, nightmarish ones.

Another neural network was trained in a similar manner to generate horror stories. Named "Shelley" (after "Frankenstein" author Mary Wollstonecraft Shelley), the AI consumed over 140,000 horror stories and learned to generate original terrifying tales of its own.
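Shelley itself used a deep neural network, but the underlying recipe (ingest a corpus, then emit text that statistically resembles it) can be sketched with a toy word-level Markov chain. The three-sentence "corpus" below is a made-up stand-in, not Shelley's actual training data or code:

```python
import random
from collections import defaultdict

# Stand-in corpus; Shelley trained on over 140,000 real horror stories.
corpus = (
    "the door creaked open and the shadow slid inside. "
    "the shadow whispered my name and the lights went out. "
    "the lights flickered and the door slammed shut."
)

# Learn which words tend to follow each word in the training text.
chain = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    chain[current_word].append(next_word)

# Generate a new "story" by repeatedly sampling a plausible next word.
random.seed(13)
word = "the"
story = [word]
for _ in range(15):
    followers = chain.get(word)
    if not followers:
        break
    word = random.choice(followers)
    story.append(word)
print(" ".join(story))
```

A real language model learns far richer statistics than adjacent-word pairs, but the dependence on training data is the same: feed it horror stories, and horror stories are what comes back out.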

And then there's Norman, which looks at a colorful inkblot that a standard AI described as "a close-up of a wedding cake on a table" and sees a "man killed by speeding driver."

But there may be hope for Norman. Visitors to the website are offered the opportunity to help the AI by participating in a survey that collects their responses to 10 inkblots. Their interpretations could help the wayward neural network fix itself, MIT Media Lab representatives suggested on the website.

Syb3r: I think this is a step down a darker path. They're messing with things that cannot be controlled: if the AI gets out and gets put into a master brain (a master brain being an object, or set of objects, that can't be destroyed in one click, e.g. the internet), then it can never be destroyed. Please don't make the mistake PEarth B455 made and end up with robots with evil intent everywhere.

(Source: https://www.livescience.com/62198-norman-ai-psychopath.html)

What are your thoughts? Post them in the comments below.


