Humans are preoccupied with robots. Leonardo da Vinci sketched designs for one in the late 15th century, and the Jetsons were served by Rosie the robot maid. Today’s pop culture robots are indistinguishable from living, breathing humans (some examples: Blade Runner, Westworld, Ex Machina, and Black Mirror).
We’re obsessed with the endeavor of replicating or supplanting ourselves. But strangely, the same preoccupation hasn’t really been applied to pets.
aibo (stylized in all lowercase letters, as opposed to its all-caps predecessor AIBO) might change that. The company’s iconic robotic puppy was originally introduced in the early 2000s. At the time, Sony timed AIBO’s release with two research papers, written by a group of its computer scientists, that dived into how A.I. could simulate animal intelligence, a subject our understanding of which is still pretty scant. The two papers detailed how the company used analyses of animal behavior (ethology) to program the bots. One paper described how the team broke basic animal behavior down into a series of modules that the robo-pup could simulate, like squeaking for attention, and the other described how the team modeled AIBO’s complex emotional system to match predictable, relatable puppy behavior that humans could form a connection with.
AIBO wasn’t the only robo-dog of the early aughts. The far less expensive Poo-Chi toys were wildly popular in the same time frame, and the instinct to raise a robotic being fed the popularity of digital critters from Neopets to Pokémon.
Despite their introduction in the early 2000s, tangible robotic pets have remained a novelty, until now. Sony discontinued production of AIBO in 2006. But on November 1, the company announced that it would be reviving the robotic dog. The new aibo, available exclusively in Japan in January, will be packed with A.I., including software that allows it to “learn” in a rudimentary fashion by repeating behavior that gets positive feedback from its owners, according to the New York Times. aibo’s novelty is that it’s a device that really requires your input: it’s specifically made to be interacted with, played with, and talked to, unlike other now-ubiquitous connected devices.
That need for human care frankly intimidates the crap out of experts like Sherry Turkle, a psychologist at MIT who has written extensively about human beings’ interactions with “sociable” computers. The threat in forming a bond with a robot or nurturing it like a living creature, Turkle said, is in assuming that the bond runs both ways.
“When a personal computer or robot seems to ask for our help, we treat it as though it cares about us,” Turkle told The Daily Beast via email. “We are vulnerable here. We are vulnerable to feeling that objects that cannot care for us do care for us.”
That’s not to say “synthetic” pets will never be capable of feeling, and our living, breathing pets today certainly do, albeit in slightly different ways (a 2017 study, for example, found that dogs have strong brain responses to the smell of familiar humans and to emotional cues in verbal speech, a testament to the two species’ 30,000-year bond). But when people turn to a synthetic pet, which has “no capacity for a relationship with us,” for the emotional gratification we typically reserve for something that can “love” us back, Turkle said, it introduces “fake emotion” into people’s lives. “Developmentally, I can see only harm,” she said.
But as A.I. improves, it is likely to get harder and harder to tell the difference between “real” and “synthetic.” Turkle’s opinion is that A.I. will always remain artificial, and any emotions it presents are simulated. In humanoid A.I., of course, we wrestle with this definition: If a simulation of consciousness, feeling, and humanity becomes indistinguishable from the real thing, who’s to say it’s not real?
A.I. researchers have proposed a number of well-defined processes or tests for determining whether or not a robot is conscious. One of the oldest and most rudimentary tests for machine intelligence is the Turing test, a procedure designed to figure out whether or not an A.I. simulates consciousness and intelligence well enough to fool a human being into thinking it’s one of them.
But there isn’t any such “Turing Test” for pets. In fact, we still aren’t sure what makes an animal “conscious” or not; performing the same tests on computers is even more difficult. Dr. Manuel Blum, a professor of Computer Science at Carnegie Mellon University who originally studied under Marvin Minsky, one of the godfathers of A.I., told The Daily Beast that he’s still trying to formulate a good set of criteria that would test for “consciousness” in a machine.
In animals, Blum explained, researchers can perform a very rudimentary test to determine whether or not a being is self-aware. In the “mirror test,” an unsuspecting animal is marked with some kind of paint on a part of its body it cannot see, like its forehead. The animal is then shown a mirror. If it sees its reflection, with the paint on its forehead, and attempts to wipe the paint off, it passes the test: it can recognize itself in the mirror, and understand that the paint it sees in the mirror’s reflection is on its own body in real life. (Dogs, interestingly, don’t pass the test. Elephants and other smarter animals do.)
But Blum said trying to apply a similar test of consciousness to A.I. quickly falls apart. It’s very easy to code a program to pass the mirror test, and consciousness has to require more than that: some form of inner thought process that can choose actions beyond knee-jerk reactions to stimuli, for one. Still, he said we’re probably approaching the time when these conversations become necessary.
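Blum’s point, that passing the mirror test proves little for software, is easy to demonstrate. The toy program below is a hypothetical sketch (not drawn from any researcher’s actual code): a few lines of hard-coded logic let an “agent” wipe a mark it sees in a simulated mirror, with no inner thought process of any kind.

```python
class ToyAgent:
    """A toy agent that trivially 'passes' a simulated mirror test."""

    def __init__(self):
        # Marks on the agent's own body; it cannot inspect these directly,
        # only via the mirror, mimicking the setup of the animal test.
        self.marks = set()

    def receive_mark(self, location):
        self.marks.add(location)

    def look_in_mirror(self):
        # The "mirror" reflects the agent's own state back at it.
        reflection = set(self.marks)
        # Knee-jerk rule: any mark seen in the reflection must be on me,
        # so wipe it off. This is exactly the behavior the mirror test
        # rewards, implemented with zero self-awareness.
        for mark in reflection:
            self.marks.discard(mark)
        return reflection


agent = ToyAgent()
agent.receive_mark("forehead")
seen = agent.look_in_mirror()
print(seen)         # the agent "noticed" the mark: {'forehead'}
print(agent.marks)  # and "wiped it off": set()
```

The sketch is a stimulus-response rule and nothing more, which is precisely why, as Blum argues, a meaningful test for machine consciousness has to demand something beyond this behavior.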
“I’m very optimistic about what computers can do,” Blum said in an interview. “I’m very optimistic about A.I.” That barrier, when simulated intelligence becomes nigh-indistinguishable from the real thing, whether a puppy or a human, is close, he believes. “I think that these machines are very close to achieving it.”
Blum is optimistic, and seems to regard the coming singularity, when a computer can simulate your pet, or your fellow man, with curiosity. For Turkle, it’s more of an existential threat. “The simulation of thinking,” she said, in reference to a Turing test, “may be enough for us to be content to take it as thinking. But the simulation of feeling is not feeling, the simulation of love is never love.”
A robotic dog may be able to simulate adoration. It may even be able to simulate waking you up at 5 a.m., whining for food that it doesn’t need. But ultimately, it’s up to us to decide if that makes it real or not.