AI development should not be left in the hands of mega-companies like Facebook and Google, warns Ine Gevers, curator of an exhibition in Eindhoven that examined the importance of emotion in the relationship between humans and robots.
Gevers – one of the Netherlands' most radical curators – said it was "unethical" that the vast majority of data processing and development for artificial intelligence (AI) was being done by big companies like Google, Facebook and Amazon, and through a relationship with devices like smartphones that are designed to be addictive.
"The companies that in the end can develop better robots are the ones that have this data, that have this AI – the Googles, the Amazons, the Facebooks – and that needs to change," Gevers told Dezeen.
"It's unethical because it's not an honest trade. The way we, for instance, engage with our smartphones is completely addictive. You could compare it to smoking," said the curator of Robot Love.
Gevers said that the current media attitude to robots and AI in the West was "unreal" and populist, and that humans needed to think more about how we could welcome robots into our lives and learn from them instead of simply exploiting them.
The human instinct to anthropomorphise – to attribute human characteristics to objects and animals – could make us feel empathy towards robots, and humans needed to teach them a similar process to protect our own future, she added.
"We will have to live together with robots and AI, but we don't understand them, we have no clue, so we need them to teach us and explain themselves why they act in a certain way or why they make certain decisions. It's for our own survival," said Gevers.
"For robots and AI to become more human aware... it's necessary that we give them deliberate human feedback, not only in terms of data but much more consciously," she said.
Gevers' Robot Love exhibition was one of the highlights of last year's Dutch Design Week, and attracted swarms of visitors to the former Campina milk factory in Eindhoven from September until December 2018.
The show questioned the love-hate relationship between human and robot through a series of interactive art, design and engineering installations. Many of these played with ideas of empathy.
Examples included Annelies, Looking For Completion – a crying, human-like robot by art duo LA Raeven; an AI in the form of a pink kitten that narrated a possible future history of the world created by Pinar Yoldas; and HellYeahWeFuckDie by artist Hito Steyerl – named after the most popular words in American pop songs – with video footage of high-tech companies testing out robots that fall and stumble.
Among the other exhibits were Grotto by designer Bart Hess, which featured fleshy pillars that looked like human skin; Bob, a hyper-realistic sculpture of a human hybrid by Margriet van Breevoort; a Tickle Robot that learned through touch by Driessens & Verstappen; and an installation that gathered data from visitors kissing by Lancel/Maat.
A book of the works presented in the exhibition is still available to buy online. Robot Love was the latest project from the Niet Normaal Foundation, led by Gevers, which stages large-scale exhibitions every few years with a focus on taking the public out of their comfort zone on specific topics.
Read an edited transcript from our interview with Ine Gevers:
Anna Winston: Why is the show called Robot Love?
Ine Gevers: In 2016, I curated an exhibition called Hacking Habitat and it was about how we are captured, held hostage even, by globally connected, high-tech systems and how to hack our life back. It was quite dystopian. It was in a prison, so we were talking about the smartphone as a panopticon within a panopticon.
But we were also thinking about what could help us, how we could unite again, how we could fight back, how we could take some control of our lives. And so the question of love, not so much romantic love but something much broader, came up.
It made me think about what love would mean in a technocratic society. So I began to invest my time researching robotics and AI – especially culturally, because there are big cultural differences in how we relate to robots and AI.
You can get frightened, you can walk away, you can say that it will take a long time before they can compete with us, but you can also think about how it would be if we invited those robots and AIs in. Usually when something new arrives – whether it's women or people of a different colour – we start by enslaving each other, or even worse. So what do we do with robots?
There's a lot of momentum in the discussion about how to make people robot proof, but there's a lot less investment in how we can make robots, especially AI, that are human aware [or empathetic]. We have to teach them that.
Humans mostly do this for ourselves by instinct. Anthropomorphism is the instinct to immediately project something human onto something that you can touch, especially when it has eyes. That is what Robot Love tries to trigger, and of course it also asks questions and gives space to ethical dilemmas in that context.
Anna Winston: Is there a problem with the current media dialogue around robots in the West?
Ine Gevers: There's an underestimation and there's an overestimation, it's completely unreal. On the one hand there is lots of marketing, like with sex robots that can do all this stuff – it's bullshit really – the robots that are in talk shows, etc, these are still puppets. So we are kind of bluffing a lot.
And on the other hand, the ones who have gained the most money out of this whole technological age are the ones that now are warning us the most in terms of how afraid we should be.
It's always dangerous if you polarise too much, if you take a very populist position. Robot Love tries to – in a good sense – complicate things. Most of us have this instinct [to anthropomorphise], so we care about the robot that is crying and we want to comfort her.
We have an immediate engagement with this tickle machine that is giving you the best massage ever, or with this big robot arm that is doing this balancing act. I think it's a gift that we can relate to non-human entities so easily.
Anna Winston: What needs to happen to move the conversation forward more widely?
Ine Gevers: I think it's important to realise that it's not only something for technological people, but that we all should be aware. Our smartphones are robots. Some people have very narrowly framed ideas of what a robot is.
For robots and AI to become more human aware... it's necessary that we give them deliberate human feedback, not only in terms of data but much more consciously. For that we need to put them in a relationship with us and ask if they could [explain something] so that we can understand.
We will in the future have to live together with robots and AI, but we don't understand them, we have no clue, so we need them to teach us and explain themselves why they act in a certain way or why they make certain decisions. It's for our own survival.
We always think about robots and AI over there and humans over here, but there is a full spectrum. Most of us have been cyborgs for a long time, so for the human race itself it's necessary that we embrace diversity.
Anna Winston: Could you explain what you mean by most of us are cyborgs?
Ine Gevers: 100 years ago we had the iron lung. The big difference now that we have 3D printing and all these other technologies is that now cyborgs can come out of the closet much more easily, because it's visible.
We are entering into a future where you have robots, AI, cyborgs and all sorts of other hybrids and other chimera. I think it will be fun really. Our culture is already developing in those ways. We want to look like robots, we have this kind of selfie culture, so it's moving in that direction even in terms of our visions of beauty. It's digitised.
Anna Winston: How do you define what a robot is?
Ine Gevers: It is very connected of course to AI. The first thing most people do is draw some kind of human figure and that's dangerous. Everything that is a thing but is smart as a thing, because it can do stuff and make decisions on its own, is a robot. So a smartphone, a refrigerator, a self-driving car etc.
Without AI, robotics is not so interesting really. It's AI that is moving fast because the [companies developing it] have access to lots of data, and that's a much more problematic area.
The companies that in the end can develop better robots are the ones that have this data, that have this AI – the Googles, the Amazons, the Facebooks – and that needs to change. It's very important that we learn to cope with it and become more digitally aware.
Anna Winston: Is it unethical that these big companies are the ones that are controlling the future of AI?
Ine Gevers: That's a very complex question. I personally think that it's unethical because it's not an honest trade. The way we, for instance, engage with our smartphones is completely addictive. You could compare it to smoking. Those companies eventually were sued because it was unethical.
Same thing with all those big companies that did land grabbing and all that stuff to get at oil. At a certain moment in time, we think it's unethical in relation to the planet and the climate. The same thing will happen here, I'm absolutely convinced.
Actually those companies are taking the role of what used to be communism. They get every source, every energy and every data from us in exchange for a certain ease, a certain comfort and, later on, even insurance, health etc. That's coming. I find that quite creepy.