Toby Walsh, AI expert, is racing to stop the killer robots

Toby Walsh, a professor at the University of New South Wales in Sydney, is one of Australia’s leading experts on artificial intelligence. He and other experts have released a report outlining the promises, and ethical pitfalls, of the country’s embrace of AI.

Recently, Walsh, 55, has been working with the Campaign to Stop Killer Robots, a coalition of scientists and human rights leaders seeking to halt the development of autonomous robotic weapons.

We spoke briefly at the annual meeting of the American Association for the Advancement of Science, where he was making a presentation, and then for two hours via telephone. Below is an edited version of those conversations.

Q. You are a scientist and an inventor. How did you become an activist in the fight against “killer robots”?

A. It happened incrementally, beginning around 2013. I had been doing a lot of reading about robotic weaponry. I realised how few of my artificial intelligence colleagues were thinking about the dangers of this new class of weapons. If people thought about them at all, they dismissed killer robots as something far in the future.

From what I could see, the future was already here. Drone bombers were flying in the skies over Afghanistan. Though humans on the ground controlled the drones, it’s a small technical step to render them autonomous.

So in 2015, at a scientific conference, I organised a debate on this new class of weaponry. Not long afterward, Max Tegmark, the MIT physicist who runs the Future of Life Institute, asked if I’d help him circulate a letter calling for the international community to pass a preemptive ban on all autonomous robotic weapons.

I signed, and at the next big AI conference, I circulated it. By the end of that meeting, we had over 5,000 signatures—including people like Elon Musk, Daniel Dennett, Steve Wozniak.

Q. What was your argument?

A. That you can’t have machines deciding whether humans live or die. That crosses into new moral territory. Machines don’t have our moral compass, our compassion and our emotions. Machines are not moral beings.

The technical argument is that these are potentially weapons of mass destruction, and the international community has thus far banned all other weapons of mass destruction.

What makes these different from previously banned weaponry is their potential to discriminate. You could say, “Only kill children,” and then add facial recognition software to the system.

Moreover, if these weapons are produced, they would unbalance the world’s geopolitics. Autonomous robotic weapons would be cheap and easy to produce. Some can be made with a 3D printer, and they could easily fall into the hands of terrorists.

Another thing that makes them terribly destabilising is that with such weapons it would be difficult to know the source of an attack. This has already happened in the current conflict in Syria. Just last year, there was a drone attack on a Russian base there, and we don’t know who was actually behind it.

Q. Why ban a weapon before it is produced?

A. The best time to ban such weapons is before they’re available. It’s much harder once they are falling into the wrong hands or becoming an accepted part of the military tool kit. The 1995 blinding laser treaty is perhaps the best example of a successful preemptive ban.

Sadly, with almost every other weapon that has been regulated, we didn’t have the foresight to do so in advance of it being used. But with blinding lasers, we did. Two arms companies, one Chinese and one American, had announced their intention to sell blinding lasers shortly before the ban came into place. Neither company went on to do so.

Q. Your petition—who was it addressed to?

A. The United Nations. Whenever I go there, people seem willing to hear from us. I never in my wildest dreams expected to be sitting down with the under-secretary-general of the UN and briefing him about the technology. One high UN official told me, “We rarely get scientists speaking with one voice. So when we do, we listen.”

So far, 28 member countries have indicated their support. The European Parliament has called for it. The German foreign minister has called for it. Still, 28 countries out of nearly 200! That’s not a majority.

Q. Who opposes the treaty?

A. The obvious candidates are the US, the UK, Russia, Israel, South Korea. China has called for a preemptive ban on deployment, but not on development of the weapons.

It’s worth pointing out that there is a huge amount of money to be made by companies selling these weapons and the defences against them.

Q. Proponents of robotic weapons argue that by limiting the number of human combatants, the machines might make warfare less deadly.

A. I’ve heard those arguments, too. Some say that machines might be more ethical because people in warfare get frightened and do terrible things. Some supporters of the technology hope that this wouldn’t happen if we had robots fighting wars, because they can be programmed to abide by international humanitarian law.

The problem with that argument is that we don’t have any way to program for something as subtle as international humanitarian law.

Now, there are some things the military can use robotics for, and clearing a minefield is a good example. If a robot goes in and gets blown up, you just get another robot.

Q. Since 2013, you’ve been spending as much time on your activism as you have on scientific research. Any regrets?

A. No. This is important work to be doing right now. Twenty years ago, like many of my colleagues, I felt that what we were doing in AI was so far from practice that we didn’t have to worry about moral consequences. That’s no longer true.

I have a 10-year-old daughter. When she’s grown, I don’t want her to ask, “Dad, you had a platform and authority—why didn’t you try to stop this?”

© 2019 The New York Times
