It’s Judgment Day for Killer Robots at the United Nations
Has the age of RoboCop and the Terminator arrived? The U.N. thinks it might be around the corner.
On Tuesday, the world body holds its first-ever multinational meeting on “lethal autonomous weapons systems.”
While fully autonomous weapons don’t really exist yet, some
attendees at the convention—like the coalition of non-governmental
organizations calling itself the “Campaign to Stop Killer Robots”—will
argue that technology is moving fast in the direction of creating them.
South Korea, for example, already deploys semi-autonomous machine-gun robots along its demilitarized zone with North Korea. The Israel Defense Forces also operate similar robotic guns on several of Israel’s borders.
Tuesday’s gathering is scheduled as an informal meeting of experts on lethal autonomous weapon systems and will take place over three days in Geneva under the framework of the Convention on Certain Conventional Weapons, which has 117 member states. The convention aims to ban or restrict conventional weapons considered to cause unnecessary or unjustifiable suffering to combatants or civilians. It currently covers weapons such as mines, booby traps and “blinding” laser weapons.
The meeting will first attempt to define what an autonomous weapon is, and whether such weapons fall within the convention’s scope. The highly technical agenda will also delve into legal and ethical questions. In a session Wednesday afternoon, for
instance, attendees will be asked to consider “How could the use of
(lethal autonomous weapon systems) impact on the principles of
proportionality, distinction and precaution (Jus in bello),” in
reference to international humanitarian law.
Apart from robotics, military, and human rights law experts, the meeting will hear from the International Committee of the Red Cross, which held a seminar on the issue this March. In its report on that seminar, the ICRC said there was a sense of “deep discomfort with the idea of allowing machines to make life-and-death decisions on the battlefield with little or no human involvement.”
“As weapon systems become more autonomous they may become
less predictable, raising doubts as to how they could be guaranteed to
operate within the law,” the report said.
Ahead of the U.N. meeting, Human Rights Watch on Monday released a report
saying that in the not-too-distant future, fully autonomous
weapons—killer robots—could be used by law enforcement agencies and thus
trigger questions about international human rights law.
The report, co-published by HRW and Harvard Law School’s
International Human Rights Clinic, finds that “fully autonomous weapons
threaten to violate the foundational rights to life and…undermine the
underlying principle of human dignity.”
Professor Ronald C. Arkin of the Georgia Institute of Technology, who
will be debating the pros and cons of autonomous weapons at the UN
meeting, says he is not in favor of an outright ban, arguing that
autonomous weapons could, if properly designed, reduce human casualties
in war.
Arkin told The Wall Street Journal that systems like these should not be deployed unless they can comply with international humanitarian law. He added that the machines should be able to outperform human combatants from an ethical perspective, with the hope of reducing civilian casualties in conflict. “If that bar cannot be met then they should not be deployed,” he said, adding that he supports a moratorium until that bar can be met. “The original call for moratoria by the UN Special rapporteur were self imposed. I’m not sure if a ban or anything stronger is enforceable,” Arkin said.