
Tuesday, May 21, 2019

We are on the verge of a no-win AI arms race

We are on the verge of a no-win AI arms race, warns NGO

May 9, 2019

The close-in weapons system features many autonomous characteristics, which can inform policymakers about how to craft international treaties regarding any future developments towards lethal autonomous weapon systems. (Rufus Hucks/Navy)


When it comes to deciding to kill a human in a time of war, should a machine make that decision or should another human?
The question is a moral one, brought to the foreground by the techniques and incentives of modern technology. It is a question whose scope falls squarely under the auspices of international law, and one which nations have debated for years. Yet it’s also a collective action problem, one that requires not just states, but also companies and the workers within them, to agree to forgo a perceived advantage. The danger is not so much in making a weapon, but in making a weapon that can choose targets independently of the human responsible for initiating its action.
In a May 8 report from Pax — a nonprofit with the explicit goal of protecting civilians from violence, reducing armed conflict, and building a just peace — the agency looked at the existing state of artificial intelligence in weaponry and urged nations, companies and workers to think about how to prevent an AI arms race, instead of thinking about how to win one. Without corrective action, the report warned, the status quo could lead all participants into a no-win situation, with any advantage gained from developing an autonomous weapon temporary and limited.

“We see this emerging AI arms race and we think if nothing happens that that is a major threat to humanity,” said Frank Slijper, one of the authors on the report. “There is a window of opportunity to stop an AI arms race from happening. States should try to prevent an AI arms race and work toward international regulation. In the meantime, companies and research institutes have a major responsibility themselves to make sure that that work in AI and related fields is not contributing to potential lethal autonomous weapons.”
The report is written with a specific eye toward the seven leading AI powers. These include the five permanent members of the UN security council: China, France, Russia, the United Kingdom and the United States. In addition, the report details the artificial intelligence research of Israel and South Korea, both countries whose geographic and political postures have encouraged the development of military AI.

“We identified the main players in terms of use and research and development efforts on both AI and military use of AI in increasingly autonomous weapons. I couldn’t think of anyone, any state we would have missed out from these seven,” Slijper said. “Of course, there’s always a number eight and the number nine.”
For each covered AI power, the report examines the state of AI, the role of AI in the military, and what is known of cooperation between AI developers in the private sector or universities and the military. For countries like the United States, where military AI programs are named, governing policies can be pointed to and debates over the relationship between commercial AI and military use are public, the report details that process. The thoroughness of the research underscores Pax’s explicitly activist mission, but it also provides a valuable survey of the state of AI in the world.
As the report maintains throughout, this role of AI in weaponry isn’t just a question for governments. It’s a question for the people in charge of companies, and a question for the workers creating AI for companies.
“Much of it has to do with the rather unique character of AI-infused weapons technology,” Slijper said. “Traditionally, a lot of the companies now working on AI were working on it from a purely civilian perspective to do good and to help humanity. These companies weren’t traditionally military producers or dominant suppliers to the military. If you work for an arms company, you know what you’re working for.”
In the United States, workers in the tech sector have expressed resistance to contributing to Pentagon contracts. Google workers complained after learning of the company’s commitment to Project Maven, which developed a drone-footage processing AI for the military, and the company’s leadership subsequently agreed to sunset the project. (Project Maven is now managed by the Peter Thiel-backed Anduril.)

Microsoft, too, experienced worker resistance to military use of its augmented reality tool HoloLens, with some workers writing a letter stating that in the Pentagon’s hands, the headset’s sensors and processing made it dangerously close to a weapon component. The workers specifically noted that they had built HoloLens “to help teach people how to perform surgery or play the piano, to push the boundaries of gaming, and to connect with the Mars Rover,” all of which is a far cry from aiding the military in threat identification on patrol.

“It is, for a lot of people working in the tech sector, quite disturbing that, while initially, that company was mainly or only working on civilian applications of that technology, now more and more they see these technologies also being used for military projects or even lethal weaponry,” said Slijper.

Slijper points to the Protocol on Blinding Laser Weapons as a way the international community regulated a technology with both civilian and military applications to ensure its use fell within the laws of war.
In an April 25 hearing before the Defense Innovation Board, the Pentagon’s general counsel tried pitching Silicon Valley on developing AI for the military as a way to make weapons more precise and more ethical. The argument received some public criticism, with one observer noting the difference between accuracy in hitting a target and accuracy in correctly identifying a lawful target of war before deciding to fire a weapon.

Slijper echoed that ethical concern, using as an example the difference between a missile that hits a target five minutes after it is fired and a loitering munition that could stay aloft for two days before it selects a target.
“When you fire a missile, there is no unclarity about what the target will be,” said Slijper. “If there is a larger amount of time between the launch and the eventual impact, things can happen in between. Civilian targets may show up that are not recognized by that loitering munition as such. Other changes in the environment may take place to change the original target plan. The longer distance in time in this case makes human control insufficient to be in line with the requirements of international humanitarian law.”
The above scenario is ethically fraught in irregular warfare, and could have catastrophic implications if it involved competing weapons from multiple AI powers. Of the seven named AI-capable nations, only South Korea explicitly lacks a nuclear arsenal. Decisions made by machines that select targets without human involvement, and faster than humans can meaningfully oversee, could lead to catastrophic escalations.
Arms control treaties have been known to change the behavior not just of the initial signatories, but also of the nations that interact with them, shaping the availability of those arms worldwide. The ban on cluster munitions and the treaties on landmines have led nations, including nations that have not signed the treaties, to change how they employ those weapons.
"In other arms control treaties that even where some states don’t cooperate, a stigma around those weapons develops,” says Slijper. “It has to start somewhere. It’s the only way to prevent the arms race."

https://www.c4isrnet.com/unmanned/2019/05/09/we-are-on-the-verge-of-a-no-win-ai-arms-race-warns-ngo/