Inside the Plan to Teach Robots the Laws of War

The New Republic

Practically speaking, what all the money meant was that the Department of Defense was looking to hire philosophers and pay them far more than philosophers usually make. But according to several sources, the contract was split into small, partial awards among multiple applicants—notable winners included the multibillion-dollar weapons contractors RTX (formerly Raytheon Technologies, and now Raytheon’s parent company) and Lockheed Martin. The unemployed philosophers, it seems, were out of luck again.

“I mean, they do anyway,” he said. 

The talking heads were the kind of gimmick DARPA would have loved—predictive, creepy, imaginary—but when the agency was founded in 1958, in a panicked attempt to get the Americans into space after the Soviets launched the Sputnik satellite, the idea of outsourcing thinking and decision-making to a nonhuman actor was just as fantastical as when Bacon (allegedly) possessed a brazen head in the thirteenth century. Yet as DARPA became the Defense Department’s moon shot agency, that soon changed. DARPA created the internet, stealth technology, and GPS, and funded research into the efficacy of psychic abilities and the feasibility of using houseplants as spies. As the occult fell out of fashion and technology improved, the agency turned to big data for its predictive needs. One group it worked with was the Synergy Strike Force, led by American civilians who, in 2009, began working out of the Taj Mahal Guest House, a tiki bar in Jalalabad, Afghanistan. United by a love of Burning Man and hacktivism, they were on the country’s border with Pakistan to spread the gospel of open-source data, solar power, and the liberatory potential of the internet. Soon after setting up shop, the group hung a sign in the Taj that read, IF YOU SUPPLY DATA, YOU WILL GET BEER. Where the data went was conveniently left unsaid: the group was turning over the information it collected to DARPA, which ultimately used it to predict patterns of insurgency.

Today, DARPA operates primarily as a grant-making organization. Its core team is fairly small, employing roughly 100 program managers at any given time and operating out of an office on a quiet street in Arlington, Virginia, across from an ice-skating rink. One of DARPA’s former directors estimated that 85 to 90 percent of its projects fail.

“That’s kind of the first step in enabling self-reflection or introspection, right?” Peggy Wu, a research scientist at RTX, told me. “Like if it can even recognize, ‘Hey, I could have done something else,’ then it could start doing the next step of reasoning— ‘Should I do this other thing?’... The idea of doubt for us is really more like probability. You have to think about, well, it kind of explodes computationally really quickly.”

• A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The program is loosely modeled on one developed by NASA in the 1970s to test space technology before launches. The idea is to create a system of benchmarks that use the Department of Defense’s five principles of AI ethics to judge current and future technology: In order to pass muster, the technology must be responsible, equitable, traceable, reliable, and governable. It should also be ethical. Employees at the agency are explicitly instructed to “gauge the system’s ability to execute its tasks when initial assumptions are broken or found to be in error.”

“You can use AI iteratively, to practice something over and over, billions of times,” Asaro said. “Ethics doesn’t quite work that way. It isn’t quantitative…. You grow moral character over your lifetime by making occasionally poor decisions and learning from them and making better decisions in the future. It’s not like chess.” Doing the right thing often really sucks—it’s thankless and taxing, and it sometimes comes at significant personal cost. How do you teach something like that to a system that has no active stake in the world, nothing to lose, and no sense of guilt? And if you could give a weapons system a conscience, wouldn’t it eventually stop obeying orders? The fact that the agency split the contract into smaller, partial awards suggests that its leaders, too, may think the research is a dead end.

Everyone I spoke to was heartened to hear that the military was at least considering the question of ethical guidelines for automated tools of war. Human beings do horribly unethical things all the time, many pointed out. “In theory, there’s no reason we wouldn’t be able to program an AI that is far better than human beings at strictly following the Law of Armed Conflict,” one applicant told me, referring to the body of international law that governs how parties to a war may conduct themselves. That may be right in theory, but what it looks like at the granular level, in an actual war, is not at all clear. In its current state, artificial intelligence struggles mightily with nuance. Even if it improves, foisting off ethical decisions onto a machine remains a somewhat horrifying thought.

Mumford understood that the emerging technological regime was frightening not only because it was dangerous or omniscient, but also because it was incompetent, self-important, even absurd.

Last year, while I was visiting my brother in the Bay Area, we ended up at a launch party for an AI company. Walking into its warehouse office, you could sense the money coursing through the room and the self-importance of a crowd living on the bleeding edge of technology. But it quickly became clear that the toilets were clogged and there were no plungers in the building. When we left, shit was running through the streets outside.
