Robots, drones, and similar contraptions


Meho Krljic:
Stopping killer robots and other future threats

--- Quote ---Only twice in history have nations come together to ban a weapon before it was ever used. In 1868, the Great Powers agreed under the Saint Petersburg Declaration to ban exploding bullets, which by spreading metal fragments inside a victim’s body could cause more suffering than the regular kind. And the 1995 Protocol on Blinding Laser Weapons now has 104 signatories, who have agreed to ban the weapons on the grounds that they could cause excessive suffering to soldiers in the form of permanent blindness.
Today a group of non-governmental organizations is working to outlaw another yet-to-be used device, the fully autonomous weapon or killer robot. In 2012 the group formed the Campaign to Stop Killer Robots to push for a ban. Different from the remotely piloted unmanned aerial vehicles in common use today, fully autonomous weapons are military robots designed to make strike decisions for themselves. Once deployed, they identify targets and attack them without any human permission. None currently exist, but China, Israel, Russia, the United Kingdom, and the United States are actively developing precursor technology, according to the campaign.
It’s important that the Campaign to Stop Killer Robots succeed, either at achieving an outright ban or at sparking debate resulting in some other sensible and effective regulation. This is vital not just to prevent fully autonomous weapons from causing harm; an effective movement will also show us how to proactively ban other future military technology.

Fully autonomous weapons are not unambiguously bad. They can reduce burdens on soldiers. Already, military robots are saving many service members’ lives, for example by neutralizing improvised explosive devices in Afghanistan and Iraq. The more capabilities military robots have, the more they can keep soldiers from harm. They may also be able to complete missions that soldiers and non-autonomous weapons cannot.
But the potential downsides are significant. Militaries might kill more if no individual has to bear the emotional burden of strike decisions. Governments might wage more wars if the cost to their soldiers were lower. Oppressive tyrants could turn fully autonomous weapons on their own people when human soldiers refused to obey. And the machines could malfunction—as all machines sometimes do—killing friend and foe alike.
Robots, moreover, could struggle to recognize unacceptable targets such as civilians and wounded combatants. The sort of advanced pattern recognition required to distinguish one person from another is relatively easy for humans, but difficult to program in a machine. Computers have outperformed humans in things like multiplication for a very long time, but despite great effort, their capacity for face and voice recognition remains crude. Technology would have to overcome this problem in order for robots to avoid killing the wrong people.
A government that deployed a weapon that struck civilians would violate international humanitarian law. This serves as a basis for the anti-killer robot campaign. The global humanitarian disarmament movement used similar arguments to achieve international bans on landmines and cluster munitions, and is making progress towards a ban on nuclear weapons.
If the Campaign to Stop Killer Robots succeeds, it will achieve a rare feat. It is no surprise that weapons are rarely banned before they are ever used, because doing so requires proactive effort, whereas people tend to be reactive. When a vivid, visceral event occurs, people are especially motivated to act. Hence concern about global warming spiked after Hurricane Katrina devastated New Orleans in 2005, and concern about nuclear power plant safety spiked after the 2011 Fukushima disaster.
The successful humanitarian campaigns against landmines and cluster munitions made very effective use of the many victims maimed by these weapons. The current humanitarian campaign against nuclear weapons similarly relies on the hibakusha—the victims of the 1945 Hiroshima and Nagasaki bombings—and victims of nuclear test detonations. The victims’ presence and their stories bring the issue to life in a way that abstract statistics and legal arguments cannot. Today there are no victims of fully autonomous weapons, so the campaign must be proactive rather than reactive, relying on expectations of future harm.
Protection from the dangers that could be caused by killer robots is a worthy end in its own right. However, the most important aspect of the Campaign to Stop Killer Robots is the precedent it sets as a forward-looking effort to protect humanity from emerging technologies that could permanently end civilization or cause human extinction. Developments in biotechnology, geoengineering, and artificial intelligence, among other areas, could be so harmful that responding may not be an option. The campaign against fully autonomous weapons is a test-case, a warm-up. Humanity must get good at proactively protecting itself from new weapon technologies, because we react to them at our own peril.
Editor's note: The views presented here are the author’s alone, and not those of the Global Catastrophic Risk Institute.

--- End quote ---

Meho Krljic:
Foxconn plans to replace most of its workers (the ones threatening to jump off roofs and complaining about inhuman working conditions) with robots within three years:

Foxconn expects robots to take over more factory work

--- Quote ---The electronics industry may still be reliant on human workers to assemble products, but Apple supplier Foxconn Technology Group is hopeful that robots will take over more of the workload soon.
In three years, Foxconn will probably use robots and automation to complete 70 percent of its assembly line work, said company CEO Terry Gou on Thursday in news footage circulated online.
Although the Taiwanese manufacturing giant employs over 1 million workers in mainland China, it has also been investing in robotics research. Previously Gou said he hoped to one day deploy a “robot army” at the company’s factories, as a way to offset labor costs and improve manufacturing.
Foxconn’s biggest client is Apple, but the two companies have faced criticism over labor conditions in China, following a string of worker suicides in 2010. Labor watchdog groups have complained that Foxconn workers have in the past faced long hours and harsh treatment from management.
Both Apple and Foxconn have vowed to improve the labor conditions. But increasingly, robots are replacing human workers at the manufacturing giant.
Last year, Gou said that the company already had a fully automated factory in the Chinese city of Chengdu that can run 24 hours a day with the lights off.
Gou declined to say more about the factory, or what it produced, but Foxconn has been adding 30,000 industrial robots to its facilities each year, he said in June.
Despite the increasing automation, certain Foxconn facilities in China still heavily rely on human workers. The factory in Zhengzhou, China, assembling Apple’s iPhone for instance, has been known to employ 300,000 workers.
On Thursday, Gou said his company needed to adopt more automation, due to the potential for labor shortages.
“I think in the future, young people won’t do this kind of work, and won’t enter the factories,” he said.
--- End quote ---

Meho Krljic:
Autonomous weapons, aka killer robots, will be discussed at the United Nations:
 Should robots make life/death decisions? UN to discuss lethal autonomous weapons next week
Of course, many are demanding that they be preemptively banned:
 UN urged to ban 'killer robots' before they can be developed
The Guardian article mentions this report:
 Mind the Gap
And here the report is digested and commented on:
 The ‘Killer Robots’ Accountability Gap

Meho Krljic:
A bit more on autonomous and semi-autonomous weapons, and on how, under today's definitions, they are actually rather hard to tell apart:
 Semi-autonomous and on their own: Killer robots in Plato’s Cave

--- Quote --- As China, Russia, the United States, and 115 other nations convene in Geneva for their second meeting on lethal autonomous weapons systems, that phrase still has no official definition. It was coined by the chair of last year’s meeting, who assured me that the word “lethal” did not mean that robots armed with Tasers and pepper spray would be okay, or that weapons meant only to attack non-human targets—such as vehicles, buildings, arms and the other matériel of war—would be off-topic. Rather, there was a sense that the talks should focus on weapons that apply physical “kinetic” force directly and avoid being drawn into ongoing debates about the legality of cyber weapons that disrupt computers and networks.
But consensus about definitions is not the main problem; the heart of the matter is the need to prevent the loss of human control over fateful decisions in human conflict. The US military has been grappling with this issue for decades, because it has long been possible to build weapons that can hunt and kill on their own. An example is the Low-Cost Autonomous Attack System, a small cruise missile that was intended to loiter above a battlefield searching for something that looked like a target, such as a tank, missile launcher, or personnel. The program was canceled in 2005, amid concerns about its reliability, controllability, and legality. But the winds have since shifted in favor of such weapons, with increasing amounts of money spent on research and development on robotic weapons that are able to seek out and hit targets on their own, without human intervention.
The problem in trying to nip such research and development in the bud is that whenever one proposes preventive arms control to block the development of new weaponry, “indefinable” is usually the first objection, closely followed by “unverifiable,” and finally, “it’s too late; everybody’s already got the weapons.” In fact, some autonomous weapons advocates have already jumped to the latter argument, pointing to existing and emerging weapons systems that make increasingly complex lethal decisions outside of human control. Does this mean that killer robots are already appearing on the scene? If so, shouldn’t this spur efforts to stop them?
The roadmap we’re using, and the road we’re on. Overriding longstanding resistance within the military, the Obama administration issued its Directive on Autonomy in Weapon Systems in 2012. This document instructs the Defense Department to develop, acquire, and use “autonomous and semi-autonomous weapons systems” and lays out a set of precautions for doing so: Systems should be thoroughly tested; tactics, techniques and procedures for their use should be spelled out; operators trained accordingly; and so forth.
These criteria are unremarkable in substance and arguably should apply to any weapons system. This directive remains the apparent policy of the United States and was presented to last year’s meeting on lethal autonomous weapons systems as an example for other nations to emulate.
The directive does contain one unusual requirement: Three senior military officials must certify that its precautionary criteria have been met—once before funding development, and again before fielding any new “lethal” or “kinetic” autonomous weapon. This requirement was widely misinterpreted as constituting a moratorium on autonomous weaponry; a headline from Wired reported it as a promise that “[a] human will always decide when a robot kills you.” But the Pentagon denies there is any moratorium, and the directive clearly indicates that officials can approve lethal autonomous weapons systems if they believe the criteria have been satisfied. It even permits a waiver of certification “in cases of urgent military operational need.”
More important, the certification requirement does not apply to any semi-autonomous systems, as the directive defines them, suggesting that these are of less concern. In effect, weapons that can be classified as semi-autonomous—including those intended to kill people—are given a green light for immediate development, acquisition, and use.
Understanding what this means requires a careful review of the definitions that are given, and of those that are not.
The directive defines an autonomous weapons system as one that “once activated, can select and engage targets without further intervention by a human operator.” This is actually quite helpful. Sweeping away arguments about free will and Kantian moral autonomy, versus machines being programmed by humans, it clarifies that “autonomous” just means that the system can act in the absence of further human intervention—even if a human is monitoring and could override the system. It also specifies the type of action that defines an autonomous weapons system as “select and engage targets,” and not, for example, rove around and conduct surveillance, or gossip with other robots.
A semi-autonomous weapons system, on the other hand, is defined by the directive as one that “is intended to only engage individual targets or specific target groups that have been selected by a human operator.” Other than target selection, semi-autonomous weapons are allowed to have every technical capability that a fully autonomous weapon might have, including the ability to seek, detect, identify and prioritize potential targets, and to engage selected targets with gunfire or a homing missile. Selection can even be done before the weapon begins to seek; in other words, it can be sent on a hunting mission.
Given this, it would seem important to be clear about whatever is left that the human operator must do. But the directive just defines “target selection” as “the determination that an individual target or a specific group of targets is to be engaged.”
That leaves a great deal unexplained. What does selection consist of? A commander making a decision? An operator delivering commands to a weapons system? The system telling a human it’s detected some targets, and getting a “Go”?
How may an individual target or specific group be specified as selected? By name? Type? Location? Physical description? Allegiance? Behavior? Urgency? If a weapon on a mission locates a group of targets, but can only attack one or some of them, how will it prioritize?
If these questions are left open, will their answers grow more permissive as the technology advances?
A war of words about the fog of war. In reality, the technological frontiers of the global robot arms race today fall almost entirely within the realm of systems that can be classified as semi-autonomous under the US policy. These include both stationary and mobile robots that are human-operated but may automate every step from acquiring, identifying, and prioritizing targets to aiming and firing a weapon. They also include missiles that, after being launched, can search for targets autonomously and decide on their own that they have found them.
A notorious example of the latter is the Long-Range Anti-Ship Missile, now entering production and slated for deployment in 2018. As depicted in a highly entertaining video released by Lockheed-Martin, this weapon can reroute around unexpected threats, search for an enemy fleet, identify the one ship it will attack among others in the vicinity, and plan its final approach to defeat antimissile systems—all out of contact with any human decision maker (but possibly in contact with other missiles, which can work together as a team).
At the Defense Advanced Research Projects Agency’s Robotics Challenge trials on the Homestead racetrack outside Miami in December 2013, I asked officials whether the long-range anti-ship missile was an autonomous weapons system—which would imply it should be subject to senior review and certification. They did not answer, but New York Times reporter John Markoff was later able to obtain confirmation that the Pentagon classifies the missile as merely “semi-autonomous.” What it would have to do to be considered fully autonomous remains unclear.
Following a series of exchanges with me on this subject, two prominent advocates for autonomous weapons, Paul Scharre and Michael Horowitz, explained that, in their view, semi-autonomous weapons would be better distinguished by saying that they are “intended to only engage individual targets or specific groups of targets that a human has decided are to be engaged.” Scharre was a principal author of the 2012 directive, but this updated definition is not official, little different from the old one, and clarifies very little.
After all, if the only criterion is that a human nominates the target, then even The Terminator (from the Hollywood movie of the same name) might qualify as semi-autonomous, provided it wasn’t taking its orders from an evil computer.
Scharre and Horowitz would like to “focus on the decision the human is making” and “not apply the word ‘decision’ to something the weapon itself is doing, which could raise murky issues of machine intelligence and free will.” Yet from an operational and engineering standpoint—and as a matter of common sense—machines do make decisions. Machine decisions may follow algorithms entirely programmed by humans, or may incorporate machine learning and data that have been acquired in environments and events that are not fully predictable. As with human decisions, machine decisions are sometimes clear-cut, but sometimes they must be made in the presence of uncertainty—as well as the possible presence of bugs, hacks, and spoofs.
An anti-ship missile that can supposedly distinguish enemy cruisers from hapless cruise liners must make a decision as it approaches an otherwise unknown ship. An antenna collects radar waveforms; a lens projects infrared light onto a sensor; the signals vary with aspect and may be degraded by conditions including weather and enemy countermeasures. Onboard computers apply signal processing and pattern recognition algorithms and compare with their databases to generate a score for a “criteria match.” The threshold for a lethal decision can be set high, but it cannot be 100 percent certainty, since that would ensure the missile never hits anything.
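The score-and-threshold logic described here can be sketched in a few lines. This is a purely illustrative toy (the scoring function, database, class names, and threshold are all invented for the example, not taken from any real system), but it shows the key point: the threshold can be set high, yet never at perfect certainty.

```python
# Toy sketch of threshold-based "criteria match" classification:
# a sensed signature is scored against reference signatures, and a
# lethal decision is authorized only above a confidence threshold.
# All names and numbers here are hypothetical.

def criteria_match(signature, reference):
    """Toy similarity score in [0, 1]; real systems would use signal
    processing and pattern recognition, not a mean absolute difference."""
    diffs = [abs(a - b) for a, b in zip(signature, reference)]
    return max(0.0, 1.0 - sum(diffs) / len(diffs))

def decide(signature, database, threshold=0.9):
    """Return the best-matching target class if its score clears the
    threshold, else None. The threshold can be high, but not 1.0:
    noisy, degraded sensor data never yields a perfect match."""
    best_class, best_score = None, 0.0
    for target_class, reference in database.items():
        score = criteria_match(signature, reference)
        if score > best_score:
            best_class, best_score = target_class, score
    return best_class if best_score >= threshold else None

database = {"cruiser": [0.9, 0.8, 0.7], "cruise_liner": [0.2, 0.3, 0.4]}
print(decide([0.88, 0.79, 0.72], database))  # clear match -> "cruiser"
print(decide([0.5, 0.5, 0.5], database))     # ambiguous   -> None
```

The second call is the troubling case: an ambiguous return falls below the threshold, but where that line is drawn, and how it behaves under countermeasures, is exactly the uncertainty the article describes.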
For killer robots, as for humans, there is no escape from Plato’s cave. In this famous allegory, humans are like prisoners in a cave, able to see only the shadows of things, which they think are the things themselves. So when a shooter (human or robot) physically aims a weapon at an image, it might be a mirage, or there might be something there, and that might be the enemy object that a human or computer has decided should be engaged. Which is the real target: the image aimed at, the intended military objective, or the person who actually gets shot? If a weapons system is operating autonomously, then the “individual” or “specific” target that a faraway human (perhaps) has selected becomes a Platonic ideal. The robot can only take aim at shadows.
Taking the mystery out of autonomy. Two conclusions can be drawn. One is that using weapons systems that autonomously seek, identify, and engage targets inevitably involves delegating fatal decisions to machines. At the very least, this is a partial abdication of the human responsibility to maintain control of violent force in human conflict.
The second conclusion is that, as my colleague Heather Roff has written, autonomous versus semi-autonomous weapons is “a distinction without a difference.” As I wrote in 2013, the line is fuzzy and broken. It will not hold against the advance of technology, as increasingly sophisticated systems make increasingly complicated decisions under the rubric of merely carrying out human intentions.
For the purposes of arms control, it may be better to reduce autonomy to a simple operational fact—an approach I call “autonomy without mystery.” This means that when a system operates without further human intervention, it should be considered an autonomous system. If it happens to be working as you intended, that doesn’t make it semi-autonomous. It just means it hasn’t malfunctioned (yet).
Some weapons systems that automate most, but not all, targeting and fire control functions may be called nearly autonomous. They are of concern as well: If they only need a human to say “Go,” they could be readily modified to not need that signal. Human control is not meaningful if operators just approve machine decisions and avoid accountability. As soon as a weapons system no longer needs further human intervention to complete an engagement, it becomes operationally autonomous.
Autonomy without mystery implies that many existing, militarily important weapons must be acknowledged as actually, operationally autonomous. For example, many missile systems use homing sensors or “seekers” that are out of range when the missiles are launched, and must therefore acquire and home in on their targets autonomously, later on in flight. The 2012 directive designates such weapons as “semi-autonomous,” which exempts them from the certification process while implicitly acknowledging their de facto autonomy.
Scharre and Horowitz point to the Nazi-era Wren torpedo, which could home on a ship’s propeller noise, as evidence that such systems have existed for so long that considering them as autonomous would imply that “this entire discussion is a lot of fuss for nothing.” But the fuss is not about the past, it is about the future. The Long-Range Anti-Ship Missile, with its onboard computers identifying target ships and planning how to attack, gives us a glimpse of one possible future. When some call it just a “next-generation precision-guided weapon,” we should worry about this next generation, and even more about the generations to come.
Follow your guiding stars, and know where you don’t want to go. One cannot plausibly propose to ban all existing operationally autonomous weapons, but there is no need to do so if the main goal is to avert a coming arms race. The simplest approach would be grandfathering; it is always possible to say “No guns allowed, except for antiques.” But a better approach would be to enumerate, describe, and delimit classes of weaponry that do meet the operational definition of autonomous weapons systems, but are not to be banned. They might be subjected to restrictions and regulations, or simply excluded from coverage under a new treaty.
For example, landmines and missiles that self-guide to geographic coordinates have been addressed by other treaties and could be excluded from consideration. Automated defenses against incoming munitions could be allowed but subjected to human supervision, range limitations, no-autonomous-return-fire, and other restrictions designed to block them from becoming autonomous offensive weapons.
Nearly autonomous weapons systems could be subjected to standards for ensuring meaningful human control. An accountable human operator could be required to take deliberate action whenever a decision must be made to proceed with engagement, including any choice between possible targets, and any decision to interpret sensor data as representing either a previously “selected” target or a valid new target. An encrypted record of the decision, and the data on which it was made, could be used to verify human control of the engagement.
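The verifiable-record idea could work roughly as follows. This is a hedged sketch, not a proposal from the article: it binds an operator's decision and the sensor data it was based on together with a keyed MAC, so later tampering is detectable. Key provisioning, field names, and formats are all assumptions introduced for illustration.

```python
import hashlib
import hmac
import json

# Illustrative sketch of a tamper-evident record of a human engagement
# decision: the operator's action and its supporting sensor data are
# serialized and authenticated with an HMAC. Hypothetical throughout.

SECRET_KEY = b"operator-station-key"  # would be securely provisioned

def record_decision(operator_id, action, sensor_data):
    """Create a record binding who decided, what they decided, and
    the data the decision was based on."""
    payload = json.dumps(
        {"operator": operator_id, "action": action, "sensor_data": sensor_data},
        sort_keys=True,
    )
    tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_decision(record):
    """Check that the record has not been altered since it was made."""
    expected = hmac.new(SECRET_KEY, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

rec = record_decision("op-17", "engage", {"track_id": 4, "score": 0.97})
print(verify_decision(rec))  # True: record is intact
rec["payload"] = rec["payload"].replace("engage", "abort")
print(verify_decision(rec))  # False: tampering detected
```

A real scheme would use asymmetric signatures and trusted timestamps so that the operator, not just the weapon system, can be held accountable; the HMAC here only illustrates the binding of decision to data.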
Autonomous hunter-killer weapons like the Long-Range Anti-Ship Missile and the canceled Low-Cost Autonomous Attack System should be banned outright; if any are permitted, they should be subject to strict quantitative and qualitative limits to cap their development and minimize their impact on arms race and crisis stability. No autonomous systems should ever be permitted to seek generic target classes, whether defined by physical characteristics, behavior or belligerent status, nor to prioritize targets based on the situation, lest they evolve into robotic soldiers, lieutenants, and generals—or in a civil context, police, judge, jury, and executioner all in one machine.
Negotiating such a treaty, under the recognition that autonomy is an operational fact, will involve bargaining and compromise. But at least the international community can avoid decades of dithering over the meaning of autonomy and searching for some logic to collapse the irreducible complexity of the issue—while technology sweeps past false distinctions and propels the world into an open-ended arms race.
Thinking about autonomous weapons systems should be guided by fundamental principles that should always guide humanity in conflict: human control, responsibility, dignity, sovereignty, and above all, common humanity, as the world faces threats to human survival that it can only overcome by global agreement.
In the end, where one draws the line is less important than that it is drawn somewhere. If the international community can agree on this, then the remaining details become a matter of common interest and old-fashioned horse trading.

--- End quote ---

May the force be Android

Google has patented the ability to control a robot army

