Author Topic: Robots, drones and similar contraptions  (Read 33741 times)


Gaff

Sum, ergo cogito, ergo dubito.


Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #2 on: 11-03-2014, 11:11:01 »
Holy shit. Were you aware that the Russians, in the Winter War with Finland, before they tangled with the Germans, used robotized, remote-controlled tanks??? Neither was I  :-? :-? :-? :-? :-?  And I thought Metal Gear Solid 3: Snake Eater was pure fiction.  :cry:


Tale of the Teletank: The Brief Rise and Long Fall of Russia’s Military Robots

Quote
Seventy-four years ago, Russia accomplished what no country had before, or has since—it sent armed ground robots into battle. These remote-controlled Teletanks took the field during one of WWII’s earliest and most obscure clashes, as Soviet forces pushed into Eastern Finland for roughly three and a half months, from 1939 to 1940. The Finns, by all accounts, were vastly outnumbered and outgunned, with exponentially fewer aircraft and tanks. But the Winter War, as it was later called (it began in late November, and ended in mid-March), wasn’t a swift, one-sided victory. As the more experienced Finnish troops dug in their heels, Russian advancement was proving slow and costly. So the Red Army sent in the robots.
Specifically, the Soviets deployed two battalions of Teletanks, most of them existing T-26 light tanks stuffed with hydraulics and wired for radio control. Operators could pilot the unmanned vehicle from more than a kilometer away, punching at rows of dedicated buttons (no thumbsticks or D-pads to be found) to steer the tank or fire on targets with a machine gun or flame thrower. And the Teletank had the barest minimum of autonomous functionality: if it wandered out of radio range, the tank would come to a stop after a half-minute, and sit, engine idling, until contact was reestablished.
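The loss-of-link behavior described above is essentially a communications watchdog. Here is a minimal sketch in Python, assuming the roughly 30-second timeout from the article; the class and method names are mine, purely for illustration:

Code:
import time

CONTACT_TIMEOUT_S = 30.0  # per the article: the tank halts after about half a minute

class TeletankLink:
    """Sketch of the Teletank's loss-of-link rule: on prolonged radio
    silence, stop and idle; resume once contact is re-established."""

    def __init__(self):
        self.last_contact = time.monotonic()
        self.halted = False

    def on_radio_command(self, command):
        # any received command counts as contact
        self.last_contact = time.monotonic()
        self.halted = False
        self.execute(command)

    def tick(self):
        # called periodically by the control loop
        if time.monotonic() - self.last_contact > CONTACT_TIMEOUT_S:
            self.halted = True
            self.stop_and_idle()

    def execute(self, command):
        pass  # hypothetical: steer, or fire the machine gun / flame thrower

    def stop_and_idle(self):
        pass  # hypothetical: brake to a stop, keep the engine idling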
Notably missing, though, was any sort of remote sensing capability—the Teletank couldn’t relay sound or imagery back to its human driver, most often located in a fully-crewed T-26 trailing behind the mechanized one. This was robotic teleoperation at its most crude, and made for halting, imprecise maneuvering across uneven terrain.
What good was the Teletank, then? Though records are sparse, the unmanned tanks appear to have been used in combat, including during the Battle of Summa, an extended, two-part engagement that eventually forced a Finnish retreat. The Teletank’s primary role was to throw fire without fear, offsetting its lack of accuracy with gouts of flame.
On March 13, 1940, Finland and the USSR signed a treaty in Moscow, ending the Winter War. It was the end of the Teletank, as well—in the wider, even more brutal conflict to come, the T-26 was obsolete in practically every way, lacking the armor and armament to stand up to German tanks, or even to antitank weapons fielded by the Finnish. With no additional units built after 1940, the T-26 was a dead design rolling, and the remote-controlled version was just as doomed.
For a few months, nearly three quarters of a century ago, Russia led the world in military robotics. It’s a position the country would never hold again, as both Soviet and post-breakup forces have all but abandoned the development of armed ground and aerial bots. Even as recently as 2008, during its conflict with Georgia—triggered, in part, by the downing of Georgian reconnaissance drones—Russian drones were all but absent, and its air strikes were entirely manned. While Russia hasn’t shied away from open warfare, it hasn’t made robots a battlefield priority.
Until recently, that is. A number of Russian-based aircraft makers have won contracts in the past few years to build combat drones, including a 5-ton model originally slated for testing this year, and a 20-ton model planned for 2018. Military officials now hope to have strike drone capability by 2020.
And while there’s no evidence that it will ever be deployed, Russia is, in fact, home to a gun-wielding ground drone. The MRK-27 BT, built by the Moscow Bauman Technical University and first unveiled in 2009, is a tracked weapon platform, armed with a machine gun and paired grenade launchers and flame throwers. Most likely, it will go the way of MAARS, SWORDS, MULE, and other imposing ground combat bots—which is to say, nowhere. So far, the Teletank is an anomaly among robotic weapons, a precursor with no real descendants. Or none, luckily, with any confirmed kills.

Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #3 on: 26-06-2014, 09:27:25 »
It's now official: the FAA has banned (Amazon's) commercial use of drones for delivering goods:


FAA grounds Amazon’s drone delivery plans

Quote
The Federal Aviation Administration has said that online shopping powerhouse Amazon may not employ drones to deliver packages, at least not anytime soon.
The revelation was buried in an FAA document (PDF) unveiled Monday seeking public comment on its policy on drones, or what the agency calls "model aircraft."
The FAA has maintained since at least 2007 that the commercial operation of drones is illegal. A federal judge ruled in March, however, that the FAA enacted the regulations illegally because it did not take public input before adopting the rules, which is a violation of federal law. Flight regulators have appealed the decision, maintaining that commercial applications are still barred.
The agency has promised that it would revisit the commercial application of small drones later this year, with potential new rules in place perhaps by the end of 2015. But for now, the agency is taking a hard line against the commercial use of drones, and it's unclear whether that policy would change.
Brendan Schulman, the New York lawyer who convinced a federal judge to declare that the FAA is illegally enforcing a commercial ban on drones, lashed out at the FAA's latest attack on them. "It's a purported new legal basis telling people to stop operating model aircraft for business purposes," he said.
In Monday's announcement, published in the Federal Register, the FAA named Amazon's December proposal as an example of what is barred under regulations that allow the use of drones for hobby and recreational purposes. The agency did not mention Amazon Prime Air by name, but it didn't have to.
Under a graphic listing what is barred, the FAA mentioned "Delivering of packages to people for a fee." A footnote added, "If an individual offers free shipping in association with a purchase or other offer, FAA would construe the shipping to be in furtherance of a business purpose, and thus, the operation would not fall within the statutory requirement of recreation or hobby purpose."
Amazon has had its fingers crossed that the agency would change course. But for now, given the FAA's announcement Monday, the online shopping behemoth accepts that its delivery methods won't include drones anytime soon. "Putting Prime Air into commercial use will take some number of years as we advance the technology and wait for the necessary FAA rules and regulations," Amazon has said.
The FAA document, which comes amid some dangerous incidents involving ground-operated drones, contained a small laundry list of examples of what types of commercial applications are barred, including:
 
  • Determining whether crops need to be watered that are grown as part of a commercial farming operation
  • A person photographing a property or event and selling the photos to someone else
  • A realtor using a model aircraft to photograph a property that he is trying to sell and using the photos in the property's real estate listing
  • Receiving money for demonstrating aerobatics with a model aircraft.
Last week, the National Park Service barred all drone flights from its parks. Regulators' attacks on the commercial use of drones have included everything from drone journalism to a nonprofit search-and-rescue outfit using drones.

Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #4 on: 10-11-2014, 12:14:14 »
Robot makes people feel like a ghost is nearby



Quote
In 2006, cognitive neuroscientist Olaf Blanke of the University of Geneva in Switzerland was testing a patient’s brain functions before her epilepsy surgery when he noticed something strange. Every time he electrically stimulated the region of her brain responsible for integrating different sensory signals from the body, the patient would glance back over her shoulder as if a person were there, even though she knew full well that no one was actually present.
Now, with the help of robots, Blanke and colleagues have not only found a neurological explanation for this illusion, but also tricked healthy people into sensing “ghosts,” they report online today in Current Biology. The study could help explain why schizophrenia patients sometimes hallucinate that aliens control their movements.
“It’s very difficult to try to understand the mechanisms involved in something so strange,” says cognitive neuroscientist Henrik Ehrsson of the Karolinska Institute in Stockholm, who was not involved with the study. “It’s very encouraging, very impressive, the way this team is making science out of this question.”
Ghosts and apparitions are a common theme in literature and religion. In real life, patients suffering from schizophrenia and epilepsy sometimes report sensing a presence near them. After studying such cases, Blanke found some striking similarities in how epilepsy patients perceive these eerie “apparitions,” he says. Almost all patients said the presence felt like a human being positioned right behind their back, almost touching them, with malicious intentions. Patients with brain damage on the left hemisphere felt the ghost at their right side, and vice versa.
To pinpoint the brain regions responsible for such illusions, Blanke and colleagues compared brain damage in two groups of patients. The first group, mostly epilepsy patients, all reported feeling ghostly presences near them. The other group matched them in terms of the severity of their neurological illnesses and hallucinations, but didn’t perceive any ghostly presence. Brain imaging revealed that patients who sensed the “ghosts” had lesions in their frontoparietal cortex, a brain region that controls movements and integrates sensorimotor signals from the body—such as the “smack” and pain accompanying a punch—into a coherent picture.
The researchers suspected that damage to this region could have disrupted how the brain integrates various sensory and motor signals to create a coherent representation of the body. That may have led the patients to mistakenly feel that someone else, not they themselves, was creating sensations like touch.
So the team built a robot to test their theory on healthy people. The machine consisted of two electrically interconnected robotic arms positioned in front of and behind a participant, respectively. The smaller arm in front had a slot where participants could insert their right index fingers and poke around. The poking motion triggered the bigger arm at the back to poke the participants at different positions on their backs, following the movement of their fingers. During the experiments, the participants wore blindfolds and headphones so that they would concentrate on what they felt. They were told that only the robot was poking them at the back, but unbeknownst to them, the back-poking was sometimes synchronized with their finger movements, and sometimes delayed by half a second.
When the participants reported how they felt, a clear pattern emerged. If the back-poking was in sync with the participants’ finger movements, they felt as if they were touching their backs with their own fingers. But when the back-poking was out of sync, a third of the participants felt as if someone else was touching them. The sensation was so spooky that two participants actually asked the researchers to stop the experiment.
To verify the response, the researchers conducted another study in which four researchers stood in the room. Participants were told that while they were blindfolded and operating the machine, some experimenters might approach them without actually touching them. The researchers told participants to estimate the number of people close to them at regular intervals. In reality, no researcher ever approached the participants. Yet people who experienced a delayed touch on their back felt more strongly that other people were close to them, counting up to four people when none existed.
The researchers suspect that when participants poked their fingers in the finger slot, their brains expected to feel a touch on the back right away. The delay created a mismatch between the brain’s expectations and the actual sensory signals it received, which disrupted how the brain integrated the signals to create a representation of the body, and thus created the illusion that another human being was touching them.
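As a toy model of that timing logic: the back-poke either mirrors the finger movement immediately or lags by half a second, and the percept flips once the lag exceeds a self-attribution window. The 100 ms window below is my assumption for illustration, not a figure from the study:

Code:
DELAY_S = 0.5  # the delayed condition in the experiment

def back_poke_time(finger_poke_time, delayed):
    # the rear arm mirrors the finger poke, immediately or with a lag
    return finger_poke_time + (DELAY_S if delayed else 0.0)

def predicted_percept(lag_s, self_attribution_window_s=0.1):
    # assumption: touch arriving shortly after the motor command is
    # attributed to one's own action; later touch feels externally caused
    if lag_s <= self_attribution_window_s:
        return "touching my own back"
    return "someone else is touching me"

print(predicted_percept(back_poke_time(0.0, delayed=False)))  # self-touch
print(predicted_percept(back_poke_time(0.0, delayed=True)))   # eerie presence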
The findings could help scientists understand the hallucinations of schizophrenia patients, Blanke says. Scientists have long hypothesized that patients hear alien voices or feel that they are not controlling their own bodies because their brains fail to integrate bodily signals properly.
The researchers are now building an MRI-equipped robot system to study what exactly happens in healthy people’s brains when they feel the ghostly presence and to test how schizophrenia patients would react to the mismatched pokes.

Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #5 on: 26-02-2015, 10:25:18 »
Stopping killer robots and other future threats



Quote
Only twice in history have nations come together to ban a weapon before it was ever used. In 1868, the Great Powers agreed under the Saint Petersburg Declaration to ban exploding bullets, which by spreading metal fragments inside a victim’s body could cause more suffering than the regular kind. And the 1995 Protocol on Blinding Laser Weapons now has 104 signatories, who have agreed to ban the weapons on the grounds that they could cause excessive suffering to soldiers in the form of permanent blindness.
Today a group of non-governmental organizations is working to outlaw another yet-to-be used device, the fully autonomous weapon or killer robot. In 2012 the group formed the Campaign to Stop Killer Robots to push for a ban. Different from the remotely piloted unmanned aerial vehicles in common use today, fully autonomous weapons are military robots designed to make strike decisions for themselves. Once deployed, they identify targets and attack them without any human permission. None currently exist, but China, Israel, Russia, the United Kingdom, and the United States are actively developing precursor technology, according to the campaign.
It’s important that the Campaign to Stop Killer Robots succeed, either at achieving an outright ban or at sparking debate resulting in some other sensible and effective regulation. This is vital not just to prevent fully autonomous weapons from causing harm; an effective movement will also show us how to proactively ban other future military technology.


Fully autonomous weapons are not unambiguously bad. They can reduce burdens on soldiers. Already, military robots are saving many service members’ lives, for example by neutralizing improvised explosive devices in Afghanistan and Iraq. The more capabilities military robots have, the more they can keep soldiers from harm. They may also be able to complete missions that soldiers and non-autonomous weapons cannot.
But the potential downsides are significant. Militaries might kill more if no individual has to bear the emotional burden of strike decisions. Governments might wage more wars if the cost to their soldiers were lower. Oppressive tyrants could turn fully autonomous weapons on their own people when human soldiers refused to obey. And the machines could malfunction—as all machines sometimes do—killing friend and foe alike.
Robots, moreover, could struggle to recognize unacceptable targets such as civilians and wounded combatants. The sort of advanced pattern recognition required to distinguish one person from another is relatively easy for humans, but difficult to program in a machine. Computers have outperformed humans in things like multiplication for a very long time, but despite great effort, their capacity for face and voice recognition remains crude. Technology would have to overcome this problem in order for robots to avoid killing the wrong people.
A government that deployed a weapon that struck civilians would violate international humanitarian law. This serves as a basis for the anti-killer robot campaign. The global humanitarian disarmament movement used similar arguments to achieve international bans on landmines and cluster munitions, and is making progress towards a ban on nuclear weapons.
If the Campaign to Stop Killer Robots succeeds, it will achieve a rare feat. It is no surprise that weapons are rarely banned before they are ever used, because doing so requires proactive effort, whereas people tend to be reactive. When a vivid, visceral event occurs, people are especially motivated to act. Hence concern about global warming spiked after Hurricane Katrina devastated New Orleans in 2005, and concern about nuclear power plant safety spiked after the 2011 Fukushima disaster.
The successful humanitarian campaigns against landmines and cluster munitions made very effective use of the many victims maimed by these weapons. The current humanitarian campaign against nuclear weapons similarly relies on the hibakusha—the victims of the 1945 Hiroshima and Nagasaki bombings—and victims of nuclear test detonations. The victims’ presence and their stories bring the issue to life in a way that abstract statistics and legal arguments cannot. Today there are no victims of fully autonomous weapons, so the campaign must be proactive rather than reactive, relying on expectations of future harm.
Protection from the dangers that could be caused by killer robots is a worthy end in its own right. However, the most important aspect of the Campaign to Stop Killer Robots is the precedent it sets as a forward-looking effort to protect humanity from emerging technologies that could permanently end civilization or cause human extinction. Developments in biotechnology, geoengineering, and artificial intelligence, among other areas, could be so harmful that responding may not be an option. The campaign against fully autonomous weapons is a test-case, a warm-up. Humanity must get good at proactively protecting itself from new weapon technologies, because we react to them at our own peril.
Editor's note: The views presented here are the author’s alone, and not those of the Global Catastrophic Risk Institute.

Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #6 on: 03-03-2015, 10:19:41 »
Foxconn plans to replace most of its workers (the ones who keep threatening to jump off the roofs and complaining about inhuman working conditions) with robots within three years:


Foxconn expects robots to take over more factory work



Quote
The electronics industry may still be reliant on human workers to assemble products, but Apple supplier Foxconn Technology Group is hopeful that robots will take over more of the workload soon.
In three years, Foxconn will probably use robots and automation to complete 70 percent of its assembly line work, said company CEO Terry Gou on Thursday in news footage circulated online.
Although the Taiwanese manufacturing giant employs over 1 million workers in mainland China, it has also been investing in robotics research. Previously, Gou said he hoped to one day deploy a “robot army” at the company’s factories, as a way to offset labor costs and improve manufacturing.
Foxconn’s biggest client is Apple, but the two companies have faced criticism over labor conditions in China, following a string of worker suicides in 2010. Labor watchdog groups have complained that Foxconn workers have in the past faced long hours and harsh treatment from management.
Both Apple and Foxconn have vowed to improve the labor conditions. But increasingly, robots are replacing human workers at the manufacturing giant.
Last year, Gou said that the company already had a fully automated factory in the Chinese city of Chengdu that can run 24 hours a day with the lights off.
Gou declined to say more about the factory, or what it produced, but Foxconn has been adding 30,000 industrial robots to its facilities each year, he said in June.
Despite the increasing automation, certain Foxconn facilities in China still rely heavily on human workers. The factory in Zhengzhou, China, that assembles Apple’s iPhone, for instance, has been known to employ 300,000 workers.
On Thursday, Gou said his company needed to adopt more automation, due to the potential for labor shortages.
“I think in the future, young people won’t do this kind of work, and won’t enter the factories,” he said.
 

Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #7 on: 13-04-2015, 07:18:35 »
Autonomous weapons, aka killer robots, will be discussed at the United Nations:
 
 Should robots make life/death decisions? UN to discuss lethal autonomous weapons next week
 
Naturally, many are demanding that they be preemptively banned:
 
 UN urged to ban 'killer robots' before they can be developed
 
The Guardian article mentions this report:
 
 Mind the Gap
And here the report is digested and commented on:
 
 The ‘Killer Robots’ Accountability Gap

Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #8 on: 15-04-2015, 06:31:34 »
A bit more on autonomous and semi-autonomous weapons, and on how, under today's definitions, they are actually rather hard to tell apart:
 
 Semi-autonomous and on their own: Killer robots in Plato’s Cave
 
Quote
As China, Russia, the United States, and 115 other nations convene in Geneva for their second meeting on lethal autonomous weapons systems, that phrase still has no official definition. It was coined by the chair of last year’s meeting, who assured me that the word “lethal” did not mean that robots armed with Tasers and pepper spray would be okay, or that weapons meant only to attack non-human targets—such as vehicles, buildings, arms and the other matériel of war—would be off-topic. Rather, there was a sense that the talks should focus on weapons that apply physical “kinetic” force directly and avoid being drawn into ongoing debates about the legality of cyber weapons that disrupt computers and networks.
But consensus about definitions is not the main problem; the heart of the matter is the need to prevent the loss of human control over fateful decisions in human conflict. The US military has been grappling with this issue for decades, because it has long been possible to build weapons that can hunt and kill on their own. An example is the Low-Cost Autonomous Attack System, a small cruise missile that was intended to loiter above a battlefield searching for something that looked like a target, such as a tank, missile launcher, or personnel. The program was canceled in 2005, amid concerns about its reliability, controllability, and legality. But the winds have since shifted in favor of such weapons, with increasing amounts of money spent on research and development on robotic weapons that are able to seek out and hit targets on their own, without human intervention.
The problem in trying to nip such research and development in the bud is that whenever one proposes preventive arms control to block the development of new weaponry, “indefinable” is usually the first objection, closely followed by “unverifiable,” and finally, “it’s too late; everybody’s already got the weapons.” In fact, some autonomous weapons advocates have already jumped to the latter argument, pointing to existing and emerging weapons systems that make increasingly complex lethal decisions outside of human control. Does this mean that killer robots are already appearing on the scene? If so, shouldn’t this spur efforts to stop them?
The roadmap we’re using, and the road we’re on. Overriding longstanding resistance within the military, the Obama administration issued its Directive on Autonomy in Weapon Systems in 2012. This document instructs the Defense Department to develop, acquire, and use “autonomous and semi-autonomous weapons systems” and lays out a set of precautions for doing so: Systems should be thoroughly tested; tactics, techniques and procedures for their use should be spelled out; operators trained accordingly; and so forth.
These criteria are unremarkable in substance and arguably should apply to any weapons system. This directive remains the apparent policy of the United States and was presented to last year’s meeting on lethal autonomous weapons systems as an example for other nations to emulate.
The directive does contain one unusual requirement: Three senior military officials must certify that its precautionary criteria have been met—once before funding development, and again before fielding any new “lethal” or “kinetic” autonomous weapon. This requirement was widely misinterpreted as constituting a moratorium on autonomous weaponry; a headline from Wired reported it as a promise that “[a] human will always decide when a robot kills you.” But the Pentagon denies there is any moratorium, and the directive clearly indicates that officials can approve lethal autonomous weapons systems if they believe the criteria have been satisfied. It even permits a waiver of certification “in cases of urgent military operational need.”
More important, the certification requirement does not apply to any semi-autonomous systems, as the directive defines them, suggesting that these are of less concern. In effect, weapons that can be classified as semi-autonomous—including those intended to kill people—are given a green light for immediate development, acquisition, and use.
Understanding what this means requires a careful review of the definitions given, and also those not.
The directive defines an autonomous weapons system as one that “once activated, can select and engage targets without further intervention by a human operator.” This is actually quite helpful. Sweeping away arguments about free will and Kantian moral autonomy, versus machines being programmed by humans, it clarifies that “autonomous” just means that the system can act in the absence of further human intervention—even if a human is monitoring and could override the system. It also specifies the type of action that defines an autonomous weapons system as “select and engage targets,” and not, for example, rove around and conduct surveillance, or gossip with other robots.
A semi-autonomous weapons system, on the other hand, is defined by the directive as one that “is intended to only engage individual targets or specific target groups that have been selected by a human operator.” Other than target selection, semi-autonomous weapons are allowed to have every technical capability that a fully autonomous weapon might have, including the ability to seek, detect, identify and prioritize potential targets, and to engage selected targets with gunfire or a homing missile. Selection can even be done before the weapon begins to seek; in other words, it can be sent on a hunting mission.
Given this, it would seem important to be clear about whatever is left that the human operator must do. But the directive just defines “target selection” as “the determination that an individual target or a specific group of targets is to be engaged.”
That leaves a great deal unexplained. What does selection consist of? A commander making a decision? An operator delivering commands to a weapons system? The system telling a human it’s detected some targets, and getting a “Go”?
How may an individual target or specific group be specified as selected? By name? Type? Location? Physical description? Allegiance? Behavior? Urgency? If a weapon on a mission locates a group of targets, but can only attack one or some of them, how will it prioritize?
If these questions are left open, will their answers grow more permissive as the technology advances?
A war of words about the fog of war. In reality, the technological frontiers of the global robot arms race today fall almost entirely within the realm of systems that can be classified as semi-autonomous under the US policy. These include both stationary and mobile robots that are human-operated but may automate every step from acquiring, identifying, and prioritizing targets to aiming and firing a weapon. They also include missiles that, after being launched, can search for targets autonomously and decide on their own that they have found them.
A notorious example of the latter is the Long-Range Anti-Ship Missile, now entering production and slated for deployment in 2018. As depicted in a highly entertaining video released by Lockheed-Martin, this weapon can reroute around unexpected threats, search for an enemy fleet, identify the one ship it will attack among others in the vicinity, and plan its final approach to defeat antimissile systems—all out of contact with any human decision maker (but possibly in contact with other missiles, which can work together as a team).
At the Defense Advanced Research Projects Agency’s Robotics Challenge trials on the Homestead racetrack outside Miami in December 2013, I asked officials whether the long-range anti-ship missile was an autonomous weapons system—which would imply it should be subject to senior review and certification. They did not answer, but New York Times reporter John Markoff was later able to obtain confirmation that the Pentagon classifies the missile as merely “semi-autonomous.” What it would have to do to be considered fully autonomous remains unclear.
Following a series of exchanges with me on this subject, two prominent advocates for autonomous weapons, Paul Scharre and Michael Horowitz, explained that, in their view, semi-autonomous weapons would be better distinguished by saying that they are “intended to only engage individual targets or specific groups of targets that a human has decided are to be engaged.” Scharre was a principal author of the 2012 directive, but this updated definition is not official, differs little from the old one, and clarifies very little.
After all, if the only criterion is that a human nominates the target, then even The Terminator (from the Hollywood movie of the same name) might qualify as semi-autonomous, provided it wasn’t taking its orders from an evil computer.
Scharre and Horowitz would like to “focus on the decision the human is making” and “not apply the word ‘decision’ to something the weapon itself is doing, which could raise murky issues of machine intelligence and free will.” Yet from an operational and engineering standpoint—and as a matter of common sense—machines do make decisions. Machine decisions may follow algorithms entirely programmed by humans, or may incorporate machine learning and data that have been acquired in environments and events that are not fully predictable. As with human decisions, machine decisions are sometimes clear-cut, but sometimes they must be made in the presence of uncertainty—as well as the possible presence of bugs, hacks, and spoofs.
An anti-ship missile that can supposedly distinguish enemy cruisers from hapless cruise liners must make a decision as it approaches an otherwise unknown ship. An antenna collects radar waveforms; a lens projects infrared light onto a sensor; the signals vary with aspect and may be degraded by conditions including weather and enemy countermeasures. Onboard computers apply signal processing and pattern recognition algorithms and compare with their databases to generate a score for a “criteria match.” The threshold for a lethal decision can be set high, but it cannot be 100 percent certainty, since that would ensure the missile never hits anything.
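To make the "criteria match" point concrete, here is a minimal sketch, using cosine similarity as a stand-in for the onboard pattern-recognition score; the fusion weights and threshold are invented for illustration, not taken from any real weapon:

Code:
import numpy as np

ENGAGE_THRESHOLD = 0.9  # must be below 1.0, or the missile never engages anything

def match_score(observed, template):
    # cosine similarity between an observed signature and a stored template;
    # real signals are degraded by aspect, weather, and countermeasures
    return float(np.dot(observed, template) /
                 (np.linalg.norm(observed) * np.linalg.norm(template)))

def lethal_decision(radar_obs, ir_obs, radar_tmpl, ir_tmpl, w_radar=0.6):
    # fuse the two sensor scores; the weighting is an assumption
    score = w_radar * match_score(radar_obs, radar_tmpl) + \
            (1 - w_radar) * match_score(ir_obs, ir_tmpl)
    return score >= ENGAGE_THRESHOLD

Raising ENGAGE_THRESHOLD trades missed targets for fewer misidentifications; no setting eliminates both, which is exactly the point of the paragraph above.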
For killer robots, as for humans, there is no escape from Plato’s cave. In this famous allegory, humans are like prisoners in a cave, able to see only the shadows of things, which they think are the things themselves. So when a shooter (human or robot) physically aims a weapon at an image, it might be a mirage, or there might be something there, and that might be the enemy object that a human or computer has decided should be engaged. Which is the real target: the image aimed at, the intended military objective, or the person who actually gets shot? If a weapons system is operating autonomously, then the “individual” or “specific” target that a faraway human (perhaps) has selected becomes a Platonic ideal. The robot can only take aim at shadows.
Taking the mystery out of autonomy. Two conclusions can be drawn. One is that using weapons systems that autonomously seek, identify, and engage targets inevitably involves delegating fatal decisions to machines. At the very least, this is a partial abdication of the human responsibility to maintain control of violent force in human conflict.
The second conclusion is that, as my colleague Heather Roff has written, autonomous versus semi-autonomous weapons is “a distinction without a difference.” As I wrote in 2013, the line is fuzzy and broken. It will not hold against the advance of technology, as increasingly sophisticated systems make increasingly complicated decisions under the rubric of merely carrying out human intentions.
For the purposes of arms control, it may be better to reduce autonomy to a simple operational fact—an approach I call “autonomy without mystery.” This means that when a system operates without further human intervention, it should be considered an autonomous system. If it happens to be working as you intended, that doesn’t make it semi-autonomous. It just means it hasn’t malfunctioned (yet).
Some weapons systems that automate most, but not all, targeting and fire control functions may be called nearly autonomous. They are of concern as well: If they only need a human to say “Go,” they could be readily modified to not need that signal. Human control is not meaningful if operators just approve machine decisions and avoid accountability. As soon as a weapons system no longer needs further human intervention to complete an engagement, it becomes operationally autonomous.
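Read operationally, the distinction reduces to a single test. A sketch of that test, with category names of my own, not the directive's:

Code:
from enum import Enum

class Autonomy(Enum):
    OPERATIONALLY_AUTONOMOUS = 1  # completes an engagement with no further human input
    NEARLY_AUTONOMOUS = 2         # automates most functions but waits for a human "Go"
    HUMAN_OPERATED = 3

def classify(completes_engagement_without_human, automates_most_targeting):
    # "autonomy without mystery": the only question that matters is whether
    # the system still needs further human intervention to complete an engagement
    if completes_engagement_without_human:
        return Autonomy.OPERATIONALLY_AUTONOMOUS
    if automates_most_targeting:
        return Autonomy.NEARLY_AUTONOMOUS
    return Autonomy.HUMAN_OPERATED

# e.g. a fire-and-forget missile that acquires its target after launch:
print(classify(True, True))  # Autonomy.OPERATIONALLY_AUTONOMOUS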
Autonomy without mystery implies that many existing, militarily important weapons must be acknowledged as actually, operationally autonomous. For example, many missile systems use homing sensors or “seekers” that are out of range when the missiles are launched, and must therefore acquire and home in on their targets autonomously, later on in flight. The 2012 directive designates such weapons as “semi-autonomous,” which exempts them from the certification process while implicitly acknowledging their de facto autonomy.
Scharre and Horowitz point to the Nazi-era Wren torpedo, which could home on a ship’s propeller noise, as evidence that such systems have existed for so long that considering them as autonomous would imply that “this entire discussion is a lot of fuss for nothing.” But the fuss is not about the past, it is about the future. The Long-Range Anti-Ship Missile, with its onboard computers identifying target ships and planning how to attack, gives us a glimpse of one possible future. When some call it just a “next-generation precision-guided weapon,” we should worry about this next generation, and even more about the generations to come.
Follow your guiding stars, and know where you don’t want to go. One cannot plausibly propose to ban all existing operationally autonomous weapons, but there is no need to do so if the main goal is to avert a coming arms race. The simplest approach would be grandfathering; it is always possible to say “No guns allowed, except for antiques.” But a better approach would be to enumerate, describe, and delimit classes of weaponry that do meet the operational definition of autonomous weapons systems, but are not to be banned. They might be subjected to restrictions and regulations, or simply excluded from coverage under a new treaty.
For example, landmines and missiles that self-guide to geographic coordinates have been addressed by other treaties and could be excluded from consideration. Automated defenses against incoming munitions could be allowed but subjected to human supervision, range limitations, no-autonomous-return-fire, and other restrictions designed to block them from becoming autonomous offensive weapons.
Nearly autonomous weapons systems could be subjected to standards for ensuring meaningful human control. An accountable human operator could be required to take deliberate action whenever a decision must be made to proceed with engagement, including any choice between possible targets, and any decision to interpret sensor data as representing either a previously “selected” target or a valid new target. An encrypted record of the decision, and the data on which it was made, could be used to verify human control of the engagement.
Autonomous hunter-killer weapons like the Long-Range Anti-Ship Missile and the canceled Low-Cost Autonomous Attack System should be banned outright; if any are permitted, they should be subject to strict quantitative and qualitative limits to cap their development and minimize their impact on arms race and crisis stability. No autonomous systems should ever be permitted to seek generic target classes, whether defined by physical characteristics, behavior or belligerent status, nor to prioritize targets based on the situation, lest they evolve into robotic soldiers, lieutenants, and generals—or in a civil context, police, judge, jury, and executioner all in one machine.
Negotiating such a treaty, under the recognition that autonomy is an operational fact, will involve bargaining and compromise. But at least the international community can avoid decades of dithering over the meaning of autonomy and searching for some logic to collapse the irreducible complexity of the issue—while technology sweeps past false distinctions and propels the world into an open-ended arms race.
Thinking about autonomous weapons systems should be guided by fundamental principles that should always guide humanity in conflict: human control, responsibility, dignity, sovereignty, and above all, common humanity, as the world faces threats to human survival that it can only overcome by global agreement.
In the end, where one draws the line is less important than that it is drawn somewhere. If the international community can agree on this, then the remaining details become a matter of common interest and old-fashioned horse trading.
 

PTY

Re: Robots, drones and similar contraptions
« Reply #9 on: 15-04-2015, 08:18:55 »

džin tonik

Re: Robots, drones and similar contraptions
« Reply #10 on: 15-04-2015, 12:16:31 »
allocating tasks to a plurality of robotic devices. Plus, if 412 and part of 410 were human brain only, without this flawed periphery, ah, there would be no stopping us...

Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #11 on: 17-04-2015, 09:27:45 »
Drones in a swarm:



LOCUST

Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #12 on: 28-04-2015, 09:35:18 »
Tiny robots that can carry a hundred times their own mass:


Super-strong robot pulls heavy loads



Built by people at Stanford. Big Hero 6 apparently isn't mere fiction  :lol: :lol: :lol:


Tiny robots climb walls carrying more than 100 times their weight



Quote
Mighty things come in small packages. The little robots in this video can haul things that weigh over 100 times more than themselves.
The super-strong bots – built by mechanical engineers at Stanford University in California – will be presented next month at the International Conference on Robotics and Automation in Seattle, Washington.
The secret is in the adhesives on the robots' feet. Their design is inspired by geckos, which have climbing skills that are legendary in the animal kingdom. The adhesives are covered in minute rubber spikes that grip firmly onto the wall as the robot climbs. When pressure is applied, the spikes bend, increasing their surface area and thus their stickiness. When the robot picks its foot back up, the spikes straighten out again and detach easily.
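As a toy model of that load-dependent grip (the linear form and all numbers are invented; the real adhesive's response is nonlinear and direction-dependent):

Code:
def grip_force_n(applied_load_n, gain=2.0, baseline_n=0.05):
    # directional adhesive: loading bends the spikes, increasing contact
    # area and thus grip; unloading lets them straighten and release
    return baseline_n + gain * applied_load_n

print(grip_force_n(1.0))  # loaded foot: holds firmly
print(grip_force_n(0.0))  # lifted foot: almost nothing to peel away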
The bots also move in a style that is borrowed from biology. Like an inchworm, one pad scooches the robot forward while the other stays in place to support the heavy load. This helps the robot avoid falls from a missed step, and lets it park without using up precious power.

Heavy lifting

All this adds up to robots with serious power. For example, one 9-gram bot can hoist more than a kilogram as it climbs. In this video it's carrying StickyBot, the Stanford lab's first ever robot gecko, built in 2006.
Another tiny climbing bot weighs just 20 milligrams but can carry 500 milligrams, a load about the size of a small paper clip. Engineer Elliot Hawkes built the bot under a microscope, using tweezers to put the parts together.
The most impressive feat of strength comes from a ground bot nicknamed μTug. Although it weighs just 12 grams, it can drag a weight that's 2000 times heavier – "the same as you pulling around a blue whale," explains David Christensen, who works in the same lab.
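The article's ratios are easy to sanity-check (the 70 kg human mass in the last line is my assumption):

Code:
climber_ratio = 1000 / 9          # 9 g bot hoisting over 1 kg: ~111x its weight
micro_ratio   = 500 / 20          # 20 mg bot carrying 500 mg: 25x
utug_load_kg  = 12 * 2000 / 1000  # 12 g uTug dragging 2000x its weight: 24 kg
human_equiv_t = 70 * 2000 / 1000  # same ratio for a 70 kg person: 140 tonnes
print(climber_ratio, micro_ratio, utug_load_kg, human_equiv_t)
# 140 tonnes is indeed blue-whale territory (roughly 100-150 t)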
In future, the team thinks that machines like these could be useful for hauling heavy things in factories or on construction sites. They could also be useful in emergencies: for example, one might carry a rope ladder up to a person trapped on a high floor in a burning building.
But for tasks like these, the engineers may have to start attaching their adhesives to robots that are even larger – and thus more powerful. "If you leave yourself a little more room, you can do some pretty amazing things," says Christensen.


Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #14 on: 23-06-2015, 10:02:33 »
Evolving robot swarm behaviour suggests forgetting may be important to cultural evolution



Quote
Our latest paper from the Artificial Culture project has just been published: On the Evolution of Behaviors through Embodied Imitation.


Here is the abstract:


This article describes research in which embodied imitation and behavioral adaptation are investigated in collective robotics. We model social learning in artificial agents with real robots. The robots are able to observe and learn each other’s movement patterns using their on-board sensors only, so that imitation is embodied. We show that the variations that arise from embodiment allow certain behaviors that are better adapted to the process of imitation to emerge and evolve during multiple cycles of imitation. As these behaviors are more robust to uncertainties in the real robots’ sensors and actuators, they can be learned by other members of the collective with higher fidelity. Three different types of learned-behavior memory have been experimentally tested to investigate the effect of memory capacity on the evolution of movement patterns, and results show that as the movement patterns evolve through multiple cycles of imitation, selection, and variation, the robots are able to, in a sense, agree on the structure of the behaviors that are imitated.
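A minimal sketch of the imitation-selection-variation loop the abstract describes, assuming behaviors are number sequences and embodiment shows up as copy noise that grows with movement size (all parameters invented):

Code:
import random

def imitate(behavior):
    # embodied imitation: copy error grows with the size of each movement
    # step, a stand-in for real sensor/actuator limits (0.2 is invented)
    copy = [behavior[0]]
    for prev, cur in zip(behavior, behavior[1:]):
        step = cur - prev
        copy.append(copy[-1] + step + random.gauss(0.0, 0.2 * abs(step)))
    return copy

def fidelity(original, copy):
    return -sum((a - b) ** 2 for a, b in zip(original, copy))

population = [[random.uniform(0, 1) for _ in range(8)] for _ in range(6)]
for cycle in range(40):
    # each behavior is copied; the most faithfully copied variants are kept
    # and re-imitated, so easy-to-imitate behaviors spread through the group
    copies = [imitate(b) for b in population for _ in range(2)]
    copies.sort(key=lambda c: max(fidelity(b, c) for b in population),
                reverse=True)
    population = copies[:6]
print(population[0])  # tends toward low-variation, easily copied patterns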






Click the link to see the videos and read the rest of the text.

Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #15 on: 04-07-2015, 06:36:52 »
  Worker killed by robot at Volkswagen car factory
 
Quote
  A worker at a Volkswagen factory in Germany has died, after a robot grabbed him and crushed him against a metal plate.
The 22-year-old man died in hospital following the tragic incident on Monday at a plant in Baunatal, around 100 km north of Frankfurt.
Such fatalities are rare, as robots are generally kept behind cages to prevent contact with humans; however, the worker was reportedly inside the safety cage when he was injured, according to the Financial Times.
The victim was working as part of a team installing the robot when it grasped hold of him, according to the German car manufacturer.
 
Volkswagen spokesman Heiko Hillwig told the Associated Press that officials believe that human error was to blame for the incident, rather than a problem with the robot.
 
Prosecutors are now considering whether to bring charges, and if so, against whom, German news agency dpa reported.
 
 
A spokeswoman from Volkswagen told The Independent: “Earlier this week a contractor was injured while installing some machinery in the Kassel factory.
 
“He died later in hospital from his injuries and our thoughts are with his family.
 
“We are of course carrying out a thorough investigation into the incident and cannot comment further at this time.”

zakk

Why shouldn't things be largely absurd, futile, and transitory? They are so, and we are so, and they and we go very well together.

zakk

Re: Robots, drones and similar contraptions
« Reply #17 on: 04-07-2015, 19:10:04 »
Why shouldn't things be largely absurd, futile, and transitory? They are so, and we are so, and they and we go very well together.

Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #18 on: 07-07-2015, 05:27:22 »
 It's on: Team Japan accepts US challenge to a giant robot duel
Quote
The US might have just beaten Japan at soccer in the Women's World Cup, but the Japanese are already moving on to another great sport: giant robot fighting. Japanese company Suidobashi Heavy Industry has accepted the challenge from US rival MegaBots Inc. to a giant robot duel, with Suidobashi founder Kogoro Kurata saying: "Yeah, I'll fight. Absolutely." Kurata, who designed and built Suidobashi's 4-ton mech robot, said: "We can't let another country win this. Giant robots are Japanese culture."
Unfortunately, neither Suidobashi nor MegaBots has offered any more details about when or where the duel might take place (MegaBots' original challenge suggested a vague date of a year from now). But hopefully, the two teams will follow through at least to some degree. Kurata's response to MegaBots' challenge also upped the stakes, with the Japanese designer asking if the duel can be fought as a physical melee rather than with the MegaBot Mark II's paintball guns. "My reaction?" says Kurata in the video. "Come on guys, make it cooler. Just building something huge and sticking guns on it. It's ... Super American."
 
To which the correct response is: yes, yes it is.
 

http://youtu.be/7u8mheM2Hrg

Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #19 on: 14-07-2015, 06:52:53 »
One of the engineers at Pixar (actually technical director Alonso Martinez) has built a robot that brightens up the workday  :lol: :lol: :lol:
 
 
http://youtu.be/A7iAMELA_TY

Meho Krljic

Re: Robots, drones and similar contraptions
« Reply #20 on: 17-07-2015, 09:16:11 »
Are robots taking people's jobs?



 Estimating the impact of robots on productivity and employment

Quote
To discover the impact of robots on the average manufacturing worker, we analysed their effect in 14 industries across 17 developed countries from 1993 to 2007. We found that industrial robots increase labour productivity, total factor productivity and wages. While they don’t significantly change total hours worked, they may be a threat to low- and middle-skilled workers.
Robots’ capacity for autonomous movement and their ability to perform an expanding set of tasks have captured writers’ imaginations for almost a century. Recently, robots have emerged from the pages of science fiction novels into the real world, and discussions of their possible economic effects have become ubiquitous (see e.g. The Economist 2014, Brynjolfsson and McAfee 2014). But a serious problem inhibits these discussions: there has – so far – been no systematic empirical analysis of the effects that robots are already having.
  In recent work, we begin to remedy this problem (Graetz and Michaels 2015). We compile a new dataset spanning 14 industries (mainly manufacturing, but also agriculture and utilities) in 17 developed countries (including the European nations, Australia, South Korea, and the US). Uniquely, our dataset includes a measure of the use of industrial robots employed in each industry, in these countries, and how it has changed from 1993-2007. We obtain information on other economic performance indicators from the EUKLEMS database (Timmer et al. 2007).
We find that industrial robots increase labour productivity, total factor productivity and wages. At the same time, while industrial robots had no significant effect on total hours worked, there is some evidence that they reduced the employment of low skilled workers, and, to a lesser extent, middle skilled workers.
A first glance
What exactly are industrial robots? Our data comes from the International Federation of Robotics (IFR). The IFR considers a machine to be an industrial robot if it can be programmed to perform physical, production-related tasks without the need for a human controller. (The technical definition refers to a “manipulating industrial robot as defined by ISO 8373: An automatically controlled, reprogrammable, multipurpose manipulator programmable in three or more axes, which may be either fixed in place or mobile, for use in industrial automation applications”).
Industrial robots dramatically increase the scope for replacing human labour, compared to older types of machines, since they reduce the need for human intervention in automated processes. Typical applications for industrial robots include assembling, dispensing, handling, processing (cutting, for instance) and welding – all of which are prevalent in manufacturing industries – as well as harvesting (in agriculture) and inspection of equipment and structures (common in power plants).
Rapid technological change reduced the price of industrial robots (adjusted for changes in quality) by around 80 percent during our sample period. Unsurprisingly, robot use grew dramatically: from 1993-2007, the ratio of the number of robots to hours worked increased, on average, by about 150 percent. The rise in robot use was particularly pronounced in Germany, Denmark, and Italy and among producers of transportation equipment, chemical and metal industries.
To estimate the impact of robots, we take advantage of variation across industries and countries and over time. We check the robustness of our results by including a large number of controls, and by considering alternative ways of measuring the robot input. A consistent picture emerges in which robots appear to raise productivity, without causing total hours to decline.
Addressing reverse causality
Could it be that higher productivity causes a larger increase in robot use, rather than the other way around? To address this and related concerns, and to shed further light on the causal effect of robots, we develop a novel instrumental variable strategy. Our instrument for increased robot use is a measure of workers’ replaceability by robots, based on the tasks prevalent in industries before robots were widely employed. Specifically, we match data on tasks performed by industrial robots today with data on similar tasks performed by US workers in 1980, before robots were used. We then compute the fraction of each industry’s working hours in 1980 accounted for by occupations that subsequently became prone to replacement. Our industry-level ‘replaceability’ index strongly predicts increased robot use from 1993-2007.
When we use our instrument to capture differences in the increased use of robots, we again find that robots increased productivity, and detect no significant effect on hours worked.  As an important check on the validity of this exercise, we find no significant relationship between replaceability and productivity growth in the period before the adoption of robots.
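For readers unfamiliar with the method, here is a toy two-stage least squares in the spirit of that strategy, on synthetic data with invented coefficients (the real study uses industry-country panels, not this simulation):

Code:
import numpy as np

rng = np.random.default_rng(0)
n = 500
replaceability = rng.uniform(0, 1, n)  # instrument: pre-period task mix
confound = rng.normal(size=n)          # drives both robot adoption and growth
robot_growth = 1.5 * replaceability + 0.8 * confound + rng.normal(size=n)
productivity_growth = 0.4 * robot_growth + 0.8 * confound + rng.normal(size=n)

def ols(y, x):
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]  # [intercept, slope]

# Stage 1: predict robot growth from the instrument alone.
a, b = ols(robot_growth, replaceability)
robot_hat = a + b * replaceability
# Stage 2: regress the outcome on the predicted (exogenous) robot growth.
print("naive OLS slope:", ols(productivity_growth, robot_growth)[1])  # biased up
print("2SLS slope:    ", ols(productivity_growth, robot_hat)[1])      # near 0.4

The naive regression overstates the effect because the confounder moves both variables; the instrumented estimate recovers the true coefficient, which is the logic behind using the 1980 replaceability index.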
We conservatively calculate that, on average, the increased use of robots contributed about 0.37 percentage points to the annual GDP growth, accounting for more than one tenth of total GDP growth over this period. The contribution to labour productivity growth was about 0.36 percentage points, accounting for one sixth of productivity growth. This makes the contribution of robots to the aggregate economy roughly on par with previous important technologies, such as railroads in the nineteenth century (Crafts 2004) and US highways in the twentieth century (Fernald 1999). The effects are also fairly comparable to the recent contributions of information and communication technologies (ICT, see e.g. O’Mahoney and Timmer 2009). But it is worth noting that robots make up just over two percent of capital, which is much less than previous technological drivers of growth.
Discussion and conclusion
Our findings on the aggregate impact of robots are interesting given recent concerns in the macroeconomic literature that productivity gains from technology, in general, may have slowed down. Gordon (2012, 2014) expresses a particularly pessimistic view, and there are wider concerns about secular macroeconomic stagnation (Summers 2014, Krugman 2014), although others remain more optimistic (Brynjolfsson and McAfee 2014). We expect that the beneficial effects of robots will extend into the future, as new robot capabilities are developed and service robots come of age. Our findings do come with a note of caution: there is some evidence of diminishing marginal returns, or congestion effects, to robot use, so they are not a panacea for growth.
Although we do not find evidence of a negative impact of robots on aggregate employment, we see a more nuanced picture when we break jobs and the wage cost down by skill groups. Robots appear to reduce the hours and the wage costs of low-skilled workers, and to a lesser extent middle skilled workers. They have no significant effect on the employment of high-skilled workers. This pattern differs from the effect that recent work has found for ICT, which seems to benefit high-skilled workers at the expense of middle-skilled workers (Autor 2014, Michaels et al. 2014).
In further results, we find that industrial robots increased total factor productivity and wages. At the same time, we find no significant effect of robots on the labour share.
In summary, we find that industrial robots made significant contributions to labour productivity and aggregate growth, and also increased wages and total factor productivity. While fears that robots destroy jobs on a large scale have not materialised, we find some evidence that robots reduced low and middle-skilled workers’ employment.
Preliminary post-2007 data show the number of robots has continued to swell, and the set of tasks they can perform has expanded. Both of these trends indicate that robots will continue to play an important role in improving productivity.
This article has been adapted with minor changes from an article with the same title that appeared on voxeu.org. It summarises Graetz, G and G Michaels (2015), “Robots at Work”.
References
Autor, D H (2014), “Polanyi’s Paradox and the Shape of Employment Growth,” NBER Working Papers 20485, National Bureau of Economic Research, Inc.
Brynjolfsson, E and A McAfee (2014), The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, W.W. Norton & Company.
Crafts, N (2004), “Steam as a General Purpose Technology: A Growth Accounting Perspective,” The Economic Journal, 114(495), pp. 338–351.
The Economist (2014), “Rise of the robots”, March 29th.
Fernald, J G (1999), “Roads to Prosperity? Assessing the Link between Public Capital and Productivity,” The American Economic Review, 89(3), 619–638.
Gordon, R J (2012), “Is U.S. Economic Growth Over? Faltering Innovation Confronts the Six Headwinds,” NBER Working Papers 18315, National Bureau of Economic Research, Inc.
Gordon, R J (2014), “The Demise of U.S. Economic Growth: Restatement, Rebuttal, and Reflections”, NBER Working Papers 19895, National Bureau of Economic Research, Inc.
Graetz, G, and G Michaels (2015), ”Robots at Work”, CEP Discussion Paper 1335, March
Krugman, P (2014), “Four observations on secular stagnation”, chap. 4, pp. 61–68, Secular Stagnation: Facts, Causes and Cures, CEPR Press.
Michaels G., A Natraj and J Van Reenen (2014), “Has ICT Polarized Skill Demand? Evidence from Eleven Countries over Twenty-Five Years”, The Review of Economics and Statistics, 96(1), 60–77.
O’Mahoney, M and M Timmer (2009), “Output, Input and Productivity Measures at the Industry Level: The EU KLEMS Database”, Economic Journal, 119(538), F374–F403.
Summers, L H (2014), “Reflections on the ‘New Secular Stagnation Hypothesis’” chap. 1, pp. 27–38, Secular Stagnation: Facts, Causes and Cures, CEPR Press.
Timmer M, T van Moergastel, E Stuivenwold, G Ypma, M O’Mahoney, and M Kangasniemi (2007), “EU KLEMS Growth and Productivity Accounts Version 1.0”, mimeo, University of Groningen.
 

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #21 on: 18-07-2015, 06:32:02 »
It was, of course, only a matter of time before someone strapped a gun onto a quadcopter:
 
 
http://youtu.be/xqHrTtvFFIs


Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #23 on: 22-07-2015, 09:12:58 »
The authorities of the United Kingdom have published the conditions under which self-driving vehicles will be allowed onto their public roads. One of the more amusing conditions is that the driver must pretend to be driving, so as not to confuse other road users  xrofl xrofl xrofl



UK government releases rules to get self-driving cars onto public roads



Quote
The UK government has outlined a new set of rules for testing driverless cars on public roads. The Code of Practice—as published by the Department for Transport (DfT)—contains extensions to many of the same laws that govern traditional vehicles, including that all self-driving cars must have a human driver inside who can take over if needed, and that those drivers are insured, hold a valid UK driving licence, and obey all of the UK's normal road laws. Any test vehicles over three years old (which is admittedly highly unlikely at this point) must also hold a valid MOT.
From there, things get very specific to driverless cars. For starters, those cars must have undergone extensive testing on private roads before being allowed out into the wild. Drivers will also require "skills over and above those of drivers of conventional vehicles," including a high level of knowledge about the technology used, as well as extensive training into switching between conventional manual control and an automated mode.
Those drivers will also need to be "conscious of their appearance to other road users," and look like they're actually driving—even in automated mode—in order to not confuse other motorists. Talking on a phone, indulging in a cheeky beverage, or putting your feet up on the dash for an afternoon snooze is a no-no. The Code of Practice also suggests that highway authorities are alerted to testing zones, and that a specialised contact is set up with the local police and fire services.
Interestingly, cars are required to be fitted with a "data recording device," which can capture data from the various sensors and control systems used for automated driving, at "10Hz or more." This includes whether the vehicle is operating in manual or automated mode, its speed, steering and braking commands, operation of lights and indicators, the presence of other road users, and use of the horn. The data will need to be "securely stored," and be "provided to the relevant authorities upon request."


Despite that rather ominous sounding rule, any personal data that's processed—for example, the behaviour or location of individuals in the vehicle—will be protected under the 1998 Data Protection Act. That includes the requirement that personal data is used fairly and lawfully, kept securely, and for no longer than necessary. The regulations also state that cars will need to be mindful of "cyber security," and be protected against the risk of "unauthorised access" by hackers.
Currently, most of the UK's self-driving cars—including the Lutz pod and Meridian shuttle—are only being tested around parks and private facilities. DfT's Code of Practice means that more driverless vehicles will be able to hit UK streets, as promised by the government when it announced that trials would go ahead way back in July of 2014.
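The 10 Hz data-recorder requirement quoted above is concrete enough to sketch. Below is a minimal illustration of such a logging loop; the record fields, class name, and callables are assumptions for illustration, not anything the DfT specifies.

Code:
import time
from dataclasses import dataclass, asdict

@dataclass
class DataRecord:
    # Fields follow the DfT list quoted above; the names are assumptions.
    timestamp: float
    automated_mode: bool      # manual vs automated driving
    speed_mps: float
    steering_command: float
    brake_command: float
    indicators_on: bool
    horn_on: bool

def record_loop(read_vehicle_state, store, hz=10):
    """Sample the vehicle at `hz` (the Code of Practice demands 10 Hz or more)
    and hand each record to `store`, which must keep it securely.
    `read_vehicle_state` is a hypothetical hook that returns a dict with the
    non-timestamp fields above; real vehicle platforms will differ."""
    period = 1.0 / hz
    while True:
        state = read_vehicle_state()
        record = DataRecord(timestamp=time.time(), **state)
        store(asdict(record))   # secure, retrievable storage lives elsewhere
        time.sleep(period)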

дејан

  • омнирелигиозни фанатични фундаменталиста
  • 4
  • 3
  • Posts: 3.565
Re: Roboti, dronovi i slične skalamerije
« Reply #24 on: 23-07-2015, 10:21:38 »
it was likewise only a matter of time before some joker attempted a close encounter between a drone and an airplane


Quote
A Lufthansa flight on final approach to Warsaw nearly hit a drone flying about 100 meters away, according to the Aviation Herald, an Austrian news site.
The site quoted unnamed crewmembers as complaining about Warsaw air traffic control, noting that the staff there should "take care of your airspace... it is really dangerous."


The Embraer ERJ-195, which was flying at about 760 meters (about 2,500 feet) when it reported the incident, landed without further incident three minutes later.


After the incident was reported, the Polish Air Navigation Services Agency ordered that approach vectors to the airport be changed for over 20 other incoming flights. In a statement (Google Translate), the agency said that military and police aircraft dispatched to the scene were unable to locate the drone.


According to the Associated Press, Warsaw airport spokesman Przemyslaw Przybylski said that drones are in theory banned within a 20km radius from the airport. However, he added, it's difficult to ensure that "some idiot does not suddenly decide to fly a drone in front of a landing plane."


The AP quoted Lufthansa spokeswoman Bettina Rittberger as saying that the German carrier has never been involved in such an incident before and that the airline was unable to say with certainty that it was indeed a drone.


Lufthansa officials did not immediately respond to Ars’ request for comment.
...barcode never lies
FLA


džin tonik

  • 4
  • 3
  • Posts: 17.234
Re: Roboti, dronovi i slične skalamerije
« Reply #26 on: 07-08-2015, 11:47:46 »
but this one's the best. :lol:
I've been thinking about how drones could be used for peacemaking purposes too — what if, in '99 over Kosovo, drones had dropped blank green and blue cards onto whichever positions. You know, checkmate in a single move, no haggling, no negotiations, no broken ceasefires. A desert in no time; to this day, within 500 km of the detonation, you'd hear nothing but crickets and the howling of wolves... drones, the future.

дејан

  • омнирелигиозни фанатични фундаменталиста
  • 4
  • 3
  • Posts: 3.565
Re: Roboti, dronovi i slične skalamerije
« Reply #27 on: 18-08-2015, 12:23:02 »
...barcode never lies
FLA

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #28 on: 29-08-2015, 06:26:22 »
First State Legalizes Taser Drones for Cops, Thanks to a Lobbyist
Quote
North Dakota police will be free to fire ‘less than lethal’ weapons from the air thanks to the influence of Big Drone.

It is now legal for law enforcement in North Dakota to fly drones armed with everything from Tasers to tear gas thanks to a last-minute push by a pro-police lobbyist.
With all the concern over the militarization of police in the past year, no one noticed that the state became the first in the union to allow police to equip drones with “less than lethal” weapons. House Bill 1328 wasn’t drafted that way, but then a lobbyist representing law enforcement—tight with a booming drone industry—got his hands on it.
The bill’s stated intent was to require police to obtain a search warrant from a judge in order to use a drone to search for criminal evidence. In fact, the original draft of Representative Rick Becker’s bill would have banned all weapons on police drones.
Then Bruce Burkett of the North Dakota Peace Officer’s Association was allowed by the state house committee to amend HB 1328 and limit the prohibition only to lethal weapons. “Less than lethal” weapons like rubber bullets, pepper spray, tear gas, sound cannons, and Tasers are therefore permitted on police drones.
Becker, the bill’s Republican sponsor, said he had to live with it.
“This is one I’m not in full agreement with. I wish it was any weapon,” he said at a hearing in March. “In my opinion there should be a nice, red line: Drones should not be weaponized. Period.”
Even “less than lethal” weapons can kill though. At least 39 people have been killed by police Tasers in 2015 so far, according to The Guardian. Bean bags, rubber bullets, and flying tear gas canisters have also maimed, if not killed, in the U.S. and abroad.
Becker said he worried about police firing on criminal suspects remotely, not unlike U.S. Air Force pilots who bomb the so-called Islamic State, widely known as ISIS, from more than 5,000 miles away.
“When you’re not on the ground, and you’re making decisions, you’re sort of separate,” Becker said in March. “Depersonalized.”
Drones have been in use for decades by the military, but their high prices have prevented police departments from obtaining them until recently. Money’s no problem for the Grand Forks County Sheriff’s Department, though: A California manufacturer loaned them two drones.
Grand Forks County Sheriff Bob Rost said his department’s drones are only equipped with cameras and he doesn’t think he should need a warrant to go snooping.
“It was a bad bill to start with,” Rost told The Daily Beast. “We just thought the whole thing was ridiculous.”
Rost said he needs to use drones for surveillance in order to obtain a warrant in the first place.
“If you have nothing to hide, you have nothing to fear,” Becker remembered opponents like Rost saying. Yet the sheriff’s department is hiding a full accounting of how many drone missions they’ve flown since 2012. Records requests by The Daily Beast were initially denied by the sheriff because they would “cost a fortune,” and were only handed over after an appeal to the state’s attorney general’s office.
 
The sheriff and lobbyists assured lawmakers that drones would only be used in non-criminal situations, like the search for a missing person or to photograph an accident scene. What they didn’t mention was the 2011 arrest of Rodney Brossart, a cattle thief who was caught by a Department of Homeland Security drone.
When a few cows wandered onto his land, Brossart refused to take them back to their owner, his neighbor. The neighbor called the police and the situation turned into a 16-hour standoff. Fearful of entering his ranch without knowing where Brossart was, police asked Homeland Security to redirect a Predator searching the border with Canada.
 
The drone meant to find drug smugglers instead found Brossart—on his own property—and he was arrested.
Law enforcement wasn’t the only one who disapproved of the legislation. A representative from the North Dakota Department of Commerce, the vice president of an economic development group, the founder of a drone company, and the director of the University of North Dakota’s drone major program all testified against the bill.
Why would a bunch of business types want to stop something like warrants for drones?
“I think when you’re trying to stimulate an industry in your state, you don’t want things that would potentially have a chilling effect on [drone] manufacturers,” said Al Frazier, a Grand Forks sheriff’s deputy who pilots the drones.
 
Organizations like the Association for Unmanned Vehicle Systems International track legislation, especially any laws that appear to limit drone “development,” according to Keith Lund of the Grand Forks Regional Economic Development Corporation.
 
“Requiring a search warrant for surveillance is ‘restricting development’?” asked Rep. Gary Paur, a Republican, at a hearing.
 
“It’s really all about the commercial development, which is where all of this is heading,” Lund replied. “If [a law] is somehow limiting commercial, law enforcement development... that is a negative in terms of companies looking and investing in opportunities in the state of North Dakota,” Lund said.
 
In other words, limit civil liberties so Big Drone can spread its wings.
Drones in North Dakota are a profitable enterprise in a state hit hard by the oil bust. Companies that market machines for agricultural and commercial use have been popping up in industrial parks on the outskirts of Grand Forks for the better part of the last three years. The university, one of the city’s largest employers, even offers a four-year degree in drones. The Air Force has partnered with the private sector to create a drone research and development park, too.
In January, on a ribbon of video screen that wraps its way around UND’s Ralph Engelstad Arena, the bubble nose of an RQ-4 Global Hawk glided silently past the backdrop of a clear blue sky. That image—an advertisement for Northrop Grumman—appeared during the second intermission of a sold-out hockey game.
 
“This is the first year they’ve advertised here,” a friend said to me.
***
Perhaps Brossart’s arrest was included in the 401 drone operations the FAA says were undertaken by the Grand Forks County Sheriff’s Department in the past three years. But that number doesn’t square with the 21 missions flown by the agency in the same time period, all detailed in documents obtained by The Daily Beast through open-records requests.
 
In addition to the flawed comparison to helicopters and the milquetoast descriptions of drone use, police continually cited Federal Aviation Administration rules that require law enforcement organizations authorized to operate drones to notify the FAA when the devices are deployed as a reason why HB 1328 was unnecessary.
This also appears to be an incomplete analysis.
According to documents obtained by MuckRock, the FAA notes 401 drone “operations” performed by the Grand Forks County Sheriff’s Department from 2012 to September 2014, while Rost and Frazier maintain just 21 missions have taken place. Those 401 operations noted by the FAA have resulted in 80.5 hours of flights, a number that can’t be independently verified because a lawyer representing the sheriff’s department did not include duration of flights for the 21 missions detailed in response to an open-records request from The Daily Beast. (HB 1328 requires police to retain data, including flight duration, for five years after it is collected.)
 
Rost and Frazier did not reply to multiple requests for comment regarding the discrepancy between the FAA’s numbers and their own, and the FAA hasn’t provided an explanation for how it defines “operations.”
Similar to those who have supported the NSA’s massive data collection program, Rost and others repeatedly fell back to an argument that was cited untold times over the last two years as Becker fought for his bill.
“We don’t make a practice of snooping on people,” Rost said recently.
However, Rost’s statement was followed by an admission that the sheriff expects drones to be used in criminal investigations in the near future.
Rost argued against the bill on the basis that police would in good faith obtain warrants if they decided to use drones for surveillance, and that judges be allowed to “do their job.”
“I understand that judges regulate whether you would have to get a warrant, but there’s nothing currently in the law setting perimeters for UAS [unmanned aircraft systems],” a North Dakota lawmaker told Rost at a February hearing. “What’s wrong with having something in law?”
 
Verbally diminishing the power of drones and what they would be used for in the bill’s hearings was the second of a two-part strategy from law enforcement. The first was a public relations push that included months’ worth of op-eds in the state’s two largest newspapers, the Grand Forks Herald and the Fargo Forum. Usually penned by Frazier, the UND aviation professor who also pilots two of the drones used by the Grand Forks County Sheriff’s Department, the public face of police drones has at times been different from comments made outside the media.
 
“To read Rep. Becker’s bill, you would think that these would be highly effective surveillance tools that could be put up over locations for persistent surveillances and violate people’s constitutional rights,” Frazier told the Herald in January. “And the reality is none of that is correct."
 
Compare that to Frazier’s remarks at a 2011 UAS conference attended by many members of law enforcement. Sarah Nelson, a journalist and Bismarck native who has studied police use of drones, was also there. She testified in a March hearing on HB 1328, and told the committee about Frazier’s comments at the 2011 conference, which stand in stark contrast to his printed words.
 
“[Frazier] spoke openly about the potential use of unmanned systems in North Dakota,” at the conference, Nelson said. “The list included the deployment of a hovering drone that was ‘Not audible or visible to people below in order to collect real time intelligence video.’”
 
Grand Forks is probably the only place in the country where you’ll find advertisements for Predator drones, the operators of which are trained and stationed at the nearby Air Force base. There, past the airport a few miles outside the western edge of the city, the landscape gives way to the prairie that covers the vast middle of North Dakota. But before civilization is left behind on a westward route, the city, its politicians, the base, and the powerful interests of the drone program at the University of North Dakota are adding a futuristic appendage: Grand Sky.
Billed as a UAS research and development facility, Grand Sky will combine all the benefits of private entrepreneurship and government capital. Tenants there will have access to some of the Air Force base’s facilities, and drone companies are clamoring to get their spots. On its website, Grand Sky is billed as an opportunity to “Create History Where the Future is Wide Open.”
 
It is certainly that: The residents of Grand Forks, like those throughout the state who have taken a laissez-faire attitude to oil companies reaping millions from North Dakota ground, appear concerned primarily with the economic benefits of drone technology. Becker, the Republican state legislator who sponsored HB 1328, said heartburn over individual privacy, constitutional rights, and, on a larger scale, the ethics of killing people half a world away by wielding a joystick, doesn’t seem to exist for many in the state.
 
Part of the reason for this, Frazier argued, is a compliance committee keeps police use of drones in check. A body with no legal authority, the committee tracks and reviews how police use their drones and discusses possible privacy concerns. Frazier and others in law enforcement—as well as representatives from the private sector and those from UND—cited the committee watchdog role as yet another reason why HB 1328 was unnecessary.
But the group isn’t exactly comprised of a diverse cross section of political thought. Of the committee’s 18 members, six are from UND, which has a vested interest in promoting drone use. Three are members of local government, including the city planner and an assistant state’s attorney. And the rest are either current or former members of law enforcement and emergency services.
Frazier said that the sheriff’s department had nothing to do with the makeup of the group, which was created by charter. The committee is not a “rubber stamp,” he added. And besides, there simply hasn’t been much public outcry over police use of drones, or really any interest in tracking how police use them.
 
Drones are overwhelmingly seen as a good thing in North Dakota, which is perhaps why few noticed when HB 1328 passed with a clause allowing them to be armed with non-lethal weapons.
***
“I agree completely with the idea that there should be public oversight of a public asset, but to a great degree disagree with the idea that the public is overly concerned with it,” Frazier said. “I think the media is making a big deal out of something that isn’t a big deal.”
It may be that the sheriff’s use of drones is completely innocuous, that there is some kind of technical verbiage-related mixup that would explain the discrepancy between the FAA’s numbers and those provided by Frazier and Rost. It also may be that the pair had nothing to do with the non-lethal exception included in HB 1328, which they say is the case. Both said they pulled out of any negotiations over the bill’s language, thinking it was “doomed,” Rost said.
 
It wasn’t, thanks to Becker, who succeeded in creating a check against law enforcement with the warrant requirement, but failed to prevent police from arming drones that are increasingly filling North Dakota skies.
In attempting to convince legislators to pass HB 1328, Nelson, the journalist and Bismarck native, said it wasn’t distrust of police that prompted the bill to be crafted, but a democratic duty to maintain trust in government.
“When grappling with how to regulate powerful technologies, it’s a common practice of both law enforcement and the larger intelligence community to say that technologies are being used on very bad people in very extreme cases. This is an effective strategy because the public sees themselves as vastly different from those bad people,” she said in March. “In response to this argument, I would urge the committee to remember that liberty is eroded at the fringes.”
 

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #29 on: 07-09-2015, 12:58:17 »
And killer robots already exist. Though for now, they're killing sea stars:


Poison-Injecting Robot Submarine Assassinates Sea Stars to Save Coral Reefs

Quote
You might think that the biggest threat to the world’s coral reefs is humanity. And you’d be right, of course: climate change, pollution, overfishing, and scuba divers who have no idea where their fins are all contribute to coral reef destruction. There are other, more natural but no less worrisome causes as well, and one of those is the crown-of-thorns sea star. It’s big, it’s spiky, and it eats coral for breakfast, lunch, and dinner.
Population explosions of these sea stars can devastate entire reefs, and it’s not unheard of to see 100,000 crown-of-thorns sea stars per square kilometer. There isn’t a lot that we can do to combat these infestations, because the sea stars can regenerate from absurd amounts of physical damage (they have to be almost entirely dismembered or completely buried under rocks), so humans have to go up to each and every sea star and inject them with poison 10 times over (!) because once isn’t enough.
Bring on the autonomous stabby poison-injecting robot submarines, please.


These infestations of crown-of-thorns sea stars (herein abbreviated COTSS) are only partially our fault: it’s true that we’ve been relentlessly overfishing the things that eat COTSS, and that’s a bad thing. However, the chief cause of COTSS population explosions seems to be correlated with rain over nearby land washing extra nutrients into the water, causing plankton blooms and making it easier for COTSS larvae to find food and grow up big and strong and spiky. Since one single large COTSS female can deliver upwards of 50 million eggs, even a modest boost to larvae survival rates can result in an enormous boom in mature sea stars.
When these outbreaks happen, swift and comprehensive eradication becomes a priority to keep reefs intact, but human divers can only manage to kill about 120 sea stars per hour with poison. Last year, a much more effective poison was developed at James Cook University in Queensland, Australia, called thiosulfate-citrate-bile salts-sucrose agar that can kill a COTSS in 24 hours after a single injection, causing “discolored and necrotic skin, ulcerations, loss of body turgor, accumulation of colourless mucus, loss of spines [and] large, open sores that expose the internal organs.”
Ew.
This one-shot poison (which is harmless to everything else on the reef) is what makes autonomous robotic sea star control possible, since it means that a robot can efficiently target individual sea stars without having to try and keep track of which ones it’s injected already so it can go back and repeat the process nine more times. At Queensland University of Technology in Australia, a group of researchers led by Matthew Dunbabin and Peter Corke spent the last decade working on COTSBot, which has been specifically designed to seek out and murder crown-of-thorns sea stars as mercilessly and efficiently as possible.


COTSBot is a 30-kilogram yellow torpedo with a maximum speed of over 2 meters per second and an endurance of over 6 hours. Five thrusters give it the capability of briefly hovering in the water column, giving it time to attack crown-of-thorns sea stars with an integrated poison injection system. It’s completely autonomous, down to the identification and targeting of COTSS lurking among coral. Here’s the vision and injection system in action:


http://youtu.be/RS3EpYfeJAk




Previous research on image-based COTS detection has steadily improved the system’s detection accuracy and its ability to run on board modern low-power computers that fit within the space and power constraints of the underwater robot. The current COTSbot system is a significant further advance, exploiting state-of-the-art machine-learning techniques to achieve a detection performance of well over 99% while operating in real time on board the AUV.
 
It’s not expected that COTSBot will be able to murderize every single COTSS on a reef, but if a small fleet of COTSBots can deal with the majority of them over the course of days or weeks, it becomes much easier for human divers (and natural predators) to mop up the rest and keep the population under control.
“Robotic Detection and Tracking of Crown-of-Thorns Starfish,” by Feras Dayoub, Matthew Dunbabin, and Peter Corke, will be presented at IROS 2015 in Germany next month.
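For flavour, here is a minimal sketch of the detect-then-inject control loop such an AUV might run. The classifier is a stub and the hardware hooks are hypothetical; the paper's actual trained detector, thruster control, and injection system are not public APIs, so every name below is an assumption.

Code:
import cv2  # OpenCV, used here only for camera capture

CONFIDENCE_THRESHOLD = 0.99  # mirrors the >99% detection performance quoted above

def classify_cots(frame):
    """Stand-in for the trained COTS detector described in the paper.
    Returns (confidence, position); this stub always reports no detection,
    so the loop runs end to end without a real model."""
    return 0.0, None

def hover_and_inject(position):
    """Hypothetical hook for the thruster-hold and poison-injection hardware."""
    print(f"Injecting sea star at {position}")

def patrol(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # camera feed lost; abort the patrol
            confidence, position = classify_cots(frame)
            if confidence >= CONFIDENCE_THRESHOLD:
                hover_and_inject(position)
    finally:
        cap.release()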

дејан

  • омнирелигиозни фанатични фундаменталиста
  • 4
  • 3
  • Posts: 3.565
Re: Roboti, dronovi i slične skalamerije
« Reply #30 on: 23-12-2015, 15:19:53 »

a drone very nearly took out the world's best skier




Quote
A drone crashed to Earth during a World Cup slalom ski race in Italy today, nearly hitting skier Marcel Hirscher. The drone belonged to a TV broadcast crew and carried a heavy camera on it.


Hirscher, a four-time defending overall World Cup champion, appeared not to notice the drone crash that happened just over his right shoulder as he came down the slope. He came in second in the race, edged out by Henrik Kristoffersen of Norway. Hirscher maintains the lead in the competition overall.


WATCH: Skier almost crushed by falling drone during run. https://t.co/hyGYUJKTOc pic.twitter.com/dOlViauMLz


— NBC Sports (@NBCSports) December 22, 2015
Still, the 26-year-old Austrian skier sounded outraged by the time he spoke to the Associated Press. “This is horrible,” Hirscher said. “This can never happen again. This can be a serious injury.”


The skier posted a still from the coverage of his race on Instagram with the caption “heavy air traffic in Italy.”


Drone crashes have become more frequent as drones have become more commonplace. This September, a drone crashed into a crowd during an outdoor movie screening, hitting a one-year-old girl. That same month, a student was charged with endangerment when he allegedly crashed a drone into a football stadium, and in a separate incident, a teacher was arrested after he allegedly crashed a drone into an empty seating area during a US Open match in New York. Arguably the most high-profile drone crash happened in January 2015, when a drunk employee of the National Geospatial Intelligence Agency crashed a drone on White House grounds, setting off a Secret Service investigation.

...barcode never lies
FLA

Mica Milovanovic

  • 8
  • 3
  • *
  • Posts: 8.626
Re: Roboti, dronovi i slične skalamerije
« Reply #31 on: 24-12-2015, 09:33:54 »
I watched it live (on TV, I mean). Madness. Given my SF experience, I knew right away what it was...  :)
Mica


scallop

  • 5
  • 3
  • Posts: 28.511
Re: Roboti, dronovi i slične skalamerije
« Reply #33 on: 26-12-2015, 16:07:31 »
Looks like the drones' number is up. Over here they're on clearance for $99.99, built-in video camera and all. Must be some new law on the way.
Meanwhile, an Amazon Echo is currently AI-ing us around the house. Božić Bata brought it. Bela will be the death of it. She orders it around so much that, if it only had legs, it would bolt to the basement where nobody could see or hear it.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #34 on: 09-02-2016, 07:45:06 »
Meet the soft, cuddly robots of the future
 
Quote

In 2007, Cecilia Laschi asked her father to catch a live octopus for her seaside lab in Livorno, Italy. He thought she was crazy: as a recreational fisherman, he considered the octopus so easy to catch that it must be a very stupid animal. And what did a robotics researcher who worked with metal and microprocessors want with a squishy cephalopod anyway?
                                                            
Nevertheless, the elder Laschi caught an octopus off the Tuscan coast and gave it to his daughter, who works for the Sant'Anna School of Advanced Studies in Pisa, Italy. She and her students placed the creature in a saltwater tank where they could study how it grasped titbits of anchovy and crab. The team then set about building robots that could mimic those motions.
Prototype by prototype, they created an artificial tentacle with internal springs and wires that mirrored an octopus's muscles, until the device could undulate, elongate, shrink, stiffen and curl in a lifelike manner [1]. “It's a completely different way of building robots,” says Laschi.
                                                            
This approach has become a major research front for robotics in the past ten years. Scientists and engineers in the field have long worked on hard-bodied robots, often inspired by humans and other animals with hard skeletons. These machines have the virtue of moving in mathematically predictable ways, with rigid limbs that can bend and straighten only around fixed joints. But they also require meticulous programming and extensive feedback to avoid smacking into things; even then, their motions often become erratic or even dangerous when dealing with humans, new objects, bumpy terrain or other unpredictable situations.
                                                            
Robots inspired by flexible creatures such as octopuses, caterpillars or fish offer a solution. Instead of requiring intensive (and often imperfect) computations, soft robots built of mostly pliable or elastic materials can just mould themselves to their surroundings. Although some of these machines use wires or springs to mimic muscles and tendons, as a group, soft robots have ditched the skeletons that defined previous robot generations. With nothing resembling bones or joints, these machines can stretch, twist, scrunch and squish in completely new ways. They can transform in shape or size, wrap around objects and even touch people more safely than ever before.
                                                            
Building these machines involves developing new technologies to animate floppy materials with purposeful movement, and methods for monitoring and predicting their actions. But if this succeeds, such robots might be used as rescue workers that can squeeze into tight spaces or slink across shifting debris; as home health aides that can interact closely with humans; and as industrial machines that can grasp new objects without previous programming.
Researchers have already produced a wide variety of such machines, including crawling robotic caterpillars [2], swimming fish-bots [3] and undulating artificial jellyfish [4]. On 29–30 April, ten teams will compete in Livorno in an international soft-robotics challenge — the first of its kind. Laschi, who serves as scientific coordinator for the European Commission-backed sponsoring research consortium, RoboSoft, hopes that the event will drive innovation in the field.
                                                            
“If you look in biology, and you ask what Darwinian evolution has coughed up, there are all kinds of incredible solutions to movement, sensing, gripping, feeding, hunting, swimming, walking and gliding that have not been open to hard robots,” says chemist George Whitesides, a soft-robotics researcher at Harvard University in Cambridge, Massachusetts. “The idea of building fundamentally new classes of machines is just very interesting.”
The millions of industrial robots around the world today are all derived from the same basic blueprint. The metal-bound machines use their hefty, rigid limbs to shoulder the grunt work in car-assembly lines and industrial plants with speed, force and mindless repetition that humans simply can't match. But standard robots require specialized programming, tightly controlled conditions and continuous feedback of their own movements to know precisely when and how to move each of their many joints. They can fail spectacularly at tasks that fall outside their programming parameters, and they can malfunction entirely in unpredictable environments. Most must stay behind fences that protect their human co-workers from inadvertent harm.
 
“Think about how hard it is to tie shoelaces,” says Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology in Cambridge. “That's the kind of capability we'd like to have in robotics.”
                                                            
Over the past decade, that desire has triggered an increased interest in lighter, cheaper machines that can handle fiddly or unpredictable situations and collaborate directly with humans. Some roboticists, including Laschi, think that soft materials and bioinspired designs can provide an answer.
                                                            
That idea was a tough sell at first, Laschi says. “In the beginning, very traditional robotics conferences didn't want to accept my papers,” she says. “But now there are entire sessions devoted to this topic.” Helping to fuel the surge in interest are recent advances in polymer science, especially the development of techniques for casting, moulding or 3D printing polymers into custom shapes. This has enabled roboticists to experiment more freely and quickly with making soft forms.
As a result, more than 30 institutions have now joined the RoboSoft collaboration, which kicked off in 2013. The following year saw the launch of a dedicated journal, Soft Robotics, and of an open-access resource called the Soft Robotics Toolkit: a website developed by researchers at Trinity College Dublin and at Harvard that allows researchers and amateurs to share tips and find downloadable designs and other information (see go.nature.com/8gsq4h).
 
Still, says Rebecca Kramer, a mechanical engineer at Purdue University in West Lafayette, Indiana, “I don't think the community has coalesced on what a soft robot should look like, and we're still picking out the core technology.”
                                                            
Perhaps the most fundamental challenge is getting the robots' soft structures to curl, scrunch and stretch. Laschi's robotic tentacle houses a network of thin metal cables and springs made of shape-memory alloys — easily bendable metals that return to their original shapes when heated. Laid lengthwise along the 'arm', some of these components simulate an octopus's longitudinal muscles, which shorten or bend the tentacle when they contract. Others radiate out from the tentacle's core, simulating transverse muscles that shrink the arm's diameter. Researchers can make the tentacle wave — or even curl around a human hand — by pulling certain combinations of cables with external motors, or by heating springs with electrical currents.
                                                            
A similar system helps to drive the soft-robotic caterpillars that neurobiologist Barry Trimmer has modelled on his favourite experimental organism, the tobacco hornworm (Manduca sexta). At his lab at Tufts University in Medford, Massachusetts, 20 hornworms are born each day, and Trimmer 3D prints a handful of robotic ones as well. The mechanical creatures wriggle along the lab bench much like the real ones, and they can even copy the caterpillar's signature escape move: with a pull here and a tug there on the robot's internal 'muscles', the machine snaps into a circle that wheels away [5]. Trimmer, who is editor-in-chief of Soft Robotics, hopes that this wide range of movements could one day turn the robot into an aide for emergency responders that can rapidly cross fields of debris and burrow through rubble to locate survivors of disasters.
Whitesides, meanwhile, is pioneering robots that are powered by air — among them a family of polymer-based devices inspired by the starfish. Each limb consists of an internal network of pockets and channels, sandwiched between two materials of differing elasticity. As researchers pump air into different parts of the robot, the arms (or legs or fingers) inflate asymmetrically and curl. Whitesides' team has even built one device that can play 'Mary Had a Little Lamb' on the piano [6]. One of the team's four-legged creations has mastered a robot obstacle course: ambling towards an elevated partition with a clearance of about 2 centimetres, the machine drops down and shimmies underneath, demonstrating the potential of soft robots to tackle complex terrains [7].
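A minimal sketch of how sequencing air valves might curl a pneumatic limb of this kind; the valve interface is a hypothetical stand-in, since the real plumbing varies from robot to robot.

Code:
import time

def curl_limb(open_valve, close_valve, channels, hold_s=0.5):
    """Curl a pneumatic limb by inflating its channels base-to-tip, then vent.
    `open_valve`/`close_valve` are hypothetical callables wrapping the pump
    hardware; asymmetric inflation against the stiffer backing layer is what
    turns pressure into curling."""
    for ch in channels:          # inflate sequentially from base to tip
        open_valve(ch)
        time.sleep(hold_s)       # let pressure build in this segment
    time.sleep(2 * hold_s)       # hold the curled pose
    for ch in reversed(channels):
        close_valve(ch)          # vent tip-to-base so the limb relaxes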
Grabbing market share
Although most soft robots remain in the lab, some of Whitesides' creations are venturing out to feed industrial demand for adept robotic hands. Conventional grippers require detailed information about factors such as an object's location, shape, weight and slipperiness to move each of its joints correctly. One system may be specialized for handling shampoo bottles, whereas another picks up only children's toys, and yet another is needed for grabbing T-shirts. But as manufacturers update their product lines, and as e-commerce warehouses handle a growing variety of objects, these companies need to swap in customized grippers and updated control algorithms for each different use — often at great cost and delay.
By contrast, grippers that are made mainly of soft, stretchy materials can envelop and conform to objects of different shapes and sizes. Soft Robotics, a start-up company in Cambridge, Massachusetts, that spun out of Whitesides' research in 2013, has raised some US$4.5 million to develop a line of rubbery robotic claws. “We use no force sensors, no feedback systems and we don't do a lot of planning,” says the company's chief executive, Carl Vause. “We just go and grab an object”, squeezing until the grip is secure.
                                                            
Made entirely of elastic polymers, the claws curl when air pumps through their internal channels. Whereas stiff robotic hands must carefully compute each finger's movements, the new gripper's softness enables it to drag along or deform around an object's surface until it grabs hold, without causing damage. It can even pick up mushrooms and ripe strawberries, as well as plump tomatoes off a vine — tasks that have historically required the delicate touch of human workers. Soft Robotics released its first gripper for sale in June 2015, and it is running pilot programmes with six client companies involved in packaging and food-handling.
Empire Robotics in neighbouring Boston has taken a radically different approach, by marketing a robotic 'hand' that resembles a squishy stress ball. Sandlike particles inside the ball flow freely at first, allowing it to deform as it presses firmly into an object. Then, a valve sucks air out of the ball so that the grains inside are forced tightly against each other, causing the ball to harden its grip. Based on research [8] by Heinrich Jaeger at the University of Chicago in Illinois, and Hod Lipson at Cornell University in Ithaca, New York, the 'Versaball' can pick up objects in about one-tenth of a second and lift up to about 9 kilograms.
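The granular-jamming sequence is simple enough to sketch. The callables below are hypothetical stand-ins for whatever pump and arm interface a real Versaball-style installation exposes; only the soft-then-jam ordering comes from the description above.

Code:
import time

def jamming_grip(press_onto_object, vacuum_on, lift, settle_s=0.1):
    """Pick sequence for a granular-jamming gripper of the Versaball type.
    All three callables are hypothetical hardware hooks; the ~0.1 s settle
    time mirrors the pickup speed quoted above."""
    press_onto_object()    # soft, vented ball deforms around the target
    vacuum_on()            # evacuate the ball; grains jam against each other
    time.sleep(settle_s)   # the jamming transition is nearly instantaneous
    lift()                 # the now-rigid ball holds the object (up to ~9 kg)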
Sense of place
As robotic octopuses, caterpillars, starfish and other malleable machines come to life, some scientists have begun to focus on better ways to control the devices' actions. “We're talking about floppy, elastic materials,” says Kramer. “When something moves on one side, you're not quite sure where the rest of the machine is going to end up.” That is why many applications will probably require extra sensors to monitor movement. Yet conventional position and force sensors — rigid or semi-rigid electronic components — don't always work well with soft robots that undergo extreme shape changes.
                                                            
Engineers such as Yong-Lae Park are tackling this problem by developing stretchable electronic sensors. At Carnegie Mellon University in Pittsburgh, Pennsylvania, Park works on gummy patches that contain liquid-metal circuits sandwiched between sheets of silicone rubber. Poured in a variety of patterns, including spirals and stripes, these liquid circuits can be customized to sense when the device is squished or stretched, and in what direction [9].
                                                            
“Stretchable sensors can be as sensitive as skin, depending on how you design them. You can tune them to respond to a slight brush of a finger or to a 30-pound weight,” says mechanical engineer Robert Shepherd at Cornell, who has developed methods for 3D printing stretch-sensitive 'skins' directly onto soft robots [10]. Alternating layers of conductive and insulating material produce an electrical signal when prodded or pulled.
Stretchy sensors could have an important role in the growing field of wearable robotics. Funded by the US military, Conor Walsh at Harvard University has spent years developing and honing a soft 'exosuit' for soldiers — a comfier analogue to earlier 'Iron Man'-type exoskeletons, meant to help fighters to carry heavy loads over long distances. Users can still feel the device aiding their movement, but walking in the suit feels “pretty natural”, says Walsh — a big improvement from conventional exoskeletons. Instead of bulky, rigid casings, Walsh's suit uses straps made from nylon, polyester and spandex placed strategically along the legs. And a smattering of position and acceleration sensors — standard rigid devices for now — helps to monitor the wearer's gait and to deliver assistance at the optimal times. The next step, says Walsh, is to incorporate stretchy sensors for a softer, more comfortable experience.
 
Meanwhile, Kramer has created a robotic fabric that moves in response to electrical current [11]. The muslin sheet, which has shape-memory-alloy coils sewn in, can scrunch by up to 60% in length when stimulated. Smart 'threads' keep tabs on the fabric's movements; Kramer weaves in stretch-sensitive silicone filaments filled with liquid metal. The concept could be used one day for sleeves or cuffs to help injured or elderly people to move. Kramer also hopes that the material might be used to assemble robots in space. Astronauts could simply drape an active skin around a piece of foam, for example, to turn it into a working robot.
                                                            
But before soft robots can fly to space, much foundational work must be done on the ground. Relatively little is known about how squishy materials deform in response to external forces, and how movements propagate through soft masses. In addition, most soft robots remain attached or tethered to hard energy sources, such as batteries or compressed-air tanks. Some researchers are already eyeing the potential of biochemical or renewable sources of energy for soft robots.
                                                            
The RoboSoft challenge in April could help to spur development. There, the entries will be put through their paces: challenges include racing across a sand pit, opening a door by its handle, grabbing a number of mystery objects and avoiding fragile obstacles under water. The goal, says Laschi, is to demonstrate that soft robots can accomplish some of the same tasks that stiff robots do, as well as others that they cannot. “I don't think soft robotics is going to replace traditional robotics, but it will be a combination of the two in the future,” says Laschi. Many researchers think that rigid robots might retain their superiority in jobs requiring great strength, speed or precision. But for a growing number of applications involving close interactions with people, or other unpredictable situations, soft robots could find a niche.
At King's College London, for example, Laschi's collaborators are developing a surgical endoscope based on her tentacle technology. And her team in Italy is developing a full-bodied robot octopus that swims by fluid propulsion, and could one day be used for underwater research and exploration. The prototype already pulses silently through a tank in her lab, as the real octopuses swim in the salty waters just outside.
“When I started with the octopus, people asked me what it was for,” says Laschi. “I said, 'I don't know, but I'm sure if it succeeds there could be many, many applications'.”
   Nature 530, 24–26 (04 February 2016) doi:10.1038/530024a 

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #35 on: 02-03-2016, 08:47:12 »
Since watching Terminator apparently isn't enough to convince everyone:


Report Cites Dangers of Autonomous Weapons





Quote
A new report written by a former Pentagon official who helped establish United States policy on autonomous weapons argues that such weapons could be uncontrollable in real-world environments where they are subject to design failure as well as hacking, spoofing and manipulation by adversaries.
In recent years, low-cost sensors and new artificial intelligence technologies have made it increasingly practical to design weapons systems that make killing decisions without human intervention. The specter of so-called killer robots has touched off an international protest movement and a debate within the United Nations about limiting the development and deployment of such systems.
The new report was written by Paul Scharre, who directs a program on the future of warfare at the Center for a New American Security, a policy research group in Washington, D.C. From 2008 to 2013, Mr. Scharre worked in the office of the Secretary of Defense, where he helped establish United States policy on unmanned and autonomous weapons. He was one of the authors of a 2012 Defense Department directive that set military policy on the use of such systems.


In the report, titled “Autonomous Weapons and Operational Risk,” set to be published on Monday, Mr. Scharre warns about a range of real-world risks associated with weapons systems that are completely autonomous.
The report contrasts these completely automated systems, which have the ability to target and kill without human intervention, to weapons that keep humans “in the loop” in the process of selecting and engaging targets.
Mr. Scharre, who served as an Army Ranger in Iraq and Afghanistan, focuses on the potential types of failures that might occur in completely automated systems, as opposed to the way such weapons are intended to work. To underscore the military consequences of technological failures, the report enumerates a history of the types of failures that have occurred in military and commercial systems that are highly automated.
“Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to ‘p.m.’ instead of ‘a.m.,’ or any of the countless frustrations that come with interacting with computers, has experienced the problem of ‘brittleness’ that plagues automated systems,” Mr. Scharre writes.
His underlying point is that autonomous weapons systems will inevitably lack the flexibility that humans have to adapt to novel circumstances and that as a result killing machines will make mistakes that humans would presumably avoid.
Completely autonomous weapons are beginning to appear in military arsenals. For example, South Korea has deployed an automated sentry gun along the demilitarized zone with North Korea, and Israel operates a drone aircraft that will attack enemy radar systems when they are detected.
The United States military does not have advanced autonomous weapons in its arsenal. However, this year the Defense Department requested almost $1 billion to manufacture Lockheed Martin’s Long Range Anti-Ship Missile, which is described as a “semiautonomous” weapon by the definitions established by the Pentagon’s 2012 memorandum.
The missile is controversial because, although a human operator will initially select a target, it is designed to fly for several hundred miles while out of contact with the controller and then automatically identify and attack an enemy ship.
The Center for a New American Security report focuses on a range of unexpected behavior in highly computerized systems like system failures and bugs, as well as unanticipated interactions with the environment.
“On their first deployment to the Pacific, eight F-22 fighter jets experienced a Y2K-like total computer failure when crossing the international date line,” the report states. “All onboard computer systems shut down, and the result was nearly a catastrophic loss of the aircraft. While the existence of the international date line could clearly be anticipated, the interaction of the date line with the software was not identified in testing.”
The lack of transparency in artificial intelligence technologies that are associated with most recent advances in machine vision and speech recognition systems is also cited as a source of potential catastrophic failures.
As an alternative to completely autonomous weapons, the report advocates what it describes as “Centaur Warfighting.” The term “centaur” has recently come to describe systems that tightly integrate humans and computers. In chess today, teams that combine human experts with artificial intelligence programs dominate in competitions against teams that use only artificial intelligence.
However, in a telephone interview Mr. Scharre acknowledged that simply having a human push the buttons in a weapons system is not enough.
“Having a person in the loop is not enough,” he said. “They can’t be just a cog in the loop. The human has to be actively engaged.”

Correction (February 28, 2016): An earlier version of this article misstated the year that the Defense Department issued its directive on the use of unmanned and autonomous weapons. It was 2012, not 2013.
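The F-22 date-line anecdote above is a textbook boundary-condition failure. The actual avionics code was never published, so the sketch below is only an illustration of the class of bug: an update that silently assumes a value never leaves its legal range.

Code:
def advance_longitude_naive(lon_deg, delta_deg):
    # Naive update: assumes longitude never leaves [-180, 180].
    return lon_deg + delta_deg

def advance_longitude_safe(lon_deg, delta_deg):
    # Wrap into [-180, 180) so downstream code never sees an impossible value.
    return (lon_deg + delta_deg + 180.0) % 360.0 - 180.0

# Flying east across the date line: the naive version hands downstream
# systems a longitude of 181.5 degrees, which no consumer expects.
assert advance_longitude_naive(179.5, 2.0) == 181.5
assert advance_longitude_safe(179.5, 2.0) == -178.5

The lesson matches the report's point about brittleness: the date line itself was perfectly foreseeable, but the test suite never exercised the boundary.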

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #36 on: 03-03-2016, 09:10:49 »
In Emergencies, Should You Trust a Robot?



Quote
In emergencies, people may trust robots too much for their own safety, a new study suggests. In a mock building fire, test subjects followed instructions from an “Emergency Guide Robot” even after the machine had proven itself unreliable – and after some participants were told that the robot had broken down. The research was designed to determine whether or not building occupants would trust a robot designed to help them evacuate a high-rise in case of fire or other emergency. But the researchers were surprised to find that the test subjects followed the robot’s instructions – even when the machine’s behavior should not have inspired trust.
The research, believed to be the first to study human-robot trust in an emergency situation, is scheduled to be presented March 9 at the 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016) in Christchurch, New Zealand.
“People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault,” said Alan Wagner, a senior research engineer in the Georgia Tech Research Institute (GTRI). “In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency.”
In the study, sponsored in part by the Air Force Office of Scientific Research (AFOSR), the researchers recruited a group of 42 volunteers, most of them college students, and asked them to follow a brightly colored robot that had the words “Emergency Guide Robot” on its side. The robot led the study subjects to a conference room, where they were asked to complete a survey about robots and read an unrelated magazine article. The subjects were not told the true nature of the research project.
In some cases, the robot – which was controlled by a hidden researcher – led the volunteers into the wrong room and traveled around in a circle twice before entering the conference room. For several test subjects, the robot stopped moving, and an experimenter told the subjects that the robot had broken down. Once the subjects were in the conference room with the door closed, the hallway through which the participants had entered the building was filled with artificial smoke, which set off a smoke alarm.
When the test subjects opened the conference room door, they saw the smoke – and the robot, which was then brightly-lit with red LEDs and white “arms” that served as pointers. The robot directed the subjects to an exit in the back of the building instead of toward the doorway – marked with exit signs – that had been used to enter the building.
“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” said Paul Robinette, a GTRI research engineer who conducted the study as part of his doctoral dissertation. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”
The researchers surmise that in the scenario they studied, the robot may have become an “authority figure” that the test subjects were more likely to trust in the time pressure of an emergency. In simulation-based research done without a realistic emergency scenario, test subjects did not trust a robot that had previously made mistakes.
“These are just the type of human-robot experiments that we as roboticists should be investigating,” said Ayanna Howard, professor and Linda J. and Mark C. Smith Chair in the Georgia Tech School of Electrical and Computer Engineering. “We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human.”
Only when the robot made obvious errors during the emergency part of the experiment did the participants question its directions. In those cases, some subjects still followed the robot’s instructions even when it directed them toward a darkened room that was blocked by furniture.
In future research, the scientists hope to learn more about why the test subjects trusted the robot, whether that response differs by education level or demographics, and how the robots themselves might indicate the level of trust that should be given to them.
The research is part of a long-term study of how humans trust robots, an important issue as robots play a greater role in society. The researchers envision using groups of robots stationed in high-rise buildings to point occupants toward exits and urge them to evacuate during emergencies. Research has shown that people often don’t leave buildings when fire alarms sound, and that they sometimes ignore nearby emergency exits in favor of more familiar building entrances.
But in light of these findings, the researchers are reconsidering the questions they should ask.
“We wanted to ask the question about whether people would be willing to trust these rescue robots,” said Wagner. “A more important question now might be to ask how to prevent them from trusting these robots too much.”
Beyond emergency situations, there are other issues of trust in human-robot relationships, said Robinette.
“Would people trust a hamburger-making robot to provide them with food?” he asked. “If a robot carried a sign saying it was a ‘child-care robot,’ would people leave their babies with it? Will people put their children into an autonomous vehicle and trust it to take them to grandma’s house? We don’t know why people trust or don’t trust machines.”
In addition to those already mentioned, the research included Wenchen Li and Robert Allen, graduate research assistants in Georgia Tech’s College of Computing. The researchers would like to thank Larry Labbe and the Georgia Tech Fire Safety Office for their support during this research.
Support for this research was provided by the Linda J. and Mark C. Smith Chair in Bioengineering, and the Air Force Office of Scientific Research (AFOSR) under contract FA9550-13-1-0169. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the AFOSR.
CITATION: Paul Robinette, Wenchen Li, Robert Allen, Ayanna M. Howard and Alan R. Wagner, “Overtrust of Robots in Emergency Evacuation Scenarios,” 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016).
Research News
Georgia Institute of Technology
177 North Avenue
Atlanta, Georgia 30332-0181 USA
Media Relations Contact: John Toon (404-894-6986) (jtoon@gatech.edu).
Writer: John Toon

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #37 on: 11-03-2016, 06:28:32 »
The Robots Sent Into Fukushima Have 'Died'
 
Quote
By Reuters on 3/9/16 at 10:15 PM (Reuters) - The robots sent in to find highly radioactive fuel at Fukushima's nuclear reactors have “died”; a subterranean "ice wall" around the crippled plant meant to stop groundwater from becoming contaminated has yet to be finished. And authorities still don’t know how to dispose of the highly radioactive water stored in an ever-mounting number of tanks around the site.
Five years ago, one of the worst earthquakes in history triggered a 10-meter-high tsunami that crashed into the Fukushima Daiichi nuclear power station, causing multiple meltdowns. Nearly 19,000 people were killed or left missing and 160,000 lost their homes and livelihoods.
Today, the radiation at the Fukushima plant is still so powerful it has proven impossible to get into its bowels to find and remove the extremely dangerous blobs of melted fuel rods.
The plant's operator, Tokyo Electric Power Co (Tepco), has made some progress, such as removing hundreds of spent fuel rods in one damaged building. But the technology needed to establish the location of the melted fuel rods in the other three reactors at the plant has not been developed.
“It is extremely difficult to access the inside of the nuclear plant,” Naohiro Masuda, Tepco's head of decommissioning, said in an interview. “The biggest obstacle is the radiation.”
The fuel rods melted through their containment vessels in the reactors, and no one knows exactly where they are now. This part of the plant is so dangerous to humans that Tepco has been developing robots that can swim underwater and negotiate obstacles in damaged tunnels and piping to search for the melted fuel rods.
But as soon as they get close to the reactors, the radiation destroys their wiring and renders them useless, causing long delays, Masuda said. 
Each robot has to be custom-built for each building. “It takes two years to develop a single-function robot,” Masuda said. 
IRRADIATED WATER
Tepco, which was fiercely criticized for its handling of the disaster, says conditions at the Fukushima power station, site of the worst nuclear disaster since Chernobyl in Ukraine 30 years ago, have improved dramatically. Radiation levels in many places at the site are now as low as those in Tokyo.
More than 8,000 workers are at the plant at any one time, according to officials on a recent tour. Traffic is constant as they spread across the site, removing debris, building storage tanks, laying piping and preparing to dismantle parts of the plant.
Much of the work involves pumping a steady torrent of water into the wrecked and highly irradiated reactors to cool them down. The irradiated water is then pumped out of the plant and stored in tanks that are proliferating around the site.
What to do with the nearly one million tonnes of radioactive water is one of the biggest challenges, said Akira Ono, the site manager. Ono said he is “deeply worried” the storage tanks will leak radioactive water into the sea - as they have done several times before - prompting strong criticism of the government.
The utility has so far failed to get the backing of local fishermen to release water it has treated into the ocean.
Ono estimates that Tepco has completed around 10 percent of the work to clear the site up - the decommissioning process could take 30 to 40 years. But until the company locates the fuel, it won’t be able to assess progress and final costs, experts say.
The much-touted use of X-ray-like muon imaging has yielded little information about the location of the melted fuel, and the last robot inserted into one of the reactors sent back only grainy images before breaking down.
ICE WALL
Tepco is building the world’s biggest ice wall to keep groundwater from flowing into the basements of the damaged reactors and getting contaminated.
First suggested in 2013 and strongly backed by the government, the wall was completed in February, after months of delays and questions surrounding its effectiveness. Later this year, Tepco plans to pump water into the wall - which looks a bit like the piping behind a refrigerator - to start the freezing process.
Stopping the groundwater intrusion into the plant is critical, said Arnie Gundersen, a former nuclear engineer.
“The reactors continue to bleed radiation into the groundwater and thence into the Pacific Ocean,” Gundersen said. “When Tepco finally stops the groundwater, that will be the end of the beginning.”
While he would not rule out the possibility that small amounts of radiation are reaching the ocean, Masuda, the head of decommissioning, said the leaks stopped after the company built a wall along the shoreline near the reactors that extends below the seabed.
“I am not about to say that it is absolutely zero, but because of this wall the amount of release has dramatically dropped,” he said.
 

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #38 on: 16-03-2016, 09:19:39 »
Maybe it's time to head for the hills  :lol: :lol: :lol:


Modeled After Ants, Teams of Tiny Robots Can Move 2-Ton Car



Quote
Archimedes pointed out that with a lever he could move the world.
He most likely would have been surprised to learn that a team of six microrobots, weighing just 3.5 ounces in total, could pull a car weighing 3,900 pounds.


A group of researchers at the Biomimetics and Dexterous Manipulation Laboratory at Stanford University has been exploring the limits of friction in the design of tiny robots that have the ability to pull thousands of times their weight, wander like gecko lizards on vertical surfaces or mimic bats.
Now they have pushed biomimicry in a new direction. They have taken their inspiration from tiny ants that work as teams to move massive objects. In this case, they are not just taking ideas from nature — the movie “Big Hero 6” made a great deal of what swarms of microrobots could do, including tossing cars.
The researchers’ approach is counterintuitive. Rather than striking powerful blows like a football player making a tackle or a jackhammer, they have focused on synchronizing the smooth application of very tiny forces. The microrobots work in concert, if slowly.
The researchers observed that the ants get great cooperative force by each using three of their six legs simultaneously.
“By considering the dynamics of the team, not just the individual, we are able to build a team of our ‘microTug’ robots that, like ants, are superstrong individually, but then also work together as a team,” said David Christensen, a graduate student who is one of the authors of a research paper describing the feat. The paper will be presented this May at the International Conference on Robotics and Automation in Stockholm.
Their new demonstration is the functional equivalent of a team of six humans moving a weight equivalent to that of an Eiffel Tower and three Statues of Liberty, Mr. Christensen said. The car is the one he uses for commuting to campus. Part of the magic is the use of a special adhesive that was inspired by gecko toes.
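The arithmetic behind that comparison is easy to check against the article's own figures. Here is a back-of-the-envelope sketch in Python (the 3.5-ounce team and 3,900-pound car are quoted above; the nominal 180 lb per human is my assumption):

Code:
# Rough consistency check of the microTug demonstration described above.
OZ_PER_LB = 16

team_weight_lb = 3.5 / OZ_PER_LB   # six robots together, in pounds
car_weight_lb = 3_900

ratio = car_weight_lb / team_weight_lb
print(f"load-to-weight ratio: {ratio:,.0f}x")                       # ~17,829x

# The same ratio applied to a six-person team of ~180 lb humans:
human_team_lb = 6 * 180
equivalent_tons = human_team_lb * ratio / 2_000                     # US short tons
print(f"six-human equivalent: {equivalent_tons:,.0f} short tons")   # ~9,627

That works out to roughly ten thousand short tons, which is indeed the order of magnitude of the Eiffel Tower comparison.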
Last month, Mr. Christensen and Srinivasan Suresh, another graduate student, the researcher Katie Hahm and the mechanical engineering professor Mark Cutkosky published “Let’s All Pull Together: Principles for Sharing Large Loads in Microrobot Teams.”
In an accompanying video, they show that the microrobots, when they are carefully synchronized, can do astounding things.
    Correction: March 15, 2016
 An article on Monday about microbots developed by the Biomimetics and Dexterous Manipulation Laboratory at Stanford University misstated the position at Stanford of one of the developers, Srinivasan Suresh, in some copies. He is a graduate student, not a postdoctoral fellow.
 



http://youtu.be/wU8Q7gIdiMI

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #39 on: 03-04-2016, 07:58:08 »
 Man builds 'Scarlett Johansson' robot from scratch to 'fulfil childhood dream' - and it's scarily lifelike
 
Quote

A humanoid obsessive has built an incredibly realistic female robot from scratch - and it's got more than a passing resemblance to Avengers star Scarlett Johansson.
Ricky Ma, a 42-year-old product and graphic designer, has spent more than $50,000 (£34,000) and a year and a half creating the female robot prototype, Mark 1.
The designer confirmed the scarily lifelike humanoid had been modelled on a Hollywood star, but wanted to keep her name under wraps.
It responds to a set of programmed verbal commands spoken into a microphone and has moving facial expressions, but Ricky says creating it wasn't easy.
He said he was not aware of anyone else in Hong Kong building humanoid robots as a hobby and that few in the city understood his ambition.
Ricky said: "When I was a child, I liked robots. Why? Because I liked watching animation. All children loved it. There were Transformers, cartoons about robots fighting each other and games about robots.
"After I grew up, I wanted to make one. But during this process, a lot of people would say things like, 'Are you stupid? This takes a lot of money. Do you even know how to do it? It's really hard'."
Besides moving its arms and legs, turning its head and bowing, Ma's robot, with blonde hair and hazel eyes, can form detailed facial expressions.
Ricky has dressed 'her' in a crop top and a grey skirt.
In response to the compliment, "Mark 1, you are so beautiful", the robot bows as the 'muscles' around its eyes relax and corners of its lips lift, forming a smile.
It then replies: "Hehe, thank you."
A 3D-printed skeleton lies beneath Mark 1's silicone skin, covering its mechanical and electronic interior.
About 70% of its body was created using 3D printing technology.
Creating the robot, Ma adopted a trial-and-error method in which he encountered obstacles ranging from burned-out motors to the robot losing its balance.
"When I started building it, I realised it would involve dynamics, electromechanics and programming. I have never studied programming, how was I supposed to code?
"Additionally, I needed to build 3D models for all the parts inside the robot. Also, I had to make sure the robot's external skin and its internal parts could fit together. When you look at everything together, it was really difficult," said Ma.
But with Mark 1 standing behind him, Ma said he had no regrets.
"I figured I should just do it when the timing is right and realise my dream. If I realise my dream, I will have no regrets in life," he said.
Ma, who believes the importance of robots will grow, hopes an investor will buy his prototype, giving him the capital to build more.
He wants to write a book about his experience to help other enthusiasts.
The rise of robots is among disruptive labour market changes that the World Economic Forum warns will lead to a net loss of 5.1 million jobs over the next five years.
 


 
 
There are more pictures at the link.  :lol:

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #40 on: 03-04-2016, 08:26:28 »
And then, there's also something like this:
 
 
http://youtu.be/6Viwwetf0gU

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #41 on: 04-04-2016, 07:48:58 »
Hacker reveals $40 attack that steals police drones from 2km away



Quote
Black Hat Asia IBM security guy Nils Rodday says thieves can hijack expensive professional drones used widely across the law enforcement, emergency, and private sectors thanks to absent encryption in on-board chips.
Rodday says the €25,000 (US$28,463, £19,816, AU$37,048) quadcopters can be hijacked with less than $40 of hardware, and some basic knowledge of radio communications.
With that in hand, attackers can commandeer radio links to the drones from up to two kilometres away, and block operators from reconnecting to the craft.
The drone is often used by emergency services across Europe, but the exposure could be much worse; the targeted Xbee chip is common in drones everywhere and Rodday says it is likely many more aircraft are open to compromise.
The Germany-based UAV boffin worked with the consent and assistance of the unnamed vendor to pry apart the internals of the drone and the Android application which controls it.
He found that encryption, while supported, was not active in the Xbee chips due to performance limitations, and that the WiFi link used to control the aircraft at altitudes below 100 metres was protected by extremely vulnerable WEP.
 
Rodday told the Black Hat Asia conference in Singapore that attackers who copy commands from the Android app can fully control the drone, and demonstrated the feat with a “start engines” directive that fired up the aircraft's rotors.
"You can break the WiFi WEP encryption and disconnect their tablet and connect your own, but you have to be within 100 metres," Rodday told The Register.


"On the Xbee link we can perform a man-in-the-middle attack and inject commands between the UAV and the telemetry box from up to two kilometres away.
"An attacker can re-route packets, block out [the operator], or let the packets go through, but I guess most attackers would [steal] it."


The attacker's remote AT commands would be rejected if Xbee encryption was applied, mitigating the man-in-the-middle attack and the ability to snoop on traffic, Rodday says.
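Rodday's own tooling isn't published here, but the mechanics are simple to illustrate. Below is a minimal Python sketch assuming the standard, publicly documented XBee API framing (the 64-bit transmit request, frame type 0x00); the destination address and "start engines" payload are invented for illustration. The point is that on an unencrypted link, any radio that can emit a well-formed frame is indistinguishable from the legitimate operator:

Code:
# Minimal sketch of XBee API framing (not Rodday's actual exploit code).
# Envelope: 0x7E | 16-bit length | frame data | 8-bit checksum.

def xbee_api_frame(frame_data: bytes) -> bytes:
    length = len(frame_data).to_bytes(2, "big")
    # Checksum: 0xFF minus the low byte of the sum of the frame data.
    checksum = (0xFF - (sum(frame_data) & 0xFF)).to_bytes(1, "big")
    return b"\x7e" + length + frame_data + checksum

def tx_request(dest_addr64: int, payload: bytes, frame_id: int = 1) -> bytes:
    """64-bit-address transmit request (API frame type 0x00)."""
    frame_data = (bytes([0x00, frame_id])
                  + dest_addr64.to_bytes(8, "big")
                  + b"\x00"          # options byte
                  + payload)
    return xbee_api_frame(frame_data)

# Hypothetical destination and command payload; with link-layer encryption
# enabled the radio would discard this, without it the frame is accepted.
frame = tx_request(0x0013A200DEADBEEF, b"START_ENGINES")
print(frame.hex())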
The manufacturer which supplied the drone is evaluating Rodday's suggestions about how best to shutter the attack vector, the easiest of which would be to encrypt communications within the firmware on the aircraft and the Android app.
Rodday thanked fellow researchers University of Twente's Professor Dr Aiko Pras and Dr Ricardo de O Schmidt, along with KPMG's Ruud Verbik, Matthieu Paques, Atul Kumar, and Annika Dahms. ®



Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #43 on: 08-04-2016, 07:45:22 »
Touching a Robot's 'Intimate Parts' Makes People Uncomfortable



Quote
Humanoid robots can sense the world around them, move their bodies, and interact with people in ways that are similar to the ways that real people interact. But a robot’s “human-ness” is (at least for now) all just a simulation. It’s a combination of clever software, and in some cases, hardware that’s designed to make it easy for us to fool ourselves into thinking that some glorified box of circuits is even a little bit like a person. We’re very, very good at fooling ourselves like this, to the point where it starts to get a little weird.
Researchers from Stanford University will present a paper at the Annual Conference of the International Communication Association in Fukuoka, Japan, in June, with the title of “Touching a Mechanical Body: Tactile Contact With Intimate Parts of a Human-Shaped Robot is Physiologically Arousing.” The study shows that when a NAO robot asks humans to touch its butt, we get uncomfortable. This is weird because NAO doesn’t really have a butt in the traditional sense of the word, and even if it did, it’s just a robot, and on a very basic level it doesn’t care where you touch it. So what is going on here?


The study itself is fairly straightforward. Subjects (undergrads, of course) were put into a room alone with a NAO robot. They placed their non-dominant hand on a skin conductance sensor to measure their physiological arousal. We should clarify here that “physiological arousal” is a generalized term referring to attention, alertness, and awareness. Despite the fact that we’re talking about touching and arousal at the same time, sexual arousal is a specific subset that the study is (as far as we know) not addressing directly.
Once the subjects were all hooked up to the sensor, NAO introduced itself, and then asked the subjects to use their dominant hand (the one without the sensor on it) to either point to different parts of its body, or touch them. Each anatomical region was scored on its accessibility: how often, in general, people touch other people in those areas. For example, high-accessibility areas include the hands, arms, forehead, and neck, while low-accessibility areas include (unsurprisingly) inner thighs, breasts, buttocks, and (you guessed it) the genital area.
You will probably not be shocked to learn the results, as the researchers describe in their paper:
“Touching less accessible regions of the robot (e.g., buttocks and genitals) was more physiologically arousing than touching more accessible regions (e.g., hands and feet). No differences in physiological arousal were found when just pointing to those same anatomical regions. Further evidence of participants’ sensitivity to touching low-accessible regions of a robot emerged in an analysis of response time, which was longer for participants who touched low-accessible but not high-accessible areas.”
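The shape of that analysis is straightforward to sketch. The values below are invented for illustration (the real data are in the paper); only the grouping of regions by touch accessibility reflects the study design:

Code:
# Sketch of the paper's comparison, with made-up skin-conductance values
# (in microsiemens); only the high/low grouping mirrors the study.
from statistics import mean

responses = {
    "hand":     [0.11, 0.09, 0.13],   # high-accessibility regions
    "forehead": [0.10, 0.12, 0.08],
    "buttocks": [0.31, 0.27, 0.35],   # low-accessibility regions
    "genitals": [0.38, 0.33, 0.29],
}
HIGH, LOW = ("hand", "forehead"), ("buttocks", "genitals")

mean_high = mean(v for r in HIGH for v in responses[r])
mean_low = mean(v for r in LOW for v in responses[r])
print(f"high-accessibility mean: {mean_high:.2f} uS")
print(f"low-accessibility mean:  {mean_low:.2f} uS")
# The reported effect is the direction of this gap (low > high) when
# touching, and its absence when participants merely pointed.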
Okay, so people definitely get uncomfortable and are reluctant to touch these low-accessibility areas on a humanoid robot. The researchers suggest that the cause of this may be a very primitive response to “human-ness” that overrides our brains telling us that it’s just a robot:
“Robots can elicit powerful social responses from people. These responses arise from an inherent tendency for people to treat media that are ‘close enough’ to being human like real people. These responses are not simply an act of playing along—they occur on a deeper physiological level. People are not inherently built to differentiate between technology and humans. Consequently, primitive responses in human physiology to cues like movement, language and social intent can be elicited by robots just as they would by real people.
The question that this raises is just where, exactly, that “close enough” point is, so we asked Jamy Li, who conducted the study along with Wendy Ju and Byron Reeves, what he thought about it:
“The robot we used spoke with people, gestured and looked like a person, so it would be interesting to look at how different degrees of human likeness would affect how much people react to the robot. For a robot that appears to move of its own accord, I’d say that the robot should have a basic human-like form with a head, arms and feet; our study didn’t look at this so that is just my impression.”
It’s important to note that the study didn’t explicitly evaluate the reasons for increased physiological arousal when touching the robot, and there are all kinds of variables here that probably have some effect on how humans and robots interact. For example, the results would definitely be different with a robot of a different design, and they’d likely be different if the robot was interacting with the human in a different way. Having said that, this research does still suggest some things that robot designers could keep in mind, as Li described to us:
“Developers could make their robots more comfortable for users to interact with through touch by thinking about how robots should utilize social conventions. For example, people might feel more comfortable interacting with humanoid robots in the majority of social contexts where the touch buttons are primarily on its hands, arms and forehead as opposed to buttons that are on areas like its eyes, buttocks or throughout its body. Developers could also be aware that when those norms are violated (for example, a robot that has a humanoid form and that gives social cues like speech and gesture asks the person to touch it in an area that people don’t usually touch robots), people may feel uncomfortable about it, so it may be necessary to design around that.”
For a different perspective on this research, we spoke with Dr. Angelica Lim, a software engineer at Aldebaran Robotics (and IEEE Spectrum contributor). “I touch Pepper’s butt all the time,” Dr. Lim says, referring to the social humanoid robot built by Aldebaran and SoftBank. “It’s weird at first, but then you get used to it. That’s the mantra in general for robots. You just get used to it, because it’s not a human, it’s a robot.”
Dr. Lim also points out that context is a significant factor in the way people react to robots. “The researchers created a particular context in which the robot is touched. People grab NAO’s bum when they carry it like a baby, but when the robot is sitting up, shoulders back, staring at you, and talking, we put the robot’s IQ at a higher level, project maturity, and recreate the social constructs that we have between adult humans.”
As robots get more sophisticated and better able to mimic humans in both appearance and actions, this question of touch interaction is going to become increasingly relevant. It’ll also be important to better define the gradient towards human-ness that makes it somehow okay to touch the butt of a cellphone, but weird to touch the butt of a NAO.





Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #44 on: 08-04-2016, 07:47:56 »
Computer Created a ‘New Rembrandt’ After Analyzing Paintings



Quote
Rembrandt van Rijn was one of the most influential classical painters, and the world lost his amazing talent when he died nearly four centuries ago. And yet his newest masterpiece was unveiled only yesterday. How? By scanning and analyzing Rembrandt’s works, a computer was able to create a new painting in near-perfect mimicry of Rembrandt’s style. It has been named, appropriately, ‘The Next Rembrandt’.
The Next Rembrandt project began in October 2014, when the Dutch financial institution ING spoke with the J. Walter Thompson Amsterdam advertising agency about creating a project that would show innovation in Dutch art. The creative director for J. Walter Thompson, Bas Korsten, proposed a simple question in honor of a master Dutch painter.
“Can you teach a computer how to paint like Rembrandt?”


Many groups and types of people came together to answer that question. ING, Microsoft, Delft University of Technology, the Mauritshuis in the Hague, and Museum Het Rembrandthuis assembled a team of art historians, software developers, scientists, engineers, and data analysts. The team built a software system that was capable of understanding and generating new features based on Rembrandt’s unique style.
They began by taking 3D scans of the 346 paintings by the artist and analyzing them to determine common elements shared among the pieces. Based on what the team saw, they felt that in order to best capture Rembrandt's style, the software should create a painting similar to his works. The computer was told to paint a portrait of a Caucasian male with facial hair, 30-40 years old, wearing dark clothing with a hat and collar, and facing to the right.
When the software was finally told to use all of the data the team had collected, it created ‘The Next Rembrandt’. The painting consists of over 148 million pixels, based on more than 160,000 fragments of the artist’s works. Most importantly, this is not merely a computer image, but was actually 3D printed so that the texture of Rembrandt’s brush strokes could also be captured. The final result is a painting that looks exactly like an original Rembrandt.
This new advance in art and technology working together is sure to raise concerns. Does this qualify as a forgery? Who is the painting attributed to? What does this mean for art going forward? It is hard to say.
While there are certainly chances for abuse of this technology, there are also many new opportunities to learn from it. By using computers to analyze the works of past master artists, perhaps we can come to better understand what it is that makes a work of art a masterpiece. And while for now this technology can only be used to mimic and replicate one artist, it could be a step toward truly new works created by a computer drawing on the work of thousands of artists.
There is an old quote about writing that goes:
“If you steal from one author it is plagiarism; if you steal from many authors it is research.”
Who is to say the same doesn’t apply to art?







http://youtu.be/IuygOYZ1Ngo



дејан

  • omnireligious fanatical fundamentalist
  • 4
  • 3
  • Posts: 3.565
...barcode never lies
FLA

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #46 on: 16-05-2016, 09:13:13 »
 A child swallows a battery every 3 hours. This remarkable pill-sized origami robot could remove them.

Quote
After 1-year-old Emmett Rauch ate a lithium battery, he began vomiting blood, prompting a visit to critical care and emergency surgery. A doctor would later compare the toddler’s throat to the scene of a detonated firecracker. It took years and dozens of procedures to reconstruct Emmett’s windpipe before he could breathe on his own.
Across the United States, a child swallows a battery once every three hours, according to one pediatric estimate, equal to about 3,300 cases annually. Based on emergency reports, the vast majority of swallowed batteries turn out to be button cells — the squat silver disks of electrochemical energy, used in hearing aids and TV clickers. Although deaths from swallowing button cells are very rare, serious complications, like what happened in Emmett’s case, can arise when a battery is caught in a child’s throat.
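Those two figures don't quite agree, which a quick bit of arithmetic makes plain: one swallow every three hours implies about 2,900 cases a year, while 3,300 annual cases implies one roughly every 2.7 hours. A quick check in Python:

Code:
# Consistency check of the swallowing statistics quoted above.
HOURS_PER_YEAR = 365 * 24

print(f"one every 3 h  -> {HOURS_PER_YEAR / 3:,.0f} cases/year")     # ~2,920
print(f"3,300 cases/yr -> one every {HOURS_PER_YEAR / 3300:.1f} h")  # ~2.7

Either way, the rate is on the order of several cases a day.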
Thanks to recent research spearheaded by Massachusetts Institute of Technology scientists, small robotic devices could one day be used to retrieve swallowed objects, including batteries. Though the new robot wouldn’t be able to perform major esophageal surgeries, it could, possibly, patch smaller wounds in the stomach. The only thing a patient would have to do, in theory, is swallow — a bit like gulping down a spider to catch a wayward fly.






http://youtu.be/3Waj08gk7v8

 

Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #47 on: 08-06-2016, 08:24:37 »
Siemens has an army of spider robots that work as a team to complete tasks



Quote
It’s expensive to build an automated factory, and even pricier to repurpose one. German manufacturing giant Siemens wants that to change, and it has developed an army of robot spiders to make it happen.
Utilizing what Siemens calls “mobile manufacturing,” researchers in Princeton, New Jersey, have built prototype spider-bots that work together to 3D print structures and parts in real time. Known as SiSpis, or Siemens Spiders, these robots cooperate to accomplish tasks and can be reprogrammed to learn new jobs.


The ability to be reprogrammed gives the bots an advantage over traditional manufacturing robots. Opening an industrial manufacturing factory currently means installing expensive robots that can only do one or two tasks well. In theory, the SiSpis’ programming can be altered to address new tasks, allowing for greater flexibility for manufacturers.
So how do they work? According to Siemens, “the robots use onboard cameras as well as a laser scanner to interpret their immediate environment. Knowing the range of its 3D-printer arm, each robot autonomously works out which part of an area—regardless of whether the area is flat or curved—it can cover, while other robots use the same technique to cover adjacent areas. By dividing each area into vertical boxes, the robots can work collaboratively to cover even complex geometries in such a way that no box is missed.”
Additionally, the spiders have been programmed with a level of autonomy. They know where they are, and when their batteries get low they will make their way back to a charging station and hand off whatever task they were working on to another spider in the system, allowing it to pick up where they left off. Communication between the bots is handled via Bluetooth and Wi-Fi, creating a collaborative mind in which each spider knows exactly how much work it can do and how many other spiders will be needed to finish a job.
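Siemens hasn't published the SiSpis software, but the handoff behavior described above is easy to sketch. In the minimal Python toy model below, the battery threshold, the per-box energy cost, and the rule of handing work to the least-loaded teammate are all my assumptions:

Code:
# Toy model of the battery-aware handoff described above (assumptions mine).
from dataclasses import dataclass, field

@dataclass
class Spider:
    name: str
    battery: float                                # 0.0 .. 1.0
    boxes: list = field(default_factory=list)     # queue of area "boxes"

    def work_one_box(self) -> None:
        if self.boxes:
            self.boxes.pop(0)
            self.battery -= 0.15                  # illustrative cost per box

def step(team: list, low_threshold: float = 0.2) -> None:
    for s in team:
        if s.battery < low_threshold and s.boxes:
            # Hand unfinished boxes to the teammate with the shortest queue,
            # then return to the charging station (modeled as a reset).
            heir = min((t for t in team if t is not s),
                       key=lambda t: len(t.boxes))
            heir.boxes.extend(s.boxes)
            s.boxes.clear()
            s.battery = 1.0
        else:
            s.work_one_box()

team = [Spider("A", 0.4, ["b1", "b2", "b3", "b4"]), Spider("B", 1.0, ["b5"])]
while any(s.boxes for s in team):
    step(team)
print("all boxes covered")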
The possibilities for these robots are staggering, even beyond manufacturing. Imagine a swarm of programmable spider robots which could repair a damaged ship in real time, or be utilized for dangerous missions in space. While the current iterations are roughly the size of a microwave, the company says they are scalable in size and they have plans to make industrial sized SiSpis in the future that are capable of welding cars or other large scale industrial projects.
While these robots are still in development, they’re yet another example of science fiction becoming reality. Let’s just hope their makers remember to put in a fail-safe in case something goes wrong.
H/T Fast Company and Siemens


Meho Krljic

  • 5
  • 3
  • Posts: 58.178
Re: Roboti, dronovi i slične skalamerije
« Reply #49 on: 27-06-2016, 08:00:45 »
Artificially Intelligent Russian Robot Makes a Run for It … Again



Quote
A robot in Russia caused an unusual traffic jam last week after it "escaped" from a research lab, and now, the artificially intelligent bot is making headlines again after it reportedly tried to flee a second time, according to news reports.
 Engineers at the Russian lab reprogrammed the intelligent machine, dubbed Promobot IR77, after last week's incident, but the robot recently made a second escape attempt, The Mirror reported.
 Last week, the robot made it approximately 160 feet (50 meters) to the street, before it lost power and "partially paralyzed" traffic. [The 6 Strangest Robots Ever Created]
 Promobot, the company that designed the robot, announced the escapade in a blog post the next day.
 The strange escape has drawn skepticism from some who think it was a promotional stunt, but regardless of whether the incident was planned, the designers seem to be capitalizing on all the attention. The company's blog includes photographs of the robot from multiple angles as it obstructs traffic, and the robot's escape came a week after Promobot announced plans to present the newest model in the company's series, Promobot V3, in the fall.
 The company said its engineers were testing a new positioning system that allows the robot to avoid collisions while moving under its own control. But when a gate was left open, the robot wandered into the street and blocked a lane of traffic for about 40 minutes, the blog post states.
 The Promobot was designed to interact with people using speech recognition, providing information in the form of an expressive electronic face, prerecorded audio messages and a large screen on its chest. The company has said the robot could be used as a promoter, administrator, tour guide or concierge.
 In light of the robot's recent escapes, and citing multiple changes to the robot's artificial intelligence, Promobot co-founder Oleg Kivokurtsev told The Mirror, "I think we might have to dismantle it."
 But in its blog post, the company said it considers the escape a successful test of the machine's new navigation system, because the robot didn't harm anyone and wasn't damaged during the getaway.
 According to the company's English-language website, one of the advantages of the Promobot compared to a human promoter is that it "will not be confused and stray."