
"Leteci automobil" na trzistu


scallop:
And what fuel does the device that compresses the air run on? A penny per kilometer is still more than a liter of fuel per 100 km, which is no problem at all for boxes like these. And the Indians will easily cram a whole family into one. :lol:

Meho Krljic:
Well, just to be clear, ten thousand dollars for a car like this sounds overpriced from the start, and a penny per kilometer is, as you say, more expensive than gasoline. On the other hand, it presumably pollutes less, so it has that advantage.

Lord Kufer:
Is this for driving around the house?

Meho Krljic:
California has legalized self-driving cars:
 
http://blog.seattlepi.com/techchron/2012/09/25/california-affirms-legality-of-driverless-cars/
 

--- Quote ---
California has become the third state to explicitly legalize driverless vehicles, setting the stage for computers to take the wheel along the state’s highways and roads — at least eventually.
On Tuesday, Governor Jerry Brown signed SB 1298, which affirms that so-called autonomous vehicles are legal in California, while requiring the Department of Motor Vehicles to establish and enforce safety regulations for manufacturers. The governor put pen to paper at Google’s headquarters in Mountain View, where the technology giant has been developing and testing driverless Toyota Prii for years.
“Today we’re looking at science fiction becoming tomorrow’s reality,” Gov. Brown said. “This self-driving car is another step forward in this long march of California pioneering the future and leading not just the country, but the whole world.”
The law immediately allows for testing of the vehicles on public roadways, so long as properly licensed drivers are seated at the wheel and able to take over. It also lays out a roadmap for manufacturers to seek permits from the DMV to build and sell driverless cars to consumers. It requires the department to adopt regulations covering driverless vehicles “as soon as practicable,” and no later than Jan. 2015.
In other words, don’t expect the highways to be overrun with robot drivers just yet. Which is good, since most companies and researchers say there’s much work still to be done.
But Senator Alex Padilla (D-Pacoima), who introduced the bill, and Google, which lobbied for it, say autonomous vehicles could vastly improve public safety in the near future. Google co-founder Sergey Brin added that driverless cars will provide the handicapped greater mobility, give commuters back the productive hours they now waste sitting in traffic and reduce congestion on roads (and by extension, pollution).
“It really has the power to change people’s lives,” he said.
The case for improved safety certainly makes intuitive sense, assuming the technology is adequately developed. A 2006 Department of Transportation study found driver error occurred in almost 80 percent of car accidents. Computers, on the other hand, never get tired or distracted. Presumably they also won’t speed, run red lights, forget to signal or tailgate.
But it’s worth noting that there’s no wide-scale testing of the premise to date. And as every computer user knows well, machines are fallible and occasionally unpredictable. The artificial intelligence software operating these vehicles is making predictions about appropriate responses based on programmed rules and huge volumes of data, including maps and previous miles logged.
But there are always unknown unknowns, unique conditions the software might not have encountered before and might not react to in the way we’d hope.
Ryan Calo, an assistant professor of law focused on robotics at the University of Washington, noted in an earlier interview that a vehicle might know to avoid baby strollers and shopping carts, but might make the wrong choice if suddenly presented with a choice between the two.
Calo thinks autonomous vehicles can improve safety, but notes that public perception of the technology could turn on events like these, even if the machines prove statistically safer than humans. In other words, we’ll be tough and unfair critics. That makes it all the more critical that the technology works well before it’s widely deployed.
This leaves the DMV to tackle all sorts of weighty questions concerning safety and liability, including: How safe is safe enough? How should these vehicles be evaluated against that goal? And how do you create regulations for technology that’s still under development?
“The hard work is left to be done by the DMV,” said Bryant Walker Smith, a fellow at Stanford’s Center for Automotive Research.
He has pointed to a statistical basis for safety that the DMV might consider as it begins to develop standards.
After crunching data on crashes by human drivers, Walker Smith noted in a blog post earlier this year: “Google’s cars would need to drive themselves (by themselves) more than 725,000 representative miles without incident for us to say with 99 percent confidence that they crash less frequently than conventional cars. If we look only at fatal crashes, this minimum skyrockets to 300 million miles. To my knowledge, Google has yet to reach these milestones.”
On Tuesday, Brin said Google cars have now traveled more than 300,000 miles, the last “50,000 or so … without safety critical intervention.”
“But that’s not good enough,” he said.
Brin said there should continue to be extensive field tests, as well as safety evaluations in labs and closed courses.
“The self-driving cars will face far greater scrutiny than a human driver would, and appropriately so,” he said.
In order for the DMV to adequately understand the safety issues potentially posed by an artificial intelligence program, it must reach out to a broad array of stakeholders, Calo said on Tuesday.
“It’s crucial that the DMV speak to technologists, and not just Google,” he said.
Calo added that the DMV should also talk to academic researchers and car companies developing new safety features that could tip into “autonomous” territory. Among other things, it should be cautious about defining “autonomous” vehicles in a way that could discourage companies from adding features that could improve safety, by subjecting them to rigorous new rules, he said.
Another concern about driverless cars is privacy. The machines will have to collect and store certain information as part of their basic functioning, as well as to improve over time.
Due to pressure from privacy advocates, the final version of the law now requires manufacturers to provide a written disclosure describing what data is collected. But John Simpson, director of Consumer Watchdog’s privacy project, says that doesn’t go far enough.
“We think the provision needs to be that information should be gathered only for the purpose of navigating the vehicle, retained only as long as necessary for the navigation of the vehicle and not used for any other purpose whatsoever, unless the consumer specifically gives their permission,” he said.
Technically, driverless vehicles are already legal in many states insofar as no one ever thought to make them illegal. That’s why Google has been able to test its cars on California’s roads. But those sorts of advances have pushed a number of states to take up the issue.
Nevada’s governor signed a driverless car bill last year, as did Florida’s earlier this year. Meanwhile, legislatures in Hawaii, Oklahoma and Arizona have considered similar measures.
 

--- End quote ---
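
Walker Smith’s 725,000-mile figure in the quote above comes from a simple zero-crash argument: if human drivers crash at a rate λ per mile, the probability that a merely human-grade fleet logs m miles without a single crash is exp(-λm), and demanding that this fall below 1 percent gives m ≥ ln(100)/λ. A minimal sketch in Python; the human crash rates here are back-solved to roughly reproduce the quoted figures and are illustrative assumptions, not official statistics:

--- Code: ---
import math

def miles_needed(human_rate_per_mile, confidence=0.99):
    # Miles a robot fleet must log crash-free before we can claim,
    # at the given confidence, a crash rate below the human one:
    # solve exp(-rate * m) <= 1 - confidence for m.
    alpha = 1.0 - confidence
    return math.log(1.0 / alpha) / human_rate_per_mile

# Assumed human rates (illustrative only): about one police-reported
# crash per 160,000 miles, one fatal crash per 65 million miles.
print(round(miles_needed(1 / 160_000)))     # ~737,000 miles
print(round(miles_needed(1 / 65_000_000)))  # ~299 million miles
--- End code ---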

Meho Krljic:
A bit more on cars that drive themselves, and on Asimovian ethical dilemmas:


Here’s a Terrible Idea: Robot Cars With Adjustable Ethics Settings


--- Quote ---
Do you remember that day when you lost your mind? You aimed your car at five random people down the road. By the time you realized what you were doing, it was too late to brake.
Thankfully, your autonomous car saved their lives by grabbing the wheel from you and swerving to the right. Too bad for the one unlucky person standing on that path, struck and killed by your car.
Did your robot car make the right decision? This scene, of course, is based on the infamous “trolley problem” that many folks are now talking about in AI ethics. It’s a plausible scene, since even cars today have crash-avoidance features: some can brake by themselves to avoid collisions, and others can change lanes too.
The thought-experiment is a moral dilemma, because there’s no clearly right way to go. It’s generally better to harm fewer people than more, to have one person die instead of five. But the car manufacturer creates liability for itself in following that rule, sensible as it may be. Swerving the car directly results in that one person’s death: this is an act of killing. Had it done nothing, the five people would have died, but you would have killed them, not the car manufacturer, which in that case would merely have let them die.


Even if the car didn’t swerve, the car manufacturer could still be blamed for ignoring the plight of those five people, when it held the power to save them. In other words: damned if you do, and damned if you don’t.
So why not let the user select the car’s “ethics setting”? The way this would work is that one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car value all lives the same and minimize harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible.
Plus, with an adjustable ethics dial set by the customer, the manufacturer presumably can’t be blamed for hard judgment calls, especially in no-win scenarios, right? In one survey, 44 percent of the respondents preferred to have a personalized ethics setting, while only 12 percent thought the manufacturer should predetermine the ethical standard. So why not give customers what they want?
It Doesn’t Solve Liability for the Company

If the goal is to limit liability for the car manufacturer, this tactic fails: even if the user ultimately determines the weighting of the different values factored into a crash decision, the company can still be liable. To draw out that point, let’s make the ethical choices outrageous:
Imagine that manufacturers created preference settings that allow us to save hybrid cars over gas-guzzling trucks, or insured cars over uninsured ones, or helmeted motorcyclists over unhelmeted ones. Or more troubling, ethics settings that allow us to save children over the elderly, or men over women, or rich people over the poor, or straight people over gay ones, or Christians over Muslims.
In an accident that requires choosing one victim over another, the manufacturer could still be faulted for giving the user any option at all—that is, the option to discriminate against a particular class of drivers or people. Saving, protecting, or valuing one kind of thing effectively means choosing another kind to target in an unavoidable crash scenario.
Granted, some of these choices seem offensive and inappropriate in the first place. Some are rooted in hate, though not all are. But for many of us, it is also offensive and inappropriate to assume that your own life matters more than the lives of others—especially more than five, 10, 20, or 100 lives anonymous to you. (Is love of oneself much different from hatred or indifference toward others?)
To be that self-centered seems to be a thoughtless or callous mindset that’s at the root of many social problems today. Likewise, many of us would be offended if life-and-death decisions about others were made according to costs (legal or financial) incurred by you. Doing the right thing is often difficult, exactly because it goes against our own interests.
Whatever the right value is to put on human life isn’t the issue here, and it’d be controversial any which way. In the same survey above, 36 percent of respondents would want a robot car to sacrifice their life to avoid crashing into a child, while 64 percent would want the child to die in order to save their own life. This is to say that we’re nowhere near a consensus on this issue.
The point is this: Even with an ethics setting adjusted by you, an accident victim hit by your robot car could potentially sue the car manufacturer for (1) creating an algorithm that makes her a target and (2) allowing you the option of running that algorithm when someone like her—someone on the losing end of the algorithm—would predictably be a victim under a certain set of circumstances.
Punting Responsibility to Customers

Even if an ethics setting lets the company off the hook, guess what? We, the users, may then be solely responsible for injury or death in an unavoidable accident. At best, an ethics setting merely punts responsibility from manufacturer to customer; it does nothing to make that responsibility easier to bear. The customer would still need to do the soul-searching and philosophical study required to think carefully about which ethical code he or she can live with, and all that it implies.
And it implies a lot. In an important sense, any injury that results from our ethics setting may be premeditated if it’s foreseen. By valuing our lives over others, we know that others would be targeted first in a no-win scenario where someone will be struck. We mean for that to happen. This premeditation is the difference between manslaughter and murder, a much more serious offense.
In a non-automated car today, though, we could be excused for making an unfortunate knee-jerk reaction to save ourselves instead of a child or even a crowd of people. Without much time to think about it, we can only make snap decisions, if they’re even true decisions at all, as opposed to merely involuntary reflexes.
Deus in Machina

So, an ethics setting is not a quick workaround to the difficult moral dilemma presented by robotic cars. Other possible solutions to consider include limiting manufacturer liability by law, similar to legal protections for vaccine makers, since immunizations are essential for a healthy society, too. Or if industry is unwilling or unable to develop ethics standards, regulatory agencies could step in to do the job—but industry should want to try first.
With robot cars, we’re trying to design for random events that previously had no design, and that takes us into surreal territory. Like Alice’s wonderland, we don’t know which way is up or down, right or wrong. But our technologies are powerful: they give us increasing omniscience and control to bring order to the chaos. When we introduce control to what used to be only instinctive or random—when we put God in the machine—we create new responsibility for ourselves to get it right.
--- End quote ---
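
To make the quoted “ethics setting” concrete: mechanically, the dial would just set the weights in the cost function the car minimizes when every available maneuver causes some harm. A minimal sketch of that idea, with entirely made-up names and numbers (no manufacturer ships such a dial, which is rather the article’s point):

--- Code: ---
from dataclasses import dataclass

@dataclass
class EthicsSetting:
    weight_self: float = 1.0    # value placed on the occupant's safety
    weight_others: float = 1.0  # value placed on each bystander's safety

def crash_cost(option, setting):
    # Weighted expected harm of one candidate maneuver.
    return (setting.weight_self * option["occupant_harm"]
            + setting.weight_others * option["bystander_harm"])

# The trolley-style scene from the quote: swerve (one bystander hit,
# occupant at some risk) vs. stay the course (five bystanders hit).
options = [
    {"name": "swerve", "occupant_harm": 0.8, "bystander_harm": 1.0},
    {"name": "stay",   "occupant_harm": 0.0, "bystander_harm": 5.0},
]

selfish = EthicsSetting(weight_self=10.0, weight_others=0.1)
utilitarian = EthicsSetting()
for setting in (selfish, utilitarian):
    best = min(options, key=lambda o: crash_cost(o, setting))
    print(best["name"])  # selfish dial -> "stay", utilitarian -> "swerve"
--- End code ---

The liability problem falls straight out of the sketch: whoever sets the weights is choosing, in advance, who gets hit.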



The same author on a question that has been cropping up in discussions for some time now: when autonomous cars soon become everyone’s reality, will that change the way cars are used as a terrorist weapon:



Don’t fear the robot car bomb


--- Quote --- Within the next few years, autonomous vehicles—alias robot cars—could be weaponized, the US Federal Bureau of Investigation (FBI) fears. In a recently disclosed report, FBI experts wrote that they believe that robot cars would be “game changing” for law enforcement. The self-driving machines could be professional getaway drivers, to name one possibility. Given the pace of developments on autonomous cars, this doesn’t seem implausible.
But what about robotic car bombers? If car bombs no longer require sacrificing the driver’s life, then criminals and terrorists might be more likely to use them. The two-page FBI report doesn’t mention this idea directly, but this scenario has caused much public anxiety anyway—perhaps reasonably so. Car bombs are visceral icons of terrorism in modern times, from The Troubles of Northern Ireland to regional conflicts in the Middle East and Asia.
In the first half of 2014, about 4,000 people were killed or injured in vehicle bombs worldwide. In the last few weeks alone, more than 150 people were killed by car bombs in Iraq, Afghanistan, Yemen, Somalia, Egypt, and Thailand. Even China saw car bombings this summer.
America is no stranger to these crude weapons either. In the deadliest act of domestic terrorism on US soil, a truck bomb killed 168 people and injured about 700 others in Oklahoma City in 1995. That one explosion caused more than $650 million in damage to hundreds of buildings and cars within a 16-block radius. In 1993, a truck bomb parked underneath the World Trade Center killed six people and injured more than a thousand others in the ensuing chaos. And earlier this year, jihadists were calling for more car bombs in America. Thus, popular concerns about car bombs seem all too real.
But what do automated car bombs mean to criminals and terrorists? Perhaps the same as anything else that is automated. Generally, robots take over those jobs called the “three D’s”: dull, dirty, and dangerous. They bring greater precision, more endurance, cost savings, labor efficiencies, force multiplication, ability to operate in inaccessible areas, less risk to human life, and other advantages.
But how are these benefits supposed to play out in robot car bombs? Less well than might be imagined.
Pros and cons. For the would-be suicide car bomber, a robotic car means eliminating the pesky suicide part. By replacing the human driver who is often sacrificed in the detonation of a car bomb, an autonomous vehicle removes a major downside. This aspect is related to the worry that nation-states may be quicker to use force because of armed drones, since those robots remove the political cost of casualties to their own side. When costs go down, adoption rates go up; therefore, we can expect to see an increase in suicide car-bombing incidents, driven by autonomous technologies.
Or so the thinking goes.
But this analysis is too pat. Part of the point for some guerilla fighters—though probably not for ordinary criminals—is martyrdom and its eternal benefits. So, dying isn’t so much of a cost to these terrorists, but rather more of a payoff. This demographic probably wouldn’t be tempted much by self-driving technology, since they are already undeterred by death.
Of course, it may be that a more calculating terrorist, who still seeks glory, would like to do as much damage as possible before he kills himself. (Though some suicide bombers are women, most of them are still men.) In this case, he may want to mastermind several car-bombing attacks before finally dying in one. Robot cars would enable him to do so, and still allow him to get credit for his work, an issue of importance to terrorists, if not to criminals.
And at the least, those not motivated by ideology might not want to die quite so soon. For them, a robotic driver would be an attractive accomplice.
However, other options are already available for terrorists who do not want to harm themselves—yet these options have not created any panic about car-bombing attacks. For instance, both criminals and guerilla fighters have been known to recruit and train others to do their bidding. Those designated as drivers sometimes are not even aware of their explosive cargo, which avoids the trouble of indoctrinating them toward fanatical self-sacrifice. Terrorists could kidnap innocent people and coerce them to become suicide bombers, which is reportedly occurring today in Nigeria.
So if ease and costs are considerations, there are better alternatives than transforming robot cars into mobile bombs. For one thing, the only production cars being built today with self-driving capabilities are the Mercedes-Benz S-Class sedan (which sells for about $100,000) and the Infiniti Q50 sedan (about $40,000)—not exactly tools for the budget-conscious terrorist, even if prices do fall in the future. Even then, their capacity to operate autonomously is primarily limited to things such as staying within a lane and following the flow of traffic on a highway.
Google’s self-driving car makes even less sense for this evil purpose. As the most advanced automated car today, it would cost more than a Ferrari 599 at over $300,000—if it were for sale, which as a research vehicle it isn’t. (Even if a terrorist could steal it, good luck figuring out how to turn it on.) Anyway, the car can operate autonomously only around Google’s headquarters, since ultra-precise maps beyond that area don’t yet exist. In sum, it is not a good choice for targets outside Mountain View, California.
If a fanboy terrorist really did want to go high-tech, he could more easily rig his own car to be driven by remote control. Or kidnap engineers to do the work, as drug cartels in Mexico have done to build communication systems. Or just get some kamikaze micro-drones. All of these options are more likely and more practical, getting the same job done as autonomous car bombs.
Besides bombing, are there post-execution reasons for using a robot car, such as minimizing forensics evidence? A captured driver, or even the DNA of one who is blown up, can attribute an attack to a particular group. But the same could be achieved by stealing a car and coercing an innocent person to drive.
Robot cars may actually be worse for the criminal who wants to keep a low profile. If they are networked and depend on GPS for navigation, the cars could be tracked as soon as they leave the driveway of the suspect under surveillance. GPS records could be searched to piece together a timeline of events, including where the car has been on the days and weeks leading up to its use as a weapon.
Furthermore, a self-driving car without a human in it at all won’t be in production any time soon. A human will always be “in the loop” for the foreseeable future; at the moment, any “self-driving” car is supposed to have someone in the driver’s seat, ready to take the wheel at a moment’s notice, such as when an unexpected construction detour or bad weather interferes with the car’s sensors and a human operator must quickly retake control. So a robot car bomb with no driver in it would likely raise immediate suspicions, if the car would even move at all.
Admittedly, hacks have already appeared that disable the safety features meant to ensure a human is present and alert. Networked and autonomous cars present many more entry points for hackers, possibly allowing a very knowledgeable criminal to cyber-hijack a robot car.
Theoretically, a terrorist could want to use a robot car as a bomb while he’s still in it—that is, forego the opportunity to spare his own life. It could be that he tends to get lost easily, wants to read last-minute instructions behind the wheel, has to stay in contact with his home base, or must baby-sit the trigger mechanism. A robot car would offer these benefits, however minor they may be.
Possible solutions. The threat of robot car bombers, then, seems unlikely, though not impossible. Some solutions to that possible threat include requiring manufacturers to install a “kill switch” that law enforcement could activate to stop an autonomous vehicle. This plan has already been proposed in the European Union for all cars in the future. Or sensors inside the car could be used to detect hazardous cargo and explosives, similar to the sensors at airport security checkpoints. Or regulators could require special registration of owners of autonomous vehicles, cross-referencing customers with criminal databases and terrorist watch-lists.
But any of these options will face fierce resistance from civil rights advocates and other groups.
And a determined terrorist can get around technological safeguards and firewalls.
At the end of the day, there’s still no substitute for good old-fashioned counterterrorism, human intelligence, and vigilance: in recent weeks, security checkpoints foiled car-bombing plots in Northern Ireland and Jerusalem. Overall, it makes more sense to use these traditional methods; it is easier to keep using checkpoints, and to regulate and monitor the ingredients used in car bombs, than to oversee the cars themselves.
In truth, with robot cars, domestic and international security faces a very old threat. The problem isn’t so much with robots but with stopping enemy vehicles from penetrating city walls with a destructive payload, which is a problem as old as the Trojan horse of ancient Greek mythology. (There’s a reason why a certain kind of malware goes by the same name.) Robot cars merely present a new way to deliver the payload.
Maybe this is a problem that doesn’t demand immediate action and is just part of the “new normal”—if it even comes to pass. For hundreds of years, just about every kind of vehicle has been turned into a mobile bomb: horse-drawn buggies, boats, planes, rickshaws, bicycles, motorcycles, and trains.
This could be a case of misplaced priorities. Or, as journalists Matthew Gault and Robert Beckhusen phrased it in War Is Boring: “Americans freak out over small threats and ignore big ones.” For example, a terrorist with a single well-placed match in California during the summertime could easily do a massive amount of economic damage and disrupt transportation, businesses, and ecosystems. It’s the ultimate in low-tech terrorism, yet could plausibly cause hundreds of millions of dollars in damage.
But the appearance of just one robot car bombing could set back the entire autonomous-driving industry, in addition to the loss of life and the property destroyed. And there are other uses, misuses, and abuses related to autonomous cars that should be of just as much—if not more—concern.
First-world problems. Weirdly, robot car bombs seem to be a decidedly Western—or even American—fear, even though the actual threat posed by car bombs is generally located elsewhere. Most suicide car bombs happen in the Middle East in a low-tech way, whereas they are very rare in the United States. But because most of the news coverage about a hypothetical robot car bomb has occurred in the US media, it gives the false impression that it’s a first-world problem. Autonomous cars would have a hard time operating on Afghanistan’s dirt roads without lane markings, for instance, even if one could be obtained there.
Perhaps the reason for America’s obsession is that the car bomb is a special, iconic weapon of terror—our prized possession turned against us. Unlike rockets and drone missiles that fall from the sky, car bombs can be more insidious. They would infiltrate civilized society, sneaking up on its most vulnerable points. Like matches, cars are omnipresent in the modern world, and thus nearly impossible to control. But very few elaborate car bombings have been attempted, even though they could be done today via remote control or through the use of a kidnapped driver, for example. Simple still works. As an actual threat, the robot car bomb seems overblown.
 
--- End quote ---
