
živimo SF

Started by zakk, 24-01-2009, 02:17:12



scallop

Somehow I was convinced that the "Earth to go" project had long been out of circulation because it's too expensive, and that at least we here on ZS were clear about that. If I, with the little brains I have, were running colonization piecemeal (piece by piece), with adaptation, that is, conversion of resources in situ, and with only starters for producing vital raw materials as the "to go" part, I'm sure the project's backers have even more interesting premises and ideas. That means that in the initial phase, in which the crew would be Kim Jong-ists and elite representatives of ZS, survival would rest on consuming their own products for food and drink, enriched with energy additives as in Red Bull (for the wings). That would last until the burek shops, hamburger joints and Pizza Hut open.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

So, you'd grow food on Mars instead of hauling all of it from Earth? Fine, there's logic to that, of course, hydroponics and all that, maybe even chickens. But you'd probably have quite a wait for a hamburger.

scallop

Have you seen the figure somewhere that a kilo of goods in space takes more than 500 kilos of fuel? Go ahead and pack doughnuts for the trip. Have you noticed that SF has once again proven inspiring, now that clothing has been made that recycles its own fluids? Of course cosmonauts already drink their own urine and sweat. Have you noticed that Bata pointed out that our peasants have already solved one of the problems of colonization cosmonautics by emptying their own septic tanks onto their plantings? Well then, we'll be eating those in a transplanetary colony until we sort out the trouble with the crops. And who's going to dig the septic tanks, and then empty them? And finally, did you notice, some twenty years back, that one of our guys in the USA (who else) took it upon himself to collect the outflows of public toilets? When they realized he was producing urea (the most important component of artificial fertilizers), the license acquired a price, and our man gave up on the business. The greatest pity is that the ZS crew has such a miserable sensibility for what's coming, so a topic that would deal creatively with these problems doesn't exist.
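For scale, the 500-to-1 figure is roughly what the Tsiolkovsky rocket equation implies once staging and structure are accounted for. A minimal Python sketch, with assumed illustrative numbers rather than any particular mission's:

Code
import math

def propellant_per_kg(delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation: kg of propellant burned per kg
    of final (delivered) mass, for a single ideal stage."""
    return math.exp(delta_v / exhaust_velocity) - 1.0

# Assumed values: ~9,400 m/s to reach low Earth orbit; kerosene/LOX
# exhaust velocity ~3,300 m/s; ~4,300 m/s more for trans-Mars injection.
leo = propellant_per_kg(9_400, 3_300)
mars = propellant_per_kg(9_400 + 4_300, 3_300)
print(f"LEO:  ~{leo:.0f} kg propellant per kg delivered")   # ~16
print(f"Mars: ~{mars:.0f} kg propellant per kg delivered")  # ~63

# Tanks, engines and structure are ignored above; once those ride along
# too, effective ratios of hundreds of kilos of propellant per kilo of
# useful cargo are plausible, which is the ballpark of the 500:1 claim.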
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

дејан

Goodbye, Oil: US Navy Cracks New Renewable Energy Technology To Turn Seawater Into Fuel, Allowing Ships To Stay At Sea Longer

Quote
After decades of experiments, U.S. Navy scientists believe they may have solved one of the world's great challenges: how to turn seawater into fuel.

and related information:

Fueling the Fleet, Navy Looks to the Seas

Navy's future: Electric guns, lasers, water as fuel

Quote
The Navy thinks the other weapon prototype it discussed this week, the electromagnetic railgun, will save money while providing a more potent force.
The EM Railgun launches projectiles using electricity instead of chemical propellants.
The gun uses electromagnetic force to send a missile to a range of 125 miles at 7.5 times the speed of sound, according to the Navy. When it hits its target, the projectile does its damage with sheer speed. It does not have an explosive warhead.
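Kinetic energy grows with the square of velocity, so the "damage with sheer speed" claim is easy to sanity-check. A back-of-the-envelope sketch; the 10 kg projectile mass is an assumption, not a figure from the article:

Code
# Kinetic energy of a railgun projectile at Mach 7.5.
SPEED_OF_SOUND = 343.0           # m/s, sea level
mass_kg = 10.0                   # assumed projectile mass
velocity = 7.5 * SPEED_OF_SOUND  # ~2,570 m/s

energy_mj = 0.5 * mass_kg * velocity**2 / 1e6
print(f"~{energy_mj:.0f} MJ")    # ~33 MJ, roughly 8 kg of TNT equivalent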

Scale Model WWII Craft Takes Flight With Fuel From the Sea Concept

Quote
Fueled by a liquid hydrocarbon—a component of NRL's novel gas-to-liquid (GTL) process that uses CO2 and H2 as feedstock—the research team demonstrated sustained flight of a radio-controlled (RC) P-51 replica of the legendary Red Tail Squadron, powered by an off-the-shelf (OTS) and unmodified two-stroke internal combustion engine.
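To get a feel for the feedstock arithmetic: if the GTL step is approximated as net hydrogenation of CO2 into -CH2- chain units plus water (a simplification of NRL's actual multi-step chemistry), the mass balance per kilogram of fuel works out as below.

Code
# Net simplification:  CO2 + 3 H2  ->  (-CH2-) + 2 H2O
M_CO2, M_H2, M_CH2 = 44.01, 2.016, 14.03   # molar masses, g/mol

moles_ch2 = 1000.0 / M_CH2                 # per 1 kg of fuel
co2_kg = moles_ch2 * M_CO2 / 1000          # ~3.1 kg of CO2
h2_kg = moles_ch2 * 3 * M_H2 / 1000        # ~0.43 kg of H2
print(f"per kg fuel: ~{co2_kg:.1f} kg CO2, ~{h2_kg:.2f} kg H2")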
...barcode never lies
FLA

Meho Krljic

Yes, I mentioned that here the other day.

There's also a video of the plane flying:


Creating Fuel from Seawater

дејан

eh, damn, somehow I didn't see/notice it  :( (at least I broadened the info a bit with the CNN article)
...barcode never lies
FLA

Meho Krljic

No worries, it's worth repeating twice, this is after all an engine that runs on water  :lol:

scallop

Knowing us, we'll use up the oceans too.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

But before we use them up, we'll fight some ugly wars over them.

Meho Krljic

In video games, when you're hurt you just gulp down a "health pack" and you're instantly fine. Well, this is actually one step away from practice. Behold: NANOPARTICLES!!!



Innovative strategy to facilitate organ repair

Here's the full paper:

http://onlinelibrary.wiley.com/enhanced/doi/10.1002/anie.201401043/

Meho Krljic

Not exactly hard SF, but it is the softer, sociological kind, etc. Namely: it will turn out one day that the Internet gave us a lot and took a lot, and that in the final reckoning we may even be losing. A pile of things became available to us over the internet for little or no money, but the flip side is that we have demolished the markets for those same things and made it impossible for people to earn from them. The conventional logic is that when technology makes certain occupations obsolete, the next generation moves on to new occupations that the technology has made available, but the impression is that today technology develops faster than the rate at which populations transition to new jobs, if there are any (in sufficient numbers). This reminded me of the Krugman piece I quoted the other day, which talks about how, historically, labor productivity leads to an increase in social wealth from which everyone profits, including those who work, but that we are currently in a phase where labor productivity is rising and social wealth is growing, yet its distribution is such that those who work don't have more - often they have less - while those who own capital have MUCH more than before, etc.

Uspon "share economije", dakle one gde obični ljudi jedni drugima ustupaju/ prodaju/ menjaju usluge koje izvode usputno dok rade neki svoj posao, koristeći internet kao medijum za komunikaciju je u tom svetlu jedna lepa stvar jer omogućava narodu da zaobiđe krupan biznis i iskoristi resurse zajednica a da ta zajednica više ne mora da bude samo naše neposredno okruženje. Wiredov članak opširno govori o ovome uzimajući za primer kompanije Airbnb i Lyft, insistirajući da je ovo pozitiva trend. Sa druge strane, odgovor na ovaj tekst u Nju Jork Magazinu ima malo manje optimističan ton, pokazujući da je porast share usluga pre svega produkt ekonomskog modela koji je pobrisao ogromnu količinu stalnih radnih mesta i "normalnih" zaposlenja, odnosno da ljudi pribegavaju prevoženju stranaca na aerodrom, onda kad i sami tamo idu i sličnim stvarima, zato što svoje profesionalne kapacitete zapravo više ne mogu da monetizuju (u dovoljnoj meri da od njih pristojno žive) u svetu u kome je gomila poslova otišla u Aziju, druga gomila poverena automatima, a treću gomilu izvode ljudi u svoje slobodno vreme...

scallop

because they actually can no longer monetize their professional capacities (sufficiently to live decently off them) in a world where one pile of jobs has gone to Asia, a second pile has been handed over to machines, and a third is done by people in their free time...

There - that was all you needed to write.

Something authentic from here: parents are losing control over their children because they're stretched across three jobs.

Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Mica Milovanovic

I read that article in Wired yesterday and was thinking just like the guy from New York Magazine. Necessity changes the economy. In Ukraine, if you need to get from one place to another, you just raise your hand and agree with the car's owner on how much the ride will cost. Share economy, my foot... Things are cracking apart more and more...
Mica

Mme Chauchat

Quote from: Meho Krljic on 29-04-2014, 10:18:32
The conventional logic is that when technology makes certain occupations obsolete, the next generation moves on to new occupations that the technology has made available, but the impression is that today technology develops faster than the rate at which populations transition to new jobs, if there are any (in sufficient numbers).

I'm of course a wretched philologist, not a sociologist, but I do know that the history of the early industrial age taught us precisely this: before any eventual move to new occupations, several generations of workers will die of hunger. See, for example, the weavers who raised uprisings - never successful ones - all over Europe from the end of the eighteenth century to the middle of the nineteenth.

scallop

Apparently they don't need proofreaders either, and we can raise uprisings. For the next hundred years.

Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

Yes, it's a big question how smooth any transition spurred by technological change has ever been. What we have today is certainly less dramatic than the industrial revolution in Europe and the Luddite movement. But everyone's own burden is the heaviest, I reckon.

Meho Krljic


scallop

The logic of warfare was last in the ethical sphere in the days of Sparta:


"Majko, mač je kratak."
"A ti korak bliže."


and somewhat later, in Rome, when the gladius was kept deliberately short to force fighters into close contact. By that time the barbarians already preferred forging longer swords. Which means that for a very long time now, one goal of warfare has been not exposing your own fighters to lethal contact. The history of weapons development shows that the aim is to reach the enemy with your stone, not to expose your own ass. So between the drone and the robot, and the bow and arrow, there is no logical difference. It has always been handy to kill your enemies without letting your coffee go cold.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

mac

Hm, that raises the question of why they don't also ban mines and other explosive devices that are planted somewhere and then do the killing "by themselves".

Meho Krljic

Mines actually are "banned": a good share of the world's states agreed, under the Ottawa Treaty, not to produce or use mines.

And the two texts I linked are about a more specific kind of ethical dilemma around autonomous weapons - so not just about the remote killing involved in using drones or mines. A drone is operated by a human, so a human makes the decisions about killing, while a mine has no power of decision; it explodes no matter who steps on it. An autonomous robotic weapon, presumably, distinguishes enemy, friend and neutral target, and that is where the ethical debate is focused.

scallop

Quote from: Meho Krljic on 14-05-2014, 16:35:54
An autonomous robotic weapon, presumably, distinguishes enemy, friend and neutral target, and that is where the ethical debate is focused.


You'll get nothing out of that, Meho. Never in history have we managed to tell friend from enemy, so that debate, too, is a waste.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

Well, that's the premise of the two texts I linked. Anyone who considers it a waste of time is free to politely ignore them, as you do.

scallop

I'd like you, someday, just once, to defend my positions the way you defend some links. :shock:
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

I will, as soon as the need arises!!!!!!  :lol:

Meho Krljic

This hasn't materialized yet, but it's interesting to think through: once computers become self-aware, will they communicate with us at all, given how much faster than us they process information? In other words, will they even want to slow themselves down enough to hold a conversation with us which, from their perspective, could last thousands of years? Written by Jeff Atwood, whom I've already quoted here on other grounds.


The Infinite Space Between Words
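The scale of the mismatch Atwood describes can be gestured at with trivial arithmetic. A sketch with assumed round numbers; the ratio is the point, not the exact figures:

Code
# If one nanosecond of machine time "felt like" one subjective second,
# how long would waiting for a human reply feel to the machine?
tick_s = 1e-9          # assumed: one "machine moment" per nanosecond
human_reply_s = 2.0    # a quick human comeback in a chat

subjective_years = (human_reply_s / tick_s) / (3600 * 24 * 365)
print(f"one 2-second reply ~ {subjective_years:.0f} subjective years")  # ~63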

Meho Krljic

A bit more about robots that are supposed to have the capacity for ethical reasoning:


Now The Military Is Going To Build Robots That Have Morals


Quote
Are robots capable of moral or ethical reasoning? It's no longer just a question for tenured philosophy professors or Hollywood directors. This week, it's a question being put to the United Nations.

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.
"Even though today's unmanned systems are 'dumb' in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we've seen before," Paul Bello, director of the cognitive science program at the Office of Naval Research told Defense One. "For example, Google's self-driving cars are legal and in-use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake."
The United States military prohibits lethal fully autonomous robots. And semi-autonomous robots can't "select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator," even in the event that contact with the operator is cut off, according to a 2012 Department of Defense policy directive.
"Even if such systems aren't armed, they may still be forced to make moral decisions," Bello said. For instance, in a disaster scenario, a robot may be forced to make a choice about whom to evacuate or treat first, a situation where a bot might use some sense of ethical or moral reasoning. "While the kinds of systems we envision have much broader use in first-response, search-and-rescue and in the medical domain, we can't take the idea of in-theater robots completely off the table," Bello said.
Some members of the artificial intelligence, or AI, research and machine ethics communities were quick to applaud the grant. "With drones, missile defenses, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions," AI researcher Steven Omohundro told Defense One. "Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved. The military has always had to define 'the rules of war' and this technology is likely to increase the stakes for that."
"We're talking about putting robots in more and more contexts in which we can't predict what they're going to do, what kind of situations they'll encounter. So they need to do some kind of ethical reasoning in order to sort through various options," said Wendell Wallach, the chair of the Yale Technology and Ethics Study Group and author of the book Moral Machines: Teaching Robots Right From Wrong.
The sophistication of cutting-edge drones like British BAE Systems's batwing-shaped Taranis and Northrop Grumman's X-47B reveal more self-direction creeping into ever more heavily armed systems. The X-47B, Wallach said, is "enormous and it does an awful lot of things autonomously."


But how do you code something as abstract as moral logic into a bunch of transistors?  The vast openness of the problem is why the framework approach is important, says Wallach. Some types of morality are more basic, thus more code-able, than others. 
"There's operational morality, functional morality, and full moral agency," Wallach said. "Operational morality is what you already get when the operator can discern all the situations that the robot may come under and program in appropriate responses... Functional morality is where the robot starts to move into situations where the operator can't always predict what [the robot] will encounter and [the robot] will need to bring some form of ethical reasoning to bear."
It's a thick knot of questions to work through. But, Wallach says, with a high potential to transform the battlefield.
"One of the arguments for [moral] robots is that they may be even better than humans in picking a moral course of action because they may consider more courses of action," he said.
Ronald Arkin, an AI expert from Georgia Tech and author of the book Governing Lethal Behavior in Autonomous Robots, is a proponent of giving machines a moral compass. "It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of," Arkin wrote in a 2007 research paper (PDF). Part of the reason for that, he said, is that robots are capable of following rules of engagement to the letter, whereas humans are more inconsistent.
AI robotics expert Noel Sharkey is a detractor. He's been highly critical of armed drones in general, and has argued that autonomous weapons systems cannot be trusted to conform to international law.
"I do not think that they will end up with a moral or ethical robot," Sharkey told Defense One. "For that we need to have moral agency. For that we need to understand others and know what it means to suffer. The robot may be installed with some rules of ethics but it won't really care. It will follow a human designer's idea of ethics."
"The simple example that has been given to the press about scheduling help for wounded soldiers is a good one. My concern would be if [the military] were to extend a system like this for lethal autonomous weapons - weapons where the decision to kill is delegated to a machine; that would be deeply troubling," he said.
This week, Sharkey and Arkin are debating the issue of whether or not morality can be built into AI systems before the U.N. where they may find an audience very sympathetic to the idea that a moratorium should be placed on the further development of autonomous armed robots.
Christof Heyns, U.N. special rapporteur on extrajudicial, summary or arbitrary executions for the Office of the High Commissioner for Human Rights, is calling for a moratorium. "There is reason to believe that states will, inter alia, seek to use lethal autonomous robotics for targeted killing," Heyns said in an April 2013 report to the U.N.
The Defense Department's policy directive on lethal autonomy offers little reassurance here since the department can change it without congressional approval, at the discretion of the chairman of the Joint Chiefs of Staff and two undersecretaries of Defense. University of Denver scholar Heather Roff, in an op-ed for the Huffington Post, calls that a "disconcerting" lack of oversight and notes that "fielding of autonomous weapons then does not even rise to the level of the Secretary of Defense, let alone the president."
If researchers can prove that robots can do moral math, even if in some limited form, they may be able to defuse rising public anger and mistrust over armed unmanned vehicles. But it's no small task.
"This is a significantly difficult problem and it's not clear we have an answer to it," said Wallach. "Robots both domestic and militarily are going to find themselves in situations where there are a number of courses of actions and they are going to need to bring some kinds of ethical routines to bear on determining the most ethical course of action. If we're moving down this road of increasing autonomy in robotics, and that's the same as Google cars as it is for military robots, we should begin now to do the research to how far can we get in ensuring the robot systems are safe and can make appropriate decisions in the context they operate."

Meho Krljic

And more talk about how and whether robots can make ethical judgments, this time in an entirely civilian setting. Namely, Eric Sofge last week produced a minor tsunami on the internet, writing on the Popular Science blog about how, since we'll soon have commercially available autonomous cars that drive themselves without human control, the following question becomes important: if the car realizes it must choose between causing a crash that kills one person or one that kills two, who is responsible for the decision it made? That is, who carries the full weight of the ethical judgment made in that moment? The manufacturer? The programmer? This brings us right back to Asimov and his laws of robotics. Here's the post:

The Mathematics of Murder: Should a Robot Sacrifice Your Life to Save Two?

Quote
It happens quickly—more quickly than you, being human, can fully process.
A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there's too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.
Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn't be simpler.
This, roughly speaking, is the problem presented by Patrick Lin, an associate philosophy professor and director of the Ethics + Emerging Sciences Group at California Polytechnic State University. In a recent opinion piece for Wired, Lin explored one of the most disturbing questions in robot ethics: If a crash is unavoidable, should an autonomous car choose who it slams into?
It might seem like a simple thought experiment, a twist on the classic "trolley problem," an ethical conundrum that asks whether you'd save five people on a runaway trolley, at the price of killing one person on the tracks. But the more detailed the crash scenarios get, the harder they are to navigate. Assume that the robot has what can only be described as superhuman senses and reaction speed, thanks to its machine reflexes and suite of advanced sensors. In that moment of truth before the collision, should the vehicle target a small car, rather than a big one, to err towards protecting its master? Or should it do the reverse, aiming for the SUV, even if it means reducing the robo-car owner's chances of survival? And what if it's a choice between driving into a school bus, or plowing into a tree? Does the robot choose a massacre, or a betrayal?
The key factor, again, is the car's superhuman status. "With great power comes great responsibility," says Lin. "If these machines have greater capacity than we do, higher processor speeds, better sensors, that seems to imply a greater responsibility to make better decisions."
Current autonomous cars, it should be said, are more student driver than Spider-Man, unable to notice a human motorist waving them through an intersection, much less churn through a complex matrix of projected impacts, death tolls, and what Lin calls "moral math" in the moments before a collision. But sensors, processors and software are the rare elements of robotics that tend to advance rapidly (while actuation and power density, for example, limp along with typical analog stubbornness). While the timeframe is unclear, autonomous cars are guaranteed to eventually do what people can't, either as individual sensor-laden devices, or because they're communicating with other vehicles and connected infrastructure, and anticipating events as only a hive mind can.
So if we assume that hyper-competence is the manifest destiny of machines, then we're forced to ask a question that's bigger than who they should crash into. If robots are going to be superhuman, isn't it their duty to be superheroes, and use those powers to save as many humans as possible?
* * *
This second hypothetical is bloodier than the first, but less lethal.
A group of soldiers has wandered into the kill box. That's the GPS-designated area within which an autonomous military ground robot has been given clearance to engage any and all targets. The machine's sensors calculate wind-speed, humidity, and barometric pressure. Then it goes to work.
The shots land cleanly, for the most part. All of the targets are down.
But only one of them is in immediate mortal danger—instead of suffering a leg wound, like the rest, he took a round to the abdomen. Even a robot's aim isn't perfect.
The machine pulls back, and holds its fire while the targets are evacuated.
No one would call this kind of robot a life-saver. But in a presentation to DARPA and the National Academy of Sciences two years ago, Lin presented the opposite what-if scenario: A killer robot that's accurate enough to shoot essentially every one of its targets.
According to Lin, such a system would risk violating the Geneva Conventions' article on restricting "arms which cause superfluous injury or unnecessary suffering." The International Committee of the Red Cross developed more specific guidelines in a later proposal, calling for a ban on weapons with a "field mortality of more than 25% or hospital mortality of more than 5%." In other words, new systems shouldn't kill a target outright more than a quarter of the time, or have more than a five percent chance of leading to his or her death in a hospital.
"It's implicit in war, that we want to give everyone a fair chance," says Lin. "The other side probably aren't all volunteers. They could be conscripted. So the laws of war don't authorize you to kill, but to render enemy combatants unable to fight." A robot that specializes in shooting people in the head, or some other incredibly effective, but overwhelmingly lethal capability—where death is a certainty, because of superhuman prowess—could certainly be defined as inhumane.
As with the autonomous car crash scenario, everything hinges on that level of technological certainty. A human soldier or police officer isn't legally or ethically expected to aim for a target's leg. Accuracy, at any range or skill level, is never a sure thing for mere mortals, much less ones full of adrenaline. Likewise, even the most seasoned, professional driver can't be expected to execute the perfect maneuver, or the ethically "correct" decision, in the split-second preceding a sudden highway collision.
But if it's possible to build that level of precision into a machine, expectations would invariably change. The makers of robots that do bodily harm (through intention or accident) would have to address a range of trolley problems during development, and provide clear decisions for each one. Armed bot designers might have it relatively easy, if they're able to program systems to cripple targets instead of executing them. But if that's the clear choice—that robots should actively reduce human deaths, even among the enemy—wouldn't you have to accept that your car has killed you, instead of two strangers?
* * *
Follow this line of reasoning to its logical conclusion, and things start to get a little sci-fi, and more than a little unsettling. If robots are proven capable of sparing human lives, sacrificing the few for the good of the many, what sort of monster would program them to do otherwise?
And yet, nobody in their right mind would buy an autonomous car that explicitly warns the customer that his or her safety is not its first priority.
That's the dilemma that makers of robot vehicles could eventually face if they take the moral and ethical high road, and design them to limit human injury or death without discrimination. To say that such an admission would slow the adoption of autonomous cars is an understatement. "Buy our car," jokes Michael Cahill, a law professor and vice dean at Brooklyn Law School, "but be aware that it might drive over a cliff rather than hit a car with two people."
Okay, so that was Cahill's tossed-out hypothetical, not mine. But as difficult as it would be to convince automakers to throw their own customers under the proverbial bus, or to force their hand with regulations, it might be the only option that shields them from widespread litigation. Because whatever they choose to do—kill the couple, or the driver, or randomly pick a target—these are ethical decisions being made ahead of time. As such, they could be far more vulnerable to lawsuits, says Cahill, as victims and their family members dissect and indict decisions that weren't made in the spur of the moment, "but far in advance, in the comfort of corporate offices."
In the absence of a universal standard for built-in, pre-collision ethics, superhuman cars could start to resemble supervillains, aiming for the elderly driver rather than the younger investment banker—the latter's family could potentially sue for considerably more lost wages. Or, less ghoulishly, the vehicle's designers could pick targets based solely on make and model of car. "Don't steer towards the Lexus," says Cahill. "If you have to hit something, you could program it hit a cheaper car, since the driver is more likely to have less money."
The greater good scenario is looking better and better. In fact, I'd argue that from a legal, moral, and ethical standpoint, it's the only viable option. It's terrifying to think that your robot chauffeur might not have your back, and that it would, without a moment's hesitation, choose to launch you off that cliff. Or weirder still, concoct a plan among its fellow, networked bots, swerving your car into the path of a speeding truck, to deflect it away from a school bus. But if the robots develop that degree of power over life and death, shouldn't they have to wield it responsibly?
"That's one way to look at it, that the beauty of robots is that they don't have relationships to anybody. They can make decisions that are better for everyone," says Cahill. "But if you lived in that world, where robots made all the decisions, you might think it's a dystopia."



And then, after internet-wide debate on this question, Eric returns with a new post that's as interesting as the first:



Robots Are Strong: The Sci-Fi Myth of Robotic Competence

Quote
Last week, I created a minor disturbance in the Internet, with a not-so-simple question—should a robotic car sacrifice its owner's life, in order to spare two strangers?
It was never meant to be a rhetorical question. After talking to Patrick Lin, the California Polytechnic State University robo-ethicist who initially presented the topic of ethical vehicles in an op-ed for Wired, as well as discussing the legal ramifications with a law professor and vice dean at Brooklyn Law School, I was convinced: For the good of our species, the answer is a wincing, but whole-hearted affirmative. If an autonomous vehicle has to choose between crashing into the few, in order to save the many, those ethical decisions should be worked out ahead of time, and baked into its algorithms. To me, all other options point to a chaos of litigation, or a monstrous, machine-assisted Battle Royale, as everyone's robots—automotive or otherwise—prioritize their owners' safety above all else, and take natural selection to the open road.
The reaction to this story was varied, as it should be for such a complex thought experiment. But a few trends emerged.
First, there were the usual comment-section robo-phobics, who may or may not have read past my headline, or further than the summaries presented by response pieces on Gizmodo and Slate. The most liked (so far) comment on Facebook: "Enough to justify NEVER making robots or self driving cars. Ethical and moral decisions should be made by humans, not their creations." Similarly, people talked about Skynet, because Terminator references are as unkillable as the movie's fictional robot assassins, despite being just as humorless, and based on nothing that's ever happened in real life.
But the more interesting responses were the dismissals, which came in two varieties: advising that all robot cars should simply follow legendary SF author Isaac Asimov's First Law of Robotics (that a robot may not harm a human being, through action or inaction), or predicting that robot cars will be so infallible, that lethal collisions will be obsolete. "I don't think this would be an issue," wrote Reddit user iamnotmagritte. "The cars would probably communicate with each other, avoiding the situation altogether, and in the case this isn't enough, letting each other know what paths to take to avoid either crashing." During a Twitter discussion with Tyler Lopez, who wrote the related Slate piece, and made some excellent points about the technical and legal inability of current autonomous cars to solve any sort of "who do I kill?" trolley problem, Lopez shared that sentiment. With vehicles and road infrastructure networked together, he proposed, "the dangers associated with the moral algorithm would be solved by network algorithms instead."
Robots, in other words, simply have to be told not to kill anyone, much less two people, and they'll carry out that mission with machine precision.
This is distressing. To me, it's proof of something I've suspected for years, but haven't been able to articulate, or at least mention without seeming like even more of an insufferable snob. But here goes: Humans, on the whole, do not understand how robots work.
This shouldn't be a huge surprise. Robotics is an immensely complex field of study, and only a vanishingly small portion of the human race are training or employed as roboticists. But you could say the same of physics, and yet the average person doesn't feel qualified to casually weigh in on the mechanics of gravitational lensing, or the spooky feasibility of alternate universes branching out with each decision we make.
So what is it about robots that makes people assume they understand them?
This isn't a rhetorical question either. The answer is Isaac Asimov. Or, more generally, science fiction. SF writers invented the robot long before it was possible to build one. Even as automated machines have become integral to modern existence, the robot SF keeps coming. And, by and large, it keeps lying. We think we know how robots work, because we've heard campfire tales about ones that don't exist.
I'll tackle other major myths of robotics in future posts, but let's start with the one most germane to robot cars, and the need to develop ethical frameworks for any autonomous machine operating in our midst.
Let's talk about the myth of robotic competence.
* * *
"Trust me," Bishop says, before the knife starts moving. In one of the most famous scenes in a movie filled with famous scenes, the android (played by Lance Henriksen) stabs the table between his splayed fingers, and those of the Space Marine his hand is pinning down. He stabs other gaps, and the pace builds until the blade is a blur, gouging the table in a staccato frenzy. When he's done, we see that Bishop has nicked himself slightly. But the poor Marine is unharmed. A bravura performance that's merely a side benefit of being an artificial person.
This is how Aliens introduces its resident robot, and his inhuman degree of competence. Along with possessing uncanny knife skills and hand-eye coordination, Bishop is unflinchingly brave, inhumanly immune to claustrophobia, able to expertly pilot combat spacecraft (not exactly standard training for a medical officer), and is barely fazed by having his body torn in half. Really, there was no reason to send humans on that doomed bug hunt. A crew of armed synthetics—cleared to do harm, as Bishop was not—could have waltzed off of LV-426 with nary a drop of their white blood spilled.
So why, exactly, is Bishop such a remarkable specimen? It's not that Aliens peered into our future, and divined the secrets of robotic efficiency that modern roboticists have yet to discover. It's because, like most SF, the movie is a work of adventure fiction. And when a story's primary goal is to thrill, its robots have to be thrilling.
Aliens is merely continuing a tradition that dates back to literature's first unofficial android, Frankenstein's monster, an assembled being whose superhuman physical and mental gifts aren't based on the quality of raw materials—he wasn't stitched together from Olympic athletes and Nobel winners. The monster's perfection is just as unexplained as Bishop's, or that of countless other fictional automatons, from Star Trek's Data to Almost Human's Dorian. You could guess at the reasons, of course. Where humans are a random jumble of genetic traits, some valuable, others maladaptive, robots are painstakingly optimized. Machines don't tire, or lose their nerve. Though their programming can be compromised, or it might suddenly sprout inconvenient survival instincts, their ability to accomplish tasks is assured. Robots are as infallible as the Swiss clocks they descended from.
The myth of robotic competence is based on a hunch. And it's a hunch that, for the most part, has been proven dead wrong by real-life robots.
Actual robots are devices of extremely narrow value and capability. They do one or two things with competence, and everything else terribly, or not at all. Auto-assembly bots can paint or spot-weld a vehicle in a fraction of the time that a human crew might require, and with none of the health concerns. That's their knife trick. But ask them to install upholstery, and they would most likely bash the vehicle to pieces.
Robot cars, at the moment, have a similarly savant-like range of expertise. As The Atlantic recently covered, Google's driverless vehicles require detailed LIDAR maps—3D models created from lasers sweeping the contours of a given roadway—to function. Autonomous cars have to do impressive things, like detecting the proximity of surrounding cars, and determining right of way at intersections. But they are algorithmically locked onto their laser roads. They stay the prescribed course, following a trail of sensor-generated breadcrumbs. Compared to what humans have to contend with, these robots are the most sheltered sort of permanent student drivers. No one is quizzing them by sending pedestrians or drunk drivers darting into their path, or diverting them through un-mapped, snow-covered country lanes. Their ability to avoid fatal collisions remains untested.
Of course, they'll get better. Sensors will improve and multiply, control algorithms will become more robust, and perhaps the robots will begin talking amongst themselves, warning each other of imminent danger. But the leap from a crash-reduced world to a completely crash-free one is an assumption, and not well-supported by the harsh realities of robotics in particular, and mechanical devices in general. Machines break. Software stumbles. The automotive environment is one of the most punishing and challenging in all of engineering, requiring components to stand up to wild swings of temperature, treacherous road conditions, and the unexpected failure of other components within an intricate, interlocking system. There's only one way to assume that robots will always know that a tire is about to blow, or be able to broadcast the emergency to all nearby cars, each of which will respond with the instant, miraculous performance of a Hollywood stunt driver. For that, you can't be a roboticist, or someone whose computer has crashed inexplicably, or whose WiFi has ever gone down, or whose streaming video has momentarily stuttered. To buy into the myth of robotic competence—or hyper-competence, really—you have to believe that robots are perfect, because SF says so.
* * *
When anyone cites Isaac Asimov's Laws of Robotics, it's an unintentional cry for help. It means that he or she sees robots as a modern fairy tale, the Google-built successors of the glitchy old golem of Jewish myth.
But Asimov's Laws weren't designed to solve future dilemmas related to robots with the power of life and death. They are narrative devices, whose tidy, oversimplified directives allow for the loopholes, contradictions and logical gaps that can lead to compelling stories. If you're quoting any of those laws, you're falling for a dual-trap, employing the same dangerously narrow reasoning as the makers and deployers of Asimov's robots (who are supposed to be doing it wrong). And even worse, you're relying on fantasies to guide your thinking about reality. Even if it were possible to simply order all robots to never hurt a person, unless they are suddenly able to conquer the laws of physics, or banish the Blue Screen of Death in all its vicissitudes, large automated machines are going to roll or stumble or topple into people. This might be rare, but it will be an inevitability for some number of poor souls. Plus, the military, still the largest provider of R&D funding for robotics, might have something to say about that First Law being applied to all such machines.
Which isn't to say that SF means to mislead us about robotics, or should be ignored. I've talked to many roboticists and artificial intelligence researchers who were inspired by hyper-competent bogeymen, from 2001's HAL 9000 to the Terminator's T-800. The dream of robotic power is intoxicating. That the systems these scientists create are usually pale shadows of human competence is a mere fact of robotics. After all, the point of automation, in almost all cases, isn't to create a superhuman capability. It's to take people out of the equation, to save money, or save their lives, or save them the time and trouble of doing something boring. A Predator drone is not a better aircraft than a manned F-16 fighter, because it's robotic. In fact, it's not a better aircraft at all. Drones are, without exception, the least impressive military vehicles in the sky. But they're small, and cheaper to buy and deploy than a proper airborne killing machine. They're "good enough" technology, if your mission is to assassinate a ground target, in a region where air defense technology amounts to running for cover. But pit them against traditional attack craft, or systems designed to down encroaching aircraft, and armed drones will excel only at becoming smoking ruins.
In very specific, very limited applications, robots are strong. In most cases, though, they are weak. They are cost-effective surrogates. Or they are incredibly humble devices, like the awkward, bumbling humanoids of the DARPA Robotics Challenge, who are celebrated for gradually struggling through tasks (driving a vehicle, walking over rubble, using a power tool) that any able-bodied person would accomplish in exponentially less time. Journalists are often complicit in this myth-building. They inflate automated capabilities, romanticizing the decision-making that goes into how a robot approaches a task, or turning every discussion of exoskeletons and advanced prosthetics for the disabled into a bright-eyed prophecy of Iron Man-like abilities to come. Where journalists should be dismantling false, SF-sourced preconceptions about robotic technology, they're instead referencing those tales of derring-do, and reinforcing the sense that SF was right all along. Whether in make-believe settings, or the distorted scene-setting of media coverage, robots are strong, because anything less would be a buzzkill.
Which makes me the guy earnestly pooping on everyone's robot party. I think there's another option, though. Robots can be impressive without being overstated. A robotic limb can be a remarkable achievement because it restores independence to an amputee, and not because Almost Human imagines that a bionic leg is great for kicking people across the room. It's possible to love SF's thought experiments and vague predictions, while recognizing that it's not in SF's best interest to be rigidly accurate. Robots don't tend to shamble into dour literature about college professors and their desperate affairs. Fictional machines are the better, upgraded angels of our nature, protecting their makers with impossible intellects and physical prowess. More often they're rising against their masters in flawless, one-sided coups that are roughly as feasible (and impossible to banish from pop culture) as zombie outbreaks. Robots are perfect because that's the version of robots that's most fun.
That's a foolish way to think or talk about real robots, which are destined to break down and fall short. Automation is transforming our society in ways that are both disturbing and exciting. Assassination (in some places) is easier with robots. Collisions could one day be reduced with robots. Machine autonomy will annihilate whole professions and create or enhance others. Robots have only begun to reconfigure human life. So shut up, for a moment, about SF's artificial heroes and villains, and the easy, ill-informed fantasies that fill the gaps of technical understanding. There are too many actual, fallible robots to talk about. And there's only so much time in our short, brutish, meatbag lives to discuss what we're going to do with them.

Meho Krljic

Hypothetically, some of us may one day find ourselves on the Moon. Maybe as a member of a scientific expedition, or as one of the first colonists intent on founding a community there and leading the human race a step further toward the stars. The Moon is a great place for varicose veins (low gravity!) and a healthy tan (no damned atmosphere to block the best parts of sunlight!!), but it has one big flaw: you can't check in on Sagita regularly, because internet access there is unreliable, to put it mildly. Fortunately, MIT's scientists have a solution for this problem too: lasers, besides being used to make handy office tools and futuristic weapons, can deliver broadband internet all the way to the Moon!!!!! Ping will be problematic because of the distance, so networked Quake is probably out for now, but whoever just wants to torrent the latest episodes of Rules of Engagement is in luck! The text quoted below shows the technological challenges involved in such undertakings.


MIT figures out how to give the moon broadband -- using lasers


Quote
Four transmitting telescopes in the New Mexico desert, each just 6 inches in diameter, can give a satellite orbiting the moon faster Internet access than many U.S. homes get. The telescopes form the earthbound end of an experimental laser link to demonstrate faster communication with spacecraft and possible future bases on the moon and Mars. Researchers from the Massachusetts Institute of Technology will give details about the system and its performance next month at a conference of The Optical Society.


The Lunar Laser Communication Demonstration (LLCD) kicked off last September with the launch of NASA's LADEE (Lunar Atmosphere and Dust Environment Explorer), a research satellite now orbiting the moon. NASA built a laser communications module into LADEE for use in the high-speed wireless experiment.
LLCD has already proved itself, transmitting data from LADEE to Earth at 622Mbps (megabits per second) and in the other direction at 19.44Mbps, according to MIT. It beat the fastest-ever radio communication to the moon by a factor of 4,800.
NASA hopes lasers can speed up communication with missions in space, which use radio to talk to Earth now, and let them send back more data. Laser equipment also weighs less than radio gear, a critical factor given the high cost of lifting any object into space.
The project uses transmitting telescopes at White Sands, New Mexico, to send data as pulses of invisible infrared light. The hard part of reaching the moon by laser is getting through Earth's atmosphere, which can bend light and cause it to fade or drop out on the way to the receiver.
One way the researchers got around that was by using the four separate telescopes. Each sends its beam through a different column of air, where the light-bending effects of the atmosphere are slightly different. That increases the chance that at least one of the beams will reach the receiver on the LADEE.
Test results have been promising, according to MIT, with the 384,633-kilometer optical link providing error-free performance in both darkness and bright sunlight, through partly transparent thin clouds, and through atmospheric turbulence that affected signal power.
One reason it works is that there's plenty of signal power to spare. The transmission power from the Earth antennas totals 40 watts, and less than a billionth of a watt is received on the LADEE. But that's still 10 times the signal needed to communicate without errors, according to MIT. On the craft, a smaller telescope collects the light and focuses it into an optical fiber. After the signal is amplified, it's converted to electrical pulses and into data.
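The quoted power figures imply a path loss that is easy to tabulate. A quick sketch using only numbers given in the article:

Code
import math

transmitted_w = 40.0   # total output of the four New Mexico telescopes
received_w = 1e-9      # "less than a billionth of a watt" at LADEE
margin = 10.0          # 10x the signal needed for error-free operation

loss_db = 10 * math.log10(transmitted_w / received_w)
print(f"end-to-end loss: ~{loss_db:.0f} dB")             # ~106 dB
print(f"receiver floor:  ~{received_w / margin:.0e} W")  # ~1e-10 W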

Meho Krljic

Supposedly people have written a program that passed the Turing test!  :-| :-| :-| :-| :-| :-| :-| :-| :-|


Turing Test breakthrough as super-computer becomes first to convince us it's human

Quote
Eugene Goostman, a computer programme pretending to be a young Ukrainian boy, successfully duped enough humans to pass the iconic test
 
A programme that convinced humans that it was a 13-year-old boy has become the first computer ever to pass the Turing Test. The test — which requires that computers are indistinguishable from humans — is considered a landmark in the development of artificial intelligence, but academics have warned that the technology could be used for cybercrime.




Computing pioneer Alan Turing said that a computer could be understood to be thinking if it passed the test, which requires that a computer dupes 30 per cent of human interrogators in five-minute text conversations.
Eugene Goostman, a computer programme made by a team based in Russia, succeeded in a test conducted at the Royal Society in London. It convinced 33 per cent of the judges that it was human, said academics at the University of Reading, which organised the test.
It is thought to be the first computer to pass the iconic test. Though other programmes have claimed successes, those included set topics or questions in advance.
A version of the computer programme, which was created in 2001, is hosted online for anyone to talk to. ("I feel about beating the turing test in quite convenient way. Nothing original," said Goostman, when asked how he felt after his success.)
The computer programme claims to be a 13-year-old boy from Odessa in Ukraine.
"Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn't know everything," said Vladimir Veselov, one of the creators of the programme. "We spent a lot of time developing a character with a believable personality."
The programme's success is likely to prompt some concerns about the future of computing, said Kevin Warwick, a visiting professor at the University of Reading and deputy vice-chancellor for research at Coventry University.


"In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human," he said. "Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime.
"The Turing Test is a vital tool for combatting that threat. It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true... when in fact it is not."
The test, organised at the Royal Society on Saturday, featured five programmes in total. Judges included Robert Llewellyn, who played robot Kryten in Red Dwarf, and Lord Sharkey, who led the successful campaign for Alan Turing's posthumous pardon last year.
Alan Turing created the test in a 1950 paper, 'Computing Machinery and Intelligence'. In it, he said that because 'thinking' was difficult to define, what matters is whether a computer could imitate a real human being. It has since become a key part of the philosophy of artificial intelligence.
The success came on the 60th anniversary of Turing's death, on Saturday.
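For flavor, the trick the article hints at, deflecting instead of answering while the persona excuses every gap, fits in a few lines. A toy sketch, obviously not Goostman's actual code:

Code
import random

# A Goostman-style evader: ignore the question's content entirely and
# let the "13-year-old non-native speaker" persona absorb the failures.
DEFLECTIONS = [
    "Oh, I don't like such boring questions. Where are you from?",
    "My guinea pig says this topic is stupid. Do you have pets?",
    "I am only 13, how should I know? Ask me about Odessa instead!",
]

def reply(user_input):
    return random.choice(DEFLECTIONS)  # user_input deliberately unused

print(reply("Explain the Turing test in your own words."))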

scallop

Next up is communicating with computer programs on Facebook and Twitter, and then having them communicate with each other. Once we achieve that, we'll be able to ditch the internet and get on with real life.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

Yes! I can't wait to see the pornography computers will make for computers while we sit on Ada under a tree drinking chilled drinks!!!!!!

scallop

What would you watch? Well, you just sit on Ada, in the shade, sipping a cold juice.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

zakk

well damn, we'll be waiting a while longer for that kind of action
https://www.techdirt.com/articles/20140609/07284327524/no-computer-did-not-pass-turing-test-first-time-everyone-should-know-better.shtml

though I don't think it would be a problem to make two chatbots get hot with each other
Why shouldn't things be largely absurd, futile, and transitory? They are so, and we are so, and they and we go very well together.

Meho Krljic

Well damn me for believing the Independent  :cry: :cry: In my defense, I've got a lot of work, I didn't get around to reading the comments and reactions, otherwise I'd have seen this is bullshit.  :(

scallop

Imagine botMeho and botScallop blathering away on ZS while the two of us sit back and enjoy the show over drinks and meze.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

I dream of that particular Elysium!!!!!!!!!

scallop

Oh, I could give you more cheerful examples. On this dunghill of ours, a pile of nicks can't say to your face what they write. If bots replaced them, the bots might find a way to pass themselves off as people.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

Incidentally, Eric Sofge in Popular Science says that the Turing test was originally conceived to show who can imitate women better: men or artificial intelligence.
Lie Like A Lady: The Profoundly Weird, Gender-Specific Roots Of The Turing Test

Quote

By now, you may have heard that the Turing Test, that hallowed old test of machine intelligence proposed by pioneering mathematician Alan Turing in 1950, has been passed. In a contest held this past weekend, a chatbot posing as a 13-year-old Ukrainian boy fooled a third of its human judges into thinking it was a human. This prompted the University of Reading, which had organized the competition, to announce the achievement of "an historic milestone in artificial intelligence."
You may have also heard that this was a complete sham, and the academic equivalent of urinating directly on Turing's grave. Turing imagined a benchmark that would answer the question, "Can machines think," with a resounding yes, demonstrating some level of human-like cognition. Instead, the researchers who built the winning program, "Eugene Goostman," engaged in outright trickery. Like every chatbot before it, Eugene evaded questions, rather than processing their content and returning a truly relevant answer. And it used possibly the dirtiest trick of all. In a two-part deception, Eugene's broken English could be explained away by not being a native speaker, and its general stupidity could be justified by its being a kid (no offense, 13-year-olds). Instead of passing the Turing Test, the researchers gamed it. They aren't the first—Cleverbot was considered by some to have passed in 2011—but as of right now, they're the most famous.
What you may not have heard, though, is how profoundly bizarre Alan Turing's original proposed test was. Much like the Uncanny Valley, the Turing Test is a seed of an idea that's been warped and reinterpreted into scientific canon. The University of Reading deserves to be ridiculed for claiming that its zany publicity stunt is a groundbreaking milestone in AI research. But the test it's desecrating deserves some scrutiny, too.
"Turing never proposed a test in which a computer pretends to be human," says Karl MacDorman, an associate professor of human-computer interaction at Indiana University. "Turing proposed an imitation game in which a man and a computer compete in pretending to be a woman. In this competition, the computer was pretending to be a 13-year-old boy, not a woman, and it was in a competition against itself, not a man."
MacDorman isn't splitting hairs with this analysis. It's right there in the second paragraph of Turing's landmark paper, "Computing Machinery and Intelligence," published in 1950 in the journal Mind. He begins by describing a scenario where a man and a woman would both try to convince the remote, unseen interrogator that they are female, using type-written responses or by speaking through an intermediary. The real action, however, comes when the man is replaced by a machine. "Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?" asks Turing.
The Imitation Game asks a computer to not only imitate a thinking human, but a specific gender of thinking human. It sidesteps the towering hurdles associated with creating human-like machine intelligence, and tumbles face-first into what should be a mathematician's nightmare—the unbounded, unquantifiable quagmire of gender identity.
The imagined machine would need to understand the specific social mores and stereotypes of the country it's pretending to hail from. It might also have to decide when its false self was born. This was in 1950, after all, just 22 years after British women were granted universal voting rights. The aftershocks of the women's suffrage movement were still being felt. So how should a machine pretend to feel about this subject, whether as a woman of a certain age, or as a student born after those culture-realigning battles were won?
Whether a computer can pull this off seems terribly fascinating, and like an excellent research question for some distant era, long after the puzzle of artificial general intelligence is solved. But the Imitation Game is an exercise posed at the inception of the digital age, at a time when the term computer was as likely to conjure up an image of a woman crunching numbers for the Allied war effort as a machine capable of chatting about its hair.
The hair thing is Turing's example, not mine. More on that later.
By now you might be wondering why I haven't moved on to the Turing Test, which is surely some clarified, revised version of the Imitation Game that Turing presented in a later publication. Would that it were so. When he died in 1954, Turing hadn't removed gender from his ground-breaking thought experiment. The Turing Test is an act of collective academic kindness, conferred on its namesake posthumously. And as it entered popular usage, it took on new meaning and significance, as the standard by which future artificial intelligences should be judged. The moment when a computer tricks its human interrogator will be the first true glimpse of machine sentience. Depending on your science fiction intake, it will be cause for celebration, or war.
In that respect, the Turing Test shares something with the Uncanny Valley, a hypothesis that's also based on a very old paper, which also presented no experimental results, and also guesses at specific aspects of technology that wouldn't be remotely possible for decades. In this 1970 paper, roboticist Masahiro Mori imagined a curve on a graph, with positive feelings towards robots rising steadily as those machines looked more and more human-like, before suddenly plunging. At that proposed level of human mimicry, subjects would feel unease, if not terror. Finally, the graph's valley would form when some potential amount of perfect human-aping capability is achieved, and we don't just like androids—we love them!
The excessive italics are my attempt to highlight the fact that, in 1970, the Uncanny Valley was based on no interactions with actual robots. It was a thought experiment. And it still is, in large part, because we haven't achieved perfect impostors, and the related academic experiments rely not on robots, but static images and computer-generated avatars. Also, Mori himself never bothered to test his own theory, in the 44 years since he wistfully dreamt it up. (If that seems overly harsh, read the paper, co-translated by Karl MacDorman. It's alarmingly short and florid.) Instead, he eventually wrote a book about how robots are born Buddhists. (Again, don't take my word for it.)
But despite the flimsy, evidence-free nature of Mori's paper, and the fact that face-to-face interactions with robots have yielded a variety of results, too complex to conform to any single curve, the Uncanny Valley is still treated by many as fact. And why shouldn't it be? It sounds logical. Like the Turing Test, there's a sense of poetry to its logic, and its ramifications, which involve robots. But however it applies to your opinion of the corpse-eyed cartoons in The Polar Express, the Uncanny Valley adds nothing of value to the field of robotics. It's science as junk food.
The Turing Test is also an overly simplified, and often unfortunately deployed concept. Its greatest legacy is the chatbot, and the competitions that try—and generally fail—to glorify the damnable things. But where the Uncanny Valley and the Turing Test differ is in their vision. The Turing Test, as we've come to understand it, and as last weekend's event proves, is a hollow measure. Turing, however, was still a visionary. And in his strange, sloppy, apparently over-reaching Imitation Game, he offers a brilliant insight into the nature of human and artificial intelligence.
Talking about your hair is smarter than it sounds.
* * *
Turing's first sample question for the Imitation Game reads, "Will X please tell me the length of his or her hair?" And the notional answer, from a human male: "My hair is shingled, and the longest strands are about nine inches long."
Think about what's happening in that response. The subject is picturing (presumably) someone else's hair, or whipping up a visual from scratch. He's included a reference to a specific haircut, too, rather than a plainly descriptive mention that it's shorter in back.
If a machine could deliver a similar answer, it would mean one of two things.
Either its programmers are great at writing scripted responses, and lucked out when the program detected the word "hair"; or, in the less cynical, pre-chatbot possibility, the computer is able to access an image, and describe its physical characteristics, as well as its cultural context.
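The cynical first possibility is trivially cheap to build, which is exactly the problem. A minimal sketch of keyword-triggered scripting; the trigger words and canned replies are invented here, not taken from any real chatbot:

```python
# A caricature of chatbot "trickery": scan the question for a trigger
# word and emit a canned, vaguely on-topic line.  There is no model of
# hair, length or haircuts behind the answer.  (Triggers and replies
# are invented for illustration.)
CANNED = {
    "hair": "My hair is shingled, and the longest strands are about nine inches long.",
    "poem": "Count me out on this one. I never could write poetry.",
}

def scripted_reply(question):
    lowered = question.lower()
    for trigger, reply in CANNED.items():
        if trigger in lowered:
            return reply
    # The classic fallback: evade rather than process the content.
    return "Why do you ask? Tell me something about yourself instead."

print(scripted_reply("Will X please tell me the length of his or her hair?"))
```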
Making gender such a core component of a test for machine intelligence still makes me uneasy, and seems like the sort of tangential inclusion that would be lambasted by modern researchers. But what Turing sought was the ability to process data on the fly, and draw together multiple types of information. Intelligence, among other things, means understanding things like length and color, but also knowing what shingled hair is.
The Imitation Game also has better testing methodology than the standard version of the Turing Test, since it involves comparing a human's ability to deceive with a machine's ability to do the same. At first glance, this might seem like insanity—if this test is supposed to lead to computers that think like us, who cares if they can specifically pretend to be one gender or another? What makes the Imitation Game brilliant, though, is that it's a contest. It sets a specific goal for programmers, instead of staging an open-ended demonstration of person-like computation. And it asks the computer to perform a task that its human competitor could also fail at. The Turing Test, on the other hand, doesn't pit a computer against a person in an actual competition. Humans might be included as a control element, but no one expects them to fail at the most basic of all tasks—being a person.
The Imitation Game might still be vulnerable to modern chatbot techniques. As legions of flirty programs on "dating" sites can attest, falling back on lame stereotypes can be a surprisingly successful strategy for temporarily duping humans. There's nothing perfect about Turing's original proposal. And it shouldn't be sacred either, considering its advanced age, and the developments in AI since it was written. But for all its problems and messy socio-cultural complications, I don't think we've done Turing any favors by replacing the Imitation Game with the Turing Test. Being better than a man at pretending to be a living woman is an undeniably fraught victory condition for AI. But it's a more contained experiment than simply aping the evasive chatroom habits of semi-literate humans, and would call for greater feats of machine cognition. After this latest cycle of breathless announcements and well-deserved backlash, no one should care when the next unthinking collection of auto-responses passes the Turing Test.
But if something beats a human at the Imitation Game?
I'm getting chills just writing that.


Meho Krljic

A new 3D-printed material can bear a weight 160,000 times its own:

New ultrastiff, ultralight material developed

QuoteNanostructured material based on repeating microscopic units has record-breaking stiffness at low density.


What's the difference between the Eiffel Tower and the Washington Monument?
Both structures soar to impressive heights, and each was the world's tallest building when completed. But the Washington Monument is a massive stone structure, while the Eiffel Tower achieves similar strength using a lattice of steel beams and struts that is mostly open air, gaining its strength from the geometric arrangement of those elements.
Now engineers at MIT and Lawrence Livermore National Laboratory (LLNL) have devised a way to translate that airy, yet remarkably strong, structure down to the microscale — designing a system that could be fabricated from a variety of materials, such as metals or polymers, and that may set new records for stiffness for a given weight.
The new design is described in the journal Science by MIT's Nicholas Fang; former postdoc Howon Lee, now an assistant professor at Rutgers University; visiting research fellow Qi "Kevin" Ge; LLNL's Christopher Spadaccini and Xiaoyu "Rayne" Zheng; and eight others.
The design is based on the use of microlattices with nanoscale features, combining great stiffness and strength with ultralow density, the authors say. The actual production of such materials is made possible by a high-precision 3-D printing process called projection microstereolithography, developed through a joint research collaboration between the Fang and Spadaccini groups that dates back to 2008.
Normally, Fang explains, stiffness and strength decline with the density of any material; that's why when bone density decreases, fractures become more likely. But using the right mathematically determined structures to distribute and direct the loads — the way the arrangement of vertical, horizontal, and diagonal beams does in a structure like the Eiffel Tower — the lighter structure can maintain its strength.
A pleasant surprise
The geometric basis for such microstructures was determined more than a decade ago, Fang says, but it took years to transfer that mathematical understanding "to something we can print, using a digital projection — to convert this solid model on paper to something we can hold in our hand." The result was "a pleasant surprise to us," he adds, performing even better than anticipated.
"We found that for a material as light and sparse as aerogel [a kind of glass foam], we see a mechanical stiffness that's comparable to that of solid rubber, and 400 times stronger than a counterpart of similar density. Such samples can easily withstand a load of more than 160,000 times their own weight," says Fang, the Brit and Alex d'Arbeloff Career Development Associate Professor in Engineering Design. So far, the researchers at MIT and LLNL have tested the process using three engineering materials — metal, ceramic, and polymer — and all showed the same properties of being stiff at light weight.
"This material is among the lightest in the world," LLNL's Spadaccini says. "However, because of its microarchitected layout, it performs with four orders of magnitude higher stiffness than unstructured materials, like aerogels, at a comparable density."
Light material, heavy loads
This approach could be useful anywhere there's a need for a combination of high stiffness (for load bearing), high strength, and light weight — such as in structures to be deployed in space, where every bit of weight adds significantly to the cost of launch. But Fang says there may also be applications at smaller scale, such as in batteries for portable devices, where reduced weight is also highly desirable.
Another property of these materials is that they conduct sound and elastic waves very uniformly, meaning they could lead to new acoustic metamaterials, Fang says, that could help control how waves bend over a curved surface.
Others have suggested similar structural principles over the years, such as a proposal last year by researchers at MIT's Center for Bits and Atoms (CBA) for materials that could be cut out as flat panels and assembled into tiny unit cells to make larger structures. But that concept would require assembly by robotic systems that have yet to be developed, says Fang, who has discussed this work with CBA researchers. This technique, he says, uses 3-D printing technology that can be implemented now.
Martin Wegener, a professor of mechanical engineering at Karlsruhe Institute of Technology in Germany who was not involved in this research, says, "Achieving metamaterials that are ultralight in weight, yet stiffer than you would expect from usual scaling laws for elastic solids, is of obvious technological interest. The paper makes an interesting contribution in this direction."
The work was supported by the U.S. Defense Advanced Research Projects Agency and LLNL.



Printing with Light

Meho Krljic

 'Optical fibre' made out of thin air



QuoteScientists say they have turned thin air into an "optical fibre" that can transmit and amplify light signals without the need for any cables.
In a proof-of-principle experiment they created an "air waveguide" that could one day be used as an instantaneous optical fibre to any point on earth, or even into space.
The findings, reported in the journal Optica, have applications in long range laser communications, high-resolution topographic mapping, air pollution and climate change research, and could also be used by the military to make laser weapons.
"People have been thinking about making air waveguides for a while, but this is the first time it's been realized," said Howard Milchberg of the University of Maryland, who led the research, which was funded by the U.S. military and National Science Foundation.
Lasers lose intensity and focus with increasing distance as photons naturally spread apart and interact with atoms and molecules in the air.
Fibre optics solves this problem by beaming the light through glass cores with a high refractive index, which is good for transmitting light.
The core is surrounded by material with a lower refractive index that reflects light back in to the core, preventing the beam from losing focus or intensity.
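The guiding condition described here is total internal reflection at the core/cladding boundary. A quick check with typical silica-fibre refractive indices (standard textbook values, assumed here rather than taken from the article):

```python
import math

# Light stays trapped in the core whenever it strikes the boundary at
# more than the critical angle, arcsin(n_cladding / n_core).
# The index values are typical silica-fibre numbers, assumed here.
n_core, n_cladding = 1.46, 1.44

theta_c = math.degrees(math.asin(n_cladding / n_core))
print(f"critical angle: {theta_c:.1f} degrees from the normal")  # ~80.5
```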
Fibre optics, however, are limited in the amount of power they can carry, and by the need for a physical structure to support them.
Light and air
Milchberg and colleagues made the equivalent of an optical fibre out of thin air by generating a laser with its light split into a ring of multiple beams forming a pipe.
They used very short and powerful pulses from the laser to heat the air molecules along the beam extremely quickly.
Such rapid heating produced sound waves that took about a microsecond to converge to the centre of the pipe, creating a high-density area surrounded by a low-density area left behind in the wake of the laser beams.
"A microsecond is a long time compared to how far light propagates, so the light is gone and a microsecond later those sound waves collide in the centre, enhancing the air density there," says Milchberg.
The lower density region of air surrounding the centre of the air waveguide had a lower refractive index, keeping the light focused.
"Any structure [even air] which has a higher density will have a higher index of refraction and thereby act like an optical fibre," says Milchberg.
Amplified signal
Once Milchberg and colleagues created their air waveguide, they used a second laser to spark the air at one end of the waveguide, turning it into plasma.
An optical signal from the spark was transmitted along the air waveguide, over a distance of a metre to a detector at the other end.
The signal collected by the detector was strong enough to allow Milchberg and colleagues to analyze the chemical composition of the air that produced the spark.
The researchers found the signal was 50 per cent stronger than a signal obtained without an air waveguide.
The findings show the air waveguide can be used as a "remote collection optic," says Milchberg.
"This is an optical fibre cable that you can reel out at the speed of light and place next to [something] that you want to measure remotely, and have the signal come all the way back to where you are."
Australian expert Ben Eggleton of the University of Sydney says this is potentially an important advance for the field of optics.
"It's sort of like you have an optical fibre that you can shine into the sky, connecting your laser to the top of the atmosphere," says Eggleton.
"You don't need big lenses and optics, it's already guided along this channel in the atmosphere."

scallop

Handy, a microsecond is about 300 m, but...


QuoteThe findings, reported in the journal Optica, have applications in long range laser communications, high-resolution topographic mapping, air pollution and climate change research, and could also be used by the military to make laser weapons.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

Well, naturally it all has military uses too, when the military is funding their research.

QuoteThis research was supported by the Air Force Office of Scientific Research, the Defense Threat Reduction Agency, and the National Science Foundation.

scallop

If you think that's normal.  :(
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

Well, if two of their three funders come from the defense sector, I suppose that's "normal," or at least to be expected? Why is it suddenly abnormal to you, when you worked for the military yourself???

scallop

Really odd. Me for the military, and you for the Red Cross. So, to you it's normal, and to me it really isn't.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

Somebody here isn't normal!!!

scallop

If you make it in time, I'll tell you one day why I'm right and you're not. 8)
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.

Meho Krljic

Since yesterday my father, addressing me, uttered for the first time in 43 years the sentence "I'm right, but you're right too," I live in hope that one day I'll hear something similar from you as well. Granted, you're two years older than him, but you've known me for a shorter time, so that should even out  :lol:

scallop

He did it to make you feel better.
Never argue with stupid people, they will drag you down to their level and then beat you with experience. - Mark Twain.