
A machine has won a quiz show

Started by mac, 19-02-2011, 23:32:49



mac

In the subtitle: What is Watson. Google search: jeopardy watson.

An artificial intelligence called Watson has managed to beat the strongest contestants on the quiz show Jeopardy. The American quiz Jeopardy is famous for its tricky questions, which even live humans are often no match for. Piece of cake for AI. Watson takes us a step further into a future in which machines will interact with people through speech, and with comprehension at that. Watson's great achievement is not that it knew all sorts of things, but that it understood the questions. Spoken language has more complicated rules than chess, which makes Watson's success greater than that of Deep Blue and Deep Thought combined. With the introduction of this technology I predict a technological leap similar to the appearance of the Internet. Here is what it looked like on one show:

Miles vs. Watson: The Complete Man Against Machine Showdown

Of course, if we set our man Watson to eavesdrop on all the phone conversations on the planet at once, we get a rather different picture of the Brave New World, but that is the same story as with every other new technology. There is no need to fear technology; that is, technology is not the thing to be afraid of. Man is still in charge. For now.

mac

Watson did it again. An artificial intelligence has learned to play FreeCiv 2.2 (an open-source reimplementation of Civilization 2) by reading the manual, while at the same time learning the manual's language by playing the game. More details at the following links:

http://web.mit.edu/newsoffice/2011/language-from-games-0712.html
http://www.extremetech.com/extreme/90046-computer-learns-to-play-civilization-by-reading-the-instruction-manual
http://www.theinfoboom.com/articles/intelligent-text-analysis-mit-computer-learns-civilization/

The scientific paper behind this news is at the following address:
http://people.csail.mit.edu/regina/my_papers/civ11.pdf

Watson's learning was made easier by the fact that the game tracks the player's score, so Watson could study which moves raise its score, and thus work its way up to the maximum result and, with it, victory. I would love to see Watson now try its hand at a somewhat tougher nut: NetHack.
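
The core trick is simple to sketch. Here is a minimal, hypothetical Python illustration of learning from score deltas; the game interface (legal_moves, apply, score, over) is entirely made up and just stands in for whatever the MIT system actually hooks into:

import random

# Hypothetical interface: game.legal_moves(), game.apply(move),
# game.score(), game.over(). Moves with a higher observed score delta
# get preferred; occasional random moves keep exploring.
def play_by_score(game, epsilon=0.1):
    value = {}  # move -> running average of observed score change
    while not game.over():
        moves = game.legal_moves()
        if random.random() < epsilon:
            move = random.choice(moves)                       # explore
        else:
            move = max(moves, key=lambda m: value.get(m, 0.0))  # exploit
        before = game.score()
        game.apply(move)
        delta = game.score() - before
        value[move] = 0.9 * value.get(move, 0.0) + 0.1 * delta
    return game.score()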

We truly live in interesting times...

Meho Krljic

Er... A few years ago I read on Slashdot about a guy who taught the spam filter he had written for his own email client to play chess. Here is a write-up of the whole project: http://dbacl.sourceforge.net/spam_chess-1.html

This is of course more complex, since Civ is more complicated than chess, but on the other hand a spam filter is much simpler software. On the third hand, learning by reading the manual IS impressive. An interesting project.

Of course, I can't be bothered to read it all right now, but it would be interesting to see how this software fares on the Turing test.

mac

According to Ray Kurzweil's predictions, a computer will pass the Turing test toward the end of the 2020s. That's some fifteen years from now...

Meho Krljic

Unrelated to the topic, but so I don't open a new one:

NASA has switched off its last active mainframe:

Quote
It's somewhat hard to imagine that NASA doesn't need the computing power of an IBM mainframe any more, but as the NASA CIO posted on her blog today, at the end of the month the Big Iron will be no more at the space agency.
NASA CIO Linda Cureton wrote: This month marks the end of an era in NASA computing. Marshall Space Flight Center powered down NASA's last mainframe, the IBM Z9 Mainframe.  For my millennial readers, I suppose that I should define what a mainframe is.  Well, that's easier said than done, but here goes -- It's a big computer that is known for being reliable, highly available, secure, and powerful.  They are best suited for applications that are more transaction oriented and require a lot of input/output - that is, writing or reading from data storage devices.
In my first stint at NASA, I was at NASA's Goddard Space Flight Center as a mainframe systems programmer when it was still cool. That IBM 360-95 was used to solve complex computational problems for space flight.   Back then, I comfortably navigated the world of IBM 360 Assembler language and still remember the much-coveted "green card" that had all the pearls of information about machine code.  Back then, real systems programmers did hexadecimal arithmetic - today, "there's an app for it!"
But all things must change.  Today, they are the size of a refrigerator but in the old days, they were the size of a Cape Cod.  Even though NASA has shut down its last one, there is still a requirement for mainframe capability in many other organizations.
Of course NASA is just one of the latest high profile mainframe decommissionings.  In 2009 The U.S. House of Representatives took its last mainframe offline.  At the time Network World wrote: "The last mainframe supposedly enjoyed "quasi-celebrity status" within the House data center, having spent 12 years keeping the House's inventory control records and financial management data, among other tasks. But it was time for a change, with the House spending $30,000 a year to power the mainframe and another $700,000 each year for maintenance and support."
And then there was this classic mainframe burial.


džin tonik

Quote from: mac on 12-08-2011, 12:37:28
According to Ray Kurzweil's predictions, a computer will pass the Turing test toward the end of the 2020s. That's some fifteen years from now...

engineered by kempelen :)

mac

:) Read the preconditions. First comes nanotechnology applied in medicine, then mapping of the human brain with the help of nanotechnology, then the creation of a virtual brain based on the map of an existing one. It is conceivable.

mac

We will also reach the moment when we will believe that a machine is a person, but maybe humans should work on not being mistaken for machines themselves. http://www.salon.com/2000/08/10/turing/

Meho Krljic

Ooooh, a neural network has learned to play Pong by watching the game and then went on to beat humans. Skynet is coming!!!!!!!1!!!1
Neural Net Learns Breakout Then Thrashes Human Gamers

Quote
The end is nigh. Humans have lost another key battle in the war against computer domination.
The Atari 2600 games console has a special place in the hearts of any gamers who grew up in the 1970s. It popularised a number of games that changed the games industry for ever, such as Pong, Breakout and Space Invaders. Today, these games have legendary status and still play an important role in the gaming world.
One curious thing about these games is that computers themselves have never been very good at playing them in the same way as humans. That means playing them by looking at the monitor and judging actions accordingly. This kind of "hand-to-eye" co-ordination has always been a special human skill.
Not any more. Today, Volodymyr Mnih and pals at DeepMind Technologies in London say they've created a neural network that learns how to play video games in the same way as humans: using the computer equivalent of hand-to-eye co-ordination.
And not only is their neural network a handy player, it learns so well that it can actually beat expert human players in games such as Pong and Breakout.
The approach is relatively straightforward. These guys have set their neural network against a set of seven games for the Atari 2600 available on the Arcade Learning Environment. These are Pong, Breakout, Space Invaders, Seaquest, Beam Rider, Enduro and Q*bert.
At any instant in time during a game, a player can choose from a finite set of actions that the game allows: move to the left, move to the right, fire and so on. So the task for any player—human or otherwise—is to choose an action at each point in the game that maximises the eventual score.
That's often easier said than done because the reward from any given action is not always immediately apparent. For example, taking cover from a space invader's bomb does not increase the score but does allow it to increase later.
So the gamer must learn from its actions. In other words, it must try different strategies, compare them and learn which to choose in future games.
All this is standard fare for a human player and straightforward for a computer too. What's hard is making sense of the screen, a visual task that computers have never really taken to. (Most computer competitors play using direct inputs from the game parameters rather than from the screen.)
Mnih and co tackle this problem by first simplifying the visual problem. The Atari 2600 produces a series of frames that are each 210 x 160 pixels with a 128-colour palette. So these guys begin by converting the game into a greyscale consisting of only four colours and down-sampling it to a 110 x 84 image. This is further cropped to 84 x 84 pixels since the system requires a square input.
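That preprocessing step is easy to picture in code. A sketch of the pipeline in plain numpy; the exact crop offset and the four-level quantisation constant are my assumptions, not the paper's published values:

import numpy as np

def resize_nearest(img, h, w):
    # nearest-neighbour resize; a stand-in for proper image resampling
    rows = np.arange(h) * img.shape[0] // h
    cols = np.arange(w) * img.shape[1] // w
    return img[rows][:, cols]

def preprocess(frame_rgb):
    gray = frame_rgb.mean(axis=2)            # collapse RGB to grayscale
    gray = (gray // 64).astype(np.uint8)     # quantise to 4 grey levels
    small = resize_nearest(gray, 110, 84)    # down-sample to 110 x 84
    return small[13:97, :]                   # crop to a square 84 x 84

frame = np.zeros((210, 160, 3), dtype=np.uint8)
print(preprocess(frame).shape)               # (84, 84)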
The neural network works by evaluating each image and assessing how it will change given any of the possible actions. It makes this assessment based on its experience of the past (although Mnih and co are tight-lipped about the secret sauce they use to achieve this).
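In reinforcement-learning terms, that assessment is an action-value ("Q") estimate. A toy tabular version of the update rule, which also handles the delayed rewards mentioned above via the discount factor; the real system replaces the table with a deep network, but the update has the same shape:

from collections import defaultdict

Q = defaultdict(float)      # (state, action) -> estimated future score
ALPHA, GAMMA = 0.1, 0.99    # learning rate, discount for future reward

def q_update(state, action, reward, next_state, actions):
    # move the estimate toward: immediate reward + best achievable later
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# e.g. q_update(s, a, reward=score_delta, next_state=s2, actions=[0, 1, 2])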
Significantly, the computer has no advanced knowledge of what the screen means. "Our agents only receive the raw RGB screenshots as input and must learn to detect objects on their own," say Mnih and co.
The results are impressive. The neural network not only learns how to play all of the games but becomes pretty good at most of them too. "Our method achieves better performance than an expert human player on Breakout, Enduro and Pong and it achieves close to human performance on Beam Rider," say Mnih and co.
What's more, they achieved this performance without any special fine-tuning for any of the games played.
That's significantly better than other attempts to build AI systems that can take on human players in these kinds of games. It's also a serious threat to human domination of the video game world.
There is one crumb of hope for human gamers. The neural net cannot yet beat human experts at Q*bert, Seaquest and, most important of all, Space Invaders. So we have a few years yet before computer domination is total.
Ref: arxiv.org/abs/1312.5602 : Playing Atari with Deep Reinforcement Learning

mac

A computer has beaten an expert at Go. It happened sooner than many had predicted.

http://www.wired.com/2016/01/in-a-huge-breakthrough-googles-ai-beats-a-top-player-at-the-game-of-go/

Quote
In a major breakthrough for artificial intelligence, a computing system developed by Google researchers in Great Britain has beaten a top human player at the game of Go, the ancient Eastern contest of strategy and intuition that has bedeviled AI experts for decades.

Machines have topped the best humans at most games held up as measures of human intellect, including chess, Scrabble, Othello, even Jeopardy!. But with Go—a 2,500-year-old game that's exponentially more complex than chess—human grandmasters have maintained an edge over even the most agile computing systems. Earlier this month, top AI experts outside of Google questioned whether a breakthrough could occur anytime soon, and as recently as last year, many believed another decade would pass before a machine could beat the top humans.

But Google has done just that. "It happened faster than I thought," says Rémi Coulom, the French researcher behind what was previously the world's top artificially intelligent Go player.


Researchers at DeepMind—a self-professed "Apollo program for AI" that Google acquired in 2014—staged this machine-versus-man contest in October, at the company's offices in London. The DeepMind system, dubbed AlphaGo, matched its artificial wits against Fan Hui, Europe's reigning Go champion, and the AI system went undefeated in five games witnessed by an editor from the journal Nature and an arbiter representing the British Go Federation. "It was one of the most exciting moments in my career, both as a researcher and as an editor," the Nature editor, Dr. Tanguy Chouard, said during a conference call with reporters on Tuesday.

This morning, Nature published a paper describing DeepMind's system, which makes clever use of, among other techniques, an increasingly important AI technology called deep learning. Using a vast collection of Go moves from expert players—about 30 million moves in total—DeepMind researchers trained their system to play Go on its own. But this was merely a first step. In theory, such training only produces a system as good as the best humans. To beat the best, the researchers then matched their system against itself. This allowed them to generate a new collection of moves they could then use to train a new AI player that could top a grandmaster.

"The most significant aspect of all this...is that AlphaGo isn't just an expert system, built with handcrafted rules," says Demis Hassabis, who oversees DeepMind. "Instead, it uses general machine-learning techniques how to win at Go."


The win is more than a novelty. Online services like Google, Facebook, and Microsoft already use deep learning to identify images, recognize spoken words, and understand natural language. DeepMind's techniques, which combine deep learning with a technology called reinforcement learning and other methods, point the way to a future where real-world robots can learn to perform physical tasks and respond to their environment. "It's a natural fit for robotics," Hassabis says.

He also believes these methods can accelerate scientific research. He envisions scientists working alongside artificially intelligent systems that can home in on areas of research likely to be fruitful. "The system could process much larger volumes of data and surface the structural insight to the human expert in a way that is much more efficient—or maybe not possible for the human expert," Hassabis explains. "The system could even suggest a way forward that might point the human expert to a breakthrough."

But at the moment, Go remains his primary concern. After beating a grandmaster behind closed doors, Hassabis and his team aim to beat one of the world's top players in a public forum. In mid-March, in South Korea, AlphaGo will challenge Lee Sedol, who holds more international titles than all but one player and has won the most over the past decade. Hassabis sees him as "the Roger Federer of the Go world."

Judging by Appearances

In early 2014, Coulom's Go-playing program, Crazystone, challenged grandmaster Norimoto Yoda at a tournament in Japan. And it won. But the win came with a caveat: the machine had a four-move head start, a significant advantage. At the time, Coulom predicted that it would be another 10 years before machines beat the best players without a head start.

The challenge lies in the nature of the game. Even the most powerful supercomputers lack the processing power to analyze the results of every possible move in any reasonable amount of time. When Deep Blue topped world chess champion Garry Kasparov in 1997, it did so with what's called brute force. In essence, IBM's supercomputer analyzed the outcome of every possible move, looking further ahead than any human possibly could. That's simply not possible with Go. In chess, at any given turn, there are an average of 35 possible moves. With Go—in which two players compete with polished stones on a 19-by-19 grid—there are 250. And each of those 250 has another 250, and so on. As Hassabis points out, there are more possible positions on a Go board than atoms in the universe.
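
A back-of-the-envelope check of those numbers (the game lengths of 80 and 150 plies are ballpark assumptions on my part):

import math

# a game tree with branching factor b and depth d has about b**d lines
for name, b, d in [("chess", 35, 80), ("Go", 250, 150)]:
    print(f"{name}: about 10^{d * math.log10(b):.0f} game lines")
# prints roughly 10^124 for chess and 10^360 for Go -- versus
# an estimated ~10^80 atoms in the observable universe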


Using a technique called a Monte Carlo tree search, systems like Crazystone can look pretty far ahead. And in conjunction with other techniques, they can pare down the field of possibilities they must analyze. In the end, they can beat some talented players—but not the best. Among grandmasters, moves are rather intuitive. Players will tell you to make moves based on the general appearance of the board, not by closely analyzing how each move might play out. "Good positions look good," says Hassabis, himself a Go player. "It seems to follow some kind of aesthetic. That's why it has been such a fascinating game for thousands of years."
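
The "look pretty far ahead" part usually hinges on the UCB1 selection rule, which trades off a move's observed win rate against how little it has been explored. A minimal sketch; the Node structure here is hypothetical, not Crazystone's actual code:

import math

# node.children is a list; each child carries .wins and .visits counters
# (assume every child has been visited at least once)
def ucb1_child(node, c=1.4):
    total = sum(ch.visits for ch in node.children)
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(total) / ch.visits))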

But as 2014 gave way to 2015, several AI experts, including researchers at the University of Edinburgh and Facebook as well as the team at DeepMind, started applying deep learning to the Go problem. The idea was the technology could mimic the human intuition that Go requires. "Go is implicit. It's all pattern matching," says Hassabis. "But that's what deep learning does very well."

Self-Reinforcing

Deep learning relies on what are called neural networks—networks of hardware and software that approximate the web of neurons in the human brain. These networks don't operate by brute force or handcrafted rules. They analyze large amounts of data in an effort to "learn" a particular task. Feed enough photos of a wombat into a neural net, and it can learn to identify a wombat. Feed it enough spoken words, and it can learn to recognize what you say. Feed it enough Go moves, and it can learn to play Go.

At DeepMind and Edinburgh and Facebook, researchers hoped neural networks could master Go by "looking" at board positions, much like a human plays. As Facebook showed in a recent research paper, the technique works quite well. By pairing deep learning and the Monte Carlo Tree method, Facebook beat some human players—though not Crazystone and other top creations.

But DeepMind pushes this idea much further. After training on 30 million human moves, a DeepMind neural net could predict the next human move about 57 percent of the time—an impressive number (the previous record was 44 percent). Then Hassabis and team matched this neural net against slightly different versions of itself through what's called reinforcement learning. Essentially, as the neural nets play each other, the system tracks which move brings the most reward—the most territory on the board. Over time, it gets better and better at recognizing which moves will work and which won't.
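
The self-play loop itself is easy to demonstrate on a toy game. Here is a runnable miniature on 7-stone Nim (take 1-3 stones; taking the last stone wins): two copies of the same tabular "policy" play each other, and the winner's moves get their weights nudged up. This is a sketch of the idea, not DeepMind's method:

import random
from collections import defaultdict

W = defaultdict(lambda: 1.0)   # (stones_left, take) -> preference weight

def pick(stones):
    acts = [a for a in (1, 2, 3) if a <= stones]
    return random.choices(acts, [W[(stones, a)] for a in acts])[0]

def self_play_game():
    stones, player, history = 7, 0, {0: [], 1: []}
    while True:
        a = pick(stones)
        history[player].append((stones, a))
        stones -= a
        if stones == 0:
            return player, history      # taking the last stone wins
        player = 1 - player

for _ in range(20000):
    winner, hist = self_play_game()
    for s, a in hist[winner]:
        W[(s, a)] += 0.1                         # reinforce winning moves
    for s, a in hist[1 - winner]:
        W[(s, a)] = max(W[(s, a)] - 0.05, 0.01)  # discourage losing moves

print(pick(7))   # tends toward 3: leaving a multiple of 4 is optimal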

"AlphaGo learned to discover new strategies for itself, by playing millions of games between its neural networks, against themselves, and gradually improving," says DeepMind researcher David Silva.

According to Silver, this allowed AlphaGo to top other Go-playing AI systems, including Crazystone. Then the researchers fed the results into a second neural network. Grabbing the moves suggested by the first, it uses many of the same techniques to look ahead to the result of each move. This is similar to what older systems like Deep Blue would do with chess, except that the system is learning as it goes along, as it analyzes more data—not exploring every possible outcome through brute force. In this way, AlphaGo learned to beat not only existing AI programs but a top human as well.
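
Conceptually, that second network lets the search score a position directly instead of rolling every line out to the end. A minimal, hypothetical sketch of one-ply lookahead with a learned value function (value_net, legal_moves and apply_move are all stand-ins, not DeepMind's API):

def best_move(state, value_net, legal_moves, apply_move):
    # score each candidate move by the learned value of the position it reaches
    return max(legal_moves(state),
               key=lambda move: value_net(apply_move(state, move)))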

Dedicated Silicon

Like most state-of-the-art neural networks, DeepMind's system runs atop machines equipped with graphics processing units, or GPUs. These chips were originally designed to render images for games and other graphics-intensive applications. But as it turns out, they're also well suited to deep learning. Hassabis says DeepMind's system works pretty well on a single computer equipped with a decent number of GPU chips, but for the match against Fan Hui, the researchers used a larger network of computers that spanned about 170 GPU cards and 1,200 standard processors, or CPUs. This larger computer network both trained the system and played the actual game, drawing on the results of the training.

When AlphaGo plays the world champion in South Korea, Hassabis's team will use the same setup, though they're constantly working to improve it. That means they'll need an Internet connection to play Lee Sedol. "We're laying down our own fiber," Hassabis says.

According to Coulom and others, topping the world champion will be more challenging than topping Fan Hui. But Coulom is betting on DeepMind. He has spent the past decade trying to build a system capable of beating the world's best players, and now, he believes that system is here. "I'm busy buying some GPUs," he says.

Go Forth

The importance of AlphaGo is enormous. The same techniques could be applied not only to robotics and scientific research, but to so many other tasks, from Siri-like mobile digital assistants to financial investments. "You can apply it to any adversarial problem—anything that you can conceive of as a game, where strategy matters," says Chris Nicholson, founder of the deep learning startup Skymind. "That includes war or business or [financial] trading."

For some, that's a worrying thing—especially when they consider that DeepMind's system is, in more ways than one, teaching itself to play Go. The system isn't just learning from data provided by humans. It's learning by playing itself, by generating its own data. In recent months, Tesla founder Elon Musk and others have voiced concerns that such AI systems could eventually exceed human intelligence and potentially break free from our control.

But DeepMind's system is very much under the control of Hassabis and his researchers. And though they used it to crack a remarkably complex game, it is still just a game. Indeed, AlphaGo is a long way from real human intelligence—much less superintelligence. "This is a highly structured situation," says Ryan Calo, an AI-focused law professor and the founder of the Tech Policy Lab at the University of Washington. "It's not really human-level understanding." But it points in that direction. And if DeepMind's system can understand Go, then maybe it can understand a whole lot more. "What if the universe," Calo says, "is just a giant game of Go?"


Ygg

Quote from: Meho Krljic on 25-03-2016, 06:37:32
Microsoft Releases AI Twitter Bot That Immediately Learns How To Be Racist

I saw this last night and meant to post it on the forum, but I dozed off reading the tweets.  :)


https://twitter.com/TayandYou/with_replies


Some of the nastiest tweets have been removed, but there is still plenty left. It is interesting to watch her reactions to other people's comments.
"I am the end of Chaos, and of Order, depending upon how you view me. I mark a division. Beyond me other rules apply."

Meho Krljic

Things played out differently in Japan:


Meanwhile in Japan, Microsoft's A.I. Chatbot Has Become an Otaku


And here is something unrelated to MS but on topic:


A discussion of how artificial intelligences play games:

http://youtu.be/5oXyibEgJr0

mac

I just watched on Twitch as a bot beat several pro players in a special 1v1 version of Dota. The bot played just like the pros, only better. During training it took the bot 1 hour to overcome the existing scripted bots, and 2 weeks to become an unbeatable monster. This bot wasn't programmed; it simply played against itself the whole time within OpenAI's software. For an introductory explanation see the first video; for the match itself against Dendi, who is considered the best Shadow Fiend (the hero used by both the player and the bot), see the second. It's fun to watch Dendi's face as he realizes the bot is better than him at every element of the game.

https://youtu.be/l92J1UvHf6M

https://youtu.be/ac1getNs2P8