The holy grail of AI has always been to enable computers to learn the way humans do. The most powerful AIs today, however, still rely on being given explicit rules, like the rules of a game of chess or Go. Human learning is often messy and inferential: we learn the rules of life as we go. DeepMind has long been trying … Continue reading
Today the machine learning algorithm MuZero was detailed in a feature research paper in Nature. MuZero expands on the abilities of systems like AlphaGo, AlphaGo Zero, and AlphaZero. Each new algorithm allowed a smart machine to become better at mastering games, starting with Go, then chess and shogi, and now Atari. What is MuZero? MuZero is a machine learning algorithm. An … Continue reading
It's the next step toward self-directed learning about the real world. Cue the shark music.
David Silver of DeepMind, who helped create the program that defeated a Go champion, thinks rewards are central to how machines—and humans—acquire knowledge.
A former world champion teams up with the makers of AlphaZero to test variants on the age-old game that can jolt players into creative patterns.
Back in January, Google's DeepMind team announced that its AI, dubbed AlphaStar, had beaten two top human professional players at StarCraft. "This is a dream come true," said DeepMind co-author Oriol Vinyals, who was an avid StarCraft player 20 years ago. By playing itself over and over again, AlphaGo Zero trained itself to play Go from scratch in just three days and soundly defeated the original AlphaGo 100 games to 0. The most recent version, AlphaZero, combined deep reinforcement learning (many layers of neural networks) with a general-purpose Monte Carlo tree search method. With AlphaZero's success, DeepMind's focus shifted to a new AI frontier: games of partial (incomplete) information, like poker, and multi-player video games like StarCraft II. Not only is the gameplay map hidden from players, but they must also control hundreds of units (mobile game pieces that can be built to influence the game) and buildings (used to create units or technologies that strengthen those units) simultaneously.
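The "deep reinforcement learning plus Monte Carlo tree search" combination can be made concrete with a minimal, hypothetical sketch: below, a `policy_value` stub stands in for the deep network, and a PUCT-style tree search uses its move priors and value estimates to pick moves in a toy counting game. The game, constants, and function names here are invented for illustration; this is not DeepMind's code.

```python
import math

# Toy stand-in for chess/Go: players alternately add 1 or 2 to a running
# total; whoever reaches exactly 10 wins. (Invented for this sketch.)
TARGET = 10

def legal_moves(total):
    return [m for m in (1, 2) if total + m <= TARGET]

def policy_value(total):
    """Stub for the deep network: uniform move priors and a neutral value
    estimate, both from the perspective of the player to move."""
    moves = legal_moves(total)
    return {m: 1.0 / len(moves) for m in moves}, 0.0

class Node:
    def __init__(self, prior):
        self.prior = prior       # P(s, a) from the policy head
        self.visits = 0          # N(s, a)
        self.value_sum = 0.0     # W(s, a)
        self.children = {}       # move -> Node

    @property
    def q(self):                 # mean value Q(s, a)
        return self.value_sum / self.visits if self.visits else 0.0

def puct_child(node, c=1.5):
    """Select the child maximizing Q + U, where U favors high-prior,
    rarely visited moves (the PUCT rule used in this family of programs)."""
    n = sum(ch.visits for ch in node.children.values())
    return max(node.children.items(),
               key=lambda kv: kv[1].q
               + c * kv[1].prior * math.sqrt(n + 1) / (1 + kv[1].visits))

def search(start_total, n_sims=500):
    root = Node(prior=1.0)
    priors, _ = policy_value(start_total)
    root.children = {m: Node(p) for m, p in priors.items()}
    for _ in range(n_sims):
        node, total, path = root, start_total, []
        while node.children:                     # selection
            move, node = puct_child(node)
            total += move
            path.append(node)
        if total == TARGET:                      # terminal: mover just won
            value = -1.0                         # bad for the player to move
        else:                                    # expansion + evaluation
            priors, value = policy_value(total)
            node.children = {m: Node(p) for m, p in priors.items()}
        for n_ in reversed(path):                # backup, flipping sides
            value = -value
            n_.visits += 1
            n_.value_sum += value
    return max(root.children, key=lambda m: root.children[m].visits)

# With enough simulations the visit counts concentrate on adding 1,
# the winning first move in this toy game.
print(search(0))
```

In the real systems, the network's value estimate replaces random rollouts at the leaves of the search tree, which is what lets the search stay both shallow and well informed.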
On May 28, the Beijing Academy of Artificial Intelligence (BAAI) released the “Beijing AI Principles,” an outline to guide the research and development, implementation, and governance of AI. Endorsed by Peking University; Tsinghua University; the Chinese Academy of Sciences’ Institute of Automation and Institute of Computing Technology; and companies such as Baidu, Alibaba, and Tencent, the principles are the latest global entry in a long list of statements about what AI is and should be. On the surface, a bland and unsurprising take on AI ethics, the document actually pushes forward the global discussion on what AI should look like. Instead of fluffy, feel-good utterances we can all agree on, the global AI community needs to go beyond just words and give us concrete examples of how AI can represent our highest values. October 3, 2017: DeepMind, developers of AlphaGo and AlphaZero, releases “Ethics & Society Principles.” April 9, 2018: OpenAI, a non-profit founded by Elon Musk and Sam Altman, publishes its Charter, in English and Chinese.
Nanoengineers at the University of California San Diego have developed new deep learning models that can accurately predict the properties of molecules and crystals. By enabling almost instantaneous property predictions, these deep learning models provide researchers the means to rapidly scan the nearly infinite universe of compounds to discover potentially transformative materials for various technological applications, such as high-energy-density Li-ion batteries, warm-white LEDs, and better photovoltaics. To construct their models, a team led by nanoengineering professor Shyue Ping Ong at the UC San Diego Jacobs School of Engineering used a new deep learning framework called graph networks, developed by Google DeepMind, the brains behind AlphaGo and AlphaZero. Graph networks have the potential to expand the capabilities of existing AI technology to perform complicated learning and reasoning tasks with limited experience and knowledge, something that humans are good at. For materials scientists like Ong, graph networks offer a natural way to represent bonding relationships between atoms in a molecule or crystal and enable computers to learn how these relationships relate to their chemical and physical properties. The new graph network-based models, which Ong's team dubbed MatErials Graph Network (MEGNet) models, outperformed the state of the art in predicting 11 out of 13 properties for the 133,000 molecules in the QM9 data set.
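As a rough illustration of why graphs are a natural fit here, the sketch below encodes a water molecule as a graph of atoms and bonds and runs one step of message passing, in which each atom's state is updated from its bonded neighbors. This is a toy example with invented names and sizes, not Ong's MEGNet code.

```python
import numpy as np

# Toy molecular graph for water (H2O): atoms are nodes, bonds are edges.
# Node features are one-hot element encodings [is_H, is_O].
atoms = np.array([
    [0.0, 1.0],   # O
    [1.0, 0.0],   # H
    [1.0, 0.0],   # H
])
bonds = [(0, 1), (0, 2)]  # the two O-H bonds (undirected)

rng = np.random.default_rng(0)
W_msg = rng.normal(size=(2, 4))      # maps a neighbor's features to a message
W_upd = rng.normal(size=(2 + 4, 2))  # combines own features with messages

def message_passing_step(h, edges):
    """One round of message passing: each atom aggregates transformed
    features from its bonded neighbors, then updates its own state."""
    msgs = np.zeros((h.shape[0], W_msg.shape[1]))
    for i, j in edges:               # messages flow both ways along a bond
        msgs[i] += np.tanh(h[j] @ W_msg)
        msgs[j] += np.tanh(h[i] @ W_msg)
    return np.tanh(np.concatenate([h, msgs], axis=1) @ W_upd)

h = message_passing_step(atoms, bonds)
# A property prediction would pool the node states and feed a readout net:
print(h.mean(axis=0))
```

Stacking several such steps lets information propagate across the whole molecule before the pooled node states are mapped to a predicted property.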
Alexa, Sophia, Watson: the ancient idea of a humanoid machine with superhuman powers has received fresh impetus from the progress achieved in AI research. Within the project "Clarification of Suspicion of Consciousness in Artificial Intelligence," funded by the Federal Ministry of Education and Research, technology assessment experts at Karlsruhe Institute of Technology (KIT) are analyzing this issue, which has hardly been studied before. When the robot "Sophia" stepped up to the speaker's desk at a conference in Riyadh, Saudi Arabia, and explained to the half-amused, half-astonished audience its self-image as a learning and communicating machine in human form, public perception considered this a milestone on the apparently ever shorter path to an "awakening" artificial intelligence that reflects on its individuality and its inner states. AI-based systems such as the smart speaker "Alexa," IBM's interoperable super brain "Watson," or Google's self-learning chess giant "AlphaZero" also boost the vision of a "superintelligence" that will be feasible within the foreseeable future and overshadow anything the world has ever seen. Surprisingly, it is hardly ever asked what "conscious AI" is and whether the scenario of machines with an existence of their own may become reality. The computer scientist and technology assessment expert continues: "Some consider it impossible that machines, in particular AI systems, will ever become 'conscious'.
It yielded “smarter” AI, real-world applications, improvements in underlying algorithms, and greater discussion of AI’s impact on civilization. Developed by Alphabet’s DeepMind, AlphaZero showcases the flexibility of deep reinforcement learning. It is now peer-reviewed and works across three different board games. Now there are machines that can observe an environment, learn its “unspoken rules,” and adapt their actions to explore and benefit from it, as humans do. In healthcare, deep-learning models can perform as well as a human expert in analyzing electron microscopy images or detecting eye diseases. For environment and climate applications, AI is helping to build better climate models, map millions of solar roofs in the US, monitor ocean health, and support animal conservation work.
After a weeklong whirlwind of talks, demonstrations, spotlight sessions, and posters, the Conference on Neural Information Processing Systems (NeurIPS) — one of the largest artificial intelligence (AI) and machine learning conferences of the year — is coming to a close, and it was a smashing success by any measure. This year’s program featured 42 workshops and nine tutorials; 4,854 papers were submitted for consideration, 1,010 of which were accepted. That’s all despite a bit of a preconference kerfuffle that led the NeurIPS board to change the conference’s acronym from “NIPS,” which some attendees and sponsors had protested for its potentially offensive connotations. And DeepMind, the British AI division owned by Google parent company Alphabet, announced that its work on AlphaZero — a system capable of defeating human world champions at chess, shogi, and Go — has been accepted by the journal Science, where it made the front page. NeurIPS 2018’s invited presenters, meanwhile, touched on hot topics in the AI community and the broader tech industry. “Everything we’re striving for in changing the world [with AI] could be at risk,” Felten said, speaking to conference attendees this week.
Until now, we were trying to develop AIs that could beat human players at their respective board games. But the race has now shifted to beating other top AIs at their own games. Alphabet Inc.’s AI division, DeepMind, has developed an AI named AlphaZero, which can learn and master games like chess, Go, and shogi without any human intervention. In a paper published in the journal Science, the DeepMind team notes that AlphaZero is an improved version of its famous AlphaGo engine. After being fed the basic rules, it took AlphaZero nine hours to learn chess, 12 hours to learn shogi, and 13 days to learn Go. It was then pitted against the world’s best AIs for these games.
Humans have mostly accepted that they will never be as good at chess as the robots, but now even the robots have to accept they will never be as good as robots. A new artificial intelligence platform, known as AlphaZero, can learn the games of Go, chess, and shogi from scratch, without any human intervention. Using deep neural networks, AlphaZero quickly learnt each game "to become the strongest player in history." DeepMind, a British AI subsidiary of Alphabet, Google's parent company, has been tinkering with Go AI for a number of years. In 2017, DeepMind retired former AI champion AlphaGo, but continued tinkering with the AI. The program was pitted against the world's best AI for three board games:
Alphabet's British AI company DeepMind has released a report on the work of its artificial intelligence AlphaZero, an AI developed to learn different kinds of games and then become the world's best at them. AlphaZero is a successor to AlphaGo, which three years ago beat one of the world's best human Go players in a tournament of five matches. AlphaGo, however, had been trained by people for about a month before it carried out that feat. After studying Go on its own for just three days, AlphaZero could defeat AlphaGo in Go matches. In addition, AlphaZero has also learned to play chess and the Japanese chess variant shogi on its own. The idea behind AlphaZero is that it can train itself to world-champion level in any game where the rules and all information are known when decisions are to be made.
Google's DeepMind—the group that brought you the champion game-playing AIs AlphaGo and AlphaGo Zero—is back with a new, improved, and more generalized version. Dubbed AlphaZero, this program taught itself to play three different board games (chess, Go, and shogi, a Japanese form of chess) in just three days, with no human intervention. "Starting from totally random play, AlphaZero gradually learns what good play looks like and forms its own evaluations about the game," said Demis Hassabis, CEO and co-founder of DeepMind. The very first chess computer program was written in the 1950s at Los Alamos National Laboratory, and in the late 1960s, Richard D. Greenblatt's Mac Hack IV program was the first to play in a human chess tournament—and to win against a human in tournament play. So AI researchers turned their attention in recent years to creating programs that can master the game of Go, a hugely popular board game in East Asia that dates back more than 2,500 years. It's a surprisingly complicated game, much more difficult than chess, despite only involving two players and a fairly simple set of ground rules.
DeepMind, the London-based subsidiary of Alphabet, has created a system that can quickly master any game in the class that includes chess, Go, and shogi, and do so without human guidance. The research, published today in the journal Science, was performed by a team led by DeepMind’s David Silver. The paper was accompanied by a commentary by Murray Campbell, an AI researcher at the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y. “This work has, in effect, closed a multi-decade chapter in AI research,” writes Campbell, who was a member of the team that designed IBM’s Deep Blue, which in 1997 defeated Garry Kasparov, then the world chess champion. Other examples include many multiplayer games, such as StarCraft II, Dota, and Minecraft. “A group has already beaten the best players at Dota 2, though it was a restricted version of the game; StarCraft may be a little harder.
Almost a year ago exactly, DeepMind, the British artificial intelligence (AI) division owned by Google parent company Alphabet, made headlines with preprint research (“Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm”) describing a system — AlphaZero — that could teach itself how to master the game of chess, a Japanese variant of chess called shogi, and the Chinese board game Go. DeepMind’s claims were impressive to be sure, but they hadn’t undergone peer review. “A couple of years ago, our program, AlphaGo, defeated the 18-time Go world champion, Lee Sedol, by four games to one. But for us, that was actually the beginning of the journey to build a general-purpose learning system that could learn for itself to play many different games to a superhuman level,” David Silver, lead researcher on AlphaZero, told reporters assembled in a conference room at NeurIPS 2018 in Montreal. “AlphaZero is the next step in that journey. It learned from scratch to defeat world champion programs in Go, chess, and shogi, starting from no knowledge except the game rules.”
TL;DR: Deep RL sucks – A Google engineer has published a long, detailed blog post explaining the current frustrations in deep reinforcement learning, and why it doesn’t live up to the hype. Teaching agents to play games like Go well enough to beat human experts like Ke Jie fuels the man-versus-machine narrative. All impressive RL results that achieve human or superhuman level require a massive amount of training and experience to get the machine to do something simple. For example, it took DeepMind’s AlphaZero program over 68 million games of self-play to master chess and Go; no human could ever play that many games in a lifetime. “But, for any setting where this isn’t true, RL faces an uphill battle, and unfortunately, most real-world settings fall under this category,” he wrote. It’s difficult to try to coax an agent into learning a specific behavior, and in many cases hard-coded rules are just better.
Neural networks sort stuff – they can't reason or infer

Deep learning and neural networks may have benefited from the huge quantities of data and computing power, but they won't take us all the way to artificial general intelligence, according to a recent academic assessment. Gary Marcus, ex-director of Uber's AI labs and a psychology professor at New York University, argues that there are numerous challenges for deep learning systems that broadly fall into a series of categories. It may be disheartening to know that programs like DeepMind's AlphaZero can thrash all meatbags at a game of chess or Go, but that only happened after it played a total of 68 million matches against itself across both types of games. That's far more than any human professional will play in a lifetime. The relationships between the input and output data are represented and learnt by adjusting the connections between the nodes of a neural network.
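That last sentence can be illustrated in a few lines: a minimal model "learns" by nudging its connection weights to shrink its prediction error. This is a toy linear example for intuition only, unrelated to DeepMind's actual networks.

```python
import numpy as np

# Learn y = 2*x0 - 3*x1 from examples by adjusting connection weights.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))         # inputs
y = 2 * X[:, 0] - 3 * X[:, 1]         # targets the model should recover

w = np.zeros(2)                       # the "connections" to be adjusted
lr = 0.1
for _ in range(200):
    pred = X @ w                      # forward pass
    grad = X.T @ (pred - y) / len(X)  # gradient of the squared error
                                      # (up to a constant factor)
    w -= lr * grad                    # adjust connections to reduce error

print(w)                              # converges toward [2, -3]
```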
Good effort but the games were seemingly rigged

Analysis DeepMind claimed this month its latest AI system – AlphaZero – mastered chess and shogi as well as Go to "superhuman levels" within a handful of hours. However, some things are too good to be completely true. AlphaZero is based on AlphaGo, the machine-learning software that beat 18-time Go champion Lee Sedol last year, and AlphaGo Zero, an upgraded version of AlphaGo that beat AlphaGo 100-0. Like AlphaGo Zero, AlphaZero learned to play games by playing against itself, a technique in reinforcement learning known as self-play. “Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case,” DeepMind's research team wrote in a paper detailing AlphaZero's design.
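Self-play, in miniature, looks something like the sketch below: the agent plays both sides of a toy game (players alternately add 1 or 2; the first to reach 10 or more wins), and the outcomes of its own games are its only training signal. A lookup table stands in for the neural network; this is an illustration of the idea, not DeepMind's pipeline.

```python
import random

TARGET = 10  # toy game: add 1 or 2 per turn; first to reach 10 or more wins

def pick_move(policy, state, explore=0.2):
    """Choose a move from a lookup table of scores. A real system would
    query a neural network; the table is a stand-in for this sketch."""
    if random.random() < explore:
        return random.choice((1, 2))
    return max((1, 2), key=lambda m: policy.get((state, m), 0.0))

def self_play_episode(policy):
    """The agent plays both sides; every move is recorded with its mover."""
    state, player, history, winner = 0, 0, [], None
    while state < TARGET:
        move = pick_move(policy, state)
        history.append((player, state, move))
        state += move
        if state >= TARGET:
            winner = player
        player = 1 - player
    return history, winner

def train(episodes=5000):
    policy = {}
    for _ in range(episodes):
        history, winner = self_play_episode(policy)
        # The game's own outcome is the training signal: reinforce the
        # winner's moves, penalize the loser's. No human examples needed.
        for player, state, move in history:
            reward = 1.0 if player == winner else -1.0
            policy[(state, move)] = policy.get((state, move), 0.0) + 0.1 * reward
    return policy

trained = train()
print(pick_move(trained, 8, explore=0))  # should have learned to add 2 and win
```

The key property, as the quote above notes, is that no domain knowledge beyond the rules enters the loop: better play emerges solely from reinforcing moves that led to wins.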
