There is no rest for the weary artificial intelligence: In the five months since being crowned world Go champion, AlphaGo has become stronger, smarter, and more powerful than ever.

*Insert training montage set to “Eye of the Tiger” here*

AlphaGo Zero, the latest evolution of Google’s elite algorithm, has no need for humans: The platform learns by playing against itself, with no human intervention or historical data. And it’s a quick study.

After 40 days, AlphaGo Zero surpassed all previous versions (winning 100 of 100 games) to become “arguably … the best Go player in the world,” according to DeepMind CEO Demis Hassabis and lead researcher David Silver.

The artificial intelligence firm used a “novel” form of reinforcement learning (a sure sign of the robot apocalypse), in which AlphaGo Zero becomes its own teacher.

Starting from scratch, the neural network, like Jon Snow, knows nothing (about the game of Go). But through repeated, self-contained gameplay, the system incrementally improves, creating ever-stronger versions of AlphaGo Zero.

“This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge,” a DeepMind blog post said. “Instead, it is able to learn tabula rasa [the idea that knowledge comes from experience or perception] from the strongest player in the world: AlphaGo itself.”

Developed by Google’s DeepMind, the software in 2015 became the first computer Go program to beat a human professional. It made history again last year, when it pummeled Lee Sedol, and triumphed once more in May with a three-game sweep of world No. 1 player Ke Jie.

The latest iteration, however, differs from its predecessors: AlphaGo Zero abandons all hand-engineered features, runs a single neural network (versus the two found in earlier models), and relies solely on its own knowledge to evaluate positions.

“All of these differences help improve the performance of the system and make it more general,” Hassabis and Silver said, pointing to “unconventional strategies and creative new moves” that surpassed those used against Sedol and Jie.

Invented in China more than 2,500 years ago, and played by more than 40 million people worldwide, Go requires players to place black or white stones on a board in an effort to capture the opponent’s pieces or surround empty spaces to build territories.

The deceptively difficult game features more possible board positions than there are atoms in the universe, ruling out traditional “brute force” AI methods that search every conceivable move.

“These moments of creativity give us confidence that AI will be a multiplier for human ingenuity, helping us with our mission to solve some of the most important challenges humanity is facing,” the pair wrote.

Learn more about AlphaGo Zero in a new paper, published this week in the journal Nature.
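The self-play loop described above can be caricatured in a few lines of code. This is a toy sketch, not DeepMind’s implementation: the “network” is reduced to a single skill number, the fixed skill bump per generation and the 55% promotion gate are assumptions of this illustration, and the Elo-style win-probability curve simply stands in for actual games of Go. The idea it shows is the core one from the article: a candidate trains against the current best version of itself and replaces it only if it proves stronger.

```python
import random

random.seed(0)  # deterministic run for this sketch


class ToyPlayer:
    """Stand-in for AlphaGo Zero's single policy/value network,
    reduced to one scalar 'skill' rating (illustration only)."""

    def __init__(self, skill=0.0):
        self.skill = skill


def play_game(challenger, champion):
    """True if the challenger wins one game; an Elo-style curve
    turns the skill gap into a win probability."""
    p_win = 1 / (1 + 10 ** ((champion.skill - challenger.skill) / 400))
    return random.random() < p_win


def next_generation(best, games=200, threshold=0.55):
    """One self-play cycle: 'train' a candidate from the incumbent
    (modeled as a fixed skill bump), then promote it only if it wins
    more than `threshold` of head-to-head games. Both the bump size
    and the 55% gate are assumptions of this sketch."""
    candidate = ToyPlayer(best.skill + 50)
    wins = sum(play_game(candidate, best) for _ in range(games))
    return candidate if wins / games > threshold else best


best = ToyPlayer()
for _ in range(40):  # loosely mirrors the 40-day run described above
    best = next_generation(best)
```

Because each candidate must beat the reigning version before being promoted, skill can only ratchet upward, with no human games or hand-engineered features anywhere in the loop.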