AI beats leading human players in six-party poker, marking milestone
                 Source: Xinhua | 2019-07-12 05:54:55 | Editor: huaxia

This is the interface used during the experiment with Pluribus and the professional players. Credit: Facebook

WASHINGTON, July 11 (Xinhua) -- An artificial intelligence (AI) program has defeated leading professionals in six-player Texas hold'em poker, marking a superhuman AI milestone in multiplayer competition.

The study, published online Thursday in the journal Science, showed that the AI, called Pluribus and developed by Carnegie Mellon University in collaboration with Facebook AI, played against five poker professionals at a time for a total of 10,000 hands and emerged victorious with statistical significance.

All of the human players are among the best in the world, each having won more than one million U.S. dollars playing poker, according to the study.

"The ability to beat five other players in such a complicated game opens up new opportunities to use AI to solve a wide variety of real-world problems," said Tuomas Sandholm, professor of computer science at Carnegie Mellon, who led the study.

Thus far, superhuman AI milestones in strategic reasoning have been limited to two-party competitions like chess and Go. Poker is a bigger challenge than those because it is an incomplete-information game: players can't be certain which cards are in play, and opponents can bluff.

AI in two-player games tends to approximate a Nash equilibrium, which guarantees only a result no worse than a tie; the AI wins once its opponent errs and cannot maintain the equilibrium. But in a game with more than two players, playing a Nash equilibrium can be a losing strategy, so Pluribus has to consistently outplay its opponents.
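For reference, the two-player guarantee behind this reasoning is a standard game-theory fact (a textbook statement, not drawn from the paper itself): in a two-player zero-sum game, if player 1 plays a minimax (Nash) strategy x*, then for every opponent strategy y,

    \[
      u_1(x^{\ast}, y) \;\ge\; \min_{y'} u_1(x^{\ast}, y')
      \;=\; \max_{x}\,\min_{y'} u_1(x, y') \;=\; v ,
    \]

where v is the game value. Player 1 is thus assured at least v (a tie when v = 0), and any opponent error can only improve the outcome. With three or more players, no comparable per-player guarantee follows from playing one's own part of an equilibrium.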

Pluribus considers only five possible continuation strategies that each opponent, and Pluribus itself, might adopt for the rest of the game. It also learns to be unpredictable by not always taking the best move, according to the study.
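To make the idea concrete, here is a minimal toy sketch in Python (an illustration only, not Pluribus's actual algorithm or code; the option labels, value estimates, and softmax sampling are all assumptions) of restricting search to a small menu of continuation options and mixing between them rather than always picking the single best one:

    import math
    import random

    # Hypothetical menu of five continuation options, echoing the article's
    # "five possible continuation strategies" (the labels are made up).
    CONTINUATIONS = ["fold", "call", "small_raise", "big_raise", "all_in"]

    def estimate_value(option: str, rng: random.Random) -> float:
        """Stand-in for whatever evaluation a real search would compute."""
        base = {"fold": 0.00, "call": 0.40, "small_raise": 0.50,
                "big_raise": 0.45, "all_in": 0.30}[option]
        return base + rng.uniform(-0.05, 0.05)  # noisy estimate

    def choose_action(rng: random.Random, temperature: float = 0.1) -> str:
        """Sample from a softmax over the limited menu, so strong options are
        favored but the single best move is not played every time."""
        values = {opt: estimate_value(opt, rng) for opt in CONTINUATIONS}
        weights = [math.exp(v / temperature) for v in values.values()]
        return rng.choices(list(values), weights=weights, k=1)[0]

    if __name__ == "__main__":
        rng = random.Random(0)
        print([choose_action(rng) for _ in range(10)])
        # Mostly strong actions, but varied -- the unpredictability the study describes.

Sampling among good options instead of always taking the top-valued one is what keeps such a player hard to exploit, at a small cost in per-hand value.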

This milestone victory could bring AI closer to solving many real-world problems involving multiple parties and missing information.

Pluribus computed its blueprint strategy in eight days using only 12,400 core hours, and it ran on just 28 cores during live play.
