
DeepMind Uses Nash Equilibrium To Solve ML Problems

By Terrie Graves
May 12, 2021

The most common method of teaching AI systems to perform tasks is training on examples: the process repeats until the system is properly trained and its errors are minimized. It is, however, a solitary endeavor.

Humans learn by interacting with others, and researchers have found that the same applies to machines. AI research lab DeepMind has already trained agents to play Capture the Flag and to reach Grandmaster level at StarCraft II. Drawing on these experiences, DeepMind has introduced a game-theoretic approach to help solve fundamental machine learning problems.

Principal Component Analysis (PCA) is a dimensionality reduction technique that shrinks large data sets while preserving most of the original information. For this research, the DeepMind team reformulated PCA as a competitive multi-agent game called EigenGame.
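To ground the discussion, here is a minimal sketch of classical PCA in NumPy; the function and variable names are illustrative rather than taken from the paper:

```python
import numpy as np

def pca(X, k):
    """Classical PCA: project centered data onto the top-k
    eigenvectors of its covariance matrix."""
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = (Xc.T @ Xc) / (len(Xc) - 1)       # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: for symmetric matrices
    top = np.argsort(eigvals)[::-1][:k]     # indices of top-k eigenvalues
    return Xc @ eigvecs[:, top]             # reduced representation

# Example: compress 10-dimensional data to 2 dimensions
X = np.random.randn(500, 10)
print(pca(X, k=2).shape)  # (500, 2)
```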

Principal component analysis

PCA burst onto the scene in the early 1900s and is a long-standing technique for processing high-dimensional data. Over the years, it has become a standard first step in data processing pipelines for aggregating and visualizing data, and the low-dimensional representations it produces are useful for regression and classification tasks.

Even a century later, the technique remains relevant and an important area of research, for two main reasons:

  • As the amount of available data has grown, PCA has become a computational bottleneck. To improve its scaling, researchers have turned to randomized algorithms that better exploit advances in deep-learning-focused computing, though research into better optimization is still ongoing (a representative randomized approach is sketched after this list).
  • Since PCA shares solutions with several other machine learning and engineering problems, it has become an important research area for developing insights and algorithms that apply broadly across the branches of the ML tree.
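
As a concrete illustration of the first point, one widely used randomized approach approximates the top components by first projecting the data onto a small random subspace, in the style of Halko et al.'s randomized SVD. The sketch below is representative, not necessarily the specific algorithm being alluded to:

```python
import numpy as np

def randomized_pca(X, k, oversample=10):
    """Approximate top-k principal components via a random projection
    (randomized range finder), avoiding a full eigendecomposition."""
    Xc = X - X.mean(axis=0)
    omega = np.random.randn(Xc.shape[1], k + oversample)
    Y = Xc @ omega                          # sample the range of Xc
    Q, _ = np.linalg.qr(Y)                  # orthonormal basis for that range
    B = Q.T @ Xc                            # small matrix, cheap to factor
    _, _, Vt = np.linalg.svd(B, full_matrices=False)
    return Xc @ Vt[:k].T                    # project onto approximate components

X = np.random.randn(2000, 500)
print(randomized_pca(X, k=5).shape)  # (2000, 5)
```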

EigenGame

DeepMind recently introduced a new multi-agent perspective on PCA (traditionally a single-agent problem) that provides a way to scale to massive datasets that were previously too computationally demanding. Presented at ICLR 2021, the approach is outlined in a paper titled “EigenGame: PCA as a Nash Equilibrium”.

In this approach, the DeepMind team designed the game around eigenvectors, which capture the essential variance in the data and are orthogonal to each other. In EigenGame, each player controls one eigenvector. Players earn reward by explaining variance in the data, but are penalized for aligning too closely with the other players: while Player 1 focuses on maximizing its variance, every other player must also minimize its alignment with the players above it in the hierarchy. The combination of reward and penalty defines each player’s utility.
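
A minimal sketch of such a utility, following the reward-minus-penalty structure described above; here M stands for the data's Gram matrix (X^T X), and the exact form used in the paper may differ in detail:

```python
import numpy as np

def utility(i, V, M):
    """Utility of player i: variance captured by its vector v_i,
    minus penalties for aligning with players above it (j < i)."""
    vi = V[:, i]
    reward = vi @ M @ vi                          # variance explained
    penalty = sum((vi @ M @ V[:, j]) ** 2 / (V[:, j] @ M @ V[:, j])
                  for j in range(i))              # alignment with parents
    return reward - penalty
```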

With the variance and alignment terms properly designed in EigenGame, the researchers were able to show that:

  • If all the players play optimally, together they reach the game’s Nash equilibrium, which is exactly the PCA solution. A Nash equilibrium, named after the mathematician John Forbes Nash Jr., is a solution concept in game theory: each player is assumed to know the equilibrium strategies of the other players, and no player gains anything by unilaterally changing only their own strategy.
  • The PCA solution can also be found if each player uses gradient ascent to maximize their utility independently and simultaneously (a sketch of this update follows the list).
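
Reusing the utility above, one simultaneous gradient-ascent round might look like the following; each player follows the gradient of its own utility and renormalizes to unit length (the paper's actual update adds a Riemannian correction, and the step size and iteration count here are illustrative):

```python
import numpy as np

def eigengame_step(V, M, lr=0.1):
    """One round of independent, simultaneous gradient ascent."""
    V_new = V.copy()
    for i in range(V.shape[1]):
        vi = V[:, i]
        grad = 2 * M @ vi                         # gradient of the reward
        for j in range(i):                        # gradients of the penalties
            vj = V[:, j]
            grad -= 2 * (vi @ M @ vj) / (vj @ M @ vj) * (M @ vj)
        v = vi + lr * grad
        V_new[:, i] = v / np.linalg.norm(v)       # stay on the unit sphere
    return V_new

# Columns of V converge (up to sign) toward the top-k eigenvectors of M
X = np.random.randn(1000, 6)
M = X.T @ X / len(X)
V, _ = np.linalg.qr(np.random.randn(6, 3))        # random orthonormal start
for _ in range(500):
    V = eigengame_step(V, M)
```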

The independence of the gradient-ascent updates is important because it allows the computation to be distributed across several Google Cloud TPUs, enabling both data and model parallelism, which lets the algorithm accommodate large-scale data. With EigenGame, the researchers were able to find the principal components of hundred-terabyte datasets containing “millions of entities or billions of rows” in just a few hours.

