
# Fractal AI: A Fragile Theory of Intelligence

## Deprecated

This repository is deprecated. If you would like to use any of these algorithms in your own research, please refer to the fragile framework.

The code here remains available only for educational purposes and as a companion to the Fractal AI paper.

*Example gameplay: Boxing-v0, MsPacman-v0, Tennis-v0, Centipede-v0, MontezumaRevenge-v0.*

> Once you start doubting, just like you're supposed to doubt, you ask me if the science is true. You say no, we don't know what's true, we're trying to find out and everything is possibly wrong.
>
> – Richard P. Feynman, *The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman*


## Abstract

Fractal AI (arXiv#1, arXiv#2) is a theory for efficiently sampling state spaces. It allows one to derive new mathematical tools that could be useful for modeling information using cellular automaton-like structures instead of smooth functions.

In this repository we present Fractal Monte Carlo (FMC), a new planning algorithm derived from the first principles of Fractal AI theory. An FMC agent is capable of solving Atari 2600 games under the OpenAI Gym several orders of magnitude more efficiently than similar planning algorithms, such as Monte Carlo Tree Search (MCTS) [1].

We also present Swarm Wave, a more advanced implementation, likewise derived from Fractal AI principles, which solves Markov decision processes when a perfect model of the environment is available. It is far more efficient than FMC, effectively "solving" a substantial number of Atari games.

The code in this repository shows how it is now possible to beat some of the current state-of-the-art benchmarks on Atari games while generating a large set of top-performing examples with little computation, turning Reinforcement Learning (RL) into a supervised problem.

These algorithms propose a new approach to modeling the decision space while maintaining control over every aspect of the agent's behavior. They can be applied to any combination of discrete or continuous decision and state spaces.
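To make the core idea concrete, below is a minimal sketch of one FMC-style planning step on a toy one-dimensional problem, following the walker/cloning scheme described in the Fractal AI paper: walkers are perturbed with random actions, scored with a "virtual reward" that balances reward against diversity, and probabilistically cloned toward better-scoring companions. The toy environment, the `n_walkers`/`horizon`/`goal` constants, and the exact normalizations are illustrative assumptions, not this repository's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_walkers, horizon, goal = 50, 30, 10.0   # assumed toy constants

# Toy environment: a walker's state is a scalar position on a line; actions
# move it left or right, and reward grows as it approaches `goal`.
first_action = rng.choice([-1.0, 1.0], n_walkers)  # root action each walker explores
positions = first_action.copy()                    # state after taking that action

for _ in range(horizon - 1):
    # 1. Perturb: every walker takes a random action in its own env copy.
    positions += rng.choice([-1.0, 1.0], n_walkers)

    # 2. Score: normalize reward and diversity to [0, 1] and combine them into
    #    a virtual reward balancing exploitation against exploration.
    reward = -np.abs(positions - goal)
    r_norm = (reward - reward.min()) / (np.ptp(reward) + 1e-8)
    companions = rng.integers(0, n_walkers, n_walkers)   # random pairing
    distance = np.abs(positions - positions[companions])
    d_norm = (distance - distance.min()) / (np.ptp(distance) + 1e-8)
    virtual_reward = r_norm * d_norm

    # 3. Clone: walker i jumps to companion j with probability proportional to
    #    how much better j's virtual reward is than i's.
    p_clone = (virtual_reward[companions] - virtual_reward) / (virtual_reward + 1e-8)
    clones = rng.random(n_walkers) < np.clip(p_clone, 0.0, 1.0)
    positions[clones] = positions[companions[clones]]
    first_action[clones] = first_action[companions[clones]]

# The agent acts with the root action most represented in the final swarm.
print("chosen action:", 1.0 if (first_action > 0).mean() > 0.5 else -1.0)
```

In the actual agent each walker carries a full copy of the environment state, and a loop like this is re-run every time an action has to be chosen.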

## Quick Start

To quickly understand the fundamentals of Fractal AI, you can refer to the Introduction to FAI. That document briefly explains the algorithms presented here and their potential applications in the field of Reinforcement Learning.

To test how the Fractal Monte Carlo Agent performs on any Atari game, you can refer to the FMC example notebook. The example lets you run games using either the RAM contents or the rendered pixels as observations. A rough sketch of the control loop the notebook drives is shown below.
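For orientation, the notebook's outer loop is essentially the classic Gym episode loop sketched here; `plan_action` is a hypothetical placeholder for the FMC planner, not a function exported by this repository.

```python
import gym

def plan_action(env, observation):
    """Hypothetical stand-in for the FMC planner: a real agent would run a
    swarm of walkers on copies of the environment and return the action whose
    walkers performed best. Here we just sample randomly."""
    return env.action_space.sample()

env = gym.make("MsPacman-v0")
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = plan_action(env, obs)
    obs, reward, done, info = env.step(action)  # classic (pre-0.26) Gym API
    total_reward += reward
print("episode reward:", total_reward)
```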

To better understand how the Swarm Wave algorithm works in practice, you can refer to the Swarm Wave example notebook.

Please note that the authors are open to discussing the ideas and code presented here within the conceptual framework of Reinforcement Learning and its standard terminology.

## Installation

The code provided aims to be both simple and self-explanatory. Requirements and instructions to set up the environment are provided below.


### Installing dependencies

As a first step, install the dependencies as explained in the OpenAI Gym documentation:

> To install the full set of environments, you'll need to have some system packages installed. We'll build out the list here over time; please let us know what you end up installing on your platform.

If you also want to run the example notebooks, install Jupyter:

```bash
pip3 install jupyter
```

On OSX:

```bash
brew install cmake boost boost-python sdl2 swig wget
```

On Ubuntu 14.04:

```bash
sudo apt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev xvfb libav-tools xorg-dev python-opengl libboost-all-dev libsdl2-dev swig
```

### Cloning and Installing the FractalAI Repository

In a terminal, run:

```bash
git clone git@github.com:FragileTheory/FractalAI.git
cd FractalAI
sudo pip3 install -e .
```

## Benchmarks

> It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong.
>
> – Richard P. Feynman

We used a standard set of 50 Atari 2600 games, common to all the planning-algorithm articles found in the literature, to compare our implementation of the FMC algorithm against:

- **Standard Human**: a professional game tester after 2 hours of training, as reported in [5].
- **World Human Record**: the maximum score achieved by a human player, as reported in [8].
- **Planning SOTA**: the best score achieved by any state-of-the-art planning algorithm (Full Tree, MCTS UCT, IW(1), p-IW(1), R.p-IW(1), 2BSF, BrFS), as reported in [12] [13] [14] [15] [16] [17].
- **Hidden score limit**: many games do not support scores above 1M and reset the score to zero once 999,999 is reached. In most cases this limit was entirely unknown, as no one (human or algorithm) had ever reached it before.
|                           | FMC Wins | %    |
|---------------------------|----------|------|
| FMC vs Standard Human     | 49 / 50  | 98%  |
| FMC vs World Human Record | 32 / 50  | 64%  |
| FMC vs Planning SOTA (1)  | 50 / 50  | 100% |
| FMC vs Hidden score limit | 16 / 50  | 32%  |

(1) On average, the Swarm Wave version of FMC used 360 times fewer samples per action than the other planning algorithms, which typically use about 150k samples per action (i.e., roughly 150,000 / 360 ≈ 400 samples per action for FMC).

### Fractal Monte Carlo Agent Performance Table

The following table shows the Fractal Monte Carlo Agent's performance on each tested game.

| Game                 | Human Record | Planning SOTA |      FMC |
|----------------------|-------------:|--------------:|---------:|
| Alien                |       251916 |         38951 |   479940 |
| Amidar               |       155339 |          3122 |     5779 |
| Assault              |         8647 |          1970 |    14472 |
| Asterix (*)          |       335500 |        319667 |   999500 |
| Asteroids            |     10004100 |         68345 | 12575000 |
| Atlantis             |      7352737 |        198510 | 10000100 |
| Bank Heist           |       199978 |          1171 |     3139 |
| Battle Zone (*)      |       863000 |        330880 |   999000 |
| Beam Rider (*)       |       999999 |         12243 |   999999 |
| Berzerk              |      1057940 |          2096 |    17610 |
| Bowling              |          300 |            69 |      180 |
| Boxing               |          100 |           100 |      100 |
| Breakout             |          752 |           772 |      864 |
| Centipede            |      1301709 |        193799 |  1351000 |
| Chopper Command (*)  |       999900 |         34097 |   999900 |
| Crazy Climber        |       447000 |        141840 |  2254100 |
| Demon Attack (*)     |       999970 |         34405 |   999970 |
| Double Dunk          |           24 |            24 |       24 |
| Enduro               |       3617.9 |           788 |     5279 |
| Fishing Derby        |           71 |            42 |       63 |
| Freeway              |           34 |            32 |       33 |
| Frostbite (*)        |       552590 |          6427 |   999960 |
| Gopher (*)           |       120000 |         26297 |   999980 |
| Gravitar             |      1673950 |          6520 |    14050 |
| Hero                 |      1000000 |         15280 |    43255 |
| Ice Hockey           |           36 |            62 |       64 |
| Jamesbond            |        45550 |         23070 |   152950 |
| Kangaroo             |      1436500 |          8760 |    10800 |
| Krull                |      1006680 |         15788 |   426534 |
| Kung-Fu Master       |      1000000 |         86290 |   172600 |
| Montezuma's Revenge  |      1219200 |           500 |     5600 |
| Ms. Pacman (*)       |       290090 |         30785 |   999990 |
| Name This Game       |        25220 |         15410 |    53010 |
| Pong                 |           21 |            21 |       21 |
| Private Eye          |       103100 |          2544 |    41760 |
| QBert (*)            |       999975 |         44876 |   999975 |
| River Raid           |       194940 |         15410 |    18510 |
| Road Runner (*)      |       999900 |        120923 |   999900 |
| Robotank             |           74 |            75 |       94 |
| Seaquest (*)         |       527160 |         35009 |   999999 |
| Space Invaders       |       621535 |          3974 |    17970 |
| Star Gunner (*)      |        77400 |         14193 |   999800 |
| Tennis               |           24 |            24 |       24 |
| Time Pilot           |        66500 |         65213 |    90000 |
| Tutankham            |         3493 |           226 |      342 |
| Up and Down (*)      |       168830 |        120200 |   999999 |
| Venture              |        31900 |          1200 |     1500 |
| Video Pinball (*)    |       999999 |        471859 |   999999 |
| Wizard of Wor (*)    |        99900 |        161640 |    99900 |
| Zaxxon               |       100000 |         39687 |    92100 |

(*) Games affected by the "1 Million bug", where the maximum score is hard-limited to 999,999.

### Detailed Performance Sheet

We provide a more detailed Google Docs spreadsheet in which the performance of the Fractal Monte Carlo Agent is logged relative to the current alternatives. The spreadsheet also lists the parameters used in each of the runs.

If you find any outdated benchmarks, or you are for some reason unable to replicate some of our results, please open an issue and we will update the document accordingly.

## Additional Resources

### Theoretical Foundations

**Fractal AI: A Fragile Theory of Intelligence**: this document explains the fundamental principles of the Fractal AI theory on which our Agent is based. We developed all of the fundamental principles from scratch to build our own solution, trying to stay consistent with existing terminology; the document should contain everything you need to understand the theory. Comments on how to explain the content more clearly are appreciated.

**Solving Atari Games Using Fractals and Entropy**: a short version of the article, written by Spiros Baxevanakis and submitted, under very high uncertainty, to NIPS 2018.

### Blog

**EntropicAI, Sergio Hernández Cerezo's blog**: here you can find the evolution of the research process behind this algorithm, documented and explained, as well as experiments that apply the theory to other fields of research.

### YouTube

**Fractal AI playlist**: in this YouTube playlist you can find videos of our accomplishments over the years. Besides recordings of Atari games played by the Agent, there are videos recorded with a custom library that lets one create different tasks in continuous control environments, as well as visualizations of how the Agent samples the state space.

### Related Research

**GAS paper [9]**: a manuscript describing an application of Fractal AI theory to general optimization problems. There are certainly better ways to apply the theory to such problems, yet it illustrates why code explains the theory better than mathematics: when we try to formalize it, things quickly become non-intuitive.

**Causal Entropic Forces by Alexander Wissner-Gross [10]**: the fundamental concepts behind this paper inspired the present research. We developed our theory with the aim of calculating future entropy more quickly, and of leveraging the information contained in the entropy of any state space together with any potential function.

## Cite us

```bibtex
@misc{1803.05049,
    Author = {Sergio Hernández Cerezo and Guillem Duran Ballester},
    Title = {Fractal AI: A fragile theory of intelligence},
    Year = {2018},
    Eprint = {arXiv:1803.05049},
}
```

## FAQ

Questions regarding the research and methodology will be addressed in the FAQ document.

## About the Authors

The authors, Sergio Hernández Cerezo and Guillem Duran Ballester, developed the theory as a personal side project, driven purely by intellectual curiosity. Guillem worked on it while attending college, and Sergio while working as a programmer. The authors are not part of academia, have no corporate affiliation, and have no formal track record.

All the time and resources involved came from the authors themselves, along with support from:

- **HCSoft**, which has supported our research and believed in the ideas since the very beginning.
- **source{d}**, which kindly sponsored the project for 6 months.

We currently do not have the resources to carry our research further. We will gladly accept contributions or sponsorships that allow us to keep working on what is our passion.

**Special thanks**: we want to thank all the people who have believed in us over the years. Their patience, understanding, and support made it possible for this project to evolve to this point.

## Bibliography

- [1] Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard L. Lewis, Xiaoshi Wang. *Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning*. NIPS, 2014.

- [2] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba. *OpenAI Gym*. arXiv:1606.01540, 2016.

- [3] Marc G. Bellemare, Will Dabney, Rémi Munos. *A Distributional Perspective on Reinforcement Learning*. arXiv:1707.06887, 2017.

- [4] Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Matteo Hessel, Ian Osband, Alex Graves, Vlad Mnih, Remi Munos, Demis Hassabis, Olivier Pietquin, Charles Blundell, Shane Legg. *Noisy Networks for Exploration*. arXiv:1706.10295, 2018.

- [5] Volodymyr Mnih et al. *Human-level control through deep reinforcement learning*. doi:10.1038/nature14236, 2015.

- [6] Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y. Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, Marcin Andrychowicz. *Parameter Space Noise for Exploration*. arXiv:1706.01905, 2017.

- [7] Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever. *Evolution Strategies as a Scalable Alternative to Reinforcement Learning*. arXiv:1703.03864, 2017.

- [8] *ATARI VCS/2600 Scoreboard*. Atari Compendium, 2018.

- [9] Sergio Hernández, Guillem Duran, José M. Amigó. *General Algorithmic Search*. arXiv:1705.08691, 2017.

- [10] Alexander Wissner-Gross. *Causal Entropic Forces*. Physical Review Letters, 2013.

- [11] Shane Legg. *Machine Super Intelligence*. Doctoral dissertation, 2008.

- [12] Daniel John Foley. *Model-Based Reinforcement Learning in Atari 2600 Games*. Thesis, 2017.

- [13] Justin Fu, Irving Hsu. *Model-Based Reinforcement Learning for Playing Atari Games*. Stanford report, 2016.

- [14] Xiaoxiao Guo et al. *Deep Learning for Real-Time Atari Game Play Using Offline MCTS Planning*. NIPS, 2014.

- [15] Marc G. Bellemare, Yavar Naddaf, Joel Veness, Michael Bowling. *The Arcade Learning Environment: An Evaluation Platform for General Agents*. arXiv:1207.4708, 2013.

- [16] Alexander Shleyfman, Alexander Tuisov, Carmel Domshlak. *Blind Search for Atari-Like Online Planning Revisited*. IJCAI-16, 2016.

- [17] Nir Lipovetzky, Miquel Ramirez, Hector Geffner. *Classical Planning with Simulators: Results on the Atari Video Games*. IJCAI-15, 2015.

- [18] Wilmer Bandres, Blai Bonet, Hector Geffner. *Planning with Pixels in (Almost) Real Time*. arXiv:1801.03354, 2018.