Implementing MCTS for PS

I am a university student and (aspiring) data scientist. I am working on a project to implement Monte Carlo Tree Search in order to make a bot that can efficiently search through the Pokemon game tree and hopefully achieve top-level competitive performance.

Referencing this video:

The creator of that video mentioned that they were able to get access to a dataset of 2,000,000 games in order to train on.

I would love access to this dataset if possible! :heart:

Any help or guidance on another place to ask for this data would be much appreciated.
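
For context, the core loop I'm planning to implement is vanilla UCT. Here's a minimal sketch on a toy counting game standing in for a battle state (the game and all names here are invented for illustration; the real version would wrap the Showdown simulator):

```python
import math, random

# Toy game: players alternately add 1 or 2 to a running total;
# whoever brings the total to exactly 10 wins.
TARGET = 10

def legal_moves(total):
    return [m for m in (1, 2) if total + m <= TARGET]

class Node:
    def __init__(self, total, player, parent=None, move=None):
        self.total, self.player = total, player   # player to move here
        self.parent, self.move = parent, move
        self.children = []
        self.untried = legal_moves(total)
        self.visits = 0
        self.wins = 0.0   # wins for the player who moved INTO this node

    def uct_child(self, c=1.4):
        # Standard UCT: exploitation (win rate) + exploration bonus
        return max(self.children, key=lambda ch:
                   ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(total, player):
    # Random playout to the end; returns the winning player
    while total < TARGET:
        total += random.choice(legal_moves(total))
        player = 1 - player
    return 1 - player  # the player who just reached TARGET wins

def mcts(root_total, root_player, iters=2000):
    root = Node(root_total, root_player)
    for _ in range(iters):
        node = root
        # 1. Selection: descend through fully expanded nodes
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one untried child
        if node.untried:
            m = node.untried.pop()
            child = Node(node.total + m, 1 - node.player,
                         parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation (or terminal result)
        winner = (rollout(node.total, node.player)
                  if node.total < TARGET else 1 - node.player)
        # 4. Backpropagation
        while node:
            node.visits += 1
            if winner != node.player:  # win for the player who moved in
                node.wins += 1
            node = node.parent
    # Play the most-visited move
    return max(root.children, key=lambda ch: ch.visits).move
```

From a total of 8 it finds the immediate winning move (add 2) almost instantly; the hard part for Pokemon is obviously the state/action interface, not this loop.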
 
If you're looking into MCTS, then a better route might be a self-play model using deep reinforcement learning (an architecture similar to AlphaGo Zero). These models do not require any historical play data and, over time, outperform models trained on historical data. The Showdown simulator's command-line tools make this quite feasible, I think. I'm beginning work on a similar project myself. I'm an experienced data scientist, but software engineering is not my forte, so there are a lot of pieces to put together to make it happen.
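
As I understand it, the key change from vanilla MCTS is swapping UCT for AlphaGo Zero's PUCT selection rule, where the policy network's prior steers exploration and the value network replaces random rollouts. A tiny sketch of just the selection score (all numbers invented for illustration):

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    """Mean action value Q plus a prior-weighted exploration bonus.

    q             : mean value of this action so far
    prior         : policy network's probability for this action
    parent_visits : visit count of the parent node
    child_visits  : visit count of this action
    """
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

# Example: an unvisited move with a strong prior can outrank a
# visited move with a mediocre value estimate.
strong_prior = puct_score(q=0.0, prior=0.6, parent_visits=100, child_visits=0)
weak_value = puct_score(q=0.1, prior=0.1, parent_visits=100, child_visits=10)
```

Here `strong_prior` comes out to 9.0 versus roughly 1.46 for `weak_value`, which is how the network's prior focuses search without any historical data.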
 
So I have done research into implementing an AlphaGo Zero or even MuZero-like algorithm, and I was under the impression that it wouldn't work because of Pokémon's imperfect information.

Of course, you could just pretend it's a perfect-information game by filling in the gaps with domain knowledge and usage stats. One approach I was considering was simply making "unknown" a valid value for each parameter of each Pokémon in a party.
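
To make that concrete, here's a tiny sketch of what I mean, with a made-up three-species dex (a real encoder would cover the full dex plus moves, items, EVs, etc.):

```python
# Encode each opposing party slot as a one-hot over species, with an
# extra UNKNOWN category for slots that haven't been revealed yet.
SPECIES = ["pikachu", "garchomp", "ferrothorn"]  # toy dex
UNKNOWN = len(SPECIES)                           # index for "not revealed"
N_CATS = len(SPECIES) + 1

def encode_slot(species):
    """One-hot encode a party slot; None means not yet revealed."""
    idx = UNKNOWN if species is None else SPECIES.index(species)
    vec = [0.0] * N_CATS
    vec[idx] = 1.0
    return vec

def encode_party(party):
    # Flat feature vector: 6 slots x (|dex| + 1) categories
    return [x for slot in party for x in encode_slot(slot)]

# Only the lead has been revealed so far
party = ["garchomp", None, None, None, None, None]
features = encode_party(party)
```

This way the network sees "unknown" as just another input pattern instead of the state being malformed, and the encoding fills in naturally as information is revealed during the battle.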

Now, the answer to this could be using ReBeL instead. ReBeL takes the missing information into account by representing everything as an information state, but it takes significantly more compute to implement, which is a limitation discussed in the paper. This is because of how many game states it needs to create and evaluate, and that's just for the CFR part, not even the evaluation or policy network. Given my lack of experience (and funding) in data science, I am not sure where I could acquire such a large amount of compute, but I am open to suggestions.
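
For anyone curious, the regret-matching update at the heart of CFR is itself simple to write down; the compute cost comes from running it over an enormous number of information states. A self-contained toy on rock-paper-scissors, whose average strategy converges to the uniform Nash equilibrium:

```python
import random

def strategy_from_regret(regret):
    # Regret matching: play in proportion to positive regret
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    n = len(regret)
    return [p / total for p in pos] if total > 0 else [1.0 / n] * n

def train_rps(iters=20000, seed=0):
    random.seed(seed)
    # Row player's payoff for rock/paper/scissors (symmetric zero-sum)
    payoff = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
    regret = [[0.0] * 3, [0.0] * 3]      # one regret vector per player
    strat_sum = [[0.0] * 3, [0.0] * 3]   # running sum of strategies
    for _ in range(iters):
        strats = [strategy_from_regret(r) for r in regret]
        acts = [random.choices(range(3), weights=s)[0] for s in strats]
        for p in range(2):
            me, opp = acts[p], acts[1 - p]
            for alt in range(3):
                # Counterfactual regret: gain from switching to alt
                regret[p][alt] += payoff[alt][opp] - payoff[me][opp]
            for a in range(3):
                strat_sum[p][a] += strats[p][a]
    # Average strategy approaches the Nash equilibrium (1/3, 1/3, 1/3)
    return [[s / iters for s in ss] for ss in strat_sum]
```

This is one information state with three actions; a Pokemon battle has astronomically more of both, which is where the compute requirement comes from.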

Also, melondonkey, software engineering is my forte, with data science being where I am lacking, so if you wanted to work together on a project I would be open to it.
 
Yeah, I wouldn't expect it to work out of the box with the same NN architectures, but I think the key insight of self-play combined with deep learning will be the only possible path to a world-champion Pokemon AI. I can't find the link now, but I'm pretty sure David Silver said in his Deep RL course on YouTube that imperfect information wasn't necessarily a problem; for example, it has been overcome by poker AIs. I think it's just going to require a very specific architecture. There are probably ways to build very good bots from more traditional methods, but it seems like a herculean engineering effort that I'd be afraid to dive into.

As far as collab goes, I'm down for some, but I can't be a good-faith technical partner for hard deliverables as my schedule is just too busy and sporadic. I'm also only interested in VGC, which adds a lot of complexity to the action space. Right now I'm working on learning how to use poke-env, and until I learn some of the basic tools I probably won't be much use. I also have a Pokemon blog for other kinds of analyses, so if you're interested in that kind of thing I would love to have guest contributors: https://pokemon-data-analysis.netlify.app. The blog is git-backed using distill and rmarkdown, so others could write for it in theory: pkmn-blog
 
Just read through a good bit of your blog and you do some really awesome data analysis on there!

ReBeL's primary implementation is a poker AI, which does some clever transformations on the information states to turn the game into effectively a perfect-information game, and then takes an AlphaGo Zero-like approach on the result. Just another approach to look into.
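
A toy version of the information-state idea, with made-up team pools and probabilities: keep a belief (a probability distribution) over the opponent's hidden team and update it by Bayes' rule as their moves reveal information. The real thing operates on public belief states over all private information, but the update has this shape:

```python
# Three hypothetical opposing team archetypes, uniform prior
TEAMS = ["team_rain", "team_sand", "team_stall"]
prior = {t: 1 / 3 for t in TEAMS}

# P(observed lead | team), numbers invented for illustration:
# a rain team almost always leads with its weather setter
likelihood = {"team_rain": 0.9, "team_sand": 0.05, "team_stall": 0.05}

def bayes_update(belief, likelihood):
    """Posterior over teams after one observation."""
    unnorm = {t: belief[t] * likelihood[t] for t in belief}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

posterior = bayes_update(prior, likelihood)
```

With a uniform prior, the posterior puts 0.9 on the rain team after that one observation; searching over distributions like this, instead of over single guessed states, is what makes the approach sound but expensive.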
 
Hey! I'm a grad student at Carnegie Mellon interested in developing AI for Pokemon as well. spicy, were you able to get access to a dataset by emailing the staff? melondonkey, I really love your blog and the information you've given in this post. I'm interested in using self-play and deep learning (RL and supervised together, maybe?), just like you mentioned, to try my hand at creating a world-champion Pokemon AI.
 
