In this blog post I am going to explain one possible way to implement a deep learning model that plays a video game. For this purpose I used the following:
- an N64 Nintendo emulator, which can be found here,
- a Mario Kart 64 ROM, which can also be found on the internet,
- CNTK – the Microsoft Cognitive Toolkit,
- the .NET Framework and C#.
The idea behind this machine learning project is to capture images together with the pressed action keys while you play Mario Kart. The captured images are then transformed into the features of a training data set, and the action keys into one-hot label vectors. Since we need to capture images, the emulator window must keep a fixed location and size, both while recording the game play and while the trained model plays the game. The following image shows the N64 emulator graphics configuration settings.
The N64 emulator is also positioned in the top-left corner of the screen, so it is easier to capture the images.
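With the emulator anchored to the top-left corner, grabbing a frame reduces to copying a fixed screen rectangle. The snippet below is a minimal sketch using `System.Drawing`, not the post's original code; the capture offsets and size are assumptions you would adjust to your own emulator window:

```csharp
using System.Drawing;

public static class ScreenGrabber
{
    // Assumed capture region: the emulator client area near the
    // top-left corner of the screen (adjust to your window).
    public static Rectangle CaptureRegion => new Rectangle(0, 50, 640, 480);

    public static Bitmap CaptureFrame()
    {
        var region = CaptureRegion;
        var frame = new Bitmap(region.Width, region.Height);
        using (var g = Graphics.FromImage(frame))
        {
            // Copy the fixed emulator rectangle from the screen into the bitmap.
            g.CopyFromScreen(region.X, region.Y, 0, 0, region.Size);
        }
        return frame;
    }
}
```

Because the region is hard-coded, the emulator must not be moved or resized between recording and playing.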
Data collection for the training data set
During image capture the game is played as you would play it normally; no special agent or platform is required.
The capture of images from a specific position on the screen is implemented in .NET and C#, and the keys pressed during game play are recorded as well. To record the key presses, the code found here was modified and used.
The following image shows the N64 emulator playing Mario Kart (1), the window which captures and transforms the image (2), and the application which collects the images and key-press actions and generates the training data set file (3).
The data is generated in the following way:
- each image is captured, resized to 100×74 pixels and gray-scaled before being transformed and persisted to the training data set file,
- before the image is persisted, the currently pressed action key is recorded and attached to the image.
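The resize-and-grayscale step can be sketched as follows. This is an illustration rather than the project's actual code, and the standard luminosity weights are an assumption; any grayscale formula works as long as recording and playing use the same one:

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;

public static class FramePreprocessor
{
    // Standard luminosity weights (an assumption, not taken from the post).
    public static byte ToGray(byte r, byte g, byte b) =>
        (byte)(0.299 * r + 0.587 * g + 0.114 * b);

    // Resize the captured frame to 100x74 and flatten it into 7400 gray values.
    public static byte[] ToFeatures(Bitmap frame)
    {
        using (var small = new Bitmap(100, 74))
        {
            using (var g = Graphics.FromImage(small))
            {
                g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                g.DrawImage(frame, 0, 0, 100, 74);
            }
            var features = new byte[100 * 74];
            for (int y = 0; y < 74; y++)
                for (int x = 0; x < 100; x++)
                {
                    Color c = small.GetPixel(x, y);
                    features[y * 100 + x] = ToGray(c.R, c.G, c.B);
                }
            return features;
        }
    }
}
```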
The training data is persisted in the CNTK text format, which consists of:
- |label – a 5-component one-hot vector indicating the action: Forward, Brake, Forward-Left, Forward-Right and None (e.g. 1 0 0 0 0 for Forward),
- |features – 100×74 = 7400 numbers which represent the pixels of the image.
The following data sample shows how the training data set is persisted in the txt file:
|label 1 0 0 0 0 |features 202 202 202 202 202 202 204 189 234 209 199...
|label 0 1 0 0 0 |features 201 201 201 201 201 201 201 201 203 18...
|label 0 0 1 0 0 |features 199 199 199 199 199 199 199 199 199 19...
|label 0 0 0 1 0 |features 199 199 199 199 199 199 199 199 199 19...
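A line in this format can be produced with a small helper like the one below; this is a sketch, not the project's actual code, and `BuildCtfLine` is a hypothetical name:

```csharp
using System.Text;

public static class CtfWriter
{
    // Build one CNTK text-format line: a one-hot |label followed by the
    // |features pixel values.
    // labelIndex: 0=Forward, 1=Brake, 2=Forward-Left, 3=Forward-Right, 4=None.
    public static string BuildCtfLine(int labelIndex, byte[] features, int labelDim = 5)
    {
        var sb = new StringBuilder("|label");
        for (int i = 0; i < labelDim; i++)
            sb.Append(i == labelIndex ? " 1" : " 0");
        sb.Append(" |features");
        foreach (var pixel in features)
            sb.Append(' ').Append(pixel);
        return sb.ToString();
    }
}
```

One such line is written per captured frame, so the file grows linearly with play time.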
Since my training data set is more than 300 MB in size, I provide just a few MB-sized sample files, but you can generate a file as big as you wish just by playing the game and running the following code from the Program.cs file:
Training the model to play the game
Once we have generated the data, we can move to the next step: training a model to play the game. CNTK is used for the training. Since we are playing a game, where the previous sequence determines the next one, an LSTM RNN is used. More information about CNTK and LSTM can be found in my previous posts. In my case I collected nearly 15,000 images during several rounds of playing the same level and route. For a more accurate model, many more images should be collected, close to 100,000. The model is trained in one hour, with 500,000 iterations. The source code of the whole project can be found on the GitHub page (http://github.com/bhrnjica/LSTMBotGame).
By running the following code, the training process is started with the provided training data:
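The actual training code lives in the repository linked above; as an illustration only, a training loop over the generated text-format file might look like the sketch below with the CNTK C# API. The file name, minibatch size, learning rate, and the already-built LSTM network passed in as `model` are all assumptions:

```csharp
using System.Collections.Generic;
using CNTK;

public static class Trainer64
{
    public static void Train(Function model, Variable feature, Variable label,
                             DeviceDescriptor device)
    {
        // Read the |label / |features pairs from the CNTK text-format file
        // ("mariokart.txt" is an assumed name).
        var source = MinibatchSource.TextFormatMinibatchSource(
            "mariokart.txt",
            new List<StreamConfiguration>
            {
                new StreamConfiguration("features", 100 * 74),
                new StreamConfiguration("label", 5)
            },
            MinibatchSource.InfinitelyRepeat);

        var featureStream = source.StreamInfo("features");
        var labelStream = source.StreamInfo("label");

        // Cross-entropy loss and classification error over the 5 action classes.
        var loss = CNTKLib.CrossEntropyWithSoftmax(model, label);
        var error = CNTKLib.ClassificationError(model, label);
        var learner = Learner.SGDLearner(
            model.Parameters(), new TrainingParameterScheduleDouble(0.01, 1));
        var trainer = Trainer.CreateTrainer(model, loss, error,
            new List<Learner> { learner });

        for (int i = 0; i < 500000; i++)
        {
            var data = source.GetNextMinibatch(64, device);
            trainer.TrainMinibatch(
                new Dictionary<Variable, MinibatchData>
                {
                    { feature, data[featureStream] },
                    { label, data[labelStream] }
                }, device);
        }
        model.Save("mariokart.model"); // assumed output file name
    }
}
```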
Playing the game with the CNTK model
Once the model is trained, we move to the next step: playing the game. The emulator must be positioned at the same location and with the same size as during recording. Once the trained model has been created in the training folder, playing the game can be achieved by running:
var dev = DeviceDescriptor.CPUDevice;
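Starting from that device handle, a play loop would load the trained model, feed each captured frame through it, and press the key with the highest score. The following is a hedged sketch using the CNTK C# evaluation API; the model file name and the frame/key helpers are hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;
using CNTK;

public static class Player
{
    public static void Play()
    {
        var dev = DeviceDescriptor.CPUDevice;
        // "mariokart.model" is an assumed file name for the trained model.
        var model = Function.Load("mariokart.model", dev);
        var inputVar = model.Arguments[0];
        var outputVar = model.Output;

        while (true)
        {
            // 7400 grayscale pixels of the current emulator frame.
            float[] pixels = GetFrameFeatures();

            var inputVal = Value.CreateBatch<float>(inputVar.Shape, pixels, dev);
            var inputs = new Dictionary<Variable, Value> { { inputVar, inputVal } };
            var outputs = new Dictionary<Variable, Value> { { outputVar, null } };
            model.Evaluate(inputs, outputs, dev);

            // Pick the action with the highest score: 0=Forward, 1=Brake,
            // 2=Forward-Left, 3=Forward-Right, 4=None.
            var scores = outputs[outputVar].GetDenseData<float>(outputVar)[0];
            int action = scores.IndexOf(scores.Max());
            SendActionKey(action);
        }
    }

    // Hypothetical helpers: capture + preprocess a frame, and press a key.
    static float[] GetFrameFeatures() => new float[100 * 74]; // stub
    static void SendActionKey(int action) { /* e.g. keybd_event via P/Invoke */ }
}
```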
You can see how it works in my case in this YouTube video: