Using CNTK and C# to train Mario to drive Kart


Introduction

In this blog post I am going to explain one possible way to implement deep learning to play a video game. For this purpose I used the following:

  1. an N64 Nintendo emulator, which can be found on the internet,
  2. a Mario Kart 64 ROM, which can be found on the internet as well,
  3. CNTK – the Microsoft Cognitive Toolkit,
  4. the .NET Framework and C#.

The idea behind this machine learning project is to capture images together with the corresponding action keys while you play the Mario Kart game. The captured images are then transformed into the features of a training data set, and the action keys into one-hot label vectors respectively. Since we need to capture images, the emulator should be positioned at a fixed location and resized to a fixed size during data collection, testing, and game play. The following image shows the N64 emulator graphics configuration settings.

(Image: N64 emulator graphics configuration settings)

The N64 emulator is also positioned in the top-left corner of the screen, so that it is easier to capture the images.
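As an illustration, a minimal sketch of capturing a fixed screen region in C# could look like the following (the 640×480 region size is an assumption for illustration; the actual project may use different values):

using System.Drawing;
using System.Drawing.Imaging;

public static class ScreenCapture
{
    // Captures a fixed region of the screen where the emulator window sits.
    // The top-left origin and the 640x480 size are assumptions.
    public static Bitmap CaptureEmulatorFrame()
    {
        var region = new Rectangle(0, 0, 640, 480);
        var frame = new Bitmap(region.Width, region.Height, PixelFormat.Format24bppRgb);
        using (var g = Graphics.FromImage(frame))
        {
            // Copy the pixels directly from the screen into the bitmap.
            g.CopyFromScreen(region.Location, Point.Empty, region.Size);
        }
        return frame;
    }
}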

Data collection for training data set

During image capture, the game is played just as you would normally play it. No special agent or platform is required.

In .NET and C#, image capture from a specific position of the screen is implemented, and the keys pressed during game play are recorded. In order to record key presses, code found on the internet was modified and used.
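For illustration, key presses could be polled with the Win32 GetAsyncKeyState API, as in the following sketch (the mapping of arrow keys to the game actions is an assumption):

using System.Runtime.InteropServices;

public static class KeyRecorder
{
    [DllImport("user32.dll")]
    private static extern short GetAsyncKeyState(int vKey);

    private const int VK_LEFT = 0x25, VK_UP = 0x26, VK_RIGHT = 0x27, VK_DOWN = 0x28;

    // Returns a 5-component one-hot vector:
    // Forward, Brake, Forward-Left, Forward-Right, None.
    public static int[] CurrentActionVector()
    {
        bool up = (GetAsyncKeyState(VK_UP) & 0x8000) != 0;
        bool down = (GetAsyncKeyState(VK_DOWN) & 0x8000) != 0;
        bool left = (GetAsyncKeyState(VK_LEFT) & 0x8000) != 0;
        bool right = (GetAsyncKeyState(VK_RIGHT) & 0x8000) != 0;

        if (up && left) return new[] { 0, 0, 1, 0, 0 };
        if (up && right) return new[] { 0, 0, 0, 1, 0 };
        if (up) return new[] { 1, 0, 0, 0, 0 };
        if (down) return new[] { 0, 1, 0, 0, 0 };
        return new[] { 0, 0, 0, 0, 1 };
    }
}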

The following image shows the position of the N64 emulator with the Mario Kart game (1), the window which shows the captured and transformed images (2), and the .NET console application with the implementation (3).

(Image: the N64 emulator running Mario Kart (1), the captured-image preview window (2), and the .NET console application (3))

The data is generated in the following way:

  • each image is captured, resized to 100×74 pixels and gray-scaled before being transformed and persisted to the training data set file (a sketch of this step follows below);
  • before the image is persisted, the currently pressed action key is recorded and attached to the image as its label.
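A minimal sketch of this resize and gray-scale step, assuming System.Drawing is used (the actual implementation in the project may differ):

using System.Drawing;

public static class ImageTransform
{
    // Resizes a captured frame to 100x74 and converts it to gray-scale
    // pixel values in the 0-255 range, ready to be written as |features.
    public static byte[] ToFeatures(Bitmap frame)
    {
        using (var small = new Bitmap(frame, new Size(100, 74)))
        {
            var pixels = new byte[100 * 74];
            int i = 0;
            for (int y = 0; y < small.Height; y++)
                for (int x = 0; x < small.Width; x++)
                {
                    Color c = small.GetPixel(x, y);
                    // Standard luminance formula for gray-scaling.
                    pixels[i++] = (byte)(0.299 * c.R + 0.587 * c.G + 0.114 * c.B);
                }
            return pixels;
        }
    }
}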

The training data is persisted in the CNTK text format, which consists of:

  1. |label – a 5-component one-hot vector indicating the action: Forward, Brake, Forward-Left, Forward-Right and None (e.g. 1 0 0 0 0 for Forward),
  2. |features – 100×74 = 7,400 numbers which represent the pixels of the image.

The following sample shows how the training data set is persisted in the txt file:

|label 1 0 0 0 0 |features 202 202 202 202 202 202 204 189 234 209 199...
|label 0 1 0 0 0 |features 201 201 201 201 201 201 201 201 203 18...
|label 0 0 1 0 0 |features 199 199 199 199 199 199 199 199 199 19...
|label 0 0 0 1 0 |features 199 199 199 199 199 199 199 199 199 19...
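For illustration, one such line could be written with a small helper like this (a sketch, assuming the ToFeatures and CurrentActionVector helpers shown above):

using System.IO;

public static class TrainingFileWriter
{
    // Appends one training sample in CNTK text format:
    // |label <5 one-hot values> |features <7400 gray pixel values>
    public static void AppendSample(StreamWriter writer, int[] label, byte[] pixels)
    {
        writer.Write("|label " + string.Join(" ", label));
        writer.WriteLine(" |features " + string.Join(" ", pixels));
    }
}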

Since my training data set is more than 300 MB in size, I have provided just a few MB-sized sample files, but you can generate a file as big as you wish just by playing the game and running the following code from the Program.cs file:

await GenerateData.Start();

Model training to play the game

Once we have generated the data, we can move to the next step: training an RNN model to play the game. The model is trained with CNTK. Since we play a game in which the previous sequence determines the next sequence, an LSTM RNN is used. More information about CNTK and LSTM can be found in my previous posts.
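Roughly, a CNTK C# training loop over the text-format data set could be sketched as follows (the file name, stream names, minibatch size and learning rate are assumptions, and the simple dense stand-in network only marks where the actual LSTM construction goes):

using System.Collections.Generic;
using CNTK;

public static class TrainSketch
{
    // Stand-in for the actual LSTM network construction (hypothetical);
    // a single dense layer is used here just so the sketch is complete.
    static Function CreateModelStandIn(Variable input, int outputDim, DeviceDescriptor device)
    {
        var w = new Parameter(new int[] { outputDim, 100 * 74 }, DataType.Float,
            CNTKLib.GlorotUniformInitializer(), device);
        var b = new Parameter(new int[] { outputDim }, DataType.Float, 0.0, device);
        return CNTKLib.Plus(CNTKLib.Times(w, input), b);
    }

    public static void Train(DeviceDescriptor device)
    {
        // Reader over the |label / |features text file produced earlier.
        var streams = new List<StreamConfiguration>
        {
            new StreamConfiguration("features", 100 * 74),
            new StreamConfiguration("label", 5)
        };
        var source = MinibatchSource.TextFormatMinibatchSource(
            "mario_kart_train.txt", streams, MinibatchSource.InfinitelyRepeat);

        var input = Variable.InputVariable(new int[] { 100 * 74 }, DataType.Float);
        var target = Variable.InputVariable(new int[] { 5 }, DataType.Float);
        Function model = CreateModelStandIn(input, 5, device);

        var loss = CNTKLib.CrossEntropyWithSoftmax(model, target);
        var error = CNTKLib.ClassificationError(model, target);
        var learner = Learner.SGDLearner(model.Parameters(),
            new TrainingParameterScheduleDouble(0.01));
        var trainer = Trainer.CreateTrainer(model, loss, error,
            new List<Learner> { learner });

        var featureStream = source.StreamInfo("features");
        var labelStream = source.StreamInfo("label");

        for (int i = 0; i < 500000; i++)
        {
            var mb = source.GetNextMinibatch(64, device);
            trainer.TrainMinibatch(new Dictionary<Variable, MinibatchData>
            {
                { input, mb[featureStream] },
                { target, mb[labelStream] }
            }, device);
        }
        model.Save("mario_kart_modelv1");
    }
}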

In my case I collected nearly 15,000 images during several rounds of playing the same level and route. For a more accurate model, many more images should be collected, around 100,000. The model is trained in about one hour, with 500,000 iterations. The source code of the whole project can be found on the project page.

By running the following code, the training process is started with the provided training data:

CNTKDeepNN.Train(DeviceDescriptor.UseDefaultDevice());

Playing the game with CNTK model

Once we have trained the model, we move to the next step: playing the game.

The emulator should be positioned at the same location and with the same size as while generating the training data. Once the model is trained and saved, playing the game can be achieved by running:

var dev = DeviceDescriptor.UseDefaultDevice();
MarioKartPlay.LoadModel("../../../../training/mario_kart_modelv1", dev);
MarioKartPlay.PlayGame(dev);
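Internally, playing the game amounts to repeatedly capturing a frame, evaluating the model on it, and sending the predicted key press to the emulator. A rough sketch of the evaluation step follows (assuming the helpers shown earlier; the actual PlayGame implementation may differ):

using System.Collections.Generic;
using System.Linq;
using CNTK;

public static class PlaySketch
{
    // Evaluates one captured frame and returns the index of the predicted
    // action (0=Forward, 1=Brake, 2=Forward-Left, 3=Forward-Right, 4=None).
    public static int PredictAction(Function model, byte[] pixels, DeviceDescriptor device)
    {
        Variable input = model.Arguments[0];
        Variable output = model.Output;

        var features = pixels.Select(p => (float)p).ToList();
        var inputValue = Value.CreateBatch<float>(input.Shape, features, device);

        var inputs = new Dictionary<Variable, Value> { { input, inputValue } };
        var outputs = new Dictionary<Variable, Value> { { output, null } };
        model.Evaluate(inputs, outputs, device);

        // Pick the action with the highest score.
        var scores = outputs[output].GetDenseData<float>(output)[0];
        return scores.IndexOf(scores.Max());
    }
}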

You can see how it looks in my case in the accompanying YouTube video.

Step by step CNTK Object Detection on Custom Dataset with Python


Recently, I was playing with the CNTK object detection API, and produced a very interesting model which can recognize the Nokia3310 mobile phone. As you probably already know, the Nokia3310 is a legendary mobile phone which was popular 15 years ago and was recently re-launched by Nokia.

In this blog post I will provide step-by-step instructions on how to:

  • prepare images for training,
  • generate training data for the selected images by using the VoTT tool,
  • prepare the Python code for object detection using the FasterRCNN algorithm implemented in CNTK,
  • test a custom image in order to detect a Nokia3310 in it.

Preparing images for model training

Finding appropriate images for our model is very easy. Just go to google.com and type “Nokia3310”, and boom, there are plenty of images.

Find at least 20 images and put them into the Nokia3310 image folder. Once we have collected enough images for the model, we can move to the next step.

Generating data from the image data set using the VoTT tool

In order to train an object detection model using the FasterRCNN algorithm, we have to provide three kinds of data, separated into three different files:

  1. class_map file – contains the list of object classes which the model should recognize in an image,
  2. train_image file – contains the list of image file paths,
  3. train_roi file – contains the “region of interest” data: for each tagged object, 4 numbers which represent the left, top, right and bottom coordinates of the rectangle around the object.

This seems like a lot of work for simple object detection, but fortunately there is a tool which can generate all the data for us. It is called VoTT: Visual Object Tagging Tool, and it can be found on GitHub.

Generating image data with VoTT

Here we will explain in detail how to generate the image data by using the VoTT tool.

1. Open the VoTT tool and, from the File menu, select the folder with the images we previously collected.

2. Enter “nokia3310” in the Labels edit box and click the Continue button. In case we have more than one object class, all the labels are entered here.

3. Then, for each image, draw a rectangle around each object which represents a Nokia3310.

4. Once you finish tagging one image, press Next, and do the same for all the selected images.

5. Once the tagging process is finished, the export action can be performed.

6. With the Export option, data is generated for each rectangle we made, and two files are generated for each image in the data set. Once the tagging process is completed, the VoTT tool also generates three folders:

a) negative – contains images which have no tagged rectangles (no Nokia3310 in the image),

b) positive – contains approximately 70% of all images on which we tagged a Nokia3310 object; this folder will be used for training the model,

c) testImages – contains approximately 30% of all images on which we tagged a Nokia3310 object; this folder will be used for evaluating and testing the model.

VoTT classifies all images into these three folders: images with no tags are moved to the negative folder, and all other images are split between the positive and testImages folders.

From each image two files are generated:

[imagename].bboxes.labels.tsv – contains all the labels tagged in the image.

[imagename].bboxes.tsv – contains the rectangle coordinates of all tags in the image.

Processing VoTT-generated data into CNTK training and testing dataset files

Once we have the VoTT-generated data, we need to transform it into the CNTK format. First we will generate the class_map file.txt file:

7. Create a new “class_map file.txt” file and put the following text into it:

__background__	0
Nokia3310	1

As can be seen, there is only one class which we want to detect, and it is Nokia3310 (the __background__ class is a reserved tag which is added by default and cannot be removed). Now we need to generate the second file:
8. Create a new “train_image_file.txt” file and put text similar to this into it:

0 positive/img01.jpg 0
1 positive/img05.jpg 0
2 positive/img10.jpg 0
...

The content of the file is a list of all images placed in the positive folder, with an ID on the left side and a zero on the right side, separated by tabs. The image paths should be relative.
9. Create a new “train_roi_file.txt” file and put data similar to this into it:

0 |roiAndLabel 10	418	340	520 1
1 |roiAndLabel 631	75	731	298 1
2 |roiAndLabel 47	12	222	364 1
3 |roiAndLabel 137	67	186	184 1	188	69	234	180 1
...

As can be seen, the first four numbers are the rectangle coordinates, and the number which follows them indicates the class value. Since we have only one class, a 1 always comes after the 4 numbers. In case an image contains more than one rectangle, which is the case in line 3, every group of four numbers is followed by its class value.

This is the procedure for creating the three files needed to run CNTK object detection training. For the testing data we need image and ROI files as well. The whole data set and the corresponding files can be found on the GitHub page.
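Instead of writing the train_image and train_roi files by hand, they can also be generated from the VoTT output. A hypothetical sketch, in C# to keep with the language of the first part of this post (the .bboxes.tsv naming follows the convention described above; the exact VoTT file layout should be verified):

using System.IO;
using System.Linq;

public static class VottToCntk
{
    // Builds train_image_file.txt and train_roi_file.txt from the
    // [imagename].bboxes.tsv files that VoTT writes next to each image.
    public static void BuildCntkFiles(string positiveDir)
    {
        var images = Directory.GetFiles(positiveDir, "*.jpg").OrderBy(p => p).ToArray();

        using (var imgFile = new StreamWriter("train_image_file.txt"))
        using (var roiFile = new StreamWriter("train_roi_file.txt"))
        {
            for (int id = 0; id < images.Length; id++)
            {
                string name = Path.GetFileName(images[id]);
                imgFile.WriteLine($"{id}\tpositive/{name}\t0");

                // Each line of the .bboxes.tsv file is assumed to hold the four
                // tab-separated rectangle coordinates of one tagged object.
                var rois = File.ReadAllLines(images[id] + ".bboxes.tsv")
                               .Select(line => line.Trim() + "\t1"); // class value 1 = Nokia3310
                roiFile.WriteLine($"{id} |roiAndLabel {string.Join("\t", rois)}");
            }
        }
    }
}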

Implementation of Object Detection

CNTK comes with an example of how to implement object detection. I took that source code, modified it for my case, and published it on GitHub.

10. Before downloading the source code, make sure CNTK 2.3 is installed on your machine with Anaconda 4.1.1, in an environment with Python 3.5.

11. Clone the GitHub repository and open it in Visual Studio or Visual Studio Code.

12. The first thing you should do is download the pre-trained “AlexNet” model. You can easily download it by running the download_model.py Python script placed in the PretrainedModels folder.

13. The training process is started when you run the Nokia3310_detection.py Python file. Besides the pre-trained model, no other resources are required in order to run the project. The following picture shows the main parts of the solution.

Once the training process is finished, one image is evaluated and shown in order to assess how good the model is at detecting the phone. Such an image is shown at the beginning of the blog post.

All the source code, together with the image data set, can be downloaded from GitHub.