Simply put, data normalization is a set of tasks which transform the values of any feature in a data set into a predefined number range. Usually this range is [-1,1], [0,1] or some other specific range. Data normalization plays a very important role in ML, since it can dramatically improve the training process and simplify the setting of network parameters.
There are two main types of data normalization:
– MinMax normalization – which transforms all values into the range [0,1],
– Gauss normalization or Z-score normalization, which transforms the values in such a way that the average value is zero and the standard deviation is 1.
Besides those types there are plenty of other methods which can be used. Usually those two are used when the size of the data set is known; otherwise we should use one of the other methods, like log scaling, dividing every value by some constant, etc. But why does data need to be normalized? This is an essential question in ML, and the simplest answer is: to give all features an equal influence on changing the output label. More about data normalization and scaling can be found on this .
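As a quick illustration, here is a minimal plain C# sketch of both methods applied to a single feature column. It is independent of CNTK and of the example below; the helper name and the use of System.Linq are mine, not from the original post.

using System;
using System.Linq;

//Minimal sketch (illustration only): scale one feature column with MinMax and z-score normalization
static void NormalizeColumn(double[] column)
{
    double min = column.Min(), max = column.Max();
    double mean = column.Average();
    double std = Math.Sqrt(column.Sum(v => (v - mean) * (v - mean)) / (column.Length - 1));

    //MinMax normalization: every value ends up in the [0,1] range
    var minMax = column.Select(v => (v - min) / (max - min)).ToArray();

    //Gauss (z-score) normalization: zero mean and unit standard deviation
    var zScore = column.Select(v => (v - mean) / std).ToArray();

    for (int i = 0; i < column.Length; i++)
        Console.WriteLine($"{column[i]} -> minmax={minMax[i]:F3}, zscore={zScore[i]:F3}");
}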
In this blog post we are going to implement a CNTK neural network which contains a “Normalization layer” between the input and the first hidden layer. The schematic picture of the network looks like the following image:
As can be observed, the Normalization layer is placed between the input and the first hidden layer. The Normalization layer contains the same number of neurons as the input layer and produces output with the same dimension as the input layer.
In order to implement the Normalization layer the following requirements must be met:
Before network creation, we should prepare the mean and standard deviation parameters which will be used in the Normalization layer as constants. Fortunately, CNTK has a static method on the MinibatchSource class for this purpose: “MinibatchSource.ComputeInputPerDimMeansAndInvStdDevs”. The method takes the whole training data set defined in the minibatch source and calculates the parameters.
//calculate mean and std for the minibatchsource
// prepare the training data
var d = new Dictionary<StreamInformation, Tuple<NDArrayView, NDArrayView>>();
using (var mbs = MinibatchSource.TextFormatMinibatchSource(
    trainingDataPath, streamConfig, MinibatchSource.FullDataSweep, false))
{
    d.Add(mbs.StreamInfo("feature"), new Tuple<NDArrayView, NDArrayView>(null, null));

    //compute mean and standard deviation of the population for inputs variables
    MinibatchSource.ComputeInputPerDimMeansAndInvStdDevs(mbs, d, device);
}
Now that we have the average and std values for each feature, we can create the network with a normalization layer. In this example we define a simple feed forward NN with 1 input, 1 normalization, 1 hidden and 1 output layer.
private static Function createFFModelWithNormalizationLayer(Variable feature, int hiddenDim, int outputDim,
    Tuple<NDArrayView, NDArrayView> avgStdConstants, DeviceDescriptor device)
{
    //First the parameters initialization must be performed
    var glorotInit = CNTKLib.GlorotUniformInitializer(
        CNTKLib.DefaultParamInitScale,
        CNTKLib.SentinelValueForInferParamInitRank,
        CNTKLib.SentinelValueForInferParamInitRank, 1);

    //*******Input layer is indicated as feature
    var inputLayer = feature;

    //*******Normalization layer
    var mean = new Constant(avgStdConstants.Item1, "mean");
    var std = new Constant(avgStdConstants.Item2, "std");
    var normalizedLayer = CNTKLib.PerDimMeanVarianceNormalize(inputLayer, mean, std);

    //*****hidden layer creation
    //shape of one hidden layer should be inputDim x neuronCount
    var shape = new int[] { hiddenDim, 4 };
    var weightParam = new Parameter(shape, DataType.Float, glorotInit, device, "wh");
    var biasParam = new Parameter(new NDShape(1, hiddenDim), 0, device, "bh");
    var hidLay = CNTKLib.Times(weightParam, normalizedLayer) + biasParam;
    var hidLayerAct = CNTKLib.ReLU(hidLay);

    //******Output layer creation
    //the last action is creation of the output layer
    var shapeOut = new int[] { 3, hiddenDim };
    var wParamOut = new Parameter(shapeOut, DataType.Float, glorotInit, device, "wo");
    var bParamOut = new Parameter(new NDShape(1, 3), 0, device, "bo");
    var outLay = CNTKLib.Times(wParamOut, hidLayerAct) + bParamOut;

    return outLay;
}
The whole source code for this example is listed below. The example shows how to normalize the input features of the famous Iris data set. Notice that when using this way of data normalization, we don’t need to handle normalization for the validation or testing data sets, because data normalization is part of the network model.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using CNTK;

namespace NormalizationLayerDemo
{
    class Program
    {
        static string trainingDataPath = "./data/iris_training.txt";
        static string validationDataPath = "./data/iris_validation.txt";

        static void Main(string[] args)
        {
            DeviceDescriptor device = DeviceDescriptor.UseDefaultDevice();

            //stream configuration to distinct features and labels in the file
            var streamConfig = new StreamConfiguration[]
            {
                new StreamConfiguration("feature", 4),
                new StreamConfiguration("flower", 3)
            };

            // build a NN model
            //define input and output variable and connecting to the stream configuration
            var feature = Variable.InputVariable(new NDShape(1, 4), DataType.Float, "feature");
            var label = Variable.InputVariable(new NDShape(1, 3), DataType.Float, "flower");

            //calculate mean and std for the minibatchsource
            // prepare the training data
            var d = new Dictionary<StreamInformation, Tuple<NDArrayView, NDArrayView>>();
            using (var mbs = MinibatchSource.TextFormatMinibatchSource(
                trainingDataPath, streamConfig, MinibatchSource.FullDataSweep, false))
            {
                d.Add(mbs.StreamInfo("feature"), new Tuple<NDArrayView, NDArrayView>(null, null));
                //compute mean and standard deviation of the population for inputs variables
                MinibatchSource.ComputeInputPerDimMeansAndInvStdDevs(mbs, d, device);
            }

            //Build simple Feed Forward Neural Network with normalization layer
            var ffnn_model = createFFModelWithNormalizationLayer(feature, 5, 3, d.ElementAt(0).Value, device);

            //Loss and error functions definition
            var trainingLoss = CNTKLib.CrossEntropyWithSoftmax(new Variable(ffnn_model), label, "lossFunction");
            var classError = CNTKLib.ClassificationError(new Variable(ffnn_model), label, "classificationError");

            // set learning rate for the network
            var learningRatePerSample = new TrainingParameterScheduleDouble(0.01, 1);

            //define learners for the NN model
            var ll = Learner.SGDLearner(ffnn_model.Parameters(), learningRatePerSample);

            //define trainer based on model, loss and error functions , and SGD learner
            var trainer = Trainer.CreateTrainer(ffnn_model, trainingLoss, classError, new Learner[] { ll });

            //Preparation for the iterative learning process
            // create minibatch for training
            var mbsTraining = MinibatchSource.TextFormatMinibatchSource(trainingDataPath, streamConfig,
                MinibatchSource.InfinitelyRepeat, true);

            int epoch = 1;
            //NOTE: the epoch limit and minibatch size below are assumptions; the original values were lost when the post was published
            while (epoch < 800)
            {
                var minibatchData = mbsTraining.GetNextMinibatch(65, device);

                //pass the current batch to the trainer, separated into features and label
                var arguments = new Dictionary<Variable, MinibatchData>
                {
                    { feature, minibatchData[mbsTraining.StreamInfo("feature")] },
                    { label, minibatchData[mbsTraining.StreamInfo("flower")] }
                };
                trainer.TrainMinibatch(arguments, device);

                //report the training progress at the end of each sweep through the data
                if (minibatchData.Values.Any(a => a.sweepEnd))
                {
                    reportTrainingProgress(feature, label, streamConfig, trainer, epoch, device);
                    epoch++;
                }
            }
            Console.Read();
        }

        private static void reportTrainingProgress(Variable feature, Variable label, StreamConfiguration[] streamConfig,
            Trainer trainer, int epoch, DeviceDescriptor device)
        {
            // create minibatch for training
            var mbsTrain = MinibatchSource.TextFormatMinibatchSource(trainingDataPath, streamConfig,
                MinibatchSource.FullDataSweep, false);
            var trainD = mbsTrain.GetNextMinibatch(int.MaxValue, device);
            //
            var a1 = new UnorderedMapVariableMinibatchData();
            a1.Add(feature, trainD[mbsTrain.StreamInfo("feature")]);
            a1.Add(label, trainD[mbsTrain.StreamInfo("flower")]);
            var trainEvaluation = trainer.TestMinibatch(a1);

            // create minibatch for validation
            var mbsVal = MinibatchSource.TextFormatMinibatchSource(validationDataPath, streamConfig,
                MinibatchSource.FullDataSweep, false);
            var valD = mbsVal.GetNextMinibatch(int.MaxValue, device);
            //
            var a2 = new UnorderedMapVariableMinibatchData();
            a2.Add(feature, valD[mbsVal.StreamInfo("feature")]);
            a2.Add(label, valD[mbsVal.StreamInfo("flower")]);
            var valEvaluation = trainer.TestMinibatch(a2);

            Console.WriteLine($"Epoch={epoch}, Train Error={trainEvaluation}, Validation Error={valEvaluation}");
        }

        private static Function createFFModelWithNormalizationLayer(Variable feature, int hiddenDim, int outputDim,
            Tuple<NDArrayView, NDArrayView> avgStdConstants, DeviceDescriptor device)
        {
            //First the parameters initialization must be performed
            var glorotInit = CNTKLib.GlorotUniformInitializer(
                CNTKLib.DefaultParamInitScale,
                CNTKLib.SentinelValueForInferParamInitRank,
                CNTKLib.SentinelValueForInferParamInitRank, 1);

            //*******Input layer is indicated as feature
            var inputLayer = feature;

            //*******Normalization layer
            var mean = new Constant(avgStdConstants.Item1, "mean");
            var std = new Constant(avgStdConstants.Item2, "std");
            var normalizedLayer = CNTKLib.PerDimMeanVarianceNormalize(inputLayer, mean, std);

            //*****hidden layer creation
            //shape of one hidden layer should be inputDim x neuronCount
            var shape = new int[] { hiddenDim, 4 };
            var weightParam = new Parameter(shape, DataType.Float, glorotInit, device, "wh");
            var biasParam = new Parameter(new NDShape(1, hiddenDim), 0, device, "bh");
            var hidLay = CNTKLib.Times(weightParam, normalizedLayer) + biasParam;
            var hidLayerAct = CNTKLib.ReLU(hidLay);

            //******Output layer creation
            //the last action is creation of the output layer
            var shapeOut = new int[] { 3, hiddenDim };
            var wParamOut = new Parameter(shapeOut, DataType.Float, glorotInit, device, "wo");
            var bParamOut = new Parameter(new NDShape(1, 3), 0, device, "bo");
            var outLay = CNTKLib.Times(wParamOut, hidLayerAct) + bParamOut;

            return outLay;
        }
    }
}
The output window should look like the following:
The data set files used in the example can be downloaded from , and full source code demo from .
Each CNTK compute graph is created as a set of nodes, where each node represents a numerical (mathematical) operation. The edges between nodes in the graph represent the data flow between operations. Such a representation allows CNTK to schedule computation on the underlying hardware, GPU or CPU. CNTK can dynamically analyze the graphs in order to optimize both latency and efficient use of resources. The most powerful part of this is the fact that CNTK can calculate the derivative of any constructed set of operations, which can be used for efficient learning of the network parameters. The following image shows the core architecture of CNTK.
On the other hand, any operation can be executed on CPU or GPU with minimal code changes. In fact, we can implement a method which automatically uses GPU computation if it is available. CNTK is the first .NET library which enables .NET developers to develop GPU-aware .NET applications.
What this exactly means is that with this powerful library you can run complex math computations directly on the GPU in .NET using C#, which is currently not possible with the standard .NET library.
For this blog post I will show how to calculate some basic statistics operations on a data set.
Say we have a data set with 4 columns (features) and 20 rows (samples). The C# implementation of this 2D array is shown in the following code snippet:
static float[][] mData = new float[][]
{
    new float[] { 5.1f, 3.5f, 1.4f, 0.2f},
    new float[] { 4.9f, 3.0f, 1.4f, 0.2f},
    new float[] { 4.7f, 3.2f, 1.3f, 0.2f},
    new float[] { 4.6f, 3.1f, 1.5f, 0.2f},
    new float[] { 6.9f, 3.1f, 4.9f, 1.5f},
    new float[] { 5.5f, 2.3f, 4.0f, 1.3f},
    new float[] { 6.5f, 2.8f, 4.6f, 1.5f},
    new float[] { 5.0f, 3.4f, 1.5f, 0.2f},
    new float[] { 4.4f, 2.9f, 1.4f, 0.2f},
    new float[] { 4.9f, 3.1f, 1.5f, 0.1f},
    new float[] { 5.4f, 3.7f, 1.5f, 0.2f},
    new float[] { 4.8f, 3.4f, 1.6f, 0.2f},
    new float[] { 4.8f, 3.0f, 1.4f, 0.1f},
    new float[] { 4.3f, 3.0f, 1.1f, 0.1f},
    new float[] { 6.5f, 3.0f, 5.8f, 2.2f},
    new float[] { 7.6f, 3.0f, 6.6f, 2.1f},
    new float[] { 4.9f, 2.5f, 4.5f, 1.7f},
    new float[] { 7.3f, 2.9f, 6.3f, 1.8f},
    new float[] { 5.7f, 3.8f, 1.7f, 0.3f},
    new float[] { 5.1f, 3.8f, 1.5f, 0.3f},
};
If you want to play with CNTK and math calculation you need some knowledge of calculus, as well as of vectors, matrices and tensors. Also, in CNTK every operation is performed as a matrix operation, which may simplify the calculation process for you. In the standard way, you have to deal with multidimensional arrays during the calculations. To my knowledge, there is currently no other .NET library which can perform math operations on the GPU, which constrains the .NET platform for the implementation of high performance applications.
If we want to compute the average value and the standard deviation for each column, we can do that with CNTK in a very easy way. Once we compute those values, we can use them for normalizing the data set by computing the Gauss standardization.
The Gauss standardization is calculated by the following expression:
z = (X − μ) / σ,
where X is the column value, μ is the column mean, and σ is the standard deviation of the column.
For this example we are going to perform three statistics operations, and CNTK automatically provides us with the ability to compute those values on the GPU. This is very important in case you have a data set with millions of rows, since the computation can then be performed in a few milliseconds.
Any computation process in CNTK can be achieved in several steps:
1. Read data from an external source or in-memory data.
2. Define Value and Variable objects.
3. Define the Function for the calculation.
4. Perform Evaluation of the function by passing the Variable and Value objects.
5. Retrieve the result of the calculation and show the result.
All above steps are implemented in the following implementation:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using CNTK;

namespace DataNormalizationWithCNTK
{
    class Program
    {
        static float[][] mData = new float[][]
        {
            new float[] { 5.1f, 3.5f, 1.4f, 0.2f},
            new float[] { 4.9f, 3.0f, 1.4f, 0.2f},
            new float[] { 4.7f, 3.2f, 1.3f, 0.2f},
            new float[] { 4.6f, 3.1f, 1.5f, 0.2f},
            new float[] { 6.9f, 3.1f, 4.9f, 1.5f},
            new float[] { 5.5f, 2.3f, 4.0f, 1.3f},
            new float[] { 6.5f, 2.8f, 4.6f, 1.5f},
            new float[] { 5.0f, 3.4f, 1.5f, 0.2f},
            new float[] { 4.4f, 2.9f, 1.4f, 0.2f},
            new float[] { 4.9f, 3.1f, 1.5f, 0.1f},
            new float[] { 5.4f, 3.7f, 1.5f, 0.2f},
            new float[] { 4.8f, 3.4f, 1.6f, 0.2f},
            new float[] { 4.8f, 3.0f, 1.4f, 0.1f},
            new float[] { 4.3f, 3.0f, 1.1f, 0.1f},
            new float[] { 6.5f, 3.0f, 5.8f, 2.2f},
            new float[] { 7.6f, 3.0f, 6.6f, 2.1f},
            new float[] { 4.9f, 2.5f, 4.5f, 1.7f},
            new float[] { 7.3f, 2.9f, 6.3f, 1.8f},
            new float[] { 5.7f, 3.8f, 1.7f, 0.3f},
            new float[] { 5.1f, 3.8f, 1.5f, 0.3f},
        };

        static void Main(string[] args)
        {
            //define device where the calculation will execute
            var device = DeviceDescriptor.UseDefaultDevice();

            //print data to console
            Console.WriteLine($"X1,\tX2,\tX3,\tX4");
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");
            foreach (var row in mData)
            {
                Console.WriteLine($"{row[0]},\t{row[1]},\t{row[2]},\t{row[3]}");
            }
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");

            //convert data into enumerable list
            var data = mData.ToEnumerable<IEnumerable<float>>();

            //assign the values
            var vData = Value.CreateBatchOfSequences<float>(new int[] { 4 }, data, device);
            //create variable to describe the data
            var features = Variable.InputVariable(vData.Shape, DataType.Float);

            //define mean function for the variable
            var mean = CNTKLib.ReduceMean(features, new Axis(2));//Axis(2) - calculate the mean along the third axis, which represents the 4 features

            //map variables and data
            var inputDataMap = new Dictionary<Variable, Value>() { { features, vData } };
            var meanDataMap = new Dictionary<Variable, Value>() { { mean, null } };

            //mean calculation
            mean.Evaluate(inputDataMap, meanDataMap, device);
            //get result
            var meanValues = meanDataMap[mean].GetDenseData<float>(mean);
            Console.WriteLine($"");
            Console.WriteLine($"Average values for each features x1={meanValues[0][0]},x2={meanValues[0][1]},x3={meanValues[0][2]},x4={meanValues[0][3]}");

            //Calculation of standard deviation
            var std = calculateStd(features);
            var stdDataMap = new Dictionary<Variable, Value>() { { std, null } };
            //std calculation
            std.Evaluate(inputDataMap, stdDataMap, device);
            //get result
            var stdValues = stdDataMap[std].GetDenseData<float>(std);
            Console.WriteLine($"");
            Console.WriteLine($"STD of features x1={stdValues[0][0]},x2={stdValues[0][1]},x3={stdValues[0][2]},x4={stdValues[0][3]}");

            //Once we have mean and std we can calculate standardized values for the data
            var gaussNormalization = CNTKLib.ElementDivide(CNTKLib.Minus(features, mean), std);
            var gaussDataMap = new Dictionary<Variable, Value>() { { gaussNormalization, null } };
            //normalization calculation
            gaussNormalization.Evaluate(inputDataMap, gaussDataMap, device);
            //get result
            var normValues = gaussDataMap[gaussNormalization].GetDenseData<float>(gaussNormalization);

            //print data to console
            Console.WriteLine($"-------------------------------------------");
            Console.WriteLine($"Normalized values for the above data set");
            Console.WriteLine($"");
            Console.WriteLine($"X1,\tX2,\tX3,\tX4");
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");
            var row2 = normValues[0];
            for (int j = 0; j < 80; j += 4)
            {
                Console.WriteLine($"{row2[j]},\t{row2[j + 1]},\t{row2[j + 2]},\t{row2[j + 3]}");
            }
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");
        }

        private static Function calculateStd(Variable features)
        {
            var mean = CNTKLib.ReduceMean(features, new Axis(2));
            var remainder = CNTKLib.Minus(features, mean);
            var squared = CNTKLib.Square(remainder);
            //the last dimension indicates the number of samples
            var n = new Constant(new NDShape(0), DataType.Float, features.Shape.Dimensions.Last() - 1);
            var elm = CNTKLib.ElementDivide(squared, n);
            var sum = CNTKLib.ReduceSum(elm, new Axis(2));
            var stdVal = CNTKLib.Sqrt(sum);
            return stdVal;
        }
    }

    public static class ArrayExtensions
    {
        public static IEnumerable<T> ToEnumerable<T>(this Array target)
        {
            foreach (var item in target)
                yield return (T)item;
        }
    }
}
The output for the source code above should look like:
After one year of writing and coding, I can finally announce my two big achievements, which are related to each other:
1. The fifth version of my open source project – genetic programming tool, and
2. The book:, published by IGI-Global.
Alongside the book, I was developing the GPdotNET application, which is explained in Chapter 5. Chapter 5 describes in depth all aspects of the application, with real world examples.
As can be seen, GPdotNET v5 is a completely rewritten application, with a new logo and GUI. As an introduction to the application I have prepared several videos on YouTube with a quick explanation of how to use some of the main modules in GPdotNET.
All procedures, from downloading the data set to exporting the model, can be achieved in 6 steps.
1. Step: Download the data set file from
2. Step: Open the ANNdotNET application. Press the New command, select the Project 1 tree item and rename the project into Iris Data Set.
2. Step: Select the Data command from the Model Preparation ribbon group, click the File button in the Import experimental data dialog and select the recently downloaded file. Check the Comma check box and press the Import Data button.
3. Step: Double click on Scaling for each column, and select the MinMax normalization option from the popup ComboBox list. Double click on Type for the output column, and select Category and 1:N encoding. More information on how to prepare data for ML can be found at /2018/03/01/data-preparation-tool-for-machine-learning/
4. Step: Once the data is prepared, click the Create Model command and the Model Settings panel is shown. Set up the parameters as shown in the image below and click the Run command.
5. Step: Once the model is trained, you can evaluate the model by selecting the Evaluate command. Depending on the model type (regression, binary or multi-class classification) the appropriate evaluation dialog appears. Since this is a multi-class classification model, the confusion matrix is shown, with micro and macro performance parameters.
6. Step: For further analysis you can export the model to Excel or to ONNX. You can also save the project, which can later be opened and retrained again.
Note: Currently ANNdotNET is in an alpha version, and more features will come in the near future.
Currently, ANNdotNET supports the following types of ANN:
The process of creating, training, evaluating and exporting models is provided from the GUI application and does not require knowledge of the supported programming languages. ANNdotNET is ideal for engineers who are not familiar with programming languages.
ANNdotNET is an x64 Windows desktop application running on .NET Framework 4.7.1. In order to run the application, the following requirements must be met:
– Windows 7, 8 or 10 with x64 architecture
– NET Framework 4.7.1
– CPU/GPU support.
Note: The application automatically detects GPU capability on your machine and uses it in training and evaluation; otherwise it uses the CPU.
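As a rough sketch of how such a device check can look with the CNTK C# API (this is illustrative only, not ANNdotNET's actual implementation; the helper name is made up):

using System.Linq;
using CNTK;

public static class DeviceHelper
{
    //Sketch: prefer the first available GPU device, otherwise fall back to the CPU
    public static DeviceDescriptor GetDevice()
    {
        var gpu = DeviceDescriptor.AllDevices().FirstOrDefault(d => d.Type == DeviceKind.GPU);
        return gpu ?? DeviceDescriptor.CPUDevice;
    }
}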
In order to run the application there are two possibilities:
Clone the GitHub repository of the application and open it in Visual Studio 2017.
The following three short videos quickly show how to create, train and evaluate regression, binary and multi class classification models.
2. Training and evaluating a binary classifier model. The data represents the Titanic data set downloaded from a public repository.
3. Training and evaluating multi-class classification models. The data represents the Iris data set downloaded from the same page as above.
With the latest release GPdotNET has changed a lot. First of all, the initial idea behind GPdotNET was to provide the GP method in the application. As the project grew, a lot of new implementations were included in the main project. This year I decided to make two different projects, which can be seen as the natural evolution of .
The first project remains the same as the previous version and is called . The project includes only the GP related algorithm implementation, which is developed for creating and training supervised ML problems (regression, binary and multi-class classification).
The second project uses several ANN algorithms for creating and training supervised machine learning problems. The project is called . It is a Windows Forms desktop application, very similar to GPdotNET, for creating and training ANN models.
I am very proud to announce that the new version of GPdotNET will be released as two different open source projects.
However, I wanted to bring back the previous feature since I need that option, especially when you try to replace some text in a Word equation. So in order to return the previous option to replace the highlighted text with new text, you should go to:
File->Options->Advanced
In this blog post I am going to present a simple tool which can significantly reduce the preparation time for ML. The tool simply loads the data into a GUI, and then the user can define all the necessary information. Once the data is prepared, the user can store it to files which can then be directly imported into an ML algorithm such as CNTK.
The following image shows the ML Data Preparation Tool main window.
From the image above, the data preparation can be achieved in several steps.
As can be seen, this is a straightforward workflow of data preparation.
Besides the general export options, which can be achieved by selecting different delimiter options, you can export the data set into CNTK format, which is very handy if you play with CNTK.
After the data transformations, the user needs to check the CNTK format option in the export options and press Export in order to get the CNTK training and testing files, which can be directly used in the code without any modifications.
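For illustration only (the stream names “feature” and “flower” are borrowed from the Iris example in the first part of this post and are assumptions here, not the tool's fixed defaults), a row of an exported CNTK-format training file pairs each sample's features with its one-hot encoded label:

|feature 5.1 3.5 1.4 0.2 |flower 1 0 0
|feature 6.9 3.1 4.9 1.5 |flower 0 1 0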
Some examples will be provided in the next blog post.
The project is hosted on GitHub, where the source code can be freely downloaded and used, at this location: .
In case you want only binaries, the release of version v1.0 is published here:
In this blog post I am going to explain one possible way to implement deep learning ML to play a video game. For this purpose I used the following:
The idea behind this machine learning project is to capture images together with the action keys while you play the Mario Kart game. The captured images are then transformed into the features of a training data set, and the action keys into label hot vectors, respectively. Since we need to capture images, the emulator should be positioned at a fixed location and size while playing the game, as well as while testing the algorithm that plays the game. The following image shows the N64 emulator graphics configuration settings.
Also, the N64 emulator is positioned in the top-left corner of the screen, so it is easier to capture the images.
During image capture the game is played as you would normally play it. No special agent or platform is required.
In .NET and C#, image capture from a specific position of the screen is implemented, and it is also recorded which keys are pressed during game play. In order to record key presses, the code found is modified and used.
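A minimal sketch of how such a fixed-region capture can be done with System.Drawing (the 640x480 region at the top-left corner is an assumption, not the project's exact value):

using System.Drawing;

//Sketch: grab a fixed screen rectangle where the emulator window sits
static Bitmap CaptureEmulatorFrame()
{
    var region = new Rectangle(0, 0, 640, 480);     //assumed emulator position and size
    var frame = new Bitmap(region.Width, region.Height);
    using (var g = Graphics.FromImage(frame))
        g.CopyFromScreen(region.Left, region.Top, 0, 0, region.Size);
    return frame;
}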
The following image shows the position of the N64 emulator playing the Mario Kart game (1), the window which captures and transforms the image (2), and the application which collects images and key press actions and generates the training data set into a file (3).
The data is generated in the following way:
So the training data is persisted into CNTK format, which consists of:
The following data sample shows how the training data set is persisted in the txt file:
|label 1 0 0 0 0 |features 202 202 202 202 202 202 204 189 234 209 199...
|label 0 1 0 0 0 |features 201 201 201 201 201 201 201 201 203 18...
|label 0 0 1 0 0 |features 199 199 199 199 199 199 199 199 199 19...
|label 0 0 0 1 1 |features 199 199 199 199 199 199 199 199 199 19...
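To make the format concrete, here is a minimal sketch of how one captured frame plus the pressed key could be turned into such a row (the number of actions, their one-hot ordering and the grayscale conversion are assumptions, not the project's exact code):

using System.Drawing;
using System.Text;

//Sketch: turn one captured frame and the pressed action key into a single CNTK text-format row
static string ToCntkRow(Bitmap frame, int actionIndex, int actionCount = 5)
{
    var sb = new StringBuilder("|label ");
    for (int a = 0; a < actionCount; a++)
        sb.Append(a == actionIndex ? "1 " : "0 ");      //one-hot encoded action

    sb.Append("|features ");
    for (int y = 0; y < frame.Height; y++)
        for (int x = 0; x < frame.Width; x++)
        {
            var c = frame.GetPixel(x, y);
            sb.Append((c.R + c.G + c.B) / 3).Append(' ');   //simple grayscale pixel value
        }
    return sb.ToString();
}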
Since my training data is more than 300 000 MB in size, I have provided just a few MB-sized files, but you can generate a file as big as you wish just by playing the game and running the following code from the Program.cs file:
await GenerateData.Start();
Once we generate the data, we can move to the next step: training the RNN model to play the game. For training the model, CNTK is used. Since we play a game and the previous sequence determines the next sequence in the game, an LSTM RNN is used. More information about CNTK and LSTM can be found in previous posts. In my case I collected nearly 15 000 images during several rounds of playing the same level and route. For a more accurate model many more images should be collected, around 100 000. The model was trained in one hour, with 500 000 iterations. The source code for the whole project can be found on the page. ()
By running the following code, the training process is started with provided training data:
CNTKDeepNN.Train(DeviceDescriptor.GPUDevice(0));
Once we have trained the model, we move to the next step: playing the game. The emulator should be positioned at the same position and with the same size in order to play the game. Once the model is trained and created in the training folder, playing the game can be achieved by running:
var dev = DeviceDescriptor.CPUDevice;
MarioKartPlay.LoadModel("../../../../training/mario_kart_modelv1", dev);
MarioKartPlay.PlayGame(dev);
How it looks in my case you can see in this YouTube video:
Recently, I was playing with the CNTK object detection API, and produced a very interesting model which can recognize the Nokia3310 mobile phone. As you probably already know, the Nokia3310 is a legendary mobile phone which was popular 15 years ago and was recently re-branded by Nokia.
In this blog post I will provide you with step-by-step instructions on how to:
Finding appropriate images for our model is very easy. Just go to google.com and type "Nokia3310" and boom, there are plenty of images.
Find at least 20 images, and put them into the Nokia3310 image folder. Once we have collected enough images for the model, we can move to the next step.
In order to train an image detection model by using the FasterRCNN algorithm, we have to provide three kinds of data, separated into three different files:
That seems like a lot of work for simple object detection, but fortunately there is a tool which can generate all the data for us. It is called VoTT: Visual Object Tagging Tool, and it can be found at: .
Here we will explain in detail how to generate the image data by using the VoTT tool.
1. Open the VoTT tool, and from the File menu select the folder with the images we previously collected.
2. Enter "nokia3310" in the Labels edit box and click the Continue button. In case we have more than one object class, enter each label here.
3. Then, for each image, draw a rectangle around each object which represents a Nokia3310.
4. Once you finish tagging one image, press Next, and do the same for all selected images.
5. Once the process of tagging is finished, the export action can be performed.
6. With the Export option, data is generated for each rectangle we made, and two files are generated for each image in the data set. Also, once the tagging process is completed, the VoTT tool generates three folders:
a) negative – contains images which have no tagged rectangle (no Nokia3310 in the image),
b) positive – contains approximately 70% of all images on which we tagged a Nokia3310 object; this folder will be used for training the model,
c) testImages – contains approximately 30% of all images on which we tagged a Nokia3310 object; this folder will be used for evaluating and testing the model.
VoTT classifies all images into these three folders. In case there are images with no tagging, those images are moved to the negative folder; all other images are separated into the positive and testImages folders.
From each image two files are generated:
– [imagename].bboxes.labels.tsv – which contains all labels tagged in the image file,
– [imagename].bboxes.tsv – rectangle coordinates of all tags in the image, as illustrated below.
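For example, an img01.jpg.bboxes.tsv file could hold one tab-separated rectangle per row, such as:
10	418	340	520
while the matching img01.jpg.bboxes.labels.tsv would hold one label per row, such as:
nokia3310
(The exact file naming and column layout shown here are only an illustration; the coordinate values are taken from the ROI sample later in this post.)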
Once we have the VoTT-generated data, we need to transform it into CNTK format. First we will generate the class_map file.txt:
7. Create a new "class_map file.txt" file, and put the following text into it:
__background__	0
Nokia3310	1
As can be seen, there is only one class which we want to detect, and it is Nokia3310 (the __background__ is a reserved tag which is added by default and cannot be removed). Now we need to generate the second file:
8. Create a new "train_image_file.txt" file, and put in text similar to this:
0	positive/img01.jpg	0
1	positive/img05.jpg	0
2	positive/img10.jpg	0
...
The content of the file is a list of all images placed in the positive folder, with an ID on the left side and a zero on the right side, separated by tabulators. The image path should be relative.
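A small sketch of how such a file could be generated from the positive folder (the helper name and the *.jpg filter are my assumptions; adjust paths to your layout):

using System.IO;
using System.Linq;

//Sketch: build train_image_file.txt from the "positive" folder
static void WriteImageListFile(string positiveDir, string outputFile)
{
    var lines = Directory.GetFiles(positiveDir, "*.jpg")
                         .Select((path, id) => $"{id}\tpositive/{Path.GetFileName(path)}\t0");
    File.WriteAllLines(outputFile, lines);
}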
9. Create a new "train_roi_file.txt", and put in data similar to this:
0 |roiAndLabel 10 418 340 520 1
1 |roiAndLabel 631 75 731 298 1
2 |roiAndLabel 47 12 222 364 1
3 |roiAndLabel 137 67 186 184 1 188 69 234 180 1
...
As can be seen, the first four numbers are the rectangle coordinates, followed by a number which indicates the class value. Since we have only one class, 1 always follows the four numbers. In case an image contains more than one rectangle, which is the case for the row with ID 3, the class value comes after every group of four numbers.
This is the procedure for making the three files needed for training, which are required to run CNTK object detection. For the testing data we also need image and ROI files. The whole data set and corresponding files can be found on the GitHub page.
CNTK comes with an example of how to implement object detection, which can be found at:
So I took the source code from , modified it for my case, and published it on GitHub, where it can be found .
10. Before downloading the source code, make sure CNTK 2.3 is installed on your machine with Anaconda 4.1.1, in an environment with Python 3.5.
11. Clone the Github repository and open it in Visual Studio or Visual Studio Code.
12. The first thing you should do is download the pre-trained AlexNet model. You can easily download it by running the download_model.py Python script placed in the PretrainedModels folder.
13. The training process is started when you run the Nokia3310_detection.py Python file. Besides the pre-trained model, no other resources are required in order to run the project. The following picture shows the main parts of the solution.
Once the training process is finished, one image is evaluated and shown in order to assess how good the model is at detecting the phone. Such an image is shown at the beginning of the blog post.
All the source code with the image data set can be downloaded from GitHub at