Export options in ANNdotNET


ANNdotNET v1.0 was released a few weeks ago, and the feedback is very positive. Also, up to now there is no blocking or serious bug in the release, which makes me very happy. In this blog post we are going through the Export options in ANNdotNET.

ANNdotNET is supposed to be an application which can cover the whole life-cycle of a machine learning project: from defining the raw data set, cleaning and feature engineering, to training and evaluation of the model. Also, with different mlconfig files within the same project, the user has the ability to create as many ml configurations as he/she wants. Once the user selects the best ml configuration, and the training and evaluation process completes, the next step in the ML project life-cycle is model deployment/export.

Currently, ANNdotNET defines three export options:

  • Export model result to CSV file,
  • Export model and model result to Excel, and
  • Export model in CNTK file format.

With those three export options, we can cover many ML scenarios.

Export to CSV

Export to CSV exports the actual and predicted values of the testing data set to a comma separated text file. In case the testing data set is not provided, the result of the validation data set will be exported. In case neither the testing nor the validation data set is provided, the export process is terminated.

The export process starts by selecting the appropriate mlconfig file. The network model must be trained prior to being exported.

2018-10-22_9-35-07.png

Once the export process completes, the csv file is created on disk. We can import the exported result into Excel, and content similar to the image below will be shown:

2018-10-22_11-40-49.png

The exported result is shown in two columns: the actual and the predicted values. In case a classification result is exported, information about the class values is included in the header.
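To illustrate how the exported file can be consumed from code, here is a minimal C# sketch that reads such a result file. The file name and the two-column layout follow the description above; the comma separator is an assumption:

//read an exported result file (assumed name) and print actual vs. predicted values
var lines = System.IO.File.ReadAllLines("LSTM-Net_result.csv");
for (int i = 1; i < lines.Length; i++) //skip the header row
{
    var cols = lines[i].Split(',');
    Console.WriteLine($"actual={cols[0]}, predicted={cols[1]}");
}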

Export to Excel

The Export to Excel option is more than just exporting the result. In fact, it is a deployment of the model into the Excel environment. Beside exporting all defined data sets (training, validation, and test), the model itself is also exported. Predicted values are calculated by using the ANNdotNET Excel Add-in, so that model evaluation looks like calling an ordinary Excel formula. More information about how it works can be found here.

2018-10-22_12-25-20.png

The exported xlsx file can be opened, and further analysis of the model and related data sets can be continued. The following image shows the exported model for the Concrete Slump Test example. Since only two data sets are defined (training and validation), those data sets are exported. As can be seen, the predicted column is not filled; only the first row is filled with the formula, which must be evaluated by inserting the equal sign “=” in front of it.

2018-10-22_12-29-08.png

Once the formula is evaluated for the first row, we can use the usual Excel trick to copy it to the other rows.

The same applies to the other data sets, separated into their own Excel worksheets.

Export to CNTK

The last option allows exporting the trained model in the CNTK file format. The ONNX format will also be supported as soon as it becomes available in the CNTK C# library. This option is handy in situations where the trained CNTK model is evaluated in other solutions.
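For example, a model exported this way can be loaded and evaluated from any .NET application using only the CNTK C# library. The following is a minimal sketch; the model file name and the feature values are assumptions:

//load the exported model (assumed file name) on the default device
var device = DeviceDescriptor.UseDefaultDevice();
var model = Function.Load("model.cntk", device);

//bind example feature values to the model input variable
var input = model.Arguments[0];
var featureValues = new float[] { 5.1f, 3.5f, 1.4f, 0.2f };
var inputMap = new Dictionary<Variable, Value>() { { input, Value.CreateBatch(input.Shape, featureValues, device) } };
var outputMap = new Dictionary<Variable, Value>() { { model.Output, null } };

//evaluate the model and read the prediction
model.Evaluate(inputMap, outputMap, device);
var prediction = outputMap[model.Output].GetDenseData<float>(model.Output);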

For this blog post, there is a short video in which the reader can see all three options in action.


Sentiment Analysis using ANNdotNET


The October 2018 issue of MSDN magazine brings the article “Sentiment Analysis Using CNTK” written by Dr. James McCaffrey. I was wondering if I could implement this solution in ANNdotNET as Dr. McCaffrey described it in the magazine. Indeed, I have implemented the complete solution in less than 5 minutes.

In this blog post I am going to walk you through this very good and well written MSDN article example. I am not going to repeat the text written in the MSDN article, so the recommendation is to read the article first, then come back here and implement the example in ANNdotNET. Since ANNdotNET is a GUI tool, it is interesting to see all the great visualizations during model training and evaluation. Also, ANNdotNET provides a complete binary model evaluation by providing the confusion matrix, ROC curve, and other binary performance parameters, which makes this example more interesting and valuable to read.

The whole example is implemented in five steps.

Step 1: Prepare files and folder structure

First we need to create several folders and files in order to create an empty annproject. This manual creation of folders is necessary because ANNdotNET v1.0 has no option to create an empty project. This will be added in the next version.

So first, create the following set of hierarchically ordered folders:

  • SentimentAnalysis
    • MovieReview
      • data

The following figure shows this set of folders.

2018-10-15_21-08-04

Step 2: Download data sets used in the example.

The only thing we need from the MSDN article is the train and test data sets. The data can be downloaded from the MSDN sample. Once the zip file is downloaded, unzip the sample and copy the files imdb_sparse_train_50w.txt and imdb_sparse_test_50w.txt to the data folder as the image above shows.

Step 3: Create MovieReview.ann and LSTM-Net.mlconfig files

  • Open Notepad and create file with the following content:
project:|Name:MovieReview |Type:NoRawData |MLConfigs:LSTM-Net
data:|RawData:MovieReview_rawdata.txt
parser:|RowSeparator:rn |ColumnSeparator: ; |Header:0 |SkipLines:0

Save the file in the SentimentAnalysis folder as MovieReview.ann. The following picture shows the saved annproject file on disk.

2018-10-15_21-29-24

Now open Notepad again and create a new empty file. The empty file is supposed to be the mlconfig file with the content shown below. Don’t worry about the content of the file, since all those details will be visible once we open it with ANNdotNET. If you want to know more about the structure of the mlconfig file, please refer to the documentation of the ANNdotNET project.

configid:msdn-oct-2018-issue-sentiment-analysis-article
metadata:|Column02:y;Category;Label;Random;0;1
features:|x 129892 1
labels:|y 2 0
network:|Layer:Embedding 50 0 0 None 0 0 |Layer:LSTM 25 25 0 TanH 1 1 |Layer:Dense 2 0 0 Softmax 0 0
learning:|Type:AdamLearner |LRate:0.01 |Momentum:0.85 |Loss:CrossEntropyWithSoftmax |Eval:ClassificationAccuracy |L1:0 |L2:0
training:|Type:Default |BatchSize:250 |Epochs:400 |Normalization:0 |RandomizeBatch:0 |SaveWhileTraining:0 |FullTrainingSetEval:1 |ProgressFrequency:1 |ContinueTraining:0 |TrainedModel:
paths:|Training:data\imdb_sparse_train_50w.txt |Validation:data\imdb_sparse_test_50w.txt |Test:data\imdb_sparse_test_50w.txt |TempModels:temp_models |Models:models |Result:LSTM-Net_result.csv |Logs:log

The file should be saved in the MovieReview folder under the LSTM-Net.mlconfig file name. The next image shows where the mlconfig file is stored.

2018-10-15_21-41-16

Step 4. Open annproject file with ANNdotNET GUI tool

Now we have set up everything needed to open and train the sentiment analysis example with ANNdotNET. Since ANNdotNET implements MLEngine, which is based on CNTK, the data sets are compatible and can be read by the trainer. In order to get a better result we have changed the learning parameters a little bit: instead of SGD we used the AdamLearner.
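For readers curious what this change corresponds to at the CNTK C# level, here is a rough sketch (the network variable is hypothetical and this is not ANNdotNET’s internal code; the learning rate and momentum are the values from the mlconfig file above):

//replace the plain SGD learner with an Adam learner (0.01 learning rate, 0.85 momentum)
var lrSchedule = new TrainingParameterScheduleDouble(0.01);
var momentumSchedule = new TrainingParameterScheduleDouble(0.85);
var parameters = new ParameterVector(network.Parameters().ToList());
var adamLearner = CNTKLib.AdamLearner(parameters, lrSchedule, momentumSchedule, true);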

In case you don’t have the ANNdotNET tool installed on your machine, just go to the releases section and download the latest version, or clone the GitHub repository and run it within Visual Studio. All information about how to run ANNdotNET as a standalone application or as a Visual Studio solution can be found on the GitHub page.

After simply unzipping the ANNdotNET binaries on your machine, run it by selecting the anndotnet.wnd.exe file. Once ANNdotNET is running, click the Open application command and select the MovieReview.ann file. In a second the application loads the project with the corresponding mlconfig file. From the project explorer, click on the LSTM-Net tree item, and content similar to the image below should appear.

2018-10-15_21-54-34

Everything we have written into the mlconfig file is now shown in the Network settings tab page:

  1. Input layer with 129892 dimensions
  2. Output layer with 2 dimensions (binary problem)
  3. Learning parameters:
    1. AdamLearner, with a 0.01 learning rate and 0.85 momentum,
    2. Loss function is CrossEntropyWithSoftmax,
    3. Evaluation function is ClassificationAccuracy
  4. Network Designer shows a typical LSTM recurrent network

Step 5. Training and Evaluation of the Example

Now that we have reviewed the network settings, we can switch to the train tab page and review the training parameters. Since we already set up the training parameters in the mlconfig file, we don’t need to change anything.

Start the training process by clicking the Run application command. After some time we should see the following result:

2018-10-16_16-44-28

If we switch to the Evaluation page we can perform some statistical analysis in order to evaluate whether the model is good or not. Once the evaluation tab page is shown, click the Refresh button to evaluate the model against the training and validation data sets.

2018-10-16_16-44-39

The left statistics are for the training data set, and the right side is for the validation data set. As can be seen, the model perfectly predicted all data from the training data set, and achieved about 70% accuracy on the validation data set. Of course, the model is not as good as we would expect for production, but for this demonstration it is good enough. There are also two buttons to show the ROC curve and other binary performance parameters, for both data sets, which the reader may explore.

That’s all that is needed in order to have the complete Sentiment Analysis example set up and running. In case you want the complete ANNdotNET project, it can be downloaded from the ANNdotNET GitHub repository.

ANNdotNET v1.0 has been released


Half a year ago, in this post, I announced the new open source project ANNdotNET, which was the ANN part of the GPdotNET artificial intelligence tool. On that day I finished my work about machine learning and genetic programming, and also released the new version of the genetic programming tool, without ANN and other non-GP related modules. Now my second big open source project has achieved its first stable release.

ANNdotNET is a deep learning tool on the .NET platform, which has a similar workflow as GPdotNET. Both projects share several modules, mostly for data preparation and model evaluation, since all that stuff is the same.

anndotnet_gpdotnet.png

ANNdotNET is more than a GUI tool, since it also contains a CMD tool, which can be part of a bigger cloud solution. There are several key concepts of the project which are worth mentioning here:

1. Machine Learning Configuration mlconfig file

ANNdotNET is based on a so-called machine learning configuration file, where everything about data, training and learning parameters, as well as neural network layers, is stored in a file called the mlconfig file. Along with the mlconfig file, there are several other file types generated during development of the ml solution. The mlconfig file can be shared between cloud services in order to prepare and transform data, train, evaluate or export ml models. If you want more information about files in ANNdotNET, you can look at the documentation of the project. Since the mlconfig file is independent of the tool, it can be executed with the GUI or CMD tool, or any other custom tool implemented on the anndotnet API.

2. Machine Learning Project Explorer

In order to start developing an ml solution with ANNdotNET, the first thing to do is create an annproject file, by selecting the New option from the Application command. Under an annproject the user can create as many mlconfig files as he/she wants. The annproject and related mlconfig files are presented in the ML Project Explorer, where the user can manage them as ordinary list items.

mlproject_explorer.png

3. ANNdotNET MLEngine – Machine Learning Engine

ANNdotNET introduces the ANNdotNET Machine Learning Engine (MLEngine), which is responsible for training and evaluating models defined in mlconfig files. The MLEngine relies on the Microsoft Cognitive Toolkit, CNTK, an open source library which has proved to be one of the best open source libraries for deep learning. Through all the application’s components, MLEngine exposes all the great features of CNTK, e.g. GPU support for training and evaluation and different kinds of learners, but also extends CNTK with more evaluation functions (RMSE, MSE, Classification Accuracy, Coefficient of Determination, etc.), extended mini-batch sources, and trainer and evaluation models.

anndotnet_mlengine

MLEngine is built on top of CNTK and .NET, with the ability to provide a backend component for any cloud/on-premise ML solution.
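As an illustration, an additional evaluation function such as RMSE can be composed from core CNTK operations in a few lines. This is only a sketch of the idea, not MLEngine’s actual implementation:

//RMSE sketch: sqrt(mean((model - target)^2)) composed from core CNTK ops
private static Function createRmse(Function model, Variable target)
{
    var diff = CNTKLib.Minus(new Variable(model), target);
    var mse = CNTKLib.ReduceMean(CNTKLib.Square(diff), Axis.AllStaticAxes());
    return CNTKLib.Sqrt(mse);
}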

4. Visual Neural Network Designer

MLEngine also contains an implementation of neural network layers which is supposed to be a high-level CNTK API, very similar to the layer implementations in Keras and other python based deep learning APIs. With this implementation ANNdotNET provides the Visual Neural Network Designer, called ANNdotNET NNDesigner, which allows the user to design a neural network configuration of any size with any type of layers. In the first release the following layers are implemented:

  • Normalization Layer – takes the numerical features and normalizes their values before they enter the network. More information can be found here,
  • Dense – classic neural network layer with activation function,
  • LSTM – LSTM layer with options for peephole and self-stabilization,
  • Embedding – embedding layer,
  • Drop – dropout layer.

More layer types will be added in future releases. More information about the Visual Network Designer can be found in a previous blog post.

5. Data Transformation

Along with the ml related stuff, ANNdotNET implements a set of components for data transformation from a raw dataset into an mlready dataset. The user doesn’t need to worry about the complex CNTK file format, one-hot encoding, and other data and variable transformations, e.g. handling missing values, data normalization, etc. Data transformation starts by loading a raw data file into ANNdotNET; then, with a set of GUI related options, the data can be completely prepared into an mlready dataset. There is a set of short videos about how to quickly transform a raw dataset into an mlready dataset.

6. Model Evaluation, Saving Good Models & Retraining Trained Models

Once the model is trained, ANNdotNET provides a basic evaluation tool for evaluating trained models. The MLEvaluator contains a set of basic options in order to evaluate regression, binary or multi-class classification models. Without leaving ANNdotNET, the user can decide whether the model is good or not by computing a set of statistical measures against the model and related datasets (training, validation and test). Beside this, ANNdotNET also offers evaluation during the training phase, by providing an option for saving good models while training runs. In this way ANNdotNET is able to select the best trained model regardless of the number of iterations. Different strategies for selecting the best model among a set of saved models will be implemented in the future. Also, any previously trained model can be trained again from the last checkpoint. This is an important option in various scenarios, for example to change some parameters and continue training. This option also makes it possible to start training a model on one machine or environment, and then continue the training on a different machine or environment.

Summary

ANNdotNET is an open source project for deep learning on the .NET platform. It is a complete GUI solution for data preparation, training, evaluation and deployment of ml models. ANNdotNET introduces the ANNdotNET Machine Learning Engine (MLEngine), which is responsible for training and evaluating models defined in mlconfig files. The MLEngine relies on the Microsoft Cognitive Toolkit, CNTK, an open source library which has proved to be one of the best open source libraries for deep learning. Through all the application’s components, MLEngine exposes all the great features of CNTK, e.g. GPU support for training and evaluation and different kinds of learners. MLEngine also extends CNTK with more evaluation functions (RMSE, MSE, Classification Accuracy, Coefficient of Determination, etc.), extended mini-batch sources, and trainer and evaluation models.
The process of creating, training, evaluating and exporting models is provided by the GUI application and does not require knowledge of the supported programming languages.

ANNdotNET is ideal in several scenarios:

  • more focus on network development and the training process using a classic desktop approach, instead of focusing on coding,
  • less time spent on debugging source code, more focus on different configuration and parameter variants,
  • ideal for engineers/users who are not familiar with programming languages,
  • in case the problem requires coding custom models or a custom training process, ANNdotNET CMD provides a high-level API for such implementations,
  • all ml configurations developed with the GUI tool can be handled with the CMD tool and vice versa.


In case you like this project, star it on GitHub. In case you want to use it in your academic paper, please cite it appropriately as specified on the project page.

 

Linear Regression with CNTK and C#


CNTK is Microsoft’s deep learning tool for training very large and complex neural network models. However, you can use CNTK for various other purposes. In some of the previous posts we have seen how to use CNTK to perform matrix multiplication in order to calculate descriptive statistics parameters on a data set.
In this blog post we are going to implement a simple linear regression model, LR. The model contains only one neuron. The model also contains a bias parameter, so in total the linear regression has only two parameters: w and b.
The image below shows the LR model:

The reason why we use CNTK to solve such a simple task is very straightforward. By learning on simple models like this one, we can see how the CNTK library works, and see some not-so-trivial actions in CNTK.
The model shown above can easily be extended to a logistic regression model by adding an activation function. While linear regression represents a neural network configuration without an activation function, logistic regression is the simplest neural network configuration which includes an activation function.

The following image shows the logistic regression model:
In case you want to see more info about how to create logistic regression with CNTK, you can see this official demo.
Now that we have made some introduction to the neural network models, we can start by defining the data set. Assume we have a simple data set which represents the simple linear function y=2x+1. The generated data set is shown in the following table:

x:  1  2  3  4  5
y:  3  5  7  9  11

We already know that the linear regression parameters for the presented data set are b_0=1 and b_1=2, so we want to engage the CNTK library in order to get those values, or at least parameter values which are very close to them.

The whole task of developing the LR model with CNTK can be described in several steps:

Step 1: Create a C# console application in Visual Studio, change the current architecture to x64, and add the latest “CNTK.GPU” NuGet package to the solution. The following image shows those actions performed in Visual Studio.

Step 2: Start writing code by adding two variables: X – feature, and label Y. Once the variables are defined, start with defining the training data set by creating a batch. The following code snippet shows how to create the variables and the batch, as well as how to start writing CNTK based C# code.

First we need to add some using statements and define the device where the computation will happen. Usually, we can define the CPU, or the GPU in case the machine contains an NVIDIA compatible graphics card. So the demo starts with the following code snippet:

using System;
using System.Linq;
using System.Collections.Generic;
using CNTK;
namespace LR_CNTK_Demo
{
    class Program
    {
        static void Main(string[] args)
        {
             //Step 1: Create some Demo helpers
             Console.Title = "Linear Regression with CNTK!";
             Console.WriteLine("#### Linear Regression with CNTK! ####");
             Console.WriteLine("");
            //define device
            var device = DeviceDescriptor.UseDefaultDevice();

Now define the two variables and the data set presented in the previous table:

//Step 2: define values, and variables
Variable x = Variable.InputVariable(new int[] { 1 }, DataType.Float, "input");
Variable y = Variable.InputVariable(new int[] { 1 }, DataType.Float, "output");

//Step 2: define training data set from table above
var xValues = Value.CreateBatch(new NDShape(1, 1), new float[] { 1f, 2f, 3f, 4f, 5f }, device);
var yValues = Value.CreateBatch(new NDShape(1, 1), new float[] { 3f, 5f, 7f, 9f, 11f }, device);

Step 3: Create the linear regression network model by passing the input variable and the device for computation. As we already discussed, the model consists of one neuron and one bias parameter. The following method implements the LR network model:

private static Function createLRModel(Variable x, DeviceDescriptor device)
{
    //initializer for parameters
    var initV = CNTKLib.GlorotUniformInitializer(1.0, 1, 0, 1);

    //bias
    var b = new Parameter(new NDShape(1,1), DataType.Float, initV, device, "b");

    //weights
    var W = new Parameter(new NDShape(2, 1), DataType.Float, initV, device, "w");

    //matrix product
    var Wx = CNTKLib.Times(W, x, "wx");

    //layer
    var l = CNTKLib.Plus(b, Wx, "wx_b");

    return l;
}

First, we create the initializer, which will initialize the startup values of the network parameters. Then we define the bias and weight parameters, and join them in the form of the linear model “wx+b”, which is returned as a Function type. The createLRModel method is called in the main method. Once the model is created, we can examine it and prove that there are only two parameters in the model. The following code creates the linear regression model and prints the model parameters:

//Step 3: create linear regression model
var lr = createLRModel(x, device);
//Network model contains only two parameters b and w, so we query
//the model in order to get parameter values
var paramValues = lr.Inputs.Where(z => z.IsParameter).ToList();
var totalParameters = paramValues.Sum(c => c.Shape.TotalSize);
Console.WriteLine($"LRM has {totalParameters} params, {paramValues[0].Name} and {paramValues[1].Name}.");

In the previous code, we have seen how to extract parameters from the model. Once we have the parameters, we can change their values, or just print them for further analysis.
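As mentioned in the introduction, extending this linear model to logistic regression only requires an activation function on top of it. A minimal sketch, using a hypothetical helper that is not part of this demo:

//logistic regression: the same linear model passed through a Sigmoid activation
private static Function createLogisticModel(Variable x, DeviceDescriptor device)
{
    var linearModel = createLRModel(x, device);
    return CNTKLib.Sigmoid(new Variable(linearModel));
}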

Step 4: Create the Trainer, which will be used to train the network parameters w and b. The following code snippet shows the implementation of the trainer method.

public Trainer createTrainer(Function network, Variable target)
{
    //learning rate
    var lrate = 0.082;
    var lr = new TrainingParameterScheduleDouble(lrate);

    //network parameters
    var zParams = new ParameterVector(network.Parameters().ToList());

    //create loss and eval
    Function loss = CNTKLib.SquaredError(network, target);
    Function eval = CNTKLib.SquaredError(network, target);

    //learners
    var llr = new List<Learner>();
    var msgd = Learner.SGDLearner(network.Parameters(), lr);
    llr.Add(msgd);

    //trainer
    var trainer = Trainer.CreateTrainer(network, loss, eval, llr);

    return trainer;
}

First we define the learning rate, the main neural network training parameter. Then we create the loss and evaluation functions. With those parameters we can create the SGD learner. Once the SGD learner object is instantiated, the trainer is created by calling the CreateTrainer static CNTK method, and returned from the function. The createTrainer method is called in the main method:

//Step 4: create trainer
var trainer = createTrainer(lr, y);

Step 5: Training process: Once the variables, data set, network model and trainer are defined, the training process can be started.

//Step 5: training
for (int i = 1; i <= 200; i++)
{
    var d = new Dictionary<Variable, Value>();
    d.Add(x, xValues);
    d.Add(y, yValues);
    //
    trainer.TrainMinibatch(d, true, device);
    //
    var loss = trainer.PreviousMinibatchLossAverage();
    var eval = trainer.PreviousMinibatchEvaluationAverage();
    //
    if (i % 20 == 0)
        Console.WriteLine($"It={i}, Loss={loss}, Eval={eval}");

    if (i == 200)
    {
        //print weights
        var b0_name = paramValues[0].Name;
        var b0 = new Value(paramValues[0].GetValue()).GetDenseData<float>(paramValues[0]);
        var b1_name = paramValues[1].Name;
        var b1 = new Value(paramValues[1].GetValue()).GetDenseData<float>(paramValues[1]);
        Console.WriteLine($" ");
        Console.WriteLine($"Training process finished with the following regression parameters:");
        Console.WriteLine($"b={b0[0][0]}, w={b1[0][0]}");
        Console.WriteLine($" ");
    }
}
}

As can be seen, in just 200 iterations the regression parameters reached almost the expected values: b_0=0.995 and w=2.005. Since the training process is different from the classic regression parameter determination, we cannot get the exact values. In order to estimate the regression parameters, the neural network uses an iterative method called Stochastic Gradient Descent, SGD. On the other hand, classic regression uses regression analysis procedures, minimizing the least square error and solving the system of equations where the unknowns are b and w.
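For comparison, the classic least squares estimate can be computed in closed form for this small data set, with \bar{x}=3 and \bar{y}=7:

w = \frac{\sum_{i}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i}(x_i-\bar{x})^2} = \frac{20}{10} = 2, \qquad b = \bar{y}-w\bar{x} = 7-2\cdot 3 = 1,

which matches the values the SGD training converges to.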
Once we implement all the code above, we can start the LR demo by pressing F5. A similar output window should be shown:

Hope this blog post provides enough information to start with CNTK, C# and machine learning. The source code for this blog post is available for download.

Input normalization as separate layer in CNTK with C#


In the previous post, we have seen how to calculate some of the basic parameters of descriptive statistics, as well as how to normalize data by calculating the mean and standard deviation. In this blog post we are going to implement data normalization as a regular neural network layer, which can simplify the training process and data preparation.

What is Data normalization?

Simply said, data normalization is a set of tasks which transform the values of any feature in a data set into a predefined number range. Usually this range is [-1,1], [0,1] or some other specific range. Data normalization plays a very important role in ML, since it can dramatically improve the training process and simplify the setting of network parameters.

There are two main types of data normalization:
– MinMax normalization – which transforms all values into the range [0,1],
– Gauss normalization or Z score normalization – which transforms the values in such a way that the average value is zero and the std is 1.

Beside those types there are plenty of other methods which can be used. Usually those two are used when the size of the data set is known; otherwise we should use some of the other methods, like log scaling, dividing every value by some constant, etc. But why does data need to be normalized? This is an essential question in ML, and the simplest answer is: to give all features an equal influence on changing the output label. More about data normalization and scaling can be found in this article.
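To make both types concrete, here is a minimal C# sketch of the two transformations on a plain array, outside of CNTK (assumes using System and System.Linq):

//MinMax normalization: transforms all values into the range [0,1]
static float[] MinMaxNormalize(float[] column)
{
    float min = column.Min(), max = column.Max();
    return column.Select(v => (v - min) / (max - min)).ToArray();
}

//Z score normalization: zero mean, std equal to 1 (sample std)
static float[] ZScoreNormalize(float[] column)
{
    float mean = column.Average();
    float std = (float)Math.Sqrt(column.Sum(v => (v - mean) * (v - mean)) / (column.Length - 1));
    return column.Select(v => (v - mean) / std).ToArray();
}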

In this blog post we are going to implement a CNTK neural network which contains a “Normalization layer” between the input and the first hidden layer. The schematic picture of the network looks like the following image:

As can be observed, the Normalization layer is placed between the input and the first hidden layer. The Normalization layer also contains the same number of neurons as the input layer and produces output with the same dimension as the input layer.

In order to implement the Normalization layer, the following requirements must be met:

  • calculate the average \mu and standard deviation \sigma of the training data set, as well as find the maximum and minimum value of each feature,
  • this must be done prior to neural network model creation, since we need those values in the normalization layer,
  • within network model creation, the normalization layer should be defined after the input layer is defined.

Calculation of mean and standard deviation for training data set

Before network creation, we should prepare the mean and standard deviation parameters which will be used in the Normalization layer as constants. Luckily, CNTK has a static method in the MinibatchSource class for this purpose: “MinibatchSource.ComputeInputPerDimMeansAndInvStdDevs”. The method takes the whole training data set defined in the minibatch source and calculates the parameters.

//calculate mean and std for the minibatchsource
// prepare the training data
var d = new Dictionary<StreamInformation, Tuple<NDArrayView, NDArrayView>>();
using (var mbs = MinibatchSource.TextFormatMinibatchSource(
    trainingDataPath, streamConfig, MinibatchSource.FullDataSweep, false))
{
    d.Add(mbs.StreamInfo("feature"), new Tuple<NDArrayView, NDArrayView>(null, null));
    //compute mean and standard deviation of the population for inputs variables
    MinibatchSource.ComputeInputPerDimMeansAndInvStdDevs(mbs, d, device);
}

Now that we have the average and std values for each feature, we can create a network with the normalization layer. In this example we define a simple feed forward NN with 1 input, 1 normalization, 1 hidden and 1 output layer.

private static Function createFFModelWithNormalizationLayer(Variable feature, int hiddenDim, int outputDim, Tuple<NDArrayView, NDArrayView> avgStdConstants, DeviceDescriptor device)
{
    //First the parameters initialization must be performed
    var glorotInit = CNTKLib.GlorotUniformInitializer(
        CNTKLib.DefaultParamInitScale,
        CNTKLib.SentinelValueForInferParamInitRank,
        CNTKLib.SentinelValueForInferParamInitRank, 1);

    //*******Input layer is indicated as feature
    var inputLayer = feature;

    //*******Normalization layer
    var mean = new Constant(avgStdConstants.Item1, "mean");
    var std = new Constant(avgStdConstants.Item2, "std");
    var normalizedLayer = CNTKLib.PerDimMeanVarianceNormalize(inputLayer, mean, std);

    //*****hidden layer creation
    //shape of the hidden layer weight matrix is hiddenDim x inputDim (4 features)
    var shape = new int[] { hiddenDim, 4 };
    var weightParam = new Parameter(shape, DataType.Float, glorotInit, device, "wh");
    var biasParam = new Parameter(new NDShape(1, hiddenDim), 0, device, "bh");
    var hidLay = CNTKLib.Times(weightParam, normalizedLayer) + biasParam;
    var hidLayerAct = CNTKLib.ReLU(hidLay);

    //******Output layer creation
    //the last action is creation of the output layer
    var shapeOut = new int[] { 3, hiddenDim };
    var wParamOut = new Parameter(shapeOut, DataType.Float, glorotInit, device, "wo");
    var bParamOut = new Parameter(new NDShape(1, 3), 0, device, "bo");
    var outLay = CNTKLib.Times(wParamOut, hidLayerAct) + bParamOut;
    return outLay;
}

Complete Source Code Example

The whole source code for this example is listed below. The example shows how to normalize the input features of the famous Iris data set. Notice that when using this way of data normalization, we don’t need to handle normalization for the validation or testing data sets, because data normalization is part of the network model.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using CNTK;
namespace NormalizationLayerDemo
{
    class Program
    {
        static string trainingDataPath = "./data/iris_training.txt";
        static string validationDataPath = "./data/iris_validation.txt";
        static void Main(string[] args)
        {
            DeviceDescriptor device = DeviceDescriptor.UseDefaultDevice();

            //stream configuration to distinct features and labels in the file
            var streamConfig = new StreamConfiguration[]
               {
                   new StreamConfiguration("feature", 4),
                   new StreamConfiguration("flower", 3)
               };

            // build a NN model
            //define input and output variable and connecting to the stream configuration
            var feature = Variable.InputVariable(new NDShape(1, 4), DataType.Float, "feature");
            var label = Variable.InputVariable(new NDShape(1, 3), DataType.Float, "flower");

            //calculate mean and std for the minibatchsource
            // prepare the training data
            var d = new Dictionary<StreamInformation, Tuple<NDArrayView, NDArrayView>>();
            using (var mbs = MinibatchSource.TextFormatMinibatchSource(
               trainingDataPath , streamConfig, MinibatchSource.FullDataSweep,false))
            {
                d.Add(mbs.StreamInfo("feature"), new Tuple<NDArrayView, NDArrayView>(null, null));
                //compute mean and standard deviation of the population for inputs variables
                MinibatchSource.ComputeInputPerDimMeansAndInvStdDevs(mbs, d, device);

            }

            //Build simple Feed Froward Neural Network with normalization layer
            var ffnn_model = createFFModelWithNormalizationLayer(feature,5,3,d.ElementAt(0).Value, device);

            //Loss and error functions definition
            var trainingLoss = CNTKLib.CrossEntropyWithSoftmax(new Variable(ffnn_model), label, "lossFunction");
            var classError = CNTKLib.ClassificationError(new Variable(ffnn_model), label, "classificationError");

            // set learning rate for the network
            var learningRatePerSample = new TrainingParameterScheduleDouble(0.01, 1);

            //define learners for the NN model
            var ll = Learner.SGDLearner(ffnn_model.Parameters(), learningRatePerSample);

            //define trainer based on model, loss and error functions , and SGD learner
            var trainer = Trainer.CreateTrainer(ffnn_model, trainingLoss, classError, new Learner[] { ll });

            //Preparation for the iterative learning process

            // create minibatch for training
            var mbsTraining = MinibatchSource.TextFormatMinibatchSource(trainingDataPath, streamConfig, MinibatchSource.InfinitelyRepeat, true);

            int epoch = 1;
            //the epoch count (20) and minibatch size (65) are assumed values
            while (epoch < 20)
            {
                //get the next minibatch and feed it to the trainer
                var minibatchData = mbsTraining.GetNextMinibatch(65, device);
                var arguments = new Dictionary<Variable, MinibatchData>
                {
                    { feature, minibatchData[mbsTraining.StreamInfo("feature")] },
                    { label, minibatchData[mbsTraining.StreamInfo("flower")] }
                };
                trainer.TrainMinibatch(arguments, device);

                //report progress at the end of each data sweep
                if (minibatchData.Values.Any(a => a.sweepEnd))
                {
                    reportTrainingProgress(feature, label, streamConfig, trainer, epoch, device);
                    epoch++;
                }
            }
            Console.Read();
        }

        private static void reportTrainingProgress(Variable feature, Variable label, StreamConfiguration[] streamConfig,  Trainer trainer, int epoch, DeviceDescriptor device)
        {
            // create minibatch for training
            var mbsTrain = MinibatchSource.TextFormatMinibatchSource(trainingDataPath, streamConfig, MinibatchSource.FullDataSweep, false);
            var trainD = mbsTrain.GetNextMinibatch(int.MaxValue, device);
            //
            var a1 = new UnorderedMapVariableMinibatchData();
            a1.Add(feature, trainD[mbsTrain.StreamInfo("feature")]);
            a1.Add(label, trainD[mbsTrain.StreamInfo("flower")]);
            var trainEvaluation = trainer.TestMinibatch(a1);

            // create minibatch for validation
            var mbsVal = MinibatchSource.TextFormatMinibatchSource(validationDataPath, streamConfig, MinibatchSource.FullDataSweep, false);
            var valD = mbsVal.GetNextMinibatch(int.MaxValue, device);

            //
            var a2 = new UnorderedMapVariableMinibatchData();
            a2.Add(feature, valD[mbsVal.StreamInfo("feature")]);
            a2.Add(label, valD[mbsVal.StreamInfo("flower")]);
            var valEvaluation = trainer.TestMinibatch(a2);

            Console.WriteLine($"Epoch={epoch}, Train Error={trainEvaluation}, Validation Error={valEvaluation}");
        }

        private static Function createFFModelWithNormalizationLayer(Variable feature, int hiddenDim, int outputDim, Tuple<NDArrayView, NDArrayView> avgStdConstants, DeviceDescriptor device)
        {
            //First the parameters initialization must be performed
            var glorotInit = CNTKLib.GlorotUniformInitializer(
                    CNTKLib.DefaultParamInitScale,
                    CNTKLib.SentinelValueForInferParamInitRank,
                    CNTKLib.SentinelValueForInferParamInitRank, 1);

            //*******Input layer is indicated as feature
            var inputLayer = feature;

            //*******Normalization layer
            var mean = new Constant(avgStdConstants.Item1, "mean");
            var std = new Constant(avgStdConstants.Item2, "std");
            var normalizedLayer = CNTKLib.PerDimMeanVarianceNormalize(inputLayer, mean, std);

            //*****hidden layer creation
            //shape of the hidden layer weight matrix is hiddenDim x inputDim (4 features)
            var shape = new int[] { hiddenDim, 4 };
            var weightParam = new Parameter(shape, DataType.Float, glorotInit, device, "wh");
            var biasParam = new Parameter(new NDShape(1, hiddenDim), 0, device, "bh");
            var hidLay = CNTKLib.Times(weightParam, normalizedLayer) + biasParam;
            var hidLayerAct = CNTKLib.ReLU(hidLay);

            //******Output layer creation
            //the last action is creation of the output layer
            var shapeOut = new int[] { 3, hiddenDim };
            var wParamOut = new Parameter(shapeOut, DataType.Float, glorotInit, device, "wo");
            var bParamOut = new Parameter(new NDShape(1, 3), 0, device, "bo");
            var outLay = CNTKLib.Times(wParamOut, hidLayerAct) + bParamOut;
            return outLay;
        }
    }
}

The output window should look like:

2018-07-13_18-20-58

The data set files used in the example, as well as the full source code demo, are available for download.

Descriptive statistics and data normalization with CNTK and C#


As you probably know, CNTK is the Microsoft Cognitive Toolkit for deep learning. It is an open source library which is used by various Microsoft products. CNTK is also a powerful library for developing custom ML solutions in various fields, with different platforms and languages. What is also so powerful in CNTK is the way it is implemented. In fact, the library is implemented as a series of computation graphs, which are fully elaborated into the sequence of steps performed in a deep neural network training.

Each CNTK compute graph is created with a set of nodes, where each node represents a numerical (mathematical) operation. The edges between nodes in the graph represent the data flow between operations. Such a representation allows CNTK to schedule computation on the underlying hardware, GPU or CPU. CNTK can dynamically analyze the graphs in order to optimize both latency and efficient use of resources. The most powerful part of this is the fact that CNTK can calculate the derivative of any constructed set of operations, which can be used for an efficient learning process of the network parameters. The following image shows the core architecture of CNTK.

On the other hand, any operation can be executed on CPU or GPU with minimal code changes. In fact, we can implement a method which automatically takes GPU computation if available. CNTK is the first .NET library which enables .NET developers to develop GPU aware .NET applications.

What this exactly means is that with this powerful library you can run complex math computation directly on the GPU in .NET using C#, which is currently not possible with the standard .NET library.

In this blog post I will show how to calculate some basic statistics operations on a data set.

Say we have a data set with 4 columns (features) and 20 rows (samples). The C# implementation of this 2D array is shown in the following code snippet:

static float[][] mData = new float[][] {
new float[] { 5.1f, 3.5f, 1.4f, 0.2f},
new float[] { 4.9f, 3.0f, 1.4f, 0.2f},
new float[] { 4.7f, 3.2f, 1.3f, 0.2f},
new float[] { 4.6f, 3.1f, 1.5f, 0.2f},
new float[] { 6.9f, 3.1f, 4.9f, 1.5f},
new float[] { 5.5f, 2.3f, 4.0f, 1.3f},
new float[] { 6.5f, 2.8f, 4.6f, 1.5f},
new float[] { 5.0f, 3.4f, 1.5f, 0.2f},
new float[] { 4.4f, 2.9f, 1.4f, 0.2f},
new float[] { 4.9f, 3.1f, 1.5f, 0.1f},
new float[] { 5.4f, 3.7f, 1.5f, 0.2f},
new float[] { 4.8f, 3.4f, 1.6f, 0.2f},
new float[] { 4.8f, 3.0f, 1.4f, 0.1f},
new float[] { 4.3f, 3.0f, 1.1f, 0.1f},
new float[] { 6.5f, 3.0f, 5.8f, 2.2f},
new float[] { 7.6f, 3.0f, 6.6f, 2.1f},
new float[] { 4.9f, 2.5f, 4.5f, 1.7f},
new float[] { 7.3f, 2.9f, 6.3f, 1.8f},
new float[] { 5.7f, 3.8f, 1.7f, 0.3f},
new float[] { 5.1f, 3.8f, 1.5f, 0.3f},};

If you want to play with CNTK and math calculation you need some knowledge of calculus, as well as vectors, matrices and tensors. Also, in CNTK any operation is performed as a matrix operation, which may simplify the calculation process for you. In the standard way, you would have to deal with multidimensional arrays during calculations. To my knowledge, currently there is no other .NET library which can perform math operations on the GPU, which constrains the .NET platform for the implementation of high performance applications.

If we want to compute the average value and standard deviation for each column, we can do that with CNTK in a very easy way. Once we compute those values, we can use them for normalizing the data set by computing the Gauss standardization.

The Gauss standardization is calculated by the following term:

nValue = \frac{X-\mu}{\sigma},

where X is the column value, \mu the column mean, and \sigma the standard deviation of the column. For example, a value X=6.5 in a column with \mu=5.5 and \sigma=1.0 is standardized to nValue=1.0.

For this example we are going to perform three statistical operations, and CNTK automatically provides us with the ability to compute those values on the GPU. This is very important in case you have a data set with millions of rows, where the computation can be performed in a few milliseconds.

Any computation process in CNTK can be achieved in several steps:

1. Read data from an external source or in-memory data,
2. Define Value and Variable objects,
3. Define the Function for the calculation,
4. Perform the evaluation of the function by passing the Variable and Value objects,
5. Retrieve the result of the calculation and show it.

All above steps are implemented in the following implementation:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using CNTK;
namespace DataNormalizationWithCNTK
{
    class Program
    {
       static float[][] mData = new float[][] {
        new float[] { 5.1f, 3.5f, 1.4f, 0.2f},
        new float[] { 4.9f, 3.0f, 1.4f, 0.2f},
        new float[] { 4.7f, 3.2f, 1.3f, 0.2f},
        new float[] { 4.6f, 3.1f, 1.5f, 0.2f},
        new float[] { 6.9f, 3.1f, 4.9f, 1.5f},
        new float[] { 5.5f, 2.3f, 4.0f, 1.3f},
        new float[] { 6.5f, 2.8f, 4.6f, 1.5f},
        new float[] { 5.0f, 3.4f, 1.5f, 0.2f},
        new float[] { 4.4f, 2.9f, 1.4f, 0.2f},
        new float[] { 4.9f, 3.1f, 1.5f, 0.1f},
        new float[] { 5.4f, 3.7f, 1.5f, 0.2f},
        new float[] { 4.8f, 3.4f, 1.6f, 0.2f},
        new float[] { 4.8f, 3.0f, 1.4f, 0.1f},
        new float[] { 4.3f, 3.0f, 1.1f, 0.1f},
        new float[] { 6.5f, 3.0f, 5.8f, 2.2f},
        new float[] { 7.6f, 3.0f, 6.6f, 2.1f},
        new float[] { 4.9f, 2.5f, 4.5f, 1.7f},
        new float[] { 7.3f, 2.9f, 6.3f, 1.8f},
        new float[] { 5.7f, 3.8f, 1.7f, 0.3f},
        new float[] { 5.1f, 3.8f, 1.5f, 0.3f},};
        static void Main(string[] args)
        {
            //define device where the calculation will executes
            var device = DeviceDescriptor.UseDefaultDevice();

            //print data to console
            Console.WriteLine($"X1,\tX2,\tX3,\tX4");
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");
            foreach (var row in mData)
            {
                Console.WriteLine($"{row[0]},\t{row[1]},\t{row[2]},\t{row[3]}");
            }
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");


            //convert data into enumerable list
            var data = mData.ToEnumerable<IEnumerable<float>>();

            
            //assign the values 
            var vData = Value.CreateBatchOfSequences<float>(new int[] {4},data, device);
            //create variable to describe the data
            var features = Variable.InputVariable(vData.Shape, DataType.Float);

            //define mean function for the variable
            var mean =  CNTKLib.ReduceMean(features, new Axis(2));//Axis(2)- means calculate mean along the third axes which represent 4 features
            
            //map variables and data
            var inputDataMap = new Dictionary<Variable, Value>() { { features, vData } };
            var meanDataMap = new Dictionary<Variable, Value>() { { mean, null } };

            //mean calculation
            mean.Evaluate(inputDataMap,meanDataMap,device);
            //get result
            var meanValues = meanDataMap[mean].GetDenseData<float>(mean);

            Console.WriteLine($"");
            Console.WriteLine($"Average values for each features x1={meanValues[0][0]},x2={meanValues[0][1]},x3={meanValues[0][2]},x4={meanValues[0][3]}");

            //Calculation of standard deviation
            var std = calculateStd(features);
            var stdDataMap = new Dictionary<Variable, Value>() { { std, null } };
            //mean calculation
            std.Evaluate(inputDataMap, stdDataMap, device);
            //get result
            var stdValues = stdDataMap[std].GetDenseData<float>(std);
            
            Console.WriteLine($"");
            Console.WriteLine($"STD of features x1={stdValues[0][0]},x2={stdValues[0][1]},x3={stdValues[0][2]},x4={stdValues[0][3]}");

            //Once we have mean and std we can calculate Standardized values for the data
            var gaussNormalization = CNTKLib.ElementDivide(CNTKLib.Minus(features, mean), std);
            var gaussDataMap = new Dictionary<Variable, Value>() { { gaussNormalization, null } };
            //mean calculation
            gaussNormalization.Evaluate(inputDataMap, gaussDataMap, device);

            //get result
            var normValues = gaussDataMap[gaussNormalization].GetDenseData<float>(gaussNormalization);
            //print data to console
            Console.WriteLine($"-------------------------------------------");
            Console.WriteLine($"Normalized values for the above data set");
            Console.WriteLine($"");
            Console.WriteLine($"X1,\tX2,\tX3,\tX4");
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");
            var row2 = normValues[0];
            for (int j = 0; j < 80; j += 4)
            {
                Console.WriteLine($"{row2[j]},\t{row2[j + 1]},\t{row2[j + 2]},\t{row2[j + 3]}");
            }
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");
        }

        private static Function calculateStd(Variable features)
        {
            var mean = CNTKLib.ReduceMean(features,new Axis(2));
            var remainder = CNTKLib.Minus(features, mean);
            var squared = CNTKLib.Square(remainder);
            //the last dimension indicate the number of samples
            var n = new Constant(new NDShape(0), DataType.Float, features.Shape.Dimensions.Last()-1);
            var elm = CNTKLib.ElementDivide(squared, n);
            var sum = CNTKLib.ReduceSum(elm, new Axis(2));
            var stdVal = CNTKLib.Sqrt(sum);
            return stdVal;
        }
    }

    public static class ArrayExtensions
    {
        public static IEnumerable<T> ToEnumerable<T>(this Array target)
        {
            foreach (var item in target)
                yield return (T)item;
        }
    }
}

The output for the source code above should look like:

ANNdotNET – the first GUI based CNTK tool


anndotnet-logo
ANNdotNET is a Windows desktop application written in C# for creating and training ANN models. The application relies on the Microsoft Cognitive Toolkit, CNTK, and it is supposed to be a GUI tool for the CNTK library with extensions in data preprocessing, model evaluation and exporting capabilities. It is hosted at GitHub and can be cloned from the project repository.

Currently, ANNdotNET supports the following types of ANN:

  • Simple Feed Forward NN
  • Deep Feed Forward NN
  • Recurrent NN with LSTM

The process of creating, training, evaluating and exporting models is provided by the GUI application and does not require knowledge of the supported programming languages. ANNdotNET is ideal for engineers who are not familiar with programming languages.

Software Requirements

ANNdotNET is an x64 Windows desktop application running on .NET Framework 4.7.1. In order to run the application, the following requirements must be met:

– Windows 7, 8 or 10 with x64 architecture
– NET Framework 4.7.1
– CPU/GPU support.

Note: The application automatically detects GPU capability on your machine and uses it for training and evaluation; otherwise it will use the CPU.
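The same check can be done programmatically with the CNTK C# API used throughout these posts; a minimal sketch:

//ask CNTK for the default device; it resolves to a GPU when one is available
var device = DeviceDescriptor.UseDefaultDevice();
Console.WriteLine($"Computation device: {device.Type}"); //GPU or CPU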

How to run application

In order to run the application there are two possibilities:

  1. Clone the GitHub repository of the application, open it in Visual Studio 2017, change the build architecture to x64, then build and run the application.
  2. Download the released version, unzip it and run ANNdotNET.exe.

The following three short videos quickly show how to create, train and evaluate regression, binary and multi class classification models.

  1. Training a regression model. The data set is downloaded from the public repository and loaded into ANNdotNET without any modification, since the data preparation module can prepare it.

  2. Training and evaluating a binary classifier model. The data represents the Titanic data set downloaded from the public repository.

  3. Training and evaluating multi class classification models. The data represents the Iris data set downloaded from the same page as above.