Input normalization as separate layer in CNTK with C#


In the previous post, we saw how to calculate some of the basic parameters of descriptive statistics, as well as how to normalize data by calculating the mean and standard deviation. In this blog post we are going to implement data normalization as a regular neural network layer, which can simplify the training process and data preparation.

What is Data normalization?

Simply put, data normalization is a set of tasks which transform the values of any feature in a data set into a predefined number range. Usually this range is [-1,1], [0,1] or some other specific range. Data normalization plays a very important role in ML, since it can dramatically improve the training process and simplify the settings of network parameters.

There are two main types of data normalization:
– MinMax normalization – transforms all values into the range [0,1],
– Gauss normalization or Z-score normalization – transforms the values in such a way that the average value is zero and the standard deviation is 1.

Besides those two types, there are plenty of other methods which can be used. Usually those two are used when the size of the data set is known; otherwise we should use one of the other methods, like log scaling, dividing every value by some constant, etc. But why does data need to be normalized? This is an essential question in ML, and the simplest answer is: to give every feature an equal influence on the output label. More about data normalization and scaling can be found on this link.
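As a quick illustration, here is a minimal sketch of the two normalization types applied to a single feature column. It is plain C# with LINQ, with no CNTK involved, and the class and method names are purely illustrative:

using System;
using System.Linq;

static class NormalizationSketch
{
    //MinMax normalization: maps the column linearly into the range [0,1]
    public static float[] MinMax(float[] column)
    {
        float min = column.Min(), max = column.Max();
        return column.Select(v => (v - min) / (max - min)).ToArray();
    }

    //Z-score normalization: zero mean and unit standard deviation (sample std, n-1)
    public static float[] ZScore(float[] column)
    {
        float mean = column.Average();
        float std = (float)Math.Sqrt(column.Sum(v => (v - mean) * (v - mean)) / (column.Length - 1));
        return column.Select(v => (v - mean) / std).ToArray();
    }
}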

In this blog post we are going to implement a CNTK neural network which contains a “Normalization layer” between the input and the first hidden layer. The schematic picture of the network looks like the following image:

As can be observed, the Normalization layer is placed between the input and the first hidden layer. Also, the Normalization layer contains the same number of neurons as the input layer and produces output with the same dimension as the input layer.

In order to implement the Normalization layer, the following requirements must be met:

  • calculate the average \mu and standard deviation \sigma of the training data set, as well as the maximum and minimum value of each feature,
  • this must be done prior to neural network model creation, since we need those values in the normalization layer,
  • within the network model creation, the normalization layer should be defined right after the input layer.

Calculation of mean and standard deviation for training data set

Before network creation, we should prepare the mean and standard deviation parameters which will be used in the Normalization layer as constants. Fortunately, CNTK provides a static method on the MinibatchSource class for this purpose: “MinibatchSource.ComputeInputPerDimMeansAndInvStdDevs”. The method takes the whole training data set defined in the minibatch source and calculates the parameters. Note that, as the name says, the second value it computes per feature is the inverse standard deviation, which is exactly the form the normalization function used below expects.


//calculate mean and std for the minibatch source
// prepare the training data
var d = new Dictionary<StreamInformation, Tuple<NDArrayView, NDArrayView>>();
using (var mbs = MinibatchSource.TextFormatMinibatchSource(
    trainingDataPath, streamConfig, MinibatchSource.FullDataSweep, false))
{
    d.Add(mbs.StreamInfo("feature"), new Tuple<NDArrayView, NDArrayView>(null, null));
    //compute mean and standard deviation of the population for the input variables
    MinibatchSource.ComputeInputPerDimMeansAndInvStdDevs(mbs, d, device);
}

Now that we have the mean and the inverse standard deviation of each feature, we can create a network with a normalization layer. In this example we define a simple feed forward NN with 1 input, 1 normalization, 1 hidden and 1 output layer.


private static Function createFFModelWithNormalizationLayer(Variable feature, int hiddenDim, int outputDim, Tuple<NDArrayView, NDArrayView> avgStdConstants, DeviceDescriptor device)
{
    //First the parameters initialization must be performed
    var glorotInit = CNTKLib.GlorotUniformInitializer(
        CNTKLib.DefaultParamInitScale,
        CNTKLib.SentinelValueForInferParamInitRank,
        CNTKLib.SentinelValueForInferParamInitRank, 1);

    //*******Input layer is indicated as feature
    var inputLayer = feature;

    //*******Normalization layer
    //note: Item2 holds the inverse standard deviations, which is what PerDimMeanVarianceNormalize expects
    var mean = new Constant(avgStdConstants.Item1, "mean");
    var std = new Constant(avgStdConstants.Item2, "std");
    var normalizedLayer = CNTKLib.PerDimMeanVarianceNormalize(inputLayer, mean, std);

    //*****hidden layer creation
    //shape of the hidden layer weights is neuronCount x inputDim
    var shape = new int[] { hiddenDim, feature.Shape[0] };
    var weightParam = new Parameter(shape, DataType.Float, glorotInit, device, "wh");
    var biasParam = new Parameter(new NDShape(1, hiddenDim), 0, device, "bh");
    var hidLay = CNTKLib.Times(weightParam, normalizedLayer) + biasParam;
    var hidLayerAct = CNTKLib.ReLU(hidLay);

    //******Output layer creation
    //the last action is creation of the output layer
    var shapeOut = new int[] { outputDim, hiddenDim };
    var wParamOut = new Parameter(shapeOut, DataType.Float, glorotInit, device, "wo");
    var bParamOut = new Parameter(new NDShape(1, outputDim), 0, device, "bo");
    var outLay = CNTKLib.Times(wParamOut, hidLayerAct) + bParamOut;
    return outLay;
}

Complete Source Code Example

The complete source code for this example is listed below. The example shows how to normalize the input features of the famous Iris data set. Notice that with this way of data normalization, we don’t need to handle normalization of the validation or testing data sets, because data normalization is part of the network model.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using CNTK;
namespace NormalizationLayerDemo
{
    class Program
    {
        static string trainingDataPath = "./data/iris_training.txt";
        static string validationDataPath = "./data/iris_validation.txt";
        static void Main(string[] args)
        {
            DeviceDescriptor device = DeviceDescriptor.UseDefaultDevice();

            //stream configuration to distinct features and labels in the file
            var streamConfig = new StreamConfiguration[]
               {
                   new StreamConfiguration("feature", 4),
                   new StreamConfiguration("flower", 3)
               };

            // build a NN model
            //define input and output variable and connecting to the stream configuration
            var feature = Variable.InputVariable(new NDShape(1, 4), DataType.Float, "feature");
            var label = Variable.InputVariable(new NDShape(1, 3), DataType.Float, "flower");

            //calculate mean and std for the minibatchsource
            // prepare the training data
            var d = new Dictionary<StreamInformation, Tuple<NDArrayView, NDArrayView>>();
            using (var mbs = MinibatchSource.TextFormatMinibatchSource(
               trainingDataPath , streamConfig, MinibatchSource.FullDataSweep,false))
            {
                d.Add(mbs.StreamInfo("feature"), new Tuple<NDArrayView, NDArrayView>(null, null));
                //compute mean and standard deviation of the population for inputs variables
                MinibatchSource.ComputeInputPerDimMeansAndInvStdDevs(mbs, d, device);

            }

            //Build simple Feed Froward Neural Network with normalization layer
            var ffnn_model = createFFModelWithNormalizationLayer(feature,5,3,d.ElementAt(0).Value, device);

            //Loss and error functions definition
            var trainingLoss = CNTKLib.CrossEntropyWithSoftmax(new Variable(ffnn_model), label, "lossFunction");
            var classError = CNTKLib.ClassificationError(new Variable(ffnn_model), label, "classificationError");

            // set learning rate for the network
            var learningRatePerSample = new TrainingParameterScheduleDouble(0.01, 1);

            //define learners for the NN model
            var ll = Learner.SGDLearner(ffnn_model.Parameters(), learningRatePerSample);

            //define trainer based on model, loss and error functions , and SGD learner
            var trainer = Trainer.CreateTrainer(ffnn_model, trainingLoss, classError, new Learner[] { ll });

            //Preparation for the iterative learning process

            // create minibatch for training
            var mbsTraining = MinibatchSource.TextFormatMinibatchSource(trainingDataPath, streamConfig, MinibatchSource.InfinitelyRepeat, true);

            int epoch = 1;
            while (epoch < 20)
            {
                //the minibatch size and the epoch limit were lost in formatting; the values here are illustrative
                var minibatchData = mbsTraining.GetNextMinibatch(65, device);

                //map the minibatch streams to the input and output variables
                var arguments = new Dictionary<Variable, MinibatchData>
                {
                    { feature, minibatchData[mbsTraining.StreamInfo("feature")] },
                    { label, minibatchData[mbsTraining.StreamInfo("flower")] }
                };

                //train the network with the current minibatch
                trainer.TrainMinibatch(arguments, device);

                //once a full sweep through the training data is completed, report progress and count the epoch
                if (minibatchData.Values.Any(a => a.sweepEnd))
                {
                    reportTrainingProgress(feature, label, streamConfig, trainer, epoch, device);
                    epoch++;
                }
            }
            Console.Read();
        }

        private static void reportTrainingProgress(Variable feature, Variable label, StreamConfiguration[] streamConfig,  Trainer trainer, int epoch, DeviceDescriptor device)
        {
            // create minibatch for training
            var mbsTrain = MinibatchSource.TextFormatMinibatchSource(trainingDataPath, streamConfig, MinibatchSource.FullDataSweep, false);
            var trainD = mbsTrain.GetNextMinibatch(int.MaxValue, device);
            //
            var a1 = new UnorderedMapVariableMinibatchData();
            a1.Add(feature, trainD[mbsTrain.StreamInfo("feature")]);
            a1.Add(label, trainD[mbsTrain.StreamInfo("flower")]);
            var trainEvaluation = trainer.TestMinibatch(a1);

            // create minibatch for validation
            var mbsVal = MinibatchSource.TextFormatMinibatchSource(validationDataPath, streamConfig, MinibatchSource.FullDataSweep, false);
            var valD = mbsVal.GetNextMinibatch(int.MaxValue, device);

            //
            var a2 = new UnorderedMapVariableMinibatchData();
            a2.Add(feature, valD[mbsVal.StreamInfo("feature")]);
            a2.Add(label, valD[mbsVal.StreamInfo("flower")]);
            var valEvaluation = trainer.TestMinibatch(a2);

            Console.WriteLine($"Epoch={epoch}, Train Error={trainEvaluation}, Validation Error={valEvaluation}");
        }

        private static Function createFFModelWithNormalizationLayer(Variable feature, int hiddenDim, int outputDim, Tuple<NDArrayView, NDArrayView> avgStdConstants, DeviceDescriptor device)
        {
            //First the parameters initialization must be performed
            var glorotInit = CNTKLib.GlorotUniformInitializer(
                    CNTKLib.DefaultParamInitScale,
                    CNTKLib.SentinelValueForInferParamInitRank,
                    CNTKLib.SentinelValueForInferParamInitRank, 1);

            //*******Input layer is indicated as feature
            var inputLayer = feature;

            //*******Normalization layer
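            //note: Item2 holds the inverse standard deviations, which is exactly the form PerDimMeanVarianceNormalize expects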
            var mean = new Constant(avgStdConstants.Item1, "mean");
            var std = new Constant(avgStdConstants.Item2, "std");
            var normalizedLayer = CNTKLib.PerDimMeanVarianceNormalize(inputLayer, mean, std);

            //*****hidden layer creation
            //shape of the hidden layer weights is neuronCount x inputDim
            var shape = new int[] { hiddenDim, feature.Shape[0] };
            var weightParam = new Parameter(shape, DataType.Float, glorotInit, device, "wh");
            var biasParam = new Parameter(new NDShape(1, hiddenDim), 0, device, "bh");
            var hidLay = CNTKLib.Times(weightParam, normalizedLayer) + biasParam;
            var hidLayerAct = CNTKLib.ReLU(hidLay);

            //******Output layer creation
            //the last action is creation of the output layer
            var shapeOut = new int[] { outputDim, hiddenDim };
            var wParamOut = new Parameter(shapeOut, DataType.Float, glorotInit, device, "wo");
            var bParamOut = new Parameter(new NDShape(1, outputDim), 0, device, "bo");
            var outLay = CNTKLib.Times(wParamOut, hidLayerAct) + bParamOut;
            return outLay;
        }
    }
}

The output window should look like this:


The data set files used in the example can be downloaded from here.


Descriptive statistics and data normalization with CNTK and C#


As you probably know, CNTK is the Microsoft Cognitive Toolkit for deep learning. It is an open source library which is used by various Microsoft products. CNTK is also a powerful library for developing custom ML solutions in various fields, on different platforms and in different languages. What is also powerful in CNTK is the way it is implemented. In fact, the library is implemented as a series of computation graphs, in which the training of a deep neural network is fully elaborated into a sequence of steps.

Each CNTK compute graph is created as a set of nodes, where each node represents a numerical (mathematical) operation. The edges between nodes in the graph represent the data flow between operations. Such a representation allows CNTK to schedule computation on the underlying hardware, GPU or CPU. CNTK can dynamically analyze the graphs in order to optimize both latency and efficient use of resources. The most powerful part of this is the fact that CNTK can calculate the derivative of any constructed set of operations, which is used for efficient learning of the network parameters. The following image shows the core architecture of CNTK.
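To get a feeling for how such a graph looks in C#, here is a minimal, hypothetical sketch (it is not part of the original post) which builds the graph z = 2*x + 1 and evaluates it for x = 3:

var device = DeviceDescriptor.UseDefaultDevice();

//graph definition: z = 2 * x + 1
var x = Variable.InputVariable(new int[] { 1 }, DataType.Float, "x");
var two = Constant.Scalar<float>(2.0f, device);
var one = Constant.Scalar<float>(1.0f, device);
var z = CNTKLib.Plus(CNTKLib.ElementTimes(two, x), one);

//evaluation: feed x = 3 and read the result back
var xValues = Value.CreateBatch<float>(new int[] { 1 }, new float[] { 3f }, device);
var inputMap = new Dictionary<Variable, Value>() { { x, xValues } };
var outputMap = new Dictionary<Variable, Value>() { { z, null } };
z.Evaluate(inputMap, outputMap, device);

//prints 7
Console.WriteLine(outputMap[z].GetDenseData<float>(z)[0][0]);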

On the other hand, any operation can be executed on a CPU or GPU with minimal code changes. In fact, we can implement a method which automatically picks GPU computation if it is available. CNTK is the first .NET library which enables .NET developers to develop GPU aware .NET applications.
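For example, a small helper along the following lines (a sketch, not from the original post; it assumes a using System.Linq; directive for FirstOrDefault) returns the first GPU device when one is present and falls back to the CPU otherwise:

public static DeviceDescriptor GetDevice()
{
    //take the first GPU if the machine has one, otherwise fall back to the CPU
    var gpu = DeviceDescriptor.AllDevices().FirstOrDefault(d => d.Type == DeviceKind.GPU);
    return gpu ?? DeviceDescriptor.CPUDevice;
}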

What this exactly means is that with this powerful library you can run complex math computations directly on the GPU in .NET using C#, which is currently not possible with the standard .NET library.

In this blog post I will show how to calculate some basic statistics operations on a data set.

Say we have a data set with 4 columns (features) and 20 rows (samples). The C# implementation of this 2D array is shown in the following code snippet:

static float[][] mData = new float[][] {
new float[] { 5.1f, 3.5f, 1.4f, 0.2f},
new float[] { 4.9f, 3.0f, 1.4f, 0.2f},
new float[] { 4.7f, 3.2f, 1.3f, 0.2f},
new float[] { 4.6f, 3.1f, 1.5f, 0.2f},
new float[] { 6.9f, 3.1f, 4.9f, 1.5f},
new float[] { 5.5f, 2.3f, 4.0f, 1.3f},
new float[] { 6.5f, 2.8f, 4.6f, 1.5f},
new float[] { 5.0f, 3.4f, 1.5f, 0.2f},
new float[] { 4.4f, 2.9f, 1.4f, 0.2f},
new float[] { 4.9f, 3.1f, 1.5f, 0.1f},
new float[] { 5.4f, 3.7f, 1.5f, 0.2f},
new float[] { 4.8f, 3.4f, 1.6f, 0.2f},
new float[] { 4.8f, 3.0f, 1.4f, 0.1f},
new float[] { 4.3f, 3.0f, 1.1f, 0.1f},
new float[] { 6.5f, 3.0f, 5.8f, 2.2f},
new float[] { 7.6f, 3.0f, 6.6f, 2.1f},
new float[] { 4.9f, 2.5f, 4.5f, 1.7f},
new float[] { 7.3f, 2.9f, 6.3f, 1.8f},
new float[] { 5.7f, 3.8f, 1.7f, 0.3f},
new float[] { 5.1f, 3.8f, 1.5f, 0.3f},};

If you want to play with CNTK and math calculations you need some knowledge of calculus, as well as of vectors, matrices and tensors. Also, in CNTK every operation is performed as a matrix operation, which may simplify the calculation process for you. In the standard way, you would have to deal with multidimensional arrays during the calculations. To my knowledge, there is currently no other .NET library which can perform math operations on the GPU, which constrains the .NET platform for the implementation of high performance applications.

If we want to compute the average value and the standard deviation for each column, we can do that in a very easy way with CNTK. Once we compute those values we can use them for normalizing the data set by computing the standard score (Gauss standardization).

The Gauss standardization is calculated by the following formula:

nValue= \frac{X-\mu}{\sigma},
where X is the column value, \mu the column mean, and \sigma the standard deviation of the column. For example, the first column of the data set above has mean \mu \approx 5.445 and sample standard deviation \sigma \approx 0.98, so the value 5.1 standardizes to (5.1-5.445)/0.98 \approx -0.35.

For this example we are going to perform three statistics operations, and CNTK automatically provides us with the ability to compute those values on the GPU. This is very important in case you have a data set with millions of rows, where the computation can then be performed in a few milliseconds.

Any computation process in CNTK can be achieved in several steps:

1. read data from an external source or from in-memory data,
2. define Value and Variable objects,
3. define the Function for the calculation,
4. perform the evaluation of the function by passing the Variable and Value objects,
5. retrieve the result of the calculation and show the result.

All above steps are implemented in the following implementation:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using CNTK;
namespace DataNormalizationWithCNTK
{
    class Program
    {
       static float[][] mData = new float[][] {
        new float[] { 5.1f, 3.5f, 1.4f, 0.2f},
        new float[] { 4.9f, 3.0f, 1.4f, 0.2f},
        new float[] { 4.7f, 3.2f, 1.3f, 0.2f},
        new float[] { 4.6f, 3.1f, 1.5f, 0.2f},
        new float[] { 6.9f, 3.1f, 4.9f, 1.5f},
        new float[] { 5.5f, 2.3f, 4.0f, 1.3f},
        new float[] { 6.5f, 2.8f, 4.6f, 1.5f},
        new float[] { 5.0f, 3.4f, 1.5f, 0.2f},
        new float[] { 4.4f, 2.9f, 1.4f, 0.2f},
        new float[] { 4.9f, 3.1f, 1.5f, 0.1f},
        new float[] { 5.4f, 3.7f, 1.5f, 0.2f},
        new float[] { 4.8f, 3.4f, 1.6f, 0.2f},
        new float[] { 4.8f, 3.0f, 1.4f, 0.1f},
        new float[] { 4.3f, 3.0f, 1.1f, 0.1f},
        new float[] { 6.5f, 3.0f, 5.8f, 2.2f},
        new float[] { 7.6f, 3.0f, 6.6f, 2.1f},
        new float[] { 4.9f, 2.5f, 4.5f, 1.7f},
        new float[] { 7.3f, 2.9f, 6.3f, 1.8f},
        new float[] { 5.7f, 3.8f, 1.7f, 0.3f},
        new float[] { 5.1f, 3.8f, 1.5f, 0.3f},};
        static void Main(string[] args)
        {
            //define the device where the calculation will execute
            var device = DeviceDescriptor.UseDefaultDevice();

            //print data to console
            Console.WriteLine($"X1,\tX2,\tX3,\tX4");
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");
            foreach (var row in mData)
            {
                Console.WriteLine($"{row[0]},\t{row[1]},\t{row[2]},\t{row[3]}");
            }
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");


            //convert data into enumerable list
            var data = mData.ToEnumerable<IEnumerable<float>>();

            
            //assign the values 
            var vData = Value.CreateBatchOfSequences<float>(new int[] {4},data, device);
            //create variable to describe the data
            var features = Variable.InputVariable(vData.Shape, DataType.Float);

            //define mean function for the variable
            var mean =  CNTKLib.ReduceMean(features, new Axis(2));//Axis(2) - reduce along the third axis, where the 20 samples lie, so we get one mean per feature
            
            //map variables and data
            var inputDataMap = new Dictionary<Variable, Value>() { { features, vData } };
            var meanDataMap = new Dictionary<Variable, Value>() { { mean, null } };

            //mean calculation
            mean.Evaluate(inputDataMap,meanDataMap,device);
            //get result
            var meanValues = meanDataMap[mean].GetDenseData<float>(mean);

            Console.WriteLine($"");
            Console.WriteLine($"Average values for each features x1={meanValues[0][0]},x2={meanValues[0][1]},x3={meanValues[0][2]},x4={meanValues[0][3]}");

            //Calculation of standard deviation
            var std = calculateStd(features);
            var stdDataMap = new Dictionary<Variable, Value>() { { std, null } };
            //std calculation
            std.Evaluate(inputDataMap, stdDataMap, device);
            //get result
            var stdValues = stdDataMap[std].GetDenseData<float>(std);
            
            Console.WriteLine($"");
            Console.WriteLine($"STD of features x1={stdValues[0][0]},x2={stdValues[0][1]},x3={stdValues[0][2]},x4={stdValues[0][3]}");

            //Once we have mean and std we can calculate Standardized values for the data
            var gaussNormalization = CNTKLib.ElementDivide(CNTKLib.Minus(features, mean), std);
            var gaussDataMap = new Dictionary<Variable, Value>() { { gaussNormalization, null } };
            //normalized values calculation
            gaussNormalization.Evaluate(inputDataMap, gaussDataMap, device);

            //get result
            var normValues = gaussDataMap[gaussNormalization].GetDenseData<float>(gaussNormalization);
            //print data to console
            Console.WriteLine($"-------------------------------------------");
            Console.WriteLine($"Normalized values for the above data set");
            Console.WriteLine($"");
            Console.WriteLine($"X1,\tX2,\tX3,\tX4");
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");
            var row2 = normValues[0];
            for (int j = 0; j < 80; j += 4)
            {
                Console.WriteLine($"{row2[j]},\t{row2[j + 1]},\t{row2[j + 2]},\t{row2[j + 3]}");
            }
            Console.WriteLine($"-----,\t-----,\t-----,\t-----");
        }

        private static Function calculateStd(Variable features)
        {
            var mean = CNTKLib.ReduceMean(features,new Axis(2));
            var remainder = CNTKLib.Minus(features, mean);
            var squared = CNTKLib.Square(remainder);
            //the last dimension indicates the number of samples; divide by n-1 for the sample variance
            var n = new Constant(new NDShape(0), DataType.Float, features.Shape.Dimensions.Last()-1);
            var elm = CNTKLib.ElementDivide(squared, n);
            var sum = CNTKLib.ReduceSum(elm, new Axis(2));
            var stdVal = CNTKLib.Sqrt(sum);
            return stdVal;
        }
    }

    public static class ArrayExtensions
    {
        public static IEnumerable<T> ToEnumerable<T>(this Array target)
        {
            foreach (var item in target)
                yield return (T)item;
        }
    }
}

The output of the source code above should look like the following (for example, the computed means should come out to approximately x1 ≈ 5.45, x2 ≈ 3.13, x3 ≈ 2.78 and x4 ≈ 0.73):

CNTK 106 Tutorial – Time Series prediction with LSTM using C#


In this post I will show how to implement the CNTK 106 Tutorial in C#. This tutorial lecture is written in Python and there is no related example in C#. For this reason I decided to translate this very good tutorial into C#. The tutorial can be found at: CNTK 106: Part A – Time series prediction with LSTM (Basics), and uses the sin wave function in order to predict time series data. For this problem the Long Short-Term Memory (LSTM) recurrent neural network is used.

Goal

The goal of this tutorial is the prediction of simulated data of a continuous function (a sin wave). From N previous values of the y=sin(t) function, where y is the observed amplitude signal at time t, M future values of y are predicted for the corresponding future time points. For example, with N=5 and M=1, the model sees the last five amplitudes and predicts the next one.

The exciting part of this tutorial is the use of the LSTM recurrent neural network, which is nicely suited for this kind of problem. As you probably know, LSTM is a special recurrent neural network which has the ability to learn from its experience during the training. More information about this fantastic version of recurrent neural network can be found here.

The blog post is divided into several sub-sections:

  1. Simulated data part
  2. LSTM Network
  3. Model training and evaluation

Since the simulated data set is huge, the original tutorial has two running modes, described by the variable isFast. In fast mode, the variable is set to True, and this mode will be used in this tutorial. Later, the reader may change the value to False in order to see a much better trained model, but the training time will be much longer. The demo for this blog post exposes the batch size and iteration number to the user, so the user may define those numbers as he/she wants.

Data generation

In order to generate the simulated sin wave data, we are going to implement several helper methods. Let N and M be ordered sets of past values and of future (desired predicted) values of the sine wave, respectively. The two methods are implemented:

  1. loadWaveDataset()
  2. splitData()

The loadWaveDataset method takes the periodic function and a set of independent values (which corresponds to the time in this case) and generates the wave function, given the time steps and the time shift. The method is related to the generate_data() Python method from the original tutorial.

static Dictionary<string, (float[][] train, float[][] valid, float[][] test)> loadWaveDataset(Func<double, double> fun, float[] x0, int timeSteps, int timeShift)
{
    ////fill data
    float[] xsin = new float[x0.Length];//all data
    for (int l = 0; l < x0.Length; l++)
        xsin[l] = (float)fun(x0[l]);


    //split data on training and testing part
    var a = new float[xsin.Length - timeShift];
    var b = new float[xsin.Length - timeShift];

    for (int l = 0; l < xsin.Length; l++)
    {
        //a holds the first (xsin.Length - timeShift) values
        if (l < xsin.Length - timeShift)
            a[l] = xsin[l];
        //b holds the same values shifted forward by timeShift
        if (l >= timeShift)
            b[l - timeShift] = xsin[l];
    }

    //make arrays of data
    var a1 = new List<float[]>();
    var b1 = new List<float[]>();
    for (int i = 0; i < a.Length - timeSteps + 1; i++)
    {
        //features
        var row = new float[timeSteps];
        for (int j = 0; j < timeSteps; j++)
            row[j] = a[i + j];
        //create features row
        a1.Add(row);
        //label row
        b1.Add(new float[] { b[i + timeSteps - 1] });
    }

    //split data into train, validation and test data set
    var xxx = splitData(a1.ToArray(), 0.1f, 0.1f);
    var yyy = splitData(b1.ToArray(), 0.1f, 0.1f);


    var retVal = new Dictionary<string, (float[][] train, float[][] valid, float[][] test)>();
    retVal.Add("features", xxx);
    retVal.Add("label", yyy);
    return retVal;
}

Once the data is generated, three data sets should be created: the train, validation and test data sets, which are generated by splitting the data set produced by the above method. The following splitData method splits the original sin wave data set into the three data sets:

static (float[][] train, float[][] valid, float[][] test) splitData(float[][] data, float valSize = 0.1f, float testSize = 0.1f)
{
    //calculate
    var posTest = (int)(data.Length * (1 - testSize));
    var posVal = (int)(posTest * (1 - valSize));

    return (data.Skip(0).Take(posVal).ToArray(), data.Skip(posVal).Take(posTest - posVal).ToArray(), data.Skip(posTest).ToArray());
}
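A quick, hypothetical usage sketch of the two helpers above (the argument values here are illustrative only):

//generate 10,000 time points and build the dataset:
//5 past values as features, the value 5 steps ahead as label
var x0 = Enumerable.Range(0, 10000).Select(i => i / 100.0f).ToArray();
var ds = loadWaveDataset(Math.Sin, x0, timeSteps: 5, timeShift: 5);
var featureSet = ds["features"]; //(train, valid, test) feature rows
var labelSet = ds["label"];      //(train, valid, test) label rows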

In order to visualize the data, a Windows Forms project is created. Moreover, the ZedGraph .NET class library is used in order to visualize the data. The following picture shows the generated data.

Network modeling

As mentioned at the beginning of the blog post, we are going to create an LSTM recurrent neural network, with 1 LSTM cell for each input. We have N inputs and each input is a value in our continuous function. The N outputs from the LSTM are the input into a dense layer that produces a single output. Between the LSTM and the dense layer we insert a dropout layer that randomly drops 20% of the values coming from the LSTM to prevent overfitting the model to the training dataset. We want to use the dropout layer during training, but when using the model to make predictions we don’t want to drop values.

The description above can be illustrated on the following picture:

The implementation of the LSTM can be summarized in one method, but the real implementation can be viewed in the demo sample which is attached to this blog post.
The following method implements the LSTM network depicted in the image above. The arguments for the method are already defined.

public static Function CreateModel(Variable input, int outDim, int LSTMDim, int cellDim, DeviceDescriptor device, string outputName)
{

    Func<Variable, Function> pastValueRecurrenceHook = (x) => CNTKLib.PastValue(x);

    //creating LSTM cell for each input variable
    Function LSTMFunction = LSTMPComponentWithSelfStabilization<float>(
        input,
        new int[] { LSTMDim },
        new int[] { cellDim },
        pastValueRecurrenceHook,
        pastValueRecurrenceHook,
        device).Item1;

    //after the LSTM sequence is created return the last cell in order to continue generating the network
    Function lastCell = CNTKLib.SequenceLast(LSTMFunction);

    //implement dropout of 20%
    var dropOut = CNTKLib.Dropout(lastCell,0.2, 1);

    //create last dense layer before output
    var outputLayer =  FullyConnectedLinearLayer(dropOut, outDim, device, outputName);

    return outputLayer;
}

Training the network

In order to train the model, the nextBatch() method is implemented to produce the batches that feed the training function. Note that because CNTK supports variable sequence lengths, we must feed the batches as lists of sequences. This is a convenience function to generate small batches of data, often referred to as minibatches.

private static IEnumerable<(float[] X, float[] Y)> nextBatch(float[][] X, float[][] Y, int mMSize)
{

    float[] asBatch(float[][] data, int start, int count)
    {
        var lst = new List<float>();
        for (int i = start; i < start + count; i++)
        {
            if (i >= data.Length)
                break;

            lst.AddRange(data[i]);
        }
        return lst.ToArray();
    }

    for (int i = 0; i <= X.Length - 1; i += mMSize)
    {
        //take up to mMSize rows for the current minibatch
        var size = X.Length - i;
        if (size > mMSize)
            size = mMSize;

        var x = asBatch(X, i, size);
        var y = asBatch(Y, i, size);

        yield return (x, y);
    }
}
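A short, hypothetical usage sketch of nextBatch, assuming featureSet and labelSet come from the data-generation step above:

//iterate the training set in minibatches of 10 rows
foreach (var (X, Y) in nextBatch(featureSet.train, labelSet.train, 10))
{
    //X and Y are flattened, row-major batches ready for Value.CreateBatch
}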

Note: since this tutorial is implemented as a WinForms C# project which can visualize the training and testing data sets, as well as show the best found model during the training process, there are a lot of other implemented methods which are not mentioned here, but they can be found in the demo source code attached to this blog post.

Key Insight

When working with LSTM the user should pay attention on the following:

Since LSTM must work with axes with unknown dimensions, the variables are defined in a different way than we saw in the previous blog posts. So the input and the output variables are initialized with the following code listing:

// build the model
var feature = Variable.InputVariable(new int[] { inDim }, DataType.Float, featuresName, null, false /*isSparse*/);
var label = Variable.InputVariable(new int[] { ouDim }, DataType.Float, labelsName, new List<CNTK.Axis>() { CNTK.Axis.DefaultBatchAxis() }, false);

As specified in the original tutorial: “Specifying the dynamic axes enables the recurrence engine to handle the time sequence data in the expected order. Please take time to understand how to work with both static and dynamic axes in CNTK as described here.” The dynamic axes are a key point in LSTM.
Now the implementation continues with defining the learning rate, the momentum, the learner and the trainer.

 
var lstmModel = LSTMHelper.CreateModel(feature, ouDim, hiDim, cellDim, device, "timeSeriesOutput");

Function trainingLoss = CNTKLib.SquaredError(lstmModel, label, "squarederrorLoss");
Function prediction = CNTKLib.SquaredError(lstmModel, label, "squarederrorEval");


// prepare for training
TrainingParameterScheduleDouble learningRatePerSample = new TrainingParameterScheduleDouble(0.0005, 1);
TrainingParameterScheduleDouble momentumTimeConstant = CNTKLib.MomentumAsTimeConstantSchedule(256);

IList<Learner> parameterLearners = new List<Learner>() {
    Learner.MomentumSGDLearner(lstmModel.Parameters(), learningRatePerSample, momentumTimeConstant, /*unitGainMomentum = */true)  };

//create trainer
var trainer = Trainer.CreateTrainer(lstmModel, trainingLoss, prediction, parameterLearners);

Now the code is ready, and 10 epochs should return an acceptable result:

 
// train the model
for (int i = 1; i <= iteration; i++)
{
    //get the next minibatch amount of data
    foreach (var miniBatchData in nextBatch(featureSet.train, labelSet.train, batchSize))
    {
        var xValues = Value.CreateBatch<float>(new NDShape(1, inDim), miniBatchData.X, device);
        var yValues = Value.CreateBatch<float>(new NDShape(1, ouDim), miniBatchData.Y, device);

        //Combine variables and data in to Dictionary for the training
        var batchData = new Dictionary<Variable, Value>();
        batchData.Add(feature, xValues);
        batchData.Add(label, yValues);

        //train the minibatch data
        trainer.TrainMinibatch(batchData, device);
    }

    if (this.InvokeRequired)
    {
        // Execute the same method, but this time on the GUI thread
        this.Invoke(
            new Action(() =>
            {
                //output training process
                progressReport(trainer, lstmModel.Clone(), i, device);
            }
            ));
    }
    else
    {
        //output training process
        progressReport(trainer, lstmModel.Clone(), i, device);

    }             
}

Model Evaluation

Model evaluation is implemented during the training process. In this way we can see the learning process and how the model gets better and better.

For each minibatch the progress method is called, which updates the charts for the training and testing data sets.

void progressReport(Trainer trainer, Function model, int iteration, DeviceDescriptor device)
{
    textBox3.Text = iteration.ToString();
    textBox4.Text = trainer.PreviousMinibatchLossAverage().ToString();
    progressBar1.Value = iteration;

    reportOnGraphs(trainer, model, iteration, device);
}

private void reportOnGraphs(Trainer trainer, Function model, int i, DeviceDescriptor device)
{
    currentModelEvaluation(trainer, model, i, device);
    currentModelTest(trainer, model, i, device);
}

The following picture shows the training process, where the model evaluation is shown simultaneously for the training and the testing data sets.
The Loss value during the training is shown as well.

As can be seen, the blog post extends the original tutorial with some handy tricks for the training process. This demo is also a good starting point for developing a better tool for LSTM time series training. The full source code of this blog post, which contains much more implementation than presented here, can be found here.

Deploy CNTK model to Excel using C#


In the last blog post, we saw how to save the model state (checkpoint) in order to load it and train it again. We also saw how to save the model for evaluation and testing. In fact, we saw how to prepare the model to be production ready.

Once we finish with the modelling process, we enter the production phase, where we install the model in a place where it can solve real world problems. This blog post describes how a trained CNTK model can be exported to Excel as an Add-In and used like an ordinary Excel formula.

Preparing and deploying CNTK model

From the previous posts we saw how to train and save the model. This will be our starting point for this post.

Assume we trained and saved the model for evaluation in the previous blog post with the file name “IrisModel.model”. The model predicts the Iris flower type based on 4 input parameters, as we saw earlier.

  1. The first step is to create a .NET Class Library and install the following NuGet packages:
    1. CNTK CPU Only ver. 2.3
    2. Excel-DNA Add-In
  2. Include the saved IrisModel.model file in the project as Content, so that it is copied into the Debug folder of the application.

As we can see, for this export we need the Excel-DNA Add-In, a fantastic library for exposing almost anything as an Excel Add-In. It can be installed as a NuGet package, and more information can be found at http://excel-dna.net/.

The following picture shows the three actions above.

Once we prepare everything, we can start with the implementation of the Excel Addin.

  1. Rename Class.cs to IrisModel.cs, and implement two methods:
    1. public static string IrisEval(object arg) and
    2. private static string EvaluateModel(float[] vector).

The first method is the Excel function which will be called directly in Excel, and the second method is similar to the model evaluation from the previous blog post. The following code snippet shows the implementation of both methods:

[ExcelFunction(Description = "IrisEval - Prediction for the Iris flower based on 4 input values.")]
public static string IrisEval(object arg)
{
    try
    {
        //First convert object in to array
        object[,] obj = (object[,])arg;

        //create list to convert values
        List<float> calculatedOutput = new List<float>();
        //
        foreach (var s in obj)
        {
            var ss = float.Parse(s.ToString(), CultureInfo.InvariantCulture);
            calculatedOutput.Add(ss);
        }
        if (calculatedOutput.Count != 4)
            throw new Exception("Incorrect number of input variables. It must be 4!");
        return EvaluateModel(calculatedOutput.ToArray());
    }
    catch (Exception ex)
    {
        return ex.Message;
    }

}
private static string EvaluateModel(float[] vector)
{
    //load the model from disk
    var ffnn_model = Function.Load(@"IrisModel.model", DeviceDescriptor.CPUDevice);

    //extract features and label from the model
    Variable feature = ffnn_model.Arguments[0];
    Variable label = ffnn_model.Output;

    Value xValues = Value.CreateBatch<float>(new int[] { feature.Shape[0] }, vector, DeviceDescriptor.CPUDevice);
    //Value yValues = - we don't need it, because we are going to calculate it

    //map the variables and values
    var inputDataMap = new Dictionary<Variable, Value>();
    inputDataMap.Add(feature, xValues);
    var outputDataMap = new Dictionary<Variable, Value>();
    outputDataMap.Add(label, null);

    //evaluate the model
    ffnn_model.Evaluate(inputDataMap, outputDataMap, DeviceDescriptor.CPUDevice);
    //extract the result  as one hot vector
    var outputData = outputDataMap[label].GetDenseData<float>(label);
    var actualLabels = outputData.Select(l => l.IndexOf(l.Max())).ToList();
    var flower = actualLabels.FirstOrDefault();
    var strFlower = flower == 0 ? "setosa" : flower == 1 ? "versicolor" : "virginica";
    return strFlower;
}

That is all we need for model evaluation in Excel.

Notice that the project must be built for the x64 architecture, and the installed Excel must also be the x64 version. This demo will not work with 32-bit Excel.

Rebuild the project and open the Excel file with the Iris data set. You can use the file included in the demo project, specially prepared for this blog post.

  • Go to Excel -> Options -> Add-Ins,
    • install the ExportCNTKToExcel-AddIn64-packed add-in file.

  • Start typing the Excel formula:
    • IrisEval(A1:D1) and press Enter. And the magic happens.

 

Complete source code for this blog post can be found here.

 

How to save CNTK model to file in C#


The final product of the training process is a model which should be safe, reliable and accurate when used in production. Model training is usually a frustrating and time consuming process, quite unlike the demos that introduce a library. Once the model is built with the right combination of parameter values and network architecture, the modelling process turns into an interesting and fun task, since it calculates the values just as we expect.

Most of the time, we have to save the current state of the model and continue training later, for various reasons:

  • to change the parameters of the learner
  • to switch from one to another machine,
  • or to share the state of the model with your team mate,
  • to switch from CPU to GPU,
  • etc

In all of the above cases the current state of the model should be saved, and the training process continued later from the last state of the model. Training from the beginning is not a solution, since we have already invested time and knowledge into the progress of the model building.

CNTK supports two kinds of model persistence:

  • a production/evaluation ready state, and
  • saving a checkpoint of the model for later training.

In the first case, the model is prepared for evaluation and production, but cannot be trained again, because it is stripped of all information not needed for evaluation. During the saving process, only one file is generated.

In the second case, besides the model file, another file is generated with the name “modelname.ckp”. The file contains all information needed for the continuation of training. Once the trainer checkpoint is persisted, we can continue with model training even if we change the following:

  • the training data set with the same dimensions and data types
  • the parameters of the learner,
  • the learner

What we cannot change in order to continue training is the network model itself. In other words, the model must keep the same number of layers, and the same input and output dimensions.

Saving, loading and evaluating the model

Once the model is trained, it can be persisted as a separate file. As a separate file, it can be loaded and evaluated with a different data set, but the number of features and labels must remain the same as when the model was trained. Use this method when you want to share the model with someone else, or when you want to deploy the model in production.

The model is saved simply by calling the CNTK method Save:

public void SaveTrainedModel(Function model, string fileName)
{
    model.Save(fileName);
}

The model evaluation requires several steps:

  • load the model from the file,
  • extract the features and the label from the model,
  • call the Evaluate method on the model, passing the batch created from the features, the label and the evaluation data set.

The model is loaded by calling the Load method.

public Function LoadTrainedModel(string fileName, DeviceDescriptor device)
{
   return Function.Load(fileName, device, ModelFormat.CNTKv2);
}

Once the model is loaded, the features and the label are extracted from the model in the following way:

//load the model from file
Function ffnn_model = Function.Load(modelFile, device);
//extract features and label from the model
Variable feature = ffnn_model.Arguments[0];
Variable label = ffnn_model.Output;

The next step is creating the minibatch in order to pass the data to the evaluation. In this case we are going to create only one row for the Iris example:

//Example: 5.0f, 3.5f, 1.3f, 0.3f, setosa
float[] xVal = new float[4] { 5.0f, 3.5f, 1.3f, 0.3f };
Value xValues = Value.CreateBatch<float>(new int[] {feature.Shape[0] }, xVal, device);
//Value yValues = - we don't need it, because we are going to calculate it

Once we have created the variables and values we can map them, pass them to the model evaluation, and calculate the result:

//map the variables and values
var inputDataMap = new Dictionary<Variable, Value>();
inputDataMap.Add(feature,xValues);
var outputDataMap = new Dictionary<Variable, Value>();
outputDataMap.Add(label, null);
//evaluate the model
ffnn_model.Evaluate(inputDataMap, outputDataMap, device);
//extract the result  as one hot vector
var outputData = outputDataMap[label].GetDenseData<float>(label);

The evaluation result should be transformed into the proper format and compared with the expected result:

//transforms into class value
var actualLabels = outputData.Select(l => l.IndexOf(l.Max())).ToList();
var flower = actualLabels.FirstOrDefault();
var strFlower = flower == 0 ? "setosa" : flower == 1 ? "versicolor" : "virginica";
Console.WriteLine($"Model Prediction: Input({xVal[0]},{xVal[1]},{xVal[2]},{xVal[3]}), Iris Flower={strFlower}");
Console.WriteLine($"Model Expectation: Input({xVal[0]},{xVal[1]},{xVal[2]},{xVal[3]}), Iris Flower= setosa");

Training a previously saved model

Training a previously saved model is very simple, since it requires no special coding. Right after the trainer is created with all the necessary components (network, learning rate, momentum and others),
you just need to call:

 trainer.RestoreFromCheckpoint(strIrisFilePath);

No additional code should be added.
The above method is called after you have successfully saved the model state by calling:

trainer.SaveCheckpoint(strIrisFilePath);

The method is usually called at the end of the training process.
Complete source code from this blog post can be found here.

How to setup learning rate per iteration in CNTK using C#


So far we have seen how to train and validate models in CNTK using C#. There are also many more details which should be revealed in order to better understand the CNTK library. One of the important features, not only in CNTK but in every DNN (deep neural network), is the learning rate.

In an ANN the learning rate is the number by which the derivative of the loss is multiplied before it is subtracted from the weight: w_{new}= w_{old}-\eta \frac{\partial L}{\partial w}, where \eta is the learning rate and L the loss function. If the learning rate is too big, the weights change too much, the loss function will increase and the network will diverge. On the other hand, if the learning rate is too small, the loss function will change very little and the convergence will be too slow. So selecting the right value of the parameter is important. During the training process, the learning rate is usually defined as a constant value. In CNTK the learning rate is defined as follows:

// set learning rate for the network
var learningRate = new TrainingParameterScheduleDouble(0.2, 1);

In the code above the learning rate is assigned the value 0.2 per sample. This means the whole training process will be done with a learning rate of 0.2.
CNTK supports dynamic changing of the learning rate.
Assume we want to set up different learning rates so that from the first to the 100th iteration the learning rate is 0.2, from the 100th to the 500th iteration it is 0.1, and from the 500th iteration to the end of the training process it is 0.05.

The above can be expressed as:

lr1=0.2 , from 1 to 100 iterations

lr2= 0.1 from 100 to 500 iterations

lr3= 0.05 from 500 to the end of the searching process.

In case we want to set up the learning rate dynamically, we need to use the PairSizeTDouble class in order to define the learning rate. So for the above requirements the following code should be implemented (note that each pair defines how long its rate is used, so the second pair needs 8 x 50 = 400 iterations to cover iterations 100 to 500):

PairSizeTDouble p1 = new PairSizeTDouble(2, 0.2); //2 x 50 = iterations 1-100 at rate 0.2
PairSizeTDouble p2 = new PairSizeTDouble(8, 0.1); //8 x 50 = iterations 100-500 at rate 0.1
PairSizeTDouble p3 = new PairSizeTDouble(1, 0.05);//the last value is used for the rest of training

var vp = new VectorPairSizeTDouble() { p1, p2, p3 };
var learningRatePerSample = new CNTK.TrainingParameterScheduleDouble(vp, 50);

First we define a PairSizeTDouble object for every learning rate value; the first number of each pair, multiplied by the epoch size, gives the number of iterations for which the rate is used.
Once we define the rates, we make an array of the pairs by creating a VectorPairSizeTDouble object. Then the array is passed as the first argument of the TrainingParameterScheduleDouble constructor. The second argument of the constructor is the multiplication number (the epoch size). So in the first pair, the 2 is multiplied by 50, which gives 100, the number of iterations trained with rate 0.2; the second pair covers the next 8 x 50 = 400 iterations with rate 0.1; and the value of the last pair is used for the rest of the training.
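The schedule then plugs into a learner exactly like a constant rate does. Here is a hedged sketch, where ffnn_model, trainingLoss and classError stand for an already-built network and its loss and error functions, as in the earlier posts:

var learner = Learner.SGDLearner(ffnn_model.Parameters(), learningRatePerSample);
var trainer = Trainer.CreateTrainer(ffnn_model, trainingLoss, classError, new Learner[] { learner });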

Testing and Validation CNTK models using C#


…continued from the previous post.
Once the model is built and the Loss and Validation functions satisfy our expectations, we need to validate and test the model using data which was not part of the training data set (unseen data). Model validation is very important because we want to see if our model is trained well, so that it evaluates unseen data approximately the same as the training data. Otherwise, a model which cannot predict the output is called an overfitted model. Overfitting can happen when the model is trained long enough that it shows very high performance on the training data set, but evaluates badly on the testing data.
We will continue with the implementation from the previous two posts and implement model validation. After the model is trained, the model and the trainer are passed to the evaluation method. The evaluation method loads the testing data and calculates the output using the passed model. Then it compares the calculated (predicted) values with the output from the testing data set and calculates the accuracy. The following source code shows the evaluation implementation.

private static void EvaluateIrisModel(Function ffnn_model, Trainer trainer, DeviceDescriptor device)
{
    var dataFolder = "Data";//files must be on the same folder as program
    var trainPath = Path.Combine(dataFolder, "testIris_cntk.txt");
    var featureStreamName = "features";
    var labelsStreamName = "label";

    //extract features and label from the model
    var feature = ffnn_model.Arguments[0];
    var label = ffnn_model.Output;

    //stream configuration to distinct features and labels in the file
    var streamConfig = new StreamConfiguration[]
        {
            new StreamConfiguration(featureStreamName, feature.Shape[0]),
            new StreamConfiguration(labelsStreamName, label.Shape[0])
        };

    // prepare testing data
    var testMinibatchSource = MinibatchSource.TextFormatMinibatchSource(
        trainPath, streamConfig, MinibatchSource.InfinitelyRepeat, true);
    var featureStreamInfo = testMinibatchSource.StreamInfo(featureStreamName);
    var labelStreamInfo = testMinibatchSource.StreamInfo(labelsStreamName);

    int batchSize = 20;
    int miscountTotal = 0, totalCount = 0;
    while (true)
    {
        var minibatchData = testMinibatchSource.GetNextMinibatch((uint)batchSize, device);
        if (minibatchData == null || minibatchData.Count == 0)
            break;
        totalCount += (int)minibatchData[featureStreamInfo].numberOfSamples;

        // expected labels are in the mini batch data.
        var labelData = minibatchData[labelStreamInfo].data.GetDenseData<float>(label);
        var expectedLabels = labelData.Select(l => l.IndexOf(l.Max())).ToList();

        var inputDataMap = new Dictionary<Variable, Value>() {
            { feature, minibatchData[featureStreamInfo].data }
        };

        var outputDataMap = new Dictionary<Variable, Value>() {
            { label, null }
        };

        ffnn_model.Evaluate(inputDataMap, outputDataMap, device);
        var outputData = outputDataMap[label].GetDenseData<float>(label);
        var actualLabels = outputData.Select(l => l.IndexOf(l.Max())).ToList();

        int misMatches = actualLabels.Zip(expectedLabels, (a, b) => a.Equals(b) ? 0 : 1).Sum();

        miscountTotal += misMatches;
        Console.WriteLine($"Validating Model: Total Samples = {totalCount}, Mis-classify Count = {miscountTotal}");

        if (totalCount >= 20)
            break;
    }
    Console.WriteLine($"---------------");
    Console.WriteLine($"------TESTING SUMMARY--------");
    float accuracy = 1.0F - (float)miscountTotal / totalCount;
    Console.WriteLine($"Model Accuracy = {accuracy}");
    return;

}

The implemented method is called at the end of the previously implemented training method.

 EvaluateIrisModel(ffnn_model, trainer, device);

As can be seen, the model validation shows that the model predicts the data with high accuracy, as shown in the following picture.

This was the last post in the series of blog posts about using feed forward neural networks to train the Iris data using CNTK and C#.

The full source code for all three samples can be found here.