Blog Archives

Testing and Validating CNTK models using C#


…continuing from the previous post.
Once the model is built and the Loss and Validation functions satisfy our expectations, we need to validate and test the model using data which was not part of the training data set (unseen data). Model validation is very important because we want to see whether our model is trained well, so that it evaluates unseen data approximately as well as the training data. A model which cannot predict the output on unseen data is called an overfitted model. Overfitting can happen when the model is trained long enough that it shows very high performance on the training data set, but evaluates poorly on the testing data.
We will continue with the implementation from the previous two posts and implement the model validation. After the model is trained, the model and the trainer are passed to the evaluation method. The evaluation method loads the testing data and calculates the output using the passed model. Then it compares the calculated (predicted) values with the output from the testing data set and calculates the accuracy. The following source code shows the evaluation implementation.

private static void EvaluateIrisModel(Function ffnn_model, Trainer trainer, DeviceDescriptor device)
{
    var dataFolder = "Data";//files must be on the same folder as program
    var trainPath = Path.Combine(dataFolder, "testIris_cntk.txt");
    var featureStreamName = "features";
    var labelsStreamName = "label";

    //extract features and label from the model
    var feature = ffnn_model.Arguments[0];
    var label = ffnn_model.Output;

    //stream configuration to distinguish features and labels in the file
    var streamConfig = new StreamConfiguration[]
        {
            new StreamConfiguration(featureStreamName, feature.Shape[0]),
            new StreamConfiguration(labelsStreamName, label.Shape[0])
        };

    // prepare testing data
    var testMinibatchSource = MinibatchSource.TextFormatMinibatchSource(
        testPath, streamConfig, MinibatchSource.InfinitelyRepeat, true);
    var featureStreamInfo = testMinibatchSource.StreamInfo(featureStreamName);
    var labelStreamInfo = testMinibatchSource.StreamInfo(labelsStreamName);

    int batchSize = 20;
    int miscountTotal = 0, totalCount = 0;
    while (true)
    {
        var minibatchData = testMinibatchSource.GetNextMinibatch((uint)batchSize, device);
        if (minibatchData == null || minibatchData.Count == 0)
            break;
        totalCount += (int)minibatchData[featureStreamInfo].numberOfSamples;

        // expected labels are in the mini batch data.
        var labelData = minibatchData[labelStreamInfo].data.GetDenseData<float>(label);
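        //decode the one-hot encoded labels: the class index is the position of the max component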
        var expectedLabels = labelData.Select(l => l.IndexOf(l.Max())).ToList();

        var inputDataMap = new Dictionary<Variable, Value>() {
            { feature, minibatchData[featureStreamInfo].data }
        };

        var outputDataMap = new Dictionary<Variable, Value>() {
            { label, null }
        };

        ffnn_model.Evaluate(inputDataMap, outputDataMap, device);
        var outputData = outputDataMap[label].GetDenseData<float>(label);
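        //the predicted class is the index of the largest output value (argmax)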
        var actualLabels = outputData.Select(l => l.IndexOf(l.Max())).ToList();

        int misMatches = actualLabels.Zip(expectedLabels, (a, b) => a.Equals(b) ? 0 : 1).Sum();

        miscountTotal += misMatches;
        Console.WriteLine($"Validating Model: Total Samples = {totalCount}, Mis-classify Count = {miscountTotal}");

        if (totalCount >= 20)//stop once the 20 test samples have been evaluated
            break;
    }
    Console.WriteLine($"---------------");
    Console.WriteLine($"------TESTING SUMMARY--------");
    float accuracy = 1.0F - (float)miscountTotal / totalCount;
    Console.WriteLine($"Model Accuracy = {accuracy}");

}

The implemented method is called at the end of the Training method from the previous post.

 EvaluateIrisModel(ffnn_model, trainer, device);

As can be seen, the model validation shows that the model predicts the data with high accuracy, as shown in the following picture.

This was the last post in the series of blog posts about using feed forward neural networks to train on the Iris data set using CNTK and C#.

The full source code for all three samples can be found here.


Train Iris data by Batch using CNTK and C#


In the previous post we have seen how to train an NN model by using a MinibatchSource, which we should usually use when we have a large amount of data. In case of a small amount of data, all of it can be loaded into memory and passed to the trainer at each iteration. This blog post will implement this way of feeding the trainer.
We will reuse the previous implementation, so the previous source code can be the starting point. For data loading we have to define a new method. The Iris data is stored in text format like the following:

sepal_length,sepal_width,petal_length,petal_width,species
5.1,3.5,1.4,0.2,setosa(1 0 0)
7.0,3.2,4.7,1.4,versicolor(0 1 0)
7.6,3.0,6.6,2.1,virginica(0 0 1)
...

The output column is encoded using the one-hot encoding rule we have seen previously (in the actual file the species name is replaced by its three 0/1 components, which is what the parsing code below expects).
The method will read all the data from the file, parse it, and create two float arrays:

  • float[] feature, and
  • float[] label.

As can be seen, both arrays are 1D, which means all data is inserted into one-dimensional arrays, because CNTK requires it so. Since the data is in a 1D array, we also have to provide the dimensionality of the data so that CNTK can resolve which values belong to each feature. The following listing shows loading of the Iris data into two 1D arrays returned as a tuple.

static (float[], float[]) loadIrisDataset(string filePath, int featureDim, int numClasses)
{
    var rows = File.ReadAllLines(filePath);
    var features = new List<float>();
    var label = new List<float>();
    for (int i = 1; i < rows.Length; i++)//skip the header row
    {
        var row = rows[i].Split(',');
        var input = new float[featureDim];
        for (int j = 0; j < featureDim; j++)
        {
            input[j] = float.Parse(row[j], CultureInfo.InvariantCulture);
        }
        var output = new float[numClasses];
        for (int k = 0; k < numClasses; k++)
        {
            int oIndex = featureDim + k;
            output[k] = float.Parse(row[oIndex], CultureInfo.InvariantCulture);
        }

        features.AddRange(input);
        label.AddRange(output);
    }

    return (features.ToArray(), label.ToArray());
}
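
As a quick sketch of how the method is used (the row count in the comments is an assumption based on the 130-sample training file used below):

var dataSet = loadIrisDataset("Data/iris_with_hot_vector.csv", 4, 3);
float[] features = dataSet.Item1; //130 rows * 4 features = 520 values
float[] labels = dataSet.Item2;   //130 rows * 3 classes  = 390 values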

Once the data is loaded, very little of the previous code needs to change in order to implement batching instead of using a MinibatchSource. At the beginning we declare several variables to define the NN model structure. Then we call loadIrisDataset and define xValues and yValues, which we use to create the feature and label input variables. Then we create a dictionary which connects the features and labels with the data values; it will be passed to the trainer later.
The next part of the code is the same as in the previous version: creation of the NN model, the Loss and Evaluation functions, and the learning rate.

Then we create a training loop of 800 iterations. Once the iteration count reaches the maximum value, the program outputs the model properties and terminates.
All of the above is implemented in the following code.

public static void TrainIriswithBatch(DeviceDescriptor device)
{
    //data file path
    var iris_data_file = "Data/iris_with_hot_vector.csv";

    //Network definition
    int inputDim = 4;
    int numOutputClasses = 3;
    int numHiddenLayers = 1;
    int hiddenLayerDim = 6;

    //load data in to memory
    var dataSet = loadIrisDataset(iris_data_file, inputDim, numOutputClasses);

    // build a NN model
    //define input and output variable
    var xValues = Value.CreateBatch<float>(new NDShape(1, inputDim), dataSet.Item1, device);
    var yValues = Value.CreateBatch<float>(new NDShape(1, numOutputClasses), dataSet.Item2, device);

    // build a NN model
    //define input and output variable and connecting to the stream configuration
    var feature = Variable.InputVariable(new NDShape(1, inputDim), DataType.Float);
    var label = Variable.InputVariable(new NDShape(1, numOutputClasses), DataType.Float);

    //Combine variables and data in to Dictionary for the training
    var dic = new Dictionary<Variable, Value>();
    dic.Add(feature, xValues);
    dic.Add(label, yValues);

    //Build a simple Feed Forward Neural Network model
    var ffnn_model = createFFNN(feature, numHiddenLayers, hiddenLayerDim, numOutputClasses, Activation.Tanh, "IrisNNModel", device);

    //Loss and error functions definition
    var trainingLoss = CNTKLib.CrossEntropyWithSoftmax(new Variable(ffnn_model), label, "lossFunction");
    var classError = CNTKLib.ClassificationError(new Variable(ffnn_model), label, "classificationError");

    // set learning rate for the network
    var learningRatePerSample = new TrainingParameterScheduleDouble(0.001125, 1);

    //define learners for the NN model
    var ll = Learner.SGDLearner(ffnn_model.Parameters(), learningRatePerSample);

    //define trainer based on ffnn_model, loss and error functions, and the SGD learner
    var trainer = Trainer.CreateTrainer(ffnn_model, trainingLoss, classError, new Learner[] { ll });

    //Preparation for the iterative learning process
    //use 800 epochs/iterations. The batch size will be the same as the sample size since the data set is small
    int epochs = 800;
    int i = 0;
    while (epochs > 0)
    {
        trainer.TrainMinibatch(dic, device);

        //print progress
        printTrainingProgress(trainer, i++, 50);

        epochs--;
    }
    //Summary of training
    double acc = Math.Round((1.0 - trainer.PreviousMinibatchEvaluationAverage()) * 100, 2);
    Console.WriteLine($"------TRAINING SUMMARY--------");
    Console.WriteLine($"The model trained with the accuracy {acc}%");
}
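
To run this method, a minimal entry point might look like the following sketch (the Main method here is an assumption, not part of the original sample):

static void Main(string[] args)
{
    //pick the default device (CPU for the CPUOnly package) and start the training
    var device = DeviceDescriptor.UseDefaultDevice();
    TrainIriswithBatch(device);
}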

If we run the code, the output will be the same as the one from the previous blog post example:

The full source code with the training data can be downloaded here.

Train Iris data with MinibatchSource using CNTK and C#


So far (post1, post2, post3) we have seen what CNTK is, how to use it with Python, and how to create a simple C# .NET application and call basic CNTK methods. In this blog post we are going to implement a full C# program to train on the Iris data.

The first step in using CNTK is getting the data and feeding the trainer. In the previous post we prepared the Iris data in the CNTK format, which is suitable for use with a MinibatchSource. In order to use the MinibatchSource, we need to create two streams:

  • one for the features and
  • one for the label.

Also, the feature and label variables must be created using the streams, so that when the data is accessed through the variables, the trainer is aware that the data is coming from the file.

Data preparation

As mentioned above, we are going to use the CNTK MinibatchSource to load the Iris data.
Two files are prepared for this demo:

var dataPath = Path.Combine(dataFolder, "iris_with_hot_vector.csv");
var trainPath = Path.Combine(dataFolder, "iris_with_hot_vector_test.csv");

One file contains the Iris data for training, and the second contains the data for testing, which will be used in a future post. These two file paths will be arguments when creating the minibatch sources for training and validation respectively.

The first step in getting the data from the file is defining the stream configuration with the proper information, which will be used when the data is extracted from the file. The configuration is completed by providing the number of features and the number of one-hot vector components of the label in the file, as well as the names of the features and labels. The data is attached at the end of the blog post, so the reader can see how the data is prepared for the MinibatchSource.

The following code defines the stream configuration for the Iris data set.

//stream configuration to distinguish features and labels in the file
var streamConfig = new StreamConfiguration[]
  {
    new StreamConfiguration(featureStreamName, inputDim),
    new StreamConfiguration(labelsStreamName, numOutputClasses)
  };

Also, the feature and label variables must be created by providing the stream names defined above.

//define input and output variable and connecting to the stream configuration
var feature = Variable.InputVariable(new  NDShape(1,inputDim), DataType.Float, featureStreamName);
var label = Variable.InputVariable(new NDShape(1, numOutputClasses), DataType.Float, labelsStreamName);

Now the input and the output variables are connected with the data from the file, and the MinibatchSource can handle them.
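
With the variables and stream configuration in place, the MinibatchSource itself is created from the training file path; for reference, this is the same call that appears in the full program listing later in this post:

//create the minibatch source from the file; InfinitelyRepeat lets the trainer
//sweep over the data as many times as the training loop requires
var minibatchSource = MinibatchSource.TextFormatMinibatchSource(
    dataPath, streamConfig, MinibatchSource.InfinitelyRepeat, true);
var featureStreamInfo = minibatchSource.StreamInfo(featureStreamName);
var labelStreamInfo = minibatchSource.StreamInfo(labelsStreamName);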

Creating Feed Forward Neural Network Model

Once we have defined the streams and variables, we can define the network model. CNTK is implemented so that you can define any number of hidden layers with any activation function.
For this demo we are going to create a simple feed forward neural network with one hidden layer. The picture below shows the NN model.

In order to implement the above NN model, we need to implement three methods:

  • static Function applyActivationFunction(Function layer, Activation actFun)
  • static Function simpleLayer(Function input, int outputDim, DeviceDescriptor device)
  • static Function createFFNN(Variable input, int hiddenLayerCount, int hiddenDim, int outputDim, Activation activation, string modelName, DeviceDescriptor device)

The first method just applies the specified activation function to the passed layer. The method is very simple and looks like this:

static Function applyActivationFunction(Function layer, Activation actFun)
{
    switch (actFun)
    {
        default:
        case Activation.None:
            return layer;
        case Activation.ReLU:
            return CNTKLib.ReLU(layer);
        case Activation.Sigmoid:
            return CNTKLib.Sigmoid(layer);
        case Activation.Tanh:
            return CNTKLib.Tanh(layer);
    }
}

The method takes the layer as an argument and returns the layer with the activation function applied.

The next method creates a simple layer with n weights and one bias. The method is shown in the following listing.

static Function simpleLayer(Function input, int outputDim, DeviceDescriptor device)
{
    //prepare default parameters values
    var glorotInit = CNTKLib.GlorotUniformInitializer(
            CNTKLib.DefaultParamInitScale,
            CNTKLib.SentinelValueForInferParamInitRank,
            CNTKLib.SentinelValueForInferParamInitRank, 1);

    //create weight and bias parameters
    var inputVar = (Variable)input;
    var shape = new int[] { outputDim, inputVar.Shape[0] };
    var weightParam = new Parameter(shape, DataType.Float, glorotInit, device, "w");
    var biasParam = new Parameter(new NDShape(1,outputDim), 0, device, "b");

    //construct the W * X + b operation
    return CNTKLib.Times(weightParam, input) + biasParam;
}

After the initialization of the parameters, the Function object is created from the number of output components and the previous layer (or the input variable). This is the so-called chain rule of NN layer creation: each new layer wraps the previous one. With this strategy the user can create a very complex NN model.
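
For illustration, chaining two layers by hand for the Iris model would look like the following sketch (the dimensions are the ones used in this post):

//input (4 features) -> hidden (6 units) -> output (3 classes);
//each simpleLayer call wraps the previous Function, forming the chain
Function h = simpleLayer(feature, 6, device);    //weight shape [6 x 4]
h = applyActivationFunction(h, Activation.Tanh);
Function model = simpleLayer(h, 3, device);      //weight shape [3 x 6]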

The last method performs the creation of the layers. It is called from the main method, and can create an arbitrary feed forward neural network by providing the parameters.

static Function createFFNN(Variable input, int hiddenLayerCount, int hiddenDim, int outputDim, Activation activation, string modelName, DeviceDescriptor device)
{
    //First the parameters initialization must be performed
    var glorotInit = CNTKLib.GlorotUniformInitializer(
            CNTKLib.DefaultParamInitScale,
            CNTKLib.SentinelValueForInferParamInitRank,
            CNTKLib.SentinelValueForInferParamInitRank, 1);

    //hidden layers creation
    //first hidden layer
    Function h = simpleLayer(input, hiddenDim, device);
    h = applyActivationFunction(h, activation);
    //2,3, ... hidden layers
    for (int i = 1; i < hiddenLayerCount; i++)
    {
        h = simpleLayer(h, hiddenDim, device);
        h = applyActivationFunction(h, activation);
    }
    //the last action is creation of the output layer
    var r = simpleLayer(h, outputDim, device);
    r.SetName(modelName);
    return r;
}

Now that we have implemented the method for NN model creation, the next step is the training implementation.
The training process is iterative, where the MinibatchSource feeds the trainer in each iteration.
The Loss and the Evaluation functions are calculated for each iteration and shown as the iterations progress. The progress reporting is implemented in a separate method, which looks like the following code listing:

private static void printTrainingProgress(Trainer trainer, int minibatchIdx, int outputFrequencyInMinibatches)
{
    if ((minibatchIdx % outputFrequencyInMinibatches) == 0 && trainer.PreviousMinibatchSampleCount() != 0)
    {
        float trainLossValue = (float)trainer.PreviousMinibatchLossAverage();
        float evaluationValue = (float)trainer.PreviousMinibatchEvaluationAverage();
        Console.WriteLine($"Minibatch: {minibatchIdx} CrossEntropyLoss = {trainLossValue}, EvaluationCriterion = {evaluationValue}");
    }
}

During the iterations, the Loss function constantly decreases its value, indicating that the model is becoming better and better. Once the iteration process is completed, the model's accuracy on the training data is shown.

Full program implementation

The following listing shows the complete source code implementation for training on the Iris data set using CNTK. At the beginning, several variables are defined for the structure of the NN model: the number of input and output variables, the number of hidden layers, and the hidden layer dimension. The main method also implements the iteration process, where the MinibatchSource handles the data and passes the relevant batch to the trainer. More about it will follow in a separate blog post. Once the iteration process is completed, the model result is shown and the program terminates.

public static void TrainIris(DeviceDescriptor device)
{
    var dataFolder = "";//files must be on the same folder as program
    var dataPath = Path.Combine(dataFolder, "iris_with_hot_vector.csv");
    var trainPath = Path.Combine(dataFolder, "iris_with_hot_vector_test.csv");

    var featureStreamName = "features";
    var labelsStreamName = "labels";

    //Network definition
    int inputDim = 4;
    int numOutputClasses = 3;
    int numHiddenLayers = 1;
    int hiddenLayerDim = 6;
    uint sampleSize = 130;

    //stream configuration to distinguish features and labels in the file
    var streamConfig = new StreamConfiguration[]
        {
            new StreamConfiguration(featureStreamName, inputDim),
            new StreamConfiguration(labelsStreamName, numOutputClasses)
        };

    // build a NN model
    //define input and output variable and connecting to the stream configuration
    var feature = Variable.InputVariable(new NDShape(1, inputDim), DataType.Float, featureStreamName);
    var label = Variable.InputVariable(new NDShape(1, numOutputClasses), DataType.Float, labelsStreamName);

    //Build a simple Feed Forward Neural Network model
    var ffnn_model = createFFNN(feature, numHiddenLayers, hiddenLayerDim, numOutputClasses, Activation.Tanh, "IrisNNModel", device);

    //Loss and error functions definition
    var trainingLoss = CNTKLib.CrossEntropyWithSoftmax(new Variable(ffnn_model), label, "lossFunction");
    var classError = CNTKLib.ClassificationError(new Variable(ffnn_model), label, "classificationError");

    // prepare the training data
    var minibatchSource = MinibatchSource.TextFormatMinibatchSource(
        dataPath, streamConfig, MinibatchSource.InfinitelyRepeat, true);
    var featureStreamInfo = minibatchSource.StreamInfo(featureStreamName);
    var labelStreamInfo = minibatchSource.StreamInfo(labelsStreamName);

    // set learning rate for the network
    var learningRatePerSample = new TrainingParameterScheduleDouble(0.001125, 1);

    //define learners for the NN model
    var ll = Learner.SGDLearner(ffnn_model.Parameters(), learningRatePerSample);

    //define trainer based on ffnn_model, loss and error functions, and the SGD learner
    var trainer = Trainer.CreateTrainer(ffnn_model, trainingLoss, classError, new Learner[] { ll });

    //Preparation for the iterative learning process
    //use 800 epochs/iterations. The batch size will be the same as the sample size since the data set is small
    int epochs = 800;
    int i = 0;
    while (epochs > 0)
    {
        var minibatchData = minibatchSource.GetNextMinibatch(sampleSize, device);
        //pass to the trainer the current batch separated by the features and label.
        var arguments = new Dictionary<Variable, MinibatchData>
        {
            { feature, minibatchData[featureStreamInfo] },
            { label, minibatchData[labelStreamInfo] }
        };

        trainer.TrainMinibatch(arguments, device);

        printTrainingProgress(trainer, i++, 50);

        // The MinibatchSource is created with MinibatchSource.InfinitelyRepeat,
        // so batching will not end on its own. Each time the minibatchSource
        // completes a sweep (epoch), the last minibatch is marked as the end of
        // the sweep. We use this flag to count the number of epochs.
        if (minibatchData.Values.Any(a => a.sweepEnd))
        {
            epochs--;
        }
    }
    //Summary of training
    double acc = Math.Round((1.0 - trainer.PreviousMinibatchEvaluationAverage()) * 100, 2);
    Console.WriteLine($"------TRAINING SUMMARY--------");
    Console.WriteLine($"The model trained with the accuracy {acc}%");

    //validation of the model will be covered in a separate blog post
}

The full source code with formatted Iris data set for training can be found here.

Hello CNTK from C#


Less than two months ago, CNTK 2.2 was released with support for training and evaluating models in C#. In this post we will go through the process of creating a .NET application which is able to call CNTK. The CNTK library is accessible as a NuGet package.

First, open Visual Studio and create a Classic Windows desktop console application.

Once we have the starting application, open the NuGet package window by right-clicking the Solution item and selecting “Manage NuGet Packages for Solution”.

Browse for CNTK, select the CNTK.CPUOnly package, select the project, and press Install.

Once the package installation is completed, the .NET application can use the CNTK library and perform training and evaluation actions. To see that the library is available, write the following code in the main method of the demo application:

var cpu = DeviceDescriptor.UseDefaultDevice();
Console.WriteLine($"Hello from CNTK for {cpu.Type} only!");

Before compiling, the project's platform should be changed from Any CPU to x64, since CNTK requires a 64-bit process. This is accomplished by copying the current configuration to x64.
The following picture shows how to do that.

Compile and run the application. The following output should be shown if CNTK for C# was installed successfully (on a CPU-only setup the message reads “Hello from CNTK for CPU only!”).

Once we have configured the CNTK library to run on the .NET platform, we can start building some more interesting NN models.
Stay tuned.