Building Predictive Maintenance Model Using ML.NET


Summary

This C# notebook is a continuation of the previous blog post Predictive Maintenance on .NET Platform.

The notebook is implemented entirely on the .NET platform using the C# Jupyter Notebook and Daany – a C# data analytics library. There are small differences between this notebook and the notebooks at the official Azure gallery portal, but in most cases the code follows the steps defined there.

The notebook shows how to use the .NET Jupyter Notebook with Daany.DataFrame and ML.NET in order to prepare the data and build a predictive maintenance model on the .NET platform.

Description

In the previous post, we analyzed five data sets containing telemetry, error and maintenance records, as well as failures, for 100 machines. The data were transformed and analyzed in order to create the final data set for building a machine learning model for predictive maintenance.

Once we created all the features from the data sets, as a final step we created the label column so that it describes whether a certain machine will fail in the next 24 hours due to a failure of component1, component2, component3 or component4, or whether it will continue to work. In this part, we are going to perform the machine learning task and train a model that predicts whether a certain machine will fail in the next 24 hours due to a component failure, or will function normally in that period.

The model we are going to build is a multi-class classification model, since it has 5 values to predict:

  • component1,
  • component2,
  • component3,
  • component4 or
  • none – means it will continue to work.

ML.NET framework as a library for training

In order to train the model, we are going to use ML.NET – Microsoft's open source framework for machine learning on the .NET platform. First we need some preparation code:

  • the required NuGet packages,
  • a set of using statements and code for formatting the output.

At the beginning of this notebook, we installed several NuGet packages in order to complete it. The following code shows the using statements, and the method for formatting data from the DataFrame.

//using Microsoft.ML.Data;
using XPlot.Plotly;
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

//
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Transforms;
using Microsoft.ML.Trainers.LightGbm;
//
using Daany;
using Daany.Ext;
//DataFrame formatter
using Microsoft.AspNetCore.Html;
Formatter<DataFrame>.Register((df, writer) =>
{
    var headers = new List<IHtmlContent>();
    headers.Add(th(i("index")));
    headers.AddRange(df.Columns.Select(c => (IHtmlContent) th(c)));
    //renders the rows
    var rows = new List<List<IHtmlContent>>();
    var take = 20;
    //
    for (var i = 0; i < Math.Min(take, df.RowCount()); i++)
    {
        var cells = new List<IHtmlContent>();
        cells.Add(td(df.Index[i]));
        foreach (var obj in df[i]){
            cells.Add(td(obj));
        }
        rows.Add(cells);
    }
    var t = table(
        thead(
            headers),
        tbody(
            rows.Select(
                r => tr(r)))); 
    writer.Write(t);
}, "text/html");

Once we have installed the NuGet packages and defined the using statements, we are going to define the classes we need in order to create the ML.NET pipeline.

The class PrMaintenanceClass contains the features (properties) we built in the previous post. We need them to define the features in the ML.NET pipeline. The second class we define is PrMaintenancePrediction, used for prediction and model evaluation.

class PrMaintenancePrediction
{
    [ColumnName("PredictedLabel")]
    public string failure { get; set; }
}
class PrMaintenanceClass
{
    public DateTime datetime { get; set; }
    public int machineID { get; set; }
    public float voltmean_3hrs { get; set; }
    public float rotatemean_3hrs { get; set; }
    public float pressuremean_3hrs { get; set; }
    public float vibrationmean_3hrs { get; set; }
    public float voltstd_3hrs { get; set; }
    public float rotatestd_3hrs { get; set; }
    public float pressurestd_3hrs { get; set; }
    public float vibrationstd_3hrs { get; set; }
    public float voltmean_24hrs { get; set; }
    public float rotatemean_24hrs { get; set; }
    public float pressuremean_24hrs { get; set; }
    public float vibrationmean_24hrs { get; set; }
    public float voltstd_24hrs { get; set; }
    public float rotatestd_24hrs { get; set; }
    public float pressurestd_24hrs { get; set; }
    public float vibrationstd_24hrs { get; set; }
    public float error1count { get; set; }
    public float error2count { get; set; }
    public float error3count { get; set; }
    public float error4count { get; set; }
    public float error5count { get; set; }
    public float sincelastcomp1 { get; set; }
    public float sincelastcomp2 { get; set; }
    public float sincelastcomp3 { get; set; }
    public float sincelastcomp4 { get; set; }
    public string model { get; set; }
    public float age { get; set; }
    public string failure { get; set; }
}

Now that we have defined the class types, we are going to implement the pipeline for this ML model. First, we create an MLContext with a constant seed, so that the model can be reproduced by any user running this notebook. Then we load the data and split it into training and testing parts.

MLContext mlContext= new MLContext(seed:88888);
var strPath="data/final_dataFrame.csv";
var mlDF= DataFrame.FromCsv(strPath);
//
//split data frame on training and testing part
//split at 2015-08-01 01:00:00 to train on the first seven months and test on the remaining five
var trainDF = mlDF.Filter("datetime", new DateTime(2015, 08, 1, 1, 0, 0), FilterOperator.LessOrEqual);
var testDF = mlDF.Filter("datetime", new DateTime(2015, 08, 1, 1, 0, 0), FilterOperator.Greather);

The summary for the training set is shown in the following tables:

Similarly the testing set has the following summary:
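Both summaries can be reproduced directly from the data frames; a minimal sketch, using Daany's Describe method (shown later in this document):

//descriptive statistics of the training and testing sets
display(trainDF.Describe(false));
display(testDF.Describe(false));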

Once we have the data in application memory, we can prepare the ML.NET pipeline. The pipeline starts with the transformation of the data from the Daany.DataFrame type into an IDataView collection. For this task, the LoadFromEnumerable method is used.

//Load daany:DataFrame into ML.NET pipeline
public static IDataView loadFromDataFrame(MLContext mlContext,Daany.DataFrame df)
{
    IDataView dataView = mlContext.Data.LoadFromEnumerable(df.GetEnumerator(oRow =>
    {
        //convert row object array into a PrMaintenanceClass row
        var ooRow = oRow;
        var prRow = new PrMaintenanceClass();
        prRow.datetime = (DateTime)ooRow["datetime"];
        prRow.machineID = (int)ooRow["machineID"];
        prRow.voltmean_3hrs = Convert.ToSingle(ooRow["voltmean_3hrs"]);
        prRow.rotatemean_3hrs = Convert.ToSingle(ooRow["rotatemean_3hrs"]);
        prRow.pressuremean_3hrs = Convert.ToSingle(ooRow["pressuremean_3hrs"]);
        prRow.vibrationmean_3hrs = Convert.ToSingle(ooRow["vibrationmean_3hrs"]);
        prRow.voltstd_3hrs = Convert.ToSingle(ooRow["voltsd_3hrs"]);
        prRow.rotatestd_3hrs = Convert.ToSingle(ooRow["rotatesd_3hrs"]);
        prRow.pressurestd_3hrs = Convert.ToSingle(ooRow["pressuresd_3hrs"]);
        prRow.vibrationstd_3hrs = Convert.ToSingle(ooRow["vibrationsd_3hrs"]);
        prRow.voltmean_24hrs = Convert.ToSingle(ooRow["voltmean_24hrs"]);
        prRow.rotatemean_24hrs = Convert.ToSingle(ooRow["rotatemean_24hrs"]);
        prRow.pressuremean_24hrs = Convert.ToSingle(ooRow["pressuremean_24hrs"]);
        prRow.vibrationmean_24hrs = Convert.ToSingle(ooRow["vibrationmean_24hrs"]);
        prRow.voltstd_24hrs = Convert.ToSingle(ooRow["voltsd_24hrs"]);
        prRow.rotatestd_24hrs = Convert.ToSingle(ooRow["rotatesd_24hrs"]);
        prRow.pressurestd_24hrs = Convert.ToSingle(ooRow["pressuresd_24hrs"]);
        prRow.vibrationstd_24hrs = Convert.ToSingle(ooRow["vibrationsd_24hrs"]);
        prRow.error1count = Convert.ToSingle(ooRow["error1count"]);
        prRow.error2count = Convert.ToSingle(ooRow["error2count"]);
        prRow.error3count = Convert.ToSingle(ooRow["error3count"]);
        prRow.error4count = Convert.ToSingle(ooRow["error4count"]);
        prRow.error5count = Convert.ToSingle(ooRow["error5count"]);
        prRow.sincelastcomp1 = Convert.ToSingle(ooRow["sincelastcomp1"]);
        prRow.sincelastcomp2 = Convert.ToSingle(ooRow["sincelastcomp2"]);
        prRow.sincelastcomp3 = Convert.ToSingle(ooRow["sincelastcomp3"]);
        prRow.sincelastcomp4 = Convert.ToSingle(ooRow["sincelastcomp4"]);
        prRow.model = (string)ooRow["model"];
        prRow.age = Convert.ToSingle(ooRow["age"]);
        prRow.failure = (string)ooRow["failure"];
        //
        return prRow;
    }));
            
    return dataView;
}

Load the data sets into the app memory:

//Load the training and testing data sets into ML.NET data views
var trainData = loadFromDataFrame(mlContext, trainDF);
var testData = loadFromDataFrame(mlContext, testDF);

Prior to starting the training we need to process the data, so that all non-numerical columns are encoded into numerical ones. We also need to define which columns are going to be part of the Features and which one will be the Label. For this purpose we define the PrepareData method.

public static IEstimator<ITransformer> PrepareData(MLContext mlContext)
{
    //one hot encoding category column
    IEstimator<ITransformer> dataPipeline =

    mlContext.Transforms.Conversion.MapValueToKey(outputColumnName: "Label", inputColumnName: nameof(PrMaintenanceClass.failure))
    //encode model column
    .Append(mlContext.Transforms.Categorical.OneHotEncoding("model",outputKind: OneHotEncodingEstimator.OutputKind.Indicator))

    //define features column
    .Append(mlContext.Transforms.Concatenate("Features",
    // 
    nameof(PrMaintenanceClass.voltmean_3hrs), nameof(PrMaintenanceClass.rotatemean_3hrs),
    nameof(PrMaintenanceClass.pressuremean_3hrs),nameof(PrMaintenanceClass.vibrationmean_3hrs),
    nameof(PrMaintenanceClass.voltstd_3hrs), nameof(PrMaintenanceClass.rotatestd_3hrs), 
    nameof(PrMaintenanceClass.pressurestd_3hrs), nameof(PrMaintenanceClass.vibrationstd_3hrs), 
    nameof(PrMaintenanceClass.voltmean_24hrs),nameof(PrMaintenanceClass.rotatemean_24hrs),
    nameof(PrMaintenanceClass.pressuremean_24hrs),nameof(PrMaintenanceClass.vibrationmean_24hrs), 
    nameof(PrMaintenanceClass.voltstd_24hrs),nameof(PrMaintenanceClass.rotatestd_24hrs),
    nameof(PrMaintenanceClass.pressurestd_24hrs),nameof(PrMaintenanceClass.vibrationstd_24hrs), 
    nameof(PrMaintenanceClass.error1count), nameof(PrMaintenanceClass.error2count),
    nameof(PrMaintenanceClass.error3count), nameof(PrMaintenanceClass.error4count), 
    nameof(PrMaintenanceClass.error5count), nameof(PrMaintenanceClass.sincelastcomp1),
    nameof(PrMaintenanceClass.sincelastcomp2),nameof(PrMaintenanceClass.sincelastcomp3),
    nameof(PrMaintenanceClass.sincelastcomp4),nameof(PrMaintenanceClass.model), nameof(PrMaintenanceClass.age) ));

    return dataPipeline;
}

As can be seen, the method converts the label column failure, which is a simple textual column, into a categorical column containing a numerical representation, called Keys, for each distinct category (e.g., each label such as none or comp1 gets its own key value).

Now that we have finished with the data transformation, we are going to define the Train method, which implements the ML algorithm, its hyper-parameters and the training process. Once called, the method returns the trained model.

//train method
static public TransformerChain<ITransformer> Train(MLContext mlContext, IDataView preparedData)
{
    var transformationPipeline=PrepareData(mlContext);
    //settings hyper parameters
    var options = new LightGbmMulticlassTrainer.Options();
    options.FeatureColumnName = "Features";
    options.LearningRate = 0.005;
    options.NumberOfLeaves = 50;
    options.NumberOfIterations = 2000;
    options.UnbalancedSets = true;
    //
    var boost = new DartBooster.Options();
    boost.XgboostDartMode = true;
    boost.MaximumTreeDepth = 25;
    options.Booster = boost;
    
    // Define LightGbm algorithm estimator
    IEstimator<ITransformer> lightGbm = mlContext.MulticlassClassification.Trainers.LightGbm(options);

    //train the ML model
    TransformerChain<ITransformer> model = transformationPipeline.Append(lightGbm).Fit(preparedData);

    //return trained model for evaluation
    return model;
}

Training process and model evaluation

Since we have all required methods, the main program structure looks like:

//prepare data transformation pipeline
var dataPipeline = PrepareData(mlContext);

//print prepared data
var pp = dataPipeline.Fit(trainData);
var transformedData = pp.Transform(trainData);

//train the model
var model = Train(mlContext, trainData);

Once the Train method returns the model, the evaluation phase starts. In order to evaluate the model, we perform a full evaluation with the training and the testing data.

Model Evaluation with train data set

The evaluation of the model will be performed for training and testing data sets:

//evaluate train set
var predictions = model.Transform(trainData);
var metricsTrain = mlContext.MulticlassClassification.Evaluate(predictions);

ConsoleHelper.PrintMultiClassClassificationMetrics("TRAIN DataSet", metricsTrain);
ConsoleHelper.ConsoleWriteHeader("Train DataSet Confusion Matrix ");
ConsoleHelper.ConsolePrintConfusionMatrix(metricsTrain.ConfusionMatrix);

The model evaluation output:

************************************************************
*    Metrics for TRAIN DataSet multi-class classification model   
*-----------------------------------------------------------
    AccuracyMacro = 0.9603, a value between 0 and 1, the closer to 1, the better
    AccuracyMicro = 0.999, a value between 0 and 1, the closer to 1, the better
    LogLoss = 0.0015, the closer to 0, the better
    LogLoss for class 1 = 0, the closer to 0, the better
    LogLoss for class 2 = 0.088, the closer to 0, the better
    LogLoss for class 3 = 0.0606, the closer to 0, the better
************************************************************
 
Train DataSet Confusion Matrix 
###############################
 

Confusion table
          ||========================================
PREDICTED ||  none | comp4 | comp1 | comp2 | comp3 | Recall
TRUTH     ||========================================
     none || 165 371 |     0 |     0 |     0 |     0 | 1.0000
    comp4 ||     0 |   772 |    16 |    25 |    11 | 0.9369
    comp1 ||     0 |     8 |   884 |    26 |     4 | 0.9588
    comp2 ||     0 |    31 |    22 | 1 097 |     8 | 0.9473
    comp3 ||     0 |    13 |     4 |     8 |   576 | 0.9584
          ||========================================
Precision ||1.0000 |0.9369 |0.9546 |0.9490 |0.9616 |

As can be seen, the model predicts the values correctly in most cases on the train data set. Now let's see how the model predicts on data which have not been part of the training process.

Model evaluation with test data set

//evaluate test set
var testPrediction = model.Transform(testData);
var metricsTest = mlContext.MulticlassClassification.Evaluate(testPrediction);
ConsoleHelper.PrintMultiClassClassificationMetrics("Test Dataset", metricsTest);

ConsoleHelper.ConsoleWriteHeader("Test DataSet Confusion Matrix ");
ConsoleHelper.ConsolePrintConfusionMatrix(metricsTest.ConfusionMatrix);
************************************************************
*    Metrics for Test Dataset multi-class classification model   
*-----------------------------------------------------------
    AccuracyMacro = 0.9505, a value between 0 and 1, the closer to 1, the better
    AccuracyMicro = 0.9986, a value between 0 and 1, the closer to 1, the better
    LogLoss = 0.0033, the closer to 0, the better
    LogLoss for class 1 = 0.0012, the closer to 0, the better
    LogLoss for class 2 = 0.1075, the closer to 0, the better
    LogLoss for class 3 = 0.1886, the closer to 0, the better
************************************************************
 
Test DataSet Confusion Matrix 
##############################
 

Confusion table
          ||========================================
PREDICTED ||  none | comp4 | comp1 | comp2 | comp3 | Recall
TRUTH     ||========================================
     none || 120 313 |     6 |    15 |     0 |     0 | 0.9998
    comp4 ||     1 |   552 |    10 |    17 |     4 | 0.9452
    comp1 ||     2 |    14 |   464 |    24 |    24 | 0.8788
    comp2 ||     0 |    39 |     0 |   835 |    16 | 0.9382
    comp3 ||     0 |     4 |     0 |     0 |   412 | 0.9904
          ||========================================
Precision ||1.0000 |0.8976 |0.9489 |0.9532 |0.9035 |
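Besides the aggregate metrics, the trained model can also score individual rows – this is where the PrMaintenancePrediction class defined earlier comes in. The following is a minimal sketch (an assumed usage, not part of the original notebook) of how a single, hypothetical machine state could be scored:

//map the key-typed PredictedLabel back to the original string labels
var keyToValue = mlContext.Transforms.Conversion
    .MapKeyToValue("PredictedLabel")
    .Fit(model.Transform(trainData));
var scoringModel = model.Append(keyToValue);

//create a prediction engine and score one hypothetical machine state
//(all feature values not set here default to zero)
var engine = mlContext.Model
    .CreatePredictionEngine<PrMaintenanceClass, PrMaintenancePrediction>(scoringModel);
var sample = new PrMaintenanceClass { machineID = 1, model = "model3", age = 18 };
var prediction = engine.Predict(sample);
Console.WriteLine($"Predicted failure: {prediction.failure}");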

We can see that the model has an overall accuracy of 99%, and a 95% average per-class accuracy. The complete notebook for this blog post can be found here.

Your first data analysis with .NET Jupyter Notebook and Daany.DataFrame


Note: The .NET Jupyter notebook for this blog post can be found here.

The Structure of Daany.DataFrame

The main part of the Daany project is Daany.DataFrame – a C# implementation of a data frame. A data frame is a software component used for handling tabular data, especially for data preparation, feature engineering, and analysis during the development of machine learning models. The concept of the Daany.DataFrame implementation is based on simplicity and the .NET coding standard. It represents tabular data consisting of columns and rows. Each column has a name and type, and each row has its index and label.

Usually, rows indicate axis zero, while columns indicate axis one.

The following image shows a data frame structure:

data frame structure

The basic components of the data frame are:

  • header – list of column names,
  • index – list of objects representing each row,
  • data – list of values in the data frame,
  • missing value – data with no values in the data frame.

The image above shows the data frame components visually, and how they are positioned in the data frame.

Create Data Frame from a text based file

The data we use are stored in files, and they must be loaded into application memory in order to be analyzed and transformed. Loading data from files with Daany.DataFrame is as easy as calling one method.

By using the static method DataFrame.FromCsv, a user can create a data frame object from a CSV file. Conversely, a data frame can be persisted to disk by calling the static method DataFrame.ToCsv.

The following code shows how to use the static methods ToCsv and FromCsv to persist and load a data frame:

string filename = "df_file.txt";
//define a dictionary of data
var dict = new Dictionary<string, List<object>>
{
    { "ID",new List<object>() { 1,2,3} },
    { "City",new List<object>() { "Sarajevo", "Seattle", "Berlin" } },
    { "Zip Code",new List<object>() { 71000,98101,10115 } },
    { "State",new List<object>() {"BiH","USA","GER" } },
    { "IsHome",new List<object>() { true, false, false} },
    { "Values",new List<object>() { 3.14, 3.21, 4.55 } },
    { "Date",new List<object>() { DateTime.Now.AddDays(-20) , DateTime.Now.AddDays(-10) , DateTime.Now.AddDays(-5) } },

};

//create data frame with 3 rows and 7 columns
var df = new DataFrame(dict);

//first Save data frame on disk and load it
DataFrame.ToCsv(filename, df);

//load the data frame back from the file
var dfFromFile = DataFrame.FromCsv(filename, sep:',');

//show dataframe
dfFromFile

First, we created a data frame from the dictionary collection. Then we stored the data frame to a file. After successfully saving it, we loaded the same data frame back from the CSV file. The last line of the code snippet displays the loaded data frame, in order to verify everything is correctly implemented. The output of the code cell is:

data frame structure

In case performance is important, you should pass the column types to the FromCsv method, which can cut the loading time by up to 50%. For example, the following code loads the data from the file by passing predefined column types:

//define the column types
var colTypes1 = new ColType[] { ColType.I32, ColType.IN, ColType.I32, ColType.STR, ColType.I2, ColType.F32, ColType.DT };
//create data frame with 3 rows and 7 columns
var dfFromFile = DataFrame.FromCsv(filename, sep: ',', colTypes: colTypes1);

And we got the same result: data frame structure

Loading Real Data from the Web

Data can be loaded directly from web storage by using the FromWeb static method. The following code shows how to load the Concrete Slump Test data from the web. The data set includes 103 data points. There are 7 input variables and 3 output variables in the data set: Cement, Slag, Fly ash, Water, SP, Coarse Aggr., Fine Aggr., SLUMP (cm), FLOW (cm), Strength (Mpa). The following code loads the Concrete Slump Test data set into a Daany DataFrame:

//define web url where the data is stored
var url = "https://archive.ics.uci.edu/ml/machine-learning-databases/concrete/slump/slump_test.data";
//
var df = DataFrame.FromWeb(url);
df.Head(5)

data frame structure

Once we have the data in application memory, we can perform some statistical calculations. First, let's see the structure of the data by calling the Describe method:

df.Describe(false)

data frame structure

Now we see we have a data frame with 103 rows, and all columns are of a numerical type. The frequency of the data indicates that the values are mostly not repeated. From the maximum and minimum values, we can see the data have no outliers, since the distributions of the values tend to be normal.

Data Visualization

Let’s perform some visualization just to see how visually data look like. As first let’s see the Slump distribution with respect of SP and Fly ash:

var chart = Chart.Plot(
    new Graph.Scatter()
    {
        x = df["SP"],
        y = df["Fly ash"],
        mode = "markers",
        marker = new Graph.Marker()
        {
            color = df["SLUMP(cm)"].Select(x=>x),
            colorscale = "Jet"
        }
    }
);

var layout = new Layout.Layout(){title="Slump vs. Cement and Slag"};
chart.WithLayout(layout);
chart.WithXTitle("Cement");
chart.WithYTitle("Slag");

display(chart);

data frame structure

From the chart above, we cannot see any relation between those two columns. Let's see the chart of Slump vs. Flow:

var chart = Chart.Plot(
    new Graph.Scatter()
    {
        x = df["SLUMP(cm)"],
        y = df["FLOW(cm)"],
        mode = "markers",
    }
);

var layout = new Layout.Layout(){title="Slump vs. Flow"};
chart.WithLayout(layout);
chart.WithLegend(true);
chart.WithXTitle("Slump");
chart.WithYTitle("Flow");

display(chart);

data frame structure

We can see some relation in the chart, and the relation is positive. This means that as Slump grows, the Flow value grows as well. If we want to measure the relation between the columns, we can do that with the following code:

var x1= df["SLUMP(cm)"].Select(x=>Convert.ToDouble(x)).ToArray();
var x2= df["FLOW(cm)"].Select(x=>Convert.ToDouble(x)).ToArray();

//The Pearson coefficient is calculated by
var r=x1.R(x2);
r

The correlation is 0.90, which indicates a strong relationship between those two columns.
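For reference, the Pearson coefficient that the R extension method computes (per the comment in the code above) is the textbook formula:

$$ r=\frac{\sum_{i}\left(x_i-\bar{x}\right)\left(y_i-\bar{y}\right)}{\sqrt{\sum_{i}\left(x_i-\bar{x}\right)^2\sum_{i}\left(y_i-\bar{y}\right)^2}} $$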

The complete .NET Jupyter Notebook for this blog post can be found here.

Daany – .NET DAta ANalYtics library



Introduction

Daany is a .NET data analytics library written in C#, intended as a tool for data preparation, feature engineering and other kinds of data transformation prior to creating an ML-ready data set. It is a .NET Core based library with the ability to run on Windows, Linux based distributions and macOS. It is based on .NET Standard 2.1.

Besides data analysis, the library implements a set of statistics and data science features, e.g. time series decomposition, optimization, performance parameters and similar.

Currently the Daany project consists of four main components:

  • Daany.DataFrame,
  • Daany.Stats,
  • Daany.MathStuff and
  • Daany.DataFrame.Ext

The main Daany component is Daany.DataFrame – a data frame implementation for data analysis. It is much like Pandas, but the component is not going to follow the pandas implementation. It is suitable for data exploration and preparation with the C# Jupyter Notebook. In order to create or load data into a data frame, no predefined class type is required. In order to determine the relevant value type of each column, all data are parsed internally during data frame creation. Daany.DataFrame implements a set of powerful features for data manipulation, handling missing values, calculated columns, merging two or more data frames into one, and similar. It is handy for extracting rows or columns as series of elements and putting them into a chart to visualize the data.
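Two of those features in a minimal sketch (the column names reuse the Iris examples that appear later in this document):

//add a calculated column computed from existing columns
df.AddCalculatedColumn("SepalArea", (r, i) => Convert.ToSingle(r["sepal_width"]) * Convert.ToSingle(r["sepal_length"]));
//select a subset of columns into a new data frame
var derivedDF = df["SepalArea", "flower_type"];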

Daany.Stat is a collection of statistics features e.g. time series decompositions, optimization, performance parameters and similar.

Daany.Math is a component within the data frame with an implementation of matrix and related linear algebra capabilities. It also contains implementations from other great open source projects. The component is not going to be a separate NuGet package.

Daany.DataFrame.Ext contains extensions for the Daany.DataFrame component that are related to other projects, mostly ML.NET. Daany.DataFrame itself should not depend on ML.NET and other libraries. So, any future data frame feature which depends on something other than Daany.Math should be placed in Daany.Ext.

The project was developed out of a need to have a set of data transformation features in one library while working with machine learning, and I thought it might help others as well. Currently, the library has a rich set of data transformation features and might become your number one data analytics library on the .NET platform. Contributions to the project are also welcome.

How to start with Daany

Daany is a 100% .NET Core component and can be run on any platform .NET Core supports, from Windows x86/x64 to macOS or Linux based OS. It can be used with Visual Studio or Visual Studio Code. It consists of several NuGet packages, so the easiest way to start with it is to install the packages in your .NET application. Within Visual Studio, create or open your .NET application and open the NuGet packages window. Type Daany into the browse edit box and hit enter. You will find several packages starting with Daany. You have a few options for installing the packages.

  1. Install Daany.DataFrame only. Use this option if you want only data analysis using the data frame. Once you click the Install button, Daany.DataFrame and Daany.Math will be installed into your project.

  2. Install the Daany.Stat package. This package already contains the DataFrame, as well as time series decomposition and related statistics features.

Once you install the packages, you can start developing your app using Daany packages.

Using Daany as assembly reference

Since Daany has no dependencies on other libraries, you can copy the three DLLs and add them as references to your project.

file explorer

In order to do so, clone the project from http://github.com/bhrnjica/daany, build it, and copy Daany.DataFrame.dll, Daany.Math.dll and Daany.Stat.dll to your project as assembly references. The whole set is just 270 KB.

Using Daany with .NET Jupyter Notebook

The Daany library is ideal for use with .NET Jupyter Notebook, and some great notebooks are implemented already; they can be viewed at http://github.com/bhrnjica/notebooks. The GitHub project contains the code necessary to run the notebooks in Binder, a Jupyter virtual environment, and try Daany without any local installation. So the first recommendation is to try Daany with the already implemented notebooks using Binder.

Namespaces in Daany

The Daany project contains several namespaces separating the different implementations. The following list contains the relevant namespaces:

  • using Daany – data frame and related code implementation,
  • using Daany.Ext – data frame extensions, used with dependency on third party library,
  • using Daany.MathStuff – math related stuff implemented in Daany,
  • using Daany.Optimizers – set of optimizers like SGD,
  • using Daany.Stat – set of statistics implementations in the project.

That’s all for this post. Next blog posts will show more exciting implementation using Daany.

C# Jupyter Notebook Part 2/n


What is .NET Jupyter Notebook

In this blog post, we are going to explore the main features of the new C# Jupyter Notebook. For those who have used notebooks with other programming languages like Python or R, this will be an easy task. First of all, the notebook concept provides a quick, simple and straightforward way to present a mix of text and $\LaTeX$, source code, and output. This means you have a full-featured platform to write a paper or blog post, presentation slides, lecture notes, and other educational materials.

The notebook consists of cells, where a user can write code or markdown text. Once the cell content is complete, it can be run with Ctrl+Enter or by pressing the run button on the notebook toolbar. The image below shows the notebook toolbar, with a run button. The popup combo box shows the types of cell the user can define. For text, Markdown should be selected; for writing source code the cell should be Code.

run button

To start writing code in a C# notebook, the first thing we should do is install NuGet packages or add assembly references and define using statements. The following code installs several NuGet packages and declares several using statements. But before writing code, we should add a new cell by pressing the + toolbar button.

The first few NuGet packages are packages for ML.NET. Then we install the XPlot package for data visualization in the .NET Notebook, and then a set of Daany packages for data analytics: first Daany.DataFrame for data exploration and analysis, and then Daany.DataFrame.Ext, a set of extensions for data manipulation used with ML.NET.

//ML.NET Packages
#r "nuget:Microsoft.ML.LightGBM"
#r "nuget:Microsoft.ML"
#r "nuget:Microsoft.ML.DataView"

//Install XPlot package
#r "nuget:XPlot.Plotly"

//Install Daany.DataFrame 
#r "nuget:Daany.DataFrame"
#r "nuget:Daany.DataFrame.Ext"
using System;
using System.Linq;

//Daany data frame
using Daany;
using Daany.Ext;

//Plotting functionalities
using XPlot.Plotly;

//ML.NET using
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Trainers.LightGbm;

The output for the above code:

run button

Once the NuGet packages are installed successfully, and the using statements above are declared, we can start with data exploration.

We can define classes or methods globally. The following code implements the formatter method for displaying Daany.DataFrame in the output cell.

// Temporal DataFrame formatter for this early preview
using Microsoft.AspNetCore.Html;
Formatter<DataFrame>.Register((df, writer) =>
{
    var headers = new List<IHtmlContent>();
    headers.Add(th(i("index")));
    headers.AddRange(df.Columns.Select(c => (IHtmlContent) th(c)));
    
    //renders the rows
    var rows = new List<List<IHtmlContent>>();
    var take = 20;
    
    //
    for (var i = 0; i < Math.Min(take, df.RowCount()); i++)
    {
        var cells = new List<IHtmlContent>();
        cells.Add(td(df.Index[i]));
        foreach (var obj in df[i])
        {
            cells.Add(td(obj));
        }
        rows.Add(cells);
    }
    
    var t = table(
        thead(
            headers),
        tbody(
            rows.Select(
                r => tr(r))));
    
    writer.Write(t);
}, "text/html");

For this demo we will use the famous Iris data set. We will download the file from the internet, load it using Daany.DataFrame, and display the first few rows. In order to do that we run the following code:

var url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data";
var cols = new string[] {"sepal_length","sepal_width", "petal_length", "petal_width", "flower_type"};
var df = DataFrame.FromWeb(url, sep:',',names:cols);
df.Head(5)

The output looks like this: run button

As can be seen, the last line of the previous code has no semicolon, which means its value is displayed in the output cell. Let's move on and implement two new columns. The new columns will be the sepal and petal areas of the flower. The expression we are going to use is:

$$ PetalArea = petal\_width \cdot petal\_length, \quad SepalArea = sepal\_width \cdot sepal\_length $$

As can be seen, $\LaTeX$ is fully supported in the notebook.

The above formula is implemented in the following code:

//calculate two new columns into dataset
df.AddCalculatedColumn("SepalArea", (r, i) => Convert.ToSingle(r["sepal_width"]) * Convert.ToSingle(r["sepal_length"]));
df.AddCalculatedColumn("PetalArea", (r, i) => Convert.ToSingle(r["petal_width"]) * Convert.ToSingle(r["petal_length"]));
df.Head(5)

run button

The data frame now has two new columns, indicating the areas of the flower. In order to see the basic statistics parameters for each of the defined columns, we call the Describe method.

//see descriptive stats of the final ds
df.Describe(false)

run button

From the table above, we can see the flower column has only 3 values. The most frequent value has a frequency equal to 50, which is an indicator of a balanced dataset.

Data visualization

One of the most powerful features of a notebook is data visualization. In this section, we are going to plot some interesting charts.

In order to see how sepal and petal areas are spread in 2D plane, the following plot is implemented:

//plot the data in order to see how areas are spread in the 2d plane
//XPlot Histogram reference: http://tpetricek.github.io/XPlot/reference/xplot-plotly-graph-histogram.html

var flowerHistogram = Chart.Plot(new Graph.Histogram(){x = df["flower_type"], autobinx = false, nbinsx = 20});
var layout = new Layout.Layout(){title="Distribution of iris flower"};
flowerHistogram.WithLayout(layout);
display(flowerHistogram);

run button

The chart is also an indication of a balanced dataset.

Now let's plot the areas depending on the flower type:

// Plot Sepal vs. Petal area with flower type

var chart = Chart.Plot(
    new Graph.Scatter()
    {
        x = df["SepalArea"],
        y = df["PetalArea"],
        mode = "markers",
        marker = new Graph.Marker()
        {
            color = df["flower_type"].Select(x=>x.ToString()=="Iris-virginica"?1:(x.ToString()=="Iris-versicolor"?2:3)),
            colorscale = "Jet"
        }
    }
);

var layout = new Layout.Layout(){title="Plot Sepal vs. Petal Area & color scale on flower type"};
chart.WithLayout(layout);
chart.WithLegend(true);
chart.WithLabels(new string[3]{"Iris-virginica","Iris-versicolor", "Iris-setosa"});
chart.WithXTitle("Sepal Area");
chart.WithYTitle("Petal Area");
chart.Width = 800;
chart.Height = 400;

display(chart);

run button

As can be seen from the chart above, the flower types are separated almost linearly, since we used the petal and sepal areas instead of width and length. With this transformation, we can get a 100% accurate ML model.

Machine Learning

Once we have finished with data transformation and visualization, we can define the final data frame for the machine learning application. To that end, we are going to select only two feature columns and one label column, which will be the flower type.

//create new data-frame by selecting only three columns
var derivedDF = df["SepalArea","PetalArea","flower_type"];
derivedDF.Head(5)

run button

Since we are going to use ML.NET, we need to declare an Iris class in order to load the data into ML.NET.

//Define an Iris class for machine learning.
class Iris
{
    public float PetalArea { get; set; }
    public float SepalArea { get; set; }
    public string Species { get; set; }
}
//Create ML Context
MLContext mlContext = new MLContext(seed:2019);

Then load the data from Daany data frame into ML.NET:

//Load Data Frame into Ml.NET data pipeline
IDataView dataView = mlContext.Data.LoadFromEnumerable<Iris>(derivedDF.GetEnumerator<Iris>((oRow) =>
{
    //convert row object array into Iris row

    var prRow = new Iris();
    prRow.SepalArea = Convert.ToSingle(oRow["SepalArea"]);
    prRow.PetalArea = Convert.ToSingle(oRow["PetalArea"]);
    prRow.Species = Convert.ToString(oRow["flower_type"]);
    //
    return prRow;
}));

Once we have data, we can split it into train and test sets:

//Split dataset in two parts: TrainingDataset (80%) and TestDataset (20%)
var trainTestData = mlContext.Data.TrainTestSplit(dataView, testFraction: 0.2);
var trainData = trainTestData.TrainSet;
var testData = trainTestData.TestSet;

The next step in preparing the data for training is to define the pipeline for data transformation and feature engineering:

//encode the output category column by defining KeyValues for each category
IEstimator<ITransformer> dataPipeline =
mlContext.Transforms.Conversion.MapValueToKey(outputColumnName: "Label", inputColumnName: nameof(Iris.Species))

//define features columns
.Append(mlContext.Transforms.Concatenate("Features",nameof(Iris.SepalArea), nameof(Iris.PetalArea)));

Once we complete the preparation part, we can perform the training. The training starts by calling Fit on the pipeline:

%%time
 // Define LightGbm algorithm estimator
IEstimator<ITransformer> lightGbm = mlContext.MulticlassClassification.Trainers.LightGbm();
//train the ML model
TransformerChain<ITransformer> model = dataPipeline.Append(lightGbm).Fit(trainData);

Once the training completes, we have a trained model which can be evaluated. In order to print the evaluation results with formatting, we use the Daany DataFrame extension, which implements printing of the results.


//evaluate train set
var predictions = model.Transform(trainData);
var metricsTrain = mlContext.MulticlassClassification.Evaluate(predictions);
ConsoleHelper.PrintMultiClassClassificationMetrics("TRAIN Iris DataSet", metricsTrain);
ConsoleHelper.ConsoleWriteHeader("Train Iris DataSet Confusion Matrix ");
ConsoleHelper.ConsolePrintConfusionMatrix(metricsTrain.ConfusionMatrix);

run button

//evaluate test set
var testPrediction = model.Transform(testData);
var metricsTest = mlContext.MulticlassClassification.Evaluate(testPrediction);
ConsoleHelper.PrintMultiClassClassificationMetrics("TEST Iris Dataset", metricsTest);
ConsoleHelper.ConsoleWriteHeader("Test Iris DataSet Confusion Matrix ");
ConsoleHelper.ConsolePrintConfusionMatrix(metricsTest.ConfusionMatrix);

run button

As can be seen, we have a 100% accurate model for Iris flower recognition. Now, let's add a new column, called PredictedLabel, into the data frame, so that the model predictions are part of the data frame.
In order to do that, we evaluate the model on the train and the test data sets. Once we have the predictions for both sets, we can join them and add them as a separate column in the Daany data frame. The following code does exactly what we described.

var flowerLabels = DataFrameExt.GetLabels(predictions.Schema).ToList();
var p1 = predictions.GetColumn<uint>("PredictedLabel").Select(x=>(int)x).ToList();
var p2 = testPrediction.GetColumn<uint>("PredictedLabel").Select(x => (int)x).ToList();
//join train and test
p1.AddRange(p2);
var p = p1.Select(x => (object)flowerLabels[x-1]).ToList();
//add new column into df
var dic = new Dictionary<string, List<object>> { { "PredictedLabel", p } };
var dff = derivedDF.AddColumns(dic);
dff.Head()

run button

The output above shows the first few rows of the data frame. To see the last few rows, we call the Tail method.

dff.Tail()

run button

In this blog post, we saw how we can be more productive when using .NET Jupyter Notebook for machine learning and for data exploration and transformation, by using ML.NET and Daany – a DAta ANalYtics library. The complete source code for this notebook can be found in the GitHub repo: https://github.com/bhrnjica/notebooks/blob/master/net_jupyter_notebook_part2.ipynb

How to start with C# Jupyter Notebook


Yesterday at the Ignite conference, the .NET team announced Jupyter Notebook support for the .NET languages C# and F#. This is a huge step forward for all data scientists who want to do data science and machine learning on the .NET platform. With a C# Jupyter Notebook you can perform data exploration and transformation, and the training, evaluation and testing of your ML models. All operations are performed in code cells, and you can quickly see the result without running and debugging the application every time you change something. In order to see what it looks like, in this blog post we are going to explore some of the basic functionality of the C# Jupyter Notebook.

How to Install .NET Jupyter Notebook

In order to install the Jupyter Notebook you can follow the official blog post; nevertheless, here I am going to present the process, because it is very short and easy. Before installing the .NET Jupyter components, you have to install the latest version of the .NET Core SDK and Anaconda. Once you have Anaconda installed on your machine, open the Anaconda Prompt from the Windows Start Menu.

To run Anaconda Prompt you have two options:

  • to open the PowerShell prompt, or
  • to open the classic command prompt.

Select the Anaconda Powershell Prompt, and the PowerShell window will pop up. Once the prompt is open, we can start with the installation of the Jupyter Notebook components. The first step is to install the dotnet try global tool.

Type this into the prompt:

dotnet tool install -g dotnet-try

After some time you should get the following message:

Then we need to install .NET Kernel by typing the following command:

dotnet try jupyter install 

Then the following message should appear:

In case you have any problems with the installation, please refer to the official blog post or post an issue at https://github.com/dotnet/try/issues.

Also note that this version of the Jupyter Notebook is in preview, so not all actions will work as expected. Now that you have installed the C# Jupyter kernel, you can open the Jupyter notebook from the Anaconda Navigator, or just type jupyter notebook into the Anaconda Prompt. Once you do that, your default browser pops up and shows the starting directory of the Jupyter Notebook. If you click the New button, you can see options to create C# and F# notebooks. Press C#, and a new C# notebook will appear in the browser.

Try some basic stuff in the notebook.
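For example, a code cell evaluates ordinary C# statements, and, as with any notebook, leaving the last expression without a semicolon displays its value. A trivial cell to try:

var x = 5;
var y = 10;
x + y

Running the cell should print 15 right below it.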

In the next blog post we are going to explore more and see some of the coolest features in C# Jupyter Notebook.

In depth LSTM Implementation using CNTK on .NET platform


In this blog post the implementation of the LSTM recurrent neural network in CNTK will be shown in detail. The implementation covers the LSTM variant based on the Hochreiter & Schmidhuber (1997) paper, which can be found here. A great blog post about LSTM can also be found at colah's blog, which explains in detail the structure of the LSTM cell, as well as some of the most used LSTM variants. In this blog post the LSTM recurrent network will be implemented using CNTK, a deep learning tool, with the C# programming language and the .NET Core platform. In case you want to see how to implement an LSTM in pure C#, without any additional library, you can read the great MSDN article: Test Run – Understanding LSTM Cells Using C# by James McCaffrey.

The whole implementation of the LSTM RNN is part of ANNdotNET – a deep learning tool on the .NET platform. More information about the project can be found at the GitHub project page: github.com/bhrnjica/anndotnet.

Introduction to LSTM Recurrent Network

Classic neural networks are built on the assumption that data have no ordering when entering the network, and that the output depends only on the input features. In cases where the output depends on the features and on previous outputs, a classic feed-forward neural network cannot help. The solution for such a problem may be a neural network which is recursively provided with its previous outputs. This kind of network is called a recurrent neural network (RNN); it was introduced by Hopfield in the 1980s, and later popularized when the back-propagation algorithm was improved at the beginning of the 1990s. A simple concept of the recurrent neural network is shown in the following image.

The current output of the recurrent network is defined by the current input x_t, and also by states related to the previous network outputs h_{t-1}, h_{t-2}, and so on.
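In a generic formulation (one common way of writing a simple recurrent layer, not tied to any specific framework), this reads:

h_t=\sigma\left(W\bullet x_t+U\bullet h_{t-1}+b\right),

where W and U are the input and recurrent weight matrices, and b is the bias.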

The concept of the recurrent neural network is simple and easy to implement, but a problem arises during the training phase due to unpredictable gradient behavior. During the training phase, the gradient problem of the neural network can be summarized in two categories: the vanishing and the exploding gradient.

The recurrent neural network is trained with a back-propagation algorithm specially developed for recurrent ANNs, called back-propagation through time (BPTT). In the vanishing gradient problem, the parameter updates are proportional to the gradient of the error, which is in most cases negligibly small; as a result the corresponding weights stay practically constant and stop the network from further training.

On the other hand, the exploding gradient problem refers to the opposite behavior, where the updates of the weights (the gradient of the cost function) become large in each back-propagation step. This problem is caused by the explosion of the long-term components in the recurrent neural network.
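Both behaviors can be traced to the same source: in BPTT the error gradient with respect to an early state contains a product of per-step Jacobians,

\frac{\partial E}{\partial h_k}\propto\prod_{t=k+1}^{T}\frac{\partial h_t}{\partial h_{t-1}},

so when the factors are consistently smaller than one the product vanishes, and when they are consistently larger than one it explodes.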

The solution to the above problems is a specific design of the recurrent network called Long Short-Term Memory, LSTM. One of the main advantages of the LSTM is that it can provide a constant error flow. In order to achieve this, the LSTM cell contains a set of memory blocks, which have the ability to store the temporal state of the network. The LSTM also has special multiplicative units called gates that control the information flow.

The LSTM cell consists of:

  • input gate – controls the flow of the input activations into the memory cell,
  • output gate – controls the output flow of the cell activations,
  • forget gate – filters the information from the input and the previous output, and decides which should be remembered, or forgotten and dropped out.

Besides the three gates, the LSTM cell contains the cell update, which is usually a tanh layer, contributing to the cell state.

Three variables come into each LSTM cell:

  • the current input x_t,
  • the previous output h_{t-1} and
  • the previous cell state C_{t-1}.

On the other hand, two variables come out of each LSTM cell:

  • the current output h_t and
  • the current cell state C_t.

A graphical representation of the LSTM cell is shown in the following image.

In order to implement the LSTM recurrent network, first the LSTM cell should be implemented. The LSTM cell has three gates and two internal states, which should be determined in order to calculate the current output and the current cell state.

The LSTM cell can be defined as a neural network where the input vector x=\left(x_1,x_2,x_3,\ldots,x_t\right) in time t maps to the output vector y=\left(y_1,\ y_2,\ \ldots,y_m\right) through the calculation of the following layers:

  • the forget gate sigmoid layer for the time t, f_t, is calculated from the previous output h_{t-1}, the input vector x_t, and the weight matrix of the forget layer W_f, with the addition of the corresponding bias b_f:

f_t=\sigma\left(W_f\bullet\left[h_{t-1},x_t\right]+b_f\right).

  • the input gate sigmoid layer for the time t, i_t, is calculated from the previous output h_{t-1}, the input vector x_t, and the weight matrix of the input layer W_i, with the addition of the corresponding bias b_i:

i_t=\sigma\left(W_i\bullet\left[h_{t-1},x_t\right]+b_i\right).

  • the cell state in time t, C_t, is calculated from the forget gate f_t and the previous cell state C_{t-1}. The result is summed with the product of the input gate i_t and the cell update state {\widetilde{c}}_t, a tanh layer calculated from the previous output h_{t-1}, the input vector x_t, and the cell weight matrix W_C, with the addition of the corresponding bias b_C:

C_t=f_t\ \otimes C_{t-1}+\ i_t\otimes\tanh{\left(\ W_C\bullet\left[h_{t-1},x_t\right]+b_C\right).}

  • the output gate sigmoid layer for the time t, o_t, is calculated from the previous output h_{t-1}, the input vector x_t, and the weight matrix of the output layer W_o, with the addition of the corresponding bias b_o:

o_t=\sigma\left(\ W_0\bullet\left[h_{t-1},x_t\right]+b_0\right).

The final stage of the LSTM cell is the calculation of the current output h_t. It is calculated with the element-wise multiplication \otimes between the output gate layer and the tanh layer of the current cell state C_t:

h_t=o_t\otimes\tanh{\left(C_t\right)}.

The current output h_t is passed through the network as the previous state for the next LSTM cell, or as the input for the output layer of the neural network.

LSTM with Peephole connection

One of the LSTM variants implemented in the Python based CNTK is the LSTM with peephole connections, first introduced by Gers & Schmidhuber (2000). An LSTM with peephole connections lets each gate (forget, input and output) look at the cell state.

The gates with peephole connections can now be expressed so that the standard terms of each gate are extended with an additional matrix for C_t. So, the forget gate with peephole can be expressed as:

f_t=\sigma\left(W_f\bullet\left[{C_{t-1},\ h}_{t-1},x_t\right]+b_f\right).

Similarly, the input gate and the output gate with peephole connections are expressed as:

i_t=\sigma\left(W_i\bullet\left[C_{t-1},\ h_{t-1},x_t\right]+b_i\right),

o_t=\sigma\left(\ W_0\bullet\left[{C_{t-1},h}_{t-1},x_t\right]+b_0\right).

With peephole connections the LSTM cell gets an additional matrix for each gate, and the number of LSTM parameters is increased by an additional 3 m×m parameters, where m is the output dimension.

Implementation of LSTM Recurrent network

CNTK is Microsoft's open source library for deep learning, written in C++, but it can be used from various programming languages: Python, C#, R, Java. In order to use the library in C#, the related CNTK NuGet package has to be installed, and the project must be built for the 64-bit architecture.

  1. Open Visual Studio 2017 and create a simple .NET Core console application.
  2. Then install the CNTK GPU NuGet package into your newly created console application.

Once the startup project is created, the CNTK LSTM implementation can start.
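The same setup can also be sketched from the command line (a sketch; CNTK.GPU is the NuGet package name, and the project must be built for x64, e.g. by adding <PlatformTarget>x64</PlatformTarget> to the project file):

dotnet new console -n LSTMDemo
cd LSTMDemo
dotnet add package CNTK.GPU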

Implementation of the LSTM Cell

As stated previously, the implementation presented in this blog post is originally implemented in ANNdotNET – an open source project for deep learning on the .NET platform. It can be found at the official GitHub project page.

The LSTM recurrent network starts with the implementation of the LSTMCell class. The LSTMCell class is derived from the NetworkFoundation class, which implements basic neural network operations. The basic operations are implemented through the following methods:

  • Bias – bias parameters implementation
  • Weights – implementation of the weights parameters
  • Layer – implementation of the classic fully connected linear layer
  • AFunction – applying activation function on the layer.

The NetworkFoundation class is shown in the next code snippet:

///////////////////////////////////////////////////////////////////////////////////////////
// ANNdotNET - Deep Learning Tool on .NET Platform
// Copyright 2017-2018 Bahrudin Hrnjica
// This code is free software under the MIT License
// See license section of https://github.com/bhrnjica/anndotnet/blob/master/LICENSE.md
//
// Bahrudin Hrnjica
// bhrnjica@hotmail.com
// Bihac, Bosnia and Herzegovina
// http://bhrnjica.net
//////////////////////////////////////////////////////////////////////////////////////////
using CNTK;
using NNetwork.Core.Common;

namespace NNetwork.Core.Network
{
public class NetworkFoundation
{

public Variable Layer(Variable x, int outDim, DataType dataType, DeviceDescriptor device, uint seed = 1 , string name="")
{
    var b = Bias(outDim, dataType, device);         
    var W = Weights(outDim, dataType, device, seed, name);

    var Wx = CNTKLib.Times(W, x, name+"_wx");
    var l = CNTKLib.Plus(b,Wx, name);

    return l;
}

public Parameter Bias(int nDimension, DataType dataType, DeviceDescriptor device)
{
    //initial value
    var initValue = 0.01;
    NDShape shape = new int[] { nDimension };
    var b = new Parameter(shape, dataType, initValue, device, "_b");
    //
    return b;
}

public Parameter Weights(int nDimension, DataType dataType, DeviceDescriptor device, uint seed = 1, string name = "")
{
    //initializer of parameter
    var glorotI = CNTKLib.GlorotUniformInitializer(1.0, 1, 0, seed);
    //create shape the dimension is partially known
    NDShape shape = new int[] { nDimension, NDShape.InferredDimension };
    var w = new Parameter(shape, dataType, glorotI, device, name=="" ? "_w" : name);
    //
    return w;
}

public Function AFunction(Variable x, Activation activation, string outputName="")
{
    switch (activation)
    {
        default:
        case Activation.None:
            return x;
        case Activation.ReLU:
            return CNTKLib.ReLU(x, outputName);
        case Activation.Softmax:
            return CNTKLib.Softmax(x, outputName);
        case Activation.Tanh:
            return CNTKLib.Tanh(x, outputName);
    }
}
}}
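A hypothetical usage of these building blocks might look like the following sketch (the input variable, dimensions and names here are illustrative only, not part of ANNdotNET):

var nf = new NetworkFoundation();
//a 10-dimensional float input variable
Variable x = Variable.InputVariable(new int[] { 10 }, DataType.Float);
//fully connected layer with 20 output units, followed by a Tanh activation
var layer = nf.Layer(x, 20, DataType.Float, DeviceDescriptor.CPUDevice, 1, "dense");
var output = nf.AFunction(layer, Activation.Tanh, "out");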

As can be seen, these methods implement basic neural building blocks which can be applied to any network type. Once the NetworkFoundation base class is implemented, the LSTM cell class implementation starts by defining three properties and a custom constructor, as shown in the following code snippet:

///////////////////////////////////////////////////////////////////////////////////////////
// ANNdotNET - Deep Learning Tool on .NET Platform
// Copyright 2017-2018 Bahrudin Hrnjica
// This code is free software under the MIT License
// See license section of https://github.com/bhrnjica/anndotnet/blob/master/LICENSE.md
//
// Bahrudin Hrnjica
// bhrnjica@hotmail.com
// Bihac, Bosnia and Herzegovina
// http://bhrnjica.net
//////////////////////////////////////////////////////////////////////////////////////////
using CNTK;
using NNetwork.Core.Common;

namespace NNetwork.Core.Network.Modules
{
public class LSTM : NetworkFoundation
{
    public Variable X { get; set; } //LSTM Cell Input
    public Function H { get; set; } //LSTM Cell Output
    public Function C { get; set; } //LSTM Cell State

public LSTM(Variable input, Variable dh, Variable dc, DataType dataType, Activation actFun, bool usePeephole, bool useStabilizer, uint seed, DeviceDescriptor device)
{
    //create cell state
    var c = CellState(input, dh, dc, dataType, actFun, usePeephole, useStabilizer, device, ref seed);

    //create output from input and cell state
    var h = CellOutput(input, dh, c, dataType, device, useStabilizer, usePeephole, actFun, ref seed);

    //initialize properties
    X = input;
    H = h;
    C = c;
}

The properties X, H and C hold the current values of the LSTM cell once the LSTM object is created. The LSTM constructor takes several arguments:

  • the first three are variables for the input, previous output and previous cell state;
  • the activation function of the cell update layer. 

The constructor also contains two arguments for creating different LSTM variants: peephole connections and self-stabilization, plus a few other self-explanatory arguments. The LSTM constructor creates the cell state and the output by calling the CellState and CellOutput methods respectively. The implementation of those methods is shown in the next code snippet:

public Function CellState(Variable x, Variable ht_1, Variable ct_1, DataType dataType, 
    Activation activationFun, bool usePeephole, bool useStabilizer, DeviceDescriptor device, ref uint seed)
{
    var ft = AGate(x, ht_1, ct_1, dataType, usePeephole, useStabilizer, device, ref seed, "ForgetGate");
    var it = AGate(x, ht_1, ct_1, dataType, usePeephole, useStabilizer, device, ref seed, "InputGate");
    var tan = Gate(x, ht_1, ct_1.Shape[0], dataType, device, ref seed);

    //apply the activation function (Tanh by default) to the candidate layer
    var tanH = AFunction(tan, activationFun, "TanHCt_1");

    //calculate cell state
    var bft = CNTKLib.ElementTimes(ft, ct_1,"ftct_1");
    var bit = CNTKLib.ElementTimes(it, tanH, "ittanH");

    //cell state
    var ct = CNTKLib.Plus(bft, bit, "CellState");
    //
    return ct;
}

public Function CellOutput(Variable input, Variable ht_1, Variable ct, DataType dataType, DeviceDescriptor device, 
    bool useStabilizer, bool usePeephole, Activation actFun ,ref uint seed)
{
    var ot = AGate(input, ht_1, ct, dataType, usePeephole, useStabilizer, device, ref seed, "OutputGate");

    //apply activation function to cell state
    var tanHCt = AFunction(ct, actFun, "TanHCt");

    //calculate output
    var ht = CNTKLib.ElementTimes(ot, tanHCt,"Output");

    //create an output layer in case the cell and output have different dimensions
    var c = ct;
    Function h = null;
    if (ht.Shape[0] != ct.Shape[0])
    {
        //rectify the dimensions by adding a linear layer
        var so = !useStabilizer ? ct : Stabilizer(ct, device);
        var wx_b = Weights(ht_1.Shape[0], dataType, device, seed++);
        h = wx_b * so;
    }
    else
        h = ht;

    return h;
}

The methods above are implemented using the previously defined gates and blocks. The AGate method creates an LSTM gate; it is called twice in CellState in order to create the forget and input gates. Then the Gate method is called to create the linear layer for the cell-state update. The activation function is provided as a constructor argument. The implementation of the AGate and Gate methods is shown in the following code snippet:

public Variable AGate(Variable x, Variable ht_1, Variable ct_1, DataType dataType, bool usePeephole,
    bool useStabilizer, DeviceDescriptor device, ref uint seed, string name)
{
    //cell dimension
    int cellDim = ct_1.Shape[0];
    //define previous output, with stabilization if enabled
    var h_prev = !useStabilizer ? ht_1 : Stabilizer(ht_1, device);

    //create linear gate
    var gate = Gate(x, h_prev, cellDim, dataType, device, ref seed);
    if (usePeephole)
    {
        var c_prev = !useStabilizer ? ct_1 : Stabilizer(ct_1, device);
        gate = gate + Peep(c_prev, dataType, device, ref seed);
    }
    //apply sigmoid to obtain the gate (forget, input or output, depending on name)
    var sgate = CNTKLib.Sigmoid(gate, name);
    return sgate;
}

private Variable Gate(Variable x, Variable hPrev, int cellDim,
                            DataType dataType, DeviceDescriptor device, ref uint seed)
{
    //create linear layer
    var xw_b = Layer(x, cellDim, dataType, device, seed++);
    var u = Weights(cellDim, dataType, device, seed++,"_u");
    //
    var gate = xw_b + (u * hPrev);
    return gate;
}

As can be seen, AGate calls the Gate method in order to create the linear layer, and then applies the sigmoid activation.

In order to create the LSTM variant with peephole connections, as well as the LSTM with self-stabilization, two additional methods are implemented. The peephole connection was explained previously. The implementation of the Stabilizer method is based on the C# examples on the CNTK GitHub page, with minor modifications and refactoring.
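Before the code, it helps to spell out what the method computes: a learned positive scalar β that multiplies the input, returning β·x. With f = 4, as in the code below, the scalar parameter p is squashed through a softplus so that β remains positive:

β = (1/f)·ln(1 + e^(f·p))

The initial value p0 = 0.99537863 equals (1/f)·ln(e^f − 1), which makes β start at exactly 1, so the stabilizer is initially an identity mapping.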

internal Variable Stabilizer(Variable x, DeviceDescriptor device)
{
    //define the steepness constant f
    var f = Constant.Scalar(4.0f, device);

    //inverse of f, i.e. 1/f
    var fInv = Constant.Scalar(f.DataType, 1.0 / 4.0f);

    //initial parameter value 1/f * ln(e^f - 1), which makes beta start at 1
    double initValue = 0.99537863;

    //create param with initial value
    var param = new Parameter(new NDShape(), f.DataType, initValue, device, "_stabilize");

    //make exp of product scalar and parameter
    var expValue = CNTKLib.Exp(CNTKLib.ElementTimes(f, param));

    //1 + e^(f*p)
    var cost = Constant.Scalar(f.DataType, 1.0) + expValue;

    var log = CNTKLib.Log(cost);

    //beta = 1/f * ln(1 + e^(f*p))
    var beta = CNTKLib.ElementTimes(fInv, log);

    //multiplication of the variable layer with constant scalar beta
    var finalValue = CNTKLib.ElementTimes(beta, x);

    return finalValue;
}

internal Function Peep(Variable cstate, DataType dataType, DeviceDescriptor device, ref uint seed)
{
    //initial value
    var initValue = CNTKLib.GlorotUniformInitializer(1.0, 1, 0, seed);

    //create shape 1 x m - one peephole weight per cell dimension
    NDShape shape = new int[] { cstate.Shape[0] };

    var bf = new Parameter(shape, dataType, initValue, device, "_peep");

    var peep = CNTKLib.ElementTimes(bf, cstate);
    return peep;
}

The Peep method follows the earlier description in the blog post: it simply adds an additional set of parameters that brings the previous cell state into the gates.
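Putting the pieces together, the gate produced by AGate (with the peephole connection enabled) corresponds to the standard gate formulation, where ∘ denotes element-wise multiplication:

gate = σ(W·x + b + U·h(t−1) + C∘c(t−1))

Here W and b come from the linear layer created in Gate, U from the additional weights applied to the previous output, and C from the Peep method.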

Implementation of the LSTM Recurrent Network

Once we have the LSTM cell implementation, it is easy to implement a recurrent network based on it. As defined previously, the LSTM takes three input variables: the input and the two previous-state variables. Those previous states should be defined not as real variables but as placeholders, which are changed dynamically on each iteration. So, the recurrent network starts by defining placeholders for the previous output and the previous cell state. Then the LSTM cell object is created. Once the LSTM is created, the past values of the output and cell state are obtained by calling the CNTK method PastValue, and the placeholders are replaced with them. At the end the method returns a CNTK Function object, which covers one of two cases, controlled by the returnSequence argument:

  • in the first case, the method returns the full sequence,
  • in the second case, the method returns the last element of the sequence.
 
/////////////////////////////////////////////////////////////////////////////////////////
// ANNdotNET - Deep Learning Tool on .NET Platform
// Copyright 2017-2018 Bahrudin Hrnjica
//
// This code is free software under the MIT License
// See license section of https://github.com/bhrnjica/anndotnet/blob/master/LICENSE.md
//
// Bahrudin Hrnjica
// bhrnjica@hotmail.com
// Bihac, Bosnia and Herzegovina
// http://bhrnjica.net
/////////////////////////////////////////////////////////////////////////////////////////
using CNTK;
using NNetwork.Core.Common;
using NNetwork.Core.Network.Modules;
using System;
using System.Collections.Generic;

namespace NNetwork.Core.Network
{
public class RNN
{
public static Function RecurrenceLSTM(Variable input, int outputDim, int cellDim, DataType dataType, DeviceDescriptor device, bool returnSequence=false,
    Activation actFun = Activation.Tanh, bool usePeephole = true, bool useStabilizer = true, uint seed = 1)
{
    if (outputDim <= 0 || cellDim <= 0)
        throw new Exception("Dimensions of the LSTM cell cannot be zero or negative.");
    //prepare output and cell dimensions 
    NDShape hShape = new int[] { outputDim };
    NDShape cShape = new int[] { cellDim };

    //create placeholders
    //define previous output and previous cell state as placeholders, which will be replaced with past values later
    var dh = Variable.PlaceholderVariable(hShape, input.DynamicAxes);
    var dc = Variable.PlaceholderVariable(cShape, input.DynamicAxes);

    //create lstm cell
    var lstmCell = new LSTM(input, dh, dc, dataType, actFun, usePeephole, useStabilizer, seed, device);

    //get actual values of output and cell state
    var actualDh = CNTKLib.PastValue(lstmCell.H);
    var actualDc = CNTKLib.PastValue(lstmCell.C);

    //form the recurrence loop by replacing the dh and dc placeholders with actualDh and actualDc
    lstmCell.H.ReplacePlaceholders(new Dictionary<Variable, Variable> { { dh, actualDh }, { dc, actualDc } });

    //return value depending on the type of LSTM layer
    if (returnSequence)
        return lstmCell.H;
    else
        return CNTKLib.SequenceLast(lstmCell.H); 

}
}}

As can be seen, the RNN class contains only one static method, which returns a CNTK Function object containing the recurrent network with the LSTM cell. The method takes several arguments: the input variable, the dimension of the recurrent network output, the dimension of the LSTM cell, and additional arguments for creating different variants of the LSTM cell.
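As a minimal usage sketch (the dimensions and variable names here are illustrative, not part of the library):

using CNTK;
using NNetwork.Core.Common;
using NNetwork.Core.Network;

//illustrative dimensions: 5 input features, LSTM output and cell dimension of 8
var device = DeviceDescriptor.UseDefaultDevice();
Variable features = Variable.InputVariable(new int[] { 5 }, DataType.Float, "features");

//returnSequence=false returns only the last element of the sequence,
//which is suitable for sequence-to-one tasks such as classification
Function lstmLayer = RNN.RecurrenceLSTM(features, 8, 8, DataType.Float, device,
    returnSequence: false, actFun: Activation.Tanh, usePeephole: true, useStabilizer: true);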

Implementation of Test Application

Now that the full LSTM-based recurrent network is implemented, we are going to provide a test application that checks basic LSTM functionality. The application contains two test methods in order to check:

  • number of LSTM parameters, and
  • output and cell states of the LSTM cell for two iterations.

Testing the correct number of the parameters

The first method validates the correct number of LSTM parameters. The LSTM cell has three kinds of parameters: the W, U and b matrices, one set for each LSTM component: the forget, input and output gates, and the cell update.
Let's assume the input dimension is n and the output dimension is m, and also that the cell dimension equals the output dimension. We can then define the following matrices:

  • W matrix (input weights) with dimensions of m x n,
  • U matrix (recurrent weights) with dimensions of m x m,
  • b vector (bias) with dimensions of 1 x m.

In total, the LSTM has P_LSTM = 4·(m² + m·n + m) parameters.

In case the LSTM has peephole connections, the number of parameters increases by an additional 1 x m peephole vector for each of the three gates.

In total, the LSTM with peephole connections has P_LSTM = 4·(m² + m·n + m) + 3·m parameters. The test method is implemented for n = 2 and m = 3, so the total number of parameters for the default LSTM cell is P = 4·(9 + 6 + 3) = 4·18 = 72. With peephole connections, the LSTM cell has P = 72 + 3·3 = 81 parameters.

In case the LSTM cell is defined with the self-stabilization option, one additional scalar parameter is created per stabilized variable: the previous output and (with peepholes) the previous cell state are stabilized in each of the three gates, adding 6 more parameters.
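A quick sanity check of this arithmetic in plain C#; the numbers mirror the assertions in the test below:

//parameter count check for n = 2 inputs and m = 3 outputs (cell dim = m)
int n = 2, m = 3;
int pureLstm = 4 * (m * m + m * n + m);   //4*(9+6+3) = 72
int withPeephole = pureLstm + 3 * m;      //72 + 9 = 81
int withStabilizer = withPeephole + 6;    //81 + 6 = 87 (six stabilizer scalars)
Console.WriteLine($"{pureLstm} {withPeephole} {withStabilizer}");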

Now that we have defined the parameter counts for the pure LSTM, for the peephole variant and for self-stabilization, we can implement the test method based on n = 2 and m = 3:

[TestMethod]
public void LSTM_Test_Params_Count()
{
    //define values and variables (n = 2 inputs, m = 3 outputs)
    Variable x = Variable.InputVariable(new int[] { 2 }, DataType.Float, "input");
    Variable y = Variable.InputVariable(new int[] { 3 }, DataType.Float, "output");

    //number of LSTM parameters
    var lstm1 = RNN.RecurrenceLSTM(x, 3, 3, DataType.Float, device, Activation.Tanh, true, true, 1);

    var ft = lstm1.Inputs.Where(l => l.Uid.StartsWith("Parameter")).ToList();
    var consts = lstm1.Inputs.Where(l => l.Uid.StartsWith("Constant")).ToList();
    var inp = lstm1.Inputs.Where(l => l.Uid.StartsWith("Input")).ToList();

    //bias params: 4 x m = 12
    var bs = ft.Where(p => p.Name.Contains("_b")).ToList();
    var totalBs = bs.Sum(v => v.Shape.TotalSize);
    Assert.AreEqual(totalBs, 12);
    //input weights: 4 x (m x n) = 24
    var ws = ft.Where(p => p.Name.Contains("_w")).ToList();
    var totalWs = ws.Sum(v => v.Shape.TotalSize);
    Assert.AreEqual(totalWs, 24);
    //recurrent weights: 4 x (m x m) = 36
    var us = ft.Where(p => p.Name.Contains("_u")).ToList();
    var totalUs = us.Sum(v => v.Shape.TotalSize);
    Assert.AreEqual(totalUs, 36);

    //stabilizer params: one scalar per stabilized variable
    var totalst = ft.Where(p => p.Name.Contains("_stabilize")).Sum(v => v.Shape.TotalSize);
    //peephole params: 3 x m = 9
    var totalPh = ft.Where(p => p.Name.Contains("_peep")).Sum(v => v.Shape.TotalSize);

    var totalOnly = totalBs + totalWs + totalUs;
    var totalWithStabilize = totalOnly + totalst;
    var totalWithPeep = totalOnly + totalPh;

    var totalP = totalOnly + totalst + totalPh;
    var totalParams = ft.Sum(v => v.Shape.TotalSize);
    Assert.AreEqual(totalP, totalParams);
}

Testing the output and cell state values

In this test the network parameters, input, previous output and cell state are set up. The test verifies whether the LSTM cell returns the correct output and cell-state values for the first and second iterations. The implementation of this test is shown in the following code snippet:

[TestMethod]
public void LSTM_Test_WeightsValues()
{

    //define values, and variables
    Variable x = Variable.InputVariable(new int[] { 2 }, DataType.Float, "input");
    Variable y = Variable.InputVariable(new int[] { 3 }, DataType.Float, "output");

    //data 01
    var x1Values = Value.CreateBatch<float>(new NDShape(1, 2), new float[] { 1f, 2f }, device);
    var ct_1Values = Value.CreateBatch<float>(new NDShape(1, 3), new float[] { 0f, 0f, 0f }, device);
    var ht_1Values = Value.CreateBatch<float>(new NDShape(1, 3), new float[] { 0f, 0f, 0f }, device);

    var y1Values = Value.CreateBatch<float>(new NDShape(1, 3), new float[] { 0.0629f, 0.0878f, 0.1143f }, device);

    //data 02
    var x2Values = Value.CreateBatch<float>(new NDShape(1, 2), new float[] { 3f, 4f }, device);
    var y2Values = Value.CreateBatch<float>(new NDShape(1, 3), new float[] { 0.1282f, 0.2066f, 0.2883f }, device);

    //Create LSTM Cell with predefined previous output and prev cell state
    Variable ht_1 = Variable.InputVariable(new int[] { 3 }, DataType.Float, "prevOutput");
    Variable ct_1 = Variable.InputVariable(new int[] { 3 }, DataType.Float, "prevCellState");
    var lstmCell = new LSTM(x, ht_1, ct_1, DataType.Float, Activation.Tanh, false, false, 1, device);
            

    var ft = lstmCell.H.Inputs.Where(l => l.Uid.StartsWith("Parameter")).ToList();
    var pCount = ft.Sum(p => p.Shape.TotalSize);
    var consts = lstmCell.H.Inputs.Where(l => l.Uid.StartsWith("Constant")).ToList();
    var inp = lstmCell.H.Inputs.Where(l => l.Uid.StartsWith("Input")).ToList();

    //bias params
    var bs = ft.Where(p => p.Name.Contains("_b")).ToList();
    var pa = new Parameter(bs[0]);
    pa.SetValue(new NDArrayView(pa.Shape, new float[] { 0.16f, 0.17f, 0.18f }, device));
    var pa1 = new Parameter(bs[1]);
    pa1.SetValue(new NDArrayView(pa1.Shape, new float[] { 0.16f, 0.17f, 0.18f }, device));
    var pa2 = new Parameter(bs[2]);
    pa2.SetValue(new NDArrayView(pa2.Shape, new float[] { 0.16f, 0.17f, 0.18f }, device));
    var pa3 = new Parameter(bs[3]);
    pa3.SetValue(new NDArrayView(pa3.Shape, new float[] { 0.16f, 0.17f, 0.18f }, device));
            
    //set value to weights parameters
    var ws = ft.Where(p => p.Name.Contains("_w")).ToList();
    var ws0 = new Parameter(ws[0]);
    var ws1 = new Parameter(ws[1]);
    var ws2 = new Parameter(ws[2]);
    var ws3 = new Parameter(ws[3]);
    (ws0).SetValue(new NDArrayView(ws0.Shape, new float[] { 0.01f, 0.03f, 0.05f, 0.02f, 0.04f, 0.06f }, device));
    (ws1).SetValue(new NDArrayView(ws1.Shape, new float[] { 0.01f, 0.03f, 0.05f, 0.02f, 0.04f, 0.06f }, device));
    (ws2).SetValue(new NDArrayView(ws2.Shape, new float[] { 0.01f, 0.03f, 0.05f, 0.02f, 0.04f, 0.06f }, device));
    (ws3).SetValue(new NDArrayView(ws3.Shape, new float[] { 0.01f, 0.03f, 0.05f, 0.02f, 0.04f, 0.06f }, device));
            
    //set value to update parameters
    var us = ft.Where(p => p.Name.Contains("_u")).ToList();
    var us0 = new Parameter(us[0]);
    var us1 = new Parameter(us[1]);
    var us2 = new Parameter(us[2]);
    var us3 = new Parameter(us[3]);
    (us0).SetValue(new NDArrayView(us0.Shape, new float[] {  0.07f, 0.10f, 0.13f, 0.08f, 0.11f, 0.14f, 0.09f, 0.12f, 0.15f }, device));
    (us1).SetValue(new NDArrayView(us1.Shape, new float[] {  0.07f, 0.10f, 0.13f, 0.08f, 0.11f, 0.14f, 0.09f, 0.12f, 0.15f }, device));
    (us2).SetValue(new NDArrayView(us2.Shape, new float[] {  0.07f, 0.10f, 0.13f, 0.08f, 0.11f, 0.14f, 0.09f, 0.12f, 0.15f }, device));
    (us3).SetValue(new NDArrayView(us3.Shape, new float[] {  0.07f, 0.10f, 0.13f, 0.08f, 0.11f, 0.14f, 0.09f, 0.12f, 0.15f }, device));

    //evaluate the model after the weights are set up
    var inV = new Dictionary<Variable, Value>();
    inV.Add(x, x1Values);
    inV.Add(ht_1, ht_1Values);
    inV.Add(ct_1, ct_1Values);

    //evaluate output when previous values are zero
    var outV11 = new Dictionary<Variable, Value>();
    outV11.Add(lstmCell.H, null);
    lstmCell.H.Evaluate(inV, outV11, device);

    //test result values
    var result = outV11[lstmCell.H].GetDenseData<float>(lstmCell.H);
    Assert.AreEqual(result[0][0], 0.06286034f);
    Assert.AreEqual(result[0][1], 0.0878196657f);
    Assert.AreEqual(result[0][2], 0.114274308f);

    //evaluate cell state
    var outV = new Dictionary<Variable, Value>();
    outV.Add(lstmCell.C, null);
    lstmCell.C.Evaluate(inV, outV, device);

    var resultc = outV[lstmCell.C].GetDenseData<float>(lstmCell.C);
    Assert.AreEqual(resultc[0][0], 0.114309229f);
    Assert.AreEqual(resultc[0][1], 0.15543206f);
    Assert.AreEqual(resultc[0][2], 0.197323829f);

    //evaluate the second value, feeding the first-iteration results as the previous state
    //setup previous state and output
    ct_1Values = Value.CreateBatch<float>(new NDShape(1, 3), new float[] { resultc[0][0], resultc[0][1], resultc[0][2] }, device);
    ht_1Values = Value.CreateBatch<float>(new NDShape(1, 3), new float[] { result[0][0], result[0][1], result[0][2] }, device);

    //prepare for the evaluation
    inV = new Dictionary<Variable, Value>();
    inV.Add(x, x2Values);
    inV.Add(ht_1, ht_1Values);
    inV.Add(ct_1, ct_1Values);

    outV11 = new Dictionary<Variable, Value>();
    outV11.Add(lstmCell.H, null);
    lstmCell.H.Evaluate(inV, outV11, device);

    //test result values
    result = outV11[lstmCell.H].GetDenseData<float>(lstmCell.H);
    Assert.AreEqual(result[0][0], 0.128203377f);
    Assert.AreEqual(result[0][1], 0.206633776f);
    Assert.AreEqual(result[0][2], 0.288335562f);

    //evaluate cell state
    outV = new Dictionary<Variable, Value>();
    outV.Add(lstmCell.C, null);
    lstmCell.C.Evaluate(inV, outV, device);

    //evaluate cell state with previous value
    resultc = outV[lstmCell.C].GetDenseData<float>(lstmCell.C);
    Assert.AreEqual(resultc[0][0], 0.227831185f);
    Assert.AreEqual(resultc[0][1], 0.3523231f);
    Assert.AreEqual(resultc[0][2], 0.4789199f);
}

This article presented the LSTM cell in detail, from theory to implementation. It also provided two test methods that verify the correctness of the implementation: the resulting output and cell-state values are compared with manually calculated values.

Create CIFAR-10 Deep Learning Model With ANNdotNET GUI Tool


With ANNdotNET 1.2 the user is able to create and train deep learning models for image classification. The image classification module requires a minimum of GUI actions in order to fully prepare the data set. In this post, we are going to create and train a deep learning model for the CIFAR-10 data set, and see how easy it is to do that with ANNdotNET v1.2.

In order to prepare the data, we have to download the CIFAR-10 data set from the official web site. The CIFAR-10 data set is provided as 6 binary batch files that should be extracted and persisted on your local machine. The number 10 in the name means that the data set is created for 10 labels. The following image shows the 10 labels of the CIFAR-10 data set, each label with a few sample images.

CIFAR-10 data set (Learning Multiple Layers of Features from Tiny Images, Alex Krizhevsky, 2009.)

The data set contains 60 000 (50 000 for training and validation, and 10 000 for test) tiny colored images with dimensions of 32×32 pixels. There is also a bigger version of the data set, CIFAR-100, with 100 labels. Our task is to create a deep learning model capable of recognizing one of the 10 predefined labels for each image.

Data preparation

In order to prepare the images, we need to do the following: download and extract the binary batch files, convert each record into an image, and save every image into the folder named after its label, with a separate folder for test images.

The following image shows the extracted data set persisted in 10 label folders. The bird folder is opened and shows all images labeled as bird. The test folder contains all images reserved for testing the model once it is trained.

In order to properly save all images, we need to create a simple C# console application which extracts and saves all 60 000 images. The complete C# program can be downloaded from here.

In order to successfully extract the images, we have to see how they are stored in the binary files. From the official site we can see that there are 5 binary files for training and 1 for test: data_batch_1.bin, data_batch_2.bin, …, data_batch_5.bin, as well as test_batch.bin.

Each of these files is formatted so that each record consists of 3073 bytes: the first byte is the label index, and the next 3072 bytes represent the image. Each batch contains 10 000 records.

It is important to know that the images are stored in CHW format, which means the 1D image array is laid out so that the first 1024 bytes are the red channel values, the next 1024 the green, and the final 1024 the blue. The values are stored in row-major order, so the first 32 bytes are the red channel values of the first row of the image. All of this information was taken into account when implementing the Extractor application. The most important methods are the one reshaping the 1D byte array into a [3, height, width] image tensor, and the one creating the image from that tensor. The following implementation shows how the 1D byte array is transformed into a 3-channel bitmap tensor.

static int[][][] reshape(int channel, int height, int width,  byte[] img)
{
    var data = new int[channel][][];
    int counter = 0;
    for(int c = 0; c < channel; c++)
    {
        data[c] = new int[height][];
        for (int y = 0; y < height; y++)
        {
            data[c][y] = new int[width];
            for (int x = 0; x < width; x++)
            {
                data[c][y][x] = img[counter];
                counter++;
            }
        }
    }
    return data;
}
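The ArrayToImg helper used in the next snippet is not shown here; a minimal sketch of what it might look like, assuming System.Drawing and the CHW channel order described above (channel 0 = red, 1 = green, 2 = blue):

static Bitmap ArrayToImg(int[][][] data)
{
    int height = data[0].Length;
    int width = data[0][0].Length;
    var image = new Bitmap(width, height);
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            //compose the pixel from the three channel planes
            var color = Color.FromArgb(data[0][y][x], data[1][y][x], data[2][y][x]);
            image.SetPixel(x, y, color);
        }
    }
    return image;
}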

Once the 1D byte array is transformed into a tensor, the image can be created and persisted on disk. The following method iterates through all 10 000 images in one batch file, extracts them and persists them on disk.

public static void extractandSave(byte[] batch, string destImgFolder, ref int imgCounter)
{
    var nStep = 3073;//1 byte for the label and 3072 bytes for the image
    //
    for (int i = 0; i < batch.Length; i += nStep)
    {
        var l = (int)batch[i];
        var img = new ArraySegment<byte>(batch, i + 1, nStep - 1).ToArray();

        //data in the CIFAR-10 dataset is in CHW format: RR...R, GG...G, BB...B;
        //while HWC would be: RGB, RGB, ..., RGB
        var reshaped = reshape(3, 32, 32, img);
        var image = ArrayToImg(reshaped);

        //check if the label folder exists
        var currentFolder = destImgFolder + classNames[l];
        if (!Directory.Exists(currentFolder))
            Directory.CreateDirectory(currentFolder);

        //save the image to the label folder
        image.Save(currentFolder + "\\" + imgCounter.ToString() + ".png");
        imgCounter++;
    }
}
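For completeness, a hypothetical driver for the extractor could look like the following; the source folder of the downloaded batch files is an assumption, while the destination matches the path used in the rest of the post:

//assumed location of the downloaded binary batch files
string srcFolder = @"c:\cifar-10-batches-bin\";
//destination used by ANNdotNET later in the post
string destImgFolder = @"c:\sc\datasets\cifar-10\";
int imgCounter = 0;

//five training batches
for (int b = 1; b <= 5; b++)
{
    var batch = File.ReadAllBytes(Path.Combine(srcFolder, $"data_batch_{b}.bin"));
    extractandSave(batch, destImgFolder, ref imgCounter);
}
//the test batch goes into a separate test folder
var testBatch = File.ReadAllBytes(Path.Combine(srcFolder, "test_batch.bin"));
extractandSave(testBatch, destImgFolder + "test\\", ref imgCounter);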

Run the Cifar-Extractor console application, and the process of downloading, extracting and saving the images will be finished in a few minutes. Most important is that the CIFAR-10 data set will be stored in the c:/sc/datasets/cifar-10 path. This is important later, when we create the image classifier.

Now that we have 60 000 tiny images on disk arranged by labels, we can start creating the deep learning model.

Create new image classification project file in ANNdotNET

Open the latest ANNdotNET v1.2 and select New -> Image Classification project. Enter CIFAR as the project name and press the Save button. The following image shows the new CIFAR ann-project:

Once we have the new project, we can start defining image labels by pressing the Add button. For each of the 10 labels we need to add a new label item to the list. In each item the following fields should be defined:

  • Image label,
  • Path to the images with that label,
  • Query – in case we need to get only the images within the specified path whose names contain a certain string. In case all images within the specified path belong to one label, the query should be an empty string.

Besides the label items, an image transformation should be defined in order to set the size of the images, as well as how many images make up the validation/test data set.

Assuming the CIFAR-10 data set is extracted to the c:/sc/datasets/cifar-10 folder, the following image shows how the label items should be defined:

In case a label item should be removed from the list, select the item and press the Remove button. Besides the image properties, we should define how many images belong to the validation data set. As can be seen, 20% of all extracted images will make up the validation data set. Notice that images from the test folder are not part of those two data sets; they will be used for the testing phase once the model is trained. Now that we are done with data preparation, we can move to the next step: creating the mlconfig file.

Create mlconfig in ANNdotNET

By selecting the New MLConfig command, a new mlconfig file is created within the project explorer. Moreover, by pressing the F2 key on the selected mlconfig tree item, we can easily change the name to “CIFAR-10-ConvNet”. The reason we gave it such a name is that we are going to use a convolutional neural network.

In order to define the mlconfig file, we need to specify the following:

  • Network configuration, using the Visual Network Designer,
  • Learning parameters,
  • Training parameters.

Create Network configuration

By using the Visual Network Designer (VND) we can quickly create a network model. For this CIFAR-10 data set we are going to create an 11-layer model with 4 Convolutional, 2 Pooling, 1 DropOut and 3 Dense layers, all preceded by a Scale layer:

Scale (1/255)->Conv2D(32,[3,3])->Conv2D(32,[3,3])->Pooling2d([2,2],2)->Conv2D(64,[3,3])->Conv2D(64,[3,3])->Pooling2d([2,2],2)->DropOut(0.5)->Dense(64, TanH)->Dense(32, TanH)->Dense(10,Softmax)

This network is created by selecting the appropriate layer from the VND combo box and clicking the Add button. The first layer is the Scale layer, since we need to normalize the input values to the interval (0,1). Then we create two sequences of Convolution and Pooling layers. Once we are done with that, we add two Dense layers with 64 and 32 neurons and the TanH activation function. The last layer is the output layer, which must match the output dimension and uses the Softmax activation function.

Once network model is defined, we can move to the next step: Setting learning and training parameters.

Learning parameters can be defined through the Learning parameters interface. For this model we select:

  • AdamLearner with a 0.005 learning rate and a 0.9 momentum value. The loss function is Classification Error, and the evaluation function is Classification Accuracy.

In order to define the training parameters, we switch to the Training tab page and set up:

  • Number of epochs,
  • Minibatch size,
  • Progress frequency,
  • Randomize minibatch during training.

Now we have enough information to start the model training. The training process is started by selecting the Run command from the application ribbon. In order to get a good model, we need to train it for at least a few thousand epochs. The following image shows the trained model with the training history charts.

The model was trained for exactly 4071 epochs with the network parameters mentioned above. As can be seen from the upper chart, the mini-batch loss function was CrossEntropyWithSoftmax, while the evaluation function was classification accuracy. The bottom chart shows the performance of the training and validation data sets across all 4071 epochs. We can also see that the validation data set has roughly the same accuracy as the training data set, which indicates the model is trained well. More details about model performance can be seen in the next image:

The upper charts of the image above show actual and predicted values for training (left) and validation (right). Most of the points are blue and overlap the orange ones, which indicates that most values are correctly predicted. The charts can be zoomed to view details of each value. The bottom part of the evaluation shows performance parameters of the model for the corresponding data set. As can be seen, the trained model has 0.91 overall accuracy for the training data set and 0.826 overall accuracy for the validation data set, which indicates pretty good model accuracy. Moreover, the next two images show the confusion matrices for both data sets, which detail how the model predicts all 10 labels.

The last part of the post covers testing the model on the test data set. For that purpose we selected 10 random images, one from each label of the test set, and evaluated the model. The following image shows that the model correctly predicted all 10 images.

Conclusion

The ANNdotNET v1.2 image classification module offers complete data preparation and model development for image classification. The user can prepare data for training, create a network model with the Visual Network Designer, and apply a set of statistical tools to the trained model in order to validate and evaluate it. An important note is that the image data set must be stored at the specific location described in this post in order to use the trained model. The trained model, as well as the mlconfig files, can be loaded directly into the ANNdotNET project explorer by double-clicking the CIFAR-10.zip feed example.

ANNdotNET, as an open source project, provides an outstanding way to develop deep learning models end to end on the .NET platform.