Results#

The facade produces results from both the training phase and the testing phase. The results are composed as a hierarchical, tree-like object graph.

The model type of the results determines what kind of metrics are provided.

An example from the [Iris example] is given below:

Name: Iris: 1
Run index: 2
Learning rate: 0.005
    train:
        started: 08/20/2020 19:35:19:397030
        ended: 08/20/2020 19:35:41:234688
        batches: 6
        ave data points per batch: 18.8
        converged/epochs: 760/1000
        ave/min loss: 5.55785/2.85951
        accuracy: 0.885 (100/113)
        micro: F1: 0.885, precision: 0.885, recall: 0.885
        macro: F1: 0.881, precision: 0.900, recall: 0.888
    validation:
        batches: 1
        ave data points per batch: 12.0
        converged/epochs: 844/1000
        ave/min loss: 3.08125/3.08125
        accuracy: 1.000 (12/12)
        micro: F1: 1.000, precision: 1.000, recall: 1.000
        macro: F1: 1.000, precision: 1.000, recall: 1.000
    test:
        started: 08/20/2020 19:35:42:012540
        ended: 08/20/2020 19:35:42:014008
        batches: 2
        ave data points per batch: 12.5
        converged/epochs: 1/1
        ave/min loss: 3.56373/1.14945
        accuracy: 0.880 (22/25)
        micro: F1: 0.880, precision: 0.880, recall: 0.880
        macro: F1: 0.864, precision: 0.900, recall: 0.875

The facade provides access to last_result, a ModelResult that contains only the training results if the model has only been trained, or both the training and test results after testing. Note that you must call persist_result after training and/or testing to store those respective results, as detailed in the facade resources section.
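As a hedged sketch, a typical flow might look like the following; the create_facade factory and the train and test method names are assumptions used for illustration, while last_result and persist_result are described above:

```python
# Sketch only: create_facade(), train() and test() are hypothetical names used
# for illustration; last_result and persist_result are described above.
facade = create_facade()

facade.train()            # populates the training (and validation) results
facade.test()             # adds the test results
facade.persist_result()   # store the results for later retrieval

result = facade.last_result   # a ModelResult with the train and test results
print(result)                 # prints a summary like the one shown above
```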

Plotting Loss#

Both the training and validation loss are plotted during the training phase. The plot is available not only upon completion of the training phase, but during training as well.

During training, if the update_path is configured on the executor, the training and validation loss are plotted.
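As an illustrative sketch (how the executor is reached from the facade, and the train method name, are assumptions; only update_path itself is named above), configuring the path enables the live loss plot:

```python
from pathlib import Path

# Sketch: the executor attribute on the facade and the train() method name are
# assumptions; update_path is the setting described above.
facade.executor.update_path = Path('target/loss.png')

facade.train()   # the training and validation loss are plotted as training runs
```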

Result Manager#

The aforementioned last_result uses an instance of a ModelResultManager, obtained from the result_manager property of the facade, when no training has yet occurred for that instance of the facade. This instance is used to retrieve previous results in the case they were computed in a previous Python interpreter. The ModelResultManager saves results in several formats using the model name and an increasing integer index with the following extensions:

  • txt: human readable text

  • json: indented JSON format

  • dat: pickled format (e.g., used by last_result to restore results)

  • png: the training and validation loss plot

  • model: a directory with the model files used to restore a model from disk with methods such as the facade's load_from_path
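For instance, in a fresh Python interpreter the previously persisted results might be reloaded through the result manager. The load method name in the sketch below is an assumption, since only the result_manager property is documented here:

```python
# Hypothetical sketch: reload a result persisted in a previous interpreter.
manager = facade.result_manager   # the facade's ModelResultManager
previous = manager.load()         # assumed method: restores the pickled (.dat) result
print(previous)                   # a human readable summary like the .txt format
```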

Predictions#

The get_predictions method of the facade generates predictions as a pandas.DataFrame. By default, the output includes the correct label, the predicted label, and the DataPoint ID. However, additional columns can be generated by passing a list of column names and a mapping function that returns a tuple of column values. See the Iris notebook for an example of how to do this.
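A possible sketch follows; the column_names and transform parameter names, as well as the data point attributes, are assumptions based on the description above:

```python
# Default predictions: correct label, predicted label and data point ID.
df = facade.get_predictions()

# Sketch of adding columns: the parameter names (column_names, transform) and
# the data point attributes (sepal_len, sepal_width) are assumptions.
df = facade.get_predictions(
    column_names=['sepal_len', 'sepal_width'],
    transform=lambda dp: (dp.sepal_len, dp.sepal_width))
print(df.head())
```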

Reproducibility#

Being able to reproduce results is one of the major goals of this framework. While the framework provides an API (TorchConfig) to set the random seed state of PyTorch, numpy, and the Python environment, in some cases there is still some variance in results.
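For example, seeding might look like the following sketch; the import path and the set_random_seed method name are assumptions, as only the TorchConfig API is named above:

```python
# Sketch: the import path and method name are assumptions; TorchConfig is the
# API described above for setting the random seed state.
from zensols.deeplearn import TorchConfig

TorchConfig.set_random_seed(0)   # seeds PyTorch, numpy and the Python environment
```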

According to this GitHub issue:

This is expected, some of our kernels are not deterministic (specially during backward). Might be good to refer to #1535.