# Results

The [facade] produces results from both the training and the testing phases.
These results are composed as a hierarchical tree-like object graph that includes:

* [ModelResult]: the top level object that contains the results from all data sets
  as the properties `test`, `train` and `validation`, each of type [DatasetResult]
* [DatasetResult]: the results for a single data set, given as [EpochResult]
  instances in the [results property]
* [EpochResult]: the results for each epoch, containing the [labels],
  [predictions], and [metrics]

The [model type] of the results determines which kind of [metrics] are provided, either:

* [prediction metrics]: R^2, RMSE, MAE, and correlation
* [classification metrics]: accuracy, micro and macro F1, recall and precision

An example from the [Iris example] is given below:

```
Name: Iris: 1
Run index: 2
Learning rate: 0.005
train:
    started: 08/20/2020 19:35:19:397030
    ended: 08/20/2020 19:35:41:234688
    batches: 6
    ave data points per batch: 18.8
    converged/epochs: 760/1000
    ave/min loss: 5.55785/2.85951
    accuracy: 0.885 (100/113)
    micro: F1: 0.885, precision: 0.885, recall: 0.885
    macro: F1: 0.881, precision: 0.900, recall: 0.888
validation:
    batches: 1
    ave data points per batch: 12.0
    converged/epochs: 844/1000
    ave/min loss: 3.08125/3.08125
    accuracy: 1.000 (12/12)
    micro: F1: 1.000, precision: 1.000, recall: 1.000
    macro: F1: 1.000, precision: 1.000, recall: 1.000
test:
    started: 08/20/2020 19:35:42:012540
    ended: 08/20/2020 19:35:42:014008
    batches: 2
    ave data points per batch: 12.5
    converged/epochs: 1/1
    ave/min loss: 3.56373/1.14945
    accuracy: 0.880 (22/25)
    micro: F1: 0.880, precision: 0.880, recall: 0.880
    macro: F1: 0.864, precision: 0.900, recall: 0.875
```

The [facade] provides access to the [last_result], which has just the training
results if the model has only been trained, or both the training and test
results after testing, as a [ModelResult].  Note that you must call
[persist_result] to store the results after training and/or testing in order to
retrieve them later, as detailed in the [facade resources] section.


## Plotting Loss

Both the training and validation loss are plotted during the training phase.
The plot is available not only upon [completion](#result-manager) of the
training phase, but during training as well: if the `update_path` path is
configured on the [executor], the training and validation loss are plotted
while the model trains.


## Result Manager

The aforementioned [last_result] uses an instance of a [ModelResultManager],
obtained from the [result_manager] property of the [facade], when no training
has yet occurred for that instance of the facade.  This instance is used to
retrieve previous results in the case they were computed in a previous Python
interpreter session.  The [ModelResultManager] saves results in several formats
using the model name and an increasing integer index with the following
extensions:

* **txt**: human readable text
* **json**: indented JSON format
* **dat**: pickled format (i.e. used to restore results by [last_result])
* **png**: the training and validation [loss plot](#plotting-loss)
* **model**: a directory with the model files used to restore a model from
  disk with methods such as the [facade] [load_from_path]


## Predictions

The [get_predictions] method of the [facade] generates predictions as a Pandas
`DataFrame`.  By default, the output includes the correct label, the predicted
label and the [DataPoint] ID.  However, additional columns can be generated by
passing a list of column names and a mapping function that returns a `tuple`
of column values.  See the [Iris notebook] for an example of how to do this.
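
The short sketch below ties these pieces together.  It is illustrative only and
not taken verbatim from the library: it assumes a `facade` that has already
been trained and tested, and that the [results property] can be indexed as a
sequence of [EpochResult] instances; everything else uses only the properties
and methods named above.

```python
import pandas as pd

# assumes `facade` is an already trained and tested ModelFacade instance
facade.persist_result()          # store the results so they can be reloaded later
result = facade.last_result      # a ModelResult with train/validation/test results

for name in ('train', 'validation', 'test'):
    ds_result = getattr(result, name)    # a DatasetResult
    last_epoch = ds_result.results[-1]   # the last EpochResult of the data set
    print(name, last_epoch.metrics)      # prediction or classification metrics

# default prediction output: correct label, predicted label and DataPoint ID
df: pd.DataFrame = facade.get_predictions()
print(df.head())
```
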
## Reproducibility

Being able to reproduce results is one of the major goals of this framework.
While the framework provides an API ([TorchConfig]) to set the
[random seed state] of [PyTorch], numpy, and the Python environment, there is
still some variance in the results in some cases.  According to this
[GitHub issue](https://github.com/pytorch/pytorch/issues/18412):

> This is expected, some of our kernels are not deterministic (specially during backward).
> Might be good to refer to [#15359](https://github.com/pytorch/pytorch/issues/15359).


[PyTorch]: https://pytorch.org
[facade]: facade.md
[executor]: model.md
[facade resources]: facade.html#resources
[ModelResult]: ../api/zensols.deeplearn.result.html#zensols.deeplearn.result.domain.ModelResult
[DatasetResult]: ../api/zensols.deeplearn.result.html#zensols.deeplearn.result.domain.DatasetResult
[results property]: ../api/zensols.deeplearn.result.html#zensols.deeplearn.result.domain.DatasetResult.results
[EpochResult]: ../api/zensols.deeplearn.result.html#zensols.deeplearn.result.domain.EpochResult
[labels]: ../api/zensols.deeplearn.result.html#zensols.deeplearn.result.domain.ResultsContainer.labels
[predictions]: ../api/zensols.deeplearn.result.html#zensols.deeplearn.result.domain.ResultsContainer.predictions
[metrics]: ../api/zensols.deeplearn.result.html#zensols.deeplearn.result.domain.ResultsContainer.metrics
[model type]: ../api/zensols.deeplearn.result.html#zensols.deeplearn.result.domain.ResultsContainer.model_type
[prediction metrics]: ../api/zensols.deeplearn.result.html#zensols.deeplearn.result.domain.ResultsContainer.prediction_metrics
[classification metrics]: ../api/zensols.deeplearn.result.html#zensols.deeplearn.result.domain.ResultsContainer.classification_metrics
[ModelResultManager]: ../api/zensols.deeplearn.result.html#zensols.deeplearn.result.manager.ModelResultManager
[last_result]: ../api/zensols.deeplearn.model.html#zensols.deeplearn.model.facade.ModelFacade.last_result
[persist_result]: ../api/zensols.deeplearn.model.html#zensols.deeplearn.model.facade.ModelFacade.persist_result
[TorchConfig]: ../api/zensols.deeplearn.html#zensols.deeplearn.torchconfig.TorchConfig
[random seed state]: ../api/zensols.deeplearn.html#zensols.deeplearn.torchconfig.TorchConfig.set_random_seed
[result_manager]: ../api/zensols.deeplearn.model.html#zensols.deeplearn.model.executor.ModelExecutor.result_manager
[get_predictions]: ../api/zensols.deeplearn.model.html#zensols.deeplearn.model.facade.ModelFacade.get_predictions
[load_from_path]: ../api/zensols.deeplearn.model.html#zensols.deeplearn.model.facade.ModelFacade.load_from_path
[DataPoint]: ../api/zensols.deeplearn.batch.html#zensols.deeplearn.batch.domain.DataPoint
[Iris notebook]: https://github.com/plandes/deeplearn/blob/master/notebook/iris.ipynb
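
As a hedged illustration of the seed-setting API mentioned in the
reproducibility section, the snippet below sets the seed before any model
components are created.  The import path follows the module given in the
[TorchConfig] API link, but the exact parameters of the [random seed state]
method are an assumption and should be checked against that documentation.

```python
from zensols.deeplearn.torchconfig import TorchConfig

# set the random seed state of PyTorch, numpy and the Python environment
# before the facade or executor is created; the exact parameters may differ,
# so check the TorchConfig API documentation linked above
TorchConfig.set_random_seed(0)
```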