This repository contains generic utilities used by other deep learning projects.
The following are currently available:
A collection of generic functions for handling TensorFlow Keras models.
A function to save a model and its weights as follows:
- Model saved in file filename.json
- Weights saved in file filename.h5
- model: Keras model to be saved
- filename: The full path of the filename without file extension.
A function to load a model and its weights as follows:
- Model read from file filename.json
- Weights read from file filename.h5
- filename: The full path of the filename without file extension
- custom_objects: A dictionary of Keras custom layers (if any) that were defined for the model that is being read
A function to save training, validation and testing results into a file.
The saved results will be as follows:
- Training info: information about the training parameters used:
- Loss function
- Optimisation method
- Learning rate
- Batch size
- Number of epochs
- Training results: The final training value of each metric listed in 'metrics' (a list of metric names), taken from history
- Validation results: Same as Training results (if available)
- Testing results: The same metric values for the test set (if available), passed via 'test_results'
- Model Summary
- model: The Keras model
- init: An object that must contain the following elements:
- init.loss: Loss function that was used
- init.optimiser: Optimiser used for training
- init.lr: Learning rate
- init.batchsize: Batch size
- init.epochs: Number of epochs
- init.save: Full path of the output filename without extension; ".txt" will be appended to it
- init.validate: Boolean indicating that validation was performed on the validation set and hence its results are available
- init.evaltest: Boolean indicating that evaluation was performed on the test set and hence its results are available
- history: The history element of the object returned by Keras .fit, which is a record of training loss and metric values at successive epochs, as well as validation loss and metric values (if applicable)
- test_results: A list containing the metrics values for the test set evaluation (if applicable)
- metrics: A list of the metrics against which the model was trained.
A function to save all history values to a file so they can be plotted later if needed.
- filename: The full path of the filename without file extension. A "_history.csv" will be added to the filename
- history: The history element of the object returned by Keras .fit, which is a dictionary of training loss and metric values at successive epochs, as well as validation loss and metric values (if applicable)
A module that customises the logging object in order to generate logs in the following format:
YYYY-mm-dd HH:MM:SS.msec (process_id) (levelname) module.function -> message
OR
YYYY-mm-dd HH:MM:SS.msec (process_id) (levelname) thread.function -> message
Example:
2019-02-21 18:44:37.548 (668456) (INFO) main.<module> -> Tensorflow version: 1.11.0
OR
2019-02-21 18:44:37.548 (668456) (INFO) MainThread.<module> -> Tensorflow version: 1.11.0
This module will do the following:
- All messages with level ERROR or CRITICAL are written to the terminal where the main Python file is running, regardless of whether you choose to generate logs.
- If you choose to create logs, a logfile will be created containing all messages whose level is greater than or equal to debuglevel.
Initialises logging. This function must be called in the main Python file.
- name: Name of the logger that is passed from the main python file and which must be used by all other files
- filename: Full path of the file where logs are to be written. This is a TimedRotatingFileHandler that rotates at midnight.
- debuglevel: The minimum log level that will be written into the file. All messages with level less than debuglevel will not be written into the file handler. Please check the logging class documentation for more info.
- log: Whether to generate logs or not.
- threadname: If True (default), put the threadname instead of the modulename in the log. threadname.function instead of module.function
In the main python file, do the following:
- Define a variable
logger_name
giving it the value that you want. This variable must be defined at the beginning of the main Python script, before importing all other Python files that use the logging. Example: logger_name = "Test"
from Msglog import LogInit
log = LogInit(logger_name, logfilename, debuglevel)
In all other python files, add the following:
from __main__ import logger_name
import logging
log = logging.getLogger(logger_name)
You can then write to the logfile using the log object.
Example: log.info("Tensorflow version: %s", tf.__version__)