When working with PyTorch Lightning on the Valohai platform, you have powerful tools at your disposal to streamline metadata logging and to save the best models during training. Let's explore how to take advantage of these features.
Note that a complete working example is available on GitHub; the repository contains the code and step-by-step instructions.
Example of logging metadata with PyTorch Lightning
PyTorch Lightning offers several hooks that make it easy to log metadata to the Valohai platform. Here we use the on_train_epoch_end hook, which runs at the end of each training epoch. Inside the hook, the valohai_utils module simplifies metadata logging.
To illustrate this, the following snippet demonstrates metadata logging from the on_train_epoch_end hook:
# Assumes `import valohai` (from the valohai-utils package) at module level.
def on_train_epoch_end(self):
    with valohai.metadata.logger() as logger:
        train_loss = ...
        train_acc = ...
        logger.log("epoch", self.current_epoch + 1)
        logger.log("train_acc", train_acc)
        logger.log("train_loss", train_loss)
In the code above, replace the train_loss and train_acc placeholders with the actual values you want to log. The logger.log function takes a key and the corresponding value, so you can log any metric the same way.
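One common way to produce those epoch-level values is to accumulate per-batch results during the epoch and average them at the end. The helper below is a hypothetical sketch of that bookkeeping (EpochMetrics is not part of PyTorch Lightning or valohai-utils; it only illustrates where the logged numbers might come from):

```python
class EpochMetrics:
    """Hypothetical helper: accumulate per-batch metrics during an epoch
    and compute the averages to pass to logger.log()."""

    def __init__(self):
        self.reset()

    def reset(self):
        self._loss_sum = 0.0
        self._correct = 0
        self._seen = 0

    def update(self, batch_loss, correct, batch_size):
        # Weight the loss by batch size so uneven final batches
        # do not skew the epoch average.
        self._loss_sum += batch_loss * batch_size
        self._correct += correct
        self._seen += batch_size

    def compute(self):
        train_loss = self._loss_sum / self._seen
        train_acc = self._correct / self._seen
        return train_loss, train_acc
```

You would call update() from training_step, then compute() and reset() inside on_train_epoch_end before the logger.log calls. (PyTorch Lightning's own self.log with on_epoch=True, or the torchmetrics package, can serve the same purpose.)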
By incorporating this hook and the valohai_utils module, the desired metadata is sent to Valohai on every epoch, providing valuable insight into the training process. Once the metrics are logged, you can build plots and visualize the results directly in the Valohai platform.
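Under the hood, Valohai collects metadata by parsing JSON objects printed one per line to the execution's standard output; the valohai-utils logger is a convenience wrapper around that. As a simplified sketch, the format_metadata helper below (a name invented for this illustration) builds such a line, which a plain print() would be enough to emit:

```python
import json


def format_metadata(epoch, train_loss, train_acc):
    """Build one metadata line in the shape Valohai parses from stdout.

    Printing the returned string on its own line is the plain-Python
    equivalent of the logger.log calls in the hook above.
    """
    return json.dumps(
        {"epoch": epoch, "train_acc": train_acc, "train_loss": train_loss}
    )


# Example: print(format_metadata(1, 0.42, 0.91)) emits one JSON line
# that Valohai records as metadata for the current execution.
```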