Watch how to easily run data-parallel distributed training across multiple GPUs and perform hyperparameter search.
Learn how to share a GPU cluster among multiple experimenters.
Follow how one engineer views and navigates a teammate’s experiments, and continues training from a previous trial.
Find out how to reproduce experiment results from automatic metadata tracking and checkpoints.
Check out the talk on scalable deep learning, which introduces Hyperband for hyperparameter optimization and Paleo, an analytical performance model for deep learning.
Listen to the podcast on the fundamental problems of training and deploying deep learning applications at scale, and ways to solve them.
Learn how to use the Determined AI platform for your deep learning development with the detailed documentation.
Learn about the Determined AI platform and how it accelerates the deep learning development lifecycle.