Watch how to easily perform data-parallel distributed training and hyperparameter search across multiple GPUs.
Learn how to share a GPU cluster among multiple experimenters.
Follow how one engineer views and navigates a teammate’s experiments, and continues training from a previously trained model.
Find out how to reproduce experiment results from automatic metadata tracking and checkpoints.
Learn why we’re in the dark age of AI infrastructure, and how to dramatically improve deep learning workflows via novel algorithmic and software solutions.
Watch Ameet Talwalkar’s talk on his recent research demonstrating both the promise of Neural Architecture Search (NAS) and the current immaturity of the field.
Check out the talk on scalable deep learning, which introduces Hyperband for hyperparameter optimization and Paleo, an analytical performance model for deep learning.
Listen to the podcast on the fundamental problems related to training and deploying deep learning applications at scale, and ways to solve them.
Explore the detailed documentation to learn how to use the Determined AI platform for your deep learning development.
Learn about the Determined AI platform and how it accelerates the deep learning development lifecycle.