Resources

Recent posts

MAR 13, 2019

Announcing the Future of AI Infrastructure

MAR 05, 2019

Random Search is a hard baseline to beat for Neural Architecture Search

FEB 20, 2019

Addressing the challenges of massively parallel hyperparameter optimization

Screencasts

Distributed deep learning at scale

Watch how to perform data-parallel distributed training and hyperparameter search across multiple GPUs.
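
For a concrete sense of what this looks like in code, here is a minimal sketch of data-parallel training using PyTorch's DistributedDataParallel; the model, dataset, and hyperparameters are placeholders, not anything from the screencast.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(32, 1).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])      # sync gradients across GPUs

    dataset = TensorDataset(torch.randn(1024, 32), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset)            # each rank gets a distinct shard
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(3):
        sampler.set_epoch(epoch)                     # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()          # DDP all-reduces gradients here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<num_gpus> train.py
```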

GPU resource sharing

Learn how to share a GPU cluster among multiple experimenters.
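
As a toy illustration of the mechanism underneath, the sketch below pins each job to one GPU with the CUDA_VISIBLE_DEVICES environment variable; the job commands and the four-GPU cluster size are assumptions, and a real shared cluster (the subject of the screencast) needs a proper scheduler rather than this round-robin loop.

```python
import itertools
import os
import subprocess

NUM_GPUS = 4  # assumed cluster size, for illustration only
jobs = [      # hypothetical per-user training commands
    ["python", "train_alice.py"],
    ["python", "train_bob.py"],
    ["python", "train_carol.py"],
]

gpu_ids = itertools.cycle(range(NUM_GPUS))
procs = []
for cmd in jobs:
    # Each process only sees the single GPU assigned to it.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(next(gpu_ids)))
    procs.append(subprocess.Popen(cmd, env=env))

for p in procs:
    p.wait()
```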

Collaboration and experiment warmstarting

Follow how one engineer views and navigates a teammate's experiments and continues training from a previous trial.
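
In plain PyTorch terms, warmstarting boils down to restoring model and optimizer state from a saved checkpoint and resuming the loop; the checkpoint path, its keys, and the model here are hypothetical, not the tool's actual layout.

```python
import torch

model = torch.nn.Linear(32, 1)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Load a teammate's checkpoint (hypothetical path and keys).
ckpt = torch.load("shared/experiment_42/checkpoint.pt")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])
start_epoch = ckpt["epoch"] + 1  # pick up where the trial stopped

for epoch in range(start_epoch, start_epoch + 10):
    ...  # the usual training loop, now warmstarted
```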

Reproducible experiments

Find out how to reproduce experiment results using automatic metadata tracking and checkpoints.
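
The idea behind this can be sketched in a few lines: persist the hyperparameters and RNG seed alongside the weights so a run can be replayed exactly. Everything below is illustrative plain PyTorch, not the tool's API, which performs this tracking automatically.

```python
import random

import torch


def save_checkpoint(path, model, hparams, seed, epoch):
    # Store metadata next to the weights so the run is replayable.
    torch.save(
        {
            "model": model.state_dict(),
            "hparams": hparams,  # e.g. {"lr": 0.01, "batch_size": 64}
            "seed": seed,
            "epoch": epoch,
        },
        path,
    )


def reproduce(path, build_model):
    ckpt = torch.load(path)
    random.seed(ckpt["seed"])        # restore RNG state before rebuilding
    torch.manual_seed(ckpt["seed"])
    model = build_model(ckpt["hparams"])
    model.load_state_dict(ckpt["model"])
    return model, ckpt["hparams"], ckpt["epoch"]
```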

Recordings

Addressing the challenges of massively parallel hyperparameter optimization


Scalable deep learning

Check out the talk on scalable deep learning, which introduces Hyperband for hyperparameter optimization and Paleo, an analytical performance model for deep learning.
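
Hyperband itself is compact enough to sketch in full. In the version below, sample_config and train_and_eval are placeholders for a search space and a training routine, and the defaults are just example values.

```python
import math
import random


def hyperband(sample_config, train_and_eval, max_resource=81, eta=3):
    """Return the best (loss, config) pair found across all brackets."""
    s_max = int(math.log(max_resource, eta) + 1e-9)  # guard against float error
    budget = (s_max + 1) * max_resource
    best = (float("inf"), None)

    for s in range(s_max, -1, -1):  # one bracket per aggressiveness level
        n = int(math.ceil(budget / max_resource * eta ** s / (s + 1)))
        r = max_resource * eta ** (-s)
        configs = [sample_config() for _ in range(n)]

        for i in range(s + 1):  # successive halving within the bracket
            n_i = int(n * eta ** (-i))
            r_i = max(1, int(round(r * eta ** i)))  # resource per config
            losses = [train_and_eval(c, r_i) for c in configs]
            ranked = sorted(zip(losses, configs), key=lambda lc: lc[0])
            best = min(best, ranked[0], key=lambda lc: lc[0])
            configs = [c for _, c in ranked[: max(1, n_i // eta)]]

    return best


if __name__ == "__main__":
    # Toy usage: the "loss" favors lr near 1e-2 and improves with resource.
    best_loss, best_cfg = hyperband(
        sample_config=lambda: {"lr": 10 ** random.uniform(-4, -1)},
        train_and_eval=lambda cfg, r: abs(math.log10(cfg["lr"]) + 2) + 1.0 / r,
    )
    print(best_loss, best_cfg)
```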

Training and deploying deep learning at scale

Listen to the podcast on the fundamental problems of training and deploying deep learning applications at scale, and ways to solve them.