Resources

Recent posts

JUN 04, 2019

The cloud giants have an AI problem

MAY 20, 2019

Stop doing iterative model development

MAR 13, 2019

Announcing the future of AI infrastructure

Screencasts

Distributed deep learning at scale

Watch how to easily perform data-parallel distributed training and hyperparameter search across multiple GPUs.

GPU resource sharing

Learn how to share a GPU cluster among multiple experimenters.

Collaboration and experiment warmstarting

Follow how one engineer views and navigates a teammate’s experiments, then continues training from a previously trained model.

Reproducible experiments

Find out how to reproduce experiment results using automatically tracked metadata and checkpoints.

Recordings

Taming the deep learning workflow

Learn why we're in the dark age of AI infrastructure, and how novel algorithmic and software solutions can dramatically improve deep learning workflows.

Random search and reproducibility for neural architecture search

Watch Ameet Talwalkar’s talk on his recent research demonstrating both the promise of Neural Architecture Search (NAS) and the current immaturity of the field.

Addressing the challenges of massively parallel hyperparameter optimization

Watch the talk on the challenges that arise when scaling hyperparameter optimization to massively parallel settings, and approaches for addressing them.

Scalable deep learning

Check out the talk on scalable deep learning, introducing Hyperband for hyperparameter optimization and Paleo, an analytical performance model for deep learning.

Training and deploying deep learning at scale

Listen to the podcast on the fundamental problems related to training and deploying deep learning applications at scale, and ways to solve them.