August '20 Newsletter

AI-Native Software Infrastructure

In our latest blog post, we discuss some of the theoretical and practical considerations that deep learning engineers run into as they attempt to scale training beyond a single machine. There’s a lot to get right just to get functional distributed training off the ground. Once you do, there is a rich space of optimizations to navigate, whose efficacy can depend on everything from your model architecture to your network topology.

By the way, here’s how easy it is to enable optimized distributed training in Determined AI:

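Here is a minimal sketch, assuming Determined's standard YAML experiment configuration schema; the field values are illustrative, not the exact snippet from the post:

    # Sketch of a Determined experiment configuration (values are illustrative).
    description: distributed-training-example
    hyperparameters:
      global_batch_size: 512        # divided across workers automatically
    resources:
      slots_per_trial: 8            # request 8 GPUs; this one field enables distributed training

Submitting the experiment is unchanged (for example, det experiment create const.yaml .): Determined schedules the GPUs and coordinates the workers, so the model definition itself doesn't need to change.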

Watch your inbox for future news and updates. You can also keep up with all our latest activities on Twitter or LinkedIn. If you have a question or comment, or would like to start a conversation with us, please reply to this email or contact us on our website.

Try Determined AI for free! Request sandbox access today.

Recent Posts

AUG 05, 2020

YogaDL: a better approach to data loading for deep learning models

JUL 30, 2020

TensorFlow Datasets: The Bad Parts

JUL 30, 2020

Choosing Your Deep Learning Infrastructure: The Cloud vs. On-Prem Debate
