May '20 Newsletter

AI-Native Software Infrastructure

In our latest blog post, we discuss the theoretical and practical considerations that deep learning engineers run into as they attempt to scale training beyond a single machine. There's a lot to get right just to get functional distributed training off the ground. Once you do, there is a rich space of optimizations to navigate, whose efficacy can depend on everything from your model architecture to your network topology.

By the way, here’s how easy it is to enable optimized distributed training in Determined AI:

[Image: enabling distributed training in a Determined experiment configuration]
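As a sketch of what that looks like (field values here are illustrative, not from the original post): in a Determined experiment configuration, scaling a trial across GPUs comes down to one field, `resources.slots_per_trial`.

```yaml
# Illustrative Determined experiment configuration.
# Setting slots_per_trial > 1 enables distributed training;
# Determined handles worker launch and gradient aggregation.
name: mnist_distributed        # hypothetical experiment name
hyperparameters:
  global_batch_size: 512
resources:
  slots_per_trial: 8           # train this trial across 8 GPUs
searcher:
  name: single
  metric: validation_loss
```

With `slots_per_trial: 1` the same experiment runs on a single GPU; no training-code changes are needed to move between the two.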

Watch your inbox for future news and updates. You can also keep up with all our latest activities on Twitter or LinkedIn. If you have a question or comment, or would like to start a conversation with us, please reply to this email or contact us on our website.

Try Determined AI for free! Request sandbox access today.

Recent posts

MAY 27, 2020

How to Build an Enterprise Deep Learning Platform, Part Two

MAY 12, 2020

How to Build an Enterprise Deep Learning Platform, Part One

MAY 07, 2020

Scale Your Model Development on a Budget With GCP Preemptible Instances
