February 25, 2021
Over the past few years, we’ve talked at length about the need to design hardware and software in tandem, especially when it comes to machine learning; we even produced our own podcast with some of the brightest minds in both fields to prove our point. We’re betting that the proliferation of different kinds of hardware, for both training and deployment, will continue, so being hardware-agnostic is a critical characteristic of the Determined platform.
I recently sat down with George Anadiotis, host of “Orchestrate All The Things” and a ZDNet columnist, to talk about the importance of interoperability in AI. It shouldn’t matter which of the many upstream data systems, or which exotic high-performance computing hardware, you’re using if your goal is fast and accurate model training. Our belief at Determined has always been that if you have taken the time to learn a high-level framework like PyTorch or TensorFlow, then you should be able to experiment, iterate, and repeat without worrying about what’s going on under the hood.
“We work with a medical device company whose problem right now is getting the most accurate model to run on existing hardware. They might start with a huge model and employ techniques like quantization or distillation to fit that hardware. Right now, we help them do this semi-automatically, but it’s historically been a fairly manual process. What we’d like to see over time is a way for users to specify things like design and deployment constraints, memory footprint limitations, and accuracy requirements… perhaps to help them automatically select the hardware that’s going to best fit that task. Or, if the hardware is fixed, help them automatically select the model that is going to most accurately fit into that performance profile. Some of what we’re working on at Determined is anchored in this idea.”
Today, fitting models onto existing hardware is a highly manual, error-prone process, but there’s a better way. Our team at Determined is tackling just that: developers should be able to specify their design and deployment constraints and automatically be told which hardware or model best fits that performance profile. George and I talked about our approach to this, starting with model development.
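To make one of the techniques from the quote above concrete, here’s a minimal sketch of post-training dynamic quantization in PyTorch, which stores a model’s linear-layer weights as int8 to shrink its memory footprint without retraining. The toy model, the size helper, and the 16 MB budget are purely illustrative assumptions for this example, not Determined’s API or our customer’s actual constraints.

```python
import os
import torch
import torch.nn as nn

# A toy model standing in for the "huge model" a team might start with.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

def model_size_mb(m: nn.Module) -> float:
    """Serialize the model's weights to disk and report the size in MB."""
    torch.save(m.state_dict(), "tmp_model.pt")
    size = os.path.getsize("tmp_model.pt") / 1e6
    os.remove("tmp_model.pt")
    return size

print(f"fp32 size: {model_size_mb(model):.1f} MB")

# Post-training dynamic quantization: Linear weights are stored as int8,
# cutting the weight footprint roughly 4x with no additional training.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

print(f"int8 size: {model_size_mb(quantized):.1f} MB")

# A deployment constraint like the one described above then becomes a
# simple check against the target device's budget (hypothetical number).
MEMORY_BUDGET_MB = 16
assert model_size_mb(quantized) <= MEMORY_BUDGET_MB, "still too large for target"
```

In the workflow described above, a check like that assertion is the kind of constraint a developer would declare up front, with the platform searching over models or hardware targets to satisfy it instead of the developer iterating by hand.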
Read more on Determined’s approach to model development here.
George wraps up our chat perfectly in his article, originally published on ZDNet. You can also check out the full podcast recording below.
George’s article, titled “AI chips in the real world: Interoperability, constraints, cost, energy efficiency, and models,” originally appeared on ZDNet on February 3, 2021.