November 19, 2019
This blog post was initially published in Forbes.
Decades ago, Japan faced an unavoidable, long-term economic challenge. Even as its economy reached record highs in the late 1980s (fueled by strong auto sales, the rise of innovative companies like Nintendo, and real estate speculation), it was preparing for the coming day when more than a quarter of its population would be over age 65. Today, Japan’s median age is more than 10 years older (47) than that of the US (36). To offset the economic realities of a rapidly aging workforce, Japan made the decision to become a world leader in robotics. Advanced robotics in manufacturing, healthcare, consumer electronics, and soon personal services are now deeply entrenched in the Japanese economy, a movement created out of a need to maintain productivity and GDP growth. As they say, “necessity is the mother of invention.”
Today, while the United States may not be facing the same demographic challenges, it is similarly at a crossroads: Artificial Intelligence (AI) will soon disrupt our economy - it is up to us whether that disruption is positive or negative. Indeed, the continuing advancement of AI, Deep Learning (DL), Machine Learning (ML), computer vision and robotics is going to have a massive economic effect, and some job displacement will be a downside of industries becoming more productive. McKinsey estimates that automation could displace as much as 15 percent of the global workforce by 2030. But that's not the whole story.
Many conversations are already happening about the resulting need to up-skill workforces and prepare government and business for more autonomous industries. While those discussions are necessary, it's also important to view AI through an optimistic lens, as it has great potential to be a catalyst for the economy and to help solve some of our most pressing issues. And the hard truth is that if America isn't leading in AI, other countries will, and they will be the beneficiaries of economic efficiencies in tentpole industries like manufacturing, agriculture, transportation, technology, education - and even environmental health.
Every technological shift, from the Industrial Revolution to the Internet, has been met with fear about its impact on jobs. But each time, these revolutions opened entirely new markets and economies that created more jobs than they displaced, and those new economies were far more complex and far-reaching than originally imagined. AI will likewise be an expansive technology, one that affects not only big industry but also social issues like privacy. Consider:
Productivity: While America's economy is strong today, its labor productivity growth was higher in the 1960s. Setting aside a brief uptick during the dotcom era of 1995-2005, productivity growth in the US since 1970 has averaged just 1.4 percent a year. While there is a lot of talk of AI systems replacing human labor, the most realistic outcome is that AI systems and intelligent robots will augment human labor and tasks to increase productivity. In manufacturing, more AI-centric facilities will certainly increase output and eliminate some human jobs, but they will also let manufacturers localize, so Apple, for example, could affordably make its devices in dozens of facilities in the United States rather than in a few massive facilities in China. In the industrial world, increases in productivity are almost always reinvested in expansion: if an airline sees 10 percent more profit from increased productivity, it will most likely buy more planes and expand to more airports - creating thousands of jobs and a positive ripple effect through the commercial airline supply chain - rather than sit on the profits.
Privacy: One of the biggest issues facing American consumers, businesses and regulators today is privacy. AI has generally been viewed as an anti-privacy technology, but it has great potential to increase privacy and combat bias, even the bias of other AI systems. Consider facial recognition. Most people would agree that systems that scan and make a record of your face on a city street are a massive breach of privacy. At the same time, most would agree that video cameras on city streets can be a strong deterrent to crime and helpful in identifying dangerous people. The best middle ground may be AI, specifically lightweight AI systems that can both cache and delete data on the device they run on. Say, for example, that in a 24-hour period no crime is reported in the area a camera covers and no face on the system's watchlist appears; the data is then erased. Furthermore, while AI systems have been shown to exhibit bias across race, gender, and more, there is a great opportunity to enable AI systems to police other AI systems, as Oren Etzioni of the Allen Institute for Artificial Intelligence once wrote. AI advancement in cybersecurity, in privacy and in policing bias on the internet is already happening.
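To make that cache-and-delete idea concrete, here is a minimal sketch in Python of a hypothetical on-device retention policy. Everything in it (the OnDeviceFaceCache class, the 24-hour RETENTION_SECONDS window, the watchlist and incident flags) is illustrative rather than any real product's API: detections stay on the camera, only watchlist matches are ever surfaced, and anything older than the window is erased unless an incident was reported.

```python
import time
from collections import deque

RETENTION_SECONDS = 24 * 60 * 60  # hypothetical 24-hour retention window


class OnDeviceFaceCache:
    """Hypothetical edge-device cache: detections never leave the camera
    unless they match a watchlist entry or overlap a reported incident."""

    def __init__(self, watchlist_ids):
        self.watchlist_ids = set(watchlist_ids)
        self.cache = deque()  # (timestamp, embedding_id, frame_ref)

    def add_detection(self, embedding_id, frame_ref, now=None):
        now = time.time() if now is None else now
        if embedding_id in self.watchlist_ids:
            return ("escalate", frame_ref)  # only watchlist matches are surfaced
        self.cache.append((now, embedding_id, frame_ref))
        return ("cached", None)

    def purge_expired(self, incident_reported=False, now=None):
        """Delete everything older than the retention window; keep data
        only if an incident was reported in the camera's coverage area."""
        now = time.time() if now is None else now
        if incident_reported:
            return 0  # retain pending review (hypothetical policy choice)
        removed = 0
        while self.cache and now - self.cache[0][0] > RETENTION_SECONDS:
            self.cache.popleft()
            removed += 1
        return removed


# Example: no incident and no watchlist match, so day-old data is erased.
cache = OnDeviceFaceCache(watchlist_ids={"suspect-123"})
cache.add_detection("passerby-001", "frame-0001", now=0)
print(cache.purge_expired(incident_reported=False, now=RETENTION_SECONDS + 1))  # -> 1
```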
Federated Learning: Another boost to privacy, with even more profound economic and environmental benefits, is the rise of federated learning. This emerging framework essentially decentralizes ML by running models at the edge on thousands or millions of mobile phones rather than in a massive centralized database. Because mobile phones today ship with AI chips, they can run ML models locally. That helps customize mobile software for individual users, while only a summary of what each phone has learned is sent to a central server, protected by advanced encryption, such as homomorphic encryption, that keeps all personal data private. Federated learning has massive potential to improve the privacy of data collection and to enable innovators that aren't personal-data aggregators like Amazon, Google, Microsoft, or Facebook to build strong AI solutions for businesses and consumers. There are still many important research questions to solve before techniques in this burgeoning field are practical, but federated learning offers real promise for an AI-powered world where consumers still control their data and models are built without massive energy use.
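To illustrate the mechanism, here is a minimal, self-contained federated-averaging sketch in Python with NumPy. It is not the production protocol described above: the simulated "phones," their data, and the learning rates are made up, and the encryption step is omitted. The point is simply that each client trains on data that never leaves it, and the server only ever averages the resulting weight summaries.

```python
import numpy as np

rng = np.random.default_rng(0)


def local_update(weights, local_x, local_y, lr=0.1, epochs=5):
    """One client's training step: a linear model fit on data that never
    leaves the device; only the updated weights are returned."""
    w = weights.copy()
    for _ in range(epochs):
        grad = local_x.T @ (local_x @ w - local_y) / len(local_y)
        w -= lr * grad
    return w


def federated_round(global_weights, clients):
    """FedAvg-style aggregation: the server sees only weight summaries,
    never the raw per-device data (encryption omitted in this sketch)."""
    updates = [local_update(global_weights, x, y) for x, y in clients]
    return np.mean(updates, axis=0)


# Simulate a handful of phones, each holding its own private data.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):
    x = rng.normal(size=(50, 2))
    y = x @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((x, y))

weights = np.zeros(2)
for _ in range(20):
    weights = federated_round(weights, clients)
print(weights)  # approaches [2.0, -1.0] without pooling any raw data
```

In a real deployment, the averaged updates themselves would be protected before the server sees them, for example with the kind of homomorphic encryption mentioned above.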
When the Internet went mainstream in the mid-1990s, no one could imagine where we'd be today with mobile, streaming content, genomics, on-demand services, and so much more. Back then, there was a fear of the Internet similar to the fear of AI today - that it would take millions of jobs and make some industries obsolete. For some reason, discussions of technological shifts always compress the timeline, as if the change will arrive all at once. AI research and progress will go on for decades; it will be a slow but important progression, and while we absolutely need to be ready for disruption to human labor, there are reasons to be optimistic as well. It's time to look at both sides of the AI conversation, because America's economy this century will depend heavily on technological leadership, and turning back is not an option.