February 09, 2023
By pushing machine learning to low-powered edge devices, TinyML could reduce the carbon footprint of computing by a factor of 1,000.
At its current growth rate, artificial intelligence (AI) computation will, by 2040, require more energy than humanity will be able to generate, according to MIT research.
To take just one example that’s very much in the news, GPT-3, the model behind ChatGPT, which we wrote about recently, has 175 billion machine learning (ML) parameters. It was trained on high-powered NVIDIA V100 graphics processing units (GPUs), but researchers estimate that training it on A100 GPUs would have required 1,024 GPUs, 34 days and $4.6 million.
And while energy consumption was not officially disclosed, GPT-3’s training is estimated to have consumed 936 MWh. That’s enough energy to power approximately 30,632 US households for one day, or 97,396 European households for the same period.
Now that’s unsustainable.
Already, information and communication tech eats up 2% of all global energy, putting its appetite on par with that of the aviation industry. Some are urging the scientific community to reduce the carbon footprint of computing by a factor of a million.
AI is just one component of this goal, of course. But there is a proposed way to improve its sustainability by a factor of a thousand, and that’s a step in the right direction. The key is TinyML.
As the term implies, tiny machine learning, or TinyML, takes the capabilities of ML (itself a subset of AI, in which computers use algorithms to recognize patterns) and pushes them onto devices previously considered too cheap, too energy-constrained and too resource-limited to run ML.
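A key technique that lets models fit on such constrained hardware is post-training quantization: replacing 32-bit floating-point weights with 8-bit integers, cutting memory roughly 4x and enabling cheap integer arithmetic. The article doesn’t detail this, so the following is a minimal illustrative sketch of affine int8 quantization, not any particular framework’s implementation:

```python
def quantize_int8(weights):
    """Map a list of float weights onto int8 values [-128, 127]."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0          # guard against all-equal weights
    zero_point = round(-lo / scale) - 128    # int8 value that represents 0.0-ish
    quantized = [
        max(-128, min(127, round(w / scale) + zero_point))  # clamp to int8 range
        for w in weights
    ]
    return quantized, scale, zero_point

def dequantize(quantized, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return [(q - zero_point) * scale for q in quantized]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(weights)
recovered = dequantize(q, scale, zp)
# Each recovered weight is within one quantization step (scale) of the original.
```

Production TinyML toolchains perform the same idea with per-tensor or per-channel scales, but the trade-off is identical: a small, bounded precision loss in exchange for a fraction of the memory and energy.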
Already, there are plenty of use cases for TinyML. Anytime you say “OK Google” to your Android phone, for example, it’s TinyML that “wakes” the device. TinyML’s killer app may well be sustainability.
Aakash Shirodkar, a Cognizant Senior Director in AI and Analytics, says, “AI is shaping the future of humanity; everyday devices are becoming smarter due to the algorithms embedded in them. As models become larger and larger to handle more complex tasks, the demand for servers to process them is growing exponentially.”
While TinyML offers many benefits—such as improving edge devices’ speed and latency, security and privacy protections, reliability and resiliency, and scalability—none of these compares to what TinyML can do in terms of cost savings by lowering the operational costs of running these devices, Aakash notes.
“TinyML eliminates one of the most significant costs for smart edge devices by reducing continuous communication between devices and the cloud to a bare minimum,” he says. “So, the technology has the potential to make a significant difference in terms of sustainability and energy coherence.”
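To see why on-device inference slashes communication costs, consider a back-of-envelope comparison (the sample rate, event counts and payload sizes below are illustrative assumptions, not figures from the article): a sensor that streams raw audio to the cloud versus one that runs a model locally and transmits only detection events.

```python
# Scenario A: device streams raw 16 kHz, 16-bit mono audio to the cloud.
SAMPLE_RATE_HZ = 16_000
BYTES_PER_SAMPLE = 2
SECONDS_PER_DAY = 86_400
raw_bytes_per_day = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SECONDS_PER_DAY

# Scenario B: an on-device TinyML model classifies the audio itself and
# uploads only compact event messages (timestamp + label) when it detects
# something of interest.
EVENTS_PER_DAY = 20
BYTES_PER_EVENT = 16
edge_bytes_per_day = EVENTS_PER_DAY * BYTES_PER_EVENT

reduction_factor = raw_bytes_per_day / edge_bytes_per_day
print(f"raw stream:  {raw_bytes_per_day / 1e9:.2f} GB/day")
print(f"edge events: {edge_bytes_per_day} B/day")
print(f"reduction:   ~{reduction_factor:,.0f}x less data transmitted")
```

Since radio transmission typically dominates a battery-powered device’s energy budget, a reduction of this magnitude in transmitted data is what makes the multi-year battery lifetimes associated with TinyML plausible.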
Because of this energy efficiency, devices could be installed almost anywhere. Imagine a scientist planting a device in the forest that can run for a year on a single small battery, providing ready-to-use insights into the ecosystem's growth and conditions over time.
There’s high demand from multiple industries for assistance in building a data-driven infrastructure. “Customers are seeking help with data ingestion, integration, analytics and TinyML model development for informed business decisions,” Aakash says.