Optimizing Machine Learning for IoT: Energy-Efficient AI Approaches and Architectures
Updated: 2025-12-23
Abstract
The rapid expansion of Internet of Things (IoT) devices has heightened the demand for machine learning (ML) models that operate within tight limits on energy, memory, and computation. This paper offers an in-depth analysis of energy-efficient AI methods and architectural optimizations tailored to resource-constrained IoT settings. We explore lightweight machine learning and deep learning techniques (model compression, pruning, quantization, knowledge distillation, and event-driven processing) and assess their effects on energy usage and inference efficiency across diverse IoT platforms. A refined edge–cloud cooperative framework is proposed to lower communication costs, adaptively distribute computation, and prolong device lifespan while delivering real-time insights. Experimental analysis shows that the proposed energy-efficient ML pipeline yields considerable reductions in power consumption, latency, and model size while preserving prediction accuracy. The results underscore the essential role of adaptive, hardware-aware AI techniques in enabling scalable, sustainable, and efficient IoT deployments, and point to future directions in on-device learning, federated optimization, and neuromorphic computing.
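To make two of the compression techniques named in the abstract concrete, the sketch below applies unstructured magnitude-based pruning followed by symmetric int8 post-training quantization to a weight matrix. This is a minimal illustration only: the function names, the 50% sparsity target, and the random weights are assumptions for demonstration, not the pipeline evaluated in the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

def quantize_int8(weights):
    """Symmetric linear quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to approximate float32 values."""
    return q.astype(np.float32) * scale

# Illustrative random weight matrix standing in for a trained layer.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)

w_pruned = magnitude_prune(w, sparsity=0.5)   # ~50% of entries become 0
q, scale = quantize_int8(w_pruned)            # 4x smaller storage than float32
w_restored = dequantize(q, scale)

achieved_sparsity = float(np.mean(w_pruned == 0))
max_error = float(np.abs(w_restored - w_pruned).max())
```

The per-weight quantization error is bounded by half the scale step, which is why such schemes can preserve accuracy while shrinking model size and memory traffic on low-power devices.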
Keywords
Energy-efficient AI, Internet of Things, Edge computing, Lightweight machine learning, Model compression, Low-power architectures