Authors: S Gokulakrishnan, Dayananda Sagar University; Ampavathi Anusha, Vidya Jyothi Institute of Technology; Chythanya Kanegonda Ravi, SR University; Balakrishnan Minu, Sri Eshwar College Of Engineering; Ammisetty Veeraswamy, Koneru Lakshmaiah Education Foundation
The rapid growth of the Internet of Things (IoT) has produced unprecedented volumes of distributed data, requiring machine learning (ML) models to run at the network's edge. However, the resource-constrained nature of IoT devices—limited battery life, processing capability, and memory—poses considerable obstacles to running computationally demanding AI algorithms. This research explores energy-efficient AI methods aimed at improving ML performance while reducing energy usage across diverse IoT settings. We assess lightweight model designs, model compression techniques (quantization, pruning, and knowledge distillation), and adaptive learning methods that adjust computation dynamically according to context and resource availability. Moreover, we introduce a unified framework that leverages edge–cloud cooperation to optimize workload allocation, minimize communication costs, and prolong device lifetime. Experimental findings show that the proposed energy-efficient ML techniques achieve 40–65% reductions in energy consumption while maintaining accuracy comparable to conventional ML models. The results highlight the potential of intelligent optimization methods to enable scalable, sustainable, and high-performance IoT deployments, setting the stage for future environmentally friendly AI-powered systems.
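The abstract names quantization as one of the assessed compression techniques. As a rough illustration of why it saves energy and memory, the sketch below implements generic symmetric post-training int8 quantization with NumPy; this is a textbook scheme for concreteness, not the authors' specific method, and the function names are this sketch's own.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization of a float weight array.

    Maps the range [-max|w|, +max|w|] onto [-127, 127] with a single
    scale factor, shrinking storage 4x versus float32.
    """
    scale = float(np.max(np.abs(weights))) / 127.0 or 1.0  # 1.0 guards an all-zero tensor
    q = np.round(weights / scale).astype(np.int8)          # |w/scale| <= 127, so int8 is safe
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight tensor and check the reconstruction error,
# which is bounded by half the quantization step (scale / 2).
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(q.nbytes, w.nbytes)                    # int8 copy uses 1/4 the memory
print(float(np.max(np.abs(w - w_hat))))      # worst-case error, at most scale / 2
```

On resource-limited devices, the smaller integer tensors reduce both memory traffic and arithmetic cost, which is the mechanism behind the energy savings the abstract attributes to compression.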
Keywords: Energy-efficient machine learning, Edge AI, Internet of Things (IoT), Lightweight AI algorithms, Adaptive learning
Published in: 2024 Asian Conference on Communication and Networks (ASIANComNet)
Date of Publication: --
DOI: -
Publisher: IEEE