
ABSTRACT LIBRARY

Optimizing Machine Learning for IoT: Energy-Efficient AI Approaches and Architectures

Publisher: IEEE

Authors: Haldorai Anandakumar, Sri Eshwar College of Engineering


Abstract:

The swift expansion of Internet of Things (IoT) devices has heightened the demand for machine learning (ML) models that function within tight limitations on energy, memory, and computation. This paper offers an in-depth analysis of energy-efficient AI methods and architectural optimizations tailored to resource-constrained IoT settings. We explore lightweight machine learning and deep learning techniques, such as model compression, pruning, quantization, knowledge distillation, and event-driven processing, and assess their effects on energy usage and inference efficiency across diverse IoT platforms. A refined edge–cloud cooperative framework is proposed to lower communication costs, adaptively distribute computation, and prolong device lifespan while delivering real-time insights. Experimental analysis shows that the proposed energy-efficient ML pipeline yields considerable reductions in power consumption, latency, and model size while maintaining prediction accuracy. The results emphasize the essential role of adaptive, hardware-aware AI techniques in enabling scalable, sustainable, and efficient IoT deployments, and point to future directions in on-device learning, federated optimization, and neuromorphic computing.
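For illustration only (this sketch is not drawn from the paper), the short Python/NumPy example below shows two of the compression techniques the abstract surveys, magnitude pruning and symmetric 8-bit post-training quantization; the function names, the 50% sparsity level, and the 128x128 weight matrix are assumptions made for the example.

    # Illustrative sketch: magnitude pruning + symmetric int8 quantization
    # of a single weight matrix. Settings and names are illustrative only.
    import numpy as np

    def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Zero out the smallest-magnitude weights until `sparsity` fraction is zero."""
        threshold = np.quantile(np.abs(weights), sparsity)
        return np.where(np.abs(weights) < threshold, 0.0, weights)

    def quantize_int8(weights: np.ndarray):
        """Symmetric per-tensor quantization to int8; returns codes and a scale factor."""
        scale = np.max(np.abs(weights)) / 127.0
        codes = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return codes, scale

    def dequantize(codes: np.ndarray, scale: float) -> np.ndarray:
        """Recover approximate float32 weights from int8 codes."""
        return codes.astype(np.float32) * scale

    rng = np.random.default_rng(0)
    w = rng.normal(size=(128, 128)).astype(np.float32)
    w_pruned = magnitude_prune(w, sparsity=0.5)   # half of the weights set to zero
    codes, scale = quantize_int8(w_pruned)        # int8 storage is 4x smaller than float32
    w_restored = dequantize(codes, scale)
    print("max reconstruction error:", np.abs(w_pruned - w_restored).max())

In practice such pruned and quantized models are what make on-device inference feasible on low-power IoT hardware, trading a small, bounded reconstruction error for large savings in memory footprint and energy per inference.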


Keywords: Energy-efficient AI, Internet of Things, Edge computing, Lightweight machine learning, Model compression, Low-power architectures.

Published in: 2024 Asian Conference on Communication and Networks (ASIANComNet)

Date of Publication: --

DOI: -

