Author: Guo Claire, Lynbrook High School
Abstract—State-of-the-art neural networks are accurate but hungry for compute and memory. Microcontrollers (MCUs) typically offer far fewer resources: 32 KB of RAM, 256 KB of flash, and no GPU. Consequently, a neural network deployed on an MCU must meet stringent requirements for energy efficiency, low latency, and robust inference. To address this challenge, in this paper I propose EmberNet, a microcontroller-friendly neural network built on an augmented depthwise separable convolution network [5] for compute efficiency and a much smaller parameter count. I evaluate the model on a public dataset [8] covering denial-of-service (DoS), fuzzy, gear-spoofing, and RPM-spoofing attack types. With only 514 parameters and a 6.4 KB model size, EmberNet achieves 99.46% accuracy and a 0.0085 false-negative rate across the four attack types. For comparison, EmberNet is more than 1,100 times smaller than a 7 MB Inception-ResNet baseline [1] and 45 times smaller than a specialized RGB-CNN [2]. To make these benchmark results production-viable and reproducible, I establish a build pipeline using TVM (Tensor Virtual Machine), the Zephyr Project, and QEMU (Quick EMUlator) to verify the model's reliability.
Keywords: CAN bus, depthwise separable convolution network, GroupNorm, global adaptive average pooling, structural pruning, intrusion detection system, edge AI
Published in: 2024 Asian Conference on Communication and Networks (ASIANComNet)
Date of Publication: --
DOI: -
Publisher: IEEE
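To illustrate the building block the abstract refers to, the following is a minimal PyTorch sketch of a depthwise separable convolution block combined with GroupNorm and a global adaptive average pool. The layer widths, input shape, and class count are illustrative assumptions and do not reproduce EmberNet's actual 514-parameter configuration.

```python
# Minimal sketch of a depthwise separable convolution block of the kind the
# abstract describes (depthwise conv + pointwise conv, GroupNorm, global
# adaptive average pooling). All shapes and channel counts are assumptions,
# not EmberNet's published architecture.
import torch
import torch.nn as nn


class DepthwiseSeparableBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise: 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        # GroupNorm works with batch size 1, unlike BatchNorm, which is
        # convenient for single-frame inference on an MCU-style target.
        self.norm = nn.GroupNorm(num_groups=1, num_channels=out_ch)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.pointwise(self.depthwise(x))))


class TinyClassifier(nn.Module):
    """Toy end-to-end model: one block, global pooling, linear head."""

    def __init__(self, num_classes: int = 5):  # e.g. 4 attack classes + normal (assumption)
        super().__init__()
        self.block = DepthwiseSeparableBlock(in_ch=1, out_ch=8)
        self.pool = nn.AdaptiveAvgPool2d(1)  # global adaptive average pool
        self.head = nn.Linear(8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.block(x)).flatten(1)
        return self.head(x)


if __name__ == "__main__":
    model = TinyClassifier()
    # A hypothetical 1x8x8 grid built from CAN frame bytes (shape is an assumption).
    dummy = torch.randn(1, 1, 8, 8)
    print(model(dummy).shape)                           # torch.Size([1, 5])
    print(sum(p.numel() for p in model.parameters()))   # total parameter count
```

The depthwise/pointwise split is what drives the parameter savings the abstract cites: a standard convolution pays for every (input channel, output channel) pair, while the separable form pays for spatial filtering and channel mixing independently.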