Poster Presentation OFFLINE

Cyber-attack detection using Gradient Clipping Long Short-Term Memory networks (GC-LSTM) in the Internet of Things (IoT)

Madan Mohan Tito Ayyalasomayajula

The Internet of Things (IoT) is a network that connects a vast number of objects, enabling them to communicate and interact with each other without human intervention. The IoT is seeing rapid growth in the field of computing. However, it is important to acknowledge that the IoT is highly susceptible to many forms of attack due to the hostile nature of the internet. To address this problem, it is necessary to implement practical measures to secure IoT networks, such as network anomaly detection. While attacks cannot be prevented entirely, their timely discovery is essential for an effective defence. Because IoT devices have limited storage and processing power, standard high-end security solutions cannot protect them. In addition, IoT devices now remain autonomously linked for extended durations. Consequently, it is necessary to develop advanced network-based security solutions such as deep neural networks. While several studies have focused on the use of neural network methods for attack detection, less emphasis has been placed on detecting attacks specifically in IoT networks. The objective of this research is to develop a Gradient Clipping Long Short-Term Memory network (GC-LSTM) that can efficiently and promptly identify attacks on IoT networks. The Bot-IoT dataset is employed to evaluate the various detection methodologies, and the incorporation of additional features resulted in improved results. The proposed GC-LSTM model achieves a remarkable accuracy of 99.98%.
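The "Gradient Clipping" in GC-LSTM refers to bounding the gradient norm during training to stabilize recurrent-network optimization. As an illustrative sketch only (the abstract does not give the authors' exact training code, and the function name and list-of-lists gradient layout here are assumptions), clip-by-global-norm can be written as:

```python
def clip_by_global_norm(grads, max_norm):
    """Scale a list of gradient vectors so that their combined L2 norm
    does not exceed max_norm; gradients already within the bound are
    returned unchanged."""
    # Global L2 norm across all gradient entries.
    total = sum(g * g for grad in grads for g in grad) ** 0.5
    if total <= max_norm or total == 0.0:
        return grads
    scale = max_norm / total
    return [[g * scale for g in grad] for grad in grads]
```

Applied every training step, this keeps exploding gradients (a classic failure mode of LSTMs on long sequences) from destabilizing the weight updates while preserving the gradient direction.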

Poster Presentation OFFLINE

A physics-embedded deep learning framework for cloth simulation

Zhiwei Zhao

Delicate cloth simulations have long been desired in computer graphics, and various methods have been proposed to improve force interactions, collision handling, and numerical integration. Deep learning has the potential to achieve fast and real-time simulation, but common neural network (NN) structures often demand many parameters to capture cloth dynamics. This paper proposes a physics-embedded learning framework that directly encodes physical features of cloth simulation. A convolutional neural network is used to represent the spatial correlations of the mass-spring system, after which three branches are designed to learn the linear, nonlinear, and time-derivative features of cloth physics. The framework can also integrate external forces and collision handling through either traditional simulators or sub-neural networks. The model is tested across different cloth animation cases without retraining on new data. Agreement with baselines and predictive realism validate its generalization ability, and the inference efficiency of the proposed model surpasses that of traditional physics simulation. The framework is also designed to integrate easily with other visual refinement techniques such as wrinkle carving, leaving significant room to incorporate prevailing machine learning techniques in 3D cloth animation.
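The mass-spring system mentioned above models cloth as point masses connected by springs. As a minimal sketch of the linear branch only (damping and bending terms omitted; the function name and tuple-based point representation are illustrative, not the paper's implementation), the Hooke's-law force on one endpoint is:

```python
import math

def spring_force(p1, p2, rest_len, k):
    """Hooke's-law force acting on p1 from a spring connecting it to p2
    with stiffness k; positive stretch pulls p1 toward p2."""
    dx = [b - a for a, b in zip(p1, p2)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0.0:
        return [0.0] * len(p1)
    stretch = dist - rest_len      # positive when the spring is extended
    unit = [d / dist for d in dx]  # direction from p1 toward p2
    return [k * stretch * u for u in unit]
```

In the paper's framework, these per-spring forces are what the convolutional layers can represent as local spatial correlations over the cloth grid.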

Poster Presentation OFFLINE

Random forest-based intrusion detection system

Bojun Song

This article uses the Random Forest algorithm to improve the performance of network intrusion detection systems. Compared to traditional methods, the algorithm significantly improves the accuracy, recall, and precision of network intrusion detection. The required data and experimental results were obtained from the LUFlow dataset using a more accurate feature extraction method, and the experimental results were visualized to enhance their readability and comprehensibility. Overall, the performance of the random forest-based network intrusion detection system was significantly improved. However, the experiment still has some shortcomings, such as the lack of comparison with other commonly used intrusion detection methods or algorithms, which limits its comprehensiveness. Future research should therefore introduce more kinds of intrusion detection methods for comparative analysis to further validate and improve the performance of the system. In addition, extending the experimental dataset and improving the feature extraction techniques may bring further gains. In summary, although the performance of the random forest-based network intrusion detection system has been improved, there is still much room for improvement and research potential.
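The three metrics reported above are standard quantities computed from a binary confusion matrix. As a small self-contained sketch (the function name and label encoding are illustrative, not taken from the paper):

```python
def detection_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, and recall for binary intrusion labels,
    where `positive` marks the attack class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of flagged flows, how many were attacks
    recall = tp / (tp + fn) if tp + fn else 0.0     # of attacks, how many were flagged
    return accuracy, precision, recall
```

For intrusion detection, recall is often the most important of the three, since a missed attack (false negative) is usually costlier than a false alarm.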

Oral Presentation OFFLINE

Optimization of the D2D Topology Formation Using a Novel Two-Stage Deep ML Approach for 6G Mobile Networks

Ioannou Iacovos

Optimizing device-to-device (D2D) topologies is pivotal for enhancing the performance and efficiency of 6G networks. This paper introduces a novel approach for forming optimal subnet trees within 6G networks using BDIx agents and advanced minimum-weight spanning tree (MWST/MST) algorithms augmented by Graph Neural Networks (GNNs) and feedforward neural networks (FFNNs). Our solution aims to significantly boost network performance, particularly in high-demand scenarios such as urban areas, large-scale events, and remote locations, and it dynamically adapts to changing network conditions, user movements, and traffic patterns by minimizing power consumption and maximizing throughput. We implement various MWST algorithms, including Kruskal's, Prim's, and Boruvka's, and introduce a GNN model that predicts edge weights, combined with FFNNs that select parent nodes (the GNN-FFNN model), to aid in the construction of minimum-weight spanning trees. Additionally, a "weighted distance" metric is proposed to analyze network performance comprehensively. The proposed AI/ML-driven solution integrates BDIx agents with MWST algorithms, focusing on optimizing subnets under the gNodeB in 6G networks, enhancing data transmission efficiency, reducing latency, and increasing throughput. This research contributes to developing scalable and flexible network management solutions suitable for diverse configurations and architectures.
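Kruskal's algorithm, one of the MWST methods listed above, greedily adds the cheapest edge that does not create a cycle, using a union-find structure for cycle detection. A minimal sketch (in the paper's setting the edge weights would come from the GNN's predictions rather than the fixed costs used here):

```python
def kruskal_mst(n, edges):
    """Minimum-weight spanning tree via Kruskal's algorithm; edges are
    (weight, u, v) tuples over nodes 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0.0
    for w, u, v in sorted(edges):          # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding this edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total
```

The resulting tree gives each node exactly one path to the root (e.g., the gNodeB), which is why spanning trees are a natural structure for D2D subnet formation.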

Virtual Presentation OFFLINE

Conception of an Autonomous Dynamic Analysis System for Android Malware

Ahmed Mehaoua

This paper focuses on dynamic analysis for malware detection on Android. Initially, a literature review was conducted to understand both static and dynamic analysis approaches and their limitations, particularly highlighting the shortcomings of static analysis. The study demonstrates techniques for extracting various traces, such as system calls and network traffic, using dynamic analysis. The core of the study is the design of an automated system for the dynamic analysis of Android malware. This system automates the capture and analysis of APK traces using modules that monitor system calls, debug logs, and network traffic. It was found that relying on a single dynamic analysis module is insufficient, leading to false negatives, whereas combining data from all three modules enhances detection accuracy. Future directions include developing an intermediary using MQTT to reduce database load and improving the learning process for the three modules.
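The finding that a single module produces false negatives while combining all three improves accuracy is, at its simplest, a verdict-fusion step. As an illustrative sketch only (the threshold and majority-vote scheme are assumptions, not the paper's exact fusion rule):

```python
def fuse_verdicts(syscall_hit, log_hit, network_hit, threshold=2):
    """Combine the boolean verdicts of the three dynamic-analysis
    modules (system calls, debug logs, network traffic): flag the APK
    as malicious when at least `threshold` modules agree."""
    votes = sum([syscall_hit, log_hit, network_hit])
    return votes >= threshold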

Oral Presentation OFFLINE

A Survey on Wheat Disease Identification and Classification Using Deep Learning

S M Naveen Raja

Wheat is one of the most important cereal crops globally, and its productivity is affected by various diseases. Therefore, the timely and precise detection and classification of these diseases is important. This article provides a comprehensive overview of research works that have effectively utilized deep learning (DL) for this task.
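```

A vote threshold of 1 (any module fires) maximizes recall at the cost of false positives, while a threshold of 3 does the opposite; the paper's observation suggests a middle ground.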

Virtual Presentation OFFLINE

Multi-Constraint Routing and Relay Scheduling Algorithms for Optical Networks

Longjiang Li

The challenge of optimal optical signal transmission in optical fiber networks is crucial to enhancing a network's reliability, performance, and service quality. Traditional pathfinding methods, such as Dijkstra's algorithm, focus on finding the shortest path but fail to account for critical factors like optical signal loss and wavelength continuity. This paper proposes a novel algorithm that integrates traditional pathfinding methods with multi-constraint checks to effectively overcome these challenges. Inspired by the similarity between multi-constrained pathfinding in optical networks and vehicle-charging path-planning models, our approach aims to quickly identify the optimal path in large-scale optical networks. The simulation results demonstrate that our approach successfully addresses the complex requirements of optical signal routing and relaying under multiple constraints, achieving promising outcomes.
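A simple way to picture "pathfinding plus multi-constraint checks" is a Dijkstra-style search that also tracks an accumulated budget and prunes violating paths. The sketch below uses a single loss budget as a stand-in for the paper's full constraint set (wavelength continuity, relay placement, etc.); the graph encoding and function name are illustrative assumptions:

```python
import heapq

def constrained_shortest_path(graph, src, dst, max_loss):
    """Shortest path by length that also respects an accumulated
    optical-loss budget. graph maps node -> list of
    (neighbor, length, loss) tuples."""
    # State: (path length so far, node, accumulated loss, path).
    heap = [(0.0, src, 0.0, [src])]
    best_loss = {}
    while heap:
        dist, node, loss, path = heapq.heappop(heap)
        if node == dst:
            return path, dist, loss
        if best_loss.get(node, float("inf")) <= loss:
            continue                        # a cheaper-loss visit exists
        best_loss[node] = loss
        for nbr, length, edge_loss in graph.get(node, []):
            if loss + edge_loss <= max_loss:  # the constraint check
                heapq.heappush(heap, (dist + length, nbr,
                                      loss + edge_loss, path + [nbr]))
    return None, float("inf"), float("inf")
```

Note that plain Dijkstra would pick the shorter-but-lossier path; the budget check forces the detour, which is exactly the behavior the abstract motivates. A full multi-constraint router would need Pareto-style dominance checks rather than this single-budget pruning.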

Virtual Presentation OFFLINE

2D Guided 3D Gaussian Segmentation

Kun Lan

We add an attribute to each Gaussian to represent its probability distribution across various categories, and use 2D segmentation results as supervision. After learning the probability distributions, we use KNN and point cloud filtering algorithms to refine the segmentation results.
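The KNN refinement step can be pictured as a majority vote over each point's nearest neighbors. A simplified sketch (operating on raw labels rather than the learned per-Gaussian probability distributions, with illustrative names):

```python
from collections import Counter

def knn_relabel(points, labels, k=3):
    """Refine per-point labels by majority vote among each point's
    k nearest neighbors (brute-force squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    refined = []
    for i, p in enumerate(points):
        # k nearest points other than the point itself.
        nbrs = sorted(range(len(points)), key=lambda j: dist2(p, points[j]))
        nbrs = [j for j in nbrs if j != i][:k]
        vote = Counter(labels[j] for j in nbrs).most_common(1)[0][0]
        refined.append(vote)
    return refined
```

This smooths isolated mislabels inside an otherwise consistent cluster, which is the role such a pass plays after learning noisy category distributions from 2D supervision.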

Oral Presentation OFFLINE

Enhanced DOA Estimation Using Eigenvalue Reconstruction and Toeplitz Preprocessing

Shahzad Ali

Reliable Direction of Arrival (DOA) estimation is crucial for the performance of wireless communication systems. In this paper, we introduce a refined DOA estimation method that combines eigenvalue reconstruction of the noise subspace and Toeplitz preprocessing with the Multiple Signal Classification (MUSIC) algorithm. The proposed technique enhances the consistency of the noise subspace and improves the algorithm's resolution. Extensive simulations demonstrate that the method outperforms both the standard MUSIC and the MUSIC with Eigenvalue Reconstruction (MUSIC_ER) techniques. Notably, our approach shows enhanced performance in terms of root mean square error (RMSE) across snapshot ranges from 1 to 10. These enhancements make the proposed method (MUSIC_TR) a practical and effective option, especially in low-snapshot scenarios, providing an alternative solution for DOA estimation.
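For reference, plain MUSIC scans candidate angles and looks for steering vectors orthogonal to the noise subspace of the array covariance. The sketch below handles the single-source case on a half-wavelength uniform linear array, finding the signal eigenvector by power iteration; the paper's eigenvalue-reconstruction and Toeplitz-preprocessing steps are deliberately omitted, and the function name and interfaces are illustrative:

```python
import cmath
import math

def music_doa_single(R, grid_deg):
    """MUSIC direction estimate for one source on a half-wavelength
    ULA, given the M x M covariance matrix R (list of lists of complex).
    The dominant (signal) eigenvector v is found by power iteration;
    the noise projector is then I - v v^H."""
    M = len(R)
    v = [1.0 + 0j] * M
    for _ in range(200):                     # power iteration on R
        w = [sum(R[i][j] * v[j] for j in range(M)) for i in range(M)]
        norm = sum(abs(x) ** 2 for x in w) ** 0.5
        v = [x / norm for x in w]
    best_theta, best_p = None, -1.0
    for theta in grid_deg:
        s = math.pi * math.sin(math.radians(theta))
        a = [cmath.exp(-1j * s * m) for m in range(M)]  # steering vector
        va = sum(v[m].conjugate() * a[m] for m in range(M))
        # |P_noise a|^2 = |a|^2 - |v^H a|^2 for a rank-1 signal subspace
        denom = M - abs(va) ** 2
        p = 1.0 / max(denom, 1e-12)          # MUSIC pseudo-spectrum peak
        if p > best_p:
            best_theta, best_p = theta, p
    return best_theta
```

At the true DOA the steering vector lies in the signal subspace, so the denominator collapses toward zero and the pseudo-spectrum peaks, which is the orthogonality property the MUSIC_TR enhancements aim to make more robust at low snapshot counts.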

Virtual Presentation OFFLINE

The trend of high microbial contamination in livestock milk in the ASEAN region and distribution mapping

Endi Hari Purwanto

Human health risks associated with milk contamination can take many different forms, yet there is currently little data on trends in microbial contamination or the mapping of its distribution. This study aims to map the spread of microbial contamination in cattle milk throughout ASEAN and assess trends in this area. A database originating from Scopus was collected using Boolean operators; this research used 19,967 papers with topics or themes of bacteria, milk, microbes, and antibiotics with loci in the ASEAN region. The analysis shows that article production peaked in 2021 with 49 articles, followed by 2022 with 40 articles, and that the most productive institution is Khon Kaen University. Key research topics include antimicrobial resistance, lactic acid bacteria, bovine health issues, and fermentation in milk production. Research on antimicrobial resistance, the use of lactic acid bacteria in dairy products, cow health, and the milk fermentation process needs to be explored further. Collaboration between countries, especially Thailand and Malaysia, must also be improved to produce higher-quality research.

Virtual Presentation OFFLINE

Overall Design and Physical Validation of Voice Interaction based on the ChatGPT Humanoid Robot Brain

Liang Yan

To address the challenge of robotic voice interaction, this study leverages ChatGPT (Chat Generative Pre-trained Transformer) technology to develop a humanoid robot solution with a central processing system. Employing a top-down design approach, the solution encompasses the design of voice, video, and motion streams between users and the robot, enabling voice communication and expression output. By integrating hardware devices with a central control system, the humanoid robot system achieves a twofold purpose: on one hand, it combines data from the conversational context, the user's tone and emotion, and the user's facial expressions to exhibit appropriate expressions; on the other hand, it formulates reasonable voice responses based on the extracted statement content and emotional cues. Lastly, two physical prototypes of the humanoid robot are constructed, and experimental trials assess their voice conversation and expression output capabilities, confirming the rationality and effectiveness of the proposed solution.

Virtual Presentation OFFLINE

Automating citation formatting in scientific publications using ChatGPT

+359 879003327

In academic writing, the accuracy of citation formatting in scientific publications is essential for maintaining the integrity and consistency of scientific communication. However, manually formatting citations according to different styles, such as IEEE, APA, or MLA, can be time-consuming and error-prone. This paper presents an innovative approach to automate citation formatting in scientific publications using ChatGPT. It proposes an algorithm that incorporates a sequence of instructions and guidance, combined with the capabilities of ChatGPT, and greatly simplifies the process of formatting citations according to different styles. The proposed approach involves training ChatGPT with a dataset containing citation guides and examples from different formatting styles to improve its ability to generate correctly formatted citations. This work presents a comparative characterization of existing automated citation formatting systems and the proposed algorithm with ChatGPT. Their functionalities are analyzed and their advantages and disadvantages are highlighted. In addition, a SWOT analysis of the systems is performed, which examines their strengths, weaknesses, opportunities and threats. The analysis highlights the effectiveness and advantages of the proposed solution with ChatGPT. The results show that automation using ChatGPT not only facilitates accurate citation formatting, but also offers a practical tool for improving the quality and relevance of scientific publications. ChatGPT can significantly reduce formatting errors and improve the efficiency of academic writing, offering a scalable solution for researchers and institutions.
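The "sequence of instructions and guidance" combined with style examples amounts to assembling an instruction prompt with in-context examples. A minimal sketch (the instruction wording, function name, and example layout are assumptions for illustration, not the paper's exact prompt sequence, and no model API is called here):

```python
def build_citation_prompt(style, raw_reference, examples):
    """Assemble an instruction prompt asking a chat model to reformat
    a raw reference into the target citation style, using correctly
    formatted examples as in-context guidance."""
    lines = [f"Reformat the reference below in {style} style.",
             "Follow the formatting of these examples exactly:"]
    lines += [f"- {ex}" for ex in examples]
    lines += ["Reference:", raw_reference]
    return "\n".join(lines)
```

The returned string would then be sent to the model; supplying a few per-style examples is what lets one prompt template cover IEEE, APA, and MLA alike.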