This paper presents a study on the dimensional accuracy of parts manufactured on 3D printers. For this purpose, both desktop and professional 3D printers operating with FDM technology are used. Three-dimensional models and real prototypes of two types of machine elements are developed: a shaft and a bushing. The 3D models are created in the CAD system SolidWorks. The preparation of the parts for manufacturing on the 3D printers is discussed, as well as the specific aspects of their measurement on a coordinate measuring machine, which is used to obtain reliable data on the diametral dimensions. The experimental results show a clear influence of printer design on diametral accuracy, with the industrial printer achieving the best results. For the investigated process parameters, all three printers produce interference fits between shafts and holes, and the developed mathematical models allow compensation of the systematic error and adjustment of the nominal diameters to realize different types of fits on the same equipment.
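The compensation idea the abstract describes can be sketched in a few lines: fit a simple model of printed diameter versus nominal diameter, then invert it to pick the nominal that yields a target fit. This is a minimal illustration only; the coefficients and measurements below are made-up values, not the paper's data, and the paper's actual mathematical models may take a different form.

```python
# Hypothetical sketch: compensate a systematic diametral error with a
# linear model  measured = a + b * nominal  fitted to CMM data, then
# invert it to choose the nominal CAD diameter for a target size.
# All numbers below are illustrative, not the paper's measurements.

def fit_linear(nominals, measured):
    """Ordinary least-squares fit of measured = a + b * nominal."""
    n = len(nominals)
    mx = sum(nominals) / n
    my = sum(measured) / n
    b = sum((x - mx) * (y - my) for x, y in zip(nominals, measured)) / \
        sum((x - mx) ** 2 for x in nominals)
    a = my - b * mx
    return a, b

def compensated_nominal(target_diameter, a, b):
    """Invert the model: which nominal CAD diameter yields the target?"""
    return (target_diameter - a) / b

# Illustrative CMM measurements: shafts print ~0.15 mm oversize.
nominals = [8.0, 10.0, 12.0, 16.0, 20.0]
measured = [8.16, 10.15, 12.17, 16.14, 20.18]

a, b = fit_linear(nominals, measured)
# To obtain a true 10.00 mm shaft, shrink the CAD diameter accordingly.
adjusted = compensated_nominal(10.00, a, b)
print(round(adjusted, 3))  # slightly under 10 mm
```

Choosing different target diameters for the shaft and the hole in the same way is what allows different fit types (clearance, transition, interference) on the same printer.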
This research addresses the integration of Artificial Intelligence (AI) into electrocardiogram (ECG) signal processing to improve the detection and classification of cardiac anomalies. We present an AI-driven ECG analysis framework that employs reinforcement learning (RL) together with a hybrid CNN-LSTM architecture to enhance PQRST complex detection and arrhythmia classification. AI agents autonomously detect and label ECG features using RL for adaptive peak detection, while the CNN-LSTM model performs arrhythmia classification: a CNN extracts key ECG features, an LSTM models temporal dependencies, and a Softmax prediction module (SMP) produces the final classification. Using the MIT-BIH Arrhythmia Database, the system achieved 99.58% PQRST detection accuracy, 99.85% classification accuracy, and 99.85% anomaly detection precision. The proposed AI model advances real-time cardiovascular monitoring and IoT-based diagnostics, offering a highly accurate, automated solution for early cardiac disease detection.
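To ground the peak-detection task the abstract describes, here is a deliberately simple fixed-threshold R-peak detector on a synthetic signal. This is not the paper's RL method; it is the kind of static rule an adaptive RL agent would aim to outperform, and the signal generator and thresholds are assumptions for illustration.

```python
# Baseline sketch only: fixed-threshold R-peak detection on a synthetic
# ECG-like signal. The paper's adaptive RL detector is NOT reproduced
# here; this shows the non-adaptive rule it would improve upon.

def synthetic_ecg(n_beats=5, fs=250, bpm=60):
    """Flat baseline with one narrow unit spike (R-wave) per beat."""
    period = int(fs * 60 / bpm)          # samples per beat
    signal = [0.0] * (n_beats * period)
    for k in range(n_beats):
        signal[k * period + period // 2] = 1.0   # R-peak location
    return signal

def detect_r_peaks(signal, threshold=0.5):
    """Indices above threshold that are also local maxima."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] \
                and signal[i] > signal[i + 1]:
            peaks.append(i)
    return peaks

sig = synthetic_ecg()
peaks = detect_r_peaks(sig)
print(len(peaks))  # one detection per synthetic beat
```

On real ECGs a fixed threshold breaks down under baseline wander and amplitude variation, which is precisely the motivation for learning an adaptive detection policy.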
This study provides meaningful insights into impulse buying behavior within the context of Grandlucky Supermarket; however, several limitations must be acknowledged. The research utilized a relatively small sample of 85 respondents obtained through convenience sampling, which restricts the generalizability of the findings to a wider population. The sample's demographic profile was predominantly composed of young adults and students, potentially limiting applicability to other age or occupational groups. Additionally, the study focused on a single retail brand, which constrains the transferability of the results to other retail settings such as discount stores or luxury outlets. The reliance on self-reported online questionnaires also raises concerns about potential response bias. Future studies are encouraged to employ larger and more diverse samples to strengthen external validity. Replication across various retail formats would clarify contextual differences in how store atmosphere and brand image influence impulse buying. Moreover, incorporating additional variables such as consumer personality traits, emotional states, and promotional impacts could yield a deeper understanding of the psychological drivers behind unplanned purchases. Finally, combining observational methods with survey techniques is recommended to provide more objective and comprehensive insights into consumer behavior.
As the use of the Internet of Things (IoT) grows daily, False Data Injection Attacks (FDIA) pose a significant risk to today's technology. By altering measurement data, attackers can influence state estimations without being caught by traditional Bad Data Detection (BDD) methods. This paper examines FDIA models and studies their potential influence on grid stability. In addition, it assesses simulated attack scenarios on an IEEE 14-bus test system. It was found that an undetected FDIA can change critical operational parameters, such as bus voltage levels, by up to 5% without being noticed. As a result, effective detection strategies are necessary; secured communication protocols and machine-learning-based anomaly detection are promising directions for future work.
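The core mechanism by which an FDIA evades residual-based BDD can be shown with a tiny linear state-estimation example. This is a minimal single-state sketch, not the paper's IEEE 14-bus simulation: if the attack vector lies in the column space of the measurement matrix (a = H·c), the estimate shifts by c while the detection residual is untouched.

```python
# Minimal illustration (not the paper's 14-bus model): in linear DC state
# estimation z = H x + e, an attack a = H c added to the measurements
# shifts the estimated state by c while leaving the bad-data-detection
# residual ||z - H x_hat|| unchanged.

def wls_estimate(H, z):
    """Least-squares estimate for a single-state system (H is a column)."""
    return sum(h * zi for h, zi in zip(H, z)) / sum(h * h for h in H)

def residual_norm(H, z, x_hat):
    return sum((zi - h * x_hat) ** 2 for h, zi in zip(H, z)) ** 0.5

H = [1.0, 0.8, 1.2]            # measurement sensitivities to the state
z = [1.02, 0.79, 1.21]         # noisy measurements of a true state ~1.0

c = 0.05                       # attacker's desired state shift (~5%)
z_attacked = [zi + h * c for zi, h in zip(z, H)]   # a = H * c

x_clean = wls_estimate(H, z)
x_attacked = wls_estimate(H, z_attacked)
r_clean = residual_norm(H, z, x_clean)
r_attacked = residual_norm(H, z_attacked, x_attacked)

print(round(x_attacked - x_clean, 6))   # state shifted by exactly c
print(round(r_attacked - r_clean, 9))   # residual unchanged: BDD is blind
```

This is why residual-based BDD alone cannot flag such attacks, motivating the secured protocols and anomaly-detection approaches the abstract mentions.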
Track 5: Emerging Trends of AI/ML
Static authentication mechanisms (e.g., passwords, biometrics) verify users only at the start of a session and are therefore vulnerable to session hijacking. Continuous authentication is more secure because it monitors the user's activities throughout a session. Eye-tracking is a promising modality for non-intrusive continuous biometric authentication in virtual reality (VR) thanks to the in-depth gaze information it provides. Gaze-based models, however, degrade over time as behavioral patterns change. This paper uses the 26-month GazeBaseVR dataset to compare Transformer Encoder, DenseNet, and XGBoost models for short- and long-term authentication. While the Transformer Encoder and DenseNet achieve up to 97% short-term accuracy, accuracy falls to 1.78% after 26 months. Periodically retraining the model on up-to-date gaze data restores more than 95% accuracy. The findings highlight the importance of adaptive learning for sustaining gaze-based authentication over long periods. Future work may optimize retraining intervals and extend the approach to other behavioral signals, such as head and hand movements, to maximize the resilience of long-term VR authentication.
This paper presents a GPU-based framework for 3D Delaunay triangulation using the gFlip3D parallel insertion algorithm. Traditional CPU implementations of Delaunay triangulation are computationally intensive and do not scale well for large point clouds. Exploiting the massive parallelism of modern GPUs, gFlip3D achieves up to a 24× speedup over CPU baselines and sustains throughputs of 4-5 million points per second on datasets larger than 100 million points. The approach finalizes processed regions of space to improve memory efficiency and can emit partial output streams, allowing real-time rendering of triangulated geometry. The empirical evidence demonstrates quasi-linear scaling of execution time together with interactive frame rates (≥1 fps) for datasets up to 1 million points. Performance is validated through extensive experimentation on synthetic and real-world data. The results establish gFlip3D as a highly scalable, high-performance solution for graphics-intensive applications such as interactive visualization, AR/VR systems, and scientific modeling.
Federated learning (FL), which enables decentralized model training without compromising data privacy, is regarded as an important enabler of edge computing. This paper examines the application of FL in edge computing systems, with an emphasis on privacy-performance trade-offs. FL supports compliance with privacy regulations while allowing cooperative learning among distributed edge devices without exposing sensitive information. Nevertheless, the decentralized approach has drawbacks, including high communication costs, model drift, and limited resources. The research puts forward a privacy-conscious federated learning model based on differential privacy and secure aggregation, aiming to maximize data security while sacrificing minimal model accuracy. The model is tested in real-world edge computing setups and shows significantly reduced communication latency and superior convergence rates. The findings indicate that incorporating privacy-preserving mechanisms has a negligible adverse effect on model performance, demonstrating the feasibility of balancing privacy and computational efficiency in edge-based FL systems. The paper concludes with remarks on optimizing model aggregation methodologies and reducing system heterogeneity to enhance scalability and robustness in edge intelligence applications.
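One ingredient named in the abstract, secure aggregation, can be sketched with pairwise masking: each pair of clients shares a random mask that one adds and the other subtracts, so individual updates are hidden but the masks cancel in the server's sum. This is an illustrative toy (plain floats, a shared seed instead of a key exchange), not the paper's protocol.

```python
import random

# Toy pairwise-mask secure aggregation (illustrative, not the paper's
# protocol): for each client pair (i, j), a shared random mask is added
# by i and subtracted by j, so masks cancel in the aggregate and the
# server learns only the sum of the local updates.

def mask_updates(updates, seed=0):
    rng = random.Random(seed)     # stands in for pairwise key agreement
    n = len(updates)
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(len(updates[0])):
                m = rng.uniform(-1, 1)
                masked[i][k] += m     # client i adds the shared mask
                masked[j][k] -= m     # client j subtracts it
    return masked

def aggregate(vectors):
    """Server-side sum over clients, coordinate by coordinate."""
    return [sum(col) for col in zip(*vectors)]

updates = [[0.1, 0.2], [0.3, -0.1], [0.0, 0.4]]   # local model deltas
masked = mask_updates(updates)
# Individual masked updates look random, but the aggregate is exact.
print([round(v, 6) for v in aggregate(masked)])
print([round(v, 6) for v in aggregate(updates)])
```

In a differentially private variant, each client would additionally add calibrated noise to its update before masking, trading a small amount of accuracy for a formal privacy guarantee.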
This paper presents a detailed analysis of lane-detection systems for vehicle accident prevention, building on developments in accident-prevention methods and on image processing and machine learning algorithms that improve road safety. It addresses the critical issues in reliably identifying lane boundaries under varying environmental conditions, such as different lighting, diverse road geometries, and unfavorable weather, all of which strongly influence detection accuracy. The methodology is based on a step-wise processing pipeline that draws on both classical and contemporary methods. The experimental framework combines Hough-transform lane detection with Kalman filtering for temporal consistency, enabling a comparative analysis of traditional detection techniques and improved hybrid techniques. Key components include Gaussian noise reduction for signal enhancement, Canny edge detection with optimized threshold parameters (50-150), and a probabilistic Hough transform with a fine-grained parameter space (ρ = 1 pixel, θ = π/180 radians). Although much has been achieved, further research is needed to improve accuracy in challenging cases and to integrate lane detection with driver-assistance systems to make driving safer.
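The voting step behind Hough-transform lane detection can be sketched with the same discretization the abstract cites (ρ = 1 pixel, θ = π/180 rad). This toy version runs on a hand-made edge map rather than Canny output; a real pipeline would use OpenCV's `cv2.Canny` followed by `cv2.HoughLinesP`.

```python
import math

# Minimal Hough-transform sketch with the abstract's discretization
# (rho step = 1 pixel, theta step = pi/180 rad). Each edge pixel votes
# for every (rho, theta) line passing through it; the accumulator peak
# recovers the dominant line.

def hough_peak(edge_points, width, height):
    max_rho = int(math.hypot(width, height))
    votes = {}
    for x, y in edge_points:
        for t in range(180):                       # theta = t * pi/180
            theta = t * math.pi / 180.0
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            if -max_rho <= rho <= max_rho:
                votes[(rho, t)] = votes.get((rho, t), 0) + 1
    return max(votes, key=votes.get)               # (rho, theta_degrees)

# Synthetic edge map: a vertical "lane boundary" at x = 5.
edges = [(5, y) for y in range(20)]
rho, theta_deg = hough_peak(edges, 20, 20)
print(rho, theta_deg)   # peak corresponds to the line |rho| = 5, theta ~ 0
```

Kalman filtering would then smooth the detected (ρ, θ) parameters across video frames, giving the temporal consistency the pipeline relies on.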
Autonomous driving technologies have been making giant strides in improving road safety and efficiency. Beyond reducing accidents, however, a more human factor determines user acceptance: adaptability to a personalized driving experience. This paper presents a comprehensive approach to trajectory planning and tracking for autonomous vehicles with human-like driving behaviors. It is built on an integrated framework combining the Artificial Potential Field (APF) model and Model Predictive Control (MPC), so that environmental variables and the styles of different drivers, whether conservative or aggressive, are both considered. The APF ensures safe, dynamic obstacle avoidance, while MPC adapts vehicle control to the user's preferences in real time. Simulations of car-following and lane-changing scenarios validate the proposed method, which generates adaptive trajectories close to those produced by human drivers. The results indicate that the algorithm can personalize driving patterns to drivers' and occupants' preferences, increasing user comfort and acceptance without compromising high safety standards.
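The APF idea can be illustrated with the standard attractive/repulsive force computation for a single planning step. The gains and geometry below are made-up illustrative values, not the paper's tuned model, and the repulsive term uses the common textbook gradient form.

```python
import math

# Illustrative Artificial Potential Field step (not the paper's tuned
# model): attraction toward the goal plus repulsion from an obstacle
# inside its influence radius d0. Gains k_att, k_rep and d0 are assumed
# values for demonstration.

def apf_force(pos, goal, obstacle, k_att=1.0, k_rep=0.5, d0=2.0):
    # Attractive force: proportional to the vector toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive force: active only within distance d0 of the obstacle.
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if 0 < d < d0:
        # Textbook repulsive gradient magnitude: k_rep*(1/d - 1/d0)/d^2
        mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
        fx += mag * dx / d
        fy += mag * dy / d
    return fx, fy

pos, goal, obstacle = (0.0, 0.0), (10.0, 0.0), (1.0, 0.5)
fx, fy = apf_force(pos, goal, obstacle)
print(round(fx, 3), round(fy, 3))  # pulled toward goal, pushed below obstacle
```

In the paper's framework, driving style would enter through parameters like these gains (e.g., a conservative profile with a larger influence radius), with MPC tracking the resulting trajectory.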
An echocardiogram is a vital imaging procedure for diagnosing and monitoring heart disease. Heart diseases are among the leading causes of death worldwide, so high-quality images are essential for correct medical analysis, yet noise and distortion tend to compromise the quality of images used in diagnosis. This paper proposes an innovative ML-based method that combines Mask R-CNN with a Radial Basis Function Support Vector Machine (RBF-SVM), demonstrating improved image quality through noise removal and accurate classification of echocardiogram images, which makes it well suited to clinical application. In the experiments, a classification accuracy of 97.8% was attained, illustrating the effectiveness of this approach in addressing the challenges of echocardiogram imaging. The article marks a significant step in the use of ML to enhance cardiac diagnostics.
Financial time series forecasting is essential for making well-informed financial decisions. Traditional models, such as autoregressive integrated moving average (ARIMA) and long short-term memory (LSTM) networks, tend to fail to capture intricate temporal dependencies and to generate interpretable predictions. To enhance the accuracy and interpretability of financial time series forecasting, this study introduces a new transformer-based method. The transformer model predicts more capably because its self-attention mechanisms effectively model long-term dependencies in financial data. Explainability methods such as SHAP (Shapley Additive Explanations) and attention heatmaps are employed to address the black-box problem of deep learning models by clarifying the model's decision process. Large-scale experiments on various financial datasets show that the transformer model is more robust and accurate in its predictions than baseline methods. Its superior R-squared values and lower mean absolute percentage error (MAPE) across a variety of financial assets illustrate the model's robustness to market fluctuations. The explainability framework also identifies important predictors, providing valuable insights to financial analysts and decision-makers. By merging cutting-edge transformer models with interpretability, this research advances the corpus of financial AI and provides credible and transparent financial forecasting.
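The attention weights visualized in an attention heatmap come from the scaled dot-product attention computation, which can be shown in miniature. The dimensions and query/key values below are toy assumptions, not the paper's model; each row of the resulting matrix is one row of a heatmap, showing how much a time step attends to the others.

```python
import math

# Sketch of the self-attention weights behind an attention heatmap
# (toy dimensions and values, not the paper's model). W[i][j] is the
# weight with which time step i attends to time step j; each row of W
# is one row of the heatmap and sums to 1.

def softmax(xs):
    m = max(xs)                           # subtract max for stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d))."""
    d = len(Q[0])
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(d)
               for kr in K] for qr in Q]
    return [softmax(row) for row in scores]

# Three time steps of a toy 2-dimensional feature sequence.
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
W = attention_weights(Q, K)
print([round(w, 3) for w in W[0]])   # row sums to 1 across attended steps
```

SHAP complements this view by attributing the prediction itself to input features, whereas the heatmap explains where along the sequence the model is looking.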