<strong><em>The proliferation of fake news across online platforms has emerged as a challenge to society and a threat to democracy. Fake news erodes confidence in reliable news sources and undermines social cohesion and trust in democratic institutions. It originates from many sources, spreads rapidly, and makes it difficult to distinguish authentic reporting from fabricated content. While numerous studies have addressed fake news detection using machine learning algorithms, many conventional approaches are limited by their reliance on manual feature engineering or an incomplete grasp of linguistic context. This paper pursues a more advanced approach, using a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model to overcome this limitation. The work emphasizes fine-tuning a pre-trained BERT model on a task-specific news dataset, which can significantly improve detection accuracy. An extensive study has been carried out on the ISOT dataset from the University of Victoria, which consists of thousands of real and fake news articles. The model achieved an accuracy of 99.97%, precision of 100%, F1-score of 99.97%, and recall of 99.94%, validating its superiority over previously reported methods.</em></strong>
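The reported metrics can be cross-checked for internal consistency: the F1-score is the harmonic mean of precision and recall, so the quoted precision and recall should reproduce the quoted F1. A quick sanity check:

```python
# Verify that the abstract's reported metrics are mutually consistent:
# F1 is the harmonic mean of precision and recall.
precision, recall = 1.0000, 0.9994   # values reported in the abstract

f1 = 2 * precision * recall / (precision + recall)
# f1 comes out to about 0.9997, matching the reported 99.97% F1-score
```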
The classification of question difficulty is a critical task in educational technology and adaptive learning systems, enabling personalized question delivery based on a learner&rsquo;s proficiency. Traditional methods using TF-IDF and shallow machine learning models such as XGBoost, while effective, often fail to capture deep contextual and semantic nuances across multiple languages. Although Large Language Models (LLMs) demonstrate strong generalization abilities, their deployment is computationally expensive and less efficient for focused classification tasks. In this work, we propose a fine-tuned multilingual BERT-based model for question difficulty classification, capable of understanding linguistic context in English, Tamil, Hindi, and Sanskrit. Unlike general-purpose LLMs, the fine-tuned BERT model provides task-specific optimization with lower computational overhead and improved interpretability. The model leverages contextual embeddings to identify semantic complexity, linguistic variation, and syntactic depth, leading to more accurate and language-agnostic difficulty predictions. Experimental evaluation on a multilingual question dataset shows that our approach significantly improves accuracy and F1-score over traditional TF-IDF and LLM-based baselines, achieving both performance and efficiency in multilingual educational assessment.
Abstract&mdash;Traffic congestion in urban areas has escalated into a critical socio-economic challenge, contributing to increased travel delays, fuel wastage, air pollution, and emergency response failures. Traditional fixed-timer traffic signals fail to adapt to dynamic traffic patterns, resulting in inefficient junction management. This paper presents a comprehensive Smart Traffic Management System (STMS) that integrates real-time vehicle detection using custom-trained YOLOv8, density-based adaptive signal timing, emergency vehicle priority via audio-based siren detection, and a full-stack web dashboard using Flask and OpenCV. The system processes live video streams from intersection cameras, calculates lane-wise vehicle density, dynamically allocates green time, and instantly grants priority upon detecting emergency vehicle sirens. A real-time analytics dashboard provides live heatmaps, density graphs, and performance metrics. Experimental evaluation on Indian urban traffic datasets demonstrates a mean Average Precision (mAP@50) of 0.974, inference speed of 28&ndash;32 FPS on CPU, and a 42.1% reduction in average waiting time compared to fixed-timer systems. The proposed system offers a scalable, cost-effective solution for intelligent traffic management in smart cities.
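The density-based adaptive timing described above can be illustrated with a minimal sketch: green time per lane proportional to its share of detected vehicles, clamped to safe bounds. The function name, default values, and clamping policy here are hypothetical; the paper's exact timing formula is not reproduced.

```python
# Hypothetical sketch of density-based green-time allocation: each lane's
# green phase is proportional to its share of detected vehicles, clamped
# to [min_green, max_green] seconds. Names and defaults are illustrative.
def allocate_green_time(lane_counts, cycle=120, min_green=10, max_green=60):
    total = sum(lane_counts.values())
    times = {}
    for lane, count in lane_counts.items():
        # Fall back to an equal split when no vehicles are detected
        share = count / total if total else 1 / len(lane_counts)
        times[lane] = max(min_green, min(max_green, round(cycle * share)))
    return times

demo = allocate_green_time({"north": 18, "south": 6, "east": 3, "west": 3})
```

Emergency priority would then override this allocation for the lane whose approach triggers the siren detector.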
Respiratory illnesses continue to be a substantial global health challenge, and timely, accurate diagnosis is vital for effective treatment. Manual diagnosis of respiratory diseases such as COVID-19, asthma, and chronic obstructive pulmonary disease is prohibitively time-consuming, resource-intensive, and often not feasible in limited-resource settings. This research proposes an automated machine learning system that classifies cough sounds for contactless, non-invasive screening of respiratory diseases. The proposed system applies state-of-the-art acoustic feature extraction methods such as Mel-Frequency Cepstral Coefficients (MFCCs), Spectral Centroid, and Zero-Crossing Rate to capture subtle acoustic signatures in cough sounds that are not perceptible to the human ear. The system employs the COUGHVID V3 crowdsourced dataset, one of the largest publicly available cough-sound datasets, including over 25,000 cough recordings with geographic variation, annotated by expert physicians. Several classification models, including Convolutional Neural Networks (CNNs), Support Vector Machines (SVMs), and Random Forests, are trained to discriminate among coughs annotated as healthy, COVID-like, and asthma-related. The system comprises a complete pipeline: data preprocessing (noise removal, normalization, and silence trimming), feature engineering using the Librosa library, supervised model training, and real-time classification with confidence scoring. With initial results demonstrating classification accuracy of 85% to 90% on testing data, the proposed system is extensible to web-based interfaces, mobile applications, and telemedicine use. This lightweight, Python-based framework addresses an important need for accessible health AI by providing a scalable, inexpensive method for early respiratory disease screening, without requiring costly hardware or clinical facilities.
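Two of the features named above have compact definitions worth making explicit. The abstract's pipeline uses Librosa; the NumPy versions below show what each feature measures directly from its formula, using a synthetic tone rather than real cough audio.

```python
import numpy as np

# Direct NumPy formulations of two features from the pipeline (the authors
# use Librosa; this is an illustrative re-derivation, not the paper's code).
def zero_crossing_rate(x):
    """Fraction of consecutive samples whose signs differ."""
    signs = np.signbit(x)
    return float(np.mean(signs[1:] != signs[:-1]))

def spectral_centroid(x, sr):
    """Magnitude-weighted mean frequency of the signal's spectrum, in Hz."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    return float(np.sum(freqs * mag) / np.sum(mag))

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)   # synthetic 440 Hz tone, 1 second
```

For a pure 440 Hz tone, the centroid sits at 440 Hz and the zero-crossing rate at roughly two crossings per period; cough recordings produce far richer values of both.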
The system demonstrates considerable potential as a deployable option for healthcare providers, diagnostic centers, telemedicine services, and remote health monitoring, especially in underserved populations where traditional diagnostic means are not widely available. Beyond COVID-19 detection, this research describes a reproducible machine learning pipeline that could be applied to broader biomedical audio analyses, including breath sound profiling and speech-based diagnostics, thereby contributing to the field of audio-based digital health diagnostics.
In recent years, virtualization has become an essential technology for modern enterprise and academic computing. Virtualization uncouples software from hardware, enabling workload consolidation, disaster recovery, scalable test labs, and affordable cloud deployment. Hypervisors create and manage virtual machines while enforcing isolation, sharing resources fairly, and supporting guest environments with different requirements. While students learn the theory behind virtualization, memory management, and scheduling, they rarely have opportunities to experience, visualize, and experiment with those concepts in a meaningful way. The gap between the abstract model and system-level behavior results in significant missed learning opportunities for computer scientists and engineers in training. This project seeks to remedy this gap with a novel, web-based hypervisor simulation with a visual interface. Our program strengthens hypervisor and operating system knowledge through interactive dashboards, visualization of dynamic system resources, algorithmic step-throughs, and ephemeral memory and CPU simulation. The simulator lets users customize and define scheduling and memory allocation strategies, providing a real-world, logical context for learning with practical application for students, teachers, and hobbyists alike. User experimentation and feedback illustrate notable improvements in understanding, engagement, and preparedness for complex virtualization concepts.
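The algorithmic step-throughs mentioned above can be grounded with one concrete scheduler. The sketch below is a plain round-robin step-through of the kind such a simulator would animate; the function name and process table are hypothetical, not the project's code.

```python
from collections import deque

# Minimal round-robin step-through of the kind the simulator visualizes
# (illustrative only; names and structure are not from the project).
def round_robin(bursts, quantum):
    """Return the ordered (pid, start, end) execution slices."""
    queue = deque(bursts.items())
    clock, timeline = 0, []
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.append((pid, clock, clock + run))
        clock += run
        if remaining > run:                     # unfinished: requeue at tail
            queue.append((pid, remaining - run))
    return timeline

slices = round_robin({"P1": 5, "P2": 3}, quantum=2)
```

Stepping through `slices` one tuple at a time is exactly the kind of visual walk-through the interface provides for each strategy.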
MirageMap is a simulator created to make the hidden behavior of virtual memory systems visible and interpretable. Instead of reading paging and caching as static concepts, it lets them unfold dynamically, allowing the user to watch how access patterns stabilize over time. Each memory access leaves traces of both computation and perception&mdash;some appear as Mirages, illusionary activations of frames never used, while others manifest as Echoes, reflections from previously occupied frames. These symbolic traces give the system a perceptual layer, bridging logic and cognition. Built using Python with PyQt5, Matplotlib, and ReportLab, MirageMap evolved from a simple paging viewer into a self-analyzing cognitive simulator that demonstrates how computation can begin to acquire interpretive meaning. The outcomes of this simulation indicate that symbolic reasoning can be embedded within operating system concepts to produce interpretable memory behavior. MirageMap&rsquo;s hybrid nature makes it suitable not only for system analysis but also as a cognitive computing framework where algorithmic precision coexists with adaptive interpretation.
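Behind MirageMap's perceptual layer sits ordinary paging machinery. A bare-bones version of one replacement policy it could animate is sketched below; this LRU model is illustrative and is not the project's actual implementation.

```python
from collections import OrderedDict

# A bare-bones LRU paging model of the kind a paging viewer animates
# (illustrative; not MirageMap's actual implementation).
def lru_faults(reference_string, n_frames):
    frames = OrderedDict()          # page -> None, ordered by recency
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)            # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)      # evict the least-recent page
            frames[page] = None
    return faults
```

Overlaying symbolic traces such as Mirages and Echoes onto each hit and eviction in this loop is what turns the raw fault count into the interpretable behavior the abstract describes.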
The Hypertext Transfer Protocol (HTTP) is the basis of today's internet communication, where the accuracy of request parsing directly impacts the reliability and security of the whole system. Present systems mostly depend on elementary string matching or regular-expression methods, which cannot recognize the hierarchical syntax defined by the HTTP specifications. This article demonstrates a novel context-free grammar (CFG)-based method for validating HTTP request lines in a Flask-driven web framework. The newly designed grammar structures an HTTP request and thoroughly checks its correctness, ensuring that the communication complies with the protocol standards. Performance test results indicate that CFG-based checking yields better precision and stronger consistency than traditional methods, making it possible to detect malformed or incomplete requests with great effectiveness. Moreover, the system is equipped with database logging and automated report generation for analytical tracking and reproducibility. The findings highlight the promise of grammar-driven validation in raising protocol compliance, enhancing network security, and serving as an instructional resource for understanding syntactic structures in network communication.
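The request-line productions being validated come from the HTTP specification: request-line = method SP request-target SP HTTP-version, with method restricted to token characters. A compact stand-in checker is sketched below; the authors' actual grammar and Flask integration are not reproduced here.

```python
import re

# Simplified request-line checker following RFC 7230's productions
# (a stand-in sketch; not the paper's CFG or Flask integration).
TCHAR = re.compile(r"[!#$%&'*+.^_`|~0-9A-Za-z-]+\Z")   # method = token
VERSION = re.compile(r"HTTP/\d\.\d\Z")                 # HTTP-version

def parse_request_line(line):
    """request-line = method SP request-target SP HTTP-version"""
    parts = line.split(" ")
    if len(parts) != 3:                   # exactly two single spaces
        return False
    method, target, version = parts
    if not TCHAR.match(method):
        return False
    if not target:                        # request-target must be non-empty
        return False
    return bool(VERSION.match(version))
```

A full CFG goes further, decomposing the request-target itself (origin-form, absolute-form, etc.), which is precisely where flat regex approaches lose the hierarchical structure.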
The rapid development of web technologies has changed the way interactive applications are created, developed, and distributed. This paper presents a methodology for building an interactive web application that combines regular-expression (regex) validation with a modern front-end framework and contemporary design principles. The project uses HTML5, CSS3, and JavaScript, along with libraries such as Bootstrap, Tailwind, and GSAP, to create an interface that is responsive, animated, and user-friendly. The system also incorporates network topology considerations to simulate and characterize interactive communication structures from both educational and practical perspectives. The work is innovative in combining regex-driven data validation with interactive visualization of network topologies, tying computer science theory to practice in modern web development. Experimental evidence shows that the dynamic website is more engaging, responsive, and interoperable across browsers than a traditional static one. Overall, the work spans modern web engineering, interactive learning, and visualization, and enables future integration of backend systems, artificial intelligence (AI) enhancements, and progressive web apps (PWA). Index Terms&mdash;Regular expressions (regex), web applications, front-end development, network topology, interactive data visualization, responsive design.
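The paper's validation runs as regex checks in the browser; the same idea is sketched here in Python for brevity. The field names and patterns below are illustrative placeholders, not the project's actual rules.

```python
import re

# Field validation driven by a table of regex patterns, mirroring the
# browser-side checks the paper describes (patterns are illustrative).
PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+\Z"),
    "phone": re.compile(r"\+?\d{10,13}\Z"),
}

def validate(field, value):
    """Return True if the field's pattern fully matches the value."""
    pattern = PATTERNS.get(field)
    return bool(pattern and pattern.match(value))
```

Keeping the patterns in a single table makes it easy to reuse the same rules for inline feedback and for pre-submit validation.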
The rapid expansion of software systems in combination with the ubiquitous growth of artificial intelligence has drastically changed the programming culture. Traditional integrated development environments mostly offer syntax-level support in a very superficial manner without actually understanding the developer&rsquo;s intention. This paper presents an AI-powered integrated development environment that can perform semantic reasoning, generate code from natural language, create documentation automatically, and debug in an interactive manner. The system is developed using Python, TailwindCSS, Monaco Editor, and Xterm.js, and it combines the traditional development ergonomics with advanced reasoning by large language models. The backend is fully autonomous and communicates in real-time through WebSocket along with having persistent data storage. Evaluation in practical scenarios reveals that the system can shorten the time spent on repetitive coding tasks by about forty percent and the number of debugging iterations by roughly twenty percent as compared to the usual methods.
This paper presents the design, implementation details, and evaluation of a novel distributed database system that combines a Log-Structured Merge-tree (LSM-tree) storage engine with consistent hashing so that data distribution and retrieval complement each other. The system addresses the major challenges of data-intensive applications: it sustains high write throughput while preserving strong read performance and fault tolerance. The architecture employs a ring-based topology with automatic data replication, efficient secondary indexing, and advanced memory management through intelligent memtable flushing. A broad range of experiments confirms the system's performance: it achieves a write throughput of 45,231 operations/second, a 17.6% improvement over Apache Cassandra and a 266% improvement over MongoDB. In the test environment, the system scales linearly up to 12 nodes and consistently achieves sub-millisecond latency for 95% of read operations.
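The ring-based topology rests on consistent hashing: nodes and keys hash onto the same circle, and each key belongs to the first node clockwise from it. A minimal ring with virtual nodes is sketched below, in the spirit of the paper's design; the class name, hash choice, and vnode count are illustrative, and replication and secondary indexing are omitted.

```python
import bisect
import hashlib

# Minimal consistent-hash ring with virtual nodes (illustrative sketch;
# replication and secondary indexing from the paper are omitted).
class HashRing:
    def __init__(self, nodes, vnodes=64):
        # Each physical node owns many points on the ring for balance
        self.ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        # First ring position clockwise from the key's hash (wraps around)
        idx = bisect.bisect(self.keys, self._hash(key)) % len(self.keys)
        return self.ring[idx][1]

ring = HashRing(["node-1", "node-2", "node-3"])
```

Because a joining or leaving node disturbs only the ring segments it owns, rebalancing touches a small fraction of keys, which is what makes the topology scale toward the 12-node linearity reported above.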
Wireless communication research relies heavily on the Universal Software Radio Peripheral (USRP), which enables real-time, flexible experimentation. Its support for software-defined radio (SDR) bridges the gap between theory and practice. Rapid prototyping and development can be hindered by the limited flexibility and hardware constraints of existing wireless communication systems. This paper recommends a method for transcending such limitations by using USRP in combination with GNU Radio to conduct SDR experiments. Across a wide range of wireless conditions, this arrangement allows the system to provide adaptive modulation, real-time signal processing, and protocol-adaptive testing. Applications of the proposed method include cognitive radio, spectrum sensing, and 5G protocol development. Findings demonstrate faster deployment, greater mobility, and improved accuracy. Merging USRP with GNU Radio proves to be an effective means of transcending traditional limitations and encourages innovation in real wireless systems.
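The adaptive modulation mentioned above is built from baseband blocks of the kind one prototypes in GNU Radio. One such block, written directly in NumPy for illustration (this is not a GNU Radio block and the mapping table is just the standard Gray-coded constellation, not the paper's):

```python
import numpy as np

# Gray-coded QPSK mapper, the kind of baseband block prototyped in GNU
# Radio, written in plain NumPy for illustration (not the paper's code).
QPSK = {
    (0, 0): 1 + 1j, (0, 1): -1 + 1j,
    (1, 1): -1 - 1j, (1, 0): 1 - 1j,
}

def qpsk_modulate(bits):
    """Map an even-length bit sequence to unit-power QPSK symbols."""
    assert len(bits) % 2 == 0
    symbols = [QPSK[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]
    return np.array(symbols) / np.sqrt(2)   # normalize average power to 1

syms = qpsk_modulate([0, 0, 1, 1])
```

Switching this mapper for a BPSK or 16-QAM one at runtime, based on sensed channel quality, is the essence of the adaptive-modulation experiments the setup enables.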
The research introduces a methodology for identifying the primary parameters necessary for constructing the trajectory of a mobile robot using chaotic dynamical systems. This enhances the ergodicity and adaptability of movement in a dynamic context. The suggested mathematical model integrates the generation of chaotic sequences to establish reference points, together with algorithms for their localized tracking, taking into account safety limitations, including barrier-function regulation. Numerical simulations demonstrated the method&#39;s efficacy in circumventing local minima and achieving uniform workspace coverage while preserving movement stability. This facilitates the formulation of paths that account for unforeseen impediments and enhances the resilience of control algorithms against external disturbances. It has also been demonstrated that chaotic models are advantageous for developing nonlinear adaptive trajectories in intricate contexts where traditional methods exhibit constrained efficacy. Consequently, the suggested methodology ensures precise positioning and secure interaction with other agents within the Industry 5.0 context.
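The core mechanism of ergodic coverage can be illustrated with the simplest chaotic system available. The sketch below generates reference points from two logistic-map trajectories; the paper's specific chaotic system, tracking algorithms, and barrier-function control are not reproduced, and all names and parameters here are illustrative.

```python
# Illustrative chaotic waypoint generator using the logistic map (the
# paper's actual chaotic system and barrier-function control are not
# reproduced; r = 3.99 puts the map in its chaotic regime on (0, 1)).
def logistic_waypoints(n, x0=0.1, y0=0.2, r=3.99, width=10.0, height=10.0):
    x, y = x0, y0
    points = []
    for _ in range(n):
        x = r * x * (1 - x)          # chaotic update of each coordinate
        y = r * y * (1 - y)
        points.append((x * width, y * height))
    return points

pts = logistic_waypoints(500)
```

Because the map is ergodic, the reference points spread across the whole workspace over time, which is the property the methodology exploits for uniform coverage without a precomputed sweep pattern.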