This study analyzes the impact of the loan-to-value (LTV) ratio on housing loan demand and housing price bubbles, emphasizing its importance in shaping investment decisions and risk management in the mortgage market. Using a bibliometric analysis technique, data were retrieved from the Scopus database and refined, yielding 198 articles published in English between 2014 and 2023. VOSviewer was used to visualize bibliometric networks and identify research trends. Findings indicate a significant increase in publications from 2021 to 2023, influenced by the COVID-19 pandemic and changes in LTV regulations. The United States and the United Kingdom were identified as the leading contributors to this research. Key themes include mortgage lending, macroprudential policy, housing market dynamics, and risk management. The study highlights the evolving nature of LTV research and its critical role in financial stability and macroprudential regulation, underscoring the importance of international collaboration in advancing knowledge in the mortgage sector.
Addressing the challenge of robotic voice interaction, this study leverages ChatGPT (chat generative pre-trained transformer) technology to develop a humanoid robot solution with a central processing system. Employing a top-down design approach, the solution encompasses the design of voice, video, and motion streams between users and the robot, enabling voice communication and expression output. By integrating hardware devices with a central control system, the humanoid robot system achieves a twofold purpose. On the one hand, it combines data from the conversational context, the user's tone and emotion, and the user's facial expressions to exhibit appropriate expressions. On the other hand, it formulates reasonable voice responses in conjunction with extracted statement content and emotional cues. Finally, two physical prototypes of the humanoid robot are constructed. Experimental trials are conducted to assess the voice conversation and expression output capabilities of the humanoid robot, thereby confirming the rationality and effectiveness of the proposed solution.
To overcome the constraints imposed by the Hi3559 processor's limited general video interfaces and poor device compatibility, a multi-interface video capture system based on a field-programmable gate array (FPGA) is developed. Employing asynchronous double data rate (DDR) access techniques, a decoding selection module is designed to facilitate the transformation of four video input formats. The video capture system accepts inputs in the PAL, high-definition multimedia interface (HDMI), Cameralink, and serial digital interface (SDI) formats. It employs an FPGA to decode these inputs and encodes them into the low-voltage differential signaling (LVDS) format for output, allowing seamless data exchange with the Hi3559 processor through the Mobile Industry Processor Interface (MIPI). The experimental results reveal that our system can precisely transcode 720p@30Hz PAL video and 1080p@60Hz Cameralink, HDMI, and SDI videos to the LVDS format supported by the Hi3559 series. Video format conversion using our system is robust, ensuring smooth and uninterrupted video streaming without flickering or frame loss.
Task-oriented dialogue (TOD) systems are built to help users accomplish specific objectives. However, even though their components have undergone ongoing review and improvement, an official industrial standard has yet to be established. Additionally, TOD systems face limitations in detecting out-of-scope events, deciding when to access a database, and offering scalability for further processing. To address these issues, we introduce a comprehensive TOD framework and present solutions to overcome these limitations. We also investigate dialogue state tracking, the initial phase of the system, and assess how well it can identify out-of-scope events triggered by user actions not predefined in the conversation.
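Out-of-scope detection in a dialogue state tracker is often reduced to a confidence test over the intent distribution: if no known intent is scored confidently enough, the turn is flagged. The sketch below illustrates that idea with a simple threshold rule; the intent names, scores, and threshold are hypothetical, not the framework's actual mechanism.

```python
# Hypothetical threshold rule for flagging out-of-scope user turns from
# intent scores produced by a dialogue state tracker. Intents, scores,
# and the 0.5 cutoff are illustrative assumptions.
OOS_THRESHOLD = 0.5

def detect_out_of_scope(intent_scores):
    """Return (intent, False) for an in-scope turn, or
    ("out_of_scope", True) when no intent is confident enough."""
    intent, score = max(intent_scores.items(), key=lambda kv: kv[1])
    if score < OOS_THRESHOLD:
        return "out_of_scope", True
    return intent, False

in_scope = detect_out_of_scope({"book_flight": 0.91, "cancel": 0.05})
oos = detect_out_of_scope({"book_flight": 0.21, "cancel": 0.18})
```

A real tracker would calibrate the threshold on held-out data; a fixed cutoff is used here only to keep the example self-contained.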
Scaling up large language models to store vast amounts of knowledge within their parameters incurs higher costs and longer training times. Thus, in this study, we examine the effects of enhancing language models with external knowledge and compare the performance of extractive and abstractive generation tasks in building a question-answering system. To ensure consistency in our evaluations, we modified the MS MARCO and MASH-QA datasets by filtering irrelevant support documents and enhancing contextual relevance by mapping each input question to the closest supported documents in our database setup. Finally, we assess performance in the health domain; our experiments present promising results not only for information retrieval but also for retrieval augmentation tasks aimed at improving performance in future work.
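The mapping of an input question to its closest supported documents can be illustrated with a minimal bag-of-words retriever. The corpus, tokenization, and cosine scoring below are an illustrative sketch; the study's database setup likely uses stronger representations such as dense embeddings.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def closest_documents(question, docs, k=1):
    """Rank candidate support documents by similarity to the question."""
    q = Counter(question.lower().split())
    scored = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

# Hypothetical health-domain snippets standing in for the support corpus.
docs = [
    "symptoms and treatment of seasonal flu",
    "history of the hospital building",
]
best = closest_documents("what are flu symptoms", docs)
```

The same ranking step, applied in reverse, supports the filtering described above: documents whose similarity to every question falls below a cutoff can be discarded as irrelevant.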
The Internet of Things (IoT) is a network that connects a vast number of objects, enabling them to communicate and interact with each other without human intervention. The IoT is seeing rapid growth in the field of computing. However, it is important to acknowledge that the IoT is very susceptible to many forms of attack due to the hostile nature of the internet. To address this problem, practical steps must be taken to secure IoT networks, such as the implementation of network anomaly detection. While it is impossible to prevent attacks entirely, timely discovery of an attack is essential for effective defense. Because IoT devices have limited storage and processing power, standard high-end security solutions cannot protect them. In addition, IoT devices now remain autonomously connected for extended durations. Consequently, it is necessary to create advanced network-based security solutions such as deep neural networks. While several studies have focused on neural network methods for attack detection, less emphasis has been placed on detecting attacks specifically in IoT networks. The objective of this research is to develop a gradient clipping long short-term memory network (GC-LSTM) that can efficiently and promptly identify attacks on IoT networks. The Bot-IoT dataset is employed for evaluating various detection methodologies. The incorporation of additional features resulted in improved results. The proposed GC-LSTM model achieves a remarkable accuracy of 99.98%, a detection rate of 97.67%, a true negative rate (TNR) of 87.34%, and a false alarm rate (FAR) of 34.67%.
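The gradient clipping that gives GC-LSTM its name bounds the norm of the gradient before each recurrent weight update, taming the exploding gradients common in LSTM training on long sequences. The abstract does not give the exact formulation; below is a minimal, framework-free sketch of norm-based clipping with an illustrative threshold.

```python
import math

def clip_gradients(grads, max_norm):
    """Scale the gradient vector down if its L2 norm exceeds max_norm.
    Norm-clipping sketch; the paper's exact scheme is not specified."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

# A gradient spike typical of an exploding recurrent gradient:
# the vector [30, 40] has norm 50; after clipping its norm is 5.
clipped = clip_gradients([30.0, 40.0], max_norm=5.0)
```

In a deep-learning framework the same step would run between backpropagation and the optimizer update, applied to all LSTM parameters jointly.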
In recent years, an increasing number of software solutions have been presented to tackle the issue of energy usage at the application level. Nevertheless, little is known about the level of concern among software developers over energy use, the specific areas of energy consumption they deem significant, and the potential solutions they propose for enhancing energy efficiency. In particular, the increasing amount of data and the growing number of IoT devices require more storage space and computational power, which results in higher energy consumption. To address this problem, academics and practitioners have been investigating several strategies to enhance energy efficiency in computer systems. One promising direction is to use deep learning algorithms, especially those that make use of natural language processing (NLP) methods, to estimate software energy usage based on Stack Overflow data. These NLP techniques can analyze the text of questions and answers, using tokenization, lemmatization, and named entity recognition to identify terms and phrases related to energy consumption. This study examines practitioners' concerns about energy consumption on Stack Overflow through lexicon-based sentiment analysis, an NLP technique, combined with recurrent neural networks (RNNs). The objective is to improve energy efficiency by forecasting time series data. The results of this study indicate that practitioners' willingness to start conversations in the field of energy is closely linked to the utilization of ideas. This analysis of software energy consumption issues may assist academics in identifying the most significant concerns for software developers and end users.
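Lexicon-based sentiment analysis of the kind applied to Stack Overflow posts scores a text by summing word polarities from a fixed lexicon. The tiny lexicon and example post below are illustrative assumptions, not the study's actual resources, which would also include the preprocessing steps (lemmatization, entity recognition) named above.

```python
# Illustrative polarity lexicon for energy-related discussion; a real
# study would use an established resource and richer preprocessing.
ENERGY_LEXICON = {
    "efficient": 1, "optimize": 1, "improve": 1,
    "drain": -1, "overheat": -1, "waste": -1, "slow": -1,
}

def sentiment_score(text):
    """Sum lexicon polarities over whitespace-tokenized, lowercased words;
    words outside the lexicon contribute zero."""
    tokens = text.lower().split()
    return sum(ENERGY_LEXICON.get(t, 0) for t in tokens)

# Hypothetical post text: "drain" and "overheat" each score -1.
score = sentiment_score("this api will drain and overheat the battery")
```

Such per-post scores, ordered by timestamp, form the time series that the RNN component then forecasts.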
This research article presents the design and development of a location system using a low-power, long-range wireless wide-area network (LoRaWAN). The device is designed for tracking locations and reporting the status of specific areas. The core principle of the system is that the central processing unit collects data from various sensors, including satellite location sensors, temperature sensors, and relative humidity sensors. This data is then transmitted via LoRaWAN technology to a server, where it is processed and displayed on a map accessible through a web server. The system provides the geographical location, specified by latitude and longitude, and displays the real-time temperature and relative humidity of the nodes.
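Because LoRaWAN uplinks carry only small application-defined payloads, node readings like those above are typically packed into a compact binary frame before transmission. The field layout below is an assumption for illustration, not the article's actual frame format.

```python
import struct

# Hypothetical 12-byte big-endian uplink frame: latitude and longitude as
# scaled 32-bit integers, temperature (C) and relative humidity (%) as
# scaled 16-bit integers. The layout is an illustrative assumption.
def encode_payload(lat, lon, temp_c, rh):
    """Pack node readings into a compact LoRaWAN-style frame."""
    return struct.pack(">iihh",
                       round(lat * 1e5), round(lon * 1e5),
                       round(temp_c * 100), round(rh * 100))

def decode_payload(frame):
    """Recover the scaled readings on the server side."""
    lat, lon, temp, rh = struct.unpack(">iihh", frame)
    return lat / 1e5, lon / 1e5, temp / 100, rh / 100

frame = encode_payload(13.7563, 100.5018, 31.25, 64.5)
```

Scaling to integers keeps the frame at 12 bytes while preserving roughly 1 m of coordinate precision, which matters under LoRaWAN's strict payload and duty-cycle limits.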
This work deals with the problem of prison overcrowding in Senegal and the use of electronic bracelets to reduce this overcrowding. Electronic bracelets collect a variety of data, such as location, movements, communication data, and biometric data. However, data security is a major concern. The aim of this work is to protect these data by using Internet of Things (IoT) and fog computing technologies to limit the data collection perimeter, thereby reducing the transfer of massive amounts of data to remote data centers. The architecture implemented aims to collect only the necessary data from the remand and correctional facilities controlled by departmental courts, to comply with data protection laws, and to implement security policies to prevent external attacks. This approach aims to guarantee data confidentiality while enabling the use of electronic bracelets to improve the prison situation in Senegal.
This study explores the application and fine-tuning of You Only Look Once (YOLOv8) models for real-time tomato recognition using drone imagery in greenhouse environments, with a focus on practical optimization strategies. Our evaluation of YOLO’s speed, robustness, and adaptability revealed that varying batch sizes and epochs had minimal impact on performance. Notably, the YOLOv8n model matched the performance of the YOLOv8x model while reducing training time by up to 60 times. Further fine-tuning identified the final learning rate (lrf) and dataset annotation quality as critical factors for model performance. Optimizing the lrf and enhancing dataset annotations significantly improved accuracy, underscoring their importance in effective YOLO model deployment. Our results demonstrate YOLOv8’s superiority over YOLOv5, with the optimized YOLOv8n model being ready for deployment in future tomato recognition tasks, paving the way for more efficient agricultural monitoring. This work provides valuable insights into object detection and offers practical guidance for researchers addressing similar challenges.
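In the Ultralytics YOLO trainer, the final learning rate factor lrf sets the last learning rate as a fraction of the initial rate lr0, with the rate decayed toward lr0 * lrf over the training run. The sketch below shows a linear schedule of that shape; the lr0 and lrf values mirror common defaults but are illustrative here, and the study's tuned values are not given in the abstract.

```python
def lr_at_epoch(epoch, epochs, lr0=0.01, lrf=0.01):
    """Linearly decay the learning rate from lr0 to lr0 * lrf.

    Sketch of how a final-learning-rate factor (lrf) shapes a schedule;
    lr0 and lrf here echo common YOLO defaults, not the study's values.
    """
    frac = epoch / max(epochs - 1, 1)     # progress through training, 0..1
    return lr0 * (1 - frac) + lr0 * lrf * frac

start = lr_at_epoch(0, 100)    # first epoch: lr0
end = lr_at_epoch(99, 100)     # last epoch: lr0 * lrf
```

Because lrf fixes where the schedule ends rather than where it starts, tuning it changes how aggressively training anneals, which is consistent with the abstract's finding that lrf mattered more than batch size or epoch count.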
The maximum likelihood (ML) technique offers high performance for direction-of-arrival (DOA) estimation but is computationally expensive. Conventionally, this approach uses the sample covariance matrix (SCM) of the array output. The computation of the SCM depends on the array size and the number of available snapshots, which leads to a huge computational burden for large arrays and/or many snapshots. If calculation of the SCM can be avoided, a reduction in computational complexity is evidently achievable. To circumvent this issue, a modified ML version is proposed. Exploiting the Nyström method allows us to eliminate the SCM computation. The resulting low-rank matrices can be used to construct an accurate signal subspace without calculating the SCM and its eigenvalue decomposition (EVD). Furthermore, replacing the SCM with the signal subspace establishes the modified ML function. Regarding computational complexity, the numbers of complex multiplications between matrices are compared. Several simulation results, including the spatial spectrum, root mean squared error (RMSE), and simulation time, are included to confirm the tradeoff between computational time and DOA estimation performance.
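The Nyström idea is to form only two small blocks of the covariance, eigendecompose the leading block, and extend its eigenvectors to the full array, so the full SCM and its EVD are never computed. The sketch below illustrates this on a noiseless single-source example; the array geometry, partition size, and test signal are illustrative, not the paper's configuration.

```python
import numpy as np

def nystrom_signal_subspace(X, p, k):
    """Approximate the k-dimensional signal subspace of array data X (M x N)
    from two partial covariance blocks, avoiding the full M x M SCM and its
    EVD. Illustrative sketch of the Nystrom idea, not the paper's algorithm."""
    N = X.shape[1]
    X1, X2 = X[:p], X[p:]
    R11 = X1 @ X1.conj().T / N            # small p x p covariance block
    R21 = X2 @ X1.conj().T / N            # (M - p) x p cross block
    vals, vecs = np.linalg.eigh(R11)      # EVD on the p x p block only
    vals, vecs = vals[::-1][:k], vecs[:, ::-1][:, :k]   # top-k pairs
    U = np.vstack([vecs, R21 @ vecs / vals])            # Nystrom extension
    Q, _ = np.linalg.qr(U)                # orthonormal signal subspace
    return Q

# Noiseless single-source check: the subspace should align with the
# steering vector (ULA geometry and angle below are illustrative).
M, N = 6, 50
rng = np.random.default_rng(0)
a = np.exp(1j * np.pi * np.arange(M) * np.sin(0.3))   # ULA steering vector
s = np.exp(1j * 2 * np.pi * rng.random(N))            # unit-modulus signal
X = np.outer(a, s)
Q = nystrom_signal_subspace(X, p=3, k=1)
alignment = abs(a.conj() @ Q[:, 0]) / np.linalg.norm(a)
```

Only a p x p EVD and two thin matrix products are needed, which is where the complexity saving over the full SCM-plus-EVD pipeline comes from.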
In recent years, we have witnessed an increase in data transfer rates, which requires the development of new communication methods that can handle high-speed data transfer over challenging communication channels. One such need is the transmission of communication over serializer/deserializer (SerDes) printed circuit boards (PCBs), which are used to transmit data between chips at high speeds of 10 Gbps and above using the four-level pulse amplitude modulation (PAM-4) encoding method, which enables lower losses at relatively low cost. Significant signal degradation is present in high-speed SerDes communication systems, and inter-symbol interference (ISI) distortion dominates. One of the most effective methods to mitigate ISI distortion is the use of equalizers. The goal of this research is to study the performance of communication between two chips (transmitter/receiver) over a SerDes PCB at 100 Gbps using the PAM-4 encoding method with an integrated continuous time linear equalizer (CTLE), feedforward equalizer (FFE), and decision feedback equalizer (DFE). The analysis includes a transmitter/receiver with PAM-4 encoding, including the PCB channel response. We further test the performance of combinations of different equalizers while defining the relevant values and parameters (rate, transmission, convergence rate, and equalizer coefficients). Performance is evaluated using signal-to-noise ratio (SNR) and bit error rate (BER) metrics. We investigated the BER performance for five PCBs of different lengths with an analog CTLE and digital FFE-DFE equalizers and found that for a small number of FFE-DFE taps, a specific CTLE configuration is optimal, but for an optimal FFE-DFE combination, a different CTLE configuration is best for all PCB lengths. We also show that the longer the PCB, the more FFE-DFE coefficients are needed; consequently, more power is required to compensate for a longer PCB.
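The FFE/DFE split described above can be shown on a toy baseband model: the FFE is an FIR filter on received samples, while the DFE reconstructs post-cursor ISI from past symbol decisions and subtracts it before the PAM-4 slicer. The channel tap, equalizer coefficients, and symbol sequence below are illustrative, not the paper's 100 Gbps link values.

```python
# Toy FFE + DFE over a one-post-cursor channel with PAM-4 symbols.
# All taps and levels are illustrative assumptions.
LEVELS = [-3, -1, 1, 3]                      # PAM-4 symbol alphabet

def slicer(x):
    """Decide the nearest PAM-4 level."""
    return min(LEVELS, key=lambda lv: abs(x - lv))

def ffe_dfe(rx, ffe_taps, dfe_taps):
    """Feed-forward FIR on received samples, then subtract ISI
    reconstructed from past decisions (decision feedback)."""
    decisions = []
    for n in range(len(rx)):
        # FFE: FIR over current and past received samples
        y = sum(c * rx[n - i] for i, c in enumerate(ffe_taps) if n - i >= 0)
        # DFE: cancel post-cursor ISI using previous decisions
        y -= sum(d * decisions[n - 1 - j]
                 for j, d in enumerate(dfe_taps) if n - 1 - j >= 0)
        decisions.append(slicer(y))
    return decisions

tx = [3, -1, 1, -3, 3, 1, -1, -3]
# Channel: unit main cursor plus a 0.4 post-cursor (the ISI to cancel)
rx = [tx[n] + 0.4 * (tx[n - 1] if n > 0 else 0) for n in range(len(tx))]
out = ffe_dfe(rx, ffe_taps=[1.0], dfe_taps=[0.4])
```

Slicing the raw samples directly misdecides some symbols, while the single DFE tap matched to the post-cursor recovers the transmitted sequence, mirroring the abstract's observation that longer channels (more post-cursors) demand more taps.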