
Leveraging Explainable Transformers for Robust Financial Time Series Forecasting

Speakers: Rakesh Kumar

Track: Track 5: Emerging Trends of AI/ML

📑 No Slides 🎬 No Video

Abstract

Financial time series forecasting is essential for making well-informed financial decisions. Traditional models such as autoregressive integrated moving average (ARIMA) and long short-term memory (LSTM) networks often fail to capture intricate temporal dependencies and rarely produce interpretable predictions. To improve both the accuracy and the interpretability of financial time series forecasting, this study introduces a new transformer-based method. The transformer's self-attention mechanisms effectively model long-term dependencies in financial data, strengthening its predictive capability. To address the black-box nature of deep learning models, the study employs explainability methods such as SHAP (SHapley Additive exPlanations) and attention heatmaps to clarify the model's decision process. Large-scale experiments on diverse financial datasets show that the proposed transformer model produces more robust and accurate predictions than baseline methods. Its higher R-squared values and lower mean absolute percentage error (MAPE) across a variety of financial assets demonstrate robustness to market fluctuations. The explainability framework also identifies the most influential predictors, providing actionable insights for financial analysts and decision-makers. By combining state-of-the-art transformer models with interpretability, this research advances the body of work on financial AI and delivers credible, transparent financial forecasting.
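To make the two ingredients of the abstract concrete, here is a minimal NumPy sketch (not the authors' implementation): single-head scaled dot-product self-attention, whose weight matrix is exactly what an attention heatmap visualizes, alongside the MAPE metric the abstract reports. The sequence shape and single-head setup are illustrative assumptions.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence x of shape (T, d).
    Returns the attended output and the (T, T) attention-weight matrix;
    each row shows how strongly one time step attends to every other,
    which is what an attention heatmap renders."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # pairwise similarity of time steps
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ x, weights

def mape(y_true, y_pred):
    """Mean absolute percentage error, one of the evaluation metrics cited."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Illustrative run on random data: 8 time steps, 4 features.
rng = np.random.default_rng(0)
seq = rng.normal(size=(8, 4))
out, attn = self_attention(seq)   # attn is the heatmap-ready weight matrix
```

In a full transformer the queries, keys, and values come from learned projections and multiple heads; this sketch only shows the attention computation that the explainability step inspects.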

Speakers

Rakesh Kumar
GLA University

Details

Type
Online
Model
OFFLINE
Language
EN
Timezone
UTC+8
Views
418
Likes
4