Leveraging Explainable Transformers for Robust Financial Time Series Forecasting
Abstract
Financial time series forecasting is essential for making well-informed financial decisions. Traditional models such as autoregressive integrated moving average (ARIMA) and long short-term memory (LSTM) networks often fail to capture intricate temporal dependencies and do not produce interpretable predictions. To improve both the accuracy and interpretability of financial time series forecasting, this study introduces a new transformer-based method. The transformer's self-attention mechanism effectively models long-range dependencies in financial data, strengthening its predictive capability. To address the black-box problem of deep learning models, the study employs explainability methods such as SHAP (SHapley Additive exPlanations) and attention heatmaps to clarify the model's decision process. Large-scale experiments on diverse financial datasets show that the proposed transformer model yields more robust and accurate predictions than baseline methods, achieving higher R-squared values and lower mean absolute percentage error (MAPE) across a variety of financial assets, which demonstrates its robustness to market fluctuations. The explainability framework also identifies important predictors, offering valuable insights to financial analysts and decision-makers. By combining state-of-the-art transformer models with interpretability, this work advances the body of financial AI research and provides credible, transparent financial forecasting.
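The abstract reports results in terms of MAPE and R-squared. As a minimal sketch of how these two evaluation metrics are computed (the function names and example price series below are illustrative, not taken from the paper):

```python
def mape(actual, predicted):
    # Mean absolute percentage error, in percent.
    # Assumes no actual value is zero (typical for asset prices).
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Hypothetical price series and model predictions:
prices = [100.0, 102.0, 101.0, 105.0]
preds = [101.0, 101.0, 102.0, 104.0]

print(round(mape(prices, preds), 4))       # lower is better
print(round(r_squared(prices, preds), 4))  # closer to 1 is better
```

A lower MAPE indicates smaller relative errors, while an R-squared closer to 1 indicates that the model explains more of the variance in the series; the paper's claim is that the transformer improves on baselines by both measures.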
Keywords
Financial time series, Transformer model, Explainability, SHAP, Market prediction, Self-attention