
RaP-ProtoViT: Efficient Dual-Head Transformers for Robust Gastric Endoscopy Classification and Generalizable Clinical Deployment

Publisher: IEEE

Authors: Khosro Rezaee (Meybod University); Mohamadreza Khosravi (Shiraz University of Medical Sciences); Ali Rachini (Holy Spirit University of Kaslik); Zakaria Che Muda (INTI-IU University)

Abstract:

We introduce RaP-ProtoViT, an end-to-end dual-head transformer for eight-class gastrointestinal (GI) endoscopy classification on Kvasir-v2. A margin head (ArcFace/AM-Softmax) enforces angular separation between classes, while a prototype head aggregates top-k token–prototype similarities over M trainable prototypes per class; a lightweight input-adaptive MLP fuses the two heads. A leakage-aware pipeline (pHash deduplication + GroupKFold) prevents near-duplicate bleed-over between training and validation splits. Training uses AdamW with SAM, cosine warm-up, DropPath, label smoothing, SWA, and post-hoc temperature scaling; a two-stage hyperparameter optimization (HPO) scheme (MOTPE+ASHA, then qEHVI) selects operating points under Latency@224 ≤ 200 ms and memory constraints. On Kvasir-v2 the model attains 99.1% accuracy, Macro-F1 = 0.991, Macro-AUPRC = 0.997, AUROC = 0.998, and ECE ≈ 0.9%, with per-class F1 tightly clustered in 0.988–0.994 and stable performance across folds (±0.2 pp accuracy, ±0.002 Macro-F1). Ablations show that margin-only and prototype-only variants reduce Macro-F1 to 0.967 and 0.975 and raise ECE to 2.8% and 2.2%, respectively; removing adaptive fusion drops Macro-F1 to 0.984. The proposed HPO converges 2–3× faster and yields better final Macro-F1, AUPRC, and ECE than Bayesian TPE or Random+ASHA. The prototype head provides localized, intrinsically interpretable evidence that complements the margin head's discrimination, within a single-model deployment footprint. By advancing robust, interpretable, and computationally efficient AI for gastric endoscopy, our approach can improve early detection of gastrointestinal disease and enable reliable clinical deployment across diverse healthcare settings.
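The dual-head design can be made concrete with a short sketch. Below is a minimal PyTorch illustration of the three components named in the abstract: a prototype head that scores each class by averaging the top-k token–prototype cosine similarities over M trainable prototypes per class, an ArcFace-style margin head, and a lightweight input-adaptive MLP gate that fuses the two logit sets. All class names, the gating design, and the hyperparameter values (M, k, s, m) are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    """Class score = mean of the top-k token-to-prototype cosine similarities."""
    def __init__(self, dim, num_classes, M=4, k=8):
        super().__init__()
        # M trainable prototypes per class, living in token space (assumed layout).
        self.prototypes = nn.Parameter(torch.randn(num_classes, M, dim))
        self.k = k  # must not exceed the number of tokens N

    def forward(self, tokens):                        # tokens: (B, N, dim)
        t = F.normalize(tokens, dim=-1)
        p = F.normalize(self.prototypes, dim=-1)      # (C, M, dim)
        sim = torch.einsum('bnd,cmd->bcmn', t, p)     # all token-prototype cosines
        sim = sim.max(dim=2).values                   # best prototype per token: (B, C, N)
        return sim.topk(self.k, dim=-1).values.mean(dim=-1)   # (B, C)

class ArcFaceHead(nn.Module):
    """Scaled-cosine logits with an additive angular margin on the target class."""
    def __init__(self, dim, num_classes, s=30.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, dim))
        self.s, self.m = s, m

    def forward(self, feat, target=None):             # feat: (B, dim)
        cos = F.linear(F.normalize(feat, dim=-1), F.normalize(self.weight, dim=-1))
        if target is None:                            # inference: no margin applied
            return self.s * cos
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        on_target = F.one_hot(target, cos.size(1)).bool()
        return self.s * torch.where(on_target, torch.cos(theta + self.m), cos)

class AdaptiveFusion(nn.Module):
    """Lightweight MLP predicting a per-input mixing weight for the two heads."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim // 4), nn.GELU(),
                                  nn.Linear(dim // 4, 1), nn.Sigmoid())

    def forward(self, cls_feat, margin_logits, proto_logits):
        alpha = self.gate(cls_feat)                   # (B, 1), input-adaptive
        return alpha * margin_logits + (1 - alpha) * proto_logits

# Usage with a ViT backbone producing patch tokens (B, N, D) and a pooled feature (B, D):
#   fused = AdaptiveFusion(D)(cls_feat,
#                             ArcFaceHead(D, 8)(cls_feat, labels),
#                             PrototypeHead(D, 8)(tokens))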
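The leakage-aware split can be sketched in the same spirit: perceptual hashing (pHash) clusters near-duplicate frames into groups, and GroupKFold then keeps each group inside a single fold so near-duplicates never straddle train and validation. The use of the imagehash library, the 6-bit Hamming threshold, and the naive quadratic grouping loop are assumptions made for illustration.

from PIL import Image
import imagehash
from sklearn.model_selection import GroupKFold

def phash_groups(paths, threshold=6):
    """Assign images whose pHashes differ by <= threshold bits to the same group."""
    ref_hashes, groups = [], []
    for path in paths:
        h = imagehash.phash(Image.open(path))
        for gid, ref in enumerate(ref_hashes):
            if h - ref <= threshold:   # imagehash's '-' is the Hamming distance
                groups.append(gid)
                break
        else:                          # no near-duplicate found: start a new group
            groups.append(len(ref_hashes))
            ref_hashes.append(h)
    return groups

def leakage_aware_folds(paths, labels, n_splits=5):
    """Duplicate-aware cross-validation: a group never crosses a fold boundary."""
    groups = phash_groups(paths)
    return list(GroupKFold(n_splits=n_splits).split(paths, labels, groups=groups))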
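Post-hoc temperature scaling, the last calibration step listed above, fits a single scalar T on held-out validation logits by minimizing negative log-likelihood; test logits are then divided by T before the softmax. The LBFGS recipe below is a standard way to do this, assumed here rather than taken from the paper.

import torch

def fit_temperature(val_logits, val_labels, max_iter=50):
    """Return the scalar temperature T that minimizes NLL on validation logits."""
    T = torch.nn.Parameter(torch.ones(1))
    opt = torch.optim.LBFGS([T], lr=0.1, max_iter=max_iter)
    nll = torch.nn.CrossEntropyLoss()

    def closure():
        opt.zero_grad()
        loss = nll(val_logits / T, val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    return T.detach().clamp(min=1e-3)   # calibrated probs: softmax(test_logits / T)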

Keywords: Endoscopy classification, Vision transformer, Prototype learning, Hyperparameter optimization.

Published in: 2024 Asian Conference on Communication and Networks (ASIANComNet)

Date of Publication: --

DOI: -
