Our paper "Revisit Visual Prompt Tuning: The Expressiveness of Prompt Experts" was accepted at ICLR 2026. By formalizing the connection between Attention and Mixture of Experts (MoE), we identify a key limitation of standard VPT: the restricted expressiveness of static prompts. To resolve this, we propose Visual Adaptive Prompt Tuning (VAPT), which conditions prompt experts on the input instance (see the sketch below). This formulation is theoretically proven to achieve optimal sample efficiency and yields substantial performance gains: VAPT surpasses full fine-tuning on VTAB-1K by 7.34% and outperforms VPT by over 50% in the low-data regime (1% of training data), all while using fewer parameters.
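
To make the static-vs-adaptive distinction concrete, here is a minimal PyTorch sketch of the input-conditioned prompt idea. It is for intuition only: the class names (`StaticPrompts`, `AdaptivePrompts`), the mean-pooled instance summary, and the two-layer generator are illustrative assumptions, not the paper's actual VAPT architecture.

```python
import torch
import torch.nn as nn

class StaticPrompts(nn.Module):
    """Standard VPT: one fixed set of learnable prompt tokens shared by all inputs."""
    def __init__(self, num_prompts: int, dim: int):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim) patch embeddings
        prompts = self.prompts.unsqueeze(0).expand(x.shape[0], -1, -1)
        return torch.cat([prompts, x], dim=1)  # prepend the same prompts to every input

class AdaptivePrompts(nn.Module):
    """VAPT-style sketch: prompt tokens are generated from the input instance,
    so each image gets its own prompts instead of a single static set."""
    def __init__(self, num_prompts: int, dim: int, hidden: int = 64):
        super().__init__()
        self.num_prompts = num_prompts
        self.dim = dim
        # Hypothetical lightweight generator: pooled features -> prompt tokens.
        self.generator = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, num_prompts * dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim) patch embeddings
        pooled = x.mean(dim=1)                            # (batch, dim) instance summary
        prompts = self.generator(pooled)                  # (batch, num_prompts * dim)
        prompts = prompts.view(-1, self.num_prompts, self.dim)
        return torch.cat([prompts, x], dim=1)             # input-conditioned prompts

# Usage: a batch of ViT-Base-sized patch embeddings (196 patches, dim 768).
x = torch.randn(8, 196, 768)
out = AdaptivePrompts(num_prompts=10, dim=768)(x)
print(out.shape)  # torch.Size([8, 206, 768])
```

The key design point is that `AdaptivePrompts` adds only a small generator on top of the frozen backbone, keeping the parameter budget in the same regime as static VPT while letting the prompts vary per instance.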