Global Trend Radar
arXiv cs.LG (Machine Learning) INT ai 2026-05-08 13:00

Efficient Test-Time Adaptation through Latent Subspace Coefficients Search


Analysis Results

Category
Space
Importance
59
Trend Score
18
Summary

arXiv:2510.11068v3 Announce Type: replace

Abstract: Real-world deployment often exposes models to distribution shifts, making test-time adaptation (TTA) critical for robustness. Yet most TTA methods are unfriendly to edge deployment, as they rely on backpropagation, activation buffering, or test-time mini-batches, leading to high latency and memory overhead. We propose ELaTTA (Efficient Latent Test-Time Adaptation), a gradient-free framework for single-instance TTA under strict on-device constraints. ELaTTA freezes model weights and adapts each test sample by optimizing a low-dimensional coefficient vector in a source-induced principal latent subspace, pre-computed offline via truncated SVD and stored with negligible overhead. At inference, ELaTTA encourages prediction confidence by optimizing the k-D coefficients with CMA-ES, effectively optimizing a Gaussian-smoothed objective and improving stability near decision boundaries. Across six benchmarks and multiple architectures, ELaTTA achieves state-of-the-art accuracy under both strict and continual single-instance protocols, while reducing compute by up to 63× and peak memory by up to 11×. We further demonstrate on-device deployment on a ZYNQ-7020 platform.
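The abstract's two-stage recipe (offline truncated SVD of source latents, then a gradient-free confidence search over k subspace coefficients at test time) can be sketched as below. This is a hedged illustration, not the authors' implementation: the toy linear head, the entropy objective, and all dimensions are assumptions, and a plain Gaussian evolution strategy stands in for the full CMA-ES the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frozen "model": a linear classifier head over d-dimensional latents.
# (Hypothetical stand-in; the paper adapts real network latents.)
d, n_classes, k = 16, 4, 3
W = rng.normal(size=(n_classes, d))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def entropy(z):
    """Prediction entropy of the frozen head at latent z (lower = more confident)."""
    p = softmax(W @ z)
    return -(p * np.log(p + 1e-12)).sum()

# Offline stage: principal latent subspace from source features via truncated SVD,
# stored as a small (d, k) basis with negligible overhead.
source_feats = rng.normal(size=(500, d))
_, _, Vt = np.linalg.svd(source_feats - source_feats.mean(0), full_matrices=False)
U = Vt[:k].T  # (d, k) subspace basis

def adapt(z, iters=30, pop=16, sigma=0.5, seed=1):
    """Gradient-free search over k-D coefficients c, shifting z -> z + U @ c.

    A simple Gaussian evolution strategy is used here in place of CMA-ES;
    sampling candidates from a Gaussian effectively optimizes a smoothed
    objective, as the abstract notes.
    """
    es_rng = np.random.default_rng(seed)
    c = np.zeros(k)                       # c = 0 keeps the original latent
    best_c, best_s = c.copy(), entropy(z)
    for _ in range(iters):
        cands = c + sigma * es_rng.normal(size=(pop, k))
        scores = np.array([entropy(z + U @ ci) for ci in cands])
        i = scores.argmin()
        if scores[i] < best_s:            # track best-so-far coefficients
            best_c, best_s = cands[i], scores[i]
        c = cands[np.argsort(scores)[: pop // 4]].mean(0)  # elite mean
        sigma *= 0.95                     # anneal step size
    return z + U @ best_c

z_test = rng.normal(size=d)               # a single "shifted" test latent
z_adapted = adapt(z_test)
print(entropy(z_adapted) <= entropy(z_test))  # search never worsens the start
```

Because the search starts from c = 0 (the unmodified latent) and returns the best coefficients seen, the adapted latent's entropy can never exceed the original's, mirroring the single-instance, backprop-free setting the abstract describes.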