This paper was accepted at the Workshop on Unifying Representations in Neural Models (UniReps) at NeurIPS 2025.
Activation steering methods in large language models (LLMs) have emerged as an effective way to perform targeted updates to enhance generated language without requiring large amounts of adaptation data. We ask whether the features discovered by activation steering methods are interpretable. We identify neurons responsible for specific concepts (e.g., "cat") using the "finding experts" method from research on activation steering and show that the ExpertLens, i.e., inspection of these neurons, provides insights about model representation. We find that ExpertLens representations are stable across models and datasets and closely align with human representations inferred from behavioral data, matching inter-human alignment levels. ExpertLens significantly outperforms the alignment captured by word/sentence embeddings. By reconstructing human concept organization through ExpertLens, we show that it enables a granular view of LLM concept representation. Our findings suggest that ExpertLens is a flexible and lightweight approach for capturing and analyzing model representations.
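To make the pipeline concrete, below is a minimal sketch of the two steps the abstract describes: selecting concept "expert" neurons from activations, and measuring alignment between the resulting concept representations and human similarity judgments. The function names, the mean-activation-difference scoring rule, and the use of representational similarity analysis (Spearman correlation of pairwise similarities) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import spearmanr


def find_expert_neurons(concept_acts, baseline_acts, k=50):
    """Rank neurons by how much more they activate on concept examples
    than on baseline text, and return the indices of the top-k 'experts'.
    (Hypothetical scoring; the paper's 'finding experts' criterion may differ.)
    concept_acts, baseline_acts: (num_examples, num_neurons) arrays."""
    score = concept_acts.mean(axis=0) - baseline_acts.mean(axis=0)
    return np.argsort(score)[-k:]


def expertlens_alignment(concept_vectors, human_sim):
    """Representational similarity analysis sketch: correlate the model's
    pairwise concept similarities (cosine over expert-neuron activations)
    with a human similarity matrix inferred from behavioral data.
    concept_vectors: list of (num_experts,) arrays, one per concept.
    human_sim: (num_concepts, num_concepts) human similarity matrix."""
    n = len(concept_vectors)
    model_sims, human_sims = [], []
    for i in range(n):
        for j in range(i + 1, n):
            a, b = concept_vectors[i], concept_vectors[j]
            cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
            model_sims.append(cos)
            human_sims.append(human_sim[i, j])
    return spearmanr(model_sims, human_sims).correlation
```

A typical use would extract hidden-state activations for examples of each concept, call `find_expert_neurons` per concept, stack the expert activations into one vector per concept, and report `expertlens_alignment` against a human similarity matrix; the same correlation computed between two human raters would give the inter-human alignment level the abstract compares against.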

