Deep learning models excel on stationary data but struggle in non-stationary environments due to a phenomenon known as loss of plasticity (LoP), the degradation of their ability to learn in the future. This work presents a first-principles investigation of LoP in gradient-based learning. Grounded in dynamical systems theory, we formally define LoP by identifying stable manifolds in parameter space that trap gradient trajectories. Our analysis reveals two primary mechanisms that create these traps: frozen units arising from activation saturation and cloned-unit manifolds arising from representational redundancy. Our framework uncovers a fundamental tension: properties that promote generalization in static settings, such as low-rank representations and simplicity biases, directly contribute to LoP in continual learning scenarios. We validate our theoretical analysis with numerical simulations and explore architectural choices and targeted perturbations as potential mitigation strategies.
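
As a brief illustration of the frozen-unit mechanism, consider a single ReLU unit; the parameterization below is a minimal sketch under assumed notation, not necessarily the paper's exact formulation. For a unit with pre-activation $z = w^\top x + b$ and output $h = \max(0, z)$, the gradient of a loss $\mathcal{L}$ with respect to the unit's weights is

$$\frac{\partial \mathcal{L}}{\partial w} \;=\; \frac{\partial \mathcal{L}}{\partial h}\,\mathbb{1}[z > 0]\,x .$$

If the parameters drift so that $z \le 0$ for every input $x$ in the data distribution (the unit saturates), this gradient vanishes identically and $w$, $b$ stop updating under gradient descent; the corresponding region of parameter space acts as a stable manifold that traps the trajectory, leaving the unit frozen even after the data distribution shifts.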

