[One-page summary] Understanding plasticity in neural networks (arxiv 2023) by Lyle et al.
Elune001 2024. 1. 15. 21:29

● Summary: stabilizing the loss landscape is crucial to preserving plasticity
● Approach highlight
- They show that an abrupt task change can destabilize the optimizer and drive plasticity loss


When the loss changes suddenly, $\hat{m}_{t}$ adapts more quickly than $\hat{v}_{t}$ (since $\beta_{1} < \beta_{2}$ by default), which makes the Adam update $\hat{u}_{t} = \hat{m}_{t} / (\sqrt{\hat{v}_{t}} + \epsilon)$ unstable. A simple remedy is to increase $\epsilon$.
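This instability can be illustrated with a minimal, self-contained sketch of the Adam moment updates (a toy reimplementation, not the paper's code): after a long stretch of near-zero gradients, even a tiny gradient spike drives the update toward order 1 when $\epsilon$ is small, while a larger $\epsilon$ damps it.

```python
import numpy as np

def adam_update_magnitude(grads, eps, beta1=0.9, beta2=0.999):
    """Track the Adam update u_t = m_hat / (sqrt(v_hat) + eps) over a
    gradient sequence, with standard bias correction."""
    m, v = 0.0, 0.0
    updates = []
    for t, g in enumerate(grads, start=1):
        m = beta1 * m + (1 - beta1) * g          # first moment (fast EMA)
        v = beta2 * v + (1 - beta2) * g**2       # second moment (slow EMA)
        m_hat = m / (1 - beta1**t)               # bias-corrected moments
        v_hat = v / (1 - beta2**t)
        updates.append(m_hat / (np.sqrt(v_hat) + eps))
    return updates

# Near-zero gradients, then a small abrupt jump (mimicking a task change)
grads = [1e-8] * 100 + [1e-3]
small_eps = adam_update_magnitude(grads, eps=1e-8)
large_eps = adam_update_magnitude(grads, eps=1e-2)
# With tiny eps the final update spikes to ~1 despite the small gradient,
# because m_hat outpaces sqrt(v_hat); a larger eps damps the spike
print(small_eps[-1], large_eps[-1])
```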
- To understand how the optimization method affects plasticity loss, they compare gradient descent against a random walk (Gaussian perturbation of the parameters)
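A minimal sketch of such a comparison (on a toy quadratic objective of my own choosing, not the paper's setup): the random-walk variant replaces each gradient step with a Gaussian perturbation of matched norm, so any difference comes from the step direction, not its size.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    return 0.5 * np.sum(w**2)  # toy quadratic objective (assumption)

def run(steps=200, lr=0.1, random_walk=False):
    w = rng.normal(size=10)
    for _ in range(steps):
        grad = w                  # exact gradient of the quadratic
        step = lr * grad
        if random_walk:
            # Gaussian perturbation with the same norm as the gradient step
            noise = rng.normal(size=w.shape)
            step = np.linalg.norm(step) * noise / np.linalg.norm(noise)
        w = w - step
    return loss(w)

gd_loss = run(random_walk=False)  # gradient descent drives the loss to ~0
rw_loss = run(random_walk=True)   # the random walk shows no systematic decrease
print(gd_loss, rw_loss)
```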

- A smoother loss landscape is both easier to optimize and has been empirically observed to generalize better
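One common way to quantify landscape smoothness (an illustrative sketch, not the paper's measurement) is the largest Hessian eigenvalue, estimated here by power iteration on finite-difference Hessian-vector products:

```python
import numpy as np

def sharpness(grad_fn, w, iters=50, fd_eps=1e-4, seed=0):
    """Estimate the largest Hessian eigenvalue (a standard sharpness proxy)
    via power iteration on finite-difference Hessian-vector products:
    Hv ~= (grad(w + fd_eps*v) - grad(w)) / fd_eps."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=w.shape)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        hv = (grad_fn(w + fd_eps * v) - grad_fn(w)) / fd_eps
        lam = float(v @ hv)                      # Rayleigh quotient
        v = hv / (np.linalg.norm(hv) + 1e-12)
    return lam

# Two toy quadratic losses, one "smooth" and one "sharp" (assumed examples);
# grad_fn returns the gradient A @ w, so the Hessian is A itself
A_smooth = np.diag([1.0, 0.5, 0.1])
A_sharp = np.diag([50.0, 0.5, 0.1])
w0 = np.ones(3)
print(sharpness(lambda w: A_smooth @ w, w0))  # ~ 1.0
print(sharpness(lambda w: A_sharp @ w, w0))   # ~ 50.0
```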

● Main results


● Discussion
- Why does a smoother loss landscape yield better generalization?