List: Continual Learning (3)
Computer Vision, AI
● Summary: Adapts an energy-based model (EBM) as the classifier for continual learning to solve the class-incremental problem
● Approach highlights
○ Energy-based classifier: takes a class y and data x as input and outputs their scalar energy value
○ The EBM loss function minimizes the energy of the positive (x, y) pair and maximizes the energy of negative pairs (see the sketch below)
● Main Results
● Discussion
○ Computational cost..
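A minimal sketch of the idea in PyTorch: the classifier scores a (data, class) pair with a scalar energy E(x, y), and a contrastive loss pulls down the positive-pair energy while pushing up the energies of all other classes via log-sum-exp. The `EnergyClassifier` architecture, its layer sizes, and this exact loss form are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class EnergyClassifier(nn.Module):
    """Toy energy-based classifier: E(x, y) is a scalar energy per (data, class) pair."""
    def __init__(self, feat_dim, num_classes, hidden=128):
        super().__init__()
        self.class_emb = nn.Embedding(num_classes, hidden)
        self.encoder = nn.Linear(feat_dim, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, y):
        # Fuse data features with the class embedding; output a scalar energy per sample.
        h = torch.tanh(self.encoder(x) + self.class_emb(y))
        return self.head(h).squeeze(-1)                      # (batch,)

def ebm_loss(model, x, y, num_classes):
    # Energy of every (x, y') pair, shape (batch, num_classes).
    energies = torch.stack(
        [model(x, torch.full((x.size(0),), c, dtype=torch.long, device=x.device))
         for c in range(num_classes)], dim=1)
    pos = energies.gather(1, y.unsqueeze(1)).squeeze(1)      # positive-pair energy E(x, y)
    # -log p(y|x) with p(y|x) proportional to exp(-E(x, y)):
    # minimizes the positive-pair energy and maximizes negative-pair energies.
    return (pos + torch.logsumexp(-energies, dim=1)).mean()

# Usage: loss = ebm_loss(EnergyClassifier(32, 10), torch.randn(8, 32),
#                        torch.randint(0, 10, (8,)), num_classes=10)
```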
● Summary: Learning two disjoint prompt spaces makes rehearsal-free, prompt-based continual learning more effective
● Approach highlights
○ Uses a task-invariant General prompt (length L_g) at ViT layers 1–2 and task-specific Expert prompts (# of tasks × L_e) at ViT layers 3–5
○ Prefix tuning: before MSA, concatenates the key prompt to the hidden representation of the keys and the value prompt to the hidden representation of the values (see the sketch below), ..
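A rough illustration of the prefix-tuning step described above, not DualPrompt's actual code: learned key/value prompts are prepended to the attention keys and values, while the queries and output sequence length stay unchanged. The helper name and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def prefix_tuned_attention(q, k, v, prompt_k, prompt_v):
    """Prefix tuning before MSA: prepend learned key/value prompts to the
    hidden keys/values. Assumed shapes: q/k/v are (batch, heads, seq, head_dim),
    prompt_k/prompt_v are (heads, prompt_len, head_dim)."""
    B = q.size(0)
    pk = prompt_k.unsqueeze(0).expand(B, -1, -1, -1)  # broadcast prompts over batch
    pv = prompt_v.unsqueeze(0).expand(B, -1, -1, -1)
    k = torch.cat([pk, k], dim=2)                     # (B, H, L_p + seq, d)
    v = torch.cat([pv, v], dim=2)                     # (B, H, L_p + seq, d)
    # Queries are untouched, so the output keeps the original sequence length.
    return F.scaled_dot_product_attention(q, k, v)
```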
Category: continual learning (class incremental)
● Summary: Uses prompts with a pre-trained model for rehearsal-buffer-free and task-agnostic continual learning
● Approach highlights
○ A prompt pool memory space enables rehearsal-buffer-free and task-agnostic learning
○ Penalizes frequently used prompts via a prompt frequency table H_t at training time to keep prompt selection diverse (1) (see the sketch after this list)
○ At training time, if quer..
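A hedged sketch of L2P-style query-key prompt selection with a frequency penalty, assuming cosine-similarity matching against a feature from a frozen pre-trained model; the normalized-frequency penalty used here is one plausible form, and the exact formula (1) in the paper may differ.

```python
import torch
import torch.nn.functional as F

def select_prompts(query, prompt_keys, freq_table, top_n=5, train=True):
    """query: (batch, dim) feature from a frozen pre-trained model.
    prompt_keys: (pool_size, dim) learnable keys, one per prompt in the pool.
    freq_table: (pool_size,) running counts H_t of how often each prompt was picked."""
    # Cosine similarity between each query and every prompt key -> (batch, pool_size).
    sim = F.cosine_similarity(query.unsqueeze(1), prompt_keys.unsqueeze(0), dim=-1)
    if train:
        # Down-weight frequently used prompts by their normalized usage frequency
        # so selection stays diverse (assumed penalty form).
        penalty = freq_table / freq_table.sum().clamp(min=1.0)
        sim = sim * (1.0 - penalty).unsqueeze(0)
    top = sim.topk(top_n, dim=-1).indices                     # (batch, top_n)
    if train:
        # Update H_t with this batch's selections.
        freq_table += torch.bincount(top.flatten(), minlength=freq_table.numel()).float()
    return top
```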