
[One-page summary] Learning to Prompt for Continual Learning (CVPR 2022) By Wang et al.

Elune001 2023. 4. 18. 17:20

Category: continual learning (class incremental)

Summary: Uses a prompt pool with a frozen pre-trained model for rehearsal-buffer-free and task-agnostic continual learning
Fig1. Inference phase of L2P (source: https://ai.googleblog.com/2022/04/learning-to-prompt-for-continual.html)
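A minimal sketch of the inference phase illustrated in Fig1: a query feature from the frozen pre-trained backbone is matched against learnable prompt keys, and the top-N prompts are retrieved from the pool to be prepended to the input embeddings. Shapes, pool sizes, and names such as `select_prompts` are illustrative assumptions, not the official implementation.

```python
import torch
import torch.nn.functional as F

M, N = 10, 5      # pool size M and prompts selected per input N (assumed values)
L_p, D = 5, 768   # prompt length and embedding dim (ViT-B/16-like, assumed)

prompt_keys = torch.randn(M, D)        # learnable keys k_m
prompt_pool = torch.randn(M, L_p, D)   # learnable prompts P_m

def select_prompts(query_feat):
    """Pick the top-N prompts whose keys are closest (cosine) to the query q(x)."""
    q = F.normalize(query_feat, dim=-1)    # query: frozen [CLS]-style feature, (B, D)
    k = F.normalize(prompt_keys, dim=-1)   # (M, D)
    sim = q @ k.T                          # cosine similarity, (B, M)
    idx = sim.topk(N, dim=-1).indices      # selected prompt indices K_x, (B, N)
    return prompt_pool[idx]                # (B, N, L_p, D)

# Example: the selected prompts are flattened and prepended to the patch
# embeddings, then the extended sequence goes through the frozen transformer.
x_cls = torch.randn(4, D)                  # stands in for q(x)
prompts = select_prompts(x_cls)            # (4, N, L_p, D)
```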

 

Approach highlight
    ○ A prompt pool serves as the memory space, which allows rehearsal-buffer-free and task-agnostic continual learning

Fig2. Differences between L2P and Rehearsal-based methods

 

    ○ Penalize frequently-used prompts with a prompt frequency table H_t at training time to diversify prompt selection, as in Eq. (1)

(1)  $K_x = \arg\min_{\{s_i\}_{i=1}^{N} \subseteq [1,M]} \sum_{i=1}^{N} \gamma\left(q(x), k_{s_i}\right) \cdot h_{s_i}$

where $\gamma$ is the query-key matching score (cosine distance) and $h_{s_i}$ is the usage frequency of prompt $s_i$ recorded in $H_t$.
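A sketch of this diversified selection, assuming the matching score is a cosine distance and the frequency penalty is the normalized usage count from H_t; function and variable names are hypothetical.

```python
import torch
import torch.nn.functional as F

def select_prompts_penalized(query, prompt_keys, usage_counts, top_n):
    """Select prompts by frequency-penalized matching score, as in Eq. (1)."""
    q = F.normalize(query, dim=-1)         # (B, D)
    k = F.normalize(prompt_keys, dim=-1)   # (M, D)
    dist = 1.0 - q @ k.T                   # gamma(q(x), k_m): cosine distance, (B, M)
    h = usage_counts / usage_counts.sum()  # normalized frequency h from table H_t, (M,)
    penalized = dist * h                   # frequently-used prompts get larger scores
    # argmin over the pool: take the top_n smallest penalized scores
    return penalized.topk(top_n, dim=-1, largest=False).indices
```

At test time the penalty is dropped and selection falls back to plain query-key matching, so the table only shapes which prompts get trained.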

    ○ At training time, if a query (from a task's input) and a prompt key belong to the same task, the matching loss pulls them closer together (see the sketch below)
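A sketch of that training objective: cross-entropy on the prompted input plus a surrogate term that pulls the selected keys toward the query, weighted by a coefficient lambda. Tensor names and the default weight are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def l2p_loss(logits, targets, query, selected_keys, lam=0.5):
    """Classification loss + surrogate key-matching loss (query pulled to its keys)."""
    ce = F.cross_entropy(logits, targets)            # classification on prompted input
    q = F.normalize(query, dim=-1).unsqueeze(1)      # (B, 1, D)
    k = F.normalize(selected_keys, dim=-1)           # (B, N, D) keys chosen for this input
    match = (1.0 - (q * k).sum(-1)).mean()           # mean cosine distance to selected keys
    return ce + lam * match
```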

Total Summary

●Main Results

 

●Discussion

    ○ Offers a new perspective on solving incremental learning with prompts

    ○ Open question: how to determine the total number of prompts M in the prompt pool

 

 

Reference

Total summary: Jaehyuk Heo, DSBA seminar (2022-10-14), http://dsba.korea.ac.kr/seminar/?mod=document&uid=2574