Paper_review[short]
[One-page summary] A Theoretical Study on Solving Continual Learning (NeurIPS 2022) by Kim et al.
Elune001
2024. 1. 15. 20:35
● Summary: Divide and conquer: the Class Incremental Learning (CIL) problem can be decomposed into within-task prediction (WP) and task-id prediction (TP), where good TP corresponds to good out-of-distribution (OOD) detection; CIL is then solved by optimizing each part.
● Approach highlight
- They solve CL by decomposing it into TP and WP sub-problems under two assumptions: (1) the domains of classes within the same task are disjoint, and (2) the domains of different tasks are disjoint.
- They prove that optimizing $H_{TP}$ and $H_{WP}$ together is equivalent to optimizing $H_{CIL}$, since the CIL cross-entropy decomposes as $H_{CIL} = H_{TP} + H_{WP}$.
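The decomposition can be checked numerically: if the CIL probability of a class is the product of the task probability (TP) and the within-task class probability (WP), the negative log splits into the two terms. A minimal sketch with made-up probabilities (the 2-task/2-class numbers are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical example: 2 tasks, each with 2 classes.
# p_tp[k]    = P(task = k | x)             (task-id prediction)
# p_wp[k, j] = P(class = j | x, task = k)  (within-task prediction)
p_tp = np.array([0.7, 0.3])
p_wp = np.array([[0.9, 0.1],
                 [0.6, 0.4]])

# CIL probability of class (k, j) is P(task k | x) * P(class j | x, task k).
p_cil = p_tp[:, None] * p_wp  # shape: (num_tasks, classes_per_task)

# Cross-entropy terms for a ground-truth label (task 0, class 0).
H_cil = -np.log(p_cil[0, 0])
H_tp = -np.log(p_tp[0])
H_wp = -np.log(p_wp[0, 0])

# The CIL cross-entropy splits exactly into the TP and WP terms.
assert np.isclose(H_cil, H_tp + H_wp)
```

This is just the log of a product turning into a sum of logs, which is why minimizing the TP and WP losses jointly drives down the CIL loss.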
● Main results
● Discussion
- Using cross-entropy means the model captures only a subset of features, which will result in poor OOD detection because the missing features may be necessary to separate the in-distribution (IND) data from some out-of-distribution data. How can this be improved?