CoOp (Context Optimization) is an automated prompt engineering method that avoids manual prompt tuning by modeling a prompt's context words as continuous vectors learned end-to-end from data. The context can be shared across all classes or made class-specific. During training, the prediction error is minimized with a cross-entropy loss with respect to the learnable context vectors only, while all pre-trained parameters are kept fixed. Gradients are back-propagated all the way through the text encoder, distilling the rich knowledge encoded in its parameters into task-relevant context.
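The mechanism above can be sketched in a few lines of PyTorch. This is a toy illustration, not the paper's implementation: the "text encoder" is a stand-in linear layer instead of CLIP's transformer, and the shared context is folded into each class embedding by a simple sum rather than token concatenation. All dimensions and names here are made up for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes for illustration (not the paper's CLIP setup).
n_ctx, dim, n_cls = 4, 32, 3

# Stand-in frozen "text encoder": a single linear layer, weights fixed.
text_encoder = nn.Linear(dim, dim)
for p in text_encoder.parameters():
    p.requires_grad_(False)

# Fixed class-name embeddings (placeholders for CLIP token embeddings).
class_embed = torch.randn(n_cls, dim)

# Learnable shared context vectors -- the only trainable parameters.
ctx = nn.Parameter(torch.zeros(n_ctx, dim))

def class_features():
    # Combine the shared context with each class embedding, then encode.
    prompts = ctx.mean(dim=0, keepdim=True) + class_embed  # (n_cls, dim)
    return F.normalize(text_encoder(prompts), dim=-1)

# One training step: match an image feature to its class via cross-entropy.
image_feat = F.normalize(torch.randn(1, dim), dim=-1)
logits = image_feat @ class_features().t()           # (1, n_cls)
loss = F.cross_entropy(logits, torch.tensor([0]))
loss.backward()

# Gradients reach the context vectors *through* the frozen encoder,
# while the encoder itself receives no gradient.
print(ctx.grad is not None)              # True: context gets updated
print(text_encoder.weight.grad is None)  # True: encoder stays fixed
```

In a real setup the context vectors would be prepended (or inserted) as extra tokens in the prompt sequence, and the image features would come from CLIP's frozen image encoder; the key point shown here is that only `ctx` receives gradients.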
Source: Learning to Prompt for Vision-Language Models
Task | Papers | Share |
---|---|---|
Domain Generalization | 5 | 16.67% |
Prompt Engineering | 4 | 13.33% |
Zero-Shot Learning | 3 | 10.00% |
Few-Shot Learning | 2 | 6.67% |
Out of Distribution (OOD) Detection | 2 | 6.67% |
Image Classification | 2 | 6.67% |
Few-Shot Image Classification | 2 | 6.67% |
Image Augmentation | 1 | 3.33% |
Federated Learning | 1 | 3.33% |