Papers
A collection of paper review posts.
2023
- [Paper Review] Vision Transformers Need Registers (‘23)
- [Paper Review] Context Cluster: Image as Set of Points (ICLR ‘23 Oral)
- [Paper Review] DPM: Deep Unsupervised Learning using Nonequilibrium Thermodynamics (ICML ‘15)
- [Paper Review] VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning (ICLR ‘22)
- [Paper Review] SimSiam: Exploring Simple Siamese Representation Learning (CVPR ‘21 Best Paper Honorable Mention)
- [Paper Review] IPViT: Intriguing Properties of Vision Transformers (NeurIPS ‘21 Spotlight)
- [Paper Review] LiT: Zero-Shot Transfer with Locked-image text Tuning (CVPR ‘22)
- [Paper Review] DALL-E: Zero-Shot Text-to-Image Generation (ICML ‘21 Spotlight)
- [Paper Review] VQ-VAE: Neural Discrete Representation Learning (NIPS ‘17)
- [Paper Review] Barlow Twins: Self-Supervised Learning via Redundancy Reduction (ICML ‘21 Spotlight)
- [Paper Review] AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition (NeurIPS ‘22)
- [Paper Review] data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language (ICML ‘22 Oral)
- [Paper Review] MAE: Masked Autoencoders Are Scalable Vision Learners (CVPR ‘22 Oral)
2022
- [Paper Review] SimCLR: A Simple Framework for Contrastive Learning of Visual Representations (ICML ‘20)
- [Paper Review] MLP-Mixer: An all-MLP Architecture for Vision (NeurIPS ‘21)
- [Paper Review] SWA: Averaging Weights Leads to Wider Optima and Better Generalization (UAI ‘18)
- [Paper Review] ViT: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (ICLR ‘21 Oral)
- [Paper Review] Style Transformer for Image Inversion and Editing (CVPR ‘22)