🌐
GitHub
github.com › bcmi › DCI-VTON-Virtual-Try-On
GitHub - bcmi/DCI-VTON-Virtual-Try-On: [ACM Multimedia 2023] Taming the Power of Diffusion Models for High-Quality Virtual Try-On with Appearance Flow.
Abstract: Virtual try-on is a critical image synthesis task that aims to transfer clothes from one image to another while preserving the details of both humans and clothes. While many existing methods rely on Generative Adversarial Networks ...
Starred by 497 users
Forked by 77 users
Languages: Python 99.8% | Shell 0.2%
🌐
arXiv
arxiv.org › abs › 2403.05139
[2403.05139] Improving Diffusion Models for Authentic Virtual Try-on in the Wild
July 29, 2024 - Abstract: This paper considers image-based virtual try-on, which renders an image of a person wearing a curated garment, given a pair of images depicting the person and the garment, respectively. Previous works adapt existing exemplar-based inpainting diffusion models for virtual try-on to improve the naturalness of the generated visuals compared to other methods (e.g., GAN-based), but they fail to preserve the identity of the garments.
🌐
Johannakarras
johannakarras.github.io › Fashion-VDM
Fashion-VDM: Video Diffusion Model for Virtual Try-On
We present Fashion-VDM, a video diffusion model (VDM) for generating virtual try-on videos. Given an input garment image and person video, our method aims to generate a high-quality try-on video of the person wearing the given garment, while preserving the person's identity and motion.
🌐
Hugging Face
huggingface.co › spaces › texelmoda › virtual-try-on-diffusion-vton-d
Virtual Try-On Diffusion [VTON-D] - a Hugging Face Space by texelmoda
Upload images or provide text prompts for clothing, avatar, and background to create virtual try-on images. Get a generated image showing the avatar wearing the specified clothing against the chosen background.
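Spaces like this one expose a Gradio UI and can usually also be called programmatically with the gradio_client package. A minimal sketch follows; the endpoint name "/try_on" and the argument order are assumptions, so inspect the Space's real signature with client.view_api() before relying on it.

```python
# Minimal sketch of calling the Space programmatically with gradio_client.
# The api_name and argument layout are assumptions -- inspect the real
# signature with client.view_api() before use.
from gradio_client import Client, handle_file

client = Client("texelmoda/virtual-try-on-diffusion-vton-d")
client.view_api()  # prints the Space's actual endpoints and parameters

# Hypothetical call shape: one garment image plus one avatar image.
result = client.predict(
    handle_file("garment.jpg"),  # clothing image (assumed parameter)
    handle_file("person.jpg"),   # avatar image (assumed parameter)
    api_name="/try_on",          # assumed endpoint name
)
print(result)  # typically a local path to the generated try-on image
```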
🌐
Segmind
segmind.com › models › try-on-diffusion
Try-On Diffusion Serverless API
Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on
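Serverless endpoints like this are typically called with a plain HTTP POST carrying base64-encoded images. The sketch below shows that pattern with the requests library; the URL path, field names, and the x-api-key header are assumptions based on common REST conventions for this kind of API, so check the model page for the actual request schema.

```python
# Rough sketch of calling a serverless try-on endpoint over HTTP.
# Endpoint path, field names, and auth header are assumptions -- consult
# the Segmind model page for the actual schema.
import base64
import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

API_KEY = "YOUR_API_KEY"
URL = "https://api.segmind.com/v1/try-on-diffusion"  # assumed path

payload = {
    "model_image": b64("person.jpg"),   # assumed field name
    "cloth_image": b64("garment.jpg"),  # assumed field name
    "num_inference_steps": 25,
}
resp = requests.post(URL, json=payload, headers={"x-api-key": API_KEY}, timeout=120)
resp.raise_for_status()
with open("tryon_result.jpg", "wb") as f:
    f.write(resp.content)  # assumes the API returns raw image bytes
```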
🌐
arXiv
arxiv.org › abs › 2405.11794
[2405.11794] ViViD: Video Virtual Try-on using Diffusion Models
May 28, 2024 - In this work, we present ViViD, a novel framework employing powerful diffusion models to tackle the task of video virtual try-on. Specifically, we design the Garment Encoder to extract fine-grained clothing semantic features, guiding the model ...
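To make the "Garment Encoder" idea concrete, here is a generic multi-scale convolutional encoder that turns a garment image into feature maps a denoising network could attend to. It is an illustrative sketch only, not ViViD's actual architecture.

```python
# Illustrative multi-scale garment encoder (not ViViD's actual architecture):
# it maps a garment image to feature maps at several resolutions that a
# denoising UNet could consume as conditioning.
import torch
import torch.nn as nn

class GarmentEncoder(nn.Module):
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        blocks, in_ch = [], 3
        for out_ch in channels:
            blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                nn.GroupNorm(8, out_ch),
                nn.SiLU(),
            ))
            in_ch = out_ch
        self.blocks = nn.ModuleList(blocks)

    def forward(self, garment):  # garment: (B, 3, H, W)
        feats, x = [], garment
        for block in self.blocks:
            x = block(x)
            feats.append(x)  # one feature map per scale
        return feats

feats = GarmentEncoder()(torch.randn(1, 3, 256, 192))
print([f.shape for f in feats])
```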
🌐
Google
blog.google › products › shopping › virtual-try-on-google-generative-ai
How AI makes virtual try-on more realistic
June 14, 2023 - The result allows you to see what a top looks like on the model of your choice. Our diffusion model sends each image to its own neural network (a U-net) to generate the output: a photorealistic image of the person wearing the garment.
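A toy sketch of that "each image gets its own U-Net" idea: features from a person branch attend to features from a garment branch through cross-attention. This illustrates the mechanism in plain PyTorch and is not Google's TryOnDiffusion implementation.

```python
# Toy sketch of the two-branch idea: person-branch features attend to
# garment-branch features via cross-attention. Illustration only, not
# Google's TryOnDiffusion code.
import torch
import torch.nn as nn

class CrossAttnFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, person_feat, garment_feat):
        # person_feat, garment_feat: (B, HW, dim) flattened U-Net feature maps
        fused, _ = self.attn(query=person_feat, key=garment_feat, value=garment_feat)
        return self.norm(person_feat + fused)  # residual fusion

person = torch.randn(1, 32 * 24, 256)   # features from the person branch
garment = torch.randn(1, 32 * 24, 256)  # features from the garment branch
print(CrossAttnFusion()(person, garment).shape)  # torch.Size([1, 768, 256])
```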
🌐
GitHub
github.com › zengjianhao › CAT-DM
GitHub - zengjianhao/CAT-DM: CAT-DM: Controllable Accelerated Virtual Try-on with Diffusion Model (CVPR 2024)
To enhance the controllability, a basic diffusion-based virtual try-on network is designed, which utilizes ControlNet to introduce additional control conditions and improves the feature extraction of garment images.
Starred by 134 users
Forked by 15 users
Languages: Python 99.8% | Shell 0.2%
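Since the snippet centres on ControlNet as the source of extra control conditions, here is a generic ControlNet-inpainting sketch with Hugging Face diffusers. It uses public Stable Diffusion 1.5 checkpoints rather than CAT-DM's released weights, and the checkpoint IDs may need swapping for currently hosted mirrors; it only shows how a control image enters the pipeline alongside the person image and mask.

```python
# Generic ControlNet-inpainting sketch with diffusers (not CAT-DM's weights),
# showing how an extra control condition is fed alongside the person image
# and inpainting mask. Checkpoint IDs are illustrative and may need to be
# replaced with currently hosted mirrors.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

person = load_image("person.jpg")          # clothes-agnostic person image
mask = load_image("garment_mask.png")      # region to repaint with the garment
control = load_image("control_image.png")  # extra condition, e.g. pose or warped garment

result = pipe(
    prompt="a person wearing the garment",
    image=person,
    mask_image=mask,
    control_image=control,
    num_inference_steps=30,
).images[0]
result.save("tryon.png")
```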
🌐
GitHub
github.com › Zheng-Chong › CatVTON
GitHub - Zheng-Chong/CatVTON: [ICLR 2025] CatVTON is a simple and efficient virtual try-on diffusion model with 1) Lightweight Network (899.06M parameters in total), 2) Parameter-Efficient Training (49.57M trainable parameters) and 3) Simplified Inference (< 8 GB VRAM for 1024×768 resolution).
Starred by 1.6K users
Forked by 196 users
Languages: Python 90.5% | JavaScript 3.3% | Cuda 3.3% | C++ 2.3%
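The parameter-efficient-training claim (about 49.57M of 899.06M parameters trained) follows the common recipe of freezing a pretrained inpainting UNet and unfreezing only a small subset of modules. The sketch below assumes that subset is the self-attention blocks (attn1 in diffusers UNets); CatVTON's actual trainable set may differ, and the checkpoint ID is illustrative.

```python
# Sketch of parameter-efficient fine-tuning in the CatVTON spirit: freeze a
# pretrained inpainting UNet and unfreeze only its self-attention blocks.
# Which modules are unfrozen here is an assumption for illustration; the
# checkpoint ID is illustrative and can be swapped for any hosted SD-1.5
# inpainting UNet.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-inpainting", subfolder="unet"
)

for p in unet.parameters():
    p.requires_grad_(False)            # freeze everything first

for name, module in unet.named_modules():
    if name.endswith("attn1"):         # self-attention blocks in diffusers UNets
        for p in module.parameters():
            p.requires_grad_(True)     # unfreeze only this subset

total = sum(p.numel() for p in unet.parameters())
trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
print(f"trainable {trainable / 1e6:.2f}M of {total / 1e6:.2f}M parameters")

optimizer = torch.optim.AdamW(
    [p for p in unet.parameters() if p.requires_grad], lr=1e-5
)
```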
🌐
ACM Digital Library
dl.acm.org › doi › 10.1145 › 3581783.3612255
Taming the Power of Diffusion Models for High-Quality Virtual Try-On with Appearance Flow | Proceedings of the 31st ACM International Conference on Multimedia
The warping module performs initial processing on the clothes, which helps to preserve their local details. We then combine the warped clothes with the clothes-agnostic person image and add noise to form the input of the diffusion model. Additionally, the warped clothes are used as a local condition at each denoising step to ensure that the resulting output retains as much detail as possible. Our approach, namely Diffusion-based Conditional Inpainting for Virtual Try-ON (DCI-VTON), effectively utilizes the power of the diffusion model, and the incorporation of the warping module helps to produce high-quality and realistic virtual try-on results.
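The abstract above describes a concrete flow: blend the warped garment into the clothes-agnostic person image, noise the blend, and keep re-injecting the warped garment as a local condition at every denoising step. The toy loop below illustrates that flow with a placeholder denoiser and a simplified DDPM-style schedule; it is not DCI-VTON's code.

```python
# Toy illustration of the flow described above: blend the warped garment into
# the clothes-agnostic person image, add noise, then condition every denoising
# step on the warped garment. The denoiser is a placeholder, not DCI-VTON's.
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    def __init__(self, ch=7):  # 3 (noisy image) + 3 (warped garment) + 1 (mask)
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, 64, 3, padding=1), nn.SiLU(),
                                 nn.Conv2d(64, 3, 3, padding=1))

    def forward(self, x_t, warped, mask, t):
        # t is ignored by this toy network; a real model embeds the timestep
        return self.net(torch.cat([x_t, warped, mask], dim=1))  # predicts noise

T = 50
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

person_agnostic = torch.rand(1, 3, 256, 192)  # person with garment region removed
warped_garment = torch.rand(1, 3, 256, 192)   # output of the warping module
mask = torch.zeros(1, 1, 256, 192); mask[..., 64:192, 48:144] = 1.0

blend = person_agnostic * (1 - mask) + warped_garment * mask  # coarse try-on
x_t = alphas_bar[-1].sqrt() * blend + (1 - alphas_bar[-1]).sqrt() * torch.randn_like(blend)

denoiser = ToyDenoiser()
for t in reversed(range(T)):                     # simplified DDPM-style reverse loop
    eps = denoiser(x_t, warped_garment, mask, t)  # garment conditions every step
    x0_hat = (x_t - (1 - alphas_bar[t]).sqrt() * eps) / alphas_bar[t].sqrt()
    if t > 0:
        noise = torch.randn_like(x_t)
        x_t = alphas_bar[t - 1].sqrt() * x0_hat + (1 - alphas_bar[t - 1]).sqrt() * noise
    else:
        x_t = x0_hat                              # final try-on estimate
```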
🌐
GitHub
github.com › fashn-AI › tryondiffusion
GitHub - fashn-AI/tryondiffusion: PyTorch implementation of "TryOnDiffusion: A Tale of Two UNets", a virtual try-on diffusion-based network by Google
Starred by 360 users
Forked by 52 users
Languages: Python
🌐
Rlawjdghek
rlawjdghek.github.io › StableVITON
StableVITON
Given a clothing image and a person image, an image-based virtual try-on aims to generate a customized image that appears natural and accurately reflects the characteristics of the clothing image. In this work, we aim to expand the applicability of the pre-trained diffusion model so that it ...
🌐
arXiv
arxiv.org › abs › 2404.17364
[2404.17364] MV-VTON: Multi-View Virtual Try-On with Diffusion Models
January 5, 2025 - To address this challenge, we introduce Multi-View Virtual Try-ON (MV-VTON), which aims to reconstruct the dressing results from multiple views using the given clothes. Given that single-view clothes provide insufficient information for MV-VTON, ...
🌐
GitHub
github.com › Zheng-Chong › Awesome-Try-On-Models
GitHub - Zheng-Chong/Awesome-Try-On-Models: A repository for organizing papers, codes and other resources related to Virtual Try-on Models
The project is ongoing, and we welcome contributions in any form to help improve and expand it. If you're interested in VTON or find this repo helpful, please 🌟 star and 👀 watch it! [2025-10-03] DiT-VTON: Diffusion Transformer Framework for Unified Multi-Category Virtual Try-On and Virtual Try-All with Integrated Image Editing (arXiv)
Starred by 361 users
Forked by 19 users
🌐
TheCVF
openaccess.thecvf.com › content › CVPR2024 › papers › Zeng_CAT-DM_Controllable_Accelerated_Virtual_Try-on_with_Diffusion_Model_CVPR_2024_paper.pdf
Controllable Accelerated Virtual Try-on with Diffusion Model
🌐
arXiv
arxiv.org › html › 2501.16757v2
ITVTON: Virtual Try-On Diffusion Transformer Based on Integrated Image and Text
March 15, 2025 - Virtual try-on, which aims to seamlessly fit garments onto person images, has recently seen significant progress with diffusion-based models. However, existing methods commonly resort to duplicated backbones or additional image encoders to extract garment features, which increases computational ...
🌐
Segmind
segmind.com › pixelflows › virtual-try-on-tryon-diffusion
Virtual Try On with Try-On Diffusion - Pixelflow | Segmind
Virtual Try-On Using Try-On Diffusion for Upper Body Clothing. Virtual Try-On using Try-On Diffusion is an advanced technique that allows users to visualize how clothing will fit and look on them without physically wearing it.
🌐
Medium
medium.com › tryon-labs › essential-virtual-try-on-research-papers-for-machine-learning-engineers-772224febf8d
Essential Virtual Try-On Research Papers For Machine Learning Engineers | by Kailash Ahirwar | TryOn Labs | Medium
🌐
ScienceDirect
sciencedirect.com › science › article › abs › pii › S0893608025004319
Enhancing image-based virtual try-on with Multi-Controlled Diffusion Models - ScienceDirect
May 23, 2025 - To address this challenge, we introduce ... (MCDM-VTON), a novel approach that synergistically incorporates global image features and local textual features extracted from garments to control the generation process....