TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation [ECCV 2024]

Yufei Liu1, Junwei Zhu2, Junshu Tang3, Shijie Zhang4, Jiangning Zhang2, Weijian Cao2, Chengjie Wang2, Yunsheng Wu2, Dongjin Huang1*

1Shanghai University, 2Tencent Youtu Lab, 3Shanghai Jiao Tong University, 4Fudan University

Updates

[07/2024] TexDreamer is accepted to ECCV 2024!

Installation

We recommend using Anaconda to manage the Python environment. The setup commands below are provided for reference.

git clone https://github.com/ggxxii/texdreamer.git
cd texdreamer
conda create -n texdreamer python=3.8
conda activate texdreamer
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -r requirements.txt

Please also install xformers by following the instructions at https://github.com/facebookresearch/xformers.
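
After installation, a quick sanity check (a minimal sketch, not part of the repository, assuming the versions pinned above) can confirm that PyTorch sees your GPU and that xformers imports correctly:

# Environment sanity check (sketch only).
import torch

print("torch:", torch.__version__)              # expect 2.0.0
print("CUDA available:", torch.cuda.is_available())

try:
    import xformers
    print("xformers:", xformers.__version__)
except ImportError:
    print("xformers is not installed; see the link above.")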

Data Preparation

Download the trained TexDreamer models

You can find our model .zip on Hugging Face. Put the downloaded models in the folder texdreamer_u128_t16_origin. The folder structure should look like:

./
├── ...
└── texdreamer_u128_t16_origin/
    ├── i2t
    │   ├── i2t_decoder.pth
    │   └── SMPL_NEUTRAL.pkl
    ├── i2uv
    │   ├── vision_encoder
    │   │   ├── config.json
    │   │   └── pytorch_model.bin
    │   └── i2t_decoder.pth
    ├── text_encoder
    │   ├── adapter_config.json
    │   └── adapter_model.bin
    └── unet
        ├── adapter_config.json
        └── adapter_model.bin

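As a sanity check (a minimal sketch, not part of the repository), you can verify that the unzipped files landed in the expected places:

from pathlib import Path

# Files expected under texdreamer_u128_t16_origin/ (taken from the tree above).
EXPECTED = [
    "i2t/i2t_decoder.pth",
    "i2t/SMPL_NEUTRAL.pkl",
    "i2uv/vision_encoder/config.json",
    "i2uv/vision_encoder/pytorch_model.bin",
    "i2uv/i2t_decoder.pth",
    "text_encoder/adapter_config.json",
    "text_encoder/adapter_model.bin",
    "unet/adapter_config.json",
    "unet/adapter_model.bin",
]

root = Path("texdreamer_u128_t16_origin")
missing = [f for f in EXPECTED if not (root / f).is_file()]
print("all model files found" if not missing else f"missing: {missing}")
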
Generate Human Texture from Text

From input .txt file

We provide a .txt file with 6 sample prompts in data/sample_prompts.txt; the corresponding sample generation results are in output/t2uv.

python infer_t2uv.py --lora_path texdreamer_u128_t16_origin --save_path output/t2uv --test_list data/sample_prompts.txt

Since we load stabilityai/stable-diffusion-2-1 from local files, you may need to download it first and change 'cache_dir' in the function 'get_lora_sd_pipeline'.
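
One way to fetch the base model ahead of time is huggingface_hub. This is a sketch under the assumption that the directory you pass matches the 'cache_dir' used in 'get_lora_sd_pipeline'; the path below is only an example:

from huggingface_hub import snapshot_download

# Download stabilityai/stable-diffusion-2-1 into a local cache directory.
# "./sd21_cache" is an example path; point 'cache_dir' in get_lora_sd_pipeline
# to the same location.
local_path = snapshot_download(
    repo_id="stabilityai/stable-diffusion-2-1",
    cache_dir="./sd21_cache",
)
print("model downloaded to:", local_path)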

Generate Human Texture from Image

From input image folder

We provide some sample images from the Market-1501 dataset in data/input; the corresponding sample generation results are in output/i2uv.

You can also use your own images.

python infer_i2uv.py --lora_path texdreamer_u128_t16_origin --save_path output/i2uv --test_folder data/input
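
If you use your own photos, a small helper like the one below (a hypothetical sketch, not part of the repository) can gather them into an input folder and make sure they are readable RGB images before you run infer_i2uv.py:

from pathlib import Path
from PIL import Image

def collect_inputs(src_dir: str, dst_dir: str) -> None:
    """Copy readable images from src_dir into dst_dir as RGB .png files."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(src.iterdir()):
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        try:
            img = Image.open(path).convert("RGB")
        except OSError:
            print(f"skipping unreadable file: {path.name}")
            continue
        img.save(dst / f"{path.stem}.png")

# Example (paths are illustrative):
#   collect_inputs("my_photos", "data/my_input")
#   python infer_i2uv.py --lora_path texdreamer_u128_t16_origin --save_path output/i2uv --test_folder data/my_input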

Citation

If you find our work useful for your research, please consider citing the paper:

@misc{liu2024texdreamer,
      title={TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation}, 
      author={Yufei Liu and Junwei Zhu and Junshu Tang and Shijie Zhang and Jiangning Zhang and Weijian Cao and Chengjie Wang and Yunsheng Wu and Dongjin Huang},
      year={2024},
      eprint={2403.12906},
      archivePrefix={arXiv},
      primaryClass={cs.CV}}
