Run the DALL-E Mini model in the terminal.
Download the latest DALL-E Mini model artifacts to a local artifacts folder. See W&B for these downloads. The mega model is tagged as mega-1-fp16, while the mini model is tagged as mini-1.
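If you prefer to script the download, the wandb command-line tool can fetch artifacts directly. This is a minimal sketch, assuming the wandb CLI is installed and authenticated; the dalle-mini/dalle-mini/mini-1:v0 artifact path follows the project's W&B layout, so confirm the exact entity, project, and version on the artifact page before running:

# Fetch the mini model into a local folder (the path and version are
# assumptions; check the W&B artifact page for the exact values).
wandb artifact get dalle-mini/dalle-mini/mini-1:v0 --root ./mini-1_v0_artifacts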
Run python -m dalle_mini_terminal -a path/to/artifacts -- avocado toast
It will take a while though, even with the mini model.
$ time (source .venv/bin/activate; python -m dalle_mini_terminal --artifacts ./mini-1_v0_artifacts -- cats playing chess)
[...]
real 79m59.554s
user 85m35.281s
sys 0m17.885s
Install the cuda and cudnn packages. You will probably also need to reboot to load the kernel modules.
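On Arch Linux that looks something like the following; the package names are current as of writing, and Debian-based systems ship the toolkit as nvidia-cuda-toolkit instead:

# Install the CUDA toolkit and cuDNN from the Arch repositories.
sudo pacman -S cuda cudnn
# After rebooting, confirm the driver and kernel module are loaded.
nvidia-smi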
Arch Linux packages install the headers and binaries under /opt. Debian-based distributions often use /usr/lib. The underlying Python libraries assume that the cuda toolchain lives in /usr/local/cuda-MAJOR.MINOR.
So if you are using Arch Linux, you also need to run:
sudo ln -s /opt/cuda /usr/local/cuda-11.7
sudo ln -s /usr/include/cudnn*.h /usr/local/cuda-11.7/include
sudo ln -s /usr/lib/libcudnn*.so /usr/local/cuda-11.7/lib64/
sudo ln -s /usr/lib/libcudnn*.a /usr/local/cuda-11.7/lib64/
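After creating the links, it is worth a quick sanity check that JAX can actually see the GPU (this assumes jax and a CUDA-enabled jaxlib are installed in the active virtualenv):

# Should list a GPU device rather than only CPU devices.
python -c 'import jax; print(jax.devices())'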
It will eat a lot of VRAM though; far more than my measly GTX 950 has to offer.
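If the model does not fit in VRAM, one workaround is to pin JAX to the CPU for the run, which is roughly what produces the wall-clock times shown above. JAX_PLATFORM_NAME is the long-standing environment variable for this; newer JAX releases also accept JAX_PLATFORMS:

# Force JAX onto the CPU: slow, but avoids exhausting VRAM.
JAX_PLATFORM_NAME=cpu python -m dalle_mini_terminal --artifacts ./mini-1_v0_artifacts -- cats playing chess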
This is all derivative of the IPython/Jupyter notebook hosted at https://github.com/borisdayma/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb. As such, I have reproduced the original license in this repository (see LICENSE.txt). The work is licensed under Apache 2.0.
See a list of the model's authors here.
Cite the model as:
@misc{Dayma_DALL·E_Mini_2021,
  author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata},
  doi = {10.5281/zenodo.5146400},
  month = {7},
  title = {DALL·E Mini},
  url = {https://github.com/borisdayma/dalle-mini},
  year = {2021}
}
Images generated by the model are one of: