Run the DALL-E Mini model in a terminal.
I've taken the upstream project's inference pipeline notebook and reimplemented it as a normal Python module. It currently does not perform the 'optional' CLIP scoring and sorting. And obviously this module runs headlessly; generated images are saved locally.
The project can be set up (into a virtual environment holding all dependencies) by running `make install`.
Download the latest DALL-E Mini model artifacts to a local artifacts folder; see W&B for these downloads. The mega model is tagged as `mega-1-fp16`, while the mini model is tagged as `mini-1`.
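One way to fetch these is with the `wandb` Python API (requires `pip install wandb` and a prior `wandb login`). Note that the `dalle-mini/dalle-mini/<tag>:<version>` artifact path and the `v0` version below are my guesses at the W&B coordinates, not something this README pins down; only the local folder naming (`mini-1_v0_artifacts`) comes from the usage shown further down.

```python
def artifact_dir(tag: str, version: str) -> str:
    """Build the local folder name, matching e.g. ./mini-1_v0_artifacts."""
    return f"{tag}_{version}_artifacts"


def download_artifacts(tag: str = "mini-1", version: str = "v0") -> str:
    """Download <tag>:<version> from W&B; return the local folder path.

    The artifact path below is an assumption about the W&B project layout.
    """
    import wandb  # third-party; needs `wandb login` beforehand

    art = wandb.Api().artifact(f"dalle-mini/dalle-mini/{tag}:{version}")
    root = artifact_dir(tag, version)
    art.download(root=root)
    return root
```

If the coordinates are right, `download_artifacts()` would leave the mini model under `./mini-1_v0_artifacts`, ready to pass to `--artifacts`.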
Try running with:

```shell
python -m dalle_mini_terminal -a path/to/artifacts -- avocado toast
```

It will take a while though, even with the mini model. For example:
```shell
$ time (source .venv/bin/activate; python -m dalle_mini_terminal --artifacts ./mini-1_v0_artifacts -- cats playing chess)
[...]

real	79m59.554s
user	85m35.281s
sys	0m17.885s
```
To run on a GPU, install the proprietary NVIDIA driver, as well as the `cuda` and `cudnn` packages. A reboot is likely also necessary, in order to load the kernel modules.

On Arch Linux, the cuda libraries and binaries install into `/opt`, while the cudnn libraries install into `/usr/lib`. Debian-based distributions often use `/usr/lib` for both cuda and cudnn.
The underlying Python modules assume that the entire toolchain lives in `/usr/local/cuda-MAJOR.MINOR`.
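A small stdlib helper can make that assumption explicit by probing the expected location before falling back to the Arch default (both paths come from the notes above; `find_cuda_root` itself is a hypothetical name, not part of this project):

```python
from pathlib import Path


def find_cuda_root(candidates=("/usr/local/cuda-11.7", "/opt/cuda")):
    """Return the first existing CUDA toolchain directory, or None.

    The default candidates mirror the notes above: first the path the
    Python modules expect, then the Arch Linux install location.
    """
    for candidate in candidates:
        path = Path(candidate)
        if path.is_dir():
            return path
    return None
```

If this returns `None` or only finds `/opt/cuda`, the symlink step below is still needed.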
In other words, if using Arch Linux, it's also necessary to run:

```shell
sudo ln -s /opt/cuda /usr/local/cuda-11.7
sudo ln -s /usr/include/cudnn*.h /usr/local/cuda-11.7/include
sudo ln -s /usr/lib/libcudnn*.so /usr/local/cuda-11.7/lib64/
sudo ln -s /usr/lib/libcudnn*.a /usr/local/cuda-11.7/lib64/
```
The project can then be set up with `make install-cuda`.
Running will eat a lot of VRAM though;
far more than my measly GTX 950 has to offer.
Remaining tasks and open questions:

- Move helpers (e.g. `print_version`) out of `__main__.py` into an `internals.py` file.
- Why `flax.jax_utils.replicate`? It doesn't do anything for a CPU workload.
- Why parallelize (via `jax.pmap`) when there is just one compute unit? (i.e. `jax.device_count() == 1`)
- Rework the dependencies in `pyproject.toml`, so that this project can be `pipx`-installable both with and without cuda.
- Determine whether `mypy` is an option with this dependency chain.

This is all derivative of the iPython/Jupyter notebook hosted at https://github.com/borisdayma/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb. As such, I have reproduced the original license in this repository (see `LICENSE.txt`). The work is licensed under Apache 2.
See a list of the model's authors here.
Cite the model as:
```bibtex
@misc{Dayma_DALL·E_Mini_2021,
  author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata},
  doi = {10.5281/zenodo.5146400},
  month = {7},
  title = {DALL·E Mini},
  url = {https://github.com/borisdayma/dalle-mini},
  year = {2021}
}
```
Images generated by the model are one of: