A collection of experiments with Generative Artificial Intelligence. I wanted to explore how to create moving images with AI generators designed for still-image generation.
When Google released DeepDream in 2015, I first became interested in neural networks that can alter existing images or even generate new ones. I later experimented with software such as Nvidia's GauGAN, which turns simple drawings into landscapes, and with GAN-based approaches like VQGAN+CLIP that generate images from text prompts alone.
In 2022 I got early access to OpenAI's DALL·E text-to-image model. The generator was incredible, and after creating a few images I wanted to use it to make a video.
I used its inpainting feature to generate the missing parts of an image and then used the output as the input for the next frame. For each step I scaled the image down in Photoshop and saved every intermediate frame. I then stitched them all together in After Effects, masking each frame to create a long zooming motion.
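Roughly, each step looks like this minimal sketch in Python (Pillow assumed; the canvas size and zoom factor are illustrative placeholders). It automates what I did by hand in Photoshop: shrink the last frame, paste it centered on a fresh canvas, and build a mask marking the empty border for an inpainting model to fill.

```python
# Sketch of one recursive zoom step, assuming Pillow is installed.
# CANVAS and ZOOM are placeholder values, not the exact ones I used.
from PIL import Image

CANVAS = 1024   # output resolution of the inpainting model (assumed)
ZOOM = 0.75     # how far each frame shrinks inside the next one

def prepare_zoom_step(last_frame: Image.Image):
    # Shrink the previous frame and center it on a blank canvas.
    inner = last_frame.resize((int(CANVAS * ZOOM),) * 2, Image.LANCZOS)
    offset = (CANVAS - inner.width) // 2
    canvas = Image.new("RGB", (CANVAS, CANVAS))
    canvas.paste(inner, (offset, offset))

    # Mask: white = border the model should repaint, black = area to keep.
    mask = Image.new("L", (CANVAS, CANVAS), 255)
    mask.paste(0, (offset, offset, offset + inner.width, offset + inner.height))
    return canvas, mask
```

The canvas and mask then go to the inpainting model, and its output becomes `last_frame` for the next step.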
For me, the problem with image generators like DALL·E or Midjourney is that they are only accessible through an API. Stability AI, however, released the code and model weights for "Stable Diffusion" publicly, so it can run on my own computer. That way I was able to experiment much more freely.
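As a rough sketch, running Stable Diffusion locally can look like this with Hugging Face's diffusers library (one possible runner among several; the model id and prompt are just examples):

```python
# Minimal local text-to-image sketch using the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint, not an endorsement
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # runs on a single consumer GPU

image = pipe("a surreal endless corridor, oil painting").images[0]
image.save("frame_0000.png")
```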