Text prompt: "It's like that drug trip I saw in that movie while I was on a drug trip."

First, an intro to VQGAN and CLIP

Feel free to jump straight to method 1 or 2 if you're just here for the tutorial.

VQGAN and CLIP are actually two separate machine learning algorithms that can be used together to generate images based on a text prompt. VQGAN is a generative adversarial neural network that is good at generating images that look similar to others (but not from a prompt), and CLIP is a neural network that is able to determine how well a caption (or prompt) matches an image.

The two algorithms were combined in various forms by AI-generated-art enthusiasts like Ryan Murdock and Katherine Crowson. Their implementations of VQGAN+CLIP were made public on Google Colab, meaning anyone could run the code to generate their own art. This soon resulted in a viral explosion of people using the technique to create incredible artworks and sharing them on platforms like Twitter and Reddit. Read on to find out how to do it yourself… Remember, no coding required!

Text prompt: "A colourful cubist painting of a parrot in a cage"

Note: Google Colab is designed primarily to be accessed from a computer. If you're on your phone, you should probably skip to Method 2. If at any time you feel that Colab is too complicated, jump straight to Method 2.

Google Colaboratory (usually referred to as Colab) is a cloud-based programming environment that allows you to run Python code on servers that have access to GPUs (fast processors originally created for graphics). That last part is important because VQGAN+CLIP (and machine learning in general) takes a lot of processing power. So much so that it's impractical to run it on a CPU.

When Katherine Crowson first combined VQGAN and CLIP, she made it public in a Google Colab notebook (a notebook is the name for a program written in Colab) so that anyone could use it. Her original implementation has been copied and tweaked many times since then, so there are lots of different versions that you can use. Here's a list compiled by Reddit user u/Wiskkey. For this tutorial, we'll be using this version (go ahead and open it in a new tab).

It will be helpful for you to understand a bit about how Google Colab works in general. Remember, Colab is a general-purpose online programming environment; it's not made specifically for making AI art, so there are some things that might seem unnecessary, and the interface is a bit confusing for newcomers.

Colab notebooks are made up of "cells". Each cell runs a block of code and can have a text description. Once the programmer has written the code, they can hide it and just show the text description of what the cell does. You can execute the code in a cell by clicking on the "Play" icon. So the way that you run a Colab notebook is by running each cell, one after another.

The notebook that we're using has 9 cells. Follow along with the instructions below to complete your first run. Each instruction is for a single cell, so there are 9 instructions.

The first cell doesn't do anything, and you don't actually have to run it.

A code cell with a single command: !nvidia-smi. Running this cell just gives you information about the GPU that Colab has assigned to you.

A code cell that starts with !git clone. This cell downloads and installs some external code packages (like CLIP and the VQGAN code) that the rest of the cells depend on. You have to run this one, but only once per session. This cell will take a while to execute because it's downloading a lot of code.
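If you're curious how the VQGAN-and-CLIP division of labour actually plays out, here is a deliberately toy sketch of the feedback loop. None of this is the notebook's real code: `decode` and `clip_score` are simple stand-in functions for the two neural networks, and the "prompt" is just a target vector. The shape of the loop is the real idea, though: a generator turns a latent vector into an image, a scorer rates how well it matches the prompt, and the latent is nudged uphill on that score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for VQGAN: decode a latent vector into an "image".
# (The real VQGAN is a learned neural decoder; tanh is just a toy.)
def decode(z):
    return np.tanh(z)

# Stand-in for CLIP: score how well an image matches the prompt.
# Here the "prompt" is a fixed target vector and the score is the
# negative squared distance to it; real CLIP embeds text and images
# into a shared space and measures their similarity.
target = rng.normal(size=8)

def clip_score(image):
    return -np.sum((image - target) ** 2)

# The VQGAN+CLIP loop: repeatedly adjust the latent so the decoded
# image scores higher (numerical gradient here; the real notebooks
# use backpropagation through both networks).
z = rng.normal(size=8)
lr, eps = 0.1, 1e-4
for _ in range(200):
    grad = np.zeros_like(z)
    for i in range(len(z)):
        bump = np.zeros_like(z)
        bump[i] = eps
        grad[i] = (clip_score(decode(z + bump)) - clip_score(decode(z - bump))) / (2 * eps)
    z += lr * grad  # gradient ascent on the match score

print("final score:", round(float(clip_score(decode(z))), 3))
```

After a couple of hundred iterations the decoded "image" matches the "prompt" far better than the random starting point did, which is exactly what the real notebooks do at a much larger scale: each step of the run nudges the generated picture closer to something CLIP thinks matches your text.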