I’ve been playing with my new hobby: generative AI art.
I’m sure I’ll have more to say about this in the coming weeks; I’m going to be doing a deep dive into the machine learning models and applications over the next week or so. I don’t think most people really understand the impact these natural language prompt systems are going to have on us over the next months and years. Some artists understand, yes, but tools like DALL·E, Stable Diffusion, Midjourney, and GPT have gotten much better over the last year or two, and their quality (though not their tooling) has reached commercial and production grade.
Some of these models have already been put into use for video upscaling. Think VHS to 4k conversions, for example. And some of the things being done with virtual humans… zoinks.
There’s a lot of music generation going on that’s just scary. There are many companies out there offering song generation on demand. I’ve got several bookmarked that offer varying levels of customization for prompting your own house music track, for example. I’ll be combining one of those with the SD video I rendered last night.
The potential of these models for brainstorming is immense, even overwhelming. I was working on a new song last week when I started messing around with these audio platforms.
It almost makes artistry obsolete. Coming up with ideas is trivial now: just push the button and you’ve got something unique. Using a generated result as a starting point and mixing in various prompted outputs, manually and automatically, is going to be an art form of its own within a decade, maybe even next year.