You might have noticed the images in my last two emails were different than usual. I never planned for this, I must admit. Midjourney, the AI tool I’ve been using to create them, just released version 5 last week, and I’ve spent most of that time trying to figure out what to do with it. Many hours of experimentation and forum threads later, I still have no idea how to consistently achieve the results I’m looking for.
As far as I understand, the previous Midjourney version was designed to make things look good by default. You could type in a single word or even a single emoji and get something visually interesting. Now it’s trying to be more realistic and more true to what you asked for. If you don’t spell it out in the right sequence of words, it won’t be there.
Early on in my adventure with Midjourney I discovered a magic spell. Here it is: “art nouveau stained glass window”. These words produced beautiful tangled botanical frames and ornaments which I could combine with almost any scene or subject that I was trying to depict.
Unfortunately, when I utter the same magic spell now, Midjourney gives me actual stained glass windows. They’re probably much more historically accurate, but they have none of the elegance, playfulness, and flow. The old version had a tendency to make things organically grow out of each other, which made for quite poetic and artistic depictions of nature, beautiful and ornamental stained glass windows that couldn’t possibly exist without breaking the laws of physics, and a lot of uncanny body horror, with displaced, missing, or extra fingers and limbs. The new version is much better at drawing realistic body parts, but it completely killed my one simple magic trick.
You might be wondering why a site called Grandmotherly Wisdom is writing about the latest software update for an AI tool. Well, I’m sure your grandma would have used the AI, but that’s kinda beside the point. My main takeaway here is that I’ve been given access to a truly marvellous tool that would have completely blown the minds of the greatest artists of the past, and I’ve employed it to play the same trick over and over again. I never dared to explore what’s beyond my few favorite themes, and if it weren’t for this software update, I probably never would have. My husband says the same thing about large language models: people are so quick to employ them for a simple and narrowly defined job without even trying to figure out what they’re capable of.
To be fair, they’re doing it to themselves too. It’s called having a career, especially one started at a big org fresh out of college.
I’ve written here before about trading novelty for belonging, but those were all things I had a lot of exposure to. I had enough time and experience to develop a clear sense of what I like, usually over many years. I can order the same cappuccino over and over again because I’ve tried coffees made in a hundred different ways and learned from personal experience what consistently gives me the most joy. I can always go to the same hairdresser and ask for pretty much the same thing because I’ve tried red, orange, purple, black, blue, and ash blonde hair colors with different cuts and learned that none of them fit me as well. With AI image bots and large language models, though, I haven’t even scratched the surface of what’s possible. How could I commit to a single theme this early on without having the faintest idea of what else is out there?
In theory, I could keep using version 4 and stay in my cozy garden of art nouveau fake stained glass windows, but for how long? AI tools keep evolving at an incredible pace, with previously unbelievable changes released every few months. I don’t want to limit myself to the few themes I can currently produce without messing up everyone’s fingers and limbs too much when the new versions will be capable of so much more. Sooner or later, I’ll have to figure it out anyway.
Of course there are practical constraints to this creative journey. Exploring what’s possible takes time, and there isn’t much of it when taking care of a sick kid at home. Fast AI processing hours are costly, and the slow mode means a lot of waiting from one iteration to the next. My previous workflow was simple: within 10 minutes I knew whether my first idea for a picture was going to work out or whether I should come up with an easier concept. Now I have no idea what I’m doing, but I’m more excited than ever to find out.
Will I discover a way to make new images that look similar to my old ones? Hopefully not; I’d rather use them as inspiration for something much more mature and touching. For now, I’ll keep playing around and experimenting with different styles - not just in Midjourney, but also in my writing, parenting, and even cooking and homemaking. There’s such a vast universe of creative possibilities for us here to explore.