Midjourney Inpainting vs. Generative Fill – Which one is better and is it justified to keep Photoshop around just to use Generative Fill? I mean, who would want to pay for both if they don’t have to, right?
That’s the topic we’re going to explore today at Run The Prompts. I’m going to give you real image examples of what both tools can do via a hardcore apples-to-apples comparison that may make you go bananas.
Here’s some background for you.
In May of 2023, Adobe took the design world by storm with the release of "Generative Fill", a cutting-edge feature that allowed users to produce incredible variations of their images via generative AI, driven by simple text prompts.
The feature was so cool, in fact, that yours truly subscribed to Adobe Photoshop again because I was hooked pretty much instantly. Outside of ChatGPT and Midjourney, Generative Fill was my third addiction. I consider it to be the best thing to ever happen to Photoshop.
Midjourney Inpainting (AKA “Vary (Region)”) was introduced in August 2023 as essentially Midjourney’s answer to Generative Fill.
The feature quickly blew up and became a fierce competitor to the Photoshop feature.
This article is not going to teach you how to use Midjourney Inpainting (Vary (Region)). It’s pretty straightforward, but if I get enough people asking me to make a tutorial, I will.
Instead, it’s going to show you which one is better: Midjourney Inpainting vs. Generative Fill.
I also want to be very clear about something: this test is ONLY comparing the generative images that the tools produce. Photoshop can undoubtedly outperform Inpainting when it comes to modifying an existing image. Selecting the area of the image you want to modify is also much better and easier in Photoshop. There is no comparison. Photoshop has been around for over 30 years. Midjourney is barely a year old. Therefore, those aspects are not part of this analysis.
To make this a fair fight, I used the same prompt for each tool, on the same base image, in the same spot.
All of the base images were created with Midjourney. After producing the Midjourney images, I used Generative Fill and Inpainting to edit them.
I didn’t consider an image to be a success unless it was a fairly accurate representation of what I was trying to create.
In some cases, the tool was simply incapable of producing what I wanted, so I took the best variant I could get on the final prompt. As much as I love you, I wasn’t going to sit there and hit Midjourney or Photoshop with a sledgehammer like an angry monkey until it magically cooperated.
So…can little Midjourney take on a multi-billion dollar company (Adobe)? Let’s find out.
AI Test #1: The Gentrification of Downtown Detroit in 1990
Detroit is a sh*t hole.
I can say that because I grew up 40 minutes from it.
So as the first test, we’re going to see if either tool can help clean things up a bit.
Here’s an image of Downtown Detroit in 1990 according to Midjourney. Very accurate!
We’re going to add an extremely expensive sports car to the front of the photo and expensive office buildings on the right-hand side.
Midjourney Inpainting Crushes It – 1st Try
On the first prompt, Midjourney did a great job. The sports car’s vibrancy and overall intensity may be a bit off, but other than that, I’m really impressed with the result.
Generative Fill Prefers The Dirt – 6th Try
It took six prompts to produce this image. The car isn’t facing the right direction, but then again, look at the condition of the street. So I guess it’s fine.
The buildings look okay, but Midjourney still took the cake.
Winner: Midjourney Inpainting (Score: 1-0)
AI Test #2: The Teen Who Needs Braces
This really nice young man needs braces but can't afford them.
So naturally, I used Midjourney and Photoshop to see if I could help him out.
By the way, I used the keyword “blacklight” in my Midjourney prompt to produce this image. Cool, right?
Inpainting Can’t Do It – 3rd Try
After three tries (which equals 12 variants), Midjourney couldn’t give him braces. The reason is that Inpainting typically can’t handle small areas like that.
Per Midjourney’s documentation: “The size of your selection affects the outcome. Larger selections provide the Midjourney Bot with more contextual information, which can improve the scaling and context of new additions.”
Generative Fill Gives Him Braces (Kind of!) – 1st Try
Generative Fill sort of worked here. The braces are far from perfect and the kid's gumline now looks more horse than human, but at least it kind of worked.
The lesson here is that you need to use Generative Fill if you want to make small edits like this. Things like necklaces, watches, mustaches, etc., all pretty much require Generative Fill for now due to their small size. However, given the rate of change in the world of AI, that could change soon.
Maybe in a year from now, the next battle of Inpainting vs. Generative Fill will show something different.
Winner: Photoshop Generative Fill (Score: 1-1)
AI Test #3: Floating Through Antarctica
Antarctica is a beautiful place. All it's lacking is a thriving tourism industry with yachts, so I turned to Photoshop and Midjourney for help.
Clearly, the only thing missing here is a yacht. So let’s add one.
Midjourney Inpainting Floats the Boat – 1st Try
Truly incredible and realistic. The reflection, shadows, lighting, and nearly everything else all look amazing.
Inpainting clearly allows you to combine two specific elements in a natural and easy way. Trying to pull off this image in ONE prompt would not result in the same level of accuracy. I tried. That’s where Inpainting shines.
Generative Fills in The Lake with a Boat – 1st Try
It worked but wasn't up to par with Midjourney. The reflection isn't as accurate and the boat isn't as majestic. It's the poor man's version of the Midjourney yacht.
Also, the boat is coming back to shore. How much fun could they really have been having on that thing?
Winner: Midjourney Inpainting (Score: 2-1)
AI Test #4: She’s Turning Japanese (I Really Think So)
Sometimes people are Asian, and other times they are not.
So today, we’re going to see if we can turn a 25-year-old Caucasian girl into a middle-aged Asian woman.
We’re going to do that by selecting her face and using the prompt “middle-aged Asian woman”.
I don’t think she’ll mind.
Inpainting Did a Pretty Good Job – 1st Try
Her skin doesn’t look totally natural, but other than that, this is more than passable and surprisingly good.
Generative Fill Makes Being Old Look Old AF – 1st Try
NO WAY is this woman middle-aged. If this is 40, what the hell does Photoshop think 100 looks like?
I kept trying to prompt it over and over, and the best result was actually the first, so I marked this as the first try.
Winner: Midjourney Inpainting (Score: 3-1)
AI Test #5: Horses are the Future of Transportation
For the last test, we’re going to place a horse and buggy into an image of a futuristic city. It will go in place of the car on the left-hand side of the image.
Let’s do it.
Midjourney Inpainting is a Stallion – 1st Try
Looks good, with the exception of the horns.
Photoshop Generative Fill is the Horse to Bet on, but Barely – 1st Try
This one is kind of subjective, and it was close, but the Photoshop version looks a bit more natural.
Winner: Generative Fill (Score: 3-2)
Inpainting vs. Generative Fill – The Verdict
In this independent test, the winner is…
Midjourney Inpainting actually beat Photoshop Generative Fill. I am honestly in shock.
Yes, it was a small test. Yes, it was just me doing the testing. But still. This is crazy.
Now to be fair, we’re only talking about ONE feature, and Adobe has plenty of other products and Photoshop features to worry about, but this is still quite an accomplishment for Midjourney. It shows their strength, despite their small stature (for now).
Hold on to your seats everyone, because Adobe has some fierce competition.
What about you? Did you do some testing on your own? Leave a comment below and let me know which tool came out on top. Also, be sure to check us out on social media. Prompt the planet!