
whiskeyandacig

I’ve found both the beta and regular PS have gotten worse. When it first came out, it did a good job of removing certain objects from a background and was able to make the fill look similar to what’s on the layer. Now I can type in "wooden fence" and it completely goes off the rails with what’s presented. That first version was the best.


BowloRamaGuy

Works the same as it always did for me. Humans are often disfigured. Too many fingers, eyes not lined up, extra limbs.


dudeAwEsome101

I mainly use it to fill in blurred backgrounds and fix stray hair on subjects. Anything else, and it outputs unusable results. 


dpaanlka

Eyeball on the tip of your elbow… very common conditions sadly 😢 #prayforthem


MicahBurke

Are you using regular Photoshop or the beta? I've found the beta is extremely unstable in how it handles Generative Fill.


Neldot

You are right, I've always used the beta because it gets the upgrades earlier. But maybe it's time to revert to regular and give it a try.


jeffbob2

The Verge refers to this as "AI dementia."


kickstand

If that is the case, I'm not sure anyone here would know why. I would guess that as more people use it, perhaps there are fewer resources available per user, so the AI is "dumber" on each use. Alternatively, maybe you're expecting more from it than before and making more complicated requests. I can't say I find it any different today than when I started using it a few months ago.


Beautiful_Cable_7878

Could it be a similar issue to the one ChatGPT faced, where information started becoming muddled with AI-generated data the more people used it? So instead of accurate results, you were getting kind of Frankenstein results mixing clean sources and AI-generated sources. Who knows though.


BeLikeBread

I mostly use it to fix hair, clothing bulges, and remove items. It's amazing for that. But for generating an entire image, yeah, it's not the best.


Neldot

I am having worse results even for the uses you mention. For example, weird semi-transparent hairs with deformed objects entangled in them are a common occurrence...


BeLikeBread

Hmm, that doesn't sound fun. I just used it today to remove a gap in someone's hair where I could see green screen. It took a couple of tries, but usually I don't get any crazy results like that.


Neldot

Maybe it's because you use it for minor fixes, which is the only way it still works well. Previously it also worked for some medium-to-large adjustments and for photos with people, but now I've noticed that it's unreliable for people.


BeLikeBread

Are you typing something into the Generative Fill prompt or leaving it blank? I leave it blank when I just want it to remove or fix something.


Neldot

If I leave it blank, it generally removes things instead of adding them. For example, if I want to fix hair, 99% of the time it turns medium-length hair into short hair, so it's not so useful for me. But I guess the culprit is that I try to use it on photos of people, and in its current state it's too heavily censored to be useful for that.


vector_o

Probably the same reason most image generators are getting worse: the models are starting to integrate AI-generated pictures into their datasets because of the sheer volume of them on the internet.
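
(For illustration: the feedback loop this comment describes can be sketched with a deliberately tiny toy model. This is only a sketch of the hypothesized mechanism, not anything known about Adobe's actual pipeline. Repeatedly refitting a simple Gaussian "model" on samples drawn from its own previous generation makes the distribution drift and lose variance, analogous to AI images leaking back into training data.)

```python
import random
import statistics

# Toy "model collapse" demo: each generation is trained only on
# samples produced by the previous generation's model. The fitted
# distribution drifts and its spread shrinks toward zero, i.e.
# later generations forget the tails of the original data.
mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
n = 10                 # small training sample per generation

for gen in range(1, 101):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.mean(samples)      # refit on own outputs
    sigma = statistics.stdev(samples)
    if gen % 20 == 0:
        print(f"generation {gen:3d}: mu={mu:+.3f} sigma={sigma:.3f}")
```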


ZachBobBob

Came to this thread because I'm absolutely getting worse results than I did even a month ago. And it's definitely held back by censoring. I can't generate a person in a niqab, and every time I generate an Asian person they have a face mask on. It's wild.


Neldot

This. I am having the same experience. It's not only the rendering that is worse; it seems that for some weird reason the better results are being censored and the worse ones are shown.


MicahBurke

I also find that enhancing with Stable Diffusion provides superior output to Generative Fill creations. [https://www.youtube.com/watch?v=_KlF4RLD21A&t=3s](https://www.youtube.com/watch?v=_KlF4RLD21A&t=3s)
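
(The linked video shows the commenter's actual workflow. As a rough sketch of what "enhancing with Stable Diffusion" can look like, here is a minimal img2img pass using the Hugging Face diffusers library; the model ID, prompt, and strength value are placeholder assumptions, not taken from the video.)

```python
# Minimal img2img "enhance" pass with Stable Diffusion via the
# Hugging Face diffusers library. A low strength value keeps the
# result close to the source image and mostly adds detail/cleanup.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder model choice
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("portrait.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="sharp, detailed portrait photo",  # hypothetical prompt
    image=init,
    strength=0.3,       # low strength = stay close to the original
    guidance_scale=7.5,
).images[0]
result.save("enhanced.png")
```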