05-31-2023, 04:57 PM   #16
Moderator
Site Supporter
Loyal Site Supporter
MarkJerling's Avatar

Join Date: May 2012
Location: Wairarapa, New Zealand
Photos: Gallery | Albums
Posts: 20,391
Originally posted by Wheatfield:
As we all know, artificial intelligence has become a thing. I suspect by now it is used in some form or another for every image shot with a newer cell phone camera.
That AI technology is now moving into the realm of commonality with the introduction of Generative Fill with Adobe Photoshop. Anyone who is on the Adobe Creative Cloud rental program can download the Beta version of Photoshop and very easily (I think perhaps too easily) create images from practically nothing.

The way it works is you load your image into Photoshop, select the subject and refine the selection as needed, then select the inverse and go to the Edit tab and select Generative Fill from the dropdown. This will open a dialog box where you tell the program what sort of fill you want for your background.
Click Generate and Adobe creates three backgrounds and superimposes your subject onto them. If none of them is quite right, pick the one closest to what you want and run it again (and again) until the software comes up with something you like.
And that's all there is to it.
It can also be used to remove things from the image by lassoing the distracting elements and clicking on generative fill without entering anything. The program will then fill that area with what it thinks would be there behind whatever has been selected.
It can also be used to add elements to an image by lassoing an area of the image and when the dialog box opens, telling the program what you want as a fill.
The latter is something of a crap shoot at this point in time.
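A side note for the programmatically inclined: as far as I know Adobe doesn't expose Generative Fill through a public API, but the same select-subject / invert-the-selection / describe-the-fill idea can be sketched with open-source tools. A minimal sketch, assuming the Hugging Face diffusers library with a Stable Diffusion inpainting checkpoint rather than Adobe's Firefly; the file names and the prompt are just placeholders.

Code:
# Conceptual analogue of Generative Fill with open-source tools
# (diffusers + a Stable Diffusion inpainting model, not Adobe's Firefly).
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# The photo, plus a mask that is white where the background should be
# regenerated and black over the subject (the "select subject, then
# select the inverse" step done in Photoshop).
image = Image.open("truck.jpg").convert("RGB").resize((512, 512))
mask = Image.open("background_mask.png").convert("RGB").resize((512, 512))

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a construction site",  # what you'd type into the dialog box
    image=image,
    mask_image=mask,
).images[0]
result.save("truck_on_construction_site.png")

Different model, same idea: the mask says where to fill and the prompt says what to fill it with.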

The processing is done on an Adobe server, so how fast the image generation works is going to be dependent on one's internet connection. An image off the 40MP XT-1 seems to take about half a minute or less.
Also, Adobe has placed restrictions on exactly what the tool can be used to generate, so hopefully the internet trolls will find it frustrating when making their ugly little memes attacking whatever has triggered them.

A couple of examples of what the new feature can do in my less-than-skilled hands:

the first is a random truck photographed in a parking lot:



And using Generative Fill to put it someplace more truckish. I told the program to put it in a construction site.



Next is a picture of my truck, also in a parking lot:



And, using Generative Fill, I told it to park my truck on a logging road.



And yes, that's a Cummins badge on my truck.

And finally, an image I shot a couple of years ago in a park near where I live, and a couple of different versions:

This is as shot with no edits:



and an edit where I placed her in a nicer scene:



And another scene, this time I made the dog a little cooler:



This is still the Beta version of Photoshop, and I've found that the tool has gotten better over the couple of weeks I've been playing with it, so it is still a work in progress as they refine the algorithm.

Some will see this as a giant step backwards for photography, and in many ways it is. Why bother going somewhere nice when you can shoot your subject in front of a blank background and just put it wherever you like? Some will see it as a very powerful creative tool.

Either way, it's here to stay for better or worse.
Very cool Bill. I'm impressed.

05-31-2023, 04:59 PM - 1 Like   #17
Site Supporter
Site Supporter
Digitalis's Avatar

Join Date: Mar 2009
Location: Melbourne, Victoria
Photos: Gallery
Posts: 11,694
The light is either coming from the wrong direction, or there is a discontinuity in light quality in the edited shots; experienced eyes can spot it. However, the technology is bound to get better at matching such things over time. For studio photographers with control over lighting, creating elaborate composites out of seemingly incongruent elements has been our stock-in-trade for decades (blue/green screen photography, anyone?). Adobe have just removed the irritating barrier of having to light a green/blue screen with perfect evenness.

Last edited by Digitalis; 06-01-2023 at 09:38 PM.
05-31-2023, 11:23 PM - 1 Like   #18
Pentaxian




Join Date: Feb 2015
Photos: Gallery
Posts: 12,177
I've evaluated this kind of content-aware fill approach for large prints, and concluded that the feature is impressive for small images but unsuitable for large ones.
Problems happen at edges: the software finds edges based on the image itself, those edges are only more or less accurate, and the edges are where assembly errors are made. Such errors are not visible when images are displayed at sizes much smaller than the native resolution of the image.
Say the AI software makes a ±50 pixel error on an edge; once the image is downsized 10x for display, that ±50 pixel error becomes ±5 pixels, which is not visible in web-sized images.
In other words, for AI generative fill to work for professional large-size prints, you'd need to start with a native image of gigapixel resolution.
So, for me, this kind of AI image fill is more of a toy: great for hobby and creative fun imagery, but not something I would consider for high-quality professional work.
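To put rough numbers on that scaling argument, a quick sketch in Python (the frame width, error size, print size and 300 dpi are all assumed figures for illustration):

Code:
# Illustrative only: how a fixed edge error (in native pixels) looks at
# web size versus on a large print. All numbers are assumptions.

native_width_px = 7000    # roughly a 40 MP frame
edge_error_px = 50        # assumed AI assembly error along an edge

# Downsized 10x for the web: the error shrinks with the image.
web_error_px = edge_error_px / 10                 # ~5 px, hard to see

# Printed 40 inches wide at 300 dpi: the frame is upscaled first,
# and the error is upscaled along with it.
required_px = 40 * 300                            # 12000 px needed
upscale = required_px / native_width_px           # ~1.7x
print_error_px = edge_error_px * upscale          # ~86 px
print_error_in = print_error_px / 300             # ~0.29 inches on paper

print(f"web: ~{web_error_px:.0f} px, print: ~{print_error_px:.0f} px "
      f"(~{print_error_in:.2f} in on paper)")

Under those assumptions, an error that vanishes in a web-sized JPEG becomes several millimetres wide on the print, which is why you'd want a far higher-resolution source before trusting this for professional large prints.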

---------- Post added 01-06-23 at 08:32 ----------

Originally posted by MarkJerling:
Very cool Bill. I'm impressed.
I'm impressed too, and it also reminds me of speech-to-text recognition software, which was supposed to save time: since you speak about four times faster than you write, the software would write for you.
So you speak into a microphone and get about 70% of the recognition right. After hours of training the machine on your language style and voice, the speech-to-text is 95% error free, which still leaves the 5% of mistakes that you have to find by reading back and correcting manually.
After the promise of something revolutionary, it turns out writing is just as fast, and you don't need a headset or to teach the machine to do a mediocre job.
Speech-to-text AI has been around for 20 years, and we still write our text using our fingers.

Last edited by biz-engineer; 05-31-2023 at 11:37 PM.
06-01-2023, 02:26 AM - 2 Likes   #19
Pentaxian
Dartmoor Dave's Avatar

Join Date: Aug 2012
Location: Dartmoor, UK
Photos: Gallery
Posts: 3,857
Some startling examples of what Generative Fill is capable of in this Petapixel piece: RIP Photographers? All The Wild Things People Are Doing With Photoshop's New Generative Fill Tool | PetaPixel

Some of them are really beautiful, but whatever they are, it's not photography as I understand the term.


* * * * * *

Originally posted by BigMackCam:
I don't share many photos online, but when I do, I've been honest about any serious manipulation - editing out telegraph poles and wires, sky replacement, compositing to remove people in busy locations etc. (as a general rule, I'm not keen on that kind of editing, even pre-AI, and have only done it very rarely)... but then, I feel like a bit of a hypocrite as I routinely carry out spot replacement / healing to get rid of dust spots, bits of litter, a distant bird in an otherwise clear sky, that sort of thing - which results in better-looking but not entirely honest representations of what I actually captured; yet I don't explicitly mention those when I post a photo... So I really don't know... I'm rather conflicted and confused on what I personally find ethical in manipulation of a digital (or digitised) photo presented as such, and I suspect folks differ greatly on the matter. One thing's for sure - whilst Bill's examples show there are real creative uses for AI image manipulation, at this level I'm certain they've crossed over from being photographic works to composite digital art works...
I'm in the same frame of mind at the moment -- just not sure what I consider to be acceptable levels of manipulation anymore. The truth is that I sometimes use processing techniques nowadays that I would have absolutely refused to consider even five years ago. Back then, when I screwed up in taking a photo I considered it a matter of principle to leave the evidence that I had screwed up intact. But now when I screw up a photo in-camera, I know how to fix it in Photoshop and I've become more and more casual about doing that. And I'm talking about fixing things to the level of combining two frames of the same scene into one. Just a few weeks ago I did this one:



The train hadn't reached the bridge yet at the moment the double decker went past, so this is two frames taken a few seconds apart and combined together. And yes, I'm ashamed of myself. Especially since it's a pretty bland photo even after all that "fixing".

So my real concern about generative fill is that, now that it exists, I don't trust myself not to use it. Sooner or later I'll screw up a photo and know that generative fill can fix it, and then I'll have crossed that Rubicon with no going back.

I really miss the days of Kodachrome. Photography was so much simpler back then.

06-01-2023, 05:13 AM   #20
pid
Pentaxian




Join Date: Jan 2010
Photos: Gallery
Posts: 566
I just ask myself: who gets any copyright fees for their work that the AI is using?!
The AI will use all our pictures here in the forum and elsewhere on the internet. Think about that before you applaud this new feature ...
06-01-2023, 05:20 AM   #21
Moderator
Loyal Site Supporter




Join Date: Feb 2015
Location: Central Florida
Photos: Gallery
Posts: 6,033
Originally posted by pid:
I just ask myself: who gets any copyright fees for their work that the AI is using?!
The AI will use all our pictures here in the forum and elsewhere on the internet. Think about that before you applaud this new feature ...
Based only on what I've seen so far, no one individual image will be identifiable as the source for the generative fill. The results seem to be a mish-mash of several different images, not recognizable as specific photos you or I took, and therefore I would not expect compensation for what I'd consider fair use of images. It's not unlike coding, IMO.
06-01-2023, 05:44 AM   #22
Site Supporter
Site Supporter
ehrwien's Avatar

Join Date: May 2016
Posts: 2,772
Originally posted by gatorguy:
Based only on what I've seen so far, no one individual image will be identifiable as the source for the generative fill. The results seem to be a mish-mash of several different images, not recognizable as specific photos you or I took, and therefore I would not expect compensation for what I'd consider fair use of images. It's not unlike coding, IMO.
I think it wasn't the generative fill feature discussed here, but another AI image "creation" software where some "generated" images had the famous Getty Images watermark clearly visible: Getty Images is Suing Stable Diffusion for a Staggering $1.8 Trillion | PetaPixel

The first comments in this thread take on a different feel if you ignore the fact that they are about this generative fill feature and read them as if they are about Photoshop in general... it takes me back a few decades.

There has always been image manipulation; the difference now is the ease of use combined with some impressive results, so we'll see more and more manipulated images, and the amount of manipulation will keep growing.

So what we should do is be aware, be sceptical, and teach the people around us a healthy amount of media literacy; or rather, extend the scepticism we already apply elsewhere to images, to every image we see, be it on Flickr or in the news. Remember the pictures of the Pope in some fancy clothes a few months ago? There were more believers than there should have been (and I don't mean people who believe in God).

06-01-2023, 06:03 AM   #23
Moderator
Loyal Site Supporter




Join Date: Feb 2015
Location: Central Florida
Photos: Gallery
Posts: 6,033
Originally posted by ehrwien:
I think it wasn't the generative fill feature discussed here, but another AI image "creation" software where some "generated" images had the famous Getty Images watermark clearly visible: Getty Images is Suing Stable Diffusion for a Staggering $1.8 Trillion | PetaPixel

The first comments in this thread take on a different feel if you ignore the fact that they are about this generative fill feature and read them as if they are about Photoshop in general... it takes me back a few decades.

There has always been image manipulation; the difference now is the ease of use combined with some impressive results, so we'll see more and more manipulated images, and the amount of manipulation will keep growing.

So what we should do is be aware, be sceptical, and teach the people around us a healthy amount of media literacy; or rather, extend the scepticism we already apply elsewhere to images, to every image we see, be it on Flickr or in the news. Remember the pictures of the Pope in some fancy clothes a few months ago? There were more believers than there should have been (and I don't mean people who believe in God).
Yup, I was aware that at least one early pioneer in AI images was likely relying on identifiable copyrighted images. Getty watermarked? Seriously?

Adobe's implementation seems to be different, and largely (only?) trained on their own Adobe-licensed images. If there are any of our own images somewhere in the mix, I expect fair use will apply. They won't be identifiable as one of ours.
06-01-2023, 07:25 AM   #24
Site Supporter
Site Supporter




Join Date: May 2019
Photos: Albums
Posts: 5,976
Originally posted by gatorguy:
If there are any of our own images somewhere in the mix, I expect fair use will apply.
Don't expect it, don't expect it... AFAIK, every AI model so far has been trained on every image they managed to get their hands on, regardless of whether the owner has given permission or not.
06-01-2023, 07:33 AM   #25
Moderator
Loyal Site Supporter




Join Date: Feb 2015
Location: Central Florida
Photos: Gallery
Posts: 6,033
Originally posted by Serkevan:
Don't expect it, don't expect it... AFAIK, every AI model so far has been trained on every image they managed to get their hands on, regardless of whether the owner has given permission or not.
I'm only basing my comments on what Adobe has said:

"At the core of Generative Fill is Adobe Firefly, which is Adobe's custom image-synthesis model. As a deep learning AI model, Firefly has been trained on millions of images in Adobe's stock library to associate certain imagery with text descriptions of them."
06-01-2023, 08:03 AM   #26
Veteran Member




Join Date: Apr 2023
Posts: 351
Originally posted by Dartmoor Dave:

Some of them are really beautiful, but whatever they are, it's not photography as I understand the term.



The train hadn't reached the bridge yet at the moment the double decker went past, so this is two frames taken a few seconds apart and combined together. And yes, I'm ashamed of myself. Especially since it's a pretty bland photo even after all that "fixing".

So my real concern about generative fill is that, now that it exists, I don't trust myself not to use it. Sooner or later I'll screw up a photo and know that generative fill can fix it, and then I'll have crossed that Rubicon with no going back.

I really miss the days of Kodachrome. Photography was so much simpler back then.
I've found myself saying elsewhere: don't be so hung up on photo-graphy as a term. We have images which are "drawn by light", except they have always been airbrushed, or burned or dodged or composited. It just wasn't available to many people. These are drawn by multiple things, but we can't call them poly-graphy because a polygraph is a lie-detector machine :-) But you're a picture maker who does most of the work photo-graphically, while some is done through your instructions to software. Who cares how the software works? We'll judge things on the end picture, and on what your intent was when you told the computer to do its stuff. Some intent is to deceive (the Liquify tool in Photoshop, for example, making someone look thinner than they really are), some is artistic (removing a telegraph pole from a landscape). If the way you saw the landscape in your mind's eye was without the pole, very few people will say it was deceitful to remove it, whether that was the clone brush, content-aware fill, or generative fill. Want to transplant a car from one scene to another? You can spend all day looking for a picture to be the new backdrop, or AI can make one from what its data says should go into a scene matching a description. AI isn't making the impossible possible; it's making the impractical doable.

As for the composite: why be ashamed? That wasn't how light got bounced off things and into the camera, but you pictured the shot with the bus and train in those positions at the same time; the fact they were at different times only matters if you're providing evidence of something (e.g. that the driver of the bus was distracted by a steam train when he couldn't have seen it). I've shot two frames, realised I'd cropped a foot or a hand out of the better shot, and composited them together. I've done group shots, stacked them, and then used the best face each person was pulling. In both cases I'm not showing something which wasn't there, just things which weren't in the same picture at the same time. But then even a fast shutter speed does that: exposure of one part of the frame has ended before exposure of another part begins. We think the image is an instant, but different parts were taken at different times.
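To put rough numbers on that shutter point, a back-of-envelope sketch (the ~1/250 s curtain travel time is an assumed, typical figure, not a measured one):

Code:
# Back-of-envelope focal-plane shutter timing (assumed, typical figures).
curtain_travel_s = 1 / 250    # time for the curtains to cross the frame
shutter_speed_s = 1 / 2000    # nominal exposure: the width of the travelling slit

print(f"each row is exposed for {shutter_speed_s * 1000:.2f} ms")
print(f"top and bottom rows are exposed ~{curtain_travel_s * 1000:.0f} ms apart")
# Each row gets 0.5 ms of light, yet the "instant" spans ~4 ms from the
# top of the frame to the bottom.

So even a straight frame is a composite in time, just a very tight one.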

I've played a bit with the Photoshop beta. Sometimes the fill is good, sometimes it sucks, big time. When you talk about crossing the Rubicon, really most of us did that the first time we used spot healing, or red-eye fix, or the clone brush; now we can remove obtrusive parked cars as easily as we heal a spot or clone out a stray label, but we haven't lost our integrity. There are collages and composites where we're not really expecting people to believe what the image shows. I can clone out a parked car and fill in a dinosaur, but no one will think I saw a T-Rex. And then we reach another line where we can make a composite which is good enough to fool many people a lot of the time and lead them to believe something false.
06-01-2023, 10:03 AM   #27
Pentaxian
Dartmoor Dave's Avatar

Join Date: Aug 2012
Location: Dartmoor, UK
Photos: Gallery
Posts: 3,857
Those are all completely valid points and I agree with many of them. I don't want to divert the thread into a discussion about processing as a whole because it's about generative fill in particular, but the part of your reply that really chimes with me at the moment is:

Originally posted by James O'Neill:
And then we reach another line where we can make a composite which is good enough to fool many people a lot of the time and lead them to believe something false.
Strangely (being a lifelong Pragmatist), as I've aged I've become a sort of Kantian, and more and more of a strict one. Nowadays I tend to think that it's reasonable that a photograph shouldn't lead someone to believe something false. So lately I'm considering whether there is a categorical imperative that imposes a duty on me never to take any photograph that leads someone to believe something false.
06-01-2023, 09:19 PM   #28
Pentaxian




Join Date: May 2016
Photos: Albums
Posts: 1,990
Originally posted by Dartmoor Dave:
The train hadn't reached the bridge yet at the moment the double decker went past, so this is two frames taken a few seconds apart and combined together. And yes, I'm ashamed of myself. Especially since it's a pretty bland photo even after all that "fixing".
Put the cyclist on top of the bus, that might liven it up a bit.

---------- Post added 06-01-23 at 09:22 PM ----------

Originally posted by Dartmoor Dave:
So lately I'm considering whether there is a categorical imperative that imposes a duty on me never to take any photograph that leads someone to believe something false.
All single photos (vs. a series) require a lot of interpretation on the part of the viewer, and more likely than not the viewer will not know the actual facts of the image without outside information. So you might not be leading anyone to believe something false, but they will be interpreting the image using their own biases and experience.
06-06-2023, 08:50 AM   #29
Moderator
Loyal Site Supporter




Join Date: Feb 2015
Location: Central Florida
Photos: Gallery
Posts: 6,033
Originally posted by pid:
I just ask myself: who gets any copyright fees for their work that the AI is using?!
The AI will use all our pictures here in the forum and elsewhere on the internet. Think about that before you applaud this new feature ...
Here's one influential country's take on the copyright and AI training issue:
Japan Declares AI Training Data Fair Game and 'Will Not Enforce Copyright' | PetaPixel
06-06-2023, 06:59 PM - 2 Likes   #30
Moderator
Loyal Site Supporter
Wheatfield's Avatar

Join Date: Apr 2008
Location: The wheatfields of Canada
Posts: 15,903
Original Poster
For those interested, I saw a YouTube video last night with some useful information.
Right now, Generative Fill generates at a maximum of 1024 x 1024 pixels, so if you grab a large section of the image (as I did in my OP) and do a fill, the resolution of the fill ends up far below that of the rest of the image.
The guy had a pretty neat hack: make a 1024 x 1024 pixel square and use it as the selection. That way the fill comes back at the same resolution as the rest of the image.
He made an action with a stop so he could move the selection area across the image in steps. It seemed to work far better than just lassoing an area and hitting the fill button, as the fill didn't come out fuzzy.
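If anyone wants to pre-plan those tiles rather than stepping by eye, here is a rough sketch of the tile arithmetic only (not Photoshop scripting; the 1024-pixel tile matches the hack in the video, while the overlap and the example frame size are assumptions):

Code:
# Tile arithmetic for the 1024 x 1024 trick: step a square selection across
# the frame so each fill runs at its native output size. The overlap (to
# hide seams) and the example frame size are assumptions.

def tile_boxes(width, height, tile=1024, overlap=64):
    """Yield (left, top, right, bottom) boxes covering the whole image."""
    step = tile - overlap
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            yield (left, top, min(left + tile, width), min(top + tile, height))

boxes = list(tile_boxes(7000, 4700))    # e.g. a ~33 MP frame
print(f"{len(boxes)} tiles")
print("first:", boxes[0], "last:", boxes[-1])

Each box can then be stepped through by hand, or with a recorded action with a stop as in the video, so every fill comes back at its full 1024-pixel resolution instead of being stretched to cover a bigger selection.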