Originally posted by philbaum I have two bridge images that I print at 12" by 36" from my K20D. Instead of printing them at 180 dpi as Marc suggested, I resize them to 36" length and 300 dpi using Lightroom.
Yeah, but that's just shifting deck chairs around. I didn't mean that the printer will literally only use 180 dots of each ink color per inch; I meant you're only getting 180 (or whatever) pixels' worth of actual picture information into an inch of the print, and that's equally true whether you do the upsizing in Lightroom or let the printer driver do it. Either way, sure, you get 300 drops of each ink per inch, but some of those dots are basically just made up ("interpolated" is the more technical term). Whether those dots are interpolated by Lightroom or the printer driver is immaterial. Of course, chances are one will do a better job than the other, but I'd actually be inclined to suspect the printer would do a better job than Lightroom unless Lightroom uses some really fancy fractal-based upsizing algorithm. I doubt the difference would be visible to most observers in any case.
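If it helps to see what "made up" means in practice, here's a toy sketch (plain Python, hypothetical pixel values) of what any upsizing step is essentially doing, whether it happens in Lightroom or in the printer driver: inventing in-between values by blending the neighbors it actually has.

```python
# Toy illustration: upsize a row of 5 captured pixel values to 12 output
# values by linear interpolation. The 5 inputs are real sensor data;
# everything else is a blend of its neighbors, i.e. "made up" by the resampler.

captured = [10, 40, 200, 180, 90]   # hypothetical sensor values

def upsize(row, new_len):
    out = []
    for i in range(new_len):
        pos = i * (len(row) - 1) / (new_len - 1)   # map output position onto input row
        lo = int(pos)
        hi = min(lo + 1, len(row) - 1)
        frac = pos - lo
        out.append(round(row[lo] * (1 - frac) + row[hi] * frac))
    return out

print(upsize(captured, 12))   # 12 numbers, but only 5 of them came off the sensor
```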
Here's a more specific way of putting it: the K20D generates pictures with a little over 3000 pixels on the short dimension (rounding a lot for simplicity), and you're talking about spreading those 3000 pixels out over 24 inches of paper. No matter how you slice it, that works out to 3000/24 pixels per inch - in other words, 125 pixels per inch. Each inch of paper is going to contain 300 drops of each ink, so *someone* is making up a lot of information to get 300 drops of each ink out of 125 pixels. Whether you upsize in Lightroom or let the printer driver do it, at some level you're still talking about 125 pixels per inch; you're just throwing more drops of ink at each pixel.
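In case it's useful, here's the same arithmetic spelled out (same rounded numbers as above):

```python
# Back-of-the-envelope effective resolution for a 24"-wide print
# from a K20D (short side rounded to 3000 px, as above).
pixels_short_side = 3000   # what the sensor actually captured
print_width_in = 24        # short dimension of the print
printer_dpi = 300          # drops of each ink per inch the printer lays down

effective_ppi = pixels_short_side / print_width_in
print(effective_ppi)                 # 125.0 real pixels per inch
print(printer_dpi / effective_ppi)   # 2.4 ink drops per real pixel along each axis
```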
This is why people sometimes talk about ppi versus dpi, but that can be a misleading/confusing way of describing the distinction too if you don't already understand what's going on. That's why I'm describing it in terms of actual data generated by the sensor versus drops of each ink color. The printer is probably going to use 300 drops of each ink per inch no matter what, but the *real* resolution - how much information there is per inch - is set by the sensor and the print size, not by whatever number you type into Lightroom. No matter how you work it, 3000 pieces of sensor information (what we might mean by pixels in this context) spread out over 24 inches is only 125 pixels per inch.
So regardless of which method is used to do the interpolation, a 24x36 print isn't going to be 300ppi in the same way a smaller print can be - lots of the ink drops the printer spits out represent data that was just made up, not actually captured by the camera. Thus, the print won't stand *close* scrutiny the way a smaller print would. But my point is, it doesn't really have to. You say they look fine from a few inches away, and that's because 125ppi is still not half bad. But if you could somehow set up a comparison where the same image is printed at 125ppi versus 300ppi, you would indeed be able to tell the difference if you looked closely.
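If you ever want to fake that comparison yourself, a rough way (assuming a recent Pillow; the file names here are just placeholders) is to keep one copy of a crop at its full 300 pixels per print-inch and knock the other down to 125 and back up, so both print at the same size but one is mostly interpolated detail:

```python
# Simulate a 125ppi print next to a 300ppi print of the same crop.
# Assumes Pillow >= 9.1; "crop.tif" is a placeholder for your own test crop.
from PIL import Image

crop = Image.open("crop.tif")   # e.g. a small crop saved at 300 ppi
w, h = crop.size

low = crop.resize((round(w * 125 / 300), round(h * 125 / 300)),
                  Image.Resampling.LANCZOS)          # throw detail away: 125 ppi worth
low = low.resize((w, h), Image.Resampling.LANCZOS)   # upsize back, like the printer driver would

crop.save("print_300ppi.tif", dpi=(300, 300))
low.save("print_125ppi.tif", dpi=(300, 300))         # same print size, made-up detail
```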