03-06-2016, 09:52 AM - 1 Like   #766
Pentaxian
falconeye's Avatar

Join Date: Jan 2008
Location: Munich, Alps, Germany
Photos: Gallery
Posts: 6,863
Wrt the discussion started by @Simen1 ...

His arguments are all sound and valid.

Speaking generally however, there is, for any given image quality and any given moment in time, a sweet spot (for sensor size) which delivers this image quality in the most cost-effective way (read, at the lowest possible price).

I tried to elaborate a bit about this topic in my blog.

Two things to observe:
  1. The sweet spot wanders towards larger sensor sizes over time.
  2. The sweet spot wanders towards larger sensor sizes when increasing the given image quality.
The reason for #2 is that beyond some point, the cost of implementing the lens grows much faster than the cost of implementing the sensor. Think of the cost of f/0.5 lenses ...

The reason for #1 is that the cost of silicon decreases over time, while the cost of glass decreases less rapidly or even increases.

To make things worse, #1 and #2 combine into a cumulative effect favoring larger sensor sizes over time, more rapidly than some people believe.

This is a general fact which can't be denied as it can be proven rigorously.

There are a few interesting corollaries to the above:
  • Pentax will vanish from the enthusiast market if they don't upgrade to FF at some moment in time (one may still argue about the correct moment though -- for many it is just right now, for some too early, and for some, including myself, about 4 years too late).
  • FourThirds will be adopted by products with lower image-quality standards (such as compacts) and will become extinct in the enthusiast market.
  • Because small lenses (and small cameras) are a most-wanted item, there will be such lenses for, e.g., full frame, in the future. Just like professional-grade F/4 primes and F/5.6 zooms. This is still a large gap in the market and IMHO, one of a few ways Pentax could differentiate and gain market share. That's also one of the strengths of the Leica line-up.
  • As the difference between FF and cropped 44x33 medium format is so small (0.8x crop factor), all current medium format vendors will have to provide an upgrade path to full medium format at an affordable price. Except maybe Leica, which replaces the Leica S by the Leica SL.
  • A fixed lens 44x33 medium format mirrorless camera must be imminent. The question is who delivers it first ...
  • There is an option ... that Canon and/or Nikon, when eventually jumping to professional-grade mirrorless, do this with cropped 44x33 medium format. As it would be affordable enough, saves their DSLR business from cannibalism, allows them to introduce a new mount with no questions asked, and bridges AF performance until multi-pixel AF has achieved professional tracking performance (which it eventually will -- btw, Sony just delivered a Dual-Pixel AF sensor to Samsung ...).
  • ... many more corollaries exist; think about it for a few minutes ...
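The 0.8x crop-factor figure in the list above follows directly from the sensor diagonals; a quick check using the standard published format dimensions:

```python
import math

def diagonal(w, h):
    """Sensor diagonal in mm."""
    return math.hypot(w, h)

ff = diagonal(36, 24)    # 35mm full frame: ~43.3mm
mf44 = diagonal(44, 33)  # cropped 44x33 medium format: exactly 55mm

crop = ff / mf44         # crop factor of FF relative to 44x33
print(round(crop, 2))    # 0.79, commonly rounded to 0.8x
```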


03-06-2016, 10:17 AM   #767
Pentaxian




Join Date: Nov 2013
Posts: 4,614
QuoteOriginally posted by falconeye Quote
Wrt the discussion started by @Simen1 ...

His arguments are all sound and valid.
Only partially. People don't buy MF for high ISO; they don't necessarily buy it for more megapixels either, and they don't buy it for maximum aperture / shallow depth of field.

Silicon prices decrease over time, but it is more complex than that. Most of the leverage with silicon comes from putting more onto the same surface area, which is not that interesting for sensor size. It may bring 4K and later 8K, or BSI, but that doesn't mean a huge chip is going to cost nothing or consume no power. Intel predicts they will stagnate by 2020, and if you've noticed, processors haven't evolved much in the past 10 years. What happened is that we managed to close the gap between smartphones and desktops, not that processors overall are that much better today. They are maybe 2-4x faster, while at the previous rate of evolution they should have been 30-50x faster.

It is not clear how long the price will keep going down, nor how big a factor it is in the price of the gear anyway. With enough volume, an FF camera like the K-1 could sell for $500, and the latest 70-200 too; MF would be maybe $2-3K. But the volume isn't there; worse, the market shrinks significantly each year. Sales are dropping drastically, and if current prices drop as a desperate move to win back some volume, manufacturers might find themselves in a position where the R&D to go bigger/better will simply prove too costly to justify.

As long as there is nothing better and competitors don't do better, there is no reason to improve. All manufacturers dropped prices in Japan just after Pentax announced its FF for $1800; that's no random event. It will not happen again for no reason.

But it doesn't sound like they have either the money to invest or the market to sell the gear. Quite the contrary.
03-06-2016, 11:08 AM   #768
New Member




Join Date: Jun 2010
Posts: 24
QuoteOriginally posted by Nicolas06 Quote

Silicon prices decrease over time, but it is more complex than that. Most of the leverage with silicon comes from putting more onto the same surface area, which is not that interesting for sensor size. It may bring 4K and later 8K, or BSI, but that doesn't mean a huge chip is going to cost nothing or consume no power. Intel predicts they will stagnate by 2020, and if you've noticed, processors haven't evolved much in the past 10 years. What happened is that we managed to close the gap between smartphones and desktops, not that processors overall are that much better today. They are maybe 2-4x faster, while at the previous rate of evolution they should have been 30-50x faster.
Sorry for moving part of the discussion off-topic, but I have to object to the statements about processor performance. Processors have evolved according to Moore's law over most of the last decade, although there has been a slight slowdown over the last few years (since the 22nm node in 2012), due to the fact that shrinking circuits further is starting to reach the edge of what is possible. Intel's prediction of stagnation by 2020 has to do with that, and the only way past it is to move on to a better technology platform than silicon.

Here is a source: https://www.karlrupp.net/2013/06/cpu-gpu-and-mic-hardware-characteristics-over-time/
See the first two graphs under 'Raw Compute Performance'. The blue lines show the performance increases of Intel's Xeon processors; the most expensive consumer processors are usually not too far from those in performance. As you see, between 2007 and 2014 there was an increase from 100 to 1400 GFlops in single-precision, and from 50 to 700 GFlops in double-precision calculations. That is a factor of 14 in 7 years, which is consistent with Moore's law (which would predict a factor of 8 in 6 years and a factor of 16 in 8 years, although that is an oversimplified interpretation of Moore's law).
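The factor-of-14 arithmetic checks out against the common "doubling every two years" reading of Moore's law; a quick sketch (the two-year doubling period is one conventional assumption, not the only one):

```python
# One common reading of Moore's law: performance doubles every 2 years.
def moore_factor(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

observed = 1400 / 100        # Xeon single-precision GFlops, 2007 -> 2014
predicted = moore_factor(7)  # 2**3.5
print(observed, round(predicted, 1))  # 14.0 11.3
```

Observed growth slightly outpaces the naive two-year doubling, which is why the trend reads as "consistent with Moore's law" rather than a strict fit.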

It's easy to think processors have stagnated because not many people need the full processing power of the newest CPU. Their PCs are often limited instead by memory bandwidth or GPU performance, and very few people still buy PCs with high-end CPUs (which used to be much more common when CPUs were often the bottleneck in the system). Even older CPUs can run almost any task thrown at them, so people feel no need to upgrade, but that does not mean the technology isn't progressing.

Similarly, while mobile processors have made huge leaps in performance, they are still very far away from competing with desktop processors. The numbers (number of cores and clock speed) might make them look similarly fast, but they are running a different instruction set, and therefore their speed cannot be compared using those numbers.

Getting back to cameras, one could ask whether there is a Moore's law for sensors. I haven't been following the market long enough to know, but it seems reasonable to assume that, unlike CPUs, we aren't near the edge of what is possible yet. Digital camera sensors are a much newer technology, and the investments made by industry to push the technology to its edge aren't on the same scale.
03-06-2016, 11:44 AM   #769
Pentaxian
falconeye's Avatar

Join Date: Jan 2008
Location: Munich, Alps, Germany
Photos: Gallery
Posts: 6,863
QuoteOriginally posted by Nicolas06 Quote
Silicium price decrease over time but this is more complex. Most of the leverage for silicium is to put more into the same surface.
What xandos said.

And I'll add to it. Even if your argument held true (it doesn't), the process cost for a given area of silicon and a given process would still decrease dramatically over time. That's because of the unavoidable depreciation of old technology / old machines / old fabs in a stagnating world. Canon in particular now works with very old (more than ten-year-old) fabs and can still maintain its market share despite competitors using tech that is ten years newer. If they wanted, Canon could deliver some extremely low-priced FF cameras ...

But what are we discussing?

That sensor sizes for a given budget have grown is an established fact by now. Just look at the market in 2000, 2005, 2010, and 2015, and it should be strikingly clear to everybody.

03-06-2016, 11:45 AM   #770
Pentaxian




Join Date: Nov 2013
Posts: 4,614
QuoteOriginally posted by xandos Quote
Here is a source: https://www.karlrupp.net/2013/06/cpu-gpu-and-mic-hardware-characteristics-over-time/
See the first two graphs under 'Raw Compute Performance'. The blue lines show the performance increases of Intel's Xeon processors; the most expensive consumer processors are usually not too far from those in performance. As you see, between 2007 and 2014 there was an increase from 100 to 1400 GFlops in single-precision, and from 50 to 700 GFlops in double-precision calculations. That is a factor of 14 in 7 years, which is consistent with Moore's law (which would predict a factor of 8 in 6 years and a factor of 16 in 8 years, although that is an oversimplified interpretation of Moore's law).
Before, you basically got increases in frequency: if you doubled the frequency, kept the same architecture, and boosted the memory bandwidth and cache a bit, you got nearly 2x performance on all programs. Maybe only 1.8x or 1.9x, but it was near 2x.

That hasn't happened for many years now; each new processor generation brings 5-15% more efficiency at the same frequency, every 2-4 years.

Then there was the attempt with multicore, the idea being that you can do more in parallel, running arbitrary computations on each core. We got up to 4 cores on most desktops; most laptops are actually 2, and only servers have 8 or more. That's partly because it is difficult to speed things up by running more tasks in parallel. Outside of a few areas (graphics, scientific computation), general-purpose programs don't gain much from this approach. That's basically why we don't have more than 4 cores on desktops. On servers it works quite well, because when many clients are connected it is easy to handle each client's request independently, as they actually are independent requests. Anyway, this avenue is now stagnating: issues like memory bandwidth and cache size make it difficult to scale massively for the kinds of applications we have today.
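The claim that general-purpose programs gain little from extra cores is usually formalized as Amdahl's law: if only a fraction of a program parallelizes, speedup flattens quickly no matter how many cores you add. A minimal sketch (the 25% parallel fraction is an arbitrary illustration, not a measured figure):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Best-case speedup when only part of a program parallelizes."""
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / cores)

# A workload with, say, 25% parallelizable work: speedup caps at
# 1 / (1 - 0.25) ~= 1.33 regardless of core count.
for n in (2, 4, 8, 64):
    print(n, round(amdahl_speedup(0.25, n), 2))  # 1.14, 1.23, 1.28, 1.33
```

This is why desktop core counts stalled while servers kept scaling: independent client requests push the parallel fraction close to 1.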

Then there is the least useful gain of all: massively parallel, identical computations to bump numbers like floating-point operations per second. This sounds great in benchmarks, but again it mostly works for graphics or scientific computing. For your typical desktop, outside games and photo/video editing, it is useless, and it doesn't help the typical web server either. It is not really progress, because devices like that have existed for years to handle graphics and scientific computation: they're called GPUs. Yes, thanks to that, a modern CPU matches the processing power of a GPU from 10 years ago. That's about it, and it doesn't have the memory bandwidth to really keep up. People who really need that kind of performance go with clusters of GPUs, not CPUs.

So I stand by my point: we are stagnating. The day we stopped increasing frequency, progress slowed down dramatically. Of course, thanks to dozens of billions in investment, it still improved and continued for a while, albeit slower and slower, with less and less visible improvement.

I mean, a benchmark does 3 things:
- generates heat and consumes energy (the most visible effect)
- allows the manufacturer to put out fancy specs and say they have the biggest one
- allows the client to feel like he has the biggest one

But if actual applications don't follow...

We will see if hardware neural networks or quantum computing change things. But we are at the end of what the current CPU model can bring, or at least we are not innovating nearly as fast as before.

Last edited by Nicolas06; 03-06-2016 at 11:51 AM.
03-06-2016, 11:47 AM   #771
Pentaxian




Join Date: Nov 2013
Posts: 4,614
QuoteOriginally posted by falconeye Quote
What xandos said.

And I'll add to it. Even if your argument held true (it doesn't), the process cost for a given area of silicon and a given process would still decrease dramatically over time. That's because of the unavoidable depreciation of old technology / old machines / old fabs in a stagnating world. Canon in particular now works with very old (more than ten-year-old) fabs and can still maintain its market share despite competitors using tech that is ten years newer. If they wanted, Canon could deliver some extremely low-priced FF cameras ...
The reverse is true: investment in new fab technology grows exponentially as gates get smaller. Basically, there is Intel... and that's it; the others are far behind. It's more that Canon is 20 years behind and the others only 10. And Intel doesn't make sensors. It is just not worth it money-wise.

Last edited by Nicolas06; 03-06-2016 at 11:53 AM.
03-06-2016, 12:02 PM   #772
Pentaxian
D1N0's Avatar

Join Date: May 2012
Location: ---
Photos: Gallery
Posts: 4,088
When foundries make the step to 450mm wafers (up from 300mm), prices will decrease. The full-frame sensor became affordable at the end of 2012, when the D600 and D610 were launched. Pentax should have been ready to launch a full frame in 2013, but Hoya probably didn't want to invest in that, so they had to wait for Ricoh to give the go-ahead.
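The wafer-size point can be made concrete with the usual first-order gross-dies-per-wafer estimate (the formula below is a textbook approximation that ignores yield, scribe lines, and edge exclusion):

```python
import math

def gross_dies(wafer_diameter_mm, die_w_mm, die_h_mm):
    """Common first-order gross dies-per-wafer estimate, with edge loss."""
    area = die_w_mm * die_h_mm
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / area - math.pi * d / math.sqrt(2 * area))

# A 36x24mm full-frame sensor die:
print(gross_dies(300, 36, 24))  # ~59 candidate dies on a 300mm wafer
print(gross_dies(450, 36, 24))  # ~150 on a 450mm wafer
```

The 2.25x area increase yields roughly 2.5x the dies, because the relative edge loss shrinks on the bigger wafer.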
03-06-2016, 12:25 PM   #773
Pentaxian




Join Date: Feb 2015
Photos: Albums
Posts: 3,224
QuoteOriginally posted by Nicolas06 Quote
Before, you basically got increases in frequency: if you doubled the frequency, kept the same architecture, and boosted the memory bandwidth and cache a bit, you got nearly 2x performance on all programs. Maybe only 1.8x or 1.9x, but it was near 2x.
QuoteOriginally posted by Nicolas06 Quote
The reverse is true: investment in new fab technology grows exponentially as gates get smaller. Basically, there is Intel... and that's it; the others are far behind. It's more that Canon is 20 years behind and the others only 10. And Intel doesn't make sensors. It is just not worth it money-wise.
You are a software guy. You are referring to what's called Moore's law, VLSI front-end process nodes doubling density every 18 months (https://en.wikipedia.org/wiki/Moore's_law and https://en.wikipedia.org/wiki/International_Technology_Roadmap_for_Semiconductors ), which does provide compounded benefits: as transistor size decreases, bias voltages decrease, as do gate and channel capacitance; therefore smaller nodes offer more function per square mm, more speed, and less power dissipation (more MIPS per mW). An image sensor, however, is mostly a large matrix of analog circuitry which does not follow the continuous improvement described by Moore's law; improvement of image sensors is slower and much more limited.

As far as cost vs. image quality, I think Falc is right. But he does not comment on the size and weight of the glass to put in front of a larger sensor. One could study how to reduce the weight and size of lenses for larger sensors (see the new Nikon 300mm prime lens).


Last edited by biz-engineer; 03-06-2016 at 12:37 PM.
03-06-2016, 01:10 PM   #774
Pentaxian




Join Date: Nov 2013
Posts: 4,614
QuoteOriginally posted by biz-engineer Quote
An image sensor, however, is mostly a large matrix of analog circuitry which does not follow the continuous improvement described by Moore's law; improvement of image sensors is slower and much more limited.
Agreed. I was not clear in my wording, but that was part of my point: most improvement in silicon comes from Moore's law, which doesn't really work anymore (as officially stated by Moore himself), and that law never applied that much to sensors.

I think the circuitry and designs like BSI do benefit from it, though. But these things evolve much more slowly anyway, as you said.

And this is my personal opinion: as the market shrinks, and fewer and fewer people are willing to buy a new camera because many find the old one still works and is good enough, investment and R&D will fall... slowing things down even more.

We might not have gotten half the progress of the last 5 years if we hadn't benefited from sensor research for smartphones.
03-06-2016, 01:25 PM   #775
New Member




Join Date: Jun 2010
Posts: 24
QuoteOriginally posted by Nicolas06 Quote
Before, you basically got increases in frequency: if you doubled the frequency, kept the same architecture, and boosted the memory bandwidth and cache a bit, you got nearly 2x performance on all programs. Maybe only 1.8x or 1.9x, but it was near 2x.

That hasn't happened for many years now; each new processor generation brings 5-15% more efficiency at the same frequency, every 2-4 years.

Then there was the attempt with multicore, the idea being that you can do more in parallel, running arbitrary computations on each core. We got up to 4 cores on most desktops; most laptops are actually 2, and only servers have 8 or more. That's partly because it is difficult to speed things up by running more tasks in parallel. Outside of a few areas (graphics, scientific computation), general-purpose programs don't gain much from this approach. That's basically why we don't have more than 4 cores on desktops. On servers it works quite well, because when many clients are connected it is easy to handle each client's request independently, as they actually are independent requests. Anyway, this avenue is now stagnating: issues like memory bandwidth and cache size make it difficult to scale massively for the kinds of applications we have today.

Then there is the least useful gain of all: massively parallel, identical computations to bump numbers like floating-point operations per second. This sounds great in benchmarks, but again it mostly works for graphics or scientific computing. For your typical desktop, outside games and photo/video editing, it is useless, and it doesn't help the typical web server either. It is not really progress, because devices like that have existed for years to handle graphics and scientific computation: they're called GPUs. Yes, thanks to that, a modern CPU matches the processing power of a GPU from 10 years ago. That's about it, and it doesn't have the memory bandwidth to really keep up. People who really need that kind of performance go with clusters of GPUs, not CPUs.

So I stand by my point: we are stagnating. The day we stopped increasing frequency, progress slowed down dramatically. Of course, thanks to dozens of billions in investment, it still improved and continued for a while, albeit slower and slower, with less and less visible improvement.

I mean, a benchmark does 3 things:
- generates heat and consumes energy (the most visible effect)
- allows the manufacturer to put out fancy specs and say they have the biggest one
- allows the client to feel like he has the biggest one

But if actual applications don't follow...

We will see if hardware neural networks or quantum computing change things. But we are at the end of what the current CPU model can bring, or at least we are not innovating nearly as fast as before.
You might disagree with taking the number of GFlops as a useful indication of how fast a computer is. However, in my opinion it's the only useful number, as a CPU is a data-processing machine. The fact that most software does not manage to use the increased computational power in newer CPUs does not mean the CPUs are not faster; it means that software isn't being optimized for newer hardware quickly enough. At a lower level, the speed of a processor should be measured by how quickly it can run operations on data, and that is what GFlops measures.

I also agree that graphics/scientific computations benefit the most so far from newer CPUs. That is partially because those are two applications that rely heavily on CPU speed, and therefore need to be optimized for new hardware (insofar as they run on CPUs; many scientific computations are now run on GPUs). Most other programs simply don't need the full speed of the CPU.

I can't say anything about neural networks, but about quantum computing: don't expect a working consumer quantum computer in the next 2 decades.
03-06-2016, 01:25 PM   #776
Site Supporter
Zygonyx's Avatar

Join Date: Jun 2011
Location: Ile de France
Posts: 3,018
QuoteOriginally posted by falconeye Quote
Wrt the discussion started by @Simen1 ...

His arguments are all sound and valid.

Speaking generally however, there is, for any given image quality and any given moment in time, a sweet spot (for sensor size) which delivers this image quality in the most cost-effective way (read, at the lowest possible price).

I tried to elaborate a bit about this topic in my blog.

Two things to observe:
  1. The sweet spot wanders towards larger sensor sizes over time.
  2. The sweet spot wanders towards larger sensor sizes when increasing the given image quality.
The reason for #2 is that beyond some point, the cost of implementing the lens grows much faster than the cost of implementing the sensor. Think of the cost of f/0.5 lenses ...

The reason for #1 is that the cost of silicon decreases over time, while the cost of glass decreases less rapidly or even increases.

To make things worse, #1 and #2 combine into a cumulative effect favoring larger sensor sizes over time, more rapidly than some people believe.

This is a general fact which can't be denied as it can be proven rigorously.

There are a few interesting corollaries to the above:
  • Pentax will vanish from the enthusiast market if they don't upgrade to FF at some moment in time (one may still argue about the correct moment though -- for many it is just right now, for some too early, and for some, including myself, about 4 years too late).
  • FourThirds will be adopted by products with lower image-quality standards (such as compacts) and will become extinct in the enthusiast market.
  • Because small lenses (and small cameras) are a most-wanted item, there will be such lenses for, e.g., full frame, in the future. Just like professional-grade F/4 primes and F/5.6 zooms. This is still a large gap in the market and IMHO, one of a few ways Pentax could differentiate and gain market share. That's also one of the strengths of the Leica line-up.
  • As the difference between FF and cropped 44x33 medium format is so small (0.8x crop factor), all current medium format vendors will have to provide an upgrade path to full medium format at an affordable price. Except maybe Leica, which replaces the Leica S by the Leica SL.
  • A fixed lens 44x33 medium format mirrorless camera must be imminent. The question is who delivers it first ...
  • There is an option ... that Canon and/or Nikon, when eventually jumping to professional-grade mirrorless, do this with cropped 44x33 medium format. As it would be affordable enough, saves their DSLR business from cannibalism, allows them to introduce a new mount with no questions asked, and bridges AF performance until multi-pixel AF has achieved professional tracking performance (which it eventually will -- btw, Sony just delivered a Dual-Pixel AF sensor to Samsung ...).
  • ... many more corollaries exist; think about it for a few minutes ...
I agree 100% with your analysis.
03-06-2016, 01:54 PM   #777
Loyal Site Supporter




Join Date: Mar 2009
Location: Southern Indiana
Photos: Gallery
Posts: 15,504
The problem that I have with equivalence (which is what we are talking about in veiled terms here) is that it tends to emphasize wide open performance to the exclusion of everything else. If you happen to be someone who shoots at f2.8 or f4 on APS-C most of the time and doesn't really want to shoot f1.4 on full frame, much less the same thing on medium format, then it doesn't really matter if you can get a 35mm f1 lens or not.

On the other hand, if the value of a system comes down to which one has the fastest lenses available, then Leica probably wins hands down.

I would just say that when I look at medium format photos, I see a couple of things that stand out: they have really nice transitions from in focus to out of focus, and they have an awful lot of detail without looking over-processed. Now, some of that is attributable to the excellent skills of those who purchase medium format cameras, some to the excellent glass (few of these images, if any, are shot at f4 or wider), and some to the quality of the sensors. I have a hard time sorting out how much is how much, but I would say that those who purchase medium format know what they are getting into and tend to have good skills before getting such a camera, and they get it more for the glass than for the sensors, even though those are excellent.
03-06-2016, 01:56 PM   #778
Pentaxian




Join Date: Jul 2009
Location: Tromsø, Norway
Photos: Albums
Posts: 948
QuoteOriginally posted by thibs Quote
Ever looked through a 645Z VF? No? Then you can't understand (this is serious and not trying to insult you or something). Do it but you'll cry looking in any other VF then
No, I haven't, but I have looked through the VF of a Pentax 6x7, a Bronica 6x7, a Hasselblad D3..something, and a few more film-based medium format cameras. It's quite a view! Just for the curiosity of it, I also tried a few large format cameras, 4x5 and 8x10 inch, and a gigantic camera where I sat on a sofa inside the camera. Picture here.
03-06-2016, 03:38 PM - 1 Like   #779
Pentaxian
falconeye's Avatar

Join Date: Jan 2008
Location: Munich, Alps, Germany
Photos: Gallery
Posts: 6,863
QuoteOriginally posted by biz-engineer Quote
As far as cost / image quality, I think Falc is right. But he does not comment about the size and weight of the glass to put in front of a larger sensor.
A statement about that was implied in my post.
Because the size and weight of the glass to put in front of a sensor is (to a first-order approximation) independent of the sensor size (assuming a given image quality), I focused on the cost of said glass. And that cost decreases with increasing sensor size.
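The "glass size is roughly independent of sensor size" argument rests on equivalence: lenses with the same field of view and the same entrance-pupil diameter produce comparable images on any format. A sketch (the `equivalent` helper and the 50mm f/1.4 starting point are illustrative; the crop factors are the usual published ones):

```python
def equivalent(focal_ff, fnum_ff, crop):
    """Focal length and f-number on a format with the given crop factor."""
    return focal_ff / crop, fnum_ff / crop

# Compare an FF 50mm f/1.4 with its APS-C and 44x33 MF equivalents.
for name, crop in [("APS-C", 1.5), ("FF", 1.0), ("44x33 MF", 0.79)]:
    f, n = equivalent(50, 1.4, crop)
    pupil = f / n  # entrance-pupil diameter in mm: identical on every format
    print(f"{name}: {f:.0f}mm f/{n:.2f}, entrance pupil {pupil:.1f}mm")
```

The entrance pupil (~35.7mm here) is the same in every row, which is why the front glass stays roughly the same size while the larger format gets away with a slower, cheaper f-number.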

QuoteOriginally posted by Nicolas06 Quote
Outside of a few areas (graphics, scientific computation), general-purpose programs don't gain much from this approach.
...
So I stand by my point. We are stagnating.
Even though this doesn't affect my argument, I disagree.

I agree that Moore's law does no longer apply. Point taken.

But Moore's law never was a law; it was a stupid extrapolation from observation, unlike the laws of physics.

As long as no law of physics needs to be broken, though (which has never been possible and never will be), there is no hard limit to technological progress, and no reason either why it should proceed in a linear (or exponential) fashion.

And fortunately, there is no law of physics which sets a lower limit on the space-time volume or energy required to do a computation. Even quantum physics sets no such barrier.

Which, btw, is unlike photography, because physics does indeed set a hard barrier on how well the best possible camera (in a given space-time volume) can perform.

So, there will be progress in computational hardware. But nobody said it will be easy. Maybe some new materials need to be researched; maybe it is sufficient to further shrink the volume and power consumption of current technology.

IMHO, drastically reducing power consumption alone will do the trick. Because wrt all other parameters, we already beat the human brain made from organic materials. But we are still way below its processing power (*) resulting from its 0.15 quadrillion synapses [http://postcog.ucd.ie/files/Pakkenberg%202003.pdf] (~300 Petaflops). CMOS has the same processing density, but it would simply melt down if scaled to the capacity of our brain (which is no lightweight construction either, with its separate circuitry for cooling and fuel) ...

Once that problem is solved, though, we can leave all other problems to them (i.e., those artificial brains). No way progress could stop there.

___
(*) The current #1 supercomputer (33 Petaflops, or 1/10 the speed of a single human brain) consumes 18 MW, or what 1 million brains consume, leaving a gap of 1:10,000,000 wrt energy efficiency ...
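The footnote's 1:10,000,000 figure can be recomputed from its own numbers (the 18 W per brain follows from the post's "18 MW for 1 million brains"):

```python
# Figures from the footnote: 33 PFlops / 18 MW supercomputer,
# ~300 PFlops brain at ~18 W (18 MW shared by 1 million brains).
super_flops_per_watt = 33e15 / 18e6
brain_flops_per_watt = 300e15 / 18
gap = brain_flops_per_watt / super_flops_per_watt
print(f"{gap:.1e}")  # 9.1e+06, i.e. roughly 1:10,000,000
```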

Last edited by falconeye; 03-06-2016 at 03:56 PM.
03-06-2016, 04:01 PM   #780
Pentaxian




Join Date: Jul 2009
Location: Tromsø, Norway
Photos: Albums
Posts: 948
QuoteOriginally posted by falconeye Quote
A fixed lens 44x33 medium format mirrorless camera must be imminent. The question is who delivers it first ...
There is an option ... that Canon and/or Nikon, when eventually jumping to professional-grade mirrorless, do this with cropped 44x33 medium format. As it would be affordable enough, saves their DSLR business from cannibalism, allows them to introduce a new mount with no questions asked, and bridges AF performance until multi-pixel AF has achieved professional tracking performance (which it eventually will -- btw, Sony just delivered a Dual-Pixel AF sensor to Samsung ...).
Two very good points. A couple of years ago I thought Samsung was going to mass-produce MF mirrorless cameras, with or without a mount. Now it could be many others, maybe even Pentax.

QuoteOriginally posted by Nicolas06 Quote
Only partially. People don't buy MF for high ISO; they don't necessarily buy it for more megapixels either, and they don't buy it for maximum aperture / shallow depth of field.
So, they are buying MF because they have the money and like large viewfinders? I just feel more certain that Pentax will release a full medium format camera soon, probably with the same 100Mp sensor as Phase One.

QuoteOriginally posted by Rondec Quote
The problem that I have with equivalence (which is what we are talking about in veiled terms here) is that it tends to emphasize wide open performance to the exclusion of everything else. If you happen to be someone who shoots at f2.8 or f4 on APS-C most of the time and doesn't really want to shoot f1.4 on full frame, much less the same thing on medium format, then it doesn't really matter if you can get a 35mm f1 lens or not.
I'm sorry if I contributed to that impression. I use the equivalence term more widely, i.e. when carefully trying to get the same (large enough) DoF at the same distance and field of view on different formats. When I claimed two stops larger apertures are available for FF than MF, I meant it in the context "even at the ends of the aperture and ISO scales, this holds true".
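The "two stops" claim mixes two contributions, which a short calculation separates (the f/1.4 vs f/2.8 fastest-lens comparison is illustrative, and 0.79 is the FF-to-44x33 diagonal ratio):

```python
import math

def stops_between(n1, n2):
    """Aperture difference in stops between two f-numbers."""
    return 2 * math.log2(n2 / n1)

# Fastest commonly available primes: f/1.4 on FF vs f/2.8 on 44x33 MF
lens_gap = stops_between(1.4, 2.8)          # 2.0 stops from the lens lineups
format_gap = stops_between(1.0, 1 / 0.79)   # ~0.68 stop from the format itself
print(round(lens_gap, 2), round(format_gap, 2))  # 2.0 0.68
```

The net equivalent-DoF advantage of the FF lineup is the difference of the two, roughly 1.3 stops, since the larger format claws back ~0.68 stop at any given f-number.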