MTF, Lens & Sensor Resolution
I’ve been ‘banging on’ about resolution, lens performance and MTF over the last few posts, so I’d like to start bringing all these various bits of information together with at least a modicum of simplicity.
If this is your first visit to my blog I strongly recommend you peruse HERE and HERE before going any further!
You might well ask the question “Do I really need to know this stuff – you’re a pro Andy and I’m not, so I don’t think I need to…”
My answer is “Yes you bloody well do need to know, so stop whinging – it’ll save you time and perhaps stop you wasting money…”
Words like ‘resolution’ do tend to get used out of context sometimes, and when you guys ‘n gals are learning this stuff things can get a mite confusing – and nowhere does terminology get more confusing than when we are talking ‘glass’.
But before we get into the idea of bringing lenses and sensors together I want to introduce you to something you’ve all heard of before – CONTRAST – and how it affects our ability to see detail, our lens’s ability to transfer detail, and our camera sensor’s ability to record detail.
Contrast & How It Affects the Resolving of Detail
In an earlier post HERE I briefly mentioned that the human eye can resolve 5 line pairs per millimeter, and the illustration I used to show those line pairs looked rather like this:
Now don’t forget, these line pairs are highly magnified – in reality each pair should be 0.2mm wide. These lines are easily differentiated because of the extreme contrast ratio between each line in a pair.
How far can contrast between the lines fall before we can’t tell the difference any more and all the lines blend together into a solid monotone?
Enter John William Strutt, the 3rd Baron Rayleigh…………
The Rayleigh Criterion basically stipulates that, for average human vision, each line in a pair remains ‘discernable’ only while the contrast ratio between the two lines is around 9% or above – that is, when each line pair is 0.2mm wide and viewed from 25cm. Obviously they are reproduced much larger here, hence you can see ’em!
However, it is said in some circles that DSLR sensors are typically limited to a 12% to 15% minimum line pair contrast ratio when it comes to discriminating between the individual lines.
Now before you start getting in a panic and misinterpreting this revelation you must realise that you are missing one crucial factor; but let’s just recap what we’ve got so far.
- A ‘line’ is a detail.
- But we can’t see one line (detail) without another line (detail) next to it that has a different tonal value (our line pair).
- There is a limit to the contrast ratio between our two lines, below which our lines/details begin to merge together and become less distinct.
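If it helps to pin numbers on those three points, here’s a minimal sketch (my own illustration, not part of the original argument) using the Michelson definition of modulation that MTF work is built on – the tone values are invented purely for the example:

```python
# A minimal sketch of the contrast ratio between the two lines in a pair,
# using the Michelson definition of modulation. Tone values are invented.

def modulation(light: float, dark: float) -> float:
    """Michelson contrast: (max - min) / (max + min), ranging 0.0 to 1.0."""
    return (light - dark) / (light + dark)

# Two lines that are nearly the same tone...
light, dark = 0.52, 0.48
m = modulation(light, dark)   # 0.04, i.e. a 4% contrast ratio

print(f"modulation = {m:.0%}")
print("discernable to the eye (Rayleigh ~9%):", m >= 0.09)
print("discernable to a typical sensor (~12-15%):", m >= 0.12)
```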
So, what is this crucial factor that we are missing? Well, it’s dead simple – the line pairs per millimeter (lp/mm) resolution of a camera sensor.
Now there’s something you won’t find in your camera’s ‘tech specs’ that’s for sure!
Sensor Line Pair Resolution
The smallest “line” that can be recorded on a sensor is 1 photosite in width – now that makes sense doesn’t it.
But in order to see that line we must have another line next to it, and that line must have a higher or lower tonal value to a degree where the contrast ratio between the two lines is at or above the low contrast limit of the sensor.
So now we know that the smallest line pair our sensor can record is 2 photosites/pixels in width – the physical width is governed by the sensor pixel pitch; in other words, the center-to-center spacing of the photosites.
In a nutshell, the lp/mm resolution of a sensor is 0.5x the number of pixels per millimeter – referred to as the Nyquist Rate – simply because we have to sample 2 pixels in order to see/resolve 1 line pair.
The maximum resolution of an image projected by the lens that can be captured at the sensor plane – in other words, the limit of what can be USEFULLY sampled – is the Nyquist Limit.
Let’s do some practical calculations:
Canon 1DX 18.1Mp
Imaging Area = 36mm x 24mm / 5184 x 3456 pixels/photosites OR LINES.
I actually do this calculation based on the imaging area diagonal
So sensor resolution in lp/mm = (pixel diagonal/physical diagonal) x 0.5 = 72.01 lp/mm
Nikon D4 16.2Mp = 68.62 lp/mm
Nikon D800 36.3Mp = 102.33 lp/mm
PhaseOne P40 40Mp medium format = 83.15 lp/mm
PhaseOne IQ180 80Mp medium format = 96.12 lp/mm
Nikon D7000 16.2Mp APS-C (DX) 4928×3264 pixels; 23.6×15.6mm dimensions = 104.62 lp/mm
Canon 1D IV 16.1Mp APS-H 4896×3264 pixels; 27.9×18.6mm dimensions = 87.74 lp/mm
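If you want to run these numbers yourself, here’s a small Python sketch of the diagonal method described above. The pixel and sensor dimensions are the manufacturers’ published figures, and slight rounding differences from my values above are to be expected:

```python
import math

def sensor_lp_per_mm(px_w, px_h, mm_w, mm_h):
    """Nyquist-limited sensor resolution in lp/mm, worked out along
    the imaging-area diagonal as described above."""
    pixel_diagonal = math.hypot(px_w, px_h)      # diagonal in pixels
    physical_diagonal = math.hypot(mm_w, mm_h)   # diagonal in millimeters
    return 0.5 * pixel_diagonal / physical_diagonal

cameras = {
    "Canon 1DX":   (5184, 3456, 36.0, 24.0),
    "Nikon D800":  (7360, 4912, 35.9, 24.0),
    "Nikon D7000": (4928, 3264, 23.6, 15.6),
    "Canon 1D IV": (4896, 3264, 27.9, 18.6),
}

for name, spec in cameras.items():
    print(f"{name}: {sensor_lp_per_mm(*spec):.2f} lp/mm")
```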
Taking the crackpot D800 as an example, that 102.33 lp/mm figure means that the sensor is capable of resolving 204.66 lines, or points of detail, per millimeter.
I say crackpot because:
1. The Optical Low Pass Filter “fights” against this high degree of resolving power
2. This resolving power comes at the expense of S/N ratio
3. This resolving power comes at the expense of an earlier onset of diffraction
4. The D800E is a far better proposition because it negates 1. above but it still leaves 2. & 3.
5. Both sensors would purport to be “better” than even an IQ180 – newsflash – they ain’t; and not by a bloody country mile! But the D800E is an exceptional sensor as far as 35mm format (36×24) sensors go.
A switch to a 40Mp medium format is BY FAR the better idea.
Before we go any further, we need a reality check:
In the scene we are shooting, and with the lens magnification we are using, can we actually “SEE” detail as small as 1/204th of a millimeter?
We know that detail finer than that exists all around us – that’s why we do macro/micro photography – but shooting a landscape with a 20mm wide angle where the nearest detail is 1.5 meters away ??
And let’s not forget the diffraction limit of the sensor and the attendant reduction in depth of field that comes with 36Mp+ crammed into a 36mm x 24mm sensor area.
The D800 gives you something with one hand and takes it away with the other – I wouldn’t give the damn thing house-room! Rant over………
Anyway, getting back to the matter at hand, we can now see that the MTF values of 10 and 30 lp/mm quoted by the likes of Nikon and Canon et al bear little or no relation to the resolving power of their sensors – as I said in my previous post HERE – they are meaningless.
The information we are chasing after is all about the lens:
1. How well does it transfer contrast? Because it’s contrast that allows us to “see” the lines of detail.
2. How “sharp” is the lens?
3. What is the “spread” of 1. and 2. – does it perform equally across its FoV (field of view), or is there a monstrous fall-off of 1. and 2. between 12 and 18mm from the center on an FX sensor?
4. Does the lens vignette?
5. What is its CA performance?
Now we can go to data sites on the net such as DXO Mark where we can find out all sorts of more meaningful data about the performance of a potential lens purchase.
But even then, we have to temper what we see because they do their testing using Imatest or something of that ilk, and so the lens performance data is influenced by sensor, ASIC and basic RAW file demosaicing and normalisation – all of which can introduce inaccuracies in the data; in other words they use camera images in order to measure lens performance.
The MTF 50 Standard
Standard MTF (MTF 100) charts do give you a good idea of the lens CONTRAST transfer function, as you may already have concluded. They begin by measuring targets with the highest degree of modulation – black to white – and then illustrate how well that contrast has been transferred to the image plane, measured along a corner radius of the frame/image circle.
As you can see, contrast decreases with falling transfer function value until we get to MTF 0.1 (10%) – here we can guess that if the value falls any lower than 10% then we will lose ALL “perceived” contrast in the image and the lines will become a single flat monotone – in other words we’ll drop to 9% and hit the Rayleigh Criterion.
It’s somewhat debatable whether or not sensors can actually discern a 10% value – as I mentioned earlier in this post, some favour a value more like 12% to 15% (0.12 to 0.15).
Now then, here’s the thing – what dictates the “sharpness” of edge detail in our images? That’s right – EDGE CONTRAST. (Don’t mistake this for overall image contrast!)
Couple that with:
1. My well-used adage of “too much contrast is thine enemy”.
2. “Detail” lies in midtones and shadows; we want to see that detail, and in order to see it the lens has to ‘transfer’ it to the sensor plane.
3. The only “visual” I can give you of MTF 100 would be something like power lines silhouetted against the sun – even then you would underexpose the sun, so, if you like, the MTF would still be sub 100.
Please note: 3. above is something of a ‘bastardisation’ and certain so-called experts will slag me off for writing it, but it gives you guys a view of reality – which is the last place some of those aforementioned experts will ever inhabit!
Hopefully you can now see that maybe measuring lens performance with reference to MTF 50 (50%, 0.5) rather than MTF 100 (100%, 1.0) might be a better idea.
Manufacturers know this but won’t do it, and the likes of Nikon can’t do it even if they wanted to, because their published MTF figures are calculated rather than measured – they use a damn calculator!
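To make the MTF 50 idea concrete, here’s a small sketch with invented measurements: given a handful of contrast transfer values at increasing spatial frequencies, we interpolate to find the frequency at which the contrast transfer falls to 50% – that single figure is the sort of number an MTF 50 test quotes:

```python
# A sketch with invented measurements: find the spatial frequency (lp/mm)
# at which a lens's contrast transfer drops to 50% (MTF 50).

measurements = [
    (10, 0.92),
    (20, 0.81),
    (30, 0.68),
    (40, 0.54),
    (50, 0.41),
]  # (frequency in lp/mm, measured MTF) pairs - made up for illustration

def mtf50(samples):
    """Linear interpolation to the frequency where MTF crosses 0.5."""
    for (f1, m1), (f2, m2) in zip(samples, samples[1:]):
        if m1 >= 0.5 >= m2:
            return f1 + (m1 - 0.5) * (f2 - f1) / (m1 - m2)
    return None  # never dropped to 50% within the measured range

print(f"MTF 50 is roughly {mtf50(measurements):.1f} lp/mm")
```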
Don’t be trapped into thinking that contrast equals “sharpness” though; consider the two diagrams below (they are small because at larger sizes they make your eyes go funny!).
In the first diagram the lens has RESOLVED the same level of detail (the same lp/mm) in both cases, and at pretty much the same contrast transfer value; but the detail is less “sharp” on the right.
In the lower diagram the lens has resolved the same level of detail with the same degree of “sharpness”, but with a much reduced contrast transfer value on the right.
Contrast is an AID to PERCEIVED sharpness – nothing more.
I actually hate that word SHARPNESS; it’s a nasty word because it’s open to all sorts of misconceptions by the uninitiated.
A far more accurate term is ACUTANCE.
So now hopefully you can see that LENS RESOLUTION is NOT the same as lens ACUTANCE (perceived sharpness..grrrrrr).
Seeing as it is possible to have a lens with a higher degree of resolving power but a lower degree of acutance, you need to be careful – low acutance tends to make details blur into each other even at high contrast values, which tends to negate the positive effects of the resolving power. (Read as CHEAP LENS!)
Lenses need to have high acutance – they need to be sharp! We’ve got enough problems trying to keep the sharpness once the sensor gets hold of the image, without chucking it a soft one in the first place – and I’ll argue this point with the likes of Mr. Rockwell until the cows come home!
Things We Already Know
We already know that stopping down the aperture increases Depth of Field; and we already know that we can only do this to a certain degree before we start to hit diffraction.
What does increasing DoF do exactly? It increases ACUTANCE is what it does – exactly!
Yes it gives us increased perceptual sharpness of parts of the subject in front and behind the plane of sharp focus – but forget that bit – we need to understand that the perceived sharpness/acutance of the plane of focus increases too, until you take things too far and go beyond the diffraction limit.
And as we already know, that diffraction limit is dictated by the size of photosites/pixels in the sensor – in other words, the sensor resolution.
So the diffraction limit has two effects on the MTF of a lens:
- The diffraction limit changes with sensor resolution – you might get away with f14 on one sensor, but only f9 on another.
- All this goes “out the window” if we talk about crop-sensor cameras because their sensor dimensions are different.
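As a rough illustration of the first point (and this is only a rule of thumb built on my own assumptions, not gospel): the Airy disk diameter grows with the f-number as roughly 2.44 × wavelength × N, and diffraction is commonly said to start ‘biting’ once that disk spans a couple of pixel widths or so. A quick sketch:

```python
# A rule-of-thumb sketch. My assumptions: green light (~550 nm) and
# diffraction becoming visible once the Airy disk diameter reaches
# roughly 2.5x the pixel pitch - a fudge factor chosen to land near
# the f14 / f9 sort of figures mentioned above.

WAVELENGTH_MM = 550e-6   # ~550 nm green light, in millimeters

def pixel_pitch_mm(px_w, mm_w):
    """Approximate pixel pitch from horizontal pixel count and sensor width."""
    return mm_w / px_w

def diffraction_limited_fstop(pitch_mm, airy_to_pitch=2.5):
    """F-number at which the Airy disk diameter (2.44 * wavelength * N)
    reaches 'airy_to_pitch' times the pixel pitch."""
    return airy_to_pitch * pitch_mm / (2.44 * WAVELENGTH_MM)

for name, px_w, mm_w in [("Canon 1DX", 5184, 36.0), ("Nikon D800", 7360, 35.9)]:
    pitch = pixel_pitch_mm(px_w, mm_w)
    print(f"{name}: pitch ~{pitch * 1000:.1f} microns, "
          f"diffraction starts biting around f/{diffraction_limited_fstop(pitch):.0f}")
```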
We all know about “loss of wide angles” with crop sensors – if we put a 28mm lens on an FX body and like the composition, then switch to a 1.5x crop body, we have to stand further away from the subject in order to achieve the same composition.
That’s good from a DoF PoV because DoF for any given aperture increases with distance; but from a lens resolving power PoV it’s bad – detail that the lens only needed to render at 50 lp/mm now effectively has to be rendered at 75 lp/mm, so it’s harder for the lens to resolve it, even if the sensor’s resolution is capable of doing so.
There is yet another way of quantifying MTF – just to confuse the issue for you – and that is line pairs per frame size, usually based on image height and denoted as lp/IH.
Imatest uses MTF 50, but quotes the frequencies not as lp/mm, or even lp/IH, but in line widths per image height – LW/IH!
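The conversions between these units are simple enough if you need to compare figures quoted in different ways – here’s a quick sketch, assuming a 24mm image height for the full-frame example:

```python
# Converting between the common MTF frequency units. The 24 mm image
# height is an assumed full-frame (FX) example.

IMAGE_HEIGHT_MM = 24.0

def lp_per_mm_to_lp_per_ih(lp_mm, image_height_mm=IMAGE_HEIGHT_MM):
    """Line pairs per millimeter -> line pairs per image height."""
    return lp_mm * image_height_mm

def lp_per_ih_to_lw_per_ih(lp_ih):
    """Line pairs per image height -> line widths per image height
    (each line pair is two line widths)."""
    return lp_ih * 2

lp_mm = 50
lp_ih = lp_per_mm_to_lp_per_ih(lp_mm)   # 1200 lp/IH
lw_ih = lp_per_ih_to_lw_per_ih(lp_ih)   # 2400 LW/IH
print(f"{lp_mm} lp/mm = {lp_ih:.0f} lp/IH = {lw_ih:.0f} LW/IH on a 24mm-high sensor")
```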
Alas, there is no single source of the empirical data we need in order to evaluate pure lens performance anymore. And because the outcome of any particular lens’s performance in terms of acutance and resolution is now so inextricably intertwined with that of the sensor behind it, you, as lens buyers, are left with a confusing myriad of test results all freely available on the internet.
What does Uncle Andy recommend? – well a trip to DXO Mark is not a bad starting point all things considered, but I do strongly suggest that you take on board the information I’ve given you here and then scoot over to the DXO test methodology pages HERE and read them carefully before you begin to examine the data and draw any conclusions from it.
But do NOT make decisions just on what you see there; there is no substitute for hands-on testing with your camera before you go and spend your hard-earned cash. Proper testing and evaluation is not as simple as you might think, so it’s a good idea to perhaps find someone who knows what they are doing and is prepared to help you out. Do NOT ask the geezer in the camera shop – he knows bugger all about bugger all!
Do Sensors Out Resolve Lenses?
Well, that’s the loaded question isn’t it – you can get very poor performance from what is ostensibly a superb lens, and to a degree vice versa.
It all depends on what you mean by the question, because in reality a sensor can only resolve what the lens chucks at it.
If you somehow chiseled the lens out of your iPhone and Sellotaped it to your shiny new 1DX then I’m sure you’d notice that the sensor did indeed out resolve the lens – but if you were a total divvy who didn’t know any better then in reality all you’d be aware of is that you had a crappy image – and you’d possibly blame the camera, not the lens – ‘cos it took way better pics on your iPhone 4!
There are so many external factors that affect the output of a lens – available light, subject brightness range, angle of subject to the lens axis to name but three. Learning how to recognise these potential pitfalls and to work around them is what separates a good photographer from an average one – and by good I mean knowledgeable – not necessarily someone who takes pics for a living.
I remember when the 1DX specs were first ‘leaked’ and everyone was getting all hot and bothered about having to buy the new Canon glass because the 1DX was going to out resolve all Canon’s old glass – how crackers do you need to be nowadays to get a one way ticket to the funny farm?
If they were happy with the lens’s optical performance pre 1DX then that’s what they would get post 1DX…duh!
If you still don’t get it then try looking at it this way – if lenses out resolve your sensor then you are up “Queer Street” – what you see in the viewfinder will be far better than the image that comes off the sensor, and you will not be a happy camper.
If on the other hand, our sensors have the capability to resolve more lines per millimeter than our lenses can throw at them, and we are more than satisfied with our lenses’ resolution and acutance, then we would be in a happy place, because we’d be wringing the very best performance from our glass – always assuming we know how to ‘drive the juggernaut’ in the first place!