In this video I show you my usual calibration procedure for my monitor, thus ensuring a perfect foundation for good color management.
I’m using Eizo ColorNavigator 6 software and the X-Rite ColorMunki Photo spectrophotometer, which I seem to have had forever and which still performs better than nearly any other calibrator on the market today.
The final part of the procedure is Profile Validation to ISO 12646 in order to obtain the DeltaE2000 values of the new profile.
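If you’ve ever wondered what those DeltaE values actually measure, here’s a tiny Python sketch of the idea – the Lab numbers are made up, and I’m showing the simple DeltaE76 distance rather than the perceptually weighted DeltaE2000 that ColorNavigator reports, but the principle is the same: how far the measured screen colour sits from the target patch.

```python
# A minimal sketch of a colour-difference check, using made-up values.
# Profile validation compares the Lab value each patch *should* produce
# with the Lab value the spectrophotometer actually measures on screen.
import math

reference = (50.0, 2.0, -3.0)   # hypothetical target patch (L*, a*, b*)
measured  = (50.4, 2.3, -3.5)   # hypothetical value read off the screen

# DeltaE76 is the plain Euclidean distance in Lab space; DeltaE2000 adds
# weighting for lightness, chroma and hue, but the idea is identical:
# smaller number = closer match.
de76 = math.dist(reference, measured)
print(f"DeltaE76 = {de76:.2f}")   # values below ~2 are generally considered a good match
```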
As ever folks I hope you find this content useful, and if you have any questions then please just ask!
Many thanks to all my Patreon members, without whose contributions making this content would be difficult to say the least.
In this video I demonstrate how to remove a tripod shadow from your images using Frequency Separation.
When a shadow is very distinct it can be removed with a modified luminosity mask, but sometimes shadows will be too indistinct to easily isolate with a mask.
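For the curious, here’s a minimal sketch of what the frequency split actually does – it assumes a single-channel image already loaded as a float NumPy array (in Photoshop the equivalent steps are a Gaussian Blur for the low-frequency layer and an Apply Image subtraction for the high-frequency layer).

```python
# A minimal sketch of a frequency-separation split, assuming a greyscale
# image held as a float NumPy array in the 0..1 range.
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_separate(image: np.ndarray, radius: float = 8.0):
    """Split an image into low frequency (tone/colour) and high frequency (texture)."""
    low = gaussian_filter(image, sigma=radius)   # blurred copy = broad tones, including the shadow
    high = image - low                           # residual = fine texture and detail
    return low, high

# The tripod shadow lives almost entirely in the low-frequency layer, so it can
# be lightened or painted out there without destroying the texture held in 'high'.
# Recombining is simply:  reconstructed = corrected_low + high
```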
Now obviously this method is a bit fiddly, but seriously, on this image of Phil’s, trying to use a luminosity mask selection would be impossible.
I’ve not been as careful or precise in the video as you would need to be when doing it on your own image – if I’d done the job to 100% perfection the video would have been 3 times as long!
Remember, subtlety is the key and as with any retouching it needs to be done slowly and methodically.
The next shots show how good the results are after another 10 minutes of careful work:
A few more refinements to brush opacity together with a bit more work with the spot healing brush to break up the more persistent dark areas and this shot would be good to go.
Adobe always issue a list of corrections, bug fixes and improvements to accompany an update.
I know a lot of people who read the update synopsis, see nothing they think applies to them, and so skip the update.
I also know a lot of people who just never bother – until the glorious day comes when they DO decide to run an update from 4 or 5 versions behind – and all does not go well.
Here’s why I ALWAYS recommend a CURRENT VERSION UPDATE such as the latest v20.0.1 to v20.0.2. (Please note I’m talking version updates NOT upgrades).
Every time an update is applied, the internal architecture of Ps is altered by Adobe, and the changes are often far more expansive than those listed in the update synopsis.
Say they make a change to Content-Aware Fill – you think one thing has changed, but to implement that one change they may have had to alter a few hundred lines or more of code in the background.
For all I or you know, the v20.0.2 update might well have changed thousands of code lines from those of v20.0.1.
The previous version of Photoshop ran from v19.0.0 all the way to v19.1.7 so there were 10 updates to PsCC2018, all of varying sizes and levels of complexity.
When Adobe created the v19.1.7 update for CC2018 it was designed to ‘overlay’ the v19.1.6 architecture – NOT that of v19.0.0.
Making small gradual changes to anything is far safer than trying to make large changes, and so I always recommend you apply every update to Photoshop and Lightroom when they are available.
Updates are also designed to fit with the current version of your computer’s operating system – indeed some updates are built to improve the Photoshop-to-OS interface itself.
In all fairness to Adobe they have a massive job to do in making Photoshop work across so many platform variations. Don’t forget that PC users have a huge variety of machinery in terms of CPU, GPU, Main Board and RAM and all these variations can and do impinge upon Photoshop functionality.
By comparison Mac users are a lot easier for Adobe to cope with, but it all adds to the workload mix.
Many Mac users, myself included, had stuck with the fully updated El Capitan OS for a long time after the launch of first Sierra and later High Sierra OS versions due to suspect colour management policies within the latter versions.
As long as Apple and Adobe both continued to support the latest version of El Capitan – which worked to virtual perfection – there was no desire to change; after all, if it ain’t broke, why fix it?
But before October last year (2018) the Photoshop dev teams had reached a breaking point with backwards OS compatibility support for the forthcoming v20 release, and had to draw a line in the sand, dropping support for certain Windows versions and all pre-Sierra Mac OSX versions.
So fair play to ’em I say, because their job is a lot harder than most users care to imagine.
Adobe are not in the habit of releasing version updates for Photoshop that break anything, so take my advice and apply them as they appear; this way you are ALWAYS ‘on’ the current version. (Alas we can’t always say the same for version updates to Lightroom!).
Where they have been known to screw up in the past is in the premature release of new VERSIONS, or what used to be classed as version UPGRADES.
If you want to be super-cautious about either updates or indeed upgrades then my advice is always the same – back up your system drive before applying the update or upgrade. That way you can always roll-back if you need to.
Keeping your software application versions up to date is always the best policy.
If you have Photoshop on your system then go and update it NOW.
Don’t just think “I’ll do it later…..” because I know a lot of folk for whom later never comes!
And remember, if you have a Creative Cloud subscription then Lightroom, Photoshop and Camera Raw all need to be up to date and in sync with each other at all times in order for the Lightroom>Photoshop>Lightroom workflow to function correctly.
For God’s sake! Another bloody idiot YouTuber uploaded a video the other day trying out the Fuji GFX 50, and just moments into said video he came out with the same old pile of junk which amounts to “it’s a bigger sensor so it soaks up more light”.
So now his 76,000+ subscribers are misled into believing something that is plain WRONG.
For anyone who didn’t manage to grasp what I was saying in my previous post HERE let’s try attacking this crackpot concept from a different angle.
Devotees of this farcical belief quite often liken larger sensors to bigger windows in a room.
“A bigger window lets in more light” they say.
Erm…no it doesn’t. A bigger window has an increased SURFACE AREA that just lets in a larger area of THE SAME LIGHT VALUE.
A 6 foot square pane of glass has a transmission value that is EXACTLY the same as a 3 foot square pane of the same glass, therefore a ‘BIGGER WINDOW’ onto the outside world does NOT let in more light.
Imagine we have a room with a 6×6 foot window and a 3×3 foot window in one wall. Now go and press your nose up to both windows – does the world outside look any different?
No, of course it doesn’t.
The only property that ‘window size’ has any bearing on is the area of the ‘illumination footprint’.
So basically the window analogy has ZERO bearing on the matter!
What lets light into the camera is the LENS, not the damn sensor!
The ‘illuminant value’ – or Ev – of the light leaving the back of the lens and entering the imaging plane DOES NOT CHANGE if we swap out our FX body for a crop body – DOES IT!!??!!
So why do these bloody idiots seem to think physics changes when we put a bigger sensor behind the lens? It’s pure abject stupidity.
The imaging area of a sensor has ZERO effect on the intensity of light striking it – that is something that is only influenced by aperture (intensity) and shutter speed (time).
With digital photography, exposure is ‘per unit area’, NOT total area. A DSLR sensor is NOT a single unit but an amalgamation of individual units called PHOTOSITES, or pixels. Hence it is the photosite area that governs exposure, NOT the total sensor area.
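If you prefer numbers to words, here’s a rough sketch of that ‘per unit area’ point – the illuminance and shutter speed figures are made up purely for illustration.

```python
# Illustrative only: the same illuminance falls on every square millimetre of
# the imaging plane, regardless of how big that plane happens to be.
illuminance_lux = 1000.0      # hypothetical light level delivered by the lens
shutter_s = 1 / 125           # hypothetical shutter speed

fx_area_mm2 = 36.0 * 24.0     # 'full frame' imaging area
dx_area_mm2 = 23.5 * 15.6     # typical APS-C imaging area

# Exposure (lux-seconds) per unit area is identical for both sensors:
exposure_per_mm2 = illuminance_lux * shutter_s
print(f"Exposure per mm^2: {exposure_per_mm2:.2f} lx.s (same for FX and DX)")

# Only the *total* light collected over the whole plane differs, and that total
# is not what sets exposure - each photosite only ever sees its own little patch.
print(f"Total flux over FX area: {illuminance_lux * fx_area_mm2 / 1e6:.3f} lm")
print(f"Total flux over DX area: {illuminance_lux * dx_area_mm2 / 1e6:.3f} lm")
```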
There is a sensor on the market that blows all this ‘sucks in more light’ crap clean out of the water, and that sensor is the Sony IMX224MQV. This is a 1/3-type sensor with a diagonal of just 6.09mm and 1.27Mp. By definition this is one hell of a small ‘window’, yet it can ‘see’ light down to 0.005 lux with good enough SNR to allow the image processor to capture 120 10-bit images per second.
A camera’s ‘window onto the world’ is the lens – end of!
Imagine going back to film for a moment – correct exposure value was the same for say Ilford FP4 irrespective of whether you were using 35mm, 120/220 roll film, 5×4 or 10×8 sheet film.
The size of the recording media within the imaging plane was and still is completely irrelevant to exposure.
Bigger recording media never have and never will ‘suck in’ more light, because they can’t suck in more light than the lens is transmitting!
The only properties of the sensor within the imaging area that WILL change how it reacts to the light transmitted by the lens are:
Photosite surface area – number of megapixels
Sensor construction – FSI vs BSI
Micro lens design
CFA array absorption characteristics
After my previous post some stroppy idiot emailed me saying that Ken Wheeler, AKA The Angry Photographer, says the Nikon D850 is a full frame Nikon D500, and that because the D850 is an FX camera and has better dynamic range, this proves I’m talking bollocks.
Well, Ken never said this in terms of anything other than approximate pixel density – he’s not that stupid, and dick-heads should listen more carefully!
The D500 uses an FSI sensor while the D850 uses a BSI sensor with a totally different micro lens design, A/D converter and image processor.
Out of the four characteristics listed above, three are DRASTICALLY different between the two cameras and the fourth is different enough to have definite implications – so you cannot compare them ‘like for like’.
But using the same lens, shutter speed, ISO and aperture while imaging a flat white or grey scene the sensor in a D850 will ‘see’ no higher light value than the sensor in a D500.
Why? Because the light emanating from the scene doesn’t change and neither does the light transmitted by the lens.
I own what was the best light meter on the planet – the Sekonic 758. Nowhere does it have a sensor size function/conversion button on it, and neither does its successor, the 858!
There are numerous advantages and disadvantages between bigger and smaller sensors but bigger ‘gathering’ or ‘soaking up’ more light isn’t one of them!
So the next time you hear someone say that increased size of the imaging area – bigger sensor size – soaks up more photons you need to stop listening to them because they do not know what the hell they’re talking about.
But if you choose to believe what they say then so be it – in the immortal words of Forrest Gump, “Momma says stoopid is as stoopid does…”
Post Script:
Above you can see the imaging area for various digital sensor formats. You can click the image to view it bigger.
Each imaging area is accurately proportional to the others.
Compare FX to the PhaseOne IQ4. Never, repeat never think that any FX format sensor will ever deliver the same image fidelity as a 645 sensor – it won’t.
Why?
Because look at how much the fine detail in a scene has to be crushed down by the lens to make it ‘fit’ into the sensor imaging area on FX compared to 645.
“Andy, you’re talking crap!” Am I? Why do you think the world’s top product and landscape photographers shoot medium format digital?
Here’s the skinny – it’s not because they can afford to, but rather they can’t afford NOT TO.
As for the GFX50 – its imaging area is around 66% that of true MF and it’s smaller than a lot of its ‘wannabe’ owners imagine.
Sensor Size Myth – “A bigger sensor gathers more light.”
If I hear this crap one more time either my head’s going to explode or I’m going to do some really nasty things to someone!
A larger sensor size does NOT necessarily gather any more light than a smaller sensor – END OF!
What DOES gather more light is BIGGER PHOTOSITES – those individual light receptors that cumulatively ‘make up’ the photosensitive surface plane of our camera sensor.
Above we have two fictional sensors, one with smaller physical dimensions and one with larger dimensions – the bottom one is a ‘larger sensor size’ than the top one, and the bottom one has TWICE as many photosites as the top one (analogous to more megapixels).
But the individual photosites in BOTH sensors are THE SAME SIZE.
Ignoring the factors of:
Micro Lens design
Variations in photosite design such as resistivity
Wiring Substrate
SNR & ADC
the photosites in both sensors will have exactly the same pixel pitch, reactivity to light, saturation capacity and base noise level.
However, if we now try to cram the larger sensor’s photosite count (megapixels) into the area of the SMALLER sensor – to increase its resolution – we end up with SMALLER photosites.
We have a HIGHER pixel resolution but this comes with a multi-faceted major penalty:
Decreased Dynamic Range
Increased susceptibility to specular highlight clipping
Lower photosite SNR (signal to noise ratio)
Increased susceptibility to diffraction – f-stop limiting
And of course EXACTLY the same penalties are incurred when we increase the megapixel count of LARGER sensors too – the megapixel race – fueled by FOOLS and KNOW-NOTHING IDIOTS and accommodated by camera manufacturers trying to make a profit.
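To put some rough numbers on the photosite-size penalty, here’s a quick sketch using a made-up APS-C-sized sensor at two different pixel counts.

```python
# Rough illustration: squeezing more photosites into the same area shrinks each one.
import math

def photosite_area_um2(width_mm: float, height_mm: float, megapixels: float) -> float:
    """Approximate area of a single photosite in square microns."""
    sensor_area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return sensor_area_um2 / (megapixels * 1e6)

# Hypothetical APS-C-sized sensor (23.5 x 15.6 mm) at two different resolutions:
for mp in (12, 24):
    area = photosite_area_um2(23.5, 15.6, mp)
    pitch = math.sqrt(area)
    print(f"{mp} Mp on 23.5x15.6 mm -> ~{area:.1f} um^2 per photosite (~{pitch:.1f} um pitch)")

# Doubling the megapixel count over the same area halves the photosite area,
# which is where the dynamic-range, SNR and diffraction penalties come from.
```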
But this perennial argument that a sensor behaves like a window is stupid – it doesn’t matter if I look outside through a small window or a big one, the light value of the scene outside is the same.
Just because I make the window bigger the intensity of the light coming through it does NOT INCREASE.
And the ultimate proof of the stupidity and futility of the ‘big window vs small window’ argument lies with the ‘proper photographers’ like Ben Horne, Nick Carver and Steve O’Nions, to name but three – those who shoot FILM!
A 10″x8″ sheet of Provia 100 has exactly the same exposure characteristics as a roll of 35mm or 120/220 Provia 100, and yet the 10″x 8″ window is 59.73x the size of the 35mm window.
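That 59.73x figure is simply the ratio of the two imaging areas, which you can check in a couple of lines.

```python
# Quick check of the area ratio quoted above.
sheet_10x8_mm2 = (10 * 25.4) * (8 * 25.4)   # 10"x8" sheet film: 254 x 203.2 mm
frame_35mm_mm2 = 36 * 24                     # standard 35mm frame

ratio = sheet_10x8_mm2 / frame_35mm_mm2
print(f"~{ratio:.1f}x the 35mm frame area")  # matches the ~59.73x figure in the text
```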
And don’t even get me started on the other argument the ‘bigger = more light’ idiots use – that of the solar panel!
“A bigger solar panel pumps out more volts because it gathers more light, so a bigger sensor gathers more light and so must pump out better images…”
What a load of shite…………
Firstly, SPs are cumulative and they increase their ‘megapixel count’ by growing in physical dimensions, not by making their ‘photosites’ smaller.
But if you cover half of one with a thick tarpaulin then the cumulative output of the panel drops dramatically!
Also, we want SPs to hit their clip point for maximum voltage generation (the clip point being the point where more light does NOT produce more volts!).
Our camera sensor CANNOT be thought of in the same way:
We are not interested in a cumulative output, and we don’t want all the photosites on our sensors to ‘max out’, otherwise we’ll have no tonal variation in our image, will we?!
The shot above is from a D800E fitted with a 21mm prime, ISO 100 and 2 secs @ f/13.
If I’d shot this with the same lens on the D500 and framed the same composition, I’d have had to use a SHORTER exposure to prevent the highlights from clipping.
But if bigger sensors gathered more light (FX gathering more than DX) I’d theoretically have had to expose LONGER… and that would have been a disaster.
Seriously folks, when it comes to sensor size bigger ones (FX) do not gather more light than smaller (DX) sensors.
It’s not the sensor total area that does the light gathering, but the photosites contained therein – bigger photosites gather more light, have better SNR, are less prone to diffraction and result in a higher cumulative dynamic range for the sensor as a whole.
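The SNR point drops straight out of photon shot-noise statistics – a quick sketch with made-up photon counts.

```python
# Photon shot noise follows Poisson statistics: a photosite's SNR ~ sqrt(photons captured).
import math

small_site_photons = 10_000                      # hypothetical photons caught by a small photosite
big_site_photons = 2 * small_site_photons        # a photosite with twice the area catches ~2x the photons

snr_small = math.sqrt(small_site_photons)
snr_big = math.sqrt(big_site_photons)

print(f"Small photosite SNR ~ {snr_small:.0f}")
print(f"Big photosite SNR   ~ {snr_big:.0f}  (~{snr_big / snr_small:.2f}x better)")
```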
Do NOT believe anyone anywhere on any website, forum or YouTube channel who tells you any different, because they are plain WRONG!
Where does this shite originate from you may ask?
Well, a while back FX DSLR cameras didn’t exist and everything from Canon and Nikon was APS-C 1.5x or 1.6x, or APS-H 1.3x. Canon was first with an FX digital, then Nikon joined the fray with the D3.
Prior to the D3 we Nikon folk had the D300 DX, which was 12.3Mp with a photosite area of 30.36 µm².
The D3 FX came along with 12.1Mp but with a photosite area of 70.9 µm².
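Those two photosite figures fall straight out of the published sensor dimensions (roughly 23.6×15.8mm for the D300 and 36×23.9mm for the D3) – a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the photosite areas quoted above,
# using approximate published sensor dimensions.
d300_area_um2 = (23.6 * 1000) * (15.8 * 1000) / 12.3e6   # ~30.3 um^2
d3_area_um2   = (36.0 * 1000) * (23.9 * 1000) / 12.1e6   # ~71.1 um^2

print(f"D300 photosite area ~ {d300_area_um2:.1f} um^2")
print(f"D3 photosite area   ~ {d3_area_um2:.1f} um^2  (over twice the area per pixel)")
```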
Better in low light than its DX counterpart due to these MASSIVE photosites, it gave the dick heads, fools and know-nothing idiots the crackpot idea that a bigger sensor size gathers more light – and you know what… it stuck; and for some there’s no shifting it!
Hope this all makes sense folks.
Don’t forget, any questions or queries then just ask!
If you feel I deserve some support for putting this article together then please consider joining my membership site over on Patreon by using the link below.
Alternatively you could donate via PayPal to tuition@wildlifeinpixels.net
If you are not yet a member of my Patreon site then please consider it, as members get benefits, with more membership perks planned over the next 3 months. Your support would be very much appreciated and rewarded.
Before I go, there’s a new video up on my YouTube Channel showing the sort of processing video I do for my Patreon Members.
You can see it here (it’s 23 minutes long so be warned!):
Please leave a comment on the video if you find it useful, and if you fancy joining my other members over on Patreon then I could be doing these for you too!
Two Blend Modes in Photoshop EVERY Photographer Should Know!
The other day one of my members over on my Patreon suggested I do a video on Blending Modes in Photoshop.
Well, that would take a whole heap of time as it’s quite a big subject because Blending Modes don’t just apply to layers. Brushes of all descriptions have their own unique blend modes, and so do layer groups.
There is no need to go into a great deal of detail over blend modes in order for you to start reaping their benefits.
There are TWO blend modes – Multiply and Screen – which you can start using straight away to vary the apparent exposure of your images.
And seeing as my last few videos have been concerned with exposing for highlights and ETTR in general, the use of the Multiply Layer Blending Mode will be clear to see once you’ve watched the video.
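For reference, Multiply and Screen are both dead simple per-pixel formulas – here’s a minimal sketch assuming image values normalised to the 0–1 range.

```python
# Multiply and Screen as per-pixel maths, assuming float images in the 0..1 range.
import numpy as np

def multiply(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    """Multiply always darkens (or leaves pixels untouched where blend == 1)."""
    return base * blend

def screen(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    """Screen always lightens (or leaves pixels untouched where blend == 0)."""
    return 1.0 - (1.0 - base) * (1.0 - blend)

# Duplicating a mid-grey layer onto itself:
mid = np.array([0.5])
print(multiply(mid, mid))   # [0.25] - darkens, handy for pulling back an ETTR exposure
print(screen(mid, mid))     # [0.75] - lightens, handy for lifting a thin exposure
```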
Hope the video gives you some more insight folks!
My Members over on Patreon get the benefit of being able to download the raw files used in this video.
Nikon Z7 – I am a Bad Idea and a waste of YOUR money!
And NO – this title isn’t meant as clickbait!
I love Nikon cameras for many reasons.
I HATE Nikon as a company.
I dislike Canon cameras for numerous technical and ergonomic reasons.
I LIKE Canon as a company.
The Nikon D5 was THE FIRST Nikon camera I’d used that I disliked and thought was like the proverbial bag of spanners.
But now there’s a new Nikon that takes over the mantle of Nikon at its very worst – and I’ve not even clapped eyes on one yet let alone handled one. I don’t need to play with one to know just how much of a rip-off this pile of rubbish really is.
This camera is £4000 at Wex here in the UK – yes, that’s FOUR THOUSAND of your hard-earned spondoolicks (for our overseas friends, that’s slang for pounds sterling).
We’ve already harangued the Z7 for its single media slot – and Canon followed suit with the EOS R, is that a coincidence?
But here’s the kicker, and the MAIN reason why the Nikon Z7 is a crock, and the indicator lies at the foot of page 57 in the Nikon Z7 user manual:
I think the first to show the AF problems with the Nikon Z7 was the ‘afro haircut idiot know-nothing from Philadelphia’ – you know, the guy who never knew how to use Photoshop until the other month, when Matt Kloskowski showed him how – live on YouTube.
Lots of people are jumping on the DISS THE NIKON Z7 AF bandwagon as I’m typing this, but none of the morons are pointing out WHY the Nikon Z7 autofocus is so crappy.
So I will tell you why!!!!
There is no way to exercise any fine control over the AF functionality.
Above is the main control functionality for the D5/500/850 MultiCAM 20K AF system.
You will see controls for Blocked Shot Response and Subject Motion. These roughly equate to Tracking Sensitivity and Acceleration/Deceleration Tracking on the controls for the Canon 61-point Reticular AF system found on the likes of the 1DX Mk1 and Mk2 and the 5D Mk3 and Mk4.
The two controls on both Nikon and Canon dictate the autofocus SOLUTION spat out by the PREDICTIVE AF ALGORITHMS contained in the camera’s AF engine processors.
The subject’s degree and type of motion RELATIVE to the camera position DEMAND different setups within this control panel. It’s all to do with the camera’s AF resistance to MINOR and MAJOR changes in subject position between one frame and the next.
So this is the problem with the Nikon Z7 – because it’s utilizing so-called ‘on chip phase detect’ – which isn’t phase detect at all in reality – you cannot get control of these variable functions, because they don’t exist in the camera’s menu/firmware.
As far as I’m aware these sorts of controls are not available on the Sony cameras either.
But there is still a form of predictive AF algorithm at work in all mirrorless cameras, and it would appear that the one inside the Nikon Z7 is really poor in the way it’s balanced out with regard to it coping with moving subjects – especially those that move somewhat erratically and towards the camera.
Understand this people, the Nikon Z7 is a glorified D5000 that is not worth half the price you’ll have to pay for it.
Mirrorless systems have certain advantages over traditional dSLR systems:
Reduction in Shutter Lag times
Removal of Mirror Slap vibrations
Reduction of Weight leading to Greater Portability
But on-chip phase detection isn’t real phase detection, and it will not (for the foreseeable future) be anywhere near as fast or accurate as CORRECTLY setup phase detect autofocus on a top flight dSLR.
A sequence of 77 raw files that are all tack sharp and cover around 12 seconds of time – no mirrorless system is capable of doing this with the same degree of consistency as a correctly set up dSLR.
The dSLR is NOT DEAD!
Don’t believe me?
Licensed Formula 1 pit and circuit access photographers make a very good living, and they stand or fall by the reliability of their camera gear. But they are all business people at the end of the day.
If a Sony A9 and that fancy 400mm Sony lens was as reliable as the Sony fanboys claim it is, then why will we not see a plethora of Sony rigs at Suzuka on Sunday? Just a thought…
But for heaven’s sake folks, if you have a hankering for a Nikon Z7 then PLEASE think about it – make yourself aware of the FACTS before you blow your wodge of wonga!
It’s NOT a professional camera in any way shape or form, and Dirk Jasper of Nikon Europe even says that – watch the video below at 19mins 48sec:
NOTE TO NIKON: If you want to try and get me to change my mind then all you have to do is send me one guys!
I promise I won’t lick it or sniff it like that Jared Polin idiot!
Landscape Photography Exposure, ETTR and Highlight Spot Metering Accuracy
CLICK ME to watch the Video!
In this short(ish) video I want to show you why your camera spot meter can be something of a ‘let down’ in exposure terms when you are trying to obtain an accurate highlight reading for your scene.
Most ‘in camera’ spot meters are a lot more imprecise than the user imagines.
Nikon spot meter ‘spots’ are generally 4mm wide. That means 4mm ON THE SENSOR!
On an FX camera the sensor is roughly 36mm wide, so the ‘spot’ actually has a ‘window’ or ‘measuring footprint’ that is 1/9th of the viewfinder’s horizontal field of view.
And don’t think that because you use a Canon you’re any better off – in fact you’re worse off because Canon spots are a tiny bit BIGGER!
In this example I use a shot taken with a Zeiss 21mm – this lens has a horizontal angle of view of 81 degrees.
So the 4mm Nikon spot covers 1/9th of the frame width and hence roughly 1/9th of the horizontal AoV of the lens – in other words, about 9 degrees.
Aimed at the brightest highlight in the sky, its footprint takes in sky tones that are dramatically darker than the highlights, so the reading it gives me is ‘darker’ than it should be.
My D800E has its highlight clipping/blow point 3.6 stops above its mid tone.
If I then apply ETTR to this reading by exposing at +3 to +3.3 stops it will result in blown highlights.
But if I use a 1 degree spot meter aimed at exactly the same place its much narrower angle sees ONLY THE BRIGHT AREA I’m aiming at. This gives me a much BRIGHTER reading, allowing me to push the exposure by +3.3 stops without blowing any of my highlights.
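Putting the numbers above together (using the same simple proportional approximation for the spot’s angle of view as in the text, and a made-up one-stop metering error for the wide in-camera spot):

```python
# Rough arithmetic for the spot-meter example above. The angle calculation uses the
# proportional approximation from the text (it ignores the tangent non-linearity of
# wide lenses), and the 1-stop metering error is a hypothetical figure for illustration.
spot_mm = 4.0            # Nikon spot diameter at the imaging plane
sensor_width_mm = 36.0   # FX sensor width
lens_hfov_deg = 81.0     # horizontal angle of view of the 21mm lens

fraction = spot_mm / sensor_width_mm        # 1/9 of the frame width
spot_aov_deg = fraction * lens_hfov_deg     # ~9 degrees
print(f"In-camera spot covers ~{spot_aov_deg:.0f} degrees of scene vs 1 degree for a handheld spot meter")

# ETTR headroom: the D800E clips ~3.6 stops above a mid-tone reading.
clip_headroom_ev = 3.6
metering_error_ev = 1.0   # hypothetical: the wide spot reads the sky ~1 stop darker than the true highlight
push_ev = 3.3
print(f"Highlights land ~{push_ev + metering_error_ev:.1f} EV above mid-tone, "
      f"past the {clip_headroom_ev} EV clip point, so they blow")
```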
Hope this all makes sense folks.
Don’t forget, any questions or queries then just ask!
If you feel I deserve some support for putting this video and article together then please consider joining my membership site over on Patreon by using the link below.
Alternatively you could donate via PayPal to tuition@wildlifeinpixels.net