Lightroom Dehaze – part 2

More Thoughts on The Lightroom Dehaze Control

With the dehaze adjustment in Lightroom (right) the sky and distant hills look good, but the foreground looks poor.

In my previous post I did say I’d be uploading another video reflecting my thoughts on the Lightroom/ACR dehaze adjustment.

And I’ve just done that – AND I’ve made a conscious effort to keep the ramblings down too…!

In the video I look at the effects of the dehaze adjustment on 4 very different images, and alternative ways of obtaining similar or better results without it.

You may see some ‘banding’ on the third image I work on – this is down to YouTube video compression.

In conclusion I have to say that I find the dehaze ‘tool’ something of an anti-climax if I’m honest. In fairly small positive amounts it can work exceptionally well in terms of a quick workflow on relatively short dynamic range images.  But I’m not a really big fan in general, and it’s possible to create pretty much the same adjustments using the existing Lightroom tools.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Image Retouching

Image Retouching in Photoshop CC 2014

It’s very rare that we ever get a frame from our camera that doesn’t need retouching – that’s a FACT.

Imperfections in the frame can be both ‘behind the shutter’ and ‘in front of the lens’ – sensor dust and crud on the subject.  But you’ll take photographs where these imperfections are hard, if not impossible, to see under normal viewing.

But print that image BIG and those invisible faults will begin to be visually apparent; by which time it’s too bloomin’ late and they’ve cost you money; or worse still, a client.

The ‘visualise spots’ tool in Lightroom will show you a certain amount of ‘dust bunny’ type faults and errors, but the way Lightroom executes retouching repairs is not always ‘quite up to snuff’; and when it comes to dust, crap and other undesirables on the subject itself Lightroom will fail to recognise them in the first place.

Image retouching isn’t really all that difficult; but it can be an intensely tedious and time-consuming process.

To that end I’ve stuck these HD video lessons on my YouTube channel.

In these videos I illustrate how I deploy the Spot Healing brush, Healing Brush, Clone Tool, Patch Tool and Content Aware Fill command to carry out some basic image retouching on a shot of cutlery bright ware.

I demonstrate the addition of a ‘dust visibility’ curves adjustment layer – something that everyone should ‘get the hang’ of using – as a first step to effective image retouching.

When photographing glossy, high reflectivity subjects we need to remove the imperfections and smooth the surfaces of the subject without reducing the ‘glossiness’ and turning it matt!

Please note: a couple of these videos are in excess of 20 minutes’ duration, and they will look better at full resolution HD if you click the YouTube icon. Also, it takes a lot longer to do a job when you have to talk about it at the same time!

I hope you get some idea as to how simple and straightforward my approach to image retouching is!


HDR in Lightroom CC (2015)

Lightroom CC (2015) – exciting stuff!

New direct HDR MERGE for bracketed exposure sequences inside the Develop Module of Lightroom CC 2015 – nice one Adobe!  I can see Eric Chan’s fingerprints all over this one…!


Twilight at Porth Y Post, Anglesey.

After a less than exciting 90 minutes on the phone with Adobe this very morning – that’s about 10 minutes of actual conversation and an eternity of crappy ‘Muzak’ – I’ve managed to switch from my expensive old single-app PsCC subscription to the Photography Plan – yay!

They wouldn’t let me upgrade my old stand-alone Lr4/Lr5 to Lr6 ‘on the cheap’, so now they’ve given me two apps for half the price I was paying for one – mental people, but I’ll not be arguing!

I was really eager to try out the new internal ‘Merge’ script/command for HDR sequences – and boy am I impressed.

I picked a twilight seascape scene I shot last year:


I’ve taken the 6 shot exposure bracketed sequence of RAW files above into the Develop Module of Lightroom CC and done 3 simple adjustments to all 6 under Auto Sync:

  1. Change camera profile from Adobe Standard to Camera Neutral.
  2. ‘Tick’ Remove Chromatic Aberration in the Lens Corrections panel.
  3. Change the colour temperature from ‘as shot’ to a whopping 13,400K – this neutralises the huge ‘twilight’ blue cast.

You have to remember that NOT ALL adjustments you can make in the Develop Module will carry over in this process, but these 3 will.


Ever since Lr4 came out we have had the ability to take a bracketed sequence in Lightroom and send them to Photoshop to produce what’s called a ’32 bit floating point TIFF’ file – HDR without any of the stupid ‘grunge effects’ so commonly associated with the more normal styles of HDR workflow.

The resulting TIFF file would then be brought back into Lightroom where some very fancy processing limits were given to us – namely the exposure latitude above all else.

‘Normal’ range images, be they RAW or TIFF etc, have a potential 10 stops of exposure adjustment, +5 to -5 stops, both in the Basics Panel, and with Linear and Radial graduated filters.

But 32 bit float TIFFs had a massive 20 stops of adjustment, +10 to -10 stops – making for some very fancy and highly flexible processing.
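For those who like the numbers: a stop is a doubling or halving of light, so in linear terms those adjustment ranges work out like this (a quick Python sketch of the arithmetic, nothing Lightroom-specific):

```python
def stops_to_multiplier(stops):
    """Each stop doubles (or halves) the light, so an exposure
    adjustment of `stops` multiplies linear values by 2**stops."""
    return 2.0 ** stops

# A 'normal' file's +5/-5 range spans a 1,024x brightness ratio...
normal_range = stops_to_multiplier(5) / stops_to_multiplier(-5)    # 1024x
# ...while a 32 bit float file's +10/-10 range spans over a million to one.
float_range = stops_to_multiplier(10) / stops_to_multiplier(-10)   # 1048576x
```

Which is why a 32 bit float file shrugs off exposure pushes that would fall apart on an ordinary TIFF.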

Now then, what’s a ‘better’ file type than pixel-based TIFF?  A RAW file……


So, after selecting the six RAW images, right-clicking and selecting ‘Photomerge>HDR’…


…and selecting ‘NONE’ from the ‘de-ghost’ options, I was amazed to find the resulting ‘merged file’ was a DNG – not a TIFF – yet it still carries the 20 stop exposure adjustment latitude.


This is the best news for ages, and grunge-free, ‘real-looking’ HDR workflow time has just been axed by at least 50%.  I can’t say much more about it, except that, IMHO of course, this is the best thing to happen to Adobe RAW workflow since the advent of PV2012 itself – BRILLIANT!

Note: Because all the shots in this sequence featured ‘blurred water’, applying any de-ghosting would be detrimental to the image, causing some weird artefacts where water met static rocks etc.

But if you have image sequences that have moving objects in them you can select from 3 de-ghost pre-sets to try and combat the artefacts caused by them, and you can check the de-ghost overlay tick-box to pre-visualise the de-ghosting areas in the final image.


Switch up to Lightroom CC 2015 – it’s worth it for this facility alone.



Lumenzia – Not Just for Landscapes

Luminosity Masking is NOT just for landscape photographs – far from it.

But most folk miss the point of luminosity masking because they think it’s difficult and tedious.

The point, as I always see it, is that luminosity masking allows you to make dramatic but subtle changes and enhancements to your image with what are actually VERY fast and crude “adjustments”.

This in reality means that luminosity masking is FAST – and way faster than trying to do “localised” adjustments.  But the creation of the masks and choosing which one to use is what crippled the “ease factor” for most.

But this new Lumenzia extension is so snappy and quick at showing you the different masks that, if you know what area of the image you want to adjust, the whole process takes SECONDS.

Let’s look at a White-tailed Eagle taken just 15 days ago:

Straight off the 1Dx it looks like this:

RAW unprocessed .CR2 file (CLICK to view in new window)

Inside the Develop Module of Lightroom 5 it looks like:


RAW unprocessed – (CLICK to view in new window)

A few tweaks later and it looks like:


Tweaks are what you can see in the Basics Panel + CamCal set to Neutral, and Chroma Noise removal in the Lens Corrections Panel is turned ON – (CLICK to view in new window)

Sending THIS adjusted image to Photoshop:

(CLICK to view in new window)

All I want to do is give a “lift” to the darker tones in the bird; under the wings, and around the side of head, legs and tail.

Using a BRUSH to do the job is all fine ‘n dandy BUT, you would be creating a localised adjustment that’s all-encompassing from a tonal perspective; all tones that fell under the brush get adjusted by the same amount.

A luminosity mask, or indeed ANY pixel-based mask is exactly what it says it is – a mask full of pixels. And those pixels are DERIVED from the real pixels in your image.  But the real beauty is that those pixels will be anywhere from 1% to 100% selected, or not selected at all.

Where they are 100% selected they are BLACK, and any adjustment you make BEHIND that mask will NOT be visible.

Pixels that are NOT selected will be WHITE, and your adjustment will show fully.

But where the pixels are between 1% and 99% selected they will appear as 1% grey to 99% grey, and so will show or hide the adjustment in proportion…got it?
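To make that idea concrete, here’s a rough numpy sketch of the general principle – deriving a ‘darks’-style mask from the image’s own luminosity and pushing a crude lift through it. This is NOT Lumenzia’s actual maths (that’s Greg’s plugin), just an illustration of a pixel-based mask, with an arbitrary `power` term to narrow the selection toward the deepest shadows:

```python
import numpy as np

def darks_mask(rgb, power=3):
    """Luminosity mask favouring the darker tones: white where the
    image is dark (the adjustment shows), black where it is bright
    (the adjustment is hidden). `rgb` is a float array in 0..1;
    the luminosity weights are the common Rec.709 ones."""
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    return (1.0 - lum) ** power

def lift_through_mask(rgb, mask, lift=0.15):
    """A deliberately crude 'lift' applied through the mask: where the
    mask is white the full lift shows, where it is black nothing
    changes, and the greys in between show it proportionally."""
    return np.clip(rgb + lift * mask[..., None], 0.0, 1.0)
```

The point is exactly the one made above: the adjustment itself can be fast and crude, because the mask does the subtle work of grading it across the tones.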

The Lumenzia D4 mask looks like it’ll do the job I want:

Lumenzia D4 mask (CLICK to view in new window)

Click the image to view larger – look at the subtle selections under those wings – try making that selection any other way in under 2 seconds – you’ve got no chance!

The “lift” I want to make in those WHITER areas of the mask is best done with a Curves Adjustment layer:

Select “Curve” in the Lumenzia GUI – (CLICK to view in new window)

So hit the Curve button and voilà:

The Lumenzia D4 mask is now applied to the Curves Adjustment Layer – (CLICK to view in new window)

You can see in the image above that I’ve made a very rough upwards deflection of the curve to obtain an effective but subtle improvement to those under-wing areas etc. that I was looking to adjust.

The total time frame from opening the image in Photoshop to now is about 20 seconds!  Less time than the Lightroom 5 adjustments took…

And to illustrate the power of that Lumenzia D4 Luminosity mask, and the crudity of the adjustment I made, here’s the image WITHOUT THE MASK:

The effect of the luminosity mask is best illustrated by “hiding” it – bloody hell, turn it back on! – (CLICK to view in new window).

And at full resolution you can see the subtleties of the adjustment on the side of the head:


With Lumenzia (left) and just the Lightroom 5 processing (right) – (CLICK to view in new window).

If you want to get the best from your images AND you don’t want to spend hours trying to do so, then Lumenzia will seriously help you.

Clicking this link HERE to buy Lumenzia doesn’t mean it costs you any more than if you buy it direct from the developer.  But it does mean that I get a small remuneration from the developer as a commission which in turn supports my blog.  Buying Lumenzia is a total no-brainer so please help support this blog by buying it via these links – many thanks folks.

UPDATE June 2018: Greg Benz (the plugin author) has launched a comprehensive Lumenzia training course – see my post here for more information.


Colormunki Photo Update

Colormunki Photo Update

Both my MacPro and non-retina iMac used to be on Mountain Lion, or OSX 10.8, and nope, I never updated to Mavericks as I’d heard so many horror stories, and I basically couldn’t be bothered – hey, if it ain’t broke don’t fix it!

But, I wanted to install CapOne Pro on the iMac for the live-view capabilities – studio product shot lighting training being the biggest draw on that score.

So I downloaded the 60 day free trial, and whadyaknow, I can’t install it on anything lower than OSX 10.9!

Bummer thinks I – and I upgrade the iMac to OSX 10.10 – YOSEMITE.

Now I was quite impressed with the upgrade and I had no problems in the aftermath of the Yosemite installation; so after a week or so muggins here decided to do the very same upgrade to his late 2009 Mac Pro.

OHHHHHHH DEARY ME – what a pig’s ear of a move that turned out to be!

Needless to say, I ended up making a Yosemite boot installer and setting up on a fresh HDD.  After re-installing all the necessary software like Lightroom and Photoshop, iShowU HD Pro and all the other crap I use, the final task arrived of sorting colour management out and profiling the monitors.

So off we trundle to X-Rite and download the Colormunki Photo software – v1.2.1.  I then proceeded to profile the 2 monitors I have attached to the Mac Pro.

Once the colour measurement stage got underway I started to think that it was all looking a little different and perhaps a bit more comprehensive than it did before.  Anyway, once the magic had been done and the profile saved I realised that I had no way of checking the new profile against the old one – t’was on the old hard drive!

So I go to the iMac and bring up the Colormunki software version number – 1.1.1 – and I tell the software to check for updates – “none available” came the reply.

Colormunki software downloads

Colormunki v1.2.1 for Yosemite

So I download 1.2.1, remove the 1.1.1 software and restart the iMac as per X-Rite’s instructions, and then install said 1.2.1 software.

Once installation was finished I profiled the iMac and found something quite remarkable!

Check out the screen grab below:

iMac screen profile comparisons. You need to click this to open full size in a new tab.

On the left is a profile comparison done in the ColorThink 2-D grapher, and on the right one done in the iMac’s own ColorSync Utility.

In the left image the RED gamut projection is the new Colormunki v1.2.1 profile. This also corresponds to the white mesh grid in the ColorSync image.

Now the smaller WHITE gamut projection was produced with an i1Pro 2 using the maximum number of calibration colours; this corresponds to the coloured projection in the ColorSync window image.

The GREEN gamut projection is the supplied iMac system monitor profile – which is slightly “pants” due to its obviously smaller size.

What’s astonished me is that the Colormunki Photo with the new software v1.2.1 has produced a larger gamut for the display than the i1Pro 2 did under Mountain Lion OSX 10.8.

I’ve only done a couple of test prints via softproofing in Lightroom, but so far the new monitor profile has led to a small improvement in screen-to-print matching of some subtle yellow-green and green-blue mixes, as well as those yellowish browns which I often found tricky to match when printing from the iMac.

So, my advice is this: if you own a Colormunki Photo and have upgraded your iMac to Yosemite, CHECK your X-Rite software version number. Checking for updates doesn’t always work, and the new 1.2.1 Mac version is well worth the trouble to install.


Camera Calibration

Custom Camera Calibration

The other day I had an email fall into my inbox from a leading UK online retailer…whose name escapes me but is very short… that made my blood pressure spike.  It was basically offering me 20% off the cost of something that will revolutionise my photography – ColorChecker Passport Camera Calibration Profiling software.

I got annoyed for two reasons:

  1. Who the “f***” do they think they’re talking to sending ME this – I’ve forgotten more about this colour management malarkey than they’ll ever know….do some customer research you idle bastards and save yourselves a mauling!
  2. Much more importantly – tens of thousands of you guys ‘n gals will get the same email and some will believe the crap and buy it – and you will get yourselves into the biggest world of hurt imaginable!

Don’t misunderstand me, a ColorChecker Passport makes for a very sound purchase indeed and I would not like life very much if I didn’t own one.  What made me seethe is the way it’s being marketed, and to whom.

Profile all your cameras for accurate colour reproduction…..blah,blah,blah……..

If you do NOT fully understand the implications of custom camera calibration you’ll be in so much trouble when it comes to processing you’ll feel like giving up the art of photography.

The problems lie in a few areas:

First, a camera profile is a SENSOR/ASIC OUTPUT profile – think about that a minute.

Two things influence sensor/ASIC output – ISO and lens colour shift – yep, that’s right, no lens is colour-neutral, and all lenses produce colour shifts either by tint or spectral absorption. And higher ISO settings usually produce a cooler, bluer image.

Let’s take a look at ISO and its influence on custom camera calibration profiling – I’m using a far better bit of software for doing the job – “IN MY OPINION” – the Adobe DNG Profile Editor – free to all (Mac download and Windows download) – but you do need the ColorChecker Passport itself!

I prefer the Adobe product because I found the camera calibration profiles the ColorChecker software produced were, well, pretty vile – especially in terms of increased contrast; not my cup of tea at all.


5 images shot at 1 stop increments of ISO on the same camera/lens combination.

Now this is NOT a demo of software – a video tutorial of camera profiling will be on my next photography training video coming sometime soon-ish, doubtless with a somewhat verbose narrative explaining why you should or should not do it!

Above, we have 5 images shot on a D4 with a 24-70 f2.8 at 70mm under consistent overcast daylight, at 1 stop increments of ISO between 200 and 3200.

Below, we can see the resultant profile and distribution of known colour reference points on the colour wheel.


Here’s the 200 ISO custom camera calibration profile – the portion of interest to us is the colour wheel on the left and the points of known colour distribution (the black squares and circled dot).

Next, we see the result of the image shot at 3200 ISO:


Here’s the result of the custom camera profile based on the shot taken at 3200 ISO.

Now let’s super-impose one over t’other – if ISO doesn’t matter to a camera calibration profile then we should see NO DIFFERENCE………….


The 3200 ISO profile colour distribution overlaid onto the 200 ISO profile colour distribution – it’s different and they do not match up.

……..well would you bloody believe it!  Embark on custom camera calibration profiling of your camera, then apply that profile to an image shot with the same lens under the same lighting conditions but at a different ISO, and your colours will not be right.

So now my assertions about ISO have been vindicated, let’s take a look at skinning the cat another way, by keeping ISO the same but switching lenses.

Below is the result of a 500mm f4 at 1000 ISO:


Profile result of a 500mm f4 at 1000 ISO

And below we have the 24-70mm f2.8 @ 70mm and 1000 ISO:


Profile result of a 24-70mm f2.8 @ 70mm at 1000 ISO

Let’s overlay those two and see if there’s any difference:


Profile results of a 500mm f4 at 1000 ISO and the 24-70 f2.8 at 1000 ISO – as massively different as day and night.

Whoops….it’s all turned to crap!

Just take a moment to look at the info here.  There is movement in the orange/red/red magentas, but even bigger movements in the yellows/greens and the blues and blue/magentas.

Because these comparisons are done simply in Photoshop layers with the top layer at 50% opacity you can even see there’s an overall difference in the Hue and Saturation slider values for the two profiles – the 500mm profile is 2 and -10 respectively and the 24-70mm is actually 1 and -9.

The basic upshot of this information is that the two lenses apply a different colour cast to your image AND that cast is not always uniformly applied to all areas of the colour spectrum.

And if you really want to “screw the pooch” then here’s the above comparison side by side with the 500mm f4 at 1000 ISO against the 24-70mm f2.8 at 200 ISO view:


500mm f4/24-70mm f2.8 1000 ISO comparison versus 500mm f4 1000 ISO and 24-70mm f2.8 200 ISO.

A totally different spectral distribution of colour reference points again.

And I’m not even going to bother showing you that the same camera/lens/ISO combo will give different results under different lighting conditions – you should by now be able to envisage that little nugget yourselves.

So, Custom Camera Calibration – if you do it right you’ll be profiling every body/lens combo you have, at every conceivable ISO value and lighting condition – it’s one of those things that, if you can’t do it all, you’d be best off not doing at all in most cases.

I can think of a few instances where I would do it as a matter of course, such as scientific work, photo-microscopy, and artwork photography/copystand work etc, but these would be well outside the remit of more normal photographic practice.

As I said earlier, the Passport device itself is worth far more than its weight in gold – set up and light your shot and include the Passport device in a prominent place. Take a second shot without it and use shot 1 to custom white balance shot 2 – a dead easy process that makes the device invaluable for portrait and studio work etc.

But I hope by now you can begin to see the futility of trying to use a custom camera calibration profile on a “one size fits all” basis – it just won’t work correctly; and yet for the most part this is how it’s marketed – especially by third party retailers.


The ND Filter

Long Exposure & ND Filters


A view of the stunning rock formations at Porth Y Post on the Welsh island of Anglesey. The image is a long exposure of very rough sea, giving the impression of smoke and fog.  30 seconds @f13 ISO 100. B&W 10stop ND – unfiltered exposure would have been 1/30th.

The reason for this particular post began last week when I was “cruising” a forum on a PoD site I’m a member of, and I came across a thread started by someone about heavy ND filters and very long exposures.

Then, a couple of days later a Facebook conversation cropped up where someone I know rather well seemed to be losing the plot over things totally by purchasing a 16 stop ND.

The poor bugger got a right mauling from “yours truly” for the simple reason that he doesn’t understand the SCIENCE behind the art of photography.  This is what pisses me off about digital photography – it readily provides “instant gratification” to folk who know bugger all about what they are doing with their equipment.  They then spend money on “pushing the envelope” only to find their ivory tower comes tumbling down around them because they THOUGHT they knew what they were doing………..stop ranting Andy before you have a coronary!

OK, I’ll stop “ranting”, but seriously folks, it doesn’t matter if you are on a 5DMkIII or a D800E, a D4 or a 1Dx – you have to realise that your camera works within a certain set of fixed parameters; and if you wander outside these boundaries for reasons of either stupidity or ignorance, then you’ll soon be up to your ass in Alligators!

Avid readers of this blog of mine (seemingly there are a few) will know that I’ve gone to great lengths in the past to explain how sensors are limited in different ways by things such as diffraction, and that certain lens/sensor combinations are said to be “diffraction limited”; well here’s something new to run up your flag pole – sensors can be thought of as being “photon limited” too!

I’ll explain what I mean in a minute…..

SENSOR TYPE

Most folk who own a camera of modern design by Nikon or Canon FAIL at the first hurdle by not understanding their sensor type.

Sensors generally fall into two basic types – CCD and CMOS.

Most of us use cameras fitted with CMOS sensors, because we demand accurate fast phase detection AF AND we demand high levels of ADC/BUFFER speed.  In VERY simplistic terms, CCD sensors cannot operate at the levels of speed and efficiency demanded by the general camera-buying public.

So, it’s CMOS to the rescue.  But CMOS sensors are generally noisier than CCDs.

When I say “noise” I’m NOT referring to the normal under-exposure luminance noise that some of you might be thinking of. I’m talking about the “background noise” of the sensor itself – see post HERE.

Now I’m going to over simplify things for you here – I need to because there are a lot of variables to take into account.

  • A Sensor is an ARRAY of PHOTOSITES or PHOTODIODES
  • A photodiode exists to do one thing – react to being struck by PHOTONS of light by producing electrons.
  • It should produce electrons PROPORTIONAL to the number of photons that strike it.

Now in theory, a photodiode that sees ZERO photons during the exposure should release NO ELECTRONS.

At the end of the exposure the ADC comes along and counts the electrons for each photodiode – an ANALOGUE VALUE – and converts it to a DIGITAL VALUE and stores that digital value as a point of information in the RAW file.

A RAW converter such as Lightroom then reads all these individual points of information and using its own in-built algorithms it normalises and demosaics them into an RGB image that we can see on our monitor.
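That photons-to-numbers chain can be sketched as a toy model – one hypothetical photodiode, wildly simplified (no read noise, no dark current, and the full-well and bit-depth figures are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def photodiode_to_dn(photon_rate, seconds, full_well=60000, bit_depth=14):
    """Toy model of a single photosite: photons in, digital number out."""
    # photon arrivals are random, so even a 'perfect' diode shows
    # Poisson-distributed shot noise around the mean count
    photons = rng.poisson(photon_rate * seconds)
    # assume roughly one electron released per photon, clipped at
    # the full-well capacity (a blown highlight)
    electrons = min(photons, full_well)
    # the ADC converts the analogue electron count to a digital value
    return round(electrons / full_well * (2 ** bit_depth - 1))
```

Note that in this idealised model, zero photons in really does give zero out – which is exactly the theoretical behaviour described above, and exactly what real photodiodes fail to honour.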

Sounds simple doesn’t it, and theoretically it is.  But in practice there are a lot of places in the process where things can go sideways rapidly……..!

We make a lot of assumptions about our pride and joy – our newly purchased DSLR – and most of these assumptions are just plain wrong.  One that most folk get wrong is presuming ALL the photodiodes on their shiny new sensor BEHAVE IN THE SAME WAY and are 100% identical in response.  WRONG – even though, in theory, it should be true.

Some sensors are built to a budget, some to a standard of quality and bugger the budget.

Think of the above statement as a scale running left to right with crap sensors like a 7D or D5000 on the left, and the staggering Phase IQ260 on the right.  There isn’t, despite what sales bumph says, any 35mm format sensor that can come even close to residing on the right hand end of the scale, but perhaps a D800E might sit somewhere between 65 and 70%.

The thing I’m trying to get at here is that “quality control” and “budget” are opposites in the manufacturing process, and that linearity and uniformity of photodiode performance costs MONEY – and lots of it.

All our 35mm format sensors suffer from a lack of that expensive quality control in some form or other, but what manufacturers try to do is place the resulting poor performance “outside the envelope of normal expected operation” as a Nikon technician once told me.

In other words, during normal exposures and camera usage (is there such a thing?) the errors don’t show themselves – so you are oblivious to them. But move outside of that “envelope of normal expected operation” and as I said before, the Alligators are soon chomping on your butt cheeks.

REALITY

Long exposures in low light levels – those longer than 30 to 90 seconds – present us with one of those “outside the envelope” situations that can highlight some major discrepancies in individual photodiode performance and sensor uniformity.

Earlier, I said that a photodiode, in a perfect world, would always react proportionally to the number of photons striking it, and that if it had no photon strikes during the exposure then it would have ZERO output in terms of electrons produced.

Think of the “perfect” photodiode/photosite as being a child brought up by nuns, well mannered and perfectly behaved.

Then think of a child brought up in the Gallagher household a la “Shameless” – zero patience, no sense of right or wrong, rebellious and downright misbehaved.  We can compare this kid with some of the photodiodes on our sensor.

These odd photodiodes usually show a random distribution across the sensor surface, but you only ever see evidence of their existence when you shoot in the dark, or when executing very long exposures from behind a heavy ND filter.

These “naughty” photodiodes behave badly in numerous ways:

  • They can release a larger number of electrons than is proportional to their photon count.
  • They can go to the extreme of releasing electrons when they have a ZERO photon count.
  • They can mimic the output of their nearest neighbours.
  • They can be clustered together and produce random spurious specks of colour.

And the list goes on!

It’s a Question of Time

These errant little buggers basically misbehave because the combination of low photon count and overly long exposure time allow them to, if you like, run out of patience and start misbehaving.

It is quite common for a single photodiode or cluster of them to behave in a perfect manner at any shutter speed up to between 30 seconds and 2 minutes. But if we expose that same photodiode or cluster for 3 minutes it can show abnormal behaviour in its electron output.  Expose it for 5 minutes and its output could be the same, or amplified, or even totally different.
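As a purely illustrative toy (the real behaviour is analogue, temperature-dependent and camera-specific, and every number here is invented), you can model a dark frame where a sprinkling of diodes stays honest up to some exposure time and then starts leaking:

```python
import numpy as np

def toy_dark_frame(shape, seconds, naughty_fraction=0.001,
                   patience=120.0, seed=1):
    """Zero photons everywhere, so a perfect sensor returns all zeros.
    A small random sprinkling of 'naughty' photosites behaves itself
    up to `patience` seconds, then leaks spurious electrons that grow
    with exposure time."""
    rng = np.random.default_rng(seed)
    frame = np.zeros(shape)
    naughty = rng.random(shape) < naughty_fraction
    if seconds > patience:
        # leakage ramps up once the exposure outlasts the diode's patience
        frame[naughty] = rng.exponential((seconds - patience) * 5.0,
                                         naughty.sum())
    return frame
```

Run it at 30 seconds and the frame is clean; run it at 4 minutes and the random speckle appears – which is roughly the experience of shooting long exposures behind a heavy ND.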

IMPORTANT – do not confuse these with so-called “hot pixels” which show up in all exposures irrespective of shutter duration.

Putting an ND filter in front of your lens is the same as shooting under less light.  Its effect is even-handed across all exposure values in the scene’s brightness range, and therein lies the problem.  Cutting 10 stops worth of photons from the highlights in the scene will still leave plenty to make the sensor work effectively in those areas of the image.

But cutting 10 stops worth of photons from the shadow areas – where there was perhaps 12 stops less to begin with – might well leave an insufficient number of photons in the very darkest areas to make those particular photodiodes function correctly.

Exposure is basically a function of Intensity and Time; back in my college days we used to say that Ex = I x T !

Our ND filter CUTS intensity across the board, so Time has to increase to avoid under exposure in general.  But because we are working with far fewer photons as a whole, we have to curb the length of the Time component BECAUSE OF the level of intensity reduction – we become caught in a “Catch 22” situation, trying to avoid the “time triggered” malfunction of those errant diodes.

Below is a 4 minute exposure from behind a Lee Big Stopper on a 1Dx – click on both images to open at full resolution in a new window.


Canon 1Dx
4 minutes @ f13
ISO 200 Lee 10stop


The beastly Nikon D800E fares a lot better under similar exposure parameters, but there are still a lot of repairs to be done:


A 4 minute exposure on a D800, f11 at 200ISO

Most people use heavy ND filters for the same reason I do – smoothing out water.


The texture of the water in the top shot clutters the image and adds nothing – so get rid of it! D4, ISO 50, 30 secs, f11, Lee Big Stopper

Then we change the camera orientation and get a commercial shot:


Cemlyn Bay on the northwest coast of Anglesey, North Wales. Approximately 2.5 km to the east is Wylfa nuclear power station. Same exposure as above.

In this next shot all I’m interested in is the jetty; neither the water’s surface texture nor the land on the horizon adds anything – the land is easy to dump in PShop but the water would be impossible:


I see the bottom image in my head when I look at the scene top left. Again, the 10 stop ND fixes the water, which adds precisely nothing to the image. D4 ISO 50, 60 secs, f14 B&W 10 stop

The mistake folk make is this: 30 seconds is usually enough time to get the effect on the water you want, and 90 to 120 seconds is truly the maximum you should ever really need.  Any longer and you’ll get at best no more effect, and at worst the effect will not look as visually appealing – that’s my opinion anyway.

This time requirement dovetails nicely with the “operating inside the design envelope” physics of the average 35mm format sensor.

So, as I said before, we could go out on a bit of a limb and say that our sensors are all “photon limited”; all diodes on the sensor must be struck by x number of photons.

And we can regard them as being exposure length limited; all diodes on the sensor must be struck by x photons in y seconds in order to avoid the pitfalls mentioned.

So next time you have the idea of obtaining something really daft, such as the 16 stop ND filter my friend ordered, try engaging your brain.  An unfiltered exposure that meters out at 1/30th sec will be 30 seconds behind a 10 stop ND filter, and a whopping 32 minutes behind a 16 stop ND filter.  Now at that sort of exposure time the sensor noise in the image will be astonishing in both presence and variety!
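The arithmetic here is just repeated doubling – each stop of ND doubles the required exposure time. A quick sketch of the sums (note that exact doubling gives slightly longer times than the rounded nominal values photographers quote, which is why 1/30th sec “becomes” 30 seconds rather than 34):

```python
def nd_exposure(base_shutter_s, nd_stops):
    """Each stop of ND filtration doubles the required exposure time."""
    return base_shutter_s * (2 ** nd_stops)

# Metered at 1/30th sec with no filter:
base = 1 / 30
print(nd_exposure(base, 10))       # ~34 seconds (the nominal "30 seconds")
print(nd_exposure(base, 16) / 60)  # ~36 minutes (the nominal "32 minutes")
```

Either way, a 16 stop filter pushes you deep into the exposure times where the errant photodiodes start playing up.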

As I posted on my Book of Face page the other day, just for kicks I shot this last Wednesday night:


Penmon Lighthouse in North Wales at twilight.
Sky is 90 secs, foreground is 4 minutes, D4, f16, ISO 50 B&W 10 stop ND filter

The image truly gives the wrong impression of reality – the wind was cold and gusting to 30mph, and the sea looked very lumpy and just plain ugly.

I spent at least 45 minutes just taking the bloody speckled colour read noise out of the 4 minute foreground exposure – I have to wonder if the image was truly worth the effort in processing.

When you take into account everything I’ve mentioned so far plus the following:

  • Long exposures are prone to ground vibration and the effects of wind on the tripod etc
  • Hanging around in places like the last shot above is plain dangerous, especially when it’s dark.

you must now see that keeping the exposures as short as possible is the sensible course of action, and that for this sort of work a 6 stop ND filter is a far better addition to your armoury than a 16 stop one!

Just keep away from exposures above 2 minutes.

And before anyone asks, NO – you don’t shoot star trails in one frame over 4 hours unless you’re a complete numpty!  And for anyone who thinks you can cancel noise by shooting a black frame, think on this: the black frame has to be shot immediately after the image, and has to be the same exposure duration as the main image.  That means a 4 hour single-frame star trail plus its black frame will take at least 8 hours – will your camera battery last that long?  If it dies before the black frame is finished then you lose BOTH frames…

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Parallel Horizontals

Quite often when shooting landscapes, or more commonly seascapes, you may run into a problem with parallel horizontals and distortion between far and near horizontal features such as in the image below.

Parallel horizontals that are not parallel – but should be!

This sort of error cannot be fully corrected in Lightroom alone; we have to send the image to Photoshop in order to make the corrections in the most efficient manner.

Here’s a video lesson on how to effectively do just that, using the simplest, easiest and quickest of methods:

You can watch the video at full size HERE – make sure you click the HD icon.

This is something which commonly happens when photographing water with a regular shaped man-made structure in the foreground and a foreshortened horizon line such as the receding opposite shore in this shot.  But with a little logical thought these problems with parallel horizontals being “out of kilter” can be easily cured.


Flash Duration – How Fast Can We Go

Flash duration – how long the burst of photons from the flash actually lasts – does seem to get a lot of people confused.

Earlier this year I posted an article on using flash HERE where the prime function of the flash was as a fill light. As a fill, flash should not be obvious in the images, as the main lighting is still the ambient light from the sun, and we’re just using the flash to “tickle” the foreground with a little extra light.


Flash as “fill” where the main lighting is still ambient daylight, and a moderate shutter speed is all that’s required. 1/800th sec @ f8 is plenty good enough for this shot.

Taking pictures is NEVER a case of just “rocking up”, seeing a shot and pressing the shutter; for me it’s a far more complex process whereby there’s a possible bucket-load of decisions to be made in between the “seeing the shot” bit and the “pressing the shutter” bit.

My biggest influencers are always the same – shutter speed and aperture, and the driving force behind these two things is light, and a possible lack thereof.

Once I make the decision to “add light” I then have to decide what role that additional light is going to take – fill, or primary source.

Obviously, in the shot above the decision was fill, and everything was pretty straightforward from there on; aperture/shutter speed selection is still dictated by the ambient lighting – I use the flash as a “light modifier”.

The duration of the flash is controlled by the TTL metering system, and is fairly irrelevant here.

Let’s take a look at a different scenario.


The lovely Jo doing her 1930s screen icon “pouty thing”. Flash is the ONLY light source in this image. 1/250th @ f9 ISO 100.

In this shot the lighting source is pure flash.  There’s very little in the way of ambient light present in this dark set, and what bit there is was completely over-powered by the flash output – so the lighting from the Elinchrom BX 500 monoblocks being used here is THE SOLE light source.

Considerations over the lighting itself are not the purpose of this post – what we are concerned with here are the implications for shutter speed due to flash synchronization.

The flash units were the standard type of studio flash unit offering no TTL interface with the camera being used, so it’s manual everything!

But the exposure in terms of shutter speed is capped at 1/250th of a second by the CAMERA – that is its highest synch speed.

The focal length of the lens is 50mm so I need to shoot at around f8 or f9 to obtain workable depth of field, so basic exposure settings are dictated.  This particular shot was achieved by balancing the light-to-subject distance along the lines of the inverse square law for each light.

But from the point of view of this post the big consideration is this – can I afford to have movement in the subject?

At 1/250th sec you’d think not.  Then you’d think “hang on, flash durations are a lot faster than that” – so perhaps I can… or can I?

Flash Duration & Subject Movement

Flash duration, in terms of action-stopping power, is not as simple or straight forward as you might think.

Consider the diagram below:


Flash Power Output curve plotted against Output duration (time).

The grey shaded area in the diagram is the “power output curve” of the flash.

Most folk think that a flash is an “instant on, instant off” kind of thing – how VERY wrong they are!

When we set the power output on either the back panel of our SB800/580EX etc, or on the power pack of a studio flash unit, or indeed any other flash unit, we are setting a peak output limit.

We might set a Nikon SB800 to 1/4 power, or we might set channel B output on a Quadra Ranger to 132Ws (watt-seconds), but either way we are dictating the maximum flash output power – the peak output limit. The “t5 time” – or to be more correct, the “t0.5 time” – is the total duration for which the flash output is at or above 50% of the selected peak output limit we set.

Just to clarify: we set, say, 1/4 power output on the back of a Canon 580EX – this is the selected peak output limit. The t5 time for this is the total duration for which the light output is at or above 50% of that selected 1/4 power – NOT 50% of the flash unit’s full power output – do not get confused over this!

So when it comes to total “light emission duration” we’ve got 3 different ways of looking at things:

  1. Total – and I mean TOTAL – duration; the full span of the output curve.
  2. t0.5 – the duration for which the flash output is at or above 50% of the level set by the user – the peak output limit.
  3. t0.1 – the duration for which the flash output is at or above 10% of the level set by the user.

Anyone looking at the diagram above can see that the total output emission time/flash duration is A LOT LONGER than the t5 time.  Usually you find that t5 times are somewhere around 1/3rd of the total emission time, or flash duration.
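That “roughly one third” rule of thumb is easy to check against the figures quoted in this post – here’s a minimal sketch of the sums (the 3x factor is the author’s approximation, not a manufacturer specification):

```python
def total_duration_from_t5(t5_s):
    """Rule of thumb from the text: the t5 (t0.5) time is roughly
    1/3 of the total emission time, so total ~= 3 * t5."""
    return 3.0 * t5_s

# Elinchrom BX head, t5 ~ 1/1500th sec:
print(1 / total_duration_from_t5(1 / 1500))  # ~500 -> total ~1/500th sec

# Nikon SB800 at 1/1 power, t5 = 1/1050th sec:
print(1 / total_duration_from_t5(1 / 1050))  # ~350 -> total ~1/350th sec
```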

Getting back to our shot of Jo above, if my memory serves me correctly the BX heads I used for the shot had a t5 time of around 1/1500th sec.  So the TOTAL duration of the flash output would be around 1/500th sec.

So I can’t afford to have any movement in the subject that isn’t going to be arrested by 1/500th sec flash duration, let alone the 1/250th shutter speed.

Why? Well, that 1/250th sec the shutter is open will comprise 1/500th sec of flash photons entering the lens, followed by 1/500th sec of NOTHING entering the lens but AMBIENT LIGHT photons.

Let us break flash output down a bit more:

In the previous article I mentioned, I quoted a table of Nikon SB800 duration times.  At the top of the table was the SB800 1/1 or full output power flash duration.  All times quoted in that table were t5 times.

The one I want to concentrate on is that 1/1 full power t5 time of 1/1050th sec.

Even though Nikon tempt you into believing that the flash only emits light for 1/1050th sec, it does in fact light the scene for a full 1/350th sec – most manufacturers quote their units’ flash durations as t5 times.

Now, in most cases where you might employ flash – which, let’s face it, is usually as some sort of fill light in a mixed ambient/flash exposure – this isn’t in reality a big problem.  Reduced-power multiple-pulse Auto FP/HSS also makes it a non-problem.

But if you are trying to stop high speed action – in other words “freeze time”, then it can become a major headache; especially when you need all the flash power you can get hold of.

Why? Let’s break the diagram above down to basics.


The darker shaded area represents the “tail” of the flash output – the area that can cause many problems when trying to stop high speed action.

  • The first 50% of the total light output is over and finished in the first 1/1050th sec of the total flash duration.
  • The other 50% of the total light output takes place over a further 1/525th sec, and is represented by the dark grey area – let’s call this area the flash “output tail”.  Some publications & websites refer to this tail as “after-glow” – I always thought that after-glow was something ladies did after a certain type of energetic activity!
  • The light will continue to decay for that full 1/525th sec after the t5 time, until the output has died down to 0% and the full “burn time” of 1/350th sec has been reached.

That’s right – 1/1050th + 1/525th = 1/350th.

So, if our shutter speed is 1/350th sec or longer we are going to see some ghosting in our image, caused by the movement of the subject during that extra 1/525th sec of post-t5 time.

I need to point out that most speedlight type flash units are “insulated-gate bipolar transistor” devices – that’s IGBT to you and me. Einstein studio flash units are also IGBT units – I’ll cover the implications of this in a later post, but for now you just need to know that the IGBT circuitry works to eliminate the sub-t5 output tail, BUT it doesn’t work if your speedlight is set to output at maximum power.  So if you need access to full 1/1 power with your speedlights for any reason, IGBT won’t help you.

Let’s see the problem in action as it were:


A bouncing golf ball shot at 1/250th sec using full power output on an SB800.
The ball is moving UPWARDS.
The blur between points A & B are caused by the “tail” or “after-glow” of the flash.

And the problem will be further exacerbated if there is ANY ambient light in scene from a window for instance, as this will boost the general scene illumination during that “tail end” 1/525th sec.

We would be well advised, if using any form of non-TTL flash mode, to use a shutter speed equal to or shorter in duration than the t5 time, as in the shot below:


A bouncing golf ball shot at 1/2000th sec using full power output on an SB800.

All I’ve done in this second shot is go -3Ev on the shutter speed, +1Ev on the aperture and +2Ev on ISO speed.

Don’t forget, the flash is in MANUAL mode with a full power output.

With the D4 in front-curtain synch the full power, 1/350th sec flash pulse begins as the front shutter curtain starts to move, and it “burns” continuously while the 1/2000th sec “letter-box” shutter-slot travels across the sensor.

In both shots you may be wondering how I triggered the exposure. Sitting on the desk you can see a small black box with a jack plug sticking out the back – this is the audio sensor of a TriggerSmart audio/light/Infra Red combined trigger system.  As the golf ball strikes the desk the audio sensor picks up the noise and the control box triggers the camera shutter and hence the flash.

Hardy, down at the distributors, Flaghead, has been kind enough to send me one of these systems for incorporation into some long-term photography projects, and into a series of high speed flash workshops and training tutorials.  And I have to say that I’m mighty impressed with the system – at its retail price, ownership of this product is a no-brainer.  The unit is going to feature in quite a few blog posts in the near future, but click HERE to email Hardy for more details.

Even though I constantly extol the virtues of the Nikon CLS system, there comes a time when its automatic calculations fight AGAINST you – and easy high speed photography becomes something of a chore.

Any form of flash exposure automation makes assumptions about what you are trying to do.  In certain circumstances these assumptions are pretty much correct.  But in others they can be so far wide of the mark that if you don’t turn the automation OFF you’ll never get the shot you want.

Wresting full control over speedlights from the likes of Nikon’s CLS gives you access to super-high-speed flash durations AND high shutter speeds, without a lot of the synching problems incurred with studio monoblocks.


Liquid in Motion – arrested at 1/8000th sec shutter speed using SB800s at full 1/1 power.


Liquid in Motion – arrested at 1/8000th sec shutter speed using SB800s at full 1/1 power. A 100% crop from the shot above.


“Scotch & Rocks All Over The Place”
Simple capture with manual speed lights at full power and 1/8000th shutter speed.

The shots above are all taken with 2x SB800s lighting the white background and 1 heavily diffused SB800 acting as a top light.

One background light is set at 1/1 manual FP, the other to manual 1/1 SU-4 remote.  The top light is set to 1/8 power SU-4 remote.

The majority of the light in the shot comes, in fact, from that white background – it’s punching light back through the glass and liquid splash – the subject is backlit.

So, that background is being lit for a full 1/350th of a second.

But shooting in front curtain synch I’m using 1/8000th sec as a shutter speed, an exposure duration some 3 stops shorter than the flash unit’s t5 time for full power. So in effect I’m using the combined background flash units as a very short-term continuous light source which lasts for 1/350th of a second, but the camera only records the very first 1/8000th sec – in other words, photons are still leaving the flash AFTER the rear shutter curtain has closed and the exposure is finished.
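If you want to check that “stops shorter” arithmetic for yourself, the gap between two durations expressed in stops is just a base-2 logarithm – a quick sketch, nothing camera-specific:

```python
import math

def stop_difference(longer_s, shorter_s):
    """How many stops shorter one duration is than another:
    each stop halves the time, so the gap is log2 of the ratio."""
    return math.log2(longer_s / shorter_s)

# SB800 full-power t5 (1/1050th sec) vs the 1/8000th sec shutter:
print(stop_difference(1 / 1050, 1 / 8000))  # ~2.9, i.e. roughly 3 stops
```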

Finally, the shutter and flash are triggered by dropping the faux crushed ice through the IR sensor beam of the TriggerSmart unit.

This is very much along the lines of what’s termed HyperSync – a technique you can use with conventional slow-burn studio flash units and certain types of 3rd party trigger such as PocketWizards – but that’s yet another story, and one fraught with synch problems that you have to program out of the system using the PocketWizard utility.

So, there’s more to come from me about flash in future posts, but for now just remember – there’s not a lot you can’t do with speedlights – as long as you’ve got enough of the little darlings!


What Shutter Speed?

Shutter speed, and the choices we make over it, can have a profound effect on the outcome of the final image.

Now everyone has a grasp of shutter speed and how it relates to subject movement – at least I hope they do!

We can either use a fast shutter speed to freeze constant action, or we can use a slow shutter speed to:

  • Allow us to capture movement of the subject for creative purposes
  • Allow us to use a lower ISO/smaller aperture when shooting a subject with little or no movement.

 

Fast Shutter Speed – I need MORE LIGHT Barry!


1/8000th sec @ f8, Nikon D4 and 500mm f4

Good strongish sunlight directly behind the camera floods this Red Kite with light when it rolls over into a dive.  I’m daft enough to be doing this session with a 500mm f4 that has very little in the way of natural depth of field, so I opt to shoot at f8.  Normally I’d expect to be shooting the D4 at ISO 2000 for action like this, but my top-end shutter speed is 1/8000th, and that shutter speed at f8 was slightly too hot on the exposure front, so I knocked the ISO down to 1600 just to protect the highlights a little more.

Creative Slow Shutter Speed – getting rid of light.


1/5th sec @ f22

I wanted to capture the movement in a flock of seagulls taking off from the water, so now I have to think the opposite way to the Kite shot above.

Firstly I need to think carefully about the length of shutter speed I choose: too short and I won’t capture enough movement; too long and a vertical movement component will creep into the image from me not being able to hold the camera still – so I opt for 1/5th sec.

Next to consider is aperture.  Diffraction on a deliberate motion blur has little impact, but believe it or not focus and depth of field DO – go figure!

So I can run the lens at f16/20/22 without much of a worry, and 100 ISO gets me the 1/5th sec shutter speed I need at f22.

 

Slow Shutter & Rear Curtain Synch Flash

We can use a combination of both techniques in one SINGLE exposure with the employment of flash, rear curtain synch and a relatively slow shutter speed:


6/10th sec @ f3.5 -1Ev rear curtain synch flash

A technique the “Man Cub” uses to great effect in his nightclub photography, here he’s rotated the camera whilst the shutter is open, thus capturing the glowing LEDs and other highlights as circular trails.  As the shutter begins to close, the scene is lit by the 1/10,000th sec burst of light from the reduced power, rear curtain synched SB800 flash unit.

But things are not always quite so cut-and-dried – are they ever?

Assuming the lens you use is tack sharp and the subject is perfectly focused there are two factors that have a direct influence upon how sharp the shot will be:

  • System Vibration – caused by internal vibrations, most notably from the mirror being activated.
  • Camera Shake – caused by external forces like wind, ground vibration or you not holding the camera properly.

Shutter Speed and System Vibration

There was a time when we operated on the old adage that the slowest shutter speed you needed for general hand held shooting was equal to 1/focal length.

So if you were using a 200mm lens you shot with a minimum shutter speed of 1/200th sec, and, for the most part, that rule served us all rather well with 35mm film; assuming of course that 1/200th sec was sufficient to freeze the action!

Now this is a somewhat optimistic rule and assumes that you are hand holding the camera using a good average technique.  But put the camera on a tripod and trigger it with a cable or remote release, and it’s a whole new story.

Why?  Because sticking the camera on a tripod and not touching it during the exposure means that we have taken away the “grounding effect” of our mass from the camera and lens; thus leaving the door open for system vibration to ruin our image.

 

How Does System Vibration Affect an Image?

Nowadays we live in a digital world with very high resolution sensors instead of film, and the very nature of a sensor – its pixel structure, to use a common parlance – has a direct influence on minimum shutter speed.

So many camera owners today have the misguided notion that using a tripod is the answer to all their prayers in terms of getting sharp images – sadly this ain’t necessarily so.

They also have the other misguided notion that “more megapixels” makes life easier – well, that definitely isn’t true!

The smallest detail that can be recorded by a sensor is a point of light in the projected image that has the same dimensions as one photosite/pixel on that sensor. And even if a point is SMALLER than the photosite it strikes, its intensity or luminance will affect the whole photosite.


A point of light smaller than 1 photosite (left) has an effect on the whole photosite (right).

The lens is capable of resolving this tiny detail but our sensor – in this case (right) – isn’t, and so the lens out-resolves the sensor.

But let’s now consider this tiny point of detail and how it affects a sensor of higher resolution; in other words, a sensor with smaller photosites:


The same detail projected onto a higher resolution sensor (right). Though not shown, the entire photosite will be affected, but its surface area represents a much smaller percentage of the whole sensor area – the sensor now matches the lens resolution.

Now this might seem like a good thing; after all, we can resolve smaller details.  But, there’s a catch when it comes to vibration:


A certain level of vibration causes the small point of light to vibrate. The extremes of this vibration are represented by the outline circles.

The degree of movement/vibration/oscillation is identical on both sensors; but the resulting effect on the exposure is totally different:


The same level of vibration has more effect on the higher resolution sensor.

If you read the earlier post on sensor resolution and diffraction HERE you’ll soon identify the same concept.

The upshot of it all is that “X” level of internal system vibration has a greater effect on a higher resolution sensor than it does on a lower resolution sensor.
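That scaling argument can be put into rough numbers. Assuming (my own illustrative figures, not from the text) pixel pitches of roughly 8.4µm for a 12MP full-frame sensor like the D3 and 4.9µm for a 36MP one like the D800, the same physical oscillation at the sensor smears across more photosites on the finer-pitched chip:

```python
def blur_in_pixels(vibration_um, pixel_pitch_um):
    """The same physical vibration amplitude smears across more
    photosites on a finer-pitched (higher resolution) sensor."""
    return vibration_um / pixel_pitch_um

vibration = 10.0  # hypothetical 10 micron oscillation at the sensor plane
print(blur_in_pixels(vibration, 8.4))  # ~1.2 px on a D3-class 12MP sensor
print(blur_in_pixels(vibration, 4.9))  # ~2.0 px on a D800-class 36MP sensor
```

Same shake, roughly double the smear in pixel terms – which is exactly why more megapixels demand faster shutter speeds, not slower ones.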

Now what’s all this got to do with shutter speed, I hear you ask.  Well, whereas 1/focal length used to work pretty well back in the day, we need to advance the theory a little.

Let’s look at four shots from a Nikon D3, shot with a 300mm f2.8, mounted on a tripod and activated by a remote (so no finger-jabbing on the shutter button to affect the images).

Also please note that the lens is MANUALLY FOCUSED just once, so is sharply focused on the same place for all 4 shots.

These images are full resolution crops; I strongly recommend that you click on all four images to open them in new tabs and view them sequentially.


Shutter = 1/1x (1/320th) Focal Length. No VR, No MLU (Mirror Lock Up). Camera on Tripod+remote release.


Shutter = 1/2x (1/640th) Focal length. No VR. No MLU. Camera on Tripod+remote release.


Shutter = 1/2x Focal length + VR. No MLU. Camera on Tripod+remote release.


Shutter = 1/2x Focal length. Camera on Tripod+remote release + MLU – NO VR + Sandbag.

Now the thing is, the first shot at 1/320th looks crap because it’s riddled with system vibration – mainly a result of what’s termed ‘mirror slap’.  These vibrations travel up the lens barrel and are then reflected back by the front of the lens.  You basically end up with a packet of vibrations running up and down the lens barrel until they eventually die out.

These vibrations in effect make the sensor and the image being projected onto it ‘buzz, shimmy and shake’ – thus we get a fuzzy image; and all the fuzziness is down to internal system vibration.

We would actually have got a sharper shot hand holding the lens – the act of hand holding kills the vibrations!

As you can see in shot 2 we get a big jump in vibration reduction just by cranking the shutter speed up to 2x focal length (actually 1/640th).

The shot would be even sharper at 3x or 4x, because the vibrations are of a set frequency and thus speed of travel, and the faster the shutter speed we use the sooner we can get the exposure over and done with before the vibrations have any effect on the image.

We can employ ‘mirror-up shooting’ as a technique to combat these vibrations, by lifting the mirror and then pausing to give the vibrations time to decay; and we can engage the lens VR too, as with the 3rd shot.  Collectively these give shot 3 another significant jump in overall sharpness; though frankly the VR contribution is minimal.

I’m not a very big fan of VR !

In shot 4 you might get some idea why I’m no fan of VR.  Everything is the same as shot 3 except that the VR is OFF, and we’ve added a 3lb sandbag on top of the lens.  This does the same job as hand holding the lens – it kills the vibrations stone dead.

When you are shooting landscapes with much longer exposures/shutter speeds THE ONLY way to work is tripod plus mirror up shooting AND if you can stand to carry the weight, a good heavy sand bag!

Shot 4 would have been just as sharp if the shutter had been open for 20 seconds, just as long as there was no movement at all in the subject AND there was no ground vibration from a passing heavy goods train (there’s a rail track between the camera and the subject!).

For general tripod shooting of fairly static subjects I was always confident of sharp shots on the D3 (12MP) at 2x focal length.

But since moving to a 16MP D4 I’ve found that this sometimes lets me down, and that 2.5x focal length is a safer minimum to use.

But that’s nothing compared to what some medium format shooters have told me; where they can still detect the effects of vibration on super high resolution backs such as the IQ180 etc at as much as 5x focal length – and that’s with wide angle landscape style lenses!
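Those multipliers are easy to wrap in a little helper – this is just the rule of thumb from the text expressed as code, with the multiplier values (2x, 2.5x, 5x) taken straight from the paragraphs above:

```python
def min_shutter(focal_length_mm, multiplier):
    """Minimum tripod shutter speed (in seconds) from the multiplier
    rules in the text: 2x for a 12MP D3, 2.5x for a 16MP D4, and
    reportedly up to 5x on very high resolution MF backs."""
    return 1.0 / (multiplier * focal_length_mm)

print(min_shutter(300, 2))    # D3 + 300mm -> 1/600th sec or faster
print(min_shutter(300, 2.5))  # D4 + 300mm -> 1/750th sec or faster
```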

So, overall my advice is to ALWAYS push for the highest shutter speed you can possibly obtain from the lighting conditions available.

Where this isn’t possible you really do need to perfect the skill of hand holding – once mastered you’ll be amazed at just how slow a shutter speed you can use WITHOUT employing the VR system (VR/IS often causes far more problems than it would apparently solve).

For long lens shooters the technique of killing vibration at low shutter speeds when the gear is mounted on a tripod is CRITICAL, because without it, the images will suffer just because of the tripod!

The remedy is simple – it’s what your left arm is for.

So, to recap:

  • If you shoot without a tripod, the physical act of hand holding – properly – has a tendency to negate internal system vibrations caused by mirror slap etc, because your physical mass is in direct contact with the camera and lens, and so “damps” the vibrations.
  • If you shoot without a tripod you need to ensure that you are using a shutter speed fast enough to negate camera shake.
  • If you shoot without a tripod you need to ensure that you are using a shutter speed fast enough to FREEZE the action/movement of your subject.

 

Camera Shake and STUPID VR!

Now I’m going to have to say at the outset that this is only my opinion, and that it is pointed at Nikon’s VR system – I don’t strictly know if Canon’s IS system works on the same math.

And this is not relevant to sensor-based stabilization, only the ‘in the lens’ type of VR.

The mechanics of how it works are somewhat irrelevant, but what is important is its working methodology.

Nikon VR works at a frequency of 1000Hz.

What is a “hertz”?  Well 1Hz = 1 full frequency cycle per second.  So 1000Hz = 1000 cycles per second, and each cycle is 1/1000th sec in duration.


Full cycle sine wave showing 1,0.5 & 0.25 cycles.

Now then, here’s the thing.  The VR unit is measuring the angular momentum of the lens movement at a rate of 1000 times per second. So in other words it is “sampling” movement every 1/1000th of a second and attempting to compensate for that movement.

But the Nyquist-Shannon sampling theorem – if you’re up for some mind-warping click HERE – says that effective sampling can only be achieved at half the working frequency – 500 cycles per second.

What is the time duration of one cycle at a frequency of 500Hz?  That’s right – 1/500th sec.

So basically, for normal photography, VR ceases to be of any real use at any shutter speed faster than 1/500th.
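The arithmetic in the last few paragraphs can be laid out in a short sketch. This simply restates the reasoning above in code – it’s an illustration of the numbers, not a description of how Nikon’s firmware actually works:

```python
VR_FREQUENCY_HZ = 1000                 # Nikon VR sampling rate quoted above

# Duration of one full cycle at the working frequency
cycle_duration = 1 / VR_FREQUENCY_HZ   # 1/1000th sec

# The Nyquist-style limit used in the text: effective sampling
# at half the working frequency...
effective_hz = VR_FREQUENCY_HZ // 2    # 500 Hz
effective_cycle = 1 / effective_hz     # 1/500th sec

# ...and the 'fairly useful' quarter-frequency sample
quarter_hz = VR_FREQUENCY_HZ // 4      # 250 Hz
quarter_cycle = 1 / quarter_hz         # 1/250th sec

print(f"Full cycle:        1/{VR_FREQUENCY_HZ} sec")
print(f"Half frequency:    1/{effective_hz} sec")
print(f"Quarter frequency: 1/{quarter_hz} sec")
```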

Remember shot 3 with the 300mm f2.8 earlier – I said the VR contribution at 1/640th was minimal?  Now you know why I said it!

Looking again at the frequency diagram above, we may get a fairly useful sample at 1/4 working frequency – 1/250th sec; but other than that my personal feeling about VR is that it’s junk – under normal circumstances it should be turned OFF.

What circumstances do I class as abnormal? Sitting on the floor of a heli doing aerial shots out of the open door springs to mind.

If you are working in an environment where something is vibrating YOU while you hand hold the camera then VR comes into its own.

But if it’s YOU doing the vibrating/shaking then it’s not going to help you very much in reality.

Yes, it looks good when you try it in the shop, and the sales twat tells you it’ll buy you three extra stops in shutter speed so now you can get shake-free shots at 1/10th of a second.

But unless you are photographing an anaesthetized sloth or a statue, that 1/10th sec shutter speed is about as much use to you as a hole in the head. VR/IS only stabilizes the lens image – it doesn’t freeze time and stop a bird from flapping its wings, or indeed a bride’s veil from billowing in the breeze.

Don’t get me wrong; I’m not saying VR/IS is a total waste of time in ALL circumstances.  But I am saying that it’s a tool that should only be deployed when you need it, and YOU need to understand WHEN that time is; AND you need to be aware that it can cause major image problems if you use it in the wrong situation.


In Conclusion


1/2000th sec is sufficient to pretty much freeze the forward motion of this eagle, but not the downward motion of the primary feathers.

This rather crappy shot of a White-tailed eagle might give you food for thought, especially if compared with the Red Kite at the start of the post.

The primary feathers are soft because we’ve run out of depth of field.  But, notice the motion blur on them too?  Even though 1/2000th sec in conjunction with a good panning technique is ample to freeze the forward motion of the bird, that same 1/2000th sec is NOT fast enough to freeze the speed of the descending primary feathers on the end of that 4 foot lever called a wing.

Even though your subject as a whole might be still for 1/60th sec or longer, unless it’s dead, some small part of it will move.  The larger the subject is in the frame, the more apparent that movement will be.

Getting good sharp shots without motion blur in part of the subject, or camera shake and system vibration screwing up the entire image is easy; as long as you understand the basics – and your best tool to help you on your way is SHUTTER SPEED.

A tack-sharp shot without blur but full of high ISO noise is vastly superior to a noiseless shot full of blur and vibration artefacting.

Unless it’s done deliberately of course – “H-arty Farty” as my mate Ole Martin Dahle calls it!
