Frank asked me last week about the Topaz JPEG to RAW AI plugin and if it was any good.
Well, my response after downloading the free trial and messing about with it is this – it’s CRAP!
All it does is create a DNG file or 16 bit TIFF (you pick) from an 8 bit jpeg.
Should you opt for DNG then the DNG is NOT a raw file, it’s just a DNG with 8 bits per channel worth of information rattling around inside it. And if you opt for 16 bit TIFF then you end up with an 8 bit jpeg sitting inside a 16 bit TIFF container.
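If you want to prove this to yourself, here’s a minimal Python/numpy sketch (my own illustration, nothing to do with Topaz’s actual code) showing that padding 8 bit data out to 16 bits creates no new tonal information whatsoever:

```python
import numpy as np

# Stand-in for a decoded 8 bit jpeg - real data would come from an image loader.
jpeg_8bit = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)

# 'Convert' it to 16 bit, exactly as stuffing it into a 16 bit TIFF container would.
tiff_16bit = jpeg_8bit.astype(np.uint16) * 257   # maps 0..255 onto 0..65535

print(np.unique(jpeg_8bit).size)    # at most 256 distinct levels per channel
print(np.unique(tiff_16bit).size)   # STILL at most 256 distinct levels - no new information
```

Same 256 levels, just rattling around in a bigger box.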
For the love of God, how the hell does the marketing department at Topaz think they can get away with this total bullshit – Ai learning my arse………….!
Are they trying to make the inexperienced and unknowledgeable actually believe that this expensive bit of garbage can re-engineer 32,000+ tonal levels per channel from a poxy 256 levels (8 bits) per channel?
If you believe that then all I’ll say is that ‘a fool and their money are easily parted’………
Click the image above and it will open in a new window. We are looking at a 200% magnification of an area of the original raw file (right), a full resolution jpeg created from that raw file (middle), and a shitty DNG created by this abomination from Topaz on the left.
Notice something about the Topaz image – it’s got more artifacts in it than the original jpeg (middle) because there has been some form of sharpening applied to it by the Topaz software – and yet there are NO controls for any application of sharpening in the Topaz UI.
Let’s take something different – a jpeg of a Clouded Ermine.
Click the image to see what the original jpeg looks like.
Now let’s feed that jpeg into the Topaz JPEG to RAW AI GUI.
And now let’s look at a sectional 400% magnification comparison, shall we?
Sharpening artifacts again and noise that wasn’t there to begin with.
Seriously, you’d be better off tweaking your jpegs in Lightroom!
I do NOT recommend anyone purchase this Topaz product because it’s rubbish. But what concerns me is the manner in which it is being marketed; I see the marketing as just plain misleading.
I’m not dismissing all Topaz software – DeNoise is brilliant for certain tasks.
But Topaz JPEG to RAW AI is nothing short of misleading junk with a huge price tag – Christ, you could buy 12 months’ subscription to my Patreon channel for less!
One of my patrons, Paul Smith, and I ventured down to Shropshire and the spectacular quartzite ridge of The Stiperstones to get this image of the Milky Way and Mars (the large bright ‘star’ above the rocks on the left).
I always work the same way for astro landscape photography, beginning with getting into position just before sunset.
Using the PhotoPills app on my phone I can see where the Milky Way will be positioned in my field of view at the time of peak sky darkness. This enables me to position the camera exactly where I want it for the best composition.
The biggest killer in astro landscape photography is excessive noise in the foreground.
The other problem is that foregrounds in most images of this genre are not sharp due to a lack of depth of field at the wide apertures you need to shoot the night sky at – f2.8 for example.
To get around this problem we need to shoot a separate foreground image at a lower ISO, a narrower aperture and focused closer to the camera.
Some photographers change focus, engage long exposure noise reduction and then shoot a very long exposure. But that’s an eminently risky thing to do in my opinion, both from a technical standpoint and one of time – a 60 minute exposure will take 120 minutes to complete.
The length of exposure is chosen to allow the very low photon-count from the foreground to ‘build-up’ on the sensor and produce a usable level of exposure from what little natural light is around.
From a visual perspective, when it works, the method produces images that can be spectacular because the light in the foreground matches the light in the sky in terms of directionality.
Light Painting
To get around the inconvenience of time and super-long exposures a lot of folk employ the technique of light painting their foregrounds.
Light painting – in my opinion – destroys the integrity of the finished image because it’s so bloody obvious! The direction of light that’s ‘painted’ on the foreground bears no resemblance to that of the sky.
The other problem with light painting is this – those that employ the technique hardly ever CHECK to see if they are in the field of view of another photographer – think about that one for a second or two!
My Method
As I mentioned before, I set up just before sunset. In the shot above I knew the Milky Way and Mars were not going to be where I wanted them until just after 1am, but I was set up by 9.20pm – yep, a long wait ahead, but always worth the effort.
As we move towards the latter half of civil twilight I start shooting my foreground exposure, and I’ll shoot a few of these at regular intervals between then and mid nautical twilight.
Because I shoot raw the white balance set in camera is irrelevant, and can be balanced with that of the sky in Photoshop during post processing.
The key things here are that I end up with shadowless, even illumination of my foreground, shot at a low ISO, in perfect focus, and at say f8 for great depth of field.
Once deep into blue hour and astronomical twilight the brighter stars are visible, so I now use full magnification in live view and focus on a bright star in the camera’s field of view.
Then it’s a waiting game – waiting for the sky to darken to its maximum and the Milky Way to come into my desired position for my chosen composition.
Shooting the Sky
Astro landscape photography is all about showing the sky in context with the foreground – I have absolutely ZERO time for those popular YouTube photographers who composite a shot of the night sky into a landscape image shot in a different place or from a different angle.
Good astro landscape photography HAS TO BE A COMPOSITE though – there is no way around that.
And by GOOD I mean producing a full resolution image that will sell through the agencies and print BIG if needed.
The key things that contribute to an image being classed good in my book are simple:
Pin-point stars with no trailing
Low noise
Sharp from ‘back’ to ‘front’.
Pin-point stars are solely down to correct shutter speed for your sensor size and megapixel count.
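As a rough sanity check on that, here’s a back-of-an-envelope Python sketch of the geometry involved (an illustrative approximation only, not the method covered in my video): stars at the celestial equator drift across the sky at roughly 15 arcseconds every second, so the trail length in pixels boils down to shutter speed, focal length and the sensor’s pixel pitch.

```python
ARCSEC_PER_SECOND = 15.04      # apparent star motion at the celestial equator
ARCSEC_PER_RADIAN = 206265.0

def star_trail_pixels(shutter_s, focal_mm, pixel_pitch_um):
    """Approximate star trail length, in pixels, for a static tripod shot."""
    drift_radians = shutter_s * ARCSEC_PER_SECOND / ARCSEC_PER_RADIAN
    trail_microns = drift_radians * focal_mm * 1000.0   # movement projected onto the sensor
    return trail_microns / pixel_pitch_um

# A hypothetical example: 5 seconds on a 20mm lens with ~6 micron pixels (a 24MP full frame sensor)
print(round(star_trail_pixels(5, 20, 6.0), 2))   # ~1.2 px - effectively pin-point
```

Push the shutter speed, the focal length or the pixel count up and that drift soon becomes visible trailing.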
Low noise is covered by shooting a low ISO foreground and a sequence of high ISO sky images, and using Starry Landscape Stacker on Mac (Sequator on PC appears to be very similar) in conjunction with a mean or median stacking mode.
Further noise cancelling is achieved by the shooting of Dark Frames, and the typical wide-aperture vignetting is cancelled out by the creation of a flat field frame.
And ‘back to front’ image sharpness should be obvious to you from what I’ve already written!
So, I’ll typically shoot a sequence of 20 to 30 exposures – all one after the other with no breaks or pauses – and then a sequence of 20 to 30 dark frames.
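For anyone curious what the stacking software is actually doing under the bonnet, here’s a bare-bones Python/numpy sketch of the calibrate-and-stack idea (a simplified illustration only – Starry Landscape Stacker and Sequator also align the stars between frames, which I’ve skipped here):

```python
import numpy as np

def stack_sky_frames(lights, darks, flat):
    """lights/darks: lists of float arrays (the sky exposures and dark frames);
    flat: a single float array (the flat field frame)."""
    master_dark = np.median(darks, axis=0)         # thermal/fixed-pattern noise estimate
    flat_norm = flat / flat.mean()                 # normalised vignetting map
    calibrated = [(frame - master_dark) / flat_norm for frame in lights]
    return np.median(calibrated, axis=0)           # median stack knocks down the random noise
```

The more frames you feed it, the further the random noise drops – which is why I shoot 20 to 30 of everything.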
Shutter speeds usually range from 4 to 6 seconds.
Watch this video on my YouTube Channel about shutter speed:
Best viewed on the channel itself, and click the little cog icon to choose 1080pHD as the resolution.
Putting it all Together
Shooting all the frames for astro landscape photography is really quite simple.
Putting it all together is fairly simple and straight forward too – but it’s TEDIOUS and time-consuming if you want to do it properly.
But I wanted to try Raw Therapee for this Stiperstones image, and another of my patrons – Frank – wanted a video of processing methodology in Raw Therapee.
Easier said than done, cramming 4 hours into a typical YouTube video! But after about six attempts I think I’ve managed it, and you can see it here, but I warn you now that it’s 40 minutes long:
Best viewed on the channel itself, and click the little cog icon to choose 1080pHD as the resolution.
I hope you’ve found the information in this post useful, together with the YouTube videos.
I don’t monetize my YouTube videos or fill my blog posts with masses of affiliate links, and I rely solely on my patrons to help cover my time and server costs. If you would like to help me to produce more content please visit my Patreon page on the button above.
Over 11 hours of video training, spread across 58 videos…well, I told you it was going to be big!
And believe me, I could have made it even bigger, because there is FAR MORE to image sharpening than 99% of photographers think.
And you don’t need ANY stupid sharpener plugins – or noise reductions ones come to that. Because Photoshop does it ALL anyway, and is far more customizable and controllable than any plugin could hope to be.
So don’t waste your money any more – spend it instead, on some decent training to show you how to do the job properly in the first place!
You won’t find a lot of these methods anywhere else on the internet – free or paid for – because ‘teachers cannot teach what they don’t know’ – and I know more than most!
As you can see from the list of lessons above, I cover more than just ‘plain old sharpening’.
Traditionally, image sharpening produces artifacts – usually white and black halos – if it’s over done. And image sharpening emphasizes ‘noise’ in areas of shadow and other low frequency detail, when it’s applied to an image in the ‘traditional’, often taught, blanket manner.
Why sharpen what isn’t in focus – to do so is madness, because all you do is sharpen the noise, and cause more artifacts!
Maximum sharpening should only be applied to detail in the image that is ‘fully in focus’.
So, as ‘focus sharpness’ falls off, so too should the level of applied sharpening. That way, noise and other artifacts CANNOT build up in an image.
And the same can be said for noise reduction, but ‘in reverse’.
So image sharpening needs to be applied in a differential manner – and that’s what this training is all about.
Using a brush in Lightroom etc to ‘brush in’ some sort of differential sharpening is NOT a good idea, because it’s imprecise, and something of a fools task.
Why do I say that? Simple……. Because the ‘differential factor bit’ is contained within the image itself – and it’s just sitting there on your computer screen WAITING for you to get stuck in and use it.
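Just to give you a flavour of the general idea – and this is only a crude illustrative sketch, NOT the workflow taught in the course – here’s a hypothetical Python example where the sharpening strength is driven by a detail mask pulled straight out of the image itself:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def differential_sharpen(img, radius=1.0, amount=1.5):
    """Crude illustration: unsharp-mask sharpening modulated by the image's own detail.
    'img' is assumed to be a single-channel uint8 array."""
    img = img.astype(np.float64)
    blurred = gaussian_filter(img, sigma=radius)
    high_pass = img - blurred                                  # the fine detail
    # Detail mask: smoothed local high-frequency energy, normalised 0..1.
    detail = gaussian_filter(np.abs(high_pass), sigma=radius * 4)
    mask = detail / (detail.max() + 1e-9)
    # In-focus, detail-rich areas get the full amount; soft, out-of-focus areas
    # get next to nothing, so their noise is not amplified.
    return np.clip(img + amount * mask * high_pass, 0, 255).astype(np.uint8)
```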
But, like everything else in modern digital photography, the knowledge and skill to do so has somehow been lost in the last 12 to 15 years, and the internet is full of ‘teachers’ who have never had these skills in the first place – hence they can’t teach ’em!
However, everyone who buys this training of mine WILL have those skills by the end of the course.
It’s been a real hard slog to produce these videos. Recording the lessons is easy – it’s the editing and video call-outs that take a lot of time. And I’ve edited all the audio in Audacity to remove breath sounds and background noise – many thanks to Curtis Judd for putting those great lessons on YouTube!
The price is £59.99. So right now, that’s over 11 hours of training for less than £5.50 per hour – that’s way cheaper than a 1to1, or even a workshop day with a crowd of other people!
So head off over to my download store and buy it, because what you’ll learn will improve your image processing, whether it’s for big prints or just jpegs on the web – guaranteed – just click here!
Become a patron from as little as $1 per month, and help me produce more free content.
Patrons gain access to a variety of FREE rewards, discounts and bonuses.
A lot of people imagine that there is some sort of ‘magic bullet’ method for sharpening images.
Well, here’s the bad news – there isn’t !
Even if you shoot the same camera and lens combo at the same settings all the time, your images will exhibit an array of various properties.
And those properties, and the ratio/mix thereof, can, and will, affect the efficacy of various sharpening methods and techniques.
And, those properties will rarely be the same from shoot to shoot.
Add interchangeable lenses, varied lighting conditions, and assorted scene brightness and contrast ranges to the mix – now the range of image properties has increased exponentially.
What are the properties of an image that can determine your approach to sharpening?
I’m not even going to attempt to list them all here, because that would be truly frightening for you.
But sharpening is all about pixels, edges and contrast. And our first ‘port of call’ with regard to all three of those items is ‘demosaicing’ and raw file conversion.
“But Andy, surely the first item should be the lens” I hear you say.
No, it isn’t.
And if that were the case, then we would go one step further than that, and say that it’s the operator’s ability to focus the lens!
So we will take it as a given, that the lens is sharp, and the operator isn’t quite so daft as they look!
Now we have a raw file, taken with a sharp lens and focused to perfection.
Let’s hand that file to two raw converters, Lightroom and Raw Therapee:
I am Lightroom – Click me!
I am Raw Therapee – Click me!
In both raw converters there is ZERO SHARPENING being applied. (and yes, I know the horizon is ‘wonky’!).
Now check out the 800% magnification shots:
Lightroom at 800% – Click me!
Raw Therapee at 800% – Click me!
What do we see on the Lightroom shot at 800%?
A sharpening halo, but hang on, there is NO sharpening being applied.
But in Raw Therapee there is NO halo.
The halo in Lightroom is not a sharpening halo, but a demosaicing artifact that LOOKS like a sharpening halo.
It is a direct result of the demosaicing algorithm that Lightroom uses.
Raw Therapee on the other hand, has a selection of demosaicing algorithms to choose from. In this instance, it’s using its default AMaZE (Alias Minimization & Zipper Elimination) algorithm. All told, there are 10 different demosaic options in RT, though some of them are a bit ‘old hat’ now.
There is no way of altering the base demosaic in Lightroom – it is something of a fixed quantity. And while it works in an acceptable manner for the majority of shots from an ever burgeoning mass of digital camera sensors, there will ALWAYS be exceptions.
Let’s call a spade a bloody shovel and be honest – Lightroom’s demosaicing algorithm is in need of an overhaul. And why something we have to pay for uses a methodology worse than something we get for free, God only knows.
It’s a common problem in Lightroom, and it’s the single biggest reason why, for example, landscape exposure blends using luminosity masks fail to work quite as smoothly as you see demonstrated on the old Tube of You.
If truth be told – and this is only my opinion – Lightroom is by no means the best raw file processor in existence today.
I say that with a degree of reservation though, because:
It’s very user friendly
It’s an excellent DAM (digital asset management) tool, possibly the best.
On the surface, it only shows its problems with very high contrast edges.
As a side note, my Top 4 raw converters/processors are:
Iridient Developer
Raw Therapee
Capture One Pro
Lightroom
Iridient is expensive and complex – but if you shoot Fuji X-Trans you are crazy if you don’t use it.
Raw Therapee is very complex (and slightly ‘clunky’ on Mac OSX) but it is very good once you know your way around it. And it’s FREEEEEEEEE!!!!!!!
Iridient and RT have zero DAM capability that’s worth talking about.
Capture One Pro is a better raw converter on the whole than Lightroom, but it’s more complex, and its DAM structure looks like it was created by crack-smoking monkeys when you compare it to the effective simplicity of Lightroom.
If we look at Lightroom as a raw processor (as opposed to raw converter) it encourages the user to employ ‘recovery’ in shadow and highlight areas.
Using BOTH can cause halos along high contrast edges, and edges where high frequency detail sits next to very low frequency detail of a contrasting colour – birds in flight against a blue sky spring to mind.
Why do I keep ‘banging on’ about edges?
Because edges are critical – and most of you guys ‘n gals hardly ever look at them close up.
All images contain areas of high and low frequency detail, and these areas require different process treatments, if you want to obtain the very best results AND want to preserve the ability to print.
Cleanly defined edges between these areas allow us to use layer masks to separate these areas in an image, and obtain that selective control.
Clean inter-tonal boundaries also allow us to separate shadows, various mid tone ranges, and highlights for yet finer control.
Working on 16 bit images (well, 15 bit plus 1 level if truth be told) means we can control our adjustments in Photoshop within a range of 32,769 tones (0 to 32,768). And there is no way in hell that localised adjustments in Lightroom can be carried out to that degree of accuracy – fact.
I’ll let you in to a secret here! You all watch the wrong stuff on YouTube! You sit and watch a video by God knows what idiot, and then wonder why what you’ve just seen them do does NOT work for you.
That’s because you’ve not noticed one small detail – 95% of the time they are working on jpegs! And jpegs only have a tonal range of 256. It’s really easy to make luminosity selections etc on such a small tonal range work flawlessly. You try the same settings on a 16 bit image and they don’t work.
So you end up thinking it’s your fault – your image isn’t as ‘perfect’ as theirs – wrong!
It’s a tale I hear hundreds of times every year when I have folk on workshops and 1to1 tuition days. And without fail, they all wish they’d paid for the training instead of trying to follow the free stuff.
You NEVER see me on a video working with anything but raw files and full resolution 16 bit images.
My only problem is that I don’t ‘fit into’ today’s modern ‘cult of personality’!
Most adjustments in Lightroom have a global effect. Yes, we have range masks and eraser brushes. But they are very poor relations of the pixel-precise control you can have in Photoshop.
Lightroom is – in my opinion of course – becoming polluted by the ‘one stop shop, instant gratification ideology’ that seems to pervade photography today.
Someone said to me the other day that I had not done a YouTube video on the new range masking option in Lightroom. And they are quite correct.
Why?
Because it’s a gimmick – and a real crappy one at that, when compared to what you can do in Photoshop.
Photoshop is the KING of image manipulation and processing. And that is a hard core, irrefutable fact. It has NO equal.
But Photoshop is a raster image editor, which means it needs to be fed a diet of real pixels. Raw converters like Lightroom use ‘virtual pixels’ – in a manner of speaking.
And of course, Lightroom and the Camera Raw plug-in for Photoshop amount to the same thing. So folk who use either Lightroom or Photoshop EXCLUSIVELY are both suffering from the same problems – if they can be bothered to look for them.
It Depends on the Shot
The landscape image is, by its very nature, a low ISO, high resolution shot with huge depth of field, and bags of high frequency inter-tonal detail that needs sharpening correctly to its very maximum. We don’t want to sharpen the sky, as it’s sharp enough through depth of field, as is the water, and we require ZERO sharpening artifacts and no noise amplification.
If we utilise the same sharpening workflow on the center image, then we’ll all get our heads kicked in! No woman likes to see their skin texture sharpened – in point of fact we have to make it even more unsharp, smooth and diffuse in order to avoid a trip to our local A&E department.
The cheeky Red Squirrel requires a different approach again. For starters, it’s been taken on a conventional ‘wildlife camera’ – a Nikon D4. This camera sensor has a much lower resolution than either of the camera sensors used for the previous two shots.
It is also shot from a greater distance than the foreground subjects in either of the preceding images. And most importantly, it’s at a far higher ISO value, so it has more noise in it.
All three images require SELECTIVE sharpening. But most photographers think that global sharpening is a good idea, or at least something they can ‘get away with’.
If you are a photographer who wants to do nothing else but post to Facebook and Flickr then you might as well stop reading this post. Good luck to you and enjoy your photography, but everything you read in this post, or anywhere on this blog, is not for you.
But if you want to maximize the potential of your thousands of pounds worth of camera gear, and print or sell your images, then I hate to tell you, but you are going to have to LEARN STUFF.
Photoshop is where the magic happens.
As I said earlier, Photoshop is a raster image processor. As such, it needs to be fed an original image that is of THE UTMOST QUALITY. By this I mean a starting raw file that has been demosaiced and normalized to:
Contain ZERO demosaic artifacts of any kind.
Have the correct white and black points – in other words ZERO blown highlights or blocked shadows; that is, getting contrast under control.
Maximize the midtones to tease out the highest amount of those inter-tonal details, because this is where your sharpening is going to take place.
Contain no more sharpening than you can get away with, and certainly NOT the amount of sharpening you require in the finished image.
With points 1 thru 3 the benefits should be fairly obvious to you, but if you think about it for a second, the image described is rather ‘flattish-looking’.
But point 4 is somewhat ambiguous. What Adobe-philes like to call capture or input sharpening is very dependent on three variables:
Sensor megapixels
Demosaic efficiency
Sharpening method – namely Unsharp Mask or Deconvolution
The three are inextricably intertwined – so basically it’s a balancing act.
To learn this requires practice!
And to that end I’m embarking on the production of a set of videos that will help you get to grips with the variety of sharpening techniques that I use, and why I use them.
I’ll give you fair warning now – when finished it will be neither CHEAP nor SHORT, but it will be very instructive!
I want to get it to you as soon as possible, but you wouldn’t believe how long tuition videos take to produce. So right now I’m going to say it should be ready at the end of February or early March.
UPDATE: The new course is ready and on sale now, over on my digital download site.
A few days ago I uploaded a video to my YouTube channel explaining PPI and DPI – you can see that HERE .
But there is way more to pixel per inch (PPI) resolution values than just the general coverage I gave it in that video.
And this post is about a major impact of PPI resolution that seems to have evaded the understanding and comprehension of perhaps 95% of Photoshop users – and Lightroom users too for that matter.
I am talking about image view magnification, and the connection this has to your monitor.
Let’s make a new document in Photoshop:
We’ll make the new document 5 inches by 4 inches, 300ppi:
I want you to do this yourself, then get a plastic ruler – not a steel tape like I’ve used…..
Make sure you are viewing the new image at 100% magnification, and that you can see your Photoshop rulers along the top and down the left side of the workspace – and right click on one of the rulers and make sure the units are INCHES.
Take your plastic ruler and place it along the upper edge of your lower monitor bezel – not quite like I’ve done in the crappy GoPro still below:
Yes, my 5″ long image is in reality 13.5 inches long on the display!
The minute you do this, you may well get very confused!
Now then, the length of your 5×4 image, in “plastic ruler inches” will vary depending on the size and pixel pitch of your monitor.
Doing this on a 13″ MacBook Pro Retina the 5″ edge is actually 6.875″ giving us a magnification factor of 1.375:1
On a 24″ 1920×1200 HP monitor the 5″ edge is pretty much 16″ long giving us a magnification factor of 3.2:1
And on a 27″ Eizo ColorEdge the 5″ side is 13.75″ or thereabouts, giving a magnification factor of 2.75:1
The 24″ HP monitor has a long edge of not quite 20.5 inches containing 1920 pixels, giving it a pixel pitch of around 94ppi.
The 27″ Eizo has a long edge of 23.49 inches containing 2560 pixels, giving it a pixel pitch of 109ppi – this is why its magnification factor is less than the 24″ HP.
And the 13″ MacBook Pro Retina has a pixel pitch of 227ppi – hence the magnification factor is so low.
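If you’d rather see the arithmetic than get the ruler out, here’s a quick Python sketch of it (the physical widths are my approximations from the figures above):

```python
image_ppi = 300
image_pixels = 5 * image_ppi                        # the 5 inch edge = 1500 image pixels

monitors = {                                        # horizontal pixels, approx physical width in inches
    '24" HP 1920x1200': (1920, 20.5),
    '27" Eizo ColorEdge': (2560, 23.49),
}

for name, (h_pixels, width_inches) in monitors.items():
    monitor_ppi = h_pixels / width_inches           # the display's pixel pitch
    on_screen = image_pixels / monitor_ppi          # at 100%: 1 image pixel per screen pixel
    print(f'{name}: ~{monitor_ppi:.0f}ppi, the 5" edge displays at {on_screen:.1f}" '
          f'({image_ppi / monitor_ppi:.2f}:1 magnification)')
```

Run that and you get roughly 16″ at 3.2:1 on the HP, and 13.8″ at 2.75:1 on the Eizo – exactly what the plastic ruler tells you.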
So WTF Gives with 1:1 or 100% View Magnification Andy?
Well, it’s simple.
The vast majority of Ps users ‘think’ that a view magnification of 100% or 1:1 gives them a view of the image at full physical size, and some think it’s a full ppi resolution view, and that they are looking at the image at 300ppi.
WRONG – on BOTH counts !!
A 100% or 1:1 view magnification gives you a view of your image using ONE MONITOR or display PIXEL to RENDER ONE IMAGE PIXEL. In other words, the image-to-display pixel ratio is now 1:1.
So at a 100% or 1:1 view magnification you are viewing your image at exactly the same resolution as your monitor/display – which for the majority of desktop users means sub-100ppi.
Why do I say that? Because the majority of desktop machine users run a 24″, sub-100ppi monitor – Hell, this time last year even I did!
When I view a 300ppi image at 100% view magnification on my 27″ Eizo, I’m looking at it in a lowly resolution of 109ppi. With regard to its properties such as sharpness and inter-tonal detail, in essence, it looks only 1/3rd as good as it is in reality.
Hands up those who think this is a BAD THING.
Did you put your hand up? If you did, then see me after school….
It’s a good thing, because if I can process it to look good at 109ppi, then it will look even better at 300ppi.
This also means that if I deliberately sharpen certain areas (not the whole image!) of high frequency detail until they are visually right on the ragged edge of being over-sharp, then the minuscule halos I might have generated will actually be 3 times less obvious in reality.
Then when I print the image at 1440, 2880 or even 5760 DOTS per inch (that’s Epson stuff), that print is going to look so sharp it’ll make your eyeballs fall to bits.
And that dpi print resolution, coupled with sensible noise control at monitor ppi and 100% view magnification, is why noise doesn’t print to anywhere near the degree folk imagine it will.
This brings me to a point where I’d like to draw your attention to my latest YouTube video:
Did you like that – cheeky little trick isn’t it!
Anyway, back to the topic at hand.
If I process on a Retina display at over 200ppi resolution, I have a two-fold problem:
1. I don’t have as big a margin or ‘fudge factor’ to play with when it comes to things like sharpening.
2. Images actually look sharper than they are in reality – my 13″ MacBook Pro is horrible to process on, because of its excessive ppi and its small dimensions.
Seriously, if you are a stills photographer with a hankering for the latest 4 or 5k monitor, then grow up and learn to understand things for goodness sake!
Ultra-high resolution monitors are valid tools for video editors and, to a degree, stills photographers using large capacity medium format cameras. But for us mere mortals on 35mm format cameras, they can actually ‘get in the way’ when it comes to image evaluation and processing.
Working on a monitor with a ppi resolution between the mid 90s and low 100s, at 100% view magnification, will always give you the most flexible and easy processing workflow.
Just remember, Photoshop linear physical dimensions always ‘appear’ to be larger than ‘real inches’ !
And remember, at 100% view magnification, 1 IMAGE pixel is displayed by 1 SCREEN pixel. At 50% view magnification each SCREEN pixel is actually displaying the dithered average of a 2×2 block of IMAGE pixels, and at 25% magnification each monitor pixel is displaying the average of a 4×4 block of image pixels.
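A crude numeric illustration of that (real display resampling is a touch more sophisticated than a plain box average, but the principle holds):

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)           # a hypothetical 4x4 pixel image
half_view = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))    # 50% view: one value per 2x2 block
print(half_view)                                         # 4 screen pixels standing in for 16 image pixels
```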
Anyway, that’s about it from me until the New Year folks, though I am the world’s biggest Grinch, so I might well do another video or two on YouTube over the ‘festive period’ so don’t forget to subscribe over there.
Thanks for reading, thanks for watching my videos, and Have a Good One!
My YouTube Channel – Latest Photography Video Training
I’ve been busy this week adding more content to the old YouTube channel.
Adding content is really time-consuming, with recording times taking around twice the length of the final video.
Then there’s the editing, which usually takes around the same time, or a bit longer. Then encoding and compression and uploading takes around the same again.
So yes, a 25 minute video takes A LOT more than 25 minutes to make and make live for the world to view.
This week’s video training uploads are:
This video deals with the badly overlooked topic of raw file demosaicing.
Next up is:
This video is a refreshed version of getting contrast under control in Lightroom – particularly Lightroom Classic CC.
Then we have:
This video is something of a follow-up to the previous one, where I explain the essential differences between contrast and clarity.
And finally, one from yesterday – which is me, restraining myself from embarking on a full blown ‘rant’, all about the differences between DPI (dots per inch) and PPI (pixels per inch):
Important Note
Viewing these videos is essential for the betterment of your understanding – yes it is! And all I ask for in terms of repayment from yourselves is that you:
Give the video a ‘like’ by clicking the thumbs up!
YouTube is a funny old thing, but a substantial subscriber base and liked videos will bring me closer to laying my hands on the latest gear to review for you!
If all my blog subscribers would subscribe to my YouTube channel then my subs would more than treble – so go on, what are you waiting for?
I do like creating YouTube free content, but I do have to put food on the table, so I have to do ‘money making stuff’ as well, so I can’t afford to become a full-time YouTuber yet! But wow, would I like to be in that position.
So that’s that – appeal over.
Watch the videos, and if you have any particular topic you would like me to do a video on, then please just let me know. Either email me, or you can post in the comments below – no comment goes live here unless I approve it, so if you have a request but don’t want anyone else to see it, then just say.
Understanding colour inside Photoshop is riddled with confusion for the majority of users. This is due to the perpetual misuse of certain words and terms. Adobe themselves use incorrect terminology – which doesn’t help!
The aim of this post is to understand the attributes or properties of colour inside the Photoshop environment – “…is that right Andy?” “Yeh, it is!”
So, the first colour attribute we’re going to look at is HUE:
A colour wheel showing point-sampled HUES (colours) at 30 degree increments.
HUE can be construed as meaning ‘colour’ – or color for the benefit of our American friends “come on guys, learn to spell – you’ve had long enough!”
The colour wheel begins at 0 degrees with pure Red (255,0,0 in 8bit RGB terms), and moves clockwise through all the HUES/colours to end up back at pure Red – simple!
Above, we can see samples of primary red and secondary yellow together with their respective HUE degree values which are Red 0 degrees and Yellow 60 degrees. You can also see that the colour channel values for Red are 255,0,0 and Yellow 255,255,0. This shows that Yellow is a mix of Red light and Green light in equal proportions.
I told you it was easy!
Inside Photoshop the colour wheel starts and ends at 180 degrees CYAN, and is flattened out into a horizontal bar as in the Hue/Saturation adjustment:
Overall, there is no ambiguity over the meaning of the term HUE; it is what it is, and it is usually taken as meaning ‘what colour’ something is.
The same can be said for the next attribute of colour – SATURATION.
Or can it?
How do we define saturation?
Two different SATURATION values (100% & 50%) of the same HUE.
Above we can see two different saturation values for the same HUE (0 degrees Hue, 100% and 50% Saturation). I suppose the burning question is, do we have two different ‘colours’?
As photographers we mainly work with additive colour; that is, we add Red, Green and Blue coloured light to black in order to attain white. But in the world of painting, for instance, subtractive colour is used; pigments are overlaid on white (thus subtracting white) to make black. Printing uses the same model – CMY+K inks overlaid on ‘white’ paper… mmm, see here.
If we take a particular ‘colour’ of paint and we mix it with BLACK we have a different SHADE of the same colour. If we instead add WHITE we end up with what’s called a TINT of the same colour; and if we add grey to the original paint we arrive at a different TONE of the same colour.
Let’s look at that 50% saturated Red again:
Hue Red 0 degrees with 50% saturation.
We’ve basically added 128 Green and 128 Blue to 255 Red. Have we kept the same HUE – yes we have.
Is it the same colour? Be honest – you don’t know do you!
The answer is NO – they are two different ‘colours’, and the hexadecimal codes prove it – those are the hash-tag values ff0000 and ff8080. But in our world of additive colour we should only think of the word ‘colour’ as a generalisation because it is somewhat ambiguous and imprecise.
But we can quantify the SATURATION of a HUE – so we’re all good up to this point!
So we beaver away in Photoshop in the additive RGB colour mode, but what you might not realise is that we are working in a colour model within that mode, and quite frankly this is where the whole shebang turns to pooh for a lot of folk.
There are basically two colour models for, dare I use the word, ‘normal’ photography work: HSB (also known as HSV) and HSL, and both are cylindrical co-ordinate colour models:
HSB (HSV) and HSL colour models for additive RGB.
Without knowing one single thing about either, you can tell they are different just by looking at them.
All Photoshop default colour picker referencing is HSB – that is Hue, Saturation & Brightness; with equivalent RGB, Lab, CMYK hexadecimal values:
But in the Hue/Sat adjustment for example, we see the adjustments are HSL:
The HSL model references colour in terms of Hue, Saturation & Lightness – not flaming LUMINOSITY as so many people wrongly think!
And it’s that word luminosity that’s the single largest purveyor of confusion and misunderstanding – luminosity masking, luminosity blending mode are both terms that I and oh so many others use – and we’re all wrong.
I have an excuse – I know everything, but I have to use the wrong terminology otherwise no one else knows what I’m talking about!!!!!!!!! Plausible story and I’m sticking to it your honour………
Anyway, within Photoshop, HSB is used to select colours, and HSL is used to change them.
The reason for this is somewhat obvious when you take a close look at the two models again:
HSB (HSV) and HSL colour models for additive RGB. (V stands for Value = B in HSB).
In the HSB model look where the “whiteness” information is; it’s radial, and bound up in the ‘S’ saturation co-ordinate. But the “blackness” information is vertical, on the ‘B’ brightness co-ordinate. This is great when we want to pick/select/reference a colour.
But surely it would be more beneficial for the “whiteness” and “blackness” information to be attached to the same axis or dimension, especially when we need to increase or decrease that “white” or “black” co-ordinate value in processing.
So within the two models the ‘H’ hue co-ordinates are pretty much the same, but the ‘S’ saturation co-ordinates are different.
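You can see that divergence with a couple of lines of Python (its colorsys module calls HSB ‘HSV’, and returns HLS in H, L, S order), using the two reds from earlier:

```python
import colorsys

reds = {'pure red (ff0000)': (255, 0, 0),
        '50% saturated red (ff8080)': (255, 128, 128)}

for name, (r, g, b) in reds.items():
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)   # HSB/HSV reading
    hh, l, s2 = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)  # HSL reading
    print(f'{name}:  HSB S={s:.2f} B={v:.2f}  |  HSL S={s2:.2f} L={l:.2f}')

# pure red (ff0000):          HSB S=1.00 B=1.00  |  HSL S=1.00 L=0.50
# 50% saturated red (ff8080): HSB S=0.50 B=1.00  |  HSL S=1.00 L=0.75
```

Same pixel, two very different saturation numbers – which is exactly why the Photoshop colour picker (HSB) and the Hue/Sat adjustment (HSL) don’t read the same.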
So this leaves us with that most perennial of questions – what is the difference between Brightness and Lightness?
Firstly, there is a massive visual difference between the Brightness and Lightness information contained within an image as you will see now:
The ‘Brightness’ channel of HSB.
The ‘L’ channel of HSL
Straight off the bat you can see that there is far more “whites detail” information contained in the ‘L’ lightness map of the image than in the brightness map. Couple that with the fact that Lightness controls both black and white values for every pixel in your image – and you should now be able to comprehend the difference between Lightness and Brightness, and so be better at understanding colour inside Photoshop.
We’ll always use the highly bastardised terms like luminosity, luminance etc – but please be aware that you may be using them to describe something to which they DO NOT APPLY.
Luminosity is a measure of the magnitude of a light source – typically stars; but could loosely be applied to the lumens output power of any light source. Luminance is a measure of the reflected light from a subject being illuminated by a light source; and varies with distance from said light source – a la the inverse square law etc.
Either way, neither of them have got anything to do with the pixel values of an image inside Photoshop!
But LIGHTNESS certainly does.
Lumenzia – Enhanced Twilight Sunset Sky and Lighting
Sunset lighting and sky as captured by the camera – image is in need of some enhancement.
Now THAT’S more like it! Simple enhancement in Photoshop using Lumenzia.
I’m sure you’ll agree that the image looks fantastic after the processing, but if you watch the video below you’ll see that it’s such an easy, quick and simple procedure.
The first key to this simple adjustment is the mask from behind which the colour enhancement is made:
The L2 Lumenzia mask, modified slightly with a white brush in the Overlay blend mode.
There are a number of ways that this mask can be created, but all of them are more time-consuming to create than by using the simple Lumenzia interface.
The second key move is to switch the blend mode of the colour overlay layer (the one this mask applies to) to the Hard Light blending mode within the layers panel.
The overall adjustment process is, other than the minimal amount of manual brush ‘tweaking’ of the mask, simply a matter of a few clicks here and there – it couldn’t be simpler really, now could it?
OK, so I’ve made a tentative start on my new Photoshop video tutorials and I thought I’d upload this Colour Range Selection Tool Basics one to my Tube of Me channel – just so that everyone can see what the Fuzziness, Localised Colour Clusters and Range “do-hickies” actually do for your workflow process!
The colour range selection tool can be used for many different purposes within Photoshop where you want to make a selection based on Colour/Hue as opposed to a selection based on luminosity.
In this video I use it to effect a colour change to a specific object within an image; but in the previous video post I used it to ‘remove’ a black background.
But both cases amount to the same thing if you think about it logically – it’s just a way of ISOLATING pixels in an image based on their colour range.
Overall, this is a bit of a “quick ‘n dirty” way of doing the job, and I could do a little extra brush work inside the mask to tidy things up that little bit more!
But now you know how the tool itself works.
A purer way of changing localised colour involves a very different method – see these other videos on my channel:
Simple Masking in Photoshop – The Liquid Chocolate Shots
Masking in Photoshop is what the software was built for, and yet so many Photoshop users are unfamiliar or just downright confused by the concept that they never use the technique.
Mask mastery will transform the way you work with Photoshop!
Take these shots for instance:
Wanting a shot to look like liquid chocolate and cream on a black or white background is all well and good, but producing it can be either as simple or hard as you care to make it.
Trying to get a pure white background ‘in camera’ is problematic to say the least, and chucking hot melted chocolate around is fraught with its own set of problems!
Shooting on a dark or black background is easier because it demands LESS lighting.
Masking in Photoshop will allow us to isolate the subject and switch out the background.
Now for the ‘chocolate bit’ – we could substitute it with brown emulsion paint – but have you seen the bloody price of it?!
Cheap trade white emulsion comes by the gallon at less than the price of a litre of the right coloured paint; and masking in Photoshop + a flat colour layer with a clipping mask put in the right blend mode will turn white paint into liquid chocolate every time!
A tweak with the Greg Benz Lumenzia plugin will finish the shot in Photoshop:
A final tweak in Lightroom and the whole process takes us from the RAW shot on the left to the finished image on the right.
The key to a good mask in Photoshop is ALWAYS good, accurate pixel selection, and you’d be surprised just how simple it is.
Watch the video on my YouTube channel; I use the Colour Range tool to make a simple selection of the background, and a quick adjustment of the mask edge Smart Radius and Edge Contrast in order to obtain the perfect Photoshop mask for the job:
Like everything else in digital photography, when you know what you can do in post processing, it changes the way you shoot – hence I know I can make the shot with white paint on a black background!
Masking in Photoshop – you mustn’t let the concept frighten or intimidate you! It’s critical that you understand it if you want to get the very best from your images; and it’s a vast subject simply because there are many types of mask, and even more ways by which to go about producing them.
It’s a topic that no one ever stops learning about – nope, not even yours truly! But in order to explore it to the full you need to understand all the basic concepts AND how to cut through all the bullshit that pervades the internet about it – stick with me on this folks and hang on for the ride!