When on location for a shoot, or in post-production, scopes should be used to their full potential. The main problem is that they are hard to understand: what should we display, and to what end? Below, we outline what these tools do and how they can help you craft better images.

When working on live shoots, you will always see a vision engineer in the gallery, deciphering monitors that display waveforms, sinusoids and other readouts more reminiscent of a rocket launch than of a video image. These tools are used to control all of the variables that go into crafting the perfect image, and it is these tools – or scopes, as we’ll refer to them – that are found everywhere: in-camera, on external monitors and in grading software.

The primary reason for using scopes is that they are the only tools that analyse every pixel in an image based on the electrical signal produced – something far beyond what our ‘simple’ vision can judge from the camera frame. Shooting conditions can also hide detail, and it is often the case that the external reference monitor on a shoot is poorly calibrated, showing incorrect colours or exposure from the start. It is therefore important to use scopes during both the shoot and post-production to avoid errors and to deliver a proper colour grade.

The theoretical side


The additive colour model on the left, compared to the subtractive model on the right

Above you can see the two main colour models. On the left is the RGB (Red, Green, Blue) model – formally known as the additive colour model – which is used by all screens and anything that emits light. On the right is the subtractive colour model, which is used for printing, as the light is received by the medium.

In terms of the latter, mixing all of the primary colours (Cyan, Magenta and Yellow) gives us black on paper. For ‘electronic imaging’, on the other hand, adding the primaries at full strength gives as much signal as possible, which in turn gives us absolute white. It’s just like a lightbulb: full power for maximum light, off for black.
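The additive logic can be sketched in a few lines of code. This is a simplified model assuming 8-bit RGB channels (0–255); the function name is our own:

```python
# Simplified additive model: light sources add up channel-wise,
# clipped to the 8-bit maximum of 255 (a sketch, not a colour-science model).
def mix_additive(*sources):
    """Add RGB light sources channel-wise, clipping at 255."""
    return tuple(min(sum(src[i] for src in sources), 255) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(mix_additive(RED, GREEN, BLUE))  # all primaries at full power -> white
print(mix_additive())                  # no light at all -> black
```

Adding all three primaries at maximum yields (255, 255, 255), i.e. absolute white, while adding nothing leaves (0, 0, 0), the lightbulb switched off.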

It is important to mention this model in order to understand the representations used by the tools we’re going to talk about, because everything is based on a quantity of energy. It is this quantity of energy – sometimes segmented by colour – that is interpreted to help us with the scopes below.

The histogram: the best known but perhaps the least useful


The histogram only concerns the luminance of an image. Source – B&H Photo Video

A histogram shows the number of pixels (the vertical axis) that fall within each luminance zone, from left (black) to right (white). If, for example, you see a ‘mountain’ on the right-hand side, the image is very bright, whereas if it sits on the left-hand side, the image is very dark. A mountain with a flat top, meanwhile, indicates that your image is clipped – over- or under-exposed.
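The counting a histogram performs is simple to sketch. Here is a minimal example using NumPy on a synthetic 8-bit frame (the data is hypothetical, purely to illustrate the binning):

```python
import numpy as np

# A synthetic 8-bit greyscale frame: left half black, right half white.
frame = np.zeros((4, 8), dtype=np.uint8)
frame[:, 4:] = 255

# One bin per 8-bit luminance value, from 0 (black) to 255 (white).
counts, _ = np.histogram(frame, bins=256, range=(0, 256))

print(counts[0], counts[255])  # 16 pixels in the black bin, 16 in the white bin
```

The two ‘mountains’ at the far left and far right of this histogram are exactly the kind of piled-up pixels the scope makes visible.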

The issue with histograms is that this representation does not correlate with the composition of your frame. If your overexposed area is at the top of the frame (a sky, for example), the ‘mountain’ will still appear towards the right of the histogram. This makes it difficult to identify where corrections are needed. We’ve outlined some examples of this below:


An overexposed image



The image is more balanced. As it remains very white, the majority of the representation stays on the right, but you can make out the curtains (in the middle) and the jeans (on the left) of the representation.

As a result, we’d recommend avoiding ‘flat tops’ in the histogram and aiming for as even a distribution as possible.

The waveform monitor: the most versatile tool

The waveform monitor acts as a kind of super-histogram, with two major differences that make it far more effective. First, the image is represented from left to right (i.e. matching the real frame); second, the Y-axis on the left (vertical) represents the luminance value, with 0 being black and 100 being white. This is easier to understand by looking at the image below:

02_Waveform_luminance_01

As you can see, the image is overexposed on the left of the frame. We can therefore find the indication of this overexposure on the waveform monitor – again on the left – as the trace reaches 100 IRE, something that wasn’t as visible on location during the shoot. The blacks, on the other hand, present no problem across the entire image. Let’s take a look at a better-exposed image:

02_Waveform_luminance_02

Here we have recovered the windows of the building on the left, and there are no longer any burnt pixels in the image. The waveform monitor makes it easy to keep the exposure values contained between the high and low bars. This support can also be extended to colour, as seen with the next tool we cover.
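The mapping the waveform performs can be sketched as follows: for every x position of the frame, it plots every luma value found in that column. The frame below is hypothetical, with luma already scaled to IRE-style units (0 = black, 100 = white):

```python
import numpy as np

# A hypothetical 3x4 luma frame in IRE-style units (0 = black, 100 = white).
frame = np.array([
    [100, 100, 50, 10],
    [100,  80, 50, 10],
    [100,  90, 40,  0],
])

# Each column of the frame becomes one column of the scope display:
for x in range(frame.shape[1]):
    print(x, sorted(set(frame[:, x].tolist())))

# Columns whose trace touches 100 IRE are clipped highlights:
clipped = np.flatnonzero((frame >= 100).any(axis=0))
print(clipped)  # the overexposure sits on the left of the frame
```

Because the scope’s x-axis matches the frame’s x-axis, the clipping shows up in the same horizontal position as the overexposed windows – exactly what the histogram could not tell us.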

The RGB waveform monitor: exposure AND balance!

The RGB waveform shares the same characteristics as the generic luminance waveform, but displays the quantities of red, green and blue on the same scale. This lets us easily see whether one colour is over-represented compared to another, while still checking the exposure of the image. Let’s look at the image below:

03_Waveform_RGB_01

We can see that the image has a magenta cast in the whites. This phenomenon is much more visible on the RGB waveform monitor, and the walls – which are supposed to be white – show the defect most clearly: red is over-represented, and the image is overexposed. Let’s look at the corrected version:

03_Waveform_RGB_02

As demonstrated above, what is supposed to be white – the left wall – now shows RGB values that merge. This is normal, since adding the three primary colours gives us white; hence the ‘additive’ model we discussed earlier in the article. The brown staircase, meanwhile, shows ‘more red’, which is expected as it is brown, and the right wall also correctly displays the three channels combined, hence the white. These values can also be displayed as a ‘parade’, i.e. the three primary colours side by side, as shown below:

03_Waveform_RGB_03

We can therefore analyse the three colours precisely in order to obtain a perfect image.
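The check the RGB waveform makes possible can be sketched as a comparison of channel levels: on a neutral (white or grey) surface, the three traces should merge. The sample values here are hypothetical:

```python
import numpy as np

# Hypothetical wall samples: one with a magenta cast, one corrected.
tinted_wall   = np.array([[205, 180, 200]])  # red and blue raised vs green
balanced_wall = np.array([[200, 200, 200]])  # neutral: R = G = B

def channel_spread(region):
    """Largest gap between the mean R, G and B levels of a region."""
    per_channel = region.reshape(-1, 3).mean(axis=0)
    return float(per_channel.max() - per_channel.min())

print(channel_spread(tinted_wall))    # noticeable gap: a colour cast
print(channel_spread(balanced_wall))  # traces merge: neutral white
```

A spread of zero on a known-white surface is what the merged traces on the corrected waveform are telling us visually.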

False colours: better than zebras!

False colours can be compared to the image produced by a thermal camera, except that false colours react to exposure and luminance, not to heat. Unlike zebras, which ‘scratch off’ anything that exceeds a predetermined IRE value, false colours follow a scale. Red, for example, shows anything at or above 100 IRE, but you can still see all the subtle differences below it, which makes the tool usable for exposing skin tones. Let’s look at the example below:

04_Falsecolors_01

As we can see, the image is burnt and mostly red, with all of the detail lost. We may want to preserve the subject’s face, but that too is overexposed, since flesh tones should sit between 40 and 50 IRE (and therefore show green). This illustrates the difference between false colours and zebras: the latter would only flag the sky as burnt.
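At heart, a false-colour overlay is a lookup from luminance to colour. Here is a toy sketch: the bands are simplified and hypothetical (real monitors use much finer palettes), keeping only red for clipping and the green flesh-tone band mentioned above:

```python
# Toy false-colour lookup on an IRE scale (bands are simplified/hypothetical).
def false_color(ire):
    if ire >= 100:
        return "red"      # clipped highlights: detail is lost
    if 40 <= ire <= 50:
        return "green"    # typical flesh-tone exposure range
    if ire <= 0:
        return "purple"   # crushed blacks
    return "grey"         # everything else: no warning

print(false_color(105), false_color(45), false_color(-2))
```

A zebra pattern would implement only the first branch; the value of false colours is that the rest of the scale stays readable at the same time.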

04_Falsecolors_02

Above is the corrected version. False colours provide a very good graphic display of the exposure, and as you can now see, the face of the model no longer has any red areas. Unfortunately, this functionality is rarely available in-camera unless you move up to higher-end production cameras such as the VariCam line-up. Nonetheless, external monitors from the likes of SmallHD and Atomos all have it.

Vectorscope: for colours… and faces!

Vectorscopes are both valuable and unique in that they focus on colour saturation whilst completely ignoring luminance. The different hues are represented around a circle, with the display running from the centre (zero saturation) to the outside edge (100% saturation, the limit of the tool), at an angle reflecting the colours found in the image.

Skin tones, for example, all share the same reference: a precise angle on the vectorscope, symbolised by a line. As a result, it becomes extremely simple to enhance a face by bringing its values closer to that line. Here are some examples outlined below:

05_Vectorsope_01

Above, the image is magenta, and its trace sits around -40 degrees away from the line that lies between red and yellow. This highlights that the skin tones are badly rendered.

05_Vectorsope_02

However, in the example above we achieve a perfect render of the flesh tones simply by relying on the tool. Success!
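The geometry a vectorscope relies on can be sketched by converting a pixel to chroma coordinates. This sketch uses BT.601 luma/chroma weights; the function name is our own:

```python
import math

# Where a single RGB pixel lands on a vectorscope: the angle gives the hue,
# the radius the saturation. Uses BT.601 luma/chroma weights (a sketch).
def vectorscope_point(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance (ignored by the scope)
    cb = (b - y) * 0.564                   # horizontal scope axis
    cr = (r - y) * 0.713                   # vertical scope axis
    angle = math.degrees(math.atan2(cr, cb)) % 360
    radius = math.hypot(cb, cr)
    return angle, radius

# Pure grey has zero chroma, so it sits exactly at the centre of the scope:
_, grey_radius = vectorscope_point(128, 128, 128)
print(grey_radius)

# A saturated red pixel lands well away from the centre:
red_angle, red_radius = vectorscope_point(255, 0, 0)
print(red_angle, red_radius)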