Thursday, December 10, 2015

Know your camera - edge enhancement or sharpness

In the first post I showed a series of images to demonstrate how the sharpness varies as a function of f-stop. In that post I also indicated that the images were not as good as they could be, as I had changed the "sharpness" setting in the image controls menu of my camera to -5.
The word "sharpness" for this menu setting is somewhat misleading as it doesn't really affect the sharpness of the image. This parameter affects an aspect of the image which is usually called edge-enhancement. What this parameter does is it measures the difference in brightness between two adjacent pixels either horizontally, vertically or both. When "sharpness" is increased the darker pixel of the two is darkened a little more and the brighter pixel of the two is brightened a little. This is illustrated by the image on the left. The black solid line represents a video signal which changes abruptly from a low level (i.e. a dark object) to a high level (i.e. a bright object). The dotted blue line represents the resulting video signal after sharpening has been applied. The video signal gets slightly darker, just before it jumps up to the high level. The video signal is also slightly brighter then the original signal right after the transition from dark to bright.
The larger the difference in brightness between the two pixels, the stronger the brightening or darkening will be. The net brightening or darkening of a pixel is the sum of the adjustments resulting from its differences with all of its neighboring pixels.
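To make this concrete, here is a minimal sketch of such an operation on a one-dimensional signal. I don't know the exact algorithm the camera uses; this simply assumes a basic three-tap sharpening kernel, which produces the undershoot and overshoot described above.

```python
import numpy as np

# A step from a dark level (16) to a bright level (200), like the
# solid black line in the illustration.
signal = np.array([16.0] * 8 + [200.0] * 8)

# Simple 1-D sharpening kernel: each pixel is pushed away from the
# average of its neighbors; k sets the strength ("sharpness").
k = 0.5
kernel = np.array([-k, 1 + 2 * k, -k])

sharpened = np.convolve(signal, kernel, mode="same")
print(signal.astype(int))
print(sharpened.astype(int))
# The result dips below 16 just before the step and overshoots 200
# just after it -- the dotted blue line in the illustration.
```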
Most consumer cameras use increased sharpening to give the image a sharper, more appealing appearance with more "pop". The same trick is used on most TVs; they too apply sharpening to make the image look more detailed. At first sight the images may appear more detailed when sharpening is applied, but in reality sharpening destroys image information. It is a destructive (i.e. non-reversible) transformation which may wipe out subtle details. This is most clearly seen on images where the sharpening has been turned up too far: edges take on a cartoon-like appearance, with distinct black or white lines surrounding objects. The worst effect occurs when sharpening is applied multiple times to the same image. You may then end up with objects surrounded by multiple black and white contours. This may happen, for example, when an image was originally recorded in SD resolution with sharpening applied, then up-scaled to HD and sharpened again, possibly by the broadcaster, or by the TV it is being watched on, or both. I have included a link to some fine examples of the damage that may be done.
Additional problems may occur when the image is aggressively compressed to e.g. a JPG or AVCHD format after sharpening has been applied. Sharp edges do not compress very well in such formats, so either ringing artifacts appear or a large portion of the data is spent on encoding these edges, leaving fewer bits for encoding real detail.
I think this type of edge-enhancement is a relic from the age of analogue video. Analogue circuits have a rise-time which dampens abrupt signal changes and edge-enhancement can counteract this somewhat, but in digital video it can only be used to mask the fact that an image doesn't have much true sharpness or detail to start with.
Ideally, sharpening is applied only once in the entire image processing chain from camera to the TV set of the viewer. Due to the nature of the transformation it should be the last step in that chain, which means it belongs in the viewer's TV set, and I think it should be up to the viewer to decide how strong its effect should be, as the optimal value strongly depends on the characteristics of the display device and the viewer's personal preference.
Many people who buy a (consumer) camera and know about this sharpening operation the camera may apply will look for this parameter in the settings menu and dial it down all the way, as they are familiar with its destructive effect and know that the default setting, especially on consumer cameras, is quite high. In fact, only the more advanced consumer (prosumer) cameras will allow you to dial it down. Professional cameras always allow you to change its strength and may allow you to tweak other aspects of its operation as well.
My camera, the Panasonic hc-x929/920, also allows you to tweak its "sharpness" setting, and from my experience with other cameras I initially decided to dial it down without properly verifying its effect. Only later, after carefully studying its effect on the image at different strengths, did I realize this camera behaved differently from what I had expected. I have taken a series of test shots of a scene, all taken with the same aperture of f5.6 (which gives the best "real" sharpness, as I had figured out earlier as described in this post). Furthermore I used manual focus, white balance and exposure to ensure that the sharpness setting from the image menu of the camera was the only thing that changed between shots. Let's show the whole series at once; the white number in each picture indicates the value of the sharpness parameter (click to see full size):

When examining the images it appears that applying a negative sharpness value doesn't really reveal any more details or remove any false contours. Instead, it appears to apply a sort of averaging of pixels, or a Gaussian blur, to the image. Applying a positive value does alter the image as expected, increasing contrast and creating fringes.
To verify my "theory" I first took the neutral image with sharpness zero and applied a "sharpen" operation to it in a graphics program to see if it would match any of the images sharpened in camera. I think the image sharpened "in post" (i.e. on the computer, after it had been recorded) matches the image recorded with a sharpness setting of +3 quite well.
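If you want to make this comparison a little less subjective than judging by eye, a rough sketch like the following will do. The filenames are hypothetical, and Pillow's generic SHARPEN filter merely stands in for whatever operation your graphics program applies.

```python
from PIL import Image, ImageFilter
import numpy as np

# Hypothetical frame grabs from the test recordings.
neutral = Image.open("sharpness_0.png")
in_camera = Image.open("sharpness_plus3.png")

# Sharpen the neutral frame "in post" and compare it to the frame
# sharpened in camera via the mean absolute pixel difference.
in_post = neutral.filter(ImageFilter.SHARPEN)
diff = np.abs(np.asarray(in_post, dtype=float) - np.asarray(in_camera, dtype=float))
print(f"mean absolute difference: {diff.mean():.2f}")
# The lower the value, the better the in-camera setting matches this filter.
```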


As a cross-check, I also took an image recorded with a sharpness of +5 and applied a "soften" operation to see if that would match an image recorded with a lower sharpness setting. Here I could not get a proper match: either the contrast did not match or the post-softened image had less detail.

This confirms that a positive value of the sharpness parameter truly applies edge enhancement, as was to be expected.
But now for the more interesting part: what happens if you use a negative sharpness? First I took the neutral image again, recorded with sharpness zero, applied a "soften" operation and tried to find an image recorded with a negative sharpness which matched. I think it matches well with the image recorded with sharpness -5.

And again as a cross-check I took the image recorded with a sharpness of -5 and applied a "sharpen" operation to it in post to see if it would match any of the other images recorded with a higher setting of the sharpness parameter. Again, I think no proper match could be found. Either the contrast did not match, or the amount of discernible detail did not match.

These tests confirm that the neutral position of the "sharpness" parameter of this camera is truly neutral: no sharpening is applied which could be reduced by lowering the value of this parameter. This is quite unusual for a consumer camera. Instead, when lowering this parameter, pixels are averaged and detail is lost. My conclusion is that for best results this parameter should be left at zero.
Does this mean this parameter is pointless? Well, you could use it as a crude form of noise reduction. In my experience, when using high amounts of gain (e.g. 15 dB or higher), decreasing the sharpness to -2 will reduce the noise a little (as pixels are averaged) without significantly reducing true sharpness. After all, when there is little light, the sharpness of the lens is reduced already (due to the wide open aperture, as shown in the first post). However, reducing noise afterwards on a computer may give better results if you are willing to spend some time on it.
If you want to give the image more "pop", increasing the sharpness to +2 might give pleasing results without introducing ugly contours, but again, doing this afterwards on a computer may give better results, or at least a better opportunity to judge the effect.
As a side note, if you look closely at the images you may have noticed that certain edges are emphasized regardless of the sharpness setting, e.g. the bottom and vertical edges of the blue rectangle on the right DVD box, the edges around the red circles, and the edges on the left DVD box where black sits above the beige. I suspect this is the result of chroma subsampling, i.e. the colors being recorded at a lower resolution than the brightness information. Therefore the colors don't always overlap perfectly with the brightness and bleed into adjacent areas. It appears this may also affect brightness around edges. This would then be caused by the fact that the images are recorded in the AVCHD format with 4:2:0 color sampling. The 3MOS sensor of the x920 should in theory be capable of producing highly accurate color detail, but unfortunately a lot of it is discarded when the image gets recorded. Again, I'm not entirely sure what is causing these edges, as I also find them to be wider than they ought to be, but if you think you know what is causing this I would like to invite you to leave a comment.
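To get a feeling for how 4:2:0 subsampling makes color bleed across an edge, here is a toy simulation. It assumes a naive 2x2 box average for the chroma downsampling; real AVCHD encoders filter more carefully, but the bleeding is the same in principle.

```python
import numpy as np

# A hard vertical edge between a "red" area and a "blue" area,
# represented as two separate color planes for brevity.
red = np.zeros((4, 8));  red[:, :3] = 1.0    # red on the left
blue = np.zeros((4, 8)); blue[:, 3:] = 1.0   # blue on the right

def subsample_420(plane):
    # Average each 2x2 block (naive 4:2:0), then scale back up.
    h, w = plane.shape
    small = plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return small.repeat(2, axis=0).repeat(2, axis=1)

print(subsample_420(red)[0])    # [1.  1.  0.5 0.5 0.  0.  0.  0. ]
print(subsample_420(blue)[0])   # [0.  0.  0.5 0.5 1.  1.  1.  1. ]
# Both colors now spill across the 2x2 block straddling the edge.
```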

Sunday, November 29, 2015

Know your camera - white balance


In the previous post I made a series of shots at different f-stops but equal exposure to observe the effect of the ND filter on the bokeh. The ND filter had no noticeable effect on the rendering of the bokeh, but I did notice something else in the series of pictures. The colors weren't entirely identical across the series, while the lighting was: it was an overcast day and the white balance of the camera was fixed at the "cloudy" preset, so something else must have altered the colors. By loading the images into a graphics program and using the histogram and color picker tools I could see there was a slight shift in color towards cyan as the f-number increased (a sketch of this kind of check follows the two frames below). This could be explained by the ND filter introducing a slight color cast to the image. ND filters are rarely exactly color neutral. If the built-in filter is slightly cyan, a slight color shift can be expected as the ND filter covers a larger or smaller portion of the aperture.
f2.8

f5.6
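For completeness, this is roughly how such a check can be done programmatically instead of with the color picker. The filenames and patch coordinates are of course hypothetical.

```python
from PIL import Image
import numpy as np

# Hypothetical frame grabs of the same scene at different f-numbers.
frames = {"f2.8": "frame_f28.png", "f5.6": "frame_f56.png"}

# A patch of the image that should be neutral grey (left, top, right, bottom).
patch = (100, 100, 140, 140)

for fstop, filename in frames.items():
    rgb = np.asarray(Image.open(filename).crop(patch), dtype=float)
    r, g, b = rgb[..., 0].mean(), rgb[..., 1].mean(), rgb[..., 2].mean()
    print(f"{fstop}: R={r:.1f} G={g:.1f} B={b:.1f}")
# Rising G and B relative to R as the f-number increases would
# indicate the cyan shift described above.
```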

If the hue of the ND filter is constant enough across individual copies, the camera engineers could let the camera automatically correct for the hue of the ND filter by tweaking the white balance setting as a function of f-number. Just nudge the white balance slightly towards red as the f-stop increases from f2.8 to f5.6 and the user would never notice a change in hue while the f-number is changed. Apparently the camera does not perform such a correction. Or alternatively it does, but there is a small margin of error due to the variation in hue of the individual ND filters, and it is this margin of error that we observe here. I suspect it simply performs no correction.
So if we want to set the right white balance, how should it be done? The first step consists of selecting which light source to balance on. This step may not always be as trivial as it initially seems. Let me demonstrate this with the following example.
Imagine you are standing on a square surrounded by white buildings. It is early morning, so the sun is still fairly close to the horizon. You want to make some shots of one particular building whose wall facing the square is still in the shadow. You first want to take a total shot of the building, so you position yourself at the other side of the square, which is still bathed in sunlight. By the time you have set up your camera and you look at the building, you notice that its walls still have a bluish hue as they are in the shadows, while the walls closest to you appear white. In an effort to depict the building realistically you set the white balance in such a way that the walls close to you appear white, just as they appear to you. Then you record just the face of the building you wanted to film.
Next, you move closer to the building to shoot some close-ups of various details. You cross the square again and set up your camera in the shadow. By the time you are ready to make your first shot, you notice that the wall of the building you want to shoot now appears white, while the walls of the buildings on the opposite side of the square have a slight warm yellow tone. You adjust your white balance again, but now ensure that the walls closest to you appear white.
While taking all these shots you have ensured that the colors of all images appear to the viewer as they appeared to you. However, when the viewer watches the end result he will see a total shot of the building followed by a series of close-ups where the hue of the building suddenly changes from one shot to the next. The viewer is unaware of the surroundings of the building and might not know when the camera was in the shadow and when in the sun.
The point of this example is that the choice of which light source to balance on depends on the contextual information the viewer has about the subject. It is ultimately a subjective choice which the camera operator has to make, depending on what the viewer will know, or more precisely, what the director wants the viewer to know.
Once we know which light source we want to balance on, we should let the camera measure the spectrum of the light source. Through either the color filters of the sensor (on a single-chip Bayer sensor) or the prism of a three-chip camera (like the Panasonic x929 I use) this spectrum is split into three parts. Each part corresponds to a primary color channel of the camera (whose spectral ranges may partially overlap). The camera measures the amount of light captured in each part. Then, usually assuming the spectral distribution of a black body or similar full-spectrum light source, it can infer the spectral distribution of the light source. An amplification factor is applied to each channel to ensure that all channels give off a signal of identical strength when they capture the pure, unaltered light from the light source.
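In its simplest form that last step looks something like the sketch below. The channel readings are made-up numbers, and a real camera does this on the raw sensor signals with more sophistication.

```python
# Hypothetical raw channel readings while the sensor is evenly lit
# by the light source (e.g. through a diffuser).
measured = {"R": 180.0, "G": 220.0, "B": 140.0}

# Amplify each channel so all three report the same strength for the
# pure light of this source (here normalised to the G channel).
gains = {ch: measured["G"] / value for ch, value in measured.items()}
print(gains)   # {'R': 1.22..., 'G': 1.0, 'B': 1.57...}

# Applying these gains to every pixel renders this light source neutral.
```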
In this process the camera makes an assumption about the spectral distribution of the light source. This assumption is correct for sunlight and roughly correct for incandescent lights. Certain light sources like fluorescent tubes, LED lights or street lights have a very different spectral distribution. Therefore, if one white-balances on such light sources, some colors may be recorded correctly, but others will not. Fortunately there are, for example, LED light sources which have been tweaked in such a way that they still give reasonably correct results despite the incorrect assumption of the camera. Furthermore, a camera may have special white balance presets which take into account the particular spectral distribution of certain light sources, e.g. the fluorescent preset. For these particular light sources the best results may be obtained by using the presets.
For full-spectrum light sources with a spectral distribution resembling that of sunlight, we have to let the camera measure the light. Ideally we would simply point the camera at the light source and let it measure, but unfortunately the lens will project an image of the light source on the sensor, while we would like the light to be evenly distributed across the entire sensor. Fortunately there are tools to circumvent that problem.
One such tool is the white translucent lens cap or an ExpoDisc-like tool. Hold these in front of the lens while pointing the camera at the light source. They diffuse the light before it enters the lens, causing an even distribution of the light across the image sensor.
A second tool is the grey card. Point the card at the light source and ensure it gets illuminated by the light source. Then point the camera at the grey card and ensure it fills the entire frame. The light is now being measured indirectly, after having been reflected by the card.
With both tools it is important that they are spectrally neutral. This means that they transmit (lens cap) or reflect (grey card) all visible frequencies to the same degree. Otherwise one would measure the spectrum of the light source after it has been modified by the lens cap or card. After having set the white balance with a wrong card, the card itself may look grey, but other colors may be off, just as when using a light source with an unusual spectral distribution.
The next step is to ensure that the even, featureless image which gets projected in this way doesn't get under- or overexposed. Underexposure will lead to a weak signal from which no accurate measurement can be made. Overexposure will cause all channels to give their maximum signal regardless of the spectral distribution of the light. Even when the image isn't fully overexposed, at high exposures the different color channels might start to respond slightly differently to an increase in brightness, thereby throwing off your white balance. In practice you can safely white balance on exposures from, say, 30% to 70%.
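A sketch of that sanity check, assuming an 8-bit frame grab of the evenly lit, featureless view:

```python
import numpy as np

def usable_for_white_balance(frame, low=0.30, high=0.70):
    # frame: 8-bit image of the diffused, featureless view of the light.
    level = frame.mean() / 255.0
    return low <= level <= high

grey_view = np.full((1080, 1920), 120, dtype=np.uint8)   # hypothetical grab
print(usable_for_white_balance(grey_view))               # 120/255 = 0.47 -> True
```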
In fact, the built-in ND filter in the x929 complicates things a little. As the ND filter is not entirely color neutral, you should first set your exposure to whatever is required for the shot and then white balance. If you alter the exposure after the white balance measurement and the position of the ND filter changes, your colors change slightly.
The built-in ND filters of professional video cameras are designed differently, and it now becomes clear why. Professional cameras have a filter wheel with multiple ND filters of different strengths. You can set the wheel in three or four positions to change the ND strength, but each filter always covers the aperture completely. This has two advantages: first, there is no reduction of sharpness as described in this post, and second, there is no hue change while changing the f-stop. The only thing to keep in mind with a professional camera is that one should set the position of the filter wheel before setting the white balance. Apart from this, the filters might also be made of better materials, or a built-in electronic correction might be applied which would eliminate the need to set the filter wheel beforehand; this I do not know.
And then the measurement can be made by either pressing the white balance button or (as in the case of the x929) activating the manual white balance reading through the menu. Now your colors should be accurate (to the best capability of your camera) across the entire color spectrum.
It may be tempting at times to use other objects which appear white or grey, and some objects may give satisfying results (e.g. clouds at noon) but they may also throw off your colors significantly (e.g. a piece of printer paper).
First of all, if you look critically at the objects around you, you may find it quite hard to find an object which truly appears white or grey. White comes in many shades and true white is a rare color. Furthermore, certain colors are downright misleading. The washing powder you use to wash your white clothing contains optical whiteners. These are chemicals which absorb UV light and re-emit it as a slightly bluish light. The bluish light overshadows the somewhat yellowish light which comes from the clothing fibers themselves, giving them a bright white appearance when viewed under natural light (which contains a substantial amount of UV light). White balance on a white t-shirt and your colors may take on an unexpected warm, reddish tone.
Many types of paper are whitened using a similar principle, which also makes them unsuitable for an accurate measurement. Also realize that translucent lens caps and grey cards are usually made of plastics or similar materials which may change color as they age under the influence of UV light. Such aged lens caps or grey cards may throw off your colors.
If a manual white balance as described above is not appropriate because the light source is not a full-spectrum light source, you might achieve better results with any of the presets which have been optimized for a specific light source.
If the light source is appropriate but the procedure cannot be executed, e.g. because the camera cannot be exposed to the same light as the subject, then you should also use a preset to get at least as close as possible. You can correct the colors of your clips on a computer after you have recorded them. This is easy, as you can apply the same correction to all shots made under the same lighting.
Note to self: stop using AWB.
In any case it is best not to use the auto white balance feature. Your white balance may change during a shot, which makes it much harder, if not impossible, to correct afterwards. Trust me, I have learned this the hard way. By the way, this example was shot with a Panasonic NV-MX300, a predecessor of the X920.
In addition to this, manufacturers may have tuned the auto white balance and the presets in certain ways which give the camera a certain "look". For some reason unknown to me, Panasonic camcorders often have a slightly greenish tone when using a white balance preset or auto white balance. When a Sony camera is off, it is usually too bluish, and from the little experience I have with Canon I get the impression they tend to favor magenta. With a manual white balance all these errors disappear and they can all produce proper colors.
Given the differences in color rendition of the three cameras in the above clip, I can already conclude that they must all have been operating in their auto-everything modes. The author confirms this in a comment to the video. It may be tempting to draw conclusions about the quality of colors of these cameras, but you can only really draw conclusions about the quality of their auto-everything modes. A camera with a poor auto white balance may very well outperform a camera with a more clever auto white balance mode once both cameras are properly manually white-balanced.

Saturday, November 14, 2015

Know your camera - stars and rainbows

In the previous post I dissected the exposure system of my camera (the Panasonic hc-x929/x920): I showed the combination of iris blades and ND filter used and showed how it affects the sharpness of the image. In order to get optimal sharpness in my images I must select an f-stop near f2.8 or f5.6. There are, however, also some other image effects which are affected by the choice of f-stop. Although these will generally not be decisive in choosing a particular f-stop, I still wanted to show them to you in this post.
The first, most obvious effect is how the bokeh is rendered.
I must add that we are talking here about a small-sensor camera, which by the nature of its design has a very large depth of field (or DOF) in most situations. So out-of-focus elements will not be as common in its footage as in the footage of e.g. a full-frame DSLR. Only at long focal lengths (i.e. zoomed in), or when focusing at very close distances and shooting at wide open apertures, will this camera be capable of separating the subject from its environment by bringing the environment out of focus while keeping the subject in focus. And as the DOF increases with increasing f-number, I'm pretty sure that beyond f11 the DOF will effectively run from the lens to infinity. It is the shape of the aperture between f1.5 and f2.8 which effectively determines its rendering of bokeh.
Out-of-focus dots of light are rendered as a projection of the aperture. Knowing what I know now about the aperture system of my camera, I can conclude that the bokeh will be a regular octagon between f1.5 and f2.8 and also between f5.6 and f11. However, above f2.8 and below f5.6 I might expect an irregular bokeh which may give out-of-focus elements an unusual look. Beyond f11 the bokeh will have changed to a diamond shape.
Deducing is one thing, testing another. So I took a couple of pictures of the same out-of-focus subject at different apertures but with identical brightness (as far as this was possible) to see if I would notice anything unusual between f2.8 and f5.6. Obviously the image gets sharper as the f-number increases. Here are a few (click to enlarge):
f2.8

f4.0

f5.6
I can't see anything unusual about the bokeh. The f4.0 picture looks normal to me, as did the other pictures at other f-numbers, which I will not bother to show. So the "unusual" bokeh which I expected between f2.8 and f5.6 did not appear.
The second effect is how bright point sources of light are rendered. These are effectively rendered as stars where the points of the stars are oriented perpendicular to the orientation of the edges of the aperture blades. Let's show a few examples (click to enlarge):
f2.8
f3.1
f5.1
f5.6
The following can be observed:
  • The star gets smaller with increasing f-number
  • Between f3.1 and f5.1 an additional pair of vertical legs is superimposed onto the star, slightly misaligned with the original short legs. (Not very easy to see.)
  • At f5.1 a subtle diffraction pattern can be seen in the vertical legs, where the light is separated into a rainbow pattern.
This can be explained by the design of the combination of the iris and built-in ND filter. As soon as the ND filter is shifted in front of the aperture, additional horizontal edges are introduced which add an additional pair of vertical legs to the star. Apparently the edges of the ND filter are not perfectly aligned with the edges of the iris, giving them a slightly different angle.
When the ND filter covers the aperture almost entirely, two small slits remain which, due to their small size, cause a diffraction pattern like you might have seen in a typical high-school physics experiment. As the locations of the minima and maxima depend on the wavelength of the light, the light effectively gets separated into its individual wavelengths, giving a series of overlapping rainbow patterns. Once the ND filter covers the aperture entirely, the star is regular again and has diminished in size.
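You can see why the colors separate by computing where the interference maxima fall: for a slit spacing d, the maxima of order m sit at d·sinθ = m·λ. The spacing below is an arbitrary assumption; only the relative spread between the colors matters.

```python
import math

d = 20e-6    # assumed slit spacing of 20 micrometers (illustrative only)
wavelengths = {"blue": 450e-9, "green": 550e-9, "red": 650e-9}

# Angle of the first-order maximum (m = 1) for each wavelength.
for color, lam in wavelengths.items():
    theta = math.degrees(math.asin(lam / d))
    print(f"{color}: first maximum at {theta:.2f} degrees")
# Red lands noticeably further out than blue, so each maximum is
# smeared out into a little rainbow.
```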
Now, as a little encore, I would like to show you some patterns which occur at very specific combinations of focal length and focus distance. The aperture was in all cases wide open.

The top image appears when focusing at a short distance with the light source completely out of focus. In the center image the light source is brought slightly more into focus, and in the bottom image the light source is almost in focus. The upper two images show artifacts which I haven't encountered yet in normal footage shot with the camera; the bottom shot shows a familiar artifact: colorful "globes" surrounding a bright light source.
I'm not entirely sure what is causing this, but I suspect it is related to the fact that this is a three-sensor camera (or 3MOS as Panasonic likes to call it). Maybe the colorful patches of red, green and blue are internal reflections in the prism block which bounce off the lens element closest to the prism block, back towards the sensors. The distance between this lens element and the prism block would then determine whether the reflections are in focus by the time they reach the sensors.
Notice how the camera interprets a range of wavelengths as one solid color of either red, green or blue. Only where the patches overlap, as in the horizontal line to the right, is a mixed color shown, in this case yellow.
Another effect caused by the three-sensor design is the colorful fringing of the bokeh which can be seen at the widest apertures. When you look at the image of the wide open aperture at the start of this post, or in the previous post, you can see a blue edge on the upper right, a red edge on the upper left and a green edge at the bottom. Sometimes when filming street lights in the background, such subtle colored edges can be seen, as if the colors don't align perfectly there. This is the telltale sign of a three-sensor camera.

Sunday, November 1, 2015

Know your camera - the aperture of the iris

In the previous post I described how the sharpness of my video camera (Panasonic hc-x929/x920) varies as a function of its f-stop. The conclusion was that the sharpness was optimal at two different f-stop values (f2.8 and f5.6), with a local minimum at f4 where the sharpness is reduced.
This behaviour is odd for a lens, as I would expect just one f-number where the sharpness is optimal, with decreasing sharpness towards the extreme ends of the range. This motivated me to do another test to figure out how my iris changed as I changed the f-stop setting on the camera.
Have you noticed, e.g. in films, how out-of-focus streetlights in the background of a scene are often rendered as hexagons, octagons, circles or ellipses? This is because a lens projects each out-of-focus point as a surface in the shape of the aperture. As different lenses have different implementations of their iris, these "point sources" essentially show you the shape of the aperture as created by the iris in your lens. They only show up when the light source is essentially a point source, because then you see only one projection of your aperture. If the light comes from e.g. an entire building in the background, then each point of that building creates a projection of the aperture and all these projections are superimposed onto each other. This creates a blurred image of the building without the shape of the aperture being recognizable.
So one way of seeing the shape of the aperture is to take an out-of-focus image of a point source of light against a dark background. The smaller your light source is, the clearer the image of the aperture will be.

In order to see the shape of the aperture of my camera I took a halogen light and placed a piece of thin cardboard in front of it (at a safe distance; halogen lights can generate a lot of heat and radiation!). In the piece of cardboard I had pierced a small pinhole. Then I pointed my camera at the pinhole in the cardboard and aligned it on a straight line with the pinhole and the light, so I could see the brightest part of the light through the pinhole. Then I set the smallest f-number I could (by first setting a fast shutter speed; after all, it is a Panasonic camcorder, so no aperture priority mode here), switched to manual focus and chose a combination of zoom factor and focusing distance which would give me the largest image of my aperture. I also had to experiment a little with the distance between the camera and the cardboard to determine what would work best.
Then I took a series of pictures, each with a different f-stop setting (and shutter speed to get proper exposure). This is the series of shots I got:
f 1.7
f 2.0
f 2.4
f 2.8
f 3.6
f 4.0
f 4.8
f 5.6
f 6.8
f 8.0
f 9.6
f 11
f 14
f 16
What are we looking at:
  • Between f1.5 and f2.8 you can see how an 8-blade iris creates an aperture in the shape of an octagon which gets smaller as the f-number increases. (Also notice the diffraction rings at the outer edges of the aperture.)
  • Between f2.8 and f5.6 a piece of smoked glass is shifted in front of the aperture, covering a greater part of the aperture with increasing f-number. The iris, and therefore the aperture, does not change. Notice how the edges of the sheet of glass line up with the edges of the blades of the iris.
  • Between f5.6 and f11 the aperture is reduced further as the blades of the iris contract further.
  • From f14 the aperture has changed to a shape more closely resembling a diamond and reduces further in size.
The "piece of smoked glass" which gets shifted in front of the aperture must be a built-in ND filter. Why was it designed in this way?
As indicated in my previous post, each lens has a "sweet spot" in its aperture range where the sharpness of the image is optimal. Wide apertures soften the image due to spherical aberration and small apertures lead to diffraction diffusion. But this is a small-sensor camera. This means that the surface area onto which the image is projected is small compared to other cameras, and given the resolution of the image, the individual pixels on the sensor capturing the light must be very small. This implies that although the lens might be capable of projecting a particular part of the scene onto just one pixel, at fairly low f-numbers the aperture will already smear this light out over neighboring pixels due to diffraction, thereby reducing the sharpness of the image.
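A back-of-the-envelope calculation makes this concrete. The diameter of the diffraction (Airy) disk is roughly 2.44·λ·N. I don't know the exact pixel pitch of this sensor, so the value below is merely an assumed, plausible number for a small sensor.

```python
wavelength = 550e-9     # green light, middle of the visible spectrum
pixel_pitch = 1.6e-6    # assumed pixel size, illustrative only

for N in (1.5, 2.8, 4.0, 5.6, 8.0, 11.0):
    airy = 2.44 * wavelength * N    # Airy disk diameter in meters
    print(f"f{N}: Airy disk {airy * 1e6:.2f} um = {airy / pixel_pitch:.1f} pixels")
# Already around f2.8 the disk spans more than two of these pixels.
```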
If changing the f-number only led to a variation in aperture size, the range over which you could vary the f-number without sacrificing the sharpness of the image would be very small. Only the shutter speed would remain as a variable to control exposure.
To increase the useful range of f-stop settings, the movable ND filter is used. It allows you to control the amount of light passing through the aperture without having to change its size. The aperture now stays in its sweet spot between f-stops 2.8 and 5.6; that's a range of two stops. This implies the filter has a strength of ND4, i.e. only one-fourth of the total amount of light passes through it.
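The arithmetic behind that claim, for those who want to check it:

```python
import math

# Each stop multiplies the f-number by sqrt(2), so f2.8 -> f5.6 is two stops.
stops = math.log(5.6 / 2.8) / math.log(math.sqrt(2))   # 2.0
attenuation = 2 ** stops                                # 4.0
print(f"{stops:.0f} stops -> ND{attenuation:.0f}, passing 1/{attenuation:.0f} of the light")
```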
How does this explain the two distinct maxima in sharpness as a function of f-number? From the point where the ND filter is shifted in front of the aperture, you can consider the aperture to consist of essentially two apertures superimposed upon each other.
The first aperture has the shape and size of the physical aperture as determined by the iris blades. Only one-fourth of the total amount of light passes through it, i.e. the amount of light that remains when the ND filter is fully in place. Let's call this aperture 1. The second aperture consists of the area of the aperture, as determined by the iris blades, which is not covered by the ND filter. You could also call this the "virtual" aperture. Initially, at f2.8 or lower f-numbers, the remaining three-quarters of the light passes through this aperture. Let's call this aperture 2. The final image as projected on the sensor is the two images of these two apertures superimposed upon each other.
Between f2.8 and f5.6 the image from aperture 1 is constant in both brightness and sharpness. The image from aperture 2 decreases in brightness as the area not covered by the ND filter is reduced. And as aperture 2 shrinks, the image produced by it increasingly suffers from diffraction diffusion. When the ND filter has only just started to slide in front of aperture 1, aperture 2 still produces the brightest image, so the reduction in sharpness due to diffraction diffusion is clearly visible. However, as the ND filter shifts further, the contribution of aperture 2 to the total image brightness is gradually reduced to zero. Therefore its detrimental effect on the total image also becomes less visible and eventually vanishes completely. This leads to an initial reduction in sharpness which is restored once the ND filter has shifted completely into place and the "virtual" aperture 2 has closed completely. This effectively explains the dip in sharpness.
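To convince myself this mechanism really produces a dip, I put it into a toy model. This is my own crude simplification, not the camera's actual optics: aperture 1 contributes constant sharpness at a quarter of the light, while both the brightness share of aperture 2 and its sharpness (the latter assumed to fall off linearly) drop as the ND filter slides in.

```python
import numpy as np

coverage = np.linspace(0.0, 1.0, 11)   # fraction of the aperture covered by the ND filter

w1 = 0.25                               # light through the ND-covered part (ND4)
w2 = 0.75 * (1.0 - coverage)            # light through the shrinking uncovered part
sharp1 = 1.0                            # relative sharpness of aperture 1 (constant)
sharp2 = 1.0 - 0.8 * coverage           # aperture 2 blurs as it shrinks (assumed linear)

total = (w1 * sharp1 + w2 * sharp2) / (w1 + w2)
for c, s in zip(coverage, total):
    print(f"ND coverage {c:4.0%} -> relative sharpness {s:.2f}")
```

The printed values start at 1.0, sag in the middle and recover to 1.0 once the filter is fully in place, mirroring the measured dip around f4.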
The extension of the sweet spot of the lens by two stops comes at the price of a dip in sharpness during that extension. Increasing the strength of the ND filter could extend the range of the sweet spot even further, but the dip in sharpness would be even more pronounced, as the detrimental effect of virtual aperture 2 would remain visible down to smaller sizes of the virtual aperture due to its relatively greater brightness compared to that of physical aperture 1. Likewise, reducing the strength of the ND filter would reduce the depth of the dip in sharpness, but at the cost of reducing the extension of the sweet spot. I think this is an excellent demonstration of how the designers of this camera had to strike a balance between various design parameters. When it comes to designing an optical system there are no free lunches: you can alter a design parameter to enhance the performance in one area, but you will suffer in another.
As a thought experiment, let's see what would happen if you wanted to upgrade this camera to UHD (or 4K) resolution. If you wanted to keep the size, weight and cost essentially identical, the sensor size and the size of the lens elements would have to remain the same. The increase in resolution would require the number of effective pixels to be increased by a factor of four, or put differently, the diameter of each effective pixel would have to be halved. First, this would obviously lead to a reduction of the signal-to-noise ratio per pixel, assuming the increase in efficiency of a newer generation of sensors is less than a factor of four (which is a pretty safe assumption to make).
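The size of that first effect is easy to estimate if we assume the sensor is shot-noise limited; the photon count below is a made-up but plausible number.

```python
import math

# Shot-noise-limited SNR scales with the square root of the photons collected.
photons_hd = 4000                   # photons per pixel, assumed for the HD sensor
photons_uhd = photons_hd / 4        # half the pixel pitch -> quarter the area
snr_hd = photons_hd / math.sqrt(photons_hd)      # = sqrt(N)
snr_uhd = photons_uhd / math.sqrt(photons_uhd)
print(snr_hd / snr_uhd)             # 2.0: the per-pixel SNR is halved
```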
Second, assuming the lens itself would actually resolve the amount of detail required for this new generation of camera, any diffraction effects would become visible at wider apertures due to the smaller size of the pixels. To maintain an identical perceived sharpness you could no longer use the lowest end of the f-scale, as spherical aberration would be more clearly visible. The usability of the high end of the f-scale would also be reduced, as diffraction by the physical aperture would become visible at lower f-numbers than before. Furthermore, the strength of the ND filtering would have to be reduced in order to limit the diffraction caused by the virtual aperture, thereby reducing the extension of the sweet spot of the lens. In the end, you may very well end up with a camera which only allows you to vary the shutter speed if you want the sharpness of your images to match the resolution.
What ways are there out of this? Well, you could of course increase the sensor size. This would imply an increase in the size of the lens unit as well, and therefore a total increase in size, weight and of course cost. Let's try another, not so obvious solution: an increase in frame rate. A high shutter speed is only a problem when used in combination with a low frame rate, as it leads to a stroboscopic effect. Suppose we could vary the frame rate along with our shutter speed. A shutter speed of 1/250 at 120 frames per second would simply lead to buttery smooth, natural-looking motion. And as long as displays cannot show us 4K images at 120 fps (or, for those who prefer a more filmic look, at 30 fps), we might even consider artificially reducing the frame rate after the frames have been captured and deducing motion blur from the differences between frames. Now I have to admit, this option has some challenges in itself. Increasing the readout speed (and therefore the frame rate) of a sensor is anything but trivial. Encoding, storing and processing all these frames requires a lot of computing power, and even when the frame rate is artificially reduced to ease the encoding, transportation and storage requirements, the gains will probably be largely negated by the cost of creating artificial motion blur. You would at least end up with a camera that would eat batteries like there is no tomorrow, with a big exhaust fan to get rid of all the excess heat. I suspect the engineers would go with scenario one.
Well, at least we now have an explanation for the fact that there are two different f-stop values at which the sharpness is at its maximum, and along the way we have gained a bit of empathy for the engineers who make these machines and have to make the tough compromises.


Tuesday, October 27, 2015

Know your camera - sharpness versus f-stop

One of the important things to do when you start shooting is to get to know your camera. Know its strong points and its weaknesses; use the former, mask the latter. The camera I'm currently using is the Panasonic hc-x929, which is identical to the hc-x920. For reasons unknown to me the model number varies slightly depending on the region where it is sold, but they are technically identical. It has essentially the same lens and sensor unit as the ag-ac90 model, so much of what I am going to say in this post about my camera will also directly apply to those models. I will explain the type of tests I perform to assess my camera's properties so you may repeat them with your own camera, and I will of course show my results for this particular camera. If your camera happens to be similar, the results will probably also apply to yours.
One of the first things to assess is its sharpness as a function of its f-number. Every camera has a sweet spot in terms of its f-number where the lens performs best. In short, when selecting a low f-number (i.e. a wide open aperture) you are using the entire lens surface to capture the light, both the center and the edges. Most lenses suffer from spherical aberration: light passing through the edge of a lens is generally not focused to the same spot as light passing through the center, causing a blurred image. When a high f-number is used (i.e. a small aperture) you are using only the center part of the lens elements, but the narrow opening between the iris blades will lead to diffraction diffusion. If you don't take these effects into account when choosing your settings, your images may turn out somewhat blurry.
In photography, as a rule of thumb, it is generally assumed the optimal f-number is 1.5 to 2 stops above the lens's lowest f-number. However, with this camera you will find there is a twist to that rule of thumb, which I will demonstrate further down. Let's first describe the "test procedure" used.
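For reference, the rule of thumb is easy to turn into numbers, since each stop multiplies the f-number by the square root of two:

```python
import math

f_min = 1.5    # widest aperture of the x929's lens
for stops in (1.5, 2.0):
    print(f"{stops} stops up from f{f_min}: f{f_min * math.sqrt(2) ** stops:.1f}")
# Roughly f2.5 to f3.0 according to the rule of thumb.
```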
Select a test scene to shoot with some fine, high-contrast detail in the center of the image. (The sides of the image may suffer from unsharpness or other distortions which are not directly related to the f-number.) A test chart from which you can read off the actual sharpness as a number is nice, but not really necessary for this mere amateur. In the end we will not be shooting all our footage at just one f-stop setting anyway; it suffices to know roughly what range you want your f-stop to be in.
Put the camera on a tripod or a stable surface. Use manual exposure and shutter speed and leave all other settings at their defaults for now. Disable all image stabilizer functionality: it may use digital tricks or create artificial motion blur, both of which may throw off your measurements. Once the camera has obtained focus, switch to manual focus so you can be certain you don't get an out-of-focus shot because the camera suddenly decided to refocus. You will want to shoot a scene with plenty of light, so you can use both high shutter speeds with wide apertures (low f-numbers) and slow shutter speeds with small apertures (high f-numbers).
No Panasonic consumer camera will allow you to first set an aperture and then select a shutter speed to get correct exposure. Instead you will have to do the reverse: first select a shutter speed and then set the aperture to get a correct exposure. So start with the slowest shutter speed, i.e. 1/50 (or 1/60 if you're recording at 60 frames per second). Don't use any of the slower shutter speeds, as these require your camera to perform digital tricks which may affect your sharpness. As soon as you set the shutter speed, the aperture will be controlled automatically. Now the aperture can be switched to manual again and its value will be displayed on the LCD screen. Adjust the exposure when required and make a recording. Note the f-number used with each recording. There should preferably be enough light to start your measurement at a stop of f11 or higher, and the amount of light should be constant. With each recording the shutter speed can be increased a notch and the f-number will have to be decreased a notch to arrive at the same exposure, so by starting at f11 you can be certain you have covered the entire range of interest.
After having taken all shots you can upload them to your computer and examine them one by one. Now you should note that not all are equally sharp. Let's show a few samples I made, all blown up to 200 percent:
f1.5
f2.8

f4

f5.6

f9.6
Now I have to admit that when I recorded these shots I had one of the image parameters of the camera (confusingly called "sharpness") set to minus 5. Only later did I discover that this setting is sub-optimal. With the default setting of zero the differences between the shots would have been more pronounced, but for the purpose of this test it fortunately did not matter much. Hey, I make mistakes and describe my findings so you don't have to make the same ones.
Notice how f2.8 is sharper than both f1.5 and f4? This corresponds to our rule of thumb that the sharpest f-stop is about 1.5 to 2 stops away from the smallest f-number. However, do you also notice how f5.6 is sharper than f4? In fact, f5.6 appears to be as sharp as f2.8.
I made a whole series of shots at different f-numbers and if I were asked to roughly sketch the sharpness as a function of f-number I would come up with something like this:

Conclusion: for optimal sharpness one should try to pick an aperture near f2.8 or f5.6. Avoid f4 if sharpness is of crucial importance, as well as any of the apertures at the extreme ends of the range. Note that this applies specifically to the camera I use, i.e. the x920/x929 and ac90 from Panasonic. Other cameras may behave differently or have different optimal aperture settings.
After this discovery I performed another experiment which allowed me to more clearly see what is going on. In the follow-up post I will explain what is causing this odd behaviour.

Monday, October 26, 2015

Why I bother to write this blog

In this blog I want to summarize the various things I learn about a hobby of mine: making videos. Of course I could simply write them down for my own future reference, but if I take the effort of writing them down I might just as well share them with others. I have learned a lot from the material others have put on the web, so let's try to give something back.
Many people go through the trouble of making a video on how to shoot, edit or do other video-related things, but although this makes the information easy to consume, it doesn't make it easier to find, especially when you're looking for something specific. I have learned quite a bit from messages on bulletin boards and ill-maintained websites which have withstood the passing of time and ended up in Google's index. I hope this blog can help the occasional visitor in the same way that those sites have helped me.