Photography basics: Subject to background compression

Novice photographers usually believe that focal length (often just called "zoom" at that stage) is merely a measure of how close you are to the subject. A picture taken at 50 mm is expected to look the same as one taken at 100 mm, as long as you get close enough to make the subject appear the same size in the frame. And those photographers often spend the following years believing this as they take thousands of pictures, wondering why some lenses seem to take more appealing pictures of certain subjects, and why other lenses give better results for other kinds of pictures.

The use of a short focal length in this picture exaggerates the size of the shoe compared with the rest of the body, giving strength and dynamism to the image. Picture by Keia Eskuetan.

But it turns out that when you modify the focal length of the lens, you are not only modifying the apparent size of the subject, but also the relationship between the distances in the picture, and this has a great impact on the mood of the scene. With a wide angle lens, all distances tend to appear exaggerated, especially near the borders of the frame: even the tiniest distance along the frontal axis gets amplified. This can be a problem or a feature, depending on how you work with it. In a portrait it tends to exaggerate the facial features, which can be unpleasant when looking for beauty and perfection; but this enhanced perspective can also yield original compositions when used creatively, or when a more comical approach is desired. The cause is the lens's wider field of view: it captures more of the surroundings and "bends" the light so that everything fits in the frame.

Conversely, when using a telephoto lens, all distances along the lens axis get compressed. In the real world, an object 100 cm away clearly looks farther than one 50 cm away, at twice the distance. In the final picture this relation doesn't hold, and the farther one seems only slightly more distant than the nearer one. In portraiture, a telephoto usually yields flat faces with little depth, which is useful, for example, to hide a prominent jaw or nose.
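A rough sketch of why this happens: to a first approximation, the apparent size of an object is inversely proportional to its distance from the camera, so the same 0.5 m gap matters much more when shooting from up close than from far away. The distances below are hypothetical, chosen only to illustrate the effect:

```python
def apparent_size_ratio(near_m, far_m):
    """Relative apparent size of two identical objects: to a first
    approximation, apparent size is inversely proportional to distance."""
    return far_m / near_m

# Wide angle, shooting from 0.5 m: shoe at 0.5 m, knee at 1.0 m behind it.
print(apparent_size_ratio(0.5, 1.0))  # 2.0 -- the shoe looks twice as big

# Telephoto, shooting from 5 m: the same 0.5 m gap almost vanishes.
print(apparent_size_ratio(5.0, 5.5))  # ~1.1 -- both look nearly equal
```

Getting close and using a short focal length, or stepping back and zooming in, frames the subject the same size but produces these two very different distance relationships.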

Como buitres al acecho
In this example, the "doves" are separated by a distance similar to the one between the shoe and the knee in the previous picture. Here both distances seem almost equal, while in the previous picture the gap seemed much bigger.

This effect is what is known as the subject-to-background compression of focal length. It is the reason a wide angle lens is usually preferred for landscape photography: not only does it allow a wider field of view, capturing more of the scene, but it also increases the apparent distances, giving the landscape volume. This is one of the main differences between a good picture and the typical one you get with your phone in a rush: the latter seems flat, boring and artificial. For the same reason, a moderate telephoto (usually between 70 and 125 mm) is preferred for portraiture. It requires standing farther from the model, which can sometimes feel "cold", but it avoids giving too much prominence to facial defects. A shorter focal length tends to produce long noses, big ears, or an exaggeration of the distances in the face, making it seem elongated; longer focal lengths tend to produce flat, boring portraits without personality. Still, depending on the face, stepping outside the standard focal lengths can sometimes be a good option.

An Instagram video that became popular recently shows why the camera seems to make us fatter, and it is related to this same topic. As can be seen in the video, as the focal length increases, the boy's face seems to get bigger. In fact, there is no real difference in the size of the face itself. The rear parts of the face, which are clearly distinguished with a short focal length, seem to move "closer" to the front of the face as we increase the zoom, making it more difficult to tell, for example, where the face ends and the neck begins. The effect is also more noticeable because all the pictures are compared in succession, giving the impression that the short-focal-length picture is "thinner" and the long-focal-length one "fatter" than they would look in isolation. The most appealing picture, in fact, would be one of the intermediate ones.


How the face proportions are modified as the focal length increases. Gif by Dan Vojtech.

For more information about this topic, and a graphical example of this effect, I recommend reading the article Exploring How Focal Length Affects Images, by Andrew Childress, where a wider and more complete explanation, with example pictures, can be found. So, the next time you want to take a picture, stop for a second and think: how do I want the background to relate to the foreground? How compressed do I want the distances along the lens axis to appear? And once you answer those questions, choose the right focal length, even if it means having to move a little. Remember: you should always move to adapt your position to the chosen focal length, and never adapt the focal length to the position you have already chosen for taking the picture.


Photography Basics: Composition patterns (II)

In the previous part of this topic we saw some of the basic rules for composing a picture. Composition, in photography, is the language that we use to tell the story we want to convey, the syntax that we use to join our ideas together. Those rules were the most basic "phrases" that we can use to transmit meaning to our viewers.

It is clear that many different pictures can be taken of the same scene. Some of those pictures will differ only in their technical parameters, but most of them will use different approaches, framings, and compositions to capture what is seen. The difference between a standard picture and an appealing one is the use of an attractive composition. The rules we saw in the previous part were the basic ones, those that will turn a bad picture into a decent one. Of course, they are not the only rules we have. History, through art, has shown us many other patterns that work very well to attract the attention of the spectator. We are not bound to use them if we don't want to (perhaps we don't like the effect that some composition provides, or perhaps we simply consider that the rule doesn't apply well to a specific picture). The point is that, having those rules in our toolbox, we will be able to use them whenever we consider that they will improve the result. When trying to compose a picture, it is always useful to have some references to start from. Following them step by step may not be the best choice, but having them as a "map" will allow us to expand our creativity as photographers.



Frame within a frame:

One of the things that tends to catch the eye is a picture inside a picture. This can be literal, of course: taking a picture of a scene that already has a picture in it. But it also works very well in a metaphorical way. One way to achieve this is to surround the scene with a frame, natural or artificial. This frame can be anything: some trees or branches, a wall (or a hole in it), a space left by a group of people… Just surrounding the scene with something will increase its attractiveness.

Different approaches are possible, and many of them are popular. For example, you can take a picture of a landscape from a window, making sure the window frame is not captured; this yields a clean picture. The other option is to close the window and shoot the landscape through it (making sure no undesired reflections appear on the glass), so that the window and part of the wall are included. This way the window opening works as a frame, enclosing the landscape and also giving the picture a context.

A different example is a picture of an open door from inside a church (or any other dimly lit building). Adjusting the exposure for the inside illumination will yield a good picture of the interior with the outside scene burnt out. Exposing for the outside opens new possibilities: the inside will be dark, but the door silhouette will act as a frame for the outside scene, which will appear clear and sharp.

This relates to the topic of negative space. Negative space occurs when we transmit our idea not with the elements present in the picture, but with the gap they leave, an absence. The mind is able to build a concept from the details that are not there. This is an extreme case of the framing rule, where the main elements are used as a frame for a concept or idea that is absent. For example, when photographing a crowd from above, a gap left by the people may have a recognizable shape, e.g. the silhouette of a person. This would create a message (the person) inside the main message (the crowd) through the absence of some people.

El señor Don Gato

In this case the picture of the cat gains appeal from the intense red leaves that surround it, acting as a frame. The "frame" doesn't need to be perfect in order to be recognizable.


Direction of movement:

When taking a picture of a subject in movement, it is a good idea to leave some space on the side it is moving toward. This rule works well in combination with the rule of thirds. For example, if a car is moving from the left to the right side of the picture, it is usually better to place the car on the left third line and leave the right third of the picture empty. This transmits the idea that the car is moving into that space. Even if the car is completely frozen in our picture (fast shutter speeds), the idea of movement is there, because we can identify its front and back. Inverting this pattern would be a good idea if we wanted to transmit the idea that the car is moving backwards.

This is important even with static subjects. For example, when taking a portrait or a picture of a person, it is a good idea to leave some space in the direction s/he is looking. This way we transmit the idea that there is something interesting in that direction. For a stronger effect, you can align the eye with one of the thirds intersections.

This rule can be broken when the composition requires it. For example, leaving space behind a moving person can transmit the concept of "leaving", of abandoning something behind. Not leaving space in the direction the person is looking, or enclosing the person tightly in the frame, can transmit stress, paranoia or oppression. It is a good resource for communicating emotions.

This rule can also be used in reverse, to transmit a message in an ambiguous scene. Imagine a picture of one person and two objects. The person is looking toward them, and there is no clue in the scene to show which of the two the person is looking at. This creates an ambiguity that can be used to evoke mystery. But we might also want a subtle indication that makes the message clear. In that case, allowing more space around one of the objects than the other tells the brain which one is more important, and it will be inclined to focus on it. The additional space needs to be noticeable, but not necessarily exaggerated.


As a frontal portrait, this picture doesn't imply any kind of movement. Still, providing air in the direction she is looking gives a calm mood and some context, as we can see part of what she is watching. Not providing this space would have created a more dramatic mood, implying a narrower space.



Closure:

As we saw with negative space, the brain has the ability to fill gaps and create concepts from things that are not there. Closure is a more moderate example of this ability. In negative space we used existing objects to create a nonexistent concept; in closure we use the existing objects in the picture to create a shape or concept. The difference is that the created shape is not complete, it has gaps, and our brain is the one that fills them.

Returning to our crowd-from-above example, imagine that now the people are forming the shape of a man. In the former example it was the gap that made the silhouette; now it is the people who form the shape. Closure happens when some of the people disappear and the shape of the man is still visible, but with some spaces in its outline. In this case the brain will close the gaps and we will still notice the man.

The example is a little forced, but this concept appears in many pictures in a subtle way. The shape doesn't need to be complex: a circle or a square will work. And it can be composed of any items in the picture. So perhaps, when photographing a man in the middle of the street, a circle can be formed by the shadow he casts on the ground, a tree and a nearby building. As long as the three elements have similar color, luminosity or contrast, the brain will try to fill the gaps and create a shape. In this case the shape also works as a frame (as in the first rule), and the composition can place it on any of the thirds, making interesting use of several rules at the same time.

In the forest - Reprise

In this example, the trees, the dark sky and the dimly lit ground all merge together into a circular shape. The brain automatically recomposes the shape for us. Here the shape is also used as a frame, making use of other composition rules to give the picture a mysterious mood.


Of course, these are only a few examples of many more. But knowing them will allow you to take better pictures, to focus more on composition while working, and to use them as building blocks for composing more complex arrangements of elements. The Internet is full of many more examples that are worth knowing, in case you are curious.

Photography Basics: Composition patterns (I)

Technique is not the only thing that matters in achieving a good picture. Although a clear and sharp image is usually necessary, it is not all that is needed. Taking a picture is like telling a story: there is something behind it that we want to transmit. And as in literature, knowing the language well is necessary to transmit it clearly and passionately, avoiding boring essays. The same principle applies in photography.

Knowing the main rules that our eyes and brain follow allows us to create more appealing images. Knowing how our eyes move and what our mind expects is the way to shape our compositions into a dynamic image. Of course, some people claim that those rules are made to be broken. Those people can be classified into two groups: experienced people who know the rules and have practiced them until assimilation, and those who don't want to learn them and just shoot wildly. The first group use their knowledge to increase the quality of their pictures, and when they break a rule they know exactly how and why they are doing it. The latter cannot take any such decision, and even if it's true that they sometimes achieve good pictures, it is usually due to a combination of good luck and a good eye (and even if they don't know the rules, many times they apply them without thinking, as can be seen if the picture is analyzed).

This is why it is important to know the rules and practice them until the brain assimilates them as something natural. Knowing them will allow us to get better pictures, and it also puts us in a position to decide when to break one of them for a better result. If you know a rule you can "forget" it at any time; the opposite is not usually true (at least within the reasonable time-span of a photographic session).


Rule of the thirds:

This is the most popular and common rule in photography. It states that if we divide both sides of the frame into three equal parts, the most attractive points of interest of the picture lie on the lines that divide the frame and on their intersections. This rule is so important that many cameras include optional guides that show those lines, which can be activated in Live View mode.

This rule is not to be interpreted strictly, but as a guide. It is not necessary to place all the important parts of the scene on the thirds, or to force the composition onto them. Sometimes it works with just most of the elements following the rule, or perhaps only the most important one. On the other hand, it is not necessary for the elements to sit exactly on the lines; they can be close enough for the eye to be comfortable with them. Of course, the closer the better, and if they drift away, it is better if the displacement is toward the center of the image, as they will then be closer to the golden ratio, which is also attractive to the eye.
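To make the geometry concrete, here is a minimal sketch that computes the third lines and their four intersections for a frame of given pixel dimensions (the 6000×4000 frame is just an illustrative example):

```python
def thirds_grid(width, height):
    """Return the vertical lines, horizontal lines, and the four
    intersections ("power points") of the rule-of-thirds grid."""
    xs = [width / 3, 2 * width / 3]
    ys = [height / 3, 2 * height / 3]
    points = [(x, y) for x in xs for y in ys]
    return xs, ys, points

# A typical 24-megapixel frame:
xs, ys, pts = thirds_grid(6000, 4000)
print(xs)   # [2000.0, 4000.0] -- the two vertical third lines
print(pts)  # the four intersections where subjects sit most comfortably
```

The camera's Live View overlay draws exactly these two pairs of lines over the frame.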

This rule is the reason that, when taking a landscape picture, the horizon should be aligned with the upper or lower third of the picture, not the center. Choosing between them is a matter of which element is more important: the land or the sky. If some other element is present, like a person or a building, it can be aligned with one of the lateral thirds to increase the strength and appeal of the image.

This rule can be broken when constraining the composition to these points makes it lose an important element. Also, sometimes another kind of composition works better for a specific scene (see the Symmetry example for one of those).

An easy way to see the thirds is to use the crop tool in Photoshop. In the upper menu you can select View > Rule of Thirds. The lines and intersections are then drawn, and you can adjust a crop of the picture to comply with this rule.

Holy Week II

This picture of the Holy Week in my city follows the rule of thirds. The girl in the front is aligned with the right vertical third line (specifically, her right eye sits at the intersection of the right vertical third line and the upper third line). The woman far in the back is aligned with the left third line. All the candles are contained in the lower third of the picture, never crossing into the central third. Finally, the woman in the middle is aligned with the center of the frame (specifically her mouth). All this ordering transmits a mood of order and clarity.


Balance:

When we take a picture, we can assign each element a weight, according to its apparent size and its subjective importance in the scene. Big and important elements are usually considered to have more weight. This rule states that, when possible, we should make our picture balanced, distributing the elements in the frame in a way that compensates their weights. For example, if we have just a beach ball in the sand, we can put it on a thirds intersection if we have something else around to show. If we don't, a good spot is the center, as this keeps the picture balanced. Placing the ball on one side with nothing else in the frame would unbalance the frame, making the eye give more importance to that side and forget the other.

If we have two elements, we should place them on different sides, so that their weights compensate each other. If we place both on the same side, the former effect happens even more noticeably. For more than two elements, their locations should be the ones that best compensate their relative weights, so every part of the image gets approximately the same interest from the viewer.

In this case, the lever rule applies: the balancing effect of an element grows with its distance from the pivot. If we have two elements of the same weight, they should be placed symmetrically around some point (usually the center of the picture). When one of the elements weighs more than the other, the heavier one should be placed closer to that point of symmetry than the lighter one. The extra distance given to the small element gives it more compensating power.
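The lever analogy can be sketched numerically. This toy model (the weights, offsets and tolerance are arbitrary illustration values, not any standard) treats each element as a weight at a horizontal offset from the center and checks whether the moments roughly cancel:

```python
def is_balanced(elements, tol=0.05):
    """elements: list of (weight, horizontal_offset_from_center) pairs.
    The frame feels balanced when the moments (weight * offset) of the
    elements roughly cancel out, as on a lever or see-saw."""
    moment = sum(w * x for w, x in elements)
    scale = sum(w * abs(x) for w, x in elements) or 1.0
    return abs(moment) / scale <= tol

# A heavy subject near the center balanced by a light one farther out:
print(is_balanced([(3.0, -0.2), (1.0, 0.6)]))  # True:  3*0.2 == 1*0.6
# Both elements on the same side: the frame tips over.
print(is_balanced([(3.0, 0.5), (1.0, 0.5)]))   # False
```

The first case is exactly the heavier-element-closer-to-center arrangement described above: the small element's extra distance compensates for its lower weight.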

Mother and daughter

The force of this picture resides in the close perspective. The mother and the child are opposed in the frame, each occupying one side of the picture and leaving the center free. As the child is smaller, the picture is taken closer to her in order to increase her apparent size. This way both figures get their weights compensated.

Repeating patterns:

When composing a picture, if we arrange the objects to form a repeating pattern, the eye will be more attracted to it than to the isolated objects. If arranging them is outside our control (surely you won't be able to re-arrange the pyramids of Giza, and if you can I would like to know how), you can still take advantage of this rule. Sometimes positioning the camera at a specific angle can align the objects into a pattern (e.g. a line, a triangle, or perhaps an amorphous cluster with some recognizable repetition). In any case, grouping similar objects to increase order is a good way to captivate the brain.

A special case of this rule occurs when the pattern extends beyond our frame. The brain tends to compensate by imagining that the pattern continues outside the picture, forming a larger figure than the one seen. For example, to make a crowd of people look larger, just align them with one of the frame borders. The brain will happily assume that there are more people outside that weren't captured. The same applies to buildings, cars or any other object: if a repeating pattern approaches the border, the brain will try to extend it outside.

The Fluor Rainbow

The picture has two elements to catch the viewer's attention. One is the color; the other is the repetition of the pattern with slight variations between the bottles. Having the same element repeated catches the attention and increases the interest, while the variations in color and volume of the liquids add enough difference to avoid becoming boring. If the frame had been cropped so that half of each of the two lateral bottles were out of the picture, the brain would have interpreted it as a longer succession of vials continuing outside the frame.


Symmetry:

This is perhaps the golden rule, the one most people know because it is intuitive: arrange the main elements of the picture in a symmetric way, using as many symmetry elements as possible. If the objects are symmetric around the center of the picture, good. If they are also symmetric around a plane, better. If you can find several symmetry planes in the same picture, the brain will probably be ecstatic. The more elements you can arrange, the better the picture will turn out.

But although this rule is very popular, it is only seldom used, because it has a dark side. In the same way that the brain is delighted when it sees symmetry, it is repulsed when it sees an image that is almost symmetric but not perfectly so. Even the slightest imperfection in the symmetry will make the brain reject the picture. This makes the rule easy to understand but very difficult to apply: it usually requires good planning and often some retouching and adjusting afterwards. That is why it is only followed when the scene really calls for it.

As an example, think of a landscape. As stated before, the rule of thirds is the one usually applied. But imagine that you are photographing a very high mountain reaching into the sky, perfectly reflected in a lake whose surface is completely still. This is a case where going for symmetry is beneficial, as the horizon makes a perfect plane of symmetry for the image. Care should be taken to keep the horizon perfectly horizontal and to fit both the mountain and its reflection in the frame; if either of these is not achieved, the brain will refuse to accept the image. On the other hand, if we can also arrange the mountain to be mostly symmetrical around a vertical plane that cuts it in half, the image will gain even more strength.


Apart from the gaze, the interest of this picture resides in the symmetry between the two lateral sides. The central axis divides the picture into two almost identical halves. If the picture had a little more space on either side it wouldn't work. Also, the picture was taken from a frontal perspective: any lateral movement changing the perspective, even slightly, would have destroyed the symmetry and spoiled the composition.


Those are the most important composition rules. The best way to learn them is to practice: take pictures applying them, and analyze your own pictures and many others to learn why they do or don't work. In the next post I will talk about some other rules, perhaps not as important, but that can make the difference when used at the right moment.

Do you have anything to add to those rules? If so, please comment so we can increase the value of this explanation.

Photography Basics: How to Expose Correctly (II)

In the previous post in Photography Basics we saw the procedure to expose a picture correctly, especially for subjects that don't reflect the expected amount of light. But a couple of important, more advanced topics were skipped. In this post I'm going to talk about one of them which, without being essential for taking pictures, is really useful when a good exposure is desired but the scene contains different subjects that reflect light in different ways.

A measurement by any other method… would not measure as well

As we previously saw, the photometer is the tool that measures the ambient light in order to compute a correct exposure value. When the scene's illumination is homogeneous and there are no problematic subjects, it does its job well; but when different parts of the scene are lit differently, or contain different subjects (yeah… sometimes we need to photograph that black cat walking on the snow and all our plans go awry), measuring with the photometer gets complicated. To solve this, photometers usually provide four different modes of measuring light. Each mode measures a different part of the scene and treats the lighting values in a different way, so choosing the correct one helps obtain a measurement closest to the optimal value. Each camera may have different modes, but most of them are based on the following:

Evaluative metering: This is the standard method of measuring light, the most used, and in some cheap cameras the only one available. It works in a simple way: it measures the amount of light for every pixel of the image and calculates the mean of the values, obtaining the average lighting of the whole scene. This mode works well in scenes with very homogeneous illumination, where an object or background fills most of the frame, or where the different areas of the picture all reflect a similar amount of light. It works well for general pictures (for example a landscape), but fails when two important points of the picture reflect very different amounts of light.

Partial metering: This works in the same way as evaluative metering, but only averages the central 6-9% of the frame; all other information is discarded. As a result, the measurement is more precise for the subject situated in that zone of the picture. If we wanted to measure the illumination of our (now famous) black cat, evaluative metering would average its illumination with the background, probably reporting more light than the true value. Partial metering evaluates only the cat, obtaining a more precise measurement of the light the cat is reflecting toward us.

But what if the cat is not in the center of the frame? In this case, we can put the cat in the center, adjust the parameters until the photometer shows the desired value and, after that, recompose and move the frame until the cat is where we want it. Since the light the cat reflects doesn't change when we move our camera, the adjusted values remain valid.

Yeah, but… what if I'm using a semiautomatic mode? In this case, the camera will measure automatically at the center when we take the picture. To avoid this, a function called Exposure Lock exists (sometimes called AE-LOCK, AE or simply *). When we activate it, the camera measures the light in the central area, stores the resulting values and uses them for the next picture taken. We can put the black cat in the center of our frame, adjust the parameters, activate AE-LOCK so the camera stores the measurement, and finally recompose and take our picture. Even if the cat is no longer in the center, the exposure will have been measured on it. This mode is also useful in backlight photography.

Spot metering: Works exactly like partial metering, but only on the central 1-4% of the frame. As the area is smaller, the measurement is more precise, but also more susceptible to variation with small movements of the camera or the target. It works the same way as partial metering, but provides a better exposure value when the spot we are measuring is very small or surrounded by an environment with very different lighting.

Center-weighted average metering: A hybrid between the evaluative and partial modes. It measures the light on all pixels in the frame and averages them, but gives more importance to the pixels close to the central point. It works well when it is necessary to expose a difficult subject well while the background is also important and must be kept at a reasonable value.
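The four modes differ only in which pixels they average and with what weight. As a rough sketch, here they are over a tiny luminance grid; the window sizes and the weighting function are deliberate simplifications for illustration, not any camera's actual algorithm:

```python
def meter(pixels, mode="evaluative"):
    """Toy photometer over an n x n grid of luminance values (0-255).
    The modes are simplified stand-ins for the camera's real algorithms."""
    n = len(pixels)
    c = n // 2
    if mode == "evaluative":                 # plain average of every pixel
        vals = [v for row in pixels for v in row]
    elif mode in ("partial", "spot"):        # average a central window only
        r = 1 if mode == "partial" else 0    # toy-sized windows
        vals = [pixels[i][j]
                for i in range(c - r, c + r + 1)
                for j in range(c - r, c + r + 1)]
    else:                                    # center-weighted average
        total = weight_sum = 0.0
        for i in range(n):
            for j in range(n):
                w = 1.0 / (1 + abs(i - c) + abs(j - c))  # falls off from center
                total += pixels[i][j] * w
                weight_sum += w
        return total / weight_sum
    return sum(vals) / len(vals)

# Bright snow (220) everywhere, a dark cat (30) in the middle of the frame:
scene = [[220] * 5 for _ in range(5)]
scene[2][2] = 30
print(meter(scene, "evaluative"))  # 212.4 -- dominated by the snow
print(meter(scene, "spot"))        # 30.0  -- reads only the cat
print(meter(scene, "center"))      # somewhere in between
```

The numbers show the trade-off the text describes: evaluative is pulled almost entirely toward the snow, spot reads only the cat, and center-weighted lands between the two.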

So, in the initial example of our black cat walking on the snow, two different approaches can be taken:

If the dynamic range of the scene doesn't exceed the dynamic range of our camera (which means: we can expose the snow well without the cat turning pure black, or expose the cat well without the snow turning pure white), then partial/spot is the recommended metering method. You can meter one of the subjects, adjust the exposure and recompose. The exposure is a single value for the overall picture, so if the snow is well exposed for a set of parameters, the cat will also be correctly exposed. The measurement the photometer gives for each of them will differ, but the parameters that bring the mark to the correct point will be the same whichever subject you choose.

But if the dynamic range of the scene is greater than the dynamic range of the camera, we won't be able to expose both subjects correctly in the same picture (without advanced techniques such as HDR). This means we can use partial/spot mode to expose the snow correctly (but our cat will be too dark), or use it to expose the cat correctly (but the snow will be burnt out, pure white without detail). Or we can use center-weighted mode to expose the cat not-so-well-but-not-terribly. In this last case, the reading from the photometer will correspond to the light the cat reflects, plus a small contribution from the snow. This leads to a picture where the snow is slightly grey but not too much, and the cat slightly darker than necessary, but not excessively so: we trade a little quality on the cat for a little extra quality on the snow.

Does your camera have any other methods? Or perhaps you know of other kinds of scenes where these methods shine. If you want to add something, the comments are open for your collaboration!

Photography Basics: How to Expose Correctly (I)

In the previous chapters of Photography Basics we saw the three pillars that affect the exposure: aperture (I and II), sensor sensitivity and shutter speed. We also saw the effect that changing each of those parameters has on the style of the picture. But if you recall the examples given in all those cases, they all started by assuming that some correct exposure values had already been chosen, and we just modified them to achieve the desired effect. How to obtain those initial values was never explained. That's what this post is about.

For this post I’m assuming that all work is done in M mode (full manual), which means the photographer is in charge of everything. In all other modes, semiautomatic and automatic, part of this work is done by the camera, with all the limitations that implies. Only in M mode is the photographer completely responsible for achieving the correct exposure for the picture they want to take.

The light meter

Every modern camera (and by modern I include even film cameras from the 80s and earlier) has a device called a light meter (or photometer). As the name implies, its function is to measure the amount of light that enters the camera through the lens. In modern cameras an array of photodiodes produces a tiny electric current that the camera can measure: when the diodes are illuminated, the current changes in a predictable way, so the camera can calculate the amount of ambient light and show it to us on the dial. In old analog cameras, the light falling on a photocell changed an electric current that deflected a needle over a graduated scale.

Different exposure values set in a light meter. By GRPH3B18 via Wikipedia.

The dial the camera shows is similar to the one depicted on the right. It has a scale that goes from -2 or -3 to +2 or +3 (depending on the dynamic range of the camera, i.e. the number of stops of illumination it can distinguish between pure black and pure white). Under it, a mark moves around depending on the illumination.

Cameras, being as dumb as any other automatic device, need a standard to measure against. Without it the camera wouldn’t know whether a specific amount of light is right for a situation. The reference the camera uses is called grey-18, a grey that reflects 18% of the light that falls on it, which is very similar to what we would call a “50% brightness grey” in any software (not to be confused with 50 Shades of Grey; cameras and handcuffs have little in common). So what the camera does is evaluate the amount of light entering through the lens and calculate, using the parameters currently set in the camera, how the exposure of the picture would compare with a grey-18. If the value is 0, it means the amount of light that will be recorded leads to a correct exposure for a subject that reflects 18% of the light. A value of +1 means the camera will record double the light that is needed, and -2 means the camera will record only one fourth (2^-2 = 1/4) of the necessary amount.
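The mark-to-light relationship is just a power of two, which can be sketched in a couple of lines of Python (the function name is mine, not camera terminology):

```python
def light_ratio(ev_mark: float) -> float:
    """Ratio between the light the camera will record and the light
    needed for a correct grey-18 exposure, given the meter mark:
    0 = correct, +1 = double, -2 = one fourth."""
    return 2.0 ** ev_mark

print(light_ratio(0))   # 1.0  -> exactly the light needed
print(light_ratio(1))   # 2.0  -> double the light needed
print(light_ratio(-2))  # 0.25 -> one fourth of the light needed
```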

The central card is grey-18; metering on it and aligning the mark to 0 will yield a correct exposure.

If the picture is taken with the mark on the “+” side, it will be overexposed: bright and with washed-out colors, as we are capturing more light than necessary. If it is taken with the mark on the “-“ side, it will be underexposed, resulting in a dark and noisy image, because we are recording less light than necessary. An example of both situations can be seen in the picture below. So if we are photographing a subject close to grey-18 and the mark is at +1 (double the light needed), we have to reduce the exposure by one stop, using ISO, shutter speed or aperture, to reach a correct exposure (mark on 0). If the mark is at -2, we have to increase the exposure by two stops, using any combination of speed, ISO and aperture, to reach the 0 mark. In summary, exposing correctly for a grey-18 subject is as simple as bringing the mark to the zero point.

Left: underexposed picture, where the details are lost in the dark areas. Center: correctly exposed picture, all details can be seen. Right: overexposed picture, the brighter parts don’t retain the details and colors appear washed out.

But… hey! My picture is not grey…

And now the complications begin…

I tried to make it simple and easy, but you just couldn’t be satisfied with photographing grey walls, and grey skies, and even grey cats… You also want to capture the bright colors of the rainbow, and your family on that trip to the snow, and even that black cat crossing the black asphalt on a dark night in that mysterious black town where your car engine decided to go on strike without previous warning (apart from those little squeaky sounds it had been making for the last 300 km).

It happens that the world is not grey, but the camera has no way to know that. The light meter can only compare against that reference, so it is our responsibility to correct it. For example: we travel with our family to that fancy snowy mountain where people ski happily on holiday. We take the camera from the bag, move the dials until the mark sits on zero, ask everybody to smile, ask again hoping that this time they will listen to us… and finally push the shutter button. But the picture we obtain is not as bright as the snow really is; instead we get dim grey snow and an overall dark image. That’s because snow reflects more than 18% of the light it receives (usually around 30%, depending on the kind of snow and how the light hits it). The camera assumed the snow reflected only 18% and optimized the parameters for that amount, so it let in less light than necessary and the snow is rendered as if it were grey-18. The way to compensate is to expose not for the zero mark but for +1, which in “camera language” means: capture double the light you would consider normal. As 30% is approximately double of 18%, this adjustment works much better. If the picture is still not bright enough, you can keep increasing the exposure until you are satisfied with the result, always taking care that the snow doesn’t reach pure white, where it loses all detail.
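The +1 figure for snow can actually be derived from the reflectances themselves. A small sketch under the simplified reflectance model used above (the function name and the sample reflectances are illustrative, not measured values):

```python
import math

def compensation_stops(reflectance: float, reference: float = 0.18) -> float:
    """Exposure compensation, in stops, for a subject with the given
    reflectance, relative to the grey-18 reference the meter assumes."""
    return math.log2(reflectance / reference)

print(round(compensation_stops(0.30), 2))  # snow at ~30%: +0.74, close to +1
print(round(compensation_stops(0.10), 2))  # black cat at ~10%: -0.85, close to -1
```

The results land near the +1 and -1 marks recommended in the text; the small differences are why the final adjustment is always done by eye.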

Left: the black bottle is exposed assuming that 0 is the correct point. We capture more light than necessary and the dark tones become lighter than they should. Right: correctly exposed picture using the -2 mark.

The opposite is also true. We are walking in the park and suddenly a black cat appears. Fascinated by it (I don’t know why, but our example photographer gets fascinated by such a cat), he takes the camera, exposes to zero and pushes the shutter button. The picture we get is overexposed, brighter than necessary, and our cat appears washed out and greyish instead of its real black color. What happens is that the cat reflects less than 18% of the incident light (around 8-10%), but the camera let light reach the sensor until it matched what it considered grey-18 would need, which is too much light in this situation. In this case, as 10% is approximately half of 18%, we should adjust our exposure to the -1 mark to get the appropriate result.

The best way to learn the correct values for different situations is to practice and take loads of pictures; experience will lead you to the optimal values for your camera. Still, approximate values can be deduced by observation. The grey-18 color is easy to memorize, and anything darker than it gets a negative compensation. Black animals usually get -1; dark-skinned people usually need -2/3, or even -1, and down to -1 1/3 for really dark skin. Asphalt ranges from -1 to -2, and some black fabrics can easily reach -2. White skin usually exposes correctly between +1/3 and +2/3, snow requires +1 to +2, and a white building wall or the clouds can definitely reach +2. You will eventually memorize all these values, but meanwhile they can be estimated just by looking at the scene and comparing it with a mental grey-18 reference.


That’s too complicated… I’m sticking to semiautomatic.

Even in the semiautomatic modes this problem arises. In any semiautomatic mode it is the camera that decides at least one of the exposure parameters, and it will adjust that parameter assuming that your subject is grey-18. So a snow picture in a semiautomatic mode will still yield a grey, underexposed image (except in special “snow” modes, which are programmed to compensate for this particular situation, but not for others).

The way to solve this problem is called exposure compensation. When you activate it, you place a mark on some point of the exposure scale, and the camera adjusts the parameters to reach that point instead of the zero mark. So if you want to take a picture in the snow, you set the exposure compensation to +1, and the camera will automatically aim for that mark instead of zero.

If you switched to semiautomatic to avoid learning the values for the different situations, you should return to M mode now, as you’ll have to learn them anyway to get good exposures. This doesn’t mean semiautomatic modes aren’t appropriate for other situations, but you will still need to know how to expose correctly.

After all this explanation you’re ready to go outside (or stay inside if the weather is bad, you have my permission) and start practicing how to expose correctly. In the next post some more advanced techniques will be explained that involve controlling how the camera meters the light. But understanding those concepts requires understanding how to expose correctly first, and the most fun way to learn it is by taking pictures.

Photography Basics: Shutter Speed

In our explanation of how to correctly expose a picture, we have previously talked about the importance of aperture (I and II) and the sensitivity of the sensor to light, and how they affect the artistic properties of our pictures. Now it’s time to talk about the shutter speed, the third and final pillar of exposure.

The exposure time is the amount of time our sensor is receiving light. A longer exposure time allows a greater amount of light to reach the sensor. If the exposure time is too long, we get an overexposed picture where everything appears brighter and paler than it should, probably losing the details in the highlights. On the contrary, if the exposure is too short, we get an underexposed picture where everything appears darker than it should, losing the details in the darker parts. To control the exposure time we change the shutter speed: slower shutter speeds increase the exposure time, while faster shutter speeds decrease it.

Shutter speeds follow the same principle as the ISO value: doubling the time doubles the amount of light, while halving the time halves it. Often the times are so short that writing them as decimal numbers (for example 0.01 seconds) is not practical, so times shorter than about 0.25 seconds are usually written as fractions: 0.1 seconds as 1/10, 0.02 seconds as 1/50, and so on. Keep in mind that with these fractions, halving the light means halving the time, which doubles the denominator: one stop down from 1/50 is 1/100, not 1/25.

We have taken a correctly exposed picture of a flowing river at ISO 400, f/4 and 1/200 s. The overall look is correct, but the water seems static and unnatural. To obtain a better result we want to slow down the shutter in order to capture the motion of the water in a soft way. We reduce the ISO from 400 to 100 (2 stops) and close the aperture to f/8 (2 stops). The total reduction is 4 stops, which means we have reduced the light 16 times (2^4; we halved it 4 times). To compensate, we need to lengthen the exposure time by 4 stops, from 1/200 to 1/13 seconds, doubling the time at each step (1/200 → 1/100 → 1/50 → 1/25 → 1/13). For a camera configured in thirds mode that means 12 “clicks” (4·3 = 12).

Working with the camera is really easy, though: moving the corresponding dial will always increase or decrease the shutter speed without having to think about whether we are working with fractions or decimal numbers. In a camera configured in thirds mode, three clicks in the reducing direction will halve the exposure time, whatever the original value was.
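The bookkeeping of the river example can be checked mechanically. A minimal sketch (nominal shutter values are rounded by cameras, so the exact 1/12.5 s result is displayed as 1/13):

```python
def compensate_shutter(time_s: float, stops: float) -> float:
    """New exposure time after adding `stops` of light with the shutter.
    Positive stops mean more light, i.e. a longer time (each stop doubles it)."""
    return time_s * 2.0 ** stops

# River example: ISO 400 -> 100 (-2 stops) plus f/4 -> f/8 (-2 stops)
# must be compensated with +4 stops on the shutter.
new_time = compensate_shutter(1 / 200, +4)
print(new_time)               # 0.08 s
print(round(1 / new_time, 1)) # 12.5 -> the camera displays it as 1/13 s
```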

Hansel forgot the oven on
A slow shutter speed (0.25 s) allows the smoke to move while the sensor is exposed, recording an averaged trail of its movement.

The shutter speed also matters from an artistic point of view. When we photograph still objects we can use any speed we need, but when the subject is moving some considerations arise. If the subject moves fast enough, it changes its position relative to the sensor while the shutter is open, leaving a trail in the image instead of a sharp shape. A fast shutter speed reduces the time the sensor is exposed, not giving the subject time to move, so it appears frozen in our picture. A fast shutter speed is the desired option when we want a sharp picture of a football player kicking the ball or a rally car at the moment it takes the decisive curve. On the other side, a slow shutter speed gathers more light but will not freeze the action. We use slow shutter speeds when we want to record the trails car lights leave at night, or the movement of a person inside a dark room.

The “trail” effect at slow shutter speeds can be pushed even further. Using a very long exposure time (slow shutter speed, low ISO and small aperture) we capture everything that stays still the whole time. Everything that moves, like cars or people, contributes so little light to any given spot while passing through it that it simply disappears from the picture: the light it reflects is too small compared with the light the background reflects during the rest of the exposure. Using this you can “empty” a place without having to care about the people in it, but only if the place is not very crowded. If it is, this technique won’t work: the spot occupied by one person will soon be occupied by another, and on average the spot will be occupied more time than it is free, giving an “averaged” trail, which also has artistic possibilities.

An important consideration with shutter speed is the blur caused by the shake of the photographer’s hands. Usually, at high shutter speeds the exposure time is so short that the shake doesn’t affect the quality of the picture, but below roughly 1/50 seconds the shake of the hands starts to appear as a general blur affecting every object in the scene. It is also worth knowing that this effect depends on the focal length of the lens: 1/50 seconds is the slowest safe speed for focal lengths up to 50 mm, but as the focal length increases (for example, with a telephoto), the safe speed increases as the reciprocal of the focal length, which is the wordy way of saying that for a 100 mm lens the safe speed is 1/100 seconds and for a 300 mm it is 1/300 seconds. Nevertheless, this is only a reference value: a photographer with steady hands or a good stance (or using an image-stabilized lens) can go a little slower without it being noticeable, while conditions like strong wind can require faster speeds to take the picture correctly.
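The reciprocal rule is easy to encode. A sketch with the 1/50 s floor mentioned above for short focal lengths (remember it is only a guideline):

```python
def slowest_handheld_time(focal_length_mm: float) -> float:
    """Rule-of-thumb slowest safe handheld exposure time in seconds:
    the reciprocal of the focal length, with 1/50 s as the floor
    for focal lengths of 50 mm or less. A guideline, not a guarantee."""
    return 1.0 / max(focal_length_mm, 50.0)

print(slowest_handheld_time(50))   # 0.02 -> 1/50 s
print(slowest_handheld_time(300))  # ~0.0033 -> 1/300 s
```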

Finally, another artistic use of the shutter speed is panning, which works at slow shutter speeds in the opposite way to what we explained before. With a slow shutter speed and a fast-moving subject, what we usually get is a sharp background and a trail along the trajectory of the subject. Panning consists of following the moving subject at exactly the same speed it is moving (i.e. keeping it on exactly the same point of the viewfinder while you move the camera with it). When achieved correctly, the subject comes out perfectly sharp, as it hasn’t moved relative to our sensor while the shutter was open, while the background, which has changed the whole time, appears as a trail, providing the desired sensation of speed around our frozen subject. This technique is quite complicated for beginners, as it requires following the moving subject very precisely, and a bold effect requires very slow speeds, which means a steady hand for a longer time; but once mastered, it produces very interesting and spectacular pictures.


The shutter speed (1/30 seconds) is slow compared to the movement of the motorbike, but following it keeps the motorbike sharp while the background is blurred. Also, the fast movement of the wheels blurs them, increasing the sensation of speed. (Picture by Cù Đình Kiên)

With this in mind, the three pillars of exposure are complete, and now you have all the basic information you need to shoot anything the way you want. Now it’s time to prove it: go outside and start practicing. Do you have any doubts or remarkable experiences? Share them in the comments!

Photography Basics: ISO and noise

CMOS sensor, by Filya1 (Wikipedia).

In previous posts we saw one of the three pillars that sustain exposure in photography: the aperture, from both a theoretical and a practical standpoint. We saw that using a wide aperture lets in more light at the cost of decreasing the depth of field, and the artistic implications of this trade-off.

Now we are going to study the second pillar of exposure: the sensitivity of the sensor. The sensitivity, usually called ISO on cameras, represents how responsive each pixel in the sensor is to the light that reaches it. It is represented by a number, usually starting at 100, although some cameras can start even lower, at 50. The higher the sensitivity, the less light we need to achieve a given exposure value. This means that by increasing the ISO value we can take pictures in darker conditions.

The relationship between ISO and light is linear, rather than quadratic as it was with the aperture. This means that doubling the sensitivity effectively doubles the amount of light that we record.

Example: we want to take a picture inside a church, and we correctly expose it with a shutter speed of 1/10 s, f/5.6 and ISO 100. We want to keep the aperture constant because it is in the sweet spot of our lens. The problem is that people are slowly moving inside the building, appearing blurred in the final picture, and we want to freeze them. If we increase our ISO to 800, we increase the light 8 times (800/100 = 8), which means we have doubled the amount of light three times (log₂ 8 = 3; or, put another way, we doubled 100 three times to reach 800: 100 → 200 → 400 → 800). We can then shorten the exposure time by a factor of 8 to compensate, up to 1/80 s (1/10 → 1/20 → 1/40 → 1/80). This speed is more than enough to freeze the slow movement of the people inside the church.
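The stop arithmetic of the church example can be verified in a few lines (the function name is mine):

```python
import math

def stops_between_iso(iso_from: int, iso_to: int) -> float:
    """Stops of light gained (positive) or lost (negative) by an ISO change."""
    return math.log2(iso_to / iso_from)

stops = stops_between_iso(100, 800)
print(stops)                # 3.0 -> three stops gained
print((1 / 10) / 2**stops)  # 0.0125 s = 1/80 s, the compensated exposure time
```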

Some cameras change ISO in full stops, which means that each step up or down doubles or halves the amount of light. In a camera configured to change aperture or shutter speed in thirds, this means that every time we change ISO one step, we can move the other values 3 “clicks” (if the “click” nomenclature is unfamiliar, refer to the theoretical explanation on aperture).

Example: we start from the parameters of the last exercise: 1/80 s, f/5.6, ISO 800. Reducing ISO to 400 means halving the amount of light (one stop). To compensate, we should move the aperture or the shutter speed one stop (three clicks), opening or slowing it: 1/40 s or f/4 (but not both at the same time). A combination of both can also be used, spending the three “clicks” between them, for example 1/60 s and f/4.5, or 1/50 s and f/5. In the first case we move the aperture two thirds and the shutter speed one third; in the second, the opposite.

Other cameras also accept thirds or halves of ISO stops. In those cases the same rules as for aperture apply: one click in any direction can be compensated with one click in the opposite direction in either of the other two parameters.

Seen this way, ISO looks like a silver bullet against the lack-of-light monster. But as always in photography there is a trade-off, and in this case it is not artistic but technical: increasing the ISO also increases the noise in the picture. Noise is a small, uncontrollable random variation in the brightness and color of every pixel in the picture.

When the sensor captures a photon, it generates a signal by releasing an electron into the internal circuit. The more electrons in the circuit, the more light the camera understands has been captured. Sometimes, when no photon has been detected, a “rogue” electron escapes from the sensor anyway (and goes living la vida loca). The camera has no way to know that this one wasn’t released by light, so it counts it as if it were. When we increase the ISO value, the signal is amplified: each photon counts as two, four, eight… electrons, but the electrons that escape by themselves are amplified by the same amount. This means that in a black pixel, where no light is received, the theoretical color should be black (0% grey), but it usually shows a very small grey value (e.g. 0.5% grey) because of the electrons that escaped without any light interaction. When we increase the ISO, the pixel still doesn’t receive any photon, but each stray electron now counts as 8, so we obtain 8 times more grey than before (a 4% grey).
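Under this simplified model (real sensor noise has several independent sources, so this is only an illustration), the grey level of an unlit pixel scales directly with the ISO gain:

```python
def noise_floor(base_grey: float, iso: int, base_iso: int = 100) -> float:
    """Grey level of a pixel that received no light, under the simplified
    model above: the stray-electron signal scales with the ISO gain."""
    return base_grey * (iso / base_iso)

print(noise_floor(0.005, 100))  # 0.005 -> 0.5% grey at base ISO
print(noise_floor(0.005, 800))  # 0.04  -> 4% grey at ISO 800
```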

The release of these electrons is completely random. If it weren’t, we could simply adjust brightness, contrast or levels in the final picture to compensate. But we cannot really know which pixels have noise, or how much, as it varies all the time: instead of a steady “mist” covering the picture we get a random “snow” that changes from picture to picture. Also, every full-color pixel is formed from 4 basic photosites (1 red, 2 green and 1 blue), and they don’t all emit stray electrons at the same rate, which means every pixel of noise has a different color. As a result, increasing the ISO decreases the quality of the picture.

From the previous explanation we can deduce another characteristic of noise: it affects the dark areas of the picture more than the highlights. For a given ISO value, every pixel has a similar chance of producing noise (a final added luminosity ranging from 0, no “rogue” electrons, to some unknown amount x). In a dark pixel, with luminosity close to 0, adding that amount x increases the luminosity enormously relative to the actual value (if the real luminosity is 1% and x equals 8%, the pixel becomes nine times brighter). On the other hand, if a pixel in the highlights has a value of 90%, adding 8% brightens it by a factor of about 1.09, far less noticeable compared with its surroundings.
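This asymmetry can be illustrated numerically, reusing the 8% noise figure from the example (all values are illustrative, not measurements):

```python
def relative_brightening(signal: float, noise: float) -> float:
    """Factor by which a pixel brightens once noise is added, with signal
    and noise expressed as luminosity fractions (0.0 to 1.0)."""
    return (signal + noise) / signal

print(round(relative_brightening(0.01, 0.08), 2))  # 9.0  -> shadow pixel, very visible
print(round(relative_brightening(0.90, 0.08), 2))  # 1.09 -> highlight, barely visible
```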


In this picture we can see Mr. Leprechaunius Smith, Mr. Brainsqueezer and Mr. Someweirdelvenname discussing the economic theory of “hydraulic macroeconomy” around a table. Mr. Someweirdelvenname seems to be taking the argument too seriously. The picture is taken at ISO 100 (the lowest available) with the APS-C sensor of a Canon 500D. All noise reduction features are disabled for this example.


This is a 100% magnification crop of the face of Mr. Someweirdelvenname. On the left we see the picture taken at ISO 100: although some noise can be appreciated in the background, the overall quality is acceptable. On the right we see the same crop of the picture taken at ISO 3200. Two things are clear: the increase in noise is enormous, reducing the overall quality of the picture, and the noise is much more visible in the darker areas than in the white face. This second picture may be acceptable for printing at a small size, but not for cropping out a small part and using it as a whole image.


With the first part of the explanation in mind, some photographers who value quality tend to regard high ISO values as evil, and never raise the ISO to avoid losing quality. This is a useful rule in general, but it has a problem: sometimes, not increasing the ISO makes the picture underexposed, darker than it should be, and it then has to be brightened in post-processing. This means that all the noise that was in the darker areas ends up in the midtones once you have corrected it, as you amplify the good and the bad parts at the same time. On the other side, if increasing the ISO leads to a correctly exposed picture, the midtones may carry more noise, but it will be barely noticeable except in the darker areas, which means a general increase in quality. As a rule: increase the ISO whenever not doing so would leave the picture underexposed, and don’t increase it whenever you can achieve a correct exposure with the other parameters.

Also, as an even more general rule:

Taking a low quality photograph is better than not taking it at all.

Now it’s time to go out and practice with the sensitivity of the sensor. Try to take some pictures in a dark room or at night, change the values and see the effects. Any questions? Want to share your experience with ISO? Comments are open for you to contribute anything you have to say.