Making good use of color relationships

When we think of memorable photographs that have become special to us, we can easily group them into two broad categories: black-and-white pictures and color pictures. Black and white is a classic, as the first pictures ever taken after the invention of the camera used this system due to the lack of means to capture color. From that moment on it endured the advances of progress and technology, and even nowadays it is easy to find groups and associations of good photographers who defend the absence of color. The advantages are clear: once we get rid of all the information that color transmits, the brain can focus on the light as a whole, on the contrasts, and on the differences in illumination that highlight and enhance textures. But removing the color from a scene also means destroying a lot of information that can be not only useful, but also catchy and expressive. Sometimes, the only element that makes a good picture great is the careful selection of colors, tones and hues that, combined in a unique way, catches the eye and stays in the mind forever.

The main issue with color, and probably the only one most amateur photographers think about, is its correctness. Some people consider that the colors in the picture must reproduce the original ones of the scene as precisely as possible, while others argue that tuning and increasing their appeal is a valid and amusing way of transforming boring reality into art. But in both cases the colors have to seem perfectly coherent to the viewer. You might capture the yellow tone of an apple's skin with incredible precision, or enhance its natural appeal by shifting its hue to a bold and luminous green, but transforming an apple into a purple sphere would surely confuse the observer, unless the picture is part of some surrealistic context that completes its meaning. Correctness of color is a chain that links the picture to the real world, making it recognizable to the public of our work, and so great effort is often made to achieve it (calibrating screens, acquiring ICC color profiles and processing pictures at 16-bit color depth are just a few examples of how concerned photographers are about this topic).

Sunset in my backyard  :))
Color here is as correct as in many other pictures taken every day. Still, few pictures capture the appealing atmosphere that color conveys in this one. Picture by Ludmila Liber.

But aside from correctness, only experienced photographers tend to start thinking outside the box, analyzing the relationships the colors of a picture have between them and how they affect the overall result. Of course, a strawberry is expected to be red (bright red, ideally), but many different combinations can be designed with this basic idea in mind: Should it lie on the ground, surrounded by green grass? Or perhaps some grass should be removed so the brown of the dirt can also be used as a composition tool? Some photographers might be inclined to photograph the strawberry against the sky, creating a contrast between red and blue… All these compositions are realistic, but not all of them achieve the same degree of boldness once developed into the final picture.

Music, as another art form, is deeply concerned with the different elements that compose it and their relationships; a musical scale was developed to bring order into a chaos of sounds and frequencies. The musical scale allows the composer to choose carefully which notes will be brought together to obtain a melody that remains memorable across the centuries (if you don't believe me, consider how simple the beginning of Beethoven's Fifth Symphony is, and the strength it transmits, usually described as "Fate knocking at the door"). The Greeks, in the 4th century BC, realized that colors maintain a similar relationship, and developed what is called the chromatic scale, dividing it into half tones as musicians do. Even accepting that color combinations change as fashion does, it is also true that some combinations of the scale have the ability to catch the viewer's attention independently of the year, location or context (what does a black and yellow striped pattern bring to your mind?). Achieving harmony can be a difficult task, and using it in an artistic and creative way can only be done from the photographer's point of view, without constraining rules. But it is also certain that basing our choices on sound principles can increase our chances of achieving a special picture that will be remembered.

Goethe, in his work on the theory of color, showed some well-defined combinations that work well together, based on how they are interpreted by our eyes and brain. He also established the different psychological considerations that these combinations carry with them, and how they can be used to tune the mood of the image. Some examples usually worth memorizing follow, keeping in mind that the colors must be interpreted as guides and not as pure hues (blue means pure blue, but also cyan, aquamarine and similar colors).



Red and green

Both colors, being primary, are interpreted by the brain as similar in brightness, even though the eye is more sensitive to green than to red. The eye's greater sensitivity to green is compensated by the psychological weight red carries, in the sense of heat, danger and action. With this in mind, the harmonic combination is the one that mixes them in a ratio of 1:1, which means that neither dominates over the other and both are well balanced in the picture in similar amounts. This can be achieved in a symmetric way (i.e. giving half of the frame to each one), or by mixing them together while respecting the proportions; but in any case, to achieve harmony, we should not perceive one as more dominant than the other. In the strawberry example above: we should be sure to increase the apparent size of the berry in relation to the grass, so the area the grass occupies is similar to the area of the berry, not letting the grass capture all the attention.

The red-eyed green tree frog (Agalychnis callidryas)
Red tones in the flower and eyes are compensated by the greens of the background and frog, both occupying almost the same amount of space. This helps balance the mood of the picture. Picture by Esteban Cartin.


Orange and blue

In this case, the brain is much more sensitive to orange than to blue due to its psychological connotations. Orange, as with red, is associated with heat, activity and danger. It is a very catchy color that tends to attract attention, especially if it is not surrounded by any other warm color. On the other hand, blue is considered a cold color, associated with ice or gelid water, the sky on a cold day, and with calmness and sadness. Also, the eye is less sensitive to blue than to the warmer colors. To compensate for this difference in strength, a relationship of 3 parts of orange for every 8 parts of blue is recommended (27%, around one fourth). If, for example, we want to photograph the sand of the Sahara Desert against a blue sky, we could be tempted to follow the rule of thirds and place the horizon along the upper or lower third. But an alternative composition worth exploring is to let the sand occupy the lower fourth of the frame and assign the other three fourths to the sky (assuming there is something of interest in the sky). Even if this doesn't work, this color rule shows us that aligning the horizon with the lower third is better than aligning it with the upper third… as long as there isn't any other subject in the frame that alters the composition.

Orange accounts for only about a quarter of the total color, resulting in a balanced mixture even taking into account the asymmetry of the picture. A higher amount of orange would decrease the importance of the blue tones, making them secondary and unattractive. Picture by Andy Lee.


Yellow and purple

The same considerations used for the orange-blue example apply here, with a small modification. Although yellow is closer to the colder part of the scale than red or orange, it still retains its association with danger (think of wasps or tropical frogs). This makes it a bold color for the brain if it is not surrounded by other warm colors. On the other hand, purple is one of the colors the brain is least sensitive to. In fact, true spectral violet is very rare in nature (what most people call violet is, in fact, indigo), and the eye is so poorly adapted to see it that it appears very dim when observed (compared to a green light of the same intensity, for example). This shifts the color ratio to an even more drastic proportion, assigning three parts of yellow for nine parts (instead of eight) of purple. As a rule of thumb, the more a blue is shifted towards purple, the more space we should allow it, and the more a purple is shifted towards magenta or red, the less space we should give it.

Opposite colors
Yellow, being much more luminous and catchy than purple, is confined to only one fourth of the picture, leaving most of the space to the purple. Picture by Maelia Rouch.


Yellow, red and blue

In this case, the warm colors should be given less space than the blue, with a proportion of 7 parts warm to 9 parts blue. Blue is only slightly increased because mixing yellow with red dilutes the strength of the red (yellow has a chilling effect on the red instead of potentiating it). The best ratio between yellow and red becomes 3 of yellow to 4 of red, which makes a final ratio of 3:4:9. Note that increasing the amount of yellow doesn't change the effect on the blue much, as the chilling effect stays roughly constant in these proportions. On the other hand, decreasing the amount of yellow dramatically increases the amount of color the eye perceives in the red part of the picture, approximating the ratios to those of the orange-blue example.
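These proportions are simple enough to turn into numbers. Here is a minimal sketch that converts the ratios given above into per-color area fractions of the frame (the ratio values come from the text; the function name is my own):

```python
def harmony_fractions(ratio):
    """Convert a color harmony ratio into per-color area fractions.

    ratio: dict mapping a color name to its relative parts,
           e.g. {'orange': 3, 'blue': 8}.
    Returns a dict mapping each color to the fraction of the frame
    it should occupy.
    """
    total = sum(ratio.values())
    return {color: parts / total for color, parts in ratio.items()}

# The combinations discussed above:
print(harmony_fractions({'red': 1, 'green': 1}))            # half the frame each
print(harmony_fractions({'orange': 3, 'blue': 8}))          # orange ~27%
print(harmony_fractions({'yellow': 3, 'red': 4, 'blue': 9}))
```

For the Sahara example, `harmony_fractions({'orange': 3, 'blue': 8})` gives the sand roughly the lower fourth of the frame, matching the composition suggested above.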


With these few examples in mind, it is now easy to start thinking about how to combine colors in our next pictures in a more effective way. Of course, this doesn't mean that without a good subject we will get an incredible image, nor does it mean that we should strictly constrain our composition and framing to these rules. They must be considered as guides, as in the Sahara example, where thinking about them might open new ways of composing; or when photographing a subject that already approximates those relations, where tweaking them a little might increase the perceived effect.

Do you have any other ideas for color combinations that work well? If so, please comment and share them with us. We will be happy to learn even more and try them out.


Photography Basics: How to Expose Correctly (I)

In the previous chapters of Photography Basics we saw the three pillars that affect exposure: aperture (I and II), sensor sensitivity and shutter speed. We also saw the implications they have on the picture's style when the values of those parameters are changed. But if you recall the examples given for all those cases, all of them assumed that some correct exposure values had already been chosen, and we just modified them to achieve the desired effect. We never explained how to arrive at those initial values. That's what this post is about.

For this post I'm assuming that all work is done in M mode (full manual), which means that the photographer is in charge of everything. In all other modes, semiautomatic and automatic, part of this work is done by the camera, with all the limitations that implies. Only in M mode is the photographer completely responsible for achieving the correct exposure for the picture he wants to take.

The photometer

Every modern camera (and by modern I include even film cameras from the 80s or earlier) has a device called a photometer. As the name implies, the function of this tool is to measure the amount of light that enters the camera through the lens. In modern cameras, an array of photodiodes produces a tiny electric current that the camera can measure. When those diodes are illuminated, the current changes in a predictable way, so the camera can calculate the amount of ambient light and show it to us on the dial. In analog cameras, light falling on a photosensitive cell changed its electrical resistance, which made a needle move over a graduated scale.

Different exposure values shown on a photometer. By GRPH3B18 via Wikipedia.

The dial the camera shows is similar to the one depicted on the right. It has a scale that goes from -2 or -3 to +2 or +3 (depending on the dynamic range of the camera, i.e. the maximum number of illumination tones it can distinguish between pure black and pure white). Below it, a mark moves around depending on the illumination.

Cameras, being as dumb as any other automatic device, need a standard to measure against. Without it, the camera wouldn't know whether a specific amount of light is appropriate for a situation. The reference used by the camera is called grey-18, a grey that reflects 18% of the light that falls on it, which is very similar to what we would call a "50% brightness grey" in any software (not to be confused with 50 Shades of Grey; cameras and handcuffs have little in common). So what the camera does is evaluate the amount of light entering through the lens and calculate, using the parameters currently set in the camera, how the exposure of the picture would compare with a grey-18. If the value is 0, it means that the amount of light recorded will lead to a correct exposure for a subject that reflects 18% of the light. A value of +1 means that the camera will record double the light needed, and -2 means that the camera will record only one fourth (2⁻² = 1/4) of the necessary amount of light.
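The stop arithmetic above is just powers of two, and can be checked in one line (the function name is my own):

```python
def stops_to_light_ratio(ev):
    """Each stop on the photometer scale doubles or halves the light:
    a reading of ev corresponds to 2**ev times the needed amount."""
    return 2.0 ** ev

print(stops_to_light_ratio(+1))  # 2.0: double the needed light
print(stops_to_light_ratio(-2))  # 0.25: one fourth of the needed light
```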

The central card is grey-18; metering on it and aligning the mark to 0 will yield a correct exposure.

If the picture is taken with the mark on the "+" side, the result will be overexposed: bright and with washed-out colors, as we are capturing more light than necessary. If the picture is taken with the mark on the "-" side, the picture will be underexposed, resulting in a dark and noisy image, because we are recording less light than necessary. An example of both situations can be seen in the picture below. So if we are photographing a subject close to grey-18 and the mark is at +1 (double the light needed), we will have to reduce the exposure by one stop of ISO, shutter speed or aperture to achieve a correct exposure (mark at 0). If the mark is at -2, we will have to increase the exposure by two stops of speed, ISO or aperture, in any combination, to reach the 0 mark. In summary, exposing correctly for a grey-18 is as simple as bringing the mark to the zero point.

Left: Underexposed picture, where details are lost in the dark areas. Center: Correctly exposed picture; all details can be seen. Right: Overexposed picture; the brighter parts don't retain detail and colors appear washed out.

But… hey! My picture is not grey…

And now the complications begin…

I tried to make it simple and easy, but you just couldn't be satisfied with photographing grey walls, and grey skies, and even grey cats… You also want to capture the bright colors of the rainbow, and your family on that trip to the snow, and even that black cat crossing the black asphalt on a dark night in that mysterious black town where your car engine decided to go on strike without warning (apart from those little squeaky sounds it had been making for the last 300 km).

It happens that the world is not grey, but the camera has no way of knowing that. The photometer can only compare against that reference, so it is our responsibility to correct for it. Suppose, for example, we are traveling with our family to a fancy snowy mountain where people ski happily on holiday. We take the camera out of the bag, move the dials until the mark reaches zero, ask everybody to smile, ask again hoping that this time they will listen to us… and finally press the shutter button. But the picture we obtain is not as bright as the snow really is; instead we get dim grey snow and an overall dark image. That's because snow reflects more than 18% of the light it receives (usually close to 30%, though it depends on the kind of snow and how the light reflects on it). The camera assumed the snow reflected only 18% and optimized the parameters for that amount. As a result it let in less light than necessary (you get grey snow, as if it had reflected 18% of the light). The way to compensate is to expose not for the zero mark, but for +1, which in "camera language" means: capture double the light you would consider normal. As 30% is approximately double 18%, this adjustment works much better. If the picture is still not bright enough, you can expose higher until you are satisfied with the result, always taking care that the snow doesn't reach pure white, where it loses all detail.

Left: The black bottle is exposed assuming that 0 is the correct point. We capture more light than necessary and dark tones become brighter than they should be. Right: Correctly exposed picture using the -2 mark.

The opposite is also true. We are walking in the park and suddenly a black cat appears. Fascinated by it (I don't know why, but our example photographer gets fascinated by such a cat), he takes the camera, exposes to zero and presses the shutter button. The picture we get is overexposed and brighter than necessary, and our cat appears washed out and greyish instead of its real black color. What happens is that the cat reflects less than 18% of the incident light (around 8-10%), but the camera allowed light to reach the sensor until it reached what it considered an 18% grey, which is too much light in this situation. In this case, as 10% is approximately half of 18%, we should adjust our exposure values to the -1 mark to get the appropriate exposure.

To learn the correct values for many situations, the best method is to practice and take loads of pictures, as experience will lead you to the optimal values for your camera. Still, approximate values can be deduced by observation. A grey-18 color is easy to memorize; anything darker than it gets a negative compensation value. Black animals usually get -1; dark-skinned people usually need -2/3, or even -1, and down to -1 1/3 for really dark skin. Asphalt ranges from -1 to -2, and some black fabrics can easily reach -2. White skin usually exposes correctly from +1/3 to +2/3, snow requires +1 to +2, and a white building wall or the clouds can definitely reach +2. You will eventually memorize all those values, but meanwhile they can be estimated just by looking at the scene and comparing it with a mental representation of grey-18.
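These compensation values can also be estimated from reflectance with a base-2 logarithm, since each stop doubles the light. A small sketch (the reflectance figures are the approximations given in the text; the function name is my own):

```python
import math

def exposure_compensation(reflectance, reference=0.18):
    """Stops of compensation for a subject of the given reflectance,
    relative to the grey-18 reference: EV = log2(reflectance / reference)."""
    return math.log2(reflectance / reference)

print(round(exposure_compensation(0.30), 2))  # snow (~30%): about +0.74 stops
print(round(exposure_compensation(0.10), 2))  # black cat (~10%): about -0.85 stops
```

Both figures land close to the +1 and -1 marks suggested earlier, which is as precise as a dial graduated in thirds of a stop allows anyway.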


That’s too complicated… I’m sticking to semiautomatic.

Even in the semiautomatic modes this problem arises. In any semiautomatic mode it is the camera that decides at least one of the exposure parameters, and it will adjust that parameter assuming that whatever you are photographing is grey-18. So a snow picture taken in a semiautomatic mode will still yield a grey, underexposed image (except in special "snow" modes, which are programmed to compensate for this particular situation, but not for others).

The way to solve this problem is called exposure compensation. When you activate it, you place a mark at some point of the exposure scale, and the camera will adjust the parameters to that point instead of the zero mark. So if you want to take a picture in the snow, you set the exposure compensation to +1, and the camera will automatically adjust the parameters to that mark instead of zero.

If you switched to semiautomatic to avoid learning the values for the different situations, you should return to M mode now, as you'll have to learn them anyway to get good exposures. This doesn't mean semiautomatic modes aren't appropriate in other situations, but you will still need to know how to expose correctly.

After all this explanation you're ready to go outside (or stay inside if the weather is bad, you have my permission) and start practicing how to expose correctly. In the next post, some more advanced techniques will be explained that involve controlling how the camera measures the light. But to understand those concepts, understanding how to expose correctly is necessary, and the most fun way to learn is by taking pictures.

No time to watch the stars


No time to watch the stars represents a regression, a return to the origin. When I started taking photographs in a serious manner (which means: knowing what I was doing) I was really limited in my photographic equipment, my range of movement and my creativity. It was a time of pure experimentation, of trying to find interest in everyday places, and of simple compositions. It was then that I specialized in night photography. Getting impressive pictures by day with the things and places I had around was difficult, but night and its exotic illumination offered a new view that usually went unappreciated. Our eyes are mostly designed to work in daylight, and they don't catch the subtleties of dim light as well as a camera does. That means that photographing a common place under that illumination tinged the entire picture with a magical, dreamlike mood that completely transformed everything into something new.

Campos de cebada
Taken in June 2009.

The greatest exponent of that period's style, and the one I pay tribute to today, is "Campos de cebada" (Barley fields). It was taken, even before I had my first reflex camera, in a crop field near my home. It was a warm night at the beginning of summer (which proved to be a really interesting one, thanks to photography), and I took advantage of the good temperature to explore a little around. The result is a picture of a very common and characteristic crop field in my area, but with a completely opposite attitude compared with the typical landscape picture. Of course the quality is poor, as it was taken with a tiny compact camera and my skill with photographic software was in its beginnings, but the point is that it happened to become a memorable one.

Today's picture is a reinterpretation of that one, in a way that also reflects my mood in those times. I didn't plan to go and take the picture; I just decided to do it on the spot, right after arriving home from work. I didn't decide to take a night picture; my intention was only to capture the sunset, but I stayed longer just because I was in no hurry at that moment. I didn't intend to take a picture as a tribute to that one; I had another picture in mind, with a more important role for the stars and sky, where the artificial light was just an accessory and not the central part. I didn't decide to capture the car lights either; it just happened that I saw the cars approaching and I took the picture just to try… You know… because now a card can store more than 500 pictures and one more or less makes no difference. I didn't even capture the foggy clouds on the right side on purpose… They were just there and I used them to compose the frame. After returning home I realized that those clouds were also in the original one (7 years ago), in almost the same place. In the end, it happens that my tribute picture occurred mostly by chance, the same "engine" that created the original one.

The title is straightforward. None of the people who passed by in their cars that night stopped to look at the stars. Only me, the bored and lonely photographer, was there, looking at them while the rest of the world passed around thinking about dinner, sleep or TV. The picture captures this literally: people in motion, leaving just a trail of light behind them in their rush to get home, while the stars, static at the top, contemplate the human world that means nothing to them. And between the two worlds: a camera; a little object that moves tiny electrons to capture light. Light that exists only at that moment and one moment later simply disappears, substituted by some other light, similar but different, always changing (in the same way that you can't step into the same river twice, as it flows to the sea). People can take that same picture in that same place anytime, with the same illumination and the same composition… But it won't be the same light, the same stars that passed that night above our heads… unnoticed.

Because time never goes back.

Exsate Golden Hour

It will be no surprise to any visitor that planning is one of the key elements of success in any photographic session. Sometimes you're carrying your camera at the right moment and you get a wonderful picture, but most of the time the "magic" doesn't simply appear in front of us; we have to make it happen. Studying the place, the weather and many other factors helps make the session a success.

Exsate Golden Hour is a free Android app that allows us to plan any outdoor photographic session. It gives us information such as sunrise and sunset times, the position of the moon, the time range of the golden hour, the blue hour, etc. With all that information we can know, for example, the best moment for a portrait session with the magic glow of sunset light, or when the moon will rise or set during an astrophotography session. The app provides all of this information in a summary view for easy lookup.

The diagram shows, for a set time, the sun and moon positions, and some events like the golden hour and blue hour.

To give a more graphic representation of all the information, the app provides a diagram where the sun and moon altitudes are plotted against the time of day. All calculations are made for the photographer's current location, but it can be changed to any position we desire. This way it is easy to compare the position of the moon for a specific position of the sun, or to check when the two bodies will maintain a specific relationship (e.g. a night without a moon, a sunset with the moon below 30 degrees, or a sunrise with the moon at its zenith). Also, below the graph, a bar representation shows information about specific events that can happen, like the golden hour, the blue hour, sunsets with expressive skies due to the presence of clouds, or clear days with no wind, suitable for drone photography. All those events can also be checked in the summary list for more information.

In the example, the map lets us find the correct time to take a picture with the sun just above the church.

Apart from the times, the program provides a map where the location can be set. The map overlays an azimuthal scale around the selected point and draws the positions of the sun and moon on it. This allows us to see a ground projection of the positions the two bodies occupy in the sky, in case we want to plan a photograph where either of them is in a specific position. It also shows the positions where the sun and moon rose and set that day, and whether they are above or below the horizon at that moment. Although this function is interesting for planning pictures that require controlling the position of the bodies against static objects (for example, buildings), it lacks some advanced controls that other applications have. For example, it is impossible to measure the relative altitude between the selected position and some other point, so it might happen that, although the sun is still above the horizon as planned, we cannot get our desired picture because the sun is behind a mountain or building. Also, sunset times are computed for an altitude of 0 degrees, but if the horizon has elevations the real sunset will be earlier than predicted, or later if we are standing on the elevation; the app doesn't allow taking those cases into account.
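That missing relative-altitude check can be approximated by hand with basic trigonometry. A rough sketch (heights and distance in the same units; it ignores Earth curvature and refraction, which is acceptable over short distances; the names are my own):

```python
import math

def obstacle_hides_sun(sun_altitude_deg, obstacle_height, camera_height, distance):
    """True if an obstacle (mountain, building) in the sun's direction
    appears higher, seen from the camera, than the sun's altitude."""
    obstacle_altitude = math.degrees(
        math.atan2(obstacle_height - camera_height, distance))
    return sun_altitude_deg < obstacle_altitude

# A 300 m hill seen from 2 km away subtends about 8.5 degrees,
# so a sun still 5 degrees above the horizon is already hidden:
print(obstacle_hides_sun(5.0, 300.0, 0.0, 2000.0))  # True
```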

When the set conditions are met, the app notifies us of the event.

But the true strength of this application is prediction. The events described previously are provided by the application by default, but it is also possible to create our own events. Suppose, for example, we want to photograph a sunset with the sun just beside the bell tower of a church, and we know approximately the altitude the sun must have and its azimuth with respect to our desired camera position (this second parameter can be easily determined in the "map" view). In this case we can create an event for the sun being in that region, add some "spice" if desired, such as requiring the weather to be clear or the moon to be in another specific position, and finally confirm the event. From that moment on, the app will notify us of the next time the conditions will be met and will also overlay a bar in the diagram, so we can easily see how long the event will last or whether it overlaps with another event, like a golden hour. One drawback of this mode is that it requires some practice to achieve good results, as there are many tunable parameters that are not always fully explained. It's a powerful option, but not as intuitive as it could be.

So, in summary, Exsate Golden Hour allows us to plan any outdoor session with precision, for any geographical point, and to program alerts so we get notified when the desired conditions are met. In the geographical aspect it's a little limited, but it might become more powerful in future versions. Also, it doesn't require an internet connection, although some functionality (like the map) might not be fully usable without one.

Conditions and event information for any chosen day.

Ease of use: 3.5 / 5 (The higher the better)

Specificity: 3 / 5 (Higher doesn’t mean better)

Applicability: 4.5 / 5 (Higher tends to be better)

Name: Exsate Golden Hour

Producer: Exsate Multimedia Solutions

Platform: Android

Price: Free

Size: 2.6 MB (apk); around 7 MB (storage); about 25 MB (cache).

Download: Google Play, Exsate official website.

Tone map

One common mistake made by many beginning amateurs in photography, especially the more purist ones, is believing that all that matters is what the camera can capture, and that all post-processing is evil. From this point of view, all corrections should be made in camera, and if they are performed correctly, the image we obtain is the best possible outcome of the process. I also went through this phase, a little after the "eccentric filter phase" and just before the "HDR hole phase". It is true that good parameters make a picture good, and it is difficult to obtain a good quality picture without the proper control of focus, exposure and many other scene-dependent parameters, but that is not the whole story.

Something you learn after taking many, many pictures is that what you see is just an approximation of what reality really is (not everybody sees the same range of green-blue tones, for example), and that what the camera captures is just an approximation of what the eye really sees. For example, the dynamic range (the difference in luminosity between what the eye or the sensor considers pure white and pure black) goes from 5 to 8 f-stops for most compact cameras and can increase to around 11 f-stops for high-end cameras, while the eye is estimated to reach around 14 f-stops. Also, an eye in good condition can detect more color tones than a camera. All of this means that even with the best parameters, we will usually not obtain a picture that shows reality as it truly was when we took it. Even for people who defend a realistic view of photography, some corrections are necessary to increase the quality of the picture.

Tone map is one of the filters/processes that help to achieve more natural pictures. It derives from a technique called HDR (High Dynamic Range), which combines two or more pictures with different expositions to achieve a final picture that captures a bigger dynamic range than any of the original ones. In this case, tone map can simulate this effect to a point, simulating an extended dynamic range similar to the eye, but only using one picture. What the filter does is to average luminosity along all the histogram (i.e. avoiding vast zones with similar luminosity while some luminosities doesn’t appear on the picture), increasing the contrast in the darker and clearer areas (shadows and highlights) while reducing contrast in midtones areas. This translates into a more natural picture where everything seems to have a more uniform illumination.

This technique not always gives good results, as it is very dependent of the illumination conditions of the place and the quality of the picture. It usually is very useful in photography under natural light or when working in interiors with artificial light (not controlled by the photographer, like fluorescent lights or similar illuminations), although sometimes it can also increase quality in pictures taken with a frontal flash. In high contrast situation, when this difference on illumination is important, or in pictures where the blacks suppose an important factor for the character of the image, usually this filter doesn’t perform well, though sometimes used in a small amount can provide an interesting effect.


Original B
Original picture: The shadows obscure the image. The poor girl can’t be sure if it’s her mother or somebody else impersonating her.

To show how this filter is applied, we’ll start with the picture on the right. It is a nocturnal picture of a statue in my city, which represents a woman and her child, who is talking to her. As it can be seen, the picture has a wide tonal range, from almost black shadows in the sky to almost white highlights in the street lights. Also, the woman’s face is shaded and her eyes can be barely seen. In general, the picture has an illumination pattern good for an artistic picture at night, but it is not the best illumination for an informative picture (e.g. for an use in a tourism catalogue). In this case we don’t want to play with the shadows but to obtain a clear and representative picture of the statue. This supposes a perfect example for applying this process.

Layer order to begin.

Step 1: Duplicate the layer where the image is twice. If the image is just in one layer, a standard duplication is enough. If working with many layers inside a group, duplicate the whole group and merge it after. This will condense all group information into a new layer, while conserving the original group. After this duplicate the new layer again. Put both layers inside a new group and, if working with a group, put this new group inside the original one. The situation after the process should be similar to the one in the picture.

Step 2: Leave the lower image as it is. It will provide the base for all the modifications. Desaturate the upper layer, and invert it. The effect is to discard all color information (in on the lower layer) and retain only brightness information, which is inverted (midtones are mildly affected while highlights and shadows are exchanged).

Before the gaussian blur the effect is too strong. Promediating the brightness allows to obtain a more natural effect.

Step 3: Apply an opacity value of 75% to the upper layer, so the lower layer can provide the color information and contribute with a 25% of the original illumination. We will obtain an almost grey image that we will call the map. Now change the group blending mode to “Soft light”. Soft light will brighten the original image if the map is clearer than 50% grey, and will darken it if the modified is lower than 50% grey. At this point we should obtain a very “washed” picture similar to the one depicted on the right. This is considered a hard mapping, as every pixel is changed only considering its own information. In order to obtain a better effect, we have to average every map’s pixel with the surrounding ones, so every pixel will have some information about the brightness of the pixels around and produce a more natural and appealing effect.

Step 4: Select the upper layer and apply a Gaussian blur filter. We can adjust the radius of the effect, measured in pixels. The more blur we add, the more information we give to any pixel about the surroundings, and more dim the effect will be. Values close to 0 will yield the hard mapping effect, while too long values (255, maybe upper in some software) will yield an image similar to the original. In my experience, a value lower than 100 pixels is never satisfactory for a good quality picture, while a range between 100 and 200 pixels tends to provide good results. The way to decide is just by trial and error, but with a little bit of practice is really easy to determine the best amount of blur. The effect shall result appealing and natural to the eye.

Step 5: After this process, reduce opacity of the group to 90 %. Higher opacities should not be used, as sometimes can merge bad with the original image and produce some artifacts. 90% opacity offers almost the same effect and merges better with the picture. Also, after applying the filter, an increase in color saturation can be noticed. It is not always undesirable, but it depends on the situation. If it is not wanted, just add a saturation adjustment layer over the mapping group that reduces overall saturation by a 10%. This will make the colors similar to the original.

The picture at the end should look like the one in the comparison behind this paragraph, with a more natural and clear illumination. If in some pictures the effect seems good in “shape” but not in “amount”, you can reduce the opacity of the group in order to adjust the quantity of mapping (in this case I reduced to 66%). Also, sometimes, this effect doesn’t blend well with the picture, washing it too much or destroying critical contrast for the composition. In this cases don’t force it and just avoid using the filter. In my workflow I have an action with this filter recorded, and I apply to all pictures I take. Sometimes the effect is good, sometimes it requires some adjustments and sometimes is awful and I just remove it. For me, it just takes pushing one button and waiting a couple of seconds and, many times, it can increase the quality of the picture noticeably. In this case, some more adjustments could be done, as re-adjusting the black point so we recover a little bit of the contrast on the bodies, but this goes out of the scope of this example.

Now, the girl can recognize her mother and feels safe. Also will do the tourist who saw the guide when they pass around. Good job photographer!

And now is your turn… Have you ever used this filter? Do you know any variations of it? If so, comment and share your opinion, so this information can be more complete and useful for everybody.


Today, I want to show you a little example of picture processing carried out a few months ago. The original picture,  on the left, was not taken by me; it is a self-portrait taken by the gifted photographer Keia Eskuetan (if you don’t know her work yet, you should go visit her gallery) and nicely contributed to my archive. Sometimes I like to play with someone else’s work as it allows me to try different things. The fact that the picture was taken by another person avoids the bias of including my own style in the picture, and allows me to obtain different and nice things using the constraints provided by the picture. In this case I’m really satisfied with how the picture developed and I’m using it as an example of one possible way to process a photograph.

The original self-portrait taken and processed by her.

In this case the starting material was a little bit complicated. The file I received was correctly exposed and nicely focused, but it happened to be saved in JPEG and was a little bit cropped vertically. There are some important factors to consider: Starting from a JPEG instead of a Raw file disallows the use of some high quality retouching that could be used for adjusting sharpness or light temperature among other parameters. This still can be done in Photoshop (or any other software), but using a lower quality starting point and less reliable techniques. Furthermore, having a cropped JPEG means that the picture has been edited and saved at least once (possibly twice). Every time we save a JPEG file, the compression process discards a little bit of information, worsening its quality. This is imperceptible to the eye, but when processing afterwards can be an issue. As a rule of thumb you should always work with a lossless format until the end, when you can save a copy on JPEG as a distribution file.

Rotating 90 degrees and cropping a little bit completely change the look of the picture.

Step 1: After opening the picture up on the software the first thing to do is to prepare the file to work with it. This means, first of all, changing the color depth from 8 bits/channel to 16 bits/channel. JPEG is stored using the first value in order to save space. Increasing the depth to 16 allows us to use a much wider and richer set of colors and tones. Our final target will be a monochromatic JPEG, but even if it means less information, during the process we can benefit from the additional colors we can use (for example, in gradients). Also, in this step I decided the composition I wanted the final picture to have. First I chose an aspect ratio of 3:2 (or 2:3, depending orientation). I use this ratio as a signature of my work. After trying some cutouts I chose one that keeps the head, the camera and a part of the legs. Losing the shoes, an important part of the outfit and the original picture, is compensated by the strength gained by the final composition where they are not needed to explain the image. Finally, I decided to turn it 90 degrees, so it seems like our model is straight and floating in the air instead of crouching on the floor. This is the final composition and the idea that gives name to the picture: Weightless.

Finally, we need to duplicate the background layer and add it to a group, so afterwards, when we add some adjustment layers, we can work with all of them at the same time.

To avoid losing quality while working on the picture always work with a lossless format in 16 bits/channel. Change to 8 bits/channel and a loss format only to save a distributable copy at the end of the process.

Step 2: Although our final picture is a monochromatic picture, in order to achieve a good quality B&W conversion we need a good color image. The exposition of the picture is good, so there is no need to touch the levels. The color was pretty close to the right one, but it needed a little correction. In this case a duplicated layer of the image, with the color adjusted automatically and merged at 50% opacity did all the work.

After tone-mapping, we obtain more detail in hair and legs and skin has a softer and more natural color.

After that, a “tone map” adjustment was applied in order to reduce micro contrasts in the higher contrasted parts of the image and increase contrast on the more flat areas. This tends to approach the overall image to the way the eye saw the scene, simulating a tiny HDR correction. This effect is much more subtle and less powerful than real HDR, but has the advantage that you only need one picture instead of at least two with different exposition (even on real HDR pictures tone mapping filters are used). This adjustment, anyways, is usually worth trying when photographing people, especially on interior or closed environments, as it tends to give the pictures a more natural lighting if the exposition is correct. It is to be noticed that this is always not the best option anyways.

My software doesn’t allow applying a tone map automatically, but it is an easy filter to build from scratch. Make two duplicates of your actual set-up (duplicate the working group, then combine the entire group to a layer and finally duplicate that layer again) and combine them inside a new group, inside the working one and on top of all the layers. Leave the lower layer as it is, and desaturate and invert the upper one. Change the opacity of the upper layer to 75% (we won’t change this anymore). We also set the opacity of the group to 90%, and the fusion mode to soft light. At this point we should see a lighter and very soft image. Now, on the upper layer, apply a Gaussian blur. The lowest recommendable value is 100 px, and good working values usually range from 100 to 200. The lower the value, the softer the image will be, the higher the blur, the more similar it will be to the original. We shall adjust the blur to a point where we soften some of the contrast on the clearer and darker areas but without softening the midtones. It usually takes a little bit of practice but it is easy to spot the right point. Finally, if we like the softness achieved but we want to make it more subtle we just have to decrease opacity of the group.

At this point we already have our composition and a good image for starting the real part of the process.

Result of selectively converting to black and white.

Step 3: We need to go from a color image to a black and white one. There are several ways to carry out this step, some better than others. The most commonly used by inexperienced people is simply by desaturating the image, which consists on maintaining the illumination value of the pixel while deleting all the chromatic information. This might seem like a good procedure but it presents a couple of problems: usually our eyes doesn’t sense brightness in the same way that the camera does, so a desaturation tend to offer flat greys in situations where a better contrast can be achieved. Besides, having three color channels to work with allow us to change illumination and contrast selectively on different areas of the picture, depending on the dominant color. Desaturation, on the other side, always chooses the same formula, without interpreting whether it is the best option for the situation or not.

In order to control the overall contrast of the picture, we have to create an adjustment layer on black and white mode. This layer allows us to convert the image to a black and white one selecting the absolute luminosity that the different colors of the picture will have. Do you want the red lips to appear dark while the Klein blue dress appears very bright? Just reduce intensity of the red color and increase the luminosity on the blue part.


Values used for the conversion in this example.

This procedure gives us a lot of power over the picture, but with a great responsibility: Adjacent parts of the picture with different colors usually have the same brightness; when you increase the brightness of one and reduce the other sometimes “patches” appear, giving an ugly appearance. In order to solve this issue, it is recommendable to keep the values of adjacent parts of the adjustment controls relatively close (for example, if you set reds to 100%, a good value for yellows and magentas are between 50 and 150, but not usually -100 or 300). Also, for the same reason as before, adjusting too much the conversion tends to increase noise, as noise is uniformly distributed in brightness but randomly distributed in color. This means that when you separate luminosity on any two colors, you are separating the brightness of both kinds of noise, making it more noticeable.

It seems complicated, but with a little bit of practice it is very easy to achieve good results. In this case, the values used are the ones depicted in the image, and the final result of the adjustment is shown above.

Using B&W adjustment instead of desaturating allows more control on the conversion process. Using very different values for adjacent colors can lead to an increase of noise and patches appearing, so a careful control must be taken.

Contrast is selectively modified for every part of the body, to obtain the best result.

Step 4: In the last step we adjusted the overall contrast, but now it is time to adjust the contrast selectively on different parts of the image. To do this first we have to use our preferred method for selecting the area of interest, and after we create a curves adjustment layer. Playing with the curves allows modifying the brightness of the area in very creative and powerful ways. The intention of this post is not to show all the power curves have, but as an example, in order to increase contrast you set the middle point of the curve to its own value, and decrease the point between the black and the midpoint and raise the point between the midpoint and the white. The more accused the variation is, the higher the contrast. You can selectively increase the contrast by variating shadows more than lights, lights more than shadows or by setting the “midpoint” in any place where you consider that the neutral point should be.

In this case I increased the contrast of the legs and the skin (each part on its own layer and with own optimal values), raised the luminosity of the dress and decreased the luminosity of the hair. These modifications all together tend to increase the sense of depth of the body and create attitude by the rising of contrast and strengthening of shadows.

The clearer background isolates the model and increases the lightning sensation.

Step 5: The last important part is clarifying the background, removing shadows, details and increasing the isolation of the girl. The technique is the same as in point four, we select the background with our preferred method and increase luminosity in a controlled way until satisfied with the process. Removing the background eliminates distractions that can make the eyes wonder away from the main character, which is the place where we want them to stay. Also, a clearer background means and increase in the illumination and contrast perceived by the observer, without having to change anything in our main part of the picture, so it supposes an easy modification with a powerful increase in the overall attractiveness.

Step 6: Finally, we increase the sharpness of the picture. It is very important to leave this process always to the end and to apply it with the picture on its final size. If we want to have two copies of a different size, the best procedure is to duplicate the image, resize one (or both) to the desired size and after that increase the sharpness separately. Sharpness is very susceptible of the size and resizing after sharpening tends to increase artifacts and reduce the quality.

Many ways to increase sharpness are possible. One of the most used is the unsharp mask. In order to use it, you first need to duplicate the working group and combine all layers.

Instead of this method, I like to use a sharpening method based on a high pass filter. This method is better because it works similar to an adjustment layer, which can change opacity, be duplicated, combined or moved in order to fine-adjust the effect, instead of modifying a combined layer that cannot be modified afterwards.

First of all you need to duplicate the working group and combine. After that, a high pass filter is applied. The high pass filter removes all blacks and whites and leaves the greys, increasing contrast in border areas. A parameter (radius) can be adjusted. The higher the value is, the more noticeable the effect will be, but will be more prone to generate artifacts, like halos or Moire patterns. Moreover, the lower the size of the image is, the lower the value needs to be to achieve an actual sharpening. I usually work in the 0.5-1.5 px range for images lower than 6 MP and between 1-3 px for a 6-20 MP size. After applying the filter adjust opacity to 90% and fusion mode to soft light. In case you require more sharpening, instead of rising opacity to 100% or increasing the radius over 3 px, it is better to duplicate the layer, as the effect stacks. You can lower the opacity of the second layer if the effect is now too noticeable.

Always perform sharpening on the last step of the process, and always at the final size of the image. Some photographers prefer to save a picture without sharpening and, when they need a copy for any purpose, they just make a copy, resize adequately and apply sharpening on that copy.

Final picture.

After all this process we get our final picture, which can be seen on the right. This procedure represents one of the possible examples of how a picture, taken with a concrete idea in mind, can be changed dramatically just by some simple processing effects and a little bit of imagination.

As a final remark, remember that JPEG doesn’t allow 16 bits/channel, so if you want to save it in this format, first you need to combine all layers and, after that, change to 8 bits/channel mode and save the file. Doing it this way allows us to do all the modifications with the highest amount of colors and possible tones, and just reducing the depth of color for the saving step.

What do you think of the picture? Do you like it more before or after? Is there a retouching you did that you are especially fond of? Share your impressions on the comments and give any idea you would like to share.

It’s fun to play with ink

It's fun to play with ink

Yesterday, I became bored after writing the umpteenth page of my thesis, so I took the decision of grabbing my camera and playing a little bit with it, just for fun. Since I read Light: Science and Magic I like to play with illumination, to fiddle with light configurations and practice a little bit under the control of my improvised studio. This week I have been working with a chemical compound known as triethanolamine, a colorless liquid much more dense and viscous than water, and I enjoyed the curls and lines that formed while it slowly mixed with water (as the refraction index of the mixture changes, just in case you needed that explanation). Thinking yesterday about that brought to my mind the pictures that I sometimes see of water, droplets and ink, and I decided to try my own version.

I stole a little bit of ink from my fountain pen (dear pen, if you’re reading this –I doubt it- I’m sorry, I won’t give you back that ink), I took the first photographically decent glass recipient I found around, and I started to play with what I had. My first attempt was disastrous, which is normal as I was doing exactly the opposite I should be doing. But after a little bit of “thinking process” I arrived to a satisfactory light configuration that, although it was not optimal, worked reasonably well. After that, it was all just dropping droplets of ink while the intervalometer carried out the responsibility of taking the pictures. After more than a hundred, this is the chosen one.

As a quick recipe for anybody wanting to do something like this, you need a brilliant and soft light just behind the glass, and all the rest of the room to be dark. The light must extend only to the border of the frame and not more, in order to get dark enough borders in the glassware. In my case it extended a little bit more than necessary but, without being optimal, it worked well. I used a wireless flash behind a white blanket as my main light, and a fast enough shutter speed so the residual ambient light of the room was not captured. The rest, as the book I mentioned before claims, is magic.

This picture is posted on my Flickr, 500px and Instagram accounts, in case you want to see it larger. Do you like the picture? Have you ever tried to do a similar picture using a different light arrangement? If you did, please comment and share with all of us your method for capturing ink droplets in water.