casey crocker arts

ART IS THE MEDIUM FOR THE MIND

Blog


Three Little Lessons : Crash Photo

Posted by caseycrockerphoto on June 28, 2010 at 5:45 PM



1

The lower the ISO setting, the more color and overall tonality your image will have. 100 ISO has twice the colors of 200 ISO, 200 has twice that of 400, and so on. Think of it like crayons: do you want to use the 128 pack or the 8 pack? Keep this in mind, too: the ISO and the shutter speed are closely related, like a marriage. The higher the ISO, say 800 or 1600, the faster the shutter speed can be. You reduce color but you increase shutter speed. So when you can't sacrifice the shutter due to motion blur, whether from your own hand or from the subject, you'll have to sacrifice a bit of color fidelity. Sometimes a fair balance can be made. Just like a marriage.
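To see the trade in numbers, here is a minimal Python sketch of equivalent exposures; the metered starting point (ISO 100 at 1/25th of a second) is made up for illustration:

    # Each doubling of ISO halves the shutter time needed for the same exposure.
    base_iso, base_shutter = 100, 1 / 25         # hypothetical metered exposure
    for iso in (100, 200, 400, 800, 1600):
        shutter = base_shutter * base_iso / iso  # same total light gathered
        print(f"ISO {iso:>4}: about 1/{round(1 / shutter)} s")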

 

2

The shutter speed records time. The illusion of time in an image comes from the quickness or slowness of the shutter door. This door is inside the camera, next to the film or digital sensor. The human hand can generally hold the camera steady only for a shutter time matching the lens's focal length and still retain a sharp image; anything longer risks blur from the photographer's own movement. For instance, if you use a 200mm lens, the longest shutter speed you can hand-hold without camera shake is 1/200th of a second. Image-stabilizing lenses and cameras can help with this. It is also common for a photographer to set a shutter speed and choose an ISO film speed that ensures a proper exposure, even if a sacrifice in color occurs. With digital photos a slight underexposure (a dark picture) is acceptable because the photo can be brightened later. Too much underexposure and brightening, however, can create graininess in a print that won't really be noticed on the screen.
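The hand-holding rule above is easy to put in code. A small sketch, assuming the plain reciprocal rule (stabilization, where present, is modeled as an optional stop count):

    # Reciprocal rule: slowest safe hand-held shutter is about 1/focal_length.
    def slowest_handheld_shutter(focal_length_mm, stabilization_stops=0):
        # Each stop of stabilization roughly doubles the usable shutter time.
        return (1 / focal_length_mm) * (2 ** stabilization_stops)

    for fl in (35, 50, 200):
        t = slowest_handheld_shutter(fl)
        print(f"{fl}mm lens: shoot at 1/{round(1 / t)} s or faster")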

 


3

The aperture of the camera lets you change not only how much light enters the camera but, more importantly, how much of the scene is in focus. You may notice the camera gives you aperture settings of f/4, f/5.6, f/8, f/11, f/16, and f/22. The lower the number, the more light is allowed through the lens AND the shallower the depth of field (a small amount of the scene in focus). This is a choice that needs to be made. The aperture controls space while the shutter controls time. A photograph is generally about one of these two things, so when shooting, consider whether the scene is more about time or more about space. Notice the pattern in the numbers above: f/16 has twice the depth of field of f/8, which in turn has twice the depth of field of f/4, and so on. So if we were to shoot a landscape with a waterfall in it, the settings would be close to 100 ISO at f/22 with a shutter speed around 1 second. This maximizes color, gives a great (deep) depth of field, and creates a flow in the water. If we were shooting a musician or athlete, the settings would be nearly opposite: ISO 1600 at f/4 (ish) at around 1/500th of a second. This sacrifices color and depth of field but quickens the shutter enough to stop the blur of the person playing their instrument or sport.
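For readers who want to see the "aperture controls space" idea as arithmetic, here is a rough Python sketch using the standard hyperfocal-distance approximation; the 50mm lens, 3-meter subject, and 0.02mm circle of confusion are assumptions for illustration, not settings from the lessons above:

    # Approximate depth of field from the thin-lens hyperfocal shortcut
    # H ~ f^2 / (N * c), with c the circle of confusion in mm.
    def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.02):
        H = (focal_mm ** 2) / (f_number * coc_mm) / 1000.0  # hyperfocal, meters
        near = H * subject_m / (H + subject_m)
        far = H * subject_m / (H - subject_m) if subject_m < H else float("inf")
        return near, far

    for N in (4, 8, 16):  # the zone of sharpness roughly doubles at each step
        near, far = depth_of_field(50, N, 3.0)
        print(f"f/{N}: sharp from {near:.2f} m to {far:.2f} m")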

 


Selective Colors Part 2 : The Monochrome

Posted by caseycrockerphoto on April 22, 2010 at 6:22 PM



(written for Camren Photographic's March 2010 Newsletter)


Selective color and black and white photography have a certain allure in the early months of Colorado. As the winter light fades and the sun begins its return, so does the photographer's desire to explore. Until the green of spring returns, the monochromatic image holds a particular appeal, and the selective color process of channel mixing is an excellent way to achieve the best monochrome conversion. The Channel Mixer tool inside Photoshop allows the photographer to take color information and change it to grayscale information. This sounds easy, but the conversion requires some finesse. The easiest way to get a black and white image is simply to desaturate it, i.e., remove the color, but that provides no contrast control. In Part One of this pair of articles the selective color tool was used to remove certain colors for artistic effect; last month served as a good practice model for creatively removing selected colors. In Part Two the color is going to be removed and converted specifically to control the contrast of the black and white photograph. Using the process of channel mixing, the existing color information will be converted to grayscale based on each color's original tone.


Click the image above to see visuals of this technique. The Channel Mixer tool offers a way to create more interesting and dramatic images. Remember, begin with a full color image and think about what contrast the image needs. A monochromatic image consists of black, white, and the shades of gray in between; the question of contrast is how much gray exists between the two extremes. For instance, the Channel Mixer can turn a low contrast color image into a higher contrast black and white. When thinking about contrast, ask yourself: "Does the contrast need to be increased or decreased? Do the reds need to become light in gray-tone, or do they need to become dark? Does the sky need to become darker?"


To start, click on "Image," then Channel Mixer. To keep things simple, remember that even though a photo presents many colors, the actual photograph consists of three primary colors: red, green, and blue. These three colors comprise the bulk of the adjustable color, so changing them will have the most pronounced effect on the image. There are three other colors, cyan, magenta, and yellow, which can be adjusted to fine tune the final result, but the three major colors are the ones that show the most drastic change. Red, green, and blue are the colors to focus on in order to alter the image's black and white look.


At this point, the Channel Mixer control should be open, preview should be checked, and monochrome should also be checked. In the edit drop-down area "Gray" is now the only option. There are two examples of Amazing Fell Tree. One shows the ground bright and the trunk dark; the other is an inversion of that, depicting the ground as dark and the tree as bright. These represent the two opposite approaches to mixing the channels. This process used to be done in camera, using strongly colored filters to change the contrast of black and white film. Two questions are to be asked: do I lighten the cold colors of the scene and darken the warm ones, or do I lighten the warm colors and darken the cold ones? If this were done in camera and we chose the first (lighten the cold colors), a cold colored filter would be used; if the second (darken the cold colors), a warm filter would be used. So, to darken the blue of the sky, a red, orange, or yellow filter would be used. To make a green apple appear white, a green or blue filter would be used. This choice is important at this stage.
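Under the hood, a monochrome channel mix is just a weighted sum of the red, green, and blue values. A minimal numpy sketch (the image array and weights are hypothetical; the percentages mirror the slider settings discussed in the examples below):

    import numpy as np

    # gray = (wr*R + wg*G + wb*B) / 100, with weights given in percent,
    # which is how the Channel Mixer's monochrome sliders combine channels.
    def mix_to_mono(rgb, wr, wg, wb):
        # rgb: float array in [0, 1] with shape (height, width, 3)
        gray = (rgb[..., 0] * wr + rgb[..., 1] * wg + rgb[..., 2] * wb) / 100.0
        return np.clip(gray, 0.0, 1.0)   # negative weights can push below zero

    # A "red filter" look, matching the second example's settings:
    # mono = mix_to_mono(img, 185, 85, -150)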


The first example shows the image with the simplest black and white conversion: the aforementioned desaturation. Many shades of gray are present, so the tonality is full, but the result is low in contrast and low in key. Low key means the image is predominately dark; high key means it is predominately bright. The exposure is accurate because there is no highlight blow-out, however a little detail was lost in the background due to the lack of color. Nothing really stands out as the main subject, and critically this version appears rather flat and dark.


The second example of Amazing Fell Tree expresses its tonality with a severe adjustment to the image's warm color information. The example shows a 150% decrease of blue, a 185% increase in red, and an 85% increase in green. This is the equivalent of adding a red filter, which darkens the cold colors and lightens the warm ones. All the blues of the original color image are now blackened, and any warm areas, such as the yellow of the grass, are brightened. This is not an appropriate effect here because the tree is simply too dark and the grass, a supporting subject, is now brighter than the primary subject. This contrast control is not going in the right direction for Amazing Fell Tree, but the approach is useful for other images in the future.


The third example of Amazing Fell Tree shows a different contrast control option that keeps the tree brighter than the ground and background, which is certainly more appropriate. This result is the equivalent of using a green filter, which lightens cold colors and darkens warm ones. In this case the blue of the tree was lightened; the settings are like those of using a green or blue filter. The Channel Mixer settings show a 200% decrease of red information, a 135% increase of green, and a 165% increase of blue. The major change comes from the near omission of all the red information and a heavy addition of blue. The green change was the controlling adjustment and allows for fine tuning of the contrast. Generally, change two major colors and then use the third to finesse the image's contrast needs.


The final example shows what these adjustments would look like in full color if "monochrome" were turned off. Oddly, it shows a red tree and green grass, while the original picture shows a blue tree with yellow grass. It is interesting to think that in lightening the blue tree with the Channel Mixer, a great deal of red was added to the equation. The blue and the red cancelled one another out and voila: a bright tree with bright roots, a rich and dark ground, and a little more information in the forest. A vast improvement brought to you by Camren and the Channel Mixer.

Click here for illustrations:

http://www.camren.com/03_03_10_SC01.html


 


Selective Colors Part 1 : Selective Color Removal

Posted by caseycrockerphoto on April 22, 2010 at 6:19 PM



(written for Camren Photographic's February 2010 Newsletter)


Selective color and black and white photography have a certain allure in the early months of Colorado. Every day presents excellent opportunities for the photographer to explore the selective color and monochrome image types. Low-lying clouds and near-cobalt skies help create color tones that translate beautifully into a selective color photograph or black and white imagery. This article is Part One of a pair on the topic of color subtraction. Color information can be adjusted so the image turns into a selected color or monochromatic (pure black and white) image. This month is not as much about shooting in monochrome as it is a practice model for selective filtration of color, to either keep or totally remove colors. Next month we will discuss how an understanding of color will affect and enhance the desired contrast of a final black and white photograph. Each process uses similar techniques, but differing tools.


A monochrome image is a photograph (or work of art) that contains only a single color and its tones. In photography this is generally a black and white image, but sepia and selenium toned images are also considered monochromatic. A polychrome image is a photo (or image) that contains more than one color and their tones. Monochrome, in other words, is not necessarily black and white and the shades of gray in between: Picasso had a "blue period" where he primarily used blue paint, which created a certain mood. A color photograph could be made in a similar fashion, with the addition of an overwhelming amount of blue, and be considered monochromatic. Sepia toning a photograph, achieved by adding a tint of red and yellow, is also monochromatic. With color photography, the image starts off polychromatic. Inherently, there are many colors in a color photograph; however, individual colors can be added or subtracted, muted or saturated, and lightened or darkened. Here, we will begin with a Photoshop tool that allows the photographer to subtract entire separate colors: the Hue/Saturation image adjustment.


Click the image above to see visuals of this technique. The Hue/Saturation tool offers a way to create more interesting and dramatic images. Remember, begin with a full color image and think about what colors could be subtracted. Click on "Image," then Hue/Saturation. You may see an option for "Selective Color," which allows this selective coloration to happen in a more complex way. To keep things simple, also remember that even though a photo presents many colors, the actual photograph consists of three primary colors: red, green, and blue. These three colors comprise the bulk of the adjustable color, so changing them will have the most pronounced effect on the image. There are three other colors, cyan, magenta, and yellow, which can be adjusted to fine tune the final result, but the three major colors are the ones that show the most drastic change.


At this point, the Hue/Saturation control should be open, preview should be checked, and Master should appear in the edit drop-down area. Click this drop-down and choose a color channel you want to get rid of; that color will be desaturated. One of the examples of Bird of a Wire expresses its color with only the red wire remaining, though the original shows both blue and red wires. This was achieved by removing, or desaturating, both the green and the blue channels by one hundred percent. This selective color effect is very straightforward. Then, with only the red channel remaining, a further adjustment was made: a twenty-five percent increase of saturation on the red channel. Boosting the saturation of a color can promote pixelization, so an increase of twenty to fifty percent and no higher is recommended.
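For the scripting-inclined, the same keep-one-color move can be sketched in Python with numpy and matplotlib's color helpers; the hue band and the 25% boost are assumptions chosen to echo the treatment above, not a transcript of the Photoshop steps:

    import numpy as np
    from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

    # Desaturate every pixel whose hue is not near red, then boost the
    # surviving reds by 25%, in the spirit of the Bird of a Wire example.
    def keep_only_red(rgb, band=30 / 360, boost=1.25):
        hsv = rgb_to_hsv(rgb)                     # rgb: floats in [0, 1]
        hue = hsv[..., 0]
        is_red = (hue < band) | (hue > 1 - band)  # hue wraps around at 0 and 1
        hsv[..., 1] = np.where(is_red, np.minimum(hsv[..., 1] * boost, 1.0), 0.0)
        return hsv_to_rgb(hsv)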


This technique is also beneficial for noise reduction. Generally, color noise from long exposures or high ISO photographs occurs in the green, magenta, or cyan color channels. Try removing fifty percent of one of those channels and see if troublesome noise is reduced. This application also helps surface detail remain, rather than be sacrificed during other types of noise reduction. If you are having trouble with a particular color, try separating it from the entire set of colors. For instance, want to lighten a yellow wall because it is too bold while the rest of the colors are lighter? Take the yellow channel and either lighten it or desaturate it a bit, and the rest of the colors will remain close to their original state. Keep in mind that all the colors that include any amount of yellow will also naturally be affected.


The other place to attempt this technique is the image adjustment called Selective Color. This adjustment is more complicated because you can not only separate one color from the rest and adjust its lightness and saturation, but also change its tint and shade. In other words, the Selective Color tool allows a particular color's combination of red, green, and blue to be completely changed: a yellow-red can become an orange-red, and blue-violet can become red-violet. With more experience, Selective Color may become your go-to tool for these effects; however, Hue/Saturation offers a simpler approach. Rather than adjusting the saturation of a particular color, its hue can be changed. So, if you are having any kind of trouble with a particular color or just want to be more creative, as the saying goes, "divide and conquer." Separate the colors into their parts and see if there is some problem-solving power or creative potential the Hue/Saturation tool can provide. Next month these selective color concepts will continue into black and white images using the Channel Mixer tool.

Click here for illustrations:

http://www.camren.com/02_02_10_SC01.html

3D Imaging Magic

Posted by caseycrockerphoto on April 22, 2010 at 6:16 PM



(written for Camren Photographic's November 2009 Newsletter)


Although the 1950s are most often considered the 3-D movie decade, the first feature-length 3-D film, "The Power of Love," was made in 1922. Ever since, the use of 3-D technology in theaters and television has risen in popularity. Film-makers James Cameron and Steven Spielberg are currently pushing 3-D technology in Hollywood by combining motion-capture 3-D effects with 3-D viewing during the filming process. Surely, this will forever alter the movie-going experience. But how does turning something two-dimensional into an illusion of three dimensions work? The magic occurs through a combination of projection and those goofy looking glasses. Whether you have used 3-D glasses for the big screen or an at-home experience, you have to admit 3-D glasses are fascinating. Considering their high entertainment value, it may be surprising how amazingly simple 3-D glasses are. Put 3-D specs on and all of a sudden the movie or television show looks real and right there in the same space; wearing them makes you feel like you are part of the action. What is the past and the present of this technology? Where is it going? Why is the 3-D of today better than the 3-D of yesterday? The answer is simple: the color green.


Be sure to click on the picture of the 3-D glasses above for a visual complement to this article. The 3-D technology of today and yesterday differ in unique ways, but both are based on the same principles of binocular vision. Most human beings and animals use binocular vision to perceive depth and see the world in the three dimensions of length, width, and height. The binocular vision system relies on the fact that we have two eyes, approximately three inches apart. This separation causes each eye to see the world from a slightly different perspective. The brain fuses the two views together; it understands the differences and uses them to calculate distance and create the sense of depth. For objects up to about 20 feet away, the binocular vision system lets someone tell with good accuracy how far away an object is. If there are multiple objects in our field of view, we can automatically tell which is furthest, which is nearest, and how far away and apart they are. The illusion of 3-D tricks the brain into believing that binocular vision is in effect.

If you look at the world with one eye closed you can still perceive distance, but accuracy decreases and you fall back on slower visual cues. To see how much of a difference the binocular vision system makes, have a friend toss you a ball and try to catch it while keeping one eye closed. Then try the same exercise in a fairly dark room or at night; the difference binocular vision makes in catching the ball is even more noticeable.


If you have ever used a View-Master or a stereoscopic viewer you have seen the binocular vision system in action. In a View-Master or stereoscope the audience is presented with a pair of images, one for each eye, that differ slightly in point of view. Originally, the camera photographed the same scene from two slightly different positions to create what is known as a stereo pair. The images are viewed through a parallel-lens device, like the View-Master, which allows each eye to see only its respective image. Shown simultaneously in the stereoscope, the two images blend together and form a single three-dimensional image.


In a movie theater, the reason you wear the classic 3-D glasses is to feed different images into your eyes, just as a View-Master does. The screen actually displays two images, and the glasses cause one image to enter one eye and the other image to enter the other eye. There are two common systems for doing this: anaglyph and polarization.

The anaglyph 3-D method uses glasses that contain a red gel filter for one eye and a blue gel filter for the other. This is the classic 3-D experience. In this system two images are displayed on the screen through two projectors, one projecting in red and the other in blue (sometimes green, which is more effective for depth but promotes discoloration; cyan gels make a good compromise between color and depth). The gel filters on the glasses allow only one image to enter each eye: each gel transmits light of its own color and blocks the other projection, so each eye receives only one of the two images. These glasses are called anaglyph glasses. You cannot really do justice to a color movie with this method because we actually see three colors: red, blue, and green. If one color is omitted, which anaglyph 3-D forces, then a good third of the overall coloration of the picture is lost, and the illusion of depth weakens as well.
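The channel bookkeeping behind an anaglyph is simple enough to show in a few lines of numpy; the stereo pair here is hypothetical:

    import numpy as np

    # Classic red/cyan anaglyph: the red channel carries the left eye's
    # view, while green and blue (cyan) carry the right eye's view.
    def make_anaglyph(left_rgb, right_rgb):
        out = np.empty_like(left_rgb)
        out[..., 0] = left_rgb[..., 0]      # red from the left image
        out[..., 1:] = right_rgb[..., 1:]   # green and blue from the right
        return out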


The polarization 3-D method also incorporates glasses, but in a different way, and it is the method more commonly used in modern 3-D movie projection. Modern theaters do not project two images from two projectors with this method; rather, the images are already combined and a single lens sends two signals with two opposing polarizations. The audience wears polarized glasses whose lenses transmit the light so that each eye receives a different angle of polarization, the two lenses having their polarization directions set 90 degrees apart. This makes it possible for the left eye to see its picture while light of the other polarization cannot pass through, and vice versa for the right eye. Similar to the anaglyph system, each lens blocks the light meant for the other eye. This method is more effective because it allows all three colors of light to reach the eyes, which makes the experience more real because the entire visible spectrum, with all its colors and tones, is present. This means the color green is no longer subtracted from the 3-D equation, as in the anaglyph method. And the color green is a very special color to human depth perception.


Light, as we perceive it, mixes from three colors: red, green, and blue. Together, in unison, they let the eye see all humanly perceivable colors. Anaglyph 3-D glasses do not give us quite the same effect because with them we are only presented a pair of colors to mix: red in one eye and blue in the other. Some glasses use a red and cyan or red and green combination. Cyan is a half blue and half green mixture, so one eye sees both at the same time; this can make the image look red overall. If a green gel is used for a lens, then a green image mixes with a red one, totally absent of blue, which generally results in a muddy, unpleasing color with qualities of gray. A rose filter in one eye and cyan in the other might ease this problem. However, because polarizing materials do not interfere with the perception of color, and actually enhance the dynamic range of a scene, they are the best bet for at-home or large-scale 3-D image viewing.


The color green also has context with digital camera sensors. Large amounts of contrast and color depth are generated through the color green. Take a CCD or CMOS sensor and you will find that the mosaic color array that makes up the final picture contains twice as many green pixels as red and twice as many as blue: in a CCD- or CMOS-equipped digital camera, half of the pixels are dedicated to the color green. Green aids depth perception, color transition, and tonality, and therefore has an impact on an image's three-dimensionality. Simply put, green accounts for a large share of the visible spectrum. Without the color green, old methods of 3-D trickery fall short, and digital cameras would not be able to render scenes the way our eyes see them. The eye's sensitivity to this area of the spectrum is also why night vision devices display their amplified image in green.
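The half-green layout is easy to verify with a toy mosaic; this sketch simply tiles the common RGGB pattern and counts, and is not tied to any particular sensor:

    import numpy as np

    # One RGGB Bayer tile: of every four photosites, two are green.
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    mosaic = np.tile(tile, (2, 2))                     # a 4x4 sensor corner
    print(mosaic)
    print("green fraction:", (mosaic == "G").mean())   # prints 0.5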

There are some more complicated systems as well, but because they are expensive they are not as widely used. For example, in one system, a TV screen displays the two images alternating one right after the other. Special LCD glasses, called shutter glasses, block the view of one eye and then the other in rapid succession. This system allows color viewing on a normal TV, but requires the purchase of specialized equipment.


While 3-D technology is impressive, some people still want a solution that does not require glasses. This is quite a challenge for motion pictures; however, there is one way to create three-dimensional images in everyday places, and movies often advertise with it. This method relies on a display coated with a lenticular film. Lenticules are tiny lenses sitting on a base layer. The screen displays two (or more) sets of the same image, and the lenses direct the light so that each eye sees a single image. Often this system is used with more than one image pair, so that as the display is moved, different images become visible. This technology requires content providers to create special images for the effect to work: two sets of images must be interlaced together. If you were to view the still or video feed on a normal screen, nothing but a quilt-work of overlapping images would be seen.
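A crude sense of that interlacing, as a numpy sketch; real lenticular prints match the strip width to the lenticule pitch, which this toy version ignores:

    import numpy as np

    # Column-interlace a stereo pair: even columns from the left view,
    # odd columns from the right, producing the "quilt-work" described above.
    def interlace(left, right):
        out = left.copy()
        out[:, 1::2] = right[:, 1::2]
        return out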


As far as creating a glasses-free movie experience, Sony is introducing a single-lens 3-D video camera for motion pictures. Based on their image separation design, using a single lens is beneficial because the viewer could watch a normal 2-D movie without glasses or have the option of a 3-D experience. (More information on the camera's design is available here: http://www.sony.net/SonyInfo/News/Press/200910/09-117E/index.html)


Fuji is also introducing a compact 3-D camera with a twin-lens design, called the 3D W1, that allows the user to snap photos and video and view the image in 3-D with the naked eye. The camera incorporates a dual-CCD image sensor technique to do so; literally, the camera takes two pictures on two sensors. Though a lenticular screen might do the job, the camera instead contains a special LCD screen that projects each image separately so each eye sees its own, resulting in a 3-D experience. The print method for these 3-D images does incorporate lenticular technology. (Information about the Fuji 3D W1 here: http://www.fujifilm.com/products/3d/camera/finepix_real3dw1/) No matter how it is considered, 3-D technology remains a very popular pursuit and is gaining future potential.


Click here for illustrations:

http://www.camren.com/tech/11_11_09_3D.html?act=GetArticleAct&articleID=2326


The Photo Fusion Part 2: High Dynamic Range

Posted by caseycrockerphoto on April 22, 2010 at 6:14 PM



(written for Camren Photographic's September 2009 Newsletter)


Two special fusion techniques are unique to digital photography and widely accepted for their sleight of hand and illusion: the Panoramic Stitch and the High Dynamic Range merge. Last month, the Panoramic fusion (stitching) was discussed; now the High Dynamic Range fusion (HDR) will be explained in what aims to be an easy-to-apply purpose and methodology. The purpose of HDR is to reduce the contrast of high contrast scenes. Previsualization is required; without it, exposure issues or image overlay problems can easily arise, resulting in a less than successful image. In the case of HDR, exposure bracketing might pose timing problems, so fine tuning an approach helps circumvent them.


From early in photography's history, photographers have tried to reduce the contrast of high contrast scenes. As early as 1850, a man named Gustave LeGray developed a technique to bring out detail in areas that were too dark while keeping other areas from becoming too light. When photography was young and film quality was low, LeGray accomplished what we do today with HDR. To photograph his seascapes (picture at left) without overexposing the detail in the sky and without underexposing the detail in the shadows, he took two pictures: one exposure for the sky and another for the sea. He then sandwiched the two together during print production and generated positive prints that maintained both areas of the scene, creating the first known High Dynamic Range photographs. The approach of HDR photography of yesterday is remarkably similar to that of today.


The same technique works for digital HDR fusions, and this tutorial uses a process similar to LeGray's. By clicking on the above image of the dragonfly, images and screen shots of the HDR process will appear in two pages. The example image above is an HDR fusion consisting of two images. Even though a classic HDR contains more than two images, assembling two is as easy as fusing three or five. The point of this illustration is the importance of technique.

Even the simple act of using a polarizer will help reduce glare and increase the dynamic range within a single image. (Without doubt, LeGray would have loved a polarizer.) Some photographers simply use the Shadow/Highlight controls to boost shadows and regain highlight detail, but as figure 1 shows, the results can be mediocre: that adjustment amplifies existing detail, and image noise along with it. HDR fusions add information rather than amplify it, and a simple HDR fusion can be made with a minimum of two images, such as the dragonfly example above. Photographers have certainly been known to use more than five photos in a fusion to expand the image's tonality. This tends to give the final photo a painted effect, full of bold colors and a certain surrealism. This dragonfly HDR is something more practical and simple, and it describes the nature of an expanded dynamic range.


One way to think of HDR is as a multiple exposure. Different exposures of the same subject, of varying lightness and darkness, are taken and then digitally placed on top of one another and blended into a single shot. At least one image for shadows (an overexposure) and another for highlights (an underexposure) are needed. In-camera bracketing is a useful tool for HDR because the succession of exposures happens quickly. Manual brackets can be hard to time because each exposure must be taken as close to the same instant as possible; otherwise, things like clouds or trees may move too much and result in a blurry HDR.
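For a feel of what the blending stage does, here is a bare-bones exposure blend in numpy: each bracketed frame is weighted, pixel by pixel, by how close it sits to mid-gray, so well-exposed areas win. This is a simplified stand-in for what dedicated HDR software does, not the algorithm Photoshop uses:

    import numpy as np

    def fuse(exposures):                 # list of float images in [0, 1]
        stack = np.stack(exposures)      # shape (n, height, width, 3)
        # Pixels near mid-gray (0.5) get the most weight in the average.
        weights = 1.0 - np.abs(stack - 0.5) * 2.0
        weights = np.clip(weights, 1e-6, None)
        return (stack * weights).sum(axis=0) / weights.sum(axis=0)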


This tutorial assumes that several ready-to-fuse photographs already exist. However, here is a quick start to shooting for HDR: shoot in RAW for the best detail and to counteract file compression (a RAW conversion program may be required to open the files). Use a tripod so nothing compositional changes. A polarizer is helpful and will lessen the total number of shots needed, and using the camera's lowest ISO will maximize its color-capturing potential. Set your aperture and shutter values however you wish, but change the exposure setting for each image taken. It is recommended here that you prioritize either the aperture or the shutter speed: keep one of the two values constant, then make exposure sacrifices with the other value at single-stop differences (i.e., keep the aperture at f/11 and fluctuate the shutter speed). Still subjects, such as landscapes, architecture, and flowers, are a great start. The dragonfly worked out simply because the insect did not move between the two quick exposures. A tripod and polarizer were used and the ISO was set to a constant 200. The shadow detail image was f/8 at 1/8th of a second and the highlight detail image was f/8 at 1/250th of a second.


The main reason for doing this process is to maximize bit depth. Bit depth is the amount of color the camera can record (thinking of bit depth as a box of crayons is a good metaphor). JPEG images (which most people shoot) are not designed for HDR. JPEGs must be saved as 8-bit images, low on the color reproduction scale but still capable of 256 tones per color channel. Not bad, but RAW and TIFF images can record and save far more. TIFF files can be captured in 16-bit color, though not all cameras offer the option of recording a TIFF (TIFFs are notoriously slow, and the RAW file has become the professional standard). RAW files gather between 12 and 14 bits of depth (check your camera for the 12/14 option). This means that, straight out of the camera, a 12-bit image can contain 4,096 tones per channel and a 14-bit image 16,384. Ideally, an HDR image should be shot in RAW mode at maximum bit depth; this way the camera is prepared to produce images with 64 times more potential tones than a JPEG. (This alone is a great reason to shoot RAW.) In the same respect, the ISO (film speed) setting also dictates how many colors an image contains: 100 ISO has twice the color potential of 200 ISO, and so on. By the time 1600 ISO is compared to 100 ISO we are using only 8 crayons instead of 128. (This alone is a good reason to keep your ISO low when possible.)
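The crayon counts above fall straight out of powers of two, as a couple of lines of Python show:

    # Tones per color channel double with every extra bit of depth.
    for bits in (8, 12, 14):
        print(f"{bits}-bit: {2 ** bits:,} tones per channel")
    print("14-bit vs 8-bit JPEG:", 2 ** 14 // 2 ** 8, "times more")  # 64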


Computer programs allow the HDR photographer to assemble multiple images quickly and with little effort. A program is required to assemble HDR images, although clever layering and masking could be put into play. Photomatix, Artizen, and Photoshop all offer HDR fusion. Adobe Photoshop CS2, CS3, and CS4 have the HDR capability; CS3 and CS4 let you simply open the pictures and run an automation (click File, then Automate, then "Merge to HDR"). Any adjustments you want to make to your captured RAW files must be done first. Photoshop will then prompt for a type of HDR exposure adjustment; at this point, make a light adjustment if you wish. The end result will look different, as this is a preview of what the HDR will become. After hitting "OK," the program blends the images together, allowing the best parts of each to come through. You'll see an increase in shadow and highlight detail that a Shadow/Highlight adjustment cannot offer a single image, and an increase in color information that a saturation adjustment cannot produce, because now there truly are more tones and colors. This process is a fairly simple and highly effective way of improving your overall image quality. The HDR fusion often grants the eye a more pleasing contrast and gives the photographer something that better matches what is in their mind's eye.

Click here for illustrations:

http://www.camren.com/09_09_09_photofusionhdr1.html?act=GetArticleAct&articleID=2326


The Photo Fusion Part 1: The Panoramic Stitch

Posted by caseycrockerphoto on April 22, 2010 at 6:11 PM



(written for Camren Photographic's August 2009 Newsletter)


There are two uniquely digital fusion techniques widely accepted for their sleight of hand and fascinating illusion: the Panoramic Stitch and the High Dynamic Range Merge. Both photo fusions require previsualization. If this does not occur, exposures or the image overlay can easily be inaccurate, resulting in a less than successful image. Also, in the case of panoramics, perspective might have to be adjusted or a great deal of cropping will have to occur, so fine tuning an approach to these techniques helps circumvent possible problems. Over the next two months, the panoramic fusion (stitching) and the high dynamic range fusion (HDR) will be explained in what aims to be an easy-to-apply methodology. First up is the panoramic photo fusion.

Panoramics are often seen as unique, encapsulating, and immersive depictions of a setting. Their very form is distinct from the typical 4x6 or 8x10 format; using a wide format such as the panoramic is like a magic trick or an illusion. And the technique has been used for over a century. Today, we have the benefit of technology, whereas before the process was quite intense, requiring images to be slowly assembled together, dodged, and burned to perfection in the darkroom.

Please click on the above image to see images and screen shots of the process described below. (Keep in mind, there are two pages of illustration.) The photograph is of Zapata Falls, near Great Sand Dunes National Park. The example image above is a panoramic consisting of two images. Even though a classic pano is longer than two images, assembling two is as easy as fusing three or five. The point of this illustration is the importance of technique.

This tutorial assumes that several ready-to-fuse photographs already exist. However, here is a quick start to shooting for panoramics: use a tripod and do not use a polarizer. Set your camera however you wish, but repeat the same exposure setting for each image. Images shot in an auto mode typically vary in exposure, which will affect your image corrections later in the process. Use a normal focal length and level your camera so that when you swing it to take a succession of photos the images align well. Be sure to stop fully between frames and give some overlap room; this way the photos will be sharp and the program will know where to stitch the seam.

Current technology allows the panoramic photographer to assemble multiple images into a single elongated picture on a computer. This fusion technique is called "stitching." For several years, programs such as PTGui, Panotools, and Adobe Photoshop CS2, CS3, and CS4 have offered "stitching" or "merge" capabilities. These programs allow you to tell them where the images that need to be assembled are. CS3 and CS4 let you simply open the images, then run an automation (click File, then Automate, then Photomerge). The program will then prompt for a type of correction. An interactive photomerge is a good choice, in case the automated stitch needs some tweaking, such as perspective adjustments or a moving of the pieces that make up the panoramic puzzle.
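Outside Photoshop, the same automation exists in open-source form. A minimal sketch with OpenCV's stitcher (the filenames are hypothetical, and on OpenCV 3.x the constructor is cv2.createStitcher instead):

    import cv2

    # Load the overlapping frames, in left-to-right order.
    images = [cv2.imread(p) for p in ("pano_left.jpg", "pano_right.jpg")]

    # The stitcher handles feature matching, alignment, and seam blending,
    # much like Photoshop's Photomerge automation.
    stitcher = cv2.Stitcher_create()
    status, pano = stitcher.stitch(images)
    if status == cv2.Stitcher_OK:
        cv2.imwrite("pano_stitched.jpg", pano)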

One reason a dual-image photo fusion such as this is a good idea is that it increases the resolution of the image beyond the resolution of the camera. For instance, this image was shot with a Nikon D200 as two 10.2-megapixel frames; combined, they effectively give about 15 megapixels, and stitching four frames pushes the count higher still. In this case, the camera was up against a wall of people, so backing up was not an option, nor was widening the focal length of the lens beyond 35mm. (If a wider focal length is used, normal perspective is lost, vignetting occurs, and significant cropping has to happen; panoramic images work best with a "normal" lens.) This waterfall repeats an often problematic but compelling situation that, when trouble-shot with this dual-image panoramic technique, results in a high resolution, wider-than-available photograph.
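The resolution gain follows a simple rule of thumb: each extra frame adds its pixels minus the overlap it shares with its neighbor. A sketch, assuming a generous overlap of about half a frame (which is what lands two 10.2-megapixel frames near 15 megapixels):

    # Effective megapixels of an n-frame stitch with fractional overlap.
    def stitched_megapixels(frames, mp_per_frame, overlap=0.5):
        return mp_per_frame * (frames - (frames - 1) * overlap)

    print(stitched_megapixels(2, 10.2))  # ~15.3 MP, as with the D200 pair
    print(stitched_megapixels(4, 10.2))  # ~25.5 MP at the same overlap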

Low to the ground, the first picture was taken; then the camera was simply tilted up and, after allowing a little overlap area, the second was taken. The rock and the waterfall were both important subjects. The same camera settings have to apply to both frames or exposure becomes an issue. (Camera settings were as follows: Nikon D200, ISO 200, 2x neutral density filter to slow the shutter, 4-second shutter value, aperture value of f/16, and white balance set to cloudy day, or 9000K.) So, this picture benefits in two ways: the lens appears to be wider than it really is and a near doubling of resolution occurs. Color fidelity increases, as do contrast and overall image sharpness. The panoramic fusion offers a technique that is truly different and far greater in quality than simply cropping an image to look elongated. The panoramic fusion tricks the eye because it actually includes what appears in the peripheral vision.

Click here for illustrations:

http://www.camren.com/email/08_08_09_photofusionpano1.html?act=GetArticleAct&articleID=2326


The Micro on Macro

Posted by caseycrockerphoto on April 22, 2010 at 6:09 PM



(written for the June 2009 Camren Newsletter)


This month's photography magazines and industry business papers are full of talk of new accessory gear, of the popularity of digital photo frames, and of the rise of the cellular phone camera.


However, there is one timely and popular thing occurring lately, something cell-phone cams perform poorly at (though you can set your phone's wallpaper with these pretty images of nature): the macro photography of flowers. Even during a walk, macro photo opportunities are everywhere; you have to get close to the subject and simply be fascinated. Depth of field is a critical thing to consider when photographing flora. Cell-phone cameras cannot render the richness in depth of field that a DSLR or 35mm camera can. In turn, the DSLR and 35mm do not separate depth of field as precisely as medium or large format cameras do. It's physics: smaller format cameras simply have trouble separating layers of space in a photograph from other layers of space. The purpose of this article is to help apply control over the DSLR or 35mm camera's aperture so refinement of depth of field can be attained.


Fine-tuning the aperture setting is something people tend to have trouble with. Most SLR cameras are equipped with a depth of field preview button, which allows you to see what depth of field a particular aperture will render before a photograph is captured. This subtracts wasted shots and adds more accurate ones. (Remember, the smaller the aperture number, i.e. f/4, the shallower the depth of field, meaning less is in focus; vice versa for f/22.) So, after you have set up your tripod, found your composition, chosen to use or not use a polarizer, reflector, or diffusion of some kind, and set your ISO and shutter, the time for aperture has come. Press the depth of field preview button if your camera has one. If you do not like the result, alter the aperture setting.


For instance, click on the above photo of the Spiderplant blooms (a larger version will appear in your browser). These photos were taken in the shade with a small reflector bouncing light onto the scene. A tripod was in place and the ISO was set low, for tonality and color reproduction (very important). A range of photos was captured, none of which was taken at anything wider than f/8. f/8 is an excellent starting point for anything macro; typically, faster apertures will not give you a desirable beginning depth of field. As the photo illustration above shows, a starting point of f/11 helped, but the depth of field offered by f/11 and f/16 was more of the desired effect. And the difference between f/11 and f/16 is significant: a single f-stop makes a large difference the closer the photographer gets to the subject.


A lens's focal length also affects depth of field. A wide-angle lens at f/16 pictures depth entirely differently than a telephoto lens at f/16. The Spiderplant bloom was photographed with an 80-200mm lens (set at 200mm) specifically because it gives working distance from the subject and has excellent macro reproduction. This lens paired with a Nikon PK-12 extension tube gives a 1:2 magnification ratio, meaning the subject is recorded at half its life size (a 1:4 ratio would be a quarter of life size and 1:1 would be life size). Beginning with a slightly telephoto lens is helpful when you want a more dramatic effect from one aperture to the next in your macro work. There are other tools out there that increase magnification, but approaching apertures with a desire for mastery is an excellent way to improve your macro image making.

Click here for illustrations:

http://www.camren.com/tech/MicroMacro.html?act=GetArticleAct&articleID=2326

 


Shooting RAW

Posted by caseycrockerphoto on April 22, 2010 at 6:04 PM



(written for Camren Photographic's March 2009 Newsletter)


The topic of shooting in the photographic RAW format has come up many times in recent conversations at Camren. The question tends to arise when we help people select from the image quality settings and formats a camera offers, which goes beyond resolution and image compression. So, this newsletter offers its readers an insight into shooting RAW.


First and foremost, RAW is not an easy image format choice to tackle; it takes time and finesse. Choosing JPEG image quality is quicker for one major reason: you allow the camera to process the image. Keyword: process. A RAW image is considered unprocessed. When a photograph is taken, light reacts with the image sensor, a CCD or CMOS sensor. This is what makes a digital camera a digital camera; where the film would be is now this sensor. Once the sensor receives light, it sends the data through a processor, an in-camera mini-computer. This processor makes certain adjustments to the image, things like auto-contrast and saturation boosts; color balance and image sharpness are also tweaked. The image is then written to the compact flash or other media card as a JPEG. When RAW format is chosen, this image processor is bypassed and the photo is placed onto the card without these adjustments. Technically, it isn't even an image. At least not yet.

What you have to do is pull the RAW file into conversion software, typically a program that carries an extra expense. Software such as Photoshop CS3 (or the new CS4 and Lightroom 2.0) contains built-in RAW conversion, in this case called Adobe Camera Raw. Other companies offer conversion software as well, such as Nikon with Capture NX2; Canon typically includes its software in the box with the camera. These programs require you to make adjustments to the photographic image before it can be opened in editing software. Consider these adjustments as decisions.


After these adjustments are made, you then have the option of saving the image into whichever format you wish, such as a JPEG. The main point is that you keep the camera from making average, generic adjustments to the photo before it's turned into a JPEG. Finesse. The end result will be more to your personal liking because you customize the image, something the on-board mini-computer cannot achieve. It's a large step; you simply can't email a RAW file or post it to the web. You will have to adjust the following things in the RAW software: white balance, i.e. the color temperature of the shot, a more accurate way of correcting color balance; exposure, i.e. the contrast of the image, much like adjusting levels; image sharpness, or the appearance of the pixels, i.e. soft and smooth or sharp and full of contrast; and color vibrance, which differs from saturation in that vibrance represents more of a separation of colors rather than a boosting of color intensity. Of great importance is the ability to boost the information in just the shadow areas and/or inhibit highlight blow-out with a tool called recovery (much like dodging or burning in the darkroom).
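Those same decisions can also be scripted outside a converter's dialog box. A minimal sketch with the open-source rawpy library (the filename is hypothetical, and rawpy is one option among many, not the software discussed above):

    import rawpy
    import imageio

    with rawpy.imread("DSC_0001.NEF") as raw:
        rgb = raw.postprocess(
            use_camera_wb=True,   # the white balance decision
            no_auto_bright=True,  # keep exposure in your hands, not the camera's
            output_bps=16,        # 16 bits per channel preserves tonality
        )
    imageio.imwrite("converted.tiff", rgb)  # save in whichever format you wish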


All of these adjustments are made before the image is compressed into a JPEG, which makes the process minimally destructive to the image. Compression causes image information loss, so reducing that loss is a good thing. You'll find your gradations of color and tone more pleasing because they are more accurate. Think of it this way: if you shoot 100 ISO on your digital camera, by the time the mini-computer is done with the image and you then make adjustments without RAW software, the final image is going to represent something more like an 800 ISO image. By fine-tuning these changes yourself, you'll keep more color fidelity, tonality, and resolution and reduce the appearance of noise considerably. Plus, you'll have a sense of accomplishment.


Shooting well through the camera without the need for image correction is ideal; this mastery reduces your processing time. Newspaper photographers can get the image out quicker with a well-shot JPEG. Photoshop isn't the end-all, be-all fix, but understanding RAW can help get you where you may desire to be photographically. And an ideal circumstance is learning from mistakes: if you learn from your pattern of mistakes based on the corrections you have to make in the RAW software, you will improve your overall image taking skill and hone your inherent talent.

 


