Chuck Pivetti, Father, Friend and Photographer: Photography Articles
Gammagram, August 2005, Tip of the Month, Chuck Pivetti

Expose your color slide film so that there are not a lot of texture-less highlights when your slides are projected on the high-beam setting of a Kodak Carousel projector. The Club's projector is bright and we use a beaded screen. Using -1/3 to -1/2 stop of exposure compensation will give you more saturated colors with most slide films, and using a polarizer when shooting outdoors in sunlight will help both contrast and color saturation.

For example, shooting in a redwood forest at noon on a sunny day will give an impossible exposure range. On the other hand, very early in the day or very late in the day, when no sunlight is filtering down through the trees, you might be able to get texture in both flowing water and tree bark. In the slot canyons of Arizona and Utah, at midday the difference in exposure between a sunlit wall and one lit by reflected light can be almost twenty stops. (And slide film has only a five or six-stop range.)

Gammagram, September/October 2005, Tip of the Month, Chuck Pivetti

When shooting people outdoors in direct sunlight using color slide film, fill flash is a handy way to open up the shadows in the eye sockets, under the nose, under the chin, and under a hat brim. For a realistic effect, you don't want to eliminate the shadows, you just want to lighten them up a little. In an article on fill flash, the late Galen Rowell once wrote that he liked to set his ambient exposure 1/3 stop under for added saturation and to set his flash 1-2/3 (1.67) stops under to preserve natural-looking shadows.

Gammagram, February 2010, Judging General Photography, Chuck Pivetti, co-authored with Grant Kreinberg

Technical Qualities
- Focus. Is what should be sharp, sharp? And is what shouldn't be sharp, not sharp?
- Exposure. Does the photo have the proper tonality for the scene? Is there texture in the shadows that should have texture in them? Is there texture in the highlights that should have texture in them? Are shadows and highlights used to make the scene appear to have depth, or does the lighting make it appear flat?
- Shutter Speed. Is motion intentionally blurred to give the sense of motion? Is there blur where it's not intended?
- Color. Is there any color shift that detracts from the photo's message? Are colors under- or over-saturated?

Esthetic Qualities
- Composition. Does the composition draw the viewer's eye to the subject? Have distracting elements been eliminated? Have the important elements been arranged properly in the frame?
- Color. Is color used as a compositional element? Are warm and cool colors used properly to lend a 3-dimensional quality?
- Lighting. Does the direction, intensity, or color of light on the subject add or detract?
- Cropping and Framing. Is the frame the right size and shape to set off the subject?
- Action. Is there action in the photo? Are people or animals in the photo interacting with each other or with some object, or are they just looking at the camera?
- Message. What does the photograph say? Does it give the viewer a clear message? Does it tell a story?
- Originality. Is this an image we haven't seen before? Has the subject been handled in a new and refreshing manner?
- Impact. Most importantly, does the photo have impact? Does it grab your attention? Does it speak to you; does it sing to you; does it command you to look; will you remember it?

Competition can be a little tough when your pride and joy gets a low score from a so-called expert.
Just remember that photography is a combination of left-brain and right-brain functions. Judges are great at dealing with all the left-brain aspects of an image, but it's difficult for anyone to totally teach or critique right-brain stuff. If you think a judge misses the point and fails to appreciate your photograph, re-enter it with a different judge and see how it does.

Gammagram, October 2010, Tip of the Month: Better Focus, Chuck Pivetti

When auto-focusing on a flying airplane or bird with a telephoto lens, your lens can spend so much time searching that the bird or airplane will be too far away by the time focus is achieved. Check the little switch on your telephoto lens to make sure it's set to the mid-to-infinity search range rather than the near-to-infinity range. Then focus on a distant object at about the same distance you expect the bird or airplane to be when you focus on it. These steps should cut down the amount of searching needed to achieve focus.

When shooting from a tripod with a DSLR with "live view," you can zoom in on the live view with the same zoom button you use for zooming in on preview images. Then, with the highly magnified image, you can focus very precisely.

Gammagram, June 2014, Fine-Art Photography, A Three-Stage Workflow, Chuck Pivetti, co-authored with Bob Hubbell

Introduction

If we want photographs to grace our walls, or maybe even the walls of a gallery, we need to slow down and think about what we're doing. We need to visualize what we want to create. We need camera technique to turn that vision into a good digital image file. And we need to turn that digital image file into a work of art. Think of these as three stages: "Pre-capture," "Capture," and "After-capture."

Pre-capture

Pre-capture is the most important, the least understood, and the most challenging. At pre-capture we must take time to switch gears. We must change the way we see. Our eyes don't see; our brains see. Our eyes are just lenses for our brain-cameras. And our brains process what comes through the eyes by applying rules. And those rules can smother creativity. We hear photo critics refer to rules: "third points," "clean borders," "use diagonals," "use S-curves," "use odd numbers of items," and such. These are not rules; they are guidelines that often work. Creativity is not about rules; it's about imagination, inspiration, inventiveness. It's about seeing in a new way. To capture what we see, we need to master our camera equipment, or it will divert our brains back to applying rules. We need to be visualizing a picture and what we want it to say. At pre-capture we see in a new way; we visualize how our scene will be rendered. We get ready to create.

Capture

Capture is important, but in the electronic age it's easy to think the camera knows what to do. It does not! We have to be in control, not the camera. The camera is our tool just as a brush is the artist's tool. We must interpret the light and determine how to use it. We must decide where to position the camera, what to include and what to exclude, what should be in focus and what should not, and we must determine the "decisive moment" to trip the shutter, to make the capture. Our cameras don't know how to process our image files, so we should capture RAW-quality image files to provide latitude for after-capture processing. Capture is the stage at which we create an image file that can be turned into a work of art.

After-Capture

After-capture is when we turn pixels into art. It's easier when we've paid attention to pre-capture and capture. After-capture processing is accomplished in three steps. The first step is archiving. The second step is non-destructive processing, which preserves original pixels, and the third step is destructive processing, which actually changes pixel values.

Step One

The first step is archiving. We download, rename, and store our image files on a computer's internal or external hard drive using a filing system that makes sense to us. Our system should allow us to quickly find any digital image file. In archiving our image files we should add keywords that will help in a search for an image, and we can also add personal identification and copyright status.

Step Two

The second step in after-capture is non-destructive processing. We can use a program like Lightroom or Adobe Camera Raw (ACR) to edit our images in a way that is totally reversible, or "non-destructive." In non-destructive editing, the original image pixels are preserved. Edits like cropping, brightening, darkening, adding vibrancy, and adjusting dynamic range are saved as instructions on how to display the image. When we are satisfied with our non-destructive editing, we should save the original RAW image file with its adjustments. Our original pixels will still be intact. (We should be our own critics; we should always see things to improve. Since photo-editing software keeps getting better and better, there will come a time to go back to the original image file and start over, using newer software and more experience.) After saving the edited RAW file, we can then open the file in Photoshop and continue non-destructive editing using layers. We should save this version of our image with its layers. This file is our "master." At this point we will have saved two versions of the same image: a RAW file, and a PSD or TIFF "master" containing layers.

Step Three

In step three, we further process our "master" for output to a print or to the web. At this point we make destructive adjustments like flattening, cropping, reducing color bit depth, re-sizing, resampling, applying artistic filters, etc. If we save an output version, we should rename it so we won't overwrite our "master" file. When we save an output version, we will have saved three versions of the same image. On completion of this stage, we've finally turned our digital data into a work of art.

In Summary

Mastering pre-capture, capture, and after-capture will lead to excellence in our pursuit of the perfect fine-art photograph. We will be able to create a picture that says something simple, but says it clearly and elegantly.

Gammagram, November 2014, Pre-Capture, The First Important Stage to a Creative Photograph, Chuck Pivetti, co-authored with Bob Hubbell

Introduction

In our first article, we mentioned the three stages of workflow in creative photography: pre-capture, capture, and after-capture. In this article we discuss the first stage, pre-capture. Of the three stages, pre-capture is the most important and the most challenging.

Pre-capture

Pre-capture is the analysis of a scene to determine the technique required to capture it. That analysis must consider many elements, some objective, some subjective, some both. As much as manufacturers would have you believe otherwise, cameras do not have brains. They are just tools. They have no idea what the photographer's purpose is.

If the photographer is a serious birder, he/she will want a crisp, detailed rendition of a bird centered in and practically filling the frame. The bird will be well lit, with a catch-light in the eye. The background, if included, would be simple, represent the bird's natural habitat, and not compete with the bird for the viewer's attention. The same general approach applies to many other areas of photography, such as architecture, landscape, graduation portraits, and the ever-popular baby picture.

If the photographer is an artist, the image may be blurred; it may show only part of the bird, or the bird may be small relative to the background. The bird may be in silhouette. Tonal values may be selected to enhance the entire image rather than just the bird. The artist may change or even distort the bird to make an expressive image. The photographer's imagination plays a major role.

If the photographer is a birder who is also an artist, things get interesting. This photographer wants an accurate picture of the bird, combined with more expressive elements that provide a more artistic rendering of the whole scene. Composition can be more flexible, with the bird smaller in the frame, or maybe in a corner. Lighting may be unusual. Colors might be enhanced or muted, the background may be changed, but not so much as to make the picture inauthentic. The photographer's imagination plays a lesser role.

What is really different about these three approaches? All three are legitimate. Surely they overlap. We could quibble about them forever, but one thing stands out: the attitude of the photographer. What does he/she want to express? What is the photographer's goal in making this picture of a bird?

Conclusion

Pre-capture includes finding a scene you like, then pre-visualizing, planning a photo of that scene. So find a scene you like. Plan your photograph. Mentally explore the subject in different ways. Consider the result if you were to move in, walk around, get on your knees, use different lenses, different angles, vertical and horizontal framings, etc. Whatever first attracted you to that scene may not be well represented in your first visualization, so you keep digging, exploring, expanding, and finding more, always with a purpose in mind. Practice and experiment until the process becomes instinctive.

[Three example photographs]

In these three examples, the second has the purpose of describing the species for those interested in birds; the first is aimed to appeal not only to birders but to a broader audience that might enjoy the beauty of an artistic composition; and the third, to appeal to those who would like a large print as décor in their homes. Each was taken with a different purpose and a different audience in mind. So, at pre-capture, the first consideration should be: "Why am I taking this photograph?"

Gammagram, February 2015, "Pre-capture - Capture - After-capture," Chuck Pivetti, co-authored with Bob Hubbell

Introduction

In previous articles we discussed pre-capture: adjusting our seeing and perception, changing our everyday way of seeing to a photographer's eye, so we can capture a photograph that says what we want it to say. In this article we move on to using the camera to capture that photograph.

Capture

So, what's to talk about? We can set our DSLR on auto-focus and auto-exposure, point it at our subject, click the shutter button, and capture an image. Well, as we've said before, as amazing as our cameras are, they still do not have brains. They have no idea why we are capturing a picture or what we are trying to say with that picture.
Camera equipment is so amazing that we must take care that it doesn't distract us from what's most important: the content of the image. Capturing a great picture requires that we pay careful attention to how we arrange that content within the frame of the viewfinder. We have to think how we are going to represent our three-dimensional scene in only two dimensions. What clues will we provide the viewer of our photograph to see depth? How will we separate the subject from the foreground and the background? Where will we place the point of interest? How do we select the best camera position?

A camera has no aesthetic appreciation; it's just a dumb tool. So we must make it record what we see so a person viewing the photograph will also see it. We make the decisions as to composition. And the very word "composition" is intimidating because it implies rules. But there really are no rules. Composition is the arrangement and treatment of the various elements in the scene. And how we arrange those elements depends entirely on what we want our picture to say. As we frame our photograph in the viewfinder, paying attention to four things will help us get better pictures:

1. Near-middle-far
2. Viewpoint
3. Balance
4. Simplicity
Near-Middle-Far

What's in the foreground and background should not be there by accident; it should be there for a purpose. It should support, not detract from, the subject. The camera will move all three onto the same plane, so we must provide visual clues to separate them, clues like perspective, selective focus, and aerial perspective, to name a few.

Viewpoint

Our viewpoint determines linear perspective, which adds to the illusion of depth in our two-dimensional photograph. Viewpoint also determines the relative position of objects in the frame. So we should select a camera position that places elements where they support the subject, create the appearance of depth, and are in balance. Our camera's position can also determine the arrangement of graphic elements like lines and shapes.

Balance

Elements in the frame should be in balance. A large object near the center of the frame can balance a smaller object near the edge of the frame. When it comes to compositional balance, it's not just the size of objects that gives them weight. Bright objects have more weight than dark objects. People have more weight than things. Symmetry can provide balance, but we must be careful not to create a boring, static composition.

Simplicity

Simple compositions can make powerful and elegant statements. Elements that don't support the main subject should be eliminated, and distracting backgrounds should be subdued. Two common methods of subduing a distracting background are selective focus and aerial perspective. Selective focus can throw the foreground or background, or both, out of focus. Aerial perspective is usually the result of atmospheric haze, making distant objects like mountains less distinct.

Working the Shot

With these four compositional principles in mind, we move left or right, up or down; try different focal lengths; place the subject at different locations in the frame. We try different camera settings. It's digital; there's no expensive film to waste, so we capture a variety of compositions. We practice and learn. And we mustn't forget to use our feet. We don't just zoom in and out. We use our feet to move around. Using our feet to move in and out creates a different perspective and a different composition from what we get by zooming the lens. Let's not get lazy; let's work the shot. And, since we're not professionals, let's have fun doing it; otherwise, what's the point?

Here are some examples of arrangements of compositional elements. Remember, shapes and lines can be implied or can be completed by the edges of the frame.
Gammagram, March 2015, After Capture, by Bob Hubbell and Chuck Pivetti

Introduction

This is our third article in the series "Pre-Capture," "Capture," and "After-Capture," a three-stage digital workflow. In the first article we discussed "Pre-capture": getting into the right frame of mind to start seeing potential photos, seeing with a photographer's eye. In the second article, "Capture," we discussed capturing what we saw with that photographer's eye: seeing not only the subject, but what was in front of it and what was behind it, and, most importantly, moving around and working the shot. Now it's time to discuss "After-capture," time to think about what we're going to do with those pictures after we capture them in the camera.

Two Parts to the After-capture Stage

Let's think of "After-capture" as consisting of two important parts: organizing and post-processing. As camera club members, we take a lot of photos, so if we don't have a well-organized filing system for all those photos, we'll never be able to find a particular photo later. After we complete a "shoot," we're anxious to move the photos to a computer. And we will be even more anxious to use that computer to output those photos. We are so proud of those photos we want to share them right now. We want them on a device so we can show them to friends and family. We want to enter them in club competition. We want to share them online. There may even be one so great we want to print it, frame it, and hang it on the wall right now. Whoa... We've got to fight the urge to output our neat photos before we've completed the after-capture stage of our digital workflow, until we've gotten organized. It sounds tedious, but it's really not, if we let Adobe do most of the work for us. And part of it can actually be fun.

Organizing

The first part of our after-capture workflow is downloading and filing our image files. Downloading to a computer can be done several different ways, but our Adobe software does it best. Both Bridge and Lightroom provide a means to create file folders and subfolders, to rename our photos, to embed keywords and other metadata, to convert camera RAW files to DNG files, and to store those files in both a primary and a backup location, all during the download process itself. So, if you use Bridge or Lightroom to do these things during the actual download, you will have taken the first important step in the after-capture stage of an organized workflow.

We just need to take a few minutes to start the process. We insert the camera's card in a card reader. We select that card in Bridge or in Lightroom. In Bridge's "Get Files from Camera" we designate a location in which to store our digital image files, create a folder and maybe sub-folders to hold those files, select a location to store backups in, rename and date the files, check the "Convert to DNG" box, designate a metadata template, and specify metadata to add. Lightroom works in a similar fashion, except we would use the "Import" menu items, which do all the same things.

Good filing systems are hierarchical. Computers all use hierarchical filing systems, so there is a "path" from the hard drive down through a system of folders and subfolders to each file. On Macs, the path to a file can be seen by using "CMD+Click" on the file name at the top of an open window. In the window for this Word document, that path reads: MacHD>Users>Bob Hubbell>Files>Camera Club>Workshops>Three Stage Workflow>Capture in Two Parts.doc.
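If you like to script things, the same idea can be expressed in a few lines of code. Here is a minimal sketch, in Python, of building that kind of hierarchical path for a photo. The folder names and numbering scheme are hypothetical, not anything Bridge or Lightroom actually generates; the point is that every file gets exactly one predictable home.

```python
from datetime import date
from pathlib import Path

def archive_path(drive: str, category: str, trip: str,
                 subject: str, shot: date, seq: int) -> Path:
    """Build a hierarchical path: drive > category > trip > shoot folder > file."""
    shoot_folder = f"{trip.upper()}{shot:%Y%m%d}"        # e.g. SERENGETI20130910
    filename = f"{subject} {shot:%Y%m%d}.{seq:04d}.dng"  # subject + date + sequence
    return Path(drive) / "PICTURES" / category / trip / shoot_folder / filename

# The 23rd wildebeest photo from September 10, 2013 (hypothetical names):
print(archive_path("/Volumes/ComputerHD", "TRAVEL", "Serengeti",
                   "Wildebeest Crossing", date(2013, 9, 10), 23))
```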
(Both Bridge and Lightroom have a workspace option to display the path to a selected file, and we should consider using that option.)

So, to sum up: in an organized photo filing system, there is a logical path to each and every digital image file, starting with the hard drive and going down through folders and subfolders. A typical path to a photo might look like this: ComputerHD>PICTURES>TRAVEL>AFRICA>TANZANIA 2013>SERENGETI20130910>Wildebeest Crossing Mara Ri20130910.0023.dng. This is the path through the computer filing system to the 23rd photo of wildebeest crossing the Mara River in the Serengeti Plain, taken on September 10th, 2013.

During download we could embed metadata in the image file that includes the name, address, and e-mail address of the photographer, copyright status, and keywords like Africa, Wildebeest, Migration, Serengeti, etc. This will be added to the metadata that the camera has already stored in the image file, like the make, model, and serial number of the camera, camera settings, date and time of capture, GPS coordinates of the camera location, etc.

We each need to create a system that works for us. Digital photographs accumulate at an amazing rate, even when we get very selective and only save the "good ones." Some of us will probably be more comfortable with an alphabetical or a numerical system. Most professionals file something like this: Photo Category folder > Shoot+date (yyyymmdd) subfolder > SubjectName+Date+image number. An example would be: Hard Drive>Photos>Paid Shoots>FamilyPortraits>Smith Family 20150224>Smith John and Mary 20150224 0034.dng. Once we get all our new images descriptively named and tucked away in places where we can find them again, even ten years from now, we can move on to the fun part of after-capture: post-processing.

Post-processing

Post-processing? If we are at all serious about our photography, we aren't going to let the camera convert our sensor data into an actual picture; we are going to do it ourselves. When we select RAW image quality in our camera's menu, we save every bit of information that the sensor gathered at the moment of capture. But that data is NOT AN IMAGE. Something has to interpret that data. Both Bridge and Lightroom will immediately create thumbnails representing our RAW files. But those are really no more than place-holders representing the RAW file. Those thumbnails don't begin to show us all the potential in each of our photos.

It's not until we start using the adjustments in Adobe Camera Raw (ACR) or Lightroom that we begin to see the potential of our digital image files. And we can move sliders around and try all kinds of adjustments in these programs and never worry about damaging our RAW file, because those adjustments are all totally reversible. Any post-processing in these applications is called "non-destructive" processing. Even cropping is reversible.

All photo enhancements can be made in ACR. (Since Lightroom uses ACR for its Develop module, we refer to either one when we say "ACR.") ACR provides all the basic adjustments: straightening, white balance, exposure, contrast, shadows, highlights, blacks, whites, clarity, vibrance, saturation, red-eye removal, noise reduction, sharpening, resampling, lens corrections, etc. ACR even provides for changing color space and bit depth. If our goal is just a good, straight photograph, we need go no further than ACR. We can save an image as a DNG, TIFF, PSD, or JPEG directly from ACR. We can also create a proof sheet or a PDF slideshow directly from ACR.

Everything we've done in ACR is non-destructive! Everything we've done is reversible. Every bit of the original pixel data remains untouched. We can always go back and start our post-processing all over again. All ACR adjustments are written as metadata instructions on how our image should be displayed. That metadata can be written to one of several locations: to "sidecar" files (kept either in a central location or in the same folder as the image files), to the image file itself, or to Lightroom's "catalog." How you set up your download process will determine where the adjustments are stored.

To go beyond the straight photograph, we have to move on to Photoshop. But we only need Photoshop when we are creating something more than a photograph. Examples would be adding type layers, combining parts of different images, and applying effects with filters like Liquify, distortion, or posterization. But for now, that's enough about Photoshop. Let's talk more about where we should store our ACR adjustments.

The authors recommend saving ACR adjustments in the image file itself. But neither ACR nor Lightroom can do that if the image files are left in the camera RAW file format. Camera RAW file formats are proprietary to each and every make and model of camera. Adobe makes it possible to open those files with its software, but to do so it must constantly provide software upgrades as each new camera comes on the market. The software allows us to open these files and make ACR adjustments, but it cannot write to those camera RAW files. And we really, really need to understand that.

Now, Adobe has provided a wonderful solution: the means to convert our camera RAW files to Digital Negative files (DNG or .dng). DNG is a universal, openly documented format. DNG is also the native raw format in several high-end professional cameras that cost tens of thousands of dollars (what's that tell you?). Converting to DNG has several great advantages. First and foremost is that your ACR adjustments can be saved right in the DNG file itself, so they cannot become separated from the image file. (Almost all the questions addressed to Tim Grey's daily Q&A column have to do with retrieving lost connections in the Lightroom catalog.) But wait, there's more! Converting to DNG strips away a lot of manufacturers' metadata that is of no use to us, so our image data plus our adjustment data actually add up to a smaller file. And, probably most importantly, if a manufacturer no longer supports its own proprietary camera RAW format, it will not affect our old archived DNG files.

Gammagram, April 2015, The Zen Master Meets a Digital Camera, Chuck Pivetti, co-authored with Bob Hubbell

There is a mountain of fabled beauty somewhere in the East. One day a Zen Master went hiking on that mountain. Presently he met a digital camera, also climbing the mountain. The views were gorgeous and the mountain itself spectacular. Its peak was said to be of indescribable beauty, but it is almost always obscured by a thick blanket of clouds. The camera takes many photographs, clicking away. The Master merely looks. Finally the camera cannot contain itself any longer and says, "I'm getting great shots! My automated knowledge base tells me if I am set properly for the light. Further, it applies the rule of thirds to my compositions and monitors the edges of my frames for distractions. It tells me if I'm holding the camera level. It even tells me just where I am on the mountain!"

THWACK!
The Master hits the camera a heavy blow with a tripod leg. (Not to worry; the camera's waterproof, shockproof body is made of case-hardened titanium, able to withstand a blow of 9.0 on the Richter scale.) The camera realizes perhaps it needs to operate differently, but it finds no answer in its automated knowledge base. They continue up the mountain. Every so often the Master stops and views the astonishing scenery. He smiles quietly at the lovely pictures he sees. The camera tries to aim just where the Master is looking, using its autofocus zoom lens and high-dynamic-range auto-compensation setting, but they don't produce the images it's hoping for.

Finally the camera asks the Master, "How do you find those spectacular pictures?"

"First you look, then you see," the Master replies.

"But Master," the camera replies, "you don't even have a camera. And my camera is a technological wonder. My lens has a 16-element, 13-group configuration, with fluoride optical glass and an ultrasonic motor. Your eye has only one element, made out of liquid, and it doesn't even zoom! How can you see any pictures at all?"

THWACK! A fierce blow to that dense titanium.

"I see beautiful pictures because I have a soul. I feel. You do not have a soul. You cannot feel; you can only look," says the Master. "You can only record what is there. Let us work together so that you can record what I see, which is what I feel."

The camera becomes enlightened: great photographs come from outside the camera! Its automatic knowledge base is merely a servant. Hmmm. The camera puts itself on silent shutter and follows the Master. It reconfigures itself to Manual Mode in order to serve with more sensitivity. Together they continue up the mountain, working together to make photographs. Always looking, sometimes seeing. It's hard work. Drops of sweat appear on the camera's polarizer, but the photographs keep improving. Occasionally they even glimpse the peak through its blanket of clouds.

Gammagram, November 2015, After Capture, Chuck Pivetti, co-authored with Bob Hubbell

[Photo: Bob's first camera]

INTRODUCTION

In previous articles we asked you to think about photography in three stages: "Pre-Capture," "Capture," and "After-Capture." "Pre-capture" is when you put aside your everyday concerns, adjust your way of seeing, and start seeing with a "photographer's eye." "Capture" is when you use the camera to actually capture what you see with your photographer's eye, when you "work your shot," when you make sure you've gotten the very best photo of your subject. (Interesting that in the digital age we don't "take" pictures, we "capture" them...) And now we're going to share with you our thinking on "After-capture": downloading, saving, organizing, managing, and post-processing your photos. Hopefully we'll keep you from making some of the mistakes we've made.

WHAT IS A DIGITAL PHOTO?

What the heck does a digital camera capture? Back when we used film, we had some idea of what was going on. When we exposed film, a latent image was formed. And that latent image could be turned into a visible image by developing it. With film, the resulting photo was something tangible. We could pick it up, feel it, smell it, and see it. Not so with a digital image. Digital cameras perform a lot of hocus pocus that is far beyond our understanding. But they don't "take pictures." They capture millions of little bits of binary code. And that code has to be interpreted by software and converted to a visible image on a computer display.
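You can watch that interpretation step happen outside the camera, too. Here is a minimal sketch using the open-source Python library rawpy (our choice purely for illustration; any raw converter would do, and the filename is hypothetical) that turns a raw file's binary sensor data into one possible RGB rendering:

```python
import rawpy               # LibRaw wrapper: one of many possible "interpreters"
import imageio.v3 as iio

# Decode the binary sensor data and demosaic it into ONE possible RGB
# interpretation. Different postprocess() settings produce different
# "developments" of the very same capture.
with rawpy.imread("DSC_0001.NEF") as raw:       # hypothetical filename
    rgb = raw.postprocess(use_camera_wb=True)   # borrow the camera's white balance

iio.imwrite("DSC_0001_one_interpretation.jpg", rgb)
```

Change the white balance or brightness options and you get a different, equally "true" image from the same data, which is exactly the point.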
Your camera's instant playback of your capture is only one of an infinite number of interpretations of that binary code. Your camera interprets the data with its own built-in algorithms to generate a visible raster image on its LCD. That playback will reflect the settings you've made on the camera. Some of your camera settings will affect the capture and some will affect the playback. Capture settings include focal length, focus, aperture, shutter, and ISO. Think of these as "fixed" effects; you're pretty much stuck with them. Playback settings include white balance, dynamic range, saturation, contrast, vibrancy, etc. Think of these as "flexible"; there's wiggle room to tweak them after the capture.

If you've selected JPEG quality in your camera's menu, the camera will save one interpretation out of all that binary code, and it will discard all data that's not needed for that interpretation. And that's the film-day equivalent of letting a 2-hour lab develop your film, hand you the prints, and throw away the negatives. There'll be times when you'll settle for that JPEG image, and there'll be times when you'll want something better, so you'll save the camera's RAW files and process them in photo software. And that's where we're going next.

ORGANIZING AND SAVING YOUR PHOTOS

With digital cameras it's easy to capture thousands of photos. And when you post-process your photos, you create one or more derivatives of each. So how are we ever going to find a particular photo among thousands? Here's a scenario you've probably lived through: Time to go! Where are the car keys? We look in all the logical places. Not there. Near panic. Maybe in my gardening pants? Nope... Panic!

Now contrast that with the following. You return to a large metropolitan airport after a trip. Where's the car? Ah, on your stub you've written, "Parking Garage Two, Third Floor, Aisle 6, Stall 57." And here it is. That's more like it, and that's sort of how you ought to park your photos.

Didn't know your computer was like airport parking? Instead of garages, your computer has hard drives. Instead of floors, it has folders; instead of aisles, it has subfolders. And instead of stalls, it has file names. Dedicate one hard drive to photos and name it "Photos." Duh. Create folders in that hard drive with names like "Family," "Friends," "Travel," "Nature," "Architecture," or whatever best fits how you categorize. You might file by year and then by category. Dedicate another hard drive as a backup for "Photos." You could, maybe, name it "Photos Backup." Brilliant, huh? As you download and post-process your photos, save them in both hard drives using exactly the same folder and sub-folder system. Why back up? Because you can't totally trust these gadgets. Anyway, external hard drives are cheap compared to the value of our photos. Once you have a system, you can download and save your photos with confidence that you will always be able to quickly find a particular photo.

DOWNLOADING

Adobe's Bridge and Lightroom are designed to manage photo files. They will rename and serially number the photos from your day's shoot. They will add keywords and your personal metadata template to each photo. They will let you select existing folders or create new folders in which to store and back up your photos. When you save RAW photo files, Bridge or Lightroom can automatically convert your RAW file formats to DNG (Digital Negative) file format during the download process.
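To make the "same folders on both drives" idea concrete, here is a minimal sketch in Python of a download step that copies every raw file from the card into identically structured primary and backup locations, renaming as it goes. The drive names, card path, and .CR2 extension are hypothetical, and Bridge or Lightroom does all of this (and much more) for you:

```python
import shutil
from datetime import date
from pathlib import Path

CARD = Path("/Volumes/EOS_DIGITAL/DCIM")   # hypothetical card-reader path
DRIVES = [Path("/Volumes/Photos"), Path("/Volumes/Photos Backup")]

def import_shoot(category: str, shoot: str, when: date) -> None:
    """Copy raw files to every drive, renamed shoot + date + sequence number."""
    folder = Path(category) / f"{shoot} {when:%Y%m%d}"
    for seq, src in enumerate(sorted(CARD.rglob("*.CR2")), start=1):
        new_name = f"{shoot} {when:%Y%m%d} {seq:04d}{src.suffix}"
        for drive in DRIVES:
            dest = drive / folder / new_name
            dest.parent.mkdir(parents=True, exist_ok=True)  # create folders as needed
            shutil.copy2(src, dest)                         # copy, keeping timestamps

import_shoot("Travel", "Serengeti", date(2013, 9, 10))
```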
You can choose to keep the RAW files as well as the DNG files, but we see no point in that. Using Bridge or Lightroom, you should critically review the photos on the camera card and select only the keepers to download. That's not easy, and most of us keep too many. Too many? From an afternoon shoot in the Serengeti I had hundreds.

POST PROCESSING

With every photo we get two chances. The first one is when we do our very best to get it right in the camera. The second is when we do our best to improve it in post-processing. When you do a good job in the camera, you can enhance the photo in post-processing. But there's not much you can do for a bad capture.

You should do as much post-processing as possible in Adobe Camera Raw. (If you're a Lightroom user, the Develop module of Lightroom is Adobe Camera Raw. So from now on we'll just call it "ACR/LR.") Photo editing in ACR/LR does not replace or destroy any of the data saved with the original capture. Instead, it writes additional instructions on how to display the image. If you've saved your files in camera RAW, those instructions are saved in a separate file called a "sidecar" file. If you converted your RAW files to DNG, the additions are written into the DNG file itself, which has many advantages. ACR/LR has become so sophisticated that now we only need Photoshop for actual image altering.

Here's an example of an image as shot, as processed in ACR, and as cleaned up in Photoshop. In the series below, all the images are from the same DNG file. ACR was used to straighten the horizon, crop, slightly correct vertical perspective, dehaze, add vibrance and clarity, and create the tonality of the image. In Photoshop, the two signs, the low fence, and the man at the table were removed.

[Three images: lighthouse in fog as shot; lighthouse after ACR; lighthouse after Photoshop cleanup]

No post-processing step in the lighthouse image was irreversible. Even the Photoshop version is reversible, because all cloning was done to a blank pixel layer and that version and its layers were saved as a PSD. But be careful: some post-processing steps are irreversible. Take care to always keep your original RAW/DNG file. Once you've done your basic editing in ACR, work on derivative files like PSDs and JPEGs. ACR, in its recent upgrades, has almost everything you will ever need to bring an image up to a true representation of your vision. You can comfortably work in ACR with the knowledge that you won't compromise the potential of your original capture. As you gain experience at post-processing, and as the software is upgraded, you can return to your original capture and create a totally new version, one with even more pizazz.

And, a Last Thought: Digital Workflow Is Probably Not Linear

Photography requires both vision and technique. It's both left-brained and right-brained. Vision informs technique and technique informs vision. Both grow with careful practice. Thus our sequence of Pre-capture, Capture, and After-capture may not be as rigid a sequence as it seems. You should find yourself going back to earlier steps, because later steps may require that you change some of your earlier thinking. Further, working deeper into an image often brings deeper insight, maybe even leading you to a totally different interpretation. Not only will you find yourself going back to your RAW/DNG data, but sometimes you'll also feel a need to go back to the original scene of the capture and try again. That's both the challenge and the reward of photography.

Summary

Save your captures in RAW or DNG. Give them identifying names and add keywords and personal metadata. Save them in an organized filing system. Post-process photos in ACR/LR. Use Photoshop to alter the reality of the original capture: to remove distracting elements, to stitch together panoramas, to combine bracketed layers (HDR), to build composites, and to apply artistic filters. Rename each derivative from the original capture to avoid overwriting and losing that original.

Gammagram, April 2016, FotoSpeak 101, Lesson 1, Chuck Pivetti, co-authored with Bob Hubbell (Bob and Chuck Clarify Some PhotoSpeak)

"Focal-length Multiplier," "35-mm Equivalent," "Cropped Sensor," "APS-C Sensor," "Full-frame Sensor." Do you apply a factor to your lens focal length to understand how it will affect your photo? Maybe you shoot with a Canon and you multiply your lens focal length by 1.6? Or you use a Nikon and multiply by 1.5? Or maybe, like us, you're a micro-four-thirds shooter and you multiply by 2? But then, if you have a camera with a "full-frame" sensor, you don't need a multiplier, right? Those only apply to "cropped-sensor" cameras? Why do we think in photospeak terms like "35-mm equivalent," "focal-length multiplier," "cropped sensor," and "full frame"? Let's explore the origin of these terms and why we use them. It actually goes back to the 1880s, when Thomas Edison's assistant, William K. L. Dickson, developed a practical motion-picture camera.
That camera used George Eastman's new celluloid film, and it had sprockets to advance the film through the camera. At about the same time, the French brothers Auguste and Louis Lumière developed a practical motion-picture projector. Voilà! A whole new entertainment industry was born: the "movies!"

The early 1900s saw an explosive growth of the movie industry. By 1910, Charlie Chaplin and Mary Pickford were household names. Folks in McAlester, Okla., knew who Douglas Fairbanks was. Big movie studios like Paramount, RKO, 20th Century Fox, Warner Bros., and MGM were soon cranking out movie after movie. Movie film, being produced in large quantities, became inexpensive and readily available. Early movie film was 35 mm wide and had sprocket holes to move the film vertically through cameras and projectors. The space between the holes was 24 mm. Somehow, a four-thirds aspect ratio was settled on, for an image area of 24x18 mm.

So, what did old movie film have to do with today's digital photospeak? Everything! In 1913, a guy named Oskar Barnack, head of R&D for Ernst Leitz Optische Werke, Wetzlar, Germany, got the idea of using movie film for still photography. So he built a prototype 35-mm still camera that was a little clunky, but worked. In 1924, Oskar's company started production of the first still camera to use 35-mm movie film, the "Leica." Oskar ran the film horizontally through the camera, so he decided on an image area of 24x36 mm, for a 2:3 aspect ratio. Oskar's camera was a big hit; it could be used quickly, informally, and even surreptitiously. Photographers like Cartier-Bresson, known for his "decisive moment," made the Leica famous. Soon other manufacturers started producing 35-mm cameras.

Over the next seventy years, we would see a proliferation of 35-mm cameras. They came in point-and-shoots, rangefinders, single-lens reflexes, and even twin-lens reflexes. From 1970 to 2000, the 35-mm camera far outnumbered all other formats. A built-in light meter and the 35-mm film cartridge loaded with color slide film made it easy to use. By 1990, the camera of choice in every camera club in the world was the 35-mm single-lens reflex (SLR). Then...

The dawn of the 21st century brought a paradigm shift to photography. Film was replaced with pixels. But there was a problem. A digital sensor the size of the 35-mm format would be too expensive for casual and amateur photographers. Fortunately, a sensor that size wasn't really necessary to produce quality images. To make an adequate and affordable digital SLR, manufacturers adopted a smaller sensor, about 15x22.5 mm. They called it the "APS-C" format, after the "Classic" format of the short-lived "Advanced Photo System" of the 1990s.

Photographers who had grown up with the 35-mm camera, who had never known any other format, now had to figure out what focal-length lens to use. To help them, the industry introduced the concepts of "Full Frame," "Cropped Sensor," "35-mm Equivalent Focal Length," and "Focal-length Multiplier." Our need to adjust to a different format is all Oskar's fault. He built a camera that used 35-mm movie film and created the 24x36-mm format that would dominate photography for a hundred years, a format that would become so indelibly stamped in our brains that its "full frame" would be our frame of reference in the digital age. And that's what old movie film had to do with today's photospeak terms: "Full Frame," "Cropped Sensor," "Focal-Length Multiplier," and "35-mm Equivalent."

Epilogue

What might all this mean to you?
It probably depends on how old you are. If you began serious photography with a digital camera, it probably means nothing! You simply choose lenses with the focal lengths you need. But if, like us, you were a 35-mm film shooter, you probably have to adjust your thinking. Lens focal lengths don't translate directly from film-camera thinking to digital-camera thinking. So, if you shoot an APS-C-sized camera, you probably need the "focal-length multiplier."

Let's say you're shopping for a long lens, a 400- or 500-mm lens. The clerk smiles and brings out a 500-mm, 18-pound monster, priced at $7,995.00. Then you smile and say, "But I shoot the APS-C size Nikon," and, using Nikon's multiplier, you calculate that 500/1.5 equals 333, so you say, "What do you have in a 300-mm lens?" The clerk, no longer smiling, brings out the 2-pound, $500 Nikon 70-300-mm lens. You leave the store with a perfectly serviceable long lens that's "equivalent" to a 105-450 mm, weighs 2 pounds instead of 18, and costs $500 instead of $8,000.

In summary, if you're shooting with old Oskar's full frame, enjoy it and take advantage of its full potential. But if you've caught up with the times, leaving old Oskar's legacy behind, your "cropped sensor" does the job nicely. Keep in mind, though, that equipment never replaces vision and technique. Keep practicing.

Gammagram, May 2016, PhotoSpeak 101, Lesson 2, "White Balance," "Color Temperature," and Why Hot Horseshoes Are Cool, co-authored with Bob Hubbell (Bob and Chuck Clarify More of the Mysterious Language of Photography)

Oh, the agony. One of us had proudly entered what was surely to be "Open" image of the night, an incredible photo of a cherubic, bare-bottomed, wide-eyed, two-month-old grandson looking right into the camera while floating on a cloud, only to hear the judge mark the image down because the cloud wasn't pure white. Ouch! The image's "white balance" was off.

You know what "white balance" is, right? It means white is white and not some other color. Duh... Well, actually, when white is white in your photo, it indicates that no colorcast has been introduced by the light source. Whites and grays have been rendered "neutral." An example of a colorcast introduced by the light source might be a US flag photographed at sunset: the white stripes would appear pink and the blue field would appear purple.

If you're like us, you probably set your camera on automatic white balance (AWB) and forget about it. Then again, if you're a perfectionist, you probably worry about it. Cameras and post-processing software have settings to correct for the typical light sources we might encounter. These settings usually include sunny, cloudy, shade, tungsten, fluorescent, flash, custom, and color temperature. Presumably, if you select the setting that matches your lighting, your photo will contain white whites and gray grays, and all other colors will fall into line. Any colorcast created by the lighting will be corrected. Settings like "sunny," "cloudy," "shade," "tungsten," and "fluorescent" are easy to understand. "Custom" usually requires us to record a setting using a "gray card" lighted by the same source we will be using for our photo. But what the heck is "color temperature?" Well, here's what Wikipedia says: "...the color of a light source is the temperature of an ideal black-body radiator that radiates light of comparable hue to that of the light source..." Huh? "Black-body radiator?" "Temperature?" "Comparable hue?" Here we go again, back to the 19th century, to try to explain a photospeak term.
First, imagine the village blacksmith heating a horseshoe in his forge until it glows. At first the horseshoe glows red; then, as it gets hotter, it glows yellow, and then white; and finally, when it's really hot, it glows blue. That hot horseshoe is a "black-body radiator." Maybe not an ideal one, but close enough. So its color as it reaches higher and higher temperatures, measured in degrees Kelvin, can be used to describe similar colors of the ambient light under which we take our photos.

"Degrees Kelvin?" Yep, in 1848, William Thomson, who would become the first Baron Kelvin, presented a paper at the University of Glasgow in which he said that "absolute zero," the total lack of heat, was -273º Celsius. By international agreement, a "Kelvin scale" was established with -273ºC as its zero point. Now here we digital photographers are, a hundred and sixty years later, using Lord Kelvin's scale to describe the color of light sources for our photography. The scale below shows light sources and the Kelvin temperatures of their comparable colors radiated by that horseshoe.

[Chart: common light sources and the Kelvin temperatures of their comparable colors]

Now all this may seem straightforward. We can set our cameras to correct a colorcast created by any of several light sources. And if that doesn't do it, we can make further corrections during post-processing. But watch out, it gets tricky.

The first question is, "If it looks white to me under a particular light source, won't the camera record it as white?" And the answer is, "Probably not, because our brains apply their own kind of automatic white balance. As a result, we tend to see white as white under various light sources."

The next question is, "If our brains have automatic white balance, why don't they automatically adjust the white balance when we look at a photo?" And the answer is, "When we look at a photo, our brains are busy adjusting for the source of light under which the photo is being viewed, which is more dominant than any colorcast in the photo itself."

And then the most important question: "Do we really want white to look white in our photo?" And the answer is, "Not always." Why not? Well, we call the hours just after sunrise and just before sunset "the golden hours." Those are the hours when our photos have a nice "warm" glow. Outdoor portraits during these hours are very flattering. And who would want to remove that warm glow from the candles on a child's birthday cake? Heck, we might even make that glow a little warmer yet in post-processing.

To summarize: Photographic light comes in color hues ranging from red through yellow and white to blue. These hues can be compared to the temperature of heated iron using the Kelvin temperature scale. Lower temperatures are reddish-yellowish. Higher temperatures are bluish. And be aware, the settings on your camera (and in photo-processing software) correct for the temperature of the light source. If you select a higher-temperature source of light, the photo will be rendered redder. In Adobe Camera Raw or Lightroom, for example, increasing the temperature slider will make the image redder; decreasing it will make the image bluer. And just to keep you on your toes: somewhere back in the 18th century, the art world developed the habit of calling reddish hues "warm" and bluish hues "cool." So high-Kelvin-temperature light sources produce "cool" photos and low-Kelvin-temperature sources produce "warm" photos. So, colorwise, hot horseshoes are really cool.
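For the digitally curious, here is a minimal sketch of what a "custom" gray-card white balance amounts to in code: scale the red, green, and blue channels so that a patch that should be neutral actually comes out neutral. It assumes floating-point RGB values between 0 and 1; real raw converters are far more sophisticated than this.

```python
import numpy as np

def gray_card_balance(image: np.ndarray, card_pixels: np.ndarray) -> np.ndarray:
    """Neutralize a colorcast using pixels sampled from a gray card."""
    card_rgb = card_pixels.reshape(-1, 3).mean(axis=0)  # average R, G, B of the card
    gains = card_rgb.mean() / card_rgb                  # boost weak channels, tame strong ones
    return np.clip(image * gains, 0.0, 1.0)

# A warm (tungsten-like) cast: the gray card reads red-heavy.
card = np.full((8, 8, 3), [0.60, 0.50, 0.40])   # should have been equal R = G = B
scene = np.array([[[0.30, 0.25, 0.20]]])        # one pixel of the same warm scene
print(gray_card_balance(scene, card))           # [[[0.25 0.25 0.25]]]: neutral again
```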
Gammagram, June 2016, PhotoSpeak 101, Lesson 3, A "Stop" Is an EV, Chuck Pivetti (Bob and Chuck Clarify Another PhotoSpeak Term)

"It's a stop overexposed." "Open up two stops." "It's a couple of stops underexposed." "Stop down to get more depth of field." "Stop" may be the most overworked and misused term in photography. Pay attention now: a "stop" is an Exposure Value (EV). If you search the internet for "In photography, what's a STOP?" you will find about a hundred sites that say a "stop" is an "f-stop." They are wrong! A "stop" is an EV. Repeat that to yourself several times: "A stop is an EV." And what's an EV? Well, it's an Exposure Value. And what's an Exposure Value? An Exposure Value is a step in exposure that doubles or halves the amount of exposure. Doubles or halves the exposure? Sounds like a lot, but it really isn't.

[Seven exposures of a gray card, each one EV apart]

Here are seven exposures of a gray card. From left to right, each exposure is one EV more than the preceding exposure. The metered exposure is in the center. So the camera captured a range of seven EVs, from very dark gray to very light gray.

Exposure is a combination of three elements: 1. the amount of light; 2. the time of exposure to light; 3. the sensitivity to light. The amount of light is controlled by the aperture, the exposure time is controlled by the shutter, and the sensitivity is controlled by the ISO value. And all three of these things, in turn, are determined by the photographer.

So how did "stop" come to mean EV? Here's where we think this ubiquitous photospeak term may have come from: old manual film cameras. We both grew up with cameras that had levers and dials to set aperture and shutter values. The levers and dials to change settings clicked into little spring-loaded detents at each full EV value. In other words, at each "stop." Could it be that simple? A "stop" was really just a "stop"?

Let's dig into stops a little deeper. The typical sequence of shutter values that each increase exposure by one EV is: 1/1,000, 1/500, 1/250, 1/125, 1/60, 1/30, 1/15, 1/8, 1/4, 1/2, 1s, 2s, 4s, 8s, 15s, and 30s. The typical sequence of aperture values that each increase exposure by one EV is: f/64, f/45, f/32, f/22, f/16, f/11, f/8, f/5.6, f/4, f/2.8. And the typical sequence of ISO values that each increase exposure by one EV is: 100, 200, 400, 800, 1600, 3200, and 6400. These three give us great flexibility in how we set the exposure.

Time for a quiz. Choose the more accurate statement, considering that you are taking a meter reading in a field of snow:

A. You need to overexpose by two stops.
B. You need to increase the metered exposure by two stops.

The second choice is better. Why? Because if you overexpose by two stops, everything will be blown out! That's not what you want. Remember 18% gray, the brightness level of the average photographic scene? Your meter recommends that all exposures be set to produce images with brightness averaging around 18% gray. So the meter's recommended exposure for the bright snow scene will be underexposed, because sunny snow isn't average. Thus you have to overrule the meter's recommendation and open up maybe two stops to get a realistic image. Note that you didn't "overexpose" by two stops. The meter's reading would have underexposed the image. Instead, you increased the exposure by two stops to get a "proper" exposure.
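Because a stop is just a factor of two, the arithmetic is easy to put in code. Here is a minimal sketch (the starting values are hypothetical) of the three equivalent ways to open up those two stops:

```python
def shift_shutter(seconds: float, stops: int) -> float:
    """Each stop doubles or halves the exposure time."""
    return seconds * 2 ** stops

def shift_iso(iso: int, stops: int) -> int:
    """Each stop doubles or halves the sensitivity."""
    return round(iso * 2 ** stops)

def shift_aperture(f_number: float, stops: int) -> float:
    """f-numbers step by the square root of two, because exposure depends
    on the area of the aperture opening, not its diameter."""
    return f_number / (2 ** 0.5) ** stops

# Open up two stops from a metered 1/250 s @ f/8, ISO 100 (change any ONE):
print(round(shift_shutter(1 / 250, 2), 4))   # 0.016 s, i.e. about 1/60
print(round(shift_aperture(8, 2), 1))        # 4.0, i.e. f/4
print(shift_iso(100, 2))                     # 400
```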
You corrected the meter’s “mistake”. But wait! You see a black cat in the snow. It runs into a cave and you follow, hoping for a good image. Hmmm. There’s some light but it’s mighty dark in there. You whip out your camera and get a meter reading. What now? The meter’s going to tell you to lighten up! (We could all use that.) But we know the meter can’t really think the situation through. All it can do is suggest an exposure that will make the image average in brightness. You need the dark moodiness of cat + cave, so you need to decrease the meter’s suggested exposure one or two stops---maybe bracket and try both---to produce the darker image that you want. In both examples we’ve been thinking in stops-- -EV’s---Exposure Values. But remember, meters can’t think! Sweet automatons that they are, all they can do is suggest is an exposure that will produce the appearance of average light! Fortunately, that works in many cases (after all, lots of scenes are in average light) but as careful photographers, we want to decide which scenes should be rendered as average and which need more careful consideration. And we do that by deciding whether the EV suggested by the meter will produce the desired image, or if we need to adjust the meter’s reading up or down. In this article we’ve tried to do three things: clarify that “stop” means Exposure Value; explore where the term, stop, came from; and illustrate why thinking in terms of Exposure Values helps us think clearly about adjusting exposures out in the field. And we think it’s time to stop. Gammagram July 2016 PhotoSpeak 101, Lesson 4, FP Flash, Chuck Pivetti (Bob and Chuck Clarify more Photospeak) Your Nikon or Canon Speedlite has a feature called “FP Flash”, sometimes referred to as “high-speed flash.” Here’s another term with hundred-year old roots back in Germany when the Leica appeared on the scene to begin the 35-mm camera era. The Leica had a revolutionary new kind of shutter. Instead of a leaf shutter located between the lens elements, this new shutter was located in the back of the camera just in front of the film, or at the camera’s “focal plane.” This “focal-plane” shutter consisted of two cloth curtains that ran horizontally just in front of the film. The first curtain opened to begin the exposure and the second curtain followed to end the exposure. Exposure duration was determined by the time between the start of the first curtain’s movement and the start of the second curtain’s movement. Eventually the focal plane shutter found its way into large-format press cameras with an interesting result referred to as “focal plane shutter distortion.” Focal Plane Shutter Distortion ![]() In the above photo, the camera was panned left to right as the shutter curtains moved, exposing the film, from top to bottom of the focal plane (remember, the scene is inverted at the back of the camera). This rendered the spectators as leaning to the left and the race car as leaning to the right. When we two were kids, the comics always portrayed fast moving cars and trains as leaning forward because that’s the way they appeared in news photos. The “Forward Lean” The distortion resulted from the slow movement of the curtains. Even if the time between the two curtains starting to move was only 1/1,000 second, it took the curtains 1/30 second to complete their travel. In that 1/30 second, a train traveling at 60 miles per hour would move about three feet. 
And what does this have to do with flash photography? Well, if the light source doesn’t last as long as the shutter curtains are moving, only a portion of the film or sensor will be exposed by that light source. That problem was solved in the days of the Speed Graphic by a flash bulb designed to provide light for the 1/30 second that it took the shutter curtains to expose the entire film. And, guess what? That flash bulb was called an “FP,” or Focal Plane, bulb. (Those bulbs got very hot, so the flash “gun” of the day had a button to eject the bulb.)

All modern digital cameras with interchangeable lenses use focal-plane shutters. The new shutters are quicker; some can travel across the film plane in as little as 1/250 second. But a modern electronic flash puts out an almost instantaneous burst of light that can only be used at exposures of 1/250 second or longer. Shorter exposures would result in only part of the image receiving flash exposure. At higher shutter speeds (shorter exposures), the second curtain starts covering the sensor before the first curtain has fully uncovered it. The highest shutter speed (shortest exposure duration) at which both curtains are clear of the sensor is called the flash synchronization speed, or “synch speed,” of the camera.

For most indoor flash photography, ambient light levels are low enough that the flash becomes the primary source of illumination. And, since flash duration is practically instantaneous, shutter speed has no effect on exposure. But outdoors in daylight, flash photography is a whole new ball game. Flash can do a great job of filling dark shadows in outdoor portraits. But if you had to use shutter speeds of 1/250 or slower, you would not be able to use large enough apertures to soften the background. Here’s where “FP Synch” comes to the rescue. When your flash is set to FP Synch, you can use as high a shutter speed as you need, because in FP mode the flash fires repeatedly as the focal-plane-shutter curtains move across the face of the sensor.

There is a price to pay for using this feature. First, the effective range of your flash will decrease considerably. Second, recycle time will increase. But for most outdoor people pictures, neither will create a problem. Although higher shutter speeds are used with FP Synch, the effective flash exposure can actually last longer than it would at a slower shutter speed: one instantaneous burst results in a much shorter exposure time than a flash that pulses repeatedly while the shutter curtains are moving.

In summary: focal-plane shutters achieve very short exposures by creating a small gap between rapidly moving curtains. “FP Synch” is a flash mode in which the electronic flash pulses while the gap between the shutter curtains is moving across the focal plane. “FP Synch” allows us to use electronic flash with exposures shorter than the camera’s “synch speed.” “FP Synch” is useful in outdoor portraiture, but it reduces the effective range of the flash and increases recycle time. It may also result in a longer exposure due to repeated bursts of light during the entire shutter travel.
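If the rule is hard to keep straight, here is a minimal Python sketch of the decision. The 1/250 synch speed is just the typical figure used above; your camera’s actual value may differ, so check its manual:

```python
SYNCH_SPEED = 1 / 250   # typical figure from the text; varies by camera

def needs_fp_synch(shutter_time_s: float) -> bool:
    """True when the exposure is too short for a single flash burst.

    Below the synch speed, the second curtain starts moving before the
    first has fully cleared the sensor, so a single instantaneous burst
    would expose only the moving slit between the curtains.
    """
    return shutter_time_s < SYNCH_SPEED

for t in (1 / 60, 1 / 250, 1 / 1000):
    mode = "FP Synch (pulsed)" if needs_fp_synch(t) else "normal (single burst)"
    print(f"1/{1 / t:.0f} s -> {mode}")
```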
Gammagram, August 2016, PhotoSpeak 101, Lesson 5, “f-stop”, Bob Hubbell and Chuck Pivetti Clarify Another PhotoSpeak Term

Don’t you just love to use a little PhotoSpeak when you’re around the point-and-shoot or cellphone crowd of photographers? And what’s better than good old “f-stop”? An “f-stop” here, an “f-stop” there; it sounds like you really know what you’re talking about. You say, “Stop down to get more depth of field…” How come you “stop down” but never “stop up”? You “open up” instead, and you aren’t confessing anything. You know that when you say “stopping down” or “opening up,” you’re talking about changing the size of the “aperture,” and you know the aperture is that little hole up by the lens that lets in light. And you also know, from reading one of our previous articles, that a “stop” is an Exposure Value (EV). Therefore, an “f-stop” must be a way of selecting an EV by selecting an aperture setting.

[Images: an aperture mechanism; the inverse square law.]

But where does the “f” enter into it? It’s the aperture, right? So why isn’t it an “a-stop”? And, weird photographer that you are, you use larger numbers to represent smaller apertures. Why? If you’ve ever felt confused about this stuff, read on and you will wonder why you were ever confused in the first place. We are going to tackle that confusing, mysterious, and obscure science and mathematics of the camera and shed light on it all. And maybe mix some metaphors along the way. First, you should have a thorough understanding of electromagnetic radiation and particulate photon energy as expressed by Einstein and Planck in the equation E=hf. Just kidding.

Unfortunately, most modern digital cameras add to the confusion by displaying exposure settings as simple numbers (instead of the fractions that they are), like “8.0” and “250.” As a photographer, you should always write (and think) these exposure settings as “1/250 @ f/8.” It’s okay to say “eff eight,” but you should always write it as “f/8” and always think of it as “f divided by 8,” because it’s really 1/8 of the lens focal length. (Similarly, you should get in the habit of writing the shutter setting as a fraction like “1/250” and thinking “one two-hundred-fiftieth of a second.”)

With your thinking no longer fooled by those little numbers in the camera display, you see that an aperture of f/4 on a 100 mm lens would have a diameter of 25 mm; on a 50 mm lens, that same f/4 aperture would have a diameter of 12.5 mm. Yep, the same f-stop gives different aperture diameters on lenses of different focal lengths. How can this be? Well, it’s all about the physics of light; in this case, the “inverse square law.” (Wikipedia says an “inverse square law” is any physical law stating that a specified physical quantity or intensity is inversely proportional to the square of the distance from the source.)

Think of it this way. At night you shine a flashlight on a wall. As you walk toward the wall, the lighted area on the wall gets smaller and brighter. And, since light obeys the inverse square law, if you approach the wall from 16 feet to 11 feet, the light on the wall becomes twice as bright. That’s because 11 squared (121) is just about half of 16 squared (256). Put another way, the light beam at 16 feet spreads out over twice the area that it does at 11 feet. Now think of the aperture near the front of the lens as being the flashlight, and the image plane at the back of the camera as being the wall. If you move that aperture farther from the image plane, the light passing through it has to cover a greater area, so its intensity at any one point will be reduced. So stating the aperture value as a fraction of the focal length makes it possible for the same aperture value to result in the same exposure no matter the lens focal length.
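Here’s a minimal, purely illustrative Python sketch of that relationship, using the 100 mm and 50 mm examples above:

```python
def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    # "f/4" literally means the focal length divided by 4.
    return focal_length_mm / f_number

def relative_image_brightness(focal_length_mm: float, f_number: float) -> float:
    # Light gathered grows with the aperture diameter squared; by the
    # inverse square law it then spreads over an image area that grows
    # with the focal length squared. The ratio depends only on the f-number.
    d = aperture_diameter_mm(focal_length_mm, f_number)
    return (d / focal_length_mm) ** 2    # equals 1 / f_number**2

print(aperture_diameter_mm(100, 4))       # 25.0 mm
print(aperture_diameter_mm(50, 4))        # 12.5 mm
print(relative_image_brightness(100, 4))  # 0.0625
print(relative_image_brightness(50, 4))   # 0.0625 -- same exposure either way
```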
Summing all this up, what are we really trying to say? First, f-stops are aperture settings, and each full step between them changes exposure by one EV. We think the “stop” part of the expression comes from the fact that, many years ago, aperture settings were made by moving a lever that clicked into place at each EV. And we said the “f” stands for focal length. Further, we suggest that you keep your thinking correct by always writing aperture settings as fractions like f/4, f/5.6, f/8, etc., and that when you think about them, you think of them as the focal length divided by 4, by 5.6, by 8, etc. And, maybe the most difficult idea: the same f-stop will have different diameters on lenses of different focal lengths because of the inverse square law, the law that says light spreads over a larger area as distance from the source increases and, as a result, loses brightness. That’s also why f/5.6 gives one full EV less exposure than f/4: the aperture area at f/5.6 is about half the area at f/4, since (4/5.6) squared is very nearly 1/2.

In any case, we urge you to use Manual Exposure Mode. It will force you to think about exposure settings. You can still rely on the exposure meter in your camera, but think of it as suggesting an exposure setting that you can either accept or reject. You are smarter than your camera because you know things it doesn’t. You know whether your subject is brighter or darker than average, you know if you need to stop action, and you know if the background needs to be soft or sharp.

An interesting aside: shortly after George Eastman invented his Kodak, he offered a model with adjustable apertures. He wanted it simple enough for anybody to use, and he thought that f-stops were beyond the understanding of the average person, so he developed what was to become a very short-lived “US” aperture scale. US 1 was f/8, US 2 was f/11, US 3 was f/16, and US 4 was f/22. Apparently George could not imagine that one day there would be lenses of different focal lengths, or lenses faster than f/8.
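To close, a quick numeric check of the claim that f/5.6 passes half the light of f/4; a minimal Python sketch (illustrative only) that computes the EV difference between any two f-numbers:

```python
import math

def stops_between(n1: float, n2: float) -> float:
    # Light reaching the sensor scales as 1/N**2, so the EV difference
    # between two f-numbers is log2 of the ratio of the squares.
    return math.log2((n2 / n1) ** 2)

print(stops_between(4, 5.6))   # ~0.97 -- about one full stop less light
print(stops_between(4, 8))     # 2.0  -- two stops less
print(stops_between(8, 5.6))   # ~-1.03 -- opening up about one stop
```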