The Key Elements of the Film Look
How Digital Cinematography is Changing Contemporary Filmmaking

Table of Contents
Preface 1
Introduction 2
Pre-Analysis 3
The Film Industry Goes Digital 3
RED Cameras 4
Footage Quality 5
Portability and Affordability 7
Methods of Production 9
Methods of Post-Production 10
Hardware and Software Issues 12
The Merging of Video and Photography 13
Projects Made with RED 14
The Future for RED 14
HDSLR Video 15
HDSLRs in Professional Situations 15
Icelandic Feature Film Made with HDSLRs 17
Problems with HDSLR Video 18
Pre-Analysis Conclusion 20
Initial Problem Statement 22
Analysis 23
From Video to the Film Look 23
Frame Rate 24
Interlaced vs. Progressive 24
Shutter Speed 25
Depth of Field 25
Aspect Ratio 27
Color Grading 28
Dynamic Range 29
Film Grain 29
Elements to be Tested 29
Final Problem Statement 30
Test Methodology 31
What Will Be Tested 31
Factorial Experiment Design 31
Instructions and Bias 32
One or More Viewings? 33
Test Questions 34
Design and Production 36
Test Footage Design 36
Shooting the Test Footage 37
First Two Attempts 38
The Perfect Setting 39
Color Grading and Preparing Test Footage 39
Testing and Results 43
Conducting the Test 43
How Results are Described 44
Absolute Results 45
First Viewing and All Viewings 45
Second Viewing Averages 46
Relative Results 48
Student's T-test 49
Comments from Participants 49
Discussion 50
Discussion of Test Results 50
DOF Specific Results 50
Color Grading Specific Results 51
Significance of the Results 51
The Testing Method 52
Conclusion 55
Bibliography 56
Appendix i
Test Results i
Interview Transcripts iii
Saga Film Iceland, interview transcript iv
Framestore, Iceland vii
Nick Strange Thye interview, transcript viii
Prami Larsen, interview transcript xiv
Sidney Plaut, interview transcript xx
Sammy Larsen, email interview xxiv
Thomas Øhlenschlæger, interview transcript xxvi

Preface

I would like to thank Jörundur Arnarson and Linda Kristjánsdóttir of Framestore in Reykjavík, Örn Sveinsson and Ingvar Hreinsson of Saga Film in Reykjavík, Nick Strange Thye and Thomas Øhlenschlæger of Ghost in Copenhagen, Sidney Plaut of Spearhead Pictures in Copenhagen and Prami Larsen of the Danish Film Workshop for granting me interviews and giving me great insight into the film industry today. I thank my supervisor Stefania Serafin for her guidance, and Viggo Holm Jensen, who was very helpful throughout the project. I would also like to thank my fellow Medialogists Camilla Hägg, Niels Christian Nilsson and Birna Rún Ólafsdóttir for assisting me in shooting and acting in the testing videos, and all those who participated in the test. A CD is included with this report in a pocket on the last page. It includes the report in PDF and text format as well as the four videos produced for this project.

Introduction

You might be reading these words from a sheet of paper you are holding in your hands, or you might be reading them from the screen of your computer. These are the two most likely options, and although the words will be exactly the same there is a fundamental difference between these two presentations. One is analog, tangible, imperfect. The other is digital, binary, flawless. One is solid, the other is fleeting. One can never be copied perfectly; the other can be multiplied a billion times and always be exactly the same.
In his book Cinema in the Digital Age, Nicholas Rombes writes: "At the heart of the perfect digital image - coded by its clean binaries - is a secret desire for mistakes, for randomness [...]." [Rombes] A central theme in his book is the term mistakism, which he describes as the secret desire people have for seeing mistakes. People want flaws. Human beings have the extraordinary talent to connect and feel something towards not only other people but also things. Music, film, books, and any object or fictional character can evoke feelings. Everything can have a personality and a story, and our society thrives on these stories. We don't want perfection, we want experiences. That is why we like the little imperfections of things: they make us connect to them and give them personality and a story.

One of people's favorite ways to share stories is through moving images, and as with so many other things we are in the middle of the process of moving from analog to digital in that field. These are perhaps the biggest changes the film industry has gone through since the addition of color and audio to film early last century. The change is happening in all aspects of production: shooting video, recording audio, post-production and cinema projection, to name a few. This brings us to the theme of this project. How is the digital revolution changing the way people make movies? What stays the same? Are we still trying to produce the same look as before, and what is that look exactly? The following pre-analysis will look at the changes involved in going from analog to digital when shooting professional video.

Pre-Analysis

This pre-analysis will focus on two of the major players in professional-looking video going digital. On the professional level, the most popular digital cinematography cameras by far are the RED cameras, and they are changing almost the whole process of making high-end video. On the amateur or prosumer level there are HDSLRs that are capable of shooting very professional-looking video in high definition. Before discussing these two types of cameras, let us first look very briefly into the digital history of the film industry.

The Film Industry Goes Digital

In 1992 Avid introduced Film Composer, the first native 24 FPS digital nonlinear editing platform, which enabled moviemakers to edit on computers for the first time [Kadner, p.246]. It took a few years for the technology, hardware and software, to become more readily available to the masses. In the last decade or so, digital editing has completely taken over the editing and color grading processes both in high-end productions and in consumer video, and this also applies to footage shot on film. Digital video recorders became popular in this same period, but they were almost exclusively used by amateurs and for home video because of the relatively low resolution they were able to produce. Film still dominated the high-end market, whether for movies, commercials or broadcast television. That is, up until a couple of years ago. With digital imaging sensors getting bigger and the constant increase in frame rates and pixel counts, the quality of digital video is finally close to catching up with the quality of film.

RED Cameras

"I have to give the utmost respect to Jim Jannard for opening up a Pandora's box of new technology innovation. [...] Thanks to Jim for igniting this revolution and keeping it going.
Cheers, mate." Rodney Charters, cinematographer [Kadner, p.31]

It was in 2004 that Jim Jannard began his quest to make an affordable digital cinematography camera that could deliver the same quality as a 35mm film camera. He had noticed that the sensors in still cameras were already very large and capable of shooting in RAW mode, which is essential for accurate post-processing of digital footage. At the time he was the CEO of Oakley Inc., famous for its high-quality sunglasses. He assembled a team of highly qualified individuals from the different professions that he felt were important to making this idea a reality [Kadner, p.2]. Two of the most important goals were building a very high resolution camera and creating the best possible compression of the footage without losing any apparent quality. The team managed to create a camera that met all the standards they had set in the beginning, but they still did not have a market for it. They displayed the camera at NAB in Las Vegas in 2006 and took pre-orders, but did not give any promises on delivery dates [Kadner, p.4]. Over the next months and years a strong online community developed on reduser.net, where anyone interested in the camera could voice their opinions. This created a big hype around the camera, and hundreds of people pre-ordered it and put down a deposit without knowing when they would actually get their product or even what exactly they were buying. It also allowed the development team to incorporate the users' suggestions into the design of the camera's hardware and software, which helped ensure that the camera fulfilled the users' expectations [Kadner, p.247].

The RED One camera body.

The first 25 cameras were shipped on August 31st 2007, and it was not until early 2009 that the company finally caught up with the waiting list and was able to ship cameras upon order [Kadner, p.9]. In the three years since it first shipped, the camera has become very popular and has in many ways lived up to the hype. Currently only one RED camera has been officially released, the RED One. In development are the Epic, which is the next release, and the Scarlet, which will not be released for a few years. This report will use the term RED camera both for the RED One and for the concept of the RED cameras in general.

The Scarlet and the Epic camera bodies.

Although the Epic has not been released, Peter Jackson and his team are currently using 30 Epic cameras to shoot The Hobbit in 3D, with two Epic cameras mounted together on a rig. The next few sections describe the main areas where the RED camera and its output differ from 35mm film, covering both strengths and weaknesses.

Footage Quality

"We shot a series of 35mm film and RED side-by-side comparison tests with every possible lighting situation we would need for this project. We did daytime exteriors/interiors, nighttime interiors/exteriors, and makeup tests with men and women. Then we brought in our whole team, including producer Joel Silver (The Matrix, Lethal Weapon), to view the projected results on film and pick out the film-originated footage. Nine out of ten people in the room picked the RED footage as the best, which surpassed even my own hopes." Albert Hughes, director, The Book of Eli [Kadner, p.56]

Albert Hughes filmed The Book of Eli with RED cameras after testing the quality against footage shot on 35mm film.
Footage from RED cameras has different qualities from film, with its own strong and weak points, but as the quote from Albert Hughes suggests it can deliver excellent quality. That is the opinion across the board everywhere I have seen or heard it discussed. From now on in this chapter, when discussing footage from any cinematography equipment we will assume that the scene was properly lit and exposed unless mentioned otherwise.

The sensor in the RED One camera is called Mysterium, a 12 megapixel CMOS sensor. It is similar in size to a super 35mm frame, only slightly smaller. It gives the same angle of view and depth of field as super 35mm cameras, so the comparison between these two formats is very straightforward. When comparing footage from 35mm film on one hand and RED footage on the other, there are many notable differences. The first and perhaps most important is that film has more latitude. When printing the negative of film you can make substantial changes to the exposure, scaling back footage that was overexposed and vice versa. The same can be said for RED footage, but not to the same degree. The RED gives approximately 11 T-stops of latitude, 5½ stops on each side. That can go a long way when fixing the exposure of a shot, but people should not rely on it too heavily and should still be careful to expose their scenes properly [Kadner, p.123]. Although there is more latitude to work with when shooting film, it is a different process than with digital footage. After the film has been scanned into a certain format, for example DPX, it can be modified like any digital file, which can be a very exact process with accurate numbers for all settings. However, if the film needs to be re-scanned to change something about the exposure, the process is not as exact; one can add slightly more red and dim the blue a bit, and so on. In that sense there is more accurate control over footage that was digital from the start. [Thye]

With footage from 35mm film an obvious side effect is film grain. People have gotten so used to this that they sometimes feel digital formats with no grain are missing some factor that makes footage look like film. Another difference is film jitter: as the film runs through the mechanism it might not be placed perfectly centered, so the footage jitters slightly [Plaut]. These qualities of film make the RED footage, and likely other digital footage of similar caliber, look very sharp.

"We ended up adding diffusion filter to knock back some of the RED's sharpness. Film grain has a noticeably softening effect, but with RED you can see every hair on an actor's beard and every imperfection." Arthur Albert, DP, E.R.

Another aspect of digital footage that sets it apart from film is that the pixels are square from the start. Film has crystals that capture the light; when they are developed and scanned into a computer the footage has square pixels, but originally they did not have that shape [Thye]. This, in addition to the absence of grain and jitter, means that the image can be a lot cleaner than 35mm film. That makes it easier to produce special effects and easier to do green screen productions [Plaut]. Some people say the differences are already so small that you cannot spot them, and hence the debate over which is better is not necessary. It comes down to what people prefer and what kind of workflow they like.
As Albert Hughes, director of The Book of Eli, puts it: "No one is watching a movie in the theater and spotting a great Avid or Final Cut Pro edit. And that goes for the image as well, regarding film versus digital. A few years ago, a similar sort of debate was happening over Super 35mm and anamorphic among directors of photography. My feeling is that if the audience can't tell the difference, then it really doesn't matter. If the image is pristine when I look at my footage, I'm happy." Albert Hughes, director [Kadner, p.57]

Portability and Affordability

Although the camera body of the RED One is very small and weighs only 4.5 kg, that doesn't mean it is always ultra portable. Once the accessories necessary for any production are attached, such as handles, viewfinders, monitors, batteries and recording devices, it quickly grows. It is still smaller and lighter than 35mm film cameras and can relatively easily be used for most hand-held shots.

"At this point I think that RED delivers for the money - the picture is just much better than anything near its range, and Jim is just getting started." Mark L. Pederson, Offhollywood Productions [Kadner, p.86]

The RED One is a relatively inexpensive camera at $17,500, but that number quickly rises if the intent is to assemble a complete package ready to shoot a whole film production. There are hundreds of manufacturers that make accessories, and while not all of them are necessary, some are, and they can be quite expensive. One of the most appealing things about the RED, however, is that it is compatible with so many of the older motion-picture accessories such as lenses [Kadner, p.61]. Because of that, existing production companies can switch to RED without having to build up a whole new system.

Although the RED One body is small compared to film cameras, the system quickly grows when necessary accessories are attached.

One aspect where shooting with RED can save money is the cost of film stock. The RED website says: "[...] it is the elimination of the cost of film and processing that make the RED ONE so economically attractive [...]" This is rather misleading, judging by the people I interviewed. Although it is true that film stock is not part of the cost of production, several other things need to be purchased instead. These include recording devices, such as hard drives and CF cards, and backup stations, which can get rather expensive. Overall, according to the people I interviewed, a production can save about one third of its cost by shooting on RED instead of 35mm film.

There is another aspect of the change from film to digital which especially affects smaller markets such as Iceland. There is no longer film that needs to be developed or color graded, which not only changes the process but also changes where the budget is being spent. "After the collapse of the banks and the big change in the Icelandic currency it makes a huge difference not having to send films to London for developing and color grading. The money is still in the budget, these aspects are still expensive, but they are being used here at home within the companies." Örn Sveinsson, Saga Film [Sveinsson]

Methods of Production

"You'd think the film loader position would no longer be necessary, but the loader does a lot of other tasks, like moving gear around, prepping, and slating shots. We needed the help as well because the RED camera has more parts and accessories to keep track of than our previous Panavision cameras.
Then we also added a digital imaging technician (DIT) to be responsible for footage downloads, backing data up, and everything technical to do with the camera." Brook Willard, DIT for E.R. [Kadner, p.197]

A similar story was told in my interview with Saga Film, and both Nick Thye and Sidney Plaut agreed: this is a very common misconception. As with the cost of film, which is replaced by hard drives, the film loader may no longer be needed, but a person to handle the data is. Sidney Plaut has worked as a DIT on several productions, and it is a very important job: if something goes wrong, a whole take or even a whole day's work can be ruined [Plaut]. Prami Larsen mentioned that people using the RED camera from the Film Workshop had many times come back with ruined footage [P.Larsen], which demonstrates the importance of this task. He also knows of others that have experienced this. "I heard that Susanne Bier was shooting somewhere in Europe and she lost two days work. They had said we don't need the DIT but we can use him in another place, and the guy said "I think I'm pretty important here" but they didn't listen and they lost two whole days of shooting. So we have to get experienced DIT's." Prami Larsen [P.Larsen]

When shooting with RED the footage is readily available for quality checking if a computer is brought on-site. It can be part of the DIT's responsibility to check the footage on site and back it up, so there is no mystery about how a take went and no risk of losing it. This is common for productions today and it gives the director more control on-site. Sometimes a rough edit can even be put together on the same day, which can give the director even more information on what might be missing or what needs to be done differently in the next take. The digitization of the filming process clearly brings some very welcome improvements for the crew when shooting. [Plaut]

"The biggest changes these cameras have for Iceland is that you have access to the materials right away. You can start editing on the spot and many directors have done this, they start editing on location. That is a vast difference from how it used to be. Technically you could shoot a commercial and get it on the air in the same day. Technically." Saga Film interview [Sveinsson]

From a post-processing point of view it can also be an advantage on set to shoot with RED. When shooting with film there is no digital output from the camera. Sometimes a low-quality preview is recorded from the viewfinder of the film camera and viewed on a monitor for basic quality checking. The resolution of that preview is not very high, and therefore it is sometimes not detailed enough. For example, it can be hard to see whether a small tracker placed in the scene is visible in the footage because of the low quality of the preview. With RED the footage is available straight away in full quality, which helps with seeing the finer details of the scene and can be very helpful. [Øhlenschlæger]

Methods of Post-Production

Although almost all of today's post-production is done on computers, there are differences between working with footage that was shot on film and footage that originated digitally. Film has more exposure latitude than digital formats, as has been mentioned previously. When film is developed and digitized, a common format to use is DPX, and this process involves some initial color grading.
It also limits the available latitude, so if extreme changes are necessary in the color grading it requires going back and re-scanning the film with different settings. This happens very rarely due to the high cost of developing. What this means is that although a digital format might have less exposure latitude than film, it can give the color grader more flexibility if he has access to the original raw files. [Øhlenschlæger] The RED footage, on the other hand, might need a bit more work in order to achieve the "film look", because that look is of course inherent in filmed footage while digital footage does not have exactly the same qualities. Overall it is neither easier nor harder to color grade digital footage compared to film. [S.Larsen]

A big step up for digital cinematography, as previously mentioned, was when digital cameras became able to shoot progressive frames. There is however still a fundamental difference in how a film camera captures the available light compared to a CMOS digital sensor. The film camera captures the entire frame at the same time, while the digital sensor uses a so-called rolling shutter, where one line is captured at a time. This has no visible effect in most footage and only matters in very extreme situations. One example is when a straight line becomes skewed, for example the edge of a building, when the camera moves at high speed. Another is objects moving at high speed, for example the rotor blades of a helicopter. A third is when light appears or disappears very suddenly, as is the case with lightning, which causes the frame to be only partially lit. Following are examples of these rolling shutter effects.

Three examples of the effects of a rolling shutter. These effects only appear in very extreme situations.

In my interview with Saga Film I was told that they had never had issues with these kinds of effects, although they have worked on many RED projects. [Sveinsson] Framestore did have some experience with this phenomenon. Cameras often need to be tracked in order to recreate their movement in 3D when CG objects are to be added to a real scene. This kind of motion tracking can prove difficult if lines that should be straight suddenly become skewed. [Arnarson]

One process that is very important to any post-production company is keying, or chroma key, that is, removing a certain color from a scene. This is the process used in green and blue screen productions. There are certain differences when keying footage from film or digital cameras. The filmed footage might have more grain, which can make keying more difficult. [S.Larsen] On the other hand, digital footage is more likely to have digital artifacts, which film does not. Some therefore prefer 35mm footage for keying. [Øhlenschlæger] It likely depends on the individual project and the preferences of the compositors.

Because of the film camera machinery, each frame of film is not placed in exactly the same spot each time. This causes a small jitter in the footage. It is not something the naked eye can easily distinguish, but when CG elements need to be added to a scene, or two scenes combined via green screen technology, this can be an issue. [Thye] In these cases either one of the plates needs to be stabilized and the jitter from the other plate applied, or both of the plates need to be stabilized. [Øhlenschlæger]
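As a rough illustration of the kind of jitter matching described above, the following Python/NumPy sketch applies the same small per-frame offsets to a CG plate so that it follows the gate weave of a scanned film plate. This is my own illustrative example, not a tool used by any of the companies interviewed; real compositing software works with sub-pixel accuracy, whereas integer pixel shifts and arbitrary values are used here only to keep the sketch short.

    import numpy as np

    rng = np.random.default_rng(1)
    num_frames, height, width = 48, 540, 960

    # Small per-frame offsets standing in for the gate weave of a film scan
    # (a pixel or two here; real weave is typically sub-pixel).
    weave = rng.integers(-2, 3, size=(num_frames, 2))

    cg_plate = np.zeros((height, width), dtype=np.uint8)
    cg_plate[200:340, 400:560] = 255  # a stand-in CG element

    matched_cg = [
        np.roll(cg_plate, shift=(dy, dx), axis=(0, 1))  # follow the film plate's jitter
        for dy, dx in weave
    ]

The same offsets could instead be inverted and applied to the film plate to stabilize it, which corresponds to the alternative workflow mentioned above.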
When CG elements are added to a real scene, the compositor needs to make sure that they match the scene. This can include color grading the CG, blurring it and adding noise. Noise from digital cameras can be easier to replicate than noise from a film camera, especially if there is heavy noise because of a low-light situation and a high ISO number. According to Thomas Øhlenschlæger from Ghost it does not matter to a post-processing house which kind of footage they work with; when making the budget and project plan, the differences are so small that ultimately they make no difference. [Øhlenschlæger]

Hardware and Software Issues

Simon Duggan, director of photography on Knowing starring Nicolas Cage, talks about his experiences with RED. He says almost any issue they encountered with the camera could be solved by restarting it. He does mention that the RED-DRIVEs are susceptible to vibration, and therefore they opted to use RED-RAM drives or CF cards. Overall the experience was very trouble-free. "The in-camera fault-detection warnings are very good, and there weren't any surprises. One added bonus was the in-camera playback, so we would always do a quick focus check after each take." [Kadner, p.268]

Nick Thye also says that most of the issues with the camera, as with other computers, can be solved by restarting it [Thye]. Sidney Plaut has a similar story, although both of them do speak of other issues, mostly hardware related. These can be the battery plate coming loose, or a faulty cable connected to the hard drive causing it to freeze [Plaut]. Nick also spoke of faulty fans in the camera that should have shut off when recording so they would not ruin the sound recording [Thye]. Both of the Icelandic companies were very happy with the performance and said the camera had relatively few issues [Sveinsson][Arnarson]. "There are always technical issues that pop up in any production like this. Perhaps with RED because of that there are fewer steps, you get the footage right onto the computer and can work with it from there, there are fewer things that can go wrong." Saga Film interview [Sveinsson]

Prami Larsen of the Film Workshop said people working with their camera had encountered many issues; in fact, every project had ended with either the camera broken or the footage ruined to some extent [P.Larsen]. He spoke of the battery plate and the hard-drive connections being broken, and even of people coming back after shooting to find that all their footage was completely black. The RED camera is ultimately a computer built around the CMOS sensor. It lets the photographer know if there is something wrong with the camera, so he should know exactly what is going on in each shot. If there is a DIT on location who can double-check everything, that should eliminate almost any surprises that could occur with the footage.

The Merging of Video and Photography

Screenshot from a moto by Greg Williams. The actor suddenly turns and looks at the audience.

It's not only still cameras that are moving into the realm of video, but also vice versa. Greg Williams is a photographer who has utilized the power of the high resolution in the RED cameras, and he uses RED on more than half of his projects. "There is a massive convergence of photo and film happening right now, and RED is a catalyst for that." He uses the same ARRI prime lenses for both his Canon DSLRs and his RED camera, so the kit he needs does not double in size. He doesn't just take stills from the RED stream; he also makes a combination of moving images and stills which he calls motos.
"Motos look like regular stills. Everyone thinks it's a photo because it has such high resolution, but then it comes to life like the newspapers in Harry Potter," Williams says. According to Williams, shooting with the RED camera is a challenge for some models, because they are so used to the 'click' of the camera shutter signaling them to do a new pose. It is easier for actors to model that way, because they are used to being in front of a video camera. [Kadner, p.292] Williams goes as far as to say that there will soon be almost no boundaries between the two fields. "I think nearly every photographer is going to have to become a filmmaker. RED started this collision of still and motion. It was inevitable, but Jim Jannard is a daredevil who did it at least three years before anyone else was going to." [Kadner, p.294] With the newer versions of RED, the Epic and the Scarlet, the company hopes that they will be accepted as still cameras as well because of their very high resolution. This concept is called the Digital Stills and Motion Camera (DSMC) [Kadner, p.84].

Projects Made with RED

Here are a few examples of recent movies that were shot using RED cameras. From top left: District 9, Antichrist, Green Zone, Beyond a Reasonable Doubt, Knowing and The Social Network.

Peter Jackson was the first to make a film with the RED One camera, in 2007, a short film called Crossing The Line. It was shot with two RED cameras that were only prototypes at the time. It premiered at NAB in 2007 and brought the company a lot of attention at the conference. On the Shot On Red website there is a list of movies that used RED cameras in their productions. There are many big titles with major directors and actors, and the list grows steadily. Shooting with RED has clearly become a viable option for big-budget Hollywood productions, and the cameras are clearly also very popular with independent filmmakers.

The Future for RED

The consensus among everyone I have talked to is that RED and other digital cinematography cameras are taking over and replacing film. Of course there will most likely be a niche market for film for many years to come, much like vinyl records in the music industry, but sooner rather than later film will no longer be commonly used. One thing people mention that RED should do better in the future is reaching a higher frame rate while still maintaining a high resolution [Thye][Øhlenschlæger]. The digital formats are getting better every year, and the ease of production and cost savings are benefits that cannot go unnoticed when shooting digitally. There are of course competitors to RED in the area of digital cinematography cameras. The Alexa from Arri is one of them, and according to the people at Ghost that camera has similar qualities to RED and is even a bit better [Øhlenschlæger]. Sammy Larsen at Minerva has also worked with the Arriflex D-21 and said it performed very well. He added: "The Sony Ex1 and Ex3 XDCAM can also be good alternatives with a decent overall quality, but require more work in color grading to get a film look effect." [S.Larsen] Only time will tell if RED is able to keep its edge in this field with the introduction of the Epic and the Scarlet in the coming years. In Denmark, as in the rest of the world, more and more productions are relying on RED. It is mostly the big directors and cinematographers that still use film, and perhaps that is just because it is what they are used to and they can afford it. [Øhlenschlæger]
HDSLR Video

DSLR cameras started appearing in the '90s, based on the SLR technology that has been used since the early days of still photography. For many years DSLRs were only able to shoot still frames, but in the '00s the maximum FPS of the cameras was rising and approaching that of video. In 2008 several manufacturers started to include the option to shoot video, and since the cameras had such large sensors the definition of the video was quite high. The first DSLR camera to include HD video was the Nikon D90, which was able to shoot 720p24. With the addition of high-definition video the cameras adopted an H, creating the five-letter acronym HDSLR. With the introduction of the Canon 5D Mark II camera, amateur digital filmmaking changed drastically, and to some extent so did professional filmmaking. It was the first DSLR camera to incorporate full HD video, and with such a large CMOS sensor and the availability of great lenses the film look has never been so easy to achieve. The HDSLR cameras record footage very differently from the RED cameras, for example. They use delivery formats rather than raw formats, which means they offer very little latitude, that is, very little possibility to change exposure settings in post-production. What this means is that if a scene is not lit perfectly and/or the camera's exposure settings are slightly off, it is difficult to fix those issues in post-production without the image quality suffering drastically.

HDSLRs in Professional Situations

There is a big debate in the film industry about whether DSLRs are a viable option for a professional production or not. In my interviews I came across opinions from both ends of the spectrum. At Saga Film they told me that directors had come to them saying they had heard about this amazing cheap and portable camera capable of shooting HD footage, and that they wanted to shoot with that. Saga Film responds simply in those situations: you cannot. The cameras are not made for professional use, the delivery formats are simply too restricting, and they won't do it [Sveinsson]. On the other hand, when I talked to Sidney Plaut this was the first thing he said after only a few words about his business of renting out his RED camera: "RED was the big thing, but then something else came along that could be even bigger, even though it's smaller, and that's the DSLRs." Sidney Plaut, interview, Oct. 12th 2010 [Plaut] He is very enthusiastic about the possibilities of those cameras. Although they might not be able to produce footage for the silver screen yet, he has produced commercials of very high quality with a Canon 7D [Plaut].

Director Peter Harton is the first director in Denmark to produce a series on HDSLR for DR2. The production is a satire called Rytteriet and is shot on a Canon 1D Mark IV. According to Peter it is a bit of a challenge using the HDSLR, but it is worth it because of the very special quality and shallow DOF that the video gets. Another good example of HDSLRs being used in professional situations is the final episode of the sixth season of the medical drama House.

Two TV shows that have used HDSLRs in their productions: the Danish Rytteriet and the medical drama starring Hugh Laurie, House.

The show is normally shot on 35mm film, and the reason for using HDSLR cameras for this episode, specifically the Canon 5D Mark II, was to be able to film in small spaces. In the episode a building collapses and a woman is trapped beneath.
A big portion of the episode is filmed in that very tight situation, where Dr. House tries to save her life. They used only Canon lenses and filmed the whole episode with this camera, and this was the first prime-time TV show to be filmed in its entirety on an HDSLR. Canon was very excited about this and issued a press release on the day the episode aired, congratulating the cast and crew of the show. The show's director, Greg Yaitanes, answered questions online from fans of the show and HDSLR enthusiasts. When asked how the quality compared to the traditional camera they used, he simply said: "I loved it and feel it's the future." Overall it seems this experiment was a big success, and although they might be rather biased, the Canon people ended their press release with these words: "This milestone marks a paradigm shift in the way professional cinematographers and filmmakers capture HD video."

Icelandic Feature Film Made with HDSLRs

In Iceland, filmmaker Ólafur Jóhannesson and production company Poppoli wanted to make a movie shortly after the collapse of the Icelandic banks in the fall of 2008. They wanted to do it out of their love of making movies, even though there was no money available and it was very unlikely that they could get any grants or sponsors. Therefore they wanted the cheapest solution available, and after careful consideration they decided to shoot it on the Canon 5D Mark II. They had considered the Canon 7D, but the deciding factor was that the 5D has a full-frame chip, which allowed for even shallower DOF. The movie they made was called Borgríki, or City State, and the two cinematographers who worked on it wrote a nine-page paper about their experience so that others might benefit from what they learned during production [Bjarnason]. Their report is a very interesting read and describes many issues that people can come across in such a production. They do stress that although the Canon 5D Mark II was their choice for this production, it is not necessarily the best for all situations or the best of the Canon family. They chose it first of all because it was an inexpensive option, but it also suited the script well. It is a dark, fast-paced thriller, which means that the hand-held feeling and the fast-paced editing, necessary to hide the unavoidable errors in focus caused by the shallow DOF, were just the right look for this story. They were very happy with the outcome, and at the end of their paper they ask themselves whether, if they had to do it all over again, they would still choose the Canon 5D Mark II camera. Their answer was simply: "Yes!"

Problems with HDSLR Video

The most obvious problem for professionals when shooting video with HDSLRs is the one that has already been mentioned: they don't shoot raw footage. They deliver very compressed files that have hardly any extra latitude available for changing exposure levels in post-production. This cannot be overcome at the moment, although it might be possible in the future. The delivery formats, such as H.264, do not allow for much post-processing, so the best option is trying to get the shot perfect on location. But there are many other issues. Several of the people I interviewed mentioned that if the plan is to do any green screen or special effects work, people should stay away from the HDSLR cameras [Thye] [Plaut] [S.Larsen] [Øhlenschlæger]. They don't allow for pixel-perfect manipulation because of the compressed format.
It can also be hard to match edits perfectly if two shots were filmed in slightly different lighting, because of the lack of latitude. The delivery format basically puts major restraints on anything that can be called post-production. The Saga Film interviewees [Sveinsson] didn't even want to do simple work with these cameras because of such heavy restrictions. Sammy Larsen also talked about noise being too prevalent in the cameras and said that the moiré effect was one of the weaknesses of the HDSLRs [S.Larsen].

Two issues with HDSLRs: the moiré effect and digital noise.

If people can live with these problems and decide to shoot with an HDSLR camera anyway, the camera alone is usually not enough. As with any video production you need lenses, preferably high-quality lenses. There are lenses made for video that fit some of the HDSLRs [Plaut], but they are very expensive. Most people will use standard still lenses, which can work very well. If the lens needs to be focused while shooting, a device needs to be attached to the camera that lets a focus puller focus the camera remotely. The same can be said about zoom: usually it cannot be used in the traditional way, and the camera needs to be fitted with a special device that allows for remote zooming. The camera is also not very steady when held the way a traditional still photographer would hold it, so some sort of shoulder stabilization system is usually needed. These are expensive and not optimal in most cases; the camera is often still not stable enough and/or heavy enough for comfortable control. This is changing now as many developers are making rigs and new products hit the market every month. A brand new and promising company is Handy Film Tools, which makes affordable and sturdy rigs for DSLRs and other types of smaller cameras.

Two of the cameras that were used to shoot the Icelandic film Borgríki, shown here mounted on their rigs for better hand-held control. [Bjarnason, p.6]

The sound recording will in most cases need to be done with a separate recording device, or in some cases in post-production, since the sound recording of the HDSLR cameras is understandably not very strong. This also means that the footage and the sound need to be joined and synced in post-production, and there needs to be a clapper board or the equivalent on set, used for every shot. These are only a few of the issues that need to be dealt with when shooting with HDSLRs compared to regular cinematography cameras. That being said, there is a slew of manufacturers that have developed professional solutions to all these issues. Some are more expensive than others, but the bottom line is that if there is a will to shoot video extensively with an HDSLR camera, there is a way.

Pre-Analysis Conclusion

This look into how professional digital footage is being made shows us that digital technology is allowing more and more people to create great-looking footage for a fraction of what it used to cost. Sidney Plaut put forth this thought on the subject: "The thing that's interesting is that finally it's more or less down to skill now. That is the most fair democratic thing.
If you have a MacBook and access to the 4K files, you can color grade and do exactly the same thing as they do in Hollywood, even though it might take you longer." Sidney Plaut [Plaut]

Even though not everyone would agree that amateurs can get their footage to look like it came from Hollywood, no one can deny that it is easier than ever to make good-looking films. The Sundance film festival has seen a drastic increase in entries recently: in six years it has gone from 750 entries to 3600. [Tryon] This is not necessarily good in every respect. One could compare this to the changes in still photography, which has become so automated, and every frame so cheap, that people shoot more than ever. In most cases this makes each frame less special, and less thought goes into it before people press the button. People can shoot ten frames of the same thing, pick the best one and fix any problems on the computer. Some filmmakers think it is enough to rent the RED camera and their film will then look great, but the truth is that lighting and the overall quality of the production influence the final look far more than the quality of the camera [Øhlenschlæger]. If it gets too easy to produce movies, will the good indie films get lost because there are so many bad ones? Mark Gill, the CEO of The Film Department, has a strong opinion on this evolution. He says: "The digital revolution is here. And boy does it suck." He knows exactly how he wants it to be, and says the motto of the indie world should be: "Make fewer better." [Tryon]

Of course there are many upsides as well to technology getting cheaper and easier. Prami Larsen of The Film Workshop talked about up-and-coming directors who are able to shoot much more and get more experience. Before, when there was only film, the stock was so expensive that it was hard to get enough time actually shooting on set for the director, and everyone else, to become familiar and comfortable with the process [P.Larsen].

The Panasonic AF100, the first HDSLR and camcorder hybrid.

This pre-analysis has only touched upon two of the major players in this digital revolution, RED and HDSLRs. There are many others, the latest addition being the Sony PMW-F3 announced in November 2010, but it would be redundant to list all the digital cinematography cameras on the market today. The change is also just beginning and it is hard to see where exactly it is going. The HDSLRs are an interesting competitor for the bigger cameras, and that fight is only just beginning. Camera manufacturers are constantly evolving their products, and it has now come to the point where the first HDSLR-style camera focused on video is being released next year. It is called the AF100 and it is from Panasonic; it is essentially an HDSLR camera in the shape of a video camera. This is no doubt only the first camera of its kind, and we will see many more of these hybrids in the not so distant future.

But where is this evolution headed? Quality is getting cheaper. Everything is going digital, resolutions are growing and cameras are getting smaller. Nick Thye had this to say about how RED cameras became so popular: "RED has this huge force of coming into the market while the market was going down. Everybody wanted the same product for half the price within commercials, done twice as fast. One of the reasons why RED became so popular and still is, is because it's cheap.
It's not a perfect solution but it's a very well made solution for helping production companies meet the demands of the clients." Nick Strange Thye [Thye]

So if everyone wants the same product for half the price, the market has to respond. But what is that product? What should you spend money on if you have limited resources? What is the name for the look that everyone wants to achieve? It is that look and that feeling that so many amateurs fail to reach because of their lack of knowledge of the craft of filmmaking. It is the all-important film look. Just as I mentioned in the introduction, we don't want perfection, we want experiences. The film look is not about looking perfect, it is about looking like what we see when we go to the cinema. 24 FPS is not the perfect frame rate or the best available today; it has been the standard for film for many decades, and it is what we know and have come to expect of films. Film has grain and is not necessarily the best available medium, but it is what we know. Footage from digital cameras or digital cinema projectors shows no film grain, yet grain is still something we associate with film and are attached to in some way; it has a certain charm. There are many other aspects of film that have a huge impact on the experience but are perhaps not directly related to the general film look. These are elements such as the story, the sound design, and the set design or mise-en-scène. For the sake of this project we are only looking into the technical aspects of the visual elements that all films have in common. Going into the analysis we want to answer the question of what the most important elements of the film look are.

Initial Problem Statement

From these thoughts comes the following initial problem statement: "How do people perceive the different visual qualities that are commonly referred to as the film look as opposed to amateur looking video, and which of those qualities are the most important in achieving that look?"

Analysis

This analysis is devoted to finding the most important elements of the film look and seeing how they work. The goal is then to find appropriate test methods to learn what the audience thinks of those elements.

From Video to the Film Look

To understand the film look we can look at what people who shoot digital video are doing to make their footage look more like film. They are thinking about what the differences are between the two mediums in order to make the gap smaller. According to filmlike.com and other similar websites, the process of making video look more like film is called filmizing. The website Inside The Hive regards two elements as the most important for achieving the film look: frame rate and depth of field. That is a common standpoint, but filmlike.com also mentions interlaced vs. progressive scan, dynamic range and shutter angle. Many more websites exist on the subject and they commonly refer to these same things; in addition they mention aspect ratio, film grain and color grading, for example. The website Learning DSLR Video is made by a man taking his first steps in shooting professional video, and as the name implies he uses a DSLR camera. He mentions many of the aforementioned topics and also that it is good to use production methods similar to those of the big film studios. These include three-point lighting, film-like camera moves (dolly, slider or jib), keeping the camera steady and not using zoom.
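Pulling these recommendations together, the filmizing advice from the websites above can be summarised very roughly as a small set of target parameters. The following Python sketch is only an illustrative summary of my own; the specific values are common conventions (for example the 180-degree shutter rule, which gives roughly a 1/48 s exposure at 24 FPS) rather than prescriptions from any single source cited here.

    # An illustrative summary of common "filmizing" targets discussed above.
    # The values are conventions, not requirements from any particular source.
    filmizing_targets = {
        "frame_rate_fps": 24,            # the long-standing standard for film
        "scan": "progressive",           # avoid interlacing artifacts
        "shutter": "180 degrees",        # about 1/48 s exposure per frame at 24 FPS
        "depth_of_field": "shallow",     # large sensor and/or wide aperture
        "aspect_ratio": "widescreen",    # 16:9 or wider
        "color_grading": "applied",      # a deliberate, graded look
        "film_grain": "optional",        # sometimes added back for character
    }

Each of these parameters is discussed in more detail in the sections that follow.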
Such production methods can of course add immensely to the production value, as can a good story, good acting, set design and everything else that has to do with the production of a film. The more professional each of these elements is, the more likely the audience is to perceive the footage as being of high quality. For the sake of this project we will not discuss what is in front of the camera but focus only on the technical aspects of the film look. For the test, each of these production elements should either be discarded or be kept constant and as professional as possible. We will now look at each of the individual aspects that are important to achieving the film look.

Frame Rate

The first common frame rate when movies were starting to get popular was 16 FPS, but when sound was introduced this number went up to 24 FPS [Skidgel, p.10]. This has been the standard for film ever since. When shooting digital video the frame rate can be set in the camera, which is of course the easiest option, but 24 FPS has only recently become available in many cameras, as was mentioned in the pre-analysis. There are also other options, including plugins that can lower the frame rate from 30 or 25 down to 24 frames and convert interlaced footage into progressive scan. There are also tutorials available that help people achieve similar results with other methods. These methods can be time consuming and expensive and do not always give good results.

"Usually film and video differ in resolution and timing, but the video format 24p has both the flexibility of video and the look of film." Producing 24p Video [Skidgel, p.7]

The Panasonic AG-DVX 100.

The digital 24p format, meaning 24 FPS progressive scan, was first introduced in the Panasonic AG-DVX 100 in 2002, although that camera was SD and not HD [Skidgel, p.30]. It has since become a very popular format in digital video cameras. As was described in the pre-analysis, it was a big step for amateur cinematography when HDSLR cameras capable of shooting 24p appeared. Frame rate might not be something that people consciously notice, but it is a feeling people might get because of what they are used to. It would most likely be hard for most people to distinguish between footage shot at 30 FPS and footage shot at 24 FPS. The movement might be sharper at 30 FPS and the motion blur different, although that also depends on shutter speed, as discussed later.

Interlaced vs. Progressive

The 'p' in 24p stands for progressive scan, which means every frame displays the whole image. Interlaced video saves bandwidth in television broadcasts and was originally developed for CRT televisions. It doubles the apparent frame rate by showing half of the image (all odd-numbered lines) in one field and the other half (all even-numbered lines) in the next.

An extreme example of the effect interlaced video can have with fast movements.

This can produce undesirable flicker effects, especially with fast movement. In order for video to look like film it is important that these flicker effects are nonexistent. Therefore progressive scan, where each new frame shows the whole image just as traditional film does, is the desired look.
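As a small illustration of the field splitting just described, the following Python/NumPy sketch (my own example; the frame sizes and random data are arbitrary) separates two frames into odd- and even-line fields and weaves them back into one interlaced frame. When the two fields come from different moments in time, this weave is what produces the combing and flicker on moving edges mentioned above.

    import numpy as np

    # Two stand-in frames captured a fraction of a second apart (random data here).
    frame_a = np.random.randint(0, 256, (480, 720), dtype=np.uint8)
    frame_b = np.random.randint(0, 256, (480, 720), dtype=np.uint8)

    field_odd = frame_a[0::2]    # lines 1, 3, 5, ... (counting from 1), shown first
    field_even = frame_b[1::2]   # lines 2, 4, 6, ..., shown one field-interval later

    interlaced = np.empty_like(frame_a)
    interlaced[0::2] = field_odd
    interlaced[1::2] = field_even  # fields from different instants -> combing on motion

    # A progressive frame, by contrast, is simply frame_a (or frame_b) shown whole.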
Shutter Speed

Three pictures taken at different shutter speeds, from slower to faster respectively.

In traditional older motion picture cameras, and still in some newer ones, there is a mechanical rotary disc shutter. As the name implies, it rotates in front of the film, thereby controlling how long each frame is exposed to light. The same principle is at work in digital cameras, although there it is not mechanical. The shutter speed is how long light is allowed to shine on the sensor, and it thereby determines the amount of motion blur that can be seen in the frame. High shutter speeds are often used when filming sporting events, because there the desire is to capture very fast movement very accurately. The tendency of amateur digital cameras, when set to automatic, is to set the shutter speed as fast as possible. The traditional film look, however, uses a slower shutter speed and hence has more motion blur.

Depth of Field

Depth of field is an important concept in both still photography and cinematography. A shallow depth of field makes the subject in focus stand out from the other parts of the image. A large depth of field makes most of the image appear sharp. It is a complex phenomenon and is determined by many parameters; some influence it directly and others only influence our perception of it. [Cope, p. 3] The main parameter that influences depth of field is aperture. When the iris of a camera is opened up wide, the depth of field gets shallower. As the iris is made smaller and less light is let through, the depth of field increases. Theoretically, a pinhole camera has infinite depth of field.

This image shows the relationship between the different elements at play that are used to describe depth of field.

The change in focus does not happen abruptly, but gradually towards both the foreground and the background. Technically there can be only one plane in the space of the image that is perfectly in focus, but the depth of field is defined as the part of the image that is acceptably sharp. The 'circle of confusion' is used to define how blurred a point needs to be in order to be deemed out of focus. It is loosely defined as a circle that people cannot perceive when viewing an 8x10 inch photo from a distance of 1 foot. The circle of confusion changes according to the print size and viewing distance.

These images are taken with a 400 mm lens and a 50 mm lens respectively. The depth of field is the same, only the perspective of the background changes.

The depth of field parameter only has to do with what is in focus, not with what happens outside the focus area. Different types of blurriness exist, and the blurred areas are also called bokeh, a term which comes from Japanese. It is a common misconception that the depth of field changes with focal length. In the example on the right, we see two pictures taken with two lenses with very different focal lengths. For this test the subject is made to have the same size in the frame, so the photographer was not at the same distance from the subject in both pictures. If the pictures are examined carefully, one can see that they have the same depth of field, that is, the amount of blur both in the foreground and in the background is the same. The picture taken with the longer focal length appears to have a smaller depth of field because of its narrower angle of view, which magnifies the background. That being said, this effect can still be used much like shallow DOF, to make the subject in focus stand out more from other parts of the image.
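To make the interplay between aperture, focus distance and the circle of confusion a little more concrete, the following is a minimal sketch of the standard thin-lens depth-of-field approximation. These are textbook formulas, not taken from [Cope]; the function names and example numbers are my own and purely illustrative.

    def depth_of_field(focal_length_mm, f_number, focus_dist_mm, coc_mm=0.03):
        """Approximate near/far limits of acceptable sharpness.

        Uses the common thin-lens approximation with a circle of confusion
        coc_mm (0.03 mm is a value often quoted for full-frame 35mm stills)."""
        f = focal_length_mm
        # Hyperfocal distance: focusing here keeps everything out to infinity acceptably sharp.
        hyperfocal = f * f / (f_number * coc_mm) + f
        s = focus_dist_mm
        near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
        far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
        return near, far

    # Example: a 50 mm lens focused at 2 m. Opening the iris from f/8 to f/1.8
    # shrinks the zone of acceptable sharpness dramatically, i.e. a shallower DOF.
    for f_number in (8.0, 1.8):
        near, far = depth_of_field(50, f_number, 2000)
        print(f"f/{f_number}: sharp from {near / 1000:.2f} m to {far / 1000:.2f} m")

With these example numbers the sharp zone spans roughly 1.7 m to 2.5 m at f/8, but only about 1.9 m to 2.1 m at f/1.8, which matches the statement above that opening the iris makes the depth of field shallower.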
Lens adaptors can also help in achieving shallow DOF where applicable: with them, still-camera lenses can be used on various video cameras, which can allow for a shallow depth of field. These adaptors, however, can be very expensive. Another possibility is to fake this effect in post-production, but this is very difficult and seldom produces good results. The website Inside The Hive proposes that new filmmakers buy a DSLR camera if they want this professional look.

Angle of view, also called field of view, is related to depth of field. It is the amount of area that is visible in the recorded image, and it is a function of the focal length of the lens and the size of the sensor. Cinematographers, as well as still photographers, might need to think about the combination of their cameras and lenses for what they want to shoot, but this has very little effect on the audience.

Aspect Ratio

There is no single standard aspect ratio for film; many different ratios are currently in use. They are all widescreen, that is, wider than the Academy aspect ratio of 1.375:1. Television for many years had the 4:3 format, but in recent years HD has been taking over with its widescreen 16:9 format. People have long associated the widescreen format with film because that is what they are used to, for example through the letterbox effect which appeared when movies were shown on 4:3 television sets. Today people are used to seeing widescreen footage on TV and therefore might not associate it with film as strongly as before.

Color Grading

Just how much character a colorist can add to the look of a film can easily be demonstrated. The following image from Transporter 2 is a perfect example. The first image is a frame from the raw footage and the second one is the same frame after being color graded. The difference is remarkable. Before color grading the scene almost looks like it was made by an amateur, with its lack of contrast and high-key lighting. In the graded version the skin tones look much better, the contrast is higher and the scene looks like it was shot at night. Not all films use grading this extreme, but it is safe to assume that color grading plays an important role in any big film production.

This scene from Transporter 2 clearly shows the drastic effect color grading can have on the look of a film.

Color grading, also called color timing, is important for making the different shots in a scene fit seamlessly together, but the main goal is to create a color look across an entire project, whether it is a film or a TV series. This should be invisible to the viewer. Color grading can be split up into primary and secondary grading. Primary color grading is applied to the whole image and is used to create an overall look for the film. That is done by modifying the brightness and contrast of the red, green and blue channels. The primary color grading is performed at the beginning and at the end of a project. When color grading is applied to specific areas of the image, for example only to the skin tones, it is called secondary color grading. O Brother, Where Art Thou? by the Coen brothers was the first film to be entirely digitally color graded, and that was only ten years ago, in the year 2000. That gave colorists control that was never before possible. But more control does not necessarily lead to better outcomes, and many critics of film are complaining that all films have the same look these days, using orange and teal as their primary colors. In my visit to Saga Film in Iceland I was told that colorists in Iceland are a bit like pop stars [Sveinsson]. They are not easy to get hold of, and what they do cannot be done by just anyone.
It is a difficult profession, and when a director teams up with a colorist their collaboration often lasts a lifetime.

Dynamic Range
Although digital video cameras have been getting better and closer to the quality of film, there is still one area where film is undoubtedly the winner. It has more exposure latitude41, that is, it allows for more adjustment of exposure in post production. Because of this flexibility there is more possibility of a high dynamic range when footage has been shot on film.

Film Grain
Film grain was prominent in the early days of cinema. As film stock improved in quality the effect of the grain decreased, but it is still clearly visible today, especially when footage from film is seen up close. As technology changes and quality improves, for example with Blu-ray discs, there is a tendency to "fix" the movies from the golden era of cinema by removing grain when the films are re-released. This is something that many people object to42, because the grain is a part of the film's character and people associate films with film grain to some extent. With digital photography and cinematography there is no film grain. There may, however, be other kinds of noise, for example digital artifacts caused by compression. When people go to the cinema they see a projection from a film print that might have been in use for some time and become worn. Scratches and dust become more obvious as the film is run through the projector several times a day.43 These scratches and specks of dust are something that many people might subconsciously associate with the film look.

Elements to be Tested
It is clear that there are many elements that make film look like film, some more important than others. Shutter speed, dynamic range and field of view, although important for the filmmaker to be aware of, are not things that the audience usually notices. Frame rate and interlaced versus progressive scan are things most audiences would not think about, but they could subconsciously give them a feeling of film rather than video. According to Thomas Øhlenschlæger, interlaced video will always be strongly associated with amateur video and will hopefully soon be a dead standard [Øhlenschlæger]. This might happen as television technology changes. Film grain is something the audience might be aware of, but in most cases this effect is likely to go unnoticed, especially today with the emergence of digital cinema. Changes in aspect ratio are something audiences are very likely to notice. With high definition becoming dominant in most video production, television for example, the widescreen format is likely to become less associated with movies, while the 4:3 format has become associated with older video. The first thing most people mention when asked to describe the film look is depth of field. It is a very important element of photography and cinematography and it can give images a very professional feel. Color grading also matters a great deal, as the examples in this chapter showed. According to Thomas Øhlenschlæger, in addition to quality production methods, color correction is one of the big elements in getting closer to the film look [Øhlenschlæger]. It not only allows for creativity in shaping a film's look but also enhances the images to make them look their best, with stark contrast and vivid colors. The elements that have proven to be most significant and apparent are depth of field and color grading. These two will therefore be used for testing.
Final Problem Statement
The two elements to be tested are depth of field and color grading. The possibilities are to investigate these elements separately and also to examine what happens when they are applied together. This brings us to the final problem statement. “Which of these two elements of the film look, depth of field and color grading, do audiences most associate with quality footage and do they create a synergetic effect when applied in unison?”

Test Methodology
This chapter describes what kind of testing is suitable to answer the problem statement. Factorial experiment design is described, the experience and bias of the audience are touched upon, and finally the questions for the test are formulated.

What Will Be Tested
The point of this test is to learn about an audience's perception of two of the elements of the film look, depth of field and color grading. The first thing the experiment should try to answer is how connected each of them is to the film look in the minds of the audience. Another thing that could be examined is the interplay between the two elements. As the problem statement suggests, there is a possibility that the elements used in unison will create a synergetic effect. As was apparent in all the literature cited and the interviews conducted for this project, the main ingredient of the film look is quality. Although the different elements discussed in the analysis chapter all contribute to the differences between video and film, the bottom line is that films have quality production methods in addition to quality equipment. Therefore, for the purposes of this test, the terms “film look” and “quality footage” will both be used to describe the same effect. Utilizing the two elements in question is not as simple as turning them on and off. Depth of field can vary greatly, from an extremely narrow plane to near infinity. Color grading has even more variety: it not only varies in intensity but can also differ immensely according to the choices made by the moviemaker, which depend on what kind of look he is trying to achieve for his film. In order to simplify the testing process and to leave room for exploring what effect the elements might have together, only two variations of each element will be used. Depth of field will be either deep or shallow, and color grading will either be turned on or off. Further discussion of the design of the videos can be found in the next chapter.

Factorial Experiment Design
Although one could list countless things that influence a phenomenon as complex as film, we are looking at only two factors, and only two settings of each. Experiments where two or more factors, and all combinations of those factors, are taken into consideration can be called factorial experiments [Trochim]. A simple diagram of the possible combinations of the two factors and their settings shows that there are four in total. If we imagine the two elements as being either turned on or off, we could say that clip 1 has both elements turned off and is likely to be furthest from the film look, and that clip 4 has both elements turned on and is likely to be closest to the film look. Clips 2 and 3 have one element turned on and the other turned off. As Trochim explains on his website44, there are several possible outcomes from a study like this.
To take an example: if there is an increase in perceived quality both between clips 1 and 2 and between clips 3 and 4, that would indicate that DOF has a positive effect on perceived quality. Similarly, if there were an increase both between 1 and 3 and between 2 and 4, that would show that the color grading had a positive effect. Another possible outcome to consider is interaction effects. If 2 and 3 showed very little or no increase in quality, but 4 showed a significant increase, that would imply synergy between the two effects. Yet another possibility is that the factors have a negative impact on each other. With a factorial experiment design, both the effect of each element and the interplay between them can be investigated.

Instructions and Bias
It is important to think about how much the audience knows before they participate in the test. This includes the instructions given prior to the test and possible bias from previous knowledge. It would also most likely have a big influence on the outcome whether the participants are allowed to see more than one of the clips. The participants will each have had personal experiences with films, and of course that might influence their answers. The best way to minimize the effect of those differences is to keep the material general, both the questionnaire and the test footage: they should be constructed with widely known methods and common effects, so that most people will be familiar with them. Before the test starts it is important to try to give each participant similar expectations. This can be controlled by using similar words to describe the experience they are about to have. In order to get people's help it might be necessary to tell them what the project is about without going into detail. To avoid bias, it is better to promise the participant an explanation after he or she completes the test, if they are interested. Only very basic information about the test should be given beforehand, and it is important that there is no mention of the factors of depth of field or color grading.

One or More Viewings?
The experiment is greatly influenced by whether the participants are allowed to see one clip or several. If they are allowed to see more than one clip, the experiment becomes more complex and bias from the other viewings needs to be taken into consideration. People who see more than one clip are very likely to answer questions based on their previous experience in the test. If they see a difference in a clip compared to the previous one, they might place their answer on the scale relative to the other video instead of using the entire scale. The effects of this could be minimized by arranging the viewing order of the participants so that the bias spreads equally across the different videos. Another kind of bias could also occur. If participants are asked to answer questions after seeing the first clip but before seeing the second, they know the question they are supposed to answer before the second viewing, but not before the first. To minimize this bias, one option is to ask each participant to read the questionnaire before watching the first clip; then the knowledge is the same in both viewings. Because one of the main goals is to investigate whether there is a difference when one of the factors is turned on or off, it would be interesting to see how the responses of the participants change from viewing one clip to another.
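To make the factorial logic described above concrete, the following is a minimal Python sketch of how the main effects and the interaction could be estimated from the mean rating of each of the four clips. The mean values in it are made-up placeholders, not results from this project.

# Hedged sketch: main effects and interaction in a 2x2 factorial design.
# Keys are (shallow_dof, color_graded); the mean ratings are placeholders.
means = {
    (0, 0): 6.0,  # clip 1: deep DOF, no grading
    (1, 0): 6.5,  # clip 2: shallow DOF, no grading
    (0, 1): 7.0,  # clip 3: deep DOF, graded
    (1, 1): 8.0,  # clip 4: shallow DOF, graded
}

# Main effect of DOF: average gain from going shallow, across both grading levels.
dof_effect = ((means[(1, 0)] - means[(0, 0)]) + (means[(1, 1)] - means[(0, 1)])) / 2

# Main effect of color grading, averaged across both DOF levels.
grade_effect = ((means[(0, 1)] - means[(0, 0)]) + (means[(1, 1)] - means[(1, 0)])) / 2

# Interaction: does adding grading change the size of the DOF effect?
interaction = (means[(1, 1)] - means[(0, 1)]) - (means[(1, 0)] - means[(0, 0)])

print(f"DOF main effect: {dof_effect:+.2f}")
print(f"Color grading main effect: {grade_effect:+.2f}")
print(f"Interaction: {interaction:+.2f}")

A clearly positive interaction term would correspond to the synergetic effect asked about in the problem statement; a value near zero would suggest the two elements act independently.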
Returning to the number of viewings: adding more than two clips to each participant's viewing experience would make the test exponentially bigger because of the number of viewing orders necessary to eliminate bias. Therefore, two clips will be shown to each participant. Even though people are allowed to read the questionnaire beforehand, eliminating bias towards the questions, it is very likely that there will still be a bias depending on which clip is viewed first. To eliminate that bias, the reverse order of viewings should also be tested. With four different clips to be tested, and two of them seen by each participant, the number of combinations is 12; the following graphics demonstrate this, with each possibility appearing as two options because of the reverse order of viewings. The main goal, as previously stated, is to see how the perception changes when either effect is added. The last four options (the last two boxes in the graphic) are not necessary for investigating this. Also, if a participant viewed clip 1 first and then clip 4, both parameters would have changed, and if the response changed between viewings it would not be possible to tell which parameter caused it. Even if the last four options are excluded, sufficient data should be collected from the other instances of the test to compare all the differences. Therefore, in order to simplify the experiment, only the first eight options will be tested.

Test Questions
The role of each participant is quite simple: for each viewing they are only required to watch a clip and answer a question about it. Because of the different experiences that people have with film, the term “film look” might mean very different things to each participant. Also, if the participants were asked how likely it was that the clip they saw came from a professional film, they would judge not only the quality of the visuals but also the audio, the acting, the set design and everything else that a film entails. Another possibility is that they might answer according to the fact that they know the person or the surroundings in the clip and therefore know that it is not from a professional film. Hence, and in light of the close connection between “quality of footage” and the film look discussed in this report, the quality wording might be better for getting a general response from the participants. People asked about quality are also very likely to have different associations with that word, and therefore it could help to refer to specific things at each end of the scale. The low end could be a “home video”; this implies that little thought has gone into the production and that the equipment was not of a high caliber. The high end could be a “big budget Hollywood production”; although people might have very different feelings towards the film industry in Hollywood, very few would dispute that it produces very high quality visuals. Referring to these types of video production also tells the participants that “quality” does not refer to compression or similar technical aspects of image quality, but to the quality of the production and the overall film look. Because of the vast differences between the low-end and high-end examples set for the edges of the scale, it is appropriate to supply a sufficiently large scale for people to answer on. For this test a scale of ten will be used. Providing a scale with an even number of possibilities also forces people to choose which extreme the footage is closer to, because there is no middle value.
The question, formed on the basis of these decisions, is the following: On a scale of 1-10, how would you judge the quality of this footage? 1 being a home video and 10 being a big budget Hollywood production. Since there are four different instances of the video and eight different combinations of viewings, and since the factorial experiment setup allows for comparisons between the factors, there is no need for further questions. The data acquired should be sufficient to provide some answers to the questions posed in the problem statement. A simple questionnaire also allows for testing more individuals, which means more accuracy in the test. The same question will be asked for both viewings of the clips. To see what kind of viewers are being tested, they will be asked about their age and gender. Since it is important that all participants can view the footage in the same manner, an additional question will be added asking whether they have normal or corrected-to-normal vision.
The questionnaire. The box in the top right corner was for me to note which testing scenario the participant viewed.

Design and Production
This chapter deals with the design of the test footage according to the guidelines set in the analysis and test methodology chapters. Details of how the footage was produced and of its post production follow.

Test Footage Design
There are many factors one needs to consider in order to shoot a video, but only two that need to change between the clips: DOF and color grading. The other factors should remain as similar as possible between the clips, so that people's answers change only because of the changes in the two elements being tested. The footage should be made as professional as possible and pleasant to look at, so that there is a connection to quality footage in the minds of the viewers. At the same time the footage should be simple and straightforward, so that viewers are able to think about the quality of the footage rather than follow events occurring on screen. Two different shots are needed, with different depths of field. Both will then be color graded in the same way, making a total of four clips. In order for the shot to be easily replicable there should be very little action in the scene. The scene should be steady, that is, the camera should not be hand-held; that looks more professional and also guarantees that the framing stays the same in both shots. The scene should be a single shot with no edits. That way it is more likely that the shots can be made to look very similar, and it gives the audience a chance to focus on the quality of that one shot and answer according to it. The length of the shot should be enough for people to absorb information about the quality, but not so long that it gets boring to look at; between 10 and 15 seconds should suffice. Most movies revolve around people, and that is what viewers are used to seeing in films. Color grading is also to a large extent focused on making skin tones look good in front of the background45. Therefore it is good to have a person in the shot. For the shot to be easily replicable the person should be doing very simple things, and very little acting should be necessary, so that the audience is not distracted by an actor's poor performance.
The whole scene complements the skin tone of the actor. From Transformers 2.
The setting of the clip should be somewhere that allows for a nice blurry-background effect in the shallow DOF shot.
The background should be simple and rather static.
A very shallow DOF. From Transformers 2.
All camera settings should be the same in both shots, as long as they allow the two shots to look as similar as possible. The depth of field changes from one shot to the other, and the change should be significant and obvious. The changes after color grading should also be significant, and the graded clips should look as professional as possible. The same color grading should work both for the clip with shallow DOF and for the clip with deep DOF. A modern Hollywood blockbuster look should be the main guideline for the color grading, but it should not be so extreme as to degrade the quality of the original footage. Sound design plays an extremely important role in any modern film, and there are vast differences between the sound of a professional film and sound recorded in camera by amateurs. Including sound in the clips, either the sound recorded with the footage or professional sound such as a soundtrack from a feature film, might push the responses towards one of the extremes of the scale and thereby minimize the differences between the clips. Therefore, to make the participants focus on the visuals and to minimize potential bias caused by a soundtrack, the clips should be without sound.

Shooting the Test Footage
For the test footage to look as professional as possible, and for the DOF to be very shallow, the footage needed to be shot on a high quality camera. The first choice would of course have been a RED camera, but with the small budget of this project it was unfortunately not an option. The cameras available at AAUK were also not optimal, as they all use Mini-DV tapes and are not likely to achieve the shallow DOF needed; the footage from them simply would not have looked professional enough. The best available option was to borrow an HDSLR camera. In my interview with Nick Thye he mentioned that he had such a camera, and when I contacted him he agreed to lend it to me so I could get the shots I needed. It is a Canon 550D, a camera released in 2010 which has the same sensor as the 7D, roughly the same size as a frame of 35mm motion picture film. In other words, it is an excellent camera to shoot video with. The lens is a Canon 18-135mm f/3.5. I used my own tripod, and although it is meant for still photography it was sufficient, because the scene would not have any pans or tilts.
The Canon 550D and the 18-135mm lens used to shoot the footage for the test.

First Two Attempts
Finding a location and the perfect setting for the footage turned out to be quite a challenge. The first location I chose was next to a canal in Copenhagen in the middle of the day. It was sunny and the ground was covered in snow. It quickly became apparent that it would not be possible to open up the iris enough for the DOF to become sufficiently shallow. The next try was that evening in Tivoli in Copenhagen. I got two of my friends from Medialogy to assist me, Camilla Hägg and Niels Christian Nilsson. The setting was lovely, and this time, when the iris was opened up to achieve the shallow DOF, the shots looked very good. This time, however, the other end of the spectrum was a problem: after many test shots it became clear that it was simply too dark. The lights in Tivoli were not strong enough to allow the iris to be closed sufficiently for the DOF to be deep. What I learned from these two attempts was that it is very hard to shoot two shots in the same setting with very different DOF without any accessories.
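A quick exposure calculation illustrates why. Assuming the ISO and the available light are held fixed (neither is specified at this point in the report), every stop the iris is opened has to be compensated for by a correspondingly shorter exposure time or by ND filtration. The f-numbers and shutter speeds below are illustrative, not the settings actually used.

import math

def exposure_value(f_number, shutter_s):
    # EV = log2(N^2 / t); two settings expose alike when their EVs match
    # (at the same ISO and in the same light).
    return math.log2(f_number ** 2 / shutter_s)

ev_deep = exposure_value(11, 1 / 50)   # small iris for deep DOF
wide_open = 2.8                        # wide iris for shallow DOF

# Shutter time needed at f/2.8 to keep the same exposure as f/11 at 1/50 s.
matching_t = wide_open ** 2 / 2 ** ev_deep
print(f"EV at f/11, 1/50 s: {ev_deep:.2f}")
print(f"Matching shutter at f/{wide_open}: roughly 1/{1 / matching_t:.0f} s")

With these illustrative numbers the shallow-DOF shot needs a shutter of roughly 1/770 s to match f/11 at 1/50 s. In bright light the shutter therefore quickly hits its limit or the shot overexposes, while in dim light the deep-DOF shot demands an impractically slow shutter, which is exactly the dilemma encountered at the canal and in Tivoli.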
One thing that could help in that regard is ND filters46, which cut the incoming light and thereby allow for a wider aperture (a lower f-number) in bright settings. Another option is using studio lights in a very controlled environment, although changing the lighting between the shots might make it hard to keep the two scenes nearly identical.

The Perfect Setting
Since I did not have access to any filters, I decided to try more locations. I tried to shoot in a Metro station, a well-lit indoor environment that could work as a setting from a movie. It was better than the Tivoli lights, but still not bright enough. The fourth setting, outdoors on a rather heavily clouded day, turned out to work for both shots. I asked my friend Birna Rún Ólafsdóttir to help me, and we found a very nice location on a bridge over a canal in Christianshavn. Birna leaned against a railing and looked across the bridge. Since the shot was very short it was not necessary to have her perform a specific task; she simply appears to be waiting for someone to arrive. The background was the canal stretching far behind her, which allowed the effect of the shallow DOF to be very apparent.
The two original shots. Deep and shallow DOF respectively.
The shot with the deep DOF was made at f/12 with a 1/30 s shutter speed. Because no filters were available, the shutter speed needed to be increased significantly for the shot with shallow DOF. This would have affected a shot with motion blur from moving objects or a moving camera, but because of the very static nature of this shot the effect was not visible. The second shot was made at f/4.5 with a 1/400 s shutter speed. The actor's placement, the placement of the camera, the focus and all other camera settings were unchanged between the shots.

Color Grading and Preparing Test Footage
The footage was shot in rather windy conditions, which created a nice effect in the hair and clothes of the actor but also meant that the camera was shaking slightly. For the two clips to match and look professional, the movement was removed with match-moving (tracking and stabilization) in Adobe After Effects. Color grading was also performed in After Effects. The focus was kept the same in both shots; the effect of the shallow DOF, however, is that the plane of sharp focus is much thinner. Once the footage was loaded into the computer it became apparent that the subject looked sharper in the deep DOF version. This is most likely because the focus was not placed 100% correctly, an error only apparent in the shallow DOF version. For the subject to look the same in both clips, with only the background changing, the deep DOF footage needed to be blurred very slightly with a Gaussian Blur effect. To get the movie look through color grading I used the popular tools and filters from Red Giant Software called Magic Bullet, more specifically the plug-ins called Colorista and Looks. Colorista is an advanced three-way color corrector, and Looks is a more complex program that allows for very specific changes to all aspects of the footage; Magic Bullet Looks allows effects to be applied at five different stages of any shot. I found a tutorial47 made by Stu Maschwitz, a very experienced color grader and special effects artist and the author of the acclaimed blog on cinematography and related subjects called ProLost48. In this tutorial he describes how to achieve the looks of four different Hollywood film productions released in the summer of 2009.
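As a rough illustration of the kind of adjustment a three-way corrector performs, here is a minimal NumPy sketch of a teal-and-orange style grade: shadows and mid-tones are nudged towards teal, highlights towards orange, contrast is increased, and a simple vignette darkens the edges. It is only a stand-in for the idea, not the actual Colorista or Looks processing used for the test clips, and the strengths chosen are arbitrary.

import numpy as np

def grade_teal_orange(img, strength=0.08):
    """Very rough three-way-style grade on a float RGB image in [0, 1]."""
    luma = img.mean(axis=2, keepdims=True)           # crude luminance estimate
    teal = np.array([-1.0, 0.3, 1.0]) * strength     # push towards blue/green
    orange = np.array([1.0, 0.3, -1.0]) * strength   # push towards red/yellow

    shadows = np.clip(1.0 - 2.0 * luma, 0.0, 1.0)    # weight for dark areas
    highlights = np.clip(2.0 * luma - 1.0, 0.0, 1.0) # weight for bright areas

    graded = img + shadows * teal + highlights * orange
    graded = (graded - 0.5) * 1.15 + 0.5             # gentle contrast boost

    # Simple vignette: darken towards the image edges.
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    vignette = 1.0 - 0.4 * np.clip(dist / dist.max(), 0.0, 1.0) ** 2
    graded *= vignette[..., None]

    return np.clip(graded, 0.0, 1.0)

# Usage: graded_frame = grade_teal_orange(frame) on a float RGB frame in [0, 1].

On the actual test footage the corresponding adjustments were made with Colorista's power mask and color controls inside After Effects, as described later in this chapter.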
Video screenshots, deep DOF. Top left is the original footage, top right has increased contrast and saturation, and the others are the four different movie looks inspired by Stu Maschwitz.
The films whose looks were simulated in the video were Transformers 2, Where the Wild Things Are, The Taking of Pelham 123 and Terminator Salvation. Although these four movies look very different at first glance, their art direction is based on similar philosophies: restrict the colors, and use complementary colors49. The video clip used in the tutorial was similar to the footage shot for this project in that it was not lit with a specific look in mind. Only the light available on location was used, and the look was achieved through color grading alone; the same is true in this project. It appears that since most filmmakers are filming humans and want their heroes as well as their villains to look their best on screen, they want to complement their skin tones, and for this reason they often end up with very similar color themes. There is some discussion on various websites about whether this is a good thing or not. Todd Miro, an indie filmmaker, is appalled by it, as is very clear from a recent entry on his blog50. He takes numerous examples from big Hollywood productions that have very similar color themes; it seems as if orange and teal are the two most common colors in all of them.
A screenshot from Adobe Kuler51, a web-hosted application for generating color themes, shows that the complementary color to skin tone is teal.
Stu Maschwitz also talks about this trend in his video. Whether it is likable or not, and whether people notice it consciously or not, this is likely a color theme that people associate with many recent movies. Therefore the color correction for this project was made to resemble that look, with this color scheme in mind. The order in which the effects are applied is important. As can be seen in the screenshot from After Effects, the first effect applied to this shot is a vignette made using Colorista. It is created with a rectangular power mask, where the exposure is lowered on everything outside the mask. This is a popular effect in recent movies and puts great emphasis on the person in the foreground. The second effect is also made with the Colorista filter, and its purpose is to change the colors and contrast. Both the dark areas and the mid-tones are pulled towards blue or teal, while the highlights, which mostly affect the actor's face, are pulled towards orange. The contrast is made more distinct by darkening the already dark portions and, conversely, adding a bit of exposure to the highlights. Nothing is written in stone in color correction; everything depends on the scene at hand and the look to be achieved. Lastly, a Gaussian blur is added: the actor was in sharper focus in the deep depth of field footage, and for her to look the same in both shots a blur of 0.7 was applied to it. Another effect, which cannot be seen in the screenshot, was the tracking of the footage to stabilize it. The four versions of the videos can be found on the CD accompanying this report.
A screenshot from After Effects showing some of the effects applied to the footage.
The clips were exported in full HD in the H.264 format. Although the format compressed the footage slightly, the effect is hardly visible, and all clips went through the same treatment, so there should be no bias in that regard.

Testing and Results
First this chapter describes the execution of the test, and then the test results are listed.
Analysis of the results will be saved for the discussion chapter.

Conducting the Test
There were two options for how to conduct this test: online or in person. Doing it online would have meant very different viewing conditions for the participants: people have different computers, which means different screen sizes, brightness, viewing distances and overall screen quality, and people's internet connections could also pose problems if they were to stream the videos in high quality. Therefore I decided it would be best to conduct all the tests in person, each in a very similar setting. The most straightforward way to conduct the test was to ask fellow Medialogy students to participate; they are usually willing to assist and are familiar with this kind of testing. To make it easier for the participants I brought my computer along and asked them to take the test in their own group room. This way I took up less of their time and they were more willing to participate. I made sure that students nearby who had not taken the test did not see the screen while someone else viewed the videos, so everyone was seeing the videos for the first time. I created a slideshow in Keynote which explained the procedure, so that everyone would get exactly the same instructions. On the first slide the participants were asked to read the questionnaire and then play the first video by pressing the space bar. To avoid confusion once the video started, the slide also stated that there would be no sound. After the first video played, the slideshow automatically went to the next slide, where the participants were asked to answer the first question before playing the next video. After it had played, the next screen appeared and asked them to finish the questionnaire. The participants all viewed the videos from a similar distance and in a similar setting. I stepped aside for each test and made sure the participants did not feel that someone was watching them. One participant asked during the test whether he was allowed to see both videos before answering the questions; I explained that I wanted an unbiased answer to the first viewing, and he complied with the instructions.

How Results are Described
The test included four different videos. The participants looked at two videos each, in one of eight different orders, and answered one question about each of the videos. There are several ways to look at the results of the test. The first is the classic way of taking averages of the answers, looking at the standard deviations and figuring out whether the results are significant or not; we can call these absolute results. Since the participants answered on a large scale, because of the subjective nature of the question, and because one of the goals is to see the change in their answers from one clip to another, it is also feasible to look at the results relatively, that is, how much the perceived quality increases or decreases when viewing the second clip; we can call these relative results. The results are described in separate sections of this chapter according to these two interpretations. To minimize redundancy, and to describe clearly which clips are being discussed at any time, it is helpful to call them clips 1 through 4 according to the diagram that appeared earlier in the report: clip 1 has neither element, clip 2 has shallow DOF only, clip 3 has color grading only, and clip 4 has both.

Absolute Results
The test was conducted on December 6th 2010 at the IHK campus in Ballerup, Copenhagen.
Participants were 48 in total, 41 male and seven female, and their average age was 24.3 years. They all reported normal or corrected-to-normal vision, and all took the test on the same computer in a similar setting.

First Viewing and All Viewings
Since the participants answered the question about the clip they viewed first, it is possible to look at those results separately, without them being biased by a second viewing. Each of these averages is based on 12 answers; the mean ratings themselves are shown in the accompanying table and graphs, and the standard deviations of the first-viewing ratings were 1.82 for clip 1, 0.98 for clip 2, 1.15 for clip 3 and 1.34 for clip 4. Averages of all the results taken together, without regard to when they were given, will of course be biased by when the viewings took place, so that needs to be taken into consideration: half of the scores are first viewings and half are second viewings with different first viewings. Each of these overall numbers is an average of 24 ratings, with standard deviations of 2.08, 1.67, 1.58 and 1.41 for clips 1 through 4 respectively. The same numbers are also shown in graphs, including the standard deviation for each average; again, each first-viewing average is based on 12 participants and each all-viewings average on 24.

Second Viewing Averages
The averages and standard deviations of the second viewings are shown in the accompanying tables, with the first-viewing numbers (based on 12 participants each) in gray for comparison; each second-viewing average is based on six participants. The standard deviations of these groups were 1.82, 2.28, 1.37, 1.34, 0.89, 2.10, 1.15, 1.47, 2.56, 0.98, 1.03 and 1.84; their mapping to the individual viewing scenarios can be seen in the accompanying tables. It can also help to visualize this in graphs: instead of comparing the second viewing of six participants with the average from 12 participants, as in the tables, they are put side by side with the average of the first viewings of those same six people.

Relative Results
As stated before, it can be helpful to look only at the change from one answer to the other within each individual's answers. Because the viewing order changed between the different test scenarios, it can be difficult to see whether a change in score was in the direction that was anticipated or the opposite; the following results therefore all account for those differences and state whether the increase or decrease was in the anticipated direction or not. First, let us see how many people changed their rating from the first to the second viewing. The headline figures are the totals regardless of which clips were viewed; since only one of the factors, DOF or color grading, changed between the clips in each of the eight scenarios, the changes can also be broken down by the parameter that changed. 25 participants rated the clips differently in the anticipated direction: 11 when DOF changed and 14 when color grading changed. 9 participants rated both clips the same: 4 when DOF changed and 5 when color grading changed. 14 participants rated the clips in the direction opposite to what was anticipated: 9 when DOF changed and 5 when color grading changed. We can also look at the averages of the relative answers, that is, how much the answers changed on average from one viewing to the next, divided both by which parameter changed in the video and by the direction of the change. When the DOF changed, the average change in answer was 0.16 in the anticipated direction in one viewing order and 0.16 against the anticipated direction in the other. When the color grading changed, the average change was 0.67 in the anticipated direction when the non-graded clip was viewed first.
When the color graded clip was viewed first, the average change was -1.00, that is, the non-graded clip was rated a full point lower, which is also in the anticipated direction.

Student's T-test
Whether there is a significant difference between the measured quality of one clip and another can be assessed with Student's t-test. All the tests below are one-tailed, paired t-tests. In the accompanying table the results from the eight different test setups are compared, the first viewing against the second viewing. A second table compares the first viewings of the different clips against each other; clip 1 vs. clip 4 and clip 2 vs. clip 3 are also compared there (first viewings only), although no participant saw those two clips in the same test. At the bottom of that table, the second viewings of each of the clips are compared against each other.

Comments from Participants
The participants were allowed to leave short comments on the questionnaire. Out of 48 participants, 11 wrote a comment; the most relevant comments are listed below, and the complete test results can be seen in the Appendix.
Clips 1 & 2: “I think it depends on the situation, if they want a close-up. But clip one looked most like a big production.”
Clips 2 & 1: “I liked the blurred background, but it depends on the purpose.”
Clips 2 & 4: “I thought the second clip was a bit too ‘red’.”
Clips 2 & 4: “It’s amazing what color correction can do :)”
Clips 3 & 1: “Couldn’t see any difference.”
Clips 3 & 4: “Hard to see difference.”
Clips 4 & 3: “Both the same except for background blur.”
Clips 4 & 3: “A bit difficult to see the difference. Would like to see it again.”

Discussion
Looking only at the averages of the answers does not give sufficient insight into how the two parameters that changed in the videos affected the participants. It seems to have been a good decision to have the participants look at more than one clip each, because the results reveal much more when the changes between viewings are examined. In this discussion chapter we will first look at what can be read from the results, both absolute and relative. Then we will look into why the results turned out as they did, and whether anything could have been done differently in the testing process to deliver clearer results.

Discussion of Test Results
The averages of the first viewings are rather similar. They range from 6.17 to 7.33, the highest being clip 2, with no color grading and shallow depth of field, and the lowest being clip 4, with both color grading and shallow depth of field. It is interesting that looking at all the viewings together puts the four averages even closer together, but further analysis of that is not very useful because of the different biases that enter from the other viewings. Not only is the first-viewing average highest for clip 2, but it also has the lowest standard deviation; it would seem that the audience agreed that clip 2 was the best looking. But these are only the results of the first viewing, and this observation becomes very interesting when we start to look at the results of the second viewing. People who viewed clip 2 after viewing either clip 1 or clip 4 first gave it a much lower rating, in fact the lowest rating by far of any second viewing of the four clips: it scored an average of 6 after people saw clip 1 first and an average of only 5 after people saw clip 4 first. It is hard to say whether this result is random or not, but it seems that clip 2 looked nice to those seeing one of the clips for the first time, and less so to those who had viewed one of the other clips first.
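For reference, the kind of significance check used above can be reproduced with a few lines of Python; the paired form applies where the same participants rated both clips. The ratings below are made-up placeholders, not the data collected for this project.

# Hedged sketch: a one-tailed, paired t-test on two sets of ratings given by
# the same participants. The numbers are placeholders, not the actual data.
from scipy import stats

first_viewing = [6, 7, 8, 6, 7, 5]    # e.g. ratings of the non-graded clip
second_viewing = [7, 8, 8, 7, 9, 6]   # e.g. ratings of the graded clip

t_stat, p_two_sided = stats.ttest_rel(second_viewing, first_viewing)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2

print(f"t = {t_stat:.3f}, one-sided p = {p_one_sided:.3f}")
# A p-value below 0.05 would indicate a significant increase in rating.

Any statistics package that offers a paired t-test will produce the same numbers; the sketch is only meant to show what the reported p-values correspond to.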
DOF Specific Results
If we look only at the people who saw clips 1 and 2, no matter which one they saw first, their opinion on average did not change from one viewing to the next. The people who saw clip 2 first, though, gave both clips a 7.5 average, while those who saw clip 1 first gave them an average score of only 6. We can see from this that the clips were perceived as very similar, although the perceived quality changed depending on which clip was viewed first. Continuing to look only at the DOF change, this time with clips 3 and 4, where the color grading is on, those findings support the results from clips 1 and 2: the change is very small on average, under 0.5. Switching over to the relative results, they confirm what seems apparent in the absolute results. The average changes between answers are very small in the four scenarios where the DOF changes between clips. Four people rated both clips the same, eleven people rated the clip with shallow DOF higher, but nine people rated the clip with deep DOF higher, which was not anticipated. There was clearly no agreement on which clip was better when the DOF changed.

Color Grading Specific Results
There is a clearer difference between color graded and non color graded clips than there is between the clips where the DOF changed. Whether looking at clip 1 vs. clip 3 or at clip 2 vs. clip 4, the color graded clip scores higher on average in every case. That being said, the difference is not very extreme. The most pronounced difference is seen when people saw clip 3 first and then clip 1: the average score for clip 3 was 1.43 higher than for clip 1. Overall there was a clear trend to rate the color graded clips higher. The relative results support this, but also show that the participants were not in complete agreement. Five participants rated the color graded clips lower than the non color graded clips, and those five were spread across the different viewing scenarios. Another five saw no difference between the clips, but 14 people answered in the anticipated way, that is, the perceived quality was higher when the clips were color graded. The average relative change was 1.0 when the participants saw the color graded clips first, and 0.67 when they saw the non graded clips first. This is a minor difference, and overall it shows that no matter in which order the clips were viewed, the general response was to favor the color graded ones.

Significance of the Results
Any number under 0.05 in the t-tests would show that there are significant differences in the results and that it is unlikely that the differences are random. The t-tests performed show that the differences between the answers are not big enough to conclude anything from them with certainty. The only significant difference that could be found was between the first viewings of clips 2 and 4, where the t-test gave a result of 0.018. Two other results are around the 0.05 mark: people who first watched clip 3 and then saw clip 1, and the first viewings of clips 2 and 3. The numbers are larger when looking at the results from the DOF change, which goes to show once more that there was more agreement among the participants about the quality of the color graded clips. It is interesting to note that when viewers saw clips 1 and 2, in either order, the number from the t-test is 0.5, which is very high; it is very clear that there is no significant trend between the viewings of those clips.

The Testing Method
Analyzing the results of this test can only go so far.
As the t-tests show, the results are not conclusive enough to support anything more than assumptions based on certain trends seen in the data. It is clear that the effect of the color grading was bigger than the effect of depth of field. This is confirmed by the absolute results, even more so by the relative results, and the t-tests also show this trend. However, the DOF also influenced the answers of the participants, just not in the way that was expected. What could have caused this? A few things come to mind that could have caused the vague results, and to make things clear, let's put them in a list (in no particular order).
➣ The preference of the audience. Perhaps roughly half of the participants simply liked the clip with the deep DOF better. Although shallow DOF is often linked to quality production, whether in still photography or cinematography, it is not unlikely that many people prefer the look of a deep DOF. In the case of the color grading, they might not have liked this particular look: there are millions of different ways of grading and several trends in the film industry, and this might not have been a look that the audience cared for.
➣ The mandatory change. Perhaps the audience did not really see a big difference from one clip to the other and just answered differently after the second clip because they thought there must have been a difference, given the setup of the test. It could also be that they subconsciously registered that there was a difference but could not really put their finger on it, and then answered differently after the second clip without really knowing which one they felt was better.
➣ The helpers. Because the test was performed on Medialogy students, who are very familiar with such experiment setups and run tests of their own around the same time, they might, consciously or not, have assumed that the second clip was supposed to be better and therefore given it a higher grade. Another way they could have believed they were helping is if they noticed the difference and realized exactly what was being tested and what results would be expected; then, even if they did not feel that the effect made any big difference, they answered in the way they believed would be beneficial.
➣ The differences were not extreme enough. The effect of the DOF is rather subtle, especially since the audience was not allowed to switch between the two clips or view them side by side. One participant, who saw clips where the DOF changed, noted in the comments that he would have liked to view the clips again to make a comparison. Two participants wrote on the questionnaire that they could see no difference between the two clips; one of them watched clips where the DOF changed and the other clips where color grading was added. A total of nine participants scored the two clips they watched equally, and they were spread across the two parameters.
➣ The audience does not link these parameters with quality. Perhaps some of the participants noticed the changed parameters but did not think they increased the quality of the clips. One participant commented: “I liked the blurred background, but it depends on the purpose.” Incidentally, he still gave the clip with shallow DOF a score two points higher than the other one, which might link to the helper theory. Of course, it would be very interesting to be able to prove that the audience did not link the parameters to quality, because that is the opposite of what was originally anticipated.
Unfortunately there is no way to state conclusively that this was the case with the results at hand.
➣ The production of the clips. It has been mentioned that perhaps the differences between the clips where the DOF changed were not extreme enough. It could also be that the setting chosen was not optimal for displaying the difference between shallow and deep depth of field. An object in the foreground could have helped create a more noticeable effect; that is a method often used when the DOF is shallow, but the same shot with deep DOF might look wrong. A more classic shallow depth of field look is in darker surroundings, with lights being blurred in the background, as we tried to create in Tivoli. The deep DOF shot was also not as deep as it could possibly be: the background was still slightly blurry, and the shot looked very nice. Even though shallow DOF is often connected to quality, that does not mean deep DOF is connected to poor quality. The color grading was perhaps not as good as it could have been. One participant commented “I thought the second clip was a bit too ‘red’,” while another said “It’s amazing what color correction can do :)”. There were clearly differences of opinion on this matter, as could be expected, and whether this blockbuster look was the right one to go for is hard to say.
This list covered reasons why the test participants or the clips might have influenced the results, but what about the questionnaire and the test procedure? From the observations already mentioned it seems that more qualitative questions could have helped in understanding why the audience responded the way they did. That would have been a whole different way to design the test: asking qualitatively about personal preferences rather than quantitatively about perceived quality, which would require a completely different train of thought. There were not that many comments written on the forms, but those that were written gave an interesting insight into what those participants were thinking during the test. As one person commented, and most likely some others thought, seeing the clips more than once would have helped them to compare the two clips. In that case it is safe to say that most people would have noticed exactly what the differences between the two clips were, and perhaps that is not a bad thing. A side effect might be that they would start to analyze the clips and try to notice details about them that are not relevant to this test. It was partly to ensure that everyone followed the same procedure and saw the clips an equal number of times that they were not allowed to see them more than once, but perhaps the other option could have worked as well. Having a focus group look at the four different clips and discuss them is another way to go. That might help people put into words how they feel about the differences, and the discussion might bring up interesting points about other elements of the film look that people think are important. Overall I believe the test was simple, straightforward and well executed. The clips, although they might not have been perfect for this test, were professionally made in the opinion of the audience. The average score for all viewings of all the clips was 6.73, and considering that the scale went from “home video” to “big budget Hollywood production”, that must be considered rather high quality.
Three of the participants even gave a clip the score of 10. That should in itself suggest that video filmed with modern still cameras can, at its best, rival bigger productions.

Conclusion
This report started with a look into the world of cinematography. The digital revolution seems to be well on its way, and it is the opinion of all the experts I interviewed that within a few years it will take over nearly the whole market. It seems, however, that the fundamentals of the art of cinematography have not changed much at all. Whether that is because of the conservative nature of human beings or because the old way simply works so well is hard to say. The only thing that is very apparent is that there is a certain look that most people making moving images want to achieve. With the digital revolution it has become easier than ever for the amateur cinematographer to achieve that look, and so I wanted to find out exactly which elements are most important to be aware of. The analysis of the different qualities that supposedly constitute the film look brought me to the two parameters that seemed most important, depth of field and color grading. I produced four different versions of the same scene and tested them with a simple test in which the participants were asked to rate their quality. The addition of color grading had a positive effect on the perceived quality, but the change in depth of field seemed to have a much less pronounced effect. It is still my belief that these two parameters are the most important of all the technical elements in the film look. It is worth stressing that these technical elements are only a part of the whole production of a film; the quality of all other aspects of the production, lighting for example, is very important and likely of much greater importance than the technical matters I have described. It is hard to say in which direction cinematography will evolve in the coming years and decades. It is my prediction that, with the new technology available, the definition of what a film is or how it should look will slowly evolve and morph into something that 100 years from now will be completely different from how we see it today. The fact that even with the new technology we are still trying to imitate the old one only goes to show how early we still are in this transformation. Diminishing quality by adding film grain to digital footage will in the future seem absurd, although it is standard practice today. One thing I believe is certain: people will never tire of creating fiction or documenting our reality, and the medium of moving images is one of the most compelling for that purpose. The tools and methods might change, but the fundamentals stay the same, and the reason is simple. We all love a good story.

Bibliography
Sveinsson, Örn and Hreinsson, Ingvar. Expert interview at Saga Film. April 12th 2010. Reykjavík, Iceland.
Arnarson, Jörundur and Kristjánsdóttir, Linda. Expert interview at Framestore. April 9th 2010. Reykjavík, Iceland.
Thye, Nick Strange. Expert interview. September 24th 2010. Copenhagen, Denmark.
Larsen, Prami. Expert interview at the Danish Film Workshop. October 14th 2010. Copenhagen, Denmark.
Plaut, Sidney. Expert interview. October 14th 2010. Copenhagen, Denmark.
Larsen, Sammy. Expert interview from Minerva Film, via email. November 18th 2010. Copenhagen, Denmark.
Øhlenschlæger, Thomas. Expert interview from Ghost. November 22nd 2010. Copenhagen, Denmark.
Kadner, N.
RED: The Ultimate Guide to Using the Revolutionary Camera. 2009. Peachpit Press.
Rombes, N. Cinema in the Digital Age. 2009. Wallflower Press.
Tryon, C. Reinventing Cinema: Movies in the Age of Media Convergence. 2009. Rutgers University Press.
Skidgel, J. Producing 24p Video. 2005. Focal Press.
Cope, G. Depth of Field: The Misunderstood Element in Image Design. 2009. Oklahoma. http://www.photoclasses.com/Graphics/pdf/Depth%20of%20Field-general.pdf
Trochim, W.M.K. Research Methods Knowledge Base. 2006. NY, USA.
http://www.socialresearchmethods.net/kb/
Bjarnason, Bjarni F. and Jónsson, Gunnar. Borgríki & Canon EOS 5D Mark II. 2010. Reykjavík. http://sense.is/upload/sense.is/files/6IwVMQ.pdf

Appendix
Test Results
Following are the relative answers, one bar for each participant. Where there is no blue bar, the participant rated the clips equally. If the bar is above 0, they answered in the anticipated direction; if the bar is below 0, they answered in the direction opposite to what was anticipated. On the next page are the complete results.

Interview Transcripts
Following are transcripts of the interviews that I conducted for this project. Where I am speaking, the text is in italics. Where I have added input or comments that were not part of the interview, the text is in [italics and square brackets]. The interviews were only semi-structured; I mostly let people talk and asked questions where I felt more input was needed or when we had gone off track. Where an interview went off topic, that is, had no relevance to this report, I did not include that part in the transcript and instead added a few bullet points if there were occasional remarks that could be of relevance. The interviews are presented in the order in which they were taken. Following is a brief description of the interviews.
On April 12th 2010 I met with Örn Sveinsson, post production manager, and Ingvar Hreinsson, IT and technical manager, at the Saga Film headquarters in Iceland. After the interview I got a tour of the facilities and met some of the staff.
I visited Framestore's headquarters in Iceland on April 9th 2010. There I met with Jörundur Arnarson and Linda Kristjánsdóttir, both VFX artists. Unfortunately I did not record the Framestore interview and hence cannot provide a transcript; instead I provide thoughts about the visit and what I remember most from my talk with Jörundur and Linda.
I talked to Nick Strange Thye, who graduated from the Medialogy Master's program in 2006 as one of the first graduates from the education. He has since worked for a few companies in the field, most recently at Ghost, and is now mostly on the production side. He has been on productions with RED cameras, 35mm film and HDSLRs. We met at a café in downtown Copenhagen on September 24th 2010.
I visited Prami Larsen, the head of the Danish Film Workshop, on the 12th of October 2010. We talked in his office and then I got a tour of the Film Workshop facilities, which were very impressive.
Sidney Plaut owns and rents out a RED One body and accessories through his company, Spearhead Pictures, and also works on many of the productions he rents the camera to. He does many other related things in the industry as well, as I found out. On October 14th 2010 we met in Hellerup in Copenhagen and walked his dog Maxi through a park in the neighborhood while talking about the industry.
Sammy Larsen is a Flame artist at Minerva Film. He answered a few questions for me via email on November 18th 2010 about RED versus 35mm film and the important elements of the film look.
I met Thomas Øhlenschlæger, a VFX supervisor and 3D artist at Ghost, on November 22nd 2010 at the Ghost headquarters in Copenhagen. Before the interview he showed me around the facilities of this rather big post house, which has around 35 employees.

Saga Film Iceland, interview transcript
On April 12th 2010 I met with Örn Sveinsson, post production manager, and Ingvar Hreinsson, IT and technical manager, at the Saga Film headquarters in Iceland.
After the interview I got a tour of the facilities and met some of the staff.
***
After we started working more digitally, a lot more money stays in the system here, but there are problems that come along with it. For example, there are no color graders in Iceland who have studied the field, that I know of. It's a field that is comprised of many different disciplines. Colorists here are kind of like pop stars. They are a special breed. If a color grader gets into contact with a director and they have a good relationship, then he can be his colorist for life. But people here have had to send most of the material abroad.
About the switch to digital in cinematography: It's like comparing a typewriter with Microsoft Word, or comparing a film camera with a digital still camera; it is the same thing for the movie industry. The leap is amazing. Although the beast that is the RED camera is nowhere close to perfect, the thirst of the market for a solution to this has been so extreme that people take it with open arms. People make their own solutions with it. The base might cost 2m ISK, but the add-ons that you might need could cost double that. It is however designed in a way that it can work with things that you had before, such as the lenses; it's the same lens mount, the same audio connections. What they left for the industry was to make the workflows to get the video ready for the screen. It's not enough to shoot it; you need to adjust it, calibrate, edit and make many arrangements, do post production, get it into 3D work. All of this they left up in the air in a sense. They made codecs for the computers to understand the data, but at the time when the first machines were released, the best computers could hardly work with the material, it was so heavy. It's not until today that we can work with it comfortably; the resolution is so high and there are such vast amounts of data. As an example, we are having trouble because of the latest movie that we did, we have such huge quantities of data. Just moving material between two drives takes many hours.
These changes have been very positive for the industry here in Iceland. Almost all the directors have decided to go for this, whether it is for making TV programs, movies or advertisements. There are around five or six RED cameras in Iceland. Saga Film has two. Pegasus has two. After the collapse of the banks and the big change in the Icelandic currency, it makes a huge difference not having to send films to London for developing and color grading. The money is still in the budget, these aspects are still expensive, but they are being spent here at home within the companies. Many people thought when they heard of this that everything would be much cheaper. They said things like "OK, so we are going digital, let's take this huge cost of film and delete it from the budget." In reality you need to buy hard drives or discs. You also need backup equipment, because you used to have the film negative no matter what happened, but with the digital format you need to be extra careful not to lose anything. You need to have all the originals and have backups of them. You also need more gigabytes or terabytes to work with when processing and making edits. People also thought that you only needed one assistant cameraman, since one of them used to be taking care of the film, loading and unloading and such. But that person's title changed from clapper loader to data handler.
So his job changed from taking care of the film to mounting and un-mounting the hard drives, connecting them to a computer and loading the footage, taking backups and such. So this 2nd AC job has changed but many people originally thought it would be unnecessary. The biggest changes these cameras have for Iceland is that you have access to the materials right away. You can start editing on the spot and many directors have done this, they start editing on location. That is a vast difference from how it used to be. Technically you could shoot a commercial and get it on the air in the same day. Technically. When we have been doing TV series the editing process has started two days into shooting. The editing is being made alongside the shooting. This is very good for the budget for the Icelandic companies, there is more work that needs to be done in-house instead of sending it away. That way the money stays here. Although it is not as much cheaper as people wanted to believe, it is definitely cheaper to shoot digitally. Each minute of a film is cheaper shot on RED than shot on film. It has not been calculated precisely how much cheaper it is. One minute of 35mm shots for a movie vs. one minute on RED. It would be a very interesting thing to look into. It would not manage to get to be 50% cheaper, but perhaps somewhere around 25%. With our currency right now it could be even more cheap. We did a season of a lawyer drama very quickly. It was six episodes and we shot two and two episodes together. Two days after shooting the first two they were ready, before we had started the next two. It was actually too fast, it was a lot of pressure. It is cheaper this way, to have the crew working 30 days instead of 45 days but it also poses some problems. There is more work to be had since we don't have to send the film out of the country. The director now comes here and sits with us while doing the color grading, instead of having to go out of the country. Also gathering all the footage, syncing sound and video and more things like that. Now we do more of those things here so there is more work for us. About the issue of storage. We have been taking the footage and burning them on BlueRay discs, which is not optimal. We take the flash cards and put them in a card reader and load the data to a hard drive. Then you needed a permanent backup if the hard drive would crash. We ended up with around 200 BlueRay discs that hopefully will never have to be used but we have them just in case. All of this is evolving right now, people are trying to figure out what the best options are. We are using tape decks which are rather safe, they can't crash like the hard drives. About the workflows, they have changed rather dramatically. What we used to do is for a commercial for example, we took tapes and loaded all the footage in low quality into the computer, the director made a rough cut of the commercial and then we only loaded those shots from the tapes in full quality. Today we are working with proxies, which are also low quality versions of the shots, but there is no need for tapes because the shots are already there. You can load them instantly if you want, for example into a color correcting program. It's all the same file in fact, the proxy and the full version. The idea is the same, it's just either from tape or from the hard drives. But is the material different or just the methods? The distortion is different when there are big movements, is that something that you need to think about? Yes, it is. 
For example the resolution, which used to be 720x576, the SD resolution; with RED everything is shot in more than HD resolution. Things that don't change are, for example, that it of course matters how the lighting is and what the director does; that hasn't changed. There is a difference in quality between film and RED material. The film is better regarding exposure for example. The point is however that the ease of use is so much greater that it is worth the drop in quality. This technology is of course evolving and now the Mysterium X chip is coming and its exposure levels are immense, so perhaps this criticism of the RED is history. Another thing that justifies using RED is the ability to use all the good old lenses that have great quality. When you are shooting in resolution this high there is relatively a lot of latitude that you have to work with, but you can't deny that in film you have almost endless possibilities of changing exposure.

The thing that we need here in Iceland is to develop some really hardcore colorists who know exactly what they are doing. Being able to sit the director down with someone like that, who can produce the look that the director wants. The director should not need to say 'put this filter on and make this a bit darker', but he should be able to communicate a certain feel that the colorist then creates for him and with him.

We have had great experiences with RED with all the productions that have been made here. Saga Film does not know of any situations where the distortion from the RED footage (due to the progressive scan) has been an issue for them. It is only in very extreme situations, and the material that they are producing is not of that caliber. There are always technical issues that pop up in any production like this. Perhaps with RED, because there are fewer steps, you get the footage right onto the computer and can work with it from there, there are fewer things that can go wrong. Dagvaktin was the first thing they did, and they believe that they were the first company in the world to produce a whole TV series (11 episodes) with RED. It was still in the Beta phase but it worked very well. [Perhaps this says a lot about the work ethic in Iceland, people just go for it and are not afraid to try new things.]

Framestore, Iceland

I visited Framestore's headquarters in Iceland on April 9th 2010. There I met with Jörundur Arnarson and Linda Kristjánsdóttir, both VFX artists. Unfortunately I did not record the Framestore interview and hence cannot provide a transcript. Following are thoughts about the visit and what I remember most from my talk with Jörundur and Linda.

***

There can be difficulties working with extreme materials that are shot with RED because of the progressive scan in the CMOS sensor. It is mostly because it is then hard to track the camera movements when objects, like buildings for example, are skewed. The motion tracking algorithms don't work if the points that are supposed to be steady are moving around. There are programs available that fix these errors. We expected some difficulties when we started working with this new technology, but to our surprise it worked almost perfectly right off the bat. We shot a whole series, eleven episodes, and there were no major issues. There were two shots that were shot with a wrong white balance so they looked completely different from the other shots in the scene.
It took a lot of working with the color to make them fit, and it would probably have been easier if we had shot on film. This kind of footage doesn't allow for as much color correcting as film does because it is somewhat compressed, but we managed to make it work.

Nick Strange Thye interview, transcript

I talked to Nick Strange Thye, who graduated from the Medialogy Master program in 2006 as one of its first graduates. He has since worked for a few companies in the field and is now mostly on the production side. He has been on productions with RED cameras, 35mm film and HDSLRs. We met at a café in downtown Copenhagen on September 24th 2010.

***

[Nick talks about his friend who has the RED camera company, Sidney Plaut.] If it burns out or has to be reset, they call him. It's not a perfect system. People are very optimistic about it. I've been doing Lego for three years, Bionicle, and when I started we used 35mm and we changed to RED during that time. There is a change in process there. In the beginning, some things were easier and some things were harder even though it was digital.

Which things were harder?

The conversion, when you have to convert the raw RED files into something you can use in a post-production line, which would mainly be DPX frames. DPX is like a frame-stack solution instead of video. It's a high quality image where you have the option of pulling some more strings than you have in uncompressed QuickTime for example. The other problem is that the post-production house that I was working at was working with an Avid solution which wasn't prepared for RED. RED was done for Final Cut mainly, and many of the big houses still use Avid because it's more stable when you have a huge unit running, more machines running on the same server, compared to Final Cut, which is like a one-man-army editing tool. It has the same setups but it's not as stable and secure as Avid. Final Cut has become better and better through the years, but it's still more like a one or two man solution for editing. Today it is possible to use Avid when working with RED material, but at that time it was a hassle. The new Avid has RED support. The Red Rocket card, that's an essential thing to have if you work with RED. Red Rocket came out later than RED, so when you had to convert RED you had to have Film Master or Phoenix, which are grading tools, huge machines that can handle the resolution and convert the material. When we did Lego we maybe shot like 20-30 minutes of material and that could take a day to convert, which is a lot in a commercial setup. It's kind of the same as if we did 35mm, because then we had to go scan it.

There are some things in the pipeline you have to change if you use RED vs. 35mm. When you scan 35mm you have to either overscan it so you get a full frame picture to get all the information from the picture, or you have to define the frame that you want. Then you can't change it afterwards and go up or down in the picture. As well, when you scan you apply a standard grading or a standard lighting setup. You have different ways of scanning, best light, technical, but this is probably too technical. You can go darker or lighter or enhance some colors in the scan already from the raw material. These decisions you have to make at that stage, before you go into the post production.
Normally you would have 35mm and take as much information as you can, that's what we did at Lego, and keep it as neutral as possible so you can change the colors and the grading settings later on in the process but you still have to take some main decisions there and you can't change them unless you do the scan again which is an expensive process. So that's of course an issue. When you do RED you kind of have the same setup with the grading situation, you can have the raw RED, but when you convert it it will change something. It's like a compression or anything else, you will change something. If you shoot at 4K which we did at Lego mainly, then you have to take the material down to 2K to work with it in the machines. We had the 4K as a backup solution but after the conversion you really don't want to go back and convert again because it takes time and time is money when you do commercials. There is also a deadline you have to make. So you try to do it right the first time, you don't test your way through it. Of course you test a little bit, in the manner that you take a small sequence and test that and check the colors before you set it all to render. But when you have done the whole lot you don't really have time to do it again. That's more for feature films that have a lot of time. They can edit, then redo the material while they're editing and re-import it. So they have another solution. I mainly know it from the commercial side. What about the control? You have more control over exposure and color when you use film, but is there a very big difference? You can change those things with the RED material as well. Yes, but then again you don't have that much control. When you scan the film you have full control, but when you have the file out from scanning you end up in the same product, a DPX sequence, as you do when you convert RED. So unless you have a lot of time it really doesn't make any difference? I'd say on feature films you have some differences. There is of course the look and the way of the work process. Some directors actually still prefer working with 35mm. Before RED there have been other digital cameras which are still in the market, they are just highly expensive compared to RED. RED is a very cheap solution. You have to have that in mind. It's a high profile, cheap solution. That's why it has become so popular. If you look at the Sony 24p which was one of the first digital cameras, they used it for Star Wars, it was tested on Star Wars actually, it's more expensive to use that or rent that, I think in Denmark there is one place you can rent it and they have to take it down from Sweden, and that's one camera. Would you say it's better than RED? The old ones are not, because RED has gotten better with this evolution, but the new Sony cameras… It's more the feel. The resolution you can go higher with some of the other digital cameras. But I know RED is coming out with a package, I don't know if it's out yet, that's going to solve a lot of those problems. They are going to go for some high speed cameras and they are going to go with some super high resolution cameras as well, 28K or even up to 36K I think. But why they are going up to these high resolutions is also because when you do high speed you have to compress the resolution, so the idea of going up to the enormous resolution that nobody can actually handle in the post line is that you can do high speed. More frames per second so you can make slow motion. Where actually the digital have had problems before. 
There is a camera called Phantom which is the first camera that has a really good solution for it. They are expensive as well. Some of the Phantoms are digital. But everybody is trying to go into digital, of the camera companies. But the 35mm cameras, the old ones, many of them are still very good. There is evolution within those, new stuff, but sometimes you shoot on something from the 70's that is great! You know. It gives a nice look. [I now explain more where I'm coming from with this project and what I'm hoping for. Tell Nick that I don't know yet what I want to test.] Nick asks: Is it from a technical point of view, or an overall point of view, or a consumer point of view? Or a business point of view, could be that as well. RED is new, but then again it's already been three years. When I started out we worked on 35mm talking about the digital age. We did an Arla commercial on the Sony 24p before RED came, that was a highly expensive commercial. We had to do it because of the post process. It was a lot of vegetables and fish jumping around in a supermarket. We had to change next to everything in the scene, set extensions, we really had to work a lot with it. So we chose the 24p, but we would probably have chosen the RED had it been out because of the expenses. You have to take the business point of view into it as well because you can't just compare 35mm and RED and the pipeline, because what also defines it is the economy. RED has this huge force of coming into the market while the market was going down. Everybody wanted the same product for half the price within commercials, done twice as fast. One of the solutions why RED became so popular and still is it's because it's cheap. It's not a perfect solution but it's a very well made solution for helping production companies meeting the demands of the clients. You have been working at Ghost [a Danish high-end post-production company], do you work with a lot of RED material there? We do. We get any kind of material. Some of the directors still want to use 35mm. For example we did the new Coke Zero worldwide which is coming out now. That was shot in 35mm due to the directors choice but that's also because the director was chosen specifically for this. He's done a lot of them. I can't really call him "talented" because he's been here for 15 years. He's a very good guy doing very good stuff and he knows what he's doing. He wanted to use 35mm. Is it usually the director that gets to choose? It depends on the profile of the commercial. High profile commercials where the director is the main issue and selling point as well as the creative developer of the spot and it's a success, if he's a high profile director if you can call it that. Together with the cameraman, he has his favorite, so it's also up to the cameraman and the photographer. Some of those just swear to use 35mm. We did Finland's most expensive commercial this summer and that was shot in 35mm as well. This was both due to the director and the photographer. Both high profile guys that wanted to use 35mm. Then I got a new job with them doing something else. That was the first time they had to test a digital camera, it wasn't RED, it was the new Arri. They of course have a digital line as well. Arri's range is a lot more expensive than the RED. It has some other features and a lot of the photographers like it because it's built up like the 35mm setup but with the digital implants. 
We did a project with this one and we found out that too many changes in the image, in individual pixels, actually made it crash. We had some close-ups of a face making facial expressions, and that was in 8K resolution, which was the photographer's choice. We only needed 4K. But there were too many changes. You can call it raw but it's not raw, there would be too much data. Raw is not needed, the eye cannot distinguish the difference. Digital is always compressed in some kind of way. It's also about the bit-rate. You can always zoom in and get some digital flickers. That's why you can't really say it's not compressed. You can always zoom in until it looks bad. Compared to that, with 35mm film you can say that it's raw material and not compressed. But yes, there were too many changes in the camera so it crashed, it couldn't store it on the hard disc fast enough. Those are problems you get with digital. It works like a computer. Many people prefer the 35mm. When they have to reset the digital camera, it's strange for them.

That was the Arri that you talked about that crashed, have you heard of similar problems with RED?

There are a lot of problems, well maybe not a lot, I have seen some problems. RED has a fan, just like a computer, that cools it down. This fan should turn off exactly when you start recording or just before. So when you press the button it stops so it doesn't make sound, because you can't have the camera humming in the background, then you destroy the sound. Also usual computer stuff: crashes, overheating, this fan, then you have to take it out, because of course it always does it on the right shot, so the sound guys are going bananas. The sound is recorded externally but they still get the fan sound. General crashes you get on a computer. You can't always explain it; it could be too much data coming in so the processor can't send it to the hard drive. It can be heating problems, it has the same problems as a computer, if it's 30°C outside it will get slower. The slower a camera gets, at a certain point it can't store the amount of data it should if you put it into a high profile setting. Other unexplainable crashes. Still it's very much worth working with RED, otherwise it wouldn't be so popular. In this kind of market, like I mentioned, where the money has been so tight but the demands are the same. It's getting developed as well, so it's getting better and better. I don't know if you can get the same, well some people, they say they can't get the same look and feel as with an analog camera. But it's also, what is the idea? What is the purpose of this commercial? How is the pipeline afterwards? Is it 3D people running around doing things or is it a poetic, non-post-heavy assignment? Everything is specific, we sit down and say, we have this idea, this storyboard. We need to go into post, we need to deliver on this deadline. The post process I did on Lego, where we implemented these huge Lego figures for a new product called Hero Factory, they were characters within a real world, scaring some kids playing basketball, some bad guys. This was a 3D character, then you need high resolution, then you need a lot of information to ensure that you can make the composite look real in the real world. There digital has its advantages. But as well, we did the Sony Ericsson this summer at Ghost and that was shot on 35mm. We implanted a robot, of course you can do that and we have done that before. But sometimes it makes it easier for the composite guys to have a digital format.

Why is that?
If you import the film you end up with a digital format as well? It is easier, but it's hard to say. Analogue is build of these cones or I don't know what you call it, more oval shape, but digital pixels are of course square. So you can't really put your finger on it, it's just something…? Yeah, they guys at Ghost don't care, they have been doing this for years. But still, it just has it's advantages sometimes. It's also because if you have a specific scene that doesn't work you have a digital version of it in raw material that you can examine, find out what you can change in the development process. The funny part about RED, ok, we need to change subject a little bit to get this right. When you convert RED you have a specific program for it. There are different ones but RED has it's own for example. To get it to look more film like, you know the film-look, not analogue, you kind of develop the digital film into what you want, into DPX. When you do this development you can change different values just like when you are scanning raw film. Color schemes and such. And you can go back if you have a scene that is problematic due to this conversion, you go back and do it again but you can see all the values in digital, you can't do that in film. When you work with the raw material in film you have to scan it and sit with the scanner and check it there while if you have it digital you can adjust the specific things you know are a problem. Then make the conversion again. It doesn't happen very often, but it can be a helpful difference. I'm more on the producing side of things, but I have a technical advisor who can help you with these things. [From here on I write bullet points of what me and Nick said because not all of the conversation is relative to this report.] - Nick talks about customer relations, who the customers are (often ad agencies that are working for another end customer), what they want and how they get there. - About choice between for example film or digital, in high profile jobs its often a high profile director and then they usually have a preference. - Sometime people brag about having made things with 35mm because now the standard is becoming digital and the other option is more expensive. So in some cases it might be more of an image thing. In the beginning people were bragging about having made things with digital because then that was the new. Right now having used RED is not a selling point because people know it is to save on expenses. Some customers are really into the newest technical stuff and want all the new technology. I mention that Saga Film in Iceland made a series of episodes and the first episode aired two weeks after they started shooting. - Nick says that the RED process can help even more in TV programming because you can have a rough edit ready very fast, even on location. It doesn't change as much time wise for commercials simply because there is not as much footage. - If you shoot a TV series on RED instead of film you could save a third of the budget. - Time wise it doesn't matter as much for commercials because there is a deadline, and if you need more time you just work overtime. You could save a day maximum with using RED when doing a commercial. - Avid and PC is hell when you are working with RED material. - A timespan for making a commercial can be a few days to a few months. My longest was 5 months, and shortest was 2 days. I talk about that I also want to mention DSLRs because they are a big part of the digital video revolution. 
- I have shot a couple of commercials on 5D and 7D. 
- DSLRs have the features and image quality that RED had a couple of years ago. It's new, it's very cheap and the image looks super good. 
- The client says, "Ah, perfect solution!" but it isn't. It's a solution for some things, and it depends on the storyboard again. If there is a lot of zoom, the zoom is very hard to handle on the DSLRs. Same with the focus. 
- You need a lot of extra rig to get any functionality. 
- What is great is that you have full-frame recording and the options of lenses. You can use the big Canon lenses, which is cool. 
- It's HD, but it's not raw. It's compressed into H.264, so you lose some quality. 
- If you ask any post guy that has to do grading, well, you can do grading on it, but if you need compositing or green screen they look to the floor. If you need any heavy visual effects or post, you're f***ed. 
- You can get a good result, but you really have to know what you are doing when you are recording. You can't change it afterwards. 
- If you need to put two shots together, perhaps shot with two hours in-between, it's very hard to make it look like the same shot. You don't have enough information. Also the compression, it might not have compressed the two frames the same way, if the lighting changed a little bit for example. 
- In the ones where I used the DSLR it was a simple story with almost no post processing (just a TV screen replacement) and it worked fine. 
- Not all photographers feel comfortable shooting with it. You need lighting guys that are familiar with it. But it's still new so not everyone is familiar with it. The director needs to be aware that the shots need to be simple. You don't do a lot of movement, you don't do zoom, if you have a rather still image you can shift the focus, but it's very basic things. If you're down to this basic old-school stuff, then you can use it. Handheld, no way. Even if you have a really good steady guy, with the compression you can't be sure you won't see it, because the compression can mess up when you do. It changes the image too much; they're prosumer cameras, and it's just a small chip. It makes digital errors if there is too much data. 
- You can see all your clips and even edit on set, right away. That's a bonus. When you use 35mm on big productions sometimes you use extra video cameras on the side to check the material to see if you have what you need, because you can't develop the film while shooting. 
- Many people swear by Canon when choosing DSLRs. Good quality, high speed in 720p. You can also fake high speed using NTSC because it's 30 FPS; you get five extra frames per second if you tell Final Cut that it's a PAL video. Then you can overload it into 60 FPS. [See the short calculation after this list.] 
- Some directors and photographers challenge themselves by using DSLR, but it has more to do with the assignment at hand. 
- When you shoot with RED often you have a video assist guy, and my friend [Sidney, with the RED rental] for example has a Mac Pro with a RED Rocket card which captures the material directly so he can check it right away, almost real-time. He checks if everything is OK with the setup and the lighting and such. Then when there is a break the director can sit down and choose a couple of scenes, so when you go to the editing process you can save a lot of time because it's pre-edited, the scenes are chosen and the material is checked and already converted. That's a huge advantage on some productions compared to the analog workflow.
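[The frame-rate trick mentioned in the list above comes down to simple arithmetic: footage recorded at 30 FPS (NTSC) or 60 FPS (the 720p high-speed mode), when re-interpreted as if it were 25 FPS PAL material, lasts longer and therefore plays back in slow motion. The small Python sketch below only illustrates those numbers; it is not the actual Final Cut procedure, and the ten-second clip length is a hypothetical example.]

# Minimal sketch (not the Final Cut workflow itself): the arithmetic behind
# "faking" slow motion by conforming footage to a slower timeline frame rate.
# The clip duration below is a hypothetical example.

def conform(shot_fps: float, timeline_fps: float, shot_seconds: float):
    """Re-interpret the recorded frames at timeline_fps without dropping any."""
    frames = shot_fps * shot_seconds          # frames actually recorded
    played_seconds = frames / timeline_fps    # how long they last when conformed
    slowdown = shot_fps / timeline_fps        # apparent slow-motion factor
    return played_seconds, slowdown

for fps in (30.0, 60.0):                      # NTSC 30p and the 720p60 mode
    seconds, factor = conform(fps, 25.0, 10.0)
    print(f"{fps} FPS shot, 10 s long -> {seconds:.1f} s at 25 FPS ({factor:.1f}x slower)")

[Run as written, this prints that a ten-second 30 FPS clip lasts 12 seconds on a 25 FPS timeline (1.2x slower) and a 60 FPS clip lasts 24 seconds (2.4x slower).]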
Prami Larsen, interview transcript

I visited Prami Larsen, the head of the Danish Film Workshop, on the 12th of October 2010. We talked in his office and then I got a tour of the Film Workshop facilities, which were very impressive.

***

Well, we started out in 1988 with a project we called digital days. At that time we still worked with 16 Steenbeck tables and we worked with tape-to-tape video editing. I saw that the end was near for those kinds of production lines and I wanted to kick them out of the Film Workshop. I wanted the young generation of filmmakers to get experience with digital film production. We closed down the Steenbeck editing table and we closed down the tape-to-tape editing, and we used computers and we bought an Avid DS for online editing and color grading, and we opened up a graphic visual effects department with four workstations. We had used computers for normal editing, for sound production, but now we tried to make everything digital; we didn't care whether it came from VHS, stills, graphics or whatever, everything could be digitized and used online in a film production. So that was the basic idea. The freedom with the visual effects that Tron and Star Wars had opened up, we wanted to work with that and get people experienced in working in the business with those kinds of facilities. Everything worked, the sound device was digital, post production was digital, all the way through, except for the tape in the end, Betacam in the beginning and later HDCAM. But still, for the best resolution for special effects we needed to shoot on Super16, and that is a really shitty format because you can't exceed 100 ASA raw stock if you are going to match up with the visual effects. It's very expensive and it doesn't give you the freedom on the set to work with actors for a long time, and work out your expressions together with the actors. You can't shoot and re-shoot, it's very expensive for talent development. It's a very bad production line for talent development. That is our main focus; we are not a production company, we are not a film supporting system, we are a talent developing system. So we don't care if your film is a good or bad film. The method we use is like, OK, you are a young filmmaker, you want to get into the professional business, now we make a game, a setup, where we play that you are a professional. You are Susanne Bier, you are Thomas Vinterberg, what will you go through? You have to write, you have to communicate your project, you have to meet all the conditions of working within professional film production, and that includes meeting a crew working with professional equipment.

So we shot on Super16 for years, and when Sony came out with the HDCAM camera we got money from the board of directors to buy HD, around 2005 or 2006 I think, it was years ago. We were proud, and it was easy for us to go from standard definition to high definition when it came to the high-end production line, because Sony provided us with a camera that worked, and we had some of the old lenses that we could use, and they had the tape deck, HDCAM, so the production line was actually pretty easy. It was a hell of a problem with HDV because they hadn't developed the production line. They hadn't developed and built the machines yet.
But we were really happy with HDCAM, because it worked for visual effects and it worked for the cinema when we screened our first HDCAM productions in the digital cinema we have here in the Film house it looked fantastic really, really terrific and we could work until the last hour with it. You know, we could fix problems ten minutes before the opening of the film, all that kind of stuff. So that really worked with the no-budget production because you could shoot and re-shoot, you could work with children, you had a total artistic freedom and it was within the possibilities of the funding that we can work with here. We fund around 50 projects per year. So one camera set for short films shot in one week, it makes sense. I mean we shoot documentaries and fiction, around 25 fiction titles per year. So it worked for us, and it was nice. Then our technical manager came back from Las Vegas and said now we are going to shoot on computers. RED will be the future. Ok, let's see, we said. This time we better be a little conservative instead of being the first, or one of the first production facility in Copenhagen that works with this new camera, I think we have to be a little conservative. We are really fond of, now we have two HD cameras with fixed lenses and all that kind of stuff. And then we heard about the problems that people were getting into when they shot on the RED because there was no workflow. They hadn't really considered that when you come back with a 2K or 4K resolution material you have to get rid of all the old computers. You have to buy new computers, you have to work with production lines where you keep your original footage in a safe place and you work with proxies and all that kind of stuff. They lost a lot of material and they lost a lot of time. So the problems that you heard about were mostly in post-production? Well, the directors of photography they were not responsible, they wanted to get the camera. In a very short time we had around seven or twelve cameras in Copenhagen and everyone wanted experience with the camera but they were not rented because the producers for funded film they were very conservative and said "If I am going to keep the budget, and you want to shoot on a camera we know nothing about and we have to release this film on this date, and we have no workflow, no way! We have to shoot on Super16 or HDCAM, something we know, something we like, instead of trying out the new format." So they were bought but they were not in action and no no-budget production could get access to that kind of, especially because the experienced cameramen they worked on the mini- or no-budget projects to get experience with young filmmakers. So, we said no way, we don't want it in, because it's too heavy. And the first project that cheated it in, they are still not finished. They have some very ambitious visual effects, and the old computers we had, in the visual effects department, they couldn't work with it. They were just sitting looking at sign saying rendering, rendering, rendering. Because they forgot that yes, you can offline-edit your film, but the visual effects have to be done in the high resolution and our computers couldn't make it. And it's still not finished because they are very ambitious. Especially when we mix formats, it's hell on earth. We loose about 25% of our post-production time to converting, to fixing problems in the process. So we have lost a lot of creative time, introducing RED. 
After some years our technical manager he had helped setting up RED workflows for commercial production companies. He said ok, now I think we know what to do. Let's buy the camera. The two HD cameras never worked anymore, and we have a responsibility to set up a professional production, and if the cameramen work with RED we have to have RED, and we have to learn this new DIT profession. So we set up some very strict conditions for the projects granted to shoot on RED. We have to know the cameraman, we have to have a meeting with him to see if he knows what he's dealing with. And actually we had very bad experiences. The camera has not come back once without being in some way destroyed. Every time. All of them, they have had problems not being able to shoot and they have tried to fix it by themselves and have destroyed shots, raw stocks or the camera. And right now the camera is in Los Angeles, destroyed for €3000. So our experience is 100% failure. So it's learning by failure and not learning by doing. What kind of problems have you had? Anything, you name it. You can't put on the hard drive, you try, you fix it and you do something with the mounting and destroy the mounting for the hard drive. The card mount. The batteries are the same, you try to fix it and you destroy it. Everything. You name it. You go in, change the setup, and everything is black. You come home with five days of shooting and everything is black. So it's everything. And these people, they are people who work on RED rental, people who have shot two or three short films before they are granted permission to work with them. And if they are going to make post production… You can either just have the camera and shoot, if you are not going to make post production at the film workshop, you just have to know the cameraman. And then you make post production away form the film workshop, you are not allowed to do it here. If you're going to make post production at the film workshop, your original material is not going to exceed 1TB. We are running 93 film projects at the same time. We can't have that many projects exceeding 1TB. Of course a lot of documentaries shot on HDV are not exceeding 200-300 GB but still they are shooting a lot. So you are not allowed to exceed 1TB, you have to have a DIT, and we have to have a meeting where we approve this DIT person. It's getting better. We have made a total RED setup. We bought a new computer, we bought tape backup systems. It's a total professional setup, we haven't been saving the money, the camera is very cheap but everything around it is very very expensive. We have fixed lenses, we have everything. But still there are a lot of troubles. The DIT's are not experienced enough, they meet a lot of problems. We have made what we call Pixie-guides, step by step instructions. You can download them, they are printed and put up everywhere they should be used. At the meeting we say to the people that you have to be updated by the Avid and RED websites because we can't follow their updates. You need to be updated because they change the workflow almost every week. We only have one technical manager, so they have to follow the websites and update themselves all the time. I haven't seen one finished film shot on RED. There was a feature film called Original that was shot on HDCAM but some of the scenes were shot on RED. I have seen it on DVD, it hasn't been premiered in Copenhagen. So, I don't think on the big screen that I have seen one film shot on our RED equipment. 
Do you do many feature films? Not a lot, one every second year perhaps. Right now we have two features waiting for a release date. They don't dare to release bad, no-budget, shaky feature films because right now we are branding Danish feature films badly. We have released a lot of poor Danish films and the audience they say "Oh, is it Danish, I don't want to see it." So they don't dare to release, not in the cinema at least. So our experiences are like, ok we have to do it, but it's not with love. It has really been giving us a lot of trouble. But cameramen have to learn to work with it and post-production has to learn to work with it, especially the DIT function, you have to have a DIT on the set and in the post-production. We knew the problem already, people come here and they have experienced all-in-a-box productions, you know. They shot on one camera, they made post-production on one computer, VFX, sound, editing, online, color grading, color correction, and then somewhere out in the city they accessed an HDCAM or Digital Beta and got it on tape. But in the moment they meet the production lines at the film workshop they have to split their production into sound department, VFX department, offline-online, and all this. That is very critical. Do you think that the whole RED system is too young, is it not ready unless it's people that are really familiar with it? It's very very difficult, and it's very delicate this system. And it's so abstract. Now we have a generation that has been used to computers just working. I mean, for my generation, nothing worked. We had to fix all our bug and understand what is a computer. We had to understand what software is, we had to understand the whole architecture. So we could figure out, if I meet this problem it might be this or that. So we could fix everything else. Today they are sitting with these half eaten apples and they know nothing! They just open it and it seems to work, but it doesn't work, under it. It seems to work because the can see it. But they don't know anything about it. So it's actually the wrong generation for the wrong technology! Because you have to know a lot about the architecture behind the digital film production to understand the problems that we meet. When you go from Mac to PC, when you go from that kind of file system to that kind of file system, when you go from this format to that format. When you go from 1.2K to 4K. Because it's not just four times the amount of information, it's an explosion. Every step and every time you put this amount of information you weaken the old system that are not prepared for that kind of amount of information. The thing is that I don't see compared to HDCAM that we are getting any better. Visually it's not getting much better. You still have your burn-out pictures. You still have to work with a digital image and digital images are not like when you shoot on 35mm raw stock where it's resolving softly into white. When digital reaches a limit it dies, the signal dies. So you still have to be very controlled when you work with RED like when you work with HDCAM. Ok, it's a little more flashy, you can get a little more resolution, all the way to the end product on the release date. What about the film look, the depth of field, you don't see a difference? Not so much. I recall a camera man who came up here when we shot on Super16 and he wanted to shoot in cinemascope and we decided to help him. 
But when it came down to it he only had action in the middle of the screen so it hadn't been necessary at all. So I think 4K is too much for most films really. But they have to learn to work with it and the technology is so cheap that they demand to have access to it. Now they have 4K resolution which is almost as good as the 35mm so they can't complain. It makes it easier for me because often when they came back with the HD cam they said "well it's still better to shoot on Super16". But it's not good enough, BBC don't consider Super16 to be HD. So now when they shoot on RED they don't come back and say it's not as good as film. I don't hear that complaint anymore, but they come with so many other things. But of course they work and they learn, and in one or two years time there might be less problems, not problems with the digital. I hope that we have more experienced DIT's and cameramen so the equipment will not come back broken and we have this artistic freedom that the computers have provided when it comes to post production. [I talk about others that I have interviewed and that not everyone has had so many difficulties.] I heard that Susanne Bier was shooting somewhere in Europe and she lost two days work. They had said we don't need the DIT but we can use him in another place, and the guy said "I think I'm pretty important here" but they didn’t listen and they lost two whole days of shooting. So we have to get experienced DIT's. Can you describe more the problems that you have had? Yeah, you can set up the camera, and it looks as if you're shooting is ok, but actually it's totally black when you get back and you put the files in the RED system, Avid or Final Cut. Would you guess that the problems are more with the camera or with the crew that was using it? I don't know what happened. It's very difficult when we go back and say, hey, let's learn from this, what happened? Everyone says that it wasn't them, it's always someone else. So we can't get an explanation. We started with another system for the batteries, better than what is used for other systems in Copenhagen. But if someone borrowed from others that destroyed the system, so we had to go back to the other kind even though ours was better so we would be able to use equipment from others. People tend to forget that you can't hot-swap the drives, they need to finish the process they are working on before you can unplug them. The cameramen might forget to tell the assistant to wait for it to finish. That has happened a few times I think. It's so much easier to take out a tape, dump it in a box, hopefully someone will write on the outside of the tape what is on it. Then you take a new tape and it's done. So the problems with RED and the computers it's still world-wide-wait. You have to wait for all he processes to finish. What is your vision for the future? Will you continue using RED? We have to. It is what the people want to work with and they need to get experience with what is being used out there. It has always been like that. When we were looking for our first editing suites, we thought Avid was too expensive and not worth it, but it was what the market was using and people wanted to work with it. And it worked fine also, people just got in the room and made their project, perhaps asking for more hard-drives but they knew how to work it and it got done. So when you are working with RED material, do you use Avid? Both Avid and Final Cut. We prefer Avid because it is more professional as a system. 
When you put something into Avid it says, ok, go out and have some coffee, I chew on this and make it an Avid thing. When you put something into Final Cut it says, oh here it is. But it might not get out again as something you can use. You can take a mobile phone recording or whatever signal or codec and you can watch it on the screen. It's mainly amateurs that work with Final Cut and they know nothing about the technical side. They get so angry when they can't get it out again. Burn a DVD and premiere your film on DVD, that's what you have been aiming at, you haven't been working professionally. Now we set up workflow meetings, we have like three different kinds of workflows for Final Cut, in Avid we have one. And it works. We are discussing very much whether we are going to buy an HDCAM SR to get 4:4:4, or we are going to buy a cinema setup where you have your film on a hard drive as a master format. Right now we have HDCAM which works 4:2:2 but we want to master RED projects on an HDCAM SR because you keep the 4:4:4 format, it will not loose quality at all. But Sony's machine is very expensive and the new system that is going to be the system for digital cinema is another kind of QuickTime system where you have the thing on a hard drive. The cinemas will have to buy the equipment, but I think we will end up with the default being the digital negative, that will be the format. Well, hopefully your troubles will be worth it when you see the first screening of a RED project this week. I always talk about Festen, it was shot on a 1 CCD camera and it has been one of our major successes in the last 15 years. So the story is still the most important. So you are not pro-digital or pro-analog, it's still all about the story. I think it's very very important when you are an artist that you experience professional film productions. So many times you are used to be on the set. In the 80's when I went into the business I saw so many film directors that were only on the set every fifth year. The cameraman, the actors, and every other person on the crew were more experienced than he was. So they said "we can do this and do that" and he said "ok… we can do that". So he was not the king on the set, it was not his vision. My hope is for the Film Workshop, putting in a fragile person into that very heavy production machine, being on the set with 40 people, and everyone gets more experienced. So when they stand there in the same position but now with a €1M or €3M production they will say "I want this, because it's the best for my story." That's what I hope. Sidney Plaut, interview transcript Sidney Plaut owns and rents out a RED One body and accessories, as well as working on many of the productions he rents the camera to. He also does many other related things in the industry as I found out. On October 12th 2010 we met in Hellerup in Copenhagen and walked his dog Maxi through a park in the neighborhood while talking about the industry. *** [Sydney tells me a little bit about his work and I describe what I want to do with my thesis.] RED was the big thing, but then something else came along that could be even bigger, even though it's smaller, and that's the DSLRs. It started with the Nikon D90 and now it's the Canons ruling the game. That means more or less everyone has access to something that can look good. When I started, my first camera was a consumer style DV camera. That was still 2-3000 dollars, for a really shitty camera, like awful, compared to what you get now. 
My first prosumer camera was the Panasonic DVX100, and that camera was a very important milestone actually. That was the first affordable DV camera that could shoot progressive scan. That is very important because that is the big difference aesthetically between video and film. The perception of motion is different, so just because you shot digital it was a giveaway, and it didn't feel the same as film. I did a lot of research, of course it was a lot of money at the time. Everything was about getting that film-look, but it wasn't working in the beginning and I didn't know why it looked so bad! A close-up could look okay, but as soon as you pulled out it looked awful. FinalCut came along at the same time, which brought color grading. That helped a bit in achieving that look. If you compare us at the time to the big Hollywood productions, there are so many differences. The way they scan the films with so much care, and not to mention the lenses and the depth of field. So our next step was to look at lens adaptors, which were very expensive but allowed for a bit more depth of field with the right lenses. One day my friend told me about this website, red.com. It didn't really have any content, just a statement about HD and film being dead and showed the 4K resolution in scale with all the other formats. It was a couple of years later that I finally got my RED camera because at first I was hesitant if this would become a reality. It was also not just $17.500 but all the other stuff you had to buy too, so I wasn't early in line, around 4600. I had to wait a long time. When it finally arrived I had already been out shooting with a RED and had done a lot of research. The thing that's interesting is that finally it's more or less down to skill now. That is the most fair democratic thing. If you have a MacBook and access to the 4K files, you can color grade and do exactly the same thing as they do in Hollywood, even though it might take you longer. It's only a matter of skill now if their films are of much higher quality than ours. It's not because we can't afford a 4K scan, most film in Europe is scanned at much lower resolutions. There is only films like Batman that was scanned at 6K and we can't even go there. But most of them out there are shooting RED and we have the same technique available to us and it's only a matter of skill. 35mm film can of course look great as well but it wasn't an option for me because of the cost. Also because the experimentation it takes to get to know the different stocks and processing. For example the difference between having a flat scan and a best grade. Some of my films on 16mm looked really good and others not so good, and it was just beyond me. I did learn as much as possible, since we were doing our own films and wanted to save as much money as we could. So I was a director who became a photographer, and an online-technician and everything else. I advanced in the business and started going to the post-production facilities, but I realized that sometimes I could to more on my Mac computer than they could do on their one million kroner system. Suddenly me, the kid who got rejected from Filmskolen, and no one would support knew a lot, also because the technology was changing. It started with Final Cut. When I bought Final Cut nobody that was serious about making films was using it. Everyone used Avid and my friends told me that I should have bought that as well. But getting Final Cut allowed me to buy many other things because it's so much cheaper. 
Then suddenly people started calling me saying; Oh, this producer is using Final Cut, how do you do this and this and this? So quickly I became one of the experts on Final Cut. Today a lot more people know it, but people started calling me at the time because of technical stuff which is funny because I'm not really the technical guy. It's kind of the same thing with RED now. A lot of people saw the idea for RED early but I put a lot of focus on the technical stuff. I also work as a technician on the big commercial shoots, green-screen and special effects stuff. I can say to the photographer if you do this, that will happen but if you do that, this will happen. I work as a DIT sometimes, so that's part of my job as well. I don't promote it but I sometimes to that for companies, for example Lego. With those productions it's so great because we check everything on location, we try out green screens and we look at all the materials so people know exactly what they are coming home with right away. It's so much better for commercials because everything is so fast. The post production deadline is so squeezed that a day can matter tremendously. So if you can save a day in not having to develop film, it really matters. Red Rocket is a must, you can process your footage really fast, it's amazing. I mostly use RED, I have a DSLR but I don't use it much for professional use. If people don't want to shoot RED it can be the Alexa perhaps, the new camera from Arri. It's their answer to RED so to speak. I tell people though, if you are shooting digital and are not shooting RED, I would shoot on a 7D. Hands down. Of course there are major limitations, especially with the color grading. But if you expose it well it looks much better than video in my opinion. Lightyears ahead of what I started with. Now I use a set of Zeiss still lenses that I converted in L.A. to fit with the RED. [Sidney shows me a commercial that he shot on Canon 7D and it looks very nice.] This was of course heavily color graded. The lens that I used is a $3000 lens, but it could have been a master prime. It worked very nicely. 7D is almost more exciting than RED if you ask me. And the 550D that's even more crazy. It's only $3000-$4000 and it can do the same as the 7D. [I tell Sidney about that Saga Film in Iceland does not want to shoot on the still cameras because of the post production limitations.] Yes, it can work but it doesn't work with everything. If you're doing green screen for example. There is no point in saving let's say 10.000 kr. on camera equipment if you end up spending 40.000 kr. more in post production. And it can easily come to that. 20.000 kr. a day. The H.264 codec is a delivery codec. My knowledge on why it looks so good in the DSLRs is very limited, but I think it is because it has such a good sensor. They have such a good digital sensor in the Canon cameras that makes it look good. You just wonder why they didn't do that a long time ago. HDCAM to me looks like video and it will always look like video no matter what you do with it. If someone were to offer me an HDCAM system with a really good video lens or a 7D with still lenses, I would take the DSLR, no doubt. The new Sony F35 is also a very good camera. They shot 2012 with it and a lot of TV shows. RED just shifted their strategy very recently and said that they are going away from the prosumer market, which I think is very wise. It's still going to be very cheap compared to everything except DSLRs. 
Why is it better when making special effects to work with material that was shot digitally, rather than working with material shot on film? It is digitized anyway, right? It's because of the grain and the jitter. You run film mechanically through a gate and the film it isn't tightly wound up so it's not moving perfectly, it has a little bit of jitter. Especially with 16mm, it's very apparent. So if you have to match material, when not using the same camera for example, the jitter would be different. Or the exact same kind of film. So first you have to match move that, track it, and it's very small movements, but in the compositing world it's huge. And then you have the grain. A shot might be exposed differently for example, and then it has different grain characteristics. Or it's not the same chemical batch from the processing. All of this affects the image. Now you have to match that as well before you can do anything. All these steps are totally unnecessary if you shoot digital. It doesn't move at all, completely still. The grain or noise can be an issue, but then it's because the photographer exposed it wrong. If it's exposed right, with for example green screen, you can have a much cleaner image than what I have heard you can have with 35mm. Again, I haven't done green screen with 35mm, but that's what I hear. Right now on the software side Adobe Premiere is taking the lead. You can play five full 4K resolution streams at the same time and no other program can do that. Final Cut hasn't progressed that much in the last three years. What they did well was taking the ProRes 4:4:4 format, which is a fantastic online format, before there wasn't any format, you just did uncompressed HD. Avid of course had their own online format, but Final Cut had nothing. If you wanted something to look good you had to do uncompressed. But now they have ProRes, and Avid also now supports ProRes. The Arri Alexa can shoot down to ProRes. It's a really good format, you can run it of most computers. Right now I'm editing a film shot in Istanbul. We have the ProRes 2K files and we are running it of a USB drive. It's 2K and a timeline where we are doing borders so we can frame it right, and three or four tracks of effects, and it's realtime. No rendering. That pretty cool. That's shot on RED also. The only thing that would cost me anything, if I wanted to take a film to the theaters, is printing it on film. And that will be over soon, a couple of years probably. Then you can take your film in the cinema from a Mac Pro at home, in 4K. That's pretty crazy. [Sidney shows me a production that he directed and did all the post production on it from his home, on his MacBook. It was shot on RED with prime lenses. And it looks very good. He mentions though that it's not like you go "wow, that's so much better than the other one" talking about the previous one he showed me.] I think I'm the first generation now, I'm 31 years old, that learned all this by doing. Now, all the kids know how to do post production by themselves. They could have done it to a very high level. Like with special effects, you see online something that some kid has done. Ok, he might have used two years, but it looks pretty damn good. Before everything was divided. Now people can do everything. For instance this week, on Monday I was DP for a commercial, I'm editing a film, I'm color grading a teaser for a feature film, and then I'm writing for some of the big companies as a writer. It's a lot of different stuff. 
Right now on the software side Adobe Premiere is taking the lead. You can play five full 4K resolution streams at the same time and no other program can do that. Final Cut hasn't progressed that much in the last three years. What they did well was introducing the ProRes 4:4:4 format, which is a fantastic online format. Before, there wasn't any format, you just did uncompressed HD. Avid of course had their own online format, but Final Cut had nothing. If you wanted something to look good you had to do uncompressed. But now they have ProRes, and Avid also supports ProRes now. The Arri Alexa can record straight to ProRes. It's a really good format, you can run it off most computers. Right now I'm editing a film shot in Istanbul. We have the ProRes 2K files and we are running it off a USB drive. It's 2K on a timeline where we are doing borders so we can frame it right, and three or four tracks of effects, and it's realtime. No rendering. That's pretty cool. That's shot on RED also. The only thing that would cost me anything, if I wanted to take a film to the theaters, is printing it on film. And that will be over soon, in a couple of years probably. Then you can take your film to the cinema from a Mac Pro at home, in 4K. That's pretty crazy. [Sidney shows me a production that he directed and did all the post production on from his home, on his MacBook. It was shot on RED with prime lenses. And it looks very good. He mentions though that it's not like you go "wow, that's so much better than the other one", comparing it to the previous one he showed me.] I think I'm the first generation now, I'm 31 years old, that learned all this by doing. Now all the kids know how to do post production by themselves. They could have done it to a very high level. Like with special effects, you see online something that some kid has done. Okay, he might have spent two years on it, but it looks pretty damn good. Before, everything was divided. Now people can do everything. For instance this week, on Monday I was DP for a commercial, I'm editing a film, I'm color grading a teaser for a feature film, and then I'm writing for some of the big companies as a writer. It's a lot of different stuff. There were problems in the beginning with RED. That's why you have the believers and the haters. I don't care, I just look at it as a tool. There were problems that were quite irritating. I had a problem with my hard-drive for a long time, and I sent it back more than once. Then again, when I had problems with my camera, it was a Thursday and on Monday there was supposed to be a shoot. I couldn't afford to rent a new camera so I talked to RED and was very courteous, and they said: "No problem, we will send you another camera that you can loan for as long as you need." Three days later I had the camera. They told me to make sure, when I got mine back, that it was okay before I returned the one they sent me. There was still a problem, so I had one of their cameras until they figured out what was wrong with mine. They paid everything, transport to and from America. I never knew of a company that did that. It is excellent customer service if you treat people with respect and are nice to them. Some other faults were that the battery plate could come loose, so if you pushed it the power went off, which was really irritating. Then there was a drive cable with a problem that could cause the hard-drive to freeze. Of course it told you that there was a problem. You never come home and then discover problems; the camera says so right away if there is something wrong. When you're working with any tool, it depends on your attitude how much you enjoy it and how well it goes. Peter Jackson and Steven Soderbergh both shot on prototypes of the camera where you could basically only press start! The RED has been fixed a lot with upgrades, for example, but of course there were a few issues. Now, the people that don't like RED, they have the Alexa. That one costs around twice as much as the Epic will cost. When you are shooting with small cameras like the DSLRs it's not only more portable and easy to move around, but it also saves time. When they are that small it's so easy to switch the lenses and accessories that you can save a lot of time. That is also the point of the new Epic from RED, it's like building blocks. If you are not recording sound then you shouldn't need to have this big and heavy sound element. So it can get very small and portable. A lot of the high-end post production software is coming down in price very drastically. For example Avid editing and DaVinci color correction, which were highly expensive systems, are down to around $1000 and you can run them on a Mac Pro. So the post production houses don't have as much of an advantage anymore since the systems are so affordable, and many of them here in Copenhagen are closing. Something for you to consider when entering your career: if you get on board with the right technology at the right time, you can really move up fast. Many professionals work so much that they don't have time to develop their skills. When they are working and have a deadline they have to play it safe and do what they have done before. They don't have time to experiment at all, especially in post-production, there is just no time for that. For your test you might want to compare House or E.R. episodes that are shot on RED vs. the DSLR? Or 35mm vs. RED? See if people see a difference or if they were able to recreate the look completely. It's the same crew, same actors, same lighting, but the only thing that changed was the camera.
Sammy Larsen, email interview
Sammy Larsen is a Flame artist at Minerva Film. He answered a few questions for me about RED and film vs. 35mm and the important elements in the film look, via email, on November 18th 2010.
***
In your opinion, what are the main differences of working with RED material compared to 35mm film?
The quality of RED is very high and very competitive with 35mm. The noise is low and the detail high, with smooth and clean edges that make RED good for chroma keying, among other things. Sometimes it is better than 35mm for chroma keying because of the lower grain.
I have heard different reasons for why it is easier to work with RED material (or any digital footage) when you are making special effects and green screen work, for example.
In the post production workflow, the RED raw format makes it easy and fast to "develop" with different color and gamma curve settings. This is a big advantage where you suddenly have more control in the post production process, where you can tweak to get the best result or perhaps use different settings for complex shots. To get the same flexibility with 35mm gets costly, because you have to go back to the scanning process. Also, the option of exporting 4K files from RED without extra cost is an advantage.
What is your take on that? Is it easier (or perhaps harder) for color grading or other post production work?
I don't think that it is easier or harder to color grade RED material compared to 35mm. However, I feel that 35mm still has some "film" feeling that is not present in digital formats. Therefore, when working with RED or other similar digital formats, it needs a little extra work to get that special film-look effect that everybody wants, e.g. by adding film grain and color grading.
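The "develop" flexibility Sammy mentions, choosing exposure and gamma after the fact because the raw sensor data is kept, can be illustrated with a minimal sketch. This is only an illustration of the idea, not RED's actual processing: the function name and values are made up, and it assumes a linear-light float image in the 0-1 range loaded with NumPy.

# Minimal sketch (not RED's actual pipeline): "developing" a raw-style
# linear frame with an exposure offset and a display gamma chosen in post.
import numpy as np

def develop(linear_img, exposure_stops=0.0, gamma=2.2):
    """Apply an exposure offset (in stops) and a simple power-law gamma."""
    # One stop corresponds to a factor of two in linear light.
    exposed = np.clip(linear_img * (2.0 ** exposure_stops), 0.0, 1.0)
    return exposed ** (1.0 / gamma)

# The same raw frame can be developed twice with different settings,
# for example a brighter pass just for a difficult green-screen key:
# normal_pass = develop(raw_frame)
# key_pass = develop(raw_frame, exposure_stops=1.0)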
Do you think digital shooting will replace film in the not so distant future?
Yes, mostly because of the flexibility and cost savings.
How close has RED come to the quality of 35mm film?
Close in quality, but it lacks a little of the film feeling (the same as when we compare a 35mm SLR to a DSLR still camera when it came out).
Have you worked with footage from the Alexa or other digital cameras similar to RED? Are there any big differences?
No experience with the Alexa, but I have worked with the Arriflex D-21 and it performed very well. Not that big a difference from RED. Also, the Sony EX1 and EX3 XDCAM can be a good alternative with a decent overall quality, but they require more work in color grading to get a film-look effect. One thing that makes a huge difference in achieving the film look is the lens that is used with the camera. If you use film optics (Pro35) on an EX3 it will improve the picture and the film look amazingly.
Have you worked with footage from HDSLR cameras? It has many limitations, especially since H.264 is a delivery format, but many people in the industry are using it for certain projects. Can it pass for professional footage by your standards?
Yes, I have. The good thing about HDSLR cameras is that they are very cheap and much better than low-end HDCAM. They give you a nice overall look from the beginning and at the same time a fine depth of field, and thereby an easy way to get closer to the "film" feeling. The color reproduction is also okay, but the limitations are very much on the noise side. It has big problems with noise (macro blocking in the dark areas and with some colors). The H.264 compression is one of the biggest problems and therefore it is not recommended for keying or complex compositing. Moiré is also one of its weaknesses. But besides that it makes nice pictures for tight-budget films.
If you had to choose, which visual element is most important for achieving the "film look"? I'm thinking of perhaps testing that to see what audiences think. It might be color grading, depth of field, film grain, interlaced vs. progressive, 30 FPS vs. 24 FPS, or something else.
Hmm... that is a hard one to answer... I don't think I can choose one element since all of them are important, however if I must select some... I believe that depth of field and the 24 FPS will tell the audience clearly whether it is video or film.
Thomas Øhlenschlæger, interview transcript
I met Thomas Øhlenschlæger, a VFX supervisor and 3D artist at Ghost, on November 22nd 2010 at the Ghost headquarters in Copenhagen. Before the interview he showed me around the facilities of this rather big post-house, with around 35 employees. *** How long have you worked at Ghost? I've been here around five or six years, started as a 3D guy and now I'm more of a compositor and supervisor. I think it's 50/50, I go on sets and shoot, and I'm involved with planning and budgeting. Right now I'm doing blue screen compositing and clean-up. You have worked with RED? Yes, I have worked quite a bit with RED. We do a mixture. These days it seems as if the guys who have the money for it shoot 35mm, but if they want to go a bit cheaper then it's RED. I think most people really enjoy working with 35. Maybe it's also like a snobbiness or something; 35 has a better image at least. I think you can do a lot of cool stuff with RED and I personally don't mind working with it at all, but I think directors and photographers like the millimeters. I would say from our end, usually there is not that big of a difference. From my end, as a compositor, I get the DPX files converted from RED material or scanned from 35mm and there is not a lot of difference. Usually RED has a little bit less grain, 35 a bit more grain. But I think for us the advantage of RED is that you can get the camera tapes. When you shoot 35 you only select a couple of takes and then you scan those. Some guy scans them in a certain bit-depth, and he chooses how much data you get and he delivers that to us. So if I want an extra take, or I want a couple of frames before or after the take, then I have to go back and re-scan it and that costs money. Or if I wanted it lighter or darker I'll have to go back to the scanning guy and have him scan it lighter or darker. Whereas with RED we usually get the camera tapes so we have everything, we have the control here. I can get to the raw files, I can have the files and I have control over all of the exposure and such. A lot of the time you find that even though they selected some take you would really like another take to patch in stuff. It could be that you are painting away some stuff and you know in another take you have that frame, perhaps a background. So for us it's very nice to have control over getting scans right away, there is no waiting time. It's right here, I can just really quickly sit down and scrub through and see that this take looks a little bit better. Even though the clients chose something different I can use it for something. That's one of the biggest differences for us. That's also a big difference on set, because they can check everything right away, like the focus for example. Everyone talks about that, it's very nice. Yes, exactly. I think a lot of the time I have had a really tough time with 35mm, because when you shoot you have a video assist.
They film the feed from the viewfinder inside the camera, and that creates a really crappy quality version that you can preview, so when you are watching the monitor it's actually a filmed version of that. It's just poor quality. Sometimes when you film you can't tell if something is in frame or not, for example with really fine details. For instance if I'm setting a tracking marker that's really small, I can't tell on a video assist, it's just a big blur. On HD, on RED or Alexa or similar, you have the full resolution, you can see that the marker is there or if I need to move it. It's a lot nicer. And of course, if the production wants it, they can have a set editor, they can do really fast chroma keying and key stuff together and merge layers. Of course they can also do that with a video assist but it's just such poor quality that it's difficult. In theory, it's not often that we do it, but in theory you can bring your whole machine on set and do stuff while you're there. So you can check that everything is all right before you leave the set. Another small thing, I don't think it's anything major: the 35mm cameras have always had movement to them. There is something mechanical going on inside so you always get a little bit of jitter. If we are integrating two elements that can cause some problems for us, because if the two plates that we are aligning don't have the same movement then we need to stabilize one. With RED there is no movement unless the photographer wants it to be there. Would you try to stabilize the background and the other one, or would you make the same shake for both? If it was 35, let's say I'm shooting two plates on green screen and combining them, taking one element and putting it onto another element. In 35 I would have to choose one element and I would stabilize that, remove the subtle camera shake. It's only a pixel or two. Then I would track the other one, take the movement from the other plate and transfer it to the first one, and then they would hopefully move together. Or I would just decide to remove the movement from both of the plates so everything is still, but usually you try to re-apply it afterwards to get the little bit of movement back. It's not a big deal, it's easier faking the movement afterwards. Nobody really notices this unless it's wrong. If the plates are moving differently you might see that something is off.
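The plate-matching Thomas describes, removing one plate's own shake and re-applying the other plate's movement so the two elements drift together, can be sketched as a simple per-frame translation. This is only an illustration under the same assumptions as the earlier sketch (OpenCV and NumPy available, per-frame (dx, dy) offsets already measured by a tracker), not Ghost's actual tooling.

# Minimal sketch (not a production tool): undo a plate's own movement and
# apply the offsets measured on the other plate so both elements move together.
import cv2
import numpy as np

def retarget_movement(frames, own_offsets, target_offsets):
    """Translate each frame by (target offset - own offset)."""
    out = []
    for frame, (ox, oy), (tx, ty) in zip(frames, own_offsets, target_offsets):
        h, w = frame.shape[:2]
        # Net per-frame translation: remove this plate's drift, add the other's.
        m = np.float32([[1, 0, tx - ox], [0, 1, ty - oy]])
        out.append(cv2.warpAffine(frame, m, (w, h)))
    return out

# Passing target_offsets of all zeros simply stabilizes the plate, which is
# the other option Thomas mentions before re-applying movement at the end.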
Have you worked with HDSLR footage? I have heard that if you are going to do post production on footage then you shouldn't use those cameras, so perhaps you are not working with it at all. I don't think we do a lot of that, no. Sometimes clients come in with some stuff and we do some things to it, but it's very limited. You're not supposed to do anything to that material. I mean you can edit it, and it can look really good. If you are just shooting a film and delivering it, and you're not doing a lot of post production to it, I think it's fine. It can look really beautiful if you light it properly. But there is no range to it, you can't grade it, it's very difficult to key on. There is no information, or very little information. So no, I think we have done some tests, but I don't remember keying on it or anything. I wouldn't consider it a professional choice. I think though, for shooting a background or something that you want for TV, then it's more than fine. If a client would come to me with a project with heavy post production and say that we are shooting on a Canon 5D, then I would tell him to spend some more money on the camera. I think the lowest alternative is RED these days. And is that the most popular one? Do you get mostly RED footage? I think so. I think in Denmark it's more and more RED. We do a mixture of commercials and feature films, and I think commercials are more and more RED. With the financial crisis last year, most commercial budgets had a huge decrease. After that I think they have been looking to shoot cheap. RED is a good alternative for that. It doesn't necessarily create worse quality, it's just cheaper. But I still think if you find a director and a photographer that have enough money, most of them would go with 35mm. They still prefer the quality. RED hasn't caught up yet. No, not really. Like I said, I think there is also some snobbiness to it, and that's what they are used to. Unfortunately my colleague couldn't be here, he knows a lot about the differences. He said that he felt that the biggest problem with RED today is that amateurs can get it really cheap, and then they film with it and think that it is 35mm. If you treat RED like you treat 35, you light it properly and spend your time on set to make a proper shoot, then it can be almost as good, if not better. He at least thought, and I think I agree with him, that the problem is that a lot of people rent the RED camera, think that now they have a good camera, and just go shoot. They don't think about light, and that is impossible to fix. You still need to have good quality productions. It's not everybody who does that, but sometimes it happens. On 35mm shoots, even if they might have a lot of money, every time they press the button it costs some amount of money, so maybe you focus a little bit more before you record. I think that's some kind of mentality: now we can shoot the rehearsals and we just keep shooting because it's free. There is no thinking about the amount of material. Whereas with 35 you are always considering it. You will do a lot of rehearsals first, and when everything looks good and the photographers, the directors, the light guys and everyone are happy, then you start shooting. Have you had any footage from cameras similar to RED? I think we have had some shoots on the Alexa as well. I must admit I'm not familiar with the differences, but Sasha, the color grader, was very happy with the Alexa. He said it had some of the same qualities as RED but just a little better. A lot of the time I don't even notice what we are shooting with. Of course if I go on set I notice what camera it is, but it doesn't play a big part in my head. It's just up to the photographer. We can work with everything. I know the scanning process is always a pain, it costs us a couple of days. Once they are done shooting they have to go scan it, and it will take a couple of days until we get started. If we shoot on RED I can just take a disc and start right away. You mentioned the jitter when you talked about the 35mm; are there any other differences when you have the footage in the DPX format? As I said, for keying purposes, it has a different feel. I can still get compression artifacts from RED. I'm not sure that they are actually compression artifacts, but I can still see that it's a digital format. When I key I can still find something like JPEG artifacts, I'm sure it's not the same thing, but I can still see digital artifacts in the media.
Whereas in 35mm you don't see that at all. You see that it is more analog in that sense, and I like the 35 because of that, it doesn't have the artifacts. It's not a huge thing, I honestly don't really care. I still think if I should choose, I would choose 35 for that reason, for keying at least. With some film stocks that have a high ASA number we can have some problems matching computer graphics, because the footage can be very grainy. We have to make digital grain that matches the film grain. That's a little bit more work for us. With RED there is grain, but it's CG grain and it's fairly easy to match up. It's not as heavy as the film grain. Some photographers prefer film that has a lot of grain. I wouldn't say that it's a huge problem, but it can take a little bit more time. You have integrated the CG element that you wanted and you just feel that it's something different. And if you switch through the color channels you'll see that the grain is moving differently in what we have inserted than in the original. That's more work with film material. But that's a lot of our work here, integrating we call it. Making the CG elements fit with the footage, blurring, putting grain on, or putting smoke over it. Cheap tricks like that.
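The integration step Thomas describes, putting animated grain on a clean CG element so it sits in a grainy plate, can be sketched very simply. This is just an illustration, not Ghost's workflow: the per-channel strengths are made-up values that would normally be matched by eye or measured against the scanned plate, and a float image in the 0-1 range is assumed.

# Minimal sketch (not a production grain tool): adding independent, animated
# grain to each color channel of a CG frame. Strengths are placeholder values.
import numpy as np

def add_grain(cg_frame, strengths=(0.04, 0.03, 0.05), rng=None):
    """Add Gaussian grain with a separate strength per color channel."""
    rng = rng or np.random.default_rng()
    grained = cg_frame.copy()  # assumes a float image in the 0..1 range
    for channel, sigma in enumerate(strengths):
        noise = rng.normal(0.0, sigma, cg_frame.shape[:2])
        grained[..., channel] = np.clip(grained[..., channel] + noise, 0.0, 1.0)
    return grained

# Drawing fresh noise for every frame keeps the grain "moving", and the
# separate per-channel strengths mimic how grain differs between the color
# channels, which is what gives it away when you flip through the channels.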
I would like to have a RED camera here to shoot tests on, because it's cheaper and it would help us: if you need a water splash or something you could just go out and shoot it straight away. Whereas with 35 it costs so much money and it's such a big hassle. It would just be easier to have it available. We have the 5D now, and as I said it's good for some things and you can actually use it for some elements, but I wouldn't trust it for everything. Shooting a plate that's very simple, a rain plate maybe, where you only need black and white for example. It's fine if you only need simple elements to integrate, but not for shooting more complex stuff with colors that need keying. [I ask about the changes in the industry with hardware and software getting cheaper; is it harder for companies to survive, for example here in Copenhagen?] - Thomas says it's less about these things, it's more about the talent. A director might really want to work with a certain person, and they are the expensive element, not the hardware or software. - Our post-house is more focused on the hardest effects and the biggest jobs. Of course we lose jobs to smaller companies, like advertising agencies that have one creative guy sitting and doing some effects. But our main market is the bigger stuff that you can't do with one or two guys. - Ten years ago you wouldn't be able to do anything by yourself, but today, just with Final Cut, you can actually do some pretty cool stuff. It's becoming easier to do the simpler stuff. - Maybe ten years from now the things we are doing now will be the simpler stuff and we will be doing something else. [I explain my premises for testing the film look and I ask Thomas for his opinion on that.] Interlaced would always look cheap; that's hopefully soon a dead standard. Depth of field is a good choice. You can however get the same depth of field with the DSLR cameras as you can with the 35. That's one of the things that RED has brought, getting better depth of field. I actually think the quality is what determines if it looks real or not real. If you spend time with a good light guy on set, that would bring you the most quality. Whatever camera you are shooting with, if you have a good light guy and a good camera guy, that will give you the most 35 look. The 35 look is quality. That is the most important thing. Film grain, perhaps you notice it a little bit, but I don't think that it's a huge selling point. In the big feature films we paint away all the scratches and the dirt, not all of it, but the bigger things. And that still looks like 35, so… I do believe that it's just quality, you need to shoot it right. Also with grading, you need to treat it right afterwards. I think that you end up with a lot of the same things. I've heard that RED can be considerably sharper than 35. Yes, perhaps, but if you end up in the cinema then you will always get it a little bit blurred out afterwards. But it's true, if you end up on a digital projector or a TV then you might have something a little bit sharper. If you're ending up on an HDTV, going from digital to digital, you will have more sharpness than going from analog to digital, which will get a little bit more blurred. If you're going from digital to analog I think you will get a little bit of the same there. I remember on one feature film I worked on, there was this huge tree and you could see all the small leaves and details on the monitor. But when it got to the cinema on 35mm it was all mashed together. There was an analog process in there that doesn't take the picture perfectly pixel to pixel. And light of course blends it together. - Thomas suggested that I try to find comparisons from the same film set, both shot on RED and on 35mm. I know from the color grader that when he gets the material he has more control with the RED. He has all the exposure levels and a higher bit depth, a higher range that he can control. With 35 he is a bit more locked down. When they scan it they get a certain range. The negative has a much greater range, but they scan it and then they have to choose where they want the range to lie. With RED, if I get the DPX version then somebody has made that choice as well, but if I get the raw files I still have the high range. For him, that makes a difference. How often do you go back to scan it when you have the 35mm scan and you see that you might want a different range? Almost never. I think once or twice in my career. That's mostly a money issue. We get something from the client and we have to work with that. Unless it's really bad, then we might tell them we need something else. Maybe more than once or twice. I think a couple of times we have asked for 4K plates to get higher resolution, but I don't really recall us getting different light ranges. If we had a scanning facility here and it was free then I would most likely have done it more times. Especially with the 4K, that's a big deal. Sometimes if you have to scale up something it's a lot nicer just getting it in 4K instead of trying to scale up something from 2K. - I'm honestly relieved that there isn't a big difference between RED and 35mm, not for me. When I make a budget for something, it doesn't matter what they shoot on. It would be the same budget whatever they do. Time-wise it will be the same. Look-wise there will be a creative difference, but usually that's their call, what they want to go for. - I think that in a couple of years digital will have surpassed film; of course there will always be some nostalgic people that want to shoot 35mm, people that really like it. But yeah, soon, hopefully, the benefits will be even bigger. Right now it seems that at least with the RED cameras you can crank up the speed. I think you can go up to 100 FPS if you go down to 1K or 2K perhaps, I'm not sure.
Hopefully you will be able to crank it up to go faster and faster. A 35mm camera can go a lot faster, you just need more light. Hopefully these things will get better with better chips. - With the film look, I think it's mostly the on-set treatment, combined with grading. Maybe you have to grade RED a bit more; with 35 you get more out of the box. Or maybe it's also because you get a scanner guy that grades it a little bit when he scans it. I have a feeling you need to spend a bit more time grading RED because of this. I have seen the 'before and after' from a grader, and the 'before' can look really crappy, while with the 35 it usually looks decent. But I think you just start from a lower quality image with RED. Not because it's lower quality perhaps, but you have the full range and you can go anywhere, whereas with the 35 you are a little bit there already. [After this we talked about the new generations: although today we associate the film look with certain things, such as 24 frames per second, later generations might be used to higher frame rates and better quality and sharpness. This might have something to do with computer games or subtle changes that the industry is going through. It might at least not be the same look that filmmakers will be going for in the future. We also talked about the 3D trend and whether it would continue into the future or if something else would come to replace it.]
Footnotes
1. Frames per second.
2. http://www.red.com/store/
3. http://www.red.com/store/
4. http://www.aceshowbiz.com/news/view/00037110.html
5. http://www.red.com/faqs/red-one/recording
6. T-stop is the same as F-stop in still photography regarding aperture.
7. Digital Picture Exchange, a standard digital format, common for visual effects work.
8. http://blog.manggis.tv/?p=77
9. http://www.red.com/faqs
10. Digital Imaging Technician.
11. http://www.red.com/shot_on_red/
12. Digital single-lens reflex.
13. 1280 x 720 pixels, at 24 progressive frames per second.
14. Depth of Field.
15. http://www.avm.dk/artikel/visartikel.php?artikelnummer=5494
16. http://www.dr.dk/P2/Rytteriet/
17. http://www.fox.com/house/
18. http://www.imaging-resource.com/NEWS/1274109903.html
19. http://philipbloom.net/2010/04/10/house-season-finale-shot-entirely-with-canon-5dmkii/
20. http://www.poppoli.com/citystate.html
21. A moiré pattern is created, for example, when two grids are overlaid at an angle, or when they have slightly different mesh sizes.
22. http://handyfilmtools.com/
23. http://www.sony.co.uk/biz/content/name/ssw-bc-35mm-2010
24. http://www.panasonic.com/business/provideo/home.asp
25. http://www.filmlike.com/
26. http://blog.videohive.net/general/getting-that-film-look/
27. http://www.learningdslrvideo.com/film-look-dslr-video/
28. http://blog.videohive.net/general/getting-that-film-look/
29. http://www.vxm.com/Progvsinter.html
30. http://www.mediacollege.com/video/camera/shutter/
31. When the iris is wide, the aperture is large and the F-stop (or T-stop in cinematography) is a small number. These can all be used to describe the same parameter.
32. http://www.cambridgeincolour.com/tutorials/depth-of-field.htm
33. http://www.luminous-landscape.com/tutorials/dof2.shtml
34. Angle of view is the total area in front of the camera which is visible.
35. http://blog.videohive.net/general/getting-that-film-look/
36. http://www.geodetic.com/whatis.htm
37. http://www.widescreen.org/aspect_ratios.shtml
38. Black bars on the top and bottom of the screen.
39. http://www.learningdslrvideo.com/film-look-dslr-video/
40. http://theabyssgazes.blogspot.com/2010/03/teal-and-orange-hollywood-please-stop.html
41. Exposure latitude is the allowable range of exposures for a given photographic emulsion. http://www.tpub.com/content/photography/14208/css/14208_51.htm
42. http://gizmodo.com/392663/hollywood-attacking-film-grain-for-blu+ray
43. http://www.guardian.co.uk/film/filmblog/2010/aug/18/old-celluloid-beats-digital
44. http://www.socialresearchmethods.net/kb/expfact.php
45. http://www.redgiantsoftware.com/videos/redgianttv/item/23/
46. Neutral density filters. Colorless filters used to reduce all wavelengths of light equally.
47. http://www.redgiantsoftware.com/videos/redgianttv/item/23/
48. http://www.prolost.com
49. http://www.redgiantsoftware.com/videos/redgianttv/item/23/
50. http://theabyssgazes.blogspot.com/2010/03/teal-and-orange-hollywood-please-stop.html
51. http://kuler.adobe.com