PHO703: Week 4 Strategies of Freedom

Week 4 Resources

Week 4 Webinar 24 hours

I have taken all of the opportunities presented to meet my tutor and other students online. I did so again this week in the context of a 24-hour challenge the course had set for us.

I decided to explore how to supplement the visual narrative of my abstract work and ended up using photographs from the realm of scientific and computational biology and synthesised 3D images.

I had appropriated and mixed ideas in line with course discussion about such practice. I started with an exploration of what the public or viewer might expect to see generally in biological images. How could I expand visual narratives to reach out to the viewer?

I could adopt 3D, although this was not my intention. I’ll think about that. I have time to try out some ideas during the current Surfaces and Strategies Module.

I envisage the colour images I make being presented as transparencies on lightboxes for best effect. At £33 x 20, that is an expensive option, and I still have to establish the feasibility of printing on translucent material. I have read that Epson printers using pigment ink should work, although drying takes longer. With this in mind, what I discovered by synthesising 3D for myself was a new effect I liked. The red-cyan anaglyph images I made presented as if light poured through a transparency layer and cast a shadow on the wall behind.

I shall also be mindful of the science being something separate from the art in my work and try not to let it distract. I learned that I need to exercise care in balancing the interdisciplinary visuals.

My first response to the activity this week failed due to poor weather conditions. As a cameraless approach, I had wanted to time how long paper took to burn under a magnifying glass. I did go back to this later but stopped: the method is exceptionally crude (it lacks any finesse), there is a risk of fire, and the bright sun risks eye damage. A thought experiment only.

Still pending is yet another method of recording that doesn't use a camera. I wanted to make a sundial and sketch the gnomon shadow at different hours. A grid would act like pixels: squares in the light would read as light, and squares under the shadow as dark. A computer would then be used to speak the pattern. An update is provided immediately below.

Week 4 Independent Reflection

This week's content has an applied element and helped to develop my practice. My practice explored new visual presentation, which is a development point. I do not normally appropriate, yet I was challenged to, and this made me feel uncomfortable as well as mindful of licensing and permission. I needed to examine the visuals to find reasons for exploring them.

Sundial.jpg

In summary, audio rather than visual output was the result, and I didn’t use a camera.

If you listen to the track, 30 seconds or less is advised. It drives even me crazy.

https://soundcloud.com/search?q=mtcrj

Method

I copied the light and dark squares as text into Adobe Audition and generated speech (Audition – Effect – Generate – Speech). I had previously discovered a Flanger preset called the Crazy Clock of Doom, which sounds a little more exciting, and the name appeals to my sense of humour (Audition – Effects – Flanger – The Crazy Clock of Doom).
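For anyone curious about the mechanics, the grid-to-speech step could be sketched in code along these lines. This is a minimal illustration only: the function name and grid values are invented for the example, and the actual piece used Audition's speech generator rather than any script.

```python
# Minimal sketch: turn a sundial grid of light (1) / dark (0) squares into
# the text a speech generator would read aloud. The grid values here are
# invented; the real work was done in Adobe Audition.

def grid_to_speech_text(grid):
    """Read each row of the grid as the words 'light' / 'dark'."""
    words = {1: "light", 0: "dark"}
    rows = [" ".join(words[cell] for cell in row) for row in grid]
    return ", ".join(rows)

# A hypothetical 3x4 grid with the gnomon shadow falling diagonally.
shadow_grid = [
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 1],
]
print(grid_to_speech_text(shadow_grid))
# → light light light dark, light light dark light, light dark light light
```

The resulting text is what would be pasted into the speech effect to become the audio track.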

Week 4 Activity Hands Off

I decided to take some media images from the realm of human biology and computational biology and apply some technical methods to obtain 3D visuals. That helps me explore visual themes as my abstract expressionist work uses common DNA as a link to the past.

In 2004 my desktop computer worked away as part of the World Community Grid. The first project I chose to support was Human Proteome Folding. My machine is now working on finding Human Cancer Markers.

I did not realise until this week how my photography had gained influence from this field of computational biology.

And so from the Science Museum, I took the model of the famous double helix and used it to create a stereogram pair. I landed on this presentation after trying several different approaches, and this was the best result for such a delicate subject.

DNA_IMG_5488.jpg

I find I can free-view and get the depth using just my eyes. The London Stereoscopic Company sent me a simple viewer designed by Brian May (of Queen) that cost about £5.

I had seen red and cyan anaglyphs in the past, and I found today that the method worked well on a set of cellular images I grabbed from the World Community Grid Facebook group.

A friend gave me a pair of cardboard viewing spectacles, and in trying to get the 3D effect, I stumbled upon a layer visual that took me by surprise. I like it. The result is a wall with the image on it, then a nearer version of the image as if on a transparent surface.
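The anaglyph merge itself is a simple channel swap. A sketch, assuming images are represented as 2D lists of (r, g, b) tuples rather than real image files: take the red channel from the left-eye image and the green and blue (cyan) channels from the right.

```python
# Red-cyan anaglyph merging sketch: red from the left-eye image,
# green + blue from the right-eye image. Pixel values are invented.

def make_anaglyph(left, right):
    """Combine a same-size stereo pair into a single red-cyan anaglyph."""
    return [
        [(l[0], r[1], r[2]) for l, r in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

# Tiny 1x2 example pair (invented values).
left = [[(255, 0, 0), (200, 200, 200)]]
right = [[(0, 128, 64), (100, 100, 100)]]
print(make_anaglyph(left, right))
# → [[(255, 128, 64), (200, 100, 100)]]
```

With real photographs, the same channel swap could be done in an image editor or with a library such as Pillow, then viewed through the red-cyan spectacles.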

I’m so excited about this because I recently considered using light boxes to display my abstract work.

Until you look at the following with the specialised specs, it is hard to gain access to the effect. However, there is more latitude on image size, and that is important in my work, as I still have in mind the idea of the viewer looking close up to become immersed in the art as an experience. These are still early days.

I’m glad to have done this activity, as it feels like I am now making the progress I was seeking when starting this new module.

TransparencyIMG_5064.jpg
TransparencyIMG_5068.jpg

My earlier venture into microscopy took me to x80 magnification, which is nowhere near the 3 to 5 angstroms of resolution used in scientific research, so for the supporting images I need to appropriate and remix the work of someone else. If I go much further, then I need to sort out rights and permissions. The stock images I looked at cost £50 for a base image, and I would still need to create the 3D; they are also less visually stimulating. The licence is a lot to pay when, as a student, you don’t yet know what you’ll end up doing.

I’ve got some more lined up to make the five images by the end of today. At the moment, I’m wrestling with third-party images, mixing them to sample the visual language of cellular biology.

Cells-3.jpg
Spermatazoa.jpg

Week 4 Presentation 4 Turn Away

When I engage in my photographic practice, I scan for bodily injury/repair and may need to use a mirror to check specific framing and focus. Although I take a photograph, the situation often requires someone else to be a photographer. I created strategies to use in place of a second photographer. When I photograph a family member, the constraint becomes one of acting directly and with speed as the subject is only willing to cooperate a short while.

Week 4 Presentation 3 Force

I do not use force for the satisfaction of exercising power over the medium. Instead, I interpret the data in an image and try to read the direction in which it will go if I process it. With experience, I choose which stills I can work on to make a significant effect. I have had to practice a lot. Ever since my first digital camera and post-processing suite, I have practised, and I continue to do so. I previsualize style or type of outcome but let the image data provide the direction.

I am selective and respond to the data in a photograph.

Others may view my approach as passive. I would ask the question, is aikido passive? Aikido is a martial art and philosophy and is a way of unifying life energy. I would settle for that.

Week 4 Presentation 2 Smuggle

I explained below how I use my intent. The outcome is a visual effect of glow in the underlying image. I reject the need for a high pixel count in my practice; medium resolution is better suited to my intent. The photography software I use becomes passive when there is a lot of detail, and filters no longer have a good effect.

I look at the sensor data for the living glow of repair. Detail of a wound or human hair would be distasteful in my art and would serve as a distraction. I wish to create an experience in looking.

There is a concept of a one-pixel cinema, in which I see a parallel to my work. Let me explain what the one-pixel cinema is. It acknowledges how colour is managed in film scenes, going beyond the use of LUTs. Films from companies such as Disney deploy a recognisable tonal aesthetic that harmonises across the film catalogue. As for how it works: the programmer identifies the dominant colour in each scene and would reduce each frame in turn, except there are some complications.

Several frames can invoke change and make visual sense by overlaying an original full frame.

The programmer reduces the whole film to single pixels that cycle through the dominant colours of the scenes.
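The reduction just described could be sketched as follows. This is a toy illustration with invented frame data and naive dominant-colour counting; real implementations work on video files and deal with the complications mentioned above.

```python
# Toy sketch of the one-pixel cinema reduction: one dominant colour per scene.
# Frames are 2D lists of (r, g, b) tuples; all values here are invented.
from collections import Counter

def scene_dominant_colour(scene):
    """Most frequent (r, g, b) value across all frames of a scene."""
    counts = Counter(px for frame in scene for row in frame for px in row)
    return counts.most_common(1)[0][0]

def one_pixel_cinema(scenes):
    """Reduce a film (a list of scenes) to one dominant colour per scene."""
    return [scene_dominant_colour(scene) for scene in scenes]

# Hypothetical two-scene film, each scene one tiny 2x2 frame.
film = [
    [[[(200, 40, 40), (200, 40, 40)], [(10, 10, 10), (200, 40, 40)]]],   # warm scene
    [[[(30, 60, 120), (30, 60, 120)], [(30, 60, 120), (255, 255, 255)]]],  # cool scene
]
print(one_pixel_cinema(film))
# → [(200, 40, 40), (30, 60, 120)]
```

The resulting list of colours is what would cycle on screen as the single pixel.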

I deconstruct my images but not to the extreme just described.

Week 4 Presentation 1 Outwit

I started with the lens, and then my thoughts turned to the camera. I realised that its code is designed to read the sensor, but in doing so, it compensates for allowable lens aberration. That way, lens manufacturing can stop short of absolute perfection. I take an integrated view of the apparatus, and so my thoughts now turn to the software designed to read the camera files once on a computer. Today this could be a smartphone, tablet, laptop, or workstation. And so I view this as a whole, at least in terms of apparatus.

Photography is, therefore, about the lens, the camera, and the processing software. I then identify the constraints in the processing software, and I will force an algorithm to generate an unintended effect that I can use.

I have designed algorithms and so am attuned to finding limitations that create useful results. For example, I initially mute the effect of colour to give way to the infrared light that the filter inside the camera fails to stop. I get my pictures to glow where there is a source of heat, for example, where the body acts to heal an injury.
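Purely as an illustration, and emphatically not my actual algorithm, the kind of manipulation I mean (muting colour, then letting warm, red-dominant areas lift into a glow) might be sketched like this. The function, parameters, and pixel values are all invented for the example.

```python
# Loose, hypothetical sketch of "mute colour, let heat glow": desaturate
# every pixel toward grey, then brighten pixels where red dominates,
# a crude stand-in for near-infrared leakage through the camera's filter.

def glow_effect(pixels, desat=0.7, boost=1.2):
    """pixels: 2D list of (r, g, b) tuples. Returns the processed image."""
    out = []
    for row in pixels:
        new_row = []
        for r, g, b in row:
            grey = (r + g + b) / 3
            # Mute colour: keep only (1 - desat) of the distance from grey.
            r2 = grey + (r - grey) * (1 - desat)
            g2 = grey + (g - grey) * (1 - desat)
            b2 = grey + (b - grey) * (1 - desat)
            # Where red dominates, lift the pixel's brightness.
            if r > g and r > b:
                r2, g2, b2 = (min(255, c * boost) for c in (r2, g2, b2))
            new_row.append((round(r2), round(g2), round(b2)))
        out.append(new_row)
    return out

# A grey pixel is untouched; a warm pixel is muted then brightened.
print(glow_effect([[(100, 100, 100), (210, 60, 60)]]))
# → [[(100, 100, 100), (168, 114, 114)]]
```

The real manipulation works on raw sensor data rather than plain RGB tuples, but the principle of suppressing one signal to expose another is the same.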

I’ve given one example of what I do with the software. I do not de-privilege the apparatus by seeking alternative processes.

I do not advertise my methods, so yes, there is a black box, but I may relent once the experimentation matures into a technique. Nothing is firm just yet.

Week 4 Forum Human

I responded to the forum over whether it is possible to have non-human photography.

Movement-detection software running as an app across a pairing of an old smartphone and a new smartphone can auto-trigger and email the user each snap.
This brings into focus a modern dilemma of consumerism: what to do with old equipment that still works. In this example, the first and second apparatuses become linked thanks to the convenience of wifi and the availability of unlimited internet data plans, alongside the accessibility of integrated camera and networked computer functionality.
This approach is based on machine detection of a moving target and the extension of technological innovation into machine-on-machine integration, based here on a consequence of a consumerist society.
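The auto-trigger could work by simple frame differencing. This is an assumed approach, sketched for illustration; I have not inspected any particular app's code, and real apps use more robust detection.

```python
# Frame-differencing sketch: trigger when the mean absolute pixel
# difference between two greyscale frames (2D lists of 0-255 values)
# exceeds a threshold. Frames and threshold are invented for the example.

def motion_detected(prev_frame, curr_frame, threshold=10):
    """Return True when the scene has changed enough to count as motion."""
    diffs = [
        abs(a - b)
        for prow, crow in zip(prev_frame, curr_frame)
        for a, b in zip(prow, crow)
    ]
    return sum(diffs) / len(diffs) > threshold

still = [[50, 50], [50, 50]]
moved = [[50, 200], [200, 50]]
print(motion_detected(still, still))  # → False
print(motion_detected(still, moved))  # → True
```

On a trigger, the old phone would take the photograph and email it onward; that capture-and-send step sits outside this sketch.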

Week 4 Introduction Playing Against the Camera

I suppose the freedom I experience is a result of knowing that the photography software designed to allow the photographer to improve an image will begin to break down at absolute extremes. I seek to find useful ways of breaking the software. I act intuitively with the data the camera has captured, and I test how it responds to my actions.

I do look elsewhere, as I attempt to return to film and photo paper. I do not have the full facilities of a darkroom, but I am gradually building up my expertise and equipment. There is a certain amount of politics at home about turning a space into a darkroom, especially about introducing chemicals into the house. I could force the issue, but then again, I am a digital worker and can create lots of exciting challenges without an absolute need to find real alternatives.
