GDV P.4.1.2 Feedback of image cutouts

Let’s start this exercise with the description from the Generative Design book: ‘A familiar example of feedback: a video camera is directed at a television screen that displays the image taken by the camera. After a short time, the television screen depicts an ever-recurring and distorted image. When this phenomenon is simulated, an image’s level of complexity is increased. This repeated overlaying leads to a fragmentary composition. First, the image is loaded and shown in the display. A section of the image is copied to a new randomly selected position with each iteration step. The resulting image now serves as the basis for the next step – the principle of feedback.’ Here you can find the two original programs.
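The program behind this is very short. Here is a minimal sketch of the idea (not the book’s exact code: the file name, canvas size, and random ranges are my own assumptions; the variable names are the ones I use in the text below):

```processing
PImage img;

void setup() {
  size(800, 800);
  img = loadImage("image.jpg"); // assumption: any photo of roughly this size
  image(img, 0, 0);             // draw it once; draw() then feeds on the screen itself
}

void draw() {
  // pick a vertical slice of the current screen content...
  int sliceWidth  = (int) random(1, 100);
  int sliceHeight = height;
  int x_copyFrom  = (int) random(0, width - sliceWidth);
  int y_copyFrom  = 0;
  // ...and paste it a few pixels to the left or right of where it came from
  int x_copyTo = x_copyFrom + (int) random(-10, 10);
  int y_copyTo = y_copyFrom;
  copy(x_copyFrom, y_copyFrom, sliceWidth, sliceHeight,
       x_copyTo,   y_copyTo,   sliceWidth, sliceHeight);
}
```

Because copy() reads from the display window (which already contains earlier copies), every frame feeds on the result of the previous one – that is the feedback.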

I have prepared a Flickr summary page with all the images I’ve made during this exercise. And you can find all programs on the loftmatic page. A general remark about JavaScript: the images produced by JavaScript are not of the same quality as the ones that Processing produces. It seems that JavaScript throws a kind of blur over the edited image, so the end result will be totally different (and much better) when you run the program in Processing. I have no idea why. And a few programs don’t work at all. I will mention that when it happens in the following paragraphs, and I will check which error messages I have received.

Because this is a very short program it did not take much time to adjust things. Just the usual stuff. Changed a few variable names to make things more understandable (for me). What does round do? Ah… it calculates the integer closest to its parameter. Forgot that. Added one of my own photos. Why is there a white top and bottom? I think because you can see the noise better that is left behind by the manipulated pixel slices. I changed the background to black. Works fine too. I only have to remove my watermark (or export the photo without a watermark from Aperture). Otherwise it will leave white pixels in the manipulated result.

My first question is: ‘does this feedback of image cutouts also work horizontally?’ In my naïveté I just swapped x_copyFrom with y_copyFrom and let the program run. That does not make any sense at all. There is some movement at the left side. Swapped x_copyTo with y_copyTo. Let it run again. No success. Swapped sliceWidth with sliceHeight. Some weird stuff happening at the top of the display window. Changed a few other parameters and now it works. But it would be better to have a photo which uses the full height but not the full width. Also changed the copyTo range to -1 and 1, which means that the image distorts very slowly (I think).
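The horizontal variant I ended up with looks roughly like this (a hedged sketch of the changed lines inside draw(); the exact loftmatic program may differ):

```processing
// inside draw(): full-width horizontal slices that drift up or down
int sliceWidth  = width;
int sliceHeight = (int) random(1, 100);
int x_copyFrom  = 0;
int y_copyFrom  = (int) random(0, height - sliceHeight);
int x_copyTo    = x_copyFrom;
int y_copyTo    = y_copyFrom + (int) random(-1, 1); // the -1, 1 range from the text
copy(x_copyFrom, y_copyFrom, sliceWidth, sliceHeight,
     x_copyTo,   y_copyTo,   sliceWidth, sliceHeight);
```

A detail that may explain the slowness: random(-1, 1) returns a float in [-1, 1), and the (int) cast truncates toward zero, so the offset is almost always 0. The slices only move on the rare frames where the offset is non-zero.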

Can I put this kind of image processing on an angle? Added a translate and rotate to the program and I get an ‘InternalError: transformed copyArea not implemented yet’. Might be the Java2D rendering system. It does not like a copy after a transform? But if you restrict the y_copyFrom random range to (0, height / 2), only the top of the image will be scrambled.

Pretty obvious: when you restrict the y_copyFrom random range to (height / 2, height), only the bottom of the image will be scrambled.
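Both restrictions are one-line changes to the y_copyFrom line (a sketch; variable names as I use them in this text):

```processing
// scramble only the top half of the image:
int y_copyFrom = (int) random(0, height / 2);

// ...or, alternatively, only the bottom half:
// int y_copyFrom = (int) random(height / 2, height);
```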

Back to the original code. Forgot to change something. Let’s see it as a happy accident. But not for JavaScript. I get no error message in my Safari Error Console, but JavaScript produces a total mess which does not represent the images you get if you run the program in Processing.

Set sliceWidth = (int) random(1, 2); and let it simmer for about 30 minutes. Time for an espresso.
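A side note on that line: random(1, 2) returns a float in [1, 2), so the (int) cast truncates it to 1 every time. In other words, the randomness is cosmetic and the slices are exactly one pixel wide:

```processing
// inside draw(): one-pixel-wide slices
int sliceWidth = (int) random(1, 2); // always 1: random(1, 2) < 2, and (int) truncates
// ...equivalent to simply writing:
// int sliceWidth = 1;
```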

Tried to let the vertical slices not change their place too much. Perfectly translated by JavaScript, because it doesn’t do anything at all. What does my Error Console say? [Error] IndexSizeError: DOM Exception 1: Index or size was negative, or greater than the allowed value. (anonymous function) (processing.js, line 4076). Fine!

Found out that you can place the original photo in the background. So it shifts the slices but does not replace them with black, and the image stays partially unaffected. It remains a bit fuzzy what exactly the program does when you mash up some parameters. Anyway… JavaScript doesn’t do anything. Same error message as with the previous program.

Maybe I did something illegal here. I put the larger random parameter in front of the smaller one. This gives an interesting effect though. Some repetition is going on. And the image moves slowly to the right border, repeating itself around 50 pixels from the left. This takes time. It is a bit similar to the wind filter in Photoshop, only this takes more than half an hour to make an image. The program that generated the last image ran for 210 minutes. But in JavaScript it does not work at all. I used negative values for the image size, and JavaScript does not like that.

Removed the randomness of x_copyTo. What about positioning the image outside the display and copying it into the display? That works. But why does it copy these image strips? And where does the offset come from? Ah… I see… It copies the original strip at the left side. It doesn’t copy the part of the image which is outside the display. When it has copied it, it skips 100 pixels to the right and copies it again. And when it has pasted, it copies the pasted line again. So I selected the right part of the image, 100 pixels wide, let it copy from there and let it paste at random positions between 200 and 800 pixels. Yes… negative values again. So no JavaScript image is produced. It even led to a crashing Safari browser.

This is an extra program which is not to be found in the Generative Design book. The program makes all x- and y-positions random. Changed the size of the image selection to be copied to 10 x 10 pixels. And it also seems to work perfectly in JavaScript.
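The core of this extra program can be sketched like this (my own reconstruction, not the literal loftmatic code; the cutout size s is the 10 x 10 from the text):

```processing
// inside draw(): copy a 10 x 10 cutout from a random spot to a random spot
int s = 10; // cutout size
int x_copyFrom = (int) random(0, width  - s);
int y_copyFrom = (int) random(0, height - s);
int x_copyTo   = (int) random(0, width  - s);
int y_copyTo   = (int) random(0, height - s);
copy(x_copyFrom, y_copyFrom, s, s, x_copyTo, y_copyTo, s, s);
```

Note that both source and destination stay inside the canvas here, which may be exactly why this version does not trigger the negative-index IndexSizeError that Processing.js threw at the earlier programs.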

I know that 10 x 10 is small. But what if I make it 1 x 1? It looks like nothing is happening. Oh, I see: two or three pixels have been moved in the top black area. This is going to take a lot of time. Changed sliceHeight to height. That is better. Let it run for 45 minutes.

And this of course also works for horizontal copying. But I think it’s better then to take a portrait-oriented original image. I decreased the x_ and y_copyTo random factor to (-2, 2). So the distortion is very small and it is going to take a lot of time.

Undid all setting changes (as in: I undid them yesterday) and added a sliceWidth randomized between -400 and 400. I have the impression that this image (with a lot of black in it) will not work as well as images with more light and colors in them.

Another image with almost the same settings as the previous example. And this works better. And also slower. The final image took from 13:20 until 16:05. That was the time when the first pixel reached the right side of the Processing image display window.

Changed sliceWidth and sliceHeight to 100. And x_ and y_copyTo to random(-2, 2). This seems to give very free and organic color clouds. I must admit I also chose a cloudy source image.

What about feeding the manipulated image to the same program again? So I first distort an image, save it as a JPG, and import that again for a round of image processing. I think it does not make much of a difference, except when you change a few parameters. JavaScript blurs all the fine details. I think it uses the gray from the background to mess everything up. Terrible!

I must say that these images have painterly qualities. Just played with the parameters. And left the black sides out of the interaction / animation.

Added only vertical manipulation lines.

Did the same here but in a horizontal way. So the lines don’t exchange places vertically, only horizontally. And they shift in and out of each other. But JavaScript makes a mess of it.

Henk Lamers

