GDV P.4.3.1 Graphic from pixel values

This is the description from the Generative Design book: ‘Pixels, the smallest elements of an image, can serve as the starting point for the composition of portraits. In this example, each pixel is reduced to its color value. These values modulate design parameters such as rotation, width, height, and area. The pixel is completely replaced by a graphic representation, and the portrait becomes somewhat abstract. The pixels of an image are analyzed sequentially and transformed into other graphic elements. The key to this is the conversion of the color values of pixels (RGB) into the corresponding gray values, because, in contrast to the pure RGB values, these can be practically applied to design aspects such as line width. It is advisable to reduce the resolution of the source image first.’ Here are the original programs:
P_4_3_1_01
P_4_3_1_02

I have prepared a summary page on Flickr that gathers all the images I made during this assignment. And here is the loftmatic page with all the programs I changed. The first five work with JavaScript; the second five do not, so for those I added the Processing file instead when you hit a thumbnail.

To get acquainted with a program I always change the variable names to make them more meaningful for me. For instance, how do I know what l3 is? I think this is an unlucky choice (also because of the typeface). How do I know that it is not 13 instead of l3? And if it is, is that a capital I or a lowercase l? Luckily the Generative Design people have placed a comment above this statement in the code: // grayscale to line length. So I changed l3 into greyToLineLength. That is more work but much more understandable for humans. GreyScaleToLineLength would have been even better. Maybe I have to change that anyway. Another example: w4. From the comment it seems to be a stroke weight, so I changed that variable's name to greyScaleToStrokeWeight. Which leads to names like greyScaleToLineLengthStrokeWeight. It's long, but understandable. But what about getting a picture of a portrait? I found a photo from 2008 which I made of a cheetah in the Tiergarten Schönbrunn in Vienna. To begin with I used the standard variations which you can find under the 1-9 keys.
P_4_3_1_01_GDV_01
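
To make the renaming concrete, here is a minimal sketch of the pixel-to-line idea, assuming an image called cheetah.jpg in the data folder; the mapping ranges are my own placeholders, not the book's values.

PImage img;

void setup() {
  size(600, 600);
  background(255);
  stroke(0);
  img = loadImage("cheetah.jpg");   // hypothetical file name
  img.resize(60, 60);               // reduce the resolution first, as the book advises
  for (int gridY = 0; gridY < img.height; gridY++) {
    for (int gridX = 0; gridX < img.width; gridX++) {
      // convert the RGB pixel to a gray value and map it onto two design parameters
      float greyscale = brightness(img.get(gridX, gridY));
      float greyScaleToLineLength = map(greyscale, 0, 255, 12, 0);
      float greyScaleToStrokeWeight = map(greyscale, 0, 255, 3, 0.25);
      float posX = gridX * 10;
      float posY = gridY * 10;
      strokeWeight(greyScaleToStrokeWeight);
      line(posX, posY, posX + greyScaleToLineLength, posY + greyScaleToLineLength);
    }
  }
}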

So now it's time to tweak the program to find other manipulations. I started with the grayscale-to-stroke mapping (a sketch of how these cases slot into the program follows below).
Case 1: added SQUARE to strokeCap.
Case 2: added 128 transparency.
Case 3: subtracted greyScaleToLineLength from posX.
Case 4: added grayscale / 255.0 * TWO_PI. And 0 - greyScaleToLineLengthStrokeWeight.
Case 5: increased the distance of the nearest neighbor.
Case 6: changed ellipse into rect. Changed fill to noFill and noStroke into stroke.
Case 7: changed the rectangle size from 15 to 60.
Case 8: switched off all rectangles except one and increased its width and height.
Case 9: rotated by TWO_PI and changed the ellipse into a rectangle.
P_4_3_1_01_GDV_02
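
For anyone wondering where these cases live: each of the keys 1-9 sets a draw mode, and a switch inside the pixel loop picks the drawing style. A hedged skeleton, dropping into the loop of the sketch above with int drawMode = 1; added as a global; the exact parameters per case are my guesses, not the original values.

switch (drawMode) {
case 1:
  strokeCap(SQUARE);                       // case 1: square line caps
  stroke(0, 128);                          // case 2: 128 alpha
  strokeWeight(greyScaleToStrokeWeight);
  line(posX, posY, posX + 5, posY + 5);
  break;
case 4:
  pushMatrix();
  translate(posX, posY);
  rotate(greyscale / 255.0 * TWO_PI);      // case 4: gray value to rotation angle
  line(0, 0, 0 - greyScaleToLineLength, 0);
  popMatrix();
  break;
}

And the keys are wired up like this:

void keyReleased() {
  if (key >= '1' && key <= '9') drawMode = key - '0';   // keys 1-9 pick the mode
}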

I wondered what you could do to see more of the patterns. At this moment they are very small. It seems that you have to decrease the image size. Decreased it from 100 x 100 to 50 x 50, and that works better. I did some tests with changing the background to black, but that did not work very well because you then have to invert the image, which leaves you with a negative image in cases 1-5. So I left the background white for the time being.
P_4_3_1_01_GDV_03

Just played with the numbers in cases 1-5.
Case 6: I have removed the fill of the rectangles.
Case 7: added a rectangle in a rectangle in a rectangle in a rectangle.
Case 9: changed rectangles into ellipses.
P_4_3_1_01_GDV_04

I just mention the cases so that you can really understand what I did (and did not do).
Case 1: changed posX and posY to -25.
Case 2: removed the fills and added a secondary circle.
Case 3: added a secondary line which makes a kind of < symbol. Made the background black. But after a while I decided to make it white again. I still don’t like the negative image.
Case 4: removed one greyScaleToLineLengthStrokeWeight in the line function.
Case 5: added another line which makes it all a bit more like it has been sketched.
Case 6: divided getCurrentColor by 20, which shifts it into another RGB color range.
Case 7: removed the rectangles and replaced them with lines.
Case 8: replaced the rectangles with ellipses.
Case 9: replaced stroke color and rectangle with ellipse and noFill.
P_4_3_1_01_GDV_05

Renamed the global variables. Added comments to make it all a bit more understandable. Resized the display image. And used the svg files which were added by Generative Design. Maybe it's a good idea to change the image into a colored image. Made 9 cases like the ones used in the earlier version. And I replaced the standard svg's with my own much simpler svg's. Also increased the resolution by enlarging the original image. And OS X Yosemite seems not to support Adobe Illustrator anymore. I am on Java SE Runtime 7 and Illustrator needs Java SE Runtime 6. Downloaded and installed that, and Adobe Illustrator works again. By the way… it seems that you can install Java SE Runtime 6 and 7 next to each other; they won't hurt each other. Another thing is that from here on the Error Console of Safari gives me an [Error] ReferenceError: Can't find variable: sketchPath, setup (undefined, line 17), executeSketch (processing.js, line 8527), (anonymous function) (processing.js, line 8535). So I placed the programs only on the loftmatic website.
P_4_3_1_02_GDV_01
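
For what it's worth, the SVG variant boils down to swapping the line for a shape() call. A hedged sketch; module.svg stands in for one of my own simple SVGs, and the size range is a placeholder.

PImage img;
PShape module;

void setup() {
  size(600, 600);
  background(255);
  img = loadImage("portrait.png");   // hypothetical file name
  img.resize(50, 50);
  module = loadShape("module.svg");  // hypothetical file name
  shapeMode(CENTER);
  for (int gridY = 0; gridY < img.height; gridY++) {
    for (int gridX = 0; gridX < img.width; gridX++) {
      float greyscale = brightness(img.get(gridX, gridY));
      float moduleSize = map(greyscale, 0, 255, 12, 1);   // dark pixels get big modules
      shape(module, gridX * 12 + 6, gridY * 12 + 6, moduleSize, moduleSize);
    }
  }
}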

I am now working with two svg files. Tried to keep it simple. It might get more complex towards the end. Cases 6, 7, 8 and 9 need some attention. Just played with the fills and strokes to get those problems out of the way.
P_4_3_1_02_GDV_02

It is really impressive how many variations you can make by just changing a minor detail. Even enlarging tileHeight and/or tileWidth can give very different results. And remember: pushMatrix() cannot push more than 32 times. I received that message because I forgot popMatrix().
P_4_3_1_02_GDV_03
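
The rule of thumb: every pushMatrix() inside a loop needs its popMatrix() before the next iteration, because the matrix stack of the default renderer is only 32 levels deep. A self-contained illustration of the pairing:

int tileCount = 10;

void setup() {
  size(600, 600);
}

void draw() {
  background(255);
  float tile = width / float(tileCount);
  for (int gridY = 0; gridY < tileCount; gridY++) {
    for (int gridX = 0; gridX < tileCount; gridX++) {
      pushMatrix();
      translate(gridX * tile + tile / 2, gridY * tile + tile / 2);
      rotate(frameCount * 0.01);
      rect(-tile / 4, -tile / 4, tile / 2, tile / 2);
      popMatrix();   // remove this line and the 33rd push throws the stack error
    }
  }
}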

Used a honeycomb as the svg file. Just tweaked the variables a bit.
P_4_3_1_02_GDV_04

Let's see what happens when I load all svg files of the earlier cases.
P_4_3_1_02_GDV_05

Henk Lamers

GDV P.4.2.2 Time-based image collection

The Generative Design book gives the following summary of this assignment: ‘In this example, the inner structures of moving images are visualized. After extracting individual images from a video file, this program arranges the images in defined and regular time intervals in a grid. This grid depicts a compacted version of the entire video file and represents the rhythm of its cuts and frames. To fill the grid, individual still images are extracted at regular intervals, from the entire length of a video. Accordingly, a sixty-second video and a grid with twenty tiles results in three-second intervals.’ Here is the original program.
P_4_2_2_01
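
The mechanics, as I understand them, in a stripped-down sketch using Processing's video library. The file name is a placeholder, and jump() is asynchronous, so the grabbed frames may lag slightly behind the requested times.

import processing.video.*;

Movie movie;
int tileCountX = 8;
int tileCountY = 8;
int currentTile = 0;

void setup() {
  size(800, 800);
  movie = new Movie(this, "video.mov");   // hypothetical file name
  movie.play();
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  if (movie.duration() > 0 && currentTile < tileCountX * tileCountY) {
    // a 60-second video and 64 tiles gives an interval just under one second
    float interval = movie.duration() / (tileCountX * tileCountY);
    movie.jump(currentTile * interval);   // seek to the next sampling point
    int tileW = width / tileCountX;
    int tileH = height / tileCountY;
    image(movie, (currentTile % tileCountX) * tileW, (currentTile / tileCountX) * tileH, tileW, tileH);
    currentTile++;
  }
}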

I have put all images I made during this assignment on a Flickr page. I decided not to include the movie files I used. So instead I added the Processing files if you click a thumbnail on the loftmatic page, which you can find here. You might use your own movie file to get comparable results.

After doing some tweaking of the program I had to find some video footage. Although we have lots of video material from our documentaries and animation films, it is a lot of work to find really good footage. It takes a lot of time because you have to go through every DV tape. And while doing that it might turn out that there was nothing useful on the tape. So what is the easiest way to get some footage? The iPhone! I still have a stunning 3GS, and I have never used the video function, so this might be a good opportunity. But what do I film? Let's start with the most colorful news broadcaster in the world: CNN! It took me a few minutes to get some footage. Interestingly enough I filmed upside down. There is no image here; all the images are on the Flickr page. This link takes you to the Processing code, and that is also true for all following links on this page.
P_4_2_2_01_GDV_01

I'll do another one. Ah… that looks better. I especially like footage which has typography in it (but that is my abnormality). So what file size is that movie? It's 7.6 MB. It might be that I have to reduce that with Compressor or Final Cut Pro.
P_4_2_2_01_GDV_02

That works fine. The whites are a bit bleached out, but it's fast.
P_4_2_2_01_GDV_03

What about changing the format? Let's double TileCountX and TileCountY. Oh… that is even better!
P_4_2_2_01_GDV_04

Can I double that again? Sure I can. TileCountX = 48, TileCountY = 64. That is 3072 frames. And the final image of all those 3072 frames together is getting better every time. It is also getting more of a structure, and less recognisable. It is also taking more time to compose the final image. This one took about 5 minutes to complete.
P_4_2_2_01_GDV_05

Checked if it was possible to rotate the total image. Used footage from a 1960’s NASA film.
P_4_2_2_01_GDV_06

Lift-off of Apollo 11. First in 18 x 24. And once in 36 x 48. But what if I double that again? It seems to work, although there is not much to see anymore. The footage has just disappeared into a lot of tiny rectangles. It looks almost as if it has 3D-ish qualities. It took almost 14 minutes to render.
P_4_2_2_01_GDV_07

Apollo 10 footage. Above the moon and filmed from the Lunar Module.
P_4_2_2_01_GDV_08

Apollo 10 spacewalk.
P_4_2_2_01_GDV_09

Apollo 11 landing at sea. And I made a typo which might give me more ideas to make the next five programs.
P_4_2_2_01_GDV_10

Continuing with checking if my typo still works. I made a mistake while defining the variable TileCountX. Instead of 36 I filled it with 368. A lot of strange things appear in the display window. So I stopped the process. But now I make TileCountX 400 and let it run. This image took 14 minutes to complete.
P_4_2_2_01_GDV_11

TileCountX = 400 and TileCountY = 4. It does weird things to the original footage.
P_4_2_2_01_GDV_12

What happens if I make TileCountY just 1 pixel and TileCountX 800? Amazing!
P_4_2_2_01_GDV_13

Swap the content of TileCountX and TileCountY.
P_4_2_2_01_GDV_14

And now for a maximum stress test: TileCountX = width and TileCountY = height. Processing has to calculate 800 x 800 = 640,000 pixels. It started at 13:51. To complete 800 pixels in a row takes about 2 minutes. After one hour I cancelled it. It hardly made any progress, and the image it was going to deliver did not seem very interesting. Another attempt: this one took Processing 56 minutes to render.
P_4_2_2_01_GDV_15

Just put everything at a 45-degree angle. To fill an 800 x 800 screen Processing needs to render an image of 1150 x 1150, because the diagonal of an 800-pixel square is 800 x √2 ≈ 1131 pixels; otherwise it leaves black corners in some place or another.
P_4_2_2_01_GDV_16

Kept the angle of 45 degrees but set TileCountY to 2, so the screen is divided into two triangles.
P_4_2_2_01_GDV_17

Rotated the image by 90 degrees, which gives an interesting effect: Processing renders a kind of small blocks. I wondered whether the effect would be the same if I switched off the rotate function. Yes, it still renders little blocks. Maybe I have to increase TileCountX? That did not work out.
P_4_2_2_01_GDV_18

Interesting things are going on here. Organised chaos.
P_4_2_2_01_GDV_19

I did not change much here. Made TileCountX 2400, but it did not do much for the image quality. I just rotated the image by 270 degrees because I like it better when it's upside down. That was a mistake: Processing rendered it horizontally, but I think I like this even more.
P_4_2_2_01_GDV_20

Ah… there is an extra program in the Generative Design folder: the time-lapse camera. After each intervalTime a picture is saved to the sketch folder. That seems to give me some problems. Let's try to solve them step by step. When running the program Processing says: ‘There are no capture devices connected to this computer.’ We need a camera. The camera is now attached. Processing says ‘NullPointerException’ in the message area. And besides that it gives me a lot of junk in the console: ‘2014-10-07 11:46:12.660 java[28975:10003] Error loading /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio: dlopen(/Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio, 262): no suitable image found. Did find: /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio: no matching architecture in universal wrapper 2014-10-07 11:46:12.661 java[28975:10003] Cannot find function pointer NewPlugIn for factory C5A4CE5B-0BB8-11D8-9D75-0003939615B6 in CFBundle/CFPlugIn 0x7fef1c115350 (bundle, not loaded) name=DCR-PC110E,size=720×576,fps=30 name=DCR-PC110E,size=720×576,fps=15 name=DCR-PC110E,size=720×576,fps=1 name=DCR-PC110E,size=360×288,fps=30 name=DCR-PC110E,size=360×288,fps=15 name=DCR-PC110E,size=360×288,fps=1 name=DCR-PC110E,size=180×144,fps=30 name=DCR-PC110E,size=180×144,fps=15 name=DCR-PC110E,size=180×144,fps=1 name=DCR-PC110E,size=90×72,fps=30 name=DCR-PC110E,size=90×72,fps=15 name=DCR-PC110E,size=90×72,fps=1’… So I think I'm running into a technical problem which might cost me a lot of time. Knowing that, I stop this session and continue with the next program.

Henk Lamers

GDV P.4.2.1 Collage from image collection

This assignment is described in the Generative Design book as follows: ‘Your archive of photographs now becomes artistic material. This program assembles a collage from a folder of images. The cropping, cutting and sorting of the source images are especially important, since only picture fragments are recombined in the collage. All the pictures in a folder are read dynamically and assigned to one of several layers. This allows the semantic groups to be treated differently. The individual layers also have room for experimentation with rotation, position, and size when constructing the collage. Note the layer order; the first level is drawn first and is thus in the background.’ So far the description in the Generative Design book. Here are links to the original programs.
P_4_2_1_01
P_4_2_1_02

I started this assignment with a few photos I took in Denmark during our last holiday in April 2014. But after working with them for a while I thought it would also be interesting to try something else. A few years ago I went to the local bookstore and bought five international newspapers: ‘De Volkskrant, De Morgen, Le Monde, The Times and La Gazzetta dello Sport.’ Selected a hundred words or full headers and ripped them from the newspapers. I was especially interested in words from the Dutch, French, English and Italian languages that were still understandable in English. Scanned them. And made a hundred collages with them, which you can find here. I will use the same scans in this assignment. I also prepared a Flickr page with all the images which I have made during this assignment.

When I was running the program for the first time I did not notice how it got to the image files. After commenting the program I came across the word ‘sketchPath’. Checked the Processing reference but I got a ‘Not found’. I had not seen the function ‘sketchPath’ before. What does it do? Checked the forum and found a post from binarymillenium with an answer by JohnG (Re: Get directory of current sketch?, Dec 16th, 2008): sketchPath(“”) will give you the path to the sketch; dataPath(“”) will give you the path to the data directory. You can put things into the method to get a file, e.g. sketchPath(“foo.txt”) will give you the path to a file called “foo.txt” in the sketch directory. Ok. Clear. So it does the same thing as selectInput does, only with selectInput you have to find the image files yourself.
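
A quick demonstration of the two calls, plus listing a folder with plain Java; the footage subfolder is hypothetical.

import java.io.File;

void setup() {
  println(sketchPath(""));                      // absolute path of the sketch folder
  println(dataPath(""));                        // absolute path of the data folder
  File dir = new File(sketchPath("footage"));   // hypothetical subfolder
  String[] names = dir.list();
  if (names != null) {
    for (String name : names) {
      println(name);                            // one file name per line
    }
  }
}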

Now this is not going to work in JavaScript. If you run this program as a JavaScript file in Safari, then Safari's Error Console says: [Error] Processing.js: Unable to execute pjs sketch: ReferenceError: Can't find variable: sketchPath. Ok! Then I exchange sketchPath with selectFolder() or selectInput(). But that is not going to work either, because these functions are not yet implemented in JavaScript. I could add my images to the data folder, but that means I would have to upload at least 3.8 MB per program x the 20 programs I have adjusted = 76 MB of images. That seems not a good idea. For the time being I just add the Processing code to the loftmatic website and leave the finished images on Flickr.

So I changed a few global variable names. I think it's hard to read the variables ‘layer1Items, layer2Items and layer3Items’, so I changed those names to Layer1_Items, Layer2_Items and Layer3_Items. This is a rather complex program (for me). A lot of things are still a bit mysterious. But I will start with changing the images to images of my own. I think that the first results are a bit too chaotic, although a collage should be a bit chaotic. Let's see what I can do to prevent that chaotic look. Made a copy of the footage folder, changed the sketchPath link and removed files until I had only three of them left. Does the program still work? It does. And the image looks less cluttered. I even wonder if I need those three layers, but I leave them in for the time being. I will only use the scans from the newspapers. I have put (almost) all settings on default, and no rotation. I used only 3 scanned newspaper headers. Clicking on the next (and following) links will show Processing code only.
P_4_2_1_01_GDV_01

In the beginning I thought that those layers would not make any sense. But if you work with it for a while, then you know which layer has a good composition. So you keep that layer and change the other ones until they are fine too.
P_4_2_1_01_GDV_02

Let's see what rotation brings. Works fine, although I am not sure how the rotation works. Used radians to put layer 1 vertical. Also decreased the scaling factor a bit.
P_4_2_1_01_GDV_03

Rotated layer 3 270 degrees.
P_4_2_1_01_GDV_04

Now using 15 images. But I have more of them ready. I will add 3 per version. So I think in the end we will use 60 newspaper headers (if I make 20 variations; not sure about that yet).
P_4_2_1_01_GDV_05

I think I have to reduce the image size of my footage. Some images are more than 800 pixels in width, so that folder's file size will grow by megabytes during the next sessions. Ah… found out that the resolution is 150 pixels per inch. I can reduce that to 72 pixels per inch and they will all get smaller, which also reduces the file size. Let's see how this new set works. The folder size shrinks by almost a quarter. Ok. All images are smaller, so I can set the largest randomized size factor back to 1.0. And that looks good.
P_4_2_1_01_GDV_06

Maybe it's a good idea to add some primary colors to the compositions. Added a red, blue and yellow piece of paper. Ah… because my footage folder has grown, I can never get back the compositions which I had in the beginning, when I had only 3 images. If I load that file again it will use all images instead of the 3 which I used in the first program. Anyway… I just continue. It's part of the learning process.
P_4_2_1_01_GDV_07

I wondered why my last newspaper headers were not in the final image. Forgot to update the image count; it was still at 8.
P_4_2_1_01_GDV_08

What happens if I make it 20 for instance? The composition will just have more items in it. But also more of the same items.
P_4_2_1_01_GDV_09

I have removed the randomized loading of images. I like a certain amount of images: just enough to make the composition fine, no more and no less. It's now at 20. But what about 100? That isn't any good, because only the top layer will be visible. So you have to reduce the scale. If you keep it between 0.2 and 0.4 it might be good. Just try to make it as small as possible: between 0.1 and 0.2. That isn't too bad, but it is lacking a bit of contrast. What about 200 items per layer? Great. 400 maybe? Even that is not bad. Let's push the scale up to 0.1-0.3. And now something silly: 0.01-0.09. Ok, it's going to be a pattern, and you cannot read the headlines anymore. I made a mistake: instead of 100 items I typed 11100. Even then it works. It doesn't look good, but it works. Brought the amount back to 50. Layer1 is scaled between 0.1 and 0.9, Layer2 between 0.1 and 0.6, and Layer3 between 0.1 and 0.3.
P_4_2_1_01_GDV_10

Just added the code and let the program run. Why do I not like this? I think because it is way too chaotic. What can I do to make it less chaotic?
P_4_2_1_02_GDV_01

Changed the imageMode to CORNER. That gives me a kind of spiral-ish composition. Changed the imageMode to CORNERS. It does not make very much difference (I think). I switch back to imageMode CENTER.
P_4_2_1_02_GDV_02
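
For reference, the three modes differ only in how the coordinates of image() are interpreted. A small sketch, with a placeholder file name:

PImage img;

void setup() {
  size(400, 400);
  background(255);
  img = loadImage("header.png");   // hypothetical scan
  imageMode(CORNER);               // x, y = top-left corner (the default)
  image(img, 20, 20, 160, 100);
  imageMode(CORNERS);              // x1, y1 and x2, y2 = two opposite corners
  image(img, 20, 140, 180, 240);
  imageMode(CENTER);               // x, y = the image's center, handy when rotating
  image(img, 200, 320, 160, 100);
}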

I start with putting every variable to a basic setting. Hmm… that is not too bad. But when I hit the 1, 2, 3 keys it’s getting chaotic again. Have to change that. But it looks promising.
P_4_2_1_02_GDV_03

Ok. I fixed the keys 1, 2 and 3. They all have the same settings now. Because the circle looks fine I would like the images a little larger so you can read the headlines better.
P_4_2_1_02_GDV_04

That seems better. Made the circle a bit bigger. So I have more room to enlarge the headlines. I have adjusted the sizes. I think these settings are better than the ones I had in the beginning.
P_4_2_1_02_GDV_05

What happens if I increase the amount of headlines from 100 to 500 per layer? It fills the circle. It is also too large: the image is hitting the boundaries of the display window. That is not too bad, but 500 images per layer is too much. I try 300 for layer1, 200 for layer2 and 100 for layer3.
P_4_2_1_02_GDV_06

Until now I did not visit ‘den zufälligen Rotationswinkel’ (random rotation angle), but by using those variables you can easily make nice images which are slightly chaotic but not too much.
P_4_2_1_02_GDV_07

Increased the rotation from 45 to 90.
P_4_2_1_02_GDV_08

From 90 to 100. Higher numbers will make circles only (if you do not use radians).
P_4_2_1_02_GDV_09

But if you do use radians they will make a difference. Here I used radians(90).
P_4_2_1_02_GDV_10
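
The difference is purely one of units: rotate() expects radians, so a plain 90 means 90 radians, roughly fourteen full turns, which is why large plain numbers all collapse into circles. With radians() the number behaves as degrees. A tiny sketch of the idea:

void setup() {
  size(400, 400);
  background(255);
  translate(width / 2, height / 2);
  float maxAngle = 90;   // meant as degrees
  // passed straight to rotate(), 90 would mean 90 radians (about 14 full turns);
  // radians(90) converts it to a quarter turn
  rotate(random(-radians(maxAngle), radians(maxAngle)));
  rect(0, -5, 150, 10);
}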

Henk Lamers

GDV P.4.1.2 Feedback of image cutouts

Let’s start this exercise with the description of the Generative Design book: ‘A familiar example of feedback: a video camera is directed at a television screen that displays the image taken by the camera. After a short time, the television screen depicts an ever-recurring and distorted image. When this phenomenon is simulated, an image’s level of complexity is increased. This repeated overlaying leads to a fragmentary composition. First, the image is loaded and shown in the display. A section of the image is copied to a new randomly selected position with each iteration step. The resulting image now serves as the basis for the next step – the principle of each feedback.’ Here you can find the original two programs.
P_4_1_2_01
P_4_1_2_02
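
The feedback principle in miniature; the file name, slice sizes and shift range below are my own placeholders, not the book's values.

PImage img;

void setup() {
  size(800, 800);
  img = loadImage("image.jpg");   // hypothetical file name
  image(img, 0, 0, width, height);
}

void draw() {
  int sliceWidth = (int) random(4, 40);
  int x_copyFrom = (int) random(0, width - sliceWidth);
  int x_copyTo = constrain(x_copyFrom + (int) random(-10, 10), 0, width - sliceWidth);
  // copy() reads from the current display, so every step feeds on the previous result
  copy(x_copyFrom, 0, sliceWidth, height, x_copyTo, 0, sliceWidth, height);
}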

I have prepared a Flickr summary page with all the images I've made during this exercise. And all the programs can be found at the loftmatic page. A general remark about JavaScript: the images produced by JavaScript are not of the same quality as the ones Processing produces. It seems that JavaScript throws a kind of blur over the edited image, so the end result will be totally different (and much better) when you run the program in Processing. I have no idea why. And a few programs don't work at all. I will mention it in the following paragraphs when that happens, and note which error messages I received.

Because this is a very short program it did not take much time to adjust things. Just the usual stuff: changed a few variable names to make things more understandable (for me). What does round() do? Ah… it calculates the integer closest to its parameter. I forgot that. Added one of my own photos. Why is there a white top and bottom? I think so you can better see the noise that is left by the manipulated pixel slices. I changed the background to black. That works fine too. I only have to remove my watermark (or export the photo without a watermark from Aperture), otherwise it will leave white pixels in the manipulated result.
GDV_P_4_1_2_01_GDV_01

My first question is: ‘does this feedback of image cutouts also work horizontally?’ In my naïveté I just swapped x_copyFrom with y_copyFrom and let the program run. That does not make any sense at all; there is only some movement at the left side. Swapped x_copyTo with y_copyTo and let it run again. No success. Swapped sliceWidth with sliceHeight. Some weird stuff happens at the top of the display window. Changed a few other parameters and now it works. But it would be better to have a photo which uses the full height but not the full width. Also changed the copyTo block to -1 and 1, which means that the image distorts very slowly (I think).
GDV_P_4_1_2_01_GDV_02

Can I put this kind of image processing at an angle? Added a translate and rotate to the program and I get an ‘InternalError: transformed copyArea not implemented yet’. Might be the Java2D rendering system; it does not like a copy after a transform? But if you restrict the y_copyFrom random range to (0, height / 2), only the top of the image will be scrambled.
GDV_P_4_1_2_01_GDV_03

Pretty obviously, when you restrict the y_copyFrom random range to (height / 2, height), only the bottom of the image will be scrambled.
GDV_P_4_1_2_01_GDV_04

Back to the original code. Forgot to change something. Let's see it as a happy accident. But not for JavaScript: I get no error message in my Safari Error Console, but JavaScript produces a total mess which does not represent the images you get if you run the program in Processing.
GDV_P_4_1_2_01_GDV_05

Set sliceWidth = (int) random (1, 2); and let it simmer for about 30 minutes. Time for an espresso.
GDV_P_4_1_2_01_GDV_06

Tried to keep the vertical slices from changing their place too much. Perfectly translated by JavaScript, because it doesn't do anything at all. What does my Error Console say? [Error] IndexSizeError: DOM Exception 1: Index or size was negative, or greater than the allowed value. (anonymous function) (processing.js, line 4076). Fine!
GDV_P_4_1_2_01_GDV_07

Found out that you can place the original photo in the background. So it shifts the slices but does not replace them with black, and the image stays partially unaffected. It remains a bit fuzzy what exactly the program does when you mash up some parameters. Anyway… JavaScript doesn't do anything. Same error message as with the previous program.
GDV_P_4_1_2_01_GDV_08

Maybe I did something illegal here: I have put a larger random parameter in front of the smaller parameter. This gives an interesting effect though. Some repetition is going on, and the image moves slowly to the right border, repeating itself around 50 pixels from the left. This takes time. It is a bit similar to the wind filter in Photoshop, only this takes more than half an hour to make an image. The program that generated the last image ran for 210 minutes. But in JavaScript it does not work at all: I used negative values for the image size, and JavaScript does not like that.
GDV_P_4_1_2_01_GDV_09

Removed the randomness of x_copyTo. What about positioning the image outside the display and copying it into the display? That does work. But why does it copy these image strips? And where does the offset come from? Ah… I see. It copies the original strip at the left side; it doesn't copy the image which is outside the display. When it has copied, it skips 100 pixels to the right and copies again. And once it has pasted, it copies the pasted line again. So I selected the right part of the image, 100 pixels wide, let it copy from there, and let it paste at random positions between 200 and 800 pixels. Yes… negative values again, so no JavaScript image is produced. It even led to a crashing Safari browser.
GDV_P_4_1_2_01_GDV_10

This is an extra program which is not to be found in the Generative Design book. The program makes all x- and y-positions random. Changed the size of the image selection to be copied to 10 x 10 pixels. And it also seems to work perfectly in JavaScript.
GDV_P_4_1_2_02_GDV_01

I know that 10 x 10 is small. But what if I make it 1 x 1? It looks like nothing is happening. Oh, I see: two, three pixels have been moved in the top black area. This is going to take a lot of time. Changed sliceHeight to height. That is better. Let it run for 45 minutes.
GDV_P_4_1_2_02_GDV_02

And this of course also works for horizontal copying. But I think it's then better to take a portrait-oriented original image. I decreased the x_ and y_copyTo random factor to (-2, 2), so the distortion is very small and it is going to take a lot of time.
GDV_P_4_1_2_02_GDV_03

Undid all setting changes (as in: I undid that yesterday) and added a sliceWidth randomized between -400 and 400. I have the impression that this image (with a lot of black in it) will not work as well as the images with more light and colors in it.
GDV_P_4_1_2_02_GDV_04

Another image with almost the same settings as the previous example. And this works better. And also slower: the final image took from 13:20 until 16:05. That was when the first pixel reached the right side of the Processing image display window.
GDV_P_4_1_2_02_GDV_05

Changed sliceWidth and sliceHeight to 100, and x_ and y_copyTo to random (-2, 2). This seems to give very free and organic color clouds. I must admit I have also chosen a cloudy source image.
GDV_P_4_1_2_02_GDV_06

What about feeding the manipulated image to the same program again? So I first distort an image, save it as a jpg, and import that again for another round of image processing. I think it does not make much of a change, except when you change a few parameters. JavaScript blurs all the fine details. I think it uses the gray from the background to mess everything up. Terrible!
GDV_P_4_1_2_02_GDV_07

I must say that these images have painterly qualities. Just played with the parameters. And left the black sides out of the interaction / animation.
GDV_P_4_1_2_02_GDV_08

Added only vertical manipulation lines.
GDV_P_4_1_2_02_GDV_09

Did the same here but in a horizontal way. So the lines don't exchange places vertically, only horizontally, and they shift in and out of each other. But JavaScript makes a mess of it.
GDV_P_4_1_2_02_GDV_10

Henk Lamers

GDV P.4.1.1 Image cutouts in a grid

As always let’s start with the original description from the Generative Design book: ‘The principle illustrated below is almost the same as the one in the previous example, and yet a whole new world of images emerges. An image’s details and fine structures become pattern generators when only a portion of it is selected and configured into tiles. The results are even more interesting if sections are randomly selected. Using the mouse, a section of the image is selected in the display window. After releasing the mouse button, several copies of this section are stored in an array and organized in a grid. The program offers two variants. In variant one, all copies are made from the exact same section. In variant two, the section is shifted slightly at random each time.’ And here is the original program I started with.
P_4_1_1_01
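
The skeleton of variant one, as I read it: grab the section once on mouse release, then stamp it into the grid. File name and grid size are placeholders.

PImage img, crop;
int tileCount = 8;
boolean tiled = false;

void setup() {
  size(800, 800);
  img = loadImage("strijp.jpg");   // hypothetical file name
  img.resize(width, height);
}

void draw() {
  if (!tiled) {
    image(img, 0, 0);   // show the source until a section is selected
  } else {
    int tileSize = width / tileCount;
    for (int gridY = 0; gridY < tileCount; gridY++) {
      for (int gridX = 0; gridX < tileCount; gridX++) {
        image(crop, gridX * tileSize, gridY * tileSize);
      }
    }
  }
}

void mouseReleased() {
  int tileSize = width / tileCount;
  crop = img.get(mouseX, mouseY, tileSize, tileSize);   // variant one: one fixed section
  tiled = true;
}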

So far so good. I have made a Flickr summary page on which you can find all the images I have made during this exercise. And all the programs (changed a lot, a little, or not at all) you can find at my loftmatic page.

Before getting into this program I let it run and check if it works as described in the Generative Design book. Most of the time I do this before I start deconstructing. And this is a very interesting program. It gives you so many compositional possibilities to choose from. However, there is one thing which is a bit unclear: it is not always certain when you have selected an area. You need that feedback, because otherwise the program cannot build an image from the area. In (roughly estimated) 50% of the cases it does build an image and leaves this image on the screen. The other 50% it shows the composition for a second or less and flips back to the original photo. I checked that again. I have the idea it has something to do with my Wacom tablet and its mouse. With my pen it does not work at all, but that is because my pen cannot simulate a left mouse button. Is that true? I check my system preferences. Huh? ‘Tablet Version Mismatch. Please reinstall the tablet software.’ Ok… I download and install the latest Wacom Intuos 3 driver. But does my pen have the functionality to simulate a left mouse button? There is only a right mouse button available, so I have to change that first in the program. But does my Wacom tablet pen/mouse work better now? No! With a pen it is still not workable. Check the mouse. Ok… that works fine. I was working with an outdated tablet driver.

The next thing is to change the global variable names and the size of the display window to 800 x 800. Eh… why are mouseMoved and keyReleased displayed in a bold typeface in the Processing Development Environment? No idea. Added new sizes to the keys: key 1 = 8 x 8, key 2 = 10 x 10, key 3 = 16 x 16, key 4 = 20 x 20 and key 5 = 25 x 25. And I am initializing TileCountX and TileCountY to 8, which makes my selection rectangle the same size as key 1. Now I have to find a suitable image. I will use parts of images which I made at the beginning of 2014 at Strijp S. The full images are here. Strijp S is a former industrial plant of Philips which has now been converted into a creative/living area. Something else: when I converted the Processing files to JavaScript, it seemed that JavaScript's interpretation of Processing's left mouse button is a right mouse button (if you have a right and a left mouse button). That is really odd.
P_4_1_1_01_GDV_01

I changed Crop_X and Crop_Y. Instead of dividing, I multiplied TileWidth and TileHeight by 1.5.
P_4_1_1_01_GDV_02

What can I do besides changing the original image? Everything I change has almost no impact on the final image. For instance… I changed the constrain function in the cropTiles code block. Does not make a difference. Ok… let's start from the beginning of the cropTiles function. TileWidth = width * TileCountY gives me an OutOfMemoryError: You may need to increase the memory setting in Preferences. Fine. Another change gives me a NullPointerException, a spinning beach ball and an ArrayIndexOutOfBoundsException: 64. So better not to mess too much with this code. But I can change the size of the tiles by dividing TileWidth and TileHeight by a certain amount. Hmmm… it's not much of a change; a part of the original background image is still visible. What about switching on strokeWeight? Well… that works on my selection rectangle only. Maybe I have to take a look at the filter or tint possibilities.
P_4_1_1_01_GDV_03

I was looking for a way to paint something on top of the tiles. Tried to do that in the cropTiles function, but that did not work. In keyReleased it did not work either. But what I can do is change the size of TileCountX and -Y in the keyReleased block. That gives better results. Made a few variations with that, although the right-hand bottom corner is not filled. Ok… I have to make TileCountY the same size as TileCountX, which makes the program behave the same as in its original state. I defined key 6 for TileCountX and -Y = 40, and key 7 for TileCountX and -Y = 50.
P_4_1_1_01_GDV_04

Did not change very much here except the image.
P_4_1_1_01_GDV_05

Because I found that the crops were too close to each other, I increased Crop_X and Crop_Y by multiplying them by 20. That gives a totally different feel. It is also more chaotic though.
P_4_1_1_01_GDV_06

Introduced CropOffset, which searches further into the image as its value gets higher.
Now it would be nice if you could use the up and down arrows to make CropOffset smaller and larger. Defined a keyPressed function which shifts the selected area while displaying the status of CropOffset in the console.
P_4_1_1_01_GDV_07
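
A hedged version of that handler; CropOffset and the step of 10 are simply how I set it up, not the book's code.

int CropOffset = 0;   // global; shifts where the crop is taken from

void keyPressed() {
  if (keyCode == UP) CropOffset += 10;
  if (keyCode == DOWN) CropOffset -= 10;
  println("CropOffset: " + CropOffset);   // show the status in the console
}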

Just replaced the PImage and tried to make some variations.
P_4_1_1_01_GDV_08

More variations.
P_4_1_1_01_GDV_09

And even more variations of a different PImage.
P_4_1_1_01_GDV_10

It seems that you can use the images at an angle. If you add a translate, rotate and scale function at the beginning of the draw block, it just keeps on working, but at an angle. No error messages. The only thing I have to do is make the original PImage larger so I can get rid of the scaling, because that makes the image less sharp. So I start with a 1600 x 1600 PImage displayed at an angle in an 800 x 800 display window. Will that make sense? Partly… it is difficult to select and it is difficult to get the screen filled. Moved translate and rotate to the reassemble-image part. That works better.
P_4_1_1_01_GDV_11

Made some variations with a different image.
P_4_1_1_01_GDV_12

And some more variations.
P_4_1_1_01_GDV_13

And a few more with another image.
P_4_1_1_01_GDV_14

So what happens if I choose a random angle? It's going to animate. Trashed the random. I have chosen another angle, but I am not fully convinced about the image quality. I need to enlarge everything with scale 1.5 because otherwise the corners of the display window will not be covered, and enlarging the image 1.5 times reduces my image quality. Trashed the angles. In fact, if you would like angles in your image, you just search for angles in the image. Introduced rectangles. This might seem very simple, but I could not find out why it did not work until this moment.
P_4_1_1_01_GDV_15

Made a few variations with the rectangles.
P_4_1_1_01_GDV_16

More variations.
P_4_1_1_01_GDV_17

And some more.
P_4_1_1_01_GDV_18

Tried to use the color palette of the image for the color of the black lines: MyImage.get (gridX, gridY). That did not give me good results, so I kept using the black rectangles. Made the strokeWeight less thick.
P_4_1_1_01_GDV_19

Some more variations.
P_4_1_1_01_GDV_20

This was a very useful exercise. I could have gone deeper into the program but that would take too much time. And the results for now are pretty satisfying.

Henk Lamers

GDV P.4.0 Hello, image

After the exercises with the fonts it's now time to concentrate on images. Do I have images? I have a photo archive which started to explode in size in 2009. I always took pictures, but I was not really aware of what I was doing. In 2009 I started to take pictures seriously. Bought a professional camera and did a workshop with Stephen Johnson in Death Valley and one with FocusOnNature in Iceland. It is purely coincidental that I started to work with Processing in that same year too.

Back to the exercise. This is the description the Generative Design book gives: ‘A digital image is a mosaic of small color tiles. Dynamic access to these tiny elements allows for the generation of new compositions. It is possible to create your own collection of image tools with the following programs. An image is loaded and displayed in a grid defined by the mouse. Each tile in the grid is filled with a scaled copy of the source image.’ Here is the original program. As usual I prepared a Flickr page with all images I made during this exercise. And on the loftmatic page you can find the programs. Almost all of them work with JavaScript this time. Except for P_4_0_01_GDV_06.
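
The whole exercise fits in a few lines: the mouse position sets the grid, and every tile receives a scaled copy of the source image. A minimal sketch with a placeholder file name; the max() guard is my own addition against the zero-tile freeze discussed below.

PImage img;

void setup() {
  size(800, 800);
  img = loadImage("blueplanet.jpg");   // hypothetical file name
}

void draw() {
  int tileCountX = max(1, mouseX / 20);   // guard against a zero-sized grid
  int tileCountY = max(1, mouseY / 20);
  float stepX = width / float(tileCountX);
  float stepY = height / float(tileCountY);
  for (int gridY = 0; gridY < tileCountY; gridY++) {
    for (int gridX = 0; gridX < tileCountX; gridX++) {
      image(img, gridX * stepX, gridY * stepY, stepX, stepY);
    }
  }
}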

I have imported one of my own images, which I took at Denmark's aquarium ‘The Blue Planet’. It seemed a good idea to keep one picture for all the exercises; that way you can compare the differences between the patterns better. The first thing that struck me was that the image gets so small that it really doesn't matter what kind of image you use. So the first thing I would like to try is to decrease the scale ratio. Oh… does it crash? No… it seems to freeze. Well… that is almost the same result. If you go outside the left or the top of the display window, it freezes. That is not nice. Anyway… I divided tileCountX and tileCountY by 10. That gives me, in the smallest version, something I can still recognize as a small version of the original photo. But it still freezes when I move my cursor outside the left or top of the display window. In JavaScript, however, it does not freeze, even if you go outside the boundaries of the display window. That is reassuring.
P_4_0_01_GDV_01

What can I do to prevent it from freezing? An if statement maybe? That works partly. I'll just continue, knowing it works as long as I do not press the mouse. What can I do to make this program generate other images? Can I draw a rectangle around every photo? I just added strokeWeight (gridX / 40). That is an interesting variation. Replaced it with (gridY / 40) because I like the vertical effect better than the horizontal one. There is a difference between the Processing version and the JavaScript one: in JavaScript the top row has a different strokeWeight.
P_4_0_01_GDV_02

When you use strokeWeight (stepX / 16)… you get a black square that grows and shrinks in thickness. You also risk an IllegalArgumentException: negative width message.
P_4_0_01_GDV_03

Put ellipses on the corner points. Used get() to pick up the color at the mouse location to fill the ellipses. That did not work out well, so I removed the ellipses. Repeated the image once more but divided stepX and stepY by 2. And again, divided by 4.
P_4_0_01_GDV_04

Introduced a rotate and a translate. Just by coincidence it gave me unexpected things. Commented out the large images. Even more interesting. I just don't know why the first image is so large. Ok, now I know. Cleared the background in setup.
P_4_0_01_GDV_05

I need to change this program. I think all the variations I make are staying too close to the original program. I am going to try to merge a range of programs I did two years ago with this program, or maybe just a part of it. The main difference is that this program is not interactive; it's just animation. Let's see how that works out. Well… that worked out fine, except for the fact that when I go to the left and top sides of the display window I get an ArrayIndexOutOfBoundsException: -100. So I copied the if statement from the previous programs: if (gridX < 0). Now it keeps working, but don't click the mouse. Knowing that, this is good enough for me. Now the image is not scaling, but the objects on top of the image are. It's a bit different, but I think this gives me more possibilities to make variations. By the way… JavaScript does not like this code. That is a shame, because this was one of the best images.
P_4_0_01_GDV_06

I have thrown out all interactivity. The image is now an animation, but it delivers more interesting imagery. I added a small stroke of 2 pixels to every arc, with a transparency of 64, and that makes it stand out nicely against the background. Having done this, it gives me endless possibilities to generate new variations. Reduced the frameRate to 1, because then you can see the individual changes better. Please be patient: the images come up after one or two seconds. And there is no interactivity, which goes for all the animations after this one as well. They are not interactive.
P_4_0_01_GDV_07

Removed all stroke-related imagery and used fill only. Weirdly enough, the image keeps on filling until everything has been filled, and then it stops. While using stroke it keeps on making different images; it never stops. No! Oh… sorry, that is a mistake: I did not refresh the screen. When you use background it refreshes and keeps on changing. Replaced the background with a black dissolve screen.
P_4_0_01_GDV_08

Removed the ObjectSize variable. It did not give good enough results when I loaded the files that I made in 2012. Adjusting everything manually works better, although it is more work. But the result is better, and that is what counts for me. It loads after 3 or 4 seconds.
P_4_0_01_GDV_09

The rest of the sketches are all variations of the same program, but using different shapes.
P_4_0_01_GDV_10

Loads after 4 seconds.
P_4_0_01_GDV_11

Loads after 2 seconds but the JavaScript image looks much more interesting than my original design in Processing :)
P_4_0_01_GDV_12

Loads after 5 seconds.
P_4_0_01_GDV_13

Loads after 5 seconds. And it is different from my original design in Processing. But this time the JavaScript is not better.
P_4_0_01_GDV_14

Loads after 2 seconds.
P_4_0_01_GDV_15

Runs pretty fast. Maybe it has something to do with the frameRate; I made it higher in this version. I checked the Processing.js page, but there is not much more information about frameRate. It should work properly.
P_4_0_01_GDV_16

Starts with almost no delay.
P_4_0_01_GDV_17

Also starts with almost no delay.
P_4_0_01_GDV_18

Works fine.
P_4_0_01_GDV_19

And the last one is also fine.
P_4_0_01_GDV_20

Henk Lamers

GDV P.3.2.3 Font outline from agents

This is the last exercise about fonts. The Generative Design book describes this exercise as: ‘How long is a letter recognizable as such? In this example, the outlines of a letter serve as a source shape. Each individual nodal point moves like a dumb agent. Over time, the letter becomes illegible and is transformed into something new. Points are again generated from a font outline. Each point becomes an independent dumb agent but remains connected to its neighbor.’ So far their summary. And here you can find the original code.
P_3_2_3_01
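
A hedged skeleton of the agent idea, assuming FreeSans.ttf in the data folder and the Geomerative library installed; segment length, step size and colors are placeholders.

import geomerative.*;

RPoint[] pnts;
float danceFactor = 1;

void setup() {
  size(800, 800);
  background(255);
  RG.init(this);
  RFont font = new RFont("FreeSans.ttf", 600);     // font file in the data folder
  RCommand.setSegmentLength(10);                   // distance between outline points
  RCommand.setSegmentator(RCommand.UNIFORMLENGTH);
  pnts = font.toGroup("a").getPoints();
}

void draw() {
  translate(width / 2 - 150, height / 2 + 200);    // rough centering, tweak per font
  noFill();
  stroke(0, 10);
  beginShape();
  for (int i = 0; i < pnts.length; i++) {
    // each nodal point is a 'dumb agent': a small random step every frame
    pnts[i].x += random(-1, 1) * danceFactor;
    pnts[i].y += random(-1, 1) * danceFactor;
    vertex(pnts[i].x, pnts[i].y);
  }
  endShape(CLOSE);
}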

I have posted all images I made during the exercise on a summary page which you can find here.
Flickr

There is no JavaScript involved, because JavaScript does not support Processing's Geomerative library. All the altered programs can be found on this page.
Loftmatic

For the time being I do not want to type any characters other than the letter ‘a’. I took only one letter so I can see the visual changes better. I translated the ‘a’ to the middle of the display and let it run for just a while. I increased the StepSize again and switched off the frameRate, and then the program runs way too fast. Besides that, the program gets very chaotic after a while. So the first things I changed were the StepSize and the DanceFactor. Enlarged the ellipses on the character. And I would like to introduce some color, so I introduced the human-friendly HSB colorMode.
P_3_2_3_01_GDV_01

Just a thought: why choose a character for this if it finally ends up in a total mess? You could just as easily take some arbitrary points and throw them at random onto the screen. Let's see if I can control that chaos a bit more. Switched on the frameRate again. Made the font size 800. Randomized the color settings between 300 and 360, and between 50 and 60 for the lines. Alpha set to 2, but that seems too minimal. Increased the alpha setting to 4.
P_3_2_3_01_GDV_02

Increased the DanceFactor to 10. Commented out the ellipses. Made the strokeWeight (0.001). I think this distortion is still going way too fast, so I decreased the StepSize to 0.1 (which is in fact too little). 0.5 seems to be a good one. Just for fun: RCommand.setSegmentLength (1), which gives a completely different picture if you let it run for five minutes or so. Interesting that the original shape of the character remains pretty much OK. It seems that StepSize is the most important variable. When you raise its value to 100 you see why.
P_3_2_3_01_GDV_03

Why does it display the same character twice, but in a different color? Commented out the first beginShape/endShape block. Ok, now it only displays the blue character. So could I add another block in a different color? Yes! That works too. Got a blue/magenta version of the ‘a’. Uncommented the first block, and I have three different colors of the same ‘a’.
P_3_2_3_01_GDV_04

Increased the StepSize to 6 which gives me a kind of fluffy character.
P_3_2_3_01_GDV_05

Decreased the point size and typed some characters. That gives me a very chaotic image. I uncommented the second two beginShape/endShape blocks. Ah… I see. It's growing way too fast. Let me turn that StepSize down. Even 1 is too much; I make it 0.1. That is definitely slower, so I make it 0.5. Added the two beginShape/endShape blocks again.
P_3_2_3_01_GDV_06

I give it a bit more distortion and I swap the colors. I think blue should sit at the outline and not in the middle of the character. Experimented a little with setSegmentAngle.
P_3_2_3_01_GDV_07

Let’s try another font. Times. That didn’t work out. Also the spacing between the characters is wrong. Changed it to Arial Bold. Letter spacing is still wrong.
P_3_2_3_01_GDV_08

Made a few proposals with very ‘airy’ or ‘smoky’ characters. I wondered: if I define a key with a color, can I then have separate colors per character? No, that does not work, because when you hit the 1 key, the number 1 is displayed. Skipped that option. I increased the StepSize to 4 and setSegmentLength to 200. Every character you type now isn't readable anymore.
P_3_2_3_01_GDV_09

Ok, this would be a nice test to finish this exercise, as it is the starting point of the exercise: ‘How long is a letter recognizable?’ I just type 12 characters and lower the setSegmentLength every run by 20, starting with 180 and strokeWeight (10). When I reached 120 it seemed a silly idea though. Maybe it's better to work the other way around: start with a readable character and then add ever more distortion. Did some other experiments by commenting out all control points and leaving only the ellipses. I have to make one remark: the letter spacing is very bad. It does not work well, so I had to correct it in Photoshop. To finish everything off I made a few versions with rectangles, which reminded me that I could also make my own shapes. Which, in turn, reminded me that the possibilities are endless, but I have to continue :)
P_3_2_3_01_GDV_10

Henk Lamers