GDV M.1.1 Randomness and starting conditions

I have arrived at the ‘Complex Methods’ part of the Generative Design book. So now it’s getting serious (I think). Let’s see what the first assignment is, as it is described in the Generative Design book. ‘In Processing the random generator is the function random (). The term ‘random’ usually signifies an unpredictable event. A random generator created with the help of a computer, however, will never be able to create truly random events or actions because the seemingly random sequence of numbers has always been generated by an algorithm. This is called ‘determinism’ in computer science, meaning that the sequence of values is determined by the initial condition. This initial condition can be set using the randomSeed () function. The command randomSeed (42) therefore generates an identical sequence of values when working with the same random generator, which here are 0.72, 0.05, 0.68 etc. In many programming languages the initial condition is set unnoticed in the background, thereby creating the illusion that real random values are created each time the program is started. The y-coordinate of each point is generated by the random () function and all points are connected with a line.’ And here is the original program.
M_1_1_01

I also made an album on my Flickr page of the images that were generated during this assignment. At loftmatic.com you can use the programs and check the code if you like.

Maybe it is interesting to start with a question. Why (while you have a random function) would you want to produce the same random numbers in the same order each time? Well… to be honest I really don’t know. But it seems that the random numbers we get from the random () function are not really random. They are the result of a mathematical function that simulates randomness, and they would yield a pattern over time. So it’s pseudo-randomness. Using randomSeed gives you the same random values every time you run the program: set the seed parameter to a constant to get the same pseudo-random numbers each time the program is run. But again, why would you want that? So I commented out all randomSeed related program lines. Then the program keeps on running. So is randomSeed used to stop and run the program? No, it is just used to get the same pseudo-random numbers again and again when the program is run. I exaggerated the number of lines by putting a line and a dot at every pixel in the width. Where the begin and end point of each line is placed depends on the random factor. Ah… I see. So you use randomSeed one time for drawing the lines. And a bit further on you use randomSeed again to generate the same random numbers to place the points.
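
To see that determinism in isolation, here is a minimal sketch of my own (not from the book; the printed values depend on your Processing version, but the two lines will always be identical):

// Minimal illustration of randomSeed determinism.
randomSeed (42);
println (random (1) + " " + random (1) + " " + random (1)); // first run of the sequence
randomSeed (42); // reset the initial condition
println (random (1) + " " + random (1) + " " + random (1)); // identical values again
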
M_1_1_01_GDV_01

I lowered the value of DoRandomSeed from 42 to 5. But I could not see any better or worse results. So if we know the location of the random points it might be easy to add an extra line or dot in their neighbourhood and create an extra position.
M_1_1_01_GDV_02

I have stopped working on this project for a week and I think that what I’ve done until now is not the way to go. I am going to try to make fake graphics with fake graphic representations using the random and randomSeed functions. This looks to me like a fine moment to start with some very simple data visualisation. It is relatively easy to put some numbers close to the dots. I have roughly ceiling-ed the numbers because otherwise you get a float number, and that number is way too large. Now there are a few things which have to be changed. For instance, the graph is now on its side. But maybe that is good. Let’s see if I can control the data from the dots to make lines.
M_1_1_01_GDV_03

What more do we need to make this graphic seem convincing? We need labels. Oh, I forgot something. If you look near the left side of the graphic, the numbers are too high. That’s because the numbers are calculated from zero (the left side of the display window, that is). And I added a margin, so that has to be subtracted from the amount. In this case it’s xCeiling minus DisplayMargin, and that gives the right numbers. What we also need are vertical axis labels. I think of years. And every line should stand for one month, so one year should contain 12 lines. Maybe it’s good to make a separate function for that. I call it VerticalAxisLabels. I also had a problem with the counting. At that moment it counts from the top (2014) down to a certain number. That should be the opposite. So there are now 27 years displayed: 2014 + 26 = 2040. It should start at the bottom with 2014 and count up. Fixed that by introducing the variable yearCounter, which counts down from 2040 to 2014. I also have to map the horizontal values to a percentage. Because I would like to say: in the year so-and-so there is so much percent chance that this and that could happen. I don’t know yet what could happen in a certain year, but for now I think this is the way to tell a story. So we have to map the numbers zero to 640 to 0 to 100%. At least I thought it would work like that, but I had to adjust it a bit just by fiddling with the numbers. And in the end I left out the year labels because I thought they would not make any sense. And I made a title in Photoshop. I had to do that because the font quality within Processing is not good enough in the smaller point sizes. Besides that, Processing (Java?) does not support the total range of the Univers font.
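
Back to the mapping for a moment: in Processing this is a one-liner with map (). A minimal check of my own, assuming the 640 available pixels:

// Map a horizontal pixel value (0..640) to a percentage (0..100).
float x = 320; // an example horizontal value
float percent = map (x, 0, 640, 0, 100); // gives 50.0
println (percent + "%");
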
M_1_1_01_GDV_04

Based on this graphic I will try to make different variations. So what does this look like when you leave out all the lines? It doesn’t look boring, but you don’t have an idea about the orientation. Would it help if I doubled the amount of random numbers? It does help… a little bit. I wonder if it is possible to see a pattern in the random numbers that are generated. Hence the title: ‘How random is randomness’.
M_1_1_01_GDV_05

What if I introduce the dots again? Well, they do help, but it does not give you more information about what I am trying to communicate. I left out the lines and introduced the vertex from the original program again. Increased the strokeWeight from 0.3 to 8 pixels.
M_1_1_01_GDV_06

Doubled everything except the font size. Centered the percentages in the circles.
M_1_1_01_GDV_07

I now have circles at the locations where the percentage lines end. It would be interesting if these circles showed the percentage. So… zero percent is no circle, 100% is a full circle. I can use the arc () function for that.
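
A minimal sketch of that arc idea (my own, with an assumed example value):

// Draw a circle segment whose angle corresponds to a percentage.
void setup () {
  size (400, 400);
  fill (0);
  noStroke ();
}

void draw () {
  background (255);
  float percentage = 75; // assumed example value
  if (percentage > 0) {
    // angles are in radians, starting at three o'clock, running clockwise
    arc (width / 2, height / 2, 200, 200, 0, radians (percentage / 100.0 * 360));
  }
}
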
M_1_1_01_GDV_08

Can the same be done with rectangles? No. Not really. I used rectangles with a size of 5 x 5 pixels. Lines are being drawn from right to left.
M_1_1_01_GDV_09

The same variation but now lines are running from the top to the bottom.
M_1_1_01_GDV_10

So I made only ten varieties? That is not much. But I think I now have a better idea of what the randomSeed function can do for me. That is the most important thing. Not the amount of variations you can make with it.

GDV P.4.3.3 Real time pixel values

This is the last assignment of the ‘P’ part of the Generative Design book. ‘P’ as in basic principles. As always I start with copying the summary of P.4.3.3. ‘The color values of pixels can again be translated into graphic elements, but with two important differences: first, the pixels are constantly changing because the images come from a video camera, and second, pixels are translated sequentially by dumb agents that are constantly in motion rather than simultaneously. The motion captured by the camera and the migration of the agents thus can paint a picture right before our eyes. A dumb agent moves across the display. The color value of the current real-time video image is analyzed at each position and serves as a parameter for each color and stroke value. The mouse position defines the stroke length and the speed of the agent.’ So much for the introduction. Here you can find the original programs.
P_4_3_3_01
P_4_3_3_02

As usual I prepared a Flickr summary page of all images I have created during this session. And on the loftmatic page you can find the programs if you click on the previews. I did not put any video material on the site. And you will never get the same images as I have, because when you run the program it must be connected to a video device, which in turn shows your environment instead of my habitat.

When I let the program run for the first time it gave me a ‘There are no capture devices connected to this computer.’ And that was certainly true. So I connected a Sony DCR-PC110E PAL to the MacPro. That did not work. I tried a Sony DCR-PC100 PAL, which did not work either. And I connected a Canon HDV XL H1 to the Mac Pro. In short, they all gave me a NullPointerException in Processing. In the console it says: 2014-11-09 13:39:30.148 java[41089:1646798] Error loading /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio: dlopen(/Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio, 262): no suitable image found.  Did find: /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio: no matching architecture in universal wrapper 2014-11-09 13:39:30.148 java[41089:1646798] Cannot find function pointer NewPlugIn for factory C5A4CE5B-0BB8-11D8-9D75-0003939615B6 in CFBundle/CFPlugIn 0x7f87e9430100 (bundle, not loaded). Which is very impressive poetry indeed. I have no internal camera on my MacPro. So I checked the same program on my MacBookPro, and that worked fine with the internal iSight camera. That is what I had to use during this assignment. To begin I changed all variable names to get a better understanding of how the program works. And I made the line length very short. It’s giving me strange remarks in the console. For instance: DVFreeThread – CFMachPortCreateWithPort hack = 0x1876e330, fPowerNotifyPort= 0x1876cfe0. DVFreeThread – CFMachPortCreateWithPort hack = 0x17817030, fPowerNotifyPort= 0x17816f00. DVFreeThread – CFMachPortCreateWithPort hack = 0x18774ca0, fPowerNotifyPort= 0x1876ddb0. No idea what this means, but three times the word ‘hack’ worries me a bit.
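
In hindsight, a quick device check would have saved me some cable swapping. Something along these lines, using the Processing video library:

// List all capture devices known to the Processing video library.
import processing.video.*;

void setup () {
  String [] cameras = Capture.list ();
  if (cameras.length == 0) {
    println ("There are no capture devices connected to this computer.");
  } else {
    for (String camera : cameras) {
      println (camera);
    }
  }
  exit ();
}
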
P_4_3_3_01_GDV_01

Commented out the CurvePoint_X. That gives me a randomly drawn vertical line on the left side of the display window. Changed it to CurvePoint_X * 1.2 in the curveVertex line, which gives me horizontal lines only.
P_4_3_3_01_GDV_02

Changed CurvePoint_Y * 1.2. Set CurvePoint_X back to its original state. This gives me vertical lines only.
P_4_3_3_01_GDV_03

If you multiply CurvePoint_Y * 1.2 before the endShape command, you get a slightly bent horizontal line at the end. Changed that. Brought all variables back to their original state. Added an ellipse after every shape is drawn.
P_4_3_3_01_GDV_04

Replaced the ellipse by a rect. Weird. I got a NullPointerException in the message area and a lot of red messages in the console. But the program started anyway. Is that possible? Started the program again and now it seems to be fine. No errors.
P_4_3_3_01_GDV_05

I have commented out all lines of the beginShape. Replaced the CurveVertex line with a rect. And that seems to work fine. I get a more structured and less chaotic image.
P_4_3_3_01_GDV_06

Hmmm… would type work? It does. Need to add color. Changed stroke to fill to change the color of the typeface. And it works only with one character. I think it has something to do with the fact that the counter is not updated. Left out the textSize random value. Found it too chaotic. I need to experiment later with using the right text. The point is that I am writing pieces of the program on my MacPro, sharing the file with my MacBookPro and checking there if the program works. If it does and it looks good, I skip to another variation. At a later stage I will make the final visuals by taking my MacBookPro to a more lively location.
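
The fix I mean looks roughly like this (my own minimal sketch; ‘RANDOM’ is just an assumed test string):

// Advance a counter so each step draws the next character of the string.
String letters = "RANDOM";
int counter = 0;

void setup () {
  size (400, 400);
  fill (0);
  textSize (24);
}

void draw () {
  // draw the next character at a random position each frame
  text (letters.charAt (counter % letters.length ()), random (width), random (height));
  counter++; // without this increment the same character repeats forever
}
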
P_4_3_3_01_GDV_07

I am going to remove almost all randomization except for the positioning of the objects. Used two arcs for creating the objects: radians (90) to radians (180), and radians (270) to radians (360).
P_4_3_3_01_GDV_08

Made an asterisk-ish object using arcs only, with a for loop. That seems to work fine too.
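
My own reconstruction of that idea, not the original code: short arc segments repeated in a for loop give the asterisk-ish figure.

// An asterisk-ish object built from arcs only, using a for loop.
void setup () {
  size (400, 400);
  noFill ();
  stroke (0);
  strokeWeight (2);
}

void draw () {
  background (255);
  translate (width / 2, height / 2);
  for (int i = 0; i < 12; i++) {
    float angle = TWO_PI / 12 * i;
    // several short arcs per direction form a dashed radial spoke
    for (int diameter = 80; diameter <= 320; diameter += 60) {
      arc (0, 0, diameter, diameter, angle - radians (4), angle + radians (4));
    }
  }
}
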
P_4_3_3_01_GDV_09

Used two triangles to make a kind of square, flattened on one of its sides. A compass needle maybe? And that is the end of the first program.
P_4_3_3_01_GDV_10

In the second program I renamed all global variables. Just for a start I replaced noFill with noStroke. That fills every object with white. Replaced all stroke commands with fill commands.
P_4_3_3_02_GDV_01

Added an alpha channel of 50 for each fill. But maybe that is too much. What about 10? That gives very transparent objects. What about 1? That is too little. Maybe 5? That looks fine.
P_4_3_3_02_GDV_02

What about introducing the stroke again, but a very thin one? Looks good. In addition I made the stroke color the same as the fill color but without the alpha channel.
P_4_3_3_02_GDV_03

Reduced the range of CurvePoint_X and CurvePoint_Y from -50, 50 to -10, 10. Time to remove the fill. Used the stroke with a 50 percent alpha.
P_4_3_3_02_GDV_04

Let’s make another variation by exaggerating one of the curve points in each line. Strangely enough it doesn’t do anything. I’ve put CurvePoint_X to -100 and CurvePoint_Y to 100. I reduced it to 50. Still nothing happens. Maybe I have to make the minus part the same number as the plus part. That works. Increased it to 400. But that is way too big. Maybe 100? Put the alpha of the line on 10. That gives me (after a while (5 minutes)) a cloud-like image which is quite nice. It is abstract but I don’t mind.
P_4_3_3_02_GDV_05

Replaced the first line object with a rect. It is definitely too large at this moment. Changed the random factor to -10, 10. It is still not clear to me how this randomization is handled.
P_4_3_3_02_GDV_06

Replaced the rects with ellipses.
P_4_3_3_02_GDV_07

Ellipses combined with curves. Maybe it’s better to do one with curves only, one with ellipses and one with rects. Ok, let’s exaggerate this by doing an ‘and’ thing: every line gets a rect, an ellipse and a curve. Switched off the strokes and used fills. But I think it is a bit too harsh, so I need to make the objects smaller. Finally I ended up using three different objects, all operating at separate places of the display screen. Sometimes they mix, sometimes they do not.
P_4_3_3_02_GDV_08

I am going to get rid of these scribbled lines. I will replace them all with ellipses. Time for some structured chaos (within the chaos).
P_4_3_3_02_GDV_09

Used the objects of P_4_3_3_01_GDV_09 to create new images.
P_4_3_3_02_GDV_10

I was not really satisfied with the results so far. I started to create simple objects in Processing. But at a certain moment I made a swoosh-like object which was too large compared with the earlier examples. For the color I used a 10 percent alpha. Suddenly I got a kind of strange structure on my screen. I had to wait for it for at least 10 minutes. So it takes a lot of time.
P_4_3_3_02_GDV_11

For the next variation I will use a wavy object. In fact it’s just one line. It’s not correctly positioned, but that doesn’t matter. I like it this way. It also has a kind of silk-like or rough-paper quality to it. And it also takes about 10 minutes to render.
P_4_3_3_02_GDV_12

I have gone totally abstract now. Not with bright colours but with very soft layers of alpha. It still gets its color from the iSight camera. But rendering the images this way also gives them a kind of 3D-ish look. I have no idea how this works, but it does. And that’s fine by me.
P_4_3_3_02_GDV_13

Let’s see what happens if I make the waves longer. So fewer waves, but longer ones.
P_4_3_3_02_GDV_14

Some very loose line objects.
P_4_3_3_02_GDV_15

And that finishes this part of the Generative Design book. It took me one year to make all the examples. Sometimes I did not even know why something happened, but if it looked good it was fine by me. I am thinking of making a book with the best examples. But for now I will continue with ‘Complex Methods’.

GDV P.4.3.2 Type from pixel values

Here is the text from the Generative Design book: ‘The following text image is ambiguous. It can be read for its meaning, or viewed at a distance and perceived as a picture. The pixels from the image control the configuration of the letters. The size of each letter depends on the gray values of the pixels in the original image and thereby creates an additional message. A character string is processed letter by letter and constructed row by row in the normal writing direction. Before a character is drawn, its position in display coordinates is matched to the corresponding position in the original image in pixel coordinates. Only a subset of the original pixels is used–merely those for which a corresponding character position exists. The color of the selected pixel can now be converted into its gray value and the gray value used to modulate the font size, for example.’ So much for the Generative Design book. Here is the original program.
P_4_3_2_01
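
To make the description concrete, here is a minimal sketch of the idea in my own words (not the book’s program; ‘portrait.jpg’ and the text string are placeholders):

// Set a text along rows and let the gray value of the underlying pixel
// modulate the font size: dark pixels get large letters.
PImage img;
String txt = "ALL WORK AND NO PLAY ";

void setup () {
  size (800, 800);
  img = loadImage ("portrait.jpg"); // placeholder image in the data folder
  textAlign (LEFT, CENTER);
  fill (0);
  noLoop ();
}

void draw () {
  background (255);
  float x = 0, y = 40;
  int i = 0;
  while (y < height) {
    // match the display position to the corresponding source pixel
    color c = img.get ((int) map (x, 0, width, 0, img.width),
                       (int) map (y, 0, height, 0, img.height));
    textSize (map (brightness (c), 0, 255, 25, 2)); // dark = large
    char letter = txt.charAt (i % txt.length ());
    text (letter, x, y);
    x += max (textWidth (letter), 1); // advance by the letter width
    if (x > width) { x = 0; y += 20; }
    i++;
  }
}
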

I also prepared a Flickr summary page where you can find all variations I made during this assignment. If you are interested in the programs then you can visit loftmatic’s GDV_P_4_3_2 page.

Just starting with the usual things. Going through the program, reading and copying all remarks from the book version as comments into the program, and changing the global variable names. For instance: inputText to TextInput, fontsizeMax to MaxFontSize and fontSizeMin to MinFontSize. Spacing to LineSpacing. Kerning to LetterSpacing. fontSizeStatic to OneFontSize. Most of the time I also change the code by adding some spaces when the text is very crowded. I think it makes the code more readable. I also changed the font to a font that Jeanne de Bont and I designed in 2004 for an animation film about dazzle painting. The animation film you can find here. That font shows more of the underlying picture. It also introduced a problem: the font supports capitals only. So I added TextInput = TextInput.toUpperCase (); Another idea was to use pictures of several writers. The text of books they have written creates their own image. I started with Emile Zola and used a paragraph of his book ‘La Débâcle’. It takes a second or two to load.
P_4_3_2_01_GDV_01

The second writer is Jules Verne. I have no special reason for the choice of writers. I just picked them arbitrarily. And I copied a paragraph of ‘20000 Lieues sous les mers’. To get a different effect in the image I increased the letter spacing to 5.
P_4_3_2_01_GDV_02

Increasing the global variable letter spacing works fine in the smaller font size. But I would like to keep that effect also when I am using larger font sizes. So I multiplied LetterSpacing * 10. That did not work. So instead of multiplying I divided it by 10. And then the image gets a totally different quality. Used a picture of Simone de Beauvoir and the first paragraph from her book ‘The Second Sex’.
P_4_3_2_01_GDV_03

Margaret Mead. Text from ‘Coming of age in Samoa’.
P_4_3_2_01_GDV_04

‘On Photography’ by Susan Sontag.
P_4_3_2_01_GDV_05

This program puts every character at a totally nonfunctional angle, which makes the text completely unreadable. It also animates with a frame rate of one frame per second. The text is from Camille Paglia’s ‘Sexual Personae’.
P_4_3_2_01_GDV_06

Germaine Greer. Text from ‘The female eunuch’.
P_4_3_2_01_GDV_07

Robert Hughes. Text from ‘The shock of the new’.
P_4_3_2_01_GDV_08

Richard Dawkins seemed to me the right writer with whom to end the variations with quoted paragraphs. I use only one character for his portrait.
P_4_3_2_01_GDV_09

The last one is Yuri Gagarin. The image is formed by the letters CCCP, which I never knew the real meaning of. This seems to be the right moment to look it up. It stands for ‘Sojúz Sovétskih Socialistíčeskih Respúblik’, which translated into English is ‘The Union of Soviet Socialist Republics’.
P_4_3_2_01_GDV_10

Ok. This program is really another extension of our image manipulation tools. Let’s move to the last program of this P-chapter of the Generative Design book.

Henk Lamers

GDV P.4.3.1 Graphic from pixel values

This is the description from the Generative Design book: ‘Pixels, the smallest elements of an image, can serve as the starting point for the composition of portraits. In this example, each pixel is reduced to its color value. These values modulate design parameters such as rotation, width, height, and area. The pixel is completely replaced by a graphic representation, and the portrait becomes somewhat abstract. The pixels of an image are analyzed sequentially and transformed into other graphic elements. The key to this is the conversion of the color values of pixels (RGB) into the corresponding gray values, because–in contrast to the pure RGB values–these can be practically applied to design aspects such as line width. It is advisable to reduce the resolution of the source image first.’ Here are the original programs:
P_4_3_1_01
P_4_3_1_02

I have prepared a summary page on Flickr where all the images which I made during this assignment are gathered. And here is the loftmatic page with all the programs that I changed. The first 5 work with JavaScript. The second 5 do not, so instead I added the Processing file when you hit a thumbnail.

To get acquainted with the program I always change the variable names to make them more meaningful for me. For instance, how do I know what l3 is? I think this is an unlucky choice (also because of the typeface). How do I know that it is not 13 instead of l3? And if it is, is it a capital I (eye) or a lowercase l (el)? Luckily the Generative Design people have placed a comment above this statement in the code: // grayscale to line length. So I changed l3 into greyToLineLength. That is more work to type but much more understandable for humans. GreyScaleToLineLength would have been even better. Maybe I have to change that anyway. Another example: w4. From the comment it seems to be a stroke weight. So I changed that variable’s name to greyScaleToStrokeWeight. Which leads to names like greyScaleToLineLengthStrokeWeight. It’s long, but understandable. Now what about getting a picture of a portrait? I found a photo from 2008 which I made of a cheetah in the Tiergarten Schönbrunn in Vienna. To begin with I used the standard variations which you can find under the 1-9 keys.
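
The core of the program, as I understand it, fits in a few lines. A minimal sketch with my own (assumed) names and a placeholder image:

// Reduce each pixel to a gray value and use it to drive a design
// parameter, here the stroke weight of a short diagonal line.
PImage img;

void setup () {
  size (600, 600);
  img = loadImage ("portrait.jpg"); // placeholder file name
  noLoop ();
}

void draw () {
  background (255);
  int tileSize = 10;
  for (int y = 0; y < height; y += tileSize) {
    for (int x = 0; x < width; x += tileSize) {
      color c = img.get (x * img.width / width, y * img.height / height);
      // standard luminance weighting of the RGB channels
      float grey = 0.299 * red (c) + 0.587 * green (c) + 0.114 * blue (c);
      float greyScaleToStrokeWeight = map (grey, 0, 255, 5, 0.5);
      strokeWeight (greyScaleToStrokeWeight);
      line (x, y, x + tileSize, y + tileSize);
    }
  }
}
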
P_4_3_1_01_GDV_01

So now it’s tweaking the program to find other manipulations. Started with the grayscale-to-stroke cases.
Case 1: added SQUARE to strokeCap.
Case 2: added 128 transparency.
Case 3: subtracted greyScaleToLineLength from posX.
Case 4: added grayscale / 255.0 * TWO_PI. And 0 – greyScaleToLineLengthStrokeWeight.
Case 5: increased the distance of the nearest neighbor.
Case 6: changed ellipse into rect. Changed fill to noFill and noStroke into stroke.
Case 7: changed rectangle size from 15 to 60.
Case 8: switched off all rectangles except one. Increased width and height of it.
Case 9: rotated TWO_PI. Changed ellipse into rectangle.
P_4_3_1_01_GDV_02

I wondered what you could do to see more of the patterns. At this moment they are very small. It seems that you have to decrease the image size. Decreased it from 100 x 100 to 50 x 50, and that works better. I did some tests with changing the background to black, but that did not work very well because you then have to invert the image, which leaves you with a negative image in cases 1 – 5. So I left the background white for the time being.
P_4_3_1_01_GDV_03

Just played with the numbers in cases 1 – 5.
Case 6: I have removed the fill of the rectangles.
Case 7: added a rectangle in a rectangle in a rectangle in a rectangle.
Case 9: changed rectangles into ellipses.
P_4_3_1_01_GDV_04

I just mention the cases so that you can really understand what I did (and did not do).
Case 1: changed posX and posY to -25.
Case 2: removed the fills and added a secondary circle.
Case 3: added a secondary line which makes a kind of < symbol. Made the background black. But after a while I decided to make it white again. I still don’t like the negative image.
Case 4: removed one greyScaleToLineLengthStrokeWeight in the line function.
Case 5: added another line which makes it all a bit more like it has been sketched.
Case 6: getCurrentColor is divided by 20. Which makes use of another RGB color range.
Case 7: removed the rectangles and replaced them with lines.
Case 8: replaced the rectangles with ellipses.
Case 9: replaced stroke color and rectangle with ellipse and noFill.
P_4_3_1_01_GDV_05

Renamed the global variables. Added comments to get it all a bit more understandable. Resized the display image. And used the svg files which were added by Generative Design. Maybe it’s a good idea to change the image into a colored image. Made 9 cases like the ones used in the earlier version. And I replaced the standard svg’s by my own, much simpler svg’s. Also increased the resolution by increasing the original image. And OS X Yosemite seems not to support Adobe Illustrator anymore. I am on Java SE Runtime 7 and Illustrator needs Java SE Runtime 6. Downloaded and installed that. Adobe Illustrator works again. By the way… it seems that you can install Java SE Runtime 6 and 7 next to each other. They won’t hurt each other. Another thing is that from here on the Error Console of Safari gives me an [Error] ReferenceError: Can’t find variable: sketchPath, setup (undefined, line 17), executeSketch (processing.js, line 8527), (anonymous function) (processing.js, line 8535). So I placed the programs only on the loftmatic website.
P_4_3_1_02_GDV_01

I am now working with two svg files. Tried to keep it simple. It might get more complex toward the end. Cases 6, 7, 8 and 9 need some attention. Just played with the fills and strokes to get those problems out of the way.
P_4_3_1_02_GDV_02

It is really impressive how many variations you can make by just changing a minor detail. Even enlarging tileHeight and/or tileWidth can give very different results. And remember: pushMatrix () cannot push more than 32 times. I received that message because I forgot a popMatrix.
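
For reference, this is the pairing I forgot, in a minimal sketch of my own:

// Every pushMatrix () needs a matching popMatrix (); the transform
// stack is at most 32 deep, hence the error message.
void setup () {
  size (800, 800);
}

void draw () {
  background (255);
  for (int i = 0; i < 100; i++) {
    pushMatrix ();
    translate (i * 8, i * 8);
    rect (0, 0, 5, 5);
    popMatrix (); // omit this line and Processing stops after 32 pushes
  }
}
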
P_4_3_1_02_GDV_03

Used a honeycomb as the svg file. Just tweaked the variables a bit.
P_4_3_1_02_GDV_04

Let’s see what happens when I load all svg files of the earlier cases.
P_4_3_1_02_GDV_05

Henk Lamers

GDV P.4.2.2 Time-based image collection

The Generative Design book gives the following summary of this assignment: ‘In this example, the inner structures of moving images are visualized. After extracting individual images from a video file, this program arranges the images in defined and regular time intervals in a grid. This grid depicts a compacted version of the entire video file and represents the rhythm of its cuts and frames. To fill the grid, individual still images are extracted at regular intervals, from the entire length of a video. Accordingly, a sixty-second video and a grid with twenty tiles results in three-second intervals.’ Here is the original program.
P_4_2_2_01
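
A rough sketch of the interval idea as I read it (my own simplification, not the book’s program; ‘video.mov’ is a placeholder, and a robust version would wait for movieEvent () instead of reading right after jump ()):

// Jump through a movie at regular time intervals and place one still
// per grid tile: interval = duration / number of tiles.
import processing.video.*;

Movie movie;
int tileCountX = 5;
int tileCountY = 4;
int tileIndex = 0;

void setup () {
  size (800, 800);
  movie = new Movie (this, "video.mov"); // placeholder file name
  movie.play ();
}

void draw () {
  if (tileIndex >= tileCountX * tileCountY) {
    noLoop ();
    return;
  }
  // e.g. a sixty-second video and 20 tiles give three-second intervals
  float interval = movie.duration () / (tileCountX * tileCountY);
  movie.jump (tileIndex * interval);
  movie.read (); // simplified: assumes the frame is already available
  int w = width / tileCountX;
  int h = height / tileCountY;
  image (movie, (tileIndex % tileCountX) * w, (tileIndex / tileCountX) * h, w, h);
  tileIndex++;
}
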

I have put all images I made during this assignment on a Flickr page. I decided not to include the movie files I used. So instead I added the Processing files, which you get if you click a thumbnail on the loftmatic page that you can find here. You might use your own movie file to get comparable results.

After doing some tweaking of the program I had to find some video footage. Although we have lots of video material from our documentaries and animation films, it is a lot of work to find really good footage. It takes a lot of time because you have to go through every DV tape. And while doing that it might turn out that there was nothing useful on the tape. So what is the easiest way to get some footage? The iPhone! I still have a stunning 3GS. And I have never used the video function. So this might be a good opportunity. But what do I film? Let’s start with the most colorful news broadcaster in the world: CNN! Took me a few minutes to get some footage. Interestingly enough I filmed upside down. There is no image here. All the images are on the Flickr page. This link takes you to the Processing code, and that is also true for all following links on this page.
P_4_2_2_01_GDV_01

I did another one. Ah… that looks better. I especially like the footage which has typography in it (but that is my abnormality). So what file size is that movie? It’s 7.6 MB. It might be that I have to reduce that with Compressor or Final Cut Pro.
P_4_2_2_01_GDV_02

That works fine. The whites are a bit bleached out, but it’s fast.
P_4_2_2_01_GDV_03

What about changing the format? Let’s double TileCountX and TileCountY. Oh… that is even better!
P_4_2_2_01_GDV_04

Can I double that again? Sure I can. TileCountX = 48. TileCountY = 64. That is 3072 frames. And the final image of all those 3072 frames together is getting better every time. It is also getting more of a structure, and less recognisable. It is also taking more time to compose the final image. This one took about 5 minutes to complete.
P_4_2_2_01_GDV_05

Checked if it was possible to rotate the total image. Used footage from a 1960’s NASA film.
P_4_2_2_01_GDV_06

Lift-off of Apollo 11. First in 18 x 24. And once in 36 x 48. But what if I double that again? It seems to work, although there is not much to see anymore. The footage has just disappeared into a lot of tiny rectangles. It looks almost as if it has 3D-ish qualities. It took almost 14 minutes to render.
P_4_2_2_01_GDV_07

Apollo 10 footage. Above the moon and filmed from the Lunar Module.
P_4_2_2_01_GDV_08

Apollo 10 spacewalk.
P_4_2_2_01_GDV_09

Apollo 11 landing at sea. And I made a typo which might give me more ideas to make the next five programs.
P_4_2_2_01_GDV_10

Continuing with checking if my typo still works. I made a mistake while defining the variable TileCountX: instead of 36 I filled it with 368. A lot of strange things appeared in the display window, so I stopped the process. But now I make TileCountX 400 and let it run. This image took 14 minutes to complete.
P_4_2_2_01_GDV_11

TileCountX = 400 and TileCountY = 4. It does weird things to the original footage.
P_4_2_2_01_GDV_12

What happens if I make TileCountY just 1 pixel and TileCountX 800? Amazing!
P_4_2_2_01_GDV_13

Swap the content of TileCountX and TileCountY.
P_4_2_2_01_GDV_14

And now for a maximum stress test: TileCountX = width and TileCountY = height. Processing has to calculate 640,000 pixels. It started at 13:51. To complete 800 pixels in a row takes about 2 minutes. After one hour I cancelled it. It hardly made any progress and the image that was going to be delivered did not seem to get very interesting. Another attempt. It took 56 minutes for Processing to render.
P_4_2_2_01_GDV_15

Just put everything at a 45-degree angle. To fill an 800 x 800 screen Processing needs to render an image of 1150 x 1150 (the diagonal of an 800-pixel square is 800 × √2 ≈ 1131 pixels). Otherwise it leaves black corners in some place or another.
P_4_2_2_01_GDV_16

Kept the angle of 45 degrees but put TileCountY on 2, so the screen is divided into two triangles.
P_4_2_2_01_GDV_17

Rotated the image by 90 degrees, which gives an interesting effect: Processing renders a kind of small blocks. I wonder if the effect is the same when I switch off the rotate function. Yes, it is still rendering little blocks. Maybe I have to increase TileCountX? That did not work out.
P_4_2_2_01_GDV_18

Interesting things are going on here. Organised chaos.
P_4_2_2_01_GDV_19

I did not change much here. Made TileCountX 2400, but it did not do much for the image quality. I just rotated the image by 270 degrees because I like it better when it’s upside down. That was a mistake: Processing rendered it horizontally, but I think I like this even more.
P_4_2_2_01_GDV_20

Ah… there is an extra program in the Generative Design folder: the time-lapse camera. After each intervalTime a picture is saved to the sketch folder. That seems to give me some problems. Let’s try to solve them step by step. When running the program Processing says: ‘There are no capture devices connected to this computer.’ We need a camera. The camera is now attached. Processing says: ‘NullPointerException’ in the message area. And besides that it gives me a lot of junk in the console: ‘2014-10-07 11:46:12.660 java[28975:10003] Error loading /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio:  dlopen(/Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio, 262): no suitable image found. Did find: /Library/Audio/Plug-Ins/HAL/DVCPROHDAudio.plugin/Contents/MacOS/DVCPROHDAudio: no matching architecture in universal wrapper 2014-10-07 11:46:12.661 java[28975:10003] Cannot find function pointer NewPlugIn for factory C5A4CE5B-0BB8-11D8-9D75-0003939615B6 in CFBundle/CFPlugIn 0x7fef1c115350 (bundle, not loaded) name=DCR-PC110E,size=720×576,fps=30 name=DCR-PC110E,size=720×576,fps=15 name=DCR-PC110E,size=720×576,fps=1 name=DCR-PC110E,size=360×288,fps=30 name=DCR-PC110E,size=360×288,fps=15 name=DCR-PC110E,size=360×288,fps=1 name=DCR-PC110E,size=180×144,fps=30 name=DCR-PC110E,size=180×144,fps=15 name=DCR-PC110E,size=180×144,fps=1 name=DCR-PC110E,size=90×72,fps=30 name=DCR-PC110E,size=90×72,fps=15 name=DCR-PC110E,size=90×72,fps=1’… so… I think I’m running into a technical problem which might cost me a lot of time. Knowing that, I stop this session and continue with the next program.

Henk Lamers

GDV P.4.2.1 Collage from image collection

This assignment is described in the Generative Design book as follows: ‘Your archive of photographs now becomes artistic material. This program assembles a collage from a folder of images. The cropping, cutting and sorting of the source images are especially important, since only picture fragments are recombined in the collage. All the pictures in a folder are read dynamically and assigned to one of several layers. This allows the semantic groups to be treated differently. The individual layers also have room for experimentation with rotation, position, and size when constructing the collage. Note the layer order; the first level is drawn first and is thus in the background.’ So much for the description in the Generative Design book. Here are links to the original programs.
P_4_2_1_01
P_4_2_1_02

I started this assignment with a few photos I took in Denmark during our last holiday in April 2014. But after working with them for a while I thought that it would also be interesting to try something else. A few years ago I went to the local bookstore and bought five international newspapers: ‘De Volkskrant, De Morgen, Le Monde, The Times and La Gazzetta dello Sport.’ I selected a hundred words or full headers and ripped them from the newspapers. I was especially interested in those words which came from the Dutch, French, English and Italian languages but were still understandable in English. Scanned them. And made a hundred collages with them, which you can find here. I will use the same scans in this assignment. I also prepared a Flickr page with all the images which I have made during this assignment.

When I was running the program for the first time I did not notice how it got to the image files. After commenting the program I came across the word ‘sketchPath’. Checked the Processing reference but I got a ‘Not found’. I had not seen the function ‘sketchPath’ before. What does it do? Checked the forum. Found a post from binarymillenium with an answer from JohnG: ‘Re: Get directory of current sketch? Reply #1 – Dec 16th, 2008, 5:31pm: sketchPath(“”); will give you the path to the sketch, dataPath(“”); will give you the path to the data directory. You can put things into the method to give a file, e.g. sketchPath(“foo.txt”); will give you the path to a file called “foo.txt” in the sketch directory.’ Ok. Clear. So it does the same thing as selectInput does, only with selectInput you have to find the image files yourself.

Now this is not going to work in JavaScript. If you run this program as a JavaScript file in Safari, Safari’s Error Console says: [Error] Processing.js: Unable to execute pjs sketch: ReferenceError: Can’t find variable: sketchPath. Ok! Then I exchange the sketchPath call with selectFolder () or selectInput (). But that is not going to work either, because these functions are not yet implemented in JavaScript. I could add my images to the data folder, but that means that I would have to upload at least 3.8 MB per program x the 20 programs I have adjusted = 76 MB of images. That does not seem a good idea. For the time being I just add the Processing code to the loftmatic website and leave the finished images on Flickr.

So I changed a few global variable names. I think it’s hard to read the variables ‘layer1Items, layer2Items and layer3Items’. Changed those names to Layer1_Items, Layer2_Items and Layer3_Items. This is a rather complex program (for me). A lot of things are still a bit mysterious. But I will start with changing the images to images of my own. I think that the first results are a bit too chaotic. Although a collage should be a bit chaotic, let’s see what I can do to prevent that chaotic look. Made a copy of the footage folder. Changed the sketchPath link and removed files until I had only three of them left. Does the program still work? It does. And the image looks less cluttered. I even wonder if I need those three layers, but I leave them in for the time being. I will only use the scans from the newspapers. I have put all settings on default (almost). And no rotation. I used only 3 scanned newspaper headers. Clicking on the next (and following) links will show Processing code only.
P_4_2_1_01_GDV_01

In the beginning I thought that those layers would not make any sense. But if you work with it for a while, you know which layer has a good composition. So you keep that layer and change the other ones until they are fine too.
P_4_2_1_01_GDV_02

Let’s see what rotation brings. Works fine, although I am not sure how the rotation works. Used radians to put layer 1 vertical. Also decreased the scaling factor a bit.
P_4_2_1_01_GDV_03

Rotated layer 3 by 270 degrees.
P_4_2_1_01_GDV_04

Now using 15 images. But I have more of them ready. I will add 3 per version. So I think in the end we will use 60 newspaper headers (if I make 20 variations) (not sure about that yet).
P_4_2_1_01_GDV_05

I think I have to reduce the image size of my footage. Some of the images are more than 800 pixels in width, so that folder’s file size will increase by megabytes during the next sessions. Ah… found out that the resolution is 150 pixels per inch. I can reduce that to 72 pixels per inch and they will all get smaller, which also reduces the file size. Let’s see how this new set works. The folder size shrinks by almost a quarter. Ok. All images are smaller, so I can set the largest randomized size factor back to 1.0. And that looks good.
P_4_2_1_01_GDV_06

Maybe it’s a good idea to add some primary colors to the compositions. Added a red, a blue and a yellow piece of paper. Ah… because my footage folder has grown with images, I can never get the same compositions back which I had in the beginning, when I had only 3 images. If I load that file again it will be using all images instead of the 3 which I used in the first program. Anyway… I just continue. It’s part of the learning process.
P_4_2_1_01_GDV_07

I wondered why my last newspaper headers were not in the final image. Forgot to update the image count. It was still on 8.
P_4_2_1_01_GDV_08

What happens if I make it 20, for instance? The composition will just have more items in it. But also more of the same items.
P_4_2_1_01_GDV_09

I have removed the randomized loading of images. I like a certain amount of images which is just enough to make the composition fine. No more and no less. It’s now on 20. But what about 100? That isn’t any good because only the top layer will be visible. So you have to reduce the scale. If you keep it between 0.2 and 0.4 it might be good. Just try to make it as small as possible: between 0.1 and 0.2. That isn’t too bad, but it is lacking a bit of contrast. What about 200 items per layer? Great. Maybe 400? Even that is not bad. Let’s push up the scale to 0.1 to 0.3. And now something silly: 0.01 to 0.09. Ok, it’s going to be a pattern, and you cannot read the headlines anymore. I made a mistake: instead of 100 items I typed 11100. Even then it works. It doesn’t look good, but it works. Brought the amount back to 50. Layer1 is scaled between 0.1 and 0.9, Layer2 between 0.1 and 0.6, Layer3 between 0.1 and 0.3.
P_4_2_1_01_GDV_10

Just added the code and let the program run. Why do I not like this? I think because it is way too chaotic. What can I do to make it less chaotic?
P_4_2_1_02_GDV_01

Changed the imageMode to CORNER. That gives me a kind of spiral-ish composition. Changed the imageMode to CORNERS. It does not make very much difference (I think). I switch back to imageMode CENTER.
P_4_2_1_02_GDV_02

I start with putting every variable at a basic setting. Hmm… that is not too bad. But when I hit the 1, 2 and 3 keys it gets chaotic again. Have to change that. But it looks promising.
P_4_2_1_02_GDV_03

Ok. I fixed the keys 1, 2 and 3. They all have the same settings now. Because the circle looks fine I would like the images a little larger so you can read the headlines better.
P_4_2_1_02_GDV_04

That seems better. Made the circle a bit bigger. So I have more room to enlarge the headlines. I have adjusted the sizes. I think these settings are better than the ones I had in the beginning.
P_4_2_1_02_GDV_05

What happens if I increase the amount of headlines from 100 to 500 per layer? It fills the circle. It is also too large: the image is hitting the boundaries of the display window. That is not too bad, but 500 images per layer is too much. I try 300 for layer1, 200 for layer2 and 100 for layer3.
P_4_2_1_02_GDV_06

Until now I had not visited ‘den zufälligen Rotationswinkel’ (the random rotation angle), but by using those variables you can easily make nice images which are slightly chaotic, but not too much.
P_4_2_1_02_GDV_07

Increased the rotation from 45 to 90.
P_4_2_1_02_GDV_08

From 90 to 100. Higher numbers will make circles only (if you do not use radians).
P_4_2_1_02_GDV_09

But if you do use radians they will make a difference. Here I used radians (90).
P_4_2_1_02_GDV_10

Henk Lamers

GDV P.4.1.2 Feedback of image cutouts

Let’s start this exercise with the description of the Generative Design book: ‘A familiar example of feedback: a video camera is directed at a television screen that displays the image taken by the camera. After a short time, the television screen depicts an ever-recurring and distorted image. When this phenomenon is simulated, an image’s level of complexity is increased. This repeated overlaying leads to a fragmentary composition. First, the image is loaded and shown in the display. A section of the image is copied to a new randomly selected position with each iteration step. The resulting image now serves as the basis for the next step – the principle of each feedback.’ Here you can find the original two programs.
P_4_1_2_01
P_4_1_2_02

I have prepared a Flickr summary page with all the images I’ve made during this exercise. And you can find all programs on the loftmatic page. A general remark about JavaScript: the images produced by JavaScript are not of the same quality as the ones that Processing produces. It seems that JavaScript throws a kind of blur over the edited image. So the end result will be totally different (and much better) when you run the program in Processing. I have no idea why. And a few programs don’t work at all. I will mention it in the following paragraphs when that happens, and I will note which error messages I received.

Because this is a very short program it did not take much time to adjust things. Just the usual stuff: changed a few variable names to make things more understandable (for me). What does round do? Ah… it calculates the integer closest to its parameter. Forgot that. Added one of my own photos. Why is there a white top and bottom? I think it is so you can better see the noise that is left by the manipulated pixel slices. I changed the background to black. Works fine too. I only have to remove my watermark (or export the photo without a watermark from Aperture), otherwise it will leave white pixels in the manipulated result.
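
For context, the heart of the program is a single copy () call per frame. A minimal reconstruction of my own (the variable names follow the ones I use below; ‘image.jpg’ is a placeholder):

// Feedback: copy a randomly chosen vertical slice of the current canvas
// onto itself; each result is the input for the next iteration.
PImage img;

void setup () {
  size (800, 800);
  img = loadImage ("image.jpg"); // placeholder file name
  image (img, 0, 0);
}

void draw () {
  int sliceWidth = (int) random (10, 100);
  int x_copyFrom = (int) random (0, width - sliceWidth);
  int x_copyTo = constrain (x_copyFrom + (int) random (-10, 10), 0, width - sliceWidth);
  copy (x_copyFrom, 0, sliceWidth, height, x_copyTo, 0, sliceWidth, height);
}
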
GDV_P_4_1_2_01_GDV_01

My first question is: does this feedback of image cutouts also work horizontally? In my naïveté I just swapped x_copyFrom with y_copyFrom and let the program run. That does not make any sense at all. There is some movement at the left side. Swapped x_copyTo with y_copyTo. Let it run again. No success. Swapped sliceWidth with sliceHeight. Some weird stuff happening at the top of the display window. Changed a few other parameters and now it works. But it would be better to have a photo which uses the full height but not the full width. Also changed the copyTo range to -1 and 1, which means that the image distorts very slowly (I think).
GDV_P_4_1_2_01_GDV_02

Can I put this kind of image processing at an angle? Added a translate and rotate to the program and I get an ‘InternalError: transformed copyArea not implemented yet’. Might be the Java2D rendering system. It does not like a copy after a transform? But if you restrict the y_copyFrom random range to (0, height / 2), only the top of the image will be scrambled.
GDV_P_4_1_2_01_GDV_03

Pretty obviously, when you restrict the y_copyFrom random range to (height / 2, height), only the bottom of the image will be scrambled.
GDV_P_4_1_2_01_GDV_04

Back to the original code. Forgot to change something. Let’s see it as a happy accident. But not for JavaScript. I get no error message in my Safari Error Console, but JavaScript produces a total mess which does not represent the images you get if you run the program in Processing.
GDV_P_4_1_2_01_GDV_05

sliceWidth = (int) random (1, 2); and let it simmer for about 30 minutes. Time for an espresso.
GDV_P_4_1_2_01_GDV_06

Tried to let the vertical slices change their place not too much. Perfectly translated by JavaScript, because it doesn’t do anything at all. What does my Error Console say? [Error] IndexSizeError: DOM Exception 1: Index or size was negative, or greater than the allowed value. (anonymous function) (processing.js, line 4076). Fine!
GDV_P_4_1_2_01_GDV_07

Found out that you can place the original photo in the background. So it shifts the slices but does not replace them with black, and the image stays partially unaffected. It stays a bit fuzzy what the program exactly does when mashing up some parameters. Anyway… JavaScript doesn’t do anything. Same error message as with the previous program.
GDV_P_4_1_2_01_GDV_08

Maybe I did something illegal here: I have put a larger random parameter in front of the smaller parameter. This gives an interesting effect though. Some repetition is going on, and the image moves slowly to the right border, repeating itself around 50 pixels from the left. This takes time. It is a bit similar to the wind filter in Photoshop, only this takes more than half an hour to make an image. The program to generate the last image was working for 210 minutes. But in JavaScript it does not work at all. I used negative values for the image size, and JavaScript does not like that.
GDV_P_4_1_2_01_GDV_09

Removed the randomness of x_copyTo. What about positioning the image outside the display and copying it into the display? That does work. But why does it copy these image strips? And where does the offset come from? Ah… I see… It copies the original strip at the left side. It doesn’t copy the image which is outside the display. When it has copied, it skips 100 pixels to the right and copies again. And when it has pasted, it copies the pasted line again. So I selected the right part of the image, 100 pixels wide, let it copy from there and let it paste at random positions between 200 and 800 pixels. Yes… negative values again, so no JavaScript image is produced. It even led to a crashing Safari browser.
GDV_P_4_1_2_01_GDV_10

This is an extra program which is not to be found in the Generative Design book. The program makes all x- and y-positions random. Changed the size of the to-be-copied image selection to 10 x 10 pixels. And it also seems to work perfectly in JavaScript.
GDV_P_4_1_2_02_GDV_01

I know that 10 x 10 is small. But what if I make it 1 x 1? It looks like nothing is happening. Oh, I see, two or three pixels have been moved in the top black area. This is going to take a lot of time. Changing sliceHeight to height. That is better. Let it run for 45 minutes.
GDV_P_4_1_2_02_GDV_02

And this of course also works for horizontal copying. But I think it is then better to take a portrait-oriented original image. I decreased the x_ and y_copyTo random factor to (-2, 2). So the distortion is very small and it is going to take a lot of time.
GDV_P_4_1_2_02_GDV_03

Undid (as in: I undid that yesterday) all setting changes and added a sliceWidth randomized between -400 and 400. I have the impression that this image (with a lot of black in it) will not work as well as the images with more light and colors in them.
GDV_P_4_1_2_02_GDV_04

Another image with almost the same settings as the previous example. And this works better. And also slower. The final image took from 13:20 until 16:05. That was when the first pixel reached the right side of the Processing image display window.
GDV_P_4_1_2_02_GDV_05

Changed sliceWidth and sliceHeight to 100. And x_ and y_copyTo to random (-2, 2). This seems to give very free and organic color clouds. I must admit I have also chosen a cloudy source image.
GDV_P_4_1_2_02_GDV_06

What about feeding the manipulated image to the same program again? So I first distort an image, save it as a jpg, and import that again for a round of image processing. I think it does not make much of a difference, except when you change a few parameters. JavaScript blurs all the fine details. I think it uses the gray from the background to mess everything up. Terrible!
GDV_P_4_1_2_02_GDV_07

I must say that these images have painterly qualities. Just played with the parameters. And left the black sides out of the interaction / animation.
GDV_P_4_1_2_02_GDV_08

Added only vertical manipulation lines.
GDV_P_4_1_2_02_GDV_09

Did the same here but in a horizontal way. So the lines don’t exchange places vertically, only horizontally. And they shift in and out of each other. But JavaScript makes a mess of it.
GDV_P_4_1_2_02_GDV_10

Henk Lamers