Thursday, 1 March 2018

The Algorithm and Beyond - How I Wrote and Recorded an Algorithmic Symphony

Following a previous meetup on algorithmic music, I was really pleased that Steven Goodwin was inspired to share his own experience with computer generated music.

A video of the meetup is here: [skillsmatter].

Steven's slides are here: [pdf slides].

Precisely Timed Music

In 1996 Steven read about György Ligeti’s mechanical music. Ligeti's 1962 Poème Symphonique in particular was interesting - a composition for 100 metronomes, each set to different speeds.

The piece explored the idea of music from the interplay of precisely timed beats. Importantly, no human could play such music because the accuracy needed is beyond our ability. If you listen to the piece above, you'll hear the music change from saturated noise to periods of coherent synchronisation, which then disperse again.

There are parallels here to mathematics and physics. In mathematics we have the idea of number multiples, like 2, 4, 6, 8... and 3, 6, 9, 12... Sometimes these sequences cross paths, sometimes they overlap completely. Some numbers are always beyond reach - the mysterious primes. In the physical world, we have periodic water waves, which sometimes run past each other, sometimes cancel each other out, sometimes join to reinforce each other... and sometimes they cause a great non-linear "breaking" wave.
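To make the metronome analogy concrete, here's a tiny JavaScript sketch of my own (not from the talk) that works out when the clicks of two metronomes land together - the moments of synchronisation that Ligeti's piece drifts in and out of.

// a rough sketch: when do the beats of two metronomes coincide?
// tempos are in beats per minute
function coincidences(bpmA, bpmB, seconds, tolerance = 0.01) {
  const periodA = 60 / bpmA;            // seconds between clicks of metronome A
  const periodB = 60 / bpmB;            // seconds between clicks of metronome B
  const together = [];
  for (let t = 0; t <= seconds; t += periodA) {
    // does a click of B fall within `tolerance` seconds of this click of A?
    const nearestB = Math.round(t / periodB) * periodB;
    if (Math.abs(t - nearestB) < tolerance) together.push(t.toFixed(2));
  }
  return together;
}

console.log(coincidences(120, 100, 12));  // beats line up at 0, 3, 6, 9 and 12 seconds

At 120 and 100 beats per minute the click intervals are 0.5s and 0.6s, so the beats coincide every 3 seconds - the least common multiple of the two periods, which is exactly the number-multiples idea above.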

Technology 1962 to 1996

Steven was inspired to experiment creating similar music himself. Ligeti used instruments that were available to him at the time in 1962 - and the metronome was the only reasonable way of generating such precisely timed sound.

Unlike Ligeti, Steven had access to modern (for 1996) technology like computer-based music sequencers and MIDI.

Today, Steven also uses software he developed himself: javascript and C based MIDI libraries.

The Process of Idea to Music

Steven gave several examples of the work that goes into building from a core mathematical idea to creating a piece of music that actually sounds like music and is interesting to listen to.

He started with a very simple example.

The above shows a rhythm with 1 note per bar, then 2 per bar, then 3 per bar, 4 per bar, then finally 5 per bar. As a mathematical idea that is simple and rather pleasing. In this particular example, each bar is repeated twice.
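As a rough illustration (my own sketch, not Steven's code), that rhythmic idea can be written down as nothing more than a list of note-on times:

// minimal sketch: n evenly spaced notes per bar, each bar repeated twice
function risingRhythm(barLengthBeats = 4) {
  const events = [];
  let barStart = 0;
  for (let notesPerBar = 1; notesPerBar <= 5; notesPerBar++) {
    for (let repeat = 0; repeat < 2; repeat++) {          // each bar played twice
      for (let i = 0; i < notesPerBar; i++) {
        events.push({ time: barStart + i * barLengthBeats / notesPerBar });
      }
      barStart += barLengthBeats;
    }
  }
  return events;
}

console.log(risingRhythm().length);  // 30 note events: (1+2+3+4+5) notes x 2 repeats

Each event would still need a pitch, a duration, a loudness and an instrument before it becomes music - which is exactly the extra work Steven went on to describe.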

If only it were that easy to make successful algorithmic music!

Steven underlined this point and discussed how we need to think about several questions, such as:
  • which notes (frequencies) to place at each of those dots marking the rhythm?
  • how many notes to choose from?
  • which instruments?
  • any fades or transitions?
  • should the speed vary over the piece too?

Steven chose to transition between a piano and a harp, with variations in speed. You can hear the piece in the video at about 9m50 [video]. The result is very interesting and engaging!

MIDI Libraries

Instead of having to play music himself by hand, Steven wrote his own libraries for writing MIDI files.

MIDI is a technical standard for communicating music between tools and instruments. A common misunderstanding is that MIDI files are like mp3 files - they aren't, because they don't contain recorded sound. Instead, MIDI files contain information about which notes occur when, and how they occur (eg loudness, duration). The sounds themselves are separate, but a MIDI file will refer to sounds and instruments for playback.

MIDI libraries allowed Steven to create these MIDI files from his own programs. Those MIDI files could then be played back, or modified, by other music tools.

See the resources below for links to a javascript and a C based library.

MIDI Process

Steven went on to explain his process with an example of MIDI generated by his own computer program. He explained that too often the result of directly playing back music created like this is something that sounds horrible.

Again, he came back to the theme of having to put in additional work to turn the output of simple algorithms into something listenable. He explained additional things that needed to be thought about when developing the first raw MIDI from a mathematical idea:
  • relative volumes of different instruments, and individual notes to avoid them being hidden in the noise
  • introductions / outros
  • changing sounds/instruments so they work better - pianos don't sound like harps, which don't sound like human voices - they have different timbre.

For me personally, the amount of work involved in sound and music "direction" was something I was previously unaware of.

Fractal Music

Steven demonstrated another idea where he started with a melody of notes, and for the next accompaniment he took every second note, and repeated this idea.

This is similar to the process of creating mathematical fractals - self-similar Cantor sets for example.

The result of taking out every other note, repeatedly, is a melody that's the same as the original - which you can see visually above.
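Here's a minimal sketch of that thinning-out process in plain JavaScript (my own illustration rather than Steven's code):

// keep taking every second note of the melody to build thinner accompanying layers
function fractalLayers(melody, minLength = 2) {
  const layers = [melody];
  let current = melody;
  while (current.length > minLength) {
    current = current.filter((note, index) => index % 2 === 0);  // every other note
    layers.push(current);
  }
  return layers;
}

const melody = ['C', 'E', 'G', 'E', 'C', 'G', 'E', 'C'];
console.log(fractalLayers(melody));   // 8 notes, then 4, then 2

Because every layer is built from the one above by the same rule, each is a thinned-out echo of the original - the self-similarity that makes the idea fractal.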

From an algorithmic perspective, this seems like a great idea. Steven played a version of music based on this theme (at 32m50 in the video). It sounded fantastic - but only after a fair bit of human work:
  • orchestration to choose which instrument when
  • which parts to repeat as a motif
  • add elements like a cymbal crash to highlight repeats
  • overall structure of 5 repetitions
  • play the 2nd repeat backwards
  • relative dynamics and emphasis - eg building to a climax
  • remove notes towards the end when they oversaturate 

Written But Not Recorded

After taking us through some more examples of algorithmically based music, Steven took us through an often overlooked step - that of actually recording and producing music.

The central point is that what we might produce at home will very likely sound different when other people listen to the music on their equipment.

Part of the approach is to use high quality equipment and sounds to recreate the music with high fidelity - the master. From this master, versions can be produced for listening on personal audio players and tiny headphones. In addition, recording studios will have expertise in avoiding sound which works in the studio but won't on cheaper and smaller equipment.


The main point for me was that it is great to have mathematically interesting ideas for creating music - but they can only be a starting point, a basis even. Once the theme has been set, a human artist does need to put in a fair bit of work to turn it into listenable music.

Having said that, some of the algorithmic themes are really interesting - the fractal music in particular was very pleasing too.

Symphony No. 1 in C# Minor

You can listen to Steven's impressive symphony at his website. Click the following to be taken to Steven's site.

Afterthought: 2001 A Space Odyssey

Steven's talk inspired me to explore Ligeti and I found that his music was used by Stanley Kubrick. I hadn't realised that the haunting Lux Aeterna used in 2001: A Space Odyssey was created by Ligeti.


Saturday, 27 January 2018

Clitocopia: Cliteracy and WebVR

At our first ever meetup in 2017, Sinead and her husband Francois talked us through their Clitoris Vulgaris project, algorithmically generated twitter art, as part of a wider effort to raise awareness of female anatomy. Their work touched on 3-d models, colour models, natural language processing to create plausible scientific names, and an insight into how bots interact with each other.

This month we were lucky to have Sinead and Francois back to talk about their next project - Clitocopia.

Sinead and Francois' slides are here [online].

The video is online at [skillsmatter].


Overarching both the Clitoris Vulgaris and this Clitocopia project is the ambition to address the poor understanding, myths and even taboos, around the clitoris. Sinead highlighted the issue powerfully by demonstrating that too many people, women and girls included, do not know what it looks like. Particularly striking was the fact that almost everyone can recognise and draw the male sexual organs - a penis is an iconic universal symbol - but very few can draw the female clitoris.

Sinead and Francois' approach to raise awareness, dissolve taboos and demystify the clitoris is to use technology in a fun, engaging way that draws people into the subject without them feeling dragged along or lectured to. Previously they created a twitter bot, Clitoscope, that automatically created a rendering of a clitoris, with mapped textures and plausible scientific names. It was a fun experience for social media users, and successful too.

This time they've extended their ambition, using virtual reality technologies that really are just emerging right now, to create a very immersive world for users to explore. The idea is to enjoy the world, but also to seek out a hidden clitoris to progress to the next level. They call this experience Clitocopia.

Virtual Reality User Experience

It's not an exaggeration to say that virtual reality is a new and still maturing technology. Not only does the tech not always work uniformly and consistently, but the best practices for designing worlds to be both intuitive and comfortable are not yet established. The field is still new.

That means that the thinking Sinead and Francois put into developing the user experience is probably new and valid. For example, they thought carefully about not throwing the user straight into the world without any warning or clues. Their intended users are firmly ordinary people - not just tech-savvy developers. Even how the user confirms that they have succeeded in finding a clitoris needed some careful thought. Any wrinkles in user experience are a lot less tolerable in virtual reality.

Technologies and Algorithmic Challenges

The development of Clitocopia couldn't have come much sooner - the technology for creating and experiencing virtual reality in a way that is both accessible to many, on a range of devices, using open web technologies is just maturing right now.

They chose A-Frame - a web technology that first started as a Mozilla project, but is now becoming a web standard across different vendors and devices. And it works with your choice of user technology - from using a mouse on a desktop, to moving your smartphone around you in the real world, to immersing yourself more fully with Google Cardboard or DayDream headsets, for example. Sinead and Francois brought some headsets along and that provided a lot of fun at the end of the session!

Sinead and Francois were very open and honest about sharing their journey - including the bumps and dead ends. This is more valuable than just admiring a finished product and gives a much more intimate understanding of the technologies and methods, and their limitations.

They wanted to use beautiful botanical illustrations of fruit and plants to create a landscape in which the clitoris was hidden. Fruit and vegetables are perfect for camouflaging human anatomy! It's not an accident that through the millennia, shapely and juicy plants have been euphemisms for human bits and pieces!

They wanted to automatically extract the fruit from the mainly Victorian botanical prints, but this was a challenge. Even though many image segmentation methods are good, they are still not general enough to cope with the variety in the source images. Here's Francois explaining the idea of thresholding to simplify and bring out the primary subject of an image.
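In code, the core of thresholding is very small. Here's a rough JavaScript sketch of the idea (my own illustration, not their actual pipeline), working on the pixel data of an HTML canvas:

// turn every pixel black or white depending on whether its brightness
// is above a chosen cut-off
function threshold(imageData, cutoff = 128) {
  const pixels = imageData.data;                    // [r, g, b, a, r, g, b, a, ...]
  for (let i = 0; i < pixels.length; i += 4) {
    const brightness = (pixels[i] + pixels[i + 1] + pixels[i + 2]) / 3;
    const value = brightness > cutoff ? 255 : 0;    // white if bright, black if dark
    pixels[i] = pixels[i + 1] = pixels[i + 2] = value;
  }
  return imageData;
}

// usage sketch:
// const ctx = canvas.getContext('2d');
// ctx.putImageData(threshold(ctx.getImageData(0, 0, canvas.width, canvas.height)), 0, 0);

A single global cut-off like this struggles with textured backgrounds and uneven lighting, which is part of why fully automatic approaches weren't good enough across the varied source prints.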

They used trained neural networks provided online by research groups, but again, the success rate wasn't high enough. In the end they cut out the fruit by hand.

Another key challenge was using the 2-dimensional images of fruit to create a world in the three-dimensional virtual world. Sinead and Francois explained how they tried different methods for projecting the images onto a 3-dimensional sphere, within which the user remains. That brings with it the issues of distortion - just like we have different map projections distorting the globe. A specific problem was sorting the images of fruit and vegetation so that they could, hopefully, be placed automatically. They again turned to methods from the world of machine learning, and used the t-SNE projection to group together similar plants with good success.

They put effort into placing the plants so that there are no gaps between them - but doing that automatically, in a way that is pleasing to the eye, remains unsolved.

Overall they succeeded in creating a three-dimensional virtual world that is engaging, visually interesting and pulls the user in to explore the world in an intuitive way.

A-Frame Example

The A-Frame technology they used is interesting. Not only is it open, and an increasingly adopted web standard, which works across many kinds of devices - from smartphones to tablets, through headsets and on big traditional displays - it has also been designed to be easy to use.

That last point is important - because too many technologies have suffered, and often failed, because they were too difficult to learn and use. A-Frame is very comfortable for web developers - it looks just like HTML.

Have a look at the following simple example from the wikipedia page:

     <html>
       <head>
         <title>Hello, World!</title>
         <script src=""></script>
       </head>
       <body>
         <a-scene>
           <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
           <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
           <a-cylinder position="1 0.75 -3" radius="0.5" height="1.5" color="#FFC65D"></a-cylinder>
           <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
           <a-sky color="#ECECEC"></a-sky>
         </a-scene>
       </body>
     </html>

You can see that in the head section we pull in a javascript library to give us access to A-Frame capability. We then create a scene in the body section with an <a-scene> tag. Everything in that section is intuitive and needs little explanation. You can see a box, a sphere, a cylinder, a plane and a sky being declared with properties like size, location and colour. It couldn't be easier. If you copied that code into a html file and pointed your browser at it, it would create a virtual world which you could navigate - using your mouse, or your smartphone held up and moved around you, or even your virtual reality headset, all without additional fuss.

Have a Go Yourself

Sinead and Francois recommended the glitch platform for learning about web virtual reality and A-Frame. Here's a link to get started [glitch], with a tutorial here [].

And here is a link to Clitocopia, also hosted on glitch - remember to use headphones if you don't want anyone around you to be upset by the botanical sounds!

What a great start to the Algorithmic Art year!

Thursday, 21 December 2017

A Gentle Intro to Processing - Coding for Artists

We had another introduction to Processing aimed at complete beginners, including artists who perhaps have never coded or aren't technical.

This was similar to the previous one we held in May, and was organised because members wanted another one. Understanding what members want is important to a successful group, and I hope we keep getting more feedback.

The slides for the event are always at:

The previous write-up was fairly comprehensive so this one will focus on the differences.

Artists New To Coding

I was really pleased that the group included artists who don't consider themselves technologists or coders. It's been easy to reach coders who want to explore creativity, but we always have to make an effort to reach artists who are new to coding.

I was also really pleased we have members from St Martins and Goldsmiths too!


Previously I introduced Processing using p5js - which is just a version made to work in a web browser. That avoids the need for any specific software - and is guaranteed to work on the widest range of devices from laptops to smartphones.

That still required us to mess about with source code text files, editors, using the browser to find the correct index.html file and so on...

Since that first tutorial, I found OpenProcessing, which takes away all of that complexity too. We simply code in the browser, and click an icon to see the results. Simple!

And in a classroom environment, where all kinds of things can go wrong, with all kinds of abilities and experience - OpenProcessing works really well, avoiding unnecessary obstacles.

I want to thank Neill B for suggesting I look at these kinds of solutions.


We started with the basic idea of a canvas, introducing the structure of all Processing code, and having a go hands-on at setting different background colours.

We then progressed to creating our first shapes - a rectangle, and then circles and ellipses too. This required us to understand the coordinate system Processing uses so that we can specify positions precisely and unambiguously.

The tutorial was structured to alternate new ideas with short "try it yourself" exercises, which seemed to work well.

We introduced the idea of repetition as a powerful artistic method for composing images, and looked at some early examples of computer art - from 1996!

Repetition is what computers are good at - they don't get bored, do it without error, and can do it extremely fast. It is not uncommon for examples of algorithmic art to repeat calculations thousands or millions of times.

We introduced the loop - processing's way of repeating code. I have to admit, Processing doesn't make it as easy as LOGO for example, and I think the Processing Foundation should fix this.

We used the loop to create a sequence of circles, and then expanded the idea to loops within loops - and used that to cover the canvas with circles.
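For anyone who wasn't there, a p5js sketch of that nested-loop idea might look something like this (written from memory, not the exact class code):

// loops within loops covering the canvas with circles
function setup() {
  createCanvas(800, 600);
  background(255);
  noFill();
  for (let x = 50; x < width; x += 100) {      // step across the canvas
    for (let y = 50; y < height; y += 100) {   // and down it
      ellipse(x, y, 80, 80);                   // a circle at every grid point
    }
  }
}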

That led naturally into the idea of colour, and how we can mix our own colours using Red, Green and Blue elements - as is common in the computer world - and in familiar tools like Photoshop, Gimp and Krita.

We were running short of time, as always, and briefly introduced the idea of functions - a way of packaging up useful code so it is re-usable by ourselves and others. We didn't get the chance to explore the really important idea of parameterisation and generalisation - transforming code from being very specific to being more generally applicable. The idea of abstraction is key for coders, mathematicians, and a powerful tool for artists.

We did get to see how functions can be used again and again with minimal extra typing, and also how improving or expanding a function benefits anyone using that function without extra effort.
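As a rough illustration of that point (again, not the exact class example), here's a small parameterised p5js function that can be reused with minimal extra typing:

function setup() {
  createCanvas(800, 600);
  background(255);
  flower(200, 300, 60);    // the same function reused with different
  flower(400, 300, 100);   // positions and sizes
  flower(600, 300, 40);
}

// draw eight petal-like circles arranged around a centre point
function flower(x, y, size) {
  noFill();
  for (let angle = 0; angle < TWO_PI; angle += TWO_PI / 8) {
    ellipse(x + cos(angle) * size / 2, y + sin(angle) * size / 2, size / 2, size / 2);
  }
}

Improve flower() once - add colour, say - and every call benefits without any extra effort.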

More Advanced Topic - Recursion

We did get a chance to talk through and see a more advanced but incredibly powerful idea - recursion.

The idea of recursion is a definition that is self-referential. In terms of code, it means functions that call themselves. This can take some time to get used to - but it is worth the effort.

Mastering recursion means being able to describe in very very simple terms patterns that are incredibly, and sometimes infinitely, intricate. And the code is extremely simple too.

Here's the simple example created in the class - with the code at

We looked at using randomness to shift some of the creative decisions from ourselves to the computer - a significant step - and also at using randomness to inject some organic realism into the patterns we create.

The following trees are created using recursion to define the overall branching structure, and randomness to point them in directions that are more realistic than perfect regularity.

The code for these trees is simple, using nothing more than what we covered in the class, except perhaps a little school-level trigonometry -
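In the meantime, here's a minimal p5js sketch of my own (not the class code) of the same recursive-tree idea:

// recursion for the branching, random() to nudge each branch
// away from perfect regularity
function setup() {
  createCanvas(600, 600);
  background(255);
  stroke(60);
  branch(width / 2, height, 120, -HALF_PI);    // start at the bottom, pointing up
}

function branch(x, y, length, angle) {
  if (length < 5) return;                      // stop when branches get tiny
  const x2 = x + cos(angle) * length;
  const y2 = y + sin(angle) * length;
  line(x, y, x2, y2);
  // two child branches, each shorter and bent by a slightly random amount
  branch(x2, y2, length * 0.7, angle - random(0.2, 0.5));
  branch(x2, y2, length * 0.7, angle + random(0.2, 0.5));
}

Each branch draws itself and then calls branch() twice more, with a shorter length and a randomly nudged angle - that's the whole trick.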


There were some suggestions from the class that a creative session would be fun - a hackathon! We held one before, and we'll probably do one again.

Inspired Art

Ogaday, who was in the class, was inspired to create some recursive forms himself. Have a look at his code online -

I love the recursive Sierpinski gasket made of circles.

and I really love these recursive circles!


The slides contain links to several resources including the code for the key exercises, as well as to useful tools like an RGB colour mixer.

At the end of the class I showed spirograph patterns created by following the orbits of points on a system of gears. The tutorial is here:

Friday, 24 November 2017

Music Theory, Genetic Algorithms, & Python

This month we had a meetup focussed on algorithms creating music. We also had a flash talk on 3D L-systems.

The slides are here: [link] - the embedded music playback needs Firefox or Chrome to work, Safari doesn't seem to work.

A video of the event is here: [link].

Computer Generated Music

The idea of using mathematics, logic and computer programs to generate art is a long established and noble challenge, one that forces us to think even more deeply about art itself: about the extent to which art can be random or conform to man-made rules, and whether there is something about art that can't be captured in such simplistic logic.

That philosophical debate aside, people have created amazing sophisticated art from apparently simple ideas and mathematical rules - algorithms.

Creating music from algorithms is particularly challenging as our senses are less forgiving than when we look at a visual creation. Randomly coloured and placed dots are much more tolerable than randomly chosen musical notes.

We were lucky to have Nicholas, a classically trained musician and a huge tech leader and community contributor, take us through his experiments creating music using a computing approach called genetic algorithms.

Genetic Algorithms

Like several approaches to computer intelligence, genetic algorithms are inspired by how nature itself works. Just as natural selection and mutation allow species to evolve to solve emerging challenges, genetic algorithms evolve solutions to better solve challenges we might set.

Genetic algorithms are particularly useful when we don't know how to directly solve a problem, but we know a bit about how good solutions should behave, enough to score them against each other by how well they solve a problem.

The basic idea is to start with a set, or population, of potential solution candidates. Without additional insight, these can be randomly created.

We then apply to this population of candidates the same processes that apply in nature:

  • crossover - as a result of 2 parents mating, mixing their characteristics, to create a child
  • mutation - random changes to an individual which can be beneficial or detrimental
  • selection - survival of the fittest to continue a new generation, and the removal of the unfittest

We're familiar with these processes happening to animals and plants in nature, with DNA being the solutions that are mixed and mutated, before being tested for survival.

The fact that DNA is a sequence of codes, which are interpreted as instructions to create a living organism is very helpful. We can easily apply these ideas to codes or programs which we create in a computer. In our case, our solutions, or individuals, are sequences which represent music. Again, that's not an unfamiliar idea - musicians use a code for writing music - music notation and letters.

Let's illustrate crossover - which happens when two parents mate and mix their codes.

You can see how the children get their code from their parents, but it isn't exactly the same. In effect, a section is swapped between the parents' code and the result becomes the children's code.

Mutation is a simpler idea, we simply pick a piece of the code and randomly change it.

These two processes create new individuals in the population, which we then sort by fitness, removing the unfittest. What's left is the next generation population.

Judging individuals for fitness is just the same as deciding how good their code is at solving the problem we've set. Those problems might be finding a route out of a maze, or deciding how a character in a computer game behaves, designing the right shape for an antenna, ... or creating pleasing music.

This process is repeated many times until we find a good solution, or decide we've spent enough time evolving and want to pick the best we have.
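To make that loop concrete, here's a very rough generic sketch in JavaScript (my own illustration with toy helpers - it isn't Nicholas's code, which is in Python and linked further below):

// evolve a population of note sequences towards a better fitness score
function evolve(populationSize, generations, fitness, randomIndividual) {
  let population = Array.from({ length: populationSize }, randomIndividual);
  for (let g = 0; g < generations; g++) {
    const children = [];
    for (let i = 0; i < populationSize; i++) {
      children.push(mutate(crossover(pick(population), pick(population))));
    }
    // selection: keep the fittest half of the combined parents and children
    population = population.concat(children)
                           .sort((x, y) => fitness(y) - fitness(x))
                           .slice(0, populationSize);
  }
  return population[0];    // the best individual found
}

const pick = pop => pop[Math.floor(Math.random() * pop.length)];

function crossover(a, b) {                        // swap a section between parents
  const cut = Math.floor(Math.random() * a.length);
  return a.slice(0, cut).concat(b.slice(cut));
}

function mutate(individual, rate = 0.1) {         // random small pitch changes
  return individual.map(note =>
    Math.random() < rate ? note + (Math.random() < 0.5 ? -1 : 1) : note);
}

// usage sketch: evolve 8-note sequences towards pitch 60 (middle C in MIDI numbers)
const best = evolve(30, 200,
  seq => -seq.reduce((sum, n) => sum + Math.abs(n - 60), 0),
  () => Array.from({ length: 8 }, () => 50 + Math.floor(Math.random() * 20)));
console.log(best);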

Creating Counterpoint

Nicholas set out to use this evolutionary computing approach to see if a computer could create music that was musically correct and pleasing. That's not an easy task for a human, so it's a particularly worthy and interesting problem to see if we can get a computer to do it.

The particular task that Nicholas focussed on was composing an accompanying counterpoint to a base musical sequence. The following shows the basic idea.

The red points are musical notes which vary in pitch and appear at different times. The green points are additional notes which attempt to create a pleasing additional layer of sound that works with the original series of sounds.

A counterpoint is a contrasting yet complementary series of notes. See Nicholas's slide 4 to hear an example.

Nicholas explained several ways in which a counterpoint might be composed. His visual approach to showing the transformations demonstrated how familiar the ideas really are.


But these transformations themselves aren't enough to create a pleasing composition for music of any sophistication. So how do we create counterpoint?

Well, in 1725 a chap called Johann Joseph Fux wrote (in Latin) a treatise on counterpoint.

He established five levels of sophistication for creating counterpoint, called species.

A very simplistic indication of what these five levels are:

  • First species - a note is added for each note in the original base sequence. This means the counterpoint can't vary rhythmically from the base because the new notes appear at the same time as the original ones. However, there is a set of other constraints, like avoiding notes which are ten steps apart in pitch. See here for a more comprehensive list.
  • Second species - two additional notes for each original note. There are yet more constraints, such as accented (emphasised) beats not being dissonant.
  • Third species - 3 or 4 notes for each original note. There are further guidelines on what is permitted and which combinations are to be avoided.
  • Fourth species - introduces sustained notes, which are held for longer than the time a note would take in simpler compositions.
  • Fifth species - a combination of the first four species, and the most difficult to get right.

Nicholas's aim was to encode these constraints on how counterpoint notes are chosen in terms of pitch and temporal location as a fitness function for a genetic algorithm.

In other words, creating a population of candidate counterpoints, using cross-over and mutation to create new variations, and then using a fitness function to pick those that best match the rules developed by Fux.
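As a flavour of what such a fitness function might look like, here's a toy first-species scorer of my own in JavaScript (a sketch only - Nicholas's real rules are far more complete): it rewards consonant intervals against the base melody and penalises dissonance and parallel fifths or octaves.

// consonant intervals in semitones: unison, 3rds, perfect 5th, 6ths, octave
const CONSONANT = new Set([0, 3, 4, 7, 8, 9, 12]);

function fitness(base, counterpoint) {
  let score = 0;
  for (let i = 0; i < base.length; i++) {
    const interval = Math.abs(counterpoint[i] - base[i]) % 12;
    score += CONSONANT.has(interval) ? 1 : -2;       // dissonance costs more than consonance earns
    if (i > 0) {
      const prev = Math.abs(counterpoint[i - 1] - base[i - 1]) % 12;
      if (prev === interval && (interval === 7 || interval === 0)) {
        score -= 3;                                  // parallel fifths and octaves are forbidden
      }
    }
  }
  return score;
}

// usage sketch, with pitches given as MIDI note numbers
console.log(fitness([60, 62, 64, 62, 60], [67, 65, 67, 65, 72]));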

His code is online at GitHub - [link].


How do we test how well the evolutionary approach develops valid and pleasing counterpoints? The best test is a human audience, and so we had a series of very fun and engaging Turing tests to see if we, mere humans, could tell whether a piece of music was composed by a human or by Python!

For all the examples, broadly the audience was split 50-50 on judging the computer vs the human composer.

In other words, the computer generated music was indistinguishable from that composed by a human!

Have a go yourself, the tests start on slide 21.

Algorithmic Art?

Many will debate whether music created by a computer is valid art. And many others will challenge the attempt to reduce human creativity and sense of beauty (and horror) into simple, even simplistic, cold hard rules.

But what is clear, is that the creations of algorithms are very close to what we humans can produce. And that is a testament to our ability to come up with algorithms. That itself is a worthy science and art.

And the endeavour to develop these automated ways of creating convincing art forces us to explore, understand and appreciate art created by traditional means even more deeply - which can only be a good thing.

Many were looking forward to this event, and afterwards it was clear everyone was buzzing with excitement, possibilities and some were even inspired to continue their own work on algorithmically generated music.

3-D L-Systems

I was really pleased that a regular member, James, offered to do a short talk to share his work on an app which creates three-dimensional L-System forms.

He was inspired by a previous session on Lindenmayer systems, which create patterns from the successive transformation of a sequence of instructions according to a set of rules - so there are parallels with genetic algorithms which also manipulate a sequence of instructions.

You can find out more at, and see examples on his twitter @complexview.


  • A simple tutorial on genetic algorithms with good coverage - [link].
  • NASA's use of genetic algorithms to optimise an antenna - [pdf].
  • The text book I was set when I studied genetic algorithms - [link].

Saturday, 21 October 2017

An Introduction to WebGL

This month we had Carl, a regular member and graphics professional, give us an introduction to WebGL.

Carl's event page is here: [link], slides directly [here].

The video recording of the meetup is here: [link].

What is WebGL?

WebGL brings two worlds together - the web and GPU accelerated graphics.

The web is one of the most open and successful technology platforms on the planet. The number of people using the web every day is in the billions. And it all works (almost perfectly) with any device we're using - smartphones, tablets, laptops, tiny sensors, big cloud servers - and with any software - browsers like Firefox, Chrome and Safari, as well as a huge ecosystem of web software libraries.

A key reason it all just works is that the technical standards by which the web works are open, and largely driven by the community. They are not secret, proprietary, or driven by a small number of powerful corporations. The open source and open standards movements have become very important in today's digital world.

On the other hand, the history of GPU accelerated graphics isn't such a fairy tale. Early computers found driving a graphics display a very intensive task. Moving this load away from the main computer's processor to specialised graphics processors was an obvious step. For several years, these specialist graphics remained proprietary, with little interoperability. Then standards emerged allowing programmers to code once, and expect their programs to run on different computers with different graphics hardware acceleration. A non-profit industry group called the Khronos Group looks after a very popular API called OpenGL, the leading API for hardware accelerated 3D graphics. Many vendors of GPU hardware have implemented support for OpenGL for many years.

WebGL can be thought of as a smaller version of OpenGL designed to be used as a web technology, through Javascript, and viewable on any modern web browser - on a smartphone or a laptop.

Big Picture

It helps to understand the several technology bits that work together.

You can see that the web browser contains a javascript engine. This is the same engine that runs normal javascript associated with a web page. The WebGL API is a javascript API, and so should not be too alien for web developers to pick up.

That API essentially feeds data and programming language instructions to the GLSL compiler. Let's explain this a bit more. The fast hardware that is the GPU doesn't talk javascript. It does however understand a language called the GL Shader Language (GLSL). GLSL is similar in many ways to C/C++ and is compiled before the GPU can run it. The WebGL API is simply passing the text of our GLSL programs to the OpenGL drivers to compile and run.

You can explore the reference for the WebGL javascript API here.

Lingo: Fragments, Vertices and Shaders

There's lots of unfamiliar language in the world of coding GPUs, and that can be a barrier to newcomers. Carl introduced the most important concepts.

If you remember that a GPU is designed to accelerate, potentially complex and detailed, 3-dimensional graphics, then it makes sense that the processing must be more constrained than the kind of things we can do on a normal general purpose CPU. Not everything we can do with a CPU can easily be done with a GPU, but what a GPU can do, it can do very fast, and to lots in parallel.

Viewed positively, these constraints can be seen as a pipeline of how information about a scene is processed into images on a display. Here's a simplified WebGL pipeline.

Let's talk that pipeline through:

  1. At the start we have data, numbers, which describe simple shapes. Complex objects, like trees, faces, buildings, are made of these simple shapes, or primitives as they're called. A triangle is a very simple primitive, described by three corners. That's all that's required to define a triangle. Other primitives include points and lines.
  2. A program that runs on the GPU transforms this data into the corners of a shape. A fancy word for corners is vertices, and just one is a vertex. That small program can run very fast on the GPU, and in fact, the GPU can transform lots of data in parallel (lots at the same time). That program is called a vertex shader. Confusing name, but there you are. The vertex shader can do things to the corners, like moving them around in space, in effect moving the shape.
  3. The next step is to render that shape, that triangle for example, to a display made of pixels. That process is called rasterisation. This is when the face of the triangle is coloured in. It could be a red triangle, or be a colour gradient, a texture or something else. Again, a small program on the GPU does this very fast, and is highly parallel. That small program is called a fragment shader. Again, not the clearest of names, but there you go.

That pipeline makes sense, and everything we do must conform to that pipeline, if we want fast acceleration of graphics by the GPU. In short,
The vertex shader works on the shape corners,  
the fragment shader works on the pixels.

WebGL Simple Example

Carl explained and illustrated a simple WebGL program with vertex and fragment shaders, with data passed through javascript.

What I'll do here is try to use that knowledge from Carl's talk to create my own first WebGL code. It's a good way to see the basics of how we structure our code, and also see the basics working, learning by doing.

I'm following the 2D coloured triangle example at WebGL Academy, a site recommended by Carl.

The first thing we need is a web page element to draw on. A HTML canvas element makes sense.

<canvas id='my_canvas'></canvas>

That creates a canvas with identifier 'my_canvas'.  Now we work entirely in javascript.

The main thing we need to do is create a canvas context. Just like many technology frameworks, a context is a way of creating a bubble for your scene, separate and safe from other bubbles.

var html_canvas = document.getElementById('my_canvas');
var GL = html_canvas.getContext('webgl');

The html_canvas variable is just the HTML canvas we created, obtained by the identifier we gave it, my_canvas. The variable GL is the "webgl" context of the canvas, an object that supports WebGL. For IE and Edge we need the older "experimental-webgl".

Now we need to set up the shaders, the small GPU programs.

First let's set up the vertex shader, the code that operates on all the corners of our objects. Our code is very simple:

attribute vec2 position;
void main(void) {
    gl_Position = vec4(position, 0.0, 1.0);
}

Let's explain it. Remember this isn't javascript, this is the GLSL language which is similar to C.
  • The first line creates a variable called position. It is of type vec2, which is simply a data structure for 2 numbers, so 2D coordinates. There is also something else, attribute, which is a type qualifier that tells the GLSL compiler that this variable is pulled into the GPU from a data buffer. That's how we'll pass vertex coordinate data to the shader. The uniform qualifier, by contrast, is for values that stay the same for every vertex; here we have per-vertex data, which needs attribute.
  • The next line declares a new function main(), which is the main entry point into the GPU code. The name main() is used in many languages to identify the first entry point into executing code.
  • The content of that main() function currently has only one instruction. It sets the gl_Position variable to that position variable, but expands it from 2 numbers to 4, by adding a 0.0 and a 1.0. The 0.0 is the coordinate along the third dimension, so a measure of depth. The 1.0 is, simplistically, a scaling factor. The gl_Position is a special variable, which is used by the fragment shader later. So this is an opportunity to transform (translate, rotate, other) the positions of the vertices, but we haven't here, we've kept them as they are.

Now let's look at the fragment shader, which takes the output of the vertex shader.

precision mediump float;
void main(void) {
    gl_FragColor = vec4(0.,0.,0., 1.);
}

This is simple code again:

  • The first line sets the precision to be used for floating point numbers in the fragment shader. Medium precision mediump is faster than high precision and good enough for textures and colours.
  • A main() function is declared as an entry into the executed code.
  • This main() does only one thing, it sets the gl_FragColor special variable to a four number vector vec4. These 4 numbers describe a colour using RGB and an alpha channel (translucency), so (0, 0, 0, 1) is black. 

The fragment shader is called for every pixel (fragment) inside the triangle described by the vertices that emerge from the vertex shader, which itself gets them from the data we provide through javascript.

How do we compile this GLSL code? The steps are simple but kinda boring boilerplate code. The following shows the exact same approach needed for both shaders.

var shader_vertex = GL.createShader(GL.VERTEX_SHADER);
GL.shaderSource(shader_vertex, shader_vertex_source);
GL.compileShader(shader_vertex);

var shader_fragment = GL.createShader(GL.FRAGMENT_SHADER);
GL.shaderSource(shader_fragment, shader_fragment_source);
GL.compileShader(shader_fragment);

First a shader is created from the context, of the type required (vertex, fragment). Then the source code is associated with it, then it is compiled, with the result remaining in the shader. That's a lot of boring code, but that's all that's happening.

We then need to create a webGL program and attach these compiled shaders. Again, boilerplate code.

var shader_program = GL.createProgram();
GL.attachShader(shader_program, shader_vertex);
GL.attachShader(shader_program, shader_fragment);
GL.linkProgram(shader_program);
GL.useProgram(shader_program);

Almost there. We now link the variables in javascript to those in the shaders, so we can pass data through the connection. You can see the javascript js_position associated with the GLSL position variable.

var js_position = GL.getAttribLocation(shader_program, "position");
GL.enableVertexAttribArray(js_position);

We're done with shaders now. Let's look at creating the data that describes the triangle so we can pass it through to the webGL pipeline.

var triangle_vertex_data_js = [-1, -1, 1, -1, 0, 1];
var triangle_vertex_data_gl = GL.createBuffer();
GL.bindBuffer(GL.ARRAY_BUFFER, triangle_vertex_data_gl);
GL.bufferData(GL.ARRAY_BUFFER, new Float32Array(triangle_vertex_data_js), GL.STATIC_DRAW);

That looks complicated, but again it's boilerplate code. What's happening is that a javascript array is created with a list of the corner coordinates. The first corner is at (-1, -1). Next a GL buffer is created and bound as the current array buffer. Then the data is copied into it after being converted to Float32 numbers.

We now have to tell WebGL which of these points, and in which order, make a face. In this simple example, the first, second and third corners make a triangle face. The code mirrors the previous one: create the javascript array of data, create a GL buffer, bind it, and fill it with data.

var triangle_faces_js = [0, 1, 2];
var triangle_faces_gl = GL.createBuffer();
GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, triangle_faces_gl);
GL.bufferData(GL.ELEMENT_ARRAY_BUFFER, new Uint16Array(triangle_faces_js), GL.STATIC_DRAW);

Now all that's left is to set up the scene and draw it.

GL.clearColor(0.0, 0.0, 0.0, 0.0);
var do_drawing = function() {
    GL.viewport(0.0, 0.0, html_canvas.width, html_canvas.height);
    GL.clear(GL.COLOR_BUFFER_BIT);

    GL.bindBuffer(GL.ARRAY_BUFFER, triangle_vertex_data_gl);
    GL.vertexAttribPointer(js_position, 2, GL.FLOAT, false, 4*2, 0);
    GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, triangle_faces_gl);
    GL.drawElements(GL.TRIANGLES, 3, GL.UNSIGNED_SHORT, 0);

    GL.flush();
    window.requestAnimationFrame(do_drawing);
};

do_drawing();


Let's explain the key points in the code:

  • The first line sets the colour used when the canvas is cleared. We set it to colourless and transparent.
  • A new function is created, called do_drawing(), which does the actual drawing. It is called many times, to enable animation, if that is desired. The window.requestAnimationFrame() is how modern browsers allow custom code to be called whenever the browser is ready to draw a new animation frame. We're not actually doing animation here because every frame is the same drawing.
  • Inside the do_drawing() function, we set a viewport to the size of the html canvas and clear it, then bind the triangle vertex data buffer and point the position attribute at it, and do the same for the faces buffer before asking WebGL to draw the triangle.
  • The GL.flush() causes all queued commands to be executed, in case they are waiting cached somewhere in the network or GPU driver - which can happen. It's a bit like writing data to disk, it doesn't always get to disk, until forced to by a flush or sync. This command queuing is good for performance.
The full code for this WebGL example is always on GitHub at:

The result of all this is a simple black triangle on a pink canvas, scaled to the size of the available canvas. Here's a screenshot showing the browser window decoration too:

Finally, a WebGL rendered object, from data that was sent through the GPU accelerated pipeline!

Let's add some colour, not because the black triangle is boring, but to illustrate how GLSL on the GPU can do some of the work.

The first thing we need to do is declare new variables for colour in the vertex and fragment shaders. The changes to the vertex shader are:

attribute vec2 position;
attribute vec3 colour;
varying vec3 vColour;
void main(void) {
    gl_Position = vec4(position, 0.0, 1.0);
    vColour = colour;
}

In the vertex shader we set an attribute colour to allow data to be passed from javascript. We also set a varying vColour, which is a variable passed from the vertex shader to the fragment shader (and interpolated across the triangle on the way), with no direct link to anything outside GLSL such as javascript data. For each triangle corner, the vertex shader sets the internal vColour to the colour which is passed from javascript as data.

The fragment shader changes are:

precision mediump float;
varying vec3 vColour;
void main(void) {
    gl_FragColor = vec4(vColour, 1.);
}

This again declares vColour, matching the varying in the vertex shader. The fragment shader simply sets the colour of the pixel to vColour, not black as it did before.

All we need to do now, is actually create the javascript data and pass it through. Here are the changes.

var triangle_vertex_data_js = [
   -1, -1,
   0, 0, 1,
   1, -1,
   1, 1, 0,
   0, 1,
   1, 0, 0];

The triangle data now contains rgb colour values, not just the coordinates of the corners.

var do_drawing = function() {
    GL.viewport(0.0, 0.0, html_canvas.width, html_canvas.height);
    GL.clear(GL.COLOR_BUFFER_BIT);

    GL.bindBuffer(GL.ARRAY_BUFFER, triangle_vertex_data_gl);
    GL.vertexAttribPointer(js_position, 2, GL.FLOAT, false, 4*(2+3), 0);
    GL.vertexAttribPointer(js_colour, 3, GL.FLOAT, false, 4*(2+3), 2*4);
    GL.bindBuffer(GL.ELEMENT_ARRAY_BUFFER, triangle_faces_gl);
    GL.drawElements(GL.TRIANGLES, 3, GL.UNSIGNED_SHORT, 0);

    GL.flush();
    window.requestAnimationFrame(do_drawing);
};


The do_drawing() function now has two changes, because that javascript data structure has changed: the stride and offset numbers passed to vertexAttribPointer() tell WebGL where in the interleaved array the js_position and js_colour data are to be found (js_colour itself is obtained with GL.getAttribLocation and enabled, just as js_position was). More details here.

The results are rather nice:

We only specified the colours of the corners, so why is the inside of the triangle coloured at all? More to the point, why is it shaded using smooth colour transitions? The reason is that WebGL by default interpolates varying values, like our vColour, across the triangle between the vertices.

Easier JavaScript Frameworks

The amount of code and complexity needed just to draw a simple coloured triangle is huge. That's a problem for many reasons - the barriers to entry are high, the code is error prone, and even seasoned coders will just prefer not to use WebGL.

Carl explained that today, there exist several abstraction layers over WebGL to reduce the code and complexity for the most common rendering tasks. He lists three.js and babylon.js as examples. Both of their websites link to interesting examples.

The babylon.js playground and tutorial looks really well thought out:

Editors and Tools

We've seen above that writing GLSL shader code as a javascript string and then juggling that to compile, link and run the code is very clunky. Carl recommended online editors which make developing shaders much easier by handling all that machinery behind the scenes, leaving you to the creative task of creating shaders.

He used the editor at The Book of Shaders as a good example:

Despite excellent compatibility across many different browsers and devices, there can be some small differences. The well used Can I Use website is also great for comparing WebGL capabilities.

Carl also listed web tools which show the capabilities of your browser, with a lot of detail, for example showing how many vertices can be created, or the highest level of floating point precision.

Skull Model

Carl demonstrated that you could import 3d objects created elsewhere, and use javascript libraries to convert those models into vertex data for WebGL. He also demonstrated techniques for animating a skull model by using the vertex and fragment shaders to do things like transform the skull into a sphere, or to apply a time-varying texture.

More Resources

The following are hand selected resources and tutorials which I think are good for beginners:

Thursday, 21 September 2017

Art Hackathon - "Future Pangs"

Yesterday we held our first mini art hackathon. It was an experiment to see if we liked the format, and to learn how we might do it better.

Future Pangs

Normally we have a talk or a tutorial, led by a speaker or teacher. Some members suggested that we should turn this upside down and have a less passive meetup - where the main thing we're doing is creating art, not listening to or following someone else.

Hackathons are common in some communities - where people get together to work on something, individually or in groups. They're not so formal, but they're very productive and enjoyable.

I had some trepidation about this as I didn't have that much experience organising hackathons. The idea sounds lovely and idyllic - but I imagined all sorts of things going wrong - people being stuck with software installs, being uninspired to create any art, not getting on with their team or partners, ...

Especially for creative events or processes, getting the constraints right is important. The most powerful art is a result of constraints, often self imposed. Constraints like colour palette, media, technique or narrative.

For this hackathon, we had the following constraints:

  • Our art must be created at OpenProcessing, which uses the web version of Processing called p5js. We did a beginner's tutorial previously, with tutorial slides and video on the meetup blog. This constraint ensures there is enough freedom, but also enough in common across the hackathon. OpenProcessing also makes it trivially easy to share our work.
  • We must allow our code and work to be publicly viewable and freely copyable and reusable (CC-SA).
  • We have 60 minutes to create the art from scratch.
  • The work must be inspired by the theme 'Future Pangs', interpreted as we wish.

I was asked where the theme Future Pangs came from. It was actually a misremembered phrase. When I was young, I used to read the 2000AD comic, and a common phrase was Future Shocks. I misremembered it as Future Pangs. That worked out even better as there isn't a direct semantic connection between the two words, which encourages us to more freely interpret the theme.

The Session Itself

There was a mix of coding expertise, and a mix of artists and technologists - but you wouldn't have thought it, given how everyone dived straight into working.

I was really pleased that a few of the more regular experts were happy to help others. This creates a nice supportive vibe.

Myself, I was most pleased that artists and art-students had come along to try using technology to create art.

Sharing & Learning

Participants were encouraged to show their work at the end, and talk through their interpretation of the theme, and how they created their work.

Sharing our challenges and difficulties is also a great way to learn together, as well as help each other as a group. One of the teams used noise, rather than purely random numbers as part of their work. As a group, we discussed the usefulness of Perlin noise over random numbers, something we also touched on when we covered ray-tracing previously.
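Here's a tiny p5js sketch of my own showing the difference: random() jumps about from point to point, while Perlin noise() drifts smoothly, which often looks more organic.

function setup() {
  createCanvas(600, 300);
  background(255);
  noFill();
  beginShape();                                   // jagged line from random()
  for (let x = 0; x < width; x += 5) {
    vertex(x, 75 + random(-50, 50));
  }
  endShape();
  beginShape();                                   // smooth line from Perlin noise()
  for (let x = 0; x < width; x += 5) {
    vertex(x, 225 + (noise(x * 0.01) - 0.5) * 100);
  }
  endShape();
}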


Show n Tell

The following is a selection of some of the art created in the meetup, and presented at the end with a short talk about the artist's interpretation of Future Pangs and how they went about creating their work.

These works have an element of animation or evolution, or even interactivity, so click the images to open the work in a new tab.

Tom has created a work that makes key use of recursive forms which grow and continue to emerge. It's a work that captures you and keeps you engaged. For there is a strong sense of mechanical regularity, but also birth and rebirth of these future forms.

Jun has created an interactive piece. Clicking on the canvas moves a circle which grows ever larger as it consumes the smaller living circles. It suggests the future will be dominated by an emergent aggressive entity!

Peter had partnered with a newcomer to use mathematical functions to model the fluctuating behaviour of bees. For me this strongly suggests the diminishing fortunes of species that are essential to the fragile ecosystem today.

Simon was inspired by the work of another artist (example work). His work makes strong use of objects-within-objects, challenging our sense of reality and dream, the difference between the overseer and the observed, boundaries that are being made fuzzy as we live increasingly digital lives.

James used open data from quandl to visualise our economic health through history up to today, and then used models to predict the future - all of which foretell a doomed future!

Matt created a very dynamic work which makes very good use of movement, plotting lines whose colour is taken progressively from a colour palette. It gives the impression of velocity and diversity, but also of a cycle of renewal and supersession.

Neil has used simple elements to create a powerful work. By using a carefully chosen colour palette, and columns of shapes - rectangles and ellipses - the work grows and evolves, in a busy congested way, evoking busy overcrowded cities like Hong Kong, New York and London. Despite the busyness and congestion, the pace and colour give the impression of optimism and a future happily occupied.

Raihan explained how he was inspired by science fiction films, futuristic and high-tech, and yet with scenes and equipment made of very old low-tech. Green cathode-ray tube displays, beeping panels and chunky keyboards! The Matrix, Blade Runner and Alien are just some classics that make rich use of this techno-dystopia.

Carl created a sublime piece evoking the gentle falling of rain onto a surface, where the drops ripple and spread. The colour scheme and pace of growth and fading, to me, suggest the growth and decline of diverse communities in a global ecosystem. Viewing this work for a few moments, shows a nice balance between large gentle pastel pieces, and the odd more starkly coloured circle, adding just the right amount of spice to the mix!

Borg Druid created a few works, and this one is a very interesting take on colour palettes you get from paint manufacturers. Instead of colour names, we have more emotional terms, which really do match the colours. And all those themes predict our future world ... yikes!

These works are so good that the idea of an annual exhibition makes a lot of sense!

Success and Lessons Learned

Overall the group seemed to like the atmosphere and the chance to use technology and code to create something just for fun. Projecting a nice video of nature, accompanied by gentle piano music seemed to help provide just enough opportunity for escape and inspiration from the corporate meeting room, without being overly intrusive.

I was really pleased that the more experienced members were helping others, and being asked for help too.

What really surprised me was the speed and ease with which the group dived into working, with almost no blocking issues. One artist, who doesn't have a huge experience with coding, was successfully creating interactive 3-dimensional scenes!

A discussion of what could be done better in future raised some good points:

  • Repeating the gentle introduction to Processing and p5js would be really valued. So we'll try to schedule this for December or January. 
  • Some people want to work on their own, some with others. Some have lots of knowledge and experience, others less so. Some know what they want to do, others need inspiration. Next time, we should try to organise the groups so people can join the right team if they want to.
  • The group felt 1 hour was too short. This was actually extended from the original 40 minutes! We'll try to have an extended session next time.

Thanks everyone for taking part, making it a fun success, and helping us learn how to do an ever better art hackathon next time!