How do we make artistic machines?
Currently, Machine Learning (ML) is employed to extend human capabilities into realms where access to extensive data provides opportunities for associations previously unexploited by human artists. These applications take the human point of view first and merely expand human abilities: generating novel musical combinations from a palette of tones, or analyzing image content to select style transformations. While such applications rely on ML as a data-mining agent over unexplored domains, they never exceed human expectations of what they do.
A different approach is to make ML agents part of a human ecosystem of creative works, exploiting our assumptions about what machines with humanoid behaviors can or should do. Here, we use Artificial Intelligence (AI) in unexpected ways in the everyday objects we interact with, building smart objects that do not follow the rules we expect.
Applying ML to unexpected forms of interaction subverts what we think machines are capable of, creating situations where AI goes beyond human expectations of what machine intelligence should mean to us, making these objects, oddly, Artistically Intelligent.
Sculpture has the connotation of being inactive, because sculptures usually sit inside a museum. They are also considered serious and highbrow because of their association with classical works of art and intellectualism. I chose sculpture as my domain of experimentation because I wanted to buck both stereotypes, creating pieces that interact instead of sitting sedentary, and that exhibit quirky, unexpected behavior instead of being profound and unexciting.
First, I made a hand sculpture that rotates either left or right using an embedded servo motor. The gesture is meant to convey the act of "looking" by a sculpture, and it prompts the audience to mirror the gesture in response. When someone comes close, the sculpture detects the presence with an ultrasonic sensor and turns to face right or left at random. However, because the left and right sides lie at different distances from the sensor, the reading itself reveals whether the visitor is actually to the sculpture's left or right, providing data to train a model on the sequence of human hand movements. Using this data, the sculpture learns to predict whether the next hand motion will be to its left or right, and moves there in anticipation. The predictions become more and more accurate over time as data accumulates.
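The side-classification and prediction logic can be sketched as follows. This is an illustrative reconstruction, not the actual build: the distance threshold, the use of a first-order frequency model, and all names here are assumptions.

```python
import random
from collections import defaultdict

# Assumed: because the sensor sits asymmetrically, readings from the two
# sides fall into different distance ranges, so a single threshold can
# recover which side the visitor's hand is on.
SIDE_THRESHOLD_CM = 40

def classify_side(distance_cm):
    """Map an ultrasonic distance reading to 'L' or 'R'."""
    return "R" if distance_cm < SIDE_THRESHOLD_CM else "L"

class SidePredictor:
    """First-order model: count which side tends to follow which."""

    def __init__(self):
        self.counts = defaultdict(lambda: {"L": 0, "R": 0})
        self.prev = None

    def observe(self, side):
        if self.prev is not None:
            self.counts[self.prev][side] += 1
        self.prev = side

    def predict_next(self):
        if self.prev is None:
            return random.choice("LR")  # no data yet: turn randomly
        c = self.counts[self.prev]
        if c["L"] == c["R"]:
            return random.choice("LR")  # tie: turn randomly
        return "L" if c["L"] > c["R"] else "R"

# A visitor who alternates strictly is soon anticipated.
p = SidePredictor()
for reading in [30, 60, 25, 70, 35, 65]:  # R, L, R, L, R, L
    p.observe(classify_side(reading))
print(p.predict_next())  # after ...R, L the model expects R next
```

On the actual piece this loop would run continuously, with the servo turning toward the predicted side before the visitor's hand arrives.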
Next, I wanted to take the idea of unsuspected sculptural agency one step further by making a talking sculpture that appears to have some capacity for creative speech production. I used the ML in the Google Cloud Speech API, running on a Raspberry Pi, as a starting point to create my own style of machine speech interface. The audience is prompted to press a button and say something involving or about "sculpture." A computerized voice replies from the sculpture, a plaster cast of a hand making the Star Trek Vulcan "live long and prosper" salute.
Using custom routines built on the Google Speech API's ML word recognition, the statue answers back not with mere repetitions of what the user says, but as if it had agency. For example, whenever the user says "sculpture," it replies with a different noun that at first appears to reference the user. As the interaction proceeds, however, pronouns and verbs are also changed, and the audience comes to notice that the sculpture is using the earlier noun to refer to itself, not to the user. The statue appears to make a creative transformation in the audience's view, not because it has changed its interaction style, but because the audience discovers what is algorithmically already there. In tests, I only tell participants to say anything they want that references "sculpture," yet users learn more and more about the statue's rules of engagement. One user said she found the statue subservient and complimentary at first, but over the course of the interaction it became more "sassy." The rules never changed, only the ML agent's potential to surprise (and annoy) the audience.
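The substitution rules can be sketched as a fixed table applied to the transcript returned by the speech recognizer. The actual Google Cloud Speech call is omitted here, and the word tables and turn threshold are illustrative assumptions, not the piece's real vocabulary.

```python
# Assumed word tables; the real routines run on the transcript returned
# by the Google Cloud Speech API.
NOUN_SWAPS = {"sculpture": "statue", "statue": "sculpture"}
PRONOUN_SWAPS = {"you": "i", "your": "my", "i": "you", "my": "your"}

def reply(transcript, turn):
    """Echo the user's phrase back, swapping nouns from the first turn
    and pronouns only once the interaction is under way. The rules are
    fixed; what shifts is the audience's reading of them."""
    words = []
    for w in transcript.lower().split():
        if w in NOUN_SWAPS:
            words.append(NOUN_SWAPS[w])
        elif turn > 2 and w in PRONOUN_SWAPS:
            words.append(PRONOUN_SWAPS[w])
        else:
            words.append(w)
    return " ".join(words)

print(reply("i like your sculpture", 1))  # -> "i like your statue"
print(reply("i like your sculpture", 4))  # -> "you like my statue"
```

Early on the reply reads as a compliment returned to the user; once pronouns start flipping, the same noun swap reads as the statue claiming the word for itself.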
As a final exercise, I wanted to push the idea of creative production beyond merely surprising interactions. Although inspired by the image-association ML algorithms used by Google and Pikazo, I wanted to situate the piece so that the sculpture is the agent behind the "deep dreaming" undertaken by ML. Unlike previous efforts, I wanted a physical interface that appears to produce the creative output itself, so that it is not a computer using user input to create modified dreams, but the sculpture making content based on who the audience is and where they are. To evoke a perception of creativity, I decided to let the machine take on the persona of a human face. First I made the face mold with Oomoo silicone and cast a face in plaster.
The plaster face was not perfect, so after fixing imperfections on the nose and eyebrows and extending the forehead, I made an alginate negative of it. Next, I embedded the LED matrix in a positive silicone mold based on the alginate. I built a head from Styrofoam and Mold Star 20T clear silicone with wood shavings embedded, mounting the LED matrix between the two layers. The wood-grain-embedded silicone retains the form of a classical statue, yet forms a mesh that hides within it the ability to express itself. Because the LED matrix sits just beneath the silicone layer, it appears to respond to human touch.
Using an Arduino to control the matrix, I created custom animations that evoke visual creation from the mouth of the statue, shown only when the user's face is detected by an attached camera. The animations depend on where the human face is: using computer vision, the system detects the audience's face and illuminates only that side of the matrix. I wanted to make a connection between human speech and machine data processing. Whereas we express our creativity by making speeches, writing novels, and creating worlds through language, the machine analog is not human language as we know it, but a machine code we can only visualize across a layer that blurs communication. The silicone layer masks the lit LEDs, so the effect is a filtered view of what machines would do creatively if they were creative. Just as we, as 3D beings, cannot contemplate life in 4D, we do not know machine creative processing or the ways it might express itself in forms different from human conception. As humans we can only hope to visualize the data machines produce across a layer of uncertainty.
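The face-to-matrix mapping can be sketched as a pure function. Face detection itself (for example, an OpenCV cascade on the camera feed) is assumed and omitted; the frame width, matrix width, and mirroring convention below are illustrative assumptions, not the installation's actual values.

```python
# Assumed dimensions for the sketch.
FRAME_WIDTH = 320   # camera frame width in pixels
MATRIX_COLS = 32    # LED matrix width in columns

def lit_columns(face_x, face_w):
    """Return the range of matrix columns to illuminate for a detected
    face whose bounding box starts at face_x with width face_w.

    The camera faces the viewer, so its image is mirrored relative to
    the statue: a face on the camera's left lights the statue's right."""
    center = face_x + face_w // 2
    if center < FRAME_WIDTH // 2:
        return range(MATRIX_COLS // 2, MATRIX_COLS)
    return range(0, MATRIX_COLS // 2)

print(list(lit_columns(40, 60)))   # face left of center -> columns 16..31
print(list(lit_columns(200, 60)))  # face right of center -> columns 0..15
```

On the piece, the Arduino would receive the chosen column range over serial and play the animation only within it, so the glow tracks the viewer through the silicone.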
The tools we create are taking over our lives. From recording our memories on physical pages to analyzing the consequences of business investments, from enabling communication over long distances to interpreting our speech and predicting our desires, digital machines enabled by ML have gone from helping us, to enabling us, to thinking for us. Will creative expression, the most distinctive human characteristic, be the next bastion to fall? Experiments with machine creativity have centered on using ML to help or imitate the human creative process. This strategy, however, rests on an anthropomorphic view that the way humans express themselves is the basis for all creative work, including that of machines, much as the Turing Test inherently situates machines within the human space with no regard for how non-human processes work.
I propose that machine artistic expression can instead emerge from exploiting what humans think of objects and devices, allowing ML to subvert traditional forms and coalesce into a system of creative expression that goes beyond simply generating data from modified prior models. In this view, the context and situation in which ML is used matters as much as the algorithms, enabling a world where creative machines appear to permeate.
For more information, see the associated paper. This work was exhibited at Columbia University's "On Collaboration" creative technology gallery. It was also presented at the Art Machines International Symposium on Computational Media Art at the City University of Hong Kong, with peer review led by Prof. Richard Allen. Here's audio from my lecture.