FROM THIS ARISES OUR INVARIABLY DISTORTING WAY OF REPRESENTING THINGS

A found text, pulled from Italo Calvino’s lecture on multiplicity, is “blown” through a neural network representation of the English language by real-time wind data.

A neural network was trained on a sizeable portion of the English Wikipedia, using words tagged with their part of speech (to preserve homonyms). Like the human-written text they are trained on, neural networks and other machine-learning models are not pure representations of language; they bake in our biases, norms, and conventions.
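A minimal sketch of how such a model could be trained with the gensim library, assuming a corpus already tokenized and tagged so that each token carries its part of speech (e.g. `lead_NOUN` vs. `lead_VERB`); the file name, tag format, and hyperparameters here are illustrative, not the actual pipeline used for the piece:

```python
# Sketch: training a part-of-speech-aware Word2Vec model with gensim.
# Assumes a pre-tagged corpus file where each line is one sentence of
# "word_POS" tokens (e.g. "lead_VERB"), which keeps homonyms distinct.
from gensim.models import Word2Vec

def tagged_sentences(path):
    """Yield one sentence at a time as a list of word_POS tokens."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = line.strip().split()
            if tokens:
                yield tokens

sentences = list(tagged_sentences("wikipedia_tagged.txt"))  # hypothetical corpus file
model = Word2Vec(
    sentences,
    vector_size=100,   # dimensionality of the embedding space
    window=5,          # context window around each word
    min_count=20,      # drop rare words to keep the vocabulary manageable
    workers=4,
)
model.save("word2vec_pos.model")
```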

During the course of the exhibition, the words of the seed text are shifted inside this language representation by real-time wind data from an anemometer installed outside the gallery. Wind speed and direction are captured and nudge each word into new contexts, gradually transforming the text. The output is not meant to be readable in a useful sense; it is a poetic transformation of the original through the language systems it is built on, like a tumbleweed blowing in the wind. One way such a shift could be realized is sketched below.
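In this sketch, a wind reading is turned into a small offset vector, each word's embedding is nudged by that offset, and the nearest word in the vocabulary becomes its replacement. The anemometer interface, the wind-to-vector mapping, and the scale factor are assumptions for illustration; in the installation the offsets would presumably accumulate over the run of the exhibition, so the words drift further and further from their originals.

```python
# Sketch: "blowing" a seed text through the embedding space with wind data.
# The wind-to-vector mapping and scale factor are illustrative assumptions.
import numpy as np
from gensim.models import Word2Vec

wv = Word2Vec.load("word2vec_pos.model").wv

def wind_offset(speed_mps, direction_deg, dims, scale=0.01):
    """Map a wind reading to a small offset vector in the embedding space.
    Direction picks a fixed 2-D plane of the space; speed sets the magnitude."""
    offset = np.zeros(dims, dtype=np.float32)
    angle = np.radians(direction_deg)
    offset[0] = np.cos(angle) * speed_mps * scale
    offset[1] = np.sin(angle) * speed_mps * scale
    return offset

def blow(tokens, speed_mps, direction_deg):
    """Shift each known token by the wind offset and return the nearest words."""
    offset = wind_offset(speed_mps, direction_deg, wv.vector_size)
    out = []
    for token in tokens:
        if token not in wv:
            out.append(token)  # leave unknown words untouched
            continue
        shifted = wv[token] + offset
        nearest, _ = wv.similar_by_vector(shifted, topn=1)[0]
        out.append(nearest)
    return out

seed = ["the_DET", "world_NOUN", "is_VERB", "a_DET", "knot_NOUN"]  # illustrative tokens
print(blow(seed, speed_mps=4.2, direction_deg=135.0))
```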

A detail of the trained vector space, preserving the parts-of-speech for each word

For this project, the neural network model Word2Vec was chosen because, along with grouping together words that are related by topic, directions in its vector space have also been found to encode concepts: moving in a particular direction can change the gender of a term or the tense of a word.
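This directional property is what the well-known Word2Vec analogy queries demonstrate. A small illustration with gensim, where the POS-tagged token names are assumptions matching the training sketch above:

```python
# Sketch: directions in the vector space encoding gender and tense.
# Token names follow the assumed word_POS tagging convention above.
from gensim.models import Word2Vec

wv = Word2Vec.load("word2vec_pos.model").wv

# Gender direction: king - man + woman should land near queen.
print(wv.most_similar(positive=["king_NOUN", "woman_NOUN"],
                      negative=["man_NOUN"], topn=3))

# Tense direction: walked - walk + swim should land near swam.
print(wv.most_similar(positive=["walked_VERB", "swim_VERB"],
                      negative=["walk_VERB"], topn=3))
```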

Screenshots of different states of the piece
The entire trained vector space, showing approximately 22,000 words

This piece was developed while artist-in-residence at Bell Labs, in collaboration with their research team in Cambridge, England.
