Things about the voice for the crafted nodes

Prototyping Char RNN: neural networks generating text

Trying to find a voice for the crafted electronic nodes: Node Poetry

In the process of characterisation, the nodes need a voice. I like the idea of the nodes generating their own poetry based on texts I feed to them. The challenge, then, is to find appropriate texts to train the neural network.

Testing a Char RNN model based on Max Woolf’s training model, an open-source framework by Max Woolf, written in Python and powered by TensorFlow.
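Whatever framework does the training, the first step of any char RNN is the same: map every character of the corpus to an integer index so the text becomes numbers the network can consume. A minimal sketch in plain Python (illustrative only, not the framework’s own code; the variable names are mine):

```python
# Minimal sketch of char RNN preprocessing: map each character of the
# corpus to an integer index, and check that decoding round-trips.
corpus = "Keep quiet when you are punctual"

# Build the character vocabulary in a stable, sorted order.
chars = sorted(set(corpus))
char_to_idx = {c: i for i, c in enumerate(chars)}
idx_to_char = {i: c for c, i in char_to_idx.items()}

# Encode the corpus as the sequence of indices the network trains on.
encoded = [char_to_idx[c] for c in corpus]

# Decoding must reproduce the original text exactly.
decoded = "".join(idx_to_char[i] for i in encoded)
assert decoded == corpus
```

From here, a char RNN is trained to predict the next index given the previous ones, and generation samples one index at a time and decodes it back to characters.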

The first set of seed texts were a user guide for tourists and a servo safety guide:

“Keep quiet when you are punctual if taking part in a shop
loosen up and bring an adventurous spirit
take time to a complete stop
applicable regulations will result in death or serious
Install and avoid hazards involved are authorized to work on your knife.
loosen up and bring an adventurous spirit
Install and all other people’s belongings with the fork parallel across the right side of your host
Ensure you are punctual if taking part in a shop
It is advisable to take your place in the right while eating.”

The result is fun but bears no relation to the idea of the network of nodes.

Better results with another set of texts: Wendy Chun, Updating to Remain the Same (Cambridge, Massachusetts: The MIT Press, 2016), and U. A. Mejias, Off the Network (Electronic Mediations, University of Minnesota Press, 2013), in addition to extracts from my essay on network masks and some further thoughts about puppetry.

Put the mask on your face, and become a node’, ‘between 0 and 1
This is the marionette
the Network is Blind
Internet is Myopic
the puppet is Workaholic
the Network is a Trail
the interface is Sniffing
the interface is a Collective Traveller
sensor is on
sensor is off
the code is binary
shadows are invisible
we are the machines
Ask the marionette
a puppet never dies
we are humans…

Asking the model to qualify the main social media: “google computers are invisible, google the mask on your face, and become a node”.

Taking the “good sentences” produced by the model and putting them back into the training set.

Questions about Narrative in Physical Computing

Things about actuators, sensors and calibration: is there a natural order when scripting storytelling in physical computing?
Actuators are designed to activate, not to inactivate.

As per the definition of actuators on Wikipedia: “An actuator is a component of a machine that is responsible for moving and controlling a mechanism or system, for example by opening a valve. In simple terms, it is a ‘mover’.”

So among the electronic artefacts, I have the actuator, the passive one, and the one which can be moved by the actuator; it stops moving when the actuator is no longer on. Stopping is not a positive act, but a form of resistance, of non-moving.
Odd that it is easier to activate than to inactivate a servo… It is less easy than you would think. The way things are built with motors is that their actuators are there to make them move, but not to make them stop…

Questions about work in progress

ChatBot using speech recognition in p5.js > create a RiveScript file for the interaction between audience and computer (D. Shiffman tutorial)

The bot is triggered by a proximity sensor monitored via an Arduino. The Arduino sends a 1 over Serial to the computer if there is someone in front of the sculpture. If there is no one, the bot only displays the DataText.
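The serial protocol above is simple enough to sketch. Reading the port itself would use pyserial on the computer side, but the decision logic can be shown as plain functions; `someone_present` and `bot_mode` are my own illustrative names, not part of any library:

```python
# Sketch of the computer side of the protocol: the Arduino writes b"1\n"
# over Serial when the proximity sensor sees someone, b"0\n" otherwise.
def someone_present(line: bytes) -> bool:
    """Interpret one line received from the Arduino over Serial."""
    return line.strip() == b"1"

def bot_mode(line: bytes) -> str:
    """Decide what the bot should display for this reading."""
    return "chat" if someone_present(line) else "DataText"
```

For example, `bot_mode(b"1\n")` returns `"chat"` and `bot_mode(b"0\n")` returns `"DataText"`.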

The speaker sculpture is made of a sound sensor connected to an Arduino and a speaker plugged into the computer. The audio speaker sculpture acts as an actuator for the first servo, whose body spins and touches the other sculptures.

I still have to find the link between the proximity sensor, the bot and the speaker sculpture.

The DataText is produced by training on texts about masks and objects in theatre and texts about component specifications. The training model is a Markov chain.
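A word-level Markov chain of the kind used for the DataText can be sketched in a few lines; the function names and the tiny corpus below are illustrative, not the actual training code:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain from `start`, picking a random follower each step."""
    rng = random.Random(seed)
    out = list(start)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(start):]))
        if not followers:
            break  # dead end: this key never appeared mid-corpus
        out.append(rng.choice(followers))
    return " ".join(out)
```

Trained on the masks/theatre texts, every generated word is a word from the corpus; only the order is (statistically) new, which is what gives the DataText its collaged feel.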

RiveScript is based on waiting for keywords and on displaying long prepared sentences. This is a limitation for the conversation.

I need to set up a private router, to keep the installation free from the overcrowded network.

Questions about training Neural Networks from the maker


The maker: for whose pleasure do we train the models?

Who is the curator/ the audience?

What is the purpose?

“If we employ machine intelligence to augment our writing activities, it’s worth asking how such technology would affect how we think about writing as well as how we think in the general sense.” (Ross Goodwin)

How to start

Questions about the first actuator

  • Recommendation from R. Fiebrink about sound and triggering the installation

Using sound recognition to detect whether someone is talking to one of the artefacts will solve the problem of how many people are effectively interacting with the installation; play with the sensitivity level to get the sound data we need.
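The “sensitivity level” can be sketched as a tunable threshold on the loudness of a window of microphone samples; `rms` and `is_talking` are illustrative names, not a real audio-library API:

```python
# Sketch of the sensitivity idea: a window of microphone samples counts
# as someone talking only when its RMS loudness crosses a tunable threshold.
def rms(samples):
    """Root-mean-square amplitude of one window of samples."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def is_talking(samples, threshold=0.1):
    """Lowering the threshold makes the ear node more sensitive."""
    return rms(samples) > threshold
```

Tuning `threshold` against the room’s background noise is exactly the “play with the sensitivity level” step: too low and the nodes react to the crowd, too high and they ignore a quiet visitor.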

  • Build up one “ear” node with a microphone; check this and that to understand which type of microphone we need > lavalier microphone
  • Build up a “mouth” to display the response of the chatbot installation; play with the idea that a microphone is a reverse speaker?
    • version 1: a simple version via p5.js, with the need for cloud access to do speech recognition
    • version 2: speech recognition with the BitVoicer API; check the Arduino tutorial on speech recognition and synthesis
    • would need a private router to secure access to the cloud for text-to-speech > check authorisation for plugging the private router into the internet

To generate responses from the ChatBot: build up different training sets with different methods: Markov chain, char RNN review and basic char RNN training with the GitHub source, and TensorFlow’s sequence-to-sequence library.

Write different scripts in RiveScript to map the input text to output text. Use arrays in RiveScript? Generate different characters (who you are talking to); use a list of trigger words in an array to output a certain sentence.
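The trigger-word array can be prototyped without RiveScript at all; here is a pure-Python stand-in (the triggers reuse lines from the node poetry above, and `reply` is an illustrative function, not RiveScript’s API):

```python
# Pure-Python stand-in for RiveScript's trigger idea: each entry pairs
# a list of trigger words with the sentence the node should output.
TRIGGERS = [
    (["mask", "face"], "Put the mask on your face, and become a node"),
    (["node", "network"], "the Network is Blind"),
    (["puppet", "marionette"], "a puppet never dies"),
]

def reply(user_input, fallback="..."):
    """Return the sentence for the first trigger found in the input."""
    words = user_input.lower().split()
    for keywords, sentence in TRIGGERS:
        if any(k in words for k in keywords):
            return sentence
    return fallback
```

Different characters could then just be different `TRIGGERS` tables, swapped depending on which node the visitor is talking to.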

Or use the built-in mic of the Mac.

  • After tutorial with H. Pritchard

Produce a mixed reality with the electronic node devices and human interpretation by the nodes. Node A activates node B / one sensor, one motor, one movement / small (or big) size nodes.

Create the narrative: when no one (or not enough people) is looking or talking, the nodes interact with each other; when there are enough people, or one person looking or talking, the nodes stop moving and talk back.
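That narrative rule is effectively a tiny state machine; a sketch (the threshold `enough` and the returned strings are placeholders, not the final behaviours):

```python
def node_behaviour(people_watching, enough=1):
    """Narrative rule: the nodes perform for each other until enough
    people pay attention; then they freeze and talk back."""
    if people_watching >= enough:
        return "stop moving and talk back"
    return "interact with each other"
```

The sensor pipeline (proximity or sound) only has to produce `people_watching`; the dramaturgy lives in this one comparison.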

Using text-to-speech and maybe speech recognition.



Questions about the node language, surrealist legacy

The perspective is that of the node artefacts, not the human visitor. The piece is more about a bot for them to communicate with each other than about building a chatbot to talk with us. It is machine-centred, not human-centred. The plan is to give them a script to perform together as a network. I like the surrealist tone of generative poetry bots and how the Dada movement was quite ahead of its time in its practice of text collage and random writing.

A list of tutorials to explore to start prototyping:

I have to learn Python, and how to work with TensorFlow, to manipulate text data sets.