Questions about the future, iterations and collaborations

Many other iterations are possible; I have not fully used the potential of the web server. There is, for instance, the possibility of creating a dialogue between two browsers opened simultaneously, each telling a different text.
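A minimal sketch of that dialogue, assuming a small Python relay server built with the third-party websockets library (the port and message handling are my placeholders): each line sent by one browser is forwarded to the other, so the two pages can answer each other.

```python
import asyncio
import websockets

clients = set()

async def relay(ws, path=""):
    # Each connected browser registers here; its lines go to the other browser.
    clients.add(ws)
    try:
        async for message in ws:
            for other in clients:
                if other is not ws:
                    await other.send(message)
    finally:
        clients.discard(ws)

async def main():
    async with websockets.serve(relay, "localhost", 8765):
        await asyncio.Future()  # serve until interrupted

asyncio.run(main())
```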

I managed to have professional actors record the generative poetry. There seems to be some interest. Maybe some potential for collaboration here; more to follow…

Questions about headphones and microphone

A question remains about whether or not to use headphones to listen to the poetry of the node artifacts. At the first presentation, at the EVA conference, viewers had to wear headphones in order to spare the other exhibitors' installations. As a result, some of the viewers stayed a long time to listen to the network of artifacts. In the church exhibition, I will use a conference speaker to amplify the sound, and place it so that only the visitor sitting in front of the puppet controller gets the complete experience of the performance. The audience is not reduced to that seated visitor, though: the performance seen from afar includes the node artifacts and the human actors who improvise with their voices.

I covered the microphone with the same weaving, to cast it as another character on the set.

Questions about designing the sound visualisation

I like the idea of sound visualisation and how it embodies the impact of the machines on their environment. What sort of shapes or visualisation should I use? I am testing shapes for the backdrop that connect humans and machines through their talking.
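One way to test a shape, as a minimal sketch assuming the backdrop is driven by microphone amplitude (Python with the sounddevice and matplotlib libraries; the mapping is a placeholder): the loudness of whoever is talking sets the radius of a circle.

```python
import numpy as np
import sounddevice as sd
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
circle = plt.Circle((0.5, 0.5), 0.05, color="black")
ax.add_patch(circle)
ax.set_aspect("equal")
ax.axis("off")

def callback(indata, frames, time, status):
    # RMS loudness of the current audio block drives the shape.
    rms = float(np.sqrt(np.mean(indata ** 2)))
    circle.set_radius(0.05 + rms * 2)

with sd.InputStream(channels=1, callback=callback):
    while plt.fignum_exists(fig.number):
        plt.pause(0.03)  # redraw while the stream updates the radius
```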

“In the fusion torch recycling, The emancipated spectator The torch of the sensor is ashamed Cheat the eyelash! 9dfbf0 for the node torch. Grumbling makes the loaf no larger The buildings look like endangered blouses The vertices will link their uncapped mask”

Questions about Narrative in Physical Computing

Things about actuators, sensors and calibration: is there a natural order when scripting storytelling in physical computing?
Actuators are designed to activate, not to inactivate.

As per the definition of an actuator on Wikipedia: “An actuator is a component of a machine that is responsible for moving and controlling a mechanism or system, for example by opening a valve. In simple terms, it is a ‘mover’.”

So among the electronic artefacts I have the actuator, the passive one, and the one that can be moved by the actuator and stops moving when the actuator is no longer on. Stopping is not a positive form; it is a form of resistance, of non-moving.
It is odd that it is easier to activate a servo than to inactivate it; it is less easy than you would think. The way things are built with motors, their actuators are there to make them move, but not to make them stop…
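A minimal sketch of that asymmetry, assuming a hobby servo driven from a Raspberry Pi with the gpiozero library (standing in for the Arduino setup; the pin number is hypothetical): activation is one call, while inactivation is its own explicit gesture of cutting the pulses, otherwise the servo keeps holding its position.

```python
from time import sleep
from gpiozero import Servo

servo = Servo(17)   # hypothetical GPIO pin

servo.max()         # activate: drive the horn to one end
sleep(1)
servo.value = None  # "inactivate": stop sending pulses altogether
```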

Questions about work in progress

ChatBot using speech recognition in p5.js > create a RiveScript for the interaction between audience and computer (D. Shiffman tutorial).

The bot is triggered by a proximity sensor monitored via an Arduino. The Arduino sends a 1 over serial to the computer if there is someone in front of the sculpture; if there is no one, the bot only displays DataText.
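A minimal sketch of the computer side of that link, assuming the Arduino writes a “1” or “0” line over USB serial at 9600 baud (the port name and the exact messages are my assumptions), read here with the pyserial library:

```python
import serial

# Read presence flags from the Arduino (hypothetical port name).
with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as port:
    while True:
        flag = port.readline().decode("ascii", errors="ignore").strip()
        if flag == "1":
            print("visitor in front of the sculpture -> wake the chatbot")
        elif flag == "0":
            print("no visitor -> display DataText only")
```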

The speaker sculpture is made of a sound sensor connected to an Arduino and a speaker plugged into the computer. The audio speaker sculpture acts as an actuator for the first servo, whose body spins and touches the other sculptures.

I still have to find the link between the proximity sensor, the bot, and the speaker sculpture.

DataText is produced by training on texts about masks and objects in theatre, and texts about component specifications. The training model is a Markov chain.
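As a minimal sketch of that model, assuming the training texts live in one plain-text file (the filename is a placeholder): a word-level Markov chain that maps each pair of words to the words seen after it, then walks the chain.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each sequence of `order` words to the words that follow it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=40):
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break  # dead end in the corpus
        out.append(random.choice(followers))
    return " ".join(out)

corpus = open("masks_and_datasheets.txt").read()  # hypothetical corpus file
print(generate(build_chain(corpus)))
```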

RiveScript is based on waiting for keywords, and on the ability to display long pre-written sentences. This is a limitation to the conversation.
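A minimal sketch of that keyword behaviour, here with the rivescript Python module (the installation runs the browser version via p5.js, but the trigger syntax is the same; the replies recycle lines from the generative poem): anything that does not contain a keyword falls through to the catch-all.

```python
from rivescript import RiveScript

bot = RiveScript()
bot.stream("""
+ [*] mask [*]
- The vertices will link their uncapped mask.

+ *
- The torch of the sensor is ashamed.
""")
bot.sort_replies()

print(bot.reply("visitor", "what is behind the mask"))  # keyword matched
print(bot.reply("visitor", "tell me a story"))          # falls to the catch-all
```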

I need to set up a private router, to keep the installation free from the overcrowded network.

Questions about training Neural Networks from the maker


The maker: for whose pleasure do we train the models?

Who is the curator / the audience?

What is the purpose?

“If we employ machine intelligence to augment our writing activities, it’s worth asking how such technology would affect how we think about writing as well as how we think in the general sense.” (Ross Goodwin)

How to start

https://towardsdatascience.com/how-to-build-a-neural-network-with-keras-e8faa33d0ae4
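And a minimal sketch in the spirit of that tutorial, with random placeholder data standing in for a real dataset, just to see the build / compile / fit steps of Keras:

```python
import numpy as np
from tensorflow import keras

x = np.random.rand(100, 8)                # placeholder features
y = np.random.randint(0, 2, size=(100,))  # placeholder binary labels

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=16)
```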

Questions about the first actuator

  • Recommendation from R. Fiebrink about sound and triggering the installation:

Use sound recognition to detect whether someone is talking to one of the artefacts. This will solve the problem of knowing how many people are effectively interacting with the installation; play with the sensitivity level to get the sound data we need (see the sketch after this list).

  • Build up one “ear” node with a microphone; check this and that to understand which type of microphone we need > lavalier microphone.
  • Build up a “mouth” to display the response of the chatbot installation; play with the idea that a microphone is a reverse speaker?
    • version 1: a simple version via p5.js, which needs access to the cloud for speech recognition,
    • version 2: speech recognition with the BitVoicer API; check the Arduino tutorial on speech recognition and synthesis,
    • would need a private router to secure access to the cloud for text-to-speech > check authorisation for plugging the private router into the internet.
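The sketch mentioned above, assuming “someone is talking to the artefact” can be approximated by a tunable loudness threshold on the microphone input (Python with the sounddevice library; the threshold value is exactly the sensitivity level to play with):

```python
import numpy as np
import sounddevice as sd

SENSITIVITY = 0.02  # tune to the room: this is the sensitivity level to play with

def callback(indata, frames, time, status):
    level = float(np.sqrt(np.mean(indata ** 2)))  # RMS loudness of the block
    if level > SENSITIVITY:
        print("someone is talking to the artefact:", round(level, 3))

with sd.InputStream(channels=1, callback=callback):
    sd.sleep(10_000)  # listen for ten seconds
```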

To generate responses from the chatbot: build up different training sets with different methods: Markov chain, char-RNN (review and basic char-RNN training with a GitHub source), TensorFlow’s sequence-to-sequence library.
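A minimal char-RNN sketch in Keras, assuming the same hypothetical corpus file as the Markov chain above; it shows the one-hot data preparation and the model shape rather than a full training run:

```python
import numpy as np
from tensorflow import keras

text = open("masks_and_datasheets.txt").read()  # hypothetical corpus file
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}

# One-hot encode sliding windows of characters and the character that follows.
seq_len = 40
n = len(text) - seq_len
x = np.zeros((n, seq_len, len(chars)), dtype=np.float32)
y = np.zeros((n, len(chars)), dtype=np.float32)
for i in range(n):
    for t, c in enumerate(text[i:i + seq_len]):
        x[i, t, idx[c]] = 1.0
    y[i, idx[text[i + seq_len]]] = 1.0

model = keras.Sequential([
    keras.Input(shape=(seq_len, len(chars))),
    keras.layers.LSTM(128),
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(x, y, epochs=1, batch_size=128)
```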

Write different scripts in RiveScript to map input text to output text. Use arrays in RiveScript? Generate different characters depending on who you are talking to; use a list of trigger words in an array to output a certain sentence.
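A minimal sketch of the array idea, again with the rivescript Python module: “! array” holds the list of trigger words, and the matched word comes back through <star>, so one rule can answer for several characters.

```python
from rivescript import RiveScript

bot = RiveScript()
bot.stream("""
! array stagewords = mask puppet torch sensor

+ [*] (@stagewords) [*]
- The <star> is ashamed. Cheat the eyelash!
""")
bot.sort_replies()

print(bot.reply("visitor", "where is the puppet hiding"))
```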

Or use the built-in mic of the Mac.

  • After the tutorial with H. Pritchard:

Produce a mixed reality with the electronic node devices and human interpretation of the nodes. Node A activates node B / one sensor, one motor, one movement / small (or big) size nodes.

Create the narrative: when no one (or not enough people) is looking or talking, the nodes interact with each other; when there are enough people, or one person, looking or talking, the nodes stop moving and talk back.
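A minimal sketch of that rule as a single state function, assuming presence arrives as a head count from the sensors described above:

```python
def step(people_present: int) -> str:
    # Narrative rule: enough attention freezes the nodes and makes them talk back.
    if people_present >= 1:
        return "nodes stop moving and talk back"
    return "nodes interact with each other"

for crowd in [0, 0, 1, 3, 0]:
    print(crowd, "->", step(crowd))
```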

Use text-to-speech and maybe speech recognition.


Questions about the node language and the surrealist legacy

The perspective is that of the node artifacts, not the human visitor. The piece is more about a bot for them to communicate with each other than about building a chatbot to talk with us: it is machine-centred, not human-centred. The plan is to give them a script to perform together as a network. I like the surrealist tone of the generative poetry bots, and how the Dada movement was quite ahead of its time in its practice of text collage and random writing.

A list of tutorials to explore and start prototyping:

I have to learn Python, and how to work with TensorFlow to manipulate text datasets.
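A tiny first step, assuming a hypothetical corpus file with one sentence per line: TensorFlow’s tf.data pipeline can already read and transform the text.

```python
import tensorflow as tf

# Hypothetical corpus file, one sentence per line.
lines = tf.data.TextLineDataset("masks_and_datasheets.txt")
lines = lines.map(tf.strings.lower)  # a first manipulation: normalise the case
for line in lines.take(3):
    print(line.numpy().decode("utf-8"))
```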