Chatbot using speech recognition in p5.js > create a script for the interaction between audience and computer (D. Shiffman tutorial)
The chatbot is triggered by ultrasound sensors monitored via Arduino. The Arduino sends a 1 over serial to the computer if someone is in front of the sculpture. If no one is there, the chatbot only displays DataText. The installation moves mechanically after the first trigger by the Audio Speaker Sculpture
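A minimal sketch of that trigger logic, assuming the Arduino writes the single character "1" (someone present) or "0" (no one) over serial; the function name and the two mode labels ("chat" / "datatext") are placeholders of mine, not part of the project:

```javascript
// Decide the installation's display mode from the latest serial byte.
// "1" = ultrasound sensor sees someone -> start the chatbot;
// anything else -> keep displaying the Markov-generated DataText.
function displayMode(serialByte) {
  return serialByte === "1" ? "chat" : "datatext";
}

// In the p5.js sketch this would be called from the serial data event,
// e.g. with the p5.serialport library (assumption):
//   serial.on("data", () => { mode = displayMode(serial.readLine().trim()); });
```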
DataText is produced by training a Markov chain model on texts about masks and objects in theatre, and on texts about component specifications
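A toy word-level Markov chain in plain JavaScript shows the model behind DataText; the training string below is a stand-in for the real theatre/specification corpus:

```javascript
// Build a word-level Markov chain: each word maps to the list of words
// that follow it in the training text (duplicates keep the frequencies).
function buildChain(text) {
  const words = text.trim().split(/\s+/);
  const chain = {};
  for (let i = 0; i < words.length - 1; i++) {
    (chain[words[i]] = chain[words[i]] || []).push(words[i + 1]);
  }
  return chain;
}

// Walk the chain from a start word, picking a random successor each step.
function generate(chain, start, length) {
  let word = start;
  const out = [word];
  for (let i = 0; i < length - 1; i++) {
    const next = chain[word];
    if (!next) break; // dead end: word never had a successor
    word = next[Math.floor(Math.random() * next.length)];
    out.push(word);
  }
  return out.join(" ");
}
```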
Audio Speaker Sculpture with microphone and speakers plugged into the computer: its role is to get the chatbot started, to draw the audience in, to start the conversation
The chatbot's script should be based on waiting for keywords, with the ability to display long sentences
Need to set up a private router, to keep the installation free from an overcrowded network
The maker: for whose pleasure do we train the models?
Who is the curator/ the audience?
What is the purpose?
“If we employ machine intelligence to augment our writing activities, it’s worth asking how such technology would affect how we think about writing as well as how we think in the general sense.” (Ross Goodwin)
Use sound recognition to detect whether someone is talking to one of the artefacts; this will solve the problem of knowing how many people are effectively interacting with the installation. Play with the sensitivity level to get the sound data we need
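The "is someone talking?" check reduces to comparing the microphone's amplitude against a tunable sensitivity threshold. In p5.js the level would come from p5.sound's amplitude analyser (`getLevel()` returns roughly 0.0 to 1.0); the default threshold below is only a starting guess, to be tuned on site:

```javascript
// Returns true when the mic level crosses the sensitivity threshold.
// `level` is expected in the 0.0-1.0 range, as p5.sound's
// p5.Amplitude.getLevel() reports it; 0.05 is a placeholder default.
function someoneTalking(level, threshold = 0.05) {
  return level > threshold;
}
```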
Build up one “ear” node with a microphone; check this and that to understand which type of microphone we need > lavalier microphone
Build up a “mouth” to display the response of the chatbot installation; play with the idea that a microphone is a reverse speaker?
version 1: a simple version via p5.js, which needs access to the cloud to do speech recognition
write different scripts via RiveScript to map input text to output text; use arrays in RiveScript? generate different characters (who you are talking to); use a list of trigger words in an array to output a certain sentence
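The trigger-word idea can be prototyped in plain JavaScript before porting it to RiveScript's `+ trigger` / `- reply` pairs; the word lists and replies below are invented placeholders:

```javascript
// Map arrays of trigger words to canned sentences; the first matching
// rule wins, mirroring a RiveScript trigger/reply pair per rule.
const rules = [
  { triggers: ["mask", "masks"], reply: "Every mask in this theatre hides a speaker." },
  { triggers: ["who", "name"],   reply: "I am the node you are talking to." },
];

function respond(input, fallback = "...") {
  const words = input.toLowerCase().split(/\W+/);
  for (const rule of rules) {
    if (rule.triggers.some(t => words.includes(t))) return rule.reply;
  }
  return fallback; // no keyword matched: fall back to DataText or silence
}
```

Different characters could then just be different `rules` arrays swapped in per node.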
or use the built-in mic from the Mac
04/26/2018 after tutorial with H. Pritchard
produce a mixed reality with the electronic node devices and human interpretation by the nodes / node A activates node B / one sensor, one motor, one movement / small (or big) size nodes
create the narrative: when no one (or not enough people) is looking or talking, the nodes interact with each other; when there are enough people, or one person, looking or talking, the nodes stop moving and talk back
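That narrative is a two-state machine; a minimal sketch, assuming engagement is counted as a single number and using a placeholder threshold of one person:

```javascript
// Nodes perform for each other until enough people engage; then they
// freeze and talk back. `peopleEngaged` would be derived from the
// sound/gaze detection above; the threshold is an assumption to tune.
function nodeBehaviour(peopleEngaged, threshold = 1) {
  return peopleEngaged >= threshold
    ? { moving: false, talkingBack: true }   // audience present: address them
    : { moving: true,  talkingBack: false }; // no audience: nodes interact
}
```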