Along the journey of designing the puppet controller, the alien “director” of the installation, I was actively researching how to keep its body continuously charged while spinning on itself. At first I used a coin cell holder attached to the aluminium wire, adding a small 10-ohm resistor to avoid overheating the battery, since it was continuously supplying 3 V along the wire. I used this version for the mini version of the installation at the EVA Digital Culture conference.
Although it worked correctly and had a pleasant aesthetic, it did not last very long: the body was losing its electric charge very quickly, and the connection to the sensor was not stable at all.
After the first show and work-in-progress presentation, I looked for a more permanent solution and came across the notion of dielectricity: how long an item stays charged even after being disconnected from a power source. Going deeper into my research for a solution, I discovered that Teflon and water have very high dielectric ability. Unfortunately, neither of those materials would work here. It was a truly desperate situation, as this character's ability to control the installation was at stake if I could not find a solution that did not involve a human touch. I was really keen to give control of the installation to this non-human character.
Luckily, sharing my problem around me, and especially with Nick from the tech team, gave me the solution: the slip ring system! With this I would be able to have the wire spinning without entangling the electrical wire connected to the ground.
I immediately ordered one, but when I received it I faced another challenge. Although it was a great technical solution, the aesthetic of the object was not compatible with the rest of the installation, and I could not see a nice way to fit it in the controller box, or above it, without destroying the harmony of the installation!
After a few iterations, and out of despair, my puppeteer craft saved me once again! I had discovered my own interpretation of a slip ring: a simple screw sized just above the puppet body's diameter, and a recycled open screw scavenged from the magnifier holder, soldered to a conductive wire. The whole system sits close enough to the body to charge it while allowing it to spin freely!
ChatBot using speech recognition in p5.js > create a RiveScript script for the interaction between audience and computer (D. Shiffman tutorial)
The bot is triggered by a proximity sensor monitored via Arduino. The Arduino sends 1 over Serial to the computer if there is someone in front of the sculpture. If there is no one, the bot only displays DataText.
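As a sketch, the serial logic above reduces to a tiny mode switch (function and mode names here are my own illustration, not the installation's actual code):

```javascript
// Illustrative presence logic: the Arduino writes "1" over Serial
// when the proximity sensor sees someone in front of the sculpture.
function botMode(serialValue) {
  // someone present -> converse; no one -> DataText display only
  return serialValue === "1" ? "chat" : "datatext";
}
```

In the p5.js sketch this would be called on each serial event to switch between the chatbot conversation and the DataText stream.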
The speaker sculpture is made of a sound sensor connected to an Arduino and a speaker plugged into the computer. The audio speaker sculpture acts as an actuator for the first servo, whose body spins and touches the other sculptures.
Still have to find the link between the proximity sensor, the bot, and the speaker sculpture.
DataText is produced by training on texts about masks and objects in theatre, and on texts about component specifications. The training model is a Markov chain.
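A word-level Markov chain of that kind can be sketched in a few lines of JavaScript (a minimal first-order version for illustration, in the spirit of D. Shiffman's tutorial, not the exact DataText model):

```javascript
// Build a first-order word-level Markov chain from a training text:
// each word maps to the list of words that follow it in the corpus.
function buildChain(corpus) {
  const words = corpus.trim().split(/\s+/);
  const chain = {};
  for (let i = 0; i < words.length - 1; i++) {
    (chain[words[i]] = chain[words[i]] || []).push(words[i + 1]);
  }
  return chain;
}

// Walk the chain from a seed word, picking successors at random.
function generate(chain, seed, length = 20) {
  const out = [seed];
  let current = seed;
  for (let i = 0; i < length; i++) {
    const options = chain[current];
    if (!options) break; // dead end: word never appears mid-corpus
    current = options[Math.floor(Math.random() * options.length)];
    out.push(current);
  }
  return out.join(" ");
}
```

Trained on the theatre and datasheet texts, `generate(chain, seedWord)` would produce the DataText stream.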
RiveScript works by waiting for keywords, with the ability to display long sentences. This is a limitation on the conversation.
Need to set up a private router, to keep the installation free from an overcrowded network.
The maker: for whose pleasure do we train the models?
Who is the curator/ the audience?
What is the purpose?
“If we employ machine intelligence to augment our writing activities, it’s worth asking how such technology would affect how we think about writing as well as how we think in the general sense.” (Ross Goodwin)
Chatbot Tom Bocklish : https://towardsdatascience.com/personality-for-your-chatbot-with-recurrent-neural-networks-2038f7f34636
D. Shiffman: https://www.youtube.com/watch?v=slmSCEho31g&list=PLRqwX-V7Uu6aDUo_ia-Vq2UZZGaxJ9nRo
Using RiveScript: https://www.rivescript.com/
Text to Speech in p5.js: https://www.youtube.com/watch?v=v0CHV33wDsI
Speech to Text in p5.js: https://www.youtube.com/watch?v=q_bXBcmfTJM
RiveScript to process the input text into output text: https://www.rivescript.com/
Text Generator, All about Chatbot
Markov Chains with D. Shiffman: https://www.youtube.com/watch?v=v4kL0OHuxXs
Apps recommended by Janelle Shane:
1. textgenrnn – an open-source framework by Max Woolf, written in Python and powered by TensorFlow. It’s the easiest to install (though you still have to know your way around the command line a bit) and comes pre-trained, so you can get interesting results even from tiny datasets. It runs fine on an ordinary computer’s CPU, and lets you train the same network successively on different datasets, which is fun. It’s not as powerful as the other frameworks, but just fine for simple lists of names. (tested 15th May)
2. tensorflow char-rnn – an open-source framework by Chen Liang, written in Python. It has tons of flexibility, including the ability to adjust dropout, save frequency, and number of saved snapshots during training, and the ability to adjust temperature during sampling. There’s a tutorial here for running it on AWS, and I’m hoping to find a good tutorial for Google Cloud as well.
3. Andrej Karpathy’s char-rnn, an open-source neural network framework for torch (written in Lua). This one has great flexibility in training/sampling parameters, and it seems to run faster on my 2010 Macbook Pro’s CPU than the python/tensorflow models. I’ve been using this one for the simpler datasets.
Ross Goodwin’s general method:
Prepend the seed with a pre-seed (another paragraph of text) to push the LSTM into a desired state.
Use a high-quality sample of output from the model you’re seeding, with length approximately equal to the sequence length (see above) set during training.
Seed the LSTM with a meaningful text that the machine would complete.
Build a dataset / corpus.
Choose the right settings for the given corpus.
Train the model.
James Loy: How to Build Your Own Neural Network from Scratch in Python – a beginner’s guide to understanding the inner workings of deep learning
Chatbot Personality by 5agado, on GitHub
Recommendations from R. Fiebrink about sound and triggering the installation:
Use sound recognition to detect whether someone is talking to one of the artefacts; this will solve the problem of knowing how many people are effectively interacting with the installation. Play with the sensitivity level to get the sound data we need.
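One way to sketch that sensitivity idea: count an artefact as “addressed” only when the mic level stays above a tunable threshold for several consecutive frames. The threshold, frame count, and names below are illustrative assumptions, not measured values:

```javascript
// Returns a per-frame detector: feed it the current mic level
// (e.g. p5.sound's mic.getLevel(), a value in 0..1) on each draw() call.
function makePresenceDetector(threshold = 0.1, framesNeeded = 3) {
  let run = 0; // consecutive frames above the threshold
  return function update(level) {
    run = level > threshold ? run + 1 : 0;
    return run >= framesNeeded; // true = someone seems to be talking
  };
}
```

Tuning `threshold` is the “sensitivity level” mentioned above; `framesNeeded` filters out short noise spikes from the room.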
Build up one “ear” node, with a microphone; check this and that to understand which type of microphone we need. > lavalier microphone
Build up a “mouth” to deliver the response of the chatbot installation; play with the idea that a microphone is a reverse speaker?
Version 1: a simple version via p5.js, which needs cloud access to do speech recognition.
Write different scripts via RiveScript to compute the input text into output text. Use arrays in RiveScript? Generate different characters (who you are talking to); use a list of trigger words in an array to output a certain sentence.
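As a sketch of that idea, a RiveScript brain using an array of trigger words could look like this (the topics and replies are invented for illustration; the syntax follows the RiveScript documentation):

```
! array artefact = mask puppet sensor servo

// Any utterance naming one of the trigger words in the array
+ [*] (@artefact) [*]
- Ah, the <star>. One of the bodies of this installation.

+ [*] hello [*]
- Hello, human. Which artefact are you looking at?
```

Swapping in a different array and reply set per character would give each sculpture its own voice.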
Or use the mic from the Mac.
After a tutorial with H. Pritchard:
Produce a mixed reality with the electronic node devices and human interpretation by the nodes. Node A activates node B / one sensor, one motor, one movement / small (or big) size nodes.
Create the narrative: when no one (or not enough people) is looking or talking, the nodes interact with each other; when there are enough people, or one person, looking or talking, the nodes stop moving and talk back.
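That narrative rule is essentially a two-state machine; a minimal sketch (the audience threshold and all names are my own assumptions):

```javascript
// Nodes perform for each other when unobserved; they freeze and
// talk back once enough people are looking or talking.
function nodeBehaviour(audienceCount, minAudience = 1) {
  if (audienceCount < minAudience) {
    return { moving: true, talking: false }; // nodes interact with each other
  }
  return { moving: false, talking: true }; // nodes stop moving and talk back
}
```

`audienceCount` could be fed from the proximity sensor or the sound-recognition “ear” nodes, whichever detector ends up linking the sensors to the bot.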