Questions about the first actuator

  • recommendation from R. Fiebrink about sound and triggering the installation

Use sound recognition to detect whether someone is talking to one of the artefacts; this would solve the problem of knowing how many people are effectively interacting with the installation. Play with the sensitivity level to get the sound data we need (a level-trigger sketch follows).
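
A minimal sketch of that level trigger, assuming the Python sounddevice and numpy packages and the default input device; the THRESHOLD constant is the "sensitivity level" to play with:

```python
# Fire when the microphone level crosses a threshold ("someone is talking").
# Assumes: pip install sounddevice numpy; default input device.
import numpy as np
import sounddevice as sd

THRESHOLD = 0.02  # RMS amplitude; tune per microphone and room

def callback(indata, frames, time, status):
    rms = float(np.sqrt(np.mean(indata ** 2)))
    if rms > THRESHOLD:
        print("someone is talking, level =", round(rms, 4))

# Listen for ten seconds, analysing the stream block by block.
with sd.InputStream(channels=1, callback=callback):
    sd.sleep(10_000)
```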

  • Build up one “ear” node with a microphone; check this and that to understand which type of microphone we need > lavalier microphone
  • Build up a “mouth” to play back the chatbot installation’s response; play with the idea that a microphone is a reverse speaker?
    • version 1: a simple version via p5.js, which needs cloud access for speech recognition (a Python equivalent is sketched after this list)
    • version 2: speech recognition with the BitVoicer API; check the Arduino tutorial on speech recognition and synthesis
    • would need a private router to secure cloud access for the text-to-speech > check authorisation for connecting the private router to the internet
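
For a quick test outside p5.js, a hedged Python equivalent using the SpeechRecognition package (pip install SpeechRecognition pyaudio); recognize_google is the cloud call that raises the router/authorisation question above:

```python
# Cloud speech recognition sketch; needs internet access, hence the
# private-router question. Package and recognizer choice are assumptions.
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:         # the lavalier mic, or the Mac's own
    r.adjust_for_ambient_noise(source)  # the sensitivity-tuning step
    print("listening...")
    audio = r.listen(source)

try:
    print("heard:", r.recognize_google(audio))  # sends audio to the cloud
except sr.UnknownValueError:
    print("could not understand the audio")
```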

To generate responses from the chatbot: build up different training sets with different methods: Markov chain, char-RNN (review the basics of char-RNN training with the GitHub source), and TensorFlow’s sequence-to-sequence library (a Markov chain sketch follows).
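
As a first step before the char-RNN and seq2seq experiments, a minimal word-level Markov chain; the corpus file name is a placeholder for whatever training set gets built:

```python
# Build a word-level Markov chain from a text file and generate a sentence.
import random
from collections import defaultdict

def build_chain(text, order=2):
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=30):
    state = random.choice(list(chain))  # random starting state
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = open("training_set.txt").read()  # hypothetical training text
print(generate(build_chain(corpus)))
```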

Write different scripts via RiveScript to map input text to output text (use arrays in RiveScript?); generate different characters, i.e. who you are talking to; use a list of trigger words in an array to output a certain sentence (see the sketch below).
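
A sketch of the trigger-word idea, assuming the rivescript Python package (pip install rivescript); the `! array` syntax answers the array question above, and each block of rules could become one character:

```python
# Trigger words in a RiveScript array, mapped to set sentences.
from rivescript import RiveScript

bot = RiveScript()
bot.stream("""
! array machines = motor node sensor artefact

+ hello *
- HOHA. who is there?

+ * (@machines) *
- ah, another <star2>. we only talk in patterns here.

+ *
- BIDABI KOKA. (the node turns away)
""")
bot.sort_replies()

print(bot.reply("visitor", "hello little thing"))
print(bot.reply("visitor", "are you a sensor or a mouth"))
```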

Or use the mic from the Mac.

  • after tutorial with H. Pritchard

Produce a mixed reality with the electronic node devices and human interpretation by the nodes: node A activates node B / one sensor, one motor, one movement / small (or big) size nodes.

Create the narrative: when no one (or not enough people) is looking or talking, the nodes interact with each other; when there are enough people, or one person, looking or talking, the nodes stop moving and talk back (sketched as a toy state machine below).
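
The narrative as a toy state machine; count_people() and both behaviours are hypothetical stand-ins for whatever sensing (camera, mic level) and actuation (motors, text-to-speech) end up being used:

```python
# Two states: nodes chatter together, or freeze and address the visitor.
import time

AUDIENCE_THRESHOLD = 1  # assumption: one person is enough to flip the state

def count_people():
    """Stand-in for face detection or voice-activity detection."""
    return 0

def nodes_interact():
    print("nodes move and chatter to each other")

def nodes_talk_back():
    print("nodes stop moving and talk back to the visitor")

while True:
    if count_people() >= AUDIENCE_THRESHOLD:
        nodes_talk_back()
    else:
        nodes_interact()
    time.sleep(0.5)
```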

Using text-to-speech and maybe speech recognition.


Question about the node language, surrealist legacy

The perspective is from the node artifacts, not the human visitor. The piece is more about a bot for them to communicate with each other than about building a chatbot to talk with us; it is machine-centred, not human-centred. The plan is to give them a script to perform together as a network. I like the surrealist tone of the generative poetry bots, and how the Dada movement was well ahead of its time in its practice of text collage and random writing.

A list of tutorials to explore and start prototyping:

I have to learn Python, and how to work with TensorFlow, to manipulate text data sets.

Things about the Project Theme

Sit down comfortably with your eyes closed. Try to imagine a scene that catches your topic/theme in a nutshell; you can try to imagine that your topic/theme is being presented as a film or theatre play and construct a scene that presents crucial aspects of it. Or you can try to think of a scene as part of a personal memory where you felt that crucial issues concerning your topic were at stake. Remember that even a purely theoretical topic/theme can be articulated in a scene. Try to mobilize all your senses when you imagine the scene. If possible, let your mind flow back to the work you did with the touching hands.

Proposed by Helen Pritchard, based on Nina Lykke, Writing Academic Texts Differently: Intersectional Feminist Methodologies and the Playful Art of Writing (Routledge Advances in Feminist Studies and Intersectionality), Taylor and Francis, Kindle edition, p. 155.

-Behind the screen, we, the components, are talking, quietly, so quietly that you could hardly hear us. Actually, we are constantly chatting, bubbling: “01101000 01100101 01101100 01101100 01101111 00100000 01110111 01101111 01110010 01101100 01100100 00100000 00100001”, chanting “HOHA DADO BIDABI KOKA BIDEKA!”. We are so busy, sorting out your data: receive, store, delete, update, send. Little busy shadows, enchained together with no choice but to communicate and release the data we are taking from you. Behind the screen, there is a 24-hour show, a constant ballet of binaries, an infinite concerto of 0s and 1s; sadly, no one can hear us, except perhaps your EMF detector.

Suddenly a red light is activated: someone is looking at the screen, someone is typing on the keyboard, or maybe it is a sound or a touch? Who or what is this? I have a unique task, I know what to do… hey! this is my input! Let’s go and give it a byte! Iterate, map, reiterate, variables, vectors, int, float, strings, loop and loop again, and return the arrays to my algorithmic friend!

At the speed of an electron, we are sending feedback not only to you, the one with the keyboard, but also to our hosts, the cookies, in a protocol, a language, that no human could speak fluently. Luckily, we know how to speak human. And thanks to the continuous improvement of human-computer interfaces, you, the users, are by now perfectly trained to give us the correct input.

But with humans, there is always an end. You are now releasing the keyboard and leaving the computer. Behind the screen, and until the next time we meet, there is a new show going on, a new story we tell each other through the wires or in the clouds: a story based on the memory of our encounter with you. So come back; do not type on the keyboard, do not look at the screen, try to listen to the story, because it is not only about you* but also about us.

* Chun, W. 2016, Updating to Remain the Same, MIT Press.

Things about Motors and Sensors

02/10/2018

Check the conductivity of water for input from the viewer; see Kobakant’s crying dress or Root Node.

EMF tutorial

02/05/2018

Looking for a narrative: an entanglement between a motor and a woven W cell made of wire.

01/30/2018

Tutorial for the MPU6050 Arduino gyroscope (GY-521).

<put in a box, share info? to motor>

Stepper: worked, then froze… ???? Servo: more reliable, but/and it makes sound; the stepper is soundless… silent…

01/29/2018

Tutorial from the Maker Show, Bret Stateham:

  • stepper motor basics, testing the order of the wire pins
  • 28BYJ-48 stepper: common ground between the driver board and the microcontroller; independent power sources for the motor and the board; connect to the Arduino; find the lower bound for the step delay, not too quick or it will freeze (see the sketch after this list)
  • AccelStepper Arduino library: supports acceleration and deceleration; supports multiple simultaneous steppers, with independent concurrent stepping on each stepper; API functions never delay() or block; supports 2-, 3- and 4-wire steppers, plus 3- and 4-wire half-steppers
  • reduce the speed of a DC motor with PWM <turning slowly and getting mad when the sensor is activated>
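
These notes are Arduino-based, but since a Raspberry Pi also appears later in the kit list, here is a hedged Python sketch of the same 28BYJ-48 half-step sequence using RPi.GPIO; the pin numbers are assumptions, and STEP_DELAY is the "bottom" to find:

```python
# Drive a 28BYJ-48 (via ULN2003) with the standard 8-state half-step sequence.
import time
import RPi.GPIO as GPIO

PINS = [17, 18, 27, 22]   # hypothetical BCM pins wired to IN1..IN4
HALF_STEP = [
    [1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 0, 0], [0, 1, 1, 0],
    [0, 0, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1], [1, 0, 0, 1],
]
STEP_DELAY = 0.002  # seconds; much below ~1 ms the rotor stalls ("freezes")

GPIO.setmode(GPIO.BCM)
for pin in PINS:
    GPIO.setup(pin, GPIO.OUT, initial=0)

try:
    # 512 passes x 8 half-steps = 4096 half-steps, about one shaft revolution.
    for _ in range(512):
        for state in HALF_STEP:
            for pin, value in zip(PINS, state):
                GPIO.output(pin, value)
            time.sleep(STEP_DELAY)
finally:
    GPIO.cleanup()
```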

01/25/2018

Test servo and sensors; check code, circuit and breadboard.

  1. Servo + capacitive sensor
  2. Servo + photoresistor

Hands on!

Started from the Arduino book and the shadow theatre sensor installation.

Found some interesting movements from continuous versus non-continuous rotation, and different motors give different “moods”.

To do: video and pictures; try different heads to check the movements; test how they relate to each other in terms of movement/character.


Questions about scripts

How to stage a network of artifacts?

 ChatBot exploration

  • <create a surrealist “conversation” between the nodes and with the nodes>
  • use the Bibi dictionary as a base: a binary system with letters, “Ho, He, BikEDa”, translated into shapes (an encoder is sketched after this list)
    • code the binary letters (still to be done); shape equivalence bibiDico
    • http://www.graner.net/nicolas/nombres/bibibinaire.php
  • work with pattern-making in Processing to create the “landscape” of the cell?
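
A small sketch of the Bibi-binary encoding from the graner.net page above: each hexadecimal digit 0..15 gets a syllable, so every byte becomes two syllables (the shape equivalence would be a second lookup table):

```python
# Bibi-binary (Boby Lapointe): 16 syllables for the 16 hexadecimal digits.
SYLLABLES = ["HO", "HA", "HE", "HI", "BO", "BA", "BE", "BI",
             "KO", "KA", "KE", "KI", "DO", "DA", "DE", "DI"]

def to_bibi(text):
    # High nibble then low nibble of each byte, one syllable per nibble.
    return " ".join(
        SYLLABLES[byte >> 4] + SYLLABLES[byte & 0x0F]
        for byte in text.encode("utf-8")
    )

print(to_bibi("hello"))  # BEKO BEBA BEDO BEDO BEDI
```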

 Embedded Signal

“Signal can be embedded in an object by adding textured patterns or by modifying an object’s natural texture. This form of watermarking is currently employed to track sensitive or high-value machine parts and to identify containers carrying toxic or hazardous materials. Three-dimensional objects also raise the possibility of encoding signal by arranging artistic elements in space.”

  • No one has crossed the border: the nodes are “talking”/interacting with each other.
  • Someone crosses the border: their face is snapshotted by the LAO mask; the nodes stop talking to each other, mirroring/framing/hiding/revealing the human.
  • The human crosses back: once out of “view”, the nodes are “talking” again, and they are talking about them; something about this human remains with the nodes to “talk” about.
  • inspiration: Chinese whispers; 1, 2, 3 soleil; gossip; bid/offer and how news spreads across exchange markets and is transformed into (non-)tangible assets; lemurs and their group behaviour
  • the border is a frame, a door for accessing the nodes’ world; behind the screen, the nodes project their shadows on the screen; on the other side, the viewers try to pass

 Secret conversations

UCCC: Ekman, U., Ubiquitous Computing, Complexity and Culture, 2016, edited by Ulrik Ekman, Jay David Bolter, Lily Díaz, Morten Søndergaard, Maria Engberg, Routledge, p. 125, Irene Mavrommati: “Ubiquitous computing (Ubicomp) are complex computing systems; they can also be understood as ecologies of artifacts, services, people, and infrastructure. Systems with such complexity can be approached as component-based systems. … Component-based systems are independent systems with loose coupling of interchangeable parts, which communicate via interfaces.”

AmI: http://blog.teleplanglobe.no/ami-and-internet-of-things, “The encrypted tunnels allow the private network to communicate privately over public networks.” “The way devices communicate, in a mesh network of devices.”

Create a network of artefacts interacting with each other and with people.
Split into two groups, in two different locations.

-When “alone”, the artefacts move according to patterns generated by one algorithm and previously collected data; input parameters should include time and location.
-When a presence is detected, new data are collected and new patterns (movement, sound or visuals) are generated; the information is processed, and some of it is shared with the viewer so they can modify their input.
-The two groups will evolve into separate species after a few iterations, even if they started identical.

(Uncanny space).

– using the publish-subscribe programming concept? (an MQTT sketch follows)
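
A publish-subscribe sketch with the paho-mqtt package (1.x API, pip install paho-mqtt); the broker address and topic names are assumptions:

```python
# Each node subscribes to the others' topics and publishes its own events.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")
    # a node would react here: move a motor, change a drawing pattern...

client = mqtt.Client()  # paho-mqtt 1.x constructor
client.on_message = on_message
client.connect("192.168.1.10")  # hypothetical broker on the private router
client.subscribe("nodes/#")
client.publish("nodes/ear", "presence detected")
client.loop_forever()
```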

– artefacts moving according to patterns generated by cellular automata algorithms (ref. simulating neural transmissions, Synaptic Caguamas by Lozano-Hemmer)? Make them evolve with a genetic algorithm (the more people looking at them, ref. Jessica Field)? A minimal CA sketch follows.
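
A minimal elementary cellular automaton (rule 30 here) as a pattern source; printing to the terminal stands in for driving the artefacts’ motors:

```python
# One-dimensional CA with wrap-around edges; each row is one time step.
import random

RULE = 30
WIDTH = 40

def step(cells):
    # Each cell's next state is the RULE bit indexed by its 3-cell neighbourhood.
    return [
        (RULE >> (cells[i - 1] << 2 | cells[i] << 1 | cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]

cells = [random.randint(0, 1) for _ in range(WIDTH)]
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```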

– repulsion/attraction systems, 10 PRINT, L-systems, machine learning, MaxMSP; using open data to map to a visual. Use of OSC touch on my mobile to draw. Artefacts mixing digital and analog making, embedded with sensors and OSC technology, talking to each other while receiving data from external input (open data?).


  • algorithms to work on: GA, CA, particle systems, L-systems
  • technology for input: OSC touch, Kinect, open data access, web scraping (an OSC receiver is sketched after this list)
  • technology for output: machine learning, Raspberry Pi
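
Receiving the OSC touch input, sketched with the python-osc package (pip install python-osc); the address pattern and port are assumptions that would have to match the mobile app’s settings:

```python
# Listen for OSC messages and hand them to the drawing/motor logic.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_touch(address, *args):
    print(address, args)  # e.g. map x/y values to a pattern or a motor speed

dispatcher = Dispatcher()
dispatcher.map("/touch/*", on_touch)

server = BlockingOSCUDPServer(("0.0.0.0", 5005), dispatcher)
server.serve_forever()
```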

draft scenario

  • Input is human, through sensors or data collection. Group 1’s output is visual; group 2’s output is sonic.
  • If there is no input, group 1 draws shapes and the machines of group 2 react to the drawings by making sounds.
  • If there is some external input, group 1 takes the input and changes the patterns of its drawing, sending information to group 2 in a private conversation between them; group 2’s machines respond with sounds.
  • If both groups have people, they count them; the group with more people gets the possibility to evolve and/or make more noise / expand its drawings…

The machines talk to each other in a foreign language

The machines talk to each other in a foreign language; we gave it to them and forgot about it, and they still speak it. <Check the ASCII control characters table from the old teletype days; printed below.>
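
For reference, the teletype-era control characters, printed from a hardcoded table (names as in the standard ASCII chart):

```python
# The first 32 ASCII codes: the machines' "forgotten" control vocabulary.
NAMES = ["NUL", "SOH", "STX", "ETX", "EOT", "ENQ", "ACK", "BEL",
         "BS",  "HT",  "LF",  "VT",  "FF",  "CR",  "SO",  "SI",
         "DLE", "DC1", "DC2", "DC3", "DC4", "NAK", "SYN", "ETB",
         "CAN", "EM",  "SUB", "ESC", "FS",  "GS",  "RS",  "US"]

for code, name in enumerate(NAMES):
    print(f"{code:3d}  0x{code:02X}  {name}")
```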


<Define the network of artefacts controlled via OSC touch? The network is the Mask; its artefacts are the nodes.>

 Listen to the Other via EMF

<The machines will feel the EMF on the visitor’s body and react with sounds or visuals; a serial-reading sketch follows.>
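
One hedged way to get there: a microcontroller streams EMF readings over serial, and a Python script reacts; the port name, baud rate, and one-number-per-line protocol are all assumptions:

```python
# React when the EMF reading from the sensor node crosses a threshold.
import serial  # pip install pyserial

PORT = "/dev/ttyACM0"  # hypothetical Arduino port
THRESHOLD = 300        # tune against the sensor's idle readings

with serial.Serial(PORT, 9600, timeout=1) as ser:
    while True:
        line = ser.readline().strip()
        if not line:
            continue
        value = int(line)
        if value > THRESHOLD:
            print("a body is near, EMF =", value)  # trigger sound/visuals here
```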