The robotic sculpture of Amin’s face was installed in Tehran, Iran for the duration of the exhibition, moving according to a real-time stream of orientation coordinates recorded from Saman’s head in New York City. The idea was to create a piece that involves and combines parts of both artists' lives, passions and experiences. We built a mechanical structure with three degrees of freedom, enough to mimic human head movement to some degree. Obviously, the goal was never to build a functional robot or prosthetic neck of any sort. Rather, the focus was on the notion of gaining a physical presence in a different time-space beyond our limited corporeal reach, through the means of technology. Or, put differently, on living someone else's life or dream.
As its title clearly communicates, the piece traces its roots to captivating yet controversial dichotomies: representation versus presentation, fake versus original, virtual versus actual, fiction versus fact, and so on. It also recalls the ideas of Alfred Korzybski, notably his dictum that "the map is not the territory".
Throughout the week, Saman had to wear a customized headband equipped with several electronic sensors, a microcontroller and a wireless transmitter, which transmitted his head's physical configuration instantaneously to Tehran, more than 6,000 miles away and eight hours ahead in time.
The technical scope of the installation comprised five modules:
- Design, development and fabrication of the headband.
- An Android application serving as a gateway to the Internet.
- A server listening on a UDP socket for the incoming data stream.
- A motion controller translating the orientation data into movement.
- Design and development of the mechanical structure.
The headband was equipped with a 9-DOF sensor stick, an Arduino Pro Mini 328, a 3.7 V lithium battery, a charger breakout, and a Bluetooth module. The microcontroller continuously reads the sensors and computes the head's orientation using an open-source attitude and heading reference system (AHRS) sensor-fusion algorithm; the measurements are then sent to an Android phone via Bluetooth.
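The firmware itself is not reproduced here, but the fusion step can be sketched. The snippet below is an illustrative complementary filter for a single axis, a much simpler stand-in for the full AHRS algorithm the headband ran; all names and the `alpha` constant are hypothetical.

```cpp
#include <cassert>
#include <cmath>

// Illustrative single-axis complementary filter: a simplified stand-in
// for the AHRS sensor fusion running on the Arduino.
// gyro_rate: angular rate from the gyroscope (deg/s)
// accel_angle: angle estimated from the accelerometer (deg)
// dt: seconds since the previous sample
struct ComplementaryFilter {
    double angle = 0.0;   // fused orientation estimate, degrees
    double alpha = 0.98;  // weight given to gyro integration

    double update(double gyro_rate, double accel_angle, double dt) {
        // Integrate the gyro for responsiveness, then pull gently
        // toward the accelerometer reading to cancel gyro drift.
        angle = alpha * (angle + gyro_rate * dt)
              + (1.0 - alpha) * accel_angle;
        return angle;
    }
};
```

With a stationary head (zero gyro rate), the estimate converges to the accelerometer's angle, which is exactly the drift-correction behavior sensor fusion is there to provide.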
The Android application's architecture is simple and straightforward. It creates three threads at startup. The first listens for the incoming stream over Bluetooth and writes it to a shared in-memory FIFO queue. The second continuously reads the queue for new orientation data, attaches a UTC timestamp, and sends it to the server in Tehran over UDP at a frequency of 10 Hz. The third takes advantage of the smartphone's internal GPS module: once a minute it retrieves the GPS coordinates, attaches a timestamp, and sends them to Tehran, again over UDP.
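The core of the first two threads is a producer-consumer handoff. The original is an Android app; the sketch below is a hypothetical C++ analogue of the shared FIFO and the UTC-timestamp step, using standard-library threading primitives rather than the actual app's code.

```cpp
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>

// Hypothetical C++ analogue of the app's shared FIFO between the
// Bluetooth reader thread (producer) and the UDP sender thread
// (consumer). The real implementation lives in the Android app.
struct Orientation { double yaw, pitch, roll; };

class SharedQueue {
    std::queue<Orientation> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(const Orientation& o) {          // Bluetooth thread
        { std::lock_guard<std::mutex> lk(m_); q_.push(o); }
        cv_.notify_one();
    }
    Orientation pop() {                        // UDP sender thread
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&]{ return !q_.empty(); });
        Orientation o = q_.front(); q_.pop();
        return o;
    }
};

// Each sample is stamped with UTC milliseconds before it is sent
// over UDP at 10 Hz.
long long utc_millis() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(
        system_clock::now().time_since_epoch()).count();
}
```

The condition variable lets the sender thread sleep until data arrives instead of busy-polling the queue, which matters on a battery-powered phone.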
Server in Tehran
The Node.js server in Tehran binds two UDP sockets, one for the latitude-longitude location data and one for the head-orientation coordinates. It places incoming packets in a buffer ordered by the timestamps embedded in their content, then flows them through a jitter buffer to compensate for network latency and dropped packets. Next, it applies a very basic real-time interpolation to the data to damp sensor noise and rapid or intense head motions. Finally, it feeds the processed data to the motion controller over the local loopback interface (with roughly 200 ms of latency on average).
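The two data-conditioning steps, reordering by timestamp and smoothing, can be sketched as follows. The real server is written in Node.js; this C++ version is purely illustrative, and the exponential moving average stands in for the "very basic" interpolation described above.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative sketch of the server's conditioning pipeline
// (the production code is Node.js; names here are hypothetical).
struct Sample { long long ts_ms; double yaw; };

// Ordered buffer: packets may arrive out of order over UDP, so they
// are reordered by the timestamp embedded in their content.
void reorder(std::vector<Sample>& buf) {
    std::stable_sort(buf.begin(), buf.end(),
        [](const Sample& a, const Sample& b) { return a.ts_ms < b.ts_ms; });
}

// Exponential smoothing to damp sensor noise and sudden head motions
// before the values reach the motors.
std::vector<double> smooth(const std::vector<Sample>& buf, double alpha) {
    std::vector<double> out;
    if (buf.empty()) return out;
    double y = buf.front().yaw;
    for (const Sample& s : buf) {
        y = alpha * s.yaw + (1.0 - alpha) * y;
        out.push_back(y);
    }
    return out;
}
```

A smaller `alpha` yields slower, smoother motion at the cost of added lag, the same trade-off the jitter buffer already makes at the network level.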
The motion-control system is written in C++. We used Fastech Ezi-SERVO Plus-R servo motors alongside their own drives. They are quite handy to work with and precisely programmable by communicating with their integrated controllers over Ethernet/IP.
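We cannot reproduce the vendor's API here, but the controller's final translation from a smoothed joint angle to a servo command reduces to a unit conversion. The steps-per-revolution and gear ratio below are placeholder values, not the installation's actual parameters.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical last step of the motion controller: convert a target
// joint angle (degrees) into a step count for one of the three axes.
// steps_per_rev and gear_ratio are illustrative placeholders; the
// resulting target is what gets sent to the drive's integrated
// controller.
long angle_to_steps(double angle_deg,
                    long steps_per_rev = 10000,
                    double gear_ratio = 3.0) {
    double motor_deg = angle_deg * gear_ratio;   // joint -> motor shaft
    return std::lround(motor_deg / 360.0 * static_cast<double>(steps_per_rev));
}
```

With these placeholder values, a 90-degree head turn maps to 7,500 motor steps; the real figures depend on the sculpture's gearing.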