A lamp is quite useless: it does not talk, cannot dance, and is boring. This is where I try to help.
Luci is an autonomous lamp with a webcam, a High Power LED in the lampshade, and five servo motors. She is controlled by an Odroid U3. Once switched on, she looks around checking the environment for any beings and then does what she wants.
The mechanics were the riskiest part, or so I thought. I bought a desk lamp, measured its dimensions, and tried to give it a more childlike character by making the lampshade too big and the arms too short. TurboCAD helped to model the lampshade and the joints.
Lampshade. I did not want to repeat that dusty experience with the lampshade, so 3D printing appeared very tempting, and I wanted to try it out anyway. Below you see the 3D model of the lampshade. Inside, one can see a small platform the servo motor will be mounted on (marked red). The hole in front of the platform (marked green) gives space for the axis of the servo motor mounted to the upper arm of the lamp.
The following two pictures show the construction inside Luci's head that will house the electronics. The hole at the upper part of the cover disc gives space for the camera, the hemisphere is of transparent plastic with a small LED inside, and the wooden lever is used to turn the head with a servo motor. The rollers at the bottom of the lever reduce the torque on the head-nodding servo motor by means of a spring pulling up the head. Hard to picture, but it will become clear when the final construction is ready.
I gave it to a 3D-printing shop. After paying a surprisingly high fee (120€!) I had a piece of ABS in my hands: it looked a bit ugly and plastic-like (okay, it actually is plastic), but after painting it with black and white varnish it at least resembles metal.
A third thread runs the trajectory planning algorithm, which produces a sequence of points and orientations in 3-dimensional space generated by certain patterns. When no face is detected, Luci runs a search pattern, sampling the environment until a face has been found. Then Luci carries out a pattern simulating real emotions: nodding without knowing why, pretending to listen, coming closer or retreating depending on the movements of the face. It's like real life at home.
Trajectory Planning. The implementation of the trajectory patterns is rather simple: whenever Luci runs short of queued points to approach, she invokes a pattern point generator which is parameterized with the current pattern. There, the next point of a predefined sequence of movements is generated. In the case of the pattern that interacts with a face, these are the movements described above: nodding, pretending to listen, coming closer or retreating.
The idea is stolen from Eliza (https://en.wikipedia.org/wiki/ELIZA), but coded in a robot instead of Emacs.
Some patterns with special movements are hardcoded, e.g. when Luci pushes a box off the table or looks guilty for watching dirty pictures (1:34 and 2:00 in the video).
Finally, the main loop takes the previous and next point of the trajectory and interpolates all intermediate points at 60 Hz using a cubic Bézier curve to smooth the movement. The control points of the Bézier curve are geometrically derived from the trajectory's prevprev (A) and nextnext (D) point by the rule shown in the picture. Since higher-degree polynomials tend to oscillate when control points are too far apart, I kept them at a constant distance of |BC|/3 from B resp. C.
Mass Inertia. The last step also computes the lampshade's acceleration, since the Bézier curve does not take into account that 400 grams are moved in total. As a consequence, the acceleration needs to be limited to ½ g to prevent flapping caused by the elastic construction and the backlash of the servo motors. This is done by checking whether the next position can be reached without accelerating above the limit. If not, the new position is computed by taking the current position and adding the maximum distance (based on the current speed and the maximum acceleration of ½ g) along the current speed vector. In the end, the resulting curve leaves the Bézier curve wherever it is too sharp.
Kinematics. The output of all this is a 3D point which is passed to the kinematics module that computes the angles of all servo motors (so-called inverse kinematics). This part is textbook robotics; it works as follows:
The algorithm starts with the point/orientation (= tensor) of the head's centre A. The first step is to compute positions B and C from the head orientation. This can be done by taking the point C relative to the position of A (C.point − A.point), rotating that by the orientation of the head (A.rotation), and adding it to the point A.
C := A.point + rotate(C.point-A.point, A.rotation)
Then, the base angle at F, which is the servo angle, can be computed by
F.angle := atan2(A.point.z, A.point.x)
The angles at E and D are computed by considering the triangle EDC and computing its angles with the cosine law:

E.angle := 90° + acos( (distance(E,D)² + distance(E,C)² − distance(D,C)²) / (2·distance(E,D)·distance(E,C)) )
The angle at D is computed in the same manner:

D.angle := acos( (distance(E,D)² + distance(D,C)² − distance(E,C)²) / (2·distance(E,D)·distance(D,C)) )
The last servo is C, whose angle is the orientation of the head around the z-axis minus the angle of CD to the horizon:

C.angle := A.rotation.z + 270° − acos( (C.point.y − E.point.y) / distance(C,E) )
These angles are passed via I2C to the ATmega, where the Arduino library generates a 60 Hz PWM signal for the servos.
In the beginning I was scared of the high CPU load of 3D kinematics and tried to implement it with fixed-point integers and interpolated trigonometry (I was used to a 24 MHz ATmega). What a surprise when I realized that using floats and sin/cos with no caching or table lookups had no noticeable performance impact on the Odroid U3.
Facial Recognition. The facial recognition module uses OpenCV 3.0 with Haar cascade classifiers. Although the newer LBP cascades are significantly faster, they produced many more false positives, so I decided that 10 fps with Haar cascades is sufficient. From the 2D position of the detected face, the 3D position is estimated by assuming a standard face size, which worked surprisingly well. Luci's trajectory planning module then moves towards the face if it is very close, to simulate real interest, and moves away if it violates the European intimacy distance. Tracking a face in real time was a bit tricky, since grabbing images from the video stream plus face recognition has a latency of 250 ms. So the computation of the face's 3D position needs to be done relative to the webcam's position 250 ms ago. Consequently, when Luci moves quickly and the image becomes blurry, this does not work satisfyingly, so the head is directed towards the last stable face position until Luci moves slower and following the face in real time becomes possible again.
The trajectory planning module computes the next two points in advance for calculating the Bézier curve. Consequently, the detected face position is no longer valid when the kinematics module sends a position to the servos a couple of seconds later. The solution is to permanently compute the current 3D position of the face and send that to the kinematics module, in order to change the head orientation towards the face in real time.
This project changed me not at all. Like at the office, everything just took longer than expected. Surprisingly, the software and hardware worked out quickly, but getting the construction and mechanics into a shape that worked, was not too heavy, and had properly mounted servos and springs took me a couple of weekends in the basement. The maths was definitely challenging. Getting facial recognition done was the simplest part, but it gets the most ahhs and ohhs; the guys from OpenCV did a pretty good job at making this really easy. The most fun was the trajectory planning, i.e. how Luci should move when the webcam recognizes a moving face.