The sound is produced by our facial movements. Our sound piece contains different beats,
tempos and melodies all playing together. When all the emotions and moods collide, dramatic sound bursts out. Our project presents this notion as a kind of special a cappella created mostly by facial features, merging all the different sounds and melodies together to construct a joyful, special, irregular a cappella.
The setup is somewhat tedious. We need two speakers, an iSight camera, a projector and an MPD26 MIDI pad controller. Calibrating the face detection took us a long time to find a workable distance and workable facial expressions, but it also let us set the range of how near or far we want to stand, and how dramatic we want our expressions to be. To make the piece more interesting, we added other elements such as a glass sound effect and a pre-recorded sound of whiteboard dusters. Besides face detection,
we use the MPD26 to control the Max patch, including its speed, tempo and volume.
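As a rough illustration of what this mapping does, here is a minimal Python sketch using the mido library. The CC numbers, port name and scaling ranges are assumptions for illustration, not the actual assignments in our Max patch.

    # Minimal sketch: map MPD26 knobs (MIDI control changes) to
    # tempo and volume, like our Max patch does.
    import mido

    TEMPO_CC = 1    # assumed CC number for the tempo knob
    VOLUME_CC = 7   # assumed CC number for the volume knob

    tempo_ms = 250  # metro interval in ms, like the 'metro' object
    volume = 0.5    # master volume, 0.0 to 1.0

    with mido.open_input('MPD26') as port:  # port name depends on the OS
        for msg in port:
            if msg.type == 'control_change':
                if msg.control == TEMPO_CC:
                    # scale 0-127 to a 50-500 ms metro interval
                    tempo_ms = 50 + (msg.value / 127) * 450
                elif msg.control == VOLUME_CC:
                    volume = msg.value / 127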
For our patch, we use FaceOSC to detect facial features including face scale, mouth width and jaw. The data for these features is displayed in the patch. Each data stream goes through an 'autoscale' plus 'zmap' object pair to rescale the original range of the facial feature to fit the range of the volume, intensity, frequency or note. We input two 'Ah' samples, one low-pitched and one higher-pitched, whose volume is controlled by the jaw. The mouth controls both the note, via 'mtof', and the frequency of a 'cycle~' oscillator. Face scale controls the tone ('pgmout') of a melody whose notes are stored in a 'coll' object. We also have an additional patch for the MPD26 that controls the speed of the melody through a 'metro' object, as well as the switches and volume controls for the prelude, the glass sound and the whiteboard duster sound.
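Since a Max patch is hard to quote in text, here is a hedged Python sketch (using python-osc) of the FaceOSC side: receiving /gesture/jaw and /gesture/mouth/width messages and rescaling them the way our 'autoscale' plus 'zmap' chain does. The input and output ranges below are illustrative assumptions; FaceOSC's raw values vary with the camera and distance.

    # Sketch: receive FaceOSC data and rescale it, like 'zmap' in Max.
    from pythonosc import dispatcher, osc_server

    def zmap(value, in_lo, in_hi, out_lo, out_hi):
        # clamp to the input range, then rescale, like Max's zmap object
        value = max(in_lo, min(in_hi, value))
        return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

    def on_jaw(address, jaw):
        # jaw openness -> volume of the two 'Ah' samples (0.0 to 1.0)
        print('volume:', zmap(jaw, 20.0, 26.0, 0.0, 1.0))

    def on_mouth_width(address, width):
        # mouth width -> MIDI note number, as if feeding 'mtof'
        print('note:', int(zmap(width, 10.0, 16.0, 48, 72)))

    d = dispatcher.Dispatcher()
    d.map('/gesture/jaw', on_jaw)
    d.map('/gesture/mouth/width', on_mouth_width)

    # FaceOSC sends to port 8338 by default
    osc_server.ThreadingOSCUDPServer(('127.0.0.1', 8338), d).serve_forever()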