Robotic Geese are remote-controlled goose robots that enable participants, or robotic goose drivers (aka goosers), to interact with actual geese in urban contexts. The robotic goose interface allows people to approach the birds, follow them closely, and interact in a variety of ways that would not otherwise be possible. The goose drivers can 'talk to' the geese, issuing utterances through the robotic interface: prerecorded goose 'words,' their own vocal impersonations, or other sounds (such as goose flute hunting calls). Each utterance via the robotic goose triggers the camera in the robot's head to capture 2-4 seconds of video recording the responses of the actual biological geese. These video samples upload to the public web-based goosespeak database, which the participants can annotate, e.g. "the goose was telling me to go away," "he was saying Hi." As this database of goose responses accretes, redundancy and correlations in the annotations may provide robust semantic descriptors for the library of video clips.
+ GOOSE = AUDIBLE: The sounds the geese make are channeled from the mikes embedded in Leda to the goose cockpit in the gallery, where they are matched against similar sounds. If a translation exists, the goose call is translated into human language.
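The matching step above could work as a nearest-neighbor lookup: reduce the incoming call to a few acoustic features, find the closest entry in a library of known calls, and return its human-language gloss if it is close enough. This is a hypothetical sketch; the feature set (pitch, duration, loudness), the library entries, and the distance threshold are all assumptions, not the project's actual matcher.

```python
import math

# Assumed library of known calls: (pitch_hz, duration_s, loudness) -> gloss.
CALL_LIBRARY = {
    (620.0, 0.4, 0.8): "go away",
    (450.0, 1.2, 0.5): "hello",
}

def translate(call, max_distance=50.0):
    """Return the gloss of the nearest library call, or None if no
    translation exists within the distance threshold."""
    best_gloss, best_dist = None, float("inf")
    for features, gloss in CALL_LIBRARY.items():
        dist = math.dist(call, features)
        if dist < best_dist:
            best_gloss, best_dist = gloss, dist
    return best_gloss if best_dist <= max_distance else None

print(translate((610.0, 0.5, 0.7)))  # near the 'go away' entry
print(translate((100.0, 5.0, 0.0)))  # nothing close -> None
```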
+ GOOSE = VISIBLE: The geese can see you goosing inside the gallery, and they can trigger the camera in Leda's head, pushing images to you. Geese switch the camera on by uttering something in close proximity to Leda, or by pecking at Leda (mike-triggered).
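The mike trigger described here amounts to a simple level threshold: a call or a peck close to Leda pushes the mike level over the line and switches the head camera on for a short capture. A sketch under stated assumptions; the threshold value, the 3-second capture, and the `capture_video` stand-in are all hypothetical.

```python
PECK_THRESHOLD = 0.6  # normalized mike level; assumed value

def capture_video(seconds):
    """Stand-in for the camera in Leda's head."""
    return f"{seconds}s clip"

def on_mike_sample(level):
    """Called for each mike level reading; a loud call or peck near
    Leda crosses the threshold and triggers a short capture."""
    if level >= PECK_THRESHOLD:
        return capture_video(3)  # within the 2-4 second range in the text
    return None

print(on_mike_sample(0.9))  # peck at Leda -> short clip
print(on_mike_sample(0.1))  # ambient noise -> no trigger
```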
+ GOOSE = ACTIONS:
+ PEOPLE = ACTIONS:
+ PEOPLE = VERBAL:
+ PEOPLE = VISIBLE: You can turn on Leda's video camera and get a close-up view of the interaction; you can save that piece of video to a database with an annotation of why it was interesting and what you thought the interaction was about. [ see Learning Goose ]