Sunday, December 14, 2008

ACE08 Highlights of day 2

The highlights for me on the second day of ACE were in the technology track in the afternoon:

Aaron Levisohn presented his system BeatBender and showed how it can generate rhythms. The application for trying out different rule sets for rhythm generation seemed very tempting to play with. In the Q&A I asked whether he thought it would be feasible to use it in another system. It would be interesting to feed BeatBender real-time data from WoM and the MM and see what rule sets could be used to help represent states of mind. (It would also be interesting to see how the principle of subsumption architecture meets the spreading activation network, as a mental picture. Different node types could be connected to different hierarchical levels of the subsumption architecture, maybe according to decay rate.)
IMG_7379
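The talk didn't spell out BeatBender's actual rules, but the general idea of rule-based rhythm generation is easy to sketch. Here is a toy illustration of my own (the rules and names are mine, not from the paper): each rule gets a vote on whether a step is a hit, in priority order, a little like layers in a subsumption architecture.

```python
import random

def generate_rhythm(steps=16, rules=None, seed=None):
    """Fill a step grid by applying simple rules in priority order.

    A toy illustration of rule-based rhythm generation, not
    BeatBender's actual algorithm. Each rule maps a step index and
    the pattern so far to True (hit), False (rest), or None (no opinion).
    """
    rng = random.Random(seed)
    rules = rules or [
        lambda i, p: True if i % 4 == 0 else None,          # downbeats always hit
        lambda i, p: False if p[-1:] == [True] else None,   # no two hits in a row
        lambda i, p: True if rng.random() < 0.3 else None,  # sparse random fills
    ]
    pattern = []
    for i in range(steps):
        hit = False
        for rule in rules:
            decision = rule(i, pattern)
            if decision is not None:  # first rule with an opinion wins
                hit = decision
                break
        pattern.append(hit)
    return pattern

pattern = generate_rhythm(seed=42)
print("".join("x" if h else "." for h in pattern))
```

Swapping in different rule lists is exactly the kind of experimentation the demo application invited.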


The second highlight for me was Philippe Pasquier, who, like Aaron Levisohn from Simon Fraser University, presented “Shadow Agent”. The user stands in a room and places her feet on the feet of the shadow. The user's movement is assessed. The shadow follows the user, as shadows do, but then starts to act independently, using BDI-style decision-making about what to do. A plan is chosen from a database.
Semi Autonomous Avatar in extremis!
Shadow Agent: BDI + plan selections
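The talk only outlined the BDI + plan-selection idea, but the core loop is easy to sketch. This is a minimal guess at the shape of it (all names, beliefs, and plans here are illustrative, not from the Shadow Agent paper): beliefs are revised from sensing, and the first plan in the database whose precondition holds is selected.

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    name: str
    precondition: callable  # beliefs -> bool
    steps: list

@dataclass
class ShadowAgent:
    beliefs: dict = field(default_factory=dict)
    plan_db: list = field(default_factory=list)

    def perceive(self, observation):
        self.beliefs.update(observation)  # revise beliefs from sensing

    def deliberate(self):
        # pick the first applicable plan from the database
        for plan in self.plan_db:
            if plan.precondition(self.beliefs):
                return plan
        return None

agent = ShadowAgent(plan_db=[
    Plan("mirror", lambda b: b.get("user_moving", False), ["track", "follow"]),
    Plan("act_independently", lambda b: b.get("user_idle_s", 0) > 5,
         ["detach", "improvise"]),
])
agent.perceive({"user_moving": False, "user_idle_s": 8})
print(agent.deliberate().name)  # selects "act_independently"
```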


Other presentations I enjoyed that day were Peggy Weil’s and Nonny de la Peña’s “Avatar Mediated Cinema”, Bill Kapralos’s “Dimensionality Reduced HRTFs: A Comparative Study” and Sittapong Settapat’s and Michiko Ohkura’s “An Alpha-Activity-Based Binaural Beat Sound Entrainment System using Arousal State Model”. I want to play with binaural beat sounds too!

Brain Wave Entrainment

Wednesday, December 10, 2008

ACE08 - Entertainment in Dining

Chief manager Shuntaro Yamazaki (NEC Central Research Laboratory) showed in his keynote “Information and Communication Technologies for Food and Entertainment” an application that can be used as a social icebreaker at large gatherings, “Active Avatar”.
Entertainment in Dining - Keynote at ACE 2008
People can, instead of shyly shuffling around in corners, get an idea of another person’s hobbies and other data, which makes it easier to start a conversation. (Each attendee has a location tag containing personalized data.) Seeing the presentation, I also think that the sheer joy of being able to explore such a system together would be a major icebreaker in itself.

Shuntaro Yamazaki also showed the world’s first sommelier robot. It can look at foods such as wine and cheese and give information about what they are and what nutrients they contain. (It looks at texture, color, etc.) The robot itself looked familiar to me. I thought I had seen it at ACE 2006 in Hollywood, so I went through my pictures from that event. Ah yes! The child-caring robot NEC showed! It was definitely lovable!
Entertainment in Dining - Keynote at ACE 2008 The sweetest robot ever



Before going to dinner we got a demonstration by Kousei Kitayama (Iwasaki Co., Ltd.) of how food samples are made. You know, the ones displayed outside restaurants in Japan, which are so practical since a non-Japanese speaker can point at them and know exactly what to get! I have been wondering for a long time how they are made, and it is quite a craft. Here are pictures of how to make lettuce. It is all plastic, and the temperature has to be exact.

How to make food samples for display
Lettuce, making a food sample IMG_7172

ACE08 Human Entrained Embodied Interaction and Communication

The first day of the Advances in Computer Entertainment 2008 Conference in Yokohama was dedicated to keynotes and to a dinner, so we attendees got a chance to socialize a little.

Keynote by professor Tomio Watanabe - Human Entrained embodied interaction

Professor Tomio Watanabe (Okayama Prefectural University) gave in his keynote “Human-Entrained Embodied Interaction and Communication Technology” an exposé of a number of interesting projects. Several of them took body language as their starting point, particularly the act of nodding in agreement.
Nodding dolls were placed in different environments to make the person performing a communication act feel more secure. They were placed among students in a class, and in studios where a radio DJ could get some approving body language around her in an environment where she otherwise talks into the void. In another example, nodding sunflowers were projected at the back of a classroom to support a teacher. Small nodding artificial flowers are also produced, now available in toy stores all around Japan (I saw them in shops the day before I left). A chair was also tried out that rocks in a way that makes the person sitting in it unable to help nodding. The rhythm of these nods is steered by sound input according to speech patterns. The project I could see an immediate use for was InterChat, where the rhythm of a person's typing governs the body language of the avatar. I'd love to try that out.
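The talk didn't detail how InterChat maps typing to body language, but here is one guess at the idea as a sketch of my own: treat pauses between keystroke bursts like pauses in speech, and schedule a nod at each phrase-like boundary.

```python
def nod_times(keystroke_times, pause_threshold=0.6):
    """Return timestamps at which the avatar should nod.

    keystroke_times: sorted timestamps (seconds) of key presses.
    A nod is triggered when a pause longer than pause_threshold
    follows a burst of typing, i.e. at phrase-like boundaries.
    (This is an illustrative guess, not InterChat's actual method.)
    """
    nods = []
    for prev, cur in zip(keystroke_times, keystroke_times[1:]):
        if cur - prev > pause_threshold:
            nods.append(prev)             # nod as the burst ends
    if keystroke_times:
        nods.append(keystroke_times[-1])  # final nod at end of typing
    return nods

keys = [0.0, 0.15, 0.3, 0.45, 1.5, 1.62, 1.8, 3.2]
print(nod_times(keys))  # → [0.45, 1.8, 3.2]
```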
I was picturing how it would be to have a little group of dolls in the classroom when I lecture… I wonder how the students would feel about that… I think we might all feel a bit silly, but it could be fun!

IMG_7102 Interaction Model InterRobot

Professor Watanabe could show that people in an environment where the group is enthusiastic are more susceptible to the message being propagated. This reminded me of a course I took many years ago in social psychology, where I learned how easily manipulated we humans are. I'm not sure whether Watanabe's tests distinguished changes in opinion from how much knowledge was absorbed with and without nodding dolls, but that would be interesting to learn.
I was impressed by the range of experiments that had been conducted on the nodding gesture. Way to go!
This is a link to the pictures of the slides I took while listening.