The Moodroom adds dynamics to the often static spaces of virtual reality and investigates the relationship between subjective emotions and a virtual environment. The core concept of the project is to analyze the emotional state of a VR user, based on a two-dimensional model composed of emotional valence and arousal, through input signals such as motion tracking, speech recognition, and physiological features, and to transfer this state to the virtual environment, which reacts to the detected emotions through corresponding features, including the geometry of the surfaces, the size of the room, lighting, and color.

The most exciting finding from the research was that forms and shapes containing less information, such as fluent, symmetrical, and curved contours, are easier to comprehend and recognize and tend to evoke a more positive response. Complex configurations with random surfaces and high angulation, on the other hand, are more likely to be perceived as harmful and therefore provoke a negative response. This insight was essential to bringing The Moodroom into existence.

The concept was implemented in a simplified form in which only motion tracking and voice recognition serve as input parameters. The output is the most probable emotion the user is feeling, which in this initial prototype is one of Sadness, Fear, Anger, or Joy. These emotions are passed directly to the game engine Unity and from there to components that morph the surfaces of a cubic room, all in real time.
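The mapping from the two-dimensional valence-arousal model to the four prototype emotions can be sketched as a nearest-centroid lookup. This is a minimal illustration, not the project's actual classifier: the centroid coordinates below are assumptions placed roughly according to the circumplex model of affect, and the real prototype derives its estimate from motion tracking and voice recognition rather than from raw valence/arousal values.

```python
import math

# Illustrative centroids in the valence-arousal plane, both axes in [-1, 1].
# These coordinates are assumptions for this sketch, not values from the project.
EMOTION_CENTROIDS = {
    "Joy":     (0.8,  0.6),   # positive valence, high arousal
    "Anger":   (-0.7, 0.8),   # negative valence, very high arousal
    "Fear":    (-0.6, 0.4),   # negative valence, moderate arousal
    "Sadness": (-0.7, -0.6),  # negative valence, low arousal
}

def classify_emotion(valence: float, arousal: float) -> str:
    """Return the most probable of the four prototype emotions,
    chosen by Euclidean distance in the 2D valence-arousal space."""
    return min(
        EMOTION_CENTROIDS,
        key=lambda e: math.dist((valence, arousal), EMOTION_CENTROIDS[e]),
    )

print(classify_emotion(0.9, 0.7))    # clearly positive, aroused
print(classify_emotion(-0.8, -0.7))  # clearly negative, calm
```

In the actual prototype, the resulting label would be handed to Unity components that morph the room's surfaces each frame, e.g. blending toward curved, symmetrical geometry for Joy and toward angular, irregular geometry for Fear or Anger.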
Matrix of Surfaces
Inside The Moodroom