The Moodroom

Botond Fülöp
Xuanlin Chen
Professur Architekturinformatik
Project Work
Virtual reality remains an active topic of research and has been applied in many fields. However, when we look closely at what virtual reality is capable of at present, we notice a major drawback that keeps it from being an organic experience: it does not take our emotions into account. Considering emotions in a virtual environment is a non-trivial task. Emotions are abstract, subjective, dependent on many factors, and difficult to decode. Nonetheless, changing a space alters one's visual perception of it and, even if only slightly, influences a person's mood. Based on this observation, The Moodroom project addresses the following question: How can we turn the visually static surroundings of an immersive VR experience into something that can reflect or alter a person's feelings?
The Moodroom attempts to bring more dynamism to the often static spaces of virtual reality and investigates the relationship between subjective emotions and a virtual environment. The central concept of the project is to analyze the emotional state of a VR user, based on a two-dimensional model of emotional valence and arousal, from input signals such as motion tracking, speech recognition, and physiological features, and then to transfer this state to the virtual environment, which reacts to the detected emotions through corresponding features, including the geometry of the surfaces, the size of the room, lighting, and color.

The most notable finding from our research was that forms and shapes carrying less information, such as fluent, symmetrical, and curved contours, are easier to comprehend and recognize and are more likely to evoke a positive response. Conversely, complex configurations with random surfaces and high angulation are more likely to be perceived as threatening and therefore evoke a negative response. This finding was essential in bringing The Moodroom into existence.

The concept was implemented in a simplified form in which only motion tracking and voice recognition served as input parameters. The output is the most probable emotion the user is feeling, which in this initial prototype is one of Sadness, Fear, Anger, and Joy. These emotions were passed directly to the game engine Unity and from there to components that morph the surfaces of a cubic room. All of this happens in real time.
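The text does not specify how a point in the valence-arousal plane is mapped to the four discrete emotions. One plausible approach is a nearest-prototype lookup, sketched below; the prototype coordinates are illustrative assumptions, not values taken from the project.

import math

# Illustrative prototype coordinates in the valence-arousal plane,
# each axis normalized to [-1, 1]. The actual coordinates used by
# The Moodroom are not stated in the text; these are assumptions.
EMOTION_PROTOTYPES = {
    "Joy":     ( 0.8,  0.6),   # positive valence, elevated arousal
    "Anger":   (-0.7,  0.8),   # negative valence, high arousal
    "Fear":    (-0.6,  0.5),   # negative valence, raised arousal
    "Sadness": (-0.7, -0.6),   # negative valence, low arousal
}

def classify_emotion(valence, arousal):
    """Return the prototype emotion closest to the measured state."""
    return min(
        EMOTION_PROTOTYPES,
        key=lambda name: math.dist((valence, arousal), EMOTION_PROTOTYPES[name]),
    )

# Example: slightly negative valence with very high arousal -> "Anger".
print(classify_emotion(-0.4, 0.9))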
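In the same spirit, the following sketch illustrates how a detected state could drive the morphing of a wall surface, following the finding above: positive valence blends toward a smooth, symmetrical relief, negative valence toward a random, highly angular one, and arousal scales the amplitude. The function name, parameter ranges, and blending rule are illustrative assumptions; the project itself performs the morphing on mesh components inside Unity.

import math
import random

def morph_heights(n, valence, arousal, seed=0):
    """Displacement heights for an n x n grid of points on one wall.

    valence and arousal are assumed to lie in [-1, 1]. Positive
    valence favors a smooth sinusoidal relief, negative valence
    favors jagged random noise, and arousal scales the amplitude.
    """
    rng = random.Random(seed)
    amplitude = 0.1 + 0.4 * (arousal + 1) / 2   # assumed range [0.1, 0.5]
    smooth_weight = (valence + 1) / 2           # 1 = fully smooth, 0 = fully jagged
    heights = []
    for i in range(n):
        row = []
        for j in range(n):
            smooth = math.sin(math.pi * i / (n - 1)) * math.sin(math.pi * j / (n - 1))
            jagged = rng.uniform(-1.0, 1.0)
            row.append(amplitude * (smooth_weight * smooth + (1 - smooth_weight) * jagged))
        heights.append(row)
    return heights

# Example: a fearful state yields a jagged, fairly high-amplitude surface.
wall = morph_heights(16, valence=-0.6, arousal=0.5)

Written back to the vertices of a wall mesh every frame, heights of this kind are how a component could animate the room in real time.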