The Experience Lab was a project I started in January 2015 to facilitate research on interactive human-computer systems. The project was funded by the GamePipe lab at USC (many thanks to Dr. Zyda) and received countless hours of love from 29 volunteer directed- and applied-research students from various disciplines at USC (many thanks to all of them!).
Why this project? In short, we want to be able to study human experiences more closely, especially those that affect us internally. A great example is the experience of "fun." Millions of us engage with video games daily for different reasons (entertainment, passing the time, escapism, socializing, decompression, addiction, etc.), but there are commonalities in the way we engage with these games too: for example, we attend to them and we interact with them. Let's run with this example...
Why do games hold our interest? Professional game designers have been iterating on video game design concepts since games first gained popularity in the '70s. Some design concepts have proven over time to consistently provide quality entertainment, while others have not. Games and entertainment are complex. Through experience (and lots of testing), designers have become better at anticipating which designs are likely to be deemed "fun" by popular opinion, but little is understood about how different people actually experience fun. Game designers today rely heavily on personal experience and intuition (in addition to feedback from early prototype play testing) to create "fun" experiences that "feel good," but they would be hard-pressed to explain exactly what makes an overall game experience "good."
Quantifying and understanding what makes a particular game "good" or "fun" is one of the many kinds of questions the Experience Lab was designed to help answer. The lab enables closer scientific study of real-time interactions between humans and computers (not just games) through a wide array of instrumentation that captures a person's physiological state while they interact with the computer, as well as all of the inputs and outputs of the machine (e.g., mouse movements, screen display). The idea is for recordings of several participants interacting with the same stimulus (e.g., a video game) to form a rich, multimodal time-series data set ripe for modern data science inquiry, time-series modeling, and machine learning methods.
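To make the "multimodal time series" idea a bit more concrete, here is a minimal sketch (Python with pandas, not the lab's actual pipeline) of how two sensor streams recorded at different rates could be aligned on a shared timeline. The column names and sample values are invented purely for illustration.

```python
import pandas as pd

# Each sensor stream is a timestamped series, sampled at its own rate.
gsr = pd.DataFrame({
    "t": pd.to_datetime([0, 250, 500, 750, 1000], unit="ms"),
    "gsr_uS": [1.02, 1.05, 1.10, 1.08, 1.04],        # skin conductance (microsiemens)
})
hr = pd.DataFrame({
    "t": pd.to_datetime([0, 400, 800, 1200], unit="ms"),
    "heart_rate_bpm": [72, 74, 75, 73],
})

# Align the slower stream to the faster one by nearest preceding timestamp,
# producing one multimodal table per recording session.
aligned = pd.merge_asof(gsr.sort_values("t"), hr.sort_values("t"), on="t")
print(aligned)
```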
In order to facilitate natural and distraction-free human-computer interactions, we built the lab in a sound-resistant room and blocked out light from the windows. We even decided to paint the walls black to minimize the light reflections from the computer monitor. Here are some before and after snapshots of the lab:
We did a lot of sensor research to figure out which devices were comfortable enough to wear for an hour while still offering good data resolution and quality. One signal I wanted the lab to capture was galvanic skin response (GSR). Typically GSR is measured at the fingertips, but that placement wouldn't work for people using a keyboard or mouse. Some consumer and research devices claimed to measure GSR from the wrist, but I was skeptical at the time, so I built a prototype wristband GSR sensor. After I verified that the signal strength under the wrist was sufficient, we invested in a research device (for the curious: the Empatica E4).
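For the curious, the signal-strength check was conceptually simple. Here is a hypothetical sketch of the conversion involved, assuming the prototype's electrodes sit in a voltage divider with a known reference resistor read by a 10-bit ADC; the component values and readings below are made up and are not the actual prototype's.

```python
# Hypothetical sketch: convert raw wristband readings to skin conductance so the
# signal level can be compared against typical fingertip values.
V_SUPPLY = 3.3       # volts across the divider (assumed)
R_REF = 100_000.0    # ohms, reference resistor in series with the skin (assumed)
ADC_MAX = 1023       # full-scale value of a 10-bit ADC

def adc_to_conductance_us(adc_value: int) -> float:
    """Convert a raw ADC reading (voltage across R_REF) to skin conductance in microsiemens."""
    v = (adc_value / ADC_MAX) * V_SUPPLY
    if v <= 0.0 or v >= V_SUPPLY:
        return 0.0                               # open circuit or saturated reading
    r_skin = R_REF * (V_SUPPLY - v) / v          # voltage-divider equation
    return 1e6 / r_skin                          # 1/ohms -> microsiemens

# In practice readings would stream from the prototype over a serial link;
# here are a few representative raw values.
for raw in (180, 210, 240):
    print(f"{raw:4d} -> {adc_to_conductance_us(raw):.2f} uS")
```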
After extensive research and testing of various sensors, we acquired consumer and research devices to capture heart rate (via a PPG sensor), GSR, electroencephalograms (EEG), eye gaze location, front-facing video of the user, keyboard and mouse input (anonymized), and the computer's audio and video output. An army of graduate students helped set up the lab's data collection software and, eventually, the Apache Spark and Mahout back-end data storage and machine learning systems. The lab software was built on top of the Robot Operating System (ROS) because it made near real-time asynchronous capture and storage of sensor data a breeze.
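To give a flavor of what "built on top of ROS" means in practice, here is a minimal sketch of the kind of publisher node each sensor could get. The topic name, sample rate, message type, and driver call are hypothetical, not the lab's actual code.

```python
#!/usr/bin/env python
# Minimal sketch of a per-sensor ROS publisher node (hypothetical names throughout).
import rospy
from std_msgs.msg import Float32

def read_gsr_sample() -> float:
    # Placeholder for the device driver; returns a dummy value here.
    return 1.0

def publish_gsr_stream():
    rospy.init_node("gsr_publisher")
    pub = rospy.Publisher("/sensors/gsr", Float32, queue_size=10)
    rate = rospy.Rate(4)  # e.g., 4 Hz samples from the wristband
    while not rospy.is_shutdown():
        pub.publish(Float32(data=read_gsr_sample()))
        rate.sleep()

if __name__ == "__main__":
    try:
        publish_gsr_stream()
    except rospy.ROSInterruptException:
        pass
```

With one small node like this per sensor, a standard `rosbag record` of all the topics stores every modality with timestamps, which is largely why ROS made the capture side so painless.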
The video below shows our first test run with all of the sensors. The eye gaze is displayed as a green dot overlaid on the video in the upper-left corner; the dot turns red when the gaze moves outside of the screen's view frustum. The front-facing camera eventually moved closer to the computer user, too.
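The overlay logic itself is straightforward. Here is a simplified sketch (hypothetical, using OpenCV; not the lab's actual visualization code) of drawing the gaze point on each frame in green when it falls inside the screen bounds and red otherwise.

```python
import cv2
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080  # assumed resolution of the stimulus display

def draw_gaze(frame: np.ndarray, gaze_x: float, gaze_y: float) -> np.ndarray:
    """Overlay the gaze point on a video frame: green on-screen, red off-screen."""
    on_screen = 0 <= gaze_x < SCREEN_W and 0 <= gaze_y < SCREEN_H
    color = (0, 255, 0) if on_screen else (0, 0, 255)  # BGR: green vs. red
    # Clamp the marker so off-screen gaze is still visible at the frame edge.
    x = int(np.clip(gaze_x, 0, frame.shape[1] - 1))
    y = int(np.clip(gaze_y, 0, frame.shape[0] - 1))
    cv2.circle(frame, (x, y), 12, color, -1)
    return frame

# Example: a blank frame with an off-screen gaze sample (drawn in red at the edge).
frame = np.zeros((SCREEN_H, SCREEN_W, 3), dtype=np.uint8)
draw_gaze(frame, gaze_x=-50, gaze_y=300)
```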
That's it for the lab build process. If you made it this far, thanks for reading, and if you have any questions or comments, I'd love to hear your thoughts. We first used the lab to study the engagement levels of distance-learning students; the results appear in this publication.
As a parting thought, I want to mention that understanding the dynamics of "fun," or any kind of real-time interactive experience, isn't as easy as recording all of this data and throwing it into a big machine learning system. We have to be able to make sense of the patterns in the data in order to generalize to other types of activities and to connect our discoveries to what we already know. We know that games have to be challenging and that games require some sort of skill, so to generalize findings across games we would need to understand how this data could be used to measure physical and mental load and how to quantify skill. Neither of these problems is easy either. Some day we'll have answers to enough of these kinds of questions and may be able to build closed-loop real-time feedback systems that enable the computer's software to evolve alongside its user to optimize his or her overall experience. That will be an exciting day!