This work looks at measuring how engaged a user is with their music retrieval, and how a retrieval interface can adapt to that engagement. It will be presented at MobileHCI in Copenhagen and has received a best paper honourable mention. The resulting prototype music system let users navigate music by mood at a range of engagement levels, and inspired the Bang & Olufsen BeoSound Moment's MoodWheel interface.
Mark McGill and I wanted to tackle some of the common issues people have with virtual reality headsets. We ran a survey, which showed that one of the key issues to address was letting people see their keyboard and other people in the room. We stripped down a webcam, mounted it on an Oculus Rift, and tried out a variety of ways of blending the real and virtual worlds. Blending driven by user engagement, such as when users reach for the keyboard, worked best: people actually managed to type fairly well! Read the paper.
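The paper has the details of how we did the blending; as a purely illustrative sketch (not the code from the study), the core idea can be boiled down to an engagement estimate in [0, 1] driving an alpha blend between the webcam frame and the rendered scene. The `blend_frames` helper below is hypothetical:

```python
import numpy as np

def blend_frames(camera, virtual, engagement):
    """Linearly blend a webcam frame into a rendered VR frame.

    `engagement` is a value in [0, 1]: 0 shows only the virtual scene,
    1 shows only reality (e.g. ramped up as the user's hands move
    towards the keyboard). Both frames are HxWx3 uint8 arrays of the
    same size.
    """
    alpha = float(np.clip(engagement, 0.0, 1.0))
    mixed = alpha * camera.astype(np.float32) + (1.0 - alpha) * virtual.astype(np.float32)
    return mixed.astype(np.uint8)
```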
Having scraped a lot of playlists and listening histories from last.fm, I was keen to see which music features were associated with the way users made playlists, or how their listening changed over time. This user-centred approach to feature selection and system evaluation was published at ISMIR 2014. Some example code and interactive demos follow. Read the paper.
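As a minimal sketch of the scraping side (not the actual code used for the paper), listening histories can be pulled from last.fm's public API via its documented user.getRecentTracks method; the API key and username below are placeholders:

```python
import requests

API_URL = "http://ws.audioscrobbler.com/2.0/"
API_KEY = "YOUR_LASTFM_API_KEY"  # placeholder: register for a key at last.fm/api

def recent_tracks(user, pages=2, per_page=200):
    """Fetch a user's listening history via last.fm's user.getRecentTracks method."""
    tracks = []
    for page in range(1, pages + 1):
        resp = requests.get(API_URL, params={
            "method": "user.getrecenttracks",
            "user": user,
            "api_key": API_KEY,
            "format": "json",
            "limit": per_page,
            "page": page,
        })
        resp.raise_for_status()
        for t in resp.json()["recenttracks"]["track"]:
            tracks.append({
                "artist": t["artist"]["#text"],
                "title": t["name"],
                # "now playing" entries carry no timestamp
                "played_at": t.get("date", {}).get("uts"),
            })
    return tracks

if __name__ == "__main__":
    history = recent_tracks("some_username")
    print(len(history), "plays fetched")
```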
The Spritz speed-reading interface caught a lot of attention recently. It enables fast reading with good comprehension using a very limited display area (a single word at a time). Their demo is impressive, but I wanted to try it out with my own text, so I hacked up a demo using their approach that you can paste any text into...
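Not the web demo itself, but a minimal terminal sketch of the same rapid-serial-presentation idea, assuming a fixed words-per-minute rate and a Spritz-style pivot letter slightly left of centre:

```python
import sys
import time

def pivot_index(word):
    """Rough 'optimal recognition point': a letter slightly left of centre."""
    return min(len(word) - 1, max(0, (len(word) - 1) // 3))

def rsvp(text, wpm=400):
    """Flash one word at a time, overwriting the previous one, at roughly `wpm` words per minute."""
    delay = 60.0 / wpm
    for word in text.split():
        i = pivot_index(word)
        # bracket the pivot letter so the eye has a fixed focal point
        framed = word[:i] + "[" + word[i] + "]" + word[i + 1:]
        sys.stdout.write("\r" + framed.center(30))
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")

if __name__ == "__main__":
    rsvp("Paste any text here and it will be flashed one word at a time.", wpm=300)
```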