Projects

  • iScream! – Singing ice cream
  • Feed the food monsters – Augmented chewing
  • Arm-A-Dine – Robotic commensality
  • You better eat to survive! – Food in VR
  • FoBo – Dining companion
  • Shelfie – Design framework
  • Material Representations – Doctoral thesis
  • Fantibles – 3D printing of cricket & tweets
  • EdiPulse – 3D printing chocolates
  • TastyBeats – Sports drinks spectacle
  • SweatAtoms – Exercise memories in plastic
  • dBricks – Tangible design framework
  • DISCOVIR – Virtual Labs web framework
  • W.Y.S.W.Y.E. – Preventing shoulder surfing
  • Human Computation – Masters thesis
  • Marasim – Memorable graphical passwords
  • NAPTune – Pictorial PINs
  • PhotoSense – Emergent tagging
  • GoFish – Game with a purpose
  • iCAPTCHA – Image CAPTCHA
  • L.O.T.R.O.I. – Privacy on social networks

Playful gustosonic experiences

iScream!

Abstract

iScream! is a playful gustosonic project, started and led by my PhD student, Yan Wang. This project aims to contribute to our understanding of the role of sound in human-food interaction. Here is the abstract of the work.

Although sound plays an important role in eating, we find that its role in designing playful interactive experiences around eating has been underexplored. We present "iScream!", a novel gustosonic experience that can play four different categories of sounds (generative, fantasy, bodily and food sounds) as a result of eating ice cream. We conducted a study with 32 participants to articulate how each of these sound categories sits across the four dimensions of playfulness (stimulation, momentary, pragmatic and negative). Our results show that participants found "food sounds" to be the most playful category, offering stimulation as well as momentary and pragmatic playfulness, while bodily sounds were the least playful. From the study findings, we offer design implications for designers interested in creating experiences involving food, play, and sound. Ultimately, we aim to contribute to a deeper understanding of the design of playful eating experiences.
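
For illustration only, here is a minimal Python sketch (not the actual iScream! implementation) of the interaction loop the abstract describes: each sensed lick or bite triggers a random clip from the currently selected sound category. The folder layout, file format and the way eating is sensed are all assumptions.

```python
import random
from pathlib import Path
from typing import Optional

# The four sound categories named in the abstract.
SOUND_CATEGORIES = ["generative", "fantasy", "bodily", "food"]

def pick_clip(category: str, library_root: Path = Path("sounds")) -> Optional[Path]:
    """Return a random clip for the given category, if any clips exist on disk."""
    clips = sorted((library_root / category).glob("*.wav"))
    return random.choice(clips) if clips else None

def on_eating_event(category: str) -> None:
    """Called whenever the (unspecified) sensor reports a lick or a bite."""
    clip = pick_clip(category)
    if clip is None:
        print(f"no clips available for category '{category}'")
    else:
        print(f"playing {clip}")  # a real system would hand this path to an audio player

if __name__ == "__main__":
    for _ in range(3):  # simulate three sensed bites with the 'food sounds' category selected
        on_eating_event("food")
```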

Publications

For more details, please refer to the following papers:
  1. Wang, Y., Li, Z., Jarvis, R., Khot, R., Mueller, F. The Singing Carrot: Designing Playful Experiences with Food Sounds. CHI PLAY 2018. Work-in-progress. ACM.

Augmented chewing

Feed the Food Monsters!

Abstract

The Feed the Food Monsters! project was done in collaboration with two visiting bachelor's students, Eshita Arza and Harshitha Kurra. This project aims to contribute a playful interface for improving chewing behaviour in social settings. Here is the abstract of the work.

Chewing is crucial for digestion and, as mindful eating suggests, it is important that we do it properly. Despite this, not many people chew their food properly. To help facilitate proper chewing, we developed "Feed the Food Monsters!", a two-player Augmented Reality game that aims to engage co-diners in proper chewing using their bodies as play. This game draws inspiration from Tetris and allows diners to view each other’s chewing behavior through a playful interface that is overlaid on their torso. In this game, players wear HMDs and guide each other to chew properly in order to keep the food monsters quiet. Besides supporting chewing in a social dining setting, this game also makes a contribution to AR-based games where chewing actions are mapped to game actions. Ultimately, with this work, we hope to engage people in the practice of proper chewing in a fun and pleasurable way.
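
The abstract states that chewing actions are mapped to game actions that keep the food monsters quiet. Purely as an illustration of such a mapping (the game's actual logic, thresholds and chew sensing are not described here), the Python sketch below lets steady chewing lower a monster's agitation while pauses let it rise again.

```python
from dataclasses import dataclass

@dataclass
class Monster:
    agitation: float = 1.0  # 1.0 = fully agitated, 0.0 = quiet

    def update(self, chewed_this_tick: bool) -> None:
        # Steady chewing calms the monster; pausing lets it grow restless again.
        # The step sizes are arbitrary illustration values.
        if chewed_this_tick:
            self.agitation = max(0.0, self.agitation - 0.15)
        else:
            self.agitation = min(1.0, self.agitation + 0.05)

def play_round(chew_ticks):
    """chew_ticks: one boolean per game tick (True = a chew was detected)."""
    monster = Monster()
    for tick, chewed in enumerate(chew_ticks):
        monster.update(chewed)
        print(f"tick {tick:2d}  chewed={chewed}  agitation={monster.agitation:.2f}")

if __name__ == "__main__":
    play_round([True] * 10 + [False] * 5)  # steady chewing followed by a pause
```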

Publications

For more details, please refer to the following papers:
  1. Arza, E., Kurra, H., Khot, R., Mueller, F. Feed the Food Monsters!: Helping Co-diners Chew their Food Better with Augmented Reality. CHI PLAY 2018. Work-in-progress. ACM.

Check out the video for more details:

Robotic commensality

Arm-A-Dine

Abstract

Arm-A-Dine is a project done in collaboration with a visiting bachelor's student, Yash Mehta. This project aims to contribute a playful robotic interface for social dining settings. Here is the abstract of the work.

"Arm-A-Dine!" is our design exploration of a playful social dining system that focuses on a shared feeding experience. Rather than considering technology as a distraction during eating, we believe it can play a positive role if we design it right. As a step towards this, we illustrated how interactive technology can facilitate a playful social eating experience. In this experience, all three arms (the person’s own two arms and the “third” robotic arm) are used for feeding oneself and the other person. The robotic arm (third arm) is attached to the body via a vest. We playfully subverted the functioning of the robotic arm so that it’s final movements (it continuously picks up food automatically), i.e. whether to feed the wearer or the partner, are guided by the facial expressions of the dining partner. We invite more explorations on such playful eating experiences in order to enrich our understanding of computer mediated human-food interactions.

Publications

For more details, please refer to the following papers:
  1. CHI PLAY 2018 (Full Paper): Mehta, Y., Khot, R., Patibanda, R., Mueller, F. Arm-A-Dine: Towards Understanding the Design of Playful Embodied Eating Experiences. CHI PLAY 2018. ACM.

Selected Media articles

Check out the video for more details:

Arm-A-Dine CHI PLAY 2018 presentation talk:

Eating in Virtual Reality

You Better Eat to Survive!

"You Better Eat to Survive!" is a project is done in collaboration with a visiting masters student Peter Arnold. This project aims to contribute to an understanding of crossmodal eating experiences in particular what does it mean to eat in a virtual reality environment when we can no longer see the food. Here is the abstract of the work.

"You Better Eat to Survive!" is a two-player virtual reality game that involves eating real food to survive and ultimately escape from a virtual island. Eating is sensed through cap- turing chewing sounds via a low-cost microphone solution. Unlike most VR games that stimulate mostly our visual and auditory senses, "You Better Eat to Survive!" makes a novel contribution by integrating the gustatory sense not just as an additional game input, but as an integral element to the game experience: we use the fact that with head-mounted displays, players cannot see what they are eating and have to entrust a second player outside the VR experience to provide them with sufficient food and feeding him/her. With "You Better Eat to Survive!", we aim to demonstrate that eating can be an intriguing interaction technique to enrich virtual reality experiences while offering complementary benefits of social interactions around food.

Publications

For more details, please refer to the following papers:
  1. CHI 2017 (Student game competition WINNER): You Better Eat to Survive! Exploring Edible Interactions in a Virtual Reality Game
  2. TEI 2018 (Full Paper): You Better Eat to Survive: Exploring Cooperative Eating in Virtual Reality Games

Check out the video for more details:

Here is the video of the Student game competition winning talk from CHI 2017:

Robotic dining companion

FoBo

Despite the known benefits of commensal eating, eating alone is becoming increasingly common as people struggle to find time and to manage geographical boundaries to enjoy a meal together. Eating alone, however, can be boring and less motivating, and has been shown to have a negative impact on a person's health and well-being. To remedy such situations, we take a celebratory view of robotic technology and the unique opportunities it offers for solo-diners to feel engaged and indulged in dining. We present FoBo, a mischievous robotic dining companion that acts and behaves like a human co-diner but does not try to educate or correct the individual's behavior. Besides tackling solo-dining, this work also aims to reorient the perception that robots are always meant to be infallible: they can be erroneous and clumsy, just as we humans are.

Design Framework

Shelfie

Self-monitoring devices like activity trackers and heart rate monitors are becoming increasingly popular for supporting physical activity experiences. These devices mostly adopt a data-centric view, in the form of numbers and graphs that appear on a screen, and in doing so they may miss out on other possible multisensorial ways of engaging with data. Embracing the opportunity to support pleasurable interactions with one's own data, this article orchestrates new representation strategies that use different materials and digital fabrication technology to make interactions with physical activity data more memorable, enjoyable and satisfying. We designed and studied three systems that create material representations in different forms, such as plastic artifacts, sports drinks, and chocolate treats. We utilized the insights gained from the associated studies and supplemented them with knowledge from the related literature and our own experiences in designing these systems to develop a conceptual design framework, which we call 'Shelfie'. The 'Shelfie' framework is presented in the form of 13 cards that convey key design themes for creating material representations of physical activity data. Through this framework, we present a first conceptual understanding of the relationship between material representations and physical activity data and contribute guidelines on how to design meaningful material representations of physical activity data. We hope that our work inspires designers to consider the new engagement possibilities afforded by material representations to support users' experiences with physical activity.

Doctoral thesis

Material Representations

My PhD research on "Understanding Material Representations of Physical Activity" utilizes emerging technologies such as 3D printers and food printers to make physical activity more memorable, enjoyable and fulfilling. This work offers new design thinking for building quantified-self technologies, where people track their physical activity data for self-reflection. In particular, I orchestrate strategies for turning physical activity data into physical representations such as plastic artifacts, sports drinks and chocolate treats, personalized based on an individual's efforts. As such, these systems enable the embodiment of invisible bodily data (e.g., heart rate) in a physical and edible form that can be seen as well as touched, smelled, tasted, carried and even possessed.

This work was done with the vision that in 5-10 years' time, 3D printers and food printers will become household appliances. At the moment, little is known about how and for what purposes people would make use of these exciting new technologies. By contextualizing their use in the domain of quantified-self technologies, this research opens up an exciting new design space to take the field forward. To this end, the work can be seen as a precursor to exciting opportunities in food printing to define future meals and dining experiences, where nutrition is customized at the macro level. I am confident that in the near future we will witness various follow-ups that aim to connect the biographies of the material world with the immaterial world.

The link to the entire thesis is: Understanding Material Representations of Physical Activity

I gave a one-hour talk on my PhD research at the University of Melbourne, Australia. A video recording of the talk is available below:

3D printing of cricket & tweets

Fantibles

Sports fans are increasingly using social media platforms like Twitter to express emotions and share their opinions while watching sports on TV. These commentaries describe the intense subjective experience of a fan watching a sport passionately. We see an opportunity to attend to these nostalgic moments by capturing them in a physical form. We present Fantibles, personalized sports memorabilia that highlight an individual's Twitter commentary about a sport along with the uniqueness of each match. As a first case study, we investigate Fantibles for one popular sport, cricket. We report insights from field deployments of Fantibles during an ODI cricket match series between India and Bangladesh and offer reflections on the design in the form of four themes: self-expression, layered sense making, ad-hoc interactions and distributed social interactions. We believe our work opens up new interaction possibilities to support the social sports viewing experience and design thinking on creating personalized sports memorabilia.

For more details, please refer to the following papers:
  1. Fantibles: Capturing Cricket Fan's Story in 3D, DIS 2016

Media articles

Check out the video for more details:

3D printing chocolates

EdiPulse

Self-monitoring offers benefits in facilitating awareness about physical exercise, but such a data-centric activity may not always lead to an enjoyable experience. We introduce EdiPulse, a novel system that creates activity treats to offer playful reflections on everyday physical activity through the appealing medium of chocolate. EdiPulse translates self-monitored data from physical activity into small 3D printed chocolate treats. These treats (< 20 grams of chocolate in total) embody four forms: Graph, Flower, Slogan and Emoji. We deployed our system across 7 households and studied its use with 13 participants for 2 weeks per household. The field study revealed positive aspects of our approach along with some open challenges, which we discuss across five themes: Reflection, Positivity, Determination, Affection, and Co-experience. We conclude by highlighting key implications of our work for future playful food-based technology design in supporting the experience of being physically active.
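
To make the data-to-chocolate translation concrete, here is a hedged Python sketch of how a day's hourly activity could be scaled into the bar heights of a 'Graph' treat while respecting the under-20-gram budget mentioned above. The printing constants and the mapping are illustrative assumptions, not EdiPulse's real parameters.

```python
def graph_treat_heights(hourly_activity, max_total_grams=20.0,
                        grams_per_mm=0.4, bar_base_grams=0.5):
    """Scale hourly activity counts into bar heights (mm) for a 'Graph' treat."""
    peak = max(hourly_activity) or 1
    heights = [10.0 * a / peak for a in hourly_activity]   # bars up to 10 mm tall
    mass = len(heights) * bar_base_grams + grams_per_mm * sum(heights)
    if mass > max_total_grams and sum(heights) > 0:        # shrink bars to fit the budget
        scale = (max_total_grams - len(heights) * bar_base_grams) / (grams_per_mm * sum(heights))
        heights = [h * max(scale, 0.0) for h in heights]
    return heights

if __name__ == "__main__":
    steps_per_hour = [0, 0, 120, 800, 300, 50, 900, 1500, 600, 0, 0, 200]
    print([round(h, 1) for h in graph_treat_heights(steps_per_hour)])
```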

For more details, please refer to the following papers:
  1. EdiPulse: Investigating a Playful Approach to Self-monitoring through 3D Printed Chocolate Treats, CHI 2017
  2. EdiPulse: Supporting Physical Activity with Chocolate Printed Messages, CHI WiP 2015

Press articles

You can check the videos for more details:

Sports drink spectacle

TastyBeats

We introduce palatable representations that, besides improving the understanding of physical activity through abstract visualization, also provide an appetizing drink to celebrate the experience of being physically active. By designing such palatable representations, our aim is to offer novel opportunities for reflection on one's physical activities. As a proof of concept, we present TastyBeats, a fountain-based interactive system that creates a fluidic spectacle by mixing sports drinks based on the heart rate data of a physical activity, which the user can later consume to replenish the body fluids lost during that activity. TastyBeats induces an active engagement of users with the representation of their personal data, in the form of an energy drink created by mixing different flavors together.
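
As an illustration of how heart rate data could drive the drink mix (the actual TastyBeats mapping is not spelled out in this summary), here is a hedged Python sketch that splits a drink's volume across flavors by time spent in heart-rate zones; the zone boundaries and flavor assignments are assumptions.

```python
def mix_from_heart_rate(hr_samples, total_ml=250.0):
    """Split a drink's volume across flavors by time spent in heart-rate zones."""
    zones = {"rest (lemonade)": 0, "moderate (berry)": 0, "vigorous (cola)": 0}
    for hr in hr_samples:
        if hr < 100:
            zones["rest (lemonade)"] += 1
        elif hr < 140:
            zones["moderate (berry)"] += 1
        else:
            zones["vigorous (cola)"] += 1
    total = sum(zones.values()) or 1
    return {flavor: round(total_ml * count / total, 1) for flavor, count in zones.items()}

if __name__ == "__main__":
    workout = [80] * 10 + [120] * 30 + [155] * 20  # one heart-rate sample per minute
    print(mix_from_heart_rate(workout))
```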

For more details, please refer to the following papers:
  1. TastyBeats: Designing Palatable Representations of Physical Activity (Best paper Honorable mention @ CHI 2015)
  2. TastyBeats: Celebrating Heart Rate Data with a Drinkable Spectacle
  3. Tastybeats: making mocktails with heartbeats

Other achievements:

  • Finalist for Premier Design Awards 2015 under the service design category.
  • Public exhibitions at UIST 2013, CHI 2014 and IIT Bombay TechFest 2014.
  • Finalist for Student Innovation Contest, UIST 2013.

Check out the video for more details:

Exercise memories in plastic

SweatAtoms

SweatAtoms is the first in a series of prototypes that explore material representations of physical activity to enrich the experience of being physically active. I advocate a novel approach of representing physical activity in the form of material artifacts. I designed a system called SweatAtoms that transforms physical activity data, based on heart rate, into five different 3D printed material artifacts: Graph, Flower, Frog, Dice and Ring. I hope that this work will inspire designers to consider the new possibilities afforded by digital fabrication to support users' experience with physical activity using the interactive technologies at our disposal.
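
As a sketch of how one of the five forms might be parameterised from heart rate data (the Flower is used here; the mapping below is an assumption for illustration, not SweatAtoms' actual rule), the Python example derives a petal count and petal length from a day of hourly averages.

```python
import math

def flower_parameters(hourly_heart_rate, resting_hr=70):
    """Derive illustrative 'Flower' parameters from hourly heart-rate averages."""
    active = [hr for hr in hourly_heart_rate if hr > resting_hr + 10]
    petal_count = max(len(active), 3)                  # keep at least a printable shape
    mean_elevation = sum(hr - resting_hr for hr in active) / len(active) if active else 0
    petal_length_mm = 5 + min(mean_elevation, 80) / 4  # clamp to a printable size
    return petal_count, round(petal_length_mm, 1)

def petal_angles(petal_count):
    """Angles (radians) at which petals sit around the flower's centre."""
    return [2 * math.pi * i / petal_count for i in range(petal_count)]

if __name__ == "__main__":
    day = [68, 70, 72, 95, 110, 130, 88, 72, 71, 140, 125, 75]
    count, length = flower_parameters(day)
    print(f"{count} petals, {length} mm long, at {len(petal_angles(count))} angles")
```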

For more details, please refer to the following papers:
  1. Understanding physical activity through 3D printed material artifacts
  2. SweatAtoms: materializing physical activity
  3. Sweat-atoms: turning physical exercise into physical objects

Other achievements:

  • Will be exhibited at Ambient Play 2016.
  • Public exhibition at CHI 2013.
  • Finalist for Student Research Contest, CHI 2013.

Check out the video for more details:

Go through the slides for a quick overview:
Tangible design framework

dBricks

Technological advancements in personal informatics tools that help people collect physical activity data for self-monitoring and reflection have raised the question of how this data should be presented to the user. Inspired by recent advancements in digital fabrication, and fueled by the engagement opportunities offered by material representations, this work puts forward a new perspective of representing physical activity data in the form of material artifacts. Following this, we designed and studied two systems that represented physical activity data through material representations coming out of a 3D printer. Based on the insights gained from the two studies, we propose a conceptual framework called "Materialized Self". This framework describes a production and consumption lens, each with different design properties, with which we highlight the complex interplay between the designer and the user. The framework is intended to guide the design and analysis of material representations of personal data.

More details coming soon.

Virtual labs web framework

DISCOVIR

As education and technology merge, the opportunities for teaching and learning expand even further. However, the juxtaposition of the latest technologies has also raised concerns for academic institutions as to which technologies are most effective in terms of cost, reach, richness, and, most importantly, learning. In this paper, we present a novel development framework, which we call DISCOVIR, for asynchronous virtual lab development. Our aim is to simplify the process of developing the content for a virtual lab and to make the resulting labs more consistent for the learners (students). We incorporate the popular Model View Controller architecture in building the user interface for the virtual lab. It allows lab developers to focus completely on the quality of the content, while the look and feel of the resulting lab is provided in the form of customizable themes. We define an HTML5-based semantic structure for writing the lab content. It ensures the long-term sustainability of the lab and supports semantic search capabilities. Our model makes no prior assumptions or prerequisites in terms of the expertise and tools required to use it in practice. A lab developer is free to use any HTML editor on any operating system to build their lab. It requires no costly software installations and runs locally without web server support. Our proposed model is currently being used by 16 of the 28 virtual labs from IIIT Hyderabad.
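
To illustrate the separation the paper argues for (content written once, presentation supplied by a theme, glued together in an MVC style), here is a hedged Python sketch; the field names and the HTML it produces are assumptions, not DISCOVIR's actual semantic structure.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Experiment:
    """Model: what the lab author writes, independent of presentation."""
    title: str
    objective: str
    procedure: List[str]

def plain_theme(exp: Experiment) -> str:
    """View: one possible theme rendering the same content as simple HTML5."""
    steps = "".join(f"<li>{s}</li>" for s in exp.procedure)
    return (f"<article><h1>{exp.title}</h1>"
            f"<section class='objective'><p>{exp.objective}</p></section>"
            f"<section class='procedure'><ol>{steps}</ol></section></article>")

def render(exp: Experiment, theme: Callable[[Experiment], str]) -> str:
    """Controller: applies a chosen theme to the content."""
    return theme(exp)

if __name__ == "__main__":
    exp = Experiment("Ohm's Law", "Relate voltage, current and resistance.",
                     ["Set up the circuit.", "Vary the voltage.", "Record the current."])
    print(render(exp, plain_theme))
```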

Read the paper for more details

  1. DISCOVIR: A Framework for Designing Interfaces and Structuring Content for Virtual Labs
Preventing shoulder surfing

W.Y.S.W.Y.E.

Recognition-based graphical passwords are inherently vulnerable to shoulder surfing attacks because of their visual mode of interaction. In this paper, we propose and evaluate two novel shoulder-surfing defense techniques for recognition-based graphical passwords. These techniques are based on the WYSWYE (Where You See is What You Enter) strategy, where the user identifies a pattern of password images within a presented grid of images and replicates it onto another grid. We conducted controlled laboratory experiments to evaluate the usability and security of the proposed techniques. Both schemes had high login success rates with no failures in authentication. More than seventy percent of participants successfully logged on to the system on their first attempt in both schemes. The participants were satisfied with the schemes and were willing to use them in public places. In addition, both schemes were significantly more secure against shoulder surfing than normal, unprotected recognition-based graphical passwords. Login efficiency improved with practice in one of the proposed schemes. We believe the WYSWYE strategy has considerable potential and can easily be extended to other types of authentication systems, such as text passwords and PINs.
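
As a simplified illustration of the WYSWYE idea (find the password images in a challenge grid and reproduce the pattern of their positions on another grid), here is a hedged Python sketch; it is not the exact protocol evaluated in the paper, and the grid size and verification rule are assumptions.

```python
import random

def build_challenge(password_images, decoy_images, rows=4, cols=4, seed=None):
    """Lay out password and decoy images in a rows x cols challenge grid."""
    rng = random.Random(seed)
    cells = list(password_images) + rng.sample(decoy_images, rows * cols - len(password_images))
    rng.shuffle(cells)
    return [cells[r * cols:(r + 1) * cols] for r in range(rows)]

def expected_pattern(grid, password_images):
    """The (row, col) positions the user should reproduce on the entry grid."""
    return {(r, c) for r, row in enumerate(grid)
            for c, img in enumerate(row) if img in password_images}

def verify(grid, password_images, entered_cells):
    """Accept the login only if the entered cells match the password positions exactly."""
    return set(entered_cells) == expected_pattern(grid, password_images)

if __name__ == "__main__":
    password = ["cat", "boat", "lamp"]
    decoys = [f"img{i}" for i in range(30)]
    grid = build_challenge(password, decoys, seed=7)
    answer = expected_pattern(grid, password)  # a user who spotted all three images
    print("login ok:", verify(grid, password, answer))
```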

Read the paper for more details

  1. WYSWYE: shoulder surfing defense for recognition based graphical passwords
Masters thesis

Human Computation with Perceptive Intelligence

The human visual system is a pattern seeker of enormous power and subtlety. We not only can see things clearly, but are also capable of describing them with precision and remembering them for a long time. These capabilities have had a major impact on our sustenance, survival and perpetuation as a species. Although computers can perform a variety of tasks that are beyond human capability because of speed, complexity, or dangerous environments, attempts to replicate human perceptual abilities have been strikingly inferior, even for visual tasks that people consider extremely simple. In this thesis, we advance research in the field of human computation by leveraging human perceptual abilities to solve problems that computers alone cannot effectively solve. In particular, we address two important problems: user authentication and image annotation.

User authentication has issues in both security and usability. For example, passwords are either 'secure but difficult to remember' or 'memorable but not secure', when by definition they need to be both secure and memorable. Graphical passwords are a viable alternative to text passwords since they are based on the proven human ability to recognize and remember images, coupled with the larger password space offered by images. In this thesis, we propose and evaluate Marasim, a novel jigsaw-based graphical authentication scheme that uses tagging. Marasim is aimed at achieving the security of system-chosen images with the memorability of self-chosen images. Empirical studies of Marasim provide evidence of increased memorability, usability and security.

Additionally, we examine the manual image annotation problem. Recently there have been a number of attempts to lure humans into the annotation process. Notable examples are interactive games like ESP and social tagging sites like Flickr. However, we found that extant methods in their present form are inadequate to produce annotations of high quality. We therefore introduce two intelligent system designs for the semantic annotation of images, in the form of a game and a CAPTCHA. The first is GoFish, a web variant of Go Fish, a popular playing card game. The other is an image recognition CAPTCHA, named iCAPTCHA. Behind both designs is a strong emergent semantics theory that ensures superior annotations.

The full thesis is available below:

Memorable graphical passwords

Marasim

In this paper, we propose and evaluate Marasim, a novel jigsaw-based graphical authentication mechanism that uses tagging. Marasim is aimed at achieving the security of random images with the memorability of personal images. Our scheme relies on the human ability to remember a personal image and later recognize alternate visual representations (images) of the concepts that occur in the image. These concepts are retrieved from the tags assigned to the image. We illustrate how a jigsaw-based approach helps to create a portfolio of system-chosen random images to be used for authentication. The paper describes the complete design of Marasim along with empirical studies that provide evidence of increased memorability. Results show that 93% of all participants succeeded in the authentication tests using Marasim after three months, while 71% succeeded after nine months. Our findings indicate that Marasim has potential applications, especially where text input is hard (e.g., PDAs or ATMs), or in situations where passwords are infrequently used (e.g., web site passwords).

Full paper is available below:

Pictorial PINs

NAPTune

Graphical passwords are considered to be a secure and memorable alternative to text passwords. Users of such systems authenticate themselves by identifying a subset of images from a set of displayed images. However, despite the impressive results of user studies on experimental graphical password schemes, their overall commercial adoption has been relatively low. In this paper, we investigate the reasons behind the low commercial acceptance of graphical passwords and present recommendations to overcome them. Based on these recommendations, we design a simple graphical password scheme, which we call NAPTune. NAPTune is a cued recognition-based graphical authentication scheme that allows users to choose both text and images as their password with the same underlying design and interaction. In doing so, we blend the strengths of Numbers, Alphabets and Pictures (NAP) together to effectively defeat prevalent forms of social hacking. We conducted a user study with 35 participants to evaluate the viability of our proposed design. The results of the study are encouraging and indicate that our proposed design is a potentially secure and usable method of authentication.

Full paper is available below:

Emergent tagging framework

PhotoSense

Tagging images with descriptive keywords (tags), contributed by ordinary users, is a powerful way of organizing them. However, due to the richness of image content, it is often difficult to choose tags that best describe the content of an image to the viewing audience and ensure access to it. In this paper, we present a novel tagging framework based on the theory of emergent semantics to assist the user in the tag selection process. Our idea is to enrich the current "looking at" experience of tagging with the "looking for" experience of searching. We describe the design of our approach along with a preliminary user study conducted with a prototype Flickr application.

Full paper is available below:

Game with a purpose

GoFish

Marking image content with descriptive keywords (also known as tags) is an effective way of improving the accessibility of images. However, doing so is boring as well as laborious for most humans. In recent times, there have been a number of attempts to inspire humans to annotate images. Notable examples are social tagging sites like Flickr and online games like ESP. However, existing methods in their present form are inadequate to produce annotations of superior quality. Therefore, we present GoFish, an intelligent system for the semantic annotation of images. GoFish is a web variant of Go Fish, a popular playing card game. GoFish utilizes the theory of emergent semantics to ensure that all images receive superior tags. We describe the complete design of the game and discuss its benefits. The results of a preliminary user study are encouraging.

Full paper is available below:

Image CAPTCHA

iCAPTCHA

Semantic annotation, or tagging, of images can greatly improve the accuracy and efficiency of image search engines. However, humans rarely annotate images, as they find the task boring and laborious despite its benefits for search and retrieval. In this paper, we introduce a novel approach to luring users into image annotation: embedding image annotation into a CAPTCHA design. A CAPTCHA is a standard security mechanism used by popular commercial websites to prevent automated programs from abusing online services. Millions of users solve CAPTCHAs daily in order to access web content and services. We aim to channel the human effort spent in solving CAPTCHAs into the productive work of image annotation. We introduce iCAPTCHA, a user-friendly and productive CAPTCHA design. Our premise is based on the human ability to recognize images and label them with proper categories. Each time a user solves an iCAPTCHA, he/she is helping to label images with proper categories, which in turn improves image search and retrieval.

Full paper is available below:

Privacy on social networks

Let Only the Right One In

Current social networking sites protect user data by making it available only to a restricted set of people, often friends. However, the concept of a 'friend' is illusory in social networks. Adding a person to the friends list without verifying his/her identity can lead to many serious consequences, like identity theft and privacy loss. We propose a novel verification paradigm to ensure that a person (Bob) who sends a friend request (to Alice) is actually her friend, and not someone who is faking his identity. Our solution is based on what Bob might know and can verify about Alice. We work on the premise that a friend knows a person's preferences better than a stranger does. To verify our premise, we conducted a two-stage user study. The results of the user study are encouraging. We believe our solution makes a significant contribution in the way it leverages the benefits of preference-based authentication and challenge-response schemes.
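
As a minimal sketch of the preference-based challenge-response idea described above (the question set, answer format and acceptance threshold are assumptions, not the paper's exact protocol):

```python
def verify_requester(recipient_preferences, requester_answers, min_correct=4):
    """Accept a friend request only if the requester knows enough of the recipient's preferences."""
    correct = sum(1 for question, answer in requester_answers.items()
                  if recipient_preferences.get(question) == answer)
    return correct >= min_correct

if __name__ == "__main__":
    alice = {"favourite cuisine": "thai", "favourite band": "coldplay",
             "pet": "dog", "home town": "pune", "favourite sport": "cricket"}
    bob = {"favourite cuisine": "thai", "favourite band": "coldplay",
           "pet": "dog", "home town": "pune", "favourite sport": "tennis"}
    stranger = {"favourite cuisine": "italian", "favourite band": "u2",
                "pet": "cat", "home town": "delhi", "favourite sport": "soccer"}
    print("bob accepted:", verify_requester(alice, bob))            # 4/5 correct -> True
    print("stranger accepted:", verify_requester(alice, stranger))  # 0/5 correct -> False
```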

Full paper is available below: