
Education:

Fellow, SynBERC LEAP. San Francisco, CA. 2015.
Fellow, Mozilla Labs. Mountain View, CA. 2012.
Graduate Certificate, Singularity University. NASA Ames. 2010.
BA Film and Media, Queen's University. Kingston, ON. 2010.
Visiting Scholar, MIT Media Lab. Cambridge, MA. 2006.

Academic Service:

Chair of Video Submissions

TEI 2012 is the sixth international conference dedicated to presenting the latest results in tangible, embedded, and embodied interaction. It was held 19th to 22nd February 2012 at the Human Media Lab at Queen's University in Kingston, Ontario, Canada. The work presented at TEI addresses HCI issues, design, interactive art, user experience, tools and technologies, with a strong focus on how computing can bridge atoms and bits into cohesive interactive systems. The intimate size of this single-track conference provides a unique forum for exchanging ideas and presenting innovative work through talks, interactive exhibits, demos, hands-on studios, posters, art installations and performances.

Link: ACM TEI 2012


Published, Peer Reviewed Media:

"A Biological Imperative for Interaction Design"

"A Biological Imperative for Interaction Design"

Amanda Parkes, Connor Dickie.

To Appear - ACM CHI 2013. Paris, France. 2013.

This paper presents an emerging approach to the integration of biological systems (their matter, mechanisms, and metabolisms) into models of interaction design. By bringing together conceptual visions and initial experiments of alternative bio-based approaches to sensing, display, fabrication, materiality, and energy, we seek to construct an inspirational discussion platform approaching non-living and living matter as a continuum for computational interaction. We also discuss the emergence of the DIY bio and open source biology movements, which allow non-biologists to gain access to the processes, tools, and infrastructure of this domain, and introduce Synbiota, an integrated, web-based platform for synthetic biology research.

Available: ACM Digital Library - To appear.
Available: Local - Biologic_Interaction.pdf

"FlexCam - Using Thin-film Flexible OLED Color Prints as a Camera Array"

"FlexCam"

Connor Dickie, Nicholas Fellion, Roel Vertegaal.

ACM CHI 2012. Austin, Texas. 2012.

FlexCam is a novel compound camera platform that explores interactions with color photographic prints using thin-film flexible color displays. FlexCam augments a thin-film color Flexible Organic Light Emitting Diode (FOLED) photographic viewfinder display with an array of lenses at the back. Our prototype allows the photograph to act as a camera, exploiting the flexibility of the viewfinder as a means to dynamically reconfigure images captured by the photograph. FlexCam's flexible camera array has altered optical characteristics when flexed, allowing users to dynamically expand and contract the camera's field of view (FOV). Integrated bend sensors measure the amount of flexion in the display. The degree of flexion is used as input to software, which dynamically stitches images from the camera array and adjusts viewfinder size to reflect the virtual camera's FOV. Our prototype envisions the use of photographs as cameras in one aggregate flexible, thin-film device.
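
As a rough sketch of the control idea (hypothetical names and calibration constants, not the actual FlexCam code), the mapping from bend-sensor flexion to the virtual camera's FOV and viewfinder size might look like this in Python:

    # Sketch of a flexion-to-FOV mapping in the spirit of FlexCam.
    # All names and constants here are illustrative assumptions.
    PER_CAMERA_FOV = 40.0   # assumed horizontal FOV of a single lens, degrees
    MAX_EXTRA_FOV = 60.0    # assumed widening gained at full flex, degrees

    def virtual_fov(flexion: float) -> float:
        """Map normalized flexion (0 = flat, 1 = fully flexed) to the
        aggregate field of view of the stitched camera array."""
        flexion = max(0.0, min(1.0, flexion))
        return PER_CAMERA_FOV + flexion * MAX_EXTRA_FOV

    def viewfinder_width(flexion: float, base_px: int = 480) -> int:
        """Scale the viewfinder so its size reflects the virtual FOV."""
        return round(base_px * virtual_fov(flexion) / PER_CAMERA_FOV)

    for bend in (0.0, 0.5, 1.0):
        print(bend, virtual_fov(bend), viewfinder_width(bend))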

Available: ACM Digital Library - FlexCam
Available: Local - FlexCam.pdf
Media: video.
Press: Wired, Gizmodo

"Don't Touch"

"Don't Touch"

Sylvia Cheng, Connor Dickie, Andreas Hollatz, Roel Vertegaal, Justin Lee.

ACM CHI 2011. Vancouver, BC. 2011.

In this video, we discuss the design of an e-textile shirt with an interactive Lumalive display featuring a touch-controlled image browser. To determine where to place touch sensors, we investigated which areas of the Lumalive shirt users would be comfortable touching or being touched. We did so by measuring how often participants would opt out of touches. Results show significant differences in opt-outs between touch zones on the front of the shirt. For both touchers and touchees, opt-outs occurred mostly in the upper chest touch zone. We also found significant differences in comfort ratings between touch zones on the front and on the back of the shirt. On the front, the upper chest and lower abdominal zones were the least comfortable touch zones. Findings suggest participants were less comfortable with touches on the upper chest, the lower abdomen, and the lower back. We conclude that the most appropriate areas for touch sensors on a shirt are on the arms and shoulders as well as on the upper back.

Available: ACM Digital Library - Don't Touch
Available: Local - Don't Touch.pdf
Media: video.

"Emotionally Reactive Television"

"Emotionally Reactive Television"

Chia-Hsun Jackie Lee, Chaochi Chang, Hyemin Chung, Connor Dickie and Ted Selker.

In proceedings, ACM IUI 2007. Honolulu, Hawaii. 2007.

When is an interface simple? Is it when it is invisible or very obvious, even intrusive? Since the advent of TV, watching it has been considered a static activity. TV audiences have very limited ways to interact with TV, such as turning it on and off, adjusting the volume, and changing channels. This paper suggests that, as technology matures, TV programming should respond socially to people, affording and accepting the audience's emotional expression. This paper presents HiTV, an Emotionally-Reactive TV system that uses a digitally augmented soft ball as an affect-input interface that can amplify a TV program's video and audio signals. HiTV transforms the original video and audio into effects that intrigue and fulfill people's emotional expectations.

Available: ACM - Emotionally Reactive Television
Media: video (demonstration).

"Kameraflage Garment"

"Kameraflage Garment"

Connor Dickie.

SIGGRAPH Unravel Fashion Show, 2007.

Kameraflage is a display technology that is invisible to the naked eye, yet is visible when imaged with a digital camera. By integrating Kameraflage into garments, a new level of expression is enabled for people who are limited by dress codes or those who simply wish to add an interactive element to their wardrobe, as garments can now include a secondary message or design that appears only in photographs or digital camera viewfinders.

This project became an immediate hit on the internet, garnering over half a million hits to my website in one week from sources such as Gizmodo, Engadget, CNN, the BBC and many more. Since then, images from this project have appeared in various print magazines including Scholastic's ScienceWorld, TrendOne, TrendHunter and a number of in-flight magazines.

This project inspired me to found a company, Kameraflage Inc., to patent and commercialize this technology.

Available: SIGGRAPH - Kameraflage. Link (Google search - the project went viral, receiving more than 500,000 hits in one week). Link (achieved 6th place out of 100 textile innovations for 2008, awarded by the Thailand Textile Institute, part of the Thailand Ministry of Industry). Link (invited to be showcased at the first ever Technology Fashion Show at CES 2011).

"LookPoint: an Evaluation of Eye-Input for Hands-Free Switching Between Multiple Computers"

"LookPoint: an Evaluation of Eye-Input for Hands-Free Switching Between Multiple Computers"

Connor Dickie, Jamie Hart, Roel Vertegaal and Alex Eiser.

In proceedings, ACM OZCHI 2006. Sydney, Australia. 2006.

We present LookPoint, a system that uses eye input for switching input between multiple computing devices. LookPoint uses an eye tracker to detect which screen the user is looking at, and then automatically routes mouse and keyboard input to the computer associated with that screen. We evaluated the use of eye input for switching between three computer monitors during a typing task, comparing its performance with that of three other selection techniques: multiple keyboards, function key selection, and mouse selection. Results show that the use of eye input is 111% faster than the mouse, 75% faster than function keys, and 37% faster than the use of multiple keyboards. A user satisfaction questionnaire showed that participants also preferred the use of eye input over the other three techniques. The implications of this work are discussed, as well as future calibration-free implementations.
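
The routing logic itself is straightforward; a minimal sketch (hypothetical structure and names, not the system's actual code):

    # Sketch of LookPoint-style gaze routing: the eye tracker reports
    # which screen is fixated, and input events are forwarded to the
    # computer driving that screen. Names here are illustrative only.
    class GazeRouter:
        def __init__(self, screens):
            self.screens = screens            # screen id -> computer id
            self.focus = next(iter(screens))  # current gaze target

        def on_gaze(self, screen_id):
            if screen_id in self.screens:
                self.focus = screen_id        # hands-free focus switch

        def on_input(self, event):
            return (self.screens[self.focus], event)

    router = GazeRouter({"left": "pc-a", "center": "pc-b", "right": "pc-c"})
    router.on_gaze("right")
    print(router.on_input("key:a"))   # ('pc-c', 'key:a')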

Available: ACM - LookPoint
Media: image (early prototype), image (experimental system)

"eyeLook: Attention Sensitive Mobile Media Consumption"

"eyeLook: Attention Sensitive Mobile Media Consumption"

Connor Dickie, Roel Vertegaal, Changuk Sohn and Daniel Cheng.

Published in proceedings, ACM UIST 2005. Seattle, Washington. 2005.

One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention sensitive mobile computing. eyeLook appliances use embedded low cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media and managing life.
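
Both applications reduce to gating playback on a single attention bit from the eyeCONTACT sensor; a minimal sketch (my illustration, with the sensor modeled as a boolean):

    # Sketch of attention-gated playback in the style of seeTV/seeTXT.
    def seetv_is_playing(user_looking: bool) -> bool:
        """seeTV: play video only while the user is looking."""
        return user_looking

    def seetxt_next_index(i: int, words: list, user_looking: bool) -> int:
        """seeTXT: advance to the next flashed word only while looking."""
        return i + 1 if user_looking and i < len(words) - 1 else i

    words = "attentive media pauses when you look away".split()
    i = 0
    for looking in (True, True, False, True):
        i = seetxt_next_index(i, words, looking)
        print(words[i], "(playing)" if seetv_is_playing(looking) else "(paused)")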

Available: ACM - eyeLook
Available: Local - p103-dickie.pdf
Media: image (early prototype)

"Augmenting and Sharing Memory with eyeBlog"

"Augmenting and Sharing Memory with eyeBlog"

Connor Dickie, Roel Vertegaal, David Fono, Changuk Sohn, Daniel Chen, Daniel Cheng, Jeffrey S. Shell and Omar Aoudeh.

In extended abstracts of the 1st ACM workshop on Continuous Archival and Retrieval of Personal Experiences (CARPE), part of ACM Multimedia 2004. New York, New York. 2004.

eyeBlog is an automatic personal video recording and publishing system. It consists of ECSGlasses, which are a pair of glasses augmented with a wireless eye contact and glyph sensing camera, and a web application that visualizes the video from the ECSGlasses camera as chronologically delineated blog entries. The blog format allows for easy annotation, grading, cataloging and searching of video segments by the wearer or anyone else with internet access. eyeBlog reduces the editing effort of video bloggers by recording video only when something of interest is registered by the camera. Interest is determined by a combination of independent methods. For example, recording can automatically be triggered upon detection of eye contact towards the wearer of the glasses, allowing all face-to-face interactions to be recorded. Recording can also be triggered by the detection of image patterns such as glyphs in the frame of the camera. This allows the wearer to record their interactions with any object that has an associated unique marker. Finally, by pressing a button the user can manually initiate recording.
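
In effect, the recording trigger is an OR over independent interest detectors; a minimal sketch (hypothetical names, not the eyeBlog source):

    # Sketch of eyeBlog's trigger logic: record when any interest
    # condition holds. All names are illustrative assumptions.
    def should_record(eye_contact: bool, glyphs_in_frame: set,
                      button_pressed: bool) -> bool:
        """Eye contact toward the wearer, a tagged object's glyph in
        frame, or a manual button press each start recording."""
        return eye_contact or bool(glyphs_in_frame) or button_pressed

    # A tagged object enters the frame while nobody makes eye contact:
    print(should_record(False, {42}, False))   # True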

Available: ACM - eyeBlog 2.0
Media: video (Google TechTalk @ 27:45), video (Discovery Channel), .pdf (Globe & Mail), image (searching augmented memory)

"eyeWindows: Using Eye-Controlled Zooming Windows for Focus Selection"

"eyeWindows: Using Eye-Controlled Zooming Windows for Focus Selection"

David Fono, Roel Vertegaal and Connor Dickie.

In video program, ACM UIST 2004. Santa Fe, NM. 2004.

Most windowing systems from the past twenty years use independent overlapping windows. However, today's user is often engaged in numerous simultaneous tasks, with multiple windows vying for screen space. In these cases, overlapping windows can easily obscure each other, forcing the user to make constant manual adjustments in order to switch and monitor tasks. We have developed eyeWindows, an alternative windowing technique that uses eye-controlled zooming windows. In eyeWindows, information is distorted rather than obscured, and switching tasks is a simple matter of looking at the appropriate window.
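
One way to picture the zooming policy: the fixated window takes a large share of the screen while the others shrink proportionally, so nothing is hidden. A sketch under that assumption (my illustration, not the published implementation):

    # Sketch of focus-driven space allocation in the spirit of eyeWindows.
    # The focus share and pixel budget are illustrative assumptions.
    def allocate_widths(n, focused, total_px=1920, focus_share=0.5):
        """Give the fixated window a fixed share of the screen and
        split the remainder evenly among the other windows."""
        rest = total_px * (1 - focus_share) / (n - 1)
        return [total_px * focus_share if i == focused else rest
                for i in range(n)]

    print(allocate_widths(4, focused=2))  # looking at window 2 expands it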

Available: video (video figure)

"Eye Contact Sensing Glasses for Attention-Sensitive Wearable Video Blogging"

"Eye Contact Sensing Glasses for Attention-Sensitive Wearable Video Blogging"

Connor Dickie, Roel Vertegaal, Jeffrey S. Shell, Changuk Sohn, Daniel Cheng and Omar Aoudeh.

In extended abstracts and video program of ACM CHI 2004 Conference on Human Factors in Computing Systems, Vienna, Austria. 2004.

We present ECSGlasses: eye contact sensing glasses that report when people look at their wearer. When eye contact is detected, the glasses stream this information to appliances to inform them of the wearer's engagement. We present one example of such an appliance, eyeBlog, a conversational video blogging system. The system uses eye contact information to decide when to record video from the glasses' camera.

Available: ACM - eyeBlog
Media: video (video figure), link (Boingboing.net), link (Slashdot.org)

"Attentive Office Cubicles: Mediating Visual and Auditory Interactions Between Office Co-Workers"

"Attentive Office Cubicles: Mediating Visual and Auditory Interactions Between Office Co-Workers"

Aadil Mamuji, Roel Vertegaal, Connor Dickie, Changuk Sohn and Maria Danninger.

In proceedings and video program of Ubicomp. Nottingham, England. 2004.

We designed an office cubicle that automatically mediates communications between co-workers by sensing whether users are candidate members of the same social group. The cubicle regulates visual interactions through the use of "privacy glass", which can be rendered opaque or transparent upon detection of joint orientation. It regulates auditory interactions through noise-cancelling headphones that automatically turn off upon co-orientation.
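
The mediation rule reduces to a check for mutual orientation; a minimal sketch (hypothetical encoding of the sensing result, not the paper's pipeline):

    # Sketch of the cubicle's mediation: on joint orientation the
    # privacy glass clears and the headphones' noise cancelling stops.
    def mediate(a_facing_b: bool, b_facing_a: bool):
        joint = a_facing_b and b_facing_a
        glass = "transparent" if joint else "opaque"
        headphones = "off" if joint else "noise-cancelling"
        return glass, headphones

    print(mediate(True, True))    # ('transparent', 'off')
    print(mediate(True, False))   # ('opaque', 'noise-cancelling')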

Available: Ubicomp - Attentive Office
Media: video (Google TechTalk @ 26:25), video (video figure), video (Discovery Channel)

"eyeCOOK: A Gaze & Speech Enabled attentive Cookbook"

"eyeCOOK: A Gaze & Speech Enabled attentive Cookbook"

Jeffrey S. Shell, Jeremy S. Bradbury, Craig B. Knowles, Connor Dickie and Roel Vertegaal.

In video program of Ubicomp 2003. Seattle, Washington. 2003.

To make human computer interaction more transparent, different modes of communication need to be explored. We present eyeCOOK, a multimodal attentive cookbook to help a non-expert computer user cook a meal. The user communicates using eye-gaze and speech commands, and eyeCOOK responds visually and/or verbally, promoting communication through natural human input channels without physically encumbering the user. Our goal is to improve productivity and user satisfaction without creating additional requirements for user attention.

Available: video (video figure)

"ECSGlasses and EyePliances: Using Attention to Open Sociable Windows of Interaction"

"ECSGlasses and EyePliances: Using Attention to Open Sociable Windows of Interaction"

Jeffrey S. Shell, Roel Vertegaal, Daniel Cheng, Alexander W. Skaburskis, Changuk Sohn, A. James Stewart, Omar Aoudeh, Connor Dickie.

In proceedings of ACM Eye Tracking Research and Applications Symposium, San Antonio, Texas. 2004.

We present ECSGlasses: wearable eye contact sensing glasses that detect human eye contact. ECSGlasses report eye contact to digital devices, appliances and EyePliances in the user's attention space. Devices use this attentional cue to engage in a more sociable process of turn taking with users. This has the potential to reduce inappropriate intrusions, and limit their disruptiveness. We describe new prototype systems, including the Attentive Messaging Service (AMS), the Attentive Hit Counter, the first person attentive camcorder eyeBlog, and an updated Attentive Cell Phone. We also discuss the potential of these devices to open new windows of interaction using attention as a communication modality. Further, we present a novel signal-encoding scheme to uniquely identify EyePliances and users wearing ECSGlasses in multiparty scenarios.

Available: ACM - eyePliances

"AuraMirror: Artistically Visualizing Attention"

"AuraMirror: Artistically Visualizing Attention"

Alexander W. Skaburskis, Jeffrey S. Shell, Roel Vertegaal and Connor Dickie.

In extended abstracts of ACM CHI 2003 Conference on Human Factors in Computing Systems, 2003.

We present AuraMirror, a system that visualizes virtual windows of attention: the commodity of visual attention people exchange during interactions in small groups. AuraMirror acts as a dynamic 'painting' that passively gathers and displays attentional data by superimposing auras over each viewer's head in a real time video mirror. This permits users to see how they distribute their attention in group interactions, and the effect of interruption on this process. Finally, we describe how AuraMirror can be extended to model attention among both participants and ubiquitous devices.

Available: ACM - AuraMirror
Media: .pdf (Scientific American p. 61), video (Google TechTalk @ 11:30), Installed at Ontario Science Centre.

"Designing Attentive Cell Phones Using Wearable EyeContact Sensors"

"Designing Attentive Cell Phones Using Wearable EyeContact Sensors"

Roel Vertegaal, Connor Dickie, Changuk Sohn and Myron Flickner.

In extended abstracts of ACM CHI 2002 Conference on Human Factors in Computing Systems. Minneapolis: ACM Press, 2002.

We present a prototype attentive cell phone that uses a low-cost EyeContact sensor and speech analysis to detect whether its user is in a face-to-face conversation. We discuss how this information can be communicated to callers to allow them to employ basic social rules of interruption.
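
A minimal sketch of the availability logic, assuming the EyeContact sensor and speech analysis each reduce to a boolean (an illustration, not the prototype's classifier):

    # Sketch: the phone reports the user as busy when eye contact and
    # speech co-occur, i.e. a likely face-to-face conversation.
    def caller_status(eye_contact: bool, speech: bool) -> str:
        busy = eye_contact and speech
        return "busy: in face-to-face conversation" if busy else "available"

    print(caller_status(True, True))    # busy: in face-to-face conversation
    print(caller_status(True, False))   # available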

Available: ACM - Attentive Cellphone
Media: .pdf (Scientific American p. 60), video (video sketch with "attention auras"), video (early video sketch)

Selected Unpublished Media:

"Zero Gesture"

"Zero Gesture: Exploring Full-Body Gestural Interfaces using Motion-Capture in Simulated Micro-Gravity"

Connor Dickie, Paul Strohmeier. 2011.

Full-body motion capture systems like Vicon and Kinect have enabled user interface researchers to create a rich gestural language that allows users to interact with computers in ways that are difficult or impossible with traditional input methods. This paper describes a novel underwater motion capture system that enables users to operate in a simulated micro-gravity environment. By removing the gravitational constraint that has limited previous research in this space, we show that there is an opportunity to extend the existing gestural vocabulary to support a new class of interactions.

Media: Abstract [.pdf]

"Considerate Vending Machine"

"Vending Machine"

Ted Selker, Connor Dickie, Matthew Hockenberry, John Wetzel and Julius Akinyemi. 2006.

US Patent #8,594,838. Exhibited at Wired NextFest 2006, at PepsiCo HQ in Purchase, NY, and at the MIT Media Lab in 2006.

The vending machine platform was an experiment in bringing together what we had learned in Attentive User Interfaces and Considerate Computing.

Physically, the platform consisted of an array of 12 video screens in place of buttons. Video could span across all or a portion of the 12 screens, producing a fragmented large display. Two more 22" monitors ran along the banner section of the machine. We had full control over the physical vending aspects of the machine, including taking money, making change and dispensing product. We also included a camera for passive audience auditing.

Software considerations included a customized version of AttentionMeter, a sophisticated presence system that can register a wide range of affective feedback via implicit facial gestures. An associated media experience was developed for this new platform, including interactive games, news and sales content.

I played a major role in the design, construction and exhibiting of the Vending Machine, as well as managing a UROP student who assisted with the electronics.

Media: video (first prototype), video (second prototype), video (system details), US Patent #8,594,838

"Mouse Voting"

"Detecting Voter Coersion in Online Polls"

Ted Selker and Connor Dickie. 2006.

Can a voting machine tell when the stress on a voter might compromise their vote? A voter might be agitated because they are not sure of whom to vote for, because someone is manipulating them or because something else is wrong. It is our hypothesis that an agitated voter will have this agitation reflected in their mouse movements, and that by recording and reasoning about mouse movement, we will be able to determine if a voter is somehow agitated when casting their vote.
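
One plausible agitation feature over a recorded mouse trajectory is the variance of point-to-point speed; a sketch of that single feature (my illustration, not the feature set used in the study):

    # Sketch: jerkier mouse movement yields higher speed variance.
    import math

    def speed_variance(points):
        """points: [(x, y, t), ...] sampled mouse positions."""
        speeds = [math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
                  for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:])
                  if t1 > t0]
        if len(speeds) < 2:
            return 0.0
        mean = sum(speeds) / len(speeds)
        return sum((s - mean) ** 2 for s in speeds) / (len(speeds) - 1)

    calm    = [(i, 0, i * 0.02) for i in range(50)]            # steady drag
    erratic = [(i, (i % 3) * 9, i * 0.02) for i in range(50)]  # jittery drag
    print(speed_variance(calm) < speed_variance(erratic))      # True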

Media: .pdf (report for a CalTech/MIT Voting Technology Project seminar), video (showing mouse-based "agitation" feature detection tool)

Lecturing and Workshops:

"Nightmarket Workshop 2008 (Taiwan)"

"Nightmarket Workshop #2"

Summer, 2008. Taiwan.

I was a group leader at Nightmarket 2008, an international symposium and workshop on smart living technologies, cross-disciplinary education and sustainable culture.

This event was hosted by National Cheng Kung University, the Centre for Technologies of Ubiquitous Computing and Humanity, the College of Planning and Design, the MIT Media Laboratory, the MIT School of Architecture and Planning, the Archlife Research Foundation, the Industrial Technology Research Institute (ITRI), the Institute for Information Industry (III), the Technology Licensing & Business Incubation Centre and the National Science Council of Taiwan.

The group explored concepts and ultimately designed and built a collection of wearable personal cooling devices including active and passive cooling technologies.

From my group's facetious project statement:

- Global Warming. Serious Business. Civic Duty in Consumer Culture -

Our group decided to capitalize on Climate Change fears by peddling (at Tainan Nightmarket) a modern version of a decidedly Asian product: the personal fan. In a stroke of sinister sales genius (referencing arms dealers that sell to both sides of a conflict), our products give the illusion of combatting global warming by temporarily shielding the wearer from the immediate effects of higher temperatures through the selective cooling of strategic locations on the body. At the same time, our product contributes to global warming by exacerbating the ever-increasing frivolous use of energy, with the hope that this tit-for-tat "heat exchange" in the "War on Global Warming" will extend the conflict into a never-ending spiral that creates ever greater demand for our product.

Media: link (Nightmarket 2008 website), gallery (group project), .pdf (project poster)

"Nightmarket Workshop. 2007 (Taiwan)"

"Nightmarket Workshop #1"

Summer, 2007. Taiwan.

The Nightmarket workshop provided a series of prototyping toolkits that allowed participants to build multimedia sensing/actuating systems and installations in a four-day workshop. By using those technological toolkits, all participants were able to build their own systems that addressed cultural issues.

In this workshop, we re-designed, re-engineered, and newly prototyped technological devices that critically consider the habitual tendency of technocracy to deviate from its original ethical intent - to enrich (and not dominate) life. Drawing on their insight and technical expertise, researchers from the MIT Media Laboratory guided students in developing new perspectives and technical skills in a very short time. Students were asked to learn and build prototypes using toolkits including computer vision, multimedia, gesture, and wireless sensors.

It was my task to "float" between groups and offer technological assistance and critique. I also gave a series of talks on rapid prototyping at Gamania Inc., National Taiwan University, National Taiwan Museum of Fine Arts, Xuexue Institute, NCCU and NSYSU.

Media: link (Nightmarket 2007 website), link (image gallery)

"MIT Media Lab Hacker Seminar - Max/MSP/Jitter"

"MIT Media Lab Hacker Seminar"

Fall, 2006. Cambridge, MA.

I gave an MIT Media Lab Hacker Seminar on rapid prototyping with Max/MSP/Jitter (Max). Max is a visual programming language (like Pure Data, Quartz Composer and LabVIEW) that is ideal for rapid prototyping and running user studies. I demonstrated how easy it is to get started with Max, showed some interesting tricks I use to interface Max with custom hardware, external software and networks, and finally touched on how using Max to run experiments can be a real time-saver.

Media: video (MIT ML Hacker Seminar), link (Seminar webpage - MIT login required)

"Guest Lecturer - Queen's University School of Computing"

"Guest Lecturer - Queen's University School of Computing"

Since 2006. Kingston, Canada.

I have been a guest lecturer in the School of Computing at Queen's University in a number of different classes, including Advanced Human Computer Interaction, Computing and the Creative Arts, and an introductory programming fundamentals class.

"Guest Lecturer - Boston University School of Management"

"Guest Lecturer - Boston University School of Management"

Winter, 2007. Boston, MA.

I was invited to speak about the future of media, as well as how business can leverage emerging media to say new things. I also spoke about the perspective of the "consumer as audience".

"Guest Lecturer - Engineering Entrepreneurship Series at University of Toronto's School of Electrical and Computer Engineering."

"Guest Lecturer - Engineering Entrepreneurship Series at University of Toronto's School of Electrical and Computer Engineering

Spring, 2010. Toronto, Canada.

I was invited to speak about my experience commercializing a research/engineering project, working in startups, executing patents, and dealing with legal and marketing issues.

Citations:

- "Emotionally Reactive Television"

Ping-Yi Liu, Hung-Wei Lee, Tsai-Yen Li, Shwu-Lih Huang, Shu-Wei Hsu, An Experimental Platform Based on MCE for Interactive TV, Proceedings of the 6th European conference on Changing Television Environments, July 03-04, 2008, Salzburg, Austria

Radu-Daniel Vatavu, Stefan-Gheorghe Pentiuc, Interactive Coffee Tables: Interfacing TV within an Intuitive, Fun and Shared Experience, Proceedings of the 6th European conference on Changing Television Environments, July 03-04, 2008, Salzburg, Austria

Sabine Bachmayer, Artur Lugmayr, Gabriele Kotsis, New social & collaborative interactive TV program formats, Proceedings of the 7th International Conference on Advances in Mobile Computing and Multimedia, December 14-16, 2009, Kuala Lumpur, Malaysia

- "LookPoint: an evaluation of eye input for hands-free switching of input devices between multiple computers"

Xinyong Zhang, Xiangshi Ren, Hongbin Zha, Modeling dwell-based eye pointing target acquisition, Proceedings of the 28th international conference on Human factors in computing systems, April 10-15, 2010, Atlanta, Georgia, USA

Jamie Hart, Dumitru Onceanu, Changuk Sohn, Doug Wightman, Roel Vertegaal, The Attentive Hearing Aid: Eye Selection of Auditory Sources for Hearing Impaired Users, Proceedings of the 12th IFIP TC 13 International Conference on Human-Computer Interaction: Part I, August 24-28, 2009, Uppsala, Sweden

Masaki Omata, Masahiro Kosaka, Atsumi Imamiya, A pen-tablet-orientation-pointing method for multi-monitors, Proceedings of the 10th International Conference NZ Chapter of the ACM's Special Interest Group on Human-Computer Interaction, p.53-60, July 06-07, 2009, Auckland, New Zealand

Dagmar Kern, Paul Marshall, Albrecht Schmidt, Gazemarks: gaze-based visual placeholders to ease attention switching, Proceedings of the 28th international conference on Human factors in computing systems, April 10-15, 2010, Atlanta, Georgia, USA

Xing-Dong Yang, Edward Mak, David McCallum, Pourang Irani, Xiang Cao, Shahram Izadi, LensMouse: augmenting the mouse with an interactive touch display, Proceedings of the 28th international conference on Human factors in computing systems, April 10-15, 2010, Atlanta, Georgia, USA

Miguel A. Nacenta, Regan L. Mandryk, Carl Gutwin, Targeting across displayless space, Proceeding of the twenty-sixth annual SIGCHI conference on Human factors in computing systems, April 05-10, 2008, Florence, Italy

- "eyeLook: Using Attention to Facilitate Mobile Media Consumption"

Heiko Drewes, Alexander De Luca, Albrecht Schmidt, Eye-gaze interaction for mobile phones, Proceedings of the 4th international conference on mobile technology, applications, and systems and the 1st international symposium on Computer human interaction in mobile technology, September 10-12, 2007, Singapore

Heiko Drewes, Richard Atterer, Albrecht Schmidt, Detailed monitoring of user's gaze and interaction to improve future e-learning, Proceedings of the 4th international conference on Universal access in human-computer interaction: ambient interaction, July 22-27, 2007, Beijing, China

Robert J.K. Jacob, Audrey Girouard, Leanne M. Hirshfield, Michael S. Horn, Orit Shaer, Erin Treacy Solovey, Jamie Zigelbaum, Reality-based interaction: a framework for post-WIMP interfaces, Proceeding of the twenty-sixth annual SIGCHI conference on Human factors in computing systems, April 05-10, 2008, Florence, Italy

- "Augmenting and Sharing Memory with Eyeblog"

Vaiva Kalnikaitė, Steve Whittaker, Software or wetware?: discovering when and why people use digital prosthetic memory, Proceedings of the SIGCHI conference on Human factors in computing systems, April 28-May 03, 2007, San Jose, California, USA

John D. Smith, Roel Vertegaal, Changuk Sohn, ViewPointer: lightweight calibration-free eye tracking for ubiquitous handsfree deixis, Proceedings of the 18th annual ACM symposium on User interface software and technology, October 23-26, 2005, Seattle, WA, USA

Hyeju Jang, Jongho Won, Changseok Bae, MEMORIA: personal memento service using intelligent gadgets, Proceedings of the 12th international conference on Human-computer interaction: intelligent multimodal interaction environments, July 22-27, 2007, Beijing, China

Jenq-Shiou Leu, Yuan-Po Chi, Wei-Kuan Shih, Design and implementation of Blog rendering and accessing instantly system (BRAINS), Journal of Network and Computer Applications, v.30 n.1, p.296-307, January 2007

Vaiva Kalnikaitė, Steve Whittaker, Cueing digital memory: how and why do digital notes help us remember?, Proceedings of the 22nd British HCI Group Annual Conference on HCI 2008: People and Computers XXII: Culture, Creativity, Interaction, September 01-05, 2008, Liverpool, United Kingdom

Mark Blum, Alex (Sandy) Pentland, Gerhard Tröster, InSense: Interest-Based Life Logging, IEEE MultiMedia, v.13 n.4, p.40-48, October 2006

Alejandro Jaimes, Nicu Sebe, Multimodal human-computer interaction: A survey, Computer Vision and Image Understanding, v.108 n.1-2, p.116-134, October, 2007

- "Eye Contact Sensing Glasses for Attention Sensitive wearable Video Blogging"

Connor Dickie, Roel Vertegaal, David Fono, Changuk Sohn, Daniel Chen, Daniel Cheng, Jeffrey S Shell, Omar Aoudeh, Augmenting and sharing memory with eyeBlog, Proceedings of the 1st ACM workshop on Continuous archival and retrieval of personal experiences, October 15, 2004, New York, New York, USA

Daniel Chen, Jamie Hart, Roel Vertegaal, Towards a physiological model of user interruptability, Proceedings of the 11th IFIP TC 13 international conference on Human-computer interaction, September 10-14, 2007, Rio de Janeiro, Brazil

Jenq-Shiou Leu, Yuan-Po Chi, Wei-Kuan Shih, Design and implementation of Blog rendering and accessing instantly system (BRAINS), Journal of Network and Computer Applications, v.30 n.1, p.296-307, January 2007

- "ECSGlasses & Eyepliances: Using Attention to open Sociable Windows of Interaction"

Roel Vertegaal, Aadil Mamuji, Changuk Sohn, Daniel Cheng, Media eyepliances: using eye tracking for remote control focus selection of appliances, CHI '05 extended abstracts on Human factors in computing systems, April 02-07, 2005, Portland, OR, USA

David Holman, Gazetop: interaction techniques for gaze-aware tabletops, CHI '07 extended abstracts on Human factors in computing systems, April 28-May 03, 2007, San Jose, CA, USA

Connor Dickie, Jamie Hart, Roel Vertegaal, Alex Eiser, LookPoint: an evaluation of eye input for hands-free switching of input devices between multiple computers, Proceedings of the 20th conference of the computer-human interaction special interest group (CHISIG) of Australia on Computer-human interaction: design: activities, artefacts and environments, November 20-24, 2006, Sydney, Australia

J. David Smith, T. C. Nicholas Graham, Use of eye movements for video game control, Proceedings of the 2006 ACM SIGCHI international conference on Advances in computer entertainment technology, June 14-16, 2006, Hollywood, California

John D. Smith, Roel Vertegaal, Changuk Sohn, ViewPointer: lightweight calibration-free eye tracking for ubiquitous handsfree deixis, Proceedings of the 18th annual ACM symposium on User interface software and technology, October 23-26, 2005, Seattle, WA, USA

Mark Altosaar, Roel Vertegaal, Changuk Sohn, Daniel Cheng, AuraOrb: using social awareness cues in the design of progressive notification appliances, Proceedings of the 20th conference of the computer-human interaction special interest group (CHISIG) of Australia on Computer-human interaction: design: activities, artefacts and environments, November 20-24, 2006, Sydney, Australia

Roel Vertegaal, A Fitts Law comparison of eye tracking and manual input in the selection of visual targets, Proceedings of the 10th international conference on Multimodal interfaces, October 20-22, 2008, Chania, Crete, Greece

David Merrill, Pattie Maes, Augmenting looking, pointing and reaching gestures to enhance the searching and browsing of physical objects, Proceedings of the 5th international conference on Pervasive computing, May 13-16, 2007, Toronto, Canada

- "Auramirror: Artistically Visualizing Attention"

Connor Dickie, Roel Vertegaal, Changuk Sohn, Daniel Cheng, eyeLook: using attention to facilitate mobile media consumption, Proceedings of the 18th annual ACM symposium on User interface software and technology, October 23-26, 2005, Seattle, WA, USA

Alexander W. Skaburskis, Roel Vertegaal, Jeffrey S. Shell, Auramirror: reflections on attention, Proceedings of the 2004 symposium on Eye tracking research & applications, p.101-108, March 22-24, 2004, San Antonio, Texas

David Smith, Matthew Donald, Daniel Chen, Daniel Cheng, Changuk Sohn, Aadil Mamuji, David Holman, Roel Vertegaal, OverHear: augmenting attention in remote social gatherings through computer-mediated hearing, CHI '05 extended abstracts on Human factors in computing systems, April 02-07, 2005, Portland, OR, USA

Junji Watanabe, Hideaki Nii, Yuki Hashimoto, Masahiko Inami, Visual resonator: interface for interactive cocktail party phenomenon, CHI '06 extended abstracts on Human factors in computing systems, April 22-27, 2006, Montréal, Quebec, Canada

- "Designing Attentive Cell Phone Using Wearable Eye Contact Sensors"

John A. Kembel, Reciprocal eye contact as an interaction technique, CHI '03 extended abstracts on Human factors in computing systems, April 05-10, 2003, Ft. Lauderdale, Florida, USA

Connor Dickie, Roel Vertegaal, Jeffrey S. Shell, Changuk Sohn, Daniel Cheng, Omar Aoudeh, Eye contact sensing glasses for attention-sensitive wearable video blogging, CHI '04 extended abstracts on Human factors in computing systems, April 24-29, 2004, Vienna, Austria

Alexander W. Skaburskis, Jeffrey S. Shell, Roel Vertegaal, Connor Dickie, AuraMirror: artistically visualizing attention, CHI '03 extended abstracts on Human factors in computing systems, April 05-10, 2003, Ft. Lauderdale, Florida, USA

Baha Jabarin, James Wu, Roel Vertegaal, Lenko Grigorov, Establishing remote conversations through eye contact with physical awareness proxies, CHI '03 extended abstracts on Human factors in computing systems, April 05-10, 2003, Ft. Lauderdale, Florida, USA

Jeffrey S. Shell, Roel Vertegaal, Alexander W. Skaburskis, EyePliances: attention-seeking devices that respond to visual attention, CHI '03 extended abstracts on Human factors in computing systems, April 05-10, 2003, Ft. Lauderdale, Florida, USA

Connor Dickie, Roel Vertegaal, Changuk Sohn, Daniel Cheng, eyeLook: using attention to facilitate mobile media consumption, Proceedings of the 18th annual ACM symposium on User interface software and technology, October 23-26, 2005, Seattle, WA, USA

Connor Dickie, Roel Vertegaal, David Fono, Changuk Sohn, Daniel Chen, Daniel Cheng, Jeffrey S Shell, Omar Aoudeh, Augmenting and sharing memory with eyeBlog, Proceedings of the 1st ACM workshop on Continuous archival and retrieval of personal experiences, October 15, 2004, New York, New York, USA

Dieter Schmalstieg, Alexander Bornik, Gernot Müller-Putz, Gert Pfurtscheller, Gaze-directed ubiquitous interaction using a Brain-Computer Interface, Proceedings of the 1st Augmented Human International Conference, p.1-5, April 02-03, 2010, Megève, France

Daniel Chen, Roel Vertegaal, Using mental load for managing interruptions in physiologically attentive user interfaces, CHI '04 extended abstracts on Human factors in computing systems, April 24-29, 2004, Vienna, Austria

Francis Quek, Roger Ehrich, Thurmon Lockhart, As go the feet...: on the estimation of attentional focus from stance, Proceedings of the 10th international conference on Multimodal interfaces, October 20-22, 2008, Chania, Crete, Greece

Roel Vertegaal, Designing attentive interfaces, Proceedings of the 2002 symposium on Eye tracking research & applications, March 25-27, 2002, New Orleans, Louisiana

Jeffrey S. Shell, Roel Vertegaal, Daniel Cheng, Alexander W. Skaburskis, Changuk Sohn, A. James Stewart, Omar Aoudeh, Connor Dickie, ECSGlasses and EyePliances: using attention to open sociable windows of interaction, Proceedings of the 2004 symposium on Eye tracking research & applications, p.93-100, March 22-24, 2004, San Antonio, Texas

Maria Danninger, Roel Vertegaal, Daniel P. Siewiorek, Aadil Mamuji, Using social geometry to manage interruptions and co-worker attention in office environments, Proceedings of the 2005 conference on Graphics interface, May 09-11, 2005, Victoria, British Columbia

Mikael Wiberg, Steve Whittaker, Managing availability: Supporting lightweight negotiations to handle interruptions, ACM Transactions on Computer-Human Interaction (TOCHI), v.12 n.4, p.356-387, December 2005