Editor's note: ACM SIGCHI 2014, the premier conference on Human-Computer Interaction, has drawn to a close. Microsoft Research had the second-highest number of accepted papers at this year's conference, behind only Carnegie Mellon University. Which innovative ideas and technologies stood out? Darren Edge, Lead Researcher at Microsoft Research Asia, walks us through the highlights.
If you are reading this blog post, there is a very good chance that you are Human. There is also a good chance that you are reading this post using a physical mouse or touch screen to scroll down a Web page, which is displayed in a Web browser, which runs on an Operating System, which performs computation in hardware, which is packaged inside a physical “Computer” (for example, a PC, tablet, or mobile phone). Have you ever thought about how such technologies are designed based on theories about future users’ knowledge, skills, needs, and values? Or how researchers conduct studies both in the laboratory and “in the wild” to understand when, where, how, and why people use and respond to technology in practice? If you have thought about these questions, then you have been thinking about Human-Computer Interaction (HCI). And you are not alone.
Each year, up to 3500 people from around the world gather together for the premier international conference on Human-Computer Interaction – the ACM SIGCHI Conference on Human Factors in Computing Systems. In the spirit of simplification, this is normally just abbreviated to CHI (pronounced “kai”). Having your work accepted as a paper or note at CHI is a significant achievement – only 23% of submissions to CHI 2014 were accepted (465 out of 2036). Each year, Best Paper Awards and Honorable Mention Awards are also given to the top-rated 1% and 5% of papers respectively. At the conference, one of the co-authors of each paper then presents the work to an audience of people eager to learn about the latest and greatest HCI research. However, as the saying goes, all work and no play makes for a dull conference! Fortunately at CHI, the evening schedule is just as packed as the daytime program, with more receptions, events, and parties to attend than there are hours in the night. This year the Korea HCI party was particularly good, with excellent location, atmosphere, and people. The free drinks and snacks also helped! All of this gives me very high hopes for next year’s CHI 2015 in Seoul.
This year though, CHI was held in the city of Toronto in Ontario, Canada. CHI 2014 was the 32nd CHI conference since the series was established in 1982, and it was larger and more impressive than ever. With the conference spanning six days, including two days of workshops, and offering up to fifteen parallel tracks to choose among, it is always hard to decide which sessions to attend. Since not everybody who is interested in HCI gets the chance to travel to CHI, and not everybody who attends CHI gets to see more than a small fraction of the overall program, I would like to share my personal highlights, or CHIlights!
As I have already explained, you need to work hard and play hard to make the most of CHI. It is also important to learn from the experience of being in the audience, as well as give your best effort if you are responsible for presenting. However, working, playing, learning, and presenting are not just things researchers do at conferences – they are fundamental activities of human life. Since I have a special interest in all four of these activities, I naturally seek out HCI research that aims to transform these activities for the better. This also makes these activities an appropriate framework with which to present my personal experience of CHI 2014, which I’ll now do in 20 projects.
More Efficient Desk Work
My first engagement at CHI 2014 was in a workshop on the theme of “Peripheral Interaction: Shaping the Research and Design Space”. This was especially exciting for me since I coined the term peripheral interaction with my 2008 PhD dissertation on “Tangible User Interfaces for Peripheral Interaction”. In my position paper, I refer back to my earlier definition of peripheral interaction, in which users perform fast, frequent interactions with objects on the periphery of their workspace and attention, and propose a framework for describing the qualities of peripheral interaction in general. These qualities are more relevant than ever when considered in the context of a desktop workspace; despite all the advances in mobile and ubiquitous computing, we still spend a great deal of time working with conventional PCs and laptops at desks and tables.
Several CHI projects attempt to make desktop interaction more fluid and efficient, with two in particular thinking about how to increase the utility of regular keyboards. The first, Type-Hover-Swipe, is a modified mechanical keyboard that can recognize hand gestures both on and above the keys. This work by Stuart Taylor and other colleagues from Microsoft Research Cambridge also has the distinction of winning a Best Paper Award. The second keyboard project, GestKeyboard, can recognize stroking gestures across the keys of an unmodified keyboard, in a way that can be seamlessly combined with regular typing. The first author of this work, Haimo Zhang, was an intern in the Microsoft Research Asia (MSRA) HCI Group in 2011.
Jumping from the keyboard to the mouse, Phillip Pasqual and Jacob Wobbrock have investigated how Kinematic Template Matching can be used to predict the endpoint of a mouse pointing operation and make target selection even easier. Phillip was an intern in the MSRA HCI Group in 2012. Finally, Stephen Fitchett, an MSRA Fellowship winner and MSRA HCI intern in 2010, conducted a longitudinal field evaluation of his Finder Highlights system. The resulting paper won an Honorable Mention Award for demonstrating improved desktop file retrieval in real-world use. It is fair to say that past MSRA HCI interns, along with our Microsoft Research colleagues in Cambridge, are playing a significant role in inventing the desktop of the future.
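As a rough illustration of the template-matching idea behind endpoint prediction (a minimal sketch of my own, not Pasqual and Wobbrock's actual algorithm; the data structures and names here are hypothetical), one could compare a pointing movement's partial velocity profile against a library of recorded templates and extrapolate from the best match:

```python
def predict_endpoint(partial, templates):
    """Find the stored movement template whose early velocity profile best
    matches the observed partial movement, then extrapolate the pointer's
    endpoint from that template's total travel distance."""
    n = len(partial["speeds"])
    best, best_err = None, float("inf")
    for t in templates:
        if len(t["speeds"]) < n:
            continue  # template too short to compare against
        # mean squared error between observed and stored speed samples
        err = sum((a - b) ** 2 for a, b in zip(partial["speeds"], t["speeds"])) / n
        if err < best_err:
            best, best_err = t, err
    # project the best template's total distance along the current direction
    dx, dy = partial["direction"]
    x0, y0 = partial["start"]
    return (x0 + dx * best["total_distance"], y0 + dy * best["total_distance"])
```

A pointer-enhancement technique could call something like this on every mouse-move event and, for instance, expand the predicted target to ease selection.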
More Embodied Play
While desk work is a traditional area of interest for HCI, more recently there has been a shift towards the design of interactions beyond the desktop context and for purposes other than work. One of the most exciting trends in HCI right now is exertion gaming, or exergaming, in which the benefits of exercise, gaming, and social interaction are all combined into a single activity. There were three paper sessions at CHI 2014 dedicated to exergaming, as well as two workshops, a panel, and a Special Interest Group (SIG). Much of the original and current work in the area was conducted by Florian ‘Floyd’ Mueller, another past MSRA Fellowship winner and MSRA HCI intern all the way back in 2009. Floyd and I have continued collaborating over these past five years, with our CHI 2014 paper on Exertion Cards helping to support the creative game design process in workshop settings. The cards can help support the design of concepts like the LumaHelm – an interactive bicycle helmet expertly demonstrated in the Interactivity section of the CHI 2014 program. This was a definite CHIlight!
Towards the exertion end of the exergaming spectrum, the RecoFit system from Dan Morris and other colleagues at Microsoft Research Redmond uses a wearable sensor to find, recognize, and count repetitive exercises. This would make for a great exergaming platform! The presentation by Dan and accompanying live demo by coauthor Scott Saponas was also the most energized talk of the conference, and rightly won a People’s Choice Best Talk Award. Another movement-based system was also the winner of the undergraduate Student Research Competition, which I had the pleasure (and pressure) of judging. Kyongwon Seo’s project on Autonomy-Based Rehabilitation Design used the Microsoft Kinect device along with elements of gamification to encourage people recovering from stroke to continue their rehabilitation at home. This was one of many innovative uses of Kinect to be showcased at the conference.
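RecoFit's real pipeline applies machine learning to wearable accelerometer data; purely as a toy stand-in for the counting step (the threshold, window size, and function name here are my own invention), repetitions can be counted as rising edges in a smoothed one-dimensional signal:

```python
def count_reps(signal, threshold=0.5, window=3):
    """Count repetitions as threshold crossings (rising edges) in a
    moving-average-smoothed 1-D sensor signal. Toy illustration only."""
    # moving-average smoothing to suppress sensor noise
    smoothed = [
        sum(signal[max(0, i - window + 1):i + 1]) / (i - max(0, i - window + 1) + 1)
        for i in range(len(signal))
    ]
    reps, above = 0, False
    for v in smoothed:
        if v > threshold and not above:
            reps += 1  # a rising edge marks the start of a new repetition
            above = True
        elif v <= threshold:
            above = False
    return reps
```

Real systems must also first detect *which* exercise is being performed and reject non-exercise movement, which is where the learned models in RecoFit come in.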
Two further systems of particular interest looked at more playful and embodied interaction, with the hands and feet respectively. The first, VacuumTouch, was the outcome of Taku Hachisu’s 2013 internship with MSRA HCI researcher Masaaki Fukumoto. Their system is more attractive than regular touch surfaces in one specific way – it uses an air pump and solenoid air valves to move and immobilize the user’s finger using the power of suction. Another playful twist on an established interaction modality is the tangible interface from Dominik Schmidt (MSRA HCI intern 2011) and collaborators – their Kickables system is a tangible interface operated with your feet. Even after 32 years, HCI researchers are still inventing new ways to use our bodily skills to act with technology, as well as new ways for technology to act back on us through our full range of bodily senses. After all, isn’t this what Human-Computer Interaction is all about?
More Contextual Learning
As a researcher, I am constantly tracking the state of the art in HCI and related fields so that I can continue building my mental toolkit of models, methods, concepts, theories, and frameworks. At conferences like CHI this is easy, because all you have to do (during the paper sessions, at least) is listen and learn from the top experts in the field. However, during regular working life it is more difficult to find the time and motivation to learn, because learning is hard. This doesn’t just apply to esoteric academic theories, but to many areas of knowledge and skill development. For example, in my previous research, I have explored how “microlearning” in short, sparse fragments of free time throughout the day could help people tackle the daunting but desirable challenge of learning a second language. I saw several promising projects at CHI 2014 that address the issue of lifelong learning in real-world contexts.
Two papers in particular address the challenge of contextual vocabulary learning. The first, Smart Subtitles, provides interactive video subtitles designed for language learners. The second, WADE, is an Integrated Development Environment (IDE) that can automatically modify the user interface of existing software applications, e.g., to translate UI labels and text into another language. The first authors of these respective papers, Geza Kovacs (first year PhD student, Stanford) and Xiaojun Meng (second year PhD student, NUS), will both be joining me for internships in the MSRA HCI Group this summer. We will be working hard on some exciting projects that I hope you will see at CHI 2015 in Seoul! This will also be the first time CHI is located in Asia, opening up a whole new range of cultural and linguistic experiences for CHI attendees. However, there are also likely to be several times for each attendee where conversations are impeded by language differences between native and non-native speakers. In one of three related papers, 2010 MSRA HCI intern Ge Gao (now at Cornell University) investigated the effects of sharing Automated Transcripts on real-time multiparty conversations. This could come in very handy not just in Seoul, but in my day-to-day interactions in Beijing, since my British accent is often hard for others to understand (although not as hard as my Chinese!).
In addition to learning domain-specific knowledge and skills, it is also important to learn more general strategies for managing your time and attention. In one project funded by MSRA’s collaboration program with the Korean Ministry of Science, ICT, and Future Planning (MSIP), MSRA HCI researcher Koji Yatani collaborated with colleagues from KAIST to explore how college students were Hooked on Smartphones. They found that students at risk of addiction used their Smartphones for a daily average of 111 sessions totaling four hours of use. Not all of the participants felt that they spent this time productively or with a clear purpose in mind. One particular project that I thought offered an interesting way to claim back some of this lost time for more meaningful activities was the Work-In-Progress (WIP) on Selfsourcing Personal Tasks from Jaime Teevan and other colleagues at Microsoft Research Redmond. This project helps people to apply the methods of crowdsourcing to themselves by decomposing large personal information tasks into manageable microtasks. Just like microlearning, this can help to sustain user motivation and engagement both throughout the day and over the long term. By lowering the barrier to productive and purposeful mobile interaction, these approaches could help make people feel like they are more in control of their smartphone use, rather than feeling like their smartphone has control over them.
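The microtask idea at the heart of selfsourcing can be illustrated with a deliberately minimal sketch (my own simplification, not the project's actual system): a large personal task, represented as a list of items, is split into chunks small enough to finish in a spare moment:

```python
def decompose(items, chunk_size=5):
    """Split a large personal task (a list of items to process) into
    small, independently completable microtasks. Illustrative only."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]
```

Triaging 23 saved articles, say, then becomes five microtasks of at most five articles each, any one of which fits into a short break between other activities.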
More Engaging Presentations
As I mentioned earlier, one of the biggest challenges of getting the most out of CHI is deciding which of the many parallel tracks to attend. The decision-making process is complicated further when talks relating to your interests are shown in parallel sessions, meaning that you have to create an intricate session-switching plan that inconveniences both you and the presenters in each session. Researchers have long struggled with the problem of scheduling conference sessions such that each session has closely related talks and does not overlap with closely related sessions. Lydia Chilton and collaborators have now helped streamline this process by harnessing the power of the crowd – in particular, the crowd of Program Committee (PC) members at the PC meeting in which papers are discussed and accepted for publication (or not). Her Frenzy system won an Honorable Mention Award at CHI 2014 and builds on earlier crowdsourcing work we collaborated on during her internship in the MSRA HCI Group in 2011. It was also used to create the grouping of papers into the sessions that formed the final CHI 2014 conference program. I experienced the benefits of Frenzy directly in the “Presentation Technologies” paper session, since the system helped PC members to successfully group my two presentation-based papers alongside a closely related paper from our colleagues in Microsoft Research Redmond.
These three related papers cover the challenging activities of planning the narrative structure of a presentation (TurningPoint), preparing to deliver a presentation through structured preparation and rehearsal (PitchPerfect), and performing a software demonstration to a live audience (DemoWiz). The presenters and first authors of the first two projects, Larissa Pschetz and Ha Trinh respectively, were both MSRA HCI interns in 2013 working with both Koji Yatani and me. Both papers also won Honorable Mention Awards – well done Ha and Larissa! The presenter and first author of the third project, Pei-Yun Chi, was also a Microsoft Research intern working with Bongshin Lee and Steven Drucker in the Redmond lab. Clearly, there is a lot of presentation-related work happening within Microsoft that suggests interesting new directions for products like PowerPoint. Stay tuned!
It is only fitting that the 20th and final project represents great work that we can all learn from, communicated through a playful and engaging presentation that won a People’s Choice Best Talk Award. Proposing a new development model for cross-device Web applications, and demonstrating this using a presentation application and the latest wearable technologies, the Panelrama project represents a significant step forward in how we think about cross-device experiences. As the presenter Jishuo Yang connected additional devices to his presentation application, the components dynamically redistributed to ensure the best fit between interface panels and interaction devices. In the end, Jishuo was presenting with timing information and slide control on his watch, speaking notes on his head-mounted display, the presentation slide list on his mobile phone, and the current presentation slide on his laptop connected to the projection screen. Overall, it was a very impressive demonstration of powerful cross-device interaction capabilities, enabled by relatively simple HTML extensions. This work was conducted by Jishuo in collaboration with Microsoft Research alumnus Daniel Wigdor at the University of Toronto, meaning that they didn’t have to travel far to share their far-reaching ideas.
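Panelrama itself expresses this behavior through HTML extensions; the underlying best-fit redistribution can be sketched in a few lines (a hypothetical illustration of the idea, not Panelrama's actual API; the affinity scores are invented):

```python
def redistribute(panels, devices, affinity):
    """Assign each UI panel to the connected device it fits best, using a
    panel-device affinity score. Re-running this after a device connects or
    disconnects yields the updated cross-device layout. Hypothetical sketch."""
    return {panel: max(devices, key=lambda d: affinity[panel][d])
            for panel in panels}
```

For example, once a watch joins the device list, a slide-control panel with high affinity for small wrist-worn screens would migrate from the laptop to the watch, mirroring the demo described above.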
This has been a summary of my CHI 2014 experience in 20 projects. When I started assembling this list, I was unsure whether I would be able to select 20 projects of personal interest, all with at least some connection to Microsoft Research. As it happens, I was able to choose 19, and the remaining paper still used the Microsoft Kinect technology! Of these 20 projects, eight were archival Papers or Notes coauthored by Microsoft Researchers. Between them, these eight papers received one of the seven Best Paper Awards and two of the five Honorable Mention Awards given to Microsoft Research papers at CHI 2014. These eight papers also represent just under a quarter of the 34 papers coauthored by Microsoft Researchers in total, which amounts to a substantial 7.5% of the final Papers and Notes program. This made Microsoft Research the institution with the second-highest number of papers at CHI 2014, narrowly surpassed by Carnegie Mellon University with 38 papers. This isn’t too bad given that we divide our time between advancing the state of the art in research and contributing to future generations of Microsoft products. And we can always aim for the top spot at CHI 2015!
Finally, of these 20 projects, I am pleased to say that 14 were from current members and past and future interns of the MSRA HCI Group. It is always a pleasure to work with outstanding interns on projects that make a contribution to Microsoft, but it is especially rewarding to see these interns growing as HCI researchers within the CHI community at large. I would now like to conclude my CHI 2014 report by thanking all of our past interns for their great work. I would also like to thank you, the reader, for using your Human-Computer Interaction skills to make it to the end of this rather lengthy blog post. I hope you found it interesting. Now, time to get back to work on projects for CHI 2015.