Saturday, June 30, 2012

Job: Fanhattan OpenGL Software Developer Opportunity

Via regular dgl collaborator Vidya Setlur. 

Forwarded message:

From: Jen Burns <jen@questgroups.com>
Date: Wednesday, June 27, 2012 12:39:23 PM
Subject: Fanhattan OpenGL Software Developer Opportunity...

Hope all is well. I wanted to reach out to you regarding an opportunity I'm working on with Fanhattan, located in San Jose. The company is led by a team of experienced executives, designers, and engineers from TiVo, Netflix, Vudu, Disney, and MTV; I'm working with the VP of Engineering, who is currently looking for a solid OpenGL Software Developer. The company is backed by blue-chip venture capital firms including NEA, Redpoint Ventures, Greycroft Partners, BV Capital, LA angel investor Jarl Mohn, and independent investors from the entertainment and technology industries. Please let me know if this is something you would be interested in. I have included the job req below.

OpenGL Software Developer

Description

Fanhattan is a service that inspires you to discover all the world's entertainment. Launched at All Things Digital in June 2011, Fanhattan brings a new approach to entertainment discovery by helping you browse all the world's movies and TV shows with a simple and elegant user experience – in the living room, on the web, and on the go. The service encourages exploration by combining movies and TV shows with an expansive world of related content, visual assets, and information pulled from the web that bring entertainment to life. Finally, Fanhattan gives you the most comprehensive set of options on where to find the entertainment you want across the top digital media providers - Netflix, Hulu Plus, iTunes, VUDU, and ABC. The Fanhattan iPad app is now live in the App Store. Expect more platforms soon, and join us now to play a pivotal role as we evolve and grow the Fanhattan service. For more information, you can check out our website (http://fanhattan.com) and our press: http://www.delicious.com/fanhattan.

The software developer will be part of a team building the OpenGL graphics engine for content rich consumer applications on mobile platforms. Develop, test, and release new features as well as maintain existing ones in a fast-moving agile test-driven development environment.

Required Skills
• Object-oriented programming skills
• Experience in using OpenGL ES in Android with 3D animations
• Experience in threading
• Experience in performance analysis and optimization
• Knowledge of the OpenGL shading language and shading techniques
• Knowledge of computer architecture and operating systems
• Knowledge of data structures

Personal Attributes
• Highly self-motivated and directed
• Adapting to new technologies quickly as needed
• Keen attention to detail

Education
Degree in Engineering, Computer Science or related fields

Jen Burns|  Technical Recruiter | Quest Groups

Engineering, Product, Development, Leadership
Tel: 650.328.4100 x123 | Cell: 650.296.5876

Jen@questgroups.com

 


Friday, June 29, 2012

Spotted: optimization of touchscreen keyboards -- still hard to beat qwerty

Familiarity of new layouts may be a bad thing: an uncanny valley for text entry?

Multidimensional Pareto optimization of touchscreen keyboards for speed, familiarity and improved spell checking

Mark Dunlop, John Levine

This paper presents a new optimization technique for keyboard layouts based on Pareto front optimization. We used this multifactorial technique to create two new touchscreen phone keyboard layouts based on three design metrics: minimizing finger travel distance in order to maximize text entry speed, a new metric to maximize the quality of spell correction by reducing tap ambiguity, and maximizing familiarity through a similarity function with the standard Qwerty layout. The paper describes the optimization process and resulting layouts for a standard trapezoid shaped keyboard and a more rectangular layout. Fitts' law modelling shows a predicted 11% improvement in entry speed without taking into account the significantly improved error correction potential and the subsequent effect on speed.
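
The core mechanism is a Pareto front over competing layout metrics: keep every layout that no other layout beats on all metrics at once. Below is a minimal sketch of that filtering step, not the authors' implementation, using hypothetical layout names and metric values and assuming all three metrics are scaled so lower is better.

```python
# Pareto-front filtering over keyboard layouts (illustrative sketch only).

def dominates(a, b):
    """Layout a dominates b if it is no worse on every metric and better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(layouts):
    """layouts: list of (name, (travel, tap_ambiguity, qwerty_dissimilarity)) tuples."""
    front = []
    for name, scores in layouts:
        if not any(dominates(other, scores) for _, other in layouts if other != scores):
            front.append((name, scores))
    return front

# Hypothetical scores for illustration.
candidates = [
    ("qwerty",   (1.00, 1.00, 0.00)),
    ("layout_a", (0.85, 0.90, 0.30)),
    ("layout_b", (0.80, 0.95, 0.60)),
    ("layout_c", (0.90, 0.95, 0.35)),  # dominated by layout_a, so filtered out
]
print(pareto_front(candidates))  # qwerty, layout_a and layout_b survive
```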

Spotted: crowd sourced study of typing on soft keyboards

Visual feedback reduces error, but slows speeds. 

Observational and experimental investigation of typing behaviour using virtual keyboards for mobile devices

Niels Henze, Enrico Rukzio, Susanne Boll

With the rise of current smartphones, virtual keyboards for touchscreens became the dominant mobile text entry technique. We developed a typing game that records how users touch on the standard Android keyboard to investigate users' typing behaviour. 47,770,625 keystrokes from 72,945 installations have been collected by publishing the game. By visualizing the touch distribution we identified a systematic skew and derived a function that compensates this skew by shifting touch events. By updating the game we conduct an experiment that investigates the effect of shifting touch events, changing the keys' labels, and visualizing the touched position. Results based on 6,603,659 keystrokes and 13,013 installations show that visualizing the touched positions using a simple dot decreases the error rate of the Android keyboard by 18.3% but also decreases the speed by 5.2% with no positive effect on learnability.
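
The paper's derived compensation function is not given in the abstract; the sketch below only illustrates the general idea under simple assumptions: estimate the average offset between logged touches and their intended key centers, then shift incoming touch events by that offset before hit-testing.

```python
# Compensating a systematic touch skew (illustrative sketch, hypothetical data).

def mean_offset(samples):
    """samples: list of (touch_xy, key_center_xy) pairs collected during play."""
    n = len(samples)
    dx = sum(t[0] - k[0] for t, k in samples) / n
    dy = sum(t[1] - k[1] for t, k in samples) / n
    return dx, dy

def compensate(touch, offset):
    """Shift a raw touch event against the estimated systematic skew."""
    return touch[0] - offset[0], touch[1] - offset[1]

# Hypothetical logs: users tend to hit ~3 px right of and ~5 px below key centers.
data = [((103, 205), (100, 200)), ((52, 106), (49, 100)), ((201, 304), (198, 300))]
off = mean_offset(data)
print(compensate((150, 250), off))  # corrected touch point: (147, 245)
```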

Trajectory-aware mobile search


Trajectory-aware mobile search

Shahriyar Amini, A.J. Brush, John Krumm, Jaime Teevan, Amy Karlson

Most location-aware mobile applications only make use of the user's current location, but there is an opportunity for them to infer the user's future locations. We present Trajectory-Aware Search (TAS), a mobile local search application that predicts the user's destination in real-time based on location data from the current trip and shows search results near the predicted location. TAS demonstrates the feasibility of destination prediction in an interactive mobile application. Our user study of TAS shows using predicted destinations to help select search results positively augments the local search experience.
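
As a rough illustration of destination prediction from a partial trip (an assumption about the general flavor of such systems, not the TAS algorithm), one can match the route travelled so far against previously observed trips and predict the destination of the best-matching trip:

```python
# Toy destination prediction from a partial trajectory (illustrative sketch).

def grid(pt, cell=0.01):
    """Snap a (lat, lon) point to a coarse grid cell so routes can be compared."""
    return round(pt[0] / cell), round(pt[1] / cell)

def predict(current, past_trips):
    """past_trips: list of (route_points, destination). Returns the destination
    whose route overlaps the current partial route in the most grid cells."""
    cur = {grid(p) for p in current}
    best, best_overlap = None, -1
    for route, dest in past_trips:
        overlap = len(cur & {grid(p) for p in route})
        if overlap > best_overlap:
            best, best_overlap = dest, overlap
    return best

# Hypothetical trip history and current partial trip.
trips = [([(35.78, -78.68), (35.79, -78.66)], "office"),
         ([(35.78, -78.68), (35.77, -78.70)], "grocery")]
print(predict([(35.78, -78.68), (35.79, -78.66)], trips))  # -> "office"
```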

Spotted: Using mobile phones to support reducing power use


Using mobile phones to support sustainability: a field study of residential electricity consumption

Jesper Kjeldskov, Mikael B. Skov, Jeni Paay, Rahuvaran Pathmanathan

Recent focus on sustainability has made consumers more aware of our joint responsibility for conserving energy resources such as electricity. However, reducing electricity use can be difficult with only a meter and a monthly or annual electricity bill. With the emergence of new power meters units, information on electricity consumption is now available digitally and wirelessly. This enables the design and deployment of a new class of persuasive systems giving consumers insight into their use of energy resources and means for reducing it. In this paper, we explore the design and use of one such system, Power Advisor, promoting electricity conservation through tailored information on a mobile phone or tablet.

Spotted: using mobiles to capture and share what you toss

Hmm... Is this gamification or shamification? 

"We've bin watching you": designing for reflection and social persuasion to promote sustainable lifestyles

Anja Thieme, Rob Comber, Julia Miebach, Jack Weeden, Nicole Kraemer, Shaun Lawson, Patrick Olivier

BinCam is a social persuasive system to motivate reflection and behavioral change in the food waste and recycling habits of young adults. The system replaces an existing kitchen refuse bin and automatically logs disposed of items through digital images captured by a smart phone installed on the underside of the bin lid. Captured images are uploaded to a BinCam application on Facebook where they can be explored by all users of the BinCam system. Engagement with BinCam is designed to fit into the existing structure of users' everyday life, with the intention that reflection on waste and recycling becomes a playful and shared group activity.

Spotted: user testing typography


Legible, are you sure?: an experimentation-based typographical design in safety-critical context

Jean-Luc Vinot, Sylvie Athenes

Designing Safety-critical interfaces entails proving the safety and operational usability of each component. Largely taken for granted in everyday interface design, the typographical component, through its legibility and aesthetics, weighs heavily on the ubiquitous reading task at the heart of most visualizations and interactions. In this paper, we present a research project whose goal is the creation of a new typeface to display textual information on future aircraft interfaces. After an initial task analysis leading to the definition of specific needs, requirements and design principles, the design constantly evolves from an iterative cycle of design and experimentation. We present three experiments (laboratory and cockpit) used mainly to validate initial choices and fine-tune font properties.

Find: intel cto on connecting research to development -- pathfinding through the valley of research death


Intel Labs: 21st Century Industrial Research

This morning I had the opportunity to present a keynote speech at the U.S. Innovation Summit in Washington D.C., alongside the US CTO and CIO, among many other distinguished participants.


I was asked to speak about the importance of U.S. innovation to job creation, the economy and the future of U.S. competitiveness, so I took the opportunity to discuss how Intel undertook a transformation in our approach to research and innovation and how far we’ve come in the past few years. We view this approach as a 21st century model of industrial research in contrast to the 20th century model of Bell Labs and the many U.S., European, and Asian companies that copied the Bell Labs model.


I’ve received several requests for the text from the speech, so I’ve included it below.  Enjoy.



The U.S. Innovation Summit 


The Newseum, Washington, D.C.
June 20, 2012


Prepared Speech by Justin Rattner, Intel CTO


Thank you and good morning.


It’s a pleasure to be here today to discuss the importance of innovation to the economic future of the United States.  I’ll try to avoid the usual platitudes and get right to what I think U.S. industry needs to do to get its innovation house in order.


It is no doubt clear to those of you who work inside the beltway that the word “innovation” is on the lips of everyone from corporate executives to government leaders and university presidents.  Each of them talks about the need for the U.S. to accelerate its pace of innovation or be overrun by innovation coming from virtually every other point on the planet. The message is simple: innovate or die.


Unfortunately, many of these same leaders often confuse innovation with ideation, and that, in my judgment, is a critical, if not fatal, mistake. As the CTO of a major technology company, I am constantly exposed to new ideas for all manner of products and services. There are ideas bubbling up in my organization and ideas streaming in from our customers and our collaborators, in both industry and universities. Ask any VC if he or she is lacking for ideas. They’ll tell you the same thing. Ideas are cheap; a dime a dozen. Innovation, not ideation, is where we need to focus.


Another common confusion is over the difference between invention and innovation. Every time I hear people reminiscing about the good ol’ days of research when Bell Labs or IBM Research was winning another Nobel Prize or Xerox PARC was off inventing the future of computing, I just cringe. While those industrial-scale research labs of the 20th century were great inventors of things, from the first laser to the laser printer, they were absolute disasters at making them practical and getting them to market. Despite the fact that most...

Find: Interesting new center: the Intel Science and Technology Center for Social Computing


Announcing the Intel Science and Technology Center for Social Computing

We are excited to announce today a new Intel Science and Technology Center for Social Computing, the 7th in a series of partnerships between the corporation and leading US universities.  The University of California, Irvine will be the main site for this distributed research organization, in collaboration with research groups at Cornell University, Georgia Institute of Technology, Indiana University, and New York University.  The center is co-led by principal investigators Paul Dourish (Professor of Informatics, UC Irvine) and Scott Mainwaring (senior research scientist, Interaction and Experience Research, Intel Labs).  Bill Maurer (Professor of Anthropology and Associate Dean of the School of Social Sciences, UC Irvine) serves as academic co-PI, and Rajiv Mathur (University Collaboration Office) is the Program Director.



Social Computing is the study of information technologies and digital media as social and cultural phenomena.  While this has been true since the beginnings of the computing industry, with the rise of social networking systems, Web 2.0, cloud and embedded computing, and the proliferation of ways and places to access digital media, the value, indeed the necessity, of this perspective is increasingly clear.  For example, the tremendous success of Facebook and Twitter can only be understood as much the result of social processes as technological ones.  This and many other cases point to the pervasive entanglement of the social and technical worlds, and a pressing need for new paradigms for the design and analysis of technologies, paradigms that are rooted in the theories and methods of the social sciences and humanities as much as they are in engineering and the hard sciences.


 


For too long, social scientists and technologists have worked as if their domains were essentially independent.  In certain special cases the two communities have come together, productively, to understand and build devices, products, and services that could not be realized without such collaboration.  In the 70s and 80s, as time-sharing and PCs brought computing power to mass audiences, we saw the rise of human-computer interaction, and new or newly prominent professions like “human factors engineer” and “interaction designer”.  Likewise in the 90s and 00s, the rise of the consumer-based internet economy required and built upon different kinds of dialogs between technologists and people-focused disciplines like “ethnographic consumer research” and “experience design”.


 


Technology is now instrumental in defining who we are, how we think about ourselves and our lives, and how we act individually and collectively.  With sensors, clouds, and pervasive possibilities of access, we no longer have to actually use technology to be affected by it.  Can you really remain unaffected by Facebook even if you opt out of it, if your friends, relatives, and future employers incr...

Find: imaging parallelism -- a gigapixel camera

Panoramas done all at once, in one box. 

Duke engineers improve camera resolution

A team led by David Brady at Duke has created a camera with gigapixel resolution. This is thousands of times better than current consumer cameras and could revolutionize concerts...

Friday, June 15, 2012

Spotted: Evaluating legibility experimentally -- A typographical design


Legible, are you sure?: an experimentation-based typographical design in safety-critical context

Jean-Luc Vinot, Sylvie Athenes

Designing Safety-critical interfaces entails proving the safety and operational usability of each component. Largely taken for granted in everyday interface design, the typographical component, through its legibility and aesthetics, weighs heavily on the ubiquitous reading task at the heart of most visualizations and interactions. In this paper, we present a research project whose goal is the creation of a new typeface to display textual information on future aircraft interfaces. After an initial task analysis leading to the definition of specific needs, requirements and design principles, the design constantly evolves from an iterative cycle of design and experimentation. We present three experiments (laboratory and cockpit) used mainly to validate initial choices and fine-tune font properties.

Spotted: Phone as a pixel -- enabling ad-hoc, large-scale displays using mobile devices

Similar to a project from our mobiles course a bit over a year ago. 

Phone as a pixel: enabling ad-hoc, large-scale displays using mobile devices

Julia Schwarz, David Klionsky, Chris Harrison, Paul Dietz, Andrew Wilson

We present Phone as a Pixel: a scalable, synchronization-free, platform-independent system for creating large, ad-hoc displays from a collection of smaller devices. In contrast to most tiled-display systems, the only requirement for participation is for devices to have an internet connection and a web browser. Thus, most smartphones, tablets, laptops and similar devices can be used. Phone as a Pixel uses a color-transition encoding scheme to identify and locate displays. This approach has several advantages: devices can be arbitrarily arranged (i.e., not in a grid) and infrastructure consists of a single conventional camera. Further, additional devices can join at any time without re-calibration.
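
The abstract does not spell out the encoding, so the sketch below is only a guess at the flavor of a color-transition identification scheme: each device displays a short sequence of colors derived from its ID, and a camera watching the scene decodes the sequence at each screen's location to recover which device is where. The color table and bit width are hypothetical.

```python
# Toy color-transition encoding of a device ID (illustrative sketch only).

COLORS = {"00": (255, 0, 0), "01": (0, 255, 0), "10": (0, 0, 255), "11": (255, 255, 255)}

def encode(device_id, bits=8):
    """Turn an integer ID into a list of RGB frames to display, two bits per frame."""
    b = format(device_id, f"0{bits}b")
    return [COLORS[b[i:i + 2]] for i in range(0, bits, 2)]

def decode(frames):
    """Invert the mapping on a captured frame sequence to recover the ID."""
    inv = {v: k for k, v in COLORS.items()}
    return int("".join(inv[f] for f in frames), 2)

seq = encode(0b10110001)
print(seq, decode(seq))  # round-trips back to 177
```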

Spotted: Determining the orientation of proximate mobile devices using their back facing cameras

Yeah, but what if I don't want to point just at the other guy?

Determining the orientation of proximate mobile devices using their back facing camera

David Dearman, Richard Guy, Khai Truong

Proximate mobile devices that are aware of their orientation relative to one another can support novel and natural forms of interaction. In this paper, we present a method to determine the relative orientation of proximate mobile devices using only the backside camera. We implemented this method as a service called Orienteer, which provides mobile device with the orientation of other proximate mobile devices. We demonstrate that orientation information can be used to enable novel and natural interactions by developing two applications that allow the user to push content in the direction of another device to share it and point the device toward another to filter content based on the device's owner.

Spotted: Small window on a large world -- gyro and face tracking for viewing large imagery on mobile devices


Looking at you: fused gyro and face tracking for viewing large imagery on mobile devices

Neel Joshi, Abhishek Kar, Michael Cohen

We present a touch-free interface for viewing large imagery on mobile devices. In particular, we focus on viewing paradigms for 360 degree panoramas, parallax image sequences, and long multi-perspective panoramas. We describe a sensor fusion methodology that combines face tracking using a front-facing camera with gyroscope data to produce a robust signal that defines the viewer's 3D position relative to the display. The gyroscopic data provides both low-latency feedback and allows extrapolation of the face position beyond the field of view of the front-facing camera. We also demonstrate a hybrid position and rate control that uses the viewer's 3D position to drive exploration of very large image spaces.
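
A common way to realize this kind of fusion is a complementary filter: integrate the gyro for low-latency updates, and pull the estimate toward the face-tracking angle whenever the face is visible. The sketch below illustrates that general idea; it is an assumption, not the paper's algorithm.

```python
# Complementary-filter fusion of gyro rate and face-tracking angle (illustrative).

class GyroFaceFusion:
    def __init__(self, alpha=0.98):
        self.alpha = alpha      # weight on the gyro-integrated estimate
        self.angle = 0.0        # estimated viewer angle relative to the display

    def update(self, gyro_rate, dt, face_angle=None):
        # Always integrate the gyro (fast, but drifts).
        predicted = self.angle + gyro_rate * dt
        if face_angle is None:
            # Face outside the camera's field of view: extrapolate from the gyro alone.
            self.angle = predicted
        else:
            # Face visible: pull the estimate toward the absolute face-tracking angle.
            self.angle = self.alpha * predicted + (1.0 - self.alpha) * face_angle
        return self.angle

fusion = GyroFaceFusion()
print(fusion.update(gyro_rate=0.2, dt=0.016, face_angle=0.01))
```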

Spotted: how easily can someone shoulder surf your phone's gesture lock?


Assessing the vulnerability of magnetic gestural authentication to video-based shoulder surfing attacks

Alireza Sahami Shirazi, Peyman Moghadam, Hamed Ketabdar, Albrecht Schmidt

Secure user authentication on mobile phones is crucial, as they store highly sensitive information. Common approaches to authenticate a user on a mobile phone are based either on entering a PIN, a password, or drawing a pattern. However, these authentication methods are vulnerable to the shoulder surfing attack. The risk of this attack has increased since means for recording high-resolution videos are cheaply and widely accessible. If the attacker can videotape the authentication process, PINs, passwords, and patterns do not even provide the most basic level of security. In this project, we assessed the vulnerability of a magnetic gestural authentication method to the video-based shoulder surfing attack.

Spotted: The design space of opinion measurement interfaces


The design space of opinion measurement interfaces: exploring recall support for rating and ranking

Syavash Nobarany, Louise Oram, Vasanth Kumar Rajendran, Chi-Hsiang Chen, Joanna McGrenere, Tamara Munzner

Rating interfaces are widely used on the Internet to elicit people's opinions. Little is known, however, about the effectiveness of these interfaces and their design space is relatively unexplored. We provide a taxonomy for the design space by identifying two axes: Measurement Scale for absolute rating vs. relative ranking, and Recall Support for the amount of information provided about previously recorded opinions. We present an exploration of the design space through iterative prototyping of three alternative interfaces and their evaluation. Among many findings, the study showed that users do take advantage of recall support in interfaces, preferring those that provide it.

Spotted: first phone contact -- on Vanuatu.


Appreciating plei-plei around mobiles: playfulness in Rah island

Pedro Ferreira, Kristina Höök

We set out to explore and understand the ways in which mobiles made their way into an environment--Rah Island in Vanuatu--for the first time. We were struck by their playful use, especially given the very limited infrastructure and inexpensive devices that were available. Based on our findings, we discuss tensions between playfulness and utility, in particular relating to socio-economic benefits, and conclude that playfulness in these settings needs to be taken as seriously as in any other setting.

Spotted: changing the shape of your mobile device


Rock-paper-fibers: bringing physical affordance to mobile touch devices

Frederik Rudeck, Patrick Baudisch

We explore how to bring physical affordance to mobile touch devices. We present Rock-Paper-Fibers, a device that is functionally equivalent to a touchpad, yet that users can reshape so as to best match the interaction at hand. For efficiency, users interact bimanually: one hand reshapes the device and the other hand operates the resulting widget. We present a prototype that achieves deformability using a bundle of optical fibers, demonstrate an audio player and a simple video game each featuring multiple widgets. We demonstrate how to support applications that require responsiveness by adding mechanical wedges and clamps.

Spotted: mClerk -- enabling mobile crowdsourcing in developing regions


mClerk: enabling mobile crowdsourcing in developing regions

Aakar Gupta, William Thies, Edward Cutrell, Ravin Balakrishnan

Global crowdsourcing platforms could offer new employment opportunities to low-income workers in developing countries. However, the impact to date has been limited because poor communities usually lack access to computers and the Internet. This paper presents mClerk, a new platform for mobile crowdsourcing in developing regions. mClerk sends and receives tasks via SMS, making it accessible to anyone with a low-end mobile phone. However, mClerk is not limited to text: it leverages a little-known protocol to send small images via ordinary SMS, enabling novel distribution of graphical tasks. Via a 5-week deployment in semi-urban India, we demonstrate that mClerk is effective for digitizing local-language documents.

Thursday, June 14, 2012

Spotted: The google maps image of the city -- differing perceptions of the urban environment


Drawing the city: differing perceptions of the urban environment

Frank Bentley, Henriette Cramer, William Hamilton, Santosh Basapur

In building location-based services, it is important to present information in ways that fit with how individuals view and navigate the city. We conducted an adaptation of the 1970s Mental Maps study by Stanley Milgram in order to better understand differences in people's views of the city based on their backgrounds and technology use. We correlated data from a demographic questionnaire with the map data from our participants to perform a first-of-its-kind statistical analysis on differences in hand-drawn city maps. We describe our study, findings, and design implications for location-based services.

Spotted: Preserving the browsing experience with visualization -- supporting serendipitous book discoveries through information visualization


The bohemian bookshelf: supporting serendipitous book discoveries through information visualization

Alice Thudt, Uta Hinrichs, Sheelagh Carpendale

Serendipity, a trigger of exciting yet unexpected discoveries, is an important but comparatively neglected factor in information seeking, research, and ideation. We suggest that serendipity can be facilitated through visualization. To explore this, we introduce the Bohemian Bookshelf, which aims to support serendipitous discoveries in the context of digital book collections. The Bohemian Bookshelf consists of five interlinked visualizations each offering a unique overview of the collection. It aims at encouraging serendipity by (1) offering multiple visual access points to the collection, (2) highlighting adjacencies between books, (3) providing flexible visual pathways for exploring the collection, (4) enticing curiosity through abstract, metaphorical, and visually distinct representations of books, and (5) enabling a playful approach to information exploration.

Spotted: The case of the missed icon: change blindness on mobile devices


The case of the missed icon: change blindness on mobile devices

Thomas Davies, Ashweeni Beeharee

Insights into human visual attention have benefited many areas of computing, but perhaps most significantly visualisation and UI design [3]. With the proliferation of mobile devices capable of supporting significantly complex applications on small screens, demands on mobile UI design and the user's visual system are becoming greater. In this paper, we report results from an empirical study of human visual attention, specifically the Change Blindness phenomenon, on handheld mobile devices and its impact on mobile UI design. It is arguable that due to the small size of the screen - unlike a typical computer monitor - a greater visual coverage of the mobile device is possible, and that these phenomena may occur less frequently during the use of the device, or even that they may not occur at all.

Spotted: Using mobile phones to present medical information to hospital patients


Using mobile phones to present medical information to hospital patients

Laura Pfeifer Vardoulakis, Amy Karlson, Dan Morris, Greg Smith, Justin Gatewood, Desney Tan

The awareness that hospital patients have of the people and events surrounding their care has a dramatic impact on satisfaction and clinical outcomes. However, patients are often under-informed about even basic aspects of their care. In this work, we hypothesize that mobile devices - which are increasingly available to patients - can be used as real-time information conduits to improve patient awareness and consequently improve patient care. To better understand the unique affordances that mobile devices offer in the hospital setting, we provided twenty-five patients with mobile phones that presented a dynamic, interactive report on their progress, care plan, and care team throughout their emergency department stay.

Spotted: Is beautiful usable? Google, U Basel and U Copenhagen study the relationship

Long paper here that looks worth a good read. 

Is beautiful usable? What is the influence of beauty and usability on reactions to a product?

Posted by Javier Bargas-Avila, Senior User Experience Researcher at YouTube UX Research

Did you ever come across a product that looked beautiful but was awful to use? Or stumbled over something that was not nice to look at but did exactly what you wanted?

Product usability and aesthetics are coexistent, but they are not identical. To understand how usability and aesthetics influence reactions to a product, we conducted an experimental lab study with 80 participants. We created four versions of an online clothing shop varying in beauty (high vs. low) and usability (high vs. low). Participants had to find a number of items in one of those shops and buy them. To understand how the factors of beauty and usability influence users' happiness, we measured how much they liked the shop before and after interaction.

The results showed that the beauty of the interface did not affect how users perceived the usability of the shops: participants were capable of distinguishing whether a product was usable or not, no matter how nice it looked. However, the experiment showed that the usability of the shops influenced how users rated the products' beauty. Participants using shops with bad usability rated the shops as less beautiful after using them. We showed that poor usability led to frustration, which put the users in a bad mood and made them rate the product as less beautiful than before interacting with the shop.



Successful products should be beautiful and usable. Our data provide insight into how these factors work together.

Wednesday, June 13, 2012

Press: more reports about our mobile course's CityCamp successes

News: the N&O on our course's involvement at CityCamp Raleigh


CityCamp group devises guide to the greenways

At CityCamp, participants think up ways technology can be used to improve local government. The winning team won $5,000 and plans to launch its app next month.


Monday, June 11, 2012

Spotted: mobile software testing using both mass releases and lab work


A hybrid mass participation approach to mobile software trials

Alistair Morrison, Donald McMillan, Stuart Reeves, Scott Sherwood, Matthew Chalmers

User trials of mobile applications have followed a steady march out of the lab, and progressively further "into the wild", recently involving "app store"-style releases of software to the general public. Yet from our experiences on these mass participation systems and a survey of the literature, we identify a number of reported difficulties. We propose a hybrid methodology that aims to address these, by combining a global software release with a concurrent local trial. A phone-based game, created to explore the uptake and use of ad hoc peer-to-peer networking, was evaluated using this new hybrid trial method, combining a small-scale local trial (11 users) with a "mass participation" trial (over 10,000 users).

Spotted: on the visual experience of canvas presentations like prezi

Need to read this one. 

Fly: studying recall, macrostructure understanding, and user experience of canvas presentations

Leonhard Lichtschlag, Thomas Hess, Thorsten Karrer, Jan Borchers

Most presentation software uses the slide deck metaphor to create visual presentation support. Recently, canvas presentation tools such as Fly or Prezi have begun to use a zoomable free-form canvas to arrange information instead. While their effect on authoring presentations has been evaluated previously, we studied how they impact the audience. In a quantitative study, we compared audience retention and macrostructure understanding of slide deck vs. canvas presentations. We found both approaches to be equally capable of communicating information to the audience. Canvas presentations, however, were rated by participants to better aid them in staying oriented during a talk.

Spotted: A comparative evaluation of finger and pen stroke gestures

Shumin does good work, now at google. 

A comparative evaluation of finger and pen stroke gestures

Huawei Tu, Xiangshi Ren, Shumin Zhai

This paper reports an empirical investigation in which participants produced a set of stroke gestures with varying degrees of complexity and in different target sizes using both the finger and the pen. The recorded gestures were then analyzed according to multiple measures characterizing many aspects of stroke gestures. Our findings were as follows: (1) Finger drawn gestures were quite different to pen drawn gestures in basic measures including size ratio and average speed. Finger drawn gestures tended to be larger and faster than pen drawn gestures. They also differed in shape geometry as measured by, for example, aperture of closed gestures, corner shape distance and intersecting points deviation; (2) Pen drawn gestures and finger drawn gestures were similar in several measures including articulation time, indicative angle difference, axial symmetry and proportional shape distance; (3) There were interaction effects between gesture implement (finger vs.

Spotted: A spatiotemporal visualization approach for the analysis of gameplay data

Probably some good references here. We are doing similar work. 

A spatiotemporal visualization approach for the analysis of gameplay data

Günter Wallner, Simone Kriglstein

Contemporary video games are highly complex systems with many interacting variables. To make sure that a game provides a satisfying experience, a meaningful analysis of gameplay data is crucial, particularly because the quality of a game directly relates to the experience a user gains from playing it. Automatic instrumentation techniques are increasingly used to record data during playtests. However, the evaluation of the data requires strong analytical skills and experience. The visualization of such gameplay data is essentially an information visualization problem, where a large number of variables have to be displayed in a comprehensible way in order to be able to make global judgments.

Spotted: how should we add social annotations in web search?

Seems to depend on how we parse visual structure

Social annotations in web search

Aditi Muralidharan, Zoltan Gyongyi, Ed Chi

We ask how to best present social annotations on search results, and attempt to find an answer through mixed-method eye-tracking and interview experiments. Current practice is anchored on the assumption that faces and names draw attention; the same presentation format is used independently of the social connection strength and the search query topic. The key findings of our experiments indicate room for improvement. First, only certain social contacts are useful sources of information, depending on the search topic. Second, faces lose their well-documented power to draw attention when rendered small as part of a social search result annotation. Third, and perhaps most surprisingly, social annotations go largely unnoticed by users in general due to selective, structured visual parsing behaviors specific to search result pages.

Spotted: numerically modeling color naming and space

Interesting cognitive angle on color salience

Color naming models for color selection, image editing and palette design

Jeffrey Heer, Maureen Stone

Our ability to reliably name colors provides a link between visual perception and symbolic cognition. In this paper, we investigate how a statistical model of color naming can enable user interfaces to meaningfully mimic this link and support novel interactions. We present a method for constructing a probabilistic model of color naming from a large, unconstrained set of human color name judgments. We describe how the model can be used to map between colors and names and define metrics for color saliency (how reliably a color is named) and color name distance (the similarity between colors based on naming patterns).
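
As a rough sketch of how such metrics can be computed from a naming distribution (an assumption about the general approach, not Heer and Stone's published model): treat saliency as the negative entropy of a color's name distribution, and name distance as dissimilarity between two colors' naming distributions. The distributions below are made up for illustration.

```python
# Toy color-naming metrics from naming distributions (illustrative sketch).
import math

def saliency(name_probs):
    """Negative entropy of the naming distribution: higher = more reliably named."""
    return sum(p * math.log(p) for p in name_probs.values() if p > 0)

def name_distance(p, q):
    """1 - cosine similarity between two colors' naming distributions."""
    names = set(p) | set(q)
    dot = sum(p.get(n, 0.0) * q.get(n, 0.0) for n in names)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return 1.0 - dot / norm if norm else 1.0

red = {"red": 0.9, "pink": 0.1}                          # named very consistently
teal = {"teal": 0.4, "cyan": 0.35, "turquoise": 0.25}    # named less consistently
print(saliency(red), saliency(teal), name_distance(red, teal))
```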

Jobs: Graphics & infrastructure positions at Google in Chapel Hill

Seems they do the graphics in Android and Chrome. Via our former faculty member David McAllister.
Benjamin Watson
Director, Design Graphics Lab | Associate Professor, Computer Science, NC State Univ.
919-513-0325 | designgraphics.ncsu.edu | @dgllab


---------- Forwarded message ----------
From: David McAllister <davidm@cmonline.com>
Date: Sat, Jun 9, 2012 at 9:56 AM
Subject: Fwd: [CS-Alumni] Jobs: Graphics & infrastructure positions at Google


Begin forwarded message:

Date: June 8, 2012 9:55:34 AM EDT
Cc: Tom Hudson <tomhudson@google.com>
Subject: [CS-Alumni] Jobs: Graphics & infrastructure positions at Google

Since we're all posting positions: Google would really, really like to hire several more people to do low-level graphics work in our Chapel Hill office, or to help with build/performance/tools infrastructure to support the team here. Windows, Linux, Mac, Android, OpenGL, DirectX all welcome. We're still looking for a good computational geometer and an assembly-language (SSE / Neon) hacker.

We may not be the "minutes from the beach" that Z can advertise, but we're minutes from your alma mater and the Southern Part of Heaven. Most of the pixels drawn by Chrome and Android are drawn by our code, which means your RGBs show up in front of a plurality of web & smartphone users in the world.

Tom

Sunday, June 10, 2012

Find: Film will no longer be on film -- distribution of film to cease by 2013 in the US


Celluloid no more: distribution of film to cease by 2013 in the US

A recent report from IHS Screen Digest, a company that analyzes trends in digital media, says that movie studios will cease producing 35 mm film prints for major markets by the end of 2013 (the US, France, the UK, Japan, and Australia are considered "major markets"). IHS predicts studios will stop producing film for the rest of the world by 2015.


The death of traditional film—outside of arthouse films and the occasional film student project—has been a long time coming. Film reels are more expensive than digital storage, degrade faster, and are physically much heavier to ship and carry around. Ars noted in 2006 that Canon and Nikon were taking losses on film cameras. We reported a few months later that some filmmakers felt that digital film produced better movies, as it allowed them to keep the camera running while actors performed, rather than spending money on long rehearsals, only shooting when necessary.


According to the IHS study, another factor is pushing studios to make the change from film to digital: the price of silver has shot up, from about $5 an ounce to about $28. Silver crystals coat traditional film and help create the filmed image after exposure.

Spotted: Crowdsourcing the study of mood with mech turk and twitter


In the Mood for Social Media

Feelings affect human behavior, direct our actions, and influence our perceptions of ourselves, others, and the world around us. New research extracts emotional states from large-scale expressions of mood shared through social media.

Find: Huzzah! New Open-Access Computer Graphics Journal


New Open-Access Computer Graphics Journal

After talking about open access so much, it feels great to finally be doing something about it! Eric and I are both on the founding editorial board for the Journal of Computer Graphics Techniques (JCGT), a new peer-reviewed computer graphics journal which is open access, has no author fees, and is practice-focused (in the spirit of the Journal of Graphics Tools).


Similar to other journal “declarations of independence”, the JCGT was founded by the resigning editorial board of the Journal of Graphics Tools (JGT). Initially created as a “spiritual successor” to the Graphics Gems series of books, JGT had a unique focus on practical graphics techniques and insights, which is being continued at JCGT.


With Morgan McGuire (the editor-in-chief) and the rest of the editorial board, we plan to leverage this illustrious history, our expertise, and the advantages of online self-publishing and open access to create a truly exceptional journal. But we need your help. Like any journal, JCGT can only be as good as the papers submitted to it.


If you are a researcher, we hope the pedigree of the editorial board, the increase in impact afforded by open access, the lack of author fees, and the streamlined process will make JCGT compelling to you as a publication venue.


We also want to reach out to industry practitioners (especially game developers) who would not typically consider publication in a peer-reviewed journal. Similarly to its predecessor, JCGT emphasizes practicality and reproducibility over novelty and theory, and the papers differ from the typical “research paper” style (lengthy “related work”, “conclusion” or “future work” sections are not required). There are also many advantages to publishing in JCGT vs. more traditional industry channels such as trade conferences, magazines, books, and blog posts. The open access format guarantees wide and immediate distribution, and the peer-review process will provide valuable insights and comments on your work from some of the top experts in the field as well as assure potential readers of the high quality of the work.


Open access to research: an idea whose time has come. Be a part of it!

Wednesday, June 06, 2012

News: Students in summer mobiles course excel at CityCamp Raleigh and win $5000 prize

Students learn more when they apply their new skills with real clients. To find clients for students in this summer's interdisciplinary Mobile App Design course, we sent them to CityCamp Raleigh. In the end, they far exceeded our expectations. Perhaps even their own!

(Image courtesy CityCamp)

CityCamp events use new technologies to engage citizens and make their governments more transparent. In the week or so leading up to the Raleigh event, our students brainstormed ideas, identified particularly promising ones, and developed them into pitches. The four projects we brought to CityCamp were:

On Friday June 1, we listened while government, business and technical leadership spoke about the impact technology could have on governments and communities. On Saturday, over 30 attendees lined up to pitch their ideas for one minute each. Organizers selected 25 of these for further development, including all of our students'. Through Saturday and Sunday, attendees lengthened and refined their pitches, often developing prototype apps. At 3pm on Sunday, ten teams, including all of our student teams, pitched their ideas for five minutes. The organizers met to choose the five top projects, and the team that would win a $5,000 prize. We did amazingly well:

  • 1st place and $5000 to the R Greenway team
  • 2nd place to Raleigh Retold
  • 4th place to CitySeek Raleigh

A big congratulations to all of our students! You found collaborators, brought a lot of creative energy to CityCamp, and will make a big difference to Raleigh and its citizens.

To stay up to date with our course projects, visit our course's project site. You can learn more about CityCamp and how our students did there in these press reports:

Viz: Amanda Cox and countrymen chart the Facebook I.P.O.

Nice peek into the nyt process. 

Amanda Cox and countrymen chart the Facebook I.P.O.

On Thursday Facebook had the third-largest I.P.O. ever. In the week leading up to it, my colleague Amanda Cox spent some time thinking about how to best explain and contextualize this offering to readers. What follows is a series of sketches from Amanda, who shared her project folder with me for this post, and Matt Ericson, who edited the piece.


The universe of initial public offerings is seemingly simple: about 2,400 tech companies since 1980, compiled by Jay Ritter, a professor of finance at the University of Florida.


As a first step, Amanda charted the companies by I.P.O. date (x-axis) and value at I.P.O. (y-axis), colored them by their 3-year return. (The key’s not included in her sketch, but for these purposes, know that red is bad and green is good.)


Step1
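
For readers who want to play along, here is a rough matplotlib sketch of that first chart with entirely hypothetical data (the Times' own work appears to have been done in R, judging by the "cex" remark further down):

```python
# Hypothetical re-creation of the first sketch: IPO date (x) vs. value at IPO (y),
# colored by 3-year return. Numbers are invented for illustration.
import matplotlib.pyplot as plt

ipos = [(1986, 1778, 2.3), (1999, 840, -0.9), (2000, 530, -0.7),
        (2004, 23000, 1.6), (2012, 104000, None)]  # Facebook's return unknown at IPO

years   = [y for y, v, r in ipos]
values  = [v for y, v, r in ipos]
returns = [r if r is not None else 0.0 for y, v, r in ipos]  # neutral color for unknown

plt.scatter(years, values, c=returns, cmap="RdYlGn")  # red = bad, green = good
plt.yscale("log")
plt.xlabel("I.P.O. date")
plt.ylabel("Value at I.P.O. ($M, log scale)")
plt.colorbar(label="3-year return")
plt.show()
```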


This chart’s not bad (even if, like me, you have low standards), but it doesn’t say much other than that there was a dot-com boom, that most of those companies didn’t do so well, and that Facebook is worth a ton of money.


Next, a plot of 3-year return by I.P.O. date:


Step-2




Trying to add more nuance to this picture, she shaded the companies by their price-to-sales ratio at I.P.O. and included Facebook in a random position just for size:


Step4


But rather than bringing clarity, it just sort of looked chaotic, even to the seasoned chart freaks of 620 8th Avenue. So she tried another form: a histogram of 3-year returns, colored by I.P.O. date:


Step4.5


Or the same chart but piled into three time periods (not that anyone asked me, but I really like this one):


Step5


By the way, even the queen bee of statistical charting screws up that chart the first time (be conservative with your “cex” values, folks):


Step6


Another idea, vaguely reminiscent of the balloons from “Up,” is sales vs. market cap at I.P.O. colored by year. I won’t lie, I don’t get this one:


Spotted: visualization for the analysis of gameplay data


A spatiotemporal visualization approach for the analysis of gameplay data

Günter Wallner, Simone Kriglstein

Contemporary video games are highly complex systems with many interacting variables. To make sure that a game provides a satisfying experience, a meaningful analysis of gameplay data is crucial, particularly because the quality of a game directly relates to the experience a user gains from playing it. Automatic instrumentation techniques are increasingly used to record data during playtests. However, the evaluation of the data requires strong analytical skills and experience. The visualization of such gameplay data is essentially an information visualization problem, where a large number of variables have to be displayed in a comprehensible way in order to be able to make global judgments.

Spotted: Rethinking statistical analysis methods for CHI


Rethinking statistical analysis methods for CHI

Maurits Kaptein, Judy Robertson

CHI researchers typically use a significance testing approach to statistical analysis when testing hypotheses during usability evaluations. However, the appropriateness of this approach is under increasing criticism, with statisticians, economists, and psychologists arguing against the use of routine interpretation of results using "canned" p values. Three problems with current practice - the fallacy of the transposed conditional, a neglect of power, and the reluctance to interpret the size of effects - can lead us to build weak theories based on vaguely specified hypothesis, resulting in empirical studies which produce results that are of limited practical or scientific use.
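
One concrete alternative the critique points toward is reporting and interpreting effect sizes rather than bare p values. A minimal, illustrative sketch of one such measure (not code from the paper, and with hypothetical task-time data):

```python
# Cohen's d: standardized mean difference between two independent samples.
import math

def cohens_d(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    pooled = math.sqrt(((len(a) - 1) * va + (len(b) - 1) * vb) / (len(a) + len(b) - 2))
    return (ma - mb) / pooled

control   = [22.1, 24.5, 23.3, 25.0, 21.8]   # hypothetical task times (s)
treatment = [20.4, 21.9, 22.5, 19.8, 21.0]
print(cohens_d(control, treatment))  # ~1.8: a large effect, whatever its p value
```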

Spotted: A new technique for comparing averages in visualizations


Comparing averages in time series data

Michael Correll, Danielle Albers, Steven Franconeri, Michael Gleicher

Visualizations often seek to aid viewers in assessing the big picture in the data, that is, to make judgments about aggregate properties of the data. In this paper, we present an empirical study of a representative aggregate judgment task: finding regions of maximum average in a series. We show how a theory of perceptual averaging suggests a visual design other than the typically-used line graph. We describe an experiment that assesses participants' ability to estimate averages and make judgments based on these averages. The experiment confirms that this color encoding significantly outperforms the standard practice.

Tuesday, June 05, 2012

Spotted: a mobile based symptom monitoring system for breast cancer patients in rural Bangladesh


Findings of e-ESAS: a mobile based symptom monitoring system for breast cancer patients in rural Bangladesh

Md Haque, Ferdaus Kawsar, Mohammad Adibuzzaman, Sheikh Ahamed, Richard Love, Rumana Dowla, David Roe, Syed Hossain, Reza Selim

Breast cancer (BC) patients need traditional treatment as well as long term monitoring through an adaptive feedback-oriented treatment mechanism. Here, we present the findings of our 31-week long field study and deployment of e-ESAS - the first mobile-based remote symptom monitoring system (RSMS) developed for rural BC patients where patients are the prime users rather than just the source of data collection at some point of time. We have also shown how 'motivation' and 'automation' have been integrated in e-ESAS and creating a unique motivation-persuasion-motivation cycle where the motivated patients become proactive change agents by persuading others.

Spotted: improving ten-finger touchscreen typing through automatic adaptation


Personalized input: improving ten-finger touchscreen typing through automatic adaptation

Leah Findlater, Jacob Wobbrock

Although typing on touchscreens is slower than typing on physical keyboards, touchscreens offer a critical potential advantage: they are software-based, and, as such, the keyboard layout and classification models used to interpret key presses can dynamically adapt to suit each user's typing pattern. To explore this potential, we introduce and evaluate two novel personalized keyboard interfaces, both of which adapt their underlying key-press classification models. The first keyboard also visually adapts the location of keys while the second one always maintains a visually stable rectangular layout. A three-session user evaluation showed that the keyboard with the stable rectangular layout significantly improved typing speed compared to a control condition with no personalization.
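
The paper's classification models are not described in the abstract; as a toy illustration of the adaptation idea only, each key can keep a running estimate of where a particular user actually touches it, and presses can then be classified against those adapted centers rather than the visual key boundaries.

```python
# Per-user adaptive key-press classification (illustrative sketch, not the paper's model).

class AdaptiveKeyboard:
    def __init__(self, key_centers, rate=0.1):
        self.centers = dict(key_centers)  # key -> (x, y), starts at the visual centers
        self.rate = rate                  # how quickly centers drift toward the user's touches

    def classify(self, touch):
        """Assign a touch to the key with the nearest adapted center."""
        return min(self.centers, key=lambda k: (self.centers[k][0] - touch[0]) ** 2
                                             + (self.centers[k][1] - touch[1]) ** 2)

    def adapt(self, key, touch):
        """Nudge the key's center toward where this user actually touched it."""
        cx, cy = self.centers[key]
        self.centers[key] = (cx + self.rate * (touch[0] - cx),
                             cy + self.rate * (touch[1] - cy))

kb = AdaptiveKeyboard({"f": (40, 60), "g": (60, 60)})
key = kb.classify((47, 63))   # resolves to 'f' with the default centers
kb.adapt(key, (47, 63))       # 'f' drifts toward this user's touch pattern
```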

Spotted: The meaning of digital things


Lost in translation: understanding the possession of digital things in the cloud

William Odom, Abi Sellen, Richard Harper, Eno Thereska

People are amassing larger and more diverse collections of digital things. The emergence of Cloud computing has enabled people to move their personal files to online places, and create new digital things through online services. However, little is known about how this shift might shape people's orientations toward their digital things. To investigate, we conducted in depth interviews with 13 people comparing and contrasting how they think about their possessions, moving from physical ones, to locally kept digital materials, to the online world. Findings are interpreted to detail design and research opportunities in this emerging space.

Spotted: Netnography -- The spread of emotion via facebook


The spread of emotion via facebook

Adam D.I. Kramer

In this paper we study large-scale emotional contagion through an examination of Facebook status updates. After a user makes a status update with emotional content, their friends are significantly more likely to make a valence-consistent post. This effect is significant even three days later, and even after controlling for prior emotion expressions by both users and their friends. This indicates not only that emotional contagion is possible via text-only communication and that emotions flow through social networks, but also that emotion spreads via indirect communications media.

Monday, June 04, 2012

Spotted: CheekTouch -- can we deliver telecaresses?


How do couples use CheekTouch over phone calls?

Young-Woo Park, Seok-Hyung Bae, Tek-Jin Nam

In this paper we introduce CheekTouch, an affective audio-tactile communication technique that transmits multi-finger touch gestures applied on a sender's mobile phone to a receiver's cheek in real time during a call. We made a pair of CheekTouch prototypes each with a multi-touch screen and vibrotactile display to enable bidirectional touch delivery. We observed four romantic couples in their twenties using our prototype system in a lab setting over five consecutive days, and analyzed how CheekTouch affected their non-verbal and emotional communication.

Sunday, June 03, 2012

Find: They'll be watching -- MIT's computer algorithms can tell why you're smiling better than a human can

Not sure I want to be watched. 

MIT's computer algorithms can tell why you're smiling better than a human can

MIT smile research


While some recent studies have found that investment in facial recognition technologies hasn't yet yielded great success, researchers at the Massachusetts Institute of Technology continue to move the ball forward with a system that can outperform humans at recognizing the emotion behind a person's smile. The group's study involved the difference between a happy smile and one generated out of frustration. To begin, the researchers asked subjects to act out two different emotional reactions: joy and frustration, both tracked by webcam video. MIT's algorithms analyzed the footage to determine the character of the reactions, as did a group of regular humans; both could determine the expressed emotion with similar accuracy. The researchers then...

Find: shades of things to come -- graphics suffers on ipad3's super hd display

If this is bad, what about quad hd or quad^2 hd? We're going to have trouble scaling rendering very far above hd. 

Is the Retina display holding back iPad graphics?

nova 3


The new iPad's Retina display is certainly a sight to behold, but does it come at a cost? That's the impression you'd get from viewing screenshots of Gameloft's new shooter N.O.V.A. 3 over at NeoGAF — the results suggest that the developers have seriously dialed back the effects on Apple's latest tablet in order to push four times as many pixels. The A5X system-on-chip inside the new iPad is more than capable on paper, but what happens when its quad-core GPU is tasked with running a modern first-person shooter at 2048 x 1536? We took a look at N.O.V.A. 3 and some more of the iPad's most taxing games to see how this year's model stacks up to its predecessor.

Find: 1080p smartphone from LG is 440 ppi, while the iPhone is 326 ppi

We can see the difference: consider 600 dpi printers. 

LG's 5-inch 1080p smartphone display goes far beyond Retina pixel density

lg 1080p smartphone display


Apparently not content with the 1024 x 768 resolution on its 5-inch Optimus Vu, LG is working on a 5-inch 1080p smartphone display for the second half of 2012. The 1920 x 1080 touchscreen offers a wonderfully excessive pixel density of 440 ppi, which is 72 percent more pixels per inch than the Optimus Vu, and a third more than the display on the iPhone 4S. Unlike the display on LG’s current 5-inch phone, the new screen will have the same 16:9 widescreen aspect ratio as the HDTV in your living room. The display technology is the same IPS-based LCD that LG has offered in recent devices like the Nitro HD and aforementioned Optimus Vu, but with the added benefit of many, many more pixels. If you're eager to get an early look at the...
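
The pixel-density arithmetic in the post checks out; a quick, illustrative calculation:

```python
# Pixel density (ppi) = diagonal resolution in pixels / diagonal size in inches.
import math

def ppi(width_px, height_px, diagonal_in):
    return math.hypot(width_px, height_px) / diagonal_in

lg_5in_1080p = ppi(1920, 1080, 5.0)   # ~441 ppi
optimus_vu   = ppi(1024, 768, 5.0)    #  256 ppi -> ~72% fewer than the new panel
iphone_4s    = ppi(960, 640, 3.5)     # ~330 ppi -> about a third fewer
print(lg_5in_1080p, lg_5in_1080p / optimus_vu, lg_5in_1080p / iphone_4s)
```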

Find: whoa, 16 x hd (4x4) already in the pipe

Pixels start going exponential. 

The future of TV as seen in Super Hi-Vision

nhk super hi-vision hero


Nippon Housou Kyoukai, better known as NHK, is much more than Japan's public service broadcaster — it's a national institution that seeks to push forward major advances in televisual technology worldwide. This month the NHK Science & Technology Research Laboratories is exhibiting prototype examples of the innovations it expects to see in our living rooms in the 2020s, 30s, and beyond, ranging from impossible leaps in screen size and resolution to breathtaking breakthroughs in 3D imagery. It's a head-spinning display of the presently unattainable that leaves us decidedly unimpressed with our own 1080p 3D sets. Read on to find out exactly how you'll be watching TV in the future.