KINETIC CANVAS
VIDEO PROTOTYPE

By Emily Charniga

The Product

Kinetic Canvas is a gestural painting application that lets users paint and navigate the interface with hand and upper-body movements. Through a gesture-controlled menu, users can select brushes, brush weights, and colors, and export their creations. Users wear motion-sensing gloves to track fine-motor movements, while a sensor mounted on top of the monitor captures larger movements.

INITIAL SKETCHES

Prior to creating the video prototype, simple sketches were developed to visualize the physical and digital elements of the design. Special consideration was taken to ensure that menus were simplified and did not require excessive precision for selection: without the tactile feedback of a keyboard or buttons, users cannot achieve the same level of precision when navigating and selecting menu items. Menu options are arranged as a one-column stack of large CTAs, and functions such as “save” and “discard” include confirmation dialog boxes to help users recover from mistakes. The sketches of the physical hardware aimed to give a rough idea of what the monitor and gloves could look like.

DIGITAL PROTOTYPE

In addition to the video prototype, a simple digital interface prototype was created in Figma (link below in “Additional Links”) to expand on the interface and communicate its look and feel. It was aimed at displaying more details of the interface flows, independent of gestural input.

VIDEO PROTOTYPE

This video prototype includes the core gestural functions in the interface. These include the following:

  • Walkthrough of low-fidelity sketches of monitor and gloves

  • Large gesture movements to paint strokes, zoom, blend, and erase

  • Finer finger-only movements to paint, undo, redo, and navigate through the menu

  • Walkthrough of digital prototype

Not included in this prototype is the companion interface that would allow users to create accounts, determine save locations, and set up their devices.

Reflections

Process & Rationale for Prototype

This prototype was created in Adobe Express animation software. Screen recordings of paint strokes, erasing, blending, and other actions were captured on an iPad and imported into Adobe Express, along with separate recordings of different hand gestures and movements. The footage was placed, edited, and timed to create the illusion of cause and effect, in which the user’s gestures result in visual changes. Notations and sketches were added for context, along with verbal explanations of user actions.

A video prototype was selected to show the larger context of this interface. The dynamic and interactive nature of this application is such that a video prototype was necessary to show how the system responds to the user’s movements in real time, as would be reflected in the final product. Video prototypes excel at representing context and a fuller picture of design interactions, especially in complex designs with many components (Roesler et al., 2017). These variable gesture-controlled interactions were far too complex for a working code prototype. A Wizard-of-Oz prototype was considered for its ability to show appropriate responses to user interactions through a hidden operator, but it was ruled out because an additional designer would have been required to coordinate visual feedback and assume the role of the wizard or user (Franz & Reilly, 2017). Even with dedicated rehearsal, manually mapping the user’s movements onto a digital display would have been less effective at creating the illusion of the gestural interface. Lower-fidelity options like paper prototypes and user flows were ruled out for being too static and failing to show real-time visual feedback. Storyboards were considered because they can communicate context and physical space, but they were ruled out as less effective than video at showing gestural movement.

Low-fidelity sketches were used to show the sensory gloves and monitor display, which were not detailed in the video prototype. This prototype was primarily to represent a system that responds to sensory input (gestures) with real-time visual feedback.

Reflections on Prototyping

Reflection 1:
Prototype for the uncontrolled environment & user

Like Group Prototype #1, this project introduced the idea of integrating the larger situation into the prototype design. In that project, we had to think of ways our digital product would interact with a busy restaurant environment, considering the social, physical, and subjective context of different use cases. In the same way, this project challenged me to think of ways in which environmental and user-specific factors could affect the system. How could I be sure that users can recover from errors? How would differing physical abilities shape different users’ experiences? Would small spaces limit users’ motions and the system itself? While I was unable to explore design solutions for all of these questions, I could imagine future iterations of the prototype that include accessibility settings, an audio component, and calibrated motion settings. All to say, I learned that prototypes that disregard the variability of users and situations represent systems designed for stereotypes and for users who do not exist (Automattic Design Inclusive Design Checklist, 2019).


Reflection 2:
Communicating context is a critical and challenging part of prototyping

It was clear upon reading the project prompt that the physical context of the application was as important as the interface itself. The user interacts primarily through movement, so the challenge was to prototype a system that conveyed that context while also conveying how the interface responds to the user and environment. In Video Prototyping for Interaction Design Across Multiple Displays in the Commercial Flight Deck, the authors found static or partial representations insufficient because they “decontextualized” design components from one another (Roesler et al., 2017). I believe a digital prototype or wireframes alone would have fallen short in the same way, neglecting to show how the interface interacts with the gloves, monitor, space, and physical movement.


Reflection 3:
A longer brainstorming phase and iterative approach could have enhanced creativity

Because I have extensive experience with creative applications like Photoshop, Illustrator, and Procreate, I had strong ideas about how the interface should look: a variety of tools, brushes, and colors. I did not take time to examine what implications the gestural component added to the interface. I reflected on Steve Dow’s How Prototyping Practices Affect Design Results, which asserts that “effective design practice is not a straight march to a particular solution, but a process of trying out alternatives and tolerating shifts in direction” (Dow, 2011). Investing less in one particular direction could have yielded more creative results, and this realization encourages me to rethink how creative interfaces are designed.

References:

  1. Automattic Design Inclusive Design Checklist. (2019, May 20). Automattic Design. https://automattic.design/inclusive/ 

  2. Dow, S. (2011). How prototyping practices affect design results. Interactions, 18(3), 54–59. https://doi.org/10.1145/1962438.1962451 

  3. Franz, J., & Reilly, D. (2017). TangiWoZ. CHI EA ’17: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. https://doi.org/10.1145/3027063.3053254

  4. Roesler, A., Holder, B., Ostrowski, D., Landes, N., Minarsch, S., Ulgen, D., Murphy, E., & Park, H. (2017). Video Prototyping for Interaction Design Across Multiple Displays in the Commercial Flight Deck. DIS 2017, 271–283. https://doi.org/10.1145/3064663.3064800
