Microsoft announced its new Surface Studio this week. It's an all-in-one computer, but with a modular screen that can be repositioned to act as a drawing tablet. The idea of a computer screen that works as a drawing tablet is nothing new; Wacom and other companies have been producing devices like this for years. What's new is that Microsoft has made the screen an integral part of the computing device, rather than a peripheral that can be added later if needed.
The mouse and keyboard have been the dominant form of interaction for desktop computing for decades, and they have worked well as a one-size-fits-all interaction method for many use cases. But 'pen and tablet' interaction reveals a particular weakness in mouse and keyboard controls (and touch controls as well) that's worth examining. Art professionals have long known that translating their hand movements directly to the screen is far more efficient than using a mouse as an intermediary. By drawing on a screen, they can draw the way they've already learned through practice with physical media, leveraging pen grip, pressure, angle, and all the other factors that shape their style.
Mice are designed for precision input, allowing a user to hit a small target on a screen with good accuracy. This is a direct response to Fitts's Law, which models how quickly humans can acquire a target of a given size and distance. But the mouse was never designed to produce precise freehand movements of the cursor. How the cursor gets to the target is incidental; what matters is how easily and quickly the user can hit the target.
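To make that concrete, Fitts's Law is usually written in the Shannon formulation, MT = a + b · log2(D/W + 1), where D is the distance to the target and W is its width. A minimal sketch, with the caveat that the constants a and b are device-dependent and the default values here are purely illustrative, not measured:

```python
import math

def fitts_mt(distance, width, a=0.2, b=0.1):
    """Predicted movement time (seconds) to acquire a target,
    using the Shannon formulation of Fitts's Law:
        MT = a + b * log2(D / W + 1)
    The constants a and b would normally be fit to a particular
    pointing device; the defaults here are illustrative only.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A tiny, distant target (e.g. a toolbar icon across the screen)
# takes longer to hit than a large, nearby one.
slow = fitts_mt(distance=800, width=10)
fast = fitts_mt(distance=100, width=100)
```

Note that nothing in the formula rewards a smooth or artful path to the target; the law only predicts acquisition time, which is exactly why a mouse tuned for it says nothing about drawing quality.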
This makes the mouse untenable for many creative tasks. Programs like Photoshop and Illustrator have trained users in interface metaphors and techniques that let them complete these creative tasks while disconnecting them from their physical counterparts. For example, the Pen Tool allows precise drawing that approximates some freehand illustration, but its functionality is based entirely on how the mouse and keyboard work as a unit, rather than on how an artist would create the same drawing.
This reliance on mouse and keyboard was initially a technology issue. But at this point, we have better, more effective interface technology, so why not update our interactions?
There are lots of valid answers to this question, mainly centering on training users on new interactions and the cost of designing and developing new systems, but I think the Surface Studio is a good first step towards rethinking the interactions and interface metaphors we've taken for granted. The drawing tablet is an embodied interaction for a specific audience, but Microsoft announced another embodied interaction device that I find much more interesting.
The Surface Dial
In addition to the Surface Studio, Microsoft introduced the Surface Dial. The Dial is an interface device that the user presses against the screen and turns to add functionality to certain applications; for now, those functions include mode switching and zooming. It has a rubberized backing to keep it from sliding off the screen.
The Dial isn't supported in every program yet. Microsoft released a list of seven or so apps that will use it, none from major developers like Adobe, though Microsoft has said it is in talks with major companies to build support for the new input device.
The Physical/Digital Divide
Currently, users experience a hard delineation between the digital realm where they create their work and the physical world where they control the tools. The screen is the barrier between the two: users view the digital world they are manipulating while using physical tools on their side of the divide. The user doesn't have direct control of the digital tools, only control of the tools that do (the mouse and the cursor).
With a separate drawing tablet (like the Wacom Intuos) users have better control, but are still separated from the digital realm. With a drawing tablet that doubles as a monitor, users have direct control over the digital realm for drawing tasks, but still are separated from the digital realm during other tasks.
Let's imagine the task of mode switching, which is very common in artistic apps: the user must be able to switch tools from drawing to erasing to selecting and so on. In Photoshop, for example, switching modes or tools means using the pen to select them, taking the user away from the direct control they had before, or even switching to the mouse, moving back to indirect control. The mode switch happens in the digital realm, by controlling a mouse which controls a cursor.
With the Dial, the user places the physical device on the barrier (the monitor), the digital controls appear, directly connected to their physical control (much like the pen), and they can dial up the tool or mode they want. It seems likely that the Dial could switch modes depending on which section of the screen it is touching. This takes us one step closer to direct control over the digital realm.
Microsoft also demonstrated an app where the dial was used to zoom through a 3D drawing. Zooming and view tasks are another area where the dial could make a quick impact. Instead of moving away from the work, and then having to re-situate themselves, users could keep their view on the task at hand, quickly dial in a change or update, and then immediately get back to work.
By creating a physical tool that makes physical contact with the barrier, users can imagine a new type of interaction.
Starting the Discussion
I think the Dial is exactly the kind of device we need in a discussion about where interactions are heading. The conversation has already started, but the Dial is a sharp contrast to other novel interaction devices like the Leap Motion and the Amazon Echo, which attempt to move away from tactile interactions toward gesture and voice controls.
Gesture controls haven't caught on, and there doesn't seem to be a device on the way that will make them work much better. Beyond that, humans simply aren't built to hold their hands up for the extended periods that gesture controls would require. Voice controls are problematic as well. The Echo is fine for the home, where users can speak freely, but in a workplace where many people are sitting, talking, and giving commands, the Echo simply isn't going to work. In my personal testing with the Microsoft HoloLens, I've already confirmed that it will accept commands from anyone within microphone range; it has no sense of who is giving it commands. But with a physical tool, we can implement discrete, accurate controls that break down the barrier between the physical realm and the digital realm.
And the point isn’t that all of our interactive devices should be physical. There is a dark possible future where a user has to keep a dozen different interaction devices handy to control different programs and functions of their computers. That’s not going to be a particularly usable or desirable future. But we’re quickly moving into an era of computing where a one-size-fits-all control scheme is no longer going to be possible. Different users will have different needs which require new thinking on interactions. Getting us closer to directly controlling the digital realm is an important step in this journey.
The next step comes with Augmented Reality, in which users physically step into the digital realm, and the barrier breaks down even further, maybe even completely. But that’s another blog post.
With a purpose-built drawing tablet, we start to peel back the interface metaphors that used to be essential and reconnect computing to the physical tasks we're trying to recreate. It remains to be seen whether the Surface Dial can add to this effect, but there's a lot of potential. I hope other companies start to examine how to bring physical interactions back into computing.
Originally posted at: blog.andyhunsucker.com