The changing nature of (ubiquitous) computing

I seem lately to be having recurring conversations on the same theme: the changing nature of computing and the movement from desktop to mobile/ubiquitous computing (aside: what is ubicomp? You could start with this defining video from 1991). Humorous anecdotes about children interacting with technology often come up in these conversations (or vague recollections of YouTube videos—anecdotes 2.0). Kids futilely trying to pinch-zoom their parents’ magazines, as in the video below; or throwing their parents’ smartphone around without a concept of its cost or—relative to, say, the family computer—the novelty of its interaction. Novel technology for the parents, mundane for the child.

New generations live and breathe—not adopt—new technology, giving them a fundamentally different perspective

I think, like social change, much of technological change comes through new generations that grow up with realities their parents had to adopt—computers, the Internet, social media. Wonderful clichés like, “back in my day, we had to know how to read a map!” betray fundamentally different views of the world that are symptomatic of technological shift. When kids are so used to a technology being there that they can’t conceive of its absence—the 2-year-old pinch-zooming a magazine in vain—that is when a new generation of people, whose underlying worldview is not shaped by old ideas but built on a foundation of new technology, develop solutions that are truly native to that technological landscape.

So, what does this have to do with ubicomp? Ubiquitous computing is a thing—separate from other instantiations of interactive computing—only insofar as it isn’t ubiquitous. Once it underlies, as it increasingly does, so much of how we interact with technology on a day-to-day basis, it becomes less meaningful to say one does work in ubiquitous computing apart from other areas of human–computer interaction (HCI).

For example, my own interest in pervasive health sensing and feedback (i.e. mobile, in-home, or ubiquitous health tech) did not arise from my interest in ubiquitous computing as an area—I had none. It arose, broadly, from my interest in human–computer interaction and a particular application area. It happens that many of the problems and questions I am interested in draw on ubicomp solutions, and are appropriate for a ubicomp audience, but if (for example) my research takes me toward web-based or desktop-based solutions, I will follow it there. I suspect many other people ostensibly in ubicomp today feel similarly. Ten or twenty years from now, when the kids who today are frustratedly pinch-zooming magazines have become researchers and app developers, it won’t occur to them that building interactive systems could be separate from ubicomp, since in their technological landscape the two will be the same.

Ubicomp becoming ubiquitous?

As the computing everyone uses moves off the desktop, more and more questions in human–computer interaction involve ubiquitous computing technology, such as smartphones, even if only as a platform. Does that research then become ubicomp work? Or will the notion of ubicomp become so embedded in the rest of HCI that the distinction is meaningless? Like most things, there is a grey area here, but as ubicomp becomes integral to much of HCI, it may be worth asking whether we need to rethink the boundaries of these concepts. I suspect the coming generation of pinch-zoomers will have difficulty seeing the difference.