As I wrote earlier, I think the success of the iPhone had a lot to do with the ease and speed with which users could understand and use the multi-touch screen. They didn't have to learn any special new way to interact, as was necessary with other technologies. Think about how much effort was required to learn how to use a remote to program the VCR, or a keypad to set a microwave, or even the scroll wheel to navigate content on an iPod.
The touch screen didn't require users to learn how to use the iPhone's input device. We already possess the only device we need to interact with it–our fingers. There was no need to push around a device that moved an icon on the screen representing our touch–an abstraction of our intentions three levels deep. With multi-touch, users are no longer required to form an abstract mental link between their hand and the screen. They can just touch it, and make things happen. Users can also interact with the content itself, rather than with an abstraction of the content–like the file/folder structure of a computer. The multi-touch screen allowed us to touch and move the content itself. To tap a movie, and just play it.
As evidence of how natural this new kind of interaction is, check out these amazing toys for toddlers from Totoya that use the iPhone and iPod:
And I still wonder: what's next for multi-touch? How can we make it easier to use? How can we make it more natural? More real? As I noted in my earlier post, haptic feedback is one way; another could be the system sensing not just the fingertip on the screen, but the shape of the whole hand.
And here's another… what happens when you can interact directly with the screen itself–bending it, twisting it, and applying pressure?
What kinds of new interactions does this evolution of multi-touch enable? What will we interaction designers be able to do next? I can’t wait to find out.