In the 1950s, some futurists predicted food pills instead of meals. It never happened.
The biggest reason futurists fail is that too many predictions are based on the possible, rather than the desirable.
It’s now possible for anyone to take all their nutrition from pills. But people enjoy eating food. That’s why we don’t take pills instead.
If you want to predict the future, you need to deconstruct human nature. You also need to know what will be possible. Where these two things intersect is where accurate predictions can be made.
And that’s why I can already tell you what your iMac will be like in a few years.
Computers will be everywhere, of course — in the car, kitchen, bathroom and elsewhere — continuing the current trend. However, you’ll have a main desktop system that will essentially be a huge iMac set at a drafting table angle.
It won’t sit on a desk. It will replace the desk entirely. The screen will swivel to flat, so it can be used as a desk or for more than one person to use at once (for games and so on), and it will swivel to vertical to be used as a TV, for presentations or for using the computer from the other side of the room.
The mouse will be non-existent. Keyboards will be made of software. (A minority of writers, programmers and older users will continue to use physical keyboards, which will be built to sit on the angled screen.)
Most people will accept software keyboards because they'll barely type. Most words and characters will be placed on screen through Siri-like voice interaction and dictation.
What little typing takes place with a software keyboard will be heavily automated through advanced auto-correct.
The main way you’ll manipulate things on screen will be multi-touch. You’ll do this for moving things around, re-sizing, gaming and other tasks.
If you’re imagining a giant, advanced iPad plus a super-smart Siri, then you’re on the right track.
Here’s the part that doesn’t yet exist on your iPad: You’ll use a lot of in-the-air gestures as well. You’ll wave your hand to go to the next page, document or picture. You’ll make a quick “go away” gesture to dismiss applications.
While sitting or standing at your iDesk, or whatever, you’ll use mostly voice and multi-touch together. But you’ll get in the habit of continuing to use the system as you walk around the room, and for this kind of use, gestures will replace touch. You’ll change the iTV channel by waving your hand or talking. You’ll scroll down pages with hand gestures.
Our iDevices will constantly “read” us. They’ll know when nobody’s in the room. When someone comes in, they’ll know who it is, or they’ll know it’s a “stranger.”
Eventually, our computers will even read our body language to see if we’re happy or frustrated, paying attention to the screen or looking elsewhere — and change what’s happening on screen according to what we’re doing.
The reason this three-part interface of the future will happen, and is in fact already in the process of happening, is that this is the interaction that human beings are hard-wired to want.
If you look at the whole history of computer interfaces, you’ll see a clear trend. As computing power gets cheaper, it’s increasingly applied to making the computer work harder to communicate in human terms.
When we interact with other people and with the world, we use voice, touch and gestures. Huge parts of our brains are dedicated, in fact, to interacting with the world in this way.
Human language isn’t just about words. When we talk, we make facial expressions and use hand gestures, and all this is combined in the mind of the listener to receive the full measure of our communicated meaning. Computers will be taught to also understand all this.
Computer interfaces are not going in a random direction, nor are they going in the direction of what’s logical or possible. UIs are moving in the direction of interacting with us as another human would.
Understanding this fact is the secret to predicting Apple’s continued rise to dominance of the computing industry. Apple is the only company I’m aware of that understands the primacy of human nature, biology and psychology in interface design.
The number-one reason iPad has succeeded in the market and Android tablets have not is that Apple managed to eliminate the delay in motion after you swipe your finger across the screen. The iPad satisfies our need to interact with physical objects, while most Android tablets are unconvincing. The iPad thrills for reasons that don’t register consciously, while Android tablets annoy.
The same goes for Siri when compared to competitors. Siri’s playful, natural-language banter and ability to understand is just human enough to satisfy our need to interact as a human would.
Android users are confused by the success of Siri, or blame it on Apple marketing baloney or lemming-like Apple fanboy obsession. The reason they don’t get it is that they’re focusing on what the technology can do (take action on voice commands) rather than what’s satisfying to the user (understand random comments and respond with sometimes playful, human-like banter).
Using iOS is a little like interacting with objects in the physical world. Using Siri is a little like interacting with a person. The Apple devices of the future will be a lot like real life and real people. And that’s why we’ll love them.
Here come the gestures
As with all major improvements in interface design, gestures are being developed by many companies.
Microsoft has been selling its Kinect for Xbox 360 product for a year. It’s a runaway hit, and has spawned enormous customization and re-use by scientists, hackers and hobbyists. Microsoft announced this week a Kinect for PC product, due out next year for Windows 8, which itself has a multi-touch user interface.
The Korean company Pantech is already advertising gestures for its soon-to-be-released Vega LTE smartphone.
Dozens of other companies are shipping or working on similar technology, including Apple.
And researchers in universities all around the world have been developing gesture technologies for many years.
We can safely assume that all phones, tablets, laptops and desktops will get gesture control — especially Apple gadgets.
Apple has applied for and acquired several patents related to gesture technology. What’s interesting about Apple’s gesture patents is that they’re almost entirely for content creation, rather than content consumption or data manipulation.
For example, one group of patents is for creating CAD-like or gaming applications, where you do a multi-touch gesture on screen, but lift your fingers off the screen to render that object in 3D. Another patent uses in-the-air gestures for video editing.
Microsoft and Pantech use cameras for registering user gestures. But it’s likely that a range of sensors will be brought into play that register fine movements, as well as accurate distance.
With the success of Microsoft’s Kinect, it’s tempting to believe gesture control is a Microsoft thing or a gaming thing.
But in a few years, I believe people will see it as an Apple thing. Why? Because as interface design delves deeply into satisfying the hardwired human desire to interact with physical objects and human beings, Apple will dominate this space because that’s what Apple is really good at.
Nobody can touch Apple in the ability to create devices that satisfy our innate human desires about how to interact with the world. And for that reason, the future belongs to Apple.
Very soon we’ll say hello (literally) to our voice-command, multi-touch and gesture-based iMacs — and say good-bye to peripheral input devices.
And we can also say good-bye to the dominance of Microsoft Windows.
Picture courtesy of DreamWorks Pictures and 20th Century Fox.