Will it still be a candy bar with a touch screen, or could it look entirely different? And perhaps, at the same time, strikingly familiar?
Smart-phones are the norm today: devices as flat as possible, with maximized touchscreens that bring content to us in the brightest possible colors. Increasing computing power has shifted the focus of our smart-phones from making calls to advanced messaging, web browsing and dedicated applications. I suspect many current users spend more time with their eyes on the screen than with their ear to the speaker.
I like my smart-phone and have been using one from the moment I could afford one. But I’m wondering what’s next. What will the device I use in five years’ time look like? What would I want it to look like, and how would I like to use it?
One of the core tasks of my future device will be to facilitate my communication with other people. Direct communication with another person will sometimes happen synchronously, as in a phone call or a video conversation. More often it will be delayed: I can send a message at a time convenient for me, and the receiver can read it at any time, as with messaging or email.
Apart from direct communication with a single person, my future device will provide a platform for group communication. An example of this is the principle used by most social media, where one person makes a post that groups of other people can see. Like delayed direct channels such as email and text messaging, group communication is not synchronized: I can read the information when it suits me, not when the author publishes it.
I think sharing will become an increasingly important part of communication between people. Instead of explaining what you are looking at, you can post a picture for all your friends to see on Facebook or Twitter, or send that picture to one person in particular. My future device will let me share what I see in real time. I could show you what I see right now, enabling you to help me evaluate a situation, assess a design decision or simply enjoy the view with me. And unlike with most current devices, all this visual sharing would combine seamlessly with voice calls. No more taking the phone away from my ear to study the picture on the screen, navigating through menus and trying not to accidentally terminate the call instead of zooming out.
My future smart-phone will not only be about communication. Providing and storing information will be an increasingly important role for any personal device I carry all day. It is not hard to see where this is going: my future smart-phone will give me access to all my personal documents and media, such as photos and music. All this data will be synchronized with an online storage solution that I can access from any place or device I log on to.
My phone will also store information on where I am and what I do, so I can trace back the name of that lovely restaurant where we had dinner two months ago. Pressing privacy issues will lead to proper encryption of all this data and finer personal control over whom you share it with.
Providing information goes far beyond that. My future device will be able to project a whole new layer of information onto my physical world. It will know where I am and detect what I am looking at, and it will serve me the history of a building if I want it to. It could even read the text aloud, giving me a personal audio guide to… well, everything. Directions, shops, restaurants, the dish on the restaurant’s menu, the listed ingredients: information about everything I see could be accessed immediately. If I point my eye at a bird in the sky, it could tell me the species, current elevation and flight patterns, combined with a list of predators sighted in the area. And the information would not stop at objects, animals or plants.
Imagine this future smart-phone applying clever face-recognition techniques to the people around me. If I wanted it to, it would automatically and instantly find the individuals I see walking down the street on Facebook, showing me their name, religion and anything else I might want to know and they are willing to share. The public part of your profile on any website would become very public indeed. But of course, it already is. Anyone can google your name or find you on social media sites; what will change is the information you need to find a person. Today you need a name, and usually also the area where someone lives (unless you’re looking for an individual with a rather unusual name). But soon all you will need is a face on the subway and a camera in your phone.
We like to follow each other via micro-blogs and other social media, both in text and images. And I think that urge will only increase: somehow human beings find it irresistible to peek at what others do, think and say. In the future, a presence on social networks will probably become more of a stream you can log on to than an occasional photo or comment. My future smart-phone can keep all my friends posted on where I am, what I do and whom I talk to. This means the amount of data available to followers will explode, and information filters that let you choose what to follow from any person you’re interested in will make it possible to follow different friends in different ways and at different intensities. So stop following me on Twitter; log on to the live feed of my life!
If this sums up what my future smart-phone will be capable of in terms of functionality, what could it look like? Let’s assume that we are not going to implant chips in our brains or cameras in our eyes. I will limit the design ideas for this future device to some sort of object I can put away when I go to bed (although keeping it on in bed could open up a whole new branch of entertainment, I suppose). And I would like to be able to buy a new one or get it repaired, so it should be a physical entity detached from my body, not some cyborg add-on.
First of all, it should be portable. And since I have a feeling I’m going to use it a lot more than my current phone, I would like it to be hands-free by default, so I can use both hands while using the device’s functionalities. The device should not hinder my day-to-day tasks or prevent me in any way from using my senses.
Since many functionalities require visual feedback from the device, such as producing information layers on top of what I see, the first thing that comes to mind is a pair of glasses. I don’t usually wear glasses or contacts, but since spectacles are an apparatus that has evolved over a long time and is used by many people, I presume they form a mature platform for layering someone’s visual sense with a minimum of interference. Contemporary Bluetooth hands-free headsets show how audio can easily be integrated where the gadget meets the ear.
I don’t think it will be possible to fit all the necessary electronics and batteries inside the spectacles without creating a heavy and rather coarse device, so I figure some sort of connection is needed from the eyeglasses to a power supply. Maybe in five years’ time my clothing will be able to harvest some of my body heat to power the device.
The lenses will of course be the most important feature of the ‘phone’. They will enable me to enhance my field of vision with information, project an incoming image or text from a stream I follow, and constitute the main interface to the phone’s functionalities. I figure some sort of heads-up display, generated by an integrated screening layer in the lenses, will project a layer of information on top of what I see. A sensor monitoring the physical state of my eye will enable the device to project the information at a virtual distance, so that it blends with my eye’s focus. Such sensors will also detect what I am looking at, enabling the device to provide information about that object or person if I request it.
And how would I request this information? Would I push a button, give a voice command, blink twice or just think the command real loud? I’m not sure thinking will do the trick in five years. With the help of some sensors on my brain (could they be integrated in the device?), “left” or “right” commands are presumably recognizable, but I expect more complicated instructions will be needed as well. That leaves a combination of voice commands, blinking commands and buttons. There are situations imaginable where voice commands would be rather irritating, or revealing to my surroundings, so a secondary means of control will be needed.
Since I stated that I would like the device not to interfere with my normal tasks, it would be best if the controls could (also) be operated by eye movement. I can imagine that whatever I can do with my finger on the touchscreen of a smart-phone or tablet could also be achieved by pointing my eye instead of my finger. Eye control is potentially much more efficient, since my eye moves faster than my finger. A fluent and intuitive interface for control by eyeball is of course expected to be a strong feature of my future phone.
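To make the idea a bit more concrete, here is a minimal sketch of how eye events might be translated into the familiar touch gestures. Everything in it is invented for illustration: the event names, thresholds and commands are assumptions, not a description of any real eye-tracking API.

```python
# Hypothetical sketch: mapping eye-tracker events to touch-style commands.
# All event kinds, thresholds and command names are invented for illustration.

DWELL_SELECT_MS = 400   # fixating on a target this long acts like a tap
DOUBLE_BLINK = 2        # two quick blinks act like a back/cancel gesture

def interpret(event):
    """Translate one eye event into a touch-style command, or None."""
    kind = event["kind"]
    if kind == "fixation" and event["duration_ms"] >= DWELL_SELECT_MS:
        return ("tap", event["target"])           # dwell = tap/select
    if kind == "blink" and event["count"] >= DOUBLE_BLINK:
        return ("back", None)                     # double blink = cancel
    if kind == "saccade":
        return ("scroll", event["direction"])     # quick glance = scroll
    return None                                   # ignore everything else

# Example: a short stream of eye events
events = [
    {"kind": "fixation", "target": "menu-item-3", "duration_ms": 520},
    {"kind": "blink", "count": 2},
    {"kind": "saccade", "direction": "down"},
]
commands = [c for c in (interpret(e) for e in events) if c]
```

The point of the sketch is simply that a gesture vocabulary like tap, cancel and scroll does not depend on a finger at all; any sensor that can report where the eye rests and for how long could drive the same interface.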
Although the AMOLED touch screen, in ever-increasing sizes, will probably dominate the smart-phone look for the coming years, I would welcome some rethinking of the personal communication packages on offer in the near future. Stop focusing on lawsuits and make me happy with some revolutionary options for my next phone!