5 Ways We’ll Interface With Future Computers

Since the dawn of personal computing, the mouse has served as the link between human and machine. As computers have become ever more powerful and portable, this basic point-and-click interface has remained tried, true and little changed.

But now, new ideas and products are offering revolutionary ways for people to interact with their computers. In the tradition of squeezing the most out of machines in the least amount of time, the mouse and even the keyboard might someday come to be relics of a slower, bygone era.

Here are five emerging technologies likely to shake up how we get computers to follow our commands.

Multi-touch

Rather than pointing with a mouse or laptop touchpad and then double-clicking on an icon or dragging a scroll bar, for instance, "multi-touch" lets users input complex commands with simple finger gestures. A well-known example is the "pinching" of an Apple iPhone screen with two fingers to zoom, or a two-fingered "swipe" to go through Web pages.

Many other cell phone companies have followed the multi-touch lead of Apple, which has made extensive use of it in its iPhone, iPod touch, MacBook and the soon-to-be-released iPad. And the top surface of Apple's new Magic Mouse is actually a gesture-recognizing touch pad.

One advantage of bringing multi-touch to regular computers would be speeding up the pace at which commands can be entered: multiple fingers trump the single coordinate of an onscreen mouse pointer.

But two key hurdles stand in the way. First, people cannot comfortably reach out and touch computer screens for long periods of time. Second, users block the very screen they are trying to view when they multi-touch it.

One proposed way around these problems is the multi-touch interface called 10/GUI by graphic designer R. Clayton Miller. Users rest their hands on what looks like a large laptop touchpad (the keyboard appears above this pad), putting all ten fingers to use in navigating a computer screen and performing actions.

On the computer screen, 10 small, see-through circles appear that represent the user's fingers. Pressing and moving with a certain number of fingers lets the user access application menus, scroll through pages, and so on.
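As a rough illustration, here is a minimal Python sketch of how an interface of this kind might map the number of pressed fingers and their movement to actions. The Touch class, the thresholds and the specific finger-count-to-action mapping are hypothetical stand-ins, not details taken from Miller's 10/GUI design.

```python
# Hypothetical sketch of a 10/GUI-style gesture dispatcher (not Miller's actual design).
# It maps how many fingers are pressed, plus how they move, to an interface action.

from dataclasses import dataclass

@dataclass
class Touch:
    finger_id: int   # 0-9, one circle per finger resting on the pad
    x: float         # horizontal position on the pad, 0.0-1.0
    y: float         # vertical position on the pad, 0.0-1.0
    pressed: bool    # True if the finger is pressing down, not just resting

def interpret_gesture(touches: list, dx: float, dy: float) -> str:
    """Return an action name based on how many fingers are pressed and how they move."""
    pressed = [t for t in touches if t.pressed]
    count = len(pressed)
    if count == 1:
        return "move pointer"            # one finger behaves like a classic pointer
    if count == 2 and abs(dy) > abs(dx):
        return "scroll page"             # two fingers dragging vertically scrolls
    if count == 2:
        return "swipe between pages"     # two fingers dragging sideways swipes
    if count == 3:
        return "open application menu"   # three fingers pressed opens a menu
    if count >= 4:
        return "switch application"      # four or more fingers switches programs
    return "no action"

# Example: two pressed fingers dragged downward would scroll the page.
touches = [Touch(0, 0.40, 0.50, True), Touch(1, 0.45, 0.50, True)]
print(interpret_gesture(touches, dx=0.0, dy=-0.2))  # -> "scroll page"
```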

Gesture sensing

Beyond motion sensing, which a mouse rolling across a desk already does handily, or iPhone pinching, gesture sensing can capture movement in three dimensions.

In recent years, Nintendo's Wii gaming console has introduced gesture sensing to the masses. A plethora of other manufacturers have recently put out gesture-sensing products, though mostly for gamers.

One company likely to target the average desktop user down the road is Los Angeles-based Oblong Industries, Inc., which makes a product called g-speak that serves as an "operational environment." A user wearing special gloves stands in front of a giant wall-mounted screen and a tabletop monitor. Using a range of gestures akin to a traffic cop's – as well as finger pistol-shooting – the user can move images and data from one screen to the other. (A technology very similar to Oblong's was featured in Steven Spielberg's 2002 film “Minority Report.”)

Christian Rishel, chief strategy officer at Oblong, said this interface lets people sift through massive data sets quickly, "when you're flooded with data and you need to find the right thing at the right time."

Early adopters of the expensive interface include the military and oil companies, said Rishel, but he thinks in five to 10 years all computers will include some form of this technology.

Rishel thinks that taking human-computer interaction beyond the computer's two-dimensional screen will make the time we spend with our machines more physical, rewarding and effective.

"We need to let the data swim out and paint the walls with it," Rishel told TechNewsDaily.

Voice recognition

Instead of interfacing with a point-and-click mouse and a keyboard, what if we just spoke to our computers?

The concept of voice recognition as an interface has been around for decades and a number of software products are currently available. Many of these act as transcriptionists – a handy feature given that people can speak words about three times faster than they can type them, according to Nuance, a Massachusetts-based company that makes Dragon NaturallySpeaking.

Dragon goes much further than mere stenography, however. It has allowed people with physical disabilities who cannot use a traditional keyboard and mouse to operate their computers.

"We've got a core group of users . . . who use their voice to control their computer 100 percent of the time," said Peter Mahoney, senior vice president and general manager for Dragon.

Mahoney gave some examples of how Dragon recognizes voice commands and acts on them. For instance, while speaking in Microsoft Word, one can say "underline while speaking," and Dragon will do so. Users call out punctuation ("period, new paragraph") and menu options ("Review, track changes") to interface with programs.

Saying "search the Web" launches an online browser, and voice commands then allow the user to select links to read. Other programs such as an email application can also be opened with simple voice commands.

"A speech interface is flexible and nearly infinite in what it can do," Mahoney said in a phone interview. "It goes far beyond the capabilities of a physical device" like a mouse.

Eye-tracking

Since we are looking at what we want to click, why not harness the power of our gaze?

So-called eye tracking relies on a high-resolution camera and an invisible infrared light source to detect where a user is looking.

The technology has proven useful in scientific and advertising research. When it comes to everyday desktop or laptop computer use, however, eye tracking is mostly geared for those with disabilities, and is currently pricey.

One effort that has aimed to develop eye tracking for the general public is the GUIDe (Gaze-enhanced User Interface Design) research project. It produced EyePoint software, which lets users keep both hands on a keyboard whose key inputs are modified to work like a mouse.

When a user focuses their eyes on a point on the screen, that section becomes magnified, and pressing a key tells the program to proceed.
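Here is a minimal Python sketch of that look-magnify-confirm flow, with simulated gaze coordinates and key presses standing in for a real eye tracker and keyboard. The function names and the 100-pixel magnification radius are assumptions for illustration, not details of the EyePoint software.

```python
# Hypothetical sketch of an EyePoint-style "look, magnify, confirm" interaction.
# Gaze coordinates and the key press are simulated; a real system would read them
# from an eye tracker and the keyboard.

def magnify_region(gaze_x: int, gaze_y: int, radius: int = 100):
    """Return the screen region around the gaze point that would be shown enlarged."""
    return (gaze_x - radius, gaze_y - radius, gaze_x + radius, gaze_y + radius)

def gaze_click(gaze_x: int, gaze_y: int, key_pressed: bool) -> str:
    """Magnify around the gaze point; if the user presses the hot key, click there."""
    region = magnify_region(gaze_x, gaze_y)
    if key_pressed:
        return f"click at ({gaze_x}, {gaze_y}) inside magnified region {region}"
    return f"magnifying region {region}, waiting for key press"

# The user looks near a link at (640, 380), then presses the hot key to select it.
print(gaze_click(640, 380, key_pressed=False))
print(gaze_click(640, 380, key_pressed=True))
```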

Test subjects who used EyePoint felt that the "gaze-based approach was faster and easier to use . . . because they were already looking at the target," said Manu Kumar, a former researcher at Stanford University who spearheaded the project a few years ago.

EyePoint also causes far less wrist strain than a regular mouse, Kumar said, though its gaze-and-click error rates are slightly higher than point-and-click's.

"I firmly believe that this approach can be developed to a point where it could supplant the mouse," said Kumar. Cost, he said, remains the biggest obstacle in widespread adoption of eye tracking.

Brain-computer interfaces

Think it, and the computer will do it. This ultimate melding of mind and machine is closer than you might suppose, yet it will have to overcome some potential show-stoppers before it becomes fast or commonplace enough for everyday use.

Known as a brain-computer interface (BCI), the method translates the electrical impulses of neurons into actions on a computer screen or commands for a mechanical device.

As with voice recognition, BCIs have arisen to help those with injuries or debilitating ailments, such as brain stem strokes or amyotrophic lateral sclerosis, often called Lou Gehrig's disease. Over the past decade, BCIs have enabled human patients who cannot move their bodies to move a cursor on a monitor.

A long-recognized problem in developing commercial BCIs for healthy people is that getting a strong, clear signal from the brain requires implanted electrodes, which are prone to infection, bodily rejection and the formation of scar tissue.

However, other existing, non-invasive brain scan technologies such as electroencephalography (EEG) – worn shower-cap style with electrodes on the scalp – have made some strides recently.

At the CeBIT trade show in Germany earlier this month, Guger Technologies showed off the Intendix device, which the company calls the “world’s first BCI speller.” Letters and numbers on a virtual keyboard flash on the monitor, and when the one you want lights up, Intendix registers a slight spike in brain activity and presto, the character is selected.
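A minimal Python sketch of how such a flashing speller might pick a character is shown below. The simulated "brain responses" and the averaging scheme are hypothetical stand-ins, not g.tec's actual Intendix algorithm, which works on real EEG signals.

```python
# Hypothetical sketch of a flashing-speller selection loop like the one described above.
# Brain-signal "spikes" are simulated as simple numbers; a character is selected when
# its flashes coincide with the strongest average response.

import random

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def simulated_brain_response(flashed: str, target: str) -> float:
    """Return a signal strength: larger when the flashed letter is the one the user wants."""
    baseline = random.uniform(0.0, 0.3)
    return baseline + (1.0 if flashed == target else 0.0)

def run_speller(target: str, flashes_per_letter: int = 3) -> str:
    """Flash every letter several times and pick the one with the strongest average response."""
    scores = {}
    for letter in LETTERS:
        responses = [simulated_brain_response(letter, target) for _ in range(flashes_per_letter)]
        scores[letter] = sum(responses) / len(responses)
    return max(scores, key=scores.get)

print(run_speller(target="H"))  # prints "H" once the simulated spike is detected
```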

The company says Intendix will let injured or ill people communicate, and that learning the interface takes mere minutes, after which users can produce five to 10 characters per minute. That is clearly too slow for everyday use by healthy people, and another drawback is that the device costs $12,000.

Down the road, continuing research into "neural prosthetics" – devices connected to people's brains and operated by brain waves – may pave the way for possible desktop adoption.

Whatever the future might hold for human-computer interfaces, the mouse's days as a humble, steady workhorse appear just as numbered as those of the horse-and-buggy of yesteryear.

Adam Hadhazy
Adam Hadhazy is a contributing writer for Live Science and Space.com. He often writes about physics, psychology, animal behavior and story topics in general that explore the blurring line between today's science fiction and tomorrow's science fact. Adam has a Master of Arts degree from the Arthur L. Carter Journalism Institute at New York University and a Bachelor of Arts degree from Boston College. When not squeezing in reruns of Star Trek, Adam likes hurling a Frisbee or dining on spicy food. You can check out more of his work at www.adamhadhazy.com.