The Evolution Of Human-Computer Interaction And Digital Signage

October 25, 2019 by guest author, sixteenninewpadmin

Guest Post: Derek DeWitt, Visix

If you want to use a computer or other digital device, you have to interact with it in some way. There’s a whole discipline devoted to this, called Human-Computer Interaction, or HCI (though sometimes it’s called CHI, putting the computer first).

HCI is really the place where communication between person and machine takes place – generally a display of some kind, plus various other tools for sending data to and getting data from the computer. This back-and-forth between person and machine is called the “loop of interaction”. Today, the most commonplace tools for maintaining this loop are keyboards and some sort of pointing device like a mouse. And, of course, touchscreens are everywhere these days, especially on laptops, tablets and smartphones.


There’s a lot of talk these days about interactivity for digital signage, but the truth is that all such systems have always been interactive to some degree. People interact with a digital sign as they pass by looking at it, and maybe follow a call to action like scanning a QR code. But most of that interaction has been passive and not tailored to the viewer. As we study how humans interact with computers and screens, we see a trend toward hands-free interactions, automation and personalization.

In much the same way that the 20th century was, in many respects, the electricity century (and the 1920s saw the widespread electrification of population centers around the world), the technologies that are about to come out in the 2020s will most likely define what the 21st century is. If we look back over the years of human-computer interaction, we can see a clear line to the technologies of today. This then might allow us to make some educated guesses as to what the next decade will bring.

Early Computers
When we think of the early days of computing, we might think of giant, room-sized machines, filled with vacuum tubes, using punch cards to maintain the loop of interaction. While there were earlier computing machines actually made of cogs, gears and wheels, and the idea goes all the way back to the 1820s and 1830s, when Charles Babbage conceived of his mechanical Difference Engine and a steam-powered Analytical Engine, what we are really considering here is the first electronic digital computers.

The first of these was the Atanasoff-Berry Computer (ABC), begun in 1937, followed by the Electronic Numerical Integrator and Computer (ENIAC) in 1946. ENIAC weighed 30 tons and had more than 18,000 vacuum tubes. It could only perform one task at a time and had no operating system. Smaller machines followed, like the 1951 UNIVAC I and the IBM 600 and 700 series in the early 1950s. Between then and 1962, over 100 computer programming languages were developed, and computers gained operating systems, memory, storage and access to printers.

In the early 1960s, the integrated circuit changed things again, giving birth to computers we might think of as modern. IBM premiered its personal computer, or PC, running MS-DOS, in 1981. Apple followed just three years later with the Macintosh and its icon-based interface, and Microsoft’s Windows operating system went on to dominate desktops in the 1990s.

Early HCI
HCI as a discipline really came about in the 1980s. That era was all about how to make computers, which could now be in people’s offices and homes, more usable. When Apple launched the Macintosh in 1984, it was a game changer. People no longer had to be experts to use computers, so interaction became much easier. This was the time of the keyboard, mouse and icon-based interface. At that time, HCI gave birth to cognitive models for designing UIs, using concepts like “folders” and “a desktop”, because people mainly used computers to complete tasks they had been performing before computers came along.

In the 90s, the Windows OS came along, cementing the supremacy of icon-based UIs. Then the world wide web made the internet accessible to all. One important feature of the new connected web was that people were interacting with computers in order to interact with one another. Computers became tools for communication as well as tasks, and HCI design ideas shifted from how to accomplish tasks to encouraging interaction.

This was the rise of what’s known as social computing. We started seeing alternatives to the standard keyboard and mouse, like trackballs, joysticks and optical mice (which use light instead of a ball and don’t need a mousepad). The touchpad also showed up, allowing people to use a stylus or even just their fingers to interact with programs and data on their screens.

Evolution of HCI
Apple then came out with an innovative new interface in 2001 – the scroll wheel on its iPod music player. Throughout the noughties, mouse designs morphed into all kinds of shapes (some even bendable). Keyboards were redesigned with physical waves and ripples, keys were moved, and shortcut keys were added. In general, HCI was driven by a need to make personal interaction with computers more ergonomically sound.

The launch of the iPhone in 2007 popularized touch as a new way of thinking about HCI. Interaction now needed to be multifaceted and fast, natural and personal. Touchscreens started becoming more common for personal devices, and started showing up on computers, laptops and tablet computers. With touch technology, interactive digital signage became a real possibility.

Early versions of touchscreens were expensive, single-touch, professional-grade displays. Then the multi-touch screen came out, and surfaces that were static could suddenly be interacted with just like the iPhone. As more display manufacturers jumped into touch technology, prices came down and even static displays could be made interactive using overlays.

In the past decade, touchscreens have become ubiquitous. If you went to an AV trade show like InfoComm just ten years ago, there were very few touchscreens. Today, every screen in the exhibit hall has fingerprints all over it, even if it isn’t interactive. People now expect to be able to touch and choose and sort on every screen.

But all this touching has some drawbacks. The screens get dirty, there is a mild risk of spreading illness or infection, and screens that are touched too much might stop working correctly (think of an interactive wayfinding map at an airport that’s used by thousands of people a day – that screen is getting touched and tapped hundreds of thousands of times in a year).

New HCI methods are giving viewers more alternatives, and taking a more humanist approach to include people with different preferences, disabilities and anxieties, while personalizing interaction tools and workflows as much as possible. Not only are HCI options expanding exponentially, but methods developed for phones, homes and other personal devices are being adapted more quickly for signage systems.

Evolution of Human-Signage Interaction
All of the previous computer interface technologies we’ve covered have been physical transducers for Muscle-Computer Interfaces (muCIs). They are physical objects that use human muscles to do most of the interacting.

Today, we’re seeing technologies such as voice user interfaces (VUIs) that allow interaction with a screen without touching it at all. You just speak a command or question, and it shows you what you want. It’s very similar to smart speakers in homes, like Amazon’s Echo, and computer systems that already allow this type of interaction, like Microsoft’s Cortana. It aims to satisfy an audience used to hands-free commands from smartphone assistants like Google Assistant, Siri and Bixby.
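
To make that concrete, here’s a rough sketch of how a browser-based signage player might wire spoken phrases to on-screen content using the Web Speech API available in some browsers. The command phrases and the showContent() helper are hypothetical placeholders, not features of any particular signage product.

```typescript
// Minimal sketch of a voice-driven signage player using the browser's
// Web Speech API. The command phrases and showContent() helper below
// are hypothetical, illustrative placeholders.

// Hypothetical hook into the signage player's content engine.
function showContent(playlist: string): void {
  console.log(`Switching display to playlist: ${playlist}`);
}

// Map spoken phrases to signage playlists (illustrative only).
const commands: Record<string, string> = {
  "show me the menu": "cafeteria-menu",
  "where is room 204": "wayfinding-floor-2",
  "what's on today": "events-today",
};

// SpeechRecognition is vendor-prefixed in some browsers.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-US";
recognition.continuous = true;      // keep listening between utterances
recognition.interimResults = false; // only act on final transcripts

recognition.onresult = (event: any) => {
  // Take the transcript of the most recent utterance.
  const transcript = event.results[event.results.length - 1][0].transcript
    .trim()
    .toLowerCase();
  const playlist = commands[transcript];
  if (playlist) {
    showContent(playlist);
  }
};

recognition.start();
```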

Augmented Reality (AR) co-opts people’s smartphones or tablets into becoming part of the display interface for a short while, overlaying data on the digital sign or the real world itself through their own personal device. This is a way that static displays can still offer an interactive experience. It also introduces gameplay techniques to digital signage ecosystems.

Facial recognition is the newest kid on the block. Screens can have cameras embedded or attached to them, and the sign “looks” at a person in front of it to recognize their facial expression. Even more advanced face recognition technology is making it possible for that camera to see more – the person’s height, gender, age, ethnicity, whether they are alone or with their family, and so on. It then decides what content might be relevant to that viewer, based on given sets of parameters.
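
As a simplified illustration, those “given sets of parameters” could be nothing more than a list of rules matched against whatever attributes the camera reports. The AudienceProfile fields, playlist names and rules below are hypothetical examples, not any vendor’s actual schema.

```typescript
// Sketch of rule-based content selection driven by audience analytics.
// All field names, playlists and rules are hypothetical examples.

interface AudienceProfile {
  estimatedAgeRange: "child" | "teen" | "adult" | "senior";
  groupSize: number;        // 1 = alone, >1 = with others
  expression: "neutral" | "smiling" | "frowning";
  dwellTimeSeconds: number; // how long they've been in front of the screen
}

interface ContentRule {
  playlist: string;
  matches: (p: AudienceProfile) => boolean;
}

const rules: ContentRule[] = [
  { playlist: "family-activities", matches: p => p.groupSize > 2 },
  { playlist: "quick-headlines",   matches: p => p.dwellTimeSeconds < 5 },
  { playlist: "detailed-promos",   matches: p => p.dwellTimeSeconds >= 5 && p.expression === "smiling" },
];

const DEFAULT_PLAYLIST = "general-rotation";

// Pick the first rule that matches; fall back to the default loop.
function selectPlaylist(profile: AudienceProfile): string {
  const hit = rules.find(rule => rule.matches(profile));
  return hit ? hit.playlist : DEFAULT_PLAYLIST;
}

// Example: a smiling adult lingering alone in front of the screen.
console.log(selectPlaylist({
  estimatedAgeRange: "adult",
  groupSize: 1,
  expression: "smiling",
  dwellTimeSeconds: 12,
})); // -> "detailed-promos"
```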

Future of Human-Signage Interaction
So, we’ve gone from punch cards and keyboards to simply talking to our devices, or interacting with a device by using another device. Soon, just standing in front of a device will be a type of interaction of its own.

As AI programs and machine learning get more and more sophisticated, we’ll start seeing content specifically tailored to us as unique individuals, simply by walking up to the screen and looking at it. Soon it will even be able to recognize the actual individual looking at the screen, and access their entire browsing and purchasing history to tailor content to that specific person.

This is a type of HCI that is almost completely hidden, and so seems more natural. The interaction, through RFID, NFC, Bluetooth and cameras, becomes passive, at least on the viewer’s end of things. People go about their day in the world, and the screens around them serve up content and offer things to them.
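
A rough sketch of that passive loop might look like the following, with the actual detection method (RFID, NFC, Bluetooth beacon or camera) abstracted away. The event shape, badge IDs and preference store here are all hypothetical.

```typescript
// Sketch of passive, proximity-driven personalization. How a viewer is
// detected (RFID badge, NFC tag, BLE beacon, camera) is abstracted away;
// the event shape and preference store are hypothetical.

interface ProximityEvent {
  viewerId: string; // an opted-in badge, tag or device identifier
  signalSource: "rfid" | "nfc" | "ble" | "camera";
  distanceMeters: number;
}

// Hypothetical preference store, keyed by opted-in viewer IDs.
const preferences = new Map<string, string>([
  ["badge-1138", "engineering-news"],
  ["badge-2187", "campus-events"],
]);

const TRIGGER_DISTANCE_METERS = 3;

// Called by whatever sensor layer detects a nearby viewer.
function onViewerNearby(event: ProximityEvent): string {
  if (event.distanceMeters > TRIGGER_DISTANCE_METERS) {
    return "general-rotation"; // too far away: keep the default loop
  }
  return preferences.get(event.viewerId) ?? "general-rotation";
}

// Example: a known badge passes within range of the lobby screen.
console.log(onViewerNearby({
  viewerId: "badge-1138",
  signalSource: "ble",
  distanceMeters: 1.8,
})); // -> "engineering-news"
```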

There’s been a lot of talk about gesture interfaces. The film “Minority Report” popularized the idea, and quite a bit of progress has been made in this technology (the film’s gestural interface was designed by John Underkoffler, whose research later became the g-speak platform from Oblong Industries). We can already see some basic forms of gesture interaction on the market and at trade shows now. Expect to see a lot more in the next few years.

We also have early versions of eye gaze tracking (EGT) software that can see where a person is looking on a screen. A certain blink pattern can make the text larger or clearer. Using EGT alone, or in hybrid systems combined with other inputs, interactions could become much faster than they are now.
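
To illustrate, a deliberate double blink could be distinguished from ordinary blinking with nothing more than a timing window. The gaze-tracker events and zoomText() action below are hypothetical stand-ins for whatever EGT toolkit a deployment actually uses.

```typescript
// Sketch of a blink-pattern trigger for an eye-gaze-tracked screen.
// The blink events and zoomText() action are hypothetical stand-ins.

const DOUBLE_BLINK_WINDOW_MS = 600; // two blinks this close together = a command

let lastBlinkAt = Number.NEGATIVE_INFINITY;

// Hypothetical action on the signage UI.
function zoomText(): void {
  console.log("Enlarging on-screen text for the current viewer.");
}

// Called by the (hypothetical) gaze tracker each time it detects a blink.
function onBlink(timestampMs: number): void {
  if (timestampMs - lastBlinkAt <= DOUBLE_BLINK_WINDOW_MS) {
    zoomText();                              // deliberate double blink: treat as a command
    lastBlinkAt = Number.NEGATIVE_INFINITY;  // reset so a triple blink doesn't fire twice
  } else {
    lastBlinkAt = timestampMs;               // single blink: probably involuntary, just remember it
  }
}

// Example: two blinks 400 ms apart trigger the zoom.
onBlink(1000);
onBlink(1400);
```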

There is even talk that, some day soon, people will begin to embed tech into their bodies. This tech will then interact with the devices around us. Such embedded tech will leverage micro-expressions and gestures, things we humans do without even noticing when we’re thinking about something.

Right now, some companies are experimenting with tooth-click control for devices (combined with head movements), giving paralyzed or partly paralyzed people full access to a computer interface. If this is successful, it will probably trickle out into the general market as well. And then there’s BCI, or brain-computer interfaces. Sounds like science fiction, but this is actually being worked on in earnest.

All this technology will probably show up in the computer and gaming markets first, but will quickly move into advertising, as well as the realm of organizational communications and digital signage.

Looking Ahead
Soon, everything will be interactive, but won’t need to be actually touched. In fact, the very concept of digital signage might well change.

If people have embedded tech in their bodies, then the space for communications to reach them might extend out from a physical display to a whole area of a building, like the lobby. In fact, there may be no need for a physical screen at all, if you think of wearable computing devices, holograms and other proposed technologies.

As people move through the lobby, the content management system would be communicating with them, sending them messages that are specifically relevant and interesting to them as individuals, and no two people would get exactly the same content at the same time. And they could interact with the content they’re receiving right then and there, with minimal gestures, blinks, tooth clicks, or what have you. The entire space becomes a tailored content zone for each person in it.

This is really the ultimate hands-free and personalized experience. It’s starting now with voice-activated interfaces, gesture controls and eye-gaze tracking, and will bloom into completely new ways to communicate. In 50 years, people will look back and say, “Wow, did they really use to touch screens with their fingers?”
