Here’s a very different video demo showing some possibilities for how digital signage might work with smart devices like the Google Home personal assistant, Google’s answer to Amazon’s Alexa.
The demo from UK-based ScreenCloud shows how content could be accessed via voice command. It’s kinda neat, I thought, but I was a little stumped by real-world use cases and how this would work in areas with background noise and multiple people talking.
So I asked ScreenCloud CEO Mark McDermott, who wrote back:
So I guess this is just the very early stages of us trying to move beyond the classic digital signage category and more into how screens can be used better in various environments. We see interactivity, not just via kiosk apps, as essential to their usefulness.
Example uses could be:
- You’re in an office, walking with a client. You reference a piece of work and realise it would be better explained with a video case study, so you ask ScreenCloud for the content or relevant playlist on a nearby screen. Or you pull up various dashboards for a quick meeting;
- A sales assistant is talking to an interested customer and wants to interface with a screen to assist in the sales process by bringing up reviews, images, etc.;
- Presenting in a college or school, you ask ScreenCloud to bring up a presentation from your Google Drive rather than fiddling with HDMI ports and VGA cables.
We very much see this working with other forms of interaction, such as touch, swiping in an app, or even the arrow keys on a remote control. In terms of background noise, both Google Assistant and Alexa have done a decent job of filtering that. However, I probably wouldn’t recommend it for busy environments.
In terms of filtering, this demo is fairly raw, but locking it down would be easy. We would just restrict searches to existing content configurations in the ScreenCloud library, using the content title as the search trigger. I did think Luke was asking for trouble asking for Carl Cox, but that’s his favourite DJ and he was up late making it, so I let him off 🙂
So in short:
- We think interactivity is essential if screens are going to be meaningful;
- We need to think beyond touch, as most screens don’t have it. We should embrace the interaction methods people are using elsewhere (voice, mobile, even remotes) and make them easy and cost-effective to add to existing screens (e.g., Google Assistant is already rolling out to Android TV, and Alexa is built into Fire TV);
- We need to pass on a degree of flexibility and control to the user over what content is shown (where applicable and with some form of restriction).
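McDermott’s lock-down idea, restricting voice triggers to content titles already in the library, could be sketched roughly like this. This is purely illustrative: the titles, the function name and the fuzzy-matching approach are my assumptions, not ScreenCloud’s actual implementation.

```python
import difflib

# Hypothetical content titles already configured in a signage account.
# Only these can be triggered by voice; anything else is ignored.
CONTENT_TITLES = [
    "Q3 Sales Dashboard",
    "Customer Case Study Video",
    "Campus Welcome Playlist",
]

def match_voice_command(transcript, titles=CONTENT_TITLES, cutoff=0.6):
    """Return the library title that best matches the spoken request,
    or None if nothing is close enough, so an unknown phrase is
    rejected rather than played."""
    lowered = [t.lower() for t in titles]
    matches = difflib.get_close_matches(
        transcript.strip().lower(), lowered, n=1, cutoff=cutoff
    )
    if not matches:
        return None
    # Map the lowercased match back to the original title.
    return titles[lowered.index(matches[0])]
```

A request like "play the q3 sales dashboard" would resolve to the "Q3 Sales Dashboard" item, while unrelated speech falls below the similarity cutoff and is simply dropped, which is one simple way to keep a noisy room from hijacking the screen.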