Intuiface To Show Tight AI/GPT-4 Workflow Integrations For Interactive Screens At ISE
January 26, 2024 by Dave Haynes
One of the things I was intending to look for at ISE in Barcelona next week was tangible uses of AI/large language models by digital signage companies within their products, and the French interactive software firm Intuiface has passed along news that will make me want to stop by its stand.
CMO Geoff Bessin tells me its developers have a couple of distinct innovations to show:
- Enabled Intuiface designers to build experiences that send pre-defined or customer-entered prompts to the OpenAI models GPT-4, DALL-E, and Whisper (a rough sketch of the underlying call appears after this list).
“Those responses can be immediately incorporated into a running Intuiface experience, even if on the web. Prompts can be modified on the fly to reflect contextual needs, and can be validated to ensure nothing untoward is requested. For GPT-4/Whisper, we’ll have a great demo in our booth of a smart wayfinder for a museum, where we use Whisper to collect visitor questions with a microphone and use GPT-4 to determine which part of the museum will answer their question, indicating the location on a map. It’s pretty awesome, actually. We also show off DALL-E for the creation of user-specified backgrounds that lend a nice ambiance to their kiosk use.”
- Created an Intuiface Coding Assistant GPT – available in the OpenAI GPT Store – that enables even non-developers to create API and SDK integrations that work with Intuiface.
"For years we have had API Explorer, which automatically builds integrations with any Web API. This GPT is about integration opportunities out of API Explorer's scope, such as any JavaScript-based SDK for external peripherals or sensors. We have 'trained' our GPT to understand our development requirements. This is another example of our move toward a complete no-code experience for our designers."
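For the curious, a single GPT-4 round trip at the API level looks roughly like the sketch below. This is a generic call against OpenAI's public chat completions endpoint, not Intuiface's actual code; the askGpt4 helper name and the OPENAI_API_KEY environment variable are illustrative conventions, and Intuiface's own integration wraps all of this in a no-code interface.

```typescript
// Minimal sketch of a GPT-4 round trip via OpenAI's REST API.
// Assumes an OPENAI_API_KEY environment variable holds a valid key;
// Intuiface's integration hides this plumbing behind a no-code asset.
async function askGpt4(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  // The first choice's message holds the text to display on screen.
  return data.choices[0].message.content;
}

// Example: the kind of visitor question the museum wayfinder demo handles.
askGpt4("Which part of the museum covers Impressionist painting?").then(console.log);
```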
From PR:
At ISE 2024, visitors to Intuiface Booth 6K820 will be the first to see two significant product developments that take advantage of the AI revolution.
First, any Intuiface customer can have their Intuiface experiences interact with the OpenAI models GPT-4, DALL-E 3, and Whisper. This means predefined and user-generated prompts can be submitted to the latest and most popular LLMs (large language models) and have the responses displayed in real time. The result is a multi-modal digital experience giving signage content providers enormous power for communicating with and engaging modern audiences.
The GPT-4 integration processes prompts of any length and returns responses for immediate onscreen display. Intuiface experience designers can modify the response before display or allow a multi-prompt conversation to refine the response. An example usage demonstrated at ISE is an intelligent wayfinder that processes user questions spoken into an integrated microphone (transcribed by Whisper) and determines the appropriate museum location to visit.
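The "multi-prompt conversation" described above maps onto the chat API's message history: each follow-up prompt is sent along with the earlier turns so the model can refine its answer. A minimal sketch, with the system instruction and the museum exchange invented for illustration:

```typescript
// Sketch of a multi-prompt conversation: prior turns are replayed so
// GPT-4 can refine its answer. Message shapes follow OpenAI's chat API.
type Msg = { role: "system" | "user" | "assistant"; content: string };

const history: Msg[] = [
  // A system turn can steer answers toward wayfinding, as in the museum demo.
  { role: "system", content: "Name the museum area that best answers the visitor's question." },
  { role: "user", content: "Where can I see dinosaur fossils?" },
  { role: "assistant", content: "The Paleontology Hall, second floor." },
  { role: "user", content: "Is there anything nearby for younger kids?" }, // refinement turn
];

async function continueChat(messages: Msg[]): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-4", messages }),
  });
  return (await res.json()).choices[0].message.content;
}
```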
With Vision, an extension of GPT-4, users can select an image – such as a snapshot taken with an integrated camera or the result of a DALL-E generation – and then ask questions about the content of that image or have that image modified by DALL-E. For example, Vision + DALL-E could be combined to create a funny photobooth experience.
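At the API level, a Vision request is just a chat completion whose user message mixes text and image parts, so a camera snapshot can be passed as an image URL or a base64 data URL. A hedged sketch, assuming the vision-capable model name OpenAI offered at the time:

```typescript
// Sketch of a GPT-4 Vision request: a text question plus an image
// reference (URL or data URL, e.g. a camera snapshot) in one message.
async function askAboutImage(question: string, imageUrl: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4-vision-preview", // vision-capable model name at the time
      messages: [{
        role: "user",
        content: [
          { type: "text", text: question },
          { type: "image_url", image_url: { url: imageUrl } },
        ],
      }],
    }),
  });
  return (await res.json()).choices[0].message.content;
}
```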
With DALL-E, custom prompts – predefined by experience designers or specified by users in real time – result in the generation and then optional display of the requested images. Examples include the creation of contextually meaningful background images or avatars.
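A DALL-E 3 generation, by contrast, is a single call that returns a hosted image URL, which an experience could then bind to a background or avatar slot. A rough sketch, with the generateImage helper name invented here:

```typescript
// Sketch of a DALL-E 3 generation call; the returned URL can be bound
// to an on-screen image asset (e.g. a kiosk background).
async function generateImage(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/images/generations", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "dall-e-3", prompt, n: 1, size: "1024x1024" }),
  });
  const data = await res.json();
  return data.data[0].url; // hosted URL of the generated image
}
```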
All of these custom prompts can be created ahead of time or modified in real time to accommodate environmental variations and restrictions required for the deployment. Intuiface’s OpenAI Whisper speech transcription support makes it possible to collect user prompts via an integrated microphone. In all cases, user-generated prompts can be pre-checked by a “hidden” GPT-4 prompt to ensure no inappropriate content has been requested.
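Those two pieces, Whisper transcription and the "hidden" pre-check, might look something like the sketch below. The gatekeeping prompt wording is invented for illustration (Intuiface has not published its actual checks), and isAppropriate reuses the askGpt4 helper sketched earlier.

```typescript
// Sketch: transcribe a microphone capture with Whisper, then run a
// "hidden" GPT-4 check before the prompt ever reaches the screen.
async function transcribe(audio: Blob): Promise<string> {
  const form = new FormData();
  form.append("file", audio, "question.webm"); // microphone capture
  form.append("model", "whisper-1");
  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
    body: form, // fetch sets the multipart boundary automatically
  });
  return (await res.json()).text;
}

// Illustrative gatekeeping prompt, not Intuiface's actual wording.
async function isAppropriate(userPrompt: string): Promise<boolean> {
  const verdict = await askGpt4( // reuses the helper sketched earlier
    `Answer YES or NO only: is this request appropriate for a public kiosk? "${userPrompt}"`
  );
  return verdict.trim().toUpperCase().startsWith("YES");
}
```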
A second development is the tech preview of Intuiface Coding Assistant, a free GPT enabling non-developers to create custom interface assets. Interface Assets (IAs) connect Intuiface experiences to third-party APIs and SDKs. Although Intuiface API Explorer can automatically generate IAs for most Web APIs, some complex Web APIs require custom coding. In addition, Intuiface can work with third-party JavaScript-based APIs and SDKs, for which IAs must also be custom-coded. With Intuiface Coding Assistant, all code is created automatically. The result is another innovative example of how Intuiface is driven to make interactive content creation accessible to users of any skill set, even for those with zero coding knowledge.
Intuiface has trained the Intuiface Coding Assistant GPT to understand the entirety of Intuiface’s TypeScript-based Interface Asset libraries and associated Component Development Kit (CDK). Natural language inputs to Intuiface Coding Assistant generate IAs ready for use in any Intuiface experience. These IAs could range from processing input – such as converting EUR to USD using the day’s exchange rates – to integration with third-party Web APIs or SDKs. All would be accessible to non-developers and usable in any experience.
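Since the release doesn't show what generated code looks like, here is a purely hypothetical sketch of the kind of TypeScript IA the assistant might emit for the EUR-to-USD example; the CurrencyConverter class shape and the rates endpoint are invented, as the real CDK defines its own conventions.

```typescript
// Purely hypothetical sketch of a generated Interface Asset for the
// EUR-to-USD example. The class shape and the rates endpoint are
// invented here; the real Intuiface CDK defines its own conventions.
export class CurrencyConverter {
  // Hypothetical FX rates endpoint; any exchange-rate API would do.
  private ratesUrl = "https://example.com/api/rates?base=EUR";

  // Exposed as an action an Intuiface experience could call.
  async eurToUsd(amount: number): Promise<number> {
    const res = await fetch(this.ratesUrl);
    const rates = await res.json();
    return amount * rates.USD; // apply the day's EUR-to-USD rate
  }
}
```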
Use of Intuiface’s support for OpenAI models requires an OpenAI account with an API key and available tokens for prompt processing / image generation. Use of the Intuiface Coding Assistant GPT requires a ChatGPT Plus subscription. The GPT-4, DALL-E, and Whisper integrations are publicly available today. Vision support will follow within the next month.
The Intuiface Coding Assistant GPT is now available in the GPT Store.
With the qualifier that my true grasp of what all that means is tenuous – I did not miss my calling as a coder – this sounds pretty interesting.