Announcing PullString Converse 1.2: Create Multi-modal Voice Experiences More Efficiently

June 8, 2018

When organizations new to the voice space ask us how to get started, we typically recommend taking a “crawl, walk, run” approach: start by delivering a basic Alexa skill in market to learn about the skill creation process as well as what their customers are looking for, then add functionality to enhance the experience as more is learned.

We have accompanied many organizations on that journey over the last twelve months, and many are now reaching the “run” stage. Our Converse 1.2 release is aimed at helping organizations deploy and manage more complex skills in an efficient, easy-to-use way, which has been the hallmark of our PullString Converse voice app development platform since day one.

Converse 1.2 will help you create more advanced skills with multi-modal content and robust state and conversation management, and do so more efficiently, accelerating your time to market and shortening the iteration cycles in your voice app development.

Building multi-modal voice experiences

To enhance the voice experience, Amazon has done a great job of augmenting its line of smart speakers with screen-based devices like the Echo Show and the Echo Spot, which can display visual elements to help users review a recipe or browse a list of results.

Converse 1.2 now allows you to craft multi-modal voice applications that support these screen-based devices. In our latest Converse release, you can add the following visual elements to augment your Alexa skill:

  • Display templates: show titles, text, and images on screen alongside your dialog, for example to present a recipe or a list of results.

  • Video: the Echo Show and Echo Spot can stream video so that you can turn your skill into a show and tell!

In addition to audio assets, our asset library now supports image and video assets so that you do not need to invest in a separate content management system to host these files.

Converse 1.2 makes it very easy to add these screen-based elements to your skill and create richer experiences that will boost user engagement and re-engagement.
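Converse builds the underlying Alexa responses for you, but for orientation, here is a hand-written sketch (not Converse output) of the two Alexa directive types behind these visual elements, following Amazon's documented response format; all URLs, titles, and text are placeholders.

# Sketch of the two Alexa directives behind the new visual elements.
# Converse generates these for you; the values below are placeholders.

# 1. A display template: renders a title, image, and text on screen-based
#    devices such as the Echo Show and Echo Spot.
display_directive = {
    "type": "Display.RenderTemplate",
    "template": {
        "type": "BodyTemplate2",
        "token": "recipe-card",
        "title": "Tomato Soup",
        "image": {"sources": [{"url": "https://example.com/soup.png"}]},
        "textContent": {
            "primaryText": {"type": "PlainText", "text": "Step 1: dice the tomatoes."}
        },
    },
}

# 2. A video launch: streams a video file on devices with a screen.
video_directive = {
    "type": "VideoApp.Launch",
    "videoItem": {
        "source": "https://example.com/soup-walkthrough.mp4",
        "metadata": {"title": "Tomato Soup", "subtitle": "Video walkthrough"},
    },
}

# Either directive is attached to the skill response's "directives" array
# alongside the usual outputSpeech.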

Easily debug skills during development

As voice applications become more complex, they typically add more conversation states (i.e., variables that record the state of the conversation and the personalization of the experience), logic flows, and web service calls. As these projects grow, so does the pain of debugging them and finding where errors occur.

Our new release of PullString Converse makes it super easy to debug your project: choose the conversation block you want to start the debugging session from and launch the debugger. The debugger runs through the dialog lines based on your inputs and displays a log of everything happening behind the scenes: variables initialized, conditional statements matched, web services called, and so on.

[Screenshot: the Converse debugger]

This provides an easy way to do unit testing on your voice app and discover any issues very rapidly, cutting down development time.
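To make the idea concrete, here is a toy illustration (not Converse internals, just hand-written Python) of the kind of state variables and branching a growing voice project accumulates, and the sort of trace a debugger surfaces as it steps through a turn.

# Toy illustration of conversation state, conditional logic, and the
# debug trace they produce. Names and dialog lines are made up.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("voice-app-debug")

state = {"visit_count": 0, "favorite_topic": None}   # conversation state variables

def handle_turn(user_input: str) -> str:
    state["visit_count"] += 1
    log.debug("variable set: visit_count=%s", state["visit_count"])

    if "weather" in user_input:                       # conditional branch
        log.debug("condition matched: topic == weather")
        state["favorite_topic"] = "weather"
        # a real skill might call a web service here; the debugger logs that call too
        return "It looks sunny today."

    log.debug("no condition matched; using the default dialog line")
    return "Tell me what you'd like to talk about."

print(handle_turn("what's the weather like?"))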

Staging and beta environments

Deploying a great voice experience requires a lot of testing, whether via a quality assurance team or a private beta that gathers user feedback.

[Screenshot: deployment options]

Converse now allows you to deploy your voice project to multiple environments, including staging, beta, and production, and to control which version of the project is running in each environment. It supports rolling back to a previous version or promoting a staging version to production.

Staging, beta, and production can each live under a different Amazon vendor ID. For example, a digital media advertising agency could run the staging version of a skill under its own vendor ID, but keep the production version under its customer's vendor ID.
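Conceptually, this amounts to bookkeeping like the following sketch (not the Converse API, and the vendor IDs are placeholders): each environment pins a project version and an Amazon vendor ID, and versions can be promoted forward or rolled back.

# Conceptual sketch of multi-environment deployment. Vendor IDs and
# version numbers are placeholders, not real values.
environments = {
    "staging":    {"vendor_id": "M1AGENCYVENDOR",   "version": 14},
    "beta":       {"vendor_id": "M1AGENCYVENDOR",   "version": 13},
    "production": {"vendor_id": "M1CUSTOMERVENDOR", "version": 12},
}

def promote(source: str, target: str) -> None:
    """Push the version running in `source` (e.g. staging) to `target`."""
    environments[target]["version"] = environments[source]["version"]

def rollback(env: str, version: int) -> None:
    """Pin an environment back to a previously deployed version."""
    environments[env]["version"] = version

promote("staging", "production")   # production now runs the staging version
rollback("production", 12)         # ...and can be rolled back if needed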

Sample voice app projects

In this new release, we are also providing sample voice assistant projects to get you started faster on bringing your first voice application to market. Projects as simple as a “knock knock joke” or a “choose your own adventure” game will help you understand the fundamentals of building a voice application. Projects that demonstrate back-end service integration, such as calling a weather API or sending an SMS to create multi-channel experiences, will help you take your skill to the next level.
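As a flavor of what those back-end samples cover, here is a minimal sketch of fetching a forecast and turning it into a spoken dialog line; the endpoint, parameters, and API key are placeholders, not a real service or one of the shipped samples.

# Minimal sketch of a back-end integration: call a weather API and
# format the result as a dialog line. Endpoint and key are placeholders.
import requests

WEATHER_API = "https://api.example-weather.com/v1/forecast"

def weather_line(city: str, api_key: str) -> str:
    resp = requests.get(WEATHER_API, params={"city": city, "key": api_key}, timeout=5)
    resp.raise_for_status()
    forecast = resp.json()          # e.g. {"summary": "sunny", "high_f": 72}
    return f"In {city}, expect {forecast['summary']} weather with a high of {forecast['high_f']} degrees."

# The skill would speak the returned string as its next dialog line.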

[Screenshot: new project samples]

Improving usability and ease of use

Finally, we continue to improve the user experience based on customer feedback to ensure that you can craft rich and engaging voice experiences. For example, in Converse 1.2, you can now easily add sound effects from the Alexa sound library to any dialog line.

Wrapping it all up

At PullString, we have been building voice experiences for the last 7 years. With Converse 1.2, we have built into our solution all the best practices we have learned for testing, staging, and releasing a great voice application that supports multi-modality. We can't wait to see the amazing new voice apps our new version will help you create!


Written by Guillaume Privat

Guillaume is Vice President of Product at PullString, where he leads product strategy and design. His mission is to combine art and science to create the simplest experience for designing, prototyping, and publishing voice applications. Before joining PullString, he held various product executive positions in Adobe’s Digital Media and Digital Marketing business units, as well as at Macromedia, Siebel Systems, and Grameen. When not working at PullString, Guillaume produces olive oil from a grove in the South of France.
