If you’re a developer (or just a tinkerer) who’s interested in building skills for Amazon Alexa, the platform has just introduced a host of new features and capabilities for your coding pleasure.

These 31 new features range from relatively minor tweaks to fundamental changes. Key among them, and apparently a big developer ask, is Alexa Conversations, which makes the digital assistant more natural in its chatter with human beings. “Being prepared with random turns of phrase, remembering context, carrying over the context, dealing with oversupply or undersupply of information—it’s incredibly hard,” is how Nedim Fresko, VP of Alexa Devices & Developer Technologies, described the problem to TechCrunch. “And if you put it in a way and create a state diagram, you get bogged down and you have to stop. Then, instead of doing all of that, people just settle for ‘Okay, fine, I’ll just do robotic commands instead.’”

But how does it actually work from a developer perspective? If you’ve tried to build a skill for Alexa (or any other digital assistant), you know that the “tree” of possible commands and responses can quickly become overwhelmingly complex. With Alexa Conversations, the developer provides “sample dialogs,” and the platform leverages machine learning to generate a wide range of phrasing variations and dialogue paths based on context. This could streamline the whole process, as well as make skills more versatile.
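To make the “sample dialogs” idea concrete, here’s a minimal sketch of what one might look like as data. This is purely illustrative — the structure, field names, and slot syntax below are invented for this example and are not Amazon’s actual Alexa Conversations format — but it shows the core shift: the developer supplies a handful of example turns with slots, rather than hand-drawing a full state diagram.

```python
import re

# Hypothetical shape for a "sample dialog": a list of turns, where
# {braced} names mark slots the conversation needs to fill.
SAMPLE_DIALOG = [
    {"speaker": "user",  "text": "Book me a table for {partySize} at {restaurant}"},
    {"speaker": "alexa", "text": "What time would you like the table for?"},
    {"speaker": "user",  "text": "{time}"},
    {"speaker": "alexa", "text": "Booking {restaurant} for {partySize} at {time}. Confirm?"},
]

def slots_in(turn: dict) -> list:
    """Return the slot names referenced in one turn's template text."""
    return re.findall(r"\{(\w+)\}", turn["text"])

# Every slot the dialog as a whole needs before an API call can happen:
required = {slot for turn in SAMPLE_DIALOG for slot in slots_in(turn)}
print(sorted(required))  # ['partySize', 'restaurant', 'time']
```

From a few dialogs like this, the platform’s models can expand out the many phrasings and orderings a developer would otherwise have to enumerate by hand.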

The developer can further assist the platform’s machine-learning algorithms via annotations (Responses, Utterance Sets and Dialog Acts) and specifications for when APIs are invoked. Or as described by the Amazon Developer blog:

“The trained model can predict the next steps in the dialog based on the entire conversation’s history, the current state, and the capabilities of the developer’s APIs. It can take action to drive the conversation forward, such as confirming inputs, eliciting missing information, retrieving information through an API call through your skill, or gracefully following the customer’s direction.”
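The behaviors the blog lists — eliciting missing information, confirming inputs, then invoking an API — can be pictured with a toy, rule-based stand-in. To be clear, the real Alexa Conversations dialog manager is a trained model, not hand-written rules, and the names here (`next_action`, `book_table`, the act labels) are invented for illustration.

```python
# Toy illustration of the decision the trained model makes each turn:
# given the conversation state, what should Alexa do next?
REQUIRED_SLOTS = ("restaurant", "partySize", "time")

def next_action(state: dict) -> dict:
    """Pick the next dialog move from the current conversation state."""
    for slot in REQUIRED_SLOTS:
        if state.get(slot) is None:          # undersupplied information
            return {"act": "Elicit", "slot": slot}
    if not state.get("confirmed"):           # confirm inputs before acting
        return {"act": "ConfirmArgs"}
    return {"act": "InvokeApi", "api": "book_table"}  # drive the dialog forward

print(next_action({"restaurant": "Luigi's"}))
# {'act': 'Elicit', 'slot': 'partySize'}
```

The trained model does this prediction from the entire conversation history rather than from a fixed slot list, which is what lets it cope with users who answer out of order or change direction mid-dialog.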

There are also testing tools for debugging. Alexa Conversations is still in beta, but will theoretically emerge as a more finished product at some point.

Mobile developers may also be interested in Alexa for Apps, currently in limited developer preview, which allows users to interact with iOS and Android apps via Alexa vocal requests. “Key use cases include using voice to quickly search, view more information, and access any functionality inside your app,” reads the official blog posting on the matter. “Alexa for Apps is easy to implement with any app that can be opened with deep links, and is already being added to experiences for TikTok, Yellow Pages, Uber, Sonic, Zynga, Volley, and others.”
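Since Alexa for Apps works through deep links, the app side of the integration amounts to turning a voice request into a link the app can open. The sketch below shows the general idea; the `myapp://search` scheme and the helper name are made up for this example, not a real app’s link format.

```python
from urllib.parse import quote

def search_deep_link(query: str) -> str:
    """Build a deep link that opens the (hypothetical) app's search screen
    for the phrase the user spoke to Alexa."""
    return f"myapp://search?q={quote(query)}"

print(search_deep_link("coffee near me"))
# myapp://search?q=coffee%20near%20me
```

If an app already supports deep links for search and detail screens, wiring those links up to voice requests is the bulk of the work — which is presumably what Amazon means by “easy to implement.”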

For game developers, there’s an Alexa Web API for Games that enables building games for all Alexa devices with screens. Amazon is pushing the idea that voice-controlled games are as immersive as anything you can play with a physical controller.

Amazon Alexa Skills: Lucrative or Nah?

A few years ago, Amazon enabled developers to monetize their Alexa skills. However, it’s kept the metrics for payouts under wraps, which has frustrated developers who want to refine their skills for maximum cash. A more powerful and naturalistic Alexa, by compelling more use, could help them achieve that goal.