In a recent move to enhance its digital assistant’s capabilities, Amazon announced that it will integrate a purpose-built large language model (LLM) into almost every new Echo device. Dave Limp, SVP of Amazon Devices and Services, made the announcement during the unveiling of a host of Amazon-branded tablets and Alexa-powered devices. The LLM is designed around five foundational capabilities, with a focus on making interactions more conversational. According to Amazon, a great conversation extends beyond mere words, drawing on cues like body language, understanding who is being addressed, eye contact, and gestures, though Echo devices, which have neither eyes nor hands, cannot yet act on all of those cues.
However, the showcased demos revealed that Amazon still has some ground to cover in perfecting the LLM. For instance, when Limp asked Alexa to compose a casual message inviting friends over for a BBQ, the assistant formalized the invitation, a departure from the human-like conversational tone that Amazon is aiming for. Furthermore, there were instances when Alexa completely disregarded Limp’s requests during the demonstration, although these issues may be attributed to the complexities of conducting voice assistant demos in a live setting.
Amazon’s Digital Assistant to Tap Into a Large Language Model
Dave Limp, SVP of Amazon Devices and Services, announced that Amazon’s digital assistant will soon be powered by a large language model (LLM). The announcement came alongside the unveiling of an array of Amazon-branded tablets and Alexa-powered devices.
Foundational Capabilities of Amazon’s LLM
The LLM is designed around five foundational capabilities. One of the primary goals is to make interactions more conversational. Amazon studied the components of a great conversation, including body language, understanding the audience, eye contact, and gestures. That said, Echo devices have neither eyes nor hands, so it remains to be seen how those last two elements will translate to the hardware.
Alexa’s Performance at Amazon’s Showcase
At Amazon’s showcase, Alexa’s performance was not entirely flawless. During a demonstration, when Limp asked Alexa to compose a casual message inviting friends over for a BBQ, the assistant’s phrasing came out stiffer and more formal than the tone Limp had asked for. Moreover, Alexa ignored some of Limp’s requests during the presentation. However, these issues could be attributed to the challenges of demonstrating voice assistant technology in a live setting.
Apple Introduces Double Tap Interaction on Apple Watch Series 9
Alongside Amazon’s advancements, Apple is introducing a new method of interaction called Double Tap on its Apple Watch Series 9. This feature, along with on-device Siri processing, makes the watch easier to control one-handed, for instance when the user’s other hand is busy.
Cyberattack on MGM Resorts
In other news, MGM Resorts suffered a cyberattack that shut down systems across the company. The ALPHV ransomware group took responsibility for the attack, claiming to have used social engineering tactics to access crucial systems. The full extent of the damage remains unclear.
Amazon’s New Accessibility Features
Amazon has announced two new accessibility features set to be released later this year. The first is Eye Gaze on Alexa, which will allow users with mobility or speech disabilities to use their gaze to perform preset actions on the Fire Max 11 tablet. The second is Call Translation, which will caption and translate Alexa calls on Echo Show devices in more than 10 languages.
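Amazon has not shared implementation details for Eye Gaze, but the general pattern behind gaze-driven controls, dwelling on an on-screen target until a preset action fires, can be sketched in a few lines. The sketch below is purely illustrative and assumes hypothetical names (GazeDwellTrigger, a 1.5-second dwell threshold, the sample actions); it is not Amazon’s API or implementation.

```python
class GazeDwellTrigger:
    """Fire a preset action after the user's gaze dwells on a target long enough.

    Hypothetical sketch, not Amazon's implementation: target names, callbacks,
    and the dwell threshold are illustrative assumptions.
    """

    def __init__(self, actions, dwell_seconds=1.5):
        self.actions = actions              # target name -> callback for a preset action
        self.dwell_seconds = dwell_seconds  # assumed dwell threshold
        self._target = None                 # target currently being looked at
        self._since = None                  # timestamp when the current dwell began

    def update(self, target, now):
        """Call on every gaze sample with the currently gazed-at target (or None)."""
        if target != self._target:
            # Gaze moved to a new target (or away): restart the dwell timer.
            self._target, self._since = target, now
            return
        if target is not None and now - self._since >= self.dwell_seconds:
            self.actions[target]()          # e.g. play music, call a contact
            self._since = float("inf")      # fire only once per dwell


# Usage: wire a preset tile to a callback and feed simulated gaze timestamps.
trigger = GazeDwellTrigger({"play_music": lambda: print("Playing music")})
for t in (0.0, 0.5, 1.0, 1.6):
    trigger.update("play_music", t)         # prints once, after the 1.5 s dwell
```

The design choice worth noting is that the dwell timer resets whenever the gaze leaves the target, so brief glances do not trigger actions accidentally, which matters for users who rely on gaze as their primary input.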
Final Thoughts
While Amazon’s new LLM for Alexa promises to enhance user interaction, the recent demonstration suggests that there’s still room for refinement. Nonetheless, Amazon’s focus on accessibility features like Eye Gaze and Call Translation is a commendable step towards inclusivity. In the era of digital assistants, companies must continue to push the boundaries of technology while ensuring that their products are accessible to all.