Audio processing at the edge has emerged as a particularly hot topic as users have (largely) embraced voice-based interfaces for personal devices and home electronics. The "largely" qualifier arises from specific concerns about response latency and overall feature capabilities, as well as broader privacy concerns over personal conversations reaching the public cloud.

It's no surprise that developers are gaining more solutions such as the Knowles IA8201 device, which integrates a pair of Tensilica-based processor cores: one for high-performance computing and machine-learning inference, and another for very-low-power always-on audio-signal processing. Unlike Knowles' earlier IASonic IA8508, which combines similar audio-processing cores with an Arm Cortex-M4 processor, the IA8201 is specifically designed to serve as a dedicated companion processor in designs featuring voice-activated applications (see below).

Knowles' IASonic processors include the new IA8201 companion audio processor (left) and the earlier IA8508 audio application processor (right). (Source: Knowles)

Some of the use cases Knowles looked to address specifically with the IA8201 include multi-microphone processing and machine-learning inference. With its expertise and history in microphone technology, the company is well-acquainted with the kind of multi-microphone configurations commonly used to increase the accuracy of speech recognition in voice-activated products. According to Knowles, adding microphones makes channel separation exponentially more difficult, leading to significant challenges in designing devices based on more general-purpose processors. In contrast, the IA8201's ability to handle those processing loads enables developers to use the device to create multi-microphone designs that achieve 10-100x the efficiency of earlier approaches.
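To give a sense of the kind of multi-microphone processing involved, the sketch below shows delay-and-sum beamforming, one of the simpler techniques used in multi-mic front ends to reinforce sound arriving from a target direction. This is only an illustrative example, not Knowles' implementation; it assumes NumPy and integer sample delays, whereas production pipelines typically use fractional delays and adaptive methods.

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """Align each microphone channel by its sample delay and average.

    mic_signals: 2-D array, shape (n_mics, n_samples)
    delays_samples: per-mic integer delays (in samples) steering the
        beam toward the desired source direction
    """
    n_mics, n_samples = mic_signals.shape
    out = np.zeros(n_samples)
    for sig, d in zip(mic_signals, delays_samples):
        out += np.roll(sig, -d)  # advance the channel so wavefronts line up
    return out / n_mics

# Toy scene: two mics hear the same 440 Hz tone, the second mic
# receiving it 8 samples later (a longer acoustic path).
fs = 16000
t = np.arange(1024) / fs
tone = np.sin(2 * np.pi * 440 * t)
mics = np.stack([tone, np.roll(tone, 8)])

# Steering delays matched to the arrival offsets reinforce the source;
# a mismatched steering vector would partially cancel it instead.
beamformed = delay_and_sum(mics, [0, 8])
```

Each added microphone adds another channel whose delay must be estimated and compensated per direction, which is one reason the compute cost grows quickly with array size.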
Similarly, the processing loads associated with machine-learning inference have limited the capabilities of voice-activated systems, requiring the use of cloud-based resources with increased response times and privacy vulnerabilities. The result, as our colleague Max Maxfield says, "…enables new audio use cases beyond what the host processor provides." For more on Knowles' microphone history and its new IA8201 device, check out Max's article: "Next-Gen Processor for Audio and AI at the Edge."