WEBINAR DETAILS
  • When
  • About
    As AI workloads increasingly migrate from the Cloud to the Edge, engineers face new challenges in optimizing performance, power, and latency while maintaining model accuracy and system reliability. This webinar provides an in-depth exploration of cutting-edge techniques and architectures enabling efficient AI inference at the Edge. We will examine the latest advancements in Edge SoCs and their associated toolsets, heterogeneous compute architectures, real-time operating system (RTOS) integration, and memory bandwidth strategies.
     
    The session may also cover key considerations in deploying computer vision, sensor fusion, and NLP models on resource-constrained Edge platforms, including microcontrollers and embedded Linux systems. This webinar will deliver actionable guidance for accelerating deployment of AI at the Edge, balancing intelligence, power efficiency, and robustness in environments where milliseconds and milliwatts matter.

    This session is part of our “A.I. in 2025: Shifting from the Whiteboard to Implementation” webinar series.
  • Duration
    1 hour
  • Price
    Free
  • Language
    English
  • OPEN TO
    Everyone
  • Dial-in available
    (listen only)
    Not available.
FEATURED PRESENTERS