Sid Sheth, d-Matrix | Robotics & AI Infrastructure Leaders

Episode 80
Jun 21, 2025 · 21 minutes

Show Notes

In this episode of theCUBE, Sid Sheth, CEO of d-Matrix, discusses the transformative role of in-memory computing in enhancing AI inference. Speaking with hosts John Furrier and Dave Vellante at the Robotics & AI Infrastructure Leaders 2025 event, Sheth delves into d-Matrix's flagship product, Corsair, an accelerator engineered for low-latency, high-efficiency inference workloads. Its architecture is designed to meet the escalating demand for real-time AI applications, particularly interactive large language models (LLMs) and video generation. Sheth emphasizes the fundamental differences between training and inference, outlining how custom architectures are reshaping AI deployment across environments from the edge to the data center. The conversation also highlights the company's commitment to reducing energy consumption while delivering scalable AI solutions.

Key Topics Covered:
  • Introduction to Sid Sheth and d-Matrix's flagship product, Corsair.
  • Role of in-memory computing in reducing energy consumption and latency.
  • Distinction between training and inference in AI processes.
  • Overview of d-Matrix’s architectural innovations for efficient AI inference.
  • Market trends and the growing importance of cost-efficient AI model deployment.
  • Emerging use cases: interactive LLMs, reasoning models, and real-time video generation.
  • Insights on customer deployment strategies and existing infrastructure alignment.
  • Future product roadmap focusing on scalable AI solutions in data centers.