CHANGING THE PHYSICS OF AI
Industry-leading low power AI
Providing the highest AI performance
at the industry's lowest power
Analog Inference is pioneering deep sub-threshold analog in-memory computation
Imagine AI acceleration at the edge unconstrained by power or neural net size. Analog in-memory compute is the next wave of AI!
AI power consumption inhibits adoption
Our unique technology performs data-center grade AI workloads at orders of magnitude lower power
Neural networks are constrained by size at the edge
We enable server-class networks to be deployed with extremely low latency in cost effective edge devices
Run complex networks at full resolution with ultra-low latency & no active cooling
Smart City and Retail
Run full resolution object detection, recognition & behavior networks simultaneously on a single device
Run 200,000 images per second on a single PCIe card
Ultra-low power inference for always-on audio and vision applications
Analog Inference is building a world-changing line of AI inference accelerators using our novel analog in-memory compute technology. Our solutions provide orders of magnitude more performance per watt than any other technology, and are targeted at markets ranging from edge servers all the way to mobile devices.
Interested in working on the best AI technology in the industry?
Do you need best-in-class AI performance in your edge product?
Contact us using the form below!