
CHANGING THE PHYSICS OF AI
Industry-leading low power AI
Technology
Providing the highest AI performance
at the industry's lowest power

Analog Inference is pioneering deep sub-threshold analog in-memory computation

Imagine AI acceleration at the edge unconstrained by power or neural net size. Analog in-memory compute is the next wave of AI!
AI power consumption inhibits adoption
Our unique technology performs data-center grade AI workloads at orders of magnitude lower power
Neural networks are constrained by size at the edge
We enable server-class networks to be deployed with extremely low latency in cost effective edge devices
Applications

Industrial Vision
Run complex networks at full resolution with ultra-low latency & no active cooling

Smart City and Retail
Run full resolution object detection, recognition & behavior networks simultaneously on a single device

Edge Servers
Run 200,000 images per second on a single PCIe card

Intelligent Always-On
Ultra-low power inference for always-on audio and vision applications
About Us

Analog Inference is building a world-changing line of AI inference accelerators using our novel analog in-memory compute technology. Our solutions provide orders of magnitude more performance per watt than any other technology, and are targeted at markets ranging from edge servers all the way to mobile devices.
At Analog Inference, we are passionate about bringing the power of artificial intelligence to equipment from the cloud to the edge. Learn more about our management team, investors, and advisors.
Interested in working on the best AI technology in the industry?
Do you need best-in-class AI performance in your edge product?
Contact us using the form below!
Contact