Industry-leading low power AI


Providing the highest AI performance

at the industry's lowest power


Analog Inference is pioneering deep sub-threshold analog in-memory computation

Imagine AI acceleration at the edge unconstrained by power or neural net size. Analog in-memory compute is the next wave of AI.

AI power consumption inhibits adoption

Our unique technology performs data-center grade AI workloads at orders of magnitude lower power

Neural networks are constrained by size at the edge

We enable server-class networks to be deployed with extremely low latency in cost effective edge devices

Industrial Vision

Run complex networks at full resolution with ultra-low latency & no active cooling

Smart City and Retail

Run full resolution object detection, recognition & behavior networks simultaneously on a single device

Edge Servers

Run multi-stream, high-end HD detection and classification networks within edge power and cost budgets

Intelligent Always-On

Ultra-low power inference for always-on audio and vision applications

About Us


Analog Inference is building a world-changing line of AI inference accelerators using our novel analog in-memory compute technology. Our solutions provide orders of magnitude more performance per watt than any other technology, and are targeted at markets ranging from edge servers all the way to mobile devices.

At Analog Inference, we are passionate about bringing the power of artificial intelligence to equipment from the cloud to the edge. Learn more about our management team, investors, and advisors.

Interested in working on the best AI technology in the industry?

Do you need best-in-class AI performance in your edge product?

Contact us using the form below.
