CHANGING THE PHYSICS OF AI

Industry-leading low power AI

 

Providing the highest AI performance

at the industry's lowest power

Analog Inference is pioneering deep sub-threshold analog in-memory computation

Imagine AI acceleration at the edge unconstrained by power or neural net size. Analog in-memory compute is the next wave of AI!

AI power consumption inhibits adoption

Our unique technology performs data-center grade AI workloads at orders of magnitude lower power

Neural networks are constrained by size at the edge

We enable server-class networks to be deployed with extremely low latency in cost-effective edge devices

 

Industrial Vision

Run complex networks at full resolution with ultra-low latency & no active cooling

Smart City and Retail

Run full resolution object detection, recognition & behavior networks simultaneously on a single device

Edge Servers

Run 200,000 images per second on a single PCIe card

Intelligent Always-On

Ultra-low power inference for always-on audio and vision applications

 

ABOUT US

Analog Inference is building a world-changing line of AI inference accelerators using our novel analog in-memory compute technology. Our solutions provide orders of magnitude more performance per watt than any other technology, and are targeted at markets ranging from edge servers all the way to mobile devices.

At Analog Inference we are passionate about bringing the power of artificial intelligence to equipment from the cloud to the edge. Learn more about our management team, investors, and advisors.

Interested in working on the best AI technology in the industry?

Do you need best-in-class AI performance in your edge product?

Contact us using the form below!

 
2350 Mission College Blvd, Suite 300 Santa Clara, CA 95054

We'd like to hear from you. If you would like more information or would like to add your skills to our team, drop us a line.

Contact Us