Highest AI performance at the industry's lowest power
Eliminating the painful choice between performance and energy efficiency
Imagine AI acceleration unconstrained by power or neural net size.
Analog in-memory compute is rapidly emerging as the next wave of AI.
Analog Inference is pioneering deep sub-threshold analog in-memory computing
LLM & Generative AI
Single- or multi-modal Generative AI apps without extreme hardware costs or power penalties
- Recommendation Engines
- Sentiment Analysis
- Content Generation
- Translation & Localization
- Virtual Assistants
- Education
AI-enabled Computer Vision
Multi-stream, high-definition, low-latency real-time inference for Edge applications & sectors
- Anomaly Detection (manufacturing)
- Safety & Security
- Retail Applications
- Metro & Campus
Neural networks are constrained by size across all applications
We enable deployment of server-class networks with extremely low latency in cost-effective devices, opening up an entirely new range of applications.
AI application power consumption inhibits adoption
Our unique technology performs data-center-grade AI workloads at orders of magnitude lower power.
ABOUT US
Analog Inference is building a world-changing line of AI inference accelerators using our novel analog in-memory compute technology. Our solutions deliver orders of magnitude more performance per watt than alternatives and solve AI inference deployability problems from the Edge to the Data Center.
Meet our Investors & Advisors. We are supported by several true visionaries in the tech industry and a strong syndicate of long-view investors. It's a team we are always looking to expand.