LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI’s LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low, which makes it well suited to phones, tablets, and laptops.
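The gap between total and active parameters comes from sparse expert routing: a gating network picks a few experts per token and the rest stay idle. The sketch below is a generic top-k routed MoE layer in PyTorch, not Liquid AI's actual architecture; all sizes (d_model, d_ff, num_experts, top_k) are illustrative toy values.

```python
# Minimal sketch of sparse MoE routing, assuming a generic top-k gate.
# Illustrates why active parameters per token are far fewer than total.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=32, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        gate_logits = self.router(x)
        weights, idx = torch.topk(gate_logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # renormalize over chosen experts
        out = torch.zeros_like(x)
        # Only the top-k experts run for each token; the others are skipped,
        # so compute scales with active parameters, not total parameters.
        for slot in range(self.top_k):
            for e in idx[:, slot].unique().tolist():
                mask = idx[:, slot] == e
                out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

layer = SparseMoELayer()
tokens = torch.randn(8, 512)
print(layer(tokens).shape)  # torch.Size([8, 512])
```

With 32 experts and top_k=4, each token touches roughly an eighth of the expert weights, mirroring the ~1.5B-of-8.3B activation pattern the model card describes.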
Recent activity on LFM2-8B-A1B
Total usage per day on OpenRouter
Completion tokens: 106M
Prompt tokens: 62.3M
Prompt tokens measure input size. Reasoning tokens show internal thinking before a response. Completion tokens reflect total output length.
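These counts come straight from the usage object that each OpenRouter chat completion returns. Below is a hedged Python sketch; the model slug is an assumption based on OpenRouter's naming conventions, so verify the exact identifier on the model page.

```python
# Sketch: read prompt/completion token counts from an OpenRouter response.
# The model slug below is assumed, not confirmed by the source.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "liquid/lfm2-8b-a1b",  # assumed slug; check the model page
        "messages": [
            {"role": "user", "content": "Summarize MoE routing in one sentence."}
        ],
    },
    timeout=60,
)
usage = resp.json()["usage"]
# prompt_tokens count the input; completion_tokens count the generated output.
print(usage["prompt_tokens"], usage["completion_tokens"])
```

Summed across all requests in a day, these per-response fields are what the daily prompt and completion totals above aggregate.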