LFM2 1.2B
Q4_K_M · 1.2B params · GGUF
intelligence: see Artificial Analysis
checkpoint: LiquidAI/LFM2-1.2B-GGUF:Q4_K_M

All runs (15)
| Hardware | Backend | Shape | Conc. | Gen tok/s ↓ | TTFT | TPOT (ms) | Out tok | Total time | VRAM Δ |
|---|---|---|---|---|---|---|---|---|---|
| GeForce RTX 5070 · 11.94 GiB | llama.cpp b9174 (cuda) | codegen | 1 | 529.6 | 12ms | 1.9 | 536 | 1.03s | 0.000 GiB |
| GeForce RTX 5070 · 11.94 GiB | llama.cpp b9174 (cuda) | agent | 1 | 513.2 | 12ms | 1.9 | 500 | 964ms | 0.000 GiB |
| GeForce RTX 5070 · 11.94 GiB | llama.cpp b9174 (cuda) | chat | 1 | 508.7 | 11ms | 1.9 | 100 | 196ms | 0.000 GiB |
| GeForce RTX 5070 · 11.94 GiB | llama.cpp b9174 (cuda) | rag | 1 | 485.5 | 27ms | 1.9 | 76 | 209ms | 0.000 GiB |
| GeForce RTX 3090 · 24 GiB | llama.cpp 59778f0 (cuda) | codegen | 1 | 471.0 | 22ms | 2.1 | 733 | 1.57s | 0.000 GiB |
| GeForce RTX 3090 · 24 GiB | llama.cpp 59778f0 (cuda) | chat | 1 | 458.1 | 14ms | 2.1 | 100 | 211ms | 0.000 GiB |
| GeForce RTX 3090 · 24 GiB | llama.cpp 59778f0 (cuda) | agent | 1 | 446.9 | 48ms | 2.1 | 500 | 1.12s | 0.000 GiB |
| GeForce RTX 3090 · 24 GiB | llama.cpp 59778f0 (cuda) | rag | 1 | 426.4 | 37ms | 2.1 | 76 | 225ms | 0.000 GiB |
| GeForce RTX 5070 · 11.94 GiB | llama.cpp b9174 (cuda) | agent | 4 | 223.8 | 352ms | 4.0 | 500 | 2.17s | 0.000 GiB |
| GeForce RTX 3090 · 24 GiB | llama.cpp 59778f0 (cuda) | agent | 4 | 214.0 | 328ms | 4.0 | 497 | 2.07s | 0.000 GiB |
| Strix Halo · Radeon 8060S · 128 GiB unified (96 GiB VRAM) | llama.cpp b8940 (rocm) | codegen | 1 | 208.8 | 24ms | 4.7 | 637 | 3.05s | 0.002 GiB |
| Strix Halo · Radeon 8060S · 128 GiB unified (96 GiB VRAM) | llama.cpp b8940 (rocm) | chat | 1 | 204.5 | 18ms | 4.7 | 100 | 488ms | 0.000 GiB |
| Strix Halo · Radeon 8060S · 128 GiB unified (96 GiB VRAM) | llama.cpp b8940 (rocm) | rag | 1 | 194.7 | 16ms | 4.8 | 131 | 640ms | 0.001 GiB |
| Strix Halo · Radeon 8060S · 128 GiB unified (96 GiB VRAM) | llama.cpp b8940 (rocm) | agent | 1 | 194.6 | 77ms | 5.0 | 434 | 2.25s | 0.001 GiB |
| Strix Halo · Radeon 8060S · 128 GiB unified (96 GiB VRAM) | llama.cpp b8940 (rocm) | agent | 4 | 109.4 | 404ms | 8.2 | 435 | 4.00s | -0.005 GiB |
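The latency columns above are internally consistent: generation throughput is roughly 1000 / TPOT, and total request time is roughly TTFT + out tok × TPOT (the small mismatch comes from TPOT being rounded to one decimal in the table). A quick check in Python, using the RTX 5070 codegen row:

```python
# Sanity-check the relationship between the table's latency metrics,
# using the RTX 5070 codegen row (single stream).
ttft_ms = 12      # time to first token
tpot_ms = 1.9     # time per output token
out_tok = 536     # generated tokens

# Throughput implied by per-token latency: ~1000 / TPOT.
implied_tps = 1000 / tpot_ms
print(f"implied tok/s: {implied_tps:.1f}")  # ~526, close to the reported 529.6

# Total request time: prefill (TTFT) plus decode (out_tok * TPOT).
implied_total_s = (ttft_ms + out_tok * tpot_ms) / 1000
print(f"implied total: {implied_total_s:.2f}s")  # ~1.03 s, matching the table
```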
Environment
GeForce RTX 3090 · 24 GiB
- cpu: AMD EPYC 7302P 16-Core Processor
- gpu: NVIDIA GeForce RTX 3090
- arch: NVIDIA
- vram: 24 GiB (system 64.0 GiB)
- power: 200 W / 450 W max (44% cap)
- backend: llama.cpp 59778f0 (cuda)
- server: lemonade unknown
- os: Ubuntu 24.04 LTS
- kernel: 6.17.13-7-pve
- driver: 590.48.01
- python: 3.12.3
- containerized: true
- runs/cell: 5
- warmups: 2
- endpoint: /v1/chat/completions
- streaming: true
Strix Halo · Radeon 8060S · 128 GiB unified (96 GiB VRAM)
- cpu: AMD RYZEN AI MAX+ 395 w/ Radeon 8060S
- gpu: AMD Radeon 8060S
- arch: Strix Halo (gfx1151)
- vram: 96 GiB (system 31.1 GiB, unified)
- backend: llama.cpp b8940 (rocm)
- server: lemonade 10.4.0
- os: Ubuntu 24.04.4 LTS
- kernel: 7.0.2-2-pve
- python: 3.12.3
- containerized: true
- runs/cell: 5
- warmups: 2
- endpoint: /v1/chat/completions
- streaming: true
GeForce RTX 5070 · 11.94 GiB
- cpu: AMD Ryzen 9 7900 12-Core Processor
- gpu: NVIDIA GeForce RTX 5070
- arch: NVIDIA
- vram: 11.94 GiB (system 30.4 GiB)
- power: 250 W / 300 W max (83% cap)
- backend: llama.cpp b9174 (cuda)
- server: lemonade unknown
- os: CachyOS
- kernel: 7.0.0-1-cachyos
- driver: 595.58.03
- python: 3.14.4
- containerized: false
- runs/cell: 5
- warmups: 2
- endpoint: /v1/chat/completions
- streaming: true
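All three environments were measured against a streaming /v1/chat/completions endpoint, so TTFT and TPOT fall out directly from per-token arrival timestamps: TTFT is the gap from request start to the first token, TPOT the mean inter-token gap after that. A minimal sketch of that reduction (the helper name and the synthetic timestamps are ours, not part of the lemonade harness):

```python
def ttft_tpot(t_start: float, token_times: list[float]) -> tuple[float, float]:
    """Reduce streamed token arrival times (seconds) to TTFT and TPOT in ms.

    TTFT = first token arrival - request start;
    TPOT = mean inter-token gap over the remaining tokens.
    """
    ttft = (token_times[0] - t_start) * 1000
    if len(token_times) > 1:
        tpot = (token_times[-1] - token_times[0]) * 1000 / (len(token_times) - 1)
    else:
        tpot = 0.0
    return ttft, tpot

# Synthetic stream: first token 12 ms after start, then one token every 1.9 ms,
# mirroring the RTX 5070 codegen row.
t0 = 0.0
times = [0.012 + i * 0.0019 for i in range(536)]
ttft, tpot = ttft_tpot(t0, times)
print(f"TTFT {ttft:.0f}ms, TPOT {tpot:.1f}ms")  # TTFT 12ms, TPOT 1.9ms
```

In a real run the timestamps would be captured as each server-sent-event chunk arrives on the stream.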