LFM2 8B-A1B
Q4_K_M · 8B params · GGUF
Intelligence: see Artificial Analysis
Checkpoint: LiquidAI/LFM2-8B-A1B-GGUF:Q4_K_M

All runs (15)
| Hardware | Backend | Workload | Conc. | Gen tok/s ↓ | TTFT | TPOT (ms) | Output tok | Total time | VRAM Δ |
|---|---|---|---|---|---|---|---|---|---|
| GeForce RTX 5070 · 11.94 GiB | llama.cpp b9174 (cuda) | codegen | 1 | 364.9 | 33ms | 2.7 | 791 | 2.17s | 0.000 GiB |
| GeForce RTX 5070 · 11.94 GiB | llama.cpp b9174 (cuda) | agent | 1 | 355.0 | 15ms | 2.8 | 426 | 1.20s | 0.000 GiB |
| GeForce RTX 5070 · 11.94 GiB | llama.cpp b9174 (cuda) | chat | 1 | 336.1 | 28ms | 2.7 | 100 | 295ms | 0.000 GiB |
| GeForce RTX 3090 · 24 GiB | llama.cpp 59778f0 (cuda) | codegen | 1 | 332.9 | 39ms | 2.9 | 883 | 2.62s | 0.000 GiB |
| GeForce RTX 5070 · 11.94 GiB | llama.cpp b9174 (cuda) | rag | 1 | 319.2 | 52ms | 2.7 | 117 | 370ms | 0.000 GiB |
| GeForce RTX 3090 · 24 GiB | llama.cpp 59778f0 (cuda) | chat | 1 | 318.7 | 24ms | 2.8 | 100 | 302ms | 0.000 GiB |
| GeForce RTX 3090 · 24 GiB | llama.cpp 59778f0 (cuda) | agent | 1 | 315.9 | 73ms | 3.0 | 434 | 1.42s | 0.000 GiB |
| GeForce RTX 3090 · 24 GiB | llama.cpp 59778f0 (cuda) | rag | 1 | 278.6 | 62ms | 2.9 | 111 | 374ms | 0.000 GiB |
| Strix Halo · Radeon 8060S · 128 GiB unified (96 GiB VRAM) | llama.cpp b8940 (rocm) | codegen | 1 | 151.4 | 75ms | 6.5 | 843 | 5.59s | 0.004 GiB |
| Strix Halo · Radeon 8060S · 128 GiB unified (96 GiB VRAM) | llama.cpp b8940 (rocm) | agent | 1 | 144.7 | 21ms | 6.7 | 347 | 2.41s | 0.001 GiB |
| Strix Halo · Radeon 8060S · 128 GiB unified (96 GiB VRAM) | llama.cpp b8940 (rocm) | chat | 1 | 144.6 | 49ms | 6.5 | 100 | 691ms | 0.001 GiB |
| Strix Halo · Radeon 8060S · 128 GiB unified (96 GiB VRAM) | llama.cpp b8940 (rocm) | rag | 1 | 142.6 | 25ms | 6.6 | 81 | 555ms | 0.001 GiB |
| GeForce RTX 5070 · 11.94 GiB | llama.cpp b9174 (cuda) | agent | 4 | 119.4 | 289ms | 7.8 | 362 | 2.93s | 0.000 GiB |
| GeForce RTX 3090 · 24 GiB | llama.cpp 59778f0 (cuda) | agent | 4 | 116.0 | 339ms | 8.0 | 357 | 3.17s | 0.000 GiB |
| Strix Halo · Radeon 8060S · 128 GiB unified (96 GiB VRAM) | llama.cpp b8940 (rocm) | agent | 4 | 61.2 | 506ms | 15.7 | 359 | 5.76s | -0.009 GiB |
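The latency columns are related: for the single-stream (Conc. 1) rows, generation throughput is roughly the reciprocal of TPOT (e.g. 1000 / 2.7 ms ≈ 370 tok/s, close to the 364.9 measured). A minimal sketch of how such metrics can be derived from per-token arrival timestamps of a streaming response — the harness's exact accounting is not shown here, and `decode_metrics` is a hypothetical helper, not part of llama.cpp or lemonade:

```python
def decode_metrics(request_start: float, token_times: list[float]) -> dict:
    """Derive streaming decode metrics from per-token arrival timestamps.

    TTFT  = time from request to first token.
    TPOT  = mean inter-token latency over the decode phase (excludes TTFT).
    Gen tok/s = tokens per second during decode, i.e. ~1/TPOT single-stream.
    """
    n = len(token_times)
    decode_span = token_times[-1] - token_times[0]
    return {
        "ttft_s": token_times[0] - request_start,
        "tpot_s": decode_span / (n - 1),
        "gen_tok_s": (n - 1) / decode_span,
        "total_s": token_times[-1] - request_start,
    }

# Synthetic trace: first token 33 ms after the request, then one token
# every 2.7 ms, for 791 tokens (mirrors the RTX 5070 codegen row).
times = [0.033 + 0.0027 * i for i in range(791)]
m = decode_metrics(0.0, times)
```

With real data the timestamps would come from iterating the server-sent-event chunks of a streaming `/v1/chat/completions` request, recording `time.monotonic()` as each token arrives.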
Environment

GeForce RTX 3090 · 24 GiB

cpu: AMD EPYC 7302P 16-Core Processor
gpu: NVIDIA GeForce RTX 3090
arch: NVIDIA
vram: 24 GiB (system 64.0 GiB)
power: 200 W / 450 W max (44% cap)
backend: llama.cpp 59778f0 (cuda)
server: lemonade unknown
os: Ubuntu 24.04 LTS
kernel: 6.17.13-7-pve
driver: 590.48.01
python: 3.12.3
containerized: true
runs/cell: 5
warmups: 2
endpoint: /v1/chat/completions
streaming: true
Strix Halo · Radeon 8060S · 128 GiB unified (96 GiB VRAM)

cpu: AMD RYZEN AI MAX+ 395 w/ Radeon 8060S
gpu: AMD Radeon 8060S
arch: Strix Halo (gfx1151)
vram: 96 GiB (system 31.1 GiB, unified)
backend: llama.cpp b8940 (rocm)
server: lemonade 10.4.0
os: Ubuntu 24.04.4 LTS
kernel: 7.0.2-2-pve
python: 3.12.3
containerized: true
runs/cell: 5
warmups: 2
endpoint: /v1/chat/completions
streaming: true
GeForce RTX 5070 · 11.94 GiB

cpu: AMD Ryzen 9 7900 12-Core Processor
gpu: NVIDIA GeForce RTX 5070
arch: NVIDIA
vram: 11.94 GiB (system 30.4 GiB)
power: 250 W / 300 W max (83% cap)
backend: llama.cpp b9174 (cuda)
server: lemonade unknown
os: CachyOS
kernel: 7.0.0-1-cachyos
driver: 595.58.03
python: 3.14.4
containerized: false
runs/cell: 5
warmups: 2
endpoint: /v1/chat/completions
streaming: true