Qwen/Qwen2.5-Coder-14B-Instruct
unknown · 14B params · unknown
intelligence: see Artificial Analysis
checkpoint: Qwen/Qwen2.5-Coder-14B-Instruct-AWQ
commit: eb3172f06a6d
weights: 9.29 GiB
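A minimal launch sketch for this checkpoint with vLLM's OpenAI-compatible server (the port and memory fraction are assumptions, not values recorded in the runs below):

```shell
# Serve the AWQ checkpoint; 9.29 GiB of weights plus KV cache
# fit within the RTX 3090's 24 GiB.
vllm serve Qwen/Qwen2.5-Coder-14B-Instruct-AWQ \
  --quantization awq \
  --gpu-memory-utilization 0.90 \
  --port 8000
```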
All runs (5)
| Hardware | Backend | Shape | Conc. | Gen tok/s ↓ | TTFT (ms) | TPOT (ms) | Out tok | Total (s) | VRAM Δ (GiB) |
|---|---|---|---|---|---|---|---|---|---|
| GeForce RTX 3090 · 24 GiB | vLLM 0.21.0 (cuda) | chat | 1 | 42.6 | 58 | 22.9 | 93 | 2.06 | 0.000 |
| GeForce RTX 3090 · 24 GiB | vLLM 0.21.0 (cuda) | codegen | 1 | 41.1 | 97 | 24.3 | 554 | 13.49 | 0.000 |
| GeForce RTX 3090 · 24 GiB | vLLM 0.21.0 (cuda) | agent | 1 | 40.6 | 57 | 24.3 | 236 | 6.04 | 0.000 |
| GeForce RTX 3090 · 24 GiB | vLLM 0.21.0 (cuda) | rag | 1 | 39.9 | 55 | 24.6 | 68 | 1.76 | 0.000 |
| GeForce RTX 3090 · 24 GiB | vLLM 0.21.0 (cuda) | agent | 4 | 38.9 | 105 | 25.3 | 236 | 6.06 | 0.000 |
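The rows are internally consistent: generation throughput is roughly 1000 / TPOT, and total latency roughly TTFT + out_tok × TPOT. A quick check over the table (values copied from above; the 10–15% tolerances are assumptions to cover scheduler and streaming overhead, not thresholds the harness uses):

```python
# Each row: (shape, concurrency, gen_tok_s, ttft_ms, tpot_ms, out_tok, total_s)
rows = [
    ("chat",    1, 42.6,  58, 22.9,  93,  2.06),
    ("codegen", 1, 41.1,  97, 24.3, 554, 13.49),
    ("agent",   1, 40.6,  57, 24.3, 236,  6.04),
    ("rag",     1, 39.9,  55, 24.6,  68,  1.76),
    ("agent",   4, 38.9, 105, 25.3, 236,  6.06),
]

for shape, conc, tok_s, ttft, tpot, out, total in rows:
    implied_tok_s = 1000.0 / tpot                 # throughput implied by TPOT alone
    implied_total = (ttft + out * tpot) / 1000.0  # prefill + decode, in seconds
    assert abs(implied_tok_s - tok_s) / tok_s < 0.10, shape
    assert abs(implied_total - total) / total < 0.15, shape
```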
Environment
GeForce RTX 3090 · 24 GiB
cpu: AMD EPYC 7302P 16-Core Processor
gpu: NVIDIA GeForce RTX 3090
arch: NVIDIA
vram: 24 GiB (system 64.0 GiB)
power: 200 W / 450 W max (44% cap)
backend: vLLM 0.21.0 (cuda)
server: lemonade unknown
os: Ubuntu 24.04 LTS
kernel: 6.17.13-7-pve
driver: 590.48.01
python: 3.12.3
containerized: true
runs/cell: 5
warmups: 2
endpoint: /v1/chat/completions
streaming: true
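Since the runs stream from /v1/chat/completions, TTFT and TPOT fall out of per-token arrival times. A minimal sketch of that derivation (the helper `measure_stream` and its inputs are assumptions for illustration, not the harness's actual code; in a real client the timestamps would come from `time.perf_counter()` on each streamed chunk):

```python
def measure_stream(token_times, start_time):
    """Compute (ttft_s, tpot_s) from per-token arrival timestamps in seconds.

    token_times: monotonically increasing timestamps, one per generated token.
    start_time: timestamp at which the request was sent.
    """
    # Time to first token: gap between sending the request and the first chunk.
    ttft = token_times[0] - start_time
    # Time per output token: mean inter-token gap after the first token.
    if len(token_times) > 1:
        tpot = (token_times[-1] - token_times[0]) / (len(token_times) - 1)
    else:
        tpot = 0.0
    return ttft, tpot
```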