Quick Start
openclaw skills install llamacpp-bench
Run llama.cpp benchmarks on GGUF models to measure prompt-processing (pp) and token-generation (tg) performance. Use this skill when you want to benchmark LLM models, compare model performance, test inference speed, or run llama-bench on GGUF files. Supports Vulkan, CUDA, ROCm, and CPU backends.
Or ask OpenClaw: "Install the llama.cpp Benchmark skill"
Install and run the llama.cpp Benchmark skill instantly; no setup required.
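For reference, the skill wraps llama.cpp's own llama-bench tool, which you can also invoke directly once llama.cpp is built. A minimal sketch, assuming a built llama-bench binary on your PATH; the model path is a placeholder you should replace with your own GGUF file:

```shell
# Hypothetical model path; point this at any GGUF file you have.
MODEL=models/my-model.Q4_K_M.gguf

# -p 512: prompt-processing benchmark with a 512-token prompt (pp512)
# -n 128: token-generation benchmark producing 128 tokens (tg128)
# -ngl 99: offload all layers to the GPU on Vulkan/CUDA/ROCm builds;
#          set -ngl 0 to force CPU-only inference
llama-bench -m "$MODEL" -p 512 -n 128 -ngl 99
```

llama-bench prints a results table with tokens-per-second figures for each pp/tg configuration, which is what the skill reports when comparing models or backends.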