Hardware for LLM use
A collection of links and pages about hardware for local LLM execution.
Select the right hardware for your local LLM deployment with this online guide
Alex Ziskind - tests of PCs, laptops, GPUs, and other hardware capable of running LLMs
Digital Spaceport - reviews of various builds designed for LLM inference
Donato Capitella - practical, insightful tutorials on running LLMs locally
JetsonHacks - information about developing on NVIDIA Jetson Development Kits
Kolosal - LLM Memory Calculator - instantly estimate the RAM requirements of any GGUF model
LLM Inference VRAM & GPU Requirement Calculator - calculate how many GPUs you need to deploy LLMs
Miyconst - tests of various types of hardware capable of running LLMs
Strix Halo Wiki - a site gathering key information and practical guides for systems powered by AMD Ryzen AI MAX and MAX+ processors
ZLUDA - CUDA on non-NVIDIA GPUs
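The memory calculators linked above all rest on the same back-of-envelope arithmetic: weight memory is roughly parameter count times bytes per weight, plus some runtime headroom. A minimal sketch of that estimate (the 20% overhead factor is my own assumption for activations and KV cache, not a figure taken from any of these tools):

```python
def estimate_vram_gb(params_billion, bits_per_weight=16, overhead=1.2):
    """Rough VRAM (in GiB) needed to hold an LLM's weights.

    overhead=1.2 is an assumed ~20% allowance for activations,
    KV cache, and framework buffers; the calculators above model
    these components separately and more precisely.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# A 7B model needs roughly 15.6 GiB at FP16 but only ~3.9 GiB
# at 4-bit quantization, which is why quantized GGUF builds fit
# on consumer GPUs.
print(f"{estimate_vram_gb(7):.1f} GiB")
print(f"{estimate_vram_gb(7, bits_per_weight=4):.1f} GiB")
```

The useful takeaway is the ratio: halving the bits per weight halves the weight memory, so quantization choice dominates the hardware requirement.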
Tags:
hardware
ai
Last modified 07 May 2026