LLM Performance and PCIe Lanes: Key Considerations
Thinking of installing a second GPU for LLMs?
How do PCIe lanes affect LLM performance? It depends on the task: for training and multi-GPU inference, the performance drop is significant.
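To get a feel for the numbers, here is a minimal back-of-the-envelope sketch in Python, not a benchmark: the bandwidth figures are nominal per-direction PCIe rates, and the 17 GB payload is just an example model size.

```python
# Rough transfer-time estimate for different PCIe link widths.
# Nominal per-direction bandwidths in GB/s; real-world throughput
# is lower due to protocol overhead.
LINKS_GBPS = {
    "PCIe 3.0 x16": 15.8,
    "PCIe 3.0 x4":  3.9,
    "PCIe 4.0 x16": 31.5,
    "PCIe 4.0 x4":  7.9,
}

payload_gb = 17.0  # e.g. a 17 GB model being copied into VRAM

for link, gbps in LINKS_GBPS.items():
    print(f"{link}: {payload_gb / gbps:.1f} s to move {payload_gb} GB")
```

A one-time model load is tolerable even on a narrow link, but training and multi-GPU inference keep shuffling tensors across the bus, so a x4 link gets hit over and over.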
And why have I been seeing this BSOD...
I was hit hard by this issue. If your BSOD looks like mine, you should investigate and test your PC. In my case the cause was Intel's CPU degradation issue in 13th and 14th generation processors.
Ollama on Intel CPU: Efficient vs Performance cores
I’ve got a theory to test: would utilising ALL cores on an Intel CPU raise the speed of LLMs? It bugs me that the new gemma3 27B model (gemma3:27b, 17GB on ollama) doesn't fit into the 16GB of VRAM on my GPU and partially runs on the CPU.
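One way to test the theory is Ollama's documented `num_thread` option, which caps the CPU threads used for generation. A minimal sketch, assuming a local Ollama server; the thread counts below are placeholders for "P-cores only" vs "all cores" on your particular CPU:

```python
import requests

# Compare generation speed at different CPU thread counts.
# num_thread is Ollama's documented option for CPU threads.
URL = "http://localhost:11434/api/generate"

for threads in (8, 16, 24):  # placeholder counts, e.g. P-cores vs all cores
    r = requests.post(URL, json={
        "model": "gemma3:27b",
        "prompt": "Explain PCIe lanes in one sentence.",
        "stream": False,
        "options": {"num_thread": threads},
    }).json()
    tok_per_s = r["eval_count"] / (r["eval_duration"] / 1e9)
    print(f"num_thread={threads}: {tok_per_s:.1f} tokens/s")
```

The non-streaming response reports `eval_count` (tokens generated) and `eval_duration` (nanoseconds), which is enough to compare runs.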
AI requires a lot of power...
In the midst of the modern world’s turmoil, here I’m comparing the tech specs of different cards suitable for AI tasks (Deep Learning, Object Detection, and LLMs). They are all incredibly expensive, though.
Configuring ollama for parallel request execution.
When the Ollama server receives two requests at the same time, its behavior depends on its configuration and available system resources.
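You can observe this directly by firing two requests at once. A small sketch, assuming a local server: with `OLLAMA_NUM_PARALLEL=2` set in the server's environment, both requests should finish in roughly the same time; with the default of 1, the second waits for the first.

```python
import threading
import time
import requests

# Fire two requests simultaneously to see whether the server
# interleaves them or queues the second one.
URL = "http://localhost:11434/api/generate"

def ask(tag: str) -> None:
    t0 = time.time()
    requests.post(URL, json={
        "model": "llama3",
        "prompt": "Count to ten.",
        "stream": False,
    })
    print(f"{tag} finished after {time.time() - t0:.1f} s")

threads = [threading.Thread(target=ask, args=(f"req{i}",)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```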
It's much simpler compared to old printer drivers
Installing the ET-8500 on Windows is well documented in the manual. The ET-8500 Linux driver installation is simple, but not trivial.
Let's test the LLMs' speed on GPU vs CPU
Comparing the prediction speed of several LLMs: llama3 (Meta/Facebook), phi3 (Microsoft), gemma (Google), and mistral (open source) on CPU and GPU.
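A minimal sketch of the measurement, assuming a local Ollama server: `num_gpu` is Ollama's layer-offload option, so 0 forces CPU-only inference, while a large value offloads as many layers as fit; the prompt is just a placeholder.

```python
import requests

# Measure tokens/s for each model with and without GPU offload.
URL = "http://localhost:11434/api/generate"
MODELS = ["llama3", "phi3", "gemma", "mistral"]

def speed(model: str, gpu_layers: int) -> float:
    r = requests.post(URL, json={
        "model": model,
        "prompt": "Write a haiku about benchmarks.",
        "stream": False,
        "options": {"num_gpu": gpu_layers},  # 0 = CPU only
    }).json()
    # eval_duration is reported in nanoseconds
    return r["eval_count"] / (r["eval_duration"] / 1e9)

for m in MODELS:
    print(f"{m}: GPU {speed(m, 99):.1f} tok/s, CPU {speed(m, 0):.1f} tok/s")
```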