Hi guys,
I got 2 questions:
1. How can I change the folder where Ollama downloads / stores models? Currently it's the default path on C:, I guess; I want to change it to D: (see the OLLAMA_MODELS example below my specs).
2. Will my PC be able to run it (ollama run deepseek-r1:671b)? My specs:
Processor: Intel(R) Core(TM) i7-14700KF, 3.40 GHz
Installed RAM: 48.0 GB (47.8 GB usable)
GPU: RTX 4080
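For question 1, pointing the OLLAMA_MODELS environment variable at the new location and restarting Ollama should do it, as far as I can tell. The path below is just an example, and models already on C: would have to be moved over manually:

:: Quit Ollama from the system tray first, then in a Command Prompt:
setx OLLAMA_MODELS "D:\Ollama\Models"

:: Restart Ollama; new pulls should now land on D:
ollama pull deepseek-r1:671b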
Seems like it works, but it's far from straightforward, and you really need a beast of a local setup to run this well.
I have 8 A100s, each with 40GB of video memory, and 1TB of RAM. How can I deploy deepseek-r1:671b locally? I cannot load the model using video memory alone. Is there a parameter I can configure in Ollama so that the model also loads into my 1TB of RAM? Thanks
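Not 100% sure this is what you need, but as far as I know Ollama's num_gpu option controls how many layers get offloaded to VRAM, and whatever doesn't fit stays in system RAM. A rough sketch (the layer count and the name of the created variant are just placeholders you'd have to tune for 8x40GB):

# Modelfile (hypothetical): send only part of the layers to the GPUs,
# so the rest of the model sits in the 1TB of system RAM
FROM deepseek-r1:671b
PARAMETER num_gpu 40

# build and run the variant
ollama create deepseek-r1-671b-ram -f Modelfile
ollama run deepseek-r1-671b-ram

You can also try /set parameter num_gpu <n> inside an interactive ollama run session; I haven't tested either at 671b scale.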
Hello everyone,
Two days ago, I turned night into day, and in the end, I managed to get R1 running on my local PC. Yesterday, I uploaded a video on YouTube showing how I did it: https://www.youtube.com/watch?v=O3Lk3xSkAdk
I don't post here often, so I'm not sure if sharing the link is okay—I hope it is.
The video is in German, but with subtitles, everyone should be able to understand it.
Be careful if you want to try this yourself! ;)
Update:
For those who don't feel like watching the video: the "trick" was using Windows pagefiles. I set up three of them on three different SSDs, which gave me around 750GB of virtual memory in total.
Loading the model and answering a question took my PC about 90 minutes.
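In case anyone wants to script the pagefile setup instead of clicking through the System Properties dialogs, something along these lines should work (drive letters and sizes are just examples, roughly 250GB per SSD; wmic is deprecated, so double-check it still exists on your Windows build):

:: turn off automatic pagefile management
wmic computersystem where name="%COMPUTERNAME%" set AutomaticManagedPagefile=False

:: one pagefile per SSD; sizes are in MB (256000 MB is about 250GB)
wmic pagefileset create name="D:\pagefile.sys"
wmic pagefileset where name="D:\\pagefile.sys" set InitialSize=256000,MaximumSize=256000
wmic pagefileset create name="E:\pagefile.sys"
wmic pagefileset where name="E:\\pagefile.sys" set InitialSize=256000,MaximumSize=256000
wmic pagefileset create name="F:\pagefile.sys"
wmic pagefileset where name="F:\\pagefile.sys" set InitialSize=256000,MaximumSize=256000

:: reboot afterwards for the new pagefiles to take effect

Splitting the virtual memory across three SSDs is what spreads the swapping load, which is presumably why this was bearable at all.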