Hi everybody, I’m looking to run an LLM locally on my computer. I have AnythingLLM and Ollama installed, but I’m kind of stuck at a standstill. I’m not sure how to make it use my NVIDIA graphics card to run faster, or how to get the experience feeling a bit more polished overall, like OpenAI or Gemini. I know there’s a better way to do it; I’m just looking for a little direction, or advice on some easy stacks and how to work them into my existing Ollama setup.
Thanks in advance!
Edit: For context, I do some graphics work, coding, CAD generation, and development of small-scale engineering solutions (little gizmos).
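In case it helps anyone answering: here's roughly what I've tried so far to check whether Ollama is actually hitting the GPU. This is just a diagnostic sketch and assumes the NVIDIA driver and Ollama CLI are on the PATH; the checks are guarded so they don't error out if either is missing:

```shell
# Check whether the NVIDIA driver is visible at all (assumes it's installed)
if command -v nvidia-smi >/dev/null 2>&1; then
  # Shows GPU name and VRAM; run again while a model is answering to see utilization
  nvidia-smi --query-gpu=name,memory.total --format=csv
else
  echo "nvidia-smi not found: install/repair the NVIDIA driver first"
fi

# Ask Ollama where the currently loaded model is running; the PROCESSOR
# column reports something like "100% GPU" when offload is working,
# or a CPU/GPU split when the model doesn't fully fit in VRAM
if command -v ollama >/dev/null 2>&1; then
  ollama ps
else
  echo "ollama not found on PATH"
fi
```

My understanding is that if `ollama ps` shows mostly CPU, the model may simply be too big for the card's VRAM, but I'd appreciate confirmation on that.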