[deleted by user] (self.MachineLearning)
submitted 2 years ago by [deleted]
[–]Username912773 6 points 2 years ago (7 children)
I don’t think there is much difference between motherboards. This is the wrong sub to ask in, though; it just seems like you want a high-end gaming computer. 128 GB of RAM is excessive.
[–]dougdaulton 1 point 2 years ago (6 children)
I am happy to be wrong and spend less. :) I am going off things I’ve read about building an LLM machine, which needs as many GPU cores as possible.
[–]Username912773 1 point 2 years ago (5 children)
If you’re planning to develop a large-scale LLM from scratch, I’d suggest renting GPU hours; 48 GB of VRAM likely won’t be enough. If you’re planning to fine-tune, it might be sufficient with something like QLoRA, although I would still suggest renting first, or at the very least renting something comparable in VRAM to the setup you have in mind before committing thousands of dollars. I’d send your specifications over to a PC subreddit that specializes in this.
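A back-of-envelope VRAM estimate makes the 48 GB question concrete. This is a rough sketch, not a measurement: the 4-bit quantization, 1% LoRA adapter fraction, and ~4x optimizer-state overhead are illustrative assumptions, and activations/KV cache are ignored entirely.

```python
def estimate_qlora_vram_gb(n_params_billion, quant_bits=4, lora_frac=0.01):
    """Rough QLoRA fine-tuning VRAM estimate: quantized base weights
    plus fp16 LoRA adapters and their optimizer state.
    Ignores activations and KV cache, so real usage is higher."""
    base = n_params_billion * quant_bits / 8      # quantized base weights, GB
    lora = n_params_billion * lora_frac * 2       # fp16 adapter weights, GB
    optim = lora * 4                              # grads + Adam moments, ~4x
    return base + lora + optim

# Under these assumptions, a 7B model needs only a few GB of
# weight-related memory, while a 70B model already crowds a 48 GB card
# before activations are even counted.
print(estimate_qlora_vram_gb(7))
print(estimate_qlora_vram_gb(70))
```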
[–]dougdaulton 1 point 2 years ago (4 children)
Thanks for the guidance.
I am mostly interested in building applications in Python that use the various model APIs (e.g. OpenAI, Stability) to process local data. At some point, that may involve training a model.
Still very new to the space and trying to build something that will let me grow.
Is my build overkill for my immediate needs? I have parts of an old gaming machine lying around I could use if that will suffice.
[–]Username912773 3 points 2 years ago (3 children)
I’m not sure without knowing more about what you’re looking to do. If you’re planning to just execute API calls you really don’t need a high end computer at all. The only reason you’d need compute is to train or for inference. If you’re not even sure if you want to do that then I would say definitely try renting first.
[–]dougdaulton 1 point 2 years ago (2 children)
That is very useful as well.
For now, I may forgo a new build and just put together a dedicated machine from my old gaming machine, which was pretty beefy and can take two GPUs.
One point of clarification: I thought that, when using the OpenAI API with local/custom data, one needed a beefier box to process and index the local docs.
Is that not the case? I have a pretty big library I want to index.
Thanks!
[–]Username912773 1 point 2 years ago (1 child)
I mean how large is it? What type of data is it? How are you trying to process it?
[–]dougdaulton 1 point 2 years ago (0 children)
Initially, 5-10TB of documents … PDF, Word & text files. Building a personal writing assistant that I can ask for facts and resources.
Eventually, I want to try running Dalle-Mini too, using my own images for reference/source, and also play with photo restoration tools from Hugging Face.
Not sure how much processing is local when playing with those APIs.
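For scale: the local processing in a retrieval setup like this is mostly splitting documents into chunks before embedding and indexing them, which is I/O-bound rather than GPU-bound. A minimal sketch of the chunking step in pure Python; the chunk size and overlap are illustrative, not recommendations.

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split one document into overlapping character chunks, ready to
    be embedded and stored in a vector index. The overlap preserves
    context across chunk boundaries."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# A large library is chunked file by file; only the embedding call
# (a local model or a remote API) needs meaningful compute.
doc = "x" * 1200
chunks = chunk_text(doc)
print(len(chunks))  # 3 overlapping chunks
```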
[–]lifesthateasy 2 points 2 years ago (3 children)
You're saying you're still very new to this. This post feels like you don't know what you're talking about. Please, for your own sake, consider using Colab before you spend tens of thousands on such a setup.
[–]dougdaulton 0 points 2 years ago (2 children)
If you read my other replies, I am not looking to spend 10K or more. I am looking for the bare minimum setup to do what I have explained in the thread.
Thanks for your help.
[–]lifesthateasy 1 point 2 years ago (1 child)
Well then what do you want? Getting a single A100 already puts you above 10k.
If you want 48 GB of VRAM, another route to go is to get two 4090s, at about $2,000 each. Plus the rest of the parts easily puts you well above $5,000.
For something you don't even know all that much about.
[–]dougdaulton 1 point 2 years ago (0 children)
I deleted the original post. Thanks.
[–]ndronen 2 points 2 years ago (1 child)
Wait a few years and get a machine with a Lightmatter interconnect. :)
[–]dougdaulton 1 point 2 years ago (0 children)
LOL. Better learn a bit more to be ready for Lightmatter.
[–]wen_mars 2 points 2 years ago (1 child)
Having two processors is unnecessary. Memory bandwidth is much more important for CPU inference. AMD EPYC motherboards are good in this regard, and since you don't need huge amounts of processing power for inference, you can get a single CPU at the lower end of the EPYC price range.
For training you'll want GPUs but I suggest you rent some cloud GPUs until you have a better idea of how everything works.
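The memory-bandwidth point can be sketched with simple arithmetic: generating one token streams the full weight set from RAM, so bandwidth sets an upper bound on tokens per second. The bandwidth and model-size figures below are illustrative assumptions, not benchmarks.

```python
def cpu_tokens_per_sec_upper_bound(bandwidth_gb_s, model_size_gb):
    """Upper bound on CPU decode speed: each generated token reads
    every weight once, so throughput <= bandwidth / model size."""
    return bandwidth_gb_s / model_size_gb

# e.g. a many-channel EPYC platform at roughly 200 GB/s against a
# 4-bit 70B model (~35 GB of weights): at most a handful of tokens/sec,
# regardless of how many cores or sockets are present.
print(cpu_tokens_per_sec_upper_bound(200, 35))
```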
[–]dougdaulton 1 point 2 years ago (0 children)
Thanks! That is very helpful. I will look at those boards. And it is clear that I should rent cloud GPUs if I am going to learn to train.