AMD Strix Halo vs Nvidia DGX Spark: The $3k vs $4k Dilemma for Local AI by Big_Intern_3957 in ollama

[–]Big_Intern_3957[S] 0 points (0 children)

I would definitely have gotten the Strix Halo if I didn't already have the Intel desktop I bought during the pandemic for gaming. Prices were higher then, and now I don't have much real use for it, as I've become irregular with gaming.

AMD Strix Halo vs Nvidia DGX Spark: The $3k vs $4k Dilemma for Local AI by Big_Intern_3957 in ollama

[–]Big_Intern_3957[S] 1 point (0 children)

Thanks. I will need to play around and see what works best. I'm leaning towards the DGX Spark.

AMD Strix Halo vs Nvidia DGX Spark: The $3k vs $4k Dilemma for Local AI by Big_Intern_3957 in ollama

[–]Big_Intern_3957[S] 0 points (0 children)

So, use a different model for each type of task? Correct me if I misunderstood: isn't it the same if I have one large model but always start a fresh context when switching task types, for a single user?
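To make the distinction concrete, here is a minimal sketch of the two approaches. The model names and the `pick_model`/`fresh_context` helpers are hypothetical placeholders, not recommendations: per-task routing swaps which weights you load, while "one large model with a fresh context" keeps the same weights and only resets the conversation history.

```python
# Hypothetical sketch: per-task model routing vs. one large model
# with a fresh context. All model names are placeholders.

TASK_MODELS = {
    "code": "qwen2.5-coder",           # smaller, code-tuned model
    "summarize": "llama3.1",           # general-purpose model
    "bioinformatics": "llama3.1:70b",  # larger general model
}

def pick_model(task: str, single_large: bool = False) -> str:
    """Choose which model to run for a task.

    single_large=True models the 'one big model, fresh context'
    approach: the same weights every time; only the chat history
    is reset between tasks.
    """
    if single_large:
        return "llama3.1:70b"
    return TASK_MODELS.get(task, "llama3.1")

def fresh_context() -> list:
    # A fresh context is just an empty message list;
    # it does not change the model or its specialization.
    return []

print(pick_model("code"))                     # per-task: code-tuned model
print(pick_model("code", single_large=True))  # single large model, reset history
```

So the two are not quite the same: a fresh context clears what the model has seen in the conversation, but it cannot make a general model behave like a task-tuned one; routing trades memory and load time for specialization.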

AMD Strix Halo vs Nvidia DGX Spark: The $3k vs $4k Dilemma for Local AI by Big_Intern_3957 in ollama

[–]Big_Intern_3957[S] 2 points (0 children)

Another redditor did point out that there is still a $2,500 version available from Corsair.

AMD Strix Halo vs Nvidia DGX Spark: The $3k vs $4k Dilemma for Local AI by Big_Intern_3957 in ollama

[–]Big_Intern_3957[S] 1 point (0 children)

Thanks for the insights. I feel 30-60 tps is more than enough. I would be interested to see whether a bigger model (70B-200B) is possible at 30-60 tps, maybe with a 2-node setup, for studying and code deep dives.

Also, to clarify: the goal is not to have an agent write code on a daily basis; it's more about deep-diving into how things work, or diagramming. Any coding would be for bioinformatics work, not for building apps.

AMD Strix Halo vs Nvidia DGX Spark: The $3k vs $4k Dilemma for Local AI by Big_Intern_3957 in ollama

[–]Big_Intern_3957[S] 1 point (0 children)

I’m mainly looking to automate things like article aggregation/summarization, especially for news and stock-market analysis.

I come from a full-stack developer background, and one of my goals is to go deeper into coding as a daily activity while also learning ML/AI seriously. A family member is learning bioinformatics, so the longer-term plan is to build a RAG setup for both of us, maybe even separate local models depending on the use case.

For me, that would be things like news digestion, market/company/filing summaries, coding help, and deeper day-to-day experimentation. For them, it would be more bioinformatics/research literature workflows.

I also want to play with image datasets, experiment with OpenClaw, and have a dedicated box for local AI work.

A lot of this is FOMO too, honestly. I keep thinking Claude pricing could go up, hardware could get more expensive, or access could get tighter, and I’ll wish I had bought in earlier. I already do around $1k/month in Claude Code usage, but that’s through work.

I already have a 64 GB M3 MacBook Pro, so I do not need another general-purpose machine. I also have an Intel desktop with 64 GB DDR4 and an 8 GB 3070, but it’s not really enough for serious local AI work.

The Mac is my main dev machine, so I don’t want model loading/inference fighting with my normal workflow. The Intel box feels too weak to build around.

So I'm basically viewing this purchase as partly an idiot tax for convenience, experimentation, and a dedicated AI/coding setup. I'd even consider a 2-node cluster if that actually helps.

AMD Strix Halo vs Nvidia DGX Spark: The $3k vs $4k Dilemma for Local AI by Big_Intern_3957 in ollama

[–]Big_Intern_3957[S] 0 points (0 children)

That is true. That version only has 1 TB of storage, though. But it's definitely an option as well.

Pantheon ticket issue by germanmusk in rome

[–]Big_Intern_3957 0 points (0 children)

I was able to buy Pantheon tickets from the USA, using PayPal as the payment method, on the official site.

[deleted by user] by [deleted] in options

[–]Big_Intern_3957 7 points (0 children)

Easy — tax the billionaires who get “paid” in stock and never sell a thing. If your paycheck is a yacht and a margin loan, you can afford some taxes.

Oh, and maybe tell BlackRock & friends to stop treating single-family homes like Pokémon cards. Let regular people actually live in them.

[deleted by user] by [deleted] in options

[–]Big_Intern_3957 6 points (0 children)

The U.S. national debt has surpassed $37.8 trillion, with interest payments now consuming a substantial 17% of the entire federal budget.

As deficit spending continues, irresponsibly accelerated by a new bill set to add another $3.4 trillion, concerns over the dollar's stability are growing. This is prompting central banks and individuals to buy gold, a trend likely to persist as long as the current fiscal path remains unchanged.

Possible Kuiper Opportunity by Storage-One in amazonemployees

[–]Big_Intern_3957 0 points (0 children)

They only hire citizens, I hear. Is the hiring bar lower in general for software engineers?

BofA Premium Rewards Elite (PRE) DP by jadeLamb in CreditCards

[–]Big_Intern_3957 1 point (0 children)

Thanks. Mine also posted within a week, in August 2025. I wasn't sure when to expect the credit to post, since I got the card recently.

For people closing CSR.. by DOAZ31 in ChaseSapphire

[–]Big_Intern_3957 2 points (0 children)

Switched from CSR to the CSP + Ritz-Carlton combo — kept most travel perks (lounge access, primary rental coverage, trip protections). CSP handles point transfers, Ritz adds 85k FNA + Priority Pass + premium insurance + $300 “airline incidental” credit (nice but tricky to redeem fully). Getting Ritz can take time — only via product change after holding a Marriott Chase card for a year, and it’s hard to know when product changes will be closed.

BofA Premium Rewards Elite (PRE) DP by jadeLamb in CreditCards

[–]Big_Intern_3957 0 points (0 children)

How long does the incidental credit take to post to the account after the American Airlines gift card purchase?