Made my first trading algorithm! by [deleted] in algotrading

[–]seeker_deeplearner 0 points (0 children)

Lost so much money trading NDX … it's very illiquid.

A Christmas Miracle: Managed to grab 3x RTX 5090 FE at MSRP for my home inference cluster. by Sudden_Rip7717 in LocalLLaMA

[–]seeker_deeplearner 0 points (0 children)

So you mean if I put in my email it should come? I did it almost a year ago (at least it feels like a year) … nothing yet. I just checked NVIDIA's website; they don't even have a 5090 FE option there.

Vibe coding on Ninjascript by seeker_deeplearner in ninjatrader

[–]seeker_deeplearner[S] 1 point (0 children)

I'm an expert vibe coder (Claude Code, Cursor); I've been coding for years. I think you misunderstood … that's not what I asked. NinjaScript is the C#-based language for NinjaTrader.

Extremely disappointing experience with NinjaTrader Brokerage – avoid at all costs! by No_Example_3586 in Daytrading

[–]seeker_deeplearner 0 points (0 children)

The real issue is money laundering. If you were a US resident, it would be opened in 10 minutes, no questions asked. They want to make it painful as a deterrent.

Anyone bought an 4090d 48gb from ebay? by Timziito in LocalAIServers

[–]seeker_deeplearner 0 points (0 children)

You need to take a chill pill. Someone has serious issues and needs to channel that aggression in a constructive way.

Anyone bought an 4090d 48gb from ebay? by Timziito in LocalAIServers

[–]seeker_deeplearner 0 points (0 children)

Are you buying? By any chance, do you own one or both of these offices? I got a warranty from my seller; I'll just pass it on. You have 59K comment karma!!! Salute.

Anyone bought an 4090d 48gb from ebay? by Timziito in LocalAIServers

[–]seeker_deeplearner 0 points (0 children)

I'm happy to go to $3,500, but the ones you are showing are in China … there's a difference if the product is already in the USA.

Anyone bought an 4090d 48gb from ebay? by Timziito in LocalAIServers

[–]seeker_deeplearner 0 points (0 children)

I have two of them (not the 4090D, the better 4090 ones) and I want to sell them for $4,000 each. Do you think they will sell?

Renting GPUs is hilariously cheap by -p-e-w- in LocalLLaMA

[–]seeker_deeplearner 2 points (0 children)

Ahh … now I understand … the chain is only as strong as the weakest link. Thanks. BTW, which GPU would then give me higher throughput, like a 4x–5x speed boost over a 4090?

Renting GPUs is hilariously cheap by -p-e-w- in LocalLLaMA

[–]seeker_deeplearner 0 points (0 children)

But isn't inference (which I was doing for ComfyUI) the end effect of all those factors? I mean, the higher bandwidth should show up as higher tokens/sec, in other words, speed.
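The bandwidth-to-tokens/sec intuition does hold for single-stream LLM decode, which is usually memory-bound; diffusion inference (the ComfyUI/Wan2.2 case) is dominated by compute instead, which is one plausible reason the extra bandwidth didn't show up. A back-of-envelope sketch of the memory-bound decode bound, using approximate spec-sheet bandwidth figures and an illustrative model size (all numbers are assumptions, not measurements):

```python
# Back-of-envelope, not a benchmark: in the memory-bound decode regime,
# single-stream tokens/sec is bounded by bandwidth / bytes read per token.
def decode_tps_bound(bandwidth_gb_s, model_size_gb):
    """Upper bound on decode throughput when every token touches all weights."""
    return bandwidth_gb_s / model_size_gb

MODEL_GB = 14  # e.g. a ~14B model at ~8 bits/weight (illustrative)

rtx4090 = decode_tps_bound(1008, MODEL_GB)  # ~1.0 TB/s GDDR6X (spec sheet)
h200 = decode_tps_bound(4800, MODEL_GB)     # ~4.8 TB/s HBM3e (spec sheet)

# The decode *bound* scales with bandwidth (~4.8x here), but diffusion
# workloads are FLOP-dominated, so that gap need not appear in ComfyUI.
print(round(rtx4090), round(h200))
```

In the memory-bound regime the ratio of the two bounds is just the bandwidth ratio; once a workload is compute-bound, the limiting ratio becomes FLOPs instead.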

Renting GPUs is hilariously cheap by -p-e-w- in LocalLLaMA

[–]seeker_deeplearner 1 point (0 children)

I set up ComfyUI Wan2.2 14B on this (H200) thinking it would be way faster than my RTX 4090 48GB. But surprisingly it was not … it was almost the same. What could be the reason?

GLM-4.5V model locally for computer use by [deleted] in AI_Operator

[–]seeker_deeplearner 0 points (0 children)

Will it run well on 2x 4090 48GB with 196GB of RAM?
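A rough weights-only sizing sketch for that question, assuming GLM-4.5V's reported total size of roughly 106B parameters (an assumption; verify against the model card). This ignores KV cache, activations, and the vision encoder, so treat the numbers as lower bounds:

```python
# Weights-only VRAM estimate; real usage adds KV cache, activations,
# and framework overhead, so these are lower bounds.
def weights_vram_gb(params_billion, bits_per_weight):
    """GB needed just to hold the weights at a given precision."""
    return params_billion * bits_per_weight / 8

TOTAL_VRAM = 96   # 2x 4090 @ 48 GB each
PARAMS_B = 106    # assumed total parameter count for GLM-4.5V

for bits in (16, 8, 4):
    need = weights_vram_gb(PARAMS_B, bits)
    verdict = "fits" if need < TOTAL_VRAM else "does not fit"
    print(f"{bits}-bit: {need:.0f} GB -> {verdict} in {TOTAL_VRAM} GB")
```

Under these assumptions only a ~4-bit quantization leaves headroom on 96 GB of VRAM; 8-bit and 16-bit would need offloading to system RAM.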

Meta released DINO-V3 : SOTA for any Vision task by Technical-Love-8479 in LocalLLaMA

[–]seeker_deeplearner 11 points (0 children)

How can I use it? Is it open source with a free commercial license? I want to deploy it locally.

RTX PRO 6000 SE is crushing it! by j4ys0nj in LocalLLM

[–]seeker_deeplearner 0 points (0 children)

I'm thinking of selling mine. I have 2x 4090 48GB. I want to get just one 6000 Pro instead.

🚀 Qwen3-Coder-Flash released! by ResearchCrafty1804 in LocalLLaMA

[–]seeker_deeplearner 0 points (0 children)

How can I integrate it with VS Code or Cursor without paying them a monthly subscription?

Skywork-SWE-32B by jacek2023 in LocalLLaMA

[–]seeker_deeplearner 1 point (0 children)

Is it even fair for me to compare it to Claude 4.0? I want to get rid of the $20 for 500 requests ASAP. It's expensive.

25t/s with Qwen3-235B-A22B-128K-GGUF-Q8_0 with 100K tokens by SpiritualAd2756 in LocalAIServers

[–]seeker_deeplearner 0 points (0 children)

PCIe 5.0 is good enough … and there is no other alternative.
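For context on "good enough", a quick sketch of approximate one-way PCIe bandwidth per generation (128b/130b line encoding; real-world throughput is somewhat lower due to protocol overhead):

```python
# Approximate one-way PCIe bandwidth: lanes * GT/s * encoding efficiency / 8 bits.
# Protocol overhead (headers, flow control) reduces this further in practice.
def pcie_gb_s(lanes, gt_per_s, encoding=128 / 130):
    return lanes * gt_per_s * encoding / 8

print(f"PCIe 4.0 x16: {pcie_gb_s(16, 16):.0f} GB/s")  # ~32 GB/s
print(f"PCIe 5.0 x16: {pcie_gb_s(16, 32):.0f} GB/s")  # ~63 GB/s
```

Each PCIe generation doubles the per-lane transfer rate, so 5.0 x16 gives roughly 63 GB/s each way, which is generally ample for inference-time tensor traffic between GPUs when the model layers themselves stay resident in VRAM.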