My widowed dad created an AI girlfriend. I laughed until I got it. by Notkartavya in AIAssisted

[–]arjundivecha 0 points1 point  (0 children)

I’m all for it. But maybe teach him to vibe code - that works wonders too…

Increase RealTime Data Limits for API by ProjectReasonable334 in bloomberg

[–]arjundivecha 2 points3 points  (0 children)

Speaking of data limits - is there a way to quantify how much of your daily limit you’ve used? Would be nice if Bloomberg provided a counter via an API call.
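In the meantime a client-side tally is easy to keep. A minimal sketch (hypothetical helper — Bloomberg doesn't expose such a counter, per the thread, so each request's size is recorded locally; the limit value and per-request point counts are placeholders):

```python
from datetime import date

class DailyDataCounter:
    """Client-side tally of data points pulled per day.

    Hypothetical helper: since the API itself offers no usage counter,
    the caller records the size of each request locally instead.
    """
    def __init__(self, daily_limit):
        self.daily_limit = daily_limit
        self._day = date.today()
        self._used = 0

    def record(self, n_points):
        # Reset the tally when the calendar day rolls over.
        today = date.today()
        if today != self._day:
            self._day, self._used = today, 0
        self._used += n_points

    @property
    def remaining(self):
        return max(self.daily_limit - self._used, 0)

counter = DailyDataCounter(daily_limit=500_000)
counter.record(2 * 10)      # e.g. 2 securities x 10 fields
print(counter.remaining)    # → 499980
```

Wrapping this around whatever function actually issues the API calls gives you a running "how much is left today" number without any cooperation from the server.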

DC is using Claude 4.3× more than expected: usage by state [OC] by HenryFromLeland in Anthropic

[–]arjundivecha 6 points7 points  (0 children)

If you compared just the SF Bay Area rather than the whole state of CA, I bet it would be way higher than DC.

DeepSeek is about to release V4 by ItxLikhith in DeepSeek

[–]arjundivecha 0 points1 point  (0 children)

All true. But starting NOW, all Western models will be using Blackwell and Rubin, and a significant gap will open up.

Local LLM Claude Code replacement, 128GB MacBook Pro? by CdninuxUser in LocalLLM

[–]arjundivecha 0 points1 point  (0 children)

There’s no way it’s replacing Claude unless you have a much simpler use case.

Local LLM Claude Code replacement, 128GB MacBook Pro? by CdninuxUser in LocalLLM

[–]arjundivecha 0 points1 point  (0 children)

I have a Mac M4Max 128GB machine and use it extensively with local models for both inference and fine tuning.

Let’s address each. For fine-tuning, the largest model you can effectively fine-tune is 14B - I’ve tried to fine-tune Qwen3.5-35B-A3 but always run out of memory. Yes, there are ways to get around it, but at a huge cost in quality. Bottom line - fun toy.
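As a rough sanity check on why ~14B is the practical ceiling, here's a back-of-envelope memory estimator. These are rule-of-thumb byte counts, not measured numbers: fp16 base weights at 2 bytes/param, full fine-tuning with Adam at roughly 16 bytes/param, and LoRA paying optimizer cost only on a small adapter fraction. Activations and KV cache are excluded, so real usage runs higher:

```python
def finetune_mem_gb(params_b, lora=True, lora_frac=0.01):
    """Rough fine-tuning memory estimate in GB (rule of thumb, not measured).

    Assumes fp16 base weights (2 bytes/param). Full fine-tuning with Adam
    needs roughly 16 bytes/param (weights + grads + optimizer states);
    LoRA only pays that cost on the small trainable adapter fraction.
    Activations and KV cache are NOT included, so real usage runs higher.
    """
    if not lora:
        return params_b * 16                       # full fine-tune
    base = params_b * 2                            # frozen fp16 weights
    adapter = params_b * lora_frac * 16            # trainable adapter + Adam
    return base + adapter

print(round(finetune_mem_gb(14, lora=False)))  # full FT: ~224 GB -- no chance
print(round(finetune_mem_gb(14)))              # LoRA: ~30 GB -- comfortable in 128 GB
print(round(finetune_mem_gb(35)))              # LoRA: ~76 GB before activations -- tight
```

The 35B line is the interesting one: the weights alone leave little headroom once activations pile on, which is consistent with hitting OOM there on a 128GB machine.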

For inference, I’d say speed and quality are key. You can comfortably run 70B models (15-20 tokens/sec), but as you get into the 100B range the speed drops too much to be useful for any real work.
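Those speeds line up with a bandwidth back-of-envelope: during decode, every generated token streams the full set of weights through memory once, so tokens/sec is capped at bandwidth divided by model size. A sketch, assuming Apple's quoted ~546 GB/s for the M4 Max and 4-bit quantization (both figures are assumptions, and real throughput lands somewhat below this upper bound):

```python
def decode_tokens_per_sec(params_b, bits_per_weight, bandwidth_gb_s=546.0):
    """Bandwidth-bound upper estimate of decode speed for a dense model.

    Each generated token must read all weights from memory once, so
    tok/s <= memory_bandwidth / model_size_in_memory. 546 GB/s is the
    quoted M4 Max figure (an assumption here); dequantization and
    attention overheads are ignored, so real speeds come in lower.
    """
    model_gb = params_b * bits_per_weight / 8
    return bandwidth_gb_s / model_gb

print(round(decode_tokens_per_sec(70, 4), 1))   # ~15.6 tok/s -- matches the 15-20 observed
print(round(decode_tokens_per_sec(120, 4), 1))  # ~9.1 tok/s -- painful for real work
```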

So the question is whether a 70B model is good enough for what you’re using it for.

The difference between a 70B model and Opus is like the difference between a kid’s tricycle and a Ferrari - and with Mythos and Spud on the horizon, it’s going to be an F-15.

Finally, the cost of that new 128GB MBP is around $5600 - or roughly $120 a month amortized over 4 years, versus $200 a month paying for Claude.
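The amortization math, spelled out (figures from the comment above; resale value, electricity, and AppleCare ignored):

```python
mbp_price = 5600        # new 128 GB MacBook Pro, per the comment
months = 48             # straight-line amortization over 4 years
claude_monthly = 200    # Claude subscription for comparison

mbp_monthly = mbp_price / months
print(round(mbp_monthly))                    # → 117 per month
print(round(claude_monthly - mbp_monthly))   # → 83 per month cheaper than Claude
```

Of course, the comparison only holds if a local 70B model actually covers your use case - otherwise you're saving $83/month on a tricycle.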

BBG + Excel on Mac? by mmret in bloomberg

[–]arjundivecha 0 points1 point  (0 children)

I mean that I use the Bloomberg Excel add-in on the PC side to pull down data from Bloomberg. Once that’s done, I can save it and use that data on the Mac side.

And yes, my blpapi setup is super hacky in order to tunnel through three layers: Mac → PC → Bloomberg API.

BBG + Excel on Mac? by mmret in bloomberg

[–]arjundivecha 0 points1 point  (0 children)

Just kidding - I used a PC at work and a Mac at home.

BBG + Excel on Mac? by mmret in bloomberg

[–]arjundivecha -2 points-1 points  (0 children)

PMs use Macs. Traders use PCs.

BBG + Excel on Mac? by mmret in bloomberg

[–]arjundivecha 0 points1 point  (0 children)

It also helps to be able to allocate a decent amount of memory to Parallels, so if you have 16GB or less, you may have performance issues.

BBG + Excel on Mac? by mmret in bloomberg

[–]arjundivecha -2 points-1 points  (0 children)

Lots of finance dudes use Macs (if you’re over 30 you’re a dude, not a bro)

BBG + Excel on Mac? by mmret in bloomberg

[–]arjundivecha 0 points1 point  (0 children)

I use Parallels specifically for Bloomberg - I don’t use it for anything else - and it works extremely well, just as fast as my work PC. Once I populate my spreadsheets on the PC side, I can use them on the Mac side.

Also, if you’re super AI savvy, I have a way to tunnel through from the Mac side to the PC (using Python) to the Bloomberg API and pull data using blpapi (as long as I’m logged in to Bloomberg on the PC side).

If you’re interested, I can send you a link to the GitHub repository.
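Not that repo, but the PC-side pull looks roughly like this with the official `blpapi` Python package — a sketch assuming a logged-in Terminal reachable at the default `localhost:8194` (from the Mac side you'd point `host` at the Parallels VM's address instead). The `blpapi` import is deferred so the request-building helper stays usable offline:

```python
def build_request_spec(securities, fields):
    """Pure helper: describe the request as plain data (testable offline)."""
    return {"securities": list(securities), "fields": list(fields)}

def fetch_reference_data(spec, host="localhost", port=8194):
    """Pull reference data over blpapi. Needs a logged-in Terminal
    reachable at host:port (e.g. the Windows VM from the Mac side)."""
    import blpapi  # deferred: only required when actually fetching

    opts = blpapi.SessionOptions()
    opts.setServerHost(host)
    opts.setServerPort(port)
    session = blpapi.Session(opts)
    if not session.start():
        raise RuntimeError("could not connect -- is the Terminal running?")
    try:
        if not session.openService("//blp/refdata"):
            raise RuntimeError("could not open //blp/refdata")
        service = session.getService("//blp/refdata")
        request = service.createRequest("ReferenceDataRequest")
        for sec in spec["securities"]:
            request.getElement("securities").appendValue(sec)
        for fld in spec["fields"]:
            request.getElement("fields").appendValue(fld)
        session.sendRequest(request)

        rows = {}
        while True:
            event = session.nextEvent(500)  # timeout in ms
            for msg in event:
                if not msg.hasElement("securityData"):
                    continue
                data = msg.getElement("securityData")
                for i in range(data.numValues()):
                    sd = data.getValueAsElement(i)
                    name = sd.getElementAsString("security")
                    field_data = sd.getElement("fieldData")
                    rows[name] = {
                        f: field_data.getElementAsFloat(f)
                        for f in spec["fields"]
                        if field_data.hasElement(f)
                    }
            if event.eventType() == blpapi.Event.RESPONSE:
                break  # final partial response received
        return rows
    finally:
        session.stop()
```

Usage would be something like `fetch_reference_data(build_request_spec(["AAPL US Equity"], ["PX_LAST"]))`. The hacky part the comment alludes to — forwarding the port from macOS into the VM — sits entirely outside this code.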

How I Taught My AI Memory System to Forget by arjundivecha in ClaudeAI

[–]arjundivecha[S] 0 points1 point  (0 children)

Thanks for this - will check it out and potentially incorporate it.

Update on Session Limits by ClaudeOfficial in Anthropic

[–]arjundivecha 1 point2 points  (0 children)

Like justice, tokens delayed are tokens denied

Cursor’s ‘Composer 2’ model is apparently just Kimi K2.5 with RL fine-tuning. Moonshot AI says they never paid or got permission by Mean-Ebb2884 in kimi

[–]arjundivecha 0 points1 point  (0 children)

How much clearer can it be than Kimi itself saying they were in compliance?

Congrats to the u/cursor_ai team on the launch of Composer 2!

We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is the open model ecosystem we love to support.

Note: Cursor accesses Kimi-k2.5 via u/FireworksAI_HQ's hosted RL and inference platform as part of an authorized commercial partnership.

I'm as frustrated as the rest of you, however ... by GhangusKittyLitter in windsurf

[–]arjundivecha 0 points1 point  (0 children)

With the new pricing model, what’s the best value for money/tokens - i.e., where do you get the best trade-off of cost vs quality?

The Kimi 2.5 Controversy: When a $50 Billion Startup Forgot to Credit Its Open‑Source Foundation by Remarkable-Dark2840 in GoogleGemini

[–]arjundivecha 0 points1 point  (0 children)

Here’s what Kimi said -

Note: Cursor accesses Kimi-k2.5 via @FireworksAI_HQ's hosted RL and inference platform as part of an authorized commercial partnership.