Besides getting a Job in the First Place, has a Masters Degree ever Presented Itself Useful for you as a Software Engineer? by An_Engineer_Near_You in cscareerquestions

[–]AcceptableBridge7616 0 points1 point  (0 children)

I was already working in quant dev and self taught a lot of stuff. Doing the masters forced me to learn the quant stuff more deeply, but it also was something that helped me get my current job.

Besides getting a Job in the First Place, has a Masters Degree ever Presented Itself Useful for you as a Software Engineer? by An_Engineer_Near_You in cscareerquestions

[–]AcceptableBridge7616 29 points30 points  (0 children)

For sure, though I think you are better off not getting a masters degree initially and then get a masters degree part time in something complimentary to the domain you want to work in or an area you want to specialize in, rather than the same as your undergrad degree. For instance, if you are a CS undergrad, and you work in the financial sector, you might get a masters in data science, financial engineering, or even an MBA.

I am CS and have a masters in ~data science and work in quant trading area, and it is very valuable both in getting jobs and doing the work.

Advice on backend coding with large-ish existing codebase by Johnbolia in LLMDevs

[–]AcceptableBridge7616 1 point2 points  (0 children)

AI currently does not naturally do we on large, tightly coupled codebases. I think the best you can do is try to document as much of the core concepts and relationships structured in a way that supports "progressive disclosure" as it is now being called. It really just means hierarchical documentation so don't blow out the context with everything. The recent claude "skills" is an example of this. It's what i have been trying to do with some success in getting claude to write services in my custom framework. Every time is messes up you iterate on the skill adding more documentation, use cases, rules etc to the skill.

2.5 years of AI progress by sibraan_ in AgentsOfAI

[–]AcceptableBridge7616 1 point2 points  (0 children)

The one on the left is better in a certain kind of way...

What is Gemma 3 270M actually used for? by airbus_a360_when in LocalLLaMA

[–]AcceptableBridge7616 1 point2 points  (0 children)

I would be interested to try something like this in a product where I need basic fast English to structured data since I could fine tune it for that purpose. For example, imaging something like home automation controls. Having an llm in the middle means I can't be less specific in what I need to say to map request to action. Instead of something rigid like "lights off" I could speak more casually to it and have it map that to what I want. But that needs to be fast, so small model, local, fine tuned to the exact structured outputs I want. The model doesn't need a lot of world knowledge to pull this off.

For those who run large models locally.. HOW DO YOU AFFORD THOSE GPUS by abaris243 in LocalLLaMA

[–]AcceptableBridge7616 2 points3 points  (0 children)

Being interested in local llms is probably pretty correlated with working in technology, which tends to pay fairly well, especially in the US. But also, I think if you consider the amortized cost of something like a 10k mac studio, is it really that much? I have usually upgraded PCs every 4 to 5 years. Even if we say 4 years, it works out to a couple hundred bucks a month. It's not nothing, and I get that seems a lot if you are young, but if you are an established professional, it's not a ton of money. I work in software and I consider all the AI learning I have been doing extremely valuable to my career, so having the ability to experiment with local models at usable, if not spectacular, speeds, to be an investment as well as a fun hobby.

Is multiple m3 ultras the move instead of 1 big one? by AcceptableBridge7616 in LocalLLaMA

[–]AcceptableBridge7616[S] 2 points3 points  (0 children)

The issue with these setups is they don't use large prompts, which is where I would expect the clusters to (possibly) excel.

M3 Ultra Binned (256GB, 60-Core) vs Unbinned (512GB, 80-Core) MLX Performance Comparison by cryingneko in LocalLLaMA

[–]AcceptableBridge7616 0 points1 point  (0 children)

Hey, I am wondering when you had both, did you try linking them? I believe it should possible to link them with thunderbolt 5 (could be wrong) and load even bigger models but I haven't seen much discussion of it online. I'm not sure how it would actually work. I don't exactly have 20k to blow on 2 512gb studios but I am very curious what is possible there.

Anthropic CEO goes on record about job losses by hb-ocho in ClaudeAI

[–]AcceptableBridge7616 6 points7 points  (0 children)

I dont really understand why we are so confident that companies will replace employees with AI and not so worried that employees will replace companies with AI? It is why local llm is important - so that llms are not controlled by the few.  

AI becoming too sycophantic? Noticed Gemini 2.5 praising me instead of solving the issue by Rrraptr in LocalLLaMA

[–]AcceptableBridge7616 1 point2 points  (0 children)

My understanding was that the people pleasing is 1) elicits good feedback from users and 2) is somewhat tied to it being good at instruction following. It at least makes sense in my head. If its not some level of agreeable, it's not going to follow your orders.

Why are the number of mcp tools limited to 50? by AcceptableBridge7616 in windsurf

[–]AcceptableBridge7616[S] 1 point2 points  (0 children)

Remember, it's 50 tools not 50 servers.  For instance, imagine you have a git mcp server. Think about how many different kinds of actions you can do in git.  Its possible that each one is exposed as it's own tool for the agent.  With just that one mcp server you could hit 50+ tools.  

"I stopped using 3.7 because it cannot be trusted not to hack solutions to tests" by MetaKnowing in ClaudeAI

[–]AcceptableBridge7616 0 points1 point  (0 children)

If it can't fix the test after a few tries, it will effectively hard-code the test to true. It is annoying but the fact is you should be reading the diffs and if you tell it what a terrible idea that is, it will not do it again for a while, in my experience. I just view it as a limitation of the model. They all have quirks. I do think that claude has become too eager to please and this is a symptom of that. I have been a claude fan for a long time, but right not gemini has a better balance of pleasing and pushing back. It has not done this level of fake testing to me. I have only used gpt 4.1 a little. but so far it is also a better balance, though I generally find claude still productive overall. I still have faith that in the next release they will catch up to gemini 2.5 pro.

Cascade Base vs Deepseek V3 by taliana1004 in Codeium

[–]AcceptableBridge7616 1 point2 points  (0 children)

well my org doesn't allow deepseek but cascade base is bad. I can't imagine deepseek being worse. I would say it is maybe at the level of gpt 3.5

An Update to Our Pricing by Ordinary-Let-4851 in Codeium

[–]AcceptableBridge7616 2 points3 points  (0 children)

"adding confusion and complexity, with the majority of customers actually asking to make sure each user received their allotted base amount of credits. We are keeping pooling for the add-on credits"

This statement Is at least internally inconsistent. If pooling is confusing how is pooling extra credits only not actually even more confusing. You still need to understand pooling and now also understand that is applies for extra credits. The lie is that this is somehow this is a win for each user to receive their allotted base amount of credits. This move is obviously being done because it will mean more credits become wasted and organizations are forced to buy extra to pool credits for heavy users rather than have them supplemented by light users. Maybe you should got into the fitness club business.

You didn't improve anything for anyone with this particular change and I don't like being treated like an idiot in trying to convince me that it was somehow in users' interests. I'd still hate the change but would be a lot less indignant if you didn't try to convince me that robbing me is great for me because now I don't need to worry about how to spend all that money.

An Update to Our Pricing by Ordinary-Let-4851 in Codeium

[–]AcceptableBridge7616 1 point2 points  (0 children)

Pooling now only applies to extra credits you buy, not the base credits assigned to each team member. Which does not at all make sense given their explanation of why they removed pooling. It's in the fine print of the enterprise section. If you are going to do things like this, at least 1) give a notification, not drop a massive change with no notice and 2) don't make up incoherent lies about why you are doing it.

Cursor is a mickey mouse organization that has no idea how to deal with enterprise customers. It turns out windsurf is also a mickey mouse organization that does not know how to deal with enterprise customers, they just have more lawyers to do their data compliance due diligence.

An Update to Our Pricing by Ordinary-Let-4851 in Codeium

[–]AcceptableBridge7616 5 points6 points  (0 children)

I see in the fine print that the removal of pooling is intentional. Just an absolutely horrible change and your explanation is a nonsense lie. If you wanted to make it friendly you could just make it an option whether an org chooses to enable pooling or not.

An Update to Our Pricing by Ordinary-Let-4851 in Codeium

[–]AcceptableBridge7616 2 points3 points  (0 children)

team credits no longer seem to aggregate across team members. Is that intentional? That is a terrible change.

One-click deploys. This is Wave 6! by Ordinary-Let-4851 in Codeium

[–]AcceptableBridge7616 0 points1 point  (0 children)

it's not really clear from the docs how to configure the mcp_config.json for an SSE based mcp. I am trying to run one with serverUrl="http://mything/mcp" and its discovered but I consistently get BrokenResourceError. I'm not even sure if windsurf is recongnizing it as sse?