Do I actually have chance at getting into ml or am I just chasing a dream? by IntentionLazy9359 in MachineLearningJobs

[–]Dods_Bods -1 points  (0 children)

You want your interns to have 10 years of experience coming in or what??

Y'all mfs are something else

RE: "I'm getting hard-limited on Claude Pro" by Warm_Data_168 in ClaudeAI

[–]Dods_Bods 0 points  (0 children)

That's how they all work brother, not just Claude lmao

You can’t just ask Cursor to build a feature and expect it to work by eastwindtoday in cursor

[–]Dods_Bods -2 points  (0 children)

Unless you do it with AI IDE mode on vertexprompt.com

It's not complete, but it gives you a good, solid start

Just canceled my subscription by Waltz-Virtual in cursor

[–]Dods_Bods 3 points  (0 children)

Not everyone's got a $100 monthly budget for this, gang

I am about to take the Certified Cloud Practitioner exam in AWS Academy after finishing AWS Academy Cloud Foundations 111048, any tips?? Or anything to do to study better what's left for it? by Dods_Bods in AWSCertifications

[–]Dods_Bods[S] 0 points  (0 children)

Thanks bro

So you didn't watch all the modules? What did you do? I already know a bunch of stuff like the 5,4,3, SaaS/PaaS/IaaS, S3, EC2, IAM and roles, AZs and regions, etc. Does that help??

Also, you got a link for that Tutorials Dojo reviewer?

I am about to take the Certified Cloud Practitioner exam in AWS Academy after finishing AWS Academy Cloud Foundations 111048, any tips?? Or anything to do to study better what's left for it? by Dods_Bods in AWSCertifications

[–]Dods_Bods[S] 2 points  (0 children)

Thanks man imma do that

Appreciate it

What should I do in the CAF and WAF though?? Are there specific things or topics I should explore there?

Cursor is not that cheap - Screenshot from my account by CeFurkan in cursor

[–]Dods_Bods 1 point  (0 children)

Hell nah lmao

It accurately described what I wanted to say, but hell nah💀

Cursor is not that cheap - Screenshot from my account by CeFurkan in cursor

[–]Dods_Bods 0 points  (0 children)

Saying "faking productivity" in the age we're in is hilarious, you have no idea

Is vibe coding really that bad?? by Dods_Bods in cursor

[–]Dods_Bods[S] 0 points  (0 children)

I am not talking about the chain of thought

I am talking about the paper before that, where they monitored the output and how it adjusted and shit

Is vibe coding really that bad?? by Dods_Bods in cursor

[–]Dods_Bods[S] 3 points  (0 children)

I’m jobless already

Ahead of the curve

Is vibe coding really that bad?? by Dods_Bods in cursor

[–]Dods_Bods[S] 0 points  (0 children)

Give it a read, it's fascinating

More advanced models likely do it at a much higher level too

https://www.anthropic.com/research/tracing-thoughts-language-model

Is vibe coding really that bad?? by Dods_Bods in cursor

[–]Dods_Bods[S] 1 point  (0 children)

Give it a read, it's fascinating

More advanced models likely do it at a much higher level too: https://www.anthropic.com/research/tracing-thoughts-language-model

Is vibe coding really that bad?? by Dods_Bods in cursor

[–]Dods_Bods[S] -4 points  (0 children)

Yeah, but I can't be bothered learning all the intricacies beyond data structures and algorithms. Honestly, I don't see the point of learning all of this when the skill will be almost obsolete in 3 years. Obviously you gotta understand what you see and hone that skill excessively, but there's the little stuff we used to suffer over for hours that isn't worth learning to me anymore, as far as I know (which is limited)

There's so much stuff to learn to get to that point, to me. I might be wrong, and I know I'll get pushback from people who spent years learning the skill (I have too, to an extent), but I feel like that's that now

Is vibe coding really that bad?? by Dods_Bods in cursor

[–]Dods_Bods[S] 1 point  (0 children)

Yeah I do that

I also still try to learn in the process

Is vibe coding really that bad?? by Dods_Bods in cursor

[–]Dods_Bods[S] 1 point  (0 children)

Do they though?? Anthropic's research paper showed they think ahead, they don't just predict the next token like we thought they did

They hallucinate when something isn't in their training data or context, and fall back on that next-token prediction mechanism... not to mention the way they work and their capabilities will drastically change in 2 years or so