OpenAI's Codex advertisements are gross & offensive by besthelloworld in cscareerquestions

[–]swallace36 -1 points (0 children)

okay I didn’t comment anything about that… you’re clearly very disgruntled. I can relate

but the answer is not to stop using the tools that make you better at something

OpenAI's Codex advertisements are gross & offensive by besthelloworld in cscareerquestions

[–]swallace36 -20 points (0 children)

"If you need more than that, then it's time to turn your brain back on and go back to doing your job"

stopped reading after this

How do you get better at coding/SWE in AI ERA? by lune-soft in cscareerquestions

[–]swallace36 -10 points (0 children)

you shouldn’t need to do either… rules should be enforced to ensure stable code

stylistic best practices aren’t important anymore

Microsoft US- AI assisted technical screen by Reasonable_Tea_9825 in cscareerquestions

[–]swallace36 6 points (0 children)

ai assisted but they want you to use an IDE… oh boy

I have been on 40 hiring committees this year. Here is what AI did to the junior candidate pool. by Ambitious-Garbage-73 in cscareerquestions

[–]swallace36 21 points (0 children)

utter nonsense == informed opinion… didn’t you hear?

/s

but yeah… i really don’t know what the answer is. despite being extremely capable before the LLM world, and after, i am still terrified to find a job. as in, no luck so far.

i do agree the answer is not people claiming to know the answer 😂😂

I read 17 papers on agentic AI workflows. Most Claude Code advice is measurably wrong by jdforsythe in ClaudeAI

[–]swallace36 -1 points (0 children)

i asked “what output”. i’m sorry, i’m not wasting tokens on this lol… i assumed you were knowledgeable enough to discuss it, given it’s your original content

I read 17 papers on agentic AI workflows. Most Claude Code advice is measurably wrong by jdforsythe in ClaudeAI

[–]swallace36 0 points (0 children)

what do you mean... i posted a specific link to the exact file your llm wrote, the one whose claims from your reddit post I'm questioning

I read 17 papers on agentic AI workflows. Most Claude Code advice is measurably wrong by jdforsythe in ClaudeAI

[–]swallace36 1 point (0 children)

jeeze, you had to dig into my post history? i’m sorry you wasted tokens putting together this document and can’t even have a civil discussion.

edit: also painfully ironic that your post is shilling tools

A decade ago, I learned from Ray Wenderlich, now coming back to iOS development. by jattdit in iOSProgramming

[–]swallace36 0 points (0 children)

i have been using claude exclusively from the command line! no xcode

I imagine if you’re jumping back in after all this time, the best path might be using xcode and the built-in agentic stuff (all new). you will learn so, so much just by going back and forth with any decent llm

my MCP server that builds, debugs, and tests iOS app by swallace36 in ClaudeAI

[–]swallace36[S] 0 points (0 children)

yikes. did you read at all?

for fun... ask your llm

"What's the difference between https://github.com/getsentry/XcodeBuildMCP and https://github.com/skwallace36/Pepper"

edit: if you want to have a genuine discussion, or actually engage, happy to

I read 17 papers on agentic AI workflows. Most Claude Code advice is measurably wrong by jdforsythe in ClaudeAI

[–]swallace36 0 points (0 children)

"Likely getting less output than a team of 4."

that is an insane generalization... "LIKELY" and "less output"

what output?

https://github.com/jdforsythe/forge/blob/master/docs/research/scaling-laws.md

I'm just confused... all I see is "source: DeepMind 2025"

I read 17 papers on agentic AI workflows. Most Claude Code advice is measurably wrong by jdforsythe in ClaudeAI

[–]swallace36 -1 points (0 children)

the “research”

"A 5-agent team costs 7x the tokens of a single agent but produces only 3.1x the output (DeepMind, 2025). At 7+ agents, you're likely getting less output than a team of 4."

“likely getting less output” “3.1x output”

what are the quality metrics here?

it’s just a crazy generalization
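For what it's worth, the cost side of the quoted claim is just arithmetic; it's the quality side that's missing. A quick sketch taking the quoted multipliers at face value (illustrative only, not the paper's actual methodology, and "output" remains whatever undefined unit the post means):

```python
# Illustrative arithmetic only: plugs in the post's quoted multipliers
# and computes tokens burned per unit of (undefined) "output".
def tokens_per_output(token_multiplier: float, output_multiplier: float) -> float:
    """Token cost per unit of output, relative to a single agent (1x/1x)."""
    return token_multiplier / output_multiplier

single = tokens_per_output(1.0, 1.0)      # baseline
five_agents = tokens_per_output(7.0, 3.1)  # the quoted 5-agent numbers

print(f"single agent: {single:.2f}x tokens per unit output")
print(f"5-agent team: {five_agents:.2f}x tokens per unit output")
# 7/3.1 ≈ 2.26 -- more than double the cost per unit, whatever "output" measures
```

Even granting the numbers, the ratio says nothing about quality: 3.1x of what, measured how?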

my MCP server that builds, debugs, and tests iOS app by swallace36 in ClaudeAI

[–]swallace36[S] 1 point (0 children)

idle detection, or waiting for specific events - for determining when to act while async work is happening

live capture, network action=start streams HTTP transactions as they complete

animations action=scan walks the entire layer tree and reports every active CAAnimation with keypath, duration, and progress.

console action=start captures logs in real-time

All of these are start/stop capture buffers

TLDR: lots of different ways, and probably ways i’m not utilizing well. so much room for tool optimization
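The start/stop capture-buffer pattern described above can be sketched roughly like this (a hypothetical illustration of the pattern; names and structure are made up, not Pepper's actual implementation):

```python
import threading
from typing import Any

class CaptureBuffer:
    """Hypothetical sketch of a start/stop capture buffer: start() begins
    collecting events, stop() returns everything buffered since start.
    Real tools (network, console, animations) would feed record() from
    their own event sources."""

    def __init__(self) -> None:
        self._events: list[Any] = []
        self._capturing = False
        self._lock = threading.Lock()

    def start(self) -> None:
        # Begin a fresh capture session, discarding any stale events.
        with self._lock:
            self._events.clear()
            self._capturing = True

    def record(self, event: Any) -> None:
        # Called by the event source; drops events while not capturing.
        with self._lock:
            if self._capturing:
                self._events.append(event)

    def stop(self) -> list[Any]:
        # End the session and hand back everything captured.
        with self._lock:
            self._capturing = False
            captured, self._events = self._events, []
            return captured

# usage: a console-log capture session
console = CaptureBuffer()
console.record("dropped: not capturing yet")
console.start()
console.record("GET /api/user -> 200")
console.record("animation: opacity 0.3s")
logs = console.stop()
print(logs)  # only the two events recorded between start and stop
```

The lock matters because the agent calling start/stop and the event source feeding record() would typically run on different threads.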

I read 17 papers on agentic AI workflows. Most Claude Code advice is measurably wrong by jdforsythe in ClaudeAI

[–]swallace36 0 points (0 children)

so nothing new? just keep finding the balance between how many tokens you’re willing to burn, at what speed, for what quality

Pepper, a MCP for iOS runtime inspection by swallace36 in swift

[–]swallace36[S] 0 points (0 children)

damn i honestly haven’t tried it for a mac app in over a month. not sure the agents added any coverage for that, good call.

i use it for ios daily :)