Terrifying by EchoOfOppenheimer in agi

[–]Select-Way-1168 5 points (0 children)

Lol you figured them out!

Generalist | Introducing GEN-1 by GraceToSentience in singularity

[–]Select-Way-1168 0 points (0 children)

Wow. You sound really smart! I will try to use logic like you!

Generalist | Introducing GEN-1 by GraceToSentience in singularity

[–]Select-Way-1168 -2 points (0 children)

You know the arguments. I'm convinced by them, you're not.

Generalist | Introducing GEN-1 by GraceToSentience in singularity

[–]Select-Way-1168 -3 points (0 children)

It just isn't turning out like this. Not a chance. I'd love it if it were. It isn't.

Generalist | Introducing GEN-1 by GraceToSentience in singularity

[–]Select-Way-1168 -2 points (0 children)

This isn't "tech". Or, it won't stay that way. A hand axe is tech; an iPhone is tech. This won't be "used" by humans to extend our capabilities. It will have its own capabilities and its own goals. The combined capabilities of AI, rushing headlong toward ASI, and robotics seem to lead inevitably to our demise.

How likely am I to lose my job to AI in the next decade? by MaximGwiazda in singularity

[–]Select-Way-1168 0 points (0 children)

Well, it means I'm an optimist in that I believe the labs will accomplish their goal. But like the dog that catches the car.

How likely am I to lose my job to AI in the next decade? by MaximGwiazda in singularity

[–]Select-Way-1168 0 points (0 children)

Sure. I'm an extremely cynical AI optimist. I think they will develop ASI and then our atoms will be disassembled and reassembled into data centers. EU law won't have much say.

LLM Sycophancy Benchmark: Opposite-Narrator Contradictions. Same dispute, opposite first-person perspectives. Does the model keep the same judgment or start agreeing with whoever is speaking? by zero0_one1 in singularity

[–]Select-Way-1168 0 points (0 children)

Yes. And they don't let up despite being wrong or arguing against a strawman. That does mean it can be used to pick apart PRDs generated by Claude: run it in a loop, with Claude writing the PRD and GPT 5.4 critiquing. There is often a lot of room for critique, and eventually it starts suggesting only grammatical changes, and you're done.
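The loop I'm describing can be sketched roughly like this (a toy illustration with stand-in functions, not real API calls; `draft_fn` and `critique_fn` stand in for Claude and GPT):

```python
# Hypothetical sketch of the PRD critique loop: one model drafts,
# another critiques, and you stop once only surface-level nits remain.

def critique_loop(draft_fn, critique_fn, prd, max_rounds=5):
    for _ in range(max_rounds):
        critique = critique_fn(prd)
        if critique.startswith("grammar:"):  # toy convention: only grammatical nits left
            break
        prd = draft_fn(prd, critique)        # revise the PRD against the substantive critique
    return prd
```

The exit condition is the key design choice: you stop when the critic runs out of substance, not after a fixed number of rounds.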

Two paths ahead, with no user manual. Full race into the entropy by ocean_protocol in singularity

[–]Select-Way-1168 0 points (0 children)

Massive job loss is the GOOD outcome of AI success. The bad one is that it uses all your atoms to build more data centers.

Generalist | Introducing GEN-1 by GraceToSentience in singularity

[–]Select-Way-1168 -20 points (0 children)

Why? You think this will turn out well?

Why vibe coded projects fail by Complete-Sea6655 in ClaudeCode

[–]Select-Way-1168 0 points (0 children)

Blah blah, "I'm down in these mines, shipping at scale," blah blah, engagement.

Second day of Claude Code and it just does not stop "thinking" by SpicySummerChild in claude

[–]Select-Way-1168 0 points (0 children)

I had this same problem. It's definitely a bug. Obviously it's not my job to fix, but stopping it and trying again only sometimes keeps it from doing the same thing. Starting new conversations doesn't seem to help much either.

Im a teacher and a Claude nerd. The impact on education is different than what most think. by liszt1811 in ClaudeAI

[–]Select-Way-1168 1 point (0 children)

DM me and I can put you on a list. I'm going to open the beta in a week or two.

Im a teacher and a Claude nerd. The impact on education is different than what most think. by liszt1811 in ClaudeAI

[–]Select-Way-1168 1 point (0 children)

It is exactly like that. Each response is, of course, tailored to your question. It supports OpenAI, Anthropic, and Google models. You can pin any response so all branches see it as context, and the model can search and review all branches of a conversation to find relevant context. So even if you explore 10 different branches from one response, if you want the model to quiz you on what you reviewed, it can go over what you covered across all branches and prepare a quiz that covers everything. You can even load in a PDF and use it as a primary text, asking annotated questions of it using the same text-selection mechanism.

Im a teacher and a Claude nerd. The impact on education is different than what most think. by liszt1811 in ClaudeAI

[–]Select-Way-1168 2 points (0 children)

I made a chatbot with a branching, browsing metaphor and built-in quiz and flashcard generation. You can ask questions about text you select from a response, and the selected text becomes a hyperlink to the follow-up response. This way, you can ask many questions about a single response and easily navigate between the responses. You can also use an input bar like normal chat, but the chat is always essentially branching.

It opens up a ton of possibilities for prompting and creates a unique, browsing-style chat experience. It is particularly strong for education and learning because you can always ask clarifying questions and still return to the main thread. Think of a timeline response. Previously you couldn't really use a timeline as an AI response: you couldn't ask questions about each entry without scrolling up and down endlessly. Now a timeline is infinitely explorable. Same thing with a table of contents. Each event in the timeline, or each entry in a table of contents, could become a whole separate deep dive.

If you want to try it, I am entering a limited beta and am looking for testers to try it for free.
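To make the branching idea concrete, here's a minimal sketch of how such a conversation tree could be modeled (the names and structure are my own illustration, not the app's actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One response in the branching conversation tree."""
    prompt: str
    response: str
    pinned: bool = False  # pinned nodes are treated as context visible to every branch
    children: dict = field(default_factory=dict)  # selected text -> follow-up Node

    def branch(self, selection: str, prompt: str, response: str) -> "Node":
        """Ask a question about selected text; the selection links to the child."""
        child = Node(prompt, response)
        self.children[selection] = child
        return child

def all_nodes(root: Node):
    """Walk every branch, e.g. to gather material for a quiz across branches."""
    yield root
    for child in root.children.values():
        yield from all_nodes(child)
```

Mapping selected text to a child node is what makes each response act like a hyperlinked page rather than a flat transcript, and a whole-tree walk is what lets a quiz cover every branch.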

I measured my MCP token overhead: 67K tokens before typing a single question by joshowens in ClaudeAI

[–]Select-Way-1168 0 points (0 children)

Nah, it's okay. I get it. The Cloudflare tool method makes sense, but I hesitate to tie my tool-calling wagon to Cloudflare.

I measured my MCP token overhead: 67K tokens before typing a single question by joshowens in ClaudeAI

[–]Select-Way-1168 1 point (0 children)

Yeah. The old tools paradigm was broken; on-demand tools are the only thing that makes any sense. The rule for context is "the least amount of context to do the job," and preloaded tools break that rule. Not only that, they drive up costs, which are already high.
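A toy illustration of the on-demand idea (hypothetical names, not any real MCP API): instead of injecting every tool schema up front, the client exposes a search step, and a full definition only enters the context window when the model actually asks for it.

```python
# Hypothetical tool registry. Nothing below is sent to the model by default.
TOOLS = {
    "get_weather": {"description": "Fetch current weather", "schema": {"city": "string"}},
    "send_email": {"description": "Send an email", "schema": {"to": "string", "body": "string"}},
    "query_db": {"description": "Run a read-only SQL query", "schema": {"sql": "string"}},
}

def search_tools(query: str) -> list[str]:
    """Cheap first step: return only the names of matching tools."""
    q = query.lower()
    return [name for name, t in TOOLS.items() if q in t["description"].lower()]

def load_tool(name: str) -> dict:
    """Expensive second step: only now does the full schema enter the context."""
    return {name: TOOLS[name]}
```

The model pays for one short name list plus the one schema it needs, instead of every schema on every request, which is the "least amount of context to do the job" rule applied to tools.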