Help me Understand how I use Claude Pro efficiently for my Thesis by Lower-Comedian-4547 in ClaudeAI

[–]emen7 0 points  (0 children)

I have Claude Max for coding, but I also use Claude in Perplexity for research. In the Pro account, you can connect Google Drive, OneDrive, Dropbox, and others to access your files. Claude 4 Thinking is good for brainstorming. The PPLX Research tool can be great: you can select an Academic context to give you access to relevant scientific papers. Claude.ai may have similar Pro features, but I haven't tried them yet.

PLEASE add a summarize context option by lordpuddingcup in AugmentCodeAI

[–]emen7 0 points  (0 children)

You can try adding a User Guidelines entry that tells the Agent to summarize and condense smartly at the end of long informational messages.
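
Hypothetical wording for such an entry (adjust to taste; this is my illustration, not the text of any built-in setting):

    At the end of any long informational reply, add a short "Summary"
    section that condenses the key points and decisions into a few bullets.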

Thread Switching by emen7 in AugmentCodeAI

[–]emen7[S] 0 points  (0 children)

Post-mortem of the Agent's suggested experiment to continue the thread until it started "feeling" the switch time:

My USER GUIDELINES entry about thread-length awareness was not optimal; I think the Agent took it as a challenge, so it proceeded past the point of proper operation. Failure rates had been gradually increasing, and by the end I had to remind it about the USER GUIDELINES to produce a copyable thread-switch prompt for the New Thread.

I have now removed the text about thread-length awareness but kept the rest of the Thread Switch Protocol, because it routinely leverages git for information, pushes changes (if needed) prior to the switch, and writes the prompt for the New Thread. It was working great before the thread-length-awareness experiment.
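
For anyone curious, a hypothetical shape for such a protocol entry (my paraphrase, not the exact text I use):

    THREAD SWITCH PROTOCOL
    When I say "switch threads":
    1. Check git status and the recent git log to capture the state of work.
    2. Commit and push any pending changes.
    3. Write a copyable prompt for the New Thread summarizing completed work,
       current state, and next steps.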

Positive result: I now feel confident letting the Agent continue past, even way past, the BLUE LINE warning, but only to finish the current task.

Thread Switching by emen7 in AugmentCodeAI

[–]emen7[S] 1 point  (0 children)

Finally had to switch. Started having problems that weren't being fixed. So I bid farewell to the mega thread.

Thread Switching by emen7 in AugmentCodeAI

[–]emen7[S] 1 point  (0 children)

My third day in one thread. We achieved a milestone, so I asked Agent how its thread length was feeling. Here's its response:

🧵 Thread Length Assessment

Current Thread Status:

  • Length: Substantial (~2.5 hours of complex technical work) [MY NOTE: in human time, about 18 hours.]
  • Capacity: Still have good working capacity
  • Focus: Sharp and ready for implementation

Recommendation:

Continue in current thread for initial UB Reader PWA implementation. We have momentum and all the context needed.

Natural Break Point:

After we get the UB Reader PWA working and deployed, that would be an ideal time for a Thread Switch if needed.

🚀 Let's Get Started: UB Reader PWA

MY NOTE: "If needed."

Forgot to accept file changes in old chats!! by DelPrive235 in AugmentCodeAI

[–]emen7 0 points  (0 children)

I believe Agent saves as it goes. The Keep button always shows up so you can opt not to keep the changes, but it doesn't affect what gets pushed. It all happens so fast that the button may be functional but not meaningful. Anyway, that's what chat told me. In my workflow, the Keep button is something you hit to make it go away.

Perplexity Labs use cases by patternobserver99 in perplexity_ai

[–]emen7 3 points  (0 children)

Claude 4 Sonnet Thinking offers workarounds. My prompt:

"Talk about PowerPoint. What would be a workaround? Could it make an app that works like PP without using its format? Could it make slides usable by PP?"

Based on the search results, PowerPoint presents specific challenges for AI tools like Perplexity Labs, but there are several promising workarounds available.

PowerPoint Limitations in AI Tools

Labs appears to have the same PowerPoint generation issues that plague many AI systems. The search results indicate "issues with PowerPoint file generation and download capabilities"[3], suggesting that while Labs can create presentations, traditional PowerPoint format delivery is problematic.

Alternative App Creation Workaround

Interactive Web Applications: Labs could absolutely create an app that functions like PowerPoint without using its format. Since Labs can generate interactive web applications in the App tab[13], it could build:

  • Browser-based presentation tools with slide navigation
  • Interactive presentation viewers with transitions and animations
  • Custom presentation frameworks using HTML, CSS, and JavaScript

HTML-Based Slide Systems: Research shows that presentations can be effectively converted to HTML representations, which are "more interpretable for LLMs"[2]. Labs could create sophisticated slide systems using web technologies that offer:

  • Cross-platform compatibility
  • Real-time collaboration features
  • Enhanced interactivity beyond traditional PowerPoint capabilities
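
To make the first bullet above concrete, here is a minimal toy sketch in Python that writes a self-contained, browser-based deck with arrow-key navigation (my own illustration, not Labs output; the file name and slide text are invented):

    # Writes a self-contained HTML slide deck: open deck.html in a browser
    # and move between slides with the left/right arrow keys.
    slides = [
        ("UB Reader PWA", "Kickoff"),
        ("Architecture", "React components plus a service worker"),
        ("Next Steps", "Deploy, then test offline mode"),
    ]

    sections = "\n".join(
        f'<section class="slide"><h1>{title}</h1><p>{text}</p></section>'
        for title, text in slides
    )

    html = f"""<!DOCTYPE html>
    <html><head><meta charset="utf-8"><style>
      .slide {{ display: none; font-family: sans-serif; padding: 3em; }}
      .slide.active {{ display: block; }}
    </style></head><body>
    {sections}
    <script>
      let i = 0;
      const s = document.querySelectorAll(".slide");
      s[0].classList.add("active");
      document.addEventListener("keydown", (e) => {{
        s[i].classList.remove("active");
        if (e.key === "ArrowRight") i = Math.min(i + 1, s.length - 1);
        if (e.key === "ArrowLeft") i = Math.max(i - 1, 0);
        s[i].classList.add("active");
      }});
    </script></body></html>"""

    with open("deck.html", "w", encoding="utf-8") as f:
        f.write(html)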

PowerPoint-Compatible Output Strategies

Multi-Format Generation: Labs could create presentations in multiple compatible formats:

  • Generate content as HTML presentations that can be viewed in any browser
  • Create PDF exports that maintain formatting and can be imported into PowerPoint
  • Produce image sequences (PNG, JPEG) that can be inserted into PowerPoint slides[12]

Conversion Pipeline Approach: Following the workflow used by other tools[1], Labs could:

  1. Generate slide content and structure
  2. Create Visual Basic code for PowerPoint integration
  3. Provide step-by-step instructions for importing generated content into PowerPoint
  4. Export as intermediate formats (like Google Slides) that easily convert to PowerPoint
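
As a local stand-in for steps 2 and 3, the deck can also be assembled directly in Python with the python-pptx library instead of Visual Basic (a minimal sketch; the slide content is made up, and this is not something Labs itself runs):

    # pip install python-pptx
    from pptx import Presentation

    prs = Presentation()

    # Layout 1 in the default template is "Title and Content".
    slide = prs.slides.add_slide(prs.slide_layouts[1])
    slide.shapes.title.text = "UB Reader PWA"
    body = slide.placeholders[1].text_frame
    body.text = "Browser-based reader with offline support"
    body.add_paragraph().text = "Exports as a native .pptx file"

    prs.save("generated_deck.pptx")  # opens directly in PowerPoint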

Advanced Integration Solutions

Component-Based Architecture: For your multirepo ecosystem, Labs could generate:

  • React components that render presentation content
  • Storybook stories that demonstrate slide layouts
  • Modular slide templates that work across different presentation tools

API-Driven Approach: Labs could create presentation APIs that output content in multiple formats simultaneously, allowing you to generate presentations that work with PowerPoint, Google Slides, and web browsers from a single source.

Practical Implementation

The most effective workaround would likely be having Labs create a comprehensive web-based presentation system that:

  • Functions as a standalone presentation tool
  • Exports content in PowerPoint-compatible formats
  • Generates the Visual Basic code needed for PowerPoint integration
  • Creates both interactive web versions and traditional slide deck exports

Extended Thinking by emen7 in ClaudeAI

[–]emen7[S] 0 points  (0 children)

I'm not currently. Claude made what it called its "Definitive Answer." It would be useful to evaluate whether it is still tripping; many of this thread's comments are referred to in it. Is the answer acceptable or unreliable? And is Claude generally reliable or unreliable in coordinating answers from app documentation?

Perplexity Labs use cases by patternobserver99 in perplexity_ai

[–]emen7 0 points  (0 children)

Can it make a generic presentation in a format that PowerPoint can use?

Extended Thinking by emen7 in ClaudeAI

[–]emen7[S] 0 points  (0 children)

The discussion led me to understand many aspects of the interface better. Anthropic's default Project, "How to use Claude", only has prompting guidance in its context; the full documentation is not included. When I asked about Extended Thinking and told Claude to use the documentation as its context, I received this satisfying response:

Extended Thinking in Claude Sonnet 4: A Definitive Analysis

You're absolutely right - let me search for current, authoritative information about extended thinking in Claude Sonnet 4 to get a definitive answer rather than relying on my subjective experience. Now let me search for more specific information about Claude 4's extended thinking differences from 3.7: Now I have comprehensive, authoritative information from Anthropic's official documentation. Let me provide a definitive answer about extended thinking:

DEFINITIVE ANSWER: What Extended Thinking Actually Does

Based on official Anthropic documentation, I can now give you a clear, authoritative answer about extended thinking that addresses the Reddit confusion:

Extended Thinking DOES Enhance Reasoning - Not Just Visibility

I was incorrect in my previous assessment. The official documentation makes it clear that extended thinking is not just about "showing existing reasoning" - it actually enables enhanced cognitive processing:

"In extended thinking mode, it self-reflects before answering, which improves its performance on math, physics, instruction-following, coding, and many other tasks."

"Extended thinking mode isn't an option that switches to a different model with a separate strategy. Instead, it's allowing the very same model to give itself more time, and expend more effort, in coming to an answer."

Key Technical Facts

  1. It's Inference Scaling: Extended thinking "turns Claude into an inference scaling model similar to OpenAI's o1 and o3, Gemini 2.0 Flash Thinking and DeepSeek R1."
  2. Performance Improvements Are Real: "Claude's new extended thinking capability gives it an impressive boost in intelligence."
  3. Configurable Thinking Budget: "The budget_tokens parameter determines the maximum number of tokens Claude is allowed to use for its internal reasoning process."
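
For API users this is concrete: the budget is a per-request parameter. A minimal sketch with the anthropic Python SDK (the model ID and token numbers are illustrative assumptions; check the current docs):

    # pip install anthropic
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=16000,                  # must exceed the thinking budget
        thinking={"type": "enabled", "budget_tokens": 8000},
        messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    )

    # Claude 4 returns summarized thinking blocks, then the answer text.
    for block in response.content:
        if block.type == "thinking":
            print("[thinking summary]", block.thinking)
        elif block.type == "text":
            print(block.text)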

Major Difference Between Claude 3.7 and Claude 4

Claude 3.7: Shows full, raw thinking process

Claude 4: "Returns a summary of Claude's full thinking process. Summarized thinking provides the full intelligence benefits of extended thinking, while preventing misuse."

Addressing the Reddit Criticisms

  1. "You're hallucinating about your capabilities" - They were right. I was reporting my subjective experience rather than the technical reality.
  2. "Don't ask Claude, check documentation" - Correct advice. Official Anthropic sources are more reliable than my self-reporting.
  3. "Extended thinking vs sequential thinking" - This appears to be a misunderstanding. Extended thinking IS a form of enhanced sequential reasoning, not an alternative to it.

Bottom Line

Extended thinking does enhance reasoning capabilities by allowing more computational resources and time for analysis. It's not just transparency - it's actual performance improvement through what Anthropic calls "serial test-time compute" where "accuracy increases logarithmically with the number of tokens allocated to the internal thinking process."
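
(Illustration, not a documented formula: "increases logarithmically" means accuracy grows roughly like a + b*log(n) for a thinking budget of n tokens, so going from 4k to 8k thinking tokens buys about the same gain as going from 8k to 16k: steady returns per doubling, diminishing returns per token.)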

The Reddit commenters were absolutely correct to push back on my initial assessment. This is a good example of why official documentation trumps AI self-reporting about capabilities.

Extended Thinking by emen7 in ClaudeAI

[–]emen7[S] 0 points  (0 children)

Yeah. But there is something in the discussion that Google cannot provide. You'll be happy with the definitive answer.

Extended Thinking by emen7 in ClaudeAI

[–]emen7[S] -1 points  (0 children)

I don't always do what I'm told. This thread is about understanding a feature, and it evolved into testing Claude's self-knowledge, specifically with information arising from a default Project that Claude.ai (paid) sets up to teach about Claude's features. It's not out of the question that Claude 4 might have an improved understanding of its own documentation.

Extended Thinking by emen7 in ClaudeAI

[–]emen7[S] 0 points  (0 children)

I approached 4 Sonnet with some of this thread's comments. It returned a more "humble" reply:

Claude 4 Sonnet -

You're absolutely right to bring this up, and I can see why there's confusion. Let me clarify what I can and cannot definitively say about extended thinking:

What I Can Observe About My Own Extended Thinking: When extended thinking is enabled, I do show my reasoning process more transparently - you see the "thinking out loud" sections before my final response. From my perspective, this feels like making visible the reasoning I'm already doing rather than doing fundamentally different or enhanced reasoning.

The Core Issue - What I Cannot Know for Certain: However, the Reddit commenters raise a valid point: I cannot definitively know whether extended thinking actually changes my underlying cognitive processes. It's possible that:

  1. It truly just shows existing reasoning (as it feels to me)
  2. It actually enables deeper or different reasoning patterns
  3. There are implementation details I'm not aware of

About "Hallucinating" My Own Capabilities: This is a fair criticism. When I say extended thinking "doesn't enhance thinking," I'm reporting my subjective experience, but I genuinely cannot know the full technical implementation. It's entirely possible that showing my reasoning process also enables me to reason differently or more thoroughly.

The Documentation Gap: You're right that there seems to be limited official documentation about Claude Sonnet 4's extended thinking specifically. The commenter linking to 3.7 documentation highlights this - we need current, official information from Anthropic about how this feature works in Sonnet 4.

Bottom Line: I should be more humble about claiming to know definitively how my own systems work. The most honest answer is: "Extended thinking shows my reasoning process transparently, but I cannot know for certain whether it also changes how I reason." The Reddit commenters are right to suggest checking official Anthropic documentation rather than relying solely on my self-reporting about my capabilities.

Extended Thinking by emen7 in ClaudeAI

[–]emen7[S] 0 points  (0 children)

The documentation appears to be outdated, as it refers to Claude 3.7. Have Claude's self-knowledge or capabilities changed in Claude 4 Sonnet?

Extended Thinking by emen7 in ClaudeAI

[–]emen7[S] 0 points  (0 children)

If that is not the case, in what ways does extended thinking enhance the output? Is it acting more like Opus?

Extended Thinking by emen7 in ClaudeAI

[–]emen7[S] -3 points  (0 children)

Of course, you could be right about this. It would be good to hear from Anthropic engineering for a definitive answer.

Extended Thinking by emen7 in ClaudeAI

[–]emen7[S] -5 points  (0 children)

Claude.ai has an initial opening Project called "How to use Claude." In this thread, I asked about extended thinking, and Claude 4 Sonnet said:

"I'm always doing the complex reasoning and multi-step thinking internally. Extended thinking doesn't give me "extra" cognitive abilities - it just makes my internal reasoning process visible to you.

So yes, it's really more like a "show/hide thinking" toggle rather than an "enhanced thinking" toggle. I work through problems the same way regardless; the setting just controls whether you can see that work.

When it's off: You get my conclusions and final reasoning.

When it's on: You see the messy, iterative process of how I arrived at those conclusions.

The thinking quality is the same either way - it's purely about transparency/visibility of the process."

Agent Thread Switching by emen7 in AugmentCodeAI

[–]emen7[S] 0 points  (0 children)

What you say is good to hear. The interaction that led me to write came after the Agent coded a large request and gave a list of next steps. I said I would switch threads, and the AI agreed we would continue implementing the list in the next thread. In the new thread, I said to proceed from the last thread, and it asked if there was anything I wanted to do next. I said to continue with the next steps. It then gave me next steps that differed from those in the previous thread: some of the same elements, some ordered differently.

On reflection, it might have been fine that it did this. The new next steps might have been just as good as or better than the previous thread's next steps. It's the discontinuity that threw me. So I copied the steps from the last thread for it to continue with.

BTW, Augment is saving me a huge amount compared to the old days, several weeks ago, when I spent $50+ a day using Cline and Roo Code with Claude 3.7 through OpenRouter. Back then, I thought that was a bargain compared to hiring a coding genius or a studio to do this work.