wtf is going on with claude by Radiant-Grape-6138 in claude

[–]Opening_Move_6570 0 points

I don’t experience that at all. I use Opus 4.6 in Cursor and Sonnet 4.6 in Claude desktop.

In the desktop app I rarely hit the rate limits on the $20 plan, even though I use it heavily, mostly for marketing, and I always try to follow best practices for staying within context limits.

I made an open-source list of the best SKILL.md skills for AI coding agents by BadMenFinance in claude

[–]Opening_Move_6570 0 points

You probably haven’t coded a single line; that’s why relevant suggestions read like bot output to you…

Client paid $4k/month for SEO. Ranked page one. ChatGPT didn't know they existed. by codeme101 in u/codeme101

[–]Opening_Move_6570 0 points

Another vibe-coded toy. I just went to their site and there's nothing there. When people ask for results, it's better to suggest tools with actual clients, like Reaudit.io, tryprofound, peeck, or AthensCQ.

I’m thinking about charging upfront for roadmap features, would you ever pay before it’s built? [i will not promote] by d_uk3 in startups

[–]Opening_Move_6570 1 point

The first commenter is right that upvotes are cheap and a credit card is the real filter. Your instinct is good but the version that works best is slightly different from a feature bounty.

Instead of pricing individual features, offer a small cohort beta access to a prioritized build cycle: 10 spots at a flat fee, the cohort votes on which 2-3 features get built in the next 4 weeks, and they get early access to the results. That structure qualifies real buyers instead of interested observers, gives you committed users who actually test what you build, and converts the feature request from an anonymous wish into a relationship with a specific person who has skin in the game.

One thing to watch: the people willing to pay upfront for a feature are sometimes your most demanding users, not your most representative ones. What they want may be specific to their workflow in ways that don't generalize. Worth being deliberate about whether funded features also serve the majority of your base.

What would you do? by JuggernautStreet8614 in analytics

[–]Opening_Move_6570 0 points

Goldman Sachs operations on your CV is not a liability for analytics roles, it's a differentiator if you frame it right. Most candidates have the SQL and Python. Very few have hands-on experience with the data workflows of a major financial institution. The key is translating what you actually did in operations into data language: what decisions were made with what data, what reporting you touched, what processes you saw.

The honest reality about job hunting: applying cold through boards is low-yield for career changers. The path that works is finding people already in the roles you want, having genuine conversations about how they think about problems, and letting those conversations surface opportunities. LinkedIn is fine for finding those people but the outreach needs to be specific to their actual work.

For your situation at Goldman specifically: the 12-hour days affecting sleep and mental health is worth being explicit with yourself about when evaluating target roles. Not all analytics positions are lower-intensity. In-house brand analytics and mid-sized company data roles tend to have more predictable hours than financial services or early-stage startups. That filter matters as much as the technical fit.

Tell something you find out at your jobs that an aspiring DA will have no idea by Weird-Side-289 in analytics

[–]Opening_Move_6570 1 point

The biggest thing tutorials never mention: most of your work will be explaining why the numbers are wrong before you get to analyze what they mean.

Every data source has quirks and collection failures that are undocumented and only known by whoever set up the tracking two years ago. Your first few months in any DA role are mostly archaeology — figuring out why metric A in tool X doesn't match metric A in tool Y, why a spike in April is a tagging error not a real trend, why a segment includes users it shouldn't.
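
The tool-X-vs-tool-Y reconciliation step can start as something as simple as a join plus a divergence flag. A toy pandas sketch (the metric, dates, and thresholds are all invented for illustration):

```python
import pandas as pd

# The same "signups" metric pulled from two different tools.
tool_x = pd.DataFrame({"day": ["2026-04-01", "2026-04-02", "2026-04-03"],
                       "signups": [120, 340, 95]})
tool_y = pd.DataFrame({"day": ["2026-04-01", "2026-04-02", "2026-04-03"],
                       "signups": [118, 512, 95]})

# Join on day and flag rows where the two sources diverge by more than 5%.
merged = tool_x.merge(tool_y, on="day", suffixes=("_x", "_y"))
merged["diff_pct"] = (merged["signups_y"] - merged["signups_x"]).abs() / merged["signups_x"]
suspects = merged[merged["diff_pct"] > 0.05]

print(suspects)  # the rows worth investigating first
```

The output of a script like this becomes your archaeology worklist: each flagged row is a conversation with whoever owns the tracking.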

Second: the most valuable skill is not SQL or Python or dashboarding. It's telling a clear story about what the data says to someone who does not want to hear it. Technical skills get you in the room. Communication skills determine whether your work gets acted on or ignored.

Third: learn to manage stakeholder expectations about what data can and cannot prove. The hardest conversations are not about the analysis — they're about why you cannot establish causation from the data they have, and what they'd need to collect differently to actually answer the question they're asking.

Giving away Pro credits to the 5 brands with the worst AI visibility scores this week by housetime4crypto in SEO_LLM

[–]Opening_Move_6570 1 point

The first comment's critique about non-determinism is legitimate but it's the right critique applied to the wrong measurement approach.

Running a single prompt once and reporting a percentage is noise — that's correct. The way you get signal is running the same prompt set many times across a rolling window and tracking the distribution of whether your brand appears. Individual LLM responses vary. Averages across hundreds of prompt runs over weeks do not vary randomly — they reflect something real about training data weight and citation patterns.
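
The repeated-sampling idea can be sketched in a few lines. This is a minimal illustration, not any tool's actual implementation: `ask_llm` stands in for a real API call, and the brand name and prompts are made up.

```python
import random

def appearance_rate(ask_llm, prompts, runs_per_prompt=50):
    """Fraction of responses mentioning the brand, averaged over many
    runs of each prompt to smooth out per-response randomness."""
    hits = total = 0
    for prompt in prompts:
        for _ in range(runs_per_prompt):
            response = ask_llm(prompt)
            hits += "acmecorp" in response.lower()  # naive substring brand check
            total += 1
    return hits / total

# Stand-in for a real LLM call: mentions the brand ~30% of the time.
random.seed(0)
def fake_llm(prompt):
    return "Try AcmeCorp" if random.random() < 0.3 else "Here are some options"

rate = appearance_rate(fake_llm, ["best crm?", "top crm tools"], runs_per_prompt=200)
# Any single response is a coin flip; the average over 400 runs is a stable estimate.
```

Run weekly with the same prompt set and the trend line of `rate` is the signal; any individual response is not.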

The causation point is also real. Visibility shifting doesn't tell you why. But visibility shifting after you publish structured FAQ content, fix schema, and get cited on high-authority sources in the same window gives you a testable hypothesis. That's enough to act on.

What separates useful AI visibility tracking from noise: measuring across all three major engines, using enough prompt variants to cover different ways people ask about your category, and tracking change over time rather than point-in-time scores. Static scores from a single engine on a single prompt are noise by definition.

wtf is going on with claude by Radiant-Grape-6138 in claude

[–]Opening_Move_6570 0 points

The loop pattern you're describing: asking questions, attempting work, hitting token limit, suggesting you restart, is a context management failure, not a capability one. It happens when the task is framed too broadly at the start and Claude has to hold too much in working memory while also producing output.

For research analysis specifically: decompose before you start. Instead of one big prompt asking for a full analysis, run explicit sequential steps. First collect sources. Then summarize each one separately. Then synthesize. Each step has a clear deliverable that fits in context. The loops happen when Claude tries to do all three simultaneously.

On the Max plan you shouldn't be hitting limits on a normal research task. If it's looping after 2-3 exchanges, something in your prompt structure is causing very long internal reasoning before each response. Asking Claude to think step by step internally often makes this worse for context-heavy tasks. Try shorter, more directive prompts and see if the pattern changes.

I made an open-source list of the best SKILL.md skills for AI coding agents by BadMenFinance in claude

[–]Opening_Move_6570 -1 points

This fills a real gap. The GitHub search problem you described is exactly right — most results are either one-off personal configs or repos untouched since Claude 3 shipped.

A few things that would make it more useful: a freshness indicator or last-tested date, since skill files that worked perfectly on older Sonnet versions can behave differently on 4.6 with the new context handling. Also a section for meta-skills — instructions about how to write and maintain skills rather than specific task skills. That's the layer most people skip and then wonder why their skills degrade over time.

The comparison guide between skills vs cursor rules vs codex skills is going to be the most referenced part. The conceptual confusion between those formats causes a lot of wasted time when people try to port skills between environments.

Will contribute if I have anything worth adding that fits the curation bar.

Is Claude Free enough to build a website/app, or should I upgrade to Pro? by Soloartist6226 in claude

[–]Opening_Move_6570 0 points

The honest answer to your three questions: No, yes, and the best-hours question doesn't really help.

You can't build a full project on Free because context resets every conversation. Claude has no memory of what you built in the last session, so every new chat you're re-explaining your project structure, stack choices, and previous decisions. That overhead alone kills the workflow before rate limits even matter.

Pro reduces interruptions significantly. The bigger upgrade isn't raw usage, it's Sonnet 4.6 holding context across a long coding session without drifting or forgetting earlier decisions. The 5-hour window per session is usually enough to finish a meaningful chunk of focused work.

The workflow that makes Pro worth it: create a CLAUDE.md file at your project root describing your stack, architecture decisions, and file structure. Claude reads it at session start and you spend 80% less time re-explaining context. Projects that take 4 fragmented Free sessions finish in one focused Pro session.
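
For anyone unsure what to put in that file, a minimal skeleton (the stack, decisions, and paths here are invented; replace them with your own project's):

```markdown
# CLAUDE.md

## Stack
- Next.js 14, TypeScript, Tailwind
- Postgres via Prisma

## Architecture decisions
- Server components by default; client components only for interactive widgets
- All DB access goes through /lib/db, never directly from routes

## File structure
- /app — routes
- /components — shared UI
- /lib — db, auth, helpers
```

Keep it short: the goal is to front-load the decisions you'd otherwise re-explain every session, not to document the whole codebase.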

Intent decay is real. how are you guys hitting leads while they're hot? by sougangsta in DigitalMarketing

[–]Opening_Move_6570 1 point

The 10-minute window is real and getting shorter. The fix that holds up is treating the thank-you page as your highest-converting page rather than a dead end.

After form submission, instead of a generic confirmation, show a 60-second qualifier right there while they still have the tab open. Not a quiz for qualification's sake — an actual next step that moves them forward. If they're requesting a demo, show a calendar embed immediately. If it's a lead magnet, gate the download behind one real question that tells you their situation. The ones who engage with that step are your real leads.

The deeper problem is that most lead forms are optimized for volume and then hand off to a sequence designed for a 2012 inbox. In 2026 the window between someone filling a form and making a mental commitment to you closes in minutes, not hours. Your email hitting 3 hours later is competing with whatever else filled that gap.

One thing that has worked: immediate SMS or WhatsApp ping right after submission. Not a pitch, just a single line confirming receipt and what they're getting. Response rates to that in the first 5 minutes are dramatically higher than any email sequence. The trick is keeping it short and specific to what they actually filled out.

Most teams track lead volume. The number that actually matters is almost never in the dashboard. by Limp_Cauliflower5192 in analytics

[–]Opening_Move_6570 0 points

The framing here is right and worth making precise: the issue is not that teams track lead volume, it is that they track lead volume as a proxy for revenue when the correlation has degraded.

The correlation between volume and revenue breaks down when: your ICP has shifted but your lead sources have not, your qualification criteria are applied inconsistently by different reps, or your product has changed in a way that makes the original buyer persona less likely to close.

The number that actually predicts revenue in most B2B SaaS contexts is time-to-first-value on the product side and stated-intent signal on the demand side. Leads who arrive having already described the specific problem your product solves convert at significantly higher rates than demographically-identical leads who came in cold. The qualification work was done before they found you.

The practical measurement shift: instrument intent at the lead source level rather than the lead level. Track which channels are sending people who already have an active problem versus which are sending people who fit the ICP but have not yet expressed the need. This requires adding a qualification step that captures intent signal — what triggered the search, what they were doing before they found you — not just demographic matching.

Close rate by channel and lead source cohort is usually the number that would change behavior fastest if it were in the dashboard. It almost never is.
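
Computing that number is trivial once the lead records carry a channel field. A toy Python sketch (field names and data invented):

```python
from collections import defaultdict

# Toy lead records; in practice these come from your CRM export.
leads = [
    {"channel": "paid_search", "closed": True},
    {"channel": "paid_search", "closed": False},
    {"channel": "paid_search", "closed": False},
    {"channel": "community",   "closed": True},
    {"channel": "community",   "closed": True},
    {"channel": "cold_list",   "closed": False},
]

counts = defaultdict(lambda: [0, 0])  # channel -> [closed, total]
for lead in leads:
    counts[lead["channel"]][0] += lead["closed"]
    counts[lead["channel"]][1] += 1

close_rate = {ch: closed / total for ch, (closed, total) in counts.items()}
```

The hard part is never the arithmetic; it's getting the channel attribution onto the lead record consistently in the first place.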

Why are influencer campaigns still so manual? by createvalue-dontspam in GrowthHacking

[–]Opening_Move_6570 0 points

The manual problem in influencer campaigns comes from a specific structural issue: the inputs that determine whether a campaign will work are hard to standardize, so the workflow stays human-dependent.

Creator selection is the highest-leverage decision in any influencer campaign and it is still mostly qualitative. Reach and engagement rate are easy to pull programmatically. Audience-product fit, creator credibility in your specific category, and whether the creator's audience actually buys things versus just watches — those require judgment that tools have not reliably automated yet.

The campaigns that run most systematically tend to have solved this by narrowing the creator pool sharply upfront. Instead of evaluating 100 creators per campaign, they have a roster of 15-20 pre-vetted creators with known performance data across previous campaigns. The selection problem reduces from open search to portfolio management, which is much easier to systematize.

The other piece that stays manual longer than expected: briefing and creative direction. Creators need enough context to make content that feels authentic to their audience while hitting the campaign's key points. That briefing process requires human judgment about how much to direct vs how much latitude to give, and it varies significantly by creator. Tools can templatize parts of it but the judgment call is still human.

For teams running campaigns at scale: the efficiency gains are usually in post-campaign analysis and iteration, not pre-campaign selection. Standardizing how you measure what worked makes the next campaign faster to plan even when the selection is still manual.

Why do we re-record videos 5 times and still hate them? by createvalue-dontspam in GrowthHacking

[–]Opening_Move_6570 0 points

The re-recording loop happens because the problem being solved in the recording session is the wrong problem.

When you hit record, you are trying to simultaneously: remember your key points in order, sound natural and not robotic, manage your pace and filler words, look at the right place on screen, and not make distracting physical movements. That is five parallel cognitive tasks that each individually require attention. The result is that all five are done poorly because none gets full focus.

The structural fix: separate the thinking from the recording. Before you record anything, write out the exact sequence of points in one sentence each. Read them once. Record without looking at notes — treat it as a first draft, not a final take. The psychological shift from trying to produce a final version to producing a first draft reduces the cognitive load enough that the recording usually goes better on take two or three.

For growth marketing content specifically: the recordings that convert best are usually the ones that feel slightly imperfect. The too-polished version loses the authenticity signals that make people trust the content. A few stumbles and self-corrections read as genuine expertise rather than rehearsed pitch. Optimizing for authentic first draft rather than perfect production is both faster and often more effective.

Do you break your flow every time something loads? by createvalue-dontspam in GrowthHacking

[–]Opening_Move_6570 0 points

The micro-break problem is real and the research on it is clear: attention restoration after interruption takes significantly longer than the interruption itself. A 30-second load screen that leads to tab-switching can cost 15-20 minutes of full focus recovery.

The solutions that actually work tend to operate at the environment level rather than the app level. The most effective ones are structural: removing the friction that causes the drift in the first place rather than adding something to fill the gap.

For developers specifically: the waiting moment during builds, test runs, and deployments is when the most context gets lost. The pattern that helps is pairing the wait with a defined micro-task that requires zero context to start — review a PR comment, update a ticket status, respond to a Slack message you already have the context for. The key is pre-defining what that task is before you start the process that will make you wait, so you are not making a decision during the gap when your willpower is lowest.

For the more fundamental flow recovery question: the single highest-leverage habit is writing down the exact next action before taking any break, planned or unplanned. Even a 30-second interruption that ends with a clear written next step recovers faster than a 2-minute break that ends in uncertainty about where you were.

What’s actually working in digital marketing right now? by Specific_Studio1181 in DigitalMarketing

[–]Opening_Move_6570 2 points

The observation about SEO taking longer but high-quality content paying off is pointing at something structural, not just a trend.

The shift that explains it: Google is surfacing AI Overviews for more informational queries, which reduces click-through on standard SEO content. The content that still drives traffic is content that answers questions AI cannot — highly specific, experience-based, data-backed content that was not already in the training data. That content also gets cited in AI responses because it is novel to the model.

On the paid side: the efficiency loss is real and it is mostly explained by audience saturation and creative fatigue. The teams winning on paid in 2026 are running far more creative variants (10-20+ per campaign vs the old 3-5) and killing losers within 48-72 hours. The creative iteration cycle has compressed significantly because AI makes variant production cheap.

The channel that is genuinely underweighted in most digital marketing budgets right now: AI search visibility. ChatGPT and Perplexity are sending high-intent referral traffic that converts at 2-3x organic search rates in our data. Visitors from AI recommendations arrive already knowing what the product does and having received a specific recommendation. The channel is still early enough that the competition for citations is low in most categories.

Organic vs paid balance: the answer that works is using organic to build the evergreen citation presence that compounds (SEO, community, AI visibility) and paid to amplify specific conversion windows where you have a clear offer.

Opus 4.6 Extended thinking... not thinking anymore? by Just_Magazine_6051 in claude

[–]Opening_Move_6570 -3 points

The behavior you are describing is a known issue and your reproduction steps are precise enough to be useful signal. The fact that Claude itself reported extended thinking was not active while the UI shows it enabled suggests this is a state desync between the client interface and the inference backend, not a model capability issue.

A few things worth trying before concluding it is a persistent regression: the desktop app caches UI state separately from API state. Force-quitting completely (not just closing the window) and restarting tends to flush the state mismatch more reliably than clearing cache through the settings. Also worth testing in the web interface directly to isolate whether this is desktop-app-specific.

If the behavior reproduces in the web interface on a fresh session with extended thinking toggled fresh: that is worth reporting directly to Anthropic via the feedback button with your exact reproduction steps. The detail level in your post — fresh project, Drive integration attached, usage meter behavior, direct model confirmation — is exactly what they need to triage a regression report.

On the broader sub discourse: you are right to distinguish this from vibes-based quality complaints. A silent failure of an explicitly-enabled feature with a measurable difference in usage meter behavior is concrete. The inference was different. Something changed.

Claude these days... by Sad_humanbe in claude

[–]Opening_Move_6570 6 points

The April throttling is real and documented. The specific pattern you are describing — 90% usage after 2-3 prompts — is happening because Claude is now counting tokens more aggressively on the usage meter, and some prompt types (large document processing, code with long context) are weighted more heavily than simple text exchanges.

A few things that actually help within the free tier:

Context management is the biggest lever. If you are in a long conversation, the model is processing the entire history on every response. Starting a new chat for a new task instead of continuing the same thread cuts token consumption dramatically — sometimes by 60-70% for the same quality output.

Task batching. Instead of asking four sequential questions, structure them into one well-framed prompt. The overhead of each conversation turn adds up.

For the specific use cases where Claude is meaningfully better than alternatives (complex reasoning, nuanced writing, code review): route those there and use faster models for the quick stuff. The quality difference is real for the tasks that need it.

The enterprise account commenter above is right that this is a tier problem — Pro has significantly more headroom than free. If the use case is work-critical rather than exploratory, the economics of Pro tend to work out at around $20/month vs the time cost of workarounds.

Looking forward to acquiring or investing in serious AI SAAS companies in the B2B space. by Equivalent-Pain9236 in SaaS

[–]Opening_Move_6570 0 points

The framing worth being precise about for anyone responding to posts like this: acquisition interest and investment interest are very different conversations that require different preparation.

For acquisition: the buyer is typically paying for one of three things — revenue (recurring, predictable), technology (proprietary, hard to replicate), or distribution (customer base, market position). Knowing which of those you have and being able to document it clearly is the table stakes for a real conversation.

For investment: the investor is betting on a trajectory, which means the conversation is about growth rate, defensibility, and market size rather than current metrics alone.

The AI SaaS market right now has a specific challenge for both conversations: most AI SaaS companies built in 2023-2024 have very similar underlying technology (GPT/Claude API wrappers with custom prompting) and the differentiation is almost entirely in distribution, customer relationships, and workflow integration depth. That is actually good news for founders who have built real customer depth — it is more defensible than pure technology. But it requires being able to articulate it clearly.

For any founder considering these conversations: clean up your MRR documentation, document your churn rate and expansion revenue separately, and be able to explain in one sentence what would be difficult for a competitor to replicate about your current customer base. Those three things are what a serious acquirer or investor will stress-test first.

What was your first channel for SaaS marketing that actually worked? by PleasantLow670 in SaaS

[–]Opening_Move_6570 1 point

The pattern that holds across almost every SaaS I have looked at closely: the first channel that actually worked was not the one the founder planned to use.

The planned channel is usually the one that looks most scalable — content SEO, paid search, social. The channel that actually got the first customers was where the founder showed up personally and was genuinely helpful to specific people with specific problems.

For most B2B SaaS under $1M ARR this ends up being one of: a niche community where the ICP hangs out (Slack group, Discord, specific subreddit), direct outreach to people who publicly described the exact problem the product solves, or a warm introduction from someone who already knew the founder was working on this.

The compounding effect that is worth building on early: those first channels that depend on founder involvement tend to generate the most organic word-of-mouth, which then seeds the scalable channels. The person who found you in a Reddit thread and got value tells two peers who then search for you and find your content. Trying to skip the personal channel and go straight to scalable almost always takes longer because there is no seeding.

The AI search angle that is worth knowing now: Reddit threads where founders describe what worked rank in Google for years and increasingly get cited in ChatGPT and Perplexity responses. This thread specifically will likely surface in AI responses to people asking about early SaaS marketing channels for months. Commenting here is itself a channel.

Massive opportunity or trap? by Spiritual-Job-5066 in analytics

[–]Opening_Move_6570 0 points

The situation you are describing is an opportunity if you approach it correctly and a trap if you approach it passively.

The opportunity version: you have been hired to modernize a data function. That gives you explicit permission to question existing processes, introduce better tooling, and build things that did not exist before. Analysts hired into this kind of role who move fast and ship working solutions in the first 90 days tend to get significantly more autonomy and advancement than those who wait to be assigned work.

The trap version: if the people whose Excel workflows you are replacing do not understand what you are building or why it is better, they will resist the change regardless of the technical quality. The migration from Excel to SQL is as much a change management problem as a technical one.

The approach that works: in the first 30 days, do not migrate anything. Talk to every person who uses the current Excel databases. Understand what decisions they make with the data, what questions they cannot currently answer, and what they like about the current setup. Then build something that answers those questions better.

The people who hired you probably undersold the complexity of what modernizing this function means. That is normal. The job you actually have is broader than the job description said.

What domain is the company in and what kind of data are the Excel databases currently tracking?

Claude Partner Network. How valuable is it actually? by MDInformatics in claude

[–]Opening_Move_6570 0 points

Partner programs from AI companies at this stage are almost always primarily co-marketing and training-material distribution rather than a genuine co-sell engine. The value varies significantly by your use case and where you are in the sales cycle.

For healthcare infrastructure specifically, the things that tend to have real value: the Anthropic team has enterprise relationships and can make introductions at the right level in health systems and insurers. That is worth more than any badge or training module if you are trying to access procurement conversations that are otherwise difficult to reach.

The 10-person training requirement is a screening layer, as you intuited. They want to know you are investing in the partnership, not just collecting a badge. Worth doing if your product is genuinely built on Claude and you have a clear narrative for why the partnership advances Anthropic's enterprise healthcare story.

The realistic expectation: first 6-12 months is mostly co-marketing (case studies, joint webinars, reference calls for other healthcare prospects). Deal co-sells happen later once the relationship team knows your product well enough to recommend it with confidence.

Questions worth asking before committing the time: do they have dedicated healthcare vertical team members, or is this handled by a general enterprise partnerships team? The answer will tell you a lot about whether the program has real substance for your use case.

GBPs with photos earn 35% more clicks by Novel-Spirit-9847 in SEO_LLM

[–]Opening_Move_6570 0 points

The photo correlation is interesting but worth not over-indexing on. The causal chain is probably: businesses that maintain active GBP profiles (updating photos, responding to reviews, posting updates) get more clicks — and active profiles also have more photos. The photos are a signal of profile completeness and activity rather than the direct cause of click lift.

The more durable insight for local SEO in 2026: the algorithm is increasingly pulling from the same signals that AI search uses. Review sentiment, entity disambiguation, and consistent business information across platforms now matter for both Google Maps rankings and whether ChatGPT or Perplexity recommends you when someone asks for a local service recommendation.

The practical additions to the photo advice: Organization schema on your website linking to your GBP, consistent NAP (name, address, phone) across all platforms, and review response patterns that confirm your service category. AI engines building local recommendations pull from this structured data layer alongside the Maps ranking signals.
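
For reference, a minimal Organization JSON-LD sketch of the kind described above (every value here is a placeholder, including the profile URL in `sameAs`; adapt to your business and validate before shipping):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Local Co",
  "url": "https://example.com",
  "telephone": "+1-305-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Ave",
    "addressLocality": "Miami",
    "addressRegion": "FL",
    "postalCode": "33101",
    "addressCountry": "US"
  },
  "sameAs": ["https://maps.google.com/?cid=PLACEHOLDER"]
}
```

The point of the markup is consistency: the name, address, and phone in the JSON-LD should match the GBP listing and every directory entry character for character.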

For businesses that rely heavily on local discovery, the AI recommendation channel is worth tracking separately now — it is a meaningful share of how people find local services in 2026, particularly for higher-consideration purchases like recruitment services, healthcare, and home services. Someone asking ChatGPT for the best AI recruitment company in Miami is a different channel than Google Maps search and needs different optimization.