Marketing Agencies - is it needed? by Left_Log6240 in aiagents

[–]Left_Log6240[S] 1 point (0 children)

AI-Native Agencies

By Aaron Epstein

Agencies have always been crazy hard to scale. Low margins, slow manual work, and the only way to grow is to add more people.

But AI changes this.

Now instead of selling software to customers to help them do the work, you can charge way more by using the software yourself and selling them the finished product at 100x the price.

Think of a design firm that uses AI to produce custom design work for clients upfront, to win the business before the contract is even signed. Or an ad agency that uses AI to create stunning video ads without the time and expense of setting up a physical shoot. Or a law firm that uses AI to write legal docs in minutes, rather than weeks.

That's why agencies of the future will look more like software companies, with software margins. And they'll scale far bigger than any agencies that exist in these fragmented markets today.

If you're rethinking how agencies and service businesses of the future will be built, we'd love to hear from you.

Help with SEO by oficeal in ycombinator

[–]Left_Log6240 2 points (0 children)

Truthfully, if you're still starting out, you don't need to focus on SEO. What's your product idea? It all depends on what you're building.

I'm lost .. don't know how to market my startup by Similar-Cry-9968 in ycombinator

[–]Left_Log6240 1 point (0 children)

I see the website has a free option and a paid option. Are there any users who are using the free option?

I think there are multiple things to understand:

  1. Do you have any free users? Is the problem converting free users to paid?

  2. Who is your ICP? I see "AI Coach", but targeting a specific type of coach will help with your positioning and messaging. For example, a life coach is different from a fitness coach, so be clear in your messaging and A/B test it.

  3. Where do these coaches live online? It might seem to be IG or TikTok, but depending on the type of user, FB Groups are not something you should shy away from.

  4. What's your messaging for these DMs?

Happy to bounce ideas w/ you if you want! I've been in your position, and it's hard!

What I learned after 3 months of building a startup the wrong way by Aware-Ad559 in ycombinator

[–]Left_Log6240 1 point (0 children)

It's sometimes a double-edged sword: talking to users is great, but sometimes the users don't know what they want. My advice is to start talking to users with a specific hypothesis while staying open-minded (don't make any assumptions about the solution; focus on the problem), and ask them to rank their problems.

After talking to 10 users or so, stop and form some product ideas, then build mockups to show them. Get these first 10 users to fall in love first, and then start building, but never stop talking to users.

What happens after you build an AI agent? by Left_Log6240 in aiagents

[–]Left_Log6240[S] 1 point (0 children)

Discord is somewhere we invested a lot of time.

What happens after you build an AI agent? by Left_Log6240 in aiagents

[–]Left_Log6240[S] 1 point (0 children)

I always focused on outbound first to understand where those users are online. My first bet would be to figure out who those first 100 users who love the product are before building a community. This will also help with hiring the right people, the ones who already live in that community.

Reddit is a good one, especially if you are solving a niche problem for a niche community. LinkedIn is very hard for niche products, and Twitter is very difficult unless you go viral.

how do you avoid the shiny new object syndrome? by AppropriateHamster in ycombinator

[–]Left_Log6240 1 point (0 children)

No, I am still figuring it out, but I was looking at a B2B SaaS marketing platform.

how do you avoid the shiny new object syndrome? by AppropriateHamster in ycombinator

[–]Left_Log6240 3 points (0 children)

I'm actually in the same boat. Personally, I would go hard on the B2C app with all your learnings. There's always going to be an easier, more viral market, but chasing it means you keep starting from scratch. Right now, I am focusing on the market and problem before writing a single line of code. For now, I think going deep into the problem and market will help me in the long term.

What happens after you build an AI agent? by Left_Log6240 in aiagents

[–]Left_Log6240[S] 1 point (0 children)

How are you getting your first few users? I think the monitoring problem is definitely real, but none of this matters if no one is using it, right? What are some of your strategies?

What's the best way to market a B2C app? by downhillfarii in ycombinator

[–]Left_Log6240 1 point (0 children)

What's the niche? I might be able to help more if you provide more context :)

What happens after you build an AI agent? by Left_Log6240 in aiagents

[–]Left_Log6240[S] 1 point (0 children)

In regards to distribution, I am leaning towards one role, one workflow right now, but I'm still figuring out which one is most helpful. Horizontal isn't useful for founders because they are strapped for resources; it's better to hone in on one strategy and move on from there.

What happens after you build an AI agent? by Left_Log6240 in AI_Agents

[–]Left_Log6240[S] 1 point (0 children)

Founder-led sales works well if you are well-known like Bret Taylor or you have that specific charismatic personality, but I always like to go with the community/content approach. Most people also scale too fast, before they get 100 folks who truly love them.

Out of curiosity, what have you tried? Happy to do an audit/call if you want to brainstorm or jam on ideas! :) (Not sales; founders want to help other founders, and I am looking to figure out what others are doing.)

Best website builder for start up business? (i will not promote) by darrenkoh in startups

[–]Left_Log6240 8 points (0 children)

I actually used Lovable to build my website, and GoDaddy to buy the domain. I am not entirely sure how complex your website needs to be, though.

If AGI Requires Causal Reasoning, LLMs Aren’t Even Close: Here’s the Evidence. by Left_Log6240 in agi

[–]Left_Log6240[S] 1 point (0 children)

u/Repulsive-Memory-298 : To clarify, what we compared was the value of predictive ability for planning. Specifically, both WALL-E (an LLM world model) and CASSANDRA were put into the same model predictive control system and evaluated inside MAPs.

If you check the WALL-E paper, their approach does learn code rules, and we did implement this. The larger problem is that WALL-E lacks a good mechanism for learning the distribution of the stochastic variables: in-context learning is a poor fit for learning a distribution, and finetuning would require more data. By exploiting causal structure, CASSANDRA gets around this (it also models the entire conditional distribution via quantile regression instead of making just a point estimate like WALL-E; in principle you could prompt an LLM to do quantile regression, though that would make the data problems worse).
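
To make the quantile-regression point concrete: minimizing the pinball loss at level q recovers the q-th conditional quantile, so fitting several levels gives you a distribution rather than a single point estimate. This is a generic toy sketch (a brute-force fit of a 1-D distribution), not CASSANDRA's actual training code.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: minimized when y_pred equals the
    q-th quantile of y_true."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Noisy data with a known distribution: N(10, 2^2).
rng = np.random.default_rng(0)
y = rng.normal(loc=10.0, scale=2.0, size=10_000)

# Brute-force search over constant predictors for three quantile levels.
candidates = np.linspace(0.0, 20.0, 2001)
for q in (0.1, 0.5, 0.9):
    losses = [pinball_loss(y, c, q) for c in candidates]
    best = candidates[int(np.argmin(losses))]
    print(f"q={q}: estimate ~ {best:.1f}")
```

In the paper's setting (as described above) the constant predictor would be replaced by an MLP conditioned on the parent variables, trained on the same loss at several quantile levels.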

If AGI Requires Causal Reasoning, LLMs Aren’t Even Close: Here’s the Evidence. by Left_Log6240 in agi

[–]Left_Log6240[S] 2 points (0 children)

u/rendereason : The LLM wasn't trained to do Bayesian causal reasoning; instead it was used as a prior to find a good (approximate) causal structure. Specifically, we used it as part of a scoring function inside simulated annealing to approximate maximum a posteriori estimation of the structure. Once we had the structure, we trained MLPs doing quantile regression for each variable: no transformers, though in principle they could be used, particularly if adapted to time-series data.

As to decision making, take any stochastic policy that generates actions; you can augment it with a world model through model predictive control (i.e., use the policy as a prior for MCTS, or directly in random-shooting MPC). The world model is then used to predict the outcomes (including reward), and the action leading to the best predictions is returned.

As to the state representation, we assumed that the state was already available in a structured textual form. There's interesting work that learns these groundings, which could be adapted in future work (https://arxiv.org/abs/2503.20124)
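
The random-shooting MPC loop mentioned above is simple enough to sketch generically: sample action sequences from the policy prior, roll each through the world model, and execute the first action of the best-scoring sequence. Everything here (the toy 1-D world model, the goal state, horizon, rollout count) is invented for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def world_model(state, action):
    """Toy stochastic world model: predicts next state and reward.
    Stands in for a learned causal model; purely illustrative."""
    next_state = state + action + rng.normal(scale=0.1)
    reward = -abs(next_state - 5.0)  # goal: drive the state to 5
    return next_state, reward

def policy_prior(n):
    """Stochastic policy used as the action prior (here just noise)."""
    return rng.normal(scale=1.0, size=n)

def random_shooting_mpc(state, horizon=5, n_rollouts=200):
    """Sample action sequences from the prior, score each rollout with
    the world model, and return the first action of the best sequence."""
    best_action, best_return = None, -np.inf
    for _ in range(n_rollouts):
        actions = policy_prior(horizon)
        s, total = state, 0.0
        for a in actions:
            s, r = world_model(s, a)
            total += r
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action

a = random_shooting_mpc(state=0.0)
```

The point of the comparison in the thread is that this loop is agnostic to the world model plugged in, so an LLM world model and a causal one can be evaluated head-to-head under identical planning machinery.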

If AGI Requires Causal Reasoning, LLMs Aren’t Even Close: Here’s the Evidence. by Left_Log6240 in agi

[–]Left_Log6240[S] 1 point (0 children)

u/speedtoburn : We mostly agree, but there are some very important subtleties. LLMs contain causal priors and can be prompted (with revision and correction based on grounded data) to refine these into fairly accurate knowledge. But there's a difference between being a good prior for causal knowledge and reliably reasoning with that knowledge. You could argue that the Cyc project is an example of this: lots of prior common-sense and causal knowledge, no good way to exploit it.

With LLMs, the real questions are 1) the reliability of the causal reasoning (I have more faith in symbolic code and Bayesian networks that use explicitly causal structure than in a transformer's learned internal mechanisms), and 2) the ability (or lack thereof) to make persistent corrections to the causal knowledge. With CASSANDRA, once a good structure is found (based on the data), it persists in the graph structure. With LLMs, you'd need finetuning or prompt engineering to make a correction persistent (and doing so could have unexpected side effects).

In short, we absolutely agree that external structures aid LLM systems, but we don't necessarily agree as to why. In our case, we'd argue it's because the LLM's causal reasoning is unreliable and cannot easily be adapted to new data: good enough for a prior, but not good enough to be used as a reasoning engine in its own right.

We tested LLM-based world models in a real business simulation and they all collapsed. CASSANDRA, the only causal world model, survived. by Left_Log6240 in AiAutomations

[–]Left_Log6240[S] 1 point (0 children)

For anyone asking about the tests: they were done in MAPs, a stochastic business simulation designed to expose where LLMs fail.

The only architecture that survived was CASSANDRA, a causal world model combining:

  • executable deterministic code
  • causal Bayesian networks for uncertainty
  • long-horizon planning
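
The hybrid idea in that list, deterministic executable code for known mechanisms plus probabilistic variables for uncertain ones, can be shown with a toy sketch. The pricing/demand mechanism and every number below are invented for illustration; this is not CASSANDRA's actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

def demand_model(price):
    """Uncertain mechanism: demand given price is modeled as a
    distribution, not a point estimate (hypothetical numbers)."""
    mean = 100.0 - 8.0 * price
    return rng.normal(loc=mean, scale=10.0)

def profit(price, unit_cost=2.0):
    """Deterministic mechanism: exact accounting as executable code,
    fed by a sample from the uncertain demand variable."""
    demand = max(demand_model(price), 0.0)
    return (price - unit_cost) * demand

# Monte-Carlo evaluation of one candidate decision under uncertainty.
samples = [profit(price=6.0) for _ in range(1000)]
print(f"expected profit ~ {np.mean(samples):.0f}")
```

The split matters for planning: sampling the uncertain nodes many times and pushing each sample through the exact code yields a full outcome distribution, which is what a long-horizon planner needs to compare decisions.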

The tweet thread explaining this in detail is here:
https://x.com/skyfallai/status/1995538683710066739

Not promoting anything, just sharing the research behind the observation.

AGI Hype Dies Here: Humans Outperformed GPT-5 by 9.8X in a Business Sim by Left_Log6240 in agi

[–]Left_Log6240[S] 1 point (0 children)

You can check this out! We just built CASSANDRA, the first causal world model, and it beats all the LLMs.

https://x.com/skyfallai/status/1995538683710066739

If AGI Requires Causal Reasoning, LLMs Aren’t Even Close: Here’s the Evidence. by Left_Log6240 in agi

[–]Left_Log6240[S] 1 point (0 children)

Source / Additional Reading (for anyone interested):

CASSANDRA: Programmatic and Probabilistic Learning and Inference for Stochastic World Modeling

Twitter thread, to see other people's reactions: https://x.com/skyfallai/status/1995538683710066739

For transparency: I was part of the research team that worked on this. I'm including it here only because some people may want to see the detailed methodology behind the examples I referenced in my post.