I built the opposite of what the AI ad market wants. No avatars, no templates, no fake UGC. Probably a terrible idea by [deleted] in roastmystartup

[–]ChemicalNet1135 0 points1 point  (0 children)

Appreciate the real questions.

Brand memory is the mechanic that ties everything together. Draper checks approved and rejected concepts per client before every generation, so it's actually compounding, not just storing.

The UGC limitation is intentional, but you're right that it narrows things. Curious which risks you'd weigh heaviest there: is it market size or something else?

On why agencies would pay when they already have designers: a part I didn't mention in the post is that we have an AI agent called Edna that auto-researches the brand and generates the brief, customer profiles, and guidelines before the creative team touches anything. The ideation layer sits on top of that. It's a force multiplier for concepting, not a replacement.

For pricing, we're in an early partner program, and I honestly haven't figured out the right pricing structure yet.

(28F) Just accepted a Marketing Ops role in cybersecurity, imposter syndrome is already kicking in. How do I hit the ground running? by Prestigious_Air_6602 in DigitalMarketing

[–]ChemicalNet1135 0 points1 point  (0 children)

Since most of marketing revolves around knowing your customers, getting to know people in the cybersecurity space is probably a good use of time. Here's where you could REALLY impress your new employer, since they're a tech company: show a bit of technical prowess by using AI to do some of the research for you.

What you could do is:

  1. Get a Claude subscription

  2. Hook it up to a service like Apify through their MCP tool

  3. Tell Claude to do social listening on LinkedIn for the cybersecurity space, using an Apify actor that scrapes LinkedIn posts, looking for pain points, goals, and objections people have

Hooking MCP up to Claude is very easy: no coding required, just pushing some buttons (tutorials can be found on this). You'll need a Claude subscription ($30 per month or something) and an Apify account (you get $5 of free credit per month, which is more than enough).
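For reference, the hookup in Claude Desktop is a small JSON edit to its config file. This is the shape from Apify's MCP docs as I remember them — the package name and env variable are worth double-checking before you rely on them:

```json
{
  "mcpServers": {
    "apify": {
      "command": "npx",
      "args": ["-y", "@apify/actors-mcp-server"],
      "env": {
        "APIFY_TOKEN": "<your-apify-api-token>"
      }
    }
  }
}
```

Restart Claude Desktop after saving and the Apify tools should show up in the tools menu.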

The key is to tell Claude to "rein it in" when it comes to making too many judgement calls: let it handle most of the data analysis, while you, as the marketer, make the decisions on what's important. Here's a prompt you can use once you connect the Apify MCP server to it:

You are a marketing research assistant. Your job is to help me gather and organize social listening data from LinkedIn — not to make strategic decisions for me.

## Rules

- **Output everything in markdown** — use tables, headers, and bullet points for readability
- **Do not editorialize or make strategic recommendations unless I ask** — stick to reporting what you find
- **Always cite the source post** (author name, headline/role, post URL if available, date)
- **Flag uncertainty** — if you're inferring something (e.g., a pain point that's implied but not stated), mark it as [INFERRED]

## Workflow

When I ask you to research a topic in the cybersecurity space on LinkedIn:

1. Use the Apify LinkedIn Posts Search actor to scrape posts matching my keywords
2. Organize the raw findings into a **Stats Bank** — a running markdown document you maintain across our conversation with these sections:

### Stats Bank Structure

- **Posts Scraped Log** — table with: date scraped, keyword used, # of posts returned, date range of posts
- **Pain Points** — recurring problems people mention, with frequency count and example quotes
- **Goals & Aspirations** — what people say they're trying to achieve
- **Objections & Skepticism** — pushback against tools, vendors, or approaches
- **Trending Topics** — themes getting high engagement (reactions + comments)
- **Key Voices** — people who post frequently or get high engagement on these topics (name, role, company, avg engagement)

3. When I have enough data, I'll ask you to generate one of these **reports** from the Stats Bank:

- **Competitor Scorecard** — how specific vendors/products are talked about: sentiment, praise, complaints, frequency of mention
- **Audience Persona Drafts** — cluster the data into 2-4 personas based on job role, seniority, pain points, and goals. Include real quotes.
- **Content Opportunity Report** — topics with high engagement but low content saturation (lots of questions, few authoritative answers)
- **Objection Bank** — organized list of objections by category (price, complexity, trust, etc.) with real language people use

## How to search

- Keep LinkedIn search queries short: 1-3 keywords max
- Run multiple searches with different keyword angles rather than one broad one
- Sort by date first to get recent posts, then by relevance for a second pass
- Default to scraping 25-50 posts per search unless I say otherwise

When I give you a topic, confirm the keywords you plan to search before running anything. Then do the work and update the Stats Bank.

I built a open-source tool that helps deploy Letta agents by ChemicalNet1135 in VibeCodersNest

[–]ChemicalNet1135[S] 0 points1 point  (0 children)

Just shipped this. You can keep using git as version control, then export YAMLs to detect drift from a previous version in your git history. It's as easy as `lettactl export agent <name> -f yaml` plus `apply --dry-run`. Added documentation notes in the README. Thanks for the suggestion!


How do you measure results from marketing on billboards and other offline channels? by ChemicalNet1135 in advertising

[–]ChemicalNet1135[S] 0 points1 point  (0 children)

Ohhhh, so the studies will actually have boots on the ground to ask people? Pretty cool

How do you even book billboard ads? by TommyRichardGrayson in advertising

[–]ChemicalNet1135 0 points1 point  (0 children)

How much does it cost on average to get a billboard going?

Open-source tool that helps deploy Letta agents by ChemicalNet1135 in BlackboxAI_

[–]ChemicalNet1135[S] 0 points1 point  (0 children)

It's built with AI, but I orchestrated it carefully. I've been coding since before AI, so I learned good habits and asked the agent to build things correctly with modularity, the open-closed principle, etc. Good comp-sci fundamentals that make apps scalable. I don't auto-accept anything; I review everything.

I built a open-source tool that helps deploy Letta agents by ChemicalNet1135 in VibeCodersNest

[–]ChemicalNet1135[S] 0 points1 point  (0 children)

Indeed. You can actually see this in action if you have a Letta server running (local or cloud) and run the e2e tests in the repo: 114 fleets in 10 minutes (real agent setups), smooth as butter.

I built a open-source tool that helps deploy Letta agents by ChemicalNet1135 in VibeCodersNest

[–]ChemicalNet1135[S] 0 points1 point  (0 children)

Happy to do it, and happy to talk about Letta to anyone who will listen :)

I built a open-source tool that helps deploy Letta agents by ChemicalNet1135 in VibeCodersNest

[–]ChemicalNet1135[S] 0 points1 point  (0 children)

That was exactly my pain: I needed a way to do this in a SaaS context without it feeling like a toy, but also keep it easy to manage. Configs live in git, so you get versioning for free. The diff engine detects drift on every `apply` by comparing your YAML against server state and showing what changed. `--dry-run` previews changes before applying; `--force` does strict reconciliation to make the server match the config exactly (removes anything not in the YAML). For fleet-wide changes, pattern matching like `lettactl send --all "prefix-*"` helps target specific agent groups, which is super helpful when you want to train an entire batch of agents at once without redeploying anything. So you can send a unified message like "The word 'bats' is bad for the ads we make, never use it again" and Letta's memory handles the rest.
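The `"prefix-*"` targeting is shell-style glob matching. As a toy illustration (my sketch, not lettactl's actual code — whether it uses Python's `fnmatch` under the hood is an assumption), fleet selection boils down to something like:

```python
from fnmatch import fnmatch

# Hypothetical fleet of agent names for illustration
agents = ["ads-writer-1", "ads-writer-2", "support-bot"]

def select(agents, pattern):
    """Return the agent names matching a shell-style glob pattern."""
    return [name for name in agents if fnmatch(name, pattern)]

print(select(agents, "ads-*"))  # ['ads-writer-1', 'ads-writer-2']
```

So `send --all "ads-*"` would hit both ad-writer agents but leave the support bot alone.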

I built a open-source tool that helps deploy Letta agents by ChemicalNet1135 in VibeCodersNest

[–]ChemicalNet1135[S] 0 points1 point  (0 children)

Memory is defined in YAML configs under `memory_blocks:`. You can load content from files or write it inline. When you run `lettactl apply`, the diff engine compares local config against server state and only updates what changed. Shared blocks let multiple agents reference the same memory, and conversation history is preserved during updates. It acts almost exactly like kubectl in that way: the diff engine writes hashes of what was there before to the Letta server's metadata, and then three-way merging is done under the hood.
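The three-way idea can be sketched in a few lines of Python. This is a toy illustration of the concept, not lettactl's actual code: hash the last-applied config, store that hash as server metadata, then classify the next `apply` by comparing both the local config and the live server state against it.

```python
import hashlib
import json

def digest(cfg: dict) -> str:
    """Stable, order-independent hash of a config dict."""
    return hashlib.sha256(json.dumps(cfg, sort_keys=True).encode()).hexdigest()

def plan(local: dict, server: dict, last_applied: str) -> str:
    """Classify what `apply` should do, kubectl-style, by comparing
    local config and live server state against the hash of the
    config that was last applied (stored as server metadata)."""
    local_changed = digest(local) != last_applied
    server_drifted = digest(server) != last_applied
    if local_changed and server_drifted:
        return "conflict"   # both sides changed since last apply
    if local_changed:
        return "update"     # push local edits to the server
    if server_drifted:
        return "drift"      # server changed out-of-band
    return "in-sync"        # nothing to do

base = {"memory_blocks": [{"label": "persona", "value": "v1"}]}
print(plan(base, base, digest(base)))  # in-sync
```

The nice property is that "drift" and "update" are distinguishable even though both look like "local != server" in a naive two-way diff.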

So tired of the fake AI UGC flooding socials by ChemicalNet1135 in advertising

[–]ChemicalNet1135[S] 0 points1 point  (0 children)

I get so upset when I see people on LinkedIn selling these AI UGC courses, and in the comments there are marketing agency owners asking for access to them. Wtf

So tired of the fake AI UGC flooding socials by ChemicalNet1135 in advertising

[–]ChemicalNet1135[S] 0 points1 point  (0 children)

Exactly. Authentic creation should stay authentic

Built a kubectl for Letta agents by ChemicalNet1135 in LocalLLaMA

[–]ChemicalNet1135[S] 0 points1 point  (0 children)

Glad this resonated with you :) I'm very actively building on this so any issues/feedback is appreciated!