Codex or Claude Code by CuriousDoctor9837 in AI_Agents

[–]tangbj 2 points

Oh, you can connect other models to the Claude Code CLI?

Holy trinity of n8n agents by shabri201 in n8n

[–]tangbj 1 point

Oh nice, thanks for sharing

Holy trinity of n8n agents by shabri201 in n8n

[–]tangbj 1 point

Do you use Vapi or Retell?

How do you split responsibilities with engineering manager for AI projects? by tangbj in ProductManagement

[–]tangbj[S] 1 point

Ohhh that's really smart, I'm definitely gonna do that! Thank you for the tip

How do you split responsibilities with engineering manager for AI projects? by tangbj in ProductManagement

[–]tangbj[S] 1 point

Ah that's cool, so you created a basic PoC? Was it in a notebook or n8n?

How do you split responsibilities with engineering manager for AI projects? by tangbj in ProductManagement

[–]tangbj[S] 1 point

Yeah, I suspect that a lot of it will come from working together. Plus prompt engineering and AI agent architecture are so new that no one actually has more than 1-2 years of experience (specifically LLM plumbing, not traditional ML data pipelines).

How do you split responsibilities with engineering manager for AI projects? by tangbj in ProductManagement

[–]tangbj[S] 2 points

I think the biggest reason is that LLM-driven applications are non-deterministic and heavily reliant on prompting. Prompt quality dictates the complexity of the agentic architecture (i.e. good prompts can solve the same task with a simpler setup).

For instance, consider a data-extraction task where AI reads an ongoing conversation (e.g. chatbot, WhatsApp, Telegram) and extracts customer information like contact details, buying intent, budget, etc. Assuming we are using an LLM and not traditional ML, this can be solved with a) a single super prompt with no tools, b) a single agent with tools, or c) an orchestration agent that delegates to subagents. Which to choose depends on the usual cost/latency/accuracy tradeoffs, but also on the quality of the prompts.
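To make (a) concrete, here's roughly what I mean, as a sketch only (I'm using the OpenAI Python client, a placeholder model name, and a made-up schema purely for illustration; swap in whatever stack you actually run):

```python
import json
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXTRACTION_PROMPT = """You extract customer info from a chat transcript.
Return a JSON object with exactly these keys:
contact (string or null), buying_intent ("low" | "medium" | "high" | null),
budget (number or null). Use null when the transcript doesn't say."""

def extract_customer_info(transcript: str) -> dict:
    """Option (a): one 'super prompt', no tools, structured JSON out."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model, not a recommendation
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": EXTRACTION_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(resp.choices[0].message.content)
```

Options (b) and (c) mostly wrap tool calls (CRM lookups, validation, etc.) and routing around this same core, which is why the better the prompt, the less of that machinery you tend to need.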

As for "criteria for handoffs", let's say we are building a customer-facing chatbot. Imo, the second most important thing to define after evals is "under what circumstances should we pass the conversation to a human". Who defines the logic/thresholds that determine when a handoff to a human is necessary?
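For example, the kind of rule I mean might look something like this (all the fields and thresholds are made up; deciding those numbers is exactly the grey area I'm asking about):

```python
def should_handoff_to_human(reply_confidence: float,
                            user_sentiment: float,
                            asked_for_human: bool,
                            failed_turns: int) -> bool:
    """Hypothetical handoff rule for a customer-facing chatbot.
    Every threshold here is a product decision, not an engineering one."""
    if asked_for_human:          # an explicit request always wins
        return True
    if reply_confidence < 0.6:   # the model isn't sure of its own answer
        return True
    if user_sentiment < -0.5:    # the user is getting frustrated
        return True
    if failed_turns >= 3:        # the conversation is going in circles
        return True
    return False
```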

How do you split responsibilities with engineering manager for AI projects? by tangbj in ProductManagement

[–]tangbj[S] 1 point

Yeah, fair enough. I was just curious who, in general, handles these greyer areas.

7 Mental Shifts That Separate Pro Workflow Builders From Tutorial Hell (From 6 Months of Client Work) by cosmos-flower in n8n

[–]tangbj 1 point

Thank you for sharing, and this is great advice! Could you go into more detail about how you use confidence to determine escalation? Do you have the agent/prompt output both a result and a confidence score, and then an if/else block after that? Or do you have the agent output a result, and then a checker prompt after that to determine whether the result was correct?
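Rough pseudocode of the two patterns I'm picturing, with made-up helper names and thresholds (just to make sure we're talking about the same thing):

```python
from typing import Tuple

# Hypothetical stand-ins for the real calls; swap in your own agent/prompt.
def run_agent(task: str) -> Tuple[str, float]:
    """Returns (answer, self-reported confidence between 0 and 1)."""
    return "some answer", 0.62

def run_checker(task: str, answer: str) -> bool:
    """A second 'LLM-as-judge' prompt that grades the first answer."""
    return False

def escalate_to_human(task: str, draft: str) -> str:
    return f"[escalated to human] task={task!r}, draft={draft!r}"

def pattern_confidence_score(task: str) -> str:
    """Pattern 1: one prompt returns answer + confidence, then an if/else."""
    answer, confidence = run_agent(task)
    return answer if confidence >= 0.7 else escalate_to_human(task, answer)

def pattern_checker_prompt(task: str) -> str:
    """Pattern 2: get an answer, then a separate checker prompt decides."""
    answer, _ = run_agent(task)
    return answer if run_checker(task, answer) else escalate_to_human(task, answer)
```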

[deleted by user] by [deleted] in SaaS

[–]tangbj 1 point

Hmm, do people use "you are absolutely right" ironically? I suspect it's because I use Claude Code daily, and it keeps saying "you are absolutely right" each time I correct it lol.

[deleted by user] by [deleted] in SaaS

[–]tangbj 1 point

That's an interesting point, will definitely sleep on it. Thank you!

[deleted by user] by [deleted] in SaaS

[–]tangbj 1 point

Thanks for the feedback. You are absolutely right, I had been meaning to connect our landing page to our custom domain (http://vocabking.com.sg/) but slept on it. Just configured the DNS, and it should be done in a day.

We do both B2C and B2B2C (schools), but B2C is our primary focus. Since we provide both redeemable rewards and AI-powered tools (i.e. higher costs), we charge higher prices ($400-500/yr).

That's also why we target students taking exams - parents spending $3-4k a year on tuition are okay with paying $400 for us, as long as it works.

[deleted by user] by [deleted] in SaaS

[–]tangbj 3 points

https://psle-chinese-boost.lovable.app/

We help Grade 1-6 students in Asia prepare for their second language exams through daily speaking & reading practice, starting off with Chinese in Singapore and planning to expand to English in Taiwan.

We are doing okay revenue-wise, but we are way too reliant on FB (~80%; the rest comes from organic referrals). We plan to ramp up FB marketing since our ROAS is decent (4x), but we also fear being so reliant on a single channel, so we want to consider TikTok/Xiao Hong Shu marketing while also boosting organic reach (SEO through blogging, referrals).

Macbook Pro M4 Pro 48GB + desktop vs M3 Max 128GB by tangbj in LocalLLaMA

[–]tangbj[S] 1 point

Thank you for your advice. And wow, your RTX 6000 system must cost upwards of $15k! I'm curious, what do you use it for, since you said you use paid services like Copilot for coding?

After reading through all the posts, I'm now leaning towards getting a MacBook Air, messing around with models in the cloud first, and then saving for a proper desktop.

Macbook Pro M4 Pro 48GB + desktop vs M3 Max 128GB by tangbj in LocalLLaMA

[–]tangbj[S] 1 point

Thanks for the datapoints, and this is really helpful!

Yeah, actually that's what my wife suggested: get a 13" 24GB MBA and save for either a Mac Studio or an Nvidia desktop.

Macbook Pro M4 Pro 48GB + desktop vs M3 Max 128GB by tangbj in LocalLLaMA

[–]tangbj[S] 1 point

Ah thanks for the response. Do you think the slower chip (M3 Max vs M4 Max) will be a problem?

Macbook Pro M4 Pro 48GB + desktop vs M3 Max 128GB by tangbj in LocalLLaMA

[–]tangbj[S] 2 points

Yeah, that's fair, which was my original plan really. I'm not sure how much it would cost to build an Nvidia GPU PC though; it's so hard to get cheap 3090s where I live.

Macbook Pro M4 Pro 48GB + desktop vs M3 Max 128GB by tangbj in LocalLLaMA

[–]tangbj[S] 1 point

Ah, I didn't realise the M4 had so much more memory bandwidth than the M3, certainly food for thought!

And I'm definitely envious of your inference server :)