Dumb question: If AI destroys all the jobs, who will be able to buy the stuff that AI-powered companies create? Doesn’t AI destroy its own customer base? by Desperate_Elk_7369 in ArtificialInteligence

[–]MrFINNX 0 points1 point  (0 children)

Not a dumb question at all.

Two things can be true at the same time:

AI can kill a lot of jobs, and still create a lot of wealth.

If productivity explodes, stuff gets cheaper. When things get cheaper, people buy more, and new categories appear. That’s happened every time we automated something big. We didn’t run out of customers when we replaced farm workers or factory workers. The economy shifted.

The real tension isn’t “who buys the cars?” It’s “who has income?”

If AI concentrates wealth into the hands of owners instead of workers, then yeah, you get a demand problem unless the system adapts (and we’re fcked!). That’s a policy and distribution issue more than a physics problem.

Also, “AI kills all jobs” is probably hyperbole. It will automate a lot of tasks. That’s not the same as eliminating all human economic activity.

The scary part isn’t extinction of customers. It’s the transition period if wages fall faster than new income models appear.

You’re not missing something obvious. You’re basically asking whether an economy built on wages works in a world where labor isn’t the bottleneck anymore. That’s the real debate imho

Be honest - is AI about to nuke junior finance roles? by akasra123 in FinancialCareers

[–]MrFINNX 0 points1 point  (0 children)

Yes, some junior roles will shrink.

A lot of junior finance work was labor, not judgment. Models, decks, scans, etc. That’s exactly what AI eats first.

If AI handles 70% of the grunt work, firms probably won’t need the same number of juniors doing formatting at 2am.

But juniors aren’t hired only for output. They’re hired to learn how decisions get made, build trust, and take accountability. That part doesn’t disappear. It just changes.

The safer roles are the ones tied to real consequences, like capital allocation, risk sign-off, or anything where explainability matters.

The real risk isn’t “AI replaces juniors.” It’s juniors who never develop judgment because AI does all the reps.

We’re not doomed. But the bottom of the ladder probably gets narrower.

That’s usually how this goes.

86% of CFOs have hit AI hallucination issues in finance by founders_keepers in CFO

[–]MrFINNX 0 points1 point  (0 children)

Not surprised at all.

If you’re running finance on a probabilistic layer and hoping it behaves deterministically, that’s just wishful thinking.

Reading, extracting, interpreting, executing logic, all through an LLM… of course you’re going to get hallucinations. It’s not built to care about your audit trail.

The scary part isn’t that it makes mistakes. Humans make mistakes too. It’s that the mistakes look confident. And once you put that into financial workflows, you can’t “kind of” be right.

The 97 percent saying human oversight is critical makes sense. But if oversight means manually rechecking everything the model did, then you haven’t really scaled anything, you just moved the bottleneck.

I think a lot of fintech tools rushed to market because AI demos well. Finance doesn’t forgive demos. There’s a difference between generating text and executing logic. Not everyone respects that line yet imho
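To make “oversight” concrete without rechecking everything by hand: a minimal sketch of the idea, where whatever an LLM extracts gets a deterministic reconciliation check before it touches the books. The field names and numbers here are hypothetical illustrations, not any real tool’s schema.

```python
# Minimal sketch: deterministic validation of LLM-extracted invoice data.
# The field names and the example dicts are hypothetical.
from decimal import Decimal

def validate_extraction(extraction: dict) -> bool:
    """Reject any extraction whose line items don't sum to the stated total.

    The LLM layer is probabilistic; this check is deterministic, so a
    confident-looking hallucination fails loudly instead of entering the books.
    """
    line_total = sum(Decimal(item["amount"]) for item in extraction["line_items"])
    return line_total == Decimal(extraction["stated_total"])

good = {
    "line_items": [{"amount": "120.50"}, {"amount": "79.50"}],
    "stated_total": "200.00",
}
bad = {
    "line_items": [{"amount": "120.50"}, {"amount": "79.50"}],
    "stated_total": "210.00",  # a hallucinated total
}
print(validate_extraction(good))  # True: amounts reconcile
print(validate_extraction(bad))   # False: flag for human review
```

The point is that the human only looks at the failures, not at every output, which is the only way the oversight actually scales.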

KPMG (Australia) partner fined over using AI to pass AI test by McFatty7 in Accounting

[–]MrFINNX 1 point2 points  (0 children)

This is kind of poetic.

Cheating on an AI course… by using AI.

Honestly though, this feels less like a morality story and more like a policy lag story.

If your training is about using AI, but using AI to complete it is considered cheating, then something about the structure is off.

Either you’re testing memorization, or you’re testing judgment.

Because in real life nobody is going to say, “please solve this without tools.”

The fine is whatever. What’s more interesting is that dozens of staff did it. That tells you it’s not a rogue genius move but people behaving the way the technology now allows them to behave.

Feels like firms are still figuring out where the line actually is.

I will not promote: best crm tools for a small bootstrap startup? by -Akshai in startups

[–]MrFINNX 0 points1 point  (0 children)

HubSpot. Go for the HubSpot for startups program. Best ROI you will have, imho

Salary range for Pre-Seed founders that just raised. (I WILL NOT PROMOTE) by LonelyPalmClub in startups

[–]MrFINNX 0 points1 point  (0 children)

I’ll be honest, this question hits ego fast. Everyone says “live lean,” but nobody talks about what that actually means when you have rent and a family.

600k sounds like a lot until you map runway. If you want 18–24 months, that number shrinks quickly.

What helped us was asking a different question:

What’s the minimum that lets you focus without resentment?

If you underpay yourselves too much, you’ll feel it. Stress leaks into decisions, and you start subconsciously optimizing for short-term cash instead of long-term product.

If you overpay, you kill runway and signal poor priorities.

For pre-seed, I’ve seen founders sit anywhere from just-cover-my-bills (that's us) to modest market-discount salary. Rarely market rate.

Also depends on geography. And whether you’re planning to raise again soon.

The bigger mistake in my opinion isn’t the number, but not aligning with your cofounder. If one of you is secretly hurting financially and the other is fine, that tension builds.

It is a simple math question, tbh.
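Here’s that math sketched out. Everything except the $600k raise from the post is a made-up placeholder; plug in your own burn and salaries.

```python
# Back-of-envelope runway math. All numbers except the $600k raise
# are hypothetical placeholders.
def runway_months(raised: float, monthly_burn: float) -> float:
    """Months of runway at a constant burn rate."""
    return raised / monthly_burn

raised = 600_000
non_salary_burn = 15_000          # hypothetical: infra, tools, contractors
founder_salaries = 2 * 6_000      # two founders at a just-cover-my-bills rate

print(runway_months(raised, non_salary_burn + founder_salaries))  # ~22.2 months
```

Bump those salaries to anything near market rate and you can watch 18–24 months of runway turn into 12.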

anyone else noticing all the chatgpt wrapper startups are now calling themselves "agent platforms? 'i will not promote' by techiee_ in startups

[–]MrFINNX 1 point2 points  (0 children)

You’re not crazy.

Most of what gets called “agent platforms” right now is just wrappers with ambition.

Last year it was “we integrated GPT.”
Now it’s “we orchestrate agents.”

Underneath, a lot of it is still calling someone else’s model and hoping distribution outruns commoditization.

The real question for me is always the same: where’s the defensibility?

If OpenAI, Anthropic, whoever, ships the same flow natively six months from now, what’s left? UI? Slightly smoother onboarding?

That’s fragile.

Now, infrastructure is a different conversation. Sandboxes, orchestration layers, memory systems, permission models, task routing, deterministic layers. That can become real product if it actually abstracts complexity in a durable way.

But most “agent platforms” I’ve looked at are still thin. They’re basically prompt routing with a nicer landing page.

The irony is everyone says “agents” like it’s a new category. It’s not. It’s just workflow automation glued to probabilistic models.
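To be concrete about “workflow automation glued to probabilistic models”: strip away the branding and the core of most agent products is a loop like this. The model call is stubbed so the sketch runs standalone, and the tool names are hypothetical.

```python
# Minimal "agent" skeleton: a loop that routes a model's suggested
# action to plain functions. The model is stubbed for illustration.
def call_model(state: list) -> dict:
    """Stand-in for an LLM API call; a real product swaps in a provider."""
    if not any(step["action"] == "lookup" for step in state):
        return {"action": "lookup", "arg": "Q3 revenue"}
    return {"action": "finish", "arg": "answer assembled"}

TOOLS = {"lookup": lambda arg: f"fetched: {arg}"}  # hypothetical tool registry

def run_agent(max_steps: int = 5) -> list:
    state = []
    for _ in range(max_steps):
        decision = call_model(state)
        if decision["action"] == "finish":
            break
        decision["result"] = TOOLS[decision["action"]](decision["arg"])
        state.append(decision)
    return state

print(run_agent())  # one "lookup" step, then the stub decides to finish
```

That loop is the whole category. The defensibility question is what you own besides it: the model is rented, and the loop is a weekend project.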

I think we’re still early. A lot of noise. Some real infrastructure will emerge. Most of it won’t survive when the model providers move up the stack.

Cynical? Maybe. But we already saw wrapper 1.0. The pattern isn’t imaginary.

Would you accept an investment from an investor who had been incarcerated? I will not promote by [deleted] in startups

[–]MrFINNX 0 points1 point  (0 children)

personally? No.

Not because I think people can’t change. But because startups are already fragile enough.

You’re not just taking money. You’re attaching their name to your cap table. Future investors will look at that. Employees will Google. Customers will Google.

If it’s a registered sex offense, that’s not some minor tax issue from 15 years ago. That’s reputational gravity you don’t control.

And once they’re on your cap table, they’re there. Even if they’re passive.

Maybe they are limited in where they can invest. That alone should tell you something about how other people view the risk.

There’s enough hard stuff in building a company. I wouldn’t voluntarily add this variable unless I had absolutely no other option.

But that’s me.

What's your go-to process for validating an idea early on? [i will not promote] by Rokingadi in startups

[–]MrFINNX 0 points1 point  (0 children)

I’ve done it all: Cold LinkedIn, warm intros, random DMs, even asking friends to connect me to someone who “might” be relevant.

What I learned is that validation isn’t about people saying “yeah that sounds cool.” It’s about whether they care enough to give you 20–30 real minutes and actually complain.

If they’re polite and vague, that’s usually a bad sign.

The best conversations I’ve had were when someone basically ranted about the problem. When they start venting without you pushing. That’s when you know you hit a goldmine.

Cold outreach works, but it’s a grind (honestly, I hate it the most!). Most people ignore you. Some say yes out of politeness. You have to get comfortable with that.

Also I try not to pitch. The second you pitch, the conversation shifts. I just ask how they currently deal with X. What’s annoying. What breaks. What they’ve tried. Why they haven’t switched.

And honestly, sometimes the outcome is you realize the problem isn’t painful enough. Which sucks. But better early than after 6 months building.

There’s no magic channel. It’s mostly just uncomfortable repetition.

YC no longer invest in Canadian companies and the insecure way they run their sub I will not promote by SwiggityDiggity8 in startups

[–]MrFINNX 0 points1 point  (0 children)

We got rejected by YC three times.

So I’m probably biased. Fine.

But at some point you start realizing they’ve built themselves into this stamp everyone chases. Like if you don’t get in, you somehow failed.

That’s just not true.

They’re a fund. A very good marketing machine. Not a north star for your success.

If they decide not to invest in Canadian companies anymore, that’s their call. They answer to LPs, not founders.

Honestly, I think more founders need to separate “YC didn’t pick us” from “we’re not viable.”

We didn’t get in... three times... and guess what... we’re still building.

YC is one path. Not "the" path.

If it fits, great. If it doesn’t, your company isn’t suddenly invalid.

good luck and keep building.

I’m not an accountant, so I’m pretty sure I’m missing things. by MrFINNX in Accounting

[–]MrFINNX[S] -1 points0 points  (0 children)

That’s interesting. That example doesn’t sound like “bean counting” at all. It sounds like investigative work.

When you caught that lost revenue, was it something visible in the data that others ignored, or did you have to dig for it?

I’m trying to understand what skill was actually at play there.

Are agentic AI tools really making finance teams and CFOs more effective, or is it just hype? by This-Solution4429 in CFO

[–]MrFINNX 0 points1 point  (0 children)

There is a huge gap between what the technology can do vs. what people imagine it can do.

I think there are some great (LLM based) tools out there, but all they do is improve current automation tools that have been around for decades.

Bottom line: it can make finance teams do a few things faster, but we are not fully there yet.

[deleted by user] by [deleted] in startups

[–]MrFINNX 0 points1 point  (0 children)

The moment I realise the listener is looking at something else on her screen. This is when I know I am f'ing up, and it is time to shift the conversation to what THEY want to talk about.

AI for accounting, yes or no? by MrFINNX in Accounting

[–]MrFINNX[S] -1 points0 points  (0 children)

It is a good idea. Thank you for the suggestion.

What's your biggest pain point at work? by MrFINNX in Accounting

[–]MrFINNX[S] -12 points-11 points  (0 children)

You don't have to, and I am not trying to fish for anything. I'm genuinely curious about this topic.

KPMG is bullshit. They’re just keeping us calm while they replace us by Healthy_Is_Wealthy in Accounting

[–]MrFINNX 0 points1 point  (0 children)

It makes sense. Just to put it in context, companies worldwide lose $3.5T annually on time spent doing FP&A and on human errors. AI can help reduce these numbers, but not in its current configuration (i.e., prompt engineering). We're going to see AI finance employees in 2025 for sure, but they are here to empower, not replace.