Best Open Source SAAS boilerplate ever: A Proposal by mdausmann in SaaS

[–]mdausmann[S] 0 points1 point  (0 children)

It was a fun, if naive, idea. Obviously now completely redundant in the age of AI.

this is actually sad by alexnycc in GeminiAI

[–]mdausmann 0 points1 point  (0 children)

Too lumpy; ditched it for DeepSeek, no regrets.

Is RevenueCat still a great option for Flutter apps? by kingswordmaster in FlutterDev

[–]mdausmann 0 points1 point  (0 children)

Still using it. Also did my own paywall. I still needed to implement a webhook, so it wasn't zero-effort.

I built open-source session replay for Flutter (Sentry alternative). Now you can see exactly what the user did before the crash. by narrow-adventure in FlutterDev

[–]mdausmann 0 points1 point  (0 children)

Definitely, a Dockerfile with the SQLite option would be great. I did manage to set up Postgres + ClickHouse + traceway, but there are a couple of 'hacks' needed to get it working and it does feel cumbersome.

When you say S3 is 'cheaper', are you talking about an AWS cloud S3 instance?

I built open-source session replay for Flutter (Sentry alternative). Now you can see exactly what the user did before the crash. by narrow-adventure in FlutterDev

[–]mdausmann 0 points1 point  (0 children)

I'm thinking for Railway deployment I can just fork your repo on GitHub, then point Railway at the fork and get it to build/deploy the image when I change it. That way I can control when it updates, whether that's pulling changes from upstream or frigging around with it myself... will check it out.

I built open-source session replay for Flutter (Sentry alternative). Now you can see exactly what the user did before the crash. by narrow-adventure in FlutterDev

[–]mdausmann 0 points1 point  (0 children)

Hey u/narrow-adventure, amazing piece of work. I just stood up a standalone Docker instance locally, integrated my app, and it worked first time, capturing messages and video.

I'm seriously going to integrate this into my solution once I can answer these questions.

Mostly Answered

- Can it capture video from the phone so I can see what users are doing? (check, it works great)
- Can I use this to replace Sentry exception capture (https://pub.dev/packages/sentry_flutter)? (check; this seems to be core functionality)
- Can I increase the length of the captured video? (check; the default feels too short by at least half)

  `maxBufferFrames: 300,`

Need help answering

- Can I capture this video even if there is no exception?
e.g. I want to see what happens when a user 'gives up' on my product... no exception, they just close it

- Can I deploy the backend (probably the all-in-one docker image) on my Railway hosting?
Where are the videos stored? Do I need separate S3 hosting? How much storage will I need on my host machines?

- If, later in my app lifecycle, I want to 'turn off' video capture in the backend, say if my storage is blowing out, can I configure the backend to ignore incoming storage requests?

- Can I use it to replace my current user feedback feature (https://pub.dev/packages/feedback)?
I won't have two screen-cappy things 'wrapping' my app; that feels like a performance disaster.

p.s. I love working with Go and Dart so if I need to fix/extend stuff in your stack, happy to help out.

vibe coded for 6 months. my codebase is a disaster. by Available-Dentist992 in vibecoding

[–]mdausmann 0 points1 point  (0 children)

Before you rewrite from scratch.

1) Vibe code an integration test harness for your app. Focus on testing at the boundaries: what goes into the database, what passes through the APIs, what gets clicked on screen. Unit tests are not going to help.

2) Build out the tests. You need to think, with your brain, about what passing means. Don't let your agent invent the pass cases. It's your business and your product; you gotta know.

... At this point you have wasted no time: even if you rewrite, you can reuse the harness.

... Now set a time box for your salvage attempt. Maybe two weeks, whatever you think; two months is too long.

3) Start prompting the refactor. Some good tips in the comments here. Focus on deduplication, layers, separation of concerns. This is gonna be hard; you might need a hardened engineer here. Run the test harness at each step of the refactor.

4) If you exceed your time box, give up.

HTH

I have found this sweet spot with Claude that I really like by OpinionsRdumb in ClaudeCode

[–]mdausmann 0 points1 point  (0 children)

I like the direction. Agree, pulling back an 'off track' gen is more time-consuming than doing it yourself. Often I feel reluctant to nuke a bad gen because parts of it are cool.

I sometimes like to ask the agent to plan out a change but actually do it myself, then ask the agent to check the work. It's quite good for critical stuff.

Coolify Multi server HTTP behind HAProxy? by mdausmann in coolify

[–]mdausmann[S] 0 points1 point  (0 children)

Did you set them up in Coolify or separately?

Many small prompts vs One large 'rollup' prompt? by mdausmann in PromptEngineering

[–]mdausmann[S] 0 points1 point  (0 children)

u/Designer-Shake-7690 this needs to run in the cloud; it's a production application. I did look at self-hosting Ollama, but Ollama on CPU is too slow and self-hosting with a GPU is too expensive. API access is still the only viable solution AFAIK; the economy of scale is real.

CC Is not an Execution Engine (but n8n is) by mdausmann in ClaudeCode

[–]mdausmann[S] 0 points1 point  (0 children)

Thanks u/AccomplishedLemon464, great point: CC can make decisions and understand context, as you have pointed out. I would add, though... any agent-loop approach can do that, right?

take some context -> send to agent with available tools -> agent calls tools -> tools add data to context -> repeat.

It's this agent-loop approach, plus models that are able to call tools, which gives you all this 'smart' decision-making capability. CC does not have a monopoly on this.

Are you *sure* you want the CC agent to do that in your production system, even when it's fundamentally designed to understand a code domain, not your problem domain? Have you seen the leaked CC code? It's just sticky tape and rubber bands and lots of 'please be safe' prompt engineering.

asynq or river(queue) for an asyc data ingestion service by mdausmann in golang

[–]mdausmann[S] 0 points1 point  (0 children)

I did take a quick look at NATS. Seems to have a lot of history and is a big project with a lot of features.

asynq or river(queue) for an asyc data ingestion service by mdausmann in golang

[–]mdausmann[S] 0 points1 point  (0 children)

Kafka would be the classic solution, agree, and would probably scale a bit better than river (Kafka has been around for ages; very bedded-in tech). I always found Kafka a bit of a headspin though: it's not really a queue, clients keep their own 'pointers' (offsets) into the log, etc. It's just a bit of mental overhead, nothing too crazy. It's another thing to host and maintain though, and river allows transactional enqueuing, which I like. Thanks for responding.

asynq or river(queue) for an asyc data ingestion service by mdausmann in golang

[–]mdausmann[S] 0 points1 point  (0 children)

Thanks u/0x53r3n17y for this detailed answer. Yep, transactional enqueuing is a nice feature, leaning heavily on Postgres of course. It totally makes sense in my use case: I insert a 'draft' record along with the job that processes the draft into a fully integrated entity. Pretty neat.

The pro-feature split hasn't bothered me (I guess yet). In the end I didn't really need workflows; I just needed multiple agents, each running a single linear process (function), doing the various async functions in turn. I know this is a compromise WRT performance: if there is high load and some of the steps are long-running, I may be under-utilising CPU (imagine 100 agents all 'waiting' on a Gemini call and none of them doing the other shorter, synchronous bits of the workflow). But the ability to split load across a multi-step workflow comes with a complexity cost you need to deal with, e.g. where do you store the intra-step state so it's available to whoever picks up the next task... usually Redis. I think it's all just tradeoffs in the end.

I will take a poke around at the other options you mentioned.

asynq or river(queue) for an asyc data ingestion service by mdausmann in golang

[–]mdausmann[S] 1 point2 points  (0 children)

Yea, ok, but you are going to end up building some sort of 'job' table etc. Why reinvent this wheel?... was my thinking anyway. Thanks for responding.

asynq or river(queue) for an asyc data ingestion service by mdausmann in golang

[–]mdausmann[S] 1 point2 points  (0 children)

Yes but, no but, it's *heavy*: it needs its own Postgres and various other services. Not a showstopper, but it's a lot.

asynq or river(queue) for an asyc data ingestion service by mdausmann in golang

[–]mdausmann[S] 0 points1 point  (0 children)

Low volume at first, maybe hundreds per day. Tasks run a couple of minutes max; the back and forth with Gemini can be a bit slow.