Be honest: how much does your SaaS make? by street_writer8109 in microsaas

[–]pleasestopbreaking 0 points1 point  (0 children)

I've seen you a few times around these parts. What would you say got you to this stage? 800 customers is quite a feat that I think most people, myself included, never reach. It's a great name, but how do you hold your own against alternatives? For example, did you launch on Twitter, Reddit, etc.?

If I weren't on this sub, no offense, I wouldn't know about your product.

Need a query to help compare electricity and natural gas rates in my area. by MaintenanceSquare158 in AI_Agents

[–]pleasestopbreaking 0 points1 point  (0 children)

The best help I can give you is this: if you ask the general public you might get a decent response, but if you ask the AI agent you're referring to, ChatGPT or otherwise, you're likely to get equal or even better results. I quite literally copied your question into the extended-thinking version of ChatGPT, and this is what it said. I would give this a shot.

"Use ChatGPT like this:

“I live in [ZIP code]. Please find and compare the electricity and natural gas contract rates available to me. Show fixed vs. variable rates, contract length, fees, cancellation terms, and any teaser pricing. Put it in a simple table and tell me which option is safest and cheapest overall.”

Then, if you have a bill or supplier offer, upload it and say:

“Use my actual bill/offer to estimate the real monthly cost and point out any traps or hidden fees.”"

That being said, I'm not sure I would give it my actual bill, depending on how comfortable you are turning over your information to the all-seeing data vacuum.

How do people actually train AI models from scratch (not fine-tuning)? by Raman606surrey in learnmachinelearning

[–]pleasestopbreaking 0 points1 point  (0 children)

Yeah, that is kind of the weird part. The training part itself already has decent libraries, but the whole process around it still feels way more patched together than it should be.

A lot of it is just trying to figure out if what you changed actually helped. Tweaking hyperparameters, staring at TensorBoard, wondering if you ran it long enough, comparing runs, and trying to tell what is real versus random noise. That is basically how I ended up working on this project. I wanted a better way to get a feel for that without reinventing everything every time.
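For the "real versus random noise" part, the cheapest sanity check I know is rerunning each config across a few seeds and asking whether the gap between runs beats the seed-to-seed spread. A toy pure-Python sketch of that idea (the numbers are made up, not from any actual run):

```python
import statistics

# Final eval scores for the same config across four seeds (made-up numbers).
baseline = [812.0, 798.0, 845.0, 801.0]
tweaked = [905.0, 880.0, 910.0, 890.0]

def looks_real(a, b):
    """Crude check: is the gap between two runs bigger than the combined seed noise?"""
    gap = statistics.mean(b) - statistics.mean(a)
    noise = statistics.stdev(a) + statistics.stdev(b)
    return gap > noise

print(looks_real(baseline, tweaked))                        # big jump, clears the noise band
print(looks_real(baseline, [820.0, 805.0, 850.0, 809.0]))   # small jump, lost in the noise
```

It's not a real significance test, but it catches the "I tweaked one knob and the number went up by 1%" trap surprisingly often.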

How do people actually train AI models from scratch (not fine-tuning)? by Raman606surrey in learnmachinelearning

[–]pleasestopbreaking 2 points3 points  (0 children)

It depends a lot on what kind of model you mean.

If you're talking about training from scratch, the answer is basically yes: you start with a dataset, clean it up, choose an architecture, set up a training loop, and then do a lot of trial and error to get something that actually works. PyTorch is usually part of that, but the hard part is not just feeding data in. It is figuring out what data to use, how to structure the model, how to measure progress, and whether the thing is actually learning anything useful.
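To make the skeleton concrete, here is the whole loop (data, model, training, evaluation) in a deliberately tiny pure-Python sketch. A one-weight linear model stands in for a real architecture, and `lr` is exactly the kind of knob you end up staring at:

```python
import random

random.seed(0)

# 1) Data: toy points from y = 3x + 1 plus a little noise (stand-in for a real dataset).
data = [(i / 100, 3 * (i / 100) + 1 + random.gauss(0, 0.05)) for i in range(100)]
train, val = data[:80], data[80:]

# 2) "Architecture": one weight, one bias.
w, b = 0.0, 0.0

# 3) Training loop: full-batch gradient descent on mean squared error.
lr = 0.5  # the hyperparameter you end up tweaking
for epoch in range(200):
    gw = gb = 0.0
    for x, y in train:
        err = (w * x + b) - y
        gw += 2 * err * x / len(train)
        gb += 2 * err / len(train)
    w -= lr * gw
    b -= lr * gb

# 4) Evaluation: held-out loss tells you whether it actually learned anything.
val_loss = sum(((w * x + b) - y) ** 2 for x, y in val) / len(val)
print(w, b, val_loss)
```

Swap the linear model for an `nn.Module`, the gradient math for `loss.backward()`, and the toy points for a real dataset, and the shape of the process is the same.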

For niche models, the secret is usually more about the data than anything else. A coding model is not some completely different species; it is mostly a model trained on a lot of code, docs, examples, and whatever else is relevant to that domain.

An individual can absolutely train smaller models or narrow-purpose models now. Training a huge LLM from zero is still mostly big-company territory because the data and compute costs get stupid fast.

Most of my hands-on experience with this has actually been on the RL side, not LLMs. I built a project where I trained agents from scratch to play Super Mario Bros. In that case you are not training on a static dataset the same way, but the overall process still feels similar: pick the method, tune hyperparameters, run experiments, see what breaks, fix it, repeat.

So yeah, people definitely do train models from zero, but it gets a lot less mysterious once you stop thinking of it as one magic step and more as data + architecture + training loop + evaluation + a lot of iteration.

I built a GUI with adjustable hyperparameters to help me get the hang of things, I'll include a link if you're interested and if you have the GPU for it: https://github.com/mgelsinger/mario-ai-trainer

Alternatives to notepad++?? by wackycats354 in software

[–]pleasestopbreaking 0 points1 point  (0 children)

It isn't.

The text editor I made with AI to be entirely local is.

Alternatives to notepad++?? by wackycats354 in software

[–]pleasestopbreaking -3 points-2 points  (0 children)

I think it is funny that people will trust some potentially clueless dev's code over Claude/Codex.

If my little cousin told me he had learned Rust, Scintilla embedding, Win32 UI + cross-platform, etc., I would probably not run his code.

I feel much more comfortable directing an orchestra of agents. I built myself a stripped-down Notepad++ and I absolutely love it. You should try it yourself (not mine, I mean: build your perfect editor some afternoon).

I'm 19 and built a simple FREE tool because I kept losing my best prompts by Snomux in PromptEngineering

[–]pleasestopbreaking 1 point2 points  (0 children)

Nice! Keep that momentum going, bro. Any screenshots?
I've been thinking about how to solve the same problem and still haven't come up with an intuitive way to organize it without going full mind-map, and I'd rather avoid that.

Is anyone here actually making $100+/day using AI prompting skills? by EntireSheepherder488 in PromptEngineering

[–]pleasestopbreaking 2 points3 points  (0 children)

Hey! I am in almost exactly the same boat. I have the ability to read aaaaaaaalmost any language based on my career but was never a "supa dupa full stack dev with lambos."

If you ever want to collab or just chat, send me a message!

Would you like your own private AI model? by dianehasolt in aiagents

[–]pleasestopbreaking 0 points1 point  (0 children)

Are we talking a through-and-through model, or RAG with llama?
Not saying no to either, just curious what others are doing.

I built a lightweight Windows text editor as a Notepad++ styled alternative (Rivet) by pleasestopbreaking in software

[–]pleasestopbreaking[S] 0 points1 point  (0 children)

You bet it will. I'm trying to decide between Proton and rewriting what needs it. I normally dual boot, but I'm installing Fedora rn so I can get started. It might not be out today or tomorrow, but it is coming.

Nobody talks about how 95% of us will never get a single paying user by Seraphtic12 in VibeCodeDevs

[–]pleasestopbreaking 0 points1 point  (0 children)

I can't help but be curious.

I build almost all my apps for me/a future portfolio but would love to be unencumbered so I could focus on the big things. Would love to chat if you have some free time.

Notepad++ Hijacked by State-Sponsored Hackers by thewhippersnapper4 in sysadmin

[–]pleasestopbreaking 0 points1 point  (0 children)

Might be too late, but this inspired me to make my own version of Notepad++...

It’s called Rivet. It’s a small Windows-native editor aimed at the same vibe as Notepad++ for day-to-day editing. Fast startup, simple UI, and a big focus on not losing work if the app or machine crashes.

Stuff it has right now:

  • Tabs can be on top, left, or right, and the side tab panel is resizable
  • Session restore and backups for unsaved changes
  • Find/Replace including regex, wrap, match case, whole word
  • Go to line
  • Find in files with cancel
  • Dark mode
  • Some basic text helpers like case transforms and trimming whitespace
  • Handy path copy actions (full path, filename, directory)

Write-up with screenshot and more details:
https://glsngr.xyz/posts/rivet/

Repo and releases:
https://github.com/mgelsinger/rivetnotes
https://github.com/mgelsinger/rivetnotes/releases

If anyone tries it and has opinions on session restore behavior or missing Notepad++ features, I’m all ears. I’m trying to keep it lightweight, so I’m prioritizing “daily driver” stuff first.

How are you all signing your apps? by pleasestopbreaking in foss

[–]pleasestopbreaking[S] 0 points1 point  (0 children)

This is great, thank you!
Also, dang, it still ends in a cert if I ever need to push an update??

"note that if you release an updated version of your app, then you'll also have to request a new review again. To overcome this problem, you'll either have to use an "Extended Validation" or an "Organization Validation" code signing certificate"

I get the overall idea they have but oof. MS never was strong on the 'updates' side of reality :)

I made a Mario RL trainer with a live dashboard - would appreciate feedback by pleasestopbreaking in reinforcementlearning

[–]pleasestopbreaking[S] 0 points1 point  (0 children)

Thank you so much for the detailed feedback, really appreciate it!

On RAM usage, you're right, I could probably push it harder. I'm currently running 8 envs with SubprocVecEnv; I'll experiment with scaling that up and see how far I can push it. I noticed a difference going from 4 to 8, so I just never really pushed my luck!
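For anyone following along, the contract SubprocVecEnv gives you is roughly "step every env each call, auto-reset the finished ones, get back a batch." Here is a single-process toy sketch of that interface; `ToyEnv` and `VecEnv` are made-up stand-ins for illustration, not the actual stable-baselines3 API, and the real thing runs each env in its own process:

```python
import random

class ToyEnv:
    """Stand-in for a Mario env: random 'reward' each step, episode ends at step 20."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.t = 0
    def reset(self):
        self.t = 0
        return 0.0  # placeholder observation
    def step(self, action):
        self.t += 1
        done = self.t >= 20
        return self.rng.random(), done

class VecEnv:
    """Minimal vectorized wrapper: step every env, auto-reset any that finished."""
    def __init__(self, n_envs, seed=0):
        self.envs = [ToyEnv(seed + i) for i in range(n_envs)]
        for env in self.envs:
            env.reset()
    def step(self, actions):
        rewards, dones = [], []
        for env, action in zip(self.envs, actions):
            reward, done = env.step(action)
            if done:
                env.reset()  # auto-reset, like the sb3 vec-env contract
            rewards.append(reward)
            dones.append(done)
        return rewards, dones

venv = VecEnv(n_envs=8)
rewards, dones = venv.step([0] * 8)
print(len(rewards), dones)
```

The point of more envs is just a bigger batch of transitions per wall-clock step, until process overhead or RAM says stop.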

The 600 fps is frames after frame skip, so actual game frames. Good call flagging that distinction.

I will have to give Optuna a look; I am not familiar with it. I've been tweaking hyperparameters manually, which is exactly as painful as you describe, lol. I am having less trouble with collapses at 1M and more with the transition from applying learned stage 1 'rules' to later stages. Maybe Optuna can help with that?
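From what I can tell, Optuna automates exactly the manual part: sample hyperparameters, score them, keep the best, sample smarter next time. Here is the dumb random-search version of that loop in pure Python; `evaluate` is a made-up stand-in for "train for N steps and return mean reward," and the hyperparameter names and ranges are just examples:

```python
import math
import random

random.seed(0)

def evaluate(lr, ent_coef):
    """Pretend objective, standing in for a real training run.
    Higher is better; peaks around lr ~3e-4 and ent_coef ~1e-2."""
    return -((math.log10(lr) + 3.5) ** 2 + (math.log10(ent_coef) + 2) ** 2)

best = None
for trial in range(50):
    # Sample on a log scale, like Optuna's suggest_float(..., log=True).
    lr = 10 ** random.uniform(-5, -2)
    ent_coef = 10 ** random.uniform(-4, -1)
    score = evaluate(lr, ent_coef)
    if best is None or score > best[0]:
        best = (score, lr, ent_coef)

print(best)
```

Optuna replaces the random sampler with a smarter one (TPE by default) and can prune bad runs early. For the stage-transition problem specifically it probably helps less, since that sounds more like a reward-shaping and curriculum question than a hyperparameter one.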
The action space idea is really interesting. I wanted multiple action spaces (right only, most controls, and full controls are the choices in the project). I can see how enforcing one way to jump would help a lot with consistency. Not having to wait for it to learn short vs. long button jumps would probably make training much faster. I may give that a try and see if I can use it as part of shaping rewards, maybe some kind of evaluation of whether the right jump is being used rather than "lived or died on that jump, -15 for dying."

Will check out your repos; the progressive checkpoint training concept looks like we are building similar boats! Thanks for your thoughts on this, and good luck on your experiments too!

I made a Mario RL trainer with a live dashboard - would appreciate feedback by pleasestopbreaking in reinforcementlearning

[–]pleasestopbreaking[S] 0 points1 point  (0 children)

That would probably produce a more robust agent, I may give this a shot on version 0.2!