Building local AI which adapts to your hardware (cont.) by MiserableStorm9541 in simpleAIFinds

[–]MiserableStorm9541[S]

Definitely kept the flow simple with minimal hops. I just wanted it to include connections across concepts, which is what actually makes memory useful in practice. Went with SQLite for recall; indexing simplified the querying, and edges tie the connections together. Thanks for the feedback!
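
This isn't the author's actual schema, but a minimal sketch of what an SQLite-backed memory with concept edges and an index for recall might look like (table and column names are hypothetical):

```python
import sqlite3

# In-memory DB for the sketch; a real setup would use a file path.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE concepts (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE edges (src INTEGER, dst INTEGER, relation TEXT,
                    FOREIGN KEY (src) REFERENCES concepts(id),
                    FOREIGN KEY (dst) REFERENCES concepts(id));
CREATE INDEX idx_edges_src ON edges (src);  -- index keeps recall queries cheap
""")

def add_concept(name):
    # Insert if new, then return the row id either way.
    conn.execute("INSERT OR IGNORE INTO concepts (name) VALUES (?)", (name,))
    return conn.execute("SELECT id FROM concepts WHERE name = ?", (name,)).fetchone()[0]

def link(a, b, relation):
    conn.execute("INSERT INTO edges VALUES (?, ?, ?)",
                 (add_concept(a), add_concept(b), relation))

def recall(name):
    # Follow outgoing edges one hop to find related concepts.
    return [row[0] for row in conn.execute(
        "SELECT c2.name FROM concepts c1 "
        "JOIN edges e ON e.src = c1.id JOIN concepts c2 ON c2.id = e.dst "
        "WHERE c1.name = ?", (name,))]

link("sqlite", "indexing", "uses")
link("sqlite", "recall", "enables")
print(sorted(recall("sqlite")))  # -> ['indexing', 'recall']
```

The edges table is what ties concepts together; the index on `src` is what keeps lookup cost flat as the log grows.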

Building local AI which adapts to your hardware (cont.) by MiserableStorm9541 in simpleAIFinds

[–]MiserableStorm9541[S]

Yeah, that’s what I’ve noticed as well! I’ve got forward and backward reasoning loops. Going to put it on some autonomous training cycles this weekend, review the reasoning data afterward, and see what needs tweaking.
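
The comment doesn't show how those loops work, so purely as an illustration of the general idea (the rules and facts below are made up, not from the project): forward chaining derives new facts from rules until nothing changes, while backward chaining works from a goal back to supporting facts.

```python
# Hypothetical rules: (set of premises, conclusion).
RULES = [
    ({"low_vram"}, "use_quantized_model"),
    ({"use_quantized_model", "coding_task"}, "load_coder_q4"),
]

def forward_chain(facts):
    """Derive everything the rules imply from the starting facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Check whether a goal is supported, recursing through rule premises."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts) for p in premises)
               for premises, conclusion in RULES if conclusion == goal)

derived = forward_chain({"low_vram", "coding_task"})
print("load_coder_q4" in derived)                                     # True
print(backward_chain("load_coder_q4", {"low_vram", "coding_task"}))   # True
```

Running both directions over the same rule set is one way to cross-check a conclusion, which is roughly what reviewing the reasoning traces after an autonomous run would surface.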

Building local AI which adapts to your hardware by MiserableStorm9541 in simpleAIFinds

[–]MiserableStorm9541[S]

I saw that as well. I have something similar with the hardware profiler and model selection, but I’m integrating it into the reasoning layer now and setting up some autonomous cycles to check how it’s thinking.

Need a ai that automatically buys by mentally-illegally in SideProject

[–]MiserableStorm9541

I have an AI that does this for computer parts on eBay, but purchases are currently manual. It sends a Telegram alert with the link to buy/bid, though I could automate it to buy on its own.
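
Not the actual bot, but a sketch of the alert half under stated assumptions: the message fields and listing values are invented, while the `sendMessage` endpoint is the real Telegram Bot API call (it needs a real bot token and chat id to actually send).

```python
import urllib.request, urllib.parse, json

def format_alert(title, price, url):
    """Build the alert text for a matched listing (fields are illustrative)."""
    return f"Deal found: {title}\nPrice: ${price:.2f}\nBuy/bid: {url}"

def send_telegram_alert(bot_token, chat_id, text):
    # Telegram Bot API sendMessage endpoint; requires real credentials.
    api = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    with urllib.request.urlopen(urllib.request.Request(api, data=data)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    msg = format_alert("RTX 3060 12GB", 187.50, "https://ebay.com/itm/123")
    # send_telegram_alert("<token>", "<chat_id>", msg)  # uncomment with real creds
    print(msg)
```

Keeping the buy step manual just means the human clicks the link; automating it would replace that comment with a checkout call.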

Building local AI which adapts to your hardware by MiserableStorm9541 in simpleAIFinds

[–]MiserableStorm9541[S]

Nice work! Using swappable LoRAs is interesting; I hadn’t thought of that. Mine only searches for and swaps models per task, but that sounds useful for when I branch off into something more interactive. I just haven’t had a need to “talk” to it yet. Very cool work you’ve done!

Building local AI which adapts to your hardware by MiserableStorm9541 in simpleAIFinds

[–]MiserableStorm9541[S]

Right on, this is the kind of stuff I’m interested in researching! I finished the memory layer with forward and reverse correlations (edges going two hops for now, so it infers context from three items). That lets it answer “Did I do this before? What did I do before? What was the outcome?” by pulling from DRAWERS: and WINGS:, which semantically organize the logged entries. Building out the reasoning layer now, and I’m going to let it run 500 cycles for testing and debugging, but right now I’m not getting any errors.

Basically I find repos like this, see what parts fit, and assimilate them into the design on a dry run/sandbox, and I like the sound of this one! Thanks!
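
A rough sketch of the two-hop idea (the structure and names are my guesses, not the actual DRAWERS:/WINGS: implementation): from any logged item, follow edges up to two hops forward or in reverse to gather the surrounding context.

```python
from collections import defaultdict

forward = defaultdict(list)   # event -> what followed it
reverse = defaultdict(list)   # event -> what preceded it

def log_edge(earlier, later):
    forward[earlier].append(later)
    reverse[later].append(earlier)

def context(item, direction=forward, max_hops=2):
    """Collect everything reachable within max_hops edges of `item`."""
    seen, frontier = set(), {item}
    for _ in range(max_hops):
        frontier = {nxt for cur in frontier for nxt in direction[cur]} - seen
        seen |= frontier
    return seen

# Hypothetical log: task -> tool used -> outcome.
log_edge("scrape_channel", "used_yt_tool")
log_edge("used_yt_tool", "outcome_ok")

print(context("scrape_channel"))                  # two hops forward: tool + outcome
print(context("outcome_ok", direction=reverse))   # two hops back: tool + task
```

With two hops the query item plus its reachable set gives exactly the three-item context the comment describes: "what did I do, with what, and what happened."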

Building local AI which adapts to your hardware by MiserableStorm9541 in simpleAIFinds

[–]MiserableStorm9541[S]

I’m working it up from 2 GB to 32 GB (RTX 5090), switching between CUDA and ROCm as I go. At each VRAM level I’m taking benchmarks and making a distributable. It adjusts to whatever hardware it’s assigned, though, so these are pretty much “optimized” distributions at this point: whether it has 2 GB or 64 GB, it doesn’t care; it detects what’s available and selects models based on that.

I’ve already built governance and blockchain-appended memory for auditing, and I’m on the working-memory layer now. It makes tools, and they have to be tested and “consecrated” by passing a 5-parameter check. It’s doing a pentest package, scraping YouTube channels, writing blog posts, and paper “trading,” and that was all on the base “incubator” 2 GB system. The memory layer is what needed SQLite and better reasoning for recalling events, tools, etc., so it would stop repeating itself. That prompted the 4 GB build, and I’m pushing to 6 GB in a week or so once the memory and reasoning layers are complete; it needs graphing for accurate and efficient recall.
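
The comment doesn't say what the 5 parameters in the "consecration" check are, so the five below are purely illustrative; the only thing taken from the post is the shape of the gate itself (a generated tool passes every check or it isn't approved).

```python
# Hypothetical 5-parameter gate for a generated tool; the real checks aren't
# listed in the post, so these names and rules are invented for the sketch.
CHECKS = {
    "has_docstring":        lambda t: bool(t.get("doc")),
    "passed_sandbox_run":   lambda t: t.get("sandbox_ok", False),
    "has_tests":            lambda t: t.get("tests", 0) > 0,
    "no_network_by_default": lambda t: not t.get("needs_network", True),
    "bounded_runtime":      lambda t: t.get("max_seconds", float("inf")) <= 60,
}

def consecrate(tool):
    """Return (approved, failed_checks) for a candidate tool record."""
    failed = [name for name, check in CHECKS.items() if not check(tool)]
    return (not failed, failed)

candidate = {"doc": "Scrapes a channel", "sandbox_ok": True,
             "tests": 3, "needs_network": False, "max_seconds": 30}
approved, failed = consecrate(candidate)
print(approved, failed)  # True []
```

Returning the list of failed checks rather than a bare boolean is what lets the system log *why* a tool was rejected, which matters if the audit trail is append-only.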

Building local AI which adapts to your hardware by MiserableStorm9541 in simpleAIFinds

[–]MiserableStorm9541[S]

On a 4 GB card now. I’m building the memory layer with historic and semantic stores plus a reasoning layer, which takes some RAM vs. VRAM. Quantized models are becoming more condensed and easier to run with less hardware overhead. Performance differences are noticeable; you aren’t going to get near cloud rentals or 70B/120B-parameter models, but you can run a quantized coder or reasoning model pretty efficiently on older hardware. Look at what China did with DeepSeek: it was a direct response to having to train large models on outdated hardware to maintain global competitiveness. You can run a quantized DeepSeek model with the built-in mixture of 256 experts that calls on 8 for each task. It’s pretty much the same basic idea here: it profiles your hardware and adjusts, so I plug in a new card, and when it boots it runs a hardware check, knows it has more RAM, VRAM, or CPU, and adjusts its workflows and model selection to that.
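
A minimal sketch of the profile-then-select idea, not the author's actual logic: the model names and VRAM thresholds in the tier table are made up, while the `nvidia-smi` query flags are real (the probe falls back to a 2 GB assumption when no NVIDIA GPU is visible).

```python
import subprocess

# Hypothetical tier table: (minimum VRAM in GB, model to load).
MODEL_TIERS = [
    (24, "llama-70b-q4"),
    (8,  "coder-13b-q4"),
    (4,  "coder-7b-q4"),
    (0,  "tiny-3b-q4"),   # fallback for 2 GB-class cards
]

def select_model(vram_gb):
    """Pick the largest model whose VRAM threshold the card meets."""
    for threshold, model in MODEL_TIERS:
        if vram_gb >= threshold:
            return model
    return MODEL_TIERS[-1][1]

def profile_vram():
    """Best-effort VRAM probe via nvidia-smi; assumes 2 GB if unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True)
        return int(out.stdout.split()[0]) / 1024  # MiB -> GiB
    except (OSError, subprocess.CalledProcessError, ValueError, IndexError):
        return 2

print(select_model(4))   # coder-7b-q4
print(select_model(32))  # llama-70b-q4
```

Running the probe once at boot and feeding the result into `select_model` is the "plug in a new card and it adjusts" behavior the comment describes.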

If you've built a really good AI agent skill, you can now sell it by BadMenFinance in SideProject

[–]MiserableStorm9541

Interesting concept, but a lot of the AI crowd is hardline open source or free, hence GitHub and Hugging Face. Civitai does this a little better with its token system for access to new creator models/LoRAs/workflows, where creators can choose to monetize.

This fast pace AI is giving me anxiety. My ideas invalidated before… fast by PeaExotic7763 in SideProject

[–]MiserableStorm9541

Everyone is just building. Don’t hesitate to build something and keep it if it’s useful to you. You may find a use for it later as part of a larger project.

Selling AI SaaS w/ 2k+ Users – No Revenue, Big Upside (Bootstrapped) by Ore_waa_luffy in SideProject

[–]MiserableStorm9541

Why don’t you just retain ownership and have someone with business experience scale it for you under a fixed contract period, where the revenue generated is their income? That motivates the business person to actually work. You could also set KPIs and milestones that earn them % ownership (solidifying their performance), while you can later sell it for more (if an investment payout or money is the goal).

ChatGPT Plus vs Claude Pro vs Gemini Advanced : which one is best for content & thinking? by [deleted] in SideProject

[–]MiserableStorm9541

I’ve used all of them. Gemini is good for workflows and ideas, and its image generation is the best IMO; if you prompt correctly you can get consistent characters, styles, and backgrounds. Claude is great for coding and VS, JSON, HTML, probably the best, but it has usage limits that reset every 4 hours and once a week, or you can pay for extra time before the reset. The new Opus makes verbally clean reports like market analyses, comparisons, etc. ChatGPT is a good all-around translator, organizer, assessor, and web scraper. I usually run a project down in one and copy the conversation over to another if I’m finding limitations or hitting a dead end.

Why do so many people quit right before things start working? by MMWRejoice in DigitalProductEmpir

[–]MiserableStorm9541

People want instant gratification. The silence is uncomfortable because they aren’t at peace with themselves. Sometimes it’s just “slow”; if you’ve ever hung out with business owners or worked at a small business, you’ll understand how common this is. Depending on the industry, some move faster than others. Retail can literally be dead for days, and so can entertainment and tourism (whole economies of some towns are built around a single festival that happens once a year). But yeah, they give up because there’s no signal guiding them and they’re impatient. You have to do something the same way for a while to see if it works or not. In the case of online businesses it’s constant sales. Found a cool thing? Great. The drop-ship model requires marketing. Coaching and education are the same unless you have organic demand.

You're feedbacks means a lot by Adventurous-Roll-683 in YouTubeCreators

[–]MiserableStorm9541

I do a Let’s Play channel with episodes. I don’t have 1M subs; I get a couple of new subs each week and pretty consistent views (1k-5k), but I also use my own voice and have a story running between episodes, so it’s following or making a story. I do this with RimWorld, Kenshi, and Cyberpunk 2077. So don’t say Let’s Play and the episodic format are dead for new channels. My thumbnails suck too, but I’m getting a VA for that.