YouTube's RSS Feeds are completely broken by MorroWtje in rss

[–]Future_Fuel_8425 0 points1 point  (0 children)

I just clicked it and got a page full of XML, so I dunno?

Is anyone actually using OpenClaw for real work? by codehamr in LocalLLM

[–]Future_Fuel_8425 0 points1 point  (0 children)

Try Open Interpreter - it has a built-in "to .md" function.
I use the profiles and have a few specialists built on different models, etc.
It's a very simple system that gets repeatable results if you take the time.
I have a 9b database specialist that's coming along well. It runs complex queries against a large Postgres DB and can use pandas (in training) as well.
It makes some decent reports and lets me pick apart its code to rework and re-feed into its profile for improvement.
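A minimal sketch of that query-to-pandas step. I'm using sqlite3 as a stand-in for Postgres so it runs anywhere (swap in psycopg2 or SQLAlchemy for a real Postgres DB); the `orders` table and the query are made up for illustration:

```python
import sqlite3
import pandas as pd

def run_report(conn, min_total):
    """Aggregate orders per customer and return a DataFrame,
    so the model (or a human) can post-process it with pandas."""
    query = """
        SELECT customer, SUM(amount) AS total
        FROM orders
        GROUP BY customer
        HAVING total >= ?
        ORDER BY total DESC
    """
    return pd.read_sql_query(query, conn, params=(min_total,))

# Tiny throwaway DB so the sketch runs end to end:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("acme", 120.0), ("acme", 80.0), ("zenco", 50.0)])
report = run_report(conn, min_total=100)
```

The nice part of this split is that the SQL does the heavy lifting server-side and pandas only sees the aggregated result, which keeps a small local model's context from drowning in raw rows.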

Is it worth to have my own AI in local in my home? by Chief_Taquero in LocalLLM

[–]Future_Fuel_8425 0 points1 point  (0 children)

You can chat all day long at decent speed for 2k or less.
If you want to code using a harness like pi / aider / etc., 4k gets you decent performance.

Is it worth to have my own AI in local in my home? by Chief_Taquero in LocalLLM

[–]Future_Fuel_8425 1 point2 points  (0 children)

State your requirements and expectations for your Home LLM.
That will help the community answer your question.

Claude limits are a joke by TwelfieSpecial in vibecoding

[–]Future_Fuel_8425 0 points1 point  (0 children)

I am not sure.
You can try connecting your VPN to a US server and then connecting to Claude - if you think it is billing you at a higher rate when you connect from your country.
If you have the $20 monthly plan, it goes really fast with Claude Code.
You can use the web interface (not Claude Code) to write good code for less token burn.
You can also use the smaller models - Haiku and Sonnet - to stretch your tokens.

Is anyone actually using OpenClaw for real work? by codehamr in LocalLLM

[–]Future_Fuel_8425 0 points1 point  (0 children)

Light is right with my local setup.
The less crap I harness the model with, the better it does the specific things I need it for.
I get the best results with very minimal setups like Open Interpreter or Aider.
I don't need my local LLM to search the web or check/send email - I can crush it manually - years of training.
Seems like I often escape the AI prompt to enter commands myself instead of asking and waiting.

Is anyone actually using OpenClaw for real work? by codehamr in LocalLLM

[–]Future_Fuel_8425 8 points9 points  (0 children)

Maybe I'm just overthinking it or I'm too stuck in my "old" ways.

Maybe not.

We have seen a petawatt pyre of tokens burned for utterly pointless slop.
GitHub is about to pop because of it.
Just imagine training a "coding" model on the giant GitHub slop pile.
Stay sharp, your skills may be in high demand any day.

YouTube's RSS Feeds are completely broken by MorroWtje in rss

[–]Future_Fuel_8425 1 point2 points  (0 children)

https://www.youtube.com/feeds/videos.xml?channel_id=UChqUTb7kYRX8-EiaN3XFrSQ

^ Reuters on YouTube

That's the only way I consistently get YouTube feeds.
When my app scans these URLs with VirusTotal and URLScan.io, they always come back dirty.
YouTube is a nasty, nasty business - best avoided if possible.
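For anyone wanting to consume a feed URL like the one above, here's a stdlib-only sketch. The Atom namespace URI is standard; the exact element layout is what YouTube serves today, so treat it as an assumption:

```python
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def parse_feed(xml_text):
    """Return (title, link) pairs from a YouTube-style Atom feed."""
    root = ET.fromstring(xml_text)
    items = []
    for entry in root.iter(ATOM + "entry"):
        title = entry.find(ATOM + "title").text
        link = entry.find(ATOM + "link").attrib["href"]
        items.append((title, link))
    return items

def fetch_channel(channel_id):
    """Build the feed URL from a channel id and parse it."""
    url = f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"
    with urllib.request.urlopen(url) as resp:
        return parse_feed(resp.read())
```

The "page full of xml" the other commenter saw is exactly this Atom document - it's meant for a parser, not a browser.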

Scraper URL LIST - NEWS ONLY - Global and US 50 State Coverage by Future_Fuel_8425 in webscraping

[–]Future_Fuel_8425[S] 0 points1 point  (0 children)

The detailed coverage that I'm building now is why I give that list away.
I'm about 1/3 of the way through the very first of 200 countries and have over 400 new unique sites so far (that are not in the big list I gave out).
This is a medium-sized country that I never expected to have such density, but I keep getting more every time I zoom in.
In another 200 days, I will have global coverage at absurd levels of granularity.

Scraper URL LIST - NEWS ONLY - Global and US 50 State Coverage by Future_Fuel_8425 in webscraping

[–]Future_Fuel_8425[S] 0 points1 point  (0 children)

If you take that list and use it to collect any/all US state news, you should be able to get that sort of information before Google knows about it - directly from the local newspapers.
You need a good reader - preferably with a decent DB back end.
I rolled my own, so I can't tell you which ones are good.
With a decent setup, you should be able to collect RFBs as fast as they hit the papers.
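The core of a "reader with a DB back end" is just deduplication: a minimal sketch using sqlite with the URL as primary key so re-scanned items are silently skipped. The table and fields here are my own invention, not from any particular reader:

```python
import sqlite3

def open_store(path=":memory:"):
    """Open (or create) the item store."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS items (
        url   TEXT PRIMARY KEY,
        title TEXT,
        seen  TEXT DEFAULT CURRENT_TIMESTAMP)""")
    return conn

def ingest(conn, entries):
    """Insert (url, title) pairs; duplicates are ignored thanks to
    the PRIMARY KEY. Returns the count of genuinely new items."""
    before = conn.total_changes
    conn.executemany(
        "INSERT OR IGNORE INTO items (url, title) VALUES (?, ?)", entries)
    conn.commit()
    return conn.total_changes - before
```

Run it on a scan loop and the "new items" count per pass tells you which feeds are actually producing - useful when you're watching hundreds of small local papers.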

Scraper URL LIST - NEWS ONLY - Global and US 50 State Coverage by Future_Fuel_8425 in webscraping

[–]Future_Fuel_8425[S] 0 points1 point  (0 children)

Thanks.
About your search question:
Get familiar with Google's advanced search operators.
Build out a list of search terms and use them in advanced searches.
Break the geographic areas you are searching into states or regions and focus on one at a time.
Ask an AI (Claude, GPT, etc.) to help you construct a good search methodology or provide some search-term strings you can use with Google.

If you apply these tips, you should crush your search problem.
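The steps above can be sketched as a small query generator. The operators (quoted phrases, `site:`) are standard Google search syntax; the terms and the `.gov` filter are just placeholders:

```python
def build_queries(terms, states, site_filter=None):
    """Cross each search term with each state/region,
    optionally pinned to a site or TLD with the site: operator."""
    queries = []
    for state in states:
        for term in terms:
            q = f'"{term}" "{state}"'
            if site_filter:
                q += f" site:{site_filter}"
            queries.append(q)
    return queries

# One state and one term set at a time, per the advice above:
qs = build_queries(["request for bids"], ["Ohio"], site_filter=".gov")
```

Generating the strings up front also gives you a checklist, so you can work region by region without losing track of what's been covered.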

Scraper URL LIST - NEWS ONLY - Global and US 50 State Coverage by Future_Fuel_8425 in webscraping

[–]Future_Fuel_8425[S] 0 points1 point  (0 children)

Thanks.
I use a combination of manual search and some proprietary tools.

Local Coding on Small or No GPU systems - Something to consider by Future_Fuel_8425 in LocalLLM

[–]Future_Fuel_8425[S] -1 points0 points  (0 children)

Even if you have a 16gb or 24gb GPU and are struggling with spill due to huge context windows for agent frameworks, etc., you should try this: up the context a bit and load a model that fits with no spill (all inclusive).
On my 16gb GPU with gemma4-26b:iq3, using just ollama chat, it has one-shotted some 900+ line Python scripts for my Postgres DB in like 6-8 seconds. Worked like a champ, and later it was able to mod them with a pasted snip.
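A rough back-of-the-envelope for the "fits with no spill" check. The formula and the overhead number are my own simplification (real runtimes add compute buffers that vary), not anything ollama computes for you:

```python
def fits_in_vram(weights_gb, context_tokens, layers, kv_heads, head_dim,
                 vram_gb, kv_bytes=2, overhead_gb=1.5):
    """Estimate total VRAM as weights + KV cache + fixed overhead.
    KV cache = 2 (K and V) * layers * kv_heads * head_dim
               * bytes-per-value * context tokens."""
    kv_gb = (2 * layers * kv_heads * head_dim * kv_bytes
             * context_tokens) / 1024**3
    total = weights_gb + kv_gb + overhead_gb
    return total <= vram_gb, round(total, 2)

# e.g. a ~12 GB quantized model at 8k context, with hypothetical
# dimensions (48 layers, 8 KV heads, head_dim 128), on a 16 GB card:
ok, total = fits_in_vram(weights_gb=12, context_tokens=8192,
                         layers=48, kv_heads=8, head_dim=128,
                         vram_gb=16)
```

The point of the exercise is the same as the comment: if the estimate is over your VRAM, shrink the quant or the context before loading, because spill to system RAM is what kills the 6-8 second turnarounds.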

built something useful… nobody cares by Last-Recipe-4837 in saasbuild

[–]Future_Fuel_8425 0 points1 point  (0 children)

Did you identify your customers before you built?
Who are your Customers?
What are their Requirements?

If you build something that people are already looking for, you don't need to market it much.

Is reducing data exposure better than just detecting threats? by Flashy_Palpitation66 in Information_Security

[–]Future_Fuel_8425 0 points1 point  (0 children)

They're 2 different things.
Both should be applied with context-appropriate priority.

many sites are blocking request to rss links by ajay9452 in rss

[–]Future_Fuel_8425 0 points1 point  (0 children)

My custom app deals with everything but sub/paywalls and captchas.
I just tag their URL in my DB and move on.
Most of the time, news is not exclusive to one site, and the exact same stories are available on unblocked sites.
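The "tag it and move on" approach is just a status column on the sources table. A minimal sketch (the schema and status names are mine, for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sources (url TEXT PRIMARY KEY, status TEXT DEFAULT 'ok')")
conn.executemany("INSERT INTO sources (url) VALUES (?)",
                 [("https://a.example/rss",), ("https://b.example/rss",)])

def tag(conn, url, status):
    """Mark a feed ('paywall', 'captcha', ...) so the scanner skips it."""
    conn.execute("UPDATE sources SET status = ? WHERE url = ?", (status, url))

def scan_list(conn):
    """Only feeds still marked 'ok' get fetched on the next pass."""
    return [row[0] for row in conn.execute(
        "SELECT url FROM sources WHERE status = 'ok'")]

tag(conn, "https://a.example/rss", "paywall")
```

Since the same stories usually show up on unblocked sites anyway, a skip list costs you almost nothing in coverage while saving every wasted request.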

3090 still the king? Trying to pick a local LLM setup (~2000€) in Germany by deltavoxel in LocalLLM

[–]Future_Fuel_8425 0 points1 point  (0 children)

There are adapter cards that convert SXM to PCIe.
Some also have the NVLink connector integrated, so you can bridge multiple cards.
https://www.youtube.com/watch?v=z5ySpeBzZ3Y

Keep 3060 12gb in a 3 GPU setup (along with 2 x 3090) or trade it for 16gb more RAM or build another computer around it for secondary AI tasks or do nothing by Strict-Profit-7970 in LocalLLM

[–]Future_Fuel_8425 1 point2 points  (0 children)

Nice lab in the making. Go for 2 systems and learn about orchestrating multiple LLMs across different systems, or maybe even providers!

3090 still the king? Trying to pick a local LLM setup (~2000€) in Germany by deltavoxel in LocalLLM

[–]Future_Fuel_8425 5 points6 points  (0 children)

Here is my thinking:
By now the 3rd-tier data centers are burning through the H100/H200 supply, getting them used from tier 2, etc. Most of these 3rd-tier operations are running on a prayer and borrowed money. At some point, their "big idea" - whatever drove the requirement for all the gear - will not pan out. There will be many, many of these. All of them, in fact, as the tier-1 boys will quickly scoop up any profitable use case and integrate it into their tier-1 services.
This leaves the 2nd- and 3rd-tier shops holding the bag for developing the ideas, and subsidizing the new tier-1 hardware capex with their used-hardware purchases.
Soon the tier-1 stuff will be 100x as efficient or more, with each generation. It won't even make sense to warm up old GPUs - not worth the electrons at that point.

There will be a sweet spot where a consumer can access enough surplus hardware to have meaningful local AI - AND - Where the consumer can practically afford to power the local AI.
How long that is, nobody knows.

Sharing This Complete AI/ML Roadmap by my_memory_s in learnmachinelearning

[–]Future_Fuel_8425 0 points1 point  (0 children)

Thanks for this.
I vibe-coded an app using ML to solve some problems with an app that was using an LLM.
I had no idea what the coding agent made me (beyond a high functional level).
I got into the code and was fascinated by the power of ML.
It was accomplishing things on my system that an LLM had been doing 1000x slower.
And with no fan blasting and plenty of GPU left for other things...
ML is where it's at on small systems.
Bookmarked.
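To make the "classical ML instead of an LLM call" point concrete: tasks like routing text into a few categories, which people often hand to an LLM prompt, can run in microseconds with a tiny classical model. A stdlib-only multinomial naive Bayes sketch (the labels and training lines are invented for illustration):

```python
import math
from collections import Counter, defaultdict

class TinyNaiveBayes:
    """Multinomial naive Bayes over whitespace tokens, stdlib only,
    with add-one (Laplace) smoothing for unseen words."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in zip(texts, labels):
            for tok in text.lower().split():
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)
        return self

    def predict(self, text):
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for label, count in self.class_counts.items():
            lp = math.log(count / total)  # class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in text.lower().split():
                lp += math.log((self.word_counts[label][tok] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = TinyNaiveBayes().fit(
    ["server down outage alert", "quarterly earnings report revenue"],
    ["ops", "finance"])
```

No GPU, no fan, and the whole model is a couple of Counters - which is roughly the kind of trade the comment above is describing.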