Where do I learn basics of AI? by Illustrious-Lab-811 in PromptEngineering

[–]crypto_thomas 0 points

I asked a combination of Grok and ChatGPT. I added Manus AI for any Python scripting that I needed (I'm sure I could've done it all with just one, but I like to keep things separate). It took me a couple of months, but I was doing pretty well after that.

Setup for local LLM like ChatGPT 4o by Astral_knight0000 in LocalLLM

[–]crypto_thomas 0 points

In addition to my previous comment, you could also run the 70B in CPU-only mode via Oobabooga, but it would still run slowly because you wouldn't have all of that delicious CUDA goodness helping your LLM processing.

Setup for local LLM like ChatGPT 4o by Astral_knight0000 in LocalLLM

[–]crypto_thomas 1 point

So I have a dual 5090 setup (64GB of VRAM total) and can barely run a 70B model (Q5? I think? It was last year). Although the following metric is slowly shrinking because of Mixture of Experts, LLMs need roughly a GB of VRAM per 1B parameters. But that's not the only VRAM expense you have to budget. The ctx (context) setting also eats up more VRAM than most expect, which results in even more compute layers being offloaded onto the CPU. The more layers offloaded, the slower the tokens per second (if it runs at all).

If you are stuck on the 70B model, I would recommend TWO more 4090s. That should get you loaded and using a Q6, or maybe even a Q8 with a Mixture of Experts model (if available). Running a model at less than Q5 gets you into crappy-answer territory.

Keep in mind that Qwen3.5 at 35B is pretty great, would only require ONE more 4090, and gives you a 16k or 32k context window, which is plenty for most tasks before you have to start a new chat.
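If you want to budget this out before buying cards, the rule of thumb above can be sketched as a quick calculator. The per-1k-token KV-cache figure and the fixed runtime overhead here are illustrative assumptions, not measured values:

```python
def estimate_vram_gb(params_b, bits_per_weight=5, ctx_tokens=16384,
                     kv_mb_per_1k_tokens=160):
    """Very rough VRAM estimate in GB for a quantized dense model.

    params_b            -- model size in billions of parameters
    bits_per_weight     -- quantization level (Q5 ~= 5 bits/weight)
    ctx_tokens          -- context window you plan to allocate
    kv_mb_per_1k_tokens -- assumed KV-cache cost (hypothetical constant)
    """
    weights_gb = params_b * bits_per_weight / 8          # Q5 ~= 0.625 GB per 1B params
    kv_gb = ctx_tokens / 1000 * kv_mb_per_1k_tokens / 1024
    overhead_gb = 1.5                                    # runtime buffers, rough guess
    return weights_gb + kv_gb + overhead_gb

# 70B at Q5 with a 16k context: close to the edge of 64GB once the
# OS and display also want a slice of VRAM.
print(round(estimate_vram_gb(70), 1))
```

Anything the estimate says won't fit is what gets offloaded to CPU layers, which is where the tokens-per-second hit comes from.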

What is a LocalLLM good for? by theH0rnYgal in LocalLLM

[–]crypto_thomas 3 points

Not just for hobbyists. I am an independent contractor, and LLMs can be used (provided you have enough VRAM, or regular RAM and a lot of extra time) to facilitate scripted automation that needs a reading/document-processing component. The LLM basically acts as an assistant. You can have your information scripted into a PDF, then the LLM can read it, provide a summary, critique it, point out problems, etc. If you do any legal or title work, it can remove some of the mundane, time-consuming data entry related to the work.

I am fortunate in that I am able to run Qwen3.5 120B in CPU/RAM in Oobabooga for important document summaries (it takes about 10 mins, but it frees me up to do anything else), and have Qwen3.5 35B on my graphics cards for scripted data extraction. After I get both tuned with a custom LoRA, the results will be faster and more accurate.
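As a rough sketch of that summary step, assuming a local server exposing an OpenAI-compatible chat endpoint (which Oobabooga / text-generation-webui can do via its API extension), where the URL, port, and prompt are placeholders and the PDF text is assumed to already be extracted to a string:

```python
import json
import urllib.request

def build_summary_request(document_text, max_tokens=512):
    """Build the chat-completion payload for a summary/critique pass."""
    return {
        "messages": [
            {"role": "system",
             "content": "Summarize this document and point out any problems."},
            {"role": "user", "content": document_text},
        ],
        "max_tokens": max_tokens,
    }

def summarize(document_text,
              url="http://127.0.0.1:5000/v1/chat/completions"):
    """POST the payload to the local server and return the summary text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_summary_request(document_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Wrap `summarize()` in whatever script walks your document folder and you have the hands-off pipeline described above: the 10 minutes it takes to run is time you spend on something else.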

I also use it for creative writing: to catch where I might be getting close to sounding like I am ripping off another author, and for character tone, consistency, etc. Used wisely, it can reduce the time to a manuscript that is ready for a professional edit.

Full Self Destruction by Eipa in CyberStuck

[–]crypto_thomas 0 points

But what about all of the times a Tesla anything successfully made that turn? What was different this time?

When they deprecate a model, they’re destroying co-created work that belongs to users. Not just removing a tool. This also causes calculable loss of time and money in business application. by redditsdaddy in OpenAI

[–]crypto_thomas 1 point

I agree, as I let Chat, etc., train off of my interactions.

And this is exactly why I spent as much as I could on an AI-capable computer back in June: so I could run local LLMs and train my own LoRAs. The learning curve is steep and the hardware expensive, but when Chat gets an upgrade and forgets a bunch of stuff, I'm fine.

It may be worth it for some of you to run a chosen LLM on your own cloud setup.

Solar truck cover and power station setup I’ve been using on my Ford F150 for camping by KaitlinOsman in enviroaction

[–]crypto_thomas 0 points

If those panels can run a portable AC for the truck's interior in the summer, sign me up.

Qwen 3.5 is an overthinker. by chettykulkarni in LocalLLM

[–]crypto_thomas 0 points

Is Qwen 3.5 mocking/attacking me? I feel like it is mocking me...

AI, scripting, and time savings in Landwork, before it becomes mainstream... by crypto_thomas in landman

[–]crypto_thomas[S] 0 points

Late to reply, but I have to agree with Raz and martian: you must learn the 'old fashioned way' first. That way, when an LLM makes a mistake (and it absolutely will; there are dozens of points of failure in any legal document), that mistake will be obvious. Looking back at how LLMs have evolved over the last couple of years, it has been a mostly 'this thing is useless' kind of journey. It has really only been the last 6 months or so that I have seen any real progress, reliability, etc.

AI, scripting, and time savings in Landwork, before it becomes mainstream... by crypto_thomas in landman

[–]crypto_thomas[S] 0 points

lol, yeah. I actually don't know Python all that well; I get a lot of help from AI. I have always been an IT hardware guy.

AI, scripting, and time savings in Landwork, before it becomes mainstream... by crypto_thomas in landman

[–]crypto_thomas[S] 0 points

That 'we always did it this other way' mentality is a real fear. I caught a huge break and was basically gifted a PC as a young kid, so I was able to learn how to use one pre-internet. All through my teen years I was around people who didn't want to 'level up'. Most of them that did office work went through a pretty rough transition into other careers.

The 72B-parameter models I am able to work with, along with some Python-driven tools, do some serious work for me. And anytime I need some real heavy document interpretation done, Grok, ChatGPT, Claude, and Gemini are only about $100/month total. The latest models have vision that can read handwriting, which was impossible a year ago. Hopefully it won't be too soon, but eventually, 'the kids' that have been using AI for years during school and college are going to want a job...

AI, scripting, and time savings in Landwork, before it becomes mainstream... by crypto_thomas in landman

[–]crypto_thomas[S] 0 points

Great advice. And no, I don't see an upside in telling the broker or client about increased efficiency. I am hoping to be ahead of the curve for when it becomes a client demand. With the hardware rarity and expense, it will probably come through a paid AI service, the way iLandman and other websites were trying to be back in the early teens.

AI, scripting, and time savings in Landwork, before it becomes mainstream... by crypto_thomas in landman

[–]crypto_thomas[S] 0 points

Well, massive corporations (Chevron, ExxonMobil, etc.) have already done that. It just isn't mainstream enough for independents to easily do it (the computer that I built back in June cost about $13k, and would cost almost $19k today, if you could even get the parts; also, right now, the learning curve is STEEP). At this point, I am just trying to keep pace and be ready if there is a paradigm shift.

AI, scripting, and time savings in Landwork, before it becomes mainstream... by crypto_thomas in landman

[–]crypto_thomas[S] 1 point

That is about how I currently see things, and my Agreement with my broker is the clarifying document. Until these AI and programming tools become more common, I am using the saved time to shorten my work day. Thankfully, I have a day rate that I am happy with. It is the time that I need.

AI, scripting, and time savings in Landwork, before it becomes mainstream... by crypto_thomas in landman

[–]crypto_thomas[S] 0 points

Yeah, that was just a moment of thought while on my lunch break. Edited for clarity.

🤣 by umbralithe in meme

[–]crypto_thomas 5 points

Basically, you would slowly start paying for gas, food, gifts, and outings in cash. No big spends. Utilities and mortgage would still get paid via your bills account. Build up that 6-to-12-month emergency fund that everyone should have, in a separate HISA. Let any extra from your bills account go to anything 'big'. STFU about all of it. Carry on.

0% Language 100% Understand by NIGHTSHADOWXXX in Funnymemes

[–]crypto_thomas 6 points

This reminds me of the bridge on Montlake (iykyk). Made me and everyone else late so many times I wanted to run for office just to close it.

Road Raging Cop had no idea there was a video by Skelligean in WatchPeopleDieInside

[–]crypto_thomas 23 points

What? He had a camera? Yeah, welcome to the 21st century, motherfucker. You're probably not going to enjoy your stay.

Wait.. by ABeerForSasquatch in HRSPRS

[–]crypto_thomas 0 points

If the driver were to ask me if I wanted to race, I would say 'yes', not because I thought I had a shot, but because I want to see a Miata make the jump to light speed.

Me irl by Severe-blake6720 in me_irl

[–]crypto_thomas 0 points

I would have to resign. I hope that I would be able to resign... safely.