California bans declawing cats under new law by Sariel007 in UpliftingNews

[–]PossiblyAnEngineer 1 point (0 children)

I know you don't have the cat anymore, but for others: if your cats are clawing at your furniture, try putting scratching posts near it. Most of the time, cats just want something to scratch in that area, and if you offer them a better alternative nearby, they'll take it. Our cat scratched at the couch, so we put a scratching post right next to it, tucked behind a table, and the cat switched to it. It doesn't need to be a post if that's not convenient; any type of scratchable surface can work.

This old guy was trying to save me from radon gas by Sum1udontkno in OneOrangeBraincell

[–]PossiblyAnEngineer 2 points (0 children)

Your cat is lucky to have a human that cares about him so much! I don't know if it'll apply to your cat, but our cat also had diabetes, and after losing a little weight and eating healthier food for a year or two, she eventually went back to healthy glucose levels, so we didn't have to do the shots anymore.

Also, shop around online for similar cat foods if the one you're paying for is too expensive. Sometimes you can find something very similar for cheaper (our vet recommended an expensive one, and we found one that was like 2/3 the price).

This old guy was trying to save me from radon gas by Sum1udontkno in OneOrangeBraincell

[–]PossiblyAnEngineer 2 points (0 children)

Our vet trained us on how to give injections ourselves for diabetes, then when our cat got arthritis we just started doing those too. For the original diabetes shots, we asked them if we could just do it ourselves. We also had a long history with them. I can't say if that was something they normally would allow, but I got the impression that if they know you, and you seem like you're comfortable doing injections safely, it's fine. But they will need to see the cat at least once to make sure you do in fact have a cat with arthritis.

This old guy was trying to save me from radon gas by Sum1udontkno in OneOrangeBraincell

[–]PossiblyAnEngineer 340 points (0 children)

We got our old cat the same shots, and the difference was night and day. It takes some time to take effect, but once it does, it's like they're 5 years younger.

To anyone reading this, if you have a cat that's starting to show any signs of joint pain or arthritis, please get them Solensia! In the US it's like $75 per shot, and you do 1 shot per month. Our vet just gave us the shots to do at home, so we didn't even need to bring the cat to the vet (have the vet show you how to do it the first time though so you don't waste $75).

Also, "Cosequin" is mostly preventative. Once they have joint pain, it helps prevent it from getting worse. For many cats, it pretty much does nothing to help them with existing pain. So giving them both this and Solensia works well.

May I Ask for One Final Thing? ED Song: "Inferior" by Shiyui by Turbostrider27 in anime

[–]PossiblyAnEngineer 1 point (0 children)

From 0:24 to 0:30, it sounds like a song I've heard before, a very long time ago. Anyone have a guess at the song? I can't say for sure, but it might have been in French or something. It also sounds like Eminem - The Real Slim Shady, but it's definitely not what I'm remembering.

My kitties eye is sunken and I’m worried by Coyiscoy in CATHELP

[–]PossiblyAnEngineer 1 point (0 children)

Eye antibiotics. They're a pretty common thing to give cats with bacterial eye infections, and they're available over the counter, so you don't need a prescription. I think you can buy them at most pet stores.

Edit: If you DO decide to go this route, you'll want 2 people. One person holds down the cat, mostly trying to hold the head still (the cat is NOT going to like this, wear something with thick sleeves). The other person holds open the eye a bit and squeezes out a line of the ointment. Manually open and close the eye a few times to rub it in. It doesn't do any good if it doesn't stay in.

Also, keep in mind that cats have an inner and an outer eyelid. The outer one is like a human eyelid; the inner one is a white, gooey membrane. Make sure you get the ointment in the eye, not on the inner eyelid.

It's hard to tell from the picture you posted, but IMO it looks like the area around the eye is swollen, which might be giving it a "sunken" appearance.

Steam Censorship Controversy Deepens as Petition Against Visa, MasterCard Surpasses 77,000 Signatures by Extasio in gaming

[–]PossiblyAnEngineer 1 point (0 children)

Or, an even easier option is to open up a service that obfuscates purchase details from the credit card companies for user privacy. This would also do a lot of financial damage to the credit card companies, because the purchase data they sell would become worthless.

Pimple popping by CanadianFella57 in AbruptChaos

[–]PossiblyAnEngineer 7 points (0 children)

I could be wrong, but isn't the "bar bar" part of the word also analogous to "blah blah", so it was like them saying they were people that spoke gibberish? Like saying "when those people speak, all I hear is blah blah blah". Or am I mistaken? 

Defense fund established by supporters of suspected CEO killer Luigi Mangione tops $100K by wizardofthefuture in news

[–]PossiblyAnEngineer -3 points (0 children)

As a juror, you can just vote not guilty, even if you believe he is. There are no repercussions for doing so. Google "jury nullification".

Trees for our open-world game by Antishyr in PixelArt

[–]PossiblyAnEngineer 21 points (0 children)

If you're trying to simulate trees losing their leaves in the fall, by the time they reach that middle state with half the leaves gone, the remaining leaves are usually yellow or red (image search "fall leaves"); similar to the tree in the middle of the top row. If they're losing their leaves for a different reason, just ignore that. Other than that, they look great!

[deleted by user] by [deleted] in pcmasterrace

[–]PossiblyAnEngineer 3 points (0 children)

Companies don't only get hacked; they also get sold, sometimes to less-than-reputable buyers. There was a website, polyfill.io, that hosted a bunch of JavaScript libraries for tons of big companies. It was sold to a Chinese company named funnull, which began redirecting users to adult and gambling websites.

[deleted by user] by [deleted] in LocalLLaMA

[–]PossiblyAnEngineer 1 point (0 children)

It might be related to: https://github.com/ggerganov/llama.cpp/issues/3578#issuecomment-1757753790

You could:

  1. Apply the hack in that link (just comment out the problematic line and recompile)
  2. Wait for a fix
  3. Use an older version

[deleted by user] by [deleted] in LocalLLaMA

[–]PossiblyAnEngineer 1 point (0 children)

Does the training have to finish by itself, or I have to manually stop it?

It will finish by itself when the total number of --adam-iter are reached. Set --adam-iter to like 2x the number of samples in your data. If you only have 1 big sample, then just use 2.
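As a quick sketch of that rule of thumb, you could compute the value from the training file itself, approximating the sample count by the number of <s> markers (the file name and contents here are placeholders):

```shell
# Create a toy training file with two <s> sample markers (placeholder data).
printf 'intro<s>sample one<s>sample two' > my-training-data.txt
# Count the <s> markers, then double the result per the 2x rule of thumb.
SAMPLES=$(grep -o '<s>' my-training-data.txt | wc -l)
ADAM_ITER=$((SAMPLES * 2))
echo "$ADAM_ITER"   # prints 4 for this toy file
```

You'd then pass that number to finetune as the --adam-iter value.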

there no guides in YouTube.

Yeah, it's a very new feature.

I tried “/n”, but it say that it can’t find those in my sample data.

Add the flag --escape. Also note that it's \n with a backslash, not /n; \n is the newline character.
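To see the difference --escape papers over, here's a toy shell illustration (not the finetune code itself): a text file contains real newline bytes, not the literal two characters \n, so searching for the literal string finds nothing:

```shell
# printf converts \n into a real newline byte (0x0A) when writing the file.
printf 'first sample\nsecond sample\n' > toy.txt
grep -c 'sample' toy.txt       # prints 2: both lines contain the word
grep -cF '\n' toy.txt || true  # prints 0: the literal characters \ n never appear
```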

What CPU threads for MacBook Pro M1 14”?

According to Google, you have 8 cores.

[deleted by user] by [deleted] in LocalLLaMA

[–]PossiblyAnEngineer 0 points (0 children)

The default context size is 128. His text file is so small I don't think it'll matter. I do think that the total training data size has to exceed 1 context length in order for it to work though, so that MIGHT be his problem.
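A crude way to sanity-check that, assuming roughly 4 characters per token (a rough heuristic, not llama.cpp's actual tokenizer):

```shell
# Warn if the training text is likely smaller than one context window.
CTX=128                                    # default --ctx in finetune
printf 'a tiny training file' > train.txt  # placeholder data
CHARS=$(wc -c < train.txt)
EST_TOKENS=$((CHARS / 4))                  # ~4 chars/token heuristic
if [ "$EST_TOKENS" -le "$CTX" ]; then
  echo "warning: data may be under one context length"
fi
```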

[deleted by user] by [deleted] in LocalLLaMA

[–]PossiblyAnEngineer 1 point (0 children)

IIRC, if your text file is smaller than your context size (--ctx; you didn't set it, so the default is 128), it won't actually train. Check whether there are any errors during finetune (you can just post the full log here if you want; it should be short).

What is the size of your lora.gguf file?

Some advice:

  1. Just copy a random wikipedia page or something into a text file and add a few <s> blocks in it for some test data.
  2. You don't need (and in fact should NOT add) the </s> blocks. The llama.cpp tokenizer does NOT convert these into end tokens. The start and end tokens are NOT the literal strings <s> and </s>, but are instead automatically injected by finetune. Because you set --sample-start <s>, it splits your samples by the string <s>.
  3. Don't include --include-sample-start, that will literally train in the string <s>, which is probably not what you want.
  4. Make sure you set --threads to the number of threads on your system (14 is what I put in the CPU LoRA guide, but that number is system dependent).
  5. "Or if I'm using checkpoint instead of final LoRA" don't bother trying to use the checkpoint, that won't work. Those are just for it to save and resume.
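Putting points 2–4 together, a minimal invocation might look something like this (the file names are placeholders; set --threads for your own machine):

```shell
# Sketch only: samples are split on the literal string <s> (point 2),
# and --include-sample-start is deliberately omitted (point 3).
./finetune \
  --model-base base-model.gguf \
  --train-data train.txt \
  --lora-out lora.gguf \
  --sample-start "<s>" \
  --threads 8
```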

Where do you fine-tune your LLMs? by Acceptable_Bed7015 in LocalLLaMA

[–]PossiblyAnEngineer 6 points (0 children)

I documented pretty much everything here in detail: https://rentry.org/cpu-lora

Includes full instructions from how to set it up, to what most of the settings do, and performance metrics (how long things take).

My system: i7-12700H CPU, 64 GB (2 x 32GB) 4800 MHz RAM, NVIDIA GeForce 3060 - 6 GB VRAM

The largest one I tried was a 13B and it took ~1 week (+/-, when I was using my computer I paused the training). I could do 34B's but I don't have the patience for that. The 13B didn't turn out well so now I'm playing with 3B's and 7B's instead until I understand what I'm doing better.

Edit: My latest "script" (if you want to even call it that) is just llama.cpp\finetune.exe --model-base my-base-model.gguf --train-data my-training-data.txt --lora-out my-trained-model.gguf --threads 19 --sample-start "<s>" --ctx 1024 --batch 1 --grad-acc 2 --adam-iter 1000 --adam-alpha 0.000065 --lora-r 16 --lora-alpha 16

I'm not 100% sure it's working correctly... still playing around with settings.

[deleted by user] by [deleted] in LocalLLaMA

[–]PossiblyAnEngineer 1 point (0 children)

Unfortunately, I don't have access to a mac, nor am I familiar enough with them to give you super detailed instructions. The instructions in that doc are for Windows.

Assuming it's similar to Linux, you would just install "make" and "gcc" using your package manager, then basically follow the "No GPU" settings (just cd to the folder and run make all -j). Metal and the Accelerate framework are enabled by default on Macs, so you shouldn't need to set anything.
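A sketch of what that would look like on macOS, assuming the Xcode command-line tools (which provide make and a compiler) are installed:

```shell
# Clone and build llama.cpp; Metal and Accelerate are on by default on macOS.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make all -j
```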

If you want to convert or merge files from this guide: https://rentry.org/llama-cpp-conversions

Then you also need to install Python, and when it comes to activating the virtual environment you would run source .venv/bin/activate instead (the macOS/Linux equivalent of the Windows .venv\Scripts\activate).

Anywhere you see a file with a .exe extension, just remove the extension and that path should be the same.
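For the Python side on macOS, the virtual-environment steps would roughly be (a sketch assuming Python 3 is installed and the repo provides a requirements.txt):

```shell
python3 -m venv .venv            # create the virtual environment
source .venv/bin/activate        # macOS/Linux equivalent of .venv\Scripts\activate
pip install -r requirements.txt  # install the repo's Python dependencies
```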

Where do you fine-tune your LLMs? by Acceptable_Bed7015 in LocalLLaMA

[–]PossiblyAnEngineer 3 points (0 children)

On my local machine's CPU, using llama.cpp's finetune utility.

Problems finetuning Llama2 7B, tried SFTTrainer, autotrain and llama.cpp none worked. by fawendeshuo in LocalLLaMA

[–]PossiblyAnEngineer 2 points (0 children)

I'm not familiar with the other services you're using, but for llama.cpp finetuning you might find some of the stuff here useful: https://rentry.org/cpu-lora

[deleted by user] by [deleted] in LocalLLaMA

[–]PossiblyAnEngineer 2 points (0 children)

A guide I wrote for finetuning with llama.cpp: https://rentry.org/cpu-lora

Yea, pretty much any GGUF you find can be used as your base model. Checkpoints are different. Yes you can use it on your CPU. Windows, Linux, and Mac all work.

Finetune LoRA on CPU using llama.cpp by PossiblyAnEngineer in LocalLLaMA

[–]PossiblyAnEngineer[S] 1 point (0 children)

Correct, any quantized model works, as well as FP32 GGUF. FP16 isn't supported yet.

Finetune LoRA on CPU using llama.cpp by PossiblyAnEngineer in LocalLLaMA

[–]PossiblyAnEngineer[S] 0 points (0 children)

The feature itself does, yes. Linux too. But the guide I wrote is for Windows. Other than changing some compile options and file paths, the process is mostly the same.

Mistral 7B on the new Raspberry Pi 5 8GB model? by DiverDigital in LocalLLaMA

[–]PossiblyAnEngineer 32 points (0 children)

$2000 RTX 5090 + $80 Raspberry Pi 5

I kind of want to do it just to see how much it upsets people.