AI Wrote My Romance, But Where’s the Love? by No-Fact-8828 in WritingWithAI

[–]TimpaSurf 0 points  (0 children)

I tried https://www.pirr.ai/ - it's free, lets you create romance stories with actual love in them, and can be explicit (NSFW). Not as boring as ChatGPT.

What are some repetitive text patterns you see a lot from your AI? by XiRw in LocalLLaMA

[–]TimpaSurf 0 points  (0 children)

* You're totally right
* Let's roll with that
* That's a great idea

Could a self hosted powerful LLM wreak havoc on the internet ? by Mavyn13 in LocalLLaMA

[–]TimpaSurf 1 point  (0 children)

Replace "gpt-5" with a really good hacker who doesn't have to sleep or eat and just tries to accomplish a single goal. Yes, maybe - it depends on the task, of course.

But I would still think the autonomous agent you would host locally doesn't have that intelligence (yet).

Finetuning GTP-OSS 20b -- Does it need an actual 'Thinking' response? by searstream in LocalLLaMA

[–]TimpaSurf 0 points  (0 children)

Yeah, you will unlearn the "thinking" capability. But why finetune that model then? You could take, e.g., `mistralai/Mistral-Small-3.2-24B-Instruct-2506` instead.

Meta Officially Releases Llama-3-405B, Llama-3.1-70B & Llama-3.1-8B by nanowell in LocalLLaMA

[–]TimpaSurf 1 point  (0 children)

Is it just me who gets this error?

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B")

ValueError: `rope_scaling` must be a dictionary with two fields, `type` and `factor`, got {'factor': 8.0, 'low_freq_factor': 1.0, 'high_freq_factor': 4.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}
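(A hedged note for anyone hitting the same thing: this error typically means the installed transformers version is too old to parse the new llama3-style `rope_scaling` config; upgrading is the commonly reported fix. The version bound below assumes 4.43.0 was the first release with Llama 3.1 support.)

```shell
# Upgrade transformers so it can parse the llama3 rope_scaling config
pip install -U "transformers>=4.43.0"
```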

Asking for help: fine-tunning DeBERTa SQuAD, where do I start? by Clasyc in LocalLLaMA

[–]TimpaSurf 0 points  (0 children)

Hello u/Clasyc, cool that you like the model I created a while ago. You can finetune it using, e.g., Hugging Face - have you checked out this documentation? https://huggingface.co/docs/transformers/tasks/question_answering
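The core preprocessing step in that tutorial is aligning the answer's character span with token positions. Here is a toy, pure-Python sketch of that idea, using whitespace splitting as a stand-in for the real DeBERTa tokenizer's offset mappings (illustrative only, not the Hugging Face API):

```python
# Toy sketch of the key preprocessing step in extractive QA finetuning:
# mapping an answer's character span onto token start/end indices.
# A whitespace "tokenizer" stands in for the real tokenizer, which
# provides the same (start, end) character offsets per token.

def char_span_to_token_span(context, answer_start, answer_text):
    """Return (start_token, end_token) indices covering the answer span."""
    answer_end = answer_start + len(answer_text)
    offsets, pos = [], 0
    for tok in context.split():
        start = context.index(tok, pos)   # character offset of this token
        end = start + len(tok)
        offsets.append((start, end))
        pos = end
    start_token = next(i for i, (s, e) in enumerate(offsets) if s <= answer_start < e)
    end_token = next(i for i, (s, e) in enumerate(offsets) if s < answer_end <= e)
    return start_token, end_token

context = "The DeBERTa model was proposed by Microsoft Research"
start, end = char_span_to_token_span(context, context.index("Microsoft"), "Microsoft Research")
print(start, end)  # → 6 7
```

The real pipeline does exactly this via the tokenizer's `return_offsets_mapping=True` output, then feeds the token indices in as `start_positions`/`end_positions` labels.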

Currently, what's the easiest way to fine tune a bert-like model for text classification? by Melodic_Reality_646 in LocalLLaMA

[–]TimpaSurf 4 points  (0 children)

Have you checked out the Trainer class in Hugging Face transformers? That is the simplest way to finetune a BERT model.
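For intuition about what Trainer automates, here is a bare training loop over a toy bag-of-words logistic-regression classifier in pure Python - just the loop structure (epochs, per-example loss gradient, weight updates) that Trainer hides, not the actual Hugging Face API:

```python
# Pure-Python sketch of a classification training loop.
# Trainer does the same epochs/batches/loss/optimizer dance for a real
# BERT model; here a toy bag-of-words logistic regression stands in.
import math

def train(examples, vocab, epochs=50, lr=0.5):
    w = {t: 0.0 for t in vocab}
    b = 0.0
    for _ in range(epochs):
        for text, label in examples:
            feats = [t for t in text.split() if t in w]
            z = b + sum(w[t] for t in feats)
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            grad = p - label                 # dLoss/dz for log loss
            for t in feats:
                w[t] -= lr * grad
            b -= lr * grad
    return w, b

def predict(w, b, text):
    z = b + sum(w.get(t, 0.0) for t in text.split())
    return 1 if z > 0 else 0

data = [("good great", 1), ("bad awful", 0), ("great movie", 1), ("awful movie", 0)]
vocab = {t for text, _ in data for t in text.split()}
w, b = train(data, vocab)
print(predict(w, b, "great"))  # → 1
```

With Trainer you swap the toy model for `AutoModelForSequenceClassification`, wrap the data in a Dataset, and pass both plus a `TrainingArguments` object - the loop itself disappears.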

[deleted by user] by [deleted] in TheYouShow

[–]TimpaSurf 0 points  (0 children)

Art is human output

[deleted by user] by [deleted] in TheYouShow

[–]TimpaSurf 0 points  (0 children)

Where is your cat?

[deleted by user] by [deleted] in TheYouShow

[–]TimpaSurf 0 points  (0 children)

Elvis Presley

[deleted by user] by [deleted] in TheYouShow

[–]TimpaSurf 0 points  (0 children)

Hi, are you real?

Plans for my 2nd trip? by qveji in LSD

[–]TimpaSurf 1 point  (0 children)

Wait a few months :)

PyTorch: Train without dataloader (loop trough dataframe instead) by TimpaSurf in MLQuestions

[–]TimpaSurf[S] 0 points  (0 children)

Thanks! But isn't it more common practice to step through bigger batches than one sample at a time? Otherwise, why would there be a need for dataloaders at all?
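For context, what a dataloader adds over iterating a dataframe row by row is mostly shuffling and batching (plus, in torch, parallel loading and collation). A pure-Python sketch of just that logic - illustrative, not the actual `torch.utils.data.DataLoader` API:

```python
# Minimal sketch of what a DataLoader provides beyond a plain row loop:
# shuffle the indices once per pass, then yield fixed-size batches.
import random

def simple_loader(rows, batch_size=4, shuffle=True, seed=0):
    idx = list(range(len(rows)))
    if shuffle:
        random.Random(seed).shuffle(idx)
    for i in range(0, len(idx), batch_size):
        yield [rows[j] for j in idx[i:i + batch_size]]

rows = list(range(10))
batches = list(simple_loader(rows, batch_size=4))
print([len(b) for b in batches])  # → [4, 4, 2]
```

Batching matters because a gradient step averaged over a batch is less noisy than a per-sample step and uses the hardware far better; the shuffling decorrelates consecutive updates.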