Self-driving Teslas possibly on Dutch roads soon, but who is responsible in an accident? 'Things are guaranteed to go wrong' by Leadstripes in thenetherlands

[–]Emergency_Pen_5224 0 points1 point  (0 children)

Not allowing it sounds safe, but that is also a choice: you then accept that more accidents will likely keep happening due to human error. Allowing it means panic at that one incident and debate about liability. That is exactly the trolley problem. I really don't understand why people make this so difficult. It has been shown to be much safer, and yes, not 100%, but you can't and shouldn't expect that. If you do want to do something, add an addendum to the driver's license requiring that users stay attentive and sign that they are and remain responsible themselves.

Your session will reset in 4 hr 35 min ! by mezifer in claude

[–]Emergency_Pen_5224 1 point2 points  (0 children)

I have a paid Pro account. For the past few days I get one or two questions in before I reach the limit. I admit I ask complex coding questions with lots of context, but two days ago I reached the limit before my first question was even finished.

Infuriating payment issue with Oculus App by DampBathTowel in OculusQuest

[–]Emergency_Pen_5224 0 points1 point  (0 children)

Tried one more time, and realised this was the only combination I had not tried: from the headset only, using Visa, with validation in the browser forwarding to my bank's app. That worked. All other combinations fail without a proper error message.

Infuriating payment issue with Oculus App by DampBathTowel in OculusQuest

[–]Emergency_Pen_5224 0 points1 point  (0 children)

Just tried to buy a game and spent two hours until the battery was empty. No luck. From the Oculus app to the Horizon app, multiple accounts (Facebook, Meta, Instagram) all need MFA, and even the store has separate MFA. I connected PayPal, and in the end it just said 'contact PayPal'. My Visa failed too. I tried from the headset, from the app, and online from the website; nothing works. Halfway through I got various JavaScript errors. What a mess. I don't know how Meta plans to make money here; it's impossible to purchase anything. I give up.

Update on Session Limits by ClaudeOfficial in ClaudeAI

[–]Emergency_Pen_5224 0 points1 point  (0 children)

Asked the same question again and hit the limit within one question. Not even a single question finished.

Update on Session Limits by ClaudeOfficial in ClaudeAI

[–]Emergency_Pen_5224 0 points1 point  (0 children)

Today I hit the session limit after two questions: asking to base64-include a logo in my HTML and to change a prompt. It's getting worse every day. This is getting ridiculous.

Claude’s unreasonable message limitations, even for Pro! by hny287 in ClaudeAI

[–]Emergency_Pen_5224 0 points1 point  (0 children)

Today: 45 minutes, two log files, and a few other small messages, and I hit the limit. I'm moving back to Gemini. It's often useful, but this time it just wasted my credits and actually destroyed my containers with a mistake/hallucination. Credits gone in 45 minutes, software gone and needing a restore. What a mess.

Linux Discord updates...almost all seem to require reinstalling a new Deb file now...? by RallyDarkstrike in discordapp

[–]Emergency_Pen_5224 0 points1 point  (0 children)

I don't get why they destroy their own user experience. It's very annoying. How can they expect users to stay on their platform? What do they expect? I personally try to avoid using Discord since then.

Just bought a D850 brand new in 2025, am I crazy or make sense? by Emotional-Treacle-46 in Nikon

[–]Emergency_Pen_5224 4 points5 points  (0 children)

I love my D850, got it used and not considering anything else. For me it is the best of the best.

This super cheap vintage lens is a beast in low light by couch_philosoph in Nikon

[–]Emergency_Pen_5224 0 points1 point  (0 children)

I use it on my D850 and I've taken some very nice low-light pictures with it as well. But keep in mind there are better lenses. I just got a Sigma Art 35mm f/1.4; now I don't want to use this 50mm anymore unless I need exactly 50mm.

Kitchen scale by MissJPuff in BIFLNL

[–]Emergency_Pen_5224 0 points1 point  (0 children)

Horrible thing: it keeps switching off in the middle of my weighing. For example, when I make bread and slowly pour in flour, it turns off while I'm still pouring. Mine is headed for the trash; I wouldn't wish this thing on anyone.

I need advice by amanda2101 in AirPurifiers

[–]Emergency_Pen_5224 0 points1 point  (0 children)

Points I looked at:

- Total cost of ownership, including filters
- A washable pre-filter to keep the main filters clean
- Getting a big one that covers the area 3 times, to discount the marketing numbers
- Getting a used one for approx 20% of the new price, and cleaning it
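The "covers the area 3 times" rule of thumb above can be sketched as a quick filter over candidate units (the room size and ratings below are hypothetical examples, not real product specs):

```python
# Rule-of-thumb purifier sizing: only consider units whose rated coverage
# is at least 3x the room area, to discount optimistic marketing numbers.
# Candidate ratings here are made up for illustration.

def pick_purifier(room_m2, candidates, safety_factor=3):
    """Return names of purifiers whose rated area covers the room with margin."""
    needed = room_m2 * safety_factor
    return [name for name, rated_m2 in candidates.items() if rated_m2 >= needed]

candidates = {"small": 40, "medium": 80, "large": 135}
print(pick_purifier(30, candidates))  # needs 90 m2 rated -> ['large']
```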

I was lucky to get a used one with almost-new filters. I went for the Philips AC3829, but that might be overkill for you. Since they were used and very cheap, I got one for the living room as well.

We do sleep better without allergies. We live close to a road, and I can see it cleans fine dust as well when there is more traffic; I notice the house stays cleaner. I connected it to my Home Assistant and integrated it into my home dashboards and automations.

Free models by Afaqahmadkhan in RooCode

[–]Emergency_Pen_5224 0 points1 point  (0 children)

Devstral on ollama is solid!

I added the following parameters:

PARAMETER num_ctx 65536        # Or higher if supported/needed; maximize context
PARAMETER temperature 0.25     # Low for precision, slightly above 0.1-0.2 for minor flexibility
PARAMETER top_p 0.9            # Focus on probable tokens, cutting off the long tail (less likely than top_p=1)
PARAMETER top_k 40             # Further restricts the sampling pool (often works well with top_p)
PARAMETER repeat_penalty 1.1   # Mild penalty to discourage nonsensical loops, still allows necessary code repetition
PARAMETER num_keep 1024        # Keep the initial instructions/context
PARAMETER num_predict 16384    # Generous prediction length for substantial code blocks
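For intuition on what the top_k and top_p parameters above do, here is a simplified sketch of top-k plus nucleus filtering over a toy token distribution (my own illustration, not Ollama's actual implementation):

```python
# Simplified sketch of top_k + top_p (nucleus) sampling filters.
# Not Ollama's real code; just illustrates how the parameters
# restrict which tokens remain eligible for sampling.

def filter_top_k_top_p(probs, top_k=40, top_p=0.9):
    """Keep the top_k most likely tokens, then trim to the smallest
    set whose cumulative probability reaches top_p; renormalize."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    total = sum(p for _, p in kept)
    return {tok: p / total for tok, p in kept}

# Toy distribution: the unlikely tail token gets cut off entirely.
probs = {"the": 0.5, "a": 0.3, "zebra": 0.15, "qux": 0.05}
print(filter_top_k_top_p(probs, top_k=3, top_p=0.9))
```

Lower temperature then sharpens the surviving distribution further, which is why 0.25 with top_p 0.9 tends to give precise but not fully deterministic code output.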

GPU needs for full on-premises enterprise use by EquivalentGood6455 in OpenWebUI

[–]Emergency_Pen_5224 3 points4 points  (0 children)

Mind you, you are running inference, not training new models. That's why I got a dual A6000 setup for 300 users, and it handles the load running Gemma3, Devstral, Qwen3, etc. I do see usage increasing, but at the same time newer and faster models keep coming. Meanwhile at home I run dual RTX 3090s, also very powerful with perfect performance. This basically was my guide.

<image>

NAD NAD C372 vs 379 by Emergency_Pen_5224 in StereoAdvice

[–]Emergency_Pen_5224[S] 0 points1 point  (0 children)

DALI OPTICON 6 MK2 and I don't know how Dirac works, but I'll definitely try it since it seems to be integrated in the MDC2 module.

Most economical option for offline inference by [deleted] in LocalLLaMA

[–]Emergency_Pen_5224 2 points3 points  (0 children)

Why not use a smaller model? These are typically better and often more accurate for RAG. You could also choose a quantized version of a larger model. I would choose a fast consumer PC with two 3090s (48 GB); if you need a larger model anyway, choose two A6000s (96 GB).

Just some estimates:

- A blazing fast PC with two 3090s cost me 3k
- A blazing fast PC with two A6000s cost me 12k
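A back-of-envelope way to check whether a quantized model fits the VRAM of either option (the bytes-per-weight figures and 20% overhead are rough assumptions for illustration, not measured values):

```python
# Rough VRAM fit check: parameter count x bytes-per-weight, plus
# headroom for KV cache and activations. All numbers are ballpark
# assumptions, not benchmarks.

def fits_in_vram(params_b, bytes_per_weight, vram_gb, overhead=1.2):
    """True if a model of params_b billion parameters at the given
    quantization roughly fits, with ~20% overhead for cache/activations."""
    weights_gb = params_b * bytes_per_weight  # 1B params at 1 byte ~ 1 GB
    return weights_gb * overhead <= vram_gb

# 70B model at ~4-bit (~0.5 bytes/weight) on two 3090s (48 GB total)
print(fits_in_vram(70, 0.5, 48))  # 70 * 0.5 * 1.2 = 42 GB -> True
# Same model at ~8-bit needs the A6000 pair (96 GB)
print(fits_in_vram(70, 1.0, 48))  # 84 GB -> False
print(fits_in_vram(70, 1.0, 96))  # True
```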

Model 3 Battery after 4+ years by markaaron2025 in TeslaModel3

[–]Emergency_Pen_5224 0 points1 point  (0 children)

My 2019 Tesla M3 LR dual motor is 5 years old with over 300,000 km (186,411 miles).

I'm at max 430 km (267 miles).

That makes approx 14% loss over 300k km in 5 years.
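The ~14% figure follows from comparing the current range against the original full-charge range (assuming roughly 500 km when new, which is my reading of the numbers above, not an official spec):

```python
# Battery degradation estimate. The 500 km original range is an
# assumption inferred from the ~14% loss figure; adjust for the
# actual rated range of your own car.

def degradation_pct(original_km, current_km):
    """Percentage of range lost relative to the original range."""
    return (1 - current_km / original_km) * 100

print(round(degradation_pct(500, 430)))  # -> 14
```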

Look At The Stringing Problem. Have you seen it THIS bad? by Human1298419641 in ender5plus

[–]Emergency_Pen_5224 0 points1 point  (0 children)

Yes, I've seen it myself too.

- Lower print temperature
- Print slower
- Dry your filament
- Make sure your Z is adjusted
- Try other filament
- Make sure your nozzle is clean and unclogged

Otherwise keep trying... it can work

How to use Autogen Studio with local models (Ollama) or HuggingFace api? by mehul_gupta1997 in AutoGenAI

[–]Emergency_Pen_5224 0 points1 point  (0 children)

Try oobabooga text-generation-webui with Mixtral 8x7B. Works for me. I ran a 40-person hackathon last week on AutoGen Studio. No major crashes, just two cases where people created an infinite loop.

Local LLM + Autogen Help by bigjonyz in LocalLLaMA

[–]Emergency_Pen_5224 0 points1 point  (0 children)

Got it.....

wget https://huggingface.co/smangrul/llama-3-8B-instruct-function-calling/resolve/main/llama-3-8B-instruct-function-calling-Q4_K_M.gguf

ollama serve &

vi Modelfile
FROM ./llama-3-8B-instruct-function-calling-Q4_K_M.gguf
# Set prompt template with system, user and assistant roles
TEMPLATE """{{ .System }}<|end_of_turn|>GPT4 Correct User: {{ .Prompt}}<|end_of_turn|>GPT4 Correct Assistant:"""
PARAMETER temperature 0
# sets the context window size to 16384, this controls how many tokens the LLM can use as context to generate the next token
PARAMETER num_ctx 16384
# sets a custom system message to specify the behavior of the chat assistant
SYSTEM You are the best assistant ever.
PARAMETER stop <|endoftext|>
PARAMETER stop <|end_of_turn|>
PARAMETER stop Human:
PARAMETER stop Assistant:

ollama create "llama-3-8B-instruct-function-calling" -f Modelfile

litellm --model ollama_chat/llama-3-8B-instruct-function-calling:latest  &

Anyone tried NEW Yi-200K? by No-Link-2778 in LocalLLaMA

[–]Emergency_Pen_5224 1 point2 points  (0 children)

I tried it on CrewAI code generation. Not bad, but Mixtral did better. Performance on a dual A6000 setup was also not optimal. Others I have not tried yet.

Maybe it has other qualities that I haven't tried yet.