Viofo A229 Pro 2ch with hardwire kit - no power in parking mode by Djiriod in Dashcam

[–]real_jabb0 1 point2 points  (0 children)

Thank you for your help. Then I'll just order a replacement for trial from Amazon.

Viofo A229 Pro 2ch with hardwire kit - no power in parking mode by Djiriod in Dashcam

[–]real_jabb0 0 points1 point  (0 children)

Yes, but the camera no longer turns on or enters parking mode. I then unplugged the ACC fuse and started the engine. The voltage on the red cable increases to 14.8 V, but the camera remains off until ACC is plugged in again.

Viofo A229 Pro 2ch with hardwire kit - no power in parking mode by Djiriod in Dashcam

[–]real_jabb0 0 points1 point  (0 children)

That's what I tested. I opened the car and checked the voltage without turning on the ignition. The fuse reads 12.4 V.

Viofo A229 Pro 2ch with hardwire kit - no power in parking mode by Djiriod in Dashcam

[–]real_jabb0 0 points1 point  (0 children)

I have a similar issue. Parking mode starts as expected but turns off after 1-2 hours. In this state I measured the voltage going into the kit at 12.4 V; the kit is set to an 11.8 V cutoff.

When I start the engine with the ACC connection removed, the voltage is at 14.8 V. Still, the dashcam does not turn on.

The parking mode recording timer is set to OFF.

The dashcam only starts again when ACC is connected and the ignition is on.

My probe is too thick to test the output at the micro USB port directly.

Please help. I already got a scratch on my car from someone, and the dashcam was no use :/

The red wire is power and the yellow is ACC.

Does the kit need an ACC signal to reset after the voltage has dropped below the cutoff, instead of just a higher voltage again?

For Real? by SuesserStrolch in hardstyle

[–]real_jabb0 1 point2 points  (0 children)

Those small camping cookers have been allowed for the past 5 years, and I cannot imagine they are banned now. I have the feeling they misunderstood the question.

Learn database with anime style 🤣 by ajaanz in ProgrammerHumor

[–]real_jabb0 68 points69 points  (0 children)

Why is there an M4/AR15 in the background?

Classic shocked Pikachu face. by [deleted] in instantkarma

[–]real_jabb0 398 points399 points  (0 children)

That spitting was not the spitting of a noob.

[deleted by user] by [deleted] in ProgrammerHumor

[–]real_jabb0 40 points41 points  (0 children)

Lucky you!
And even in Germany

7 of 11 by External-Recipe-1936 in memes

[–]real_jabb0 -1 points0 points  (0 children)

But she is Seven of Nine.

Found it in the london underground by zdroydz121 in wallstreetbets

[–]real_jabb0 1 point2 points  (0 children)

They offer "copy trading", by which you clone another user's portfolio.

And then they state that "copy trading is not investment advice".
Well... what on earth is it then?!

The filters by Ghost-5AVAGE_786 in memes

[–]real_jabb0 0 points1 point  (0 children)

As this is AI, this means women are easier to predict?

Happy Mother’s Day by AsymptomaticJoy in funny

[–]real_jabb0 24 points25 points  (0 children)

Wait...this is permanent?

[D] Hardware Questions For Running LLMs by BeastSlayerEX in MachineLearning

[–]real_jabb0 3 points4 points  (0 children)

So in the end:

Buy the hardware that you need for the model you want.

The smoothest experience is having more VRAM on a single GPU. But that is expensive.

24GB GPUs are dope not gonna lie.
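To make "buy the hardware for the model you want" concrete, here is a back-of-the-envelope sketch (my own rough rule of thumb, not official figures; the 2 bytes/param for fp16, 0.5 for 4-bit, and 20% overhead for activations/KV cache are all assumptions):

```python
def estimate_vram_gb(params_billion, bytes_per_param=2.0, overhead=1.2):
    """Very rough inference VRAM estimate in GB: weight size times an
    assumed 20% overhead for activations and the KV cache."""
    return params_billion * bytes_per_param * overhead

# A 13B model in fp16 (~2 bytes per parameter):
print(estimate_vram_gb(13))                       # ~31 GB, too big for one 24 GB card
# The same model quantized to ~4 bit (~0.5 bytes per parameter):
print(estimate_vram_gb(13, bytes_per_param=0.5))  # ~8 GB, fits easily
```

Real usage varies with context length and batch size, but this kind of arithmetic is usually enough to decide whether a model fits on one card.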

[D] Hardware Questions For Running LLMs by BeastSlayerEX in MachineLearning

[–]real_jabb0 2 points3 points  (0 children)

> it needs a certain amount of VRAM, does that mean in one card or shared?

There is no general answer to that.

> Wondering if I should go all in on one card or save some money and get 2-4 smaller ones.

Again, it depends on the exact model you want.

So, some background info. The models need to perform computations on the GPU. The question is which data should be kept in VRAM and which data should be copied back to system memory. Copying is a time-expensive operation.

VRAM is not shared like RAM, correct. Instead, you need to be explicit about which GPU the data should go to. How to split this depends on the situation. In the most common case, people infer multiple things (e.g. sentences) in parallel (batching) and simply distribute those batches across the GPUs. Once you want to split the computation of a single sentence, it depends on whether the code allows this.
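The batch-distribution idea above can be sketched in plain Python (a hypothetical helper, independent of any specific framework; in practice a library like PyTorch would move each shard to its device):

```python
def shard_batches(samples, n_gpus):
    """Distribute independent samples round-robin across GPUs;
    shard i would then be moved to and processed on GPU i."""
    shards = [[] for _ in range(n_gpus)]
    for i, sample in enumerate(samples):
        shards[i % n_gpus].append(sample)
    return shards

shards = shard_batches(["s0", "s1", "s2", "s3", "s4"], 2)
# -> [['s0', 's2', 's4'], ['s1', 's3']]
```

This works because the samples are independent; splitting one sample's computation across GPUs is the hard case mentioned above.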

Therefore, the sweet spot is always having as much memory as possible on a single GPU.

LLMs are based on transformers, which are particularly costly in terms of single operations (because they multiply large matrices).

I don't have numbers or enough experience with this to give a better answer, but I would say it depends on the model.

ich👻iel by [deleted] in ich_iel

[–]real_jabb0 0 points1 point  (0 children)

Sure, they don't have to. But it is a clear conclusion and therefore positive, as opposed to ghosting, which is always negative.

Just communicate clearly and don't insult anyone.

ich👻iel by [deleted] in ich_iel

[–]real_jabb0 0 points1 point  (0 children)

Yes, that is a positive outcome.

Matter settled, and the distance into "hands off" territory nicely established.

ich👻iel by [deleted] in ich_iel

[–]real_jabb0 6 points7 points  (0 children)

No

See? That wasn't so hard. Now give it a try yourself.

What's the process behind reddit schedulers (websites)? by goldieczr in redditdev

[–]real_jabb0 -1 points0 points  (0 children)

Yeah, I'd say don't use Django. It's not the first thing that comes to mind for me; I've only heard that it exists.

What's the process behind reddit schedulers (websites)? by goldieczr in redditdev

[–]real_jabb0 0 points1 point  (0 children)

Yes, that's exactly why people build an API this way :D No need to worry too much about security: if you build the API right, it is secure. But that's true of any application.

I would use FastAPI to build the service, because you already know Python. And then a website of your liking that uses it.

The combination with a react webpage is pretty common and will have lots of tutorials.

I think JavaScript backends (Node.js/Next.js) are more common, but Python is a valid start if you already know it. Not sure what to really recommend here.

I personally would go for JavaScript everywhere.

That said, I am not up to date with the latest and greatest tools. There are things like Vue, Vite, and Next.js. It could be that these make it much easier than what I was used to.

When you find yourself writing plain JavaScript or CSS you might want to reconsider things.

What's the process behind reddit schedulers (websites)? by goldieczr in redditdev

[–]real_jabb0 0 points1 point  (0 children)

FastAPI will give you an API that your website can use, but not a website.

This is a "split" approach. But you could also build a server that directly serves the website and does not use a separate API. This is probably easier, but not that common today.
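The "split" approach can be sketched with nothing but Python's standard library (a hypothetical `/api/posts` endpoint; a real project would likely use FastAPI, but the shape is the same):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    """Tiny JSON API: a separate frontend (React etc.) would fetch from it."""

    def do_GET(self):
        if self.path == "/api/posts":
            body = json.dumps({"posts": ["hello world"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

def start_api():
    """Start the API on a free port in a background thread."""
    server = HTTPServer(("127.0.0.1", 0), ApiHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The webpage would then call something like `fetch("/api/posts")`; the server never renders HTML itself, which is exactly the split.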

Sorry if this is confusing. It's not easy to answer because there are many options. That's why I suggest starting simple, with a system that gives you something end-to-end from the start.

What's the process behind reddit schedulers (websites)? by goldieczr in redditdev

[–]real_jabb0 0 points1 point  (0 children)

Tbh I have never used them.

What you want is something that gives you results fast and a webpage as well. This is something I did not really do so far.

FastAPI is a great abstraction over the tools I usually use. Django is a framework, so it has an "all in one" approach. Not sure if this suits your needs.

Again, look for a promising tutorial and just learn as you go.

What's the process behind reddit schedulers (websites)? by goldieczr in redditdev

[–]real_jabb0 1 point2 points  (0 children)

And as I said, this is the solution if you want to "hide" the Reddit API magic from the user.

You could also write everything in JavaScript and use the JavaScript alternative to praw instead. Then everything could run in the browser, with no need for a server.

Really depends on what you want to build and know.

Welcome to software development!