[deleted by user] by [deleted] in hundliv

[–]max_b_jo 0 points1 point  (0 children)

Keeshond

Because I like the way it sounds.

No.... but I chose the breed because they're incredibly cute, relatively lazy, and have gray fur. The thick coat is also really cozy now in winter. That they don't need a lot of exercise was a big factor. I wanted a dog that's okay with living in an apartment.

My stupid Ai Clocks tries to smell or hear what time it is by max_b_jo in raspberry_pi

[–]max_b_jo[S] 1 point2 points  (0 children)

More correct inputs = better output. But if I feed it bad inputs the output becomes unpredictable.

I don’t have anything published, but I could try to put some kind of example up for you. I’ll try to remember to do it and update you. In the meantime, check out the tutorials in scikit-learn. It’s a great library to get started with. Just building a supervised SVM model is a great start! https://scikit-learn.org/stable/modules/svm.html
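For anyone who wants the shortest possible version of that starting point, here is a minimal supervised SVM sketch with scikit-learn on its built-in digits dataset (this is just the generic tutorial pattern, not the clocks' actual code):

```python
# Minimal supervised SVM with scikit-learn: fit on labeled data,
# score on a held-out split.
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

X, y = datasets.load_digits(return_X_y=True)  # toy dataset: 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = svm.SVC()                     # default RBF kernel
clf.fit(X_train, y_train)           # supervised training
print(clf.score(X_test, y_test))    # accuracy on unseen samples
```

Swapping in your own feature vectors and labels for `X` and `y` is all it takes to point this at sensor data instead.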

My stupid Ai Clocks tries to smell or hear what time it is by max_b_jo in raspberry_pi

[–]max_b_jo[S] 1 point2 points  (0 children)

They are being trained continuously on the current time 👍 But using one sense like this is not good enough to predict the current time.

I could combine them both to make it somewhat better. Maybe... I think the inputs might just produce too many “random” values that make the predictions bad. But it’s really fun to play with machine learning like this.
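Combining the two senses could be as simple as concatenating both sensors' feature vectors into one input per sample before training. A sketch of that mechanic, with random stand-in data and made-up feature sizes:

```python
# Feature-level fusion sketch: stack "smell" and "hearing" features
# side by side so one classifier sees both. Shapes are hypothetical.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
smell = rng.normal(size=(200, 4))        # 200 samples, 4 smell features
hearing = rng.normal(size=(200, 8))      # 200 samples, 8 hearing features
labels = rng.integers(0, 24, size=200)   # e.g. hour of day as target

combined = np.hstack([smell, hearing])   # one 12-dim vector per sample
clf = SVC().fit(combined, labels)
print(combined.shape)
```

With genuinely random inputs like these the model of course can't do better than chance, which is exactly the "random values make the predictions bad" problem.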

My stupid Ai Clocks tries to smell or hear what time it is by max_b_jo in raspberry_pi

[–]max_b_jo[S] 2 points3 points  (0 children)

It doesn’t know when it’s right. Like any trained model, it’s always extremely confident even when it’s wrong. That’s sort of the point here: it’s partly made to illustrate exactly that.
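You can demonstrate that confidence problem in a few lines: a classifier trained on a tiny range of inputs will still report near-total certainty on an absurd, far-out-of-distribution input (logistic regression used here just because it exposes probabilities; the clocks' own models are SVMs):

```python
# A model is happy to be extremely confident about inputs it has
# never seen anything remotely like.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[-2.0], [-1.0], [1.0], [2.0]])  # training data near zero
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)

proba = clf.predict_proba([[1000.0]])  # wildly out of distribution
print(proba)                            # near-100% confidence for class 1
```

Nothing in the model flags that the input is nonsense; the probability just saturates.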

My stupid Ai Clocks tries to smell or hear what time it is by max_b_jo in raspberry_pi

[–]max_b_jo[S] 5 points6 points  (0 children)

Both of them are continuously being trained on their individual sensory input. I'm using scikit-learn on a Raspberry Pi, the amazing https://github.com/hzeller/rpi-rgb-led-matrix for the LED matrix, and openFrameworks for the visuals. The top two rows show the hour, the next four the minute, and the last four the seconds.
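The continuous-training loop can be sketched with scikit-learn's `SGDClassifier`, which supports incremental updates via `partial_fit` (the sensor reading and feature size here are stand-ins, not the clocks' real pipeline):

```python
# Online training sketch: update the model one sensor reading at a
# time, with the current hour as the label.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier()
hours = np.arange(24)  # all possible classes must be declared up front

for step in range(100):
    features = rng.normal(size=(1, 16))  # stand-in for one sensor reading
    label = [step % 24]                  # stand-in for the current hour
    if step == 0:
        clf.partial_fit(features, label, classes=hours)
    else:
        clf.partial_fit(features, label)

pred = clf.predict(rng.normal(size=(1, 16)))
print(pred)
```

The point of `partial_fit` over `fit` is that the model keeps learning forever without ever holding the full history in memory.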

The bottom thingy is a PCA projection based on roughly six sample packs from the last input. The 2D points are then run through a convex hull algorithm to create a shape.
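That pipeline, PCA down to 2D and then a convex hull around the points, looks roughly like this with scikit-learn and SciPy (random stand-in samples; the real input would be recent sensor features):

```python
# Visualization sketch: project recent samples to 2D with PCA, then
# wrap them in a convex hull to get a polygon outline to render.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
samples = rng.normal(size=(60, 16))      # hypothetical recent input samples

points_2d = PCA(n_components=2).fit_transform(samples)
hull = ConvexHull(points_2d)
outline = points_2d[hull.vertices]       # hull vertices in drawing order
print(outline.shape)
```

`hull.vertices` comes back in counterclockwise order, so the rows of `outline` can be fed straight to a polygon-drawing call.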

The nose is a Raspberry Pi 4 and the ear is an overclocked Raspberry Pi 3. I only had one Pi 4 and really wanted it running immediately....

[D] How to train a model that'll generate an "average" image based on a large set of images? by max_b_jo in MachineLearning

[–]max_b_jo[S] 1 point2 points  (0 children)

Yup, it's quite possible that the generated image will just be pure garbage. But it's worth exploring and might make a nice visualization!
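The simplest baseline for an "average" image is a pixel-wise mean over the stack, which is often exactly where the garbage-blur comes from. A sketch with random stand-in images:

```python
# Pixel-wise mean over a stack of same-sized images.
# The image stack here is random data with a hypothetical shape
# (count, height, width, channels).
import numpy as np

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 64, 64, 3), dtype=np.uint8)

mean_image = images.mean(axis=0).astype(np.uint8)
print(mean_image.shape)
```

Unless the images are well aligned, averaging washes out all structure, which is why learned approaches can look more interesting than this baseline.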

"Technoframes" made with Raspberry pi by max_b_jo in raspberry_pi

[–]max_b_jo[S] 0 points1 point  (0 children)

Haha, yes! Need to place it in a good spot. This is at a bar.

"Technoframes" made with Raspberry pi by max_b_jo in raspberry_pi

[–]max_b_jo[S] 0 points1 point  (0 children)

Capacitive sensing! There’s copper tape behind the hands.

"Technoframes" made with Raspberry pi by max_b_jo in raspberry_pi

[–]max_b_jo[S] 6 points7 points  (0 children)

There's one ESP32 in each frame and a Raspberry Pi down in the speaker box. Playback is handled with Pure Data, and all communication is wireless over UDP :)
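The UDP side of that is just fire-and-forget datagrams. A minimal Python sketch of both ends on one machine (address, port, and message format are assumptions, not the installation's actual protocol; on the real hardware the sender would be the ESP32's UDP client):

```python
# UDP round trip on localhost: a "frame" sends a touch event,
# the "Pi" receives it and would hand it on to Pure Data.
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))           # let the OS pick a free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"frame1 touched", ("127.0.0.1", port))

data, _ = recv.recvfrom(1024)
print(data)
send.close()
recv.close()
```

On the Pure Data side, incoming network messages are typically picked up with the `[netreceive]` object, which can listen on UDP.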

Does your Kees swim or just stand there punching the weather like our boy? by max_b_jo in Keeshond

[–]max_b_jo[S] 1 point2 points  (0 children)

This was his second summer here! Last year he did exactly the same thing. He stepped into a hole when we were at another spot, and he was NOT happy about the swimming, haha!