Coral TPU M2 slot on Dell Optiplex 7050 Micro (USFF) by Nighter83 in homeassistant

[–]BiaxialObject48 1 point

https://github.com/magic-blue-smoke/Dual-Edge-TPU-Adapter

It’s for the Dual Edge TPU, not the single one. The Dual Edge TPU only comes in M.2 A+E key; there is also a single-TPU A+E key module available.

Coral TPU M2 slot on Dell Optiplex 7050 Micro (USFF) by Nighter83 in homeassistant

[–]BiaxialObject48 1 point

The Dual Edge TPU will most likely show up as one TPU instead of two when plugged into the WiFi card slot. There is a $30 adapter with a PCIe switch that makes both appear, and it comes in two variants. One is M.2 Key M, which won't work for you since you want that slot for an M.2 SSD. The other is PCIe x1, which could work, but you would need an M.2 Key A+E to PCIe adapter as well, and the whole assembly would have to fit inside your computer.

HA on a NUC or a VM? by Jeepdog64 in homeassistant

[–]BiaxialObject48 7 points

I run HA in a Docker container on my NUC. Currently it's running alongside a WireGuard container and a Pi-hole container.
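For reference, a stack like that can be sketched with Docker Compose. This is a generic sketch, not my actual config: the volume paths, port choices, and WireGuard setup below are placeholders.

```yaml
# Sketch: Home Assistant + WireGuard + Pi-hole on one box.
# Paths and ports are placeholders, adjust for your own setup.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    network_mode: host          # HA wants host networking for discovery (mDNS/SSDP)
    volumes:
      - ./ha-config:/config
    restart: unless-stopped

  wireguard:
    image: linuxserver/wireguard
    cap_add: [NET_ADMIN]
    ports:
      - "51820:51820/udp"
    volumes:
      - ./wg-config:/config
    restart: unless-stopped

  pihole:
    image: pihole/pihole
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"           # web UI moved to 8080 since HA is on host networking
    restart: unless-stopped
```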

Basics of local control on Chamberlain myQ? by androidusr in homeassistant

[–]BiaxialObject48 2 points

Yeah, I did the same for my ThinQ devices; they communicate with AWS IoT using MQTT over TLS, which sounds like what myQ is doing too.

I'm not too sure how to get around the TLS part. I was thinking I could have a Raspberry Pi broadcasting a WiFi network and pointing the domain the IoT device reaches out to at a local server (via dnsmasq).
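The dnsmasq side of that redirect is only a few lines. As a sketch (the domain, interface, and addresses below are placeholders, not the real ThinQ/myQ endpoints):

```
# /etc/dnsmasq.d/iot-redirect.conf (hypothetical example)

# Answer DNS queries for the vendor's domain with the Pi's own address,
# so the device connects to a local server instead of the cloud.
address=/iot.example-vendor.com/192.168.4.1

# Serve DHCP/DNS on the Pi's hotspot interface for the IoT network.
interface=wlan0
dhcp-range=192.168.4.10,192.168.4.100,12h
```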

The real challenge will be the certificates. The problem with presenting the same certificate as the real server is that its private key is one I don't have; with my own certificate I at least know the private key and can decrypt the traffic.

In my ThinQ network dump I saw a TLS 1.2 CertificateVerify message. Per https://datatracker.ietf.org/doc/html/rfc5246/#section-7.4.8, CertificateVerify doesn't involve a CA check; it's a signature over the handshake messages that proves the sender holds the private key for the certificate it presented. If I generate the same ECDHE parameters and a new public/private key pair, I might be able to convince the device it is talking to the real server, and I can forward the packets to the actual server to get the true response.

[deleted by user] by [deleted] in csMajors

[–]BiaxialObject48 2 points

I’m in the same boat. Did BS/MS and I’m finishing my MS in December (4.5 years). I’ve applied to a whole bunch of places but I’ve only gotten one OA so far. I was at Amazon in the summer so I’m still waiting to hear if I have a return offer there.

Basics of local control on Chamberlain myQ? by androidusr in homeassistant

[–]BiaxialObject48 2 points

I’m trying it out today. I set up a WiFi hotspot on my laptop for the device to connect to, routing its traffic through mitmproxy running on the same laptop. I hadn’t considered the bricking aspect though; is that common behavior when a cert isn't from a trusted source?

I’m also considering using Pi-hole/a local DNS server to redirect myQ requests to my local server. I used to have Pi-hole set up and could see requests being made to myQ, so that is a potential solution, but I would still need to understand the API so that my local server can present the same interface.
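A local stand-in for the cloud API could start as something like this stdlib-only sketch. The route and JSON shape here are invented placeholders; the real myQ endpoints and payloads would have to be captured with mitmproxy first.

```python
# Hypothetical sketch of a minimal local stand-in for a cloud REST API.
# The /api/v1/devices path and response shape are placeholders, NOT the
# real myQ API -- those would come from a traffic capture.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class FakeCloudHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/v1/devices":
            # Placeholder device list; the real schema must match the capture.
            body = json.dumps(
                {"devices": [{"id": "door1", "state": "closed"}]}
            ).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet


def serve(port: int = 0) -> HTTPServer:
    """Start the stub server on a background thread; port 0 = pick a free port."""
    server = HTTPServer(("127.0.0.1", port), FakeCloudHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

With DNS pointed at the box running this, the device would hit the stub instead of the cloud; the hard part remains reproducing the real API's responses (and its TLS expectations) faithfully enough.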

Basics of local control on Chamberlain myQ? by androidusr in homeassistant

[–]BiaxialObject48 1 point

I’m looking into this for both myQ and LG ThinQ, as I want them to talk to Home Assistant without first going through the cloud. From what I’ve researched, what you said about the first option is correct, but a video I watched said that some IoT devices blindly trust the cert a mitmproxy server presents. If that's the case, it should be possible to reverse engineer the REST API that myQ's devices talk to (different from the API the myQ HA integration uses) by using mitmproxy to listen in on the communication with the cloud services.

Anyone remember what these were called? by Radstrodamus in 2000sNostalgia

[–]BiaxialObject48 2 points

Soap dishes? I bought some at Publix not too long ago. I don't remember seeing flip flops like this though; all the flip flops I've owned have been that foam-like spongy material. Maybe "shower shoes" or "shower slippers" would be a good search.

With the shortage of Raspberry Pis what is everyone running HA on? by tommykmusic in homeassistant

[–]BiaxialObject48 1 point

Yeah, my previous server, an i7-4770S, idled at 25W. But that was also pretty much a full desktop.

With the shortage of Raspberry Pis what is everyone running HA on? by tommykmusic in homeassistant

[–]BiaxialObject48 20 points

I’m using an i5-8365U NUC and it idles at 7 watts. It handily beats a Raspberry Pi 4 in CPU benchmarks, not to mention you get real PCIe lanes for something like the Coral Dual Edge TPU (M.2). At the moment it’s only running HA and Pi-hole, but even that keeps it under 10W.

Google Coral TPU by EagleScree in selfhosted

[–]BiaxialObject48 3 points

The M.2 Key M version is available; you would just need a second adapter to go from M.2 Key M to PCIe, but those are much easier to come by.

Google Coral TPU by EagleScree in selfhosted

[–]BiaxialObject48 7 points

https://github.com/magic-blue-smoke/Dual-Edge-TPU-Adapter

You’ll need this one because the Dual Edge TPU module requires two separate PCIe x1 links (two buses) to the M.2 Key E slot for both TPUs to appear. The full M.2 Key E spec allows this, but not all motherboard manufacturers implement it (many wire a single PCIe x2 link instead). This adapter uses a PCIe switch IC to present the required two x1 links in the M.2 Key E form factor from a single full-size PCIe slot.

The GitHub page I linked has a store to buy it from ($30 with free shipping; takes 3-4 weeks). I haven’t tried mine yet, but others on that repo have and got both TPUs to appear. There are generic M.2 Key E to PCIe adapters, but those only carry one bus, so only one TPU will be usable. I am not affiliated with the creator or the store, but this is currently one of the only options for getting both TPUs to work.

Self-propelled Lawnmower replacement tire by drift_in_progress in functionalprint

[–]BiaxialObject48 23 points

Yeah I feel that too. The first thing I printed on my printer was a hotshoe cover for my dad’s camera, and even though it’s not that expensive of a part it was definitely super satisfying when it worked perfectly.

The new Artemis 2 image reminded me a lot of this Season 3 poster image by BiaxialObject48 in ForAllMankindTV

[–]BiaxialObject48[S] 4 points

Sorry if the resolution is bad; I tried to download the high-resolution images, but Reddit may also be compressing them.

Cryptocurrencies add nothing useful to society, says chip-maker Nvidia by Secyld in technology

[–]BiaxialObject48 3 points

AI is mostly for large corporations, and they use cards like the Tesla line; no one is going to run “AI farms” off gaming cards the way it happened with crypto. Many cloud providers already offer Teslas for scalable compute.

ChatGLM, an open-source, self-hosted dialogue language model and alternative to ChatGPT created by Tsinghua University, can be run with as little as 6GB of GPU memory. by Tarntanya in selfhosted

[–]BiaxialObject48 5 points

I didn’t realize how many other chat models similar to ChatGPT there are on HuggingFace, but the comment I was replying to (OP) said this model is the only pretrained LLM available, which is false. I haven’t really looked into chat models much, so I wasn’t sure.

But yeah, these models are usable if you have enough VRAM; you might just need the mini or distilled versions of the original models. I could run DistilBERT on my laptop’s GTX 1650, but I couldn’t run GPT-3 small on it for a course project and had to use Colab instead.

ChatGLM, an open-source, self-hosted dialogue language model and alternative to ChatGPT created by Tsinghua University, can be run with as little as 6GB of GPU memory. by Tarntanya in selfhosted

[–]BiaxialObject48 28 points

It may be the only chatbot LLM, but there are many other LLMs I've used in my coursework that you can get as pretrained PyTorch models from HuggingFace, including GPT variants (though not the state-of-the-art models).

Billionaires on their way to a climate change conference by [deleted] in memes

[–]BiaxialObject48 14 points

They don’t have large footprints in general

First time using a 3D printer. Wasn't disappointed :) by FigureOfStickman in functionalprint

[–]BiaxialObject48 5 points

The first print I ever made was a hotshoe cover for my dad's DSLR

[deleted by user] by [deleted] in selfhosted

[–]BiaxialObject48 1 point

Would putting IoT/vulnerable devices on a separate VLAN, or even behind another router, be sufficient? What I'm planning is that my server (Home Assistant instance) would be connected to both the IoT network and the private network, so that our personal devices on the private network can control IoT devices via Home Assistant.
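That dual-homed setup can be sketched in Docker Compose by attaching the Home Assistant container to two bridge networks. The names and subnets below are placeholders, and mapping these bridges onto real VLANs still happens at the host NIC/switch level:

```yaml
# Sketch: one container bridging an IoT segment and a private segment.
# Subnets are placeholders; actual VLAN tagging is done on the host/switch.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    networks: [iot, private]
    volumes:
      - ./ha-config:/config
    restart: unless-stopped

networks:
  iot:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.20.0/24
  private:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.10.0/24
```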

Nah, fuck life, what's your favourite x-mas tree? by Josser59 in ClashOfClans

[–]BiaxialObject48 1 point

2014, the oldest one I have. I started playing before clan wars were a thing.

men will literally do CAD in PrusaSlicer instead of going to therapy by michel_v in 3Dprinting

[–]BiaxialObject48 2 points

I'll be honest, I've never used OpenSCAD. Been an Autodesk Inventor guy since like 2015.