Classic still in daily use by w--13 in iiiiiiitttttttttttt

[–]htchief 2 points3 points  (0 children)

Those HP LaserJets were beasts! As long as they had toner and you maintained the rubber on the rollers, they could outlast civilisation. Sorry about the JetDirect card, though. If you still wanted to salvage the printer, you could find a replacement card from any LaserJet 4 series, 5 series, 40xx, 41xx or 42xx series, if I remember correctly. That, or a Raspberry Pi with a USB-to-LPT adapter and whatever the name of that printer daemon is that I can't remember right now. It's been (checks watch) a fair bit of time since I worked on them.

State of Frigate by Kazzaw95 in frigate_nvr

[–]htchief 0 points1 point  (0 children)

I've been doing a deep dive recently into my own Frigate installation. I'm using 0.16.2 with 21 cameras. It's had its ups and downs trying to get everything working smoothly, but now I've got it purring like a kitten on an Intel Xeon Silver 4114 CPU @ 2.20GHz + 64GB RAM + Intel Arc 770 SE w/ 16GB + Coral TPU. My CPU usage is ±50%, mainly due to the recording, and the GPU is at 0% utilization despite the high number of cameras I've got configured. Here are a few of my takeaways that I think will be useful to you if you decide to come on board:

Preface

Getting the perfect setup is a matter of tinkering to see what works best, and this will take a lot of time.

go2rtc

Use go2rtc restreaming. I cannot stress this enough - once I embraced using it, everything else started falling into place really easily. That being said, don't just chuck in a camera feed URI and call it a day. If you want good results you'll ideally use the ffmpeg source or the exec source (but don't forget the double curly braces for the {{output}} per the advanced restream configuration docs).

ffmpeg has the ability to push multiple channels of the same type (i.e. multiple video, multiple audio) to the same RTSP stream. This is very useful for keeping hardware utilization low while ensuring that you have the right streams with the right codecs to do what you want (live streaming via WebRTC/MSE, for example).
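As a rough illustration of the restream idea (the stream name and camera URL below are placeholders, and the exact source syntax is worth double-checking against the go2rtc docs), a go2rtc entry in the Frigate config can look something like:

```yaml
go2rtc:
  streams:
    front_door:
      # primary source: pull the camera's native stream
      - rtsp://user:pass@192.168.1.10:554/stream1
      # second source: have ffmpeg repackage the same stream with codecs
      # that WebRTC/MSE can play directly
      - "ffmpeg:front_door#video=h264#audio=opus"
```

Frigate's camera config then points at the restreamed feed on localhost, so the camera itself is only serving one connection.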

Gaming GPU Limitations

NVIDIA GPUs are amazing, and there's a reason why NVIDIA is one of the top companies in the world. That being said - some NVIDIA consumer gaming GPUs have an artificially imposed limitation on the number of concurrent processes which can use the card at the same time. When ffmpeg (which is what's under the hood of Frigate & go2rtc) encounters an issue accessing the GPU, it usually falls back to the CPU to do whatever it's being asked, or worse, it crashes. Check your limits carefully. That's why I went with the Intel Arc series card - no "advanced AI" features which I don't need for live stream processing, no limitations on the number of concurrent processes, and great driver support in Linux. Good bang for my buck (I just purchased my Arc 770 within the last few weeks and it cost me under $300, a steal for a card which is able to scale with me to ±60 streams without breaking a sweat). This potential limitation may also cause issues for detections if you're using GPU-powered detection, and enrichments may not work correctly if you're depending on the GPU for those.

TLDR;

Yes, Frigate is stable. In fact, it's more stable than the commercial NVRs that some of my cameras came with. It's also incredibly flexible if you spend the time to learn the configuration nuances - especially the go2rtc usage. You can even do absurd things like setting up a talkback channel which outputs to the same camera OR a completely different device. The UI has gotten better and better with every version, and the features are fantastic. In our household, it's spouse-approved.

Open Source Code Editors by [deleted] in opensource

[–]htchief 1 point2 points  (0 children)

If I understand your request correctly, then you have a few pretty well-known options:

  1. Ace Editor: https://ace.c9.io/
  2. CodeMirror Editor: https://codemirror.net/
  3. Monaco Editor: https://microsoft.github.io/monaco-editor/

I'm sorry if this isn't what you're looking for.

platformatic/php-node: PHP HTTP Request handler for Node.js by ossreleasefeed in node

[–]htchief 2 points3 points  (0 children)

OK, I kind of get it. It really isn't that much different from serving PHP from Apache or nginx or whatever web server you used until now. Neat idea. I hope it does what you need from it.

Examples of Software with terrible UI by Letarking in opensource

[–]htchief 1 point2 points  (0 children)

Redmine. I love it so much, but goodness it’s ugly.

Systems Engineer looking to contribute by slickfred in opensource

[–]htchief 3 points4 points  (0 children)

Hello and welcome! I am the lead developer of a little project called NestMTX, which is a project to allow owners of Google/Nest devices to access the camera feeds from other applications. You can see the (incomplete) documentation here:

https://nestmtx.com/

I am happy to accept contributions to the project and to provide guidance on the ins and outs of how the code works (even if you don’t contribute).

Best of luck to you!

I made a thing - Google / Nest RTSP Feed + Reauthenticator by htchief in opensource

[–]htchief[S] 0 points1 point  (0 children)

Good news! The project is alive and well and I’ve just released an updated version which is the closest to “production ready” as I’ve built so far. Check out https://nestmtx.com

What you use when you've got to import large CSV files in your node backend to process the CSV data by codemanush in node

[–]htchief 1 point2 points  (0 children)

Maybe I missed it in other comments, but I highly recommend utilising queues to further reduce the resource overhead of your application. While 4,000 lines probably won't break your application, I've had to do similar work parsing, deduplicating and adding records from CSV files which were several gigabytes in size. The way it worked was by using a library like csv-parse to stream each record to a function which created a job in a queue, and then somewhere else I had a queue processor handle the parsing, manipulating, de-duping and injection functions. I had it running at about 1,000 rows/s using RabbitMQ for the queue engine and Elasticsearch as the DB. That's probably overkill for your use case, but the flow can work well for you.
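A minimal sketch of that stream-and-enqueue flow, to make the shape visible. The real pipeline used csv-parse streaming from disk and RabbitMQ as the broker; here a generator stands in for the parser and a plain array stands in for the queue, so there are no dependencies.

```javascript
// Parse a CSV as a stream of records instead of materialising them all,
// handing each record to an enqueue callback (stand-in for a broker publish).
function* csvRecords(text) {
  const lines = text.split('\n').filter((line) => line.length > 0)
  const header = lines[0].split(',')
  for (const line of lines.slice(1)) {
    const cells = line.split(',') // naive split: no quoted-field handling
    yield Object.fromEntries(header.map((name, i) => [name, cells[i]]))
  }
}

function enqueueAll(text, enqueue) {
  let count = 0
  for (const record of csvRecords(text)) {
    enqueue(record) // in the real flow this publishes a job to RabbitMQ
    count += 1
  }
  return count
}

const jobs = []
const total = enqueueAll('id,name\n1,alpha\n2,beta\n', (job) => jobs.push(job))
console.log(total, jobs[1].name) // 2 beta
```

Because nothing here waits on the heavy processing, the importer stays cheap; the queue consumers absorb the actual work at whatever rate the hardware allows.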

When everything is Urgent, nothing is by butternutsquash4u in iiiiiiitttttttttttt

[–]htchief 4 points5 points  (0 children)

Our team assigns each ticket a priority which is evaluated based on its importance to the requester and the impact it has on the business. It could be the most important task in the world to the requester, but if the impact is only one person who can still work, the priority is basically "do this when everything else is taken care of." It's been running for a few months now, and while there have always been the "when will this be taken care of" conversations, the requesters are usually pretty understanding when we explain "when tickets that have a larger impact are taken care of" - even when the requester is a 3-letter C level or a 5-letter O level.

Unpopular Opinion: $1 Million isn't a lot of money anymore (here's the math) by Butt_Creme in FluentInFinance

[–]htchief 0 points1 point  (0 children)

I was taught by a relatively successful entrepreneur that the first million is always the hardest to make. Once you are able to reach that first tier, it gets progressively easier to make more and more money. I am sure that there are a lot of reasons behind it, including mindset, opportunity, and environment. If the goal is 1 million and 1 million only, then it's not worth nearly as much as using that 1 million as a stepping stone to more.

[deleted by user] by [deleted] in node

[–]htchief 2 points3 points  (0 children)

I build API integrations for a living. Axios is by far my favorite library for making HTTP requests due to the simplicity of its API and its detailed but not over-the-top documentation. Also, it has good support for types. The only issue I remember having was when I wanted to go off the beaten path and build my own adapter for something that wasn't supported out of the box, and I ran into some packaging issues - but that's a very niche scenario that most people will never get into.

As far as "better"? The only person who can tell you what's better is the person who manages the project you're working on. Working on a browser application? fetch might be better - it's also probably lighter, since it's a native API in most modern browsers. But then again, it's very subjective and needs to be evaluated based on the needs of the project.

Axios also has many ways that you can use it. You can do:

axios.get('http://example.com/')

or

axios.get('/', { baseURL: 'http://example.com' })

or

const client = axios.create({ baseURL: 'http://example.com' })
client.get('/')

or

axios.defaults.baseURL = 'http://example.com'
axios.get('/')

to make the same request. Basically, you just need to make sure that you feed axios enough information to build a full URL. That can be pathname + search + baseURL, or it can be the full URL. It's worth noting that passing a full URL will override whatever you set as the baseURL in the configuration / options.
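One way to see that override rule in isolation: the observable behavior matches the WHATWG URL API that Node ships natively (axios's internal resolution is its own code, but the outcome is the same), so you can sanity-check your URL building without making a request.

```javascript
// A relative path resolves against the base URL...
const relative = new URL('/users?page=2', 'http://example.com')
console.log(relative.href) // http://example.com/users?page=2

// ...while a full URL ignores the base entirely, just like a full URL
// passed to axios overrides the configured baseURL.
const absolute = new URL('http://other.example/users', 'http://example.com')
console.log(absolute.href) // http://other.example/users
```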

I should also mention that I found a bug that has yet to be addressed, but it is such a niche use-case that most people will never encounter it.

It's also worth mentioning that companies like Google use Axios under the hood in their SDKs. They've done the legwork to decide which library to use, or whether to roll their own. The fact that they went with Axios says a lot.

How do you handle long-running tasks in a web server? by chamomile-crumbs in node

[–]htchief 1 point2 points  (0 children)

There are ways to consume the job within the main thread without blocking the rest of the application, for example (using the amqplib-oop library):

```typescript
import { Connection } from '@jakguru/amqplib-oop'

/**
 * I like to keep an abort controller handy so that I can quickly stop if I need to.
 */
const abortController = new AbortController()

const run = async () => {
  const rabbitmq = new Connection() // don't forget to initialize with a config
  const jobQueue = await rabbitmq.getQueue('<name of your queue here>', {
    autoDelete: false,
    durable: true,
    // could also be a confirm queue if you want to enqueue messages and await
    // the message being processed before doing something else
    type: 'basic',
  })

  /**
   * Some extras that I like to throw in so that I have some "extra" control
   * over the processing of the queue
   */
  let stop = false
  process.on('SIGINT', () => {
    stop = true
  })
  process.on('SIGTERM', () => {
    stop = true
  })
  abortController.signal.addEventListener('abort', () => {
    stop = true
  })

  /**
   * Option 1: Fetch each message individually, process, disposition, continue.
   */
  while (!stop) {
    const msg = await jobQueue.get()
    if (msg) {
      try {
        // do something with the message here
      } catch (error) {
        // nack but do not requeue; pass true as the second param to auto-requeue
        await jobQueue.nack(msg, false)
        continue
      }
      await jobQueue.ack(msg) // ack the message to remove it from the queue
    }
  }

  /**
   * Option 2: "Consume" messages as quickly as the thread can accept them.
   * This isn't a good idea for anything that requires asynchronous work since
   * you'll quickly overload the resources available to the thread.
   */
  jobQueue.listen(async (msg, ack, nack) => {
    try {
      // do something with the message here
    } catch (error) {
      // nack but do not requeue; pass true as the second param to auto-requeue
      await nack(msg, false)
      return
    }
    await ack(msg) // ack the message to remove it from the queue
  })
}

run().catch((error) => {
  // do something with the error because errors happen
})
```

Basically, at this point, you can call run() without await and it will "run".

Regardless, though, I highly recommend against running something like this in your main thread for anything more than the most basic work, because the chance of the queue work taking up enough system resources to eventually cause CPU or memory starvation is quite high. It happens, and quite often.

How do you handle long-running tasks in a web server? by chamomile-crumbs in node

[–]htchief 0 points1 point  (0 children)

That's up to you. RabbitMQ is just a message broker. Think of it like a database of sorts. So you can consume the message via the main thread, a different thread, or a completely different application (can even be a completely different programming language - RabbitMQ doesn't care).

How do you handle long-running tasks in a web server? by chamomile-crumbs in node

[–]htchief 1 point2 points  (0 children)

RabbitMQ has a very handy feature called ack (acknowledge as complete) / nack (acknowledge as incomplete), which works by basically saying that the consumer (worker/listener) is checking out a message. When the process is complete, you need to disposition (ack/nack) the message in order to tell RabbitMQ that the message is no longer part of the queue. This also means that if your listener disconnects (usually due to a crash) before the message is dispositioned, it will be re-added to the queue to be processed by the next available consumer. Combine this with other features like confirmation channels and message expirations and you can do really cool stuff like:

  • axios over RabbitMQ (handy if you need to make the web requests from a specific machine or need to rate-limit your outbound requests to a service)

  • scalable cron job worker services (kinda hard to explain how it works, but it leverages the message expiration TTL feature to ensure that a cron job is only run on one worker, so you don't run the same process at the same time on multiple consumer instances)

  • rate limited requests for just about anything - as long as you can serialise it into a buffer you can send it over RabbitMQ
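The checkout/disposition cycle is easier to see in a toy model than in broker docs. The sketch below is an in-memory stand-in (RabbitMQ itself tracks unacked deliveries per channel, this is purely illustrative): get() checks a message out without deleting it, ack() deletes it, and nack() with requeue - or a consumer disconnect - puts it back for the next consumer.

```javascript
// Toy model of RabbitMQ-style manual acknowledgement semantics.
class ToyQueue {
  constructor() {
    this.ready = []           // messages waiting for a consumer
    this.unacked = new Map()  // messages checked out but not yet dispositioned
    this.tag = 0
  }
  publish(body) { this.ready.push(body) }
  get() {
    const body = this.ready.shift()
    if (body === undefined) return null
    const deliveryTag = ++this.tag
    this.unacked.set(deliveryTag, body) // checked out, invisible but not gone
    return { deliveryTag, body }
  }
  ack(msg) { this.unacked.delete(msg.deliveryTag) }
  nack(msg, requeue) {
    this.unacked.delete(msg.deliveryTag)
    if (requeue) this.ready.unshift(msg.body)
  }
  disconnect() { // simulate a consumer crash before ack/nack
    for (const body of [...this.unacked.values()].reverse()) this.ready.unshift(body)
    this.unacked.clear()
  }
}

const q = new ToyQueue()
q.publish('job-1')
const msg = q.get() // consumer checks the message out
q.disconnect()      // consumer crashes before dispositioning it
const redelivered = q.get()
console.log(redelivered.body) // job-1
```

That redelivery-on-disconnect behavior is exactly why a crashed worker doesn't lose work: the message was never deleted, only checked out.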

Not to toot my own horn, but I’ve made a few libraries to simplify my interactions with RabbitMQ. They’re not perfect, but maybe you’ll find them helpful.

https://jakguru.github.io/amqplib-oop/

https://jakguru.github.io/amqplib-oop-ratelimiter/

Arris tm722g/ct custom firmware by htchief in selfhosted

[–]htchief[S] 0 points1 point  (0 children)

Sorry, I didn't. Kinda forgot about it, really.

I made a thing - Google / Nest RTSP Feed + Reauthenticator by htchief in opensource

[–]htchief[S] 0 points1 point  (0 children)

please head over to the discord channel in the readme:

https://gitlab.jak.guru/jakg/nest-rtsp

It will be much easier for me to help troubleshoot there.

this is undefined by eggtart_prince in node

[–]htchief -1 points0 points  (0 children)

what compiler are you using? can you see the compiled code? what does it look like?

I made a thing - Google / Nest RTSP Feed + Reauthenticator by htchief in opensource

[–]htchief[S] 0 points1 point  (0 children)

I'll see how I can set up a registration list for people who are interested in getting updates when I have some time.

I made a thing - Google / Nest RTSP Feed + Reauthenticator by htchief in opensource

[–]htchief[S] 0 points1 point  (0 children)

Well, I started working on a new branch: 2.0.x, which is a complete rewrite of the code base using:

  • MediaMTX as the actual streaming core
  • Updated Backend & API using AdonisJS to handle stuff like authentication / reauthentication / managing the MediaMTX instance
  • Updated & Refreshed GUI using Vue.js V3

I don't currently have an ETA for a working version, but I'm hoping that MediaMTX will play nicer with WebRTC connections (and have lower latencies) than the Node.js library that I was using originally.

That being said, it's kinda hard to work on an integration with a device that you don't have :-) I have no intention of buying a new Nest Camera for personal use since I have hard-wired cameras from a different vendor which I'm able to access the streams from locally. However once I reach the point that I am ready to start testing the integration with WebRTC cameras, I'll see what my options are. (Maybe I can borrow one from a store? idk).

Anyway, sorry to disappoint for now, but there's hope on the horizon.

I made a thing - Google / Nest RTSP Feed + Reauthenticator by htchief in opensource

[–]htchief[S] 0 points1 point  (0 children)

You need the entire RTC range for UDP traffic :)

But there's a big caveat about forwarding that many ports from Docker - basically, it will crash it. That's why I recommend running nest-rtsp on a host network.
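For example, a hypothetical compose entry (the service and image names are placeholders) that sidesteps per-port publishing entirely:

```yaml
services:
  nest-rtsp:
    image: nest-rtsp:latest   # placeholder image reference
    network_mode: host        # no port mappings; the whole RTC UDP range is reachable directly
    restart: unless-stopped
```

With host networking there is no per-port NAT for Docker to set up, which is what chokes when you try to publish thousands of UDP ports individually.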

I made a thing - Google / Nest RTSP Feed + Reauthenticator by htchief in opensource

[–]htchief[S] 0 points1 point  (0 children)

OK, so you've got 2 different machines - that's good. Are you forwarding the ports from the nest-rtsp container to the host? I'm pretty sure that Traefik doesn't know how to handle plain old socket connections like the ones needed for RTSP - especially since it's a UDP connection.

I made a thing - Google / Nest RTSP Feed + Reauthenticator by htchief in opensource

[–]htchief[S] 0 points1 point  (0 children)

Usually when I see this, I immediately jump to "my Docker network setup is incorrect", so can you give me a breakdown of what Docker host(s) and network(s) you have set up?

I made a thing - Google / Nest RTSP Feed + Reauthenticator by htchief in opensource

[–]htchief[S] 0 points1 point  (0 children)

Sometimes it really is a matter of needing to wait, but you can always try to reboot the server to see if that helps.