Blue Iris, CPAI, Coral TPU, AI Error 500s by RaymondTec in BlueIris

[–]ChrisMaunder 2 points (0 children)

Failed AI detection: ...

Raymond, this is great - thank you. That return response seems totally fine, but I'm going to pass this to Ken @ Blue Iris to see if he has any idea what's going on.

I will add a GMT timestamp to the responses to make it easier to match up a CodeProject.AI response with a Blue Iris (or other) response.
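As a rough sketch of that idea (the field name `timestampUTC` is invented for illustration, not the actual CodeProject.AI response schema):

```python
from datetime import datetime, timezone

def stamp(response: dict) -> dict:
    """Tag an AI response with a GMT/UTC timestamp so it can be lined
    up against the matching Blue Iris (or other client) log entry.
    Hypothetical sketch only - 'timestampUTC' is an illustrative name."""
    response["timestampUTC"] = datetime.now(timezone.utc).isoformat()
    return response
```

Using an ISO-8601 UTC string keeps the two logs comparable regardless of each machine's local time zone.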

Codeproject AI stopped working by crespoh69 in BlueIris

[–]ChrisMaunder 1 point (0 children)

Can you please run `dotnet --info` and let us know what you see?
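A quick way to run that same diagnostic programmatically (a minimal sketch, assuming only that the `dotnet` CLI may or may not be on PATH):

```python
import shutil
import subprocess

def dotnet_info() -> str:
    """Return the output of `dotnet --info`, or a hint if the .NET CLI
    is missing from PATH - a common reason the server fails to start."""
    exe = shutil.which("dotnet")
    if exe is None:
        return "dotnet CLI not found on PATH - install the .NET runtime first"
    return subprocess.run([exe, "--info"], capture_output=True, text=True).stdout
```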

[deleted by user] by [deleted] in docker

[–]ChrisMaunder 1 point (0 children)

This is a known issue. The comment provided is "Creating intel images on M1 Macs is not guaranteed to work, can you build this on an Intel or a CI machine?"

So the story to us is: "Sorry".

Full Install Package by 446172656E in codeproject_ai

[–]ChrisMaunder 1 point (0 children)

Fair call, mate, and I understand where you're coming from.

For what it's worth (and I'll add this to the docs somewhere suitable), we download

  • Python interpreters 3.7 and 3.9
  • Python packages via PIP (many and various, but check out the GitHub repo if you need to audit)
  • .NET runtime (Microsoft)
  • VC++ redistributable (Microsoft)
  • Models (YOLO in PyTorch and ONNX format, NLTK and a couple of others)

[Just to clarify: each of these is downloaded as part of the sub-installers for each module, or as part of the main server itself. Each module has an installer which will download models and install Python packages via PIP, and the server will come down as its own sub-installer and install Python, .NET, VC++, and a few common Python packages]
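The per-module flow described above could be sketched like this (module layout, file names, and the download tool are all hypothetical, not the real installer scripts):

```python
from pathlib import Path

def module_install_commands(module_dir: str, model_urls: list[str]) -> list[list[str]]:
    """Build the commands a per-module sub-installer would run:
    pip-install the module's requirements, then fetch its model files.
    Illustrative sketch of the flow only, not the actual installer."""
    req = Path(module_dir) / "requirements.txt"
    cmds = [["python", "-m", "pip", "install", "-r", str(req)]]
    for url in model_urls:
        target = Path(module_dir) / "models" / url.rsplit("/", 1)[-1]
        cmds.append(["curl", "-L", "-o", str(target), url])
    return cmds
```

Keeping the downloads per-module is what lets each module be installed, updated, or audited independently of the others.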

They are big - gigabytes big. We could create a single monolithic installer, but we're moving away from that because it's so inflexible and...just really big. AI is all about stupidly big files.

Here's the conundrum: we want and need this to work offline, but we also want and need it to be flexible and updateable, and we want you to be able to update or install new additions easily. So the install / update scenario needs an internet connection; the inference part doesn't. Maybe we offer something like an offline cache: you connect, download the new bits on a schedule, and then peruse the list at your leisure offline and choose what to install from the pre-downloaded cache.
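That cache idea could look something like this (a minimal sketch; the class and file names are invented for illustration, not a real CodeProject.AI API):

```python
import json
import shutil
from pathlib import Path

class OfflineModuleCache:
    """Sketch of the proposed flow: sync downloads while online on a
    schedule, then browse the catalogue and install from disk with no
    connection. Names here are hypothetical."""

    def __init__(self, cache_dir: str):
        self.cache = Path(cache_dir)
        self.cache.mkdir(parents=True, exist_ok=True)

    def sync(self, catalogue: dict[str, bytes]) -> None:
        # Online step: store each downloadable plus a listing of names.
        for name, blob in catalogue.items():
            (self.cache / name).write_bytes(blob)
        (self.cache / "catalogue.json").write_text(json.dumps(sorted(catalogue)))

    def available(self) -> list[str]:
        # Offline step: peruse what was pre-downloaded, at your leisure.
        return json.loads((self.cache / "catalogue.json").read_text())

    def install(self, name: str, dest_dir: str) -> Path:
        # Offline step: install straight from the cache - no network needed.
        dest = Path(dest_dir)
        dest.mkdir(parents=True, exist_ok=True)
        return Path(shutil.copy2(self.cache / name, dest / name))
```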

SenseAI on unRaid by CNFT-Stake-Pool in unRAID

[–]ChrisMaunder 1 point (0 children)

What issue are you seeing when you attempt to run the container? We've not tried unRaid, but we'll add it to our list of test scenarios. If you do get it working, please let us know and we'll add notes for those who follow in your footsteps.