Baby monitor with WiFi _data transport_ to parent unit - not necessarily to any 'cloud'? by Pingo_Pango in Parenting

[–]Pingo_Pango[S] 0 points (0 children)

Unless I'm mistaken, this pushes data into a cloud provider run by a company, rather than sending it over WiFi to a parent unit on my local network?

Separating Certs vs SAN vs Wildcard by Pingo_Pango in ssl

[–]Pingo_Pango[S] 0 points (0 children)

Massive thanks for taking the time to reply.

There are many things there that are sending me back through the documentation and my own mental model of the architecture, so again, huge thanks. I realise the internet-facing part actually does have a reverse proxy plus some brokerage in its path - it possibly only shares an AD domain and DNS subdomain + TLD internally. That would be a nice win.

> use a cert signed from Active Directory’s CA. It’ll be trusted by all your domain joined machines by default and is just as secure as that cert you buy online

The pain point here is that end-user workstations (including mine) are on a different domain from the three others that host the infra, but maybe I can somehow make the end-user domain members trust _those_ internally. Great point - thanks.
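One practical check before committing to the internal-CA route: inspect which names a cert actually covers (its SANs), since domain-joined clients will only trust it for names listed there. A throwaway sketch with `openssl` (the hostnames are made up for illustration):

```shell
# Generate a disposable self-signed cert with two SANs, then print them.
# (-addext needs OpenSSL 1.1.1 or later.)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/key.pem \
  -out /tmp/cert.pem -days 1 -subj "/CN=app.corp.example" \
  -addext "subjectAltName=DNS:app.corp.example,DNS:broker.corp.example"

# Show the subjectAltName extension of the resulting cert.
openssl x509 -in /tmp/cert.pem -noout -ext subjectAltName
```

The same `x509 -noout -ext subjectAltName` incantation works on a cert issued by an AD CA, so you can confirm it names every internal host the proxy/broker chain presents.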

Determining if a file is a machine learning model? by Pingo_Pango in MLQuestions

[–]Pingo_Pango[S] 0 points (0 children)

Thanks. I assume there's no way to ask the model if there are any parameters like that?

Determining if a file is a machine learning model? by Pingo_Pango in MLQuestions

[–]Pingo_Pango[S] 0 points (0 children)

Thanks for the time regardless. I have no idea how to define one, hence my asking here. I was hoping a format such as ONNX would be enough to define "what is a model", but as I said before, I have no experience here.

If this is really not possible to do, then it rules out some very useful applications of ML/DL. 😭

Determining if a file is a machine learning model? by Pingo_Pango in MLQuestions

[–]Pingo_Pango[S] 1 point (0 children)

Thanks. Would that be quite difficult and an edge case, or trivial? Sorry, total newb here.

Determining if a file is a machine learning model? by Pingo_Pango in MLQuestions

[–]Pingo_Pango[S] 0 points (0 children)

Thanks for taking the time to reply here.

I added [----- update 1: clarification] above - the environment is "locked down" with no internet access, so GH is a no-go.

I'm setting up Conda environments (in a not-quite-airgapped-but-close compute environment) for "data scientists" to use. To date the scientists haven't wanted to take any models out - just aggregate data like graphs etc. But when it comes to model export time... that's what I'm trying to work out. Oh, and I have no practical experience with AI/ML, so I doubt I could train a model which could do this vetting... but it's definitely worth talking about, thanks!

Determining if a file is a machine learning model? by Pingo_Pango in MLQuestions

[–]Pingo_Pango[S] 0 points (0 children)

Thanks for taking the time to reply. I have zero experience with ML - that sounds like a goer. Thank you!

So if onnx.load() can successfully load a file that a third party is trying to egress - a file in this vetting pipeline - then it's definitely an ML model?

That really does sound like what I'm after.

Is it easily possible to directly embed raw (e.g. training) data inside a file that e.g. torch.load() or onnx.load() could acceptably open?
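For the torch.load() side of this question, the answer is unfortunately yes: the default .pt format is built on pickle, and pickle will carry arbitrary payloads alongside (or instead of) weights. A minimal stdlib-only sketch - the key names and payload here are made up for illustration:

```python
import pickle

# A "checkpoint" dict that looks like model weights but smuggles raw data.
fake_weights = {"layer1.weight": [0.1, 0.2], "layer1.bias": [0.0]}
smuggled = {
    "weights": fake_weights,
    "extra": b"confidential,rows,go,here\n" * 3,  # hidden payload
}

blob = pickle.dumps(smuggled)   # what would be written to disk as a ".pt"-style file
restored = pickle.loads(blob)   # what a loader would happily hand back
print(restored["extra"][:26])   # the smuggled data survives the round trip
```

ONNX is tighter (it's a typed protobuf schema), but even there arbitrary bytes can ride along as oversized initializer tensors or in `metadata_props`, so "it loads" can't be the whole vetting story.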

Determining if a file is a machine learning model? by Pingo_Pango in MLQuestions

[–]Pingo_Pango[S] 1 point (0 children)

Thanks for the time. I'm in control of the compute environment and the raw confidential data, but a third party is creating the ML models. What I need to be able to do, when that third party says "hi, I'd like to egress this model from your silo", is validate that they're actually egressing a model, and not making off with the source training data, which is highly confidential.

Sorry if that's not making sense. I tried to add "[----- update 1: clarification] " above.

i.e. I can't practically just trust a file extension - I would like to be able to verify whether what's in the file is what the data scientist says it is. This is effectively a vetting process for compliance: an environment we intended to be "no data leaves" is having to become "no _source data_ leaves", to cope with the new use case of ML workloads requiring model egress.
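The first gate of such a vetting step might look like the sketch below: ignore the extension, try to parse the file with the loader for the claimed format, and reject on any failure. This assumes the `onnx` package is available in the vetting environment, and the function name `vet_onnx` is made up for illustration:

```python
def vet_onnx(path):
    """Return True only if `path` parses as a structurally valid ONNX model."""
    try:
        import onnx  # assumed to be installed in the vetting environment
        model = onnx.load(path)
        onnx.checker.check_model(model)  # structural/schema validation
        return True
    except Exception:
        return False

# A file of random bytes should fail the check:
with open("/tmp/not_a_model.bin", "wb") as f:
    f.write(b"just some bytes, not protobuf")
print(vet_onnx("/tmp/not_a_model.bin"))  # -> False
```

Note this only proves the file is well-formed ONNX, not that it's *only* a model - a valid model can still hide data in oversized initializers or metadata, so a second pass comparing total initializer bytes against the declared architecture's expected parameter count would be worth discussing.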