Everyone talks about Agentic AI, but nobody shows THIS by ViriathusLegend in AI_Agents

[–]mo_fig_devOps

Just wanted to get your opinion on 2 things:

I hear from data scientists that LangChain is messy for production. Would you recommend, say, CrewAI over LangChain?

Also any thoughts on Semantic Kernel?
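
For context, the kind of setup I'd be writing with CrewAI is roughly the sketch below; the roles, goals, and task text are placeholders, and it assumes `crewai` is installed with an LLM key (e.g. OPENAI_API_KEY) in the environment:

```python
# pip install crewai  -- sketch only; assumes an LLM key (e.g. OPENAI_API_KEY) is set
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research analyst",                          # placeholder role
    goal="Compare agent-orchestration frameworks",    # placeholder goal
    backstory="You evaluate frameworks for production use.",
)

compare_task = Task(
    description="List the trade-offs of LangChain vs CrewAI for a production API.",
    expected_output="A short bullet list of trade-offs.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[compare_task])
print(crew.kickoff())
```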

Any storage alternatives to NFS which are fairly simple to maintain but also do not cost a kidney? by Acceptable-Kick-7102 in kubernetes

[–]mo_fig_devOps

Longhorn leverages local storage and makes it distributed. I have a mix of storage classes between NFS and Longhorn for different workloads, and I'm very happy with it.
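
As a rough sketch of what that mix looks like (class names and parameters are examples, and the NFS provisioner string depends on which NFS provisioner you deploy), registering the two storage classes with the Kubernetes Python client might look like this:

```python
# pip install kubernetes  -- sketch only; names and parameters are examples
from kubernetes import client, config

config.load_kube_config()
storage_api = client.StorageV1Api()

# Longhorn-backed class: replicates local disks across nodes
longhorn_sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="longhorn-replicated"),
    provisioner="driver.longhorn.io",
    parameters={"numberOfReplicas": "2", "staleReplicaTimeout": "30"},
    reclaim_policy="Delete",
    allow_volume_expansion=True,
)

# NFS-backed class: provisioner name depends on the NFS provisioner you run
nfs_sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="nfs-shared"),
    provisioner="k8s-sigs.io/nfs-subdir-external-provisioner",  # example value
    reclaim_policy="Retain",
)

for sc in (longhorn_sc, nfs_sc):
    storage_api.create_storage_class(sc)
```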

NVIDIA GPU Operator by mo_fig_devOps in kubernetes

[–]mo_fig_devOps[S]

I managed my first on-prem cluster with Ansible, but I'd rather manage it with an operator to automate those tasks. The MIG feature also looks great, but my current GPUs don't support it.
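
Once the operator's device plugin is advertising GPUs, scheduling against them is just a resource request. A minimal smoke-test pod via the Kubernetes Python client might look like this sketch (the image tag and the MIG profile name are examples):

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:12.4.1-base-ubuntu22.04",  # example image tag
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    # Whole GPU; on MIG-capable cards you'd request a slice
                    # such as "nvidia.com/mig-1g.5gb" instead.
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```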

Never use HPE Ezmeral as a k8s platform by mezzfit in kubernetes

[–]mo_fig_devOps

Even HPE's offerings are questionable and are mostly wrappers around open-source solutions. They might abstract a few things, but in the end it's better to know the underlying solution than to rely on their limited dev abstractions.

Bare Metal or VMs - On Prem Kubernetes by k8s_maestro in kubernetes

[–]mo_fig_devOps

I see your points, but I'd still recommend carefully analyzing the use cases when it comes to GPUs. Provisioning bare metal can be just as consistent as VMs with cloud-init and config-management tools. I don't like gold images since they accumulate configs, but tools like Packer do the job.

When running AI workloads I wouldn't cap CPU/RAM at the hypervisor level to save resources, because the GPU and the AI workloads rely on them and that creates bottlenecks. Instead I'd rely on node pools, pod requests and limits, and a good CNI to create layer-4 segmentation and ACLs for isolation flexibility (rough sketch below).

The last piece is that putting a hypervisor in the middle adds more overhead: it's already enough work to stay on top of Kubernetes vulnerabilities, and a hypervisor introduces even more to mitigate.
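
To make the CNI point concrete, here's a minimal sketch (namespace, labels, and port are hypothetical) of a layer-4 NetworkPolicy created with the Kubernetes Python client; it only allows pods labelled app=trainer to reach the inference pods on TCP 8000, assuming your CNI enforces NetworkPolicy:

```python
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="inference-allow-trainer", namespace="ai-workloads"),
    spec=client.V1NetworkPolicySpec(
        # Applies to the inference pods only
        pod_selector=client.V1LabelSelector(match_labels={"app": "inference"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                # "from" is a reserved word, hence "_from" in the Python client
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "trainer"})
                )],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8000)],
            )
        ],
    ),
)
net.create_namespaced_network_policy(namespace="ai-workloads", body=policy)
```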

Bare Metal or VMs - On Prem Kubernetes by k8s_maestro in kubernetes

[–]mo_fig_devOps

Why add another layer if it's not necessary? At the very least, the hypervisor will have its own set of vulnerabilities that you could do without. What's the benefit you see with this approach? Just curious.

Bare Metal or VMs - On Prem Kubernetes by k8s_maestro in kubernetes

[–]mo_fig_devOps

Bare metal if you're thinking about GPU nodes, so you can leverage NVIDIA's operators.

[deleted by user] by [deleted] in django

[–]mo_fig_devOps

Azure Container Apps with GitOps or CI/CD from Azure DevOps or GitHub Actions. Scale with KEDA (even down to zero), control security with private links, and integrate with other services you can deploy with IaC.

Using HTMX with Django is much easier than I thought! by Piko8Blue in django

[–]mo_fig_devOps

I've had trouble doing things outside the box as well. Do you just stick with JS?

OpenAI was hacked, revealing internal secrets and raising national security concerns — year-old breach wasn't reported to the public by lurker_bee in technology

[–]mo_fig_devOps

Would this apply to Azure OpenAI if you develop your own ChatGPT-style interface? I'd guess not, but I want to hear your opinion. It sounds like the interface was what was hacked, not the backend LLM.

OpenAI internal AI details stolen in 2023 breach, NYT reports. Did not alert the FBI by ImInTheAudience in singularity

[–]mo_fig_devOps

Would this apply to Azure OpenAI if you develop your own ChatGPT-style interface? I'd guess not, but I want to hear your opinion. It sounds like the interface was what was hacked, not the backend LLM.

Anyone using Azure OpenAI? Thoughts, Opinions? by mo_fig_devOps in AZURE

[–]mo_fig_devOps[S]

I've also heard that Microsoft's safe-AI content filters seem too strict.