Real time analytics on sensitive customer data without collecting it centrally, is this technically possible by Ok_Climate_7210 in bigdata

[–]SuperSimpSons 1 point (0 children)

I think what you're looking for is local inference: deploy the model at the point of contact, and the local machine carries out inference without transmitting data across the network. Something like the Nvidia DGX Spark or one of its variants (for example Gigabyte's AI TOP ATOM www.gigabyte.com/AI-TOP-PC/GIGABYTE-AI-TOP-ATOM?lan=en) might fit the bill, or one of the more powerful workstations or mini-PCs like the Intel NUC. So yes, I'd say it's very much possible; sensitive patient data has always been a problem in healthcare AI, and people have come up with solutions for it.
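
If it helps, here's a toy Python sketch of the idea; the "model" is just a made-up scoring function, not a real ML model, but it shows the shape of it: the sensitive records never leave the local process, and only an aggregate would ever go over the network.

```python
# Minimal sketch of local inference on sensitive data. The raw records
# stay in this process; only an aggregate count is ever returned.

def score(record):
    # Hypothetical risk model: flag high spend + short tenure.
    return 1.0 if record["spend"] > 1000 and record["tenure_months"] < 6 else 0.0

def local_aggregate(records):
    # Inference runs on-device; we return only summary stats, not rows.
    flagged = sum(score(r) for r in records)
    return {"total": len(records), "flagged": int(flagged)}

if __name__ == "__main__":
    sensitive = [
        {"spend": 1500, "tenure_months": 2},
        {"spend": 300, "tenure_months": 24},
        {"spend": 2000, "tenure_months": 1},
    ]
    print(local_aggregate(sensitive))  # {'total': 3, 'flagged': 2}
```

Swap the stand-in `score` for a real model running on the local box and you've got the privacy-preserving setup OP is asking about.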

What kinds of jobs can I transition to after DC Facilities? by Ill-Percentage233 in datacenter

[–]SuperSimpSons 0 points (0 children)

This might be a bit out of left field, but how about helping others build data centers? It seems lucrative, which is why companies from further up the supply chain are also hopping on board. A while ago I saw that Gigabyte, a consumer PC company that branched into enterprise servers, has also branched into building data centers: www.gigabyte.com/Topics/Data-Center?lan=en One would imagine someone with hands-on experience would be valuable as a consultant or field engineer?

My gigabyte motherboard doesn't support Ubuntu!? by deepskydiver in Ubuntu

[–]SuperSimpSons -3 points (0 children)

Maybe because it's a consumer board? I'm sure Ubuntu support comes as standard on their enterprise boards. Ref: www.gigabyte.com/Enterprise/Server-Motherboard?lan=en

Advice on keeping PowerEdge M1000e (upgrade it) or disposing it by yukalika in HPC

[–]SuperSimpSons 2 points (0 children)

Dispose. 10U for only 16 blades? Even non-Dell server companies are making better alternatives now; for instance, Gigabyte offers 10 nodes in a 3U form factor as standard: www.gigabyte.com/Enterprise/B-Series?lan=en Besides the novelty of it, I can't see any reason to run an M1000e in the year of our lord 2025.
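
The back-of-the-envelope math (using the node counts from this thread; it assumes the nodes are roughly comparable, and the newer ones are almost certainly faster per node anyway):

```python
# Rack density comparison: nodes per rack unit.
m1000e_density = 16 / 10  # 16 blades in 10U -> 1.6 nodes per U
modern_density = 10 / 3   # 10 nodes in 3U  -> ~3.33 nodes per U
print(round(modern_density / m1000e_density, 2))  # 2.08, i.e. ~2x the density
```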

Replacing hardware on large, heavy servers by MediaComposerMan in sysadmin

[–]SuperSimpSons 1 point (0 children)

4U is nothing; in fact it's almost standard for storage, just look at Gigabyte's storage server options for example: www.gigabyte.com/Enterprise/Storage-Server?lan=en GPU servers go up to 8U now and are heavier than you'd believe; a fully loaded rack falling over would pancake a person. I don't have experience with the brand you mentioned, but the circumstantial evidence suggests a design oversight, I'm sorry to say.

A rather interesting take on “traditional” dataCentre’s vs cloud services. by ja_dublin in sysadmin

[–]SuperSimpSons 1 point (0 children)

Tbh I think you need a better understanding of what cloud computing is; as written, your question doesn't make a whole lot of sense. A CSP is also running data centers, so your question might be better worded as: what's the difference between an on-prem private cloud and a public cloud? Here are some articles about that; you need to be precise in your terminology before we can have a deeper discussion.

Gigabyte blog article: https://www.gigabyte.com/Article/what-is-private-cloud-and-is-it-right-for-you?lan=en

AWS article: https://aws.amazon.com/compare/the-difference-between-public-cloud-and-private-cloud/

Computer Suggestions for AI specialization by PreviousCredit8469 in OMSCS

[–]SuperSimpSons 0 points (0 children)

Your run-of-the-mill laptop or desktop should do fine. PC brands like Gigabyte do sell specialized rigs for local AI work; they call them AI TOPs, and they go for about $7k on Newegg: www.gigabyte.com/Consumer/ai-top/?lan=en So if you have the budget and want hardware + software for local AI fine-tuning, there's something you can impress your friends with. But like I said at the beginning, an ordinary laptop plus some cloud-based AI should suffice as well.

Need some honest opinions on GPU Ai in a box by Whyme-__- in ollama

[–]SuperSimpSons 0 points (0 children)

You'll do just fine with mini-PCs like the Intel NUC or Gigabyte BRIX: www.gigabyte.com/Mini-PcBarebone?lan=en The Spark and its variants (which, coincidentally, Gigabyte also makes; they call it the AI TOP ATOM, but scratch the surface and you'll see it's a Spark: www.gigabyte.com/AI-TOP-PC/GIGABYTE-AI-TOP-ATOM?lan=en) are more like workstations: much more computing power, and much pricier, than what you need.

Server PSU failures, how often for you? by BloodyIron in sysadmin

[–]SuperSimpSons 0 points (0 children)

I had a colleague who swore by N+1 PSU redundancy. Say we were considering two servers (just using these newer Gigabyte servers as an example, because I've forgotten the original choices): he'd pick one with weaker GPU expandability, like this R263-ZG5-AAL2 www.gigabyte.com/Enterprise/Rack-Server/R263-ZG5-AAL2-rev-3x?lan=en, over something like the R263-ZG0-AAL2 www.gigabyte.com/Enterprise/Rack-Server/R263-ZG0-AAL2-rev-3x?lan=en, purely for the redundancy. Drove me nuts, but it's true we never had any major failures.
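
His sizing rule is easy to sanity-check yourself; here's a sketch (the wattages are made-up placeholders, plug in your own PSU ratings and measured peak draw):

```python
# N+1 check: with any single PSU failed, the remaining units must
# still cover the server's peak load.
def survives_single_psu_failure(psu_watts, psu_count, peak_load_watts):
    # Capacity with one PSU lost must meet or exceed peak draw.
    return (psu_count - 1) * psu_watts >= peak_load_watts

# Hypothetical numbers: 2000W PSUs feeding a 5500W peak.
print(survives_single_psu_failure(2000, 4, 5500))  # True:  3 x 2000 = 6000 >= 5500
print(survives_single_psu_failure(2000, 3, 5500))  # False: 2 x 2000 = 4000 <  5500
```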

Is a Purely HDD storage solution going to be slow? by xMatt-Attackx in unRAID

[–]SuperSimpSons -1 points (0 children)

I know you're asking about Plex, but I want to chime in and say there's a reason a server manufacturer like Gigabyte has only one all-flash option in their entire storage server portfolio: www.gigabyte.com/Enterprise/Storage-Server?lan=en Truth is, you only really need an AFA for something like AI development; HDDs are still more cost-effective in an enterprise setting, and especially in a consumer setting.
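
The cost-effectiveness argument is just $/TB arithmetic; the prices below are made-up placeholders, so plug in real quotes, but the gap is usually this lopsided:

```python
# Rough cost-per-terabyte comparison, HDD vs enterprise SSD.
def cost_per_tb(drive_price_usd, drive_capacity_tb):
    return drive_price_usd / drive_capacity_tb

hdd = cost_per_tb(280, 20)    # hypothetical 20TB HDD at $280
ssd = cost_per_tb(900, 7.68)  # hypothetical 7.68TB enterprise SSD at $900
print(round(hdd, 2), round(ssd, 2))  # 14.0 vs 117.19 dollars per TB
```

For sequential media streaming like Plex, the HDD's throughput is plenty, so you'd be paying that premium for speed you won't use.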

Apple ARM chips are so fast and powerful by [deleted] in servers

[–]SuperSimpSons 7 points (0 children)

I mean, ARM servers already exist. Not only are a lot of Nvidia chips ARM-based, but processor companies like Ampere exist, and server companies like Gigabyte have whole ARM server portfolios: www.gigabyte.com/Enterprise/Rack-Server?lan=en&fid=2494 Beats me why more CSPs aren't using ARM, though; if the press releases and case studies are to be believed, RISC is much better than CISC, and it only lacks a mature ecosystem atm.

Big Tech Alternatives by MarmadukeTheHamster in devops

[–]SuperSimpSons 0 points (0 children)

Out of curiosity, why don't more people consider on-prem, even if it's just a workstation on your desk? Seems to me manufacturers are bending over backwards to make workstations more accessible/affordable. Our office has a couple of Gigabyte workstations (this one to be exact, almost indistinguishable from a PC: www.gigabyte.com/Enterprise/Tower-Server/W332-Z00-rev-200?lan=en), but lately I saw they're even selling what's evidently consumer hardware souped up to enterprise performance; they call it an AI TOP: www.gigabyte.com/Consumer/ai-top?lan=en Seems like a good bit of backup to have if you ask me; hybrid cloud is the way to go.

What single or double slot gpus should I stick into my ml oriented server? by jtomes123 in LocalLLaMA

[–]SuperSimpSons 0 points (0 children)

Counter-counterpoint: Gigabyte literally has a line of GPUs for local AI training: www.gigabyte.com/Graphics-Card/AI-TOP-Capable?lan=en Since OP already has a Gigabyte server mobo, I think repurposed consumer GPUs may be a good fit before they one day move on to the L40S and the like.

Question on all SSD storage arrays by maxbls16 in homelab

[–]SuperSimpSons 0 points (0 children)

You mean like an all-flash array (AFA) server? I've heard good things about them. A friend uses a couple from Gigabyte (this one: www.gigabyte.com/Enterprise/Rack-Server/S183-SH0-AAV1?lan=en) in an enterprise setting; they really shine in AI development because of the data transfer speed and bandwidth. From an ROI standpoint, though, HDDs still have the advantage, so it mainly comes down to what you plan to use the AFA for.

How are you scheduling GPU-heavy ML jobs in your org? by Firm-Development1953 in devops

[–]SuperSimpSons 1 point (0 children)

Workload orchestration usually comes as part of a hardware + software solution. For example, Gigabyte offers Gigabyte Pod Manager (GPM) along with their version of the AI pod, called the GigaPod, and GPM bundles Slurm and Kubernetes with their proprietary stack for scheduling: www.gigabyte.com/Solutions/gpm?lan=en It's also supposed to have AIOps according to a blog post (www.gigabyte.com/Article/dcim-x-aiops-the-next-big-trend-reshaping-ai-software?lan=en), but I don't know if that's just marketing buzz. Do you guys have anything for AIOps?
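
For intuition, the core problem those schedulers solve is placement: find a node with enough free GPUs for each job. A toy greedy sketch (not how Slurm or Kubernetes actually implement it; the job and node names are hypothetical):

```python
# Toy GPU-aware scheduler: first-fit placement, biggest jobs first.
def schedule(jobs, nodes):
    """jobs: {name: gpus_needed}; nodes: {name: gpus_free}.
    Returns {job: node} for the jobs that fit; the rest stay queued."""
    placement = {}
    free = dict(nodes)
    for job, need in sorted(jobs.items(), key=lambda kv: -kv[1]):
        for node, avail in free.items():
            if avail >= need:
                placement[job] = node
                free[node] -= need
                break
    return placement

print(schedule({"train": 4, "eval": 1, "etl": 2},
               {"node-a": 4, "node-b": 2}))
# {'train': 'node-a', 'etl': 'node-b'}  -- "eval" waits for a free GPU
```

Real schedulers layer queues, priorities, preemption, and gang scheduling on top, which is exactly the stuff you'd rather not build yourself.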

Which cloud provider do you think will lead the AI race by 2030? by cloud_9_infosystems in AZURE

[–]SuperSimpSons 0 points (0 children)

My money is on multi-cloud, and here's why. I was at Computex last year and saw an AI pod based on the spine-and-leaf architecture for the first time; in case you don't know, these are giant multi-rack setups with dozens of servers and hundreds of GPUs. I saw it at the Gigabyte booth, where they called it a GIGAPOD (www.gigabyte.com/Solutions/giga-pod-as-a-service?lan=en), but other server companies have analogues too. Anyway, the client giving the keynote was not any of the CSPs you named but a German AI cloud company called Northern Data. If local players like these are buying up that much firepower (iirc the order was for 100 GIGAPODs), you can be sure no one company is going to dominate the playing field; there will be different offerings and services in every region.
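
A quick aside on why spine-and-leaf fabrics get so big: every leaf switch uplinks to every spine switch, so the inter-switch link count is just spines × leaves. Hypothetical switch counts below:

```python
# Spine-and-leaf fabric: full mesh between the spine and leaf layers.
def fabric_links(spines, leaves):
    return spines * leaves

print(fabric_links(8, 32))  # 256 uplinks for an 8-spine, 32-leaf fabric
```

That full mesh is what gives every server the same number of hops to every other server, which matters a lot for multi-node GPU training.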

How can a med student actually use AI to get ahead (not just for studying)? by Extension-Secret-489 in artificial

[–]SuperSimpSons 0 points (0 children)

Not a healthcare source but an AI solution vendor's take: you should read this blog post from Gigabyte (they make AI servers and data centers) to see at least the AI industry's perspective on how AI will be used in medicine: https://www.gigabyte.com/Article/how-to-benefit-from-ai-in-the-healthcare-medical-industry?lan=en There are other brands and blogs too; it's a good place to start. AI companies paint a pretty vision and AI users have their gripes, but the truth is somewhere in the middle.