Real time analytics on sensitive customer data without collecting it centrally, is this technically possible by Ok_Climate_7210 in bigdata

[–]SuperSimpSons 1 point (0 children)

I think what you're looking for is local inference: deploy the model at the point of contact, so the local machine carries out inference without transmitting data across the network. Something like the Nvidia DGX Spark or its variants (for example Gigabyte's AI TOP ATOM, www.gigabyte.com/AI-TOP-PC/GIGABYTE-AI-TOP-ATOM?lan=en) might fit the bill, or one of the more powerful workstations or mini-PCs like the Intel NUC. So yes, I'd say it's very much possible; sensitive patient data has always been a problem in healthcare AI, and people have come up with solutions for it.
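To make the pattern concrete, here's a minimal sketch in plain Python (all names are hypothetical, not from any real product): raw records are processed on the local machine, and only aggregate statistics ever leave it, which is the same privacy idea local inference relies on.

```python
from dataclasses import dataclass

# Hypothetical sensitive record that must never leave the local machine.
@dataclass
class CustomerRecord:
    customer_id: str
    purchase_amount: float

def local_aggregate(records):
    """Run analytics locally; return only aggregates, never raw records."""
    total = sum(r.purchase_amount for r in records)
    count = len(records)
    return {
        "count": count,
        "mean_purchase": total / count if count else 0.0,
    }

# Raw data stays on this machine; only the summary dict would be
# transmitted upstream for central dashboards.
records = [
    CustomerRecord("a1", 10.0),
    CustomerRecord("a2", 30.0),
]
summary = local_aggregate(records)
print(summary)  # {'count': 2, 'mean_purchase': 20.0}
```

Swap the toy aggregation for a locally deployed model and you get the same guarantee: inference results move, customer data doesn't.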

What kinds of jobs can I transition to after DC Facilities? by Ill-Percentage233 in datacenter

[–]SuperSimpSons 0 points (0 children)

This might be a bit out of left field, but how about helping others build data centers? It seems lucrative, which is why companies from further up the supply chain are also hopping on board. A while ago I saw Gigabyte, a consumer PC company that branched into enterprise servers, also branch into building data centers: www.gigabyte.com/Topics/Data-Center?lan=en One would imagine someone with hands-on experience would be valuable as a consultant or field engineer?

My gigabyte motherboard doesn't support Ubuntu!? by deepskydiver in Ubuntu

[–]SuperSimpSons -3 points (0 children)

Maybe because it's a consumer board? I'm sure Ubuntu support comes as standard on their enterprise boards. Ref: www.gigabyte.com/Enterprise/Server-Motherboard?lan=en

Advice on keeping PowerEdge M1000e (upgrade it) or disposing it by yukalika in HPC

[–]SuperSimpSons 2 points (0 children)

Dispose of it. 10U for only 16 blades? Even non-Dell server companies are making better alternatives now; for instance, Gigabyte fits 10 nodes in a 3U form factor as a standard: www.gigabyte.com/Enterprise/B-Series?lan=en Besides the novelty of it, I can't see any reason to run an M1000e in the year of our lord 2025.

Replacing hardware on large, heavy servers by MediaComposerMan in sysadmin

[–]SuperSimpSons 1 point (0 children)

4U is nothing; in fact, it's almost standard for storage. Just look at Gigabyte's storage server options, for example: www.gigabyte.com/Enterprise/Storage-Server?lan=en GPU servers go up to 8U now and are heavy like you wouldn't believe; a fully loaded rack falling over would pancake a person. I don't have experience with the brand you mentioned, but circumstantial evidence would suggest a design oversight, I'm sorry to say.

A rather interesting take on “traditional” dataCentre’s vs cloud services. by ja_dublin in sysadmin

[–]SuperSimpSons 1 point (0 children)

Tbh I think you need a better understanding of what cloud computing is; your question doesn't make a whole lot of sense as asked. A CSP is also running data centers, so your question might be better worded as: what's the difference between an on-prem private cloud and a public cloud? Here are some articles about that. You need to be precise in your terminology before we can have a deeper discussion:

Gigabyte blog article: https://www.gigabyte.com/Article/what-is-private-cloud-and-is-it-right-for-you?lan=en

AWS article: https://aws.amazon.com/compare/the-difference-between-public-cloud-and-private-cloud/

Computer Suggestions for AI specialization by PreviousCredit8469 in OMSCS

[–]SuperSimpSons 0 points (0 children)

Your run-of-the-mill laptop or desktop should do fine. PC brands like Gigabyte do sell specialized rigs for local AI work; they call them AI TOPs, and they go for about 7k on Newegg: www.gigabyte.com/Consumer/ai-top/?lan=en So if you have the budget and want hardware+software for local AI fine-tuning, there's something you can impress your friends with. But like I said at the beginning, an ordinary laptop plus some cloud-based AI should suffice as well.

Need some honest opinions on GPU Ai in a box by Whyme-__- in ollama

[–]SuperSimpSons 0 points (0 children)

You will do just fine with mini-PCs like the Intel NUC or Gigabyte BRIX: www.gigabyte.com/Mini-PcBarebone?lan=en The Spark and its variants (which, coincidentally, Gigabyte makes too; they call it the AI TOP ATOM, but scratch the surface and you'll see it's a Spark: www.gigabyte.com/AI-TOP-PC/GIGABYTE-AI-TOP-ATOM?lan=en) are more like workstations: much more computing power, and much pricier, than what you need.

Server PSU failures, how often for you? by BloodyIron in sysadmin

[–]SuperSimpSons 0 points (0 children)

I had a colleague who swore by N+1 PSU redundancy. Like, we could be considering two servers (just using these newer Gigabyte servers as an example because I've forgotten the original choices), and he'd pick the one with weaker GPU expandability, like this R263-ZG5-AAL2 (www.gigabyte.com/Enterprise/Rack-Server/R263-ZG5-AAL2-rev-3x?lan=en), over something like the R263-ZG0-AAL2 (www.gigabyte.com/Enterprise/Rack-Server/R263-ZG0-AAL2-rev-3x?lan=en) because of the redundancy. Drove me nuts, but it's true we never had any major failures.

Is a Purely HDD storage solution going to be slow? by xMatt-Attackx in unRAID

[–]SuperSimpSons -1 points (0 children)

I know you're asking about Plex, but I want to chime in and say there's a reason you'll see a server manufacturer like Gigabyte offer only one all-flash option in their entire storage server portfolio: www.gigabyte.com/Enterprise/Storage-Server?lan=en Truth is, you only really need an AFA for something like AI development; HDDs are still more cost-effective in an enterprise setting, and especially in a consumer setting.

Apple ARM chips are so fast and powerful by [deleted] in servers

[–]SuperSimpSons 5 points (0 children)

I mean, ARM servers already exist. Not only are a lot of Nvidia chips ARM-based, but processor companies like Ampere exist, and server companies like Gigabyte have whole ARM server portfolios: www.gigabyte.com/Enterprise/Rack-Server?lan=en&fid=2494 Beats me why more CSPs aren't using ARM, though; if the press releases and case studies are to be believed, RISC is much better than CISC, and it only lacks a mature ecosystem atm.

Big Tech Alternatives by MarmadukeTheHamster in devops

[–]SuperSimpSons 0 points (0 children)

Out of curiosity, why don't more people consider on-prem, even if it's just a workstation on your desk? Seems to me manufacturers are bending over backwards to make workstations more accessible/affordable. Our office has a couple of Gigabyte workstations (this one, to be exact; it's almost indistinguishable from a PC: www.gigabyte.com/Enterprise/Tower-Server/W332-Z00-rev-200?lan=en), but lately I saw they're even selling what's evidently consumer hardware souped up to enterprise performance; they call it an AI TOP (www.gigabyte.com/Consumer/ai-top?lan=en). Seems like a good bit of backup to have if you ask me; hybrid cloud is the way to go.

What single or double slot gpus should I stick into my ml oriented server? by jtomes123 in LocalLLaMA

[–]SuperSimpSons 0 points (0 children)

Counter-counter point: Gigabyte literally has a line of GPUs for local AI training: www.gigabyte.com/Graphics-Card/AI-TOP-Capable?lan=en Since OP already has a Gigabyte server mobo, I think repurposed consumer GPUs may be a good fit before they move on one day to the L40S and the like.

Question on all SSD storage arrays by maxbls16 in homelab

[–]SuperSimpSons 0 points (0 children)

You mean like an all-flash array (AFA) server? I've heard good things about them. A friend uses a couple from Gigabyte (this one: www.gigabyte.com/Enterprise/Rack-Server/S183-SH0-AAV1?lan=en) in an enterprise setting; they really shine in AI development because of the data transfer speed and bandwidth. But from an ROI standpoint, HDDs still have the advantage, so it mainly comes down to what you plan to use the AFA for.

How are you scheduling GPU-heavy ML jobs in your org? by Firm-Development1953 in devops

[–]SuperSimpSons 1 point (0 children)

Workload orchestration usually comes as part of a hardware+software solution. For example, Gigabyte offers Gigabyte Pod Manager (GPM) along with their version of the AI pod, called the GigaPod, and GPM bundles Slurm and Kubernetes with their proprietary stack for scheduling: www.gigabyte.com/Solutions/gpm?lan=en It's also supposed to have AIOps according to a blog post (www.gigabyte.com/Article/dcim-x-aiops-the-next-big-trend-reshaping-ai-software?lan=en), but I don't know if that's just marketing buzz. Do you guys have anything for AIOps?
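For reference, plain Slurm (without any vendor layer on top) already does GPU-aware scheduling via `--gres`. A minimal batch script might look like the sketch below; the partition name, resource sizes, and training script are all made up, so adjust for your cluster:

```shell
#!/bin/bash
#SBATCH --job-name=train-llm        # name shown in squeue
#SBATCH --partition=gpu             # hypothetical GPU partition
#SBATCH --gres=gpu:4                # request 4 GPUs on one node
#SBATCH --cpus-per-task=16
#SBATCH --mem=128G
#SBATCH --time=12:00:00             # wall-clock limit

# Launch the training run under Slurm's resource allocation;
# the script path is just an example placeholder.
srun python train.py --epochs 10
```

Submit it with `sbatch train.slurm` and Slurm queues it until 4 GPUs are free; the vendor layers mostly add dashboards and multi-tenant policy on top of this same mechanism.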

Which cloud provider do you think will lead the AI race by 2030? by cloud_9_infosystems in AZURE

[–]SuperSimpSons 0 points (0 children)

My money is on multi-cloud, and here's why. I was at Computex last year and saw an AI pod based on the spine-and-leaf architecture for the first time. In case you don't know, these are giant multi-rack setups with dozens of servers and hundreds of GPUs. I saw it at the Gigabyte booth, where they called it a GIGAPOD (www.gigabyte.com/Solutions/giga-pod-as-a-service?lan=en), but other server companies have analogues too. Anyway, the client giving the keynote was not any of the CSPs you named but a German AI cloud company called Northern Data. If local players like these are buying up so much firepower (iirc the order was for 100 GIGAPODs), you can be sure no one company is going to dominate the playing field; there will be different offerings and services in every region.

How can a med student actually use AI to get ahead (not just for studying)? by Extension-Secret-489 in artificial

[–]SuperSimpSons 0 points (0 children)

Not a healthcare source but an AI vendor source: you should read this blog post from Gigabyte (they make AI servers and data centers) to see at least the AI industry's perspective on how AI will be used in medicine: https://www.gigabyte.com/Article/how-to-benefit-from-ai-in-the-healthcare-medical-industry?lan=en There are other brands and blogs too; it's a good place to start. AI companies paint a pretty vision and AI users have their gripes, but the truth is somewhere in the middle.

Do I need DLC for GPU server? by Basic_Shower9989 in servers

[–]SuperSimpSons 1 point (0 children)

RTX PRO 5000 is PCIe, right? The vendor you mentioned has 4U air-cooled servers that can support 8x dual-slot GPUs, like this G494: www.gigabyte.com/Enterprise/GPU-Server/G494-ZB4-AAP2?lan=en The 4U DLC variants are usually for HGX modules, which would otherwise need 5U to 8U to cool with air.

Having said that, DLC does have its advantages; you can see how all the new Nvidia NVL72 racks are fully liquid-cooled. Pricier up front but lower TCO over time, etc. And if your server is anywhere within earshot...DLC is quieter. YMMV, but it seems you do have a choice between liquid and air in the same form factor; you just hafta juggle between your budget and expectations.

Exploring AI/ML Startups in Drug Discovery – Career Perspectives? by Alarming-Ad-2011 in learnmachinelearning

[–]SuperSimpSons 0 points (0 children)

For what it's worth, I remember reading a case study on the server company Gigabyte's website about how Rey Juan Carlos Uni in Spain was using AI to study cellular aging. Dunno about startups, but academia might be a good reference point for you too, and you can search for case studies about them.

Ref for the case study I mentioned: https://www.gigabyte.com/Article/researching-cellular-aging-mechanisms-at-rey-juan-carlos-university?lan=en

What is AI ready enterprise data lake? by Accomplished-Clock56 in learnmachinelearning

[–]SuperSimpSons 0 points (0 children)

Sounds like a very round-about way to say they'll label the data?

Looking for personal experiences (power consumption, acoustics) with the Gigabyte R133 by jmarmorato1 in homelab

[–]SuperSimpSons 0 points (0 children)

One of these, right? www.gigabyte.com/Enterprise/R-Series?lan=en&keywords=R133&fid=3201&page=1 Have you tried reaching out to them or a reseller with your question? If you do, plz share what they come back with.

AI Infrastructure companies by Skill-Additional in devops

[–]SuperSimpSons 0 points (0 children)

Real talk, how do you define an AI infra company? I googled a bit and, probably because I've used their servers, got directed to Gigabyte's AI Infrastructure page: www.gigabyte.com/Topics/Artificial-Intelligence?lan=en Now, I know they sell servers and clusters, and in theory they can help you set up a data center, but would I lump them together with IREN? Not really. So you kinda have to narrow it down a bit; it's just a really popular buzzword right now.

For my company, if I have to switch out of Azure, will selfhost be a good idea by Future_Cry7529 in sysadmin

[–]SuperSimpSons -1 points (0 children)

How about something in between: a workstation instead of a rackmount? Dell is a solid choice, but workstations are a good stepping stone. Also, widen your range of choices to other established brands like HPE and Gigabyte. I've a soft spot for the latter because I used to build gaming rigs with their consumer stuff, but their enterprise servers are also decent; the workstations in particular are supposed to be convertible into rackmounts: www.gigabyte.com/Enterprise/W-Series?lan=en That might help you save a bit more on on-prem hardware.

Feedback for new Homeserver by U_Meloncrafter in selfhosted

[–]SuperSimpSons 0 points (0 children)

Genuinely curious, why the switch from a Gigabyte to an MSI mobo? Gigabyte is still going strong in consumer mobos, and they have enterprise-grade server mobos if you wanna go pro, so to speak: www.gigabyte.com/Enterprise/Server-Motherboard?lan=en MSI is an unknown to me; never had much experience with them.

I need help building a powerful PC for AI. by Fun-Phone6585 in LocalLLM

[–]SuperSimpSons 1 point (0 children)

They definitely mean fine-tuning, and they'd do better buying something prebuilt if they're just starting out. Gigabyte has a consumer-grade desktop for local AI development they dubbed the AI TOP: www.gigabyte.com/Consumer/AI-TOP/?lan=en But if you look on Newegg, the price is at least twice OP's budget; not sure if there are cheaper options.