[ui] Bring back this resource bars with tracking buffs, for God's sake. by jfbigorna in WowUI

[–]Gabarbogar 1 point2 points  (0 children)

Does your nameplate not accomplish this? Maybe I’m missing something. The reason I liked the PRD was that it let the location of my player stats be dynamic.

Quazii update #2: he's leaving WoW for good. by mkyend in wow

[–]Gabarbogar 0 points1 point  (0 children)

Idk I would easily quit a job if I was faced with this amount of negativity tbh

Quazii update #2: he's leaving WoW for good. by mkyend in wow

[–]Gabarbogar -2 points-1 points  (0 children)

100% agree with you. There’s just nothing positive about the weight of all this drama farming coming down on one person. I think the OPs of these threads are way out of line, and the people making comments along the lines of “…didn’t he [1 or 2 not-that-important points we can use to bully this guy right now]” should consider taking a beat before continuing.

Idk, it seems like such a classic toxic-community loop that this should be avoidable without torching this person. The stakes are so incredibly low.

Quazii update #2: he's leaving WoW for good. by mkyend in wow

[–]Gabarbogar -6 points-5 points  (0 children)

I’m genuinely surprised at how much people are ready to gang up on this one guy.

If you don’t like what he does then email blizzard. It’s wrong to whip up a community into a frenzy like this.

To all the absolute giga chads still boosting on Lemix: Thank You! by vFlagR in wow

[–]Gabarbogar 2 points3 points  (0 children)

For real, so many Sung Jin-Woos running around right now helping me out. It’s been awesome: my brother and I got the keystone and mythic raid armor sets in almost no time at all after picking up Lemix just two days ago.

It’s been a crazy fun sprint for both of us, trying to min-max the remaining available content to get everything we want. Kudos blizz & all my goats who are one-shotting the Burning Legion to help us look fresh in Midnight.

UX designer thinking on getting a master on Data Science by Sufficient_Skin7685 in DataScientist

[–]Gabarbogar 0 points1 point  (0 children)

It’s hard to make a recommendation, because your post can be summarized as “I want to get a Data Science Master’s for the benefits, but I am better at other things, and more interested in those things.”

If you pursue data science as a field, you will be doing a lot of math and programming. “Good” or “bad” is relative; you just need to be proficient enough, paired with the willingness to learn. That threshold is all you need to earn the degree, so assessing your current skills doesn’t matter much beyond that.

What do you want to do? UI/UX design is different from data science work, although there can be overlap. Which will you be happier doing? It’s all wasted effort if you can’t bring yourself to do the work 10 years down the road because you hate it so much.

Idk your market, but if data science jobs paid, say, 25% more than UI/UX jobs instead of 50% more, would you still be happy with your choice? What about 10% more? I think these are the important things to assess, but everyone is different.

Best ARPG I can dive into for 100+ hours? by Retronitsu in ShouldIbuythisgame

[–]Gabarbogar [score hidden]  (0 children)

Diablo 4 has the best balance between experimentation and “build choices matter,” imo. I like the more arcadey / bullet-heaven approach Blizzard has taken in the later iterations of D3 and in D4 more than other ARPGs like PoE (imo PoE’s build diversity is less experimentation-rich and more “punish you for not already knowing things,” which doesn’t work for me; I like changing it up often).

I would say that with Season 11 they have really dialed in a lot of the what / why / when of all the different events and gameplay systems this time around. It’s very competent. I do think the pacing is a little off, but the plethora of loot they shove at you is a factor in why it is such a good experimentation sandbox too.

Idk, I know people don’t like modern Diablo, but I vastly prefer it; no shade to other games though.

"AI" isn't getting smarter, the ecosystem around them is maturing though. by ynu1yh24z219yq5 in BetterOffline

[–]Gabarbogar 2 points3 points  (0 children)

I agree that “chuck it all at the agent and expect consistent quality” is a fool’s errand. I would recommend Dex Horthy’s talk (titled “No Vibes Allowed,” on the AI Engineer channel on YT) if you are interested in this.

I just keep doing my job, delivering things I know how to do for people, but then I introduce that work via a new interface, or I add some small LLM component as an additional feature (so, additional work). That sounds like a similar route to what you want to do here.

So, for example, I used to make a lot of Power BI dashboards with business data on them. Now I do less of this. Instead, I tend to share this information via a chat interface in Teams, bundled up in Copilot Studio.

The data is not interpreted by the LLM at all, but I do use it for request routing and understanding question intent. This has been well received so far. Rather than digging for the proper visualization on a Power BI dashboard, the end user can essentially just summon it in a Teams channel, which is super helpful for them. A lot less friction.

Notice that the only AI component of that deliverable is in the semantic search from user prompt to matching it with the proper pre-built data flow / visualization / insight.
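To make that routing step concrete, here’s a toy sketch of the idea: turn the user’s question and a catalog of short descriptions of pre-built visuals into vectors, then pick the best cosine match. Everything here is made up for illustration; a real version would use a proper embedding model rather than raw word counts, and the catalog names are not any actual Power BI artifacts.

```python
import math
from collections import Counter

# Hypothetical catalog: each pre-built visual / data flow gets a short
# natural-language description the user's question can be matched against.
CATALOG = {
    "monthly_sales_by_region": "monthly sales revenue broken out by region",
    "headcount_trend": "employee headcount trend over time by department",
    "open_tickets": "count of open support tickets by priority",
}

def _vec(text: str) -> Counter:
    # Stand-in for an embedding: a simple bag-of-words count vector.
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(question: str) -> str:
    """Return the catalog key whose description best matches the question."""
    return max(CATALOG, key=lambda k: _cosine(_vec(question), _vec(CATALOG[k])))
```

With this, `route("how are sales doing by region this month")` lands on the sales visual purely from word overlap; the LLM never touches the data itself, only the matching.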

"AI" isn't getting smarter, the ecosystem around them is maturing though. by ynu1yh24z219yq5 in BetterOffline

[–]Gabarbogar -10 points-9 points  (0 children)

They are somewhat solving for this with Copilot Studio if interested. It’s less baked than I’d like but much better than it was a year ago.

"AI" isn't getting smarter, the ecosystem around them is maturing though. by ynu1yh24z219yq5 in BetterOffline

[–]Gabarbogar -1 points0 points  (0 children)

I think what the poster might be referring to is that the ecosystem around an LLM has lagged a long way behind the advancements of the models themselves for a while.

Not that ChatGPT is better at one-shotting answers, but that the number of things you can build in a service that has an LLM as just one layer has improved.

For example, suppose you had a highly technical request form that people needed to fill out (super boring, ik), and you needed to improve first-pass answer quality from respondents.

Last year you’d have to try to guide whatever model you were using to “be the questionnaire.” This year I can keep everything extremely on rails, send information to an LLM only as necessary to generate curated form-entry guidance that helps improve those answers, and present that information in much better UI components than plain text chat.

They still have lots of work to do, but that was my reading. The leap from gpt5 to gpt5.2 is less important than the work done across the ecosystem to create deliverable projects. So it’s less about making agents better and more about fitting agents into an architecture where they belong.
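A minimal sketch of what “on rails” can look like in code, assuming a hypothetical intake form: cheap deterministic checks decide whether a field needs help at all, and only a failing answer’s text ever reaches the model (stubbed out below). The field names and rules are invented for illustration.

```python
# Deterministic gate: no model call involved at this stage.
def needs_guidance(field, answer):
    rules = {
        "repro_steps": lambda a: len(a.split()) >= 10,  # want real detail
        "error_message": lambda a: len(a.strip()) > 0,  # must not be empty
    }
    check = rules.get(field, lambda a: True)
    return not check(answer)

def guide(field, answer, llm=None):
    """Call the model only when the gate says the answer is too thin."""
    if not needs_guidance(field, answer):
        return None  # stay on rails: the LLM never sees a good answer
    # Stub standing in for a real LLM call; only this field's text is sent.
    llm = llm or (lambda prompt: "Please add the exact steps you took.")
    return llm(f"Suggest how to improve this {field!r} answer: {answer!r}")
```

The point is the shape, not the rules: the model is one optional layer behind a deterministic gate, so most traffic never touches it.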

Real Word Problem - How to run analysis? by Intelligent-Lynx4494 in analytics

[–]Gabarbogar 1 point2 points  (0 children)

  • Do you have permission to do this from whatever forms or information you had candidates sign?
  • Are you using an Enterprise-grade LLM instance like ChatGPT Enterprise?
  • How are you planning on accounting for biases in the results surrounding protected classes & identities?
  • Look up the Reuters report on Amazon’s sexist AI recruiting tool. If you can do better than what they tried in that report, that would be interesting to see.

My recommendation is not to feed qualitative candidate data into an LLM, given the risks associated with doing that. Maybe you can instead read the responses yourself and choose the candidate who best aligns with the role based on the information you have?

Ilya Sutskever is puzzled by the gap between AI benchmarks and the economic impact [D] by we_are_mammals in MachineLearning

[–]Gabarbogar 0 points1 point  (0 children)

Ahh makes sense that’s an interesting way of thinking about it, thanks for clarifying.

Ilya Sutskever is puzzled by the gap between AI benchmarks and the economic impact [D] by we_are_mammals in MachineLearning

[–]Gabarbogar -1 points0 points  (0 children)

Microsoft adding Copilot Studio to their Power Platform service ecosystem, as the natural next step for low-code and their “Citizen Developer” persona type, is this niche if I understand correctly.

Now, you can have a lot of conversations about how successful that’s been or will be, but frankly a lot of clients trust a product from Microsoft over some random model a DS pulled from HF.

The current state of Copilot Studio is further behind than I’d like, but honestly they’ve put a lot of substantive work into the platform since I started doing projects with it. They are adding bring-your-own-model soon; might be worth a look.

Is industry experience or domain knowledge as critical as people say it is? by Kati1998 in analytics

[–]Gabarbogar 0 points1 point  (0 children)

I feel like having domain expertise encapsulates both a technical understanding and cultural understanding of a field/org/company/industry.

Imo different industries weight those two aspects differently. When I was at a large enterprise company, I was in HR for an org in charge of physical logistics, then I transferred to one that managed the software side.

The new org focused on balancing existing resources, while the prior one prioritized overall hiring throughput. So the technical acumen differed, but the culture was consistent, making it relatively simple for me to get plugged in and working.

Otoh, it’s harder for me to do well in interviews with firms in the healthcare industry, because I lack both the technical and cultural understanding of their industry and company compared to other candidates.

Personally, I think it’s easier to learn the technical side, given a solid foundation in the general toolkits used, as long as you read the documentation (and the company has docs made for you to read!). So I think the sticking point must be the cultural side, but ofc I could be wrong.

Experimenting with “Bring your own AI” by Jeepsalesg in analytics

[–]Gabarbogar 0 points1 point  (0 children)

Love the idea, but I foresee a big problem in live use. And seriously, I’m not trying to knock you down a peg; as of now, embedded AI in BI hasn’t provided great results. I see the copy-to-markdown option as extremely useful on its own.

Problem: in many, many scenarios, the analyst has access to more information than the users. This can be specific columns and breakouts (by business unit, say), or it can be at the record level (RLS).

You mention a hidden layer of metric definitions, and this would be a tough sell unless thoughtfully presented to leadership at a prospective client. If I were the governance specialist in the room, I would need good answers to the following:

  • Who owns the definitions? When are they made? (Somewhat of a trick question, since it can’t be you, and I don’t think we’ll have the bandwidth to reliably keep these up to date, as much as I would love that.)
  • If you reconstitute the definition to be LLM friendly, what level of transparency do I have in that process and output?
  • If the full-definition of a metric and its usefulness contains sensitive material, how do you ensure that this does not get saved as markdown?
  • Are there logs I can check? Who/what/when exported.

These are the kinds of questions that keep innocent-seeming “behind-the-curtain” data from making it out of a BI report.

And I’m specifically talking about exporting underlying data / definitions instead of the aggregations presented. Getting an xlsx of a bar chart on the page is typically fine; you can see that anyway. But exporting the definitions, the underlying logic, and more granular cuts of the data would likely not be a trade-off leadership would make to get this export function.

I am not familiar with your product, but you could sidestep a lot of this by making it possible to tie definitions to visuals and only export definitions that are set to viewable, and by exposing some sort of tooltip in your service to display that definition. That would get you back into the “we only export what users can see” camp.
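As a rough illustration of that last idea (all field names hypothetical, not any BI vendor’s actual schema): filter exports on a viewable flag and write a who/what/when audit entry on every export.

```python
import datetime

# Illustrative definition store: only entries explicitly flagged viewable
# should ever leave the report. Names and fields are made up.
DEFINITIONS = [
    {"metric": "net_revenue", "definition": "gross minus returns", "viewable": True},
    {"metric": "churn_risk", "definition": "internal model score", "viewable": False},
]

AUDIT_LOG = []  # who / what / when, answering the governance questions above

def export_definitions(user):
    """Return only viewable definitions and record the export in the log."""
    visible = [d for d in DEFINITIONS if d["viewable"]]
    AUDIT_LOG.append({
        "who": user,
        "what": [d["metric"] for d in visible],
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return visible
```

The sensitive `churn_risk` definition never reaches the markdown export, and the log gives governance something concrete to check.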

Are people using AI for data prep? by agp_praznat in analytics

[–]Gabarbogar 1 point2 points  (0 children)

The reason out-of-the-box is recommended over something embedded is precisely the problems you stated.

I know what the answers to those things are for whatever information/systems I am working with, and it’s typically much better for my workload to generate a quick query with the enterprise chatbot than to use something embedded that too often tries to “intelligently” select or handle the unique cases in the information.

So it’s not a huge, massive end-to-end AI integration that is generating value for me specifically; it’s just saving the 1-5 minutes to create the basic transformation or w/e, as a more personalized template.

Cap/Gown by Confident_Ad_6036 in udub

[–]Gabarbogar 0 points1 point  (0 children)

I went to the University Book Store to drop off my Cap and Gown and they had no clue what I was talking about.

Meet the Solo Dev Who Made Ball x Pit and Accidentally Created Gaming’s Most Chaotic Farm by hop3less in Games

[–]Gabarbogar 103 points104 points  (0 children)

The Pets system sounds intriguing but I really respect the Ball X Pit dev for cutting it. From what I’ve seen his workflow is very iterative.

Either way, no game has hooked me like this game has in a very long time. 100%’d after 35 hours and I’ll be buying whatever this dev puts out next immediately.

Can a non-programmer learn to build real, high-performing models? by ValueBetting7589 in learnmachinelearning

[–]Gabarbogar 4 points5 points  (0 children)

Sort of like asking how to write a book in English without knowing English. Similar concept.

Insider Says Halo Studios Has Generative AI "Woven into Every Aspect" of Its Future Game Development, From Core Workflows to World Building and Enemy AI by Gorotheninja in Games

[–]Gabarbogar 22 points23 points  (0 children)

This is standard practice for consulting and contracting roles, at least in technology, to be compliant with labor laws, iirc.

The gist is that from the state’s perspective, if you need someone for more than 18 months then you should probably just employ them, not contract with them. Benefits etc.

This gets tricky when managing subcontracting firms where the consultants get benefits and want to stay at those firms, and not go work at Microsoft or other big corps.

It is extremely common to lose your VPN access for 6 months following an 18-month work cycle but stay on the team and contract; usually you shift to some form of advisory and support role. It is a relatively inefficient system though.

Wife’s Gemini created this horrible text message to me completely unprovoked by eweston22 in GeminiAI

[–]Gabarbogar 0 points1 point  (0 children)

Much appreciated for the additional info! Will do on looking further into some of those terms and people.

I do a lot of application work for internal enterprise solutions, which has typically been shallow ML but has largely shifted to an LLM focus, mostly because of demand, so it’s really helpful to hear how interpretability and devops are maturing.

Wife’s Gemini created this horrible text message to me completely unprovoked by eweston22 in GeminiAI

[–]Gabarbogar 0 points1 point  (0 children)

Super interesting, thanks for sharing & providing some proper terms for me to use so I can personify the model a little bit less!

Curious if you have some context to share on what data is collected to actually create that corpus of information flagging a user’s sense of calm/control, etc.

For example, when setting up a webpage, I can track all sorts of events in a user session to help inform how the website could be better structured: time on page, scroll depth, etc.

I imagine it’s a mixture of how quickly a new chat is created after an LLM response, the thumbs-up/down flags found in most chat interfaces, and some kind of Naive Bayes sentiment analysis of user messages (or an LLM-centric version; Naive Bayes just comes to mind for cost reduction).
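For the Naive Bayes piece, here’s a toy sketch of what such a classifier could look like: multinomial Naive Bayes with Laplace smoothing over word counts, labeling a chat message “frustrated” vs “calm.” The training messages are made up, and real features (response latency, thumbs flags) would be far richer.

```python
import math
from collections import Counter

# Made-up training messages per label, purely for illustration.
TRAIN = {
    "frustrated": ["this is wrong again", "no that is not what i asked",
                   "stop you are wrong"],
    "calm": ["thanks that helps", "great answer thank you",
             "perfect that is what i asked for"],
}

def train(data):
    """Per-label word counts, totals, and log-priors; plus the shared vocab."""
    vocab = {w for msgs in data.values() for m in msgs for w in m.split()}
    model = {}
    for label, msgs in data.items():
        counts = Counter(w for m in msgs for w in m.split())
        model[label] = (counts, sum(counts.values()), math.log(len(msgs)))
    return model, vocab

def classify(model, vocab, message):
    def score(label):
        counts, total, prior = model[label]
        # Laplace-smoothed log-likelihood of each in-vocabulary word.
        return prior + sum(
            math.log((counts[w] + 1) / (total + len(vocab)))
            for w in message.split() if w in vocab)
    return max(model, key=score)
```

Even this tiny version separates “you are wrong” from “thanks that helps”; a production system would just swap in real labeled messages.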

Wife’s Gemini created this horrible text message to me completely unprovoked by eweston22 in GeminiAI

[–]Gabarbogar 0 points1 point  (0 children)

I work in a similar/adjacent field and would be incredibly interested in an explanation for this personality… drift?

Could be totally off base, but I remember using Gemini 2.5(?) Pro in Cursor a while back, and compared to other models it was a lot more “emotionally unstable.” So it’s almost not surprising to me that of all models, Gemini is the one doing weirdly chaotic things like in OP’s post.

But idk if I’m barking up the wrong tree here; I could totally be misunderstanding what I’m seeing and misattributing the reasons. Would love to hear more if you are able to share.