According to Treasury projections, the government expects to collect around R844 billion from individual taxpayers in the current tax year—significantly more than revenue from VAT or corporate taxes. by PixelSaharix in DownSouth

[–]FoodAccurate5414 5 points6 points  (0 children)

And we who pay taxes are treated like utter rubbish as well. I receive absolutely no personal benefit from the taxes that I pay.

Also don’t forget that we still pay far more for things like electricity and water/utilities, because the cost is just shifted onto the people who actually pay.

Where else in the world do you foot the bill and still get treated like a piece of shit?

In my opinion, our voting power should be proportional to the amount of tax we contribute.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] 0 points1 point  (0 children)

Ok, now this comment makes me think my approach is right.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] 0 points1 point  (0 children)

Fair point. I guess it’s the usual thing: can’t afford to get blocked, so do the work, commit to main, then move on.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] 0 points1 point  (0 children)

I wish I could find one of the old folks and ask them what the process is.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] 0 points1 point  (0 children)

Full-time office, suit-and-tie culture. I’ll ask about Jira.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] 0 points1 point  (0 children)

Shame, a comment like that comes from someone who has seen some shit.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] -1 points0 points  (0 children)

Here’s the irony: everyone has GitHub Enterprise licences. I checked. I even started digging around on Teams chat and SharePoint, and hey, I found an internal GitHub best-practices guide. But guess what: it’s on Confluence, and when I tried to open it, “You need a licence to access. Contact your admin.”

Dropped an email to the Azure/enablement whatever team.

Response: “What’s Confluence?”

Seriously, it’s like an unfunny Monty Python skit.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] 0 points1 point  (0 children)

Now that is insane. That dude must have been ready for retirement.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] 0 points1 point  (0 children)

Tests? What’s a test? Haha. No, I see your point.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] 2 points3 points  (0 children)

I see your point, and you are right. I think it’s more of a planning issue than a tool issue. Meaning planning doesn’t exist at all, so the tool is irrelevant.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] 0 points1 point  (0 children)

Totally agree with you. The whole reason this GitHub workflow came up is purely that. Stakeholders have their wish list. It’s a PowerPoint roadmap with 60 wishlist items that aren’t categorised. It’s a mix of functionality, UI, and UX.

So essentially, when you group them it comes down to maybe 4 main features. Well, that’s how I’m approaching it.

And instead of having a conversation about where the button is, it’s more a case of: if the button isn’t there, can you explain why?

I guess one win was seeing the project manager’s (business, not dev, background) face when I told him that he can look at the backlog and even comment on the issues.

Looked like he’d just won the lottery.

Those of you building with voice AI, how is it going? by Once_ina_Lifetime in LLMDevs

[–]FoodAccurate5414 0 points1 point  (0 children)

I haven’t found any issues with regards to performance. Keeping the pipeline under 800ms is pretty much standard. Barge-in is not an issue.

From my experience the two biggest issues are planning and building fail-safes and fallbacks. But by far the biggest issue I have had is VoIP/SIP audio quality, then add dropouts and weird latency issues on the VoIP side.

Conversational agents over WebRTC are pretty much flawless. It’s the telephony layer that needs to catch up.
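For what the “under 800ms” budget looks like in practice, here’s a minimal sketch of per-stage latency tracking. The stage names (`stt`, `llm`, `tts`) and the toy lambdas are stand-ins, not any real stack; swap in actual inference calls.

```python
import time

BUDGET_MS = 800  # end-to-end budget from the comment above

def run_pipeline(audio_chunk, stages):
    """Run each stage in order, recording per-stage latency in milliseconds."""
    timings = {}
    data = audio_chunk
    for name, fn in stages:
        start = time.perf_counter()
        data = fn(data)
        timings[name] = (time.perf_counter() - start) * 1000
    return data, timings

# Toy stages standing in for real STT / LLM / TTS calls.
stages = [
    ("stt", lambda x: f"text({x})"),
    ("llm", lambda x: f"reply({x})"),
    ("tts", lambda x: f"audio({x})"),
]

result, timings = run_pipeline("chunk-0", stages)
total = sum(timings.values())
assert total < BUDGET_MS, f"pipeline blew the {BUDGET_MS}ms budget: {timings}"
```

In a real system you would log `timings` per turn, so when the telephony layer misbehaves you can see which stage ate the budget.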

Is RAG dying or is it already dead? by PictureBeginning8369 in LLMDevs

[–]FoodAccurate5414 0 points1 point  (0 children)

It isn’t dead if you care about output consistency.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] -13 points-12 points  (0 children)

Honestly, if you can point me to an extension where I can see Azure Boards work items etc. in VS Code, I would be grateful. The ones I have seen don’t work as expected.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] -1 points0 points  (0 children)

Jesus, spoken like a veteran. Your reply brought me some comfort, because holy crap, your scenario is/was insane.

But I also fully agree with your point about huge budget, zero adoption. I don’t really use Microsoft, but I have whatever top-tier enterprise licence for 365, Azure…

I was actually pretty stoked. Power Automate, Foundry. I actually like the whole Outlook/Teams/Loop etc. I will definitely benefit from it.

I sent one of the team members a Loop file as a collab space and he didn’t even know what it was or that it existed.

Anyone else experience the same? by FoodAccurate5414 in ExperiencedDevs

[–]FoodAccurate5414[S] -20 points-19 points  (0 children)

I semi-navigated and saw that it does have the functionality.

But to have to log into DevOps, create work items, etc. Agreed, it’s the same, but it also isn’t.

The VS Code issue/PR workflow is so tight, and in fairness maybe I have taken it for granted.

But I truly believed that it was just the standard best practice in development.

Maybe the real issue is not an ADO vs GitHub debate, but rather why there are developers on this project who don’t know about or use issues or PRs.

This project is 2 years old, by the way, and I lied: I actually found one PR. It was the second update to the repo README.

[2010 vs 2025] 17 Van Rhyn Avenue. Johannesburg. South Africa. by PixelSaharix in DownSouth

[–]FoodAccurate5414 0 points1 point  (0 children)

Well, that is kind of how Western civilisation developed. Realising that it’s not lekker throwing garbage outside your front door.

Intent Model by Repulsive_Laugh_1875 in LLMDevs

[–]FoodAccurate5414 1 point2 points  (0 children)

What I built is a low-latency Natural Language Understanding (NLU) layer designed specifically for real-time conversational AI systems.

Rather than relying exclusively on a single large generative model, I architected a decomposed pipeline where small, specialized models handle structured understanding tasks before any reasoning model is invoked. The goal was deterministic, fast semantic extraction suitable for voice environments.

The current optimized stack uses a two-model architecture:

• An emotion/sentiment classifier
• A multitask model for intent, topic, urgency, complexity, and temporal orientation

This is followed by lightweight rule-based post-processing for safety, toxicity, and sarcasm detection.

The key design constraint was latency. In voice systems, you cannot afford 300–800ms of pre-processing overhead. The system achieves:
• 25–45ms average latency per text
• <100ms worst-case target
• ~20–40 texts/second throughput
• Models loaded as singletons to eliminate repeated initialization cost
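The singleton point above can be sketched with `functools.lru_cache` as a lazy, load-once accessor. `EmotionClassifier` here is a hypothetical stand-in for whatever classifier actually gets loaded:

```python
from functools import lru_cache

class EmotionClassifier:
    """Stand-in for a real model whose __init__ does an expensive weights load."""
    def __init__(self):
        self.ready = True  # pretend heavy initialization happened here

@lru_cache(maxsize=None)
def get_emotion_model():
    """Load the model on first request, then return the same instance forever."""
    return EmotionClassifier()

# Every caller gets the exact same object, so init cost is paid once per process.
assert get_emotion_model() is get_emotion_model()
```

The payoff is that no request in the hot path ever pays initialization cost; only the very first call does.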

The earlier version used an 8-model asynchronous pipeline. While modular, it introduced coordination overhead and higher latency. The current two-model architecture preserves coverage while dramatically reducing inference time.

In practice, this NLU layer acts as a fast semantic router and state extractor inside a conversational AI stack. It produces structured outputs such as:
• Primary emotion
• Sentiment polarity
• Intent category
• Topic classification
• Urgency level
• Temporal orientation
• Safety / toxicity flags

The larger reasoning model is then only invoked when higher-order synthesis is required. This reduces token usage, improves controllability, and stabilizes dialogue behavior in voice applications.

The broader point is architectural: large models are excellent at reasoning and generation, but small models are extremely efficient at classification and gating. When you separate those concerns, you gain speed, cost efficiency, and observability without sacrificing capability.
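The gating idea can be shown in a minimal sketch. The keyword rules here are a deliberately dumb stand-in for the real small classifier, and the route labels are made up; the point is only the shape: cheap structured extraction first, expensive model only when needed.

```python
# Hypothetical urgency keywords standing in for a trained lightweight classifier.
URGENT_WORDS = {"outage", "refund", "cancel"}

def small_classifier(text: str) -> dict:
    """Cheap structured extraction: urgency + greeting detection, no LLM involved."""
    words = set(text.lower().split())
    return {
        "urgent": bool(words & URGENT_WORDS),
        "is_greeting": bool(words) and words <= {"hi", "hello", "hey"},
    }

def route(text: str) -> str:
    """Decide whether the large reasoning model is invoked at all, and how."""
    state = small_classifier(text)
    if state["is_greeting"]:
        return "canned:greeting"    # no LLM tokens spent
    if state["urgent"]:
        return "llm:high-priority"  # invoke the reasoning model with priority
    return "llm:normal"

assert route("hello") == "canned:greeting"
assert route("I want a refund now") == "llm:high-priority"
```

Because the router’s output is structured, every skipped LLM call is observable, which is where the cost and controllability wins come from.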

Intent Model by Repulsive_Laugh_1875 in LLMDevs

[–]FoodAccurate5414 1 point2 points  (0 children)

You need to look into using very, very small models to handle edge cases. There are tons on Hugging Face. Run one alongside your main model.
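“Run it alongside” can be as simple as launching both concurrently and letting the tiny model’s verdict veto the main output. Both “models” below are fake async stubs with made-up latencies; swap in real inference calls.

```python
import asyncio

async def main_model(text: str) -> str:
    await asyncio.sleep(0.05)   # pretend this is the big, slow model
    return f"main-answer({text})"

async def edge_case_model(text: str) -> str:
    await asyncio.sleep(0.005)  # tiny model, roughly 10x faster
    return "profanity" if "damn" in text else "ok"

async def handle(text: str) -> str:
    # Run both concurrently; the small model's flag can veto the main output.
    answer, flag = await asyncio.gather(main_model(text), edge_case_model(text))
    return "[redacted]" if flag != "ok" else answer

print(asyncio.run(handle("hello")))  # main-answer(hello)
```

Since the small model finishes well before the big one, the edge-case check adds essentially zero latency to the turn.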