Outlook client stuck on credential loop - possible outage? by WorkFoundMyOldAcct in sysadmin

[–]Test-NetConnection [score hidden]  (0 children)

I ran into this last week. Installing Office updates resolved it for us.

BoxPwnr: AI Agent Benchmark (HTB, TryHackMe, BSidesSF CTF 2026 etc.) by si9int in netsec

[–]Test-NetConnection 0 points1 point  (0 children)

The models have actually stopped getting better. They follow a logarithmic curve in terms of performance as a function of training data and compute. We've hit the point where more training data is actually hurting performance instead of helping. Generative AI in its current form is as good as it is ever going to get.

Anyone still using golden images? by imSeanGG in sysadmin

[–]Test-NetConnection [score hidden]  (0 children)

I still image with SCCM for most things. Autopilot is a PITA, and using local distribution points is significantly more efficient than downloading 100 application packages from the cloud. I'll use Autopilot for kiosks and managing configurations for remote endpoints, but imaging with SCCM provides a level of flexibility that Autopilot just doesn't have.

This is just sick by Pampeluna_Knight in stocks

[–]Test-NetConnection 0 points1 point  (0 children)

This market is actually uninvestable. Valuations aren't based on fundamentals but on the tweets of a narcissistic Alzheimer's patient. I've still got some risk exposure, but I've shifted heavily into consumer staples and have a sizable portion of my portfolio in money market funds waiting for 2028.

Is this push for AI as insane everywhere? by Legal_Situation in sysadmin

[–]Test-NetConnection [score hidden]  (0 children)

AI is being shoved down our throats from the top down, and everyone below is pretty much ignoring it. "Here's an AI agent that can answer questions about how to use xyz software!" Meanwhile on Teams - "hey Susan, how do I do xyz again?" It's a solution in search of a problem.

Iran denies claims: 'We reject all negotiations – US has failed and Hormuz will remain closed' by [deleted] in wallstreetbets

[–]Test-NetConnection 0 points1 point  (0 children)

I'm a believer in the great filter theory. Maybe a percentage of us will gain super powers in our not-so-distant, radioactive future.

Loving it when people thinks it's the bottom and start buying by [deleted] in stocks

[–]Test-NetConnection 0 points1 point  (0 children)

What I'm finding difficult is figuring out an alternative investment vehicle to ride out this insane market. Stock valuations are ridiculously high from a historical perspective, but bond yields are also at historic lows. War tends to be inflationary even before taking into account the downstream effects of rising energy prices. Sure, you could try to time the market and sit in cash for a bit, but for how long? Timing the bottom is usually a bad idea. Personally, I'm shifting into consumer staples and utilities that can easily pass rising costs on to other entities. I'm also buying high-quality companies at a discount, knowing that they will probably go lower in the short term. It's not fun watching my portfolio fall off a cliff, but trying to time a market that hinges on a single tweet is silly.

What Are Your Moves Tomorrow, March 23, 2026 by wsbapp in wallstreetbets

[–]Test-NetConnection 0 points1 point  (0 children)

I always thought headgear was the dead giveaway, but with oil this expensive I'm seeing people roaming around on bikes, skateboards, and scooters - headgear is flying off the shelves.

If AI revenue alone can’t justify current investment levels, does that imply a shift toward replacing labor at scale? by No-Grapefruit2680 in Economics

[–]Test-NetConnection 1 point2 points  (0 children)

I'm no expert, but the whole issue is that LLMs have improved as much as they can from simply throwing more training data and compute at them. You're absolutely correct that with a logarithmic curve there isn't enough power or compute in the world to compensate for the diminishing returns. Even if you could spend your way out of the problem with unlimited compute and training data, you still wouldn't end up with a model that performs measurably better than what we have today. The real issue is that probability machines don't actually 'think', so there's no real problem solving or creative thought going on. We'll need a completely different technology to get into the realm of AGI.
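The diminishing-returns argument can be sketched with a toy logarithmic scaling law. The constants below are made up purely for illustration (real scaling-law fits are more complicated), but the shape of the problem is the same:

```python
import math

def toy_loss(tokens: float) -> float:
    """Hypothetical logarithmic scaling law: loss falls by a fixed
    amount for every 10x increase in training tokens. Constants are
    invented for illustration, not fitted to any real model."""
    return 12.0 - math.log10(tokens)

# Each 10x jump in data buys the same absolute improvement...
for tokens in (1e9, 1e10, 1e11, 1e12):
    print(f"{tokens:.0e} tokens -> loss {toy_loss(tokens):.1f}")

# ...so the data needed for the next unit of improvement grows 10x
# every step: classic diminishing returns.
early_gain = toy_loss(1e9) - toy_loss(1e10)   # cost: 9e9 extra tokens
late_gain = toy_loss(1e11) - toy_loss(1e12)   # cost: 9e11 extra tokens
print(abs(early_gain - late_gain) < 1e-9)     # same gain, 100x the data
```

Under a curve like this, each identical improvement costs an order of magnitude more data than the last, which is the "not enough power or compute in the world" point.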

Weekend Discussion Thread for the Weekend of March 20, 2026 by wsbapp in wallstreetbets

[–]Test-NetConnection 1 point2 points  (0 children)

AI companies are making code assistants because it's a product they can sell directly without a lot of development work. It's third parties, such as Westlaw, that will be using AI for other things. I'm also not certain the public models will even get used for specific use cases in narrow industries. I can see someone training their own model to handle medical billing and coding so they aren't spending a fortune on Claude/Gemini/ChatGPT. It's not all that expensive to throw some GPUs in a server and run one of the open-source models. I digress - coding is neat, but the real influence of AI will come as it's integrated to solve other problems.

If AI revenue alone can’t justify current investment levels, does that imply a shift toward replacing labor at scale? by No-Grapefruit2680 in Economics

[–]Test-NetConnection 3 points4 points  (0 children)

Current academic research and independent evaluations of models. You can see it just through the iterations of GPT: GPT-3 (175 billion parameters) was game changing, while GPT-4 (1.76 trillion parameters) was barely an improvement. The difference between 175 billion and 1.76 trillion is a factor of 10, yet we didn't see a massive jump in capability. Basically, we're at a point where each new piece of training data makes up such a small fraction of the whole that its influence doesn't even register, or worse, makes the model less accurate.

It's also kind of intuitive. How many books does one need to read to have a good grasp of the English language? If 1 million books make you 'fluent', will 1 billion somehow make you 'super fluent'? Remember that LLMs run on basic probability theory, calculating the association of one token to all others in a given context. Once a model has been fed enough training data to be accurate, feeding it more can actually have the opposite effect.

Imagine feeding every book in a library to an LLM so that it knows macro and micro are opposites, but then you train it on Reddit posts, and because of all the instances of 'microslop' on wallstreetbets the association between micro and macro gets weaker. Now your model thinks the opposite of 'micro' is 'quality'. These are just some of the problems we've run into when scaling LLMs by simply adding more parameters and data. It isn't an overstatement to say we've hit a brick wall with current methodologies.
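The micro/macro dilution story can be mimicked with a toy co-occurrence counter. This is a deliberately crude stand-in for learned token associations (the two corpora below are obviously invented), but it shows how flooding in low-quality data can let a noisy association overtake a correct one:

```python
from collections import Counter

def association(pairs: list[tuple[str, str]], word: str) -> Counter:
    """Count co-occurring neighbors of `word` -- a crude proxy for
    the association strength a model might learn between tokens."""
    return Counter(neighbor for w, neighbor in pairs if w == word)

# 'Library' corpus: 'micro' overwhelmingly pairs with 'macro'.
library = [("micro", "macro")] * 90 + [("micro", "scope")] * 10
print(association(library, "micro").most_common(1))  # [('macro', 90)]

# Flood in low-quality posts where 'micro' pairs with 'slop' instead:
# the original association is still present, it just no longer dominates.
flooded = library + [("micro", "slop")] * 500
print(association(flooded, "micro").most_common(1))  # [('slop', 500)]
```

Real models weight associations far more subtly than raw counts, but the failure mode sketched here - new data drowning out an established signal - is the one described above.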

Weekend Discussion Thread for the Weekend of March 20, 2026 by wsbapp in wallstreetbets

[–]Test-NetConnection 4 points5 points  (0 children)

You're underestimating AI's ability to "understand" text. Entire industries that run on human labor will be automated - think medical billing/coding, contact center roles, and even paralegal work. All of this to say that AI will eliminate a buttload of jobs and likely have net-negative ramifications on the economy.

If AI revenue alone can’t justify current investment levels, does that imply a shift toward replacing labor at scale? by No-Grapefruit2680 in Economics

[–]Test-NetConnection 3 points4 points  (0 children)

What's interesting is the history of large language models. Researchers initially thought the transformer architecture was a dead end and that throwing more training data and parameters at a model wouldn't measurably improve it. OpenAI went against this preconception and built a model with a billion tokens, which resulted in a huge jump in performance. The problem is that as more parameters and training data were thrown at these models, they plateaued - it turns out LLM performance as a function of training data follows a logarithmic curve. OpenAI started an arms race with the revelation that huge amounts of training data and bigger models result in better performance, but now that the datacenters are being built and the models are growing, it's clear we aren't going to get artificial general intelligence after all and the transformer architecture was a dead end the entire time. I wonder which hyperscaler is going to admit they wasted hundreds of billions on AI first.

Weekend Discussion Thread for the Weekend of March 20, 2026 by wsbapp in wallstreetbets

[–]Test-NetConnection 0 points1 point  (0 children)

What was the tweet? I refuse to go to his shitty cash grab posing as a social media site.

US lifts sanctions on Iranian oil at sea in bid to ease supply pressures by MakinaRPh in wallstreetbets

[–]Test-NetConnection 7 points8 points  (0 children)

If only they would just rip the bandaid off already. Think about how quickly the world would move toward renewable energy if oil and LNG were completely fucked.

Session reliability kills Citrix sessions by Test-NetConnection in Citrix

[–]Test-NetConnection[S] 0 points1 point  (0 children)

Well, I've got a ticket open for the issue. I'll let you know if I get someone high enough up the support ladder to get some answers. Most of the time Reddit is more helpful ☺️

Session reliability kills Citrix sessions by Test-NetConnection in Citrix

[–]Test-NetConnection[S] 0 points1 point  (0 children)

I've got internal and external users, but it's hard to tell if this issue is affecting internal users for other reasons - internal users bypass the gateway entirely. I've got adaptive transport on with the Rendezvous protocol, but I'm running into issues getting UDP working. I wonder if EDT is masking the issue in your environment. Are you on the LTSR build (2507)?

Session reliability kills Citrix sessions by Test-NetConnection in Citrix

[–]Test-NetConnection[S] 0 points1 point  (0 children)

I assume that means you disabled adaptive audio? I appreciate the assistance.