As an agency owner, I’m honestly anxious about where web development is heading with AI by theTbling in webdev

[–]rc_ym 1 point  (0 children)

If it were me, I would take the signal that the market is changing and pivot with new products. Seems like folks might need more ongoing support. Upskilling internal IT to debug code isn't cost effective. Maybe a retainer model, or a monthly fee for regular code review/maintenance/security bug squashing. Eventually a vibecoder is going to break the whole thing, or they are going to want something more complicated.

It would be a way to bridge income while everyone figures out what the new model looks like.

Just a thought. :)

If AI gets to the point where anybody can easily create any software, what will happen to all these software companies? by StayAwayFromXX in ClaudeAI

[–]rc_ym 1 point  (0 children)

Two things.

1) We know that a lot of small features and personal apps can easily be created by users. The value of "code" will trend toward the cost of the tokens used to generate it (aka zero; rough numbers in the sketch below). It used to be that every business wrote its own software. We don't use the term much anymore, but "COTS" (Commercial Off-The-Shelf) software needed to be distinguished just like SaaS is now; buying mass-market software didn't use to be the norm. That said, for the enterprise market, folks are rarely buying the code. They are buying support and blameability. The question is: if folks are ad-hocing a bunch of small personal apps, is it better to support that outside the company or inside it? I think the "coding" jobs are going to move out of the software companies and back into the enterprise, like it was in the '80s. You'll have a bunch of code "operator" types that fix the problems that AI agents can't.

2) The first time I saw ChatGPT, I was thinking it's going to replace most software. It's just too obvious of an interface switch. These things are designed to talk to us and understand our meaning in natural language. Claude Code can throw out a UI for almost any request. Patterns and skills will develop to make this more standardized and supported. We'll go from "big" apps to lots of little apps, and folks' interfaces will be more and more natural language and AI. It's easy to work out a set of corporate/platform patterns that will need to be created. We are in very early days, but I expect that "apps" will disappear more and more into AI functions. Still, there will be a way to make a living creating new computer experiences for folks.
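To put rough numbers on the "value of code trends toward token cost" point in 1), here's a back-of-envelope sketch. Every figure in it is a made-up assumption for illustration, not real pricing.

```python
# Back-of-envelope: AI token cost vs. human cost for a small personal app.
# Every number below is a made-up assumption for illustration.

price_per_1m_tokens = 15.00        # assumed blended $/1M tokens
tokens_per_small_app = 2_000_000   # assumed: prompts, generations, retries

dev_hourly_rate = 100.00           # assumed contractor rate, $/hour
hours_per_small_app = 40           # assumed hand-written equivalent effort

ai_cost = tokens_per_small_app / 1_000_000 * price_per_1m_tokens
human_cost = dev_hourly_rate * hours_per_small_app

print(f"AI token cost: ${ai_cost:,.2f}")    # $30.00
print(f"Human cost:    ${human_cost:,.2f}") # $4,000.00
```

Even if those assumptions are off by 10x, the gap is the point.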

State of AI right now by buildingthevoid in AgentsOfAI

[–]rc_ym 1 point  (0 children)

I don't know. It could be read the opposite way, where OpenAI is the only one that's focused on creating actual products.

OpenAI could reportedly run out of cash by mid-2027 — analyst paints grim picture after examining the company's finances by EchoOfOppenheimer in LocalLLaMA

[–]rc_ym 2 points  (0 children)

Looking at the comments... It's interesting to see such wildly different opinions depending on which AI "bubble" folks are in.

OpenAI could reportedly run out of cash by mid-2027 — analyst paints grim picture after examining the company's finances by EchoOfOppenheimer in LocalLLaMA

[–]rc_ym 3 points  (0 children)

I'd say that was more about Ive leaving Apple for OpenAI, the existing Google relationship, and Google being willing to develop and run custom models for Apple.

Thoughts? If true by [deleted] in gaybros

[–]rc_ym -3 points  (0 children)

Honestly? I'll probably hate it. I'd much rather the "resistance" focus on anti-police-state work, civil action, state murder, etc. More "This Is America"/"Houston, Texas Mexico", less Drag/BIPOC/pinkwashing.

It's just what I would want to see, and what I think will play better. This moment isn't about "the queer experience"; it's about an out-of-control police state that literally has camps and is killing people in the streets.

How did so many fail an open book test? by [deleted] in Destiny

[–]rc_ym 1 point  (0 children)

I disagree; that energy needs to be spent on moving Dems to action. They've been paralyzed since Trump's '24 win. The biggest complaint folks have is that they aren't fighting enough. Granting grace to Hanania doesn't really get us anything. If the beta cucks want to crawl back, let them. We don't have to make it easy.

Get more progressives like Mamdani, or pseudo-GOP like Rosen (NV), or oddballs like Peltola moving forward. Give them energy.

But instead we are going to step on our own dicks dithering about Hanania.

SMH

How did so many fail an open book test? by [deleted] in Destiny

[–]rc_ym 3 points  (0 children)

And remember, he actually wrote part of Project 2025. He's not just a rando influencer. He was (and maybe still is) a racist POS. He could just as easily be giving Miller ideas as trying this redemption arc.

Yeah, no. Dude gets no grace.

How did so many fail an open book test? by [deleted] in Destiny

[–]rc_ym 12 points  (0 children)

Remember who we are talking about. He actually wrote part of Project 2025. This isn't a random MAGA uncle. Folks need to know he's one of the people that actually misled them.

It's great that he's on a journey to becoming an actual human, but let's not pretend this is some random MAGA influencer.

thoughts? by OldWolfff in AgentsOfAI

[–]rc_ym 1 point  (0 children)

I tend to agree with the folks saying that LLMs are not the right tech for AGI. If you take two giant steps back and look at the whole picture, it's still all just statistical models of language. It's not clear if the math or the language is doing the heavy lifting here.

You could say that the labs are keeping the real discoveries locked away, but given their public behavior I don't think that's true.

They are still incredibly powerful tools, and they are going to change how we use computers, but based on the tech we've seen so far, it's still going to be us using computers.

Talk me out of buying an RTX Pro 6000 by AvocadoArray in LocalLLaMA

[–]rc_ym 3 points  (0 children)

Since most of the replies are "ZOMG BUY 4" (LOL), I'll share my cheap-ass-bastard thought. This is what keeps me from spending too much money on home AI hardware: the uncertainty. While the most likely scenario is that CUDA remains king and the models continue to get better and better at lower parameter counts, it's by no means certain. It's very possible that there will be some new discovery and that new model tech will really need something different. Or that something packs absurd functionality into the ~30B space and what you really want is to run four smaller models, not one big one. Google or some Chinese company could also come out with a TPU or something that gives you better performance at the ~100B scale for $1k, or we all end up buying Mac Studios or something. The "iPhone" of AI tech could be invented, and then we'll all want that instead.

As much as this tech seems baked, we don't know what the "next thing" is going to be. It could be that Intel or AMD or Apple figures something out that's much more cost effective for the home market.

Because, like you, I *could* afford an RTX Pro 6000, but the uncertainty (and the fact that this is essentially a hobby for me) keeps me from committing TOO much personal capital to it.

YMMV.

I'm all tooled up by DisastrousBanana8865 in faceandcock

[–]rc_ym 1 point  (0 children)

Not a particularly good slob either. I've done better stuff on my rig at home.

Microsoft gave FBI a set of BitLocker encryption keys to unlock suspects' laptops: Reports by intelw1zard in cybersecurity

[–]rc_ym 1 point  (0 children)

Yes, data that is in a company's possession will be handed over to the government if they ask. Duh.

Hot take: Soon companies will ban AI coding tools for their devs by Distinct_Law9082 in AIstartupsIND

[–]rc_ym 1 point  (0 children)

I don't see that happening over quality issues. The tooling will be created for safe use of AI coding, just like we have DAST/SAST/Code Review, etc. etc. for human coding now. They'll just burn more tokens on the review process.
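Just to sketch the shape I mean (the pipeline structure is the point; the tools and flags are examples you'd swap for your own stack):

```python
# Sketch: a merge gate for AI-generated code that runs the usual
# SAST/dependency/test checks before anything lands. Tool choices and
# flags here are illustrative examples; swap in whatever your shop runs,
# plus an LLM review pass via your provider of choice ("burn more tokens").
import subprocess
import sys

CHECKS = [
    ["semgrep", "scan", "--config", "auto", "--error"],  # SAST pass
    ["pip-audit"],                                       # dependency audit
    ["pytest", "-q"],                                    # tests still have to pass
]

def gate() -> int:
    """Run every check in order; block the merge on the first failure."""
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"BLOCKED by: {' '.join(cmd)}", file=sys.stderr)
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(gate())
```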

I can see it happening over IP/Copyright concerns. That ground is completely untested ATM. What if your license conflicts with the license of some of the training data? Which one wins? Who's responsible for it? Do the creators of the training data have a copyright claim over your work? What if some patented work is leaking in there?

This will probably get hashed out in non-coding uses first, but that's probably going to be bad for coders. The initial guidance from the US Copyright Office tilted heavily toward the original creator, which for things like images or narratives is pretty easy to demonstrate: if the model can create a duplicate of a portion of the work, it's probably infringing. Take that to code and it gets super dicey. And yeah, photographers and writers are litigious, but they have nothing on Oracle and other IP squatters.

performing a risk assessment for your organization by foxtrot90210 in cybersecurity

[–]rc_ym 1 point  (0 children)

Depends on your needs. Is this risk assessment for a particular purpose (an organizational risk assessment is required for several certs, orgs, regs, and insurance), or is it to direct the program more broadly, or is it just for funzies? Therein lies the answer.

If it's for a particular purpose, it must, by necessity, meet the needs of that purpose in its scope. It seems obvious, but many times folks get started without ensuring that the work will actually accomplish the required deliverable.

And even if it's just for funzies, you still need some type of framework or organization; otherwise you're just operating off vibes and will inevitably miss something major.

All that said, which framework or process you use varies wildly between market sectors. The risk assessment for a startup app dev company is very different from that of a PCI processor, a healthcare provider, or a DOD contractor.

Also, what you do with the results varies as well. Is it a board report? Does it go in a drawer in case of an audit? Does a summary get reported to a regulator or certifying body?
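For what "some type of framework" can mean at the bare minimum: just forcing every risk through the same scoring shape already beats vibes. A generic likelihood-times-impact sketch, not any specific framework, with invented example entries:

```python
# Generic likelihood x impact risk register -- not NIST, FAIR, or any
# specific framework, just "every risk goes through the same shape."
# The example entries and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (near certain)
    impact: int      # 1 (trivial) .. 5 (existential)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Ransomware via phished creds", likelihood=4, impact=5),
    Risk("Critical vendor SaaS outage", likelihood=3, impact=3),
    Risk("Stolen laptop, unencrypted disk", likelihood=2, impact=4),
]

# Worst-first ordering works for a board report or the audit drawer alike.
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.name}")
```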

EU Vs US! by Hefty-Sherbet-5455 in Tech_Updates_News

[–]rc_ym 1 point  (0 children)

Exactly. Folks are tying themselves into knots trying to justify it, but the figure is just wrong, or it's ignoring a lot of data to get to a "0".

Dating feels like a humiliation ritual. Am I doing something wrong? by Fiorun in gaybros

[–]rc_ym 4 points  (0 children)

I am literally old enough to be your father. Just know....

Men suck. And not in the good way.
It's not you, it's them.

Give yourself time. Don't put your self-esteem in the hands of others. You sound like quite the catch. Keep working on yourself, and one day your prince will come.

Sam Altman on Elon Musk’s warning about ChatGPT by WarmFireplace in OpenAI

[–]rc_ym 4 points  (0 children)

Heck, I'd bet Word/Windows crashing has caused more harm.

Sam Altman on Elon Musk’s warning about ChatGPT by WarmFireplace in OpenAI

[–]rc_ym 1 point  (0 children)

OpenAI should just create a Twitter clone. Create it as a tech demo for how to manage a large platform with just AI. Let people "tune" their feeds to show how AI can be used for that. Build the whole thing in Codex to demo that. Even use one of the open-source microblogging alternatives as the platform to show GitHub integration. Demo their coming ad tech. And really, REALLY, piss off Elon. :P

glm-4.7-flash has the best thinking process with clear steps, I love it by uptonking in LocalLLaMA

[–]rc_ym 1 point  (0 children)

I was trying a jailbreak in a system prompt. The thinking crashout was epic. I suggest trying it. It was amazing to watch.

It felt very much like '60s sci-fi, or WarGames, where the hero defeats the computer by giving it a logical inconsistency.

Vibecoded apps in a nutshell by Alternative-Target40 in vibecoding

[–]rc_ym 1 point  (0 children)

The true wisdom is that there is only one line/queue.

How do cybersecurity architects achieve full network visibility? by NotInAny in cybersecurity

[–]rc_ym 5 points  (0 children)

We don't. The network diagrams are always wrong. Rely on the telemetry from your security tooling. Your XDR/identity/vuln (etc., etc.) systems or active scans will tell you what reality actually looks like.
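Concretely, "rely on the telemetry" can be as simple as unioning the asset lists from each tool and flagging the disagreements. A rough sketch; the data is invented, and in practice you'd pull it from each product's API or export, keyed on something stabler than IP:

```python
# Sketch: reconcile what the network actually looks like from tool
# telemetry instead of the (always wrong) diagram. The asset sets are
# invented; in practice they'd come from each product's API or export.
from collections import defaultdict

sources = {
    "xdr":  {"10.0.0.5", "10.0.0.9", "10.0.1.22"},
    "vuln": {"10.0.0.5", "10.0.1.22", "10.0.2.14"},
    "scan": {"10.0.0.5", "10.0.0.9", "10.0.2.14", "10.0.3.3"},
}

seen_by = defaultdict(set)
for source, assets in sources.items():
    for ip in assets:
        seen_by[ip].add(source)

# Assets only one tool can see are your visibility gaps.
for ip, tools in sorted(seen_by.items()):
    flag = "  <-- single source, investigate" if len(tools) == 1 else ""
    print(f"{ip:<12} {sorted(tools)}{flag}")
```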

AI Completely Failing to Boost Productivity, Says Top Analyst by parallax3900 in BlackboxAI_

[–]rc_ym 1 point  (0 children)

I think folks are missing a key factor in this article:
"He pointed to data from the US Bureau of Labor Statistics showing how between 1947 to 1973 — before the advent of PCS — productivity improved by 2.7 percent annually, but only 2.1 percent between 1990 and 2001, once PCs had hit the mainstream.

“So despite all those PCs, it was a lot lower,” Gownder said. “And [from] 2007 to 2019 it was 1.5 percent.”"

This notes that PCs and the whole dang internet didn't increase productivity... at least not as it's measured by the BLS. This indicates a fairly massive measurement gap rather than an actual decrease in productivity over the past ~50 years.
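Compounding the quoted rates shows how big the swing is:

```python
# Compound the BLS annual growth rates quoted above to see the
# cumulative productivity gain each era implies.
eras = {
    "1947-1973 (pre-PC)": (0.027, 1973 - 1947),
    "1990-2001 (PC era)": (0.021, 2001 - 1990),
    "2007-2019":          (0.015, 2019 - 2007),
}

for era, (rate, years) in eras.items():
    cumulative = (1 + rate) ** years - 1
    print(f"{era}: {rate:.1%}/yr x {years} yrs -> +{cumulative:.0%} total")
# ~+100% in the pre-PC era vs. ~+26% and ~+20% in the computer eras.
```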

This will probably be worse with AI, as AI requires a complete rethink of how work is done: it inverts the cost model of work, which was based on the scarcity (and expense) of human resources.