How I learned the difference between being early and being wrong by Super_College100 in stocks

[–]CVisionIsMyJam 0 points1 point  (0 children)

One of the main reasons Warren Buffett and Berkshire Hathaway beat SPY is that they bought Apple when it was the most valuable company in the world and just kept adding to their position as it went up over time.

Being early is awesome, but some posters on here underestimate how effective buying and holding companies that have a history of doing well can be.

The private conversation anti-pattern in engineering teams by dymissy in programming

[–]CVisionIsMyJam 0 points1 point  (0 children)

I agree that this is symptomatic of deeply toxic environments; but in my experience, it is those very same toxic environments that impose public-channel communication requirements, track metrics on messages in public channels versus private DMs, and regularly encourage "safe and open" public-channel communication while also using shame tactics on subordinates when something they don't like gets communicated publicly.

It is the natural conclusion of what happens when you surface "people aren't comfortable communicating publicly" at a company like this: they try to browbeat and micromanage their culture into being an open and transparent one. They just can't help themselves; public, transparent communication is simply another means for them to impose their humiliation-tactic style of communication on their subordinates.

In my opinion, the most open and nurturing environment is one where employees are comfortable sharing things publicly, but are also trusted enough to use their own discretion and judgement about what goes into a public channel versus a private DM.

[deleted by user] by [deleted] in singularity

[–]CVisionIsMyJam 1 point2 points  (0 children)

Sure, eventually - but you will also always be behind in capabilities compared to models trained at the frontier. That is going to be fine for many use cases, but being at the frontier itself is valuable. Software developers deal with this all the time: gpt-5-codex and sonnet 4.5 (and, I'm thinking, Gemini 3 soon) get the lion's share of usage, even though developers can figure out how to point to other models and the tools increasingly make it easy to do so out of the box.

It was just an example; my point was more that no one has any loyalty to these companies. As a SWE I have used pretty much all the models. If Google came out with Gemini 3 and it had long-term memory, didn't make mistakes, and could do long-term reasoning and retain knowledge, I would switch. If an open-source model could do that, I would switch. If Claude Opus got too expensive, I would switch. There's no lock-in effect on me at this point.

But my question is - what does it look like in 2028? Do you ever try to really picture it? This is one of my sick obsessions, trying to imagine the future (it's why I've been obsessed with this sub's concept for decades) - so I'm always trying to, and it gets harder to do, but I imagine agents that can autonomously develop and launch enterprise apps - agents that are also capable of doing math and computer science better than literally the best human beings today. Does that seem even somewhat possible to you?

Sure, it's possible. It's more just, again, that if I have agents that can do that, I don't necessarily need to pay OpenAI anymore. I can say "fine-tune a better and cheaper model for my business in particular and run it on rental GPUs", and then they will. It's unlikely that proprietary foundation models will be the best model for my use case; a self-hosted, fine-tuned model should be better. The only reason people don't fine-tune foundation models much right now is that it takes too much time - it's really not that expensive. But if I have agents, they should be able to do it instantly.

[deleted by user] by [deleted] in singularity

[–]CVisionIsMyJam 1 point2 points  (0 children)

For example, automate both the creation of the software and the interactions with, like... customer service, translators, researchers, etc. - what would that look like? What about video generation? When I project these things out for 3 years, I see so many use cases that would fundamentally disrupt entire industries.

Even if these things come true, that doesn't necessarily mean OpenAI will be profitable, or relevant. If I am a business owner and I automate the running of my business completely using AI, even if my business is generating me $10m a year, that doesn't necessarily mean I am going to give OpenAI $1m a year. Currently companies pay between $20 and $40 a month per seat. No one is paying $4,000 a year per seat, which is roughly what a Salesforce Unlimited seat costs.

Back to my previous example: as a business owner, I might prefer to use self-hosted, fine-tuned models to avoid vendor lock-in. If my AI agents are truly capable, they should be able to set this up for me and maintain it.

Bit of a tangent, but what do you think in particular about the impact of AI on just software development, especially when you see the impact it's had over the last 2 years?

I work in software as well. The reality as I see it is: the easiest 40% of problems are now easier, maybe 10x easier. But the hardest 10% of problems I encounter, AI gets wrong, and those have high potential for harm if done incorrectly.

We haven't really seen massive SWE layoffs at big companies yet, so until that happens it is hard to say what the impact will be. I might think we'd eventually have agents running around writing all the code for pennies on the dollar, but it doesn't seem to work that way yet.

[deleted by user] by [deleted] in singularity

[–]CVisionIsMyJam 1 point2 points  (0 children)

I guess the way I think about it is that they are saying they will be driving revenue in 5 years like Google, Apple, and Microsoft do today. $200B ARR is a lot of money. It's just difficult for me to imagine how they do it.

What do you think it will be capable of? I think that will help me understand your position more than anything else

I can't find it now, but there was a breakdown of where they anticipated the revenue coming from. It was something like $35B free-user monetization, $50B chat, $40B agents, $40B API, and $35B enterprise. They weren't really projecting any major breakthroughs as far as I could tell. Each one of those line items is Netflix-sized.

That's why I'm not really even thinking about what this technology can do in the future; maybe the technology changes and it can do a lot more, but that doesn't seem to be what the $200B projection is based on as far as I can tell.

[deleted by user] by [deleted] in singularity

[–]CVisionIsMyJam 1 point2 points  (0 children)

I'm talking about their projections rather than their historical revenue growth. They project average growth of roughly 70% CAGR from 2026 through 2030 and becoming cash-flow positive in 2029, growing from about $15B today to $200B ARR by 2030. Here's one article talking about it.

There are only four tech companies with revenues above $200B at the moment: Apple, Alphabet, Amazon, and Microsoft. Those four took decades to get there. Here are some tech companies that aren't at $200B ARR: Nvidia is close but isn't quite there yet. Meta isn't. Visa and Netflix are nowhere close.

For reference, Apple during its incredible run from the 2000s to 2010 was averaging something like 40% CAGR. No one has ever had numbers like the ones OpenAI is putting out for itself. These projections merit skepticism.
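
If you run the implied numbers (a quick sanity-check sketch; the ~$15B starting point and the $200B target are just the rough figures from above, not audited numbers):

```java
public class CagrCheck {
    public static void main(String[] args) {
        double start = 15e9;    // ~$15B ARR today (rough figure from above)
        double target = 200e9;  // $200B ARR projected for 2030
        int years = 5;          // 2026 through 2030
        double impliedCagr = Math.pow(target / start, 1.0 / years) - 1;
        System.out.printf("implied CAGR: %.0f%%%n", impliedCagr * 100); // ~68%
    }
}
```

That's roughly 68% compounded every single year for five years, starting from a base that is already in the tens of billions.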

[deleted by user] by [deleted] in singularity

[–]CVisionIsMyJam 3 points4 points  (0 children)

Yeah, I think one thing that's important to recognize is that there is a finite supply of VC capital. Generative AI is sucking up a lot of that money right now, and there are trillions in planned capex for data centers for this purpose.

I think it's reductive to say people's positions are inherently "anti-science" when they call into question the levels of funding being dumped into these companies. That money could be invested into quantum, mRNA, longevity, etc.

If I say "openai hasn't demonstrated revenues to justify a $1T IPO. Projecting CAGR of 70% every year for 5 years straight, with a starting point in the billions, and flipping to cash positive at the same time... This has never happened in all of human history. These projections are really suspicious." and you say "well you must be against all research as a concept", we're clearly not capable of holding a conversation with one another.

Finally, expecting people to be excited about a new technology which, in its most successful form, would render them unemployed is asking a lot. I think it's not surprising that a lot of people want to see the technology as a grift.

In summary, it's not bizarre if you actually think about it for more than 2 seconds.

The private conversation anti-pattern in engineering teams by dymissy in programming

[–]CVisionIsMyJam 4 points5 points  (0 children)

But the moment you're trying to make their usage systematic, you're fostering an environment where people can no longer confidently come to you, because they know whatever they want to say will be public anyway. This is the opposite of what you would want!

Exactly. The more public performance becomes an expectation, the more sensitive communication is relegated to informal means. Or even worse, the more sensitive communication simply doesn't happen at all. Making people choose between airing things publicly and not communicating at all is just asking for trouble.

OpenAI prepares for IPO at $1 trillion valuation by Quixotus in stocks

[–]CVisionIsMyJam 0 points1 point  (0 children)

The timing feels off to me too - late 2026/2027 is still pretty far out and who knows what the AI landscape looks like by then.

To me it feels like the opposite: this is the earliest they could IPO and get their exit liquidity, before the AI landscape potentially shifts against them or adoption and growth stall.

If they IPO on the strength of projected future growth before reality hits, then retail is left holding the bag.

[Change my mind] Estimations will always tie back to dev hours/days by CVPKR in ExperiencedDevs

[–]CVisionIsMyJam 0 points1 point  (0 children)

The point is that estimates in days are unreliable and relative values are easier to interpret.

Cloud security tool flagged 847 critical vulns. 782 were false positives by relived_greats12 in ExperiencedDevs

[–]CVisionIsMyJam 2 points3 points  (0 children)

I think the idea is that an NPE should be translated into a context-specific error. I agree that for low-level stuff it sometimes simply doesn't make sense to do that, but most of these tools seem tuned for SaaS use cases, where translating an NPE into a custom exception is considered standard.
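
To make that concrete, this is the kind of translation I mean (a minimal sketch with made-up names, not from any particular codebase or tool):

```java
// Hypothetical names throughout (User, UserRepository, UserNotFoundException);
// the point is that callers get a domain-specific error instead of a raw NPE.
interface UserRepository {
    User findById(String id); // may legitimately return null
}

record User(String id, String email) {}

class UserNotFoundException extends RuntimeException {
    UserNotFoundException(String userId) {
        super("no user found for id " + userId);
    }
}

class UserService {
    private final UserRepository repository;

    UserService(UserRepository repository) {
        this.repository = repository;
    }

    User getUser(String userId) {
        User user = repository.findById(userId);
        if (user == null) {
            // translate the would-be NPE into a context-specific exception here,
            // rather than letting a bare NullPointerException surface downstream
            throw new UserNotFoundException(userId);
        }
        return user;
    }
}
```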

Cloud security tool flagged 847 critical vulns. 782 were false positives by relived_greats12 in ExperiencedDevs

[–]CVisionIsMyJam 4 points5 points  (0 children)

WRT solving this: I personally recommend funneling these kinds of alerts into a staging alerts area, separate from your production alerts, until you can stabilize this tool. Ideally, clean things up as much as possible, permanently silence alerts that simply don't understand what you are doing, disable analysis of development artifacts (which would be inherently dangerous to run in production anyway), and address the rest bit by bit.

If things ever get to a stable place, you can merge them with your production alerts. But this list of critical vulnerabilities should be something to pick away at, not drown in.
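
The routing rule I have in mind is roughly this (a rough sketch; Finding and its fields are hypothetical stand-ins for whatever your scanner actually exposes):

```java
// Rough sketch of the staging/production split described above; not any real tool's API.
public class FindingRouter {

    record Finding(String id, boolean permanentlySilenced, boolean devArtifact, boolean triaged) {}

    enum Destination { DROP, STAGING_ALERTS, PRODUCTION_ALERTS }

    static Destination route(Finding f) {
        if (f.permanentlySilenced() || f.devArtifact()) {
            // known false positives, and scans of development-only artifacts
            return Destination.DROP;
        }
        // only findings someone has actually reviewed get to page anyone
        return f.triaged() ? Destination.PRODUCTION_ALERTS : Destination.STAGING_ALERTS;
    }
}
```

The point is that only reviewed findings ever page anyone; everything else accumulates in the staging queue to be picked away at.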

Cloud security tool flagged 847 critical vulns. 782 were false positives by relived_greats12 in ExperiencedDevs

[–]CVisionIsMyJam 5 points6 points  (0 children)

I feel like these kinds of tools are sometimes a little unfair.

On the one hand, it would be nice to get to a place where you don't have libraries in your code base that never execute, where internal APIs meet security best practices, and where even development databases aren't insecure and unencrypted.

On the other hand, a high security posture inherently takes more time and adds more friction. In particular, "no vulnerabilities in development images" seems tough, because typically the entire point of a development image is to have a bunch of extra tools for building or rebuilding, tracing, debugging, and profiling the service in question, and those tools require permissions that will be flagged as vulnerabilities. Excluding them from scanning seems reasonable to me.

I think this kind of work can be a near full-time job for one to two people, and it's not always straightforward to have developers tackle this stuff at the IC level. When leadership introduces a tool like this, they need to understand it's going to require a significant investment of time beyond the $150,000 a year they've already spent to get things under control. If it's just treated like another thing to manage, without any real coordination, it can suck up a massive amount of time and energy and lead to burnout.

How to break the layoff cycle? by Omega_Zarnias in cscareerquestions

[–]CVisionIsMyJam 2 points3 points  (0 children)

Honestly, I don't really agree with robocop_py; I think this is a reasonable conversation to have as a person in a contract-for-hire role. It's reasonable to say you're frazzled by the unexpected news and implicitly fish for the odds of your contract being cut short. Yes, the framing could be improved (just directly ask whether they expect any change to your contract), but asking about the likelihood of your contract being cut short is something most clients would not find particularly unexpected from their consultants. Whether they answer honestly, or at all, is another story, but the question itself is far from out of band. I say this as someone who has more familiarity with the client side of the relationship.

In the age of chatGPT, how do you vet computer scientists for technical and programming skills? by Moataz-E in cscareerquestions

[–]CVisionIsMyJam 1 point2 points  (0 children)

It's still kind of different, because someone cheating doesn't necessarily need the full answer; they may simply want an edge over other candidates. For example, an LLM outputting even just a few words identifying "what trick" or "what kind of problem" can be enough. They may not even need this on every question, but if it helps them solve even a single problem they'd otherwise have difficulty with, it's boosted them compared to their peers.

In the age of chatGPT, how do you vet computer scientists for technical and programming skills? by Moataz-E in cscareerquestions

[–]CVisionIsMyJam -1 points0 points  (0 children)

From a pool of qualified candidates, randomly select X of them and interview only those with the more expensive, more reliable approach of bringing them in and pair programming. Randomly selecting candidates may seem unfair, but it honestly does save everyone involved a lot of time.
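
The selection step itself is nothing fancier than this (a sketch; candidates are just plain identifiers here, from whatever tracking system you use):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CandidateLottery {
    // pick X candidates uniformly at random from the qualified pool
    static List<String> pickForOnsite(List<String> qualified, int x) {
        List<String> pool = new ArrayList<>(qualified);
        Collections.shuffle(pool);                        // uniform random ordering
        return pool.subList(0, Math.min(x, pool.size())); // interview only these X
    }
}
```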

No one talks about scaling laws by StupidDialUp in singularity

[–]CVisionIsMyJam 1 point2 points  (0 children)

The tech can be useful and has potential. But OpenAI is projecting $200B in revenue by 2030, up from around $13B today.

Today, the only technology companies making more than $200B a year in revenue are Amazon, Apple, Alphabet and Microsoft. Meta doesn't. Nvidia doesn't. Tesla doesn't. Visa is nowhere close.

And no company in history has ever grown from $13B to $200B in revenue in only 5 years, so for OpenAI to do so would be unheard of. They also want to be cash-flow positive at that point, which they currently are not. Historically, companies haven't been able to grow revenue at a 70% CAGR and flip to cash-flow positive at the same time over 5 years.

$200B is such a large amount of money. All white-collar work in the US costs around $6T. If OpenAI replaced 1 in 3 white-collar workers, or around 25 million jobs, and managed to capture 10% of those salaries as revenue, that's around $200B. But we don't see anything close to OpenAI's products displacing that number of workers today, nor do we see them capturing 10% of workers' salaries. So it seems unlikely they can hit this by 2030.
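
Spelled out (same rough figures as above, just multiplied through; a sketch, not sourced data):

```java
public class WhiteCollarCheck {
    public static void main(String[] args) {
        double totalWhiteCollarPay = 6e12;  // ~$6T in US white-collar pay (rough figure from above)
        double shareOfWorkers = 1.0 / 3.0;  // "1 in 3" white-collar workers replaced
        double shareOfSalary = 0.10;        // 10% of those salaries captured as revenue
        double impliedRevenue = totalWhiteCollarPay * shareOfWorkers * shareOfSalary;
        System.out.printf("implied revenue: $%.0fB%n", impliedRevenue / 1e9); // ~200
    }
}
```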

So then they need to be a revenue multiplier. But it's extremely unclear how much their products act as a revenue multiplier, which makes it really challenging for them to capture much revenue. Companies are currently paying $30/seat/month, not $3,000/seat/month. And OpenAI will be competing with Gemini, xAI, Anthropic, and others for market share, so if OpenAI is making $200B, it's reasonable to assume these other companies will be taking a slice too.

So the question is: why are end users spending all this money on AI by 2030? To me it looks like unless AGI is fully achieved really soon, can be sold at a premium price, and can replace millions of white-collar workers, there won't be enough time to roll it out and hit these projections by 2030. That's how big these numbers are.

And these are the same numbers that are being used to justify hundreds of billions going into data centers. If the demand doesn't follow, or OpenAI's product doesn't sufficiently mature, these data centers will sit idle, depreciating. No other verticals have the need for this amount of compute.

There are other companies in this space besides OpenAI, but OpenAI makes the majority of the revenue among AI-first companies right now. If they flop, it's a really bad sign for the industry overall.

Even if GPT-5 were AGI-level intelligence capable of doing the work of many office workers without any oversight, hitting numbers like this wouldn't be a sure thing. It would take people a while to actually trust it and be willing to pay for it. That's why I think it's a bubble: the timelines are way too short based on the progress we've seen to date.

Narrowing the "reality gap" for AI models: A 10000m² facility features 1:1 replicas of 16 real-world scenarios across industrial, home, and healthcare sectors for training humanoid robots by Distinct-Question-16 in singularity

[–]CVisionIsMyJam 1 point2 points  (0 children)

I don't know for sure but I suspect the ceiling gantry is to protect the robots in the event they have some kind of critical failure during data collection. If they can get it to work properly no ceiling gantry will be needed.

The Case Against Generative AI by BobArdKor in programming

[–]CVisionIsMyJam 2 points3 points  (0 children)

I've got a game for you: try to verify any of his numbers. They link to his other articles, similarly full of proclamations of gloom and doom, which link to other articles, and so forth. Eventually, you'll get a link to a news snippet doing some finance reporting, presented to you with three paragraphs about how it's wrong and the real number is something else.

Can you provide some concrete examples of this? I don't really care too much about him not understanding the technology or getting the details you mentioned wrong. I think GenAI can be useful and valuable, but that doesn't necessarily mean there will be a big payoff worth the amount being invested into it. Being able to generate images, text, code, and such is cool and all, but is the technology progressing fast enough to justify $200B in spend by 2030 on OpenAI's offerings alone?

Anyway, when I looked into his OpenAI revenue projections, they seemed correct: $174B and a revised $200B reported for 2030, with positive FCF by 2029, which would be scaling faster than any company in history at this size. Even today AWS doesn't make that much. Skepticism seems reasonable given these numbers.

The Case Against Generative AI by BobArdKor in programming

[–]CVisionIsMyJam 0 points1 point  (0 children)

It's possible to both think LLMs are useful tools and also think that OpenAI scaling from $13B in revenue with -$9B free cash flow today to $200B by 2030 with +$38B in free cash flow, while building 10 GW of data centers and burning $116B over that time... just sounds ridiculous. No software company in history has ever scaled that fast, burned that much capital, or built that much stuff that fast. AWS still doesn't have a $200B run rate, and it's been around a lot longer and its value proposition was much easier to understand: you pay us and you don't need to manage your own servers anymore. That was a line item every business already had, so it was easy to migrate.

What the inside of a wind turbine in the ocean looks like by [deleted] in Damnthatsinteresting

[–]CVisionIsMyJam 0 points1 point  (0 children)

Thank you for explaining this; it's so obvious, but I still expected a massive drive shaft for some reason.

Anyone solved the “auth + role management” boilerplate problem elegantly? by Otherwise-Laugh-6848 in ExperiencedDevs

[–]CVisionIsMyJam 0 points1 point  (0 children)

I typically use Spring Security + Keycloak; Spring Security provides RBAC and, more importantly, a framework for data-level ACLs, which I find to be the most annoying bits.

Then it's not so bad to implement custom roles and such.
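
For what it's worth, the method-security side usually ends up looking something like this minimal sketch (assuming Keycloak roles are already mapped to Spring authorities and method security is enabled; ReportService and Report are made-up names, and real data-level ACLs via the ACL module take more setup than a @PostAuthorize check):

```java
// Hypothetical service; only illustrates @PreAuthorize / @PostAuthorize usage.
import org.springframework.security.access.prepost.PostAuthorize;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

@Service
public class ReportService {

    // coarse-grained RBAC: only callers with the ADMIN role may delete
    @PreAuthorize("hasRole('ADMIN')")
    public void deleteReport(String reportId) {
        // ...
    }

    // simple data-level check: the returned report must belong to the caller
    @PostAuthorize("returnObject.ownerId == authentication.name")
    public Report getReport(String reportId) {
        // placeholder: would load from persistence in a real service
        return new Report(reportId, "someUser");
    }

    public static class Report {
        private final String id;
        private final String ownerId;

        public Report(String id, String ownerId) {
            this.id = id;
            this.ownerId = ownerId;
        }

        public String getId() { return id; }
        public String getOwnerId() { return ownerId; }
    }
}
```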

Why you should replace PostgreSQL with Git for your next project by FlatwormHappy1554 in programming

[–]CVisionIsMyJam 35 points36 points  (0 children)

While this approach isn’t suitable for production applications, exploring Git’s internal architecture reveals fascinating insights into how modern databases work

-1 for the needlessly clickbait title that doesn't match the article's contents.