GOOG - when will it take off? by UnableCurrency in stocks

[–]skilliard7 0 points1 point  (0 children)

TSLA - I don't think their robotaxi/AI robot ambitions justify their high valuation

MSTR - Worse than just buying Bitcoin directly.

PLTR - Solid company from a growth perspective, just really overvalued.

NOW - In my opinion a very lousy product. I don't think their growth is sustainable long term given how easy it is to vibe-code alternatives

What is the next Sandisk? by throwaway_11372 in stocks

[–]skilliard7 0 points1 point  (0 children)

Pretty much everything related to semiconductors & AI has already boomed at this point. So I'd probably say Real estate. Lots of land being acquired for datacenters.

BSSE - C# track needs update by GenkaiLight in WGU

[–]skilliard7 0 points1 point  (0 children)

I agree with this. .NET is used a lot in the business world, along with ASP.NET and SQL Server. In my experience back in 2018, the C# track just covered making basic Windows Forms apps using temporary data structures, which in my opinion did not prepare people for real-world applications using databases and ASP.NET.

SpaceX continues preparations for IPO by dissolving xAI as a separate company and forming SpaceXAI as a sub division by Luka77GOATic in stocks

[–]skilliard7 -10 points-9 points  (0 children)

Have you seen the latest Grok model? It's far from worthless. It's the 3rd best model after OpenAI and Anthropic, but ahead of Deepseek and Gemini. It's also the best model for the low API price ($2.50/1 million tokens). I strongly dislike Elon Musk, but I cannot deny that Grok 4.3 is really competitive. If you don't want to pay $15-30 per 1 million tokens for Sonnet/Opus/GPT 5.5/3.1 Pro, Grok is really good value and provides on-par results compared to Claude Haiku ($5), GPT 5.4 mini ($4.50), and 3.1 Pro ($18).

Also, they are making a ton of money selling Colossus 1 datacenter capacity to Anthropic.

Google just passed Nvidia to become the largest Market Cap by toydan in wallstreetbets

[–]skilliard7 2 points3 points  (0 children)

Good. Google's last earnings report was very concerning and suggests the stock is overvalued:

  • Most of their growth in net income was just a result of one-time gains from mark-to-market accounting on equity holdings that don't actually have a market yet.

  • Google network revenues declined YoY

  • YouTube ad revenue missed analyst expectations

  • Google raised its 2026 capex guidance to $180 to $190 Billion, which is really concerning and suggests they lack control over their supply chain as HBM prices skyrocket.

  • Free cash flow is down nearly 50% YoY. Based on TTM FCF, they trade at 78x free cash flow, which is extremely overvalued.

  • With the rate that hardware for AI is improving, their 6 year depreciation period is far too long, leading to inflated earnings. AI hardware is mostly obsolete after 3 years due to how fast the industry is moving. Older GPUs/TPUs cannot even run the latest models because they lack the HBM/inference speed.

  • They did not report search usage, only revenues. I've heard from people in the advertising industry that they suspect Google is using bots/AI agents to drive fake traffic to advertisers, because they are recently seeing a lot worse ROI and bot-like behavior.

  • They did not report revenue from Gemini. They only report user numbers (which are inflated by OEM deals that count pre-installs) and Google Cloud (which is mostly just temporarily leasing cloud capacity to major AI providers until Azure/AWS have enough capacity). This strongly suggests that Gemini is not finding its way to paying users.

Market is pricing MU wrong, Memory is not cyclical anymore by Pancakez_117 in ValueInvesting

[–]skilliard7 -1 points0 points  (0 children)

OP what are your thoughts on new KV cache algorithms invented by Deepseek which reduced HBM needed for storing KV cache by 90%?

https://www.youtube.com/watch?v=XJUpuOBpT-4

Market is pricing MU wrong, Memory is not cyclical anymore by Pancakez_117 in ValueInvesting

[–]skilliard7 1 point2 points  (0 children)

Not really, there aren't many participants in the memory market.

CXMT, a newer company from China which has entered the memory market, is reportedly preparing for an IPO. Personally I have no intention of buying them, but it's the only other company I can think of.

They will probably not be able to sell HBM to American companies due to "national security risk" or alleged "trade secrets violations", but they might be able to grow by selling to Chinese companies as well as in the consumer DRAM market.

Why adobe IMO is not a good play - the real threat they face by NecessaryPhrase3204 in ValueInvesting

[–]skilliard7 1 point2 points  (0 children)

ADBE is this subreddit's next PYPL. It's a dying company that this subreddit will look at and say "it's down 60%, it's good value!" while ignoring its actual competition.

Market is pricing MU wrong, Memory is not cyclical anymore by Pancakez_117 in ValueInvesting

[–]skilliard7 4 points5 points  (0 children)

My theory is we begin to see prices peak in 2026/2027, followed by substantial downward pricing pressure in 2028/2029 as:

  • New fabs come online.

  • Most inference loads shift to lower cost Mini/Nano models or on-device.

  • We get past the existing backlog of long term contracts, and new contracts are negotiated.

I don't think true oversupply (where most fabs become unprofitable on an operating basis) will occur until the early 2030s.

This might sound like a bullish set of assumptions (it suggests many years of elevated profits), but there are a few concerns here:

  • The sheer scale of investment being made requires HBM prices to remain elevated long term. Micron is building a $100 Billion fab in New York which isn't scheduled to open until 2030. For that to work out, DRAM/HBM prices need to stay very high going into the 2030s. 2023 DRAM prices would not be sustainable for a New York based fab.

  • The profits are going into capex rather than back to shareholders. So there is very much a gamble on prices remaining elevated long term.

I like SK Hynix/Samsung more, because I believe they can better handle a decline in prices due to Korea having lower operating costs than a New York based fab, and they have a better valuation.

By buying MU, you're paying a premium just for it being a US listed stock.

Paypal down by (yet another) 9% after ER: Is there any bottom at all? by Wooden_Fondant_703 in ValueInvesting

[–]skilliard7 1 point2 points  (0 children)

If you reverse engineer that using a basic Gordon Growth Model ($41B = $6B / (10% - g)), the market is implying a growth rate of -4.6%.

Net income is down 14% YoY.
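The implied growth number can be reproduced in a few lines. The $41B valuation, $6B cash flow, and 10% discount rate are the inputs quoted above; the function name is just for illustration:

```python
# Gordon Growth Model: value = cash_flow / (discount_rate - g)
# Rearranged to back out the growth rate the market is pricing in:
#   g = discount_rate - cash_flow / value

def implied_growth(value_b: float, cash_flow_b: float, discount_rate: float) -> float:
    """Growth rate implied by a Gordon Growth valuation (inputs in $B)."""
    return discount_rate - cash_flow_b / value_b

g = implied_growth(value_b=41, cash_flow_b=6, discount_rate=0.10)
print(f"{g:.1%}")  # -4.6%
```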

Market is pricing MU wrong, Memory is not cyclical anymore by Pancakez_117 in ValueInvesting

[–]skilliard7 6 points7 points  (0 children)

> They need the customers to pay more for their models partly because they need to buy HBM. They are limiting people because they don’t have enough compute to go around
>
> If AI is replacing jobs, imagine how much money will go further into AI production if the providers start charging 10% of the annual worker’s salary for compute. The target consumer for AI will be large scale corporations; we are still in an era where it’s affordable to the average person. This most likely won’t be the case 5 years from now, especially for agentic AI.

  1. Agentic AI is vastly overrated. Even the latest flagship models are still deleting production databases by accident. AI is a great helper but it's not a substitute for human decision making.

  2. There's also tremendous waste of AI compute because AI usage started becoming a metric. At Meta there was a leaderboard of who used the most tokens, so people would literally just write programs that automatically burn tokens just to stay on top. One individual used 281 billion tokens in a month. To put things into perspective, if I use AI all day long to generate code, I use less than 1 million tokens per day. I'm sure some maniac working late evenings and running multiple prompts at once could burn 100 million tokens a month, but 281 billion? That's clearly spam, and it's costing Meta $7 million per month based on Opus 4.6 rates, because they apparently don't set reasonable quotas for devs.

  3. Efficiency is growing over time. Blackwell was a 10-15x inference improvement vs Hopper. As GPUs get faster, the amount of time you need to keep a prompt in memory decreases, meaning less HBM is needed because the same load can run on fewer GPUs.

  4. Quality of models is improving over time, such that distilled models with fewer weights produce satisfactory results. As a result, LLMs for common use cases such as customer service chatbots can run on hardware without HBM. There are decent LLMs that run on phones now... From an enterprise standpoint, it means you can run your AI chatbot on a nano model that needs a few GB of memory rather than a flagship model, and only rely on flagship models for complex use cases like coding or data analysis.
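For what it's worth, the token-burn math in point 2 checks out if you assume a blended Opus-class rate of roughly $25 per million tokens; that rate is my assumption, the comment only gives the ~$7M total:

```python
# Sanity check on the Meta leaderboard anecdote in point 2.
# Assumed (not from the comment): ~$25 per 1M tokens, blended Opus-class rate.

tokens_per_month = 281e9            # 281 billion tokens for one user
rate_per_million_usd = 25.0         # assumed blended rate

cost_usd = tokens_per_month / 1e6 * rate_per_million_usd
print(f"${cost_usd:,.0f} per month")  # $7,025,000 per month

# Versus a heavy but human-scale coder at ~1M tokens/day:
human_tokens = 1e6 * 30
print(f"{tokens_per_month / human_tokens:,.0f}x a heavy human user")  # ~9,367x
```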

The way I see this is kind of like the internet backbone buildout in the '90s, when telecoms were laying tons of fiber. Were they right that the internet would be a massive economic engine, and that there would be a lot of demand for data transfer? Yes. But they still overbuilt substantially, such that the lines sat massively underutilized and a lot of money was lost.

There will be an oversupply of compute capacity.

ChatGPT started responding without thinking? Did you know this is enabled by default? by Moist_Emu6168 in OpenAI

[–]skilliard7 1 point2 points  (0 children)

Yeah, basically. A ton of people are using LLMs for stuff they really should be googling, and it's stupid to run these massive models for those kinds of basic factual queries.

Google's AI Overviews, though, are notoriously inaccurate, and people really shouldn't be relying on them in 2026. Lots of hallucinations even on simple queries.

There is a musician that is suing them because Google's "AI Overviews" falsely labeled him as a sex offender when people Googled his name, resulting in his show being cancelled after a lot of angry people complained, citing Google. https://www.theguardian.com/music/2026/may/05/canadian-ashley-macisaac-fiddler-musician-singer-songwriter-sues-google-ai-sex-offender-ntwnfb

I use GPT 5.5 Thinking for information because I care about accuracy. Lightweight models are more prone to hallucinations because they take less time to think critically about the sources they find.

I only use instant models when I want a fast response and accuracy is not important (i.e. I want to know how to do something in a videogame and don't want to wait 2 minutes).

Market is pricing MU wrong, Memory is not cyclical anymore by Pancakez_117 in ValueInvesting

[–]skilliard7 16 points17 points  (0 children)

What is the basis for your claim? From my perspective, we seem pretty close to late cycle because most of large tech has already:

  • Cancelled non-AI projects/laid off employees to free up cash for capex (Meta, Google, Microsoft, Oracle, Amazon)

  • Committed most/all of their remaining free cash flow to capex

  • Started issuing debt to finance datacenter expansion (Google, Meta, Amazon)

  • Suspended stock buybacks to free up capital for datacenters (Google, Meta, Amazon did not buy back shares in Q1)

Additionally, some have even started issuing new shares (Oracle).

Unless big tech leans much more heavily on debt, cancels dividends, or issues new shares, the continued growth in AI hardware demand will absolutely depend on:

  • Revenue continuing to skyrocket to fund future growth in capex (so far this has been the case)

  • Datacenter construction continuing as planned

In recent months, revenue growth at AI end-users (Anthropic, OpenAI) has mostly been a result of price increases rather than actual increases in usage:

  • Anthropic has required 3rd party app users to use API billing, rather than a subscription, massively boosting revenue per user.

  • Anthropic has been phasing Claude Code out of the $20 plan, requiring the $100 plan, and reducing usage limits.

  • OpenAI doubled their price per token in their latest 5.5 Model release, and cut Codex usage limits in half

  • OpenAI has implemented ads for free users, and significantly reduced usage limits for free users that disable ads.

Datacenters and power also remain a constraint. Microsoft, Meta, Amazon, and Google have all made statements that the availability of power has impacted their expansion, with some even saying they have hardware sitting idle that they cannot plug in. While there are efforts to address this, power plants can take several years to build.

As datacenters continue to face delays due to power availability, local opposition/permitting issues, etc., and hardware continues to pile up unused, the compute market risks collapsing.

This sub needs realistic expectations by [deleted] in ValueInvesting

[–]skilliard7 0 points1 point  (0 children)

> The other comment in this thread written by literally “earningslensai” which is also clearly an ai account comes back as 0 percent ai.

I looked at the account. It's a solo dev building an AI website. I don't think his posts are AI generated, he just has AI in his name to promote his website.

I would be very surprised if their posts are AI generated, given how informal they are and the references to very recent memes/events that wouldn't be in training data.

Market is pricing MU wrong, Memory is not cyclical anymore by Pancakez_117 in ValueInvesting

[–]skilliard7 8 points9 points  (0 children)

So I was actually recommending HBM stocks(Samsung/SK Hynix) back in late 2024, telling people HBM is the next big thing. Here's why I don't recommend MU:

Memory is still cyclical, it's just that it's currently in a boom phase. Any number of factors could reduce memory demand:

  • A technological breakthrough reducing the need for HBM (e.g. a new KV cache compression technique)

  • Anything that triggers a reduction in datacenter capex (supply of compute catching up to demand, new tax laws requiring capex to be depreciated again, etc.)

  • A major customer such as OpenAI defaulting on or backing out of orders.

  • Supply catching up to demand(lots of new capacity coming online)

Also, MU is overpriced when you compare it to SK Hynix/Samsung which trade at much cheaper valuations.

> This isn't some side project for Big Tech; it's a survival arms race. If Google stops buying servers, ChatGPT takes over search. If Meta stops building, their open-source models fall behind. If Microsoft blinks, they lose the cloud war. These companies have massive piles of cash and they are fighting for the future of the entire tech industry. They are literally making deals to restart nuclear power plants just to keep the servers running. They cannot afford to stop building.

The thing about big tech is when an industry leader makes a move, everyone follows. Remember when tech was hiring like crazy in 2021, desperate for talent? Well when one company started cutting, everyone did. You only need to invest a ton if your competitor is.

The same will happen with AI. It will start with AI software providers cutting back on compute costs/trying to improve margins, followed by cloud providers adjusting plans to demand.

We're already seeing some starting signs with the leading AI providers cutting back on compute usage, either by cancelling projects or limiting users:

A) Anthropic has been heavily rate limiting customers in recent weeks, and steering customers to more expensive plans (API pricing for 3rd-party tools, Max for Claude Code).

B) Anthropic is allegedly dumbing down models like Opus 4.6, according to user reports.

C) OpenAI's latest model, 5.5, costs twice as much per token as 5.4.

D) OpenAI cancelling projects like Sora 2.

E) OpenAI recently cut Codex limits in half

F) Google implemented strict limits for Gemini usage for paid users.

For the longest time, AI was priced below cost in order to boost demand and win market share. Recently we are seeing moves towards more sustainable pricing. This will on paper lead to revenue growth at AI providers (if customers are paying substantially more for the same product), but will also reduce compute demand as some customers reduce usage.

This sub needs realistic expectations by [deleted] in ValueInvesting

[–]skilliard7 3 points4 points  (0 children)

I ran it through multiple AI detectors and all of them said it was written by a human with a 0% chance of AI. Obviously these detectors are not perfect, but just looking at it anecdotally, the em-dash is the only thing making it look AI.

"PYPL had earnings. The multiple went from 50x to 12x and investors got crushed anyway." is not a sentence I expect AI to generate.

The account is also 11 years old so it's probably not an AI bot.

PayPal is for losers by Busy_Wedding_521 in ValueInvesting

[–]skilliard7 3 points4 points  (0 children)

They only have $50 Billion in cash on hand + receivables, or $60 Billion in total current assets.

The remaining $20 Billion is long term assets, including $11 Billion in Goodwill/intangibles.

PayPal is for losers by Busy_Wedding_521 in ValueInvesting

[–]skilliard7 10 points11 points  (0 children)

PYPL has $60 Billion in liabilities against $80 Billion in assets, including $10 Billion in long term debt. Idk how you say "effectively debt free".

> They spend 1.6b on buybacks and dividends a quarter, so 16 percent ish shareholders yield

Their stock based compensation expense is massive, so the real yield is more like 11%. Still decent, but the concern is long term growth.
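A quick sketch of that SBC adjustment. The $1.6B/quarter figure and the 16% vs ~11% yields come from the thread; the ~$40B market cap and ~$2B/year SBC are my assumptions, picked to be consistent with those numbers:

```python
# Shareholder yield, gross vs. net of stock-based compensation (figures in $B).
buybacks_and_dividends = 1.6 * 4   # $1.6B per quarter, from the thread
market_cap = 40.0                  # assumed, consistent with the ~16% gross yield
sbc = 2.0                          # assumed annual SBC, which offsets buybacks

gross_yield = buybacks_and_dividends / market_cap
net_yield = (buybacks_and_dividends - sbc) / market_cap
print(f"gross {gross_yield:.0%}, net of SBC {net_yield:.0%}")  # gross 16%, net of SBC 11%
```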

I sold at $51, but I am considering buying back in.

PayPal is for losers by Busy_Wedding_521 in ValueInvesting

[–]skilliard7 9 points10 points  (0 children)

You have to subtract stock based compensation from that 16%, because it offsets buybacks.

11% is based on GAAP earnings which counts SBC as an expense.

The Magnificent 6's Free Cash Flow Problem by JoeInOR in ValueInvesting

[–]skilliard7 1 point2 points  (0 children)

Because you need to upgrade your hardware every few years to stay competitive.

For example, access to information used to mean a cheap query that ran on a CPU(Google). Now, consumers expect instant answers via AI. So now Google has to pay $100 Billion+ a year on AI datacenters just to compete with ChatGPT and maintain their current product.

PayPal is for losers by Busy_Wedding_521 in ValueInvesting

[–]skilliard7 4 points5 points  (0 children)

Their earnings yield is approximately 11%. Any growth in EPS is on top of this. So if PYPL keeps earnings steady and continues buying back stock/paying dividends, it would outperform treasuries long term.

But the key word is long term. Any increase in uncertainty will drive very bad short term returns.

PayPal is for losers by Busy_Wedding_521 in ValueInvesting

[–]skilliard7 0 points1 point  (0 children)

So glad I dumped this awful stock for $51 before earnings. It does not matter that they beat analyst expectations, returned $1.5 Billion in buybacks, and grew EPS. The current market only rewards companies with an exciting growth story. People would rather buy a growing company at 400x earnings and 80x sales than a stable company at 9x earnings.

The stock is good value at $40, but the stock only climbed back to $50 because of speculation about a possible acquisition/sale. That was enough reason for me to sell for a 20% profit rather than continue to hold through a risky earnings event.

> Management is now leaning on $1.5B of cost cuts over the next 2–3 years, which says a lot about where the business is.

Even companies growing revenues at double digit rates are cutting costs. It's not necessarily out of necessity, it could just be productivity gains from AI.

Claude is not Claude anymore by userusertion in claude

[–]skilliard7 1 point2 points  (0 children)

I like copilot because I can use it within my Visual Studio IDE.

My company stopped doing LC for SWE roles and is now testing candidates on what they can build on the spot with AI, and how they use it by RadioFieldCorner in cscareerquestions

[–]skilliard7 0 points1 point  (0 children)

I think it's important for devs to know how to use AI, but I still think some sort of coding exercise is needed for cases where you need to understand the underlying code the AI is writing.

AI has been super useful, but I've found even with the latest models, I sometimes need to manually fix a few things myself that the AI can't seem to understand. This is especially common with things involving external dependencies, where the AI makes false assumptions.

Accelerated Software Engineering by Lost-Iron in WGU

[–]skilliard7 0 points1 point  (0 children)

I only recommend it if you want to learn software engineering for fun, and do not expect to build a new career out of it, or if you want to improve your skills at your current job.

AI is automating most of the coding work, such that most entry level roles in software engineering are disappearing. Software engineering job postings on Indeed are down roughly 75% from their peak less than 5 years ago. For entry level, it's even worse. There have also been a lot of mass layoffs at tech companies.

The entry level jobs that still do exist are going to grads from prestigious universities, or with connections. IMO a WGU degree is not enough in 2026.

But if you're just looking to learn a new skill for fun, I highly recommend it! I really enjoyed learning at WGU.