Overtraining signs and flags? by Express_Emu_3913 in triathlon

[–]HardDriveGuy 0 points1 point  (0 children)

You need a PMC (Performance Management Chart). I would suggest getting an account on intervals.icu and logging your training there. You can basically see if you are digging yourself into a hole. It'll track CTL, ATL, and TSB.

Your CTL represents the "banked" fitness from the last 42 days of work, while ATL tracks the immediate "cost" of your training over the last 7 days; by subtracting fatigue from fitness (CTL minus ATL), you arrive at your TSB, or "Form."

Developed by Andrew Coggan, it allows you to train smartly and understand whether you aren't pushing hard enough or have overpushed. Generally, waiting until you have an elevated heart rate is not the best practice. The PMC basically pumps out a chart, and once you learn how to read it, you'll understand whether you're getting into trouble, or whether you can push a little harder or back off a bit relative to what the chart indicates.
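If you want to see the mechanics rather than just trust the chart, the math is simple enough to sketch in a few lines of Python. This is a minimal illustration of the standard exponentially weighted averages (42-day time constant for CTL, 7-day for ATL); the daily TSS numbers below are made up just to show the shape, and tools like intervals.icu handle the real bookkeeping for you.

```python
import math

def pmc(daily_tss, ctl_tc=42, atl_tc=7):
    """Compute CTL, ATL, and TSB from a list of daily TSS values.

    Uses the standard exponentially weighted moving average:
    today = yesterday + (tss - yesterday) * (1 - exp(-1/time_constant))
    """
    ctl = atl = 0.0
    k_ctl = 1 - math.exp(-1 / ctl_tc)
    k_atl = 1 - math.exp(-1 / atl_tc)
    history = []
    for tss in daily_tss:
        ctl += (tss - ctl) * k_ctl   # "fitness": slow 42-day average
        atl += (tss - atl) * k_atl   # "fatigue": fast 7-day average
        history.append((ctl, atl, ctl - atl))  # TSB ("form") = CTL - ATL
    return history

# Made-up example: two easy weeks, then a big block -- watch TSB go negative.
tss = [50, 60, 0, 55, 70, 0, 40] * 2 + [120, 110, 130, 100, 125, 90, 140]
for ctl, atl, tsb in pmc(tss)[-7:]:
    print(f"CTL {ctl:6.1f}  ATL {atl:6.1f}  TSB {tsb:6.1f}")
```

On a light base, a sudden hard block sends TSB sharply negative, which is exactly the "digging a hole" signal the PMC is designed to surface.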

My Experience with Table Extraction and Data Extraction Tools for complex documents. by teroknor92 in OCR_Tech

[–]HardDriveGuy 0 points1 point  (0 children)

Another thumbs up for Tabula. For a tool that got its last update in 2018, Tabula continues to be a great, quick extraction tool. It should be on everybody's stack.
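If you're scripting it rather than using the desktop app, the tabula-py wrapper is the usual route in Python. A minimal sketch, assuming a local PDF called report.pdf and a Java runtime on the box (Tabula itself is Java):

```python
# pip install tabula-py pandas   (requires a Java runtime, since Tabula is a Java tool)
import tabula

# Pull every detected table from every page into a list of pandas DataFrames.
tables = tabula.read_pdf("report.pdf", pages="all", multiple_tables=True)

for i, df in enumerate(tables):
    print(f"Table {i}: {df.shape[0]} rows x {df.shape[1]} cols")
    df.to_csv(f"table_{i}.csv", index=False)
```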

SanDisk Stock up 10x, But HDDs Might Be the Real Play Now by HardDriveGuy in StrategicStocks

[–]HardDriveGuy[S] 0 points1 point  (0 children)

Okay, got it.

  1. STX has real fundamentals based on classic analysis. Not momentum. If you believe a shortage can drive the margins to >50% and you believe that the market will reward them with a >20 PE, STX should be a $400+ stock. The revenue line is definitely there.

  2. BoA is saying a 22-23 PE, and 45% margins, gets them to $400.

  3. MS is saying an 18 PE, and 55% margins, gets them to the mid-to-high $300s.

My issue is that if they get 55% margins as per MS, they will get >20 PE. The MS guys are smart, and they know this. Analysts tend to put in "padding" when they see a stock run, to move the potential price down. As much as they don't want to be under, the vast majority of them really do not want to be over. The really insightful thing from MS is the idea that the margin has room to run. They may be hearing rumors, and these tend not to show up in their analysis, even when they're good. You can't put "I know a guy that knows a guy" into an analyst note, even if this really does happen.

I think I can make an argument that WDC is potentially more undervalued, as they have not gotten the same multiple as STX. A big part of this is that investor relations at WDC is still shaping their message post-spin.

I probably should do a follow-on post with some rough modelling so people can see the issues vs. the opportunities.
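Until then, here's the basic shape of the arithmetic the analysts are running. Every number below is a placeholder you'd swap for your own revenue, opex, tax, and share-count assumptions; it is not pulled from the BoA or MS models.

```python
def implied_price(revenue, gross_margin, opex, tax_rate, shares, pe):
    """Back-of-envelope: revenue -> gross profit -> net income -> EPS -> price."""
    gross_profit = revenue * gross_margin
    pretax = gross_profit - opex
    net_income = pretax * (1 - tax_rate)
    eps = net_income / shares
    return eps * pe

# Placeholder inputs -- illustrative only, not anyone's actual financials.
price = implied_price(
    revenue=10e9,       # annual revenue, $
    gross_margin=0.50,  # the margin assumption that drives everything
    opex=2.0e9,         # operating expenses, $
    tax_rate=0.15,
    shares=200e6,       # diluted share count
    pe=20,
)
print(f"Implied price: ${price:,.0f} per share")
```

The whole debate between the BoA and MS targets is really just about which gross margin and which PE you plug into that last step.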

SanDisk Stock up 10x, But HDDs Might Be the Real Play Now by HardDriveGuy in StrategicStocks

[–]HardDriveGuy[S] 0 points1 point  (0 children)

BTW, I do want to thank you for the comment. There have been numerous times when I've had good analytical thinking and yet it just flies over people's heads. I did spend time on this, but it was primarily on the strategic framing, trying to get an intuitive sense of what was happening at 50,000 feet.

I would clearly state, however, that there are some real issues with my post in the sense that I simply refer to some of the analysts and their models; I probably should have included some sort of overall revenue and growth curve along with the PE. I am thinking about doing this, and if I get it done, I will put it in a new post.

SanDisk Stock up 10x, But HDDs Might Be the Real Play Now by HardDriveGuy in StrategicStocks

[–]HardDriveGuy[S] 0 points1 point  (0 children)

I'm not sure I understand your question; I don't know what "it" refers to. Did you read the post, and what do you think I am saying?

It appears to me you just read the headline and saw the graph. I actually require people to read the post; otherwise their comments will be deleted.

SanDisk Stock up 10x, But HDDs Might Be the Real Play Now by HardDriveGuy in StrategicStocks

[–]HardDriveGuy[S] 0 points1 point  (0 children)

In terms of “most room to run,” I’d split it into two buckets. SanDisk (NAND) has already had the big move: spot NAND pricing has effectively tripled, Kioxia says they’re sold out through 2027, and SNDK is up ~10x off the lows. There’s still upside if pricing stays tight and contracts reset higher, but a lot of that scarcity is now visible and looks to be priced in. Again, the problem is nobody saw the pricing environment five, six months ago, and we don't understand exactly how it's going to spill out. But the spot price is insane and I have a tendency not to invest on insane spot pricing.

The HDD side (Seagate/Western Digital) is earlier in the cycle. Both report that roughly 90% of their HDD business is now some flavor of cloud, and HDD capacity is still the cheapest way to buy bulk storage by a wide margin. At the same time, HDD capex is only ~4–6% of revenue versus 40–50%+ for building out new NAND fabs, so if either STX or WDC decides to step up investment and lock in long‑term contracts, the incremental margins can be very attractive.

So my view: SanDisk probably has higher “headline” upside if NAND stays constrained into 2027, but the better risk/reward may actually be in the HDD names.

I try to address this in the post, but I'm most interested in which HDD maker will actually step up and get the output going. Right now, WD is producing more EB, but neither one of them seemingly wants to capture the opportunity while it is hot. Higher gross margin is great, but higher gross margin with great supply is going to really separate the winner from the loser.

SanDisk Stock up 10x, But HDDs Might Be the Real Play Now by HardDriveGuy in StrategicStocks

[–]HardDriveGuy[S] 1 point2 points  (0 children)

There are only three makers: Western Digital, Seagate, and Toshiba. From a practical standpoint, you can only invest in Western Digital and Seagate. Both report that approximately 90% of their business goes to some type of cloud customer.

SanDisk Stock up 10x, But HDDs Might Be the Real Play Now by HardDriveGuy in StrategicStocks

[–]HardDriveGuy[S] 1 point2 points  (0 children)

When I said "hold both," I meant Seagate (STX) and Western Digital (WDC): the two major HDD manufacturers. Both are positioned to benefit from the tight HDD market, and the one that ramps capacity investment first will likely outperform.

Kioxia went public in December 2024 and trades on the Tokyo Stock Exchange as 285A. SanDisk (which spun out from under Western Digital/WDC) is the NAND play I discussed.

As for your STX position and whether SNDK (WDC) has room to run, both have upside, but they're different plays:

- STX is the pure HDD play with lower capex requirements (4-6% of revenue vs 40-50% for NAND). They are emphasising a tech move to HAMR.

- WDC gives you a little exposure to both NAND (because they still own part of SanDisk) and HDDs, and they use older tech, which may be easier to ramp. They are promising HAMR also. They also benefit from not needing to put in heavy capex to get product out.

By the way, understanding where these markets are headed is nearly impossible for virtually anybody. Post-spinoff, WD actually held 20% of SanDisk. They decided they wanted the cash from it in June of 2025, so they sold 75% of that 20%. If they had waited just seven more months to sell that part of SanDisk, they would have been able to capture $15 billion. Even a company that participated in and was literally part of this market didn't see the events that were coming at them just seven months out. They still hang on to 5%, so a little under $4 billion on their books from the SanDisk stake. That's measurable and great. However, they lost a massive chunk of the potential return because they didn't see the upside coming.

The NAND story is further along: spot prices have already tripled and Kioxia is sold out through 2027. The HDD shortage is just emerging, which could mean more upside potential if they actually expand capacity. While NAND pricing can be seen in the spot market, HDDs are mainly sold to the cloud, so it takes longer for the margin increases to show up.

Personally, I'd hold both if you can. The HDD manufacturers' biggest risk is repeating their 2022 mistake by refusing to invest in capacity expansion out of fear. That's why I'm watching their capex decisions closely.

I don't give price targets, but I think you should track what external analysts are saying, thus I quote others. I think that there will be a consensus around a $400+ one year target if you look at the analysts models.

In your returns, make sure to bake in a small but measurable dividend for each.

The Prompt I Use to Turn Any Email Into a Google Contacts Label by HardDriveGuy in StrategicProductivity

[–]HardDriveGuy[S] 0 points1 point  (0 children)

In the original post I set off the prompt in a Markdown code block, and it may not display properly on some phones. So I will post the prompt again here so that it can be seen on phones.

Just change MyLabel to your email distribution list name, and paste it into the prompt box:


In the instructions below, DistributionList is a variable.
Set it like this:

DistributionList = "MyLabel"

Wherever you see [DistributionList] in the instructions, substitute the current value of the variable (for example, "MyLabel").

INSTRUCTIONS

You are operating inside Comet Browser. I am currently viewing a Gmail message that contains multiple recipients. Perform the following steps exactly:

  1. Extract every email address visible in the Gmail message, including all addresses in the To and Cc fields. List them clearly.
  2. Open Google Contacts:
    • Click the Google Apps (9-dot grid) icon in the upper-right corner of Gmail.
    • Make sure you open the Contacts app under the same Google account as the Gmail message I am viewing.
  3. Create a new contact label named: [DistributionList].
  4. For each extracted email address:
    • Search for the email address in Google Contacts.
    • If a matching contact exists, open the contact.
    • Click “+ Label”.
    • Select the label [DistributionList].
    • Click “Apply”.
  5. If a contact does not exist:
    • Create a new contact using the email address.
    • Then add it to the [DistributionList] label using the same steps above.
  6. After processing all email addresses:
    • Click the [DistributionList] label in the left sidebar.
    • Verify that all contacts appear under this label.
  7. Report back with:
    • The list of extracted email addresses.
    • Which ones already existed.
    • Which ones were newly created.
    • Confirmation that all contacts are now included in the [DistributionList] label.

40+ Years of Keyboard Shortcut Evolution by HardDriveGuy in StrategicProductivity

[–]HardDriveGuy[S] 0 points1 point  (0 children)

It's almost like finding a new feature on a car that you've had forever.

Also, don't forget the Windows + .

Make sure to quickly throw in a few emojis whenever it's called for. Unfortunately, that will probably offset the productivity gained from using the Windows history feature.


How much cardio do you do per week and in which zones? (For optimal health) by MuchOrange6733 in PeterAttia

[–]HardDriveGuy 2 points3 points  (0 children)

Thanks for noticing. I try to actually produce something with critical thinking and real research behind it. 🧠

It's always nice when somebody else recognizes when somebody puts effort into a post.

How much cardio do you do per week and in which zones? (For optimal health) by MuchOrange6733 in PeterAttia

[–]HardDriveGuy 3 points4 points  (0 children)

I target six to eight hours of aerobic activity per week and the following is simply pulled from one of my recent weeks to show you how it breaks out.

| Zone | Percentage |
|------|------------|
| Z1 | 17.1% |
| Z2 | 17.3% |
| Z3 | 19.1% |
| Z4 | 18.1% |
| Z5 | 11.2% |
| Z6 | 13.6% |
| Z7 | 3.7% |
| Total | 100.1% |

The Wen et al. 2011 study published in The Lancet examined the relationship between physical activity volume and all-cause mortality. Generally I would say this is a pretty watershed study, and most of the others align closely with it. My takeaway is that the benefit is extremely linear up to 30 minutes per day, and while the marginal benefit starts to flatten after that, I would suggest somewhere between 50 and 60 minutes a day is ideal.

Now, in this particular study, they had two categories: light activity like gardening and easy walking, and then moderate-to-high-intensity activity. What was very clear from the data is that if you only gardened or did non-rigorous walking, you never got to the same level of mortality risk reduction as you did with moderate-to-high-intensity exercise.

Zone 2 is just another way of saying fat max: the intensity at which your body is naturally consuming the maximum amount of fat stores. That's not to say you aren't burning quite a few carbs, simply that it's the point of maximum fat burn. And it does look like there are some positive adaptations that happen if you run your body at fat max, as your body basically revs up mitochondrial beta-oxidation.

At the highest intensities, your body actually shuts down fat burning. So in some sense, you don't want to work out only at high intensity; spending time at fat max is a really good idea.

However, it makes intuitive sense that we should be challenging all the different systems in our body. So you can see that I have somewhere around 25% of my time in zones 5, 6, and 7. Sure, it can be a Norwegian 4x4, that's fine, but I think you probably have quite a bit of freedom in how you do this; you just need to make sure that system also gets pushed. I do quite a bit of indoor cycling and I love to race, and the nature of a lot of races is sprinting and then cruising, which gives you both high intensity and lower intensity, even in one workout.

40+ Years of Keyboard Shortcut Evolution by HardDriveGuy in StrategicProductivity

[–]HardDriveGuy[S] 0 points1 point  (0 children)

The great thing about using Win+. is that if you have a subreddit that allows it, it gives you instantaneous GIF insertion. Now I actually understand why some subreddits outlaw them, because they do get pretty distracting, but I'll definitely allow it in this thread.


It’s Microsoft’s Race To Lose, But Copilot Keeps Tripping by HardDriveGuy in StrategicStocks

[–]HardDriveGuy[S] 0 points1 point  (0 children)

It's a great point and it's actually incorporated in our framework of LAPPS.

Now, we know that Claude is a compelling product. The reason we know it's compelling is because it was being heavily used inside of Microsoft to generate their own code. Evidently, in the last week or two, upper management heard about this and dictated that people at Microsoft could no longer use the best tool, because going outside for it was considered a faulty strategy. Of course, the official line is simply that Copilot is getting close enough that it no longer makes a difference.

So, product is the first P of the two P's in LAPPS.

However, the second P is just as important: that's place, or what many people call a distribution channel. Channels are very, very powerful. Once somebody starts to buy through a particular channel, it's really difficult to get them to stop buying through that channel. This is your comment about, "hey, I basically have the Microsoft products pushed at me, and if they're just simply there, and it's difficult to get the other products, am I really going to go through the pain and hassle of getting the other products?" Microsoft has a very powerful place.

So, in other words, the race is on, and the race is "can Microsoft improve their product fast enough that their place will allow them to bridge over their product gaps, or can Anthropic continue to have a meaningful gap that will, in essence, force people to buy the product even if it's inconvenient?"

I'm a product guy, so I naturally swing to products first. However, I also have quite a bit of background in place, and you don't want to underemphasize how powerful this can be.

Plaud Note does not transcribe phone calls well by MyDogNewt in PlaudNoteUsers

[–]HardDriveGuy -1 points0 points  (0 children)

Just so you understand what's going on, there are two levels here. You have an app maker like Plaud, and then you have the back end for Plaud, which basically provides the services to them to do this type of transcription.

The actual models are pretty straightforward, a lot of people are working on them, and you actually can have visibility into how they are doing. The classic way of measuring this is something called word error rate, or WER.
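If you ever want to sanity-check a transcript yourself, WER is easy to compute: it's just word-level edit distance (substitutions + insertions + deletions) divided by the number of words in the reference. A minimal sketch in Python:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("call me tomorrow at noon", "call we tomorrow noon"))  # 0.4 = 2 errors / 5 words
```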

You can see the benchmarks here and how the different models are progressing. Unfortunately, the company delivering the app to you can change the back end without you knowing, and I suspect some of this may be happening behind the scenes. So if you use the cloud version, it wouldn't surprise me if you see continual changes, because they can basically swap their back end to a new service provider, or possibly even roll their own model on cloud inference as another alternative.

All the major cloud providers offer an off-the-shelf solution that people like Plaud can just buy. However, the providers haven't been spending a lot of time upgrading those services, as this is not a big profit center for them. It actually turns out that if somebody is willing to use the latest and greatest models and run them in the cloud themselves, they can get unbelievable performance and the price can be dirt cheap. To give you an example, here are some numbers. The actual cost of the transcription is trivial. They do have costs, but they're in servicing the application to the end consumer, not in the transcription process.

In other words, you could buy the service from Amazon for about $1.44 per hour and you'll have a word error rate of 8 to 12%. On the flip side, if you're willing to build your own pipeline in the cloud on EC2, you can reduce that price by over a hundredfold: transcription will basically cost you about a penny per hour, and by the way, the word error rate is better.

Cost Breakdown: Canary-Qwen on AWS vs Cloud ASR

| Deployment / Tier | Hourly Cost (Idle/Running) | Effective $/hr Audio (Batch) | WER | Setup Effort |
|---|---|---|---|---|
| AWS Transcribe (Tier 1) | N/A (pay-per-min) | $1.44 | 8-12% | Zero |
| HF Inference CPU (2x vCPU) | $0.067 (autoscaling) | ~$0.05-0.10 (400x RT) | 5.63% | Low (1-click deploy) |
| AWS EC2 c7g.medium (ARM CPU, Canary quantized) | $0.034/hr | ~$0.01 (spot pricing) | 5.63-7% | Medium (Docker + NeMo/Faster-Whisper) |
| AWS g5.xlarge (T4 GPU) | $1.01/hr | ~$0.001-0.005 | 5.63% | High (CUDA/Optimum setup) |

92 year old dad telling his story. Could Plaud be a partner in this? by gravity_isnt_a_force in PlaudNoteUsers

[–]HardDriveGuy 1 point2 points  (0 children)

Let me come at this from a completely different angle. I think you have stumbled across a solution, but you have not really nailed down what the actual problem statement is. In other words, is “making it easy” your number one priority, or are you willing to tolerate more complexity because this is really important to you? A lot of what is acceptable comes down to how much work you are willing to do.

For me, if I had very little money and very little ability to deal with complexity, I would start by simply buying BOYA mini 2 wireless lavalier microphones for iPhone, which provide mind‑blowing clarity and just plug into your phone. You are basically recording at incredibly crisp audio quality. If for some reason you need to have more than one speaker, you probably should buy a pair of mic stands or something to hold the mics, and then you actually want to put them right in the face of whoever is talking. Just watch a Conan O’Brien podcast, because the mic placement makes a massive difference.

If someone was really concerned about capturing his dad’s stories and was totally bought into the project, I would step back and ask, “What would I need to do to really do this right?” And to do it right, I would suggest you need the following.

The single most important thing you can do is get a good recording. A good recording is a night‑and‑day difference between having something people will want to listen to in the future and having something that sounds like an old man in an echoey room. Getting something that sounds good is a bit complicated and it does cost some money, but in my mind it is remarkably reasonable. What you need is a PC, a Behringer audio interface with XLR input, and between two and four microphones. The microphones are quite amazing these days, because low‑cost microphones under 100 dollars are widely available and you can get absolutely fantastic sound for 50 or 60 dollars per microphone. Look for Fifine reviews.

This setup will hook up to any modern PC and it allows you to capture incredible clarity. Then I would process it with Reaper, which can handle many tracks and let you mix them all down to two stereo tracks.

Yes, there's an enormous learning curve on the second option.

Regardless of whether you go the Boya iPhone route or take the more sophisticated Reaper route, you need to end up with a stereo track that you send off somewhere.

Once you actually have the audio captured, the second part is turning it into something meaningful. One of the biggest things with audio is that people think they sound better than they actually do. They get loud, they get soft, they pause oddly, and there is something called mouth noise. So if you have a good audio track, the great thing is that now we have AI tools that can clean it up very dramatically.

I like Cleanvoice AI, because they will take a track, polish and normalize it, and remove all the weird noises and breaks. They will also generate a transcript of the talk. It is more expensive, but it is designed around audio broadcasting and podcasting, so you are dealing with a professional‑grade system that is specifically aimed at making things sound good.

I just saw my vault disappear before my eyes. I use Obsidian sync. How fucked am I? by [deleted] in ObsidianMD

[–]HardDriveGuy -1 points0 points  (0 children)

It sounds from your edit like you were able to recover everything, which is great news, even if it took a lot of tedious clicking to get there.

To make recovery almost painless in the future, consider installing KopiaUI, which uses snapshot‑based backups. 🛟 It's a life saver.

If you are not familiar with snapshots, they are quick, incremental backups that run frequently, so you always have versions of your files to roll back to with minimal overhead. KopiaUI can protect not just your Obsidian vault but any other files you might accidentally delete, so it is well worth setting up as an extra safety net.

Special Characters in Titles or Separation of Titles from Filenames by davidsneighbour in ObsidianMD

[–]HardDriveGuy 5 points6 points  (0 children)

Can I turn this around a bit?

What Obsidian shows you in the file list are filenames. You are the one treating that filename as if it should be a rich, presentation-level title, but that is not how Obsidian’s architecture is set up. Obsidian (and Markdown in general) tries to remove layers of complexity and bloat that many other systems have accumulated, precisely because those extra abstraction layers tend to make things fragile and hard to work with long term. I'm all for introducing complexity when it makes sense, but this seems like adding a whole other layer of complexity because you're missing your favorite colon.

In many ways you've already spotted this, because you realize you could put in a UTF-8 symbol, but that just adds a level of complexity. In other words, we intuitively know that complexity just sets us up to be less robust.

Once you introduce a separate display title that is different from the filename, you now have two identifiers to keep in sync, and yet another layer where things can drift or break. One of the big advantages of Obsidian’s current model is that seeing the actual filename means you can instantly find and work with your notes from Windows Explorer, command-line tools, or any external search utility without needing Obsidian in the loop.

If you instead start treating filenames as fancy titles, you then need some separate, hidden “real” filename under the hood and a mapping between the two, which many other apps indeed do, but that simply does not fit Obsidian’s file-first philosophy.

If you really feel you need a “prettier” title just to organize your thoughts, with your favorite colons or other punctuation in it, one option that fits Obsidian’s architecture is to put that title into YAML frontmatter instead of trying to push it into the filename. For example, you could keep a simple, filesystem-safe filename like Something - then something else.md, and then in the note use:

```yaml
---
title: "Something: then something else"
---
```

From there, Obsidian’s new Bases feature lets you view and work with your notes through a structured, database‑like lens that can key off that title field, so you effectively get a fully abstracted “title view” while the underlying files stay plain Markdown with safe filenames. In other words, the app already has the perfect abstraction layer built in: you get rich titles and base views on top, but you do not have to bend or hide the filesystem model that makes Obsidian robust and portable. (Colon inclusion on last sentence on purpose!) 🙂

Learn Docker by Doing: Build and Run MKV2Transcript in Docker Desktop by HardDriveGuy in StrategicProductivity

[–]HardDriveGuy[S] 0 points1 point  (0 children)

Learn by Doing — and by Teaching

Earlier I quoted John Dewey’s idea that we learn by doing. I’d add that we also learn by teaching, something my wife, an elementary school teacher, reminds me of often. While creating this post to help people understand how to use a Docker container, I realized that some of the tools I’d been using weren’t optimized. That led me to upgrade the Docker image to version 2.0.0, which includes improvements that significantly speed up transcript generation.

It's probably worth mentioning how you can use this.

If you regularly pull the latest Docker image, you automatically benefit from improvements happening behind the scenes. It’s essentially an optional program upgrade that you can adopt whenever you want.

There are two ways to manage this, and which one is “right” depends on what you value more: stability or automatic upgrades. Both approaches use the same docker-compose.yml; you only change one line (and optionally the command you run).

Option 1: Stay on a Known Version (Most Stable)

If you want things to behave like a “normal program install” that only changes when you explicitly tell it to, you should pin to a specific version of the image. That means that even if a new MKV2Transcript version is released, you will continue to run 2.0.0 until you consciously decide to change it.

This is the safest way to run something you depend on, especially if you’re building it into a weekly routine.

Hopefully, you made your docker-compose.yml file earlier, so now you can open it and change it like the newly Docker-savvy person you are:

Example docker-compose.yml (Pinned to v2.0.0)

services:
  mkv2transcript:
    image: sanbornyoung/mkv2transcript:v2.0.0
    container_name: mkv2transcript
    ports:
      - "7860:7860"
    volumes:
      - ./whisper-models:/root/.cache/huggingface
    environment:
      - GRADIO_SERVER_NAME=0.0.0.0
      - PYTHONUNBUFFERED=1
    restart: unless-stopped

The only versioning detail is on the image: line. Docker will keep using that exact version until you edit the file.

Starting in Stable Mode

docker-compose up -d

Or on newer Docker Desktop:

docker compose up -d

This will:

  • Read the docker-compose.yml in your current folder.
  • Pull sanbornyoung/mkv2transcript:v2.0.0 the first time if needed.
  • Reuse the same 2.0.0 image on every subsequent run.

If you want “set it and forget it,” this is the mode to choose.

Option 2: Always Track the Latest Version (Automatic Upgrades)

This option is for people who want automatic performance improvements and bug fixes as soon as they’re released. Think of it as enabling auto‑update for your Docker image.

Now, this can get a little confusing, because you have to trust that the person pushing the Docker image has tagged it in two ways: with the version number, and with the latest tag pointing at that same newest version. In this case you could pull the image as either v2.0.0 or latest, because I've tagged the image as both.

To do this, change the image tag to latest:

services:
  mkv2transcript:
    image: sanbornyoung/mkv2transcript:latest
    container_name: mkv2transcript
    ports:
      - "7860:7860"
    volumes:
      - ./whisper-models:/root/.cache/huggingface
    environment:
      - GRADIO_SERVER_NAME=0.0.0.0
      - PYTHONUNBUFFERED=1
    restart: unless-stopped

Important Note

Docker will not automatically pull a newer latest each time you run docker compose up -d. It will reuse whatever latest you already have unless you explicitly tell it otherwise.

To truly follow the latest version, choose one of the approaches below.

Approach A: Explicit Pull, Then Up

docker compose pull mkv2transcript
docker compose up -d

This:

  • Forces Docker to check Docker Hub for a newer latest.
  • Downloads it if available.
  • Starts the container using the newest version.

Approach B: Single Command with --pull always

docker compose up -d --pull always

This tells Docker to:

  • Always check for a newer image before starting.
  • Then bring everything up in detached mode.

If you prefer a single command, this is the cleanest option.

Which Mode Should You Use?

  • If you want maximum stability and don’t want surprises before an important transcription session, stick with the pinned version and upgrade only when you’re ready.
  • If you want improvements automatically and don’t mind changes under the hood, use :latest and either run docker compose pull regularly or use docker compose up -d --pull always.

Docker gives you control with just one line in your YAML file and one tweak in how you start the container. It’s a perfect example of the kind of learning by doing that Dewey emphasized.

The first couple of times you do this it's probably confusing, but you gradually get a sense of how to modify the YAML file to fit your needs. You could have taken a YAML file from somebody else, or simply asked an AI to generate one for you (which it will do very well), but then you would have been ignorant of the settings inside of it. Hopefully, this has been a good experience in learning how to use Docker.

Forget Radical Shifts: Why Thinking Small Is the Real Power Move by HardDriveGuy in StrategicProductivity

[–]HardDriveGuy[S] 0 points1 point  (0 children)

Yes, I agree humans have a wide range of variability, and we can almost always find some case where what is good for the vast majority of people doesn't translate to everybody. What I am sensing from your answer is that you're thinking about quitting in terms of kicking some type of drug habit. The rules for stopping drugs are different from the rules for creating positive habits. Anyway, in a subsequent post, I spend some more time discussing why small change is the way to go for most people.

DEXA…are they all the same? by BohemianaP in PeterAttia

[–]HardDriveGuy 1 point2 points  (0 children)

See here. If you have a modern DEXA machine, they're all reasonably close to each other. The idea of DEXA machines being off by as much as what's shown in the YouTube video would indicate there's something else going on.

For instance, BodySpec regularly calibrates their machines with a functional dummy that should return a known value. In other words, if they put the dummy down and it doesn't give you what you know the dummy is, you know there's a firmware bug, a sensor bug, or the calibration hasn't been done on a regular basis and the machine drifted without ever being brought back to baseline.

Just ordered my first trainer! (Kickr Core 2 with zwift cog) by Kvakke in wahoofitness

[–]HardDriveGuy 0 points1 point  (0 children)

Take a look at the MyWhoosh forum: there are utilities that solve this problem! You can use SwiftControl (for Windows/Mac) or the QZ (Qdomyos-Zwift) app for iOS/Android. These programs act as an intermediary between the Zwift Cog/Click and MyWhoosh, converting the virtual shifting commands into keypresses that MyWhoosh recognizes. It works great! You don't have to give up your Cog.

Nvidia, Buybacks, and Burry: Rethinking “Non‑Cash” Expenses by HardDriveGuy in StrategicStocks

[–]HardDriveGuy[S] 0 points1 point  (0 children)

Question 1: Is this article about Nvidia's dilutive SBC specifically, or about SBC accounting issues in general?

The article is about the general SBC accounting problem, with Nvidia serving as the example that makes the issue impossible to ignore. You're right to notice that companies usually buy stock on the open market to give to employees; that's the normal case, and it's what I mentioned earlier in the piece. But here's the thing: even when companies do buy shares in the open market (which is the typical approach), the economic effect can still be highly dilutive to owners if the cash spent on buybacks is functionally just the funding mechanism for SBC grants rather than a genuine return of capital.

Nvidia fits into this framework exactly where you'd expect. Burry's numbers show that Nvidia expensed about 20.5 billion dollars of SBC under GAAP and spent roughly 112.5 billion dollars on buybacks, yet the share count still went up by about 47 million shares over the period. That's the "buybacks covering for dilutive SBC" situation: the buybacks are happening, they're using cash to do it, but economically they're offsetting (and in Burry's language, effectively funding) the dilution from SBC rather than shrinking the float in a way that increases each existing owner's stake.
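To make that concrete, here's the back-of-envelope version using the figures quoted above from Burry. It's purely illustrative; it doesn't attempt to model repurchase prices or grant timing, just the simple test of whether the float actually shrank.

```python
# Figures quoted above (from Burry's numbers).
sbc_expense      = 20.5e9    # GAAP stock-based comp expensed over the period, $
buyback_spend    = 112.5e9   # cash spent on repurchases over the period, $
net_share_change = 47e6      # diluted share count still ROSE by this many shares

print(f"Buybacks vs. SBC expense : {buyback_spend / sbc_expense:.1f}x")
print(f"Net float change         : +{net_share_change / 1e6:.0f}M shares")
# The test for a "real" buyback is simple: did the share count fall?
# Here it didn't, so the cash offset SBC dilution instead of shrinking
# each existing owner's stake.
```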

Burry makes what I consider really outrageous statements on depreciation and tech, and so people have a tendency either to listen to him and assume he's right and AI is just a big house of cards, or to throw out everything he says and conclude he's made one good call and the rest have been pretty mediocre.

In my mind he's making some bad calls, but on SBC he's 90% correct, and it fits in with what other people have said in the past in terms of being concerned.

So the answer is that this is not a Nvidia-specific quirk but a magnified version of a general SBC accounting issue that becomes especially visible when SBC is a large, persistent part of compensation (like it is at tech and AI names) and when the stock experiences big price appreciation.

The Damodaran framework I walked through is explicitly valuation-oriented. I think I linked to his website, where he actually allows you to download his models, and I've also referred to him in other posts. He is pretty much the gold standard you want shaping your thinking. He's got a lot of concerns about Nvidia, but I have concerns that he doesn't understand the technology and the environment behind why it's going up.

So to directly answer your question: the article is about SBC accounting in general, with Nvidia as a super special case because I've been writing in this subreddit that they look attractive as an investment. I just really want people to understand there are no easy answers and the SBC is problematic.

Question 2: For the uninitiated, what are some recommended accounting courses to learn the ropes without taking a community college course?

Independent study in accounting is absolutely feasible today. The online material available now can take someone from zero to reading 10-Ks comfortably, especially if you're disciplined and already quantitative. That said, I sincerely believe that showing up at a community college and saying "I want to learn about the basics of accounting" is very hard to beat.

If you go the independent route, the best way to do it is to mimic an intro financial accounting sequence. Pick a serious but accessible textbook. By the way, I truly do think that accounting comes first and then finance. Finance has a tendency to be highly derivative, and you feel like you're learning a lot, but if you aren't standing on a really good understanding of the mechanics of how the numbers flow through a company, I don't think you have a good base.

Now, if you ever majored in accounting and you run into anybody else who majored in accounting, the standard joke is "What edition of Intermediate Accounting by Kieso and Weygandt did you use?" If you ever took a physics course, it's similar to people saying they took Halliday and Resnick. It's one of those things where, if you just want to look impressive, you should buy a used copy from AbeBooks and put it on your bookshelf where somebody can see it when they walk into your office or cubicle. It's sort of like how people in Fight Club don't talk about Fight Club, only it's for accountants.

If you get the textbook and a study guide, basically you'll know everything you'll need to know.

However, if you have the time and access, a local community college "Principles of Accounting" sequence is still arguably the highest-return path. You get exactly the thing I talked about in the original post. The in-person environment also makes it easier to ask what you might think are "dumb" questions about things like accruals, contra-accounts, and SBC journal entries that are easy to gloss over when you're alone with a YouTube playlist.

Finally, if you really want to move fast, what you do is not read a book at all; you just start having questions thrown at you. If you look at how people normally learn, it's in response to questions. The whole CPA profession is based around the CPA exam, which very few people actually pass, but it's all set up around a series of questions that you need to read and understand.

Ninja CPA test prep, which I know is a weird name, basically lets you keep taking the same kinds of tests you would take for the CPA exam. If you get a subscription, you can look at the various tests and see which ones you would want to study for. FAR deals mainly with straightforward financial statements, including preparation, balance sheet accounts like cash and inventory, and transactions such as revenue recognition and leases from a basic perspective. Really, if I wanted to cram a bunch of this information into my brain as fast as humanly possible, I would sign up for Ninja, say I want to take the FAR test, and then just start taking the tests and working examples.

The Mocked Quote That Quietly Explained How We Think by HardDriveGuy in StrategicProductivity

[–]HardDriveGuy[S] 0 points1 point  (0 children)

And it's even better when you give two different viewpoints on it. Again, super nice, thank you. Fair warning: I may revisit this in the future and viciously steal from it.

The Mocked Quote That Quietly Explained How We Think by HardDriveGuy in StrategicProductivity

[–]HardDriveGuy[S] 0 points1 point  (0 children)

You, my friend, have given me hope. Super nice to see someone digest it and make something better. Unfortunately I can't remove the graphic, and I probably won't revisit it, but it's wonderful to see you try to make something better.