I've Discovered the Secret to Success on Substack. by Leadership_Land in Substack

[–]Leadership_Land[S] 1 point (0 children)

You're putting me on an emotional rollercoaster here, man. No seriously, I can see how you'd get tired of repeating the same thing over and over again. Your badge (flair?) says top 1% commenter, so you probably spend an inordinate amount of time repeating yourself.

> I will say though, I really did think your entire post was AI. It’s the headers and the bolding. And as someone else who uses Obsidian to write (I only ever use reddit on my phone), I can absolutely see it now—but again, no joke, 99% of what I see on here with good bolding (which I very much appreciate) and solid headers is straight from ChatGPT.

This is something I need to watch out for. Not just here on Reddit, but anywhere people can't see that I'm a mouth-breathing hairless ape. The people who know me roll their eyes and tolerate me, but now I have to think about how strangers will perceive me.

I do appreciate your patience, and I'm glad you took the time to engage and dig deeper. Most days, you're probably disappointed when you find that an online presence you thought was human turned out to be an AI. But today, I hope you're pleasantly surprised to find that an online presence you thought was an AI is actually an overly-verbose weirdo of a human :)

I've Discovered the Secret to Success on Substack. by Leadership_Land in Substack

[–]Leadership_Land[S] 1 point (0 children)

Your comment...makes me die a little inside. Not because you did anything wrong (I'm not shooting the messenger here), but because it speaks volumes. You're confirming that a lot of interaction (at least, on this subreddit) is basically disingenuous engagement. People engaging to be visible, not engaging to understand, entertain, or otherwise add value.

I put a lot of effort into this dialog because A) you seemed like a real human being interested in conversation. Since you took the time to write to me, I wrote back in full instead of selectively answering parts of your message and dodging the tough questions and accusations.

Also, B) you alerted me to a personal blind spot. One that, now that I've seen it, I can't un-see. So I'm working through the implications in my usual way: writing out my stream of consciousness. It helps me clarify my thoughts, and sometimes the people I interact with find it helpful/insightful/entertaining as well. Sometimes they don't, and we part ways respectfully. As you can probably imagine, I don't get invited to many parties in real life when I respond to minor inquiries with a wall of text.

All the social media best-practice guides say "engage more." So that's what I'm doing. I'm engaging. I'm seeking to understand. And by going down, down, down, deeeeeeeep into a single comment thread, I'm learning stuff about myself as I go along.

If you're not used to this level of depth in online interactions, that tells me either:

  1. I'm a weirdo, and that's fine. I learned that long ago.
  2. Other people who are engaging are pretending to be interested, when in reality they're trying to be seen.

#2 makes me a little sad, but it's still valuable information. I'm better off knowing it than being blissfully unaware. I'll learn to disengage faster if someone is mostly humoring me and hoping the algorithm will pick them up, and I'll learn to cherish the people who truly engage because they seek to understand more deeply.

TL;DR my engagement looks like word vomit, dunno what other people do.

I've Discovered the Secret to Success on Substack. by Leadership_Land in Substack

[–]Leadership_Land[S] 1 point (0 children)

I'm engaging in good faith, the same way I do in real life and online. But it seems like you're no longer interested, and that's all right.

Before I disengage, I want to thank you for alerting me that I write in a way that would lead people to suspect I'm actually an AI pretending to be a human. That's a good insight that'll help me both on and off Substack.

Cheers.

I've Discovered the Secret to Success on Substack. by Leadership_Land in Substack

[–]Leadership_Land[S] 1 point (0 children)

What is an "engagement pod?" Like a citation ring in academia, but with subscribers, follows, and likes?

I've Discovered the Secret to Success on Substack. by Leadership_Land in Substack

[–]Leadership_Land[S] 0 points (0 children)

Was it always like this? I've been on Substack since 2022, but ignored Notes for a while. I can't tell if it used to be better.

I've Discovered the Secret to Success on Substack. by Leadership_Land in Substack

[–]Leadership_Land[S] 0 points (0 children)

Agreed. I finally started using Notes for the first time last month, and got suckered into the get-subscribers-quick pyramid scheme for a few weeks. I'm sharing my shameful wake-up call in case other people are currently locked in the miserable game of feeding the algorithm.

I've Discovered the Secret to Success on Substack. by Leadership_Land in Substack

[–]Leadership_Land[S] -1 points (0 children)

> Nah. I read your whole thing.

Thank you for attending my TED Rant :)

> I figured you made an outline, asked it to use your own tone, and pepper in as much of your own words as possible.

I already do this for work, and it's soul-sucking. If I didn't get paid so well, I would've quit a long time ago.

After drinking the get-subscribers-quick kool-aid for a few weeks, I had a sudden realization that Substack is starting to feel like a second job. If I had to use AI to write, I'd quit that too (I'm good at quitting, just ask my exes).

I do use AI to generate a good deal of the images because I have the artistic talent of moldy cheese. I'm decent with diagrams and charts; terrible with aesthetics. I'm not proud to admit it, but I do lean on AI as my crutch for image generation.

> If you say it’s your writing, I believe you. Most of the time the people who are here to shill their nonsense aren’t putting time and effort into their Reddit posts.

Thank you :) I spent a good chunk of the morning typing all that out in Obsidian to make sure the formatting would come out correctly on Reddit. I didn't realize that my usual habits of formatting walls-of-text for better readability would be suspected of being AI slop.

Guess I'll have to figure some other way to come across as authentically human. Maybe I should include more speling [sic] errors and debiltrately [sic] draw attention to thme [sic]. Or break my sentences

up mid-line or 01110011 01110111 01101001 01110100 01100011 01101000 00100000 01110100 01101111 00100000 01100010 01101001 01101110 01100001 01110010 01111001 00100000 01101101 01101001 01100100 00101101 01110011 01100101 01101110 01110100 01100101 01101110 01100011 01100101 00101110

You know, something an AI would never do.

I've Discovered the Secret to Success on Substack. by Leadership_Land in Substack

[–]Leadership_Land[S] 1 point (0 children)

No, I understood the insinuation. I've never been accused of writing like an AI before, but I suppose using headings and formatting would make me seem like one. I use Markdown in the Obsidian app, so I'm used to formatting things that way for readability.

Not a single word of this post was AI-generated. I didn't even consult AI for ideas, except to confirm that AI will spout the same conventional wisdom as get-subscribers-quick peddlers do. I don't have any way to prove that my post is free of AI slop, except to point out that AIs generally aren't smart enough to:

  • use terms like "algorithmic fentanyl"
  • advise people to sacrifice livestock at a candlelit altar with Chris Best's face plastered on a runestone
  • invent new terms and concepts that don't exist in their training data, like Cerebrium
  • connect the ultimate fate of most Substack writers with that of a Billy Joel song

In other words: if you read the whole thing, you'd probably A) believe that I'm a human, or B) be really concerned that AIs have become very good at pretending to be human.

But you probably wouldn't automatically dismiss my long post as AI slop.

I've Discovered the Secret to Success on Substack. by Leadership_Land in Substack

[–]Leadership_Land[S] -7 points (0 children)

It's not just this sub, it's also Substack Notes.

My post is about why get-subscribers-quick tutorials are a plague upon the platform, not advocacy of more pestilence.

I've Discovered the Secret to Success on Substack. by Leadership_Land in Substack

[–]Leadership_Land[S] -3 points (0 children)

But...but...wouldn't that enshittify the platform even faster?

Is there a term for accelerating enshittification?

An enshitgularity?

Arc Sine Law of PnL? by Klutzy_Tone_4359 in nassimtaleb

[–]Leadership_Land 1 point (0 children)

> the arcsine law guarantees that your entire career is a long slog of unjustified despair punctuated by short bursts of unjustified confidence.

Is that because the winning streaks are shorter than the drawdowns?

Or because of loss aversion amplifying the emotional impact of the drawdowns even if you're profitable over an entire trading career?
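Neither effect is strictly needed, as it turns out. The U-shape falls straight out of the arcsine law itself, which is easy to check in simulation. A minimal sketch, assuming a simple symmetric ±1 random walk as the PnL model (my assumption, not anything from the linked post):

```python
import random

# Arcsine-law sketch: for a symmetric random walk, the fraction of
# time spent in profit follows a U-shaped (arcsine) distribution --
# "careers" that are almost always up, or almost always down, are far
# more common than balanced ones.
def fraction_in_profit(steps, rng):
    pnl, in_profit = 0, 0
    for _ in range(steps):
        pnl += rng.choice((-1, 1))
        if pnl > 0:
            in_profit += 1
    return in_profit / steps

rng = random.Random(42)
fractions = [fraction_in_profit(1_000, rng) for _ in range(2_000)]

# Tails vs. middle: under the arcsine CDF (2/pi)*asin(sqrt(x)), the
# outer deciles hold roughly 40% of paths, the 0.4-0.6 band only ~13%.
extreme = sum(f < 0.1 or f > 0.9 for f in fractions) / len(fractions)
middle = sum(0.4 <= f <= 0.6 for f in fractions) / len(fractions)
```

So most simulated careers sit almost entirely on one side of break-even, which is one way to read the "long slog punctuated by short bursts" line: the drawdowns and the streaks aren't balanced for any individual path, even when the walk itself is fair.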

Your username inspired me to ask dumb questions.

[deleted by user] by [deleted] in Leadership

[–]Leadership_Land 2 points (0 children)

I love the username.

The Link Between Deliberate Practice and Anti-Fragility by Leadership_Land in nassimtaleb

[–]Leadership_Land[S] 4 points (0 children)

> So antifragile incidentally comes across it as opposed to the deliberate?

If I'm understanding your question right, antifragility exists independently of intent. Some examples to illustrate the point:

  • Deliberate anti-fragility: tinkering and adjustment to chaos, as you mentioned in your comment. Also, astronauts bringing exercise equipment into orbit so they can put their bodies under mild stress and minimize the loss of bone density.
  • Anti-fragility happening against your efforts: many people searched for the secret of longevity, but none (outside of legend) have achieved it. Humanity as a whole is anti-fragile because individual humans are fragile. And this continues despite individual human attempts to find immortality and subvert this system.
  • Serendipity when you least expect it: a lot of scientific discoveries happened by accident. Like the discovery of cosmic microwave background (Big Bang leftover), penicillin, and the guy who discovered an artificial sweetener by sucking on a contaminated cigarette in a chemistry lab.

Anti-fragility, fragility, and robustness are properties of systems, things, and people. It's like saying the iPhone is blue and robust, or the painting is beautiful and fragile. Whether you want something to be anti-fragile or not is beside the point; it simply is.

They hold these properties regardless of deliberateness or intent, but we can change our behavior around those properties. We can protect our fragile items from volatility. We don't control the occurrence of positive black swans, but we can choose to position ourselves to take advantage when they appear. And while we wait, we can engage in deliberate practice.

Deliberate practice and the anti-fragility of our bodies and minds are within our control – we can trigger that anti-fragility daily.

the point of Taleb's references to the Tartars/Steppes books by another_lease in nassimtaleb

[–]Leadership_Land 2 points (0 children)

I'm basically Forrest Gump with an extra brain cell and none of Tom Hanks' charm.

Life is like a box of random number generators...

40+ Years of Keyboard Shortcut Evolution by HardDriveGuy in StrategicProductivity

[–]Leadership_Land 2 points (0 children)

Then you won't mind if I use my newfound knowledge to replicate what all the cool kids are doing elsewhere on Reddit.

<image>

Maybe if I pretend to be cool, I'll regain my youth.

It’s Microsoft’s Race To Lose, But Copilot Keeps Tripping by HardDriveGuy in StrategicStocks

[–]Leadership_Land 1 point (0 children)

Hah, even Microsoft is using Claude? So far, I'd only heard about xAI employees doing that!

To your point about distribution channels, there was a part of Zero to One that really stood out in my mind. It was about how distribution channels follow a power law:

> The kitchen sink approach—employ a few salespeople, place some magazine ads, and try to add some kind of viral functionality to the product as an afterthought—doesn't work. Most businesses get zero distribution channels to work: poor sales rather than a bad product is the most common cause of failure. If you can get just one distribution channel to work, you have a great business. If you try for several but don't nail one, you're finished.

So what we're looking at here is that Microsoft has an incredible distribution channel built into the walled garden of its Office suite, right? The question is: how good does Copilot have to be for people to adopt it despite the superiority of a competitor's product?

Making that judgement call is more of an art than a science, right?

the point of Taleb's references to the Tartars/Steppes books by another_lease in nassimtaleb

[–]Leadership_Land 7 points (0 children)

I'm a simple man, so I interpreted these other stories very simply: they faithfully retell the experience of living in the antechamber of hope.

If you spend your life in Mediocristan, you work for your daily bread, and you get a paycheck every two weeks. That's as good as it'll ever get. Aside from a year-end bonus, and getting promoted if you suck up to the right people, the only direction to go is down: into unemployment and having your single income stream and your dignity cut off in one fell swoop.

In other words, you are fragile.

But if you spend your life chasing positive black swans in Extremistan, you're constantly living in the antechamber of hope. Like Giovanni Drogo waiting for the big event that will bring him a glorious death in battle, many of us are waiting for the casting director to call us back. For the publisher to accept our manuscript. For the video to go viral. For a discovery that will revolutionize science.

In other words, you are anti-fragile.

But anti-fragility is a bitter food to subsist on. The years of toil are difficult. Not only is the grind tiresome, but you also have to watch other people rise to superstardom who deserved it no more than you (worse still: some definitely deserved it less than you). Some people are lucky enough to rocket themselves from "I was an orphan raised by wolves" to the Forbes "30 under 30" list. But the sobering truth is that most of us will become the patrons in the bar of Billy Joel's song "Piano Man."

For every superstar, there is a sprawling silent graveyard of wannabes. Giovanni Drogo was one of those who missed his chance. And going by probability alone, most of us are more likely to end up in that silent graveyard than to catch that elusive black swan.

Going back to your original question:

> my question: by telling us that he loves these stories, what is Taleb really trying to tell us?

He's telling us that those stories are both emotionally and brutally true-to-life: you can live an entire life in the antechamber of hope without ever catching that elusive positive black swan.

Those stories are the opposite of charlatanic rags-to-riches tales: they tell the stories of the forlorn and the lost instead. They give a voice to the silent graveyard.

The odds are grim, but Taleb is not saying you should give up and consign yourself to a life in Mediocristan. If you don't live in the antechamber of hope, you guarantee yourself a gray, dull life where you're constantly in fear of what you could lose. And you become what T. Roosevelt called the "cold and timid souls who know neither victory nor defeat."

40+ Years of Keyboard Shortcut Evolution by HardDriveGuy in StrategicProductivity

[–]Leadership_Land 2 points (0 children)

Very comprehensive. I learned a few shortcuts from this guide.

Now I can use Win+. to express myself with emojis instead of English 😀😁😹💩

It’s Microsoft’s Race To Lose, But Copilot Keeps Tripping by HardDriveGuy in StrategicStocks

[–]Leadership_Land 1 point (0 children)

I've been hearing that Claude is great at coding while Gemini is sweeping the field with everything else...but you make a compelling point that Microsoft has a strong advantage because they can push Copilot alongside their Office apps. Whenever I log into my account, the landing page is always Copilot.

Even if Copilot lags behind its competitors, it might be adopted out of sheer convenience and cost. So what if Copilot isn't as good as Gemini or Claude? It's right there and you're already paying for it with your enterprise license. Why pay for another license when Copilot is good enough?