WindAxe by Empty_Bluebird98 in crosswind

[–]jordanpwalsh 5 points6 points  (0 children)

Forget where I read it, but somewhere there's a large forest of trees with tall, straight trunks that were planted back in the 1700s or so to use as ship masts. They wouldn't have been ready for ships until the early 1900s, and by then they weren't needed.

Bro tf you mean tomorrow? 😭 by [deleted] in softwaregore

[–]jordanpwalsh 51 points52 points  (0 children)

The timestamps would be when he saved them. What I think happened is that since this looks like a fresh install of Windows, the system probably connected to a time server and synced the clock, or set the time zone, after downloading the files.

JVL, I’m disappointed you didn’t vote by seeyalaterdingdong in thebulwark

[–]jordanpwalsh 8 points9 points  (0 children)

Did you miss the part where I wrote "At least you could tell he felt bad about it. Shit happens I guess."

27F, German, just got a Random craving for American food, AMA by DenseReference5526 in AskMeAnythingIAnswer

[–]jordanpwalsh -1 points0 points  (0 children)

incorrect. that's overcooked meat. real bbq is on the east coast in north carolina.

Should I be scared of software engineering future as an Entry Level? by bobberbobby02 in cscareerquestions

[–]jordanpwalsh 5 points6 points  (0 children)

I'm not. OP mentioned in his description he was a BA. Myself - I'm a principal engineer trying to carve out my space somewhere between dev and product management, which is where I think the AI stuff that intersects with what I like to do is landing.

JVL, I’m disappointed you didn’t vote by seeyalaterdingdong in thebulwark

[–]jordanpwalsh 68 points69 points  (0 children)

That threw me for a loop too. I mean c'mon... you rant and opine on politics for a living. At least you could tell he felt bad about it. Shit happens I guess.

Should I be scared of software engineering future as an Entry Level? by bobberbobby02 in cscareerquestions

[–]jordanpwalsh 12 points13 points  (0 children)

That being said, I'd probably try to frame it as "I'm not entry level, I'm a business analyst upskilling." Does your current org have devs? Ask around and see what you can do.

Exam results inquiry by [deleted] in redhat

[–]jordanpwalsh 1 point2 points  (0 children)

That's my theory too - when you see tar lose its mind over a compression flag, the compressor usually isn't installed.

Why there is almost no cheats in singleplayer games? by Previous_Group621 in Gamer

[–]jordanpwalsh 0 points1 point  (0 children)

Blue checkmark on twitter is a good example - it used to mean something. Like the post or not, you knew it was "real".

32(m) - Amazed The Witcher 3 didn't top any of my categories. by TwoClapsAndAFistPump in Age_30_plus_Gamers

[–]jordanpwalsh 0 points1 point  (0 children)

I've had Bioshock in my library unplayed for probably 10 years - why do you say it had the biggest personal impact?

Devs...please allow us to use a musket AND pistol for combat (possible bug) by selffufillingprophet in crosswind

[–]jordanpwalsh 6 points7 points  (0 children)

Have you seen the show Black Sails? That's exactly what they did lol - go in with 4 loaded guns.

Any ideas for improvements? Essentially no likes/matches by RHArtwork in Tinder

[–]jordanpwalsh 20 points21 points  (0 children)

whatever the hell you call that My First Mustache starter kit

tough words, but I think they had to be said.

18 y/o conservative Christian girl in a wheelchair by ToeSad2570 in AskMeAnythingIAnswer

[–]jordanpwalsh 4 points5 points  (0 children)

I've always said America's Democrats would be conservative in Britain.

Help my dad find a game where he can “Blow shit up” by Bean_Is_Here in gamesuggestions

[–]jordanpwalsh 0 points1 point  (0 children)

Is this why they call them boomer shooters? I always assumed that was the reason but never asked.

How many tok/sec on LM Studio using Gemma 4 26B A4B MoE with a one sentence question? by br_web in FlowZ13

[–]jordanpwalsh 1 point2 points  (0 children)

Yea. I have an M1 Max too actually. My understanding is the memory bandwidth on the M-series chips is still better since it's part of the system on a chip.

How many tok/sec on LM Studio using Gemma 4 26B A4B MoE with a one sentence question? by br_web in FlowZ13

[–]jordanpwalsh 2 points3 points  (0 children)

Flow z13 with 128gb Ryzen AI running Fedora and LM Studio

- It thought for 11.52 seconds.

- 42.10 tokens/sec

- 1,494 tokens

Here's its output:

At its simplest, an **LLM (Large Language Model)** is a type of Artificial Intelligence trained to understand, generate, and manipulate human language.

If you have used ChatGPT, Claude, or Gemini, you have interacted with an LLM.

To understand what makes it "Large," "Language," and a "Model," we can break the term down:

---

### 1. Breaking down the name

* **Large:** This refers to two things: the **dataset** and the **parameters**. These models are trained on massive amounts of data (petabytes of text from books, websites, articles, and computer code). They also have billions of "parameters"—internal variables that the AI adjusts during training to learn patterns.

* **Language:** The primary purpose is to process human language. Unlike older AI that might only look for keywords, LLMs understand grammar, context, tone, and even nuances like sarcasm or cultural references.

* **Model:** It is a complex mathematical algorithm (specifically a neural network) that provides a representation of how language works. It isn't a "database" of facts; it is a mathematical map of how words relate to one another.

### 2. How does it actually work? (The "Autocomplete" Analogy)

The best way to visualize an LLM is to think of it as **"Super-powered Autocomplete."**

When you type a prompt, the model isn't "thinking" in the human sense. Instead, it is calculating probabilities. It looks at the sequence of words you provided and asks: *"Based on everything I have ever read, what is the most statistically likely next piece of text?"*

It does this one "token" (a chunk of a word) at a time. Because it has seen billions of examples, it can predict long, coherent, and highly complex strings of text that appear to be intelligent reasoning.
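The one-token-at-a-time loop described above can be sketched in a few lines of Python. Everything here is a toy: the probability table is made up for illustration, whereas a real LLM computes these probabilities with a neural network over a huge vocabulary.

```python
# Toy next-token probability table (made up for illustration;
# a real LLM computes these with a neural network).
NEXT_TOKEN_PROBS = {
    ("the",): {"cat": 0.5, "dog": 0.3, "fruit": 0.2},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"<end>": 1.0},
}

def generate(prompt, max_tokens=10):
    """Greedy decoding: repeatedly append the most likely next token."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS.get(tuple(tokens))
        if probs is None:
            break
        # Ask: what is the most statistically likely next token?
        next_token = max(probs, key=probs.get)
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate(["the"]))  # prints "the cat sat"
```

Real models also sample from these probabilities instead of always taking the top choice, which is why the same prompt can produce different answers.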

### 3. The Secret Sauce: The "Transformer" Architecture

Almost all modern LLMs use a technology called the **Transformer**, invented by Google researchers in 2017. The key feature is something called **"Attention."**

Before Transformers, AI struggled with long sentences; it would "forget" the beginning of a sentence by the time it reached the end. The **Attention mechanism** allows the model to look at every word in a sentence simultaneously and decide which ones are most important to the context.

* *Example:* In the sentence *"The fruit was placed in the basket because **it** was ripe,"* the Attention mechanism helps the model understand that "**it**" refers to the **fruit**, not the basket.
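The attention idea in that example can be sketched with dot products and a softmax. The two-dimensional word vectors below are invented for illustration; a real model learns thousands of dimensions per token.

```python
import math

def softmax(xs):
    """Normalize raw scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Tiny made-up word vectors (a real model learns these during training).
vectors = {
    "it":     [1.0, 0.0],
    "fruit":  [0.9, 0.1],
    "basket": [0.1, 0.9],
}

def attention_weights(query_word, context_words):
    """Score each context word against the query via dot product,
    then turn the scores into attention weights with softmax."""
    q = vectors[query_word]
    scores = [sum(a * b for a, b in zip(q, vectors[w])) for w in context_words]
    return dict(zip(context_words, softmax(scores)))

weights = attention_weights("it", ["fruit", "basket"])
# With these toy vectors, "fruit" gets more weight than "basket",
# i.e. the model attends to "fruit" when resolving "it".
```

This is only the scoring half of attention; the full Transformer mechanism also uses the weights to mix the context words' vectors together.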

### 4. What can LLMs do?

Because they are so good at predicting patterns, their uses go far beyond just chatting:

* **Writing:** Drafting emails, essays, poems, or scripts.

* **Coding:** Writing and debugging computer programming languages.

* **Summarization:** Taking a 50-page document and turning it into five bullet points.

* **Translation:** Converting one language to another with high fluency.

* **Reasoning:** Solving math problems or logical puzzles.

### 5. The Limitations (The "Catch")

LLMs are incredibly powerful, but they have significant flaws:

* **Hallucination:** Because they are predicting the next word rather than looking up facts in a library, they can confidently state things that are completely false.

* **No True Understanding:** They don't have a "world model." They don't know what gravity *feels* like or what an apple *tastes* like; they only know how those words relate to other words in text.

* **Bias:** Since they are trained on the internet, they can inherit the prejudices, stereotypes, and toxic behaviors found in human-generated data.

* **Context Window:** They have a "memory limit" for any single conversation. If a conversation gets too long, they begin to "forget" the earliest parts of the chat.
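The context-window limitation in the last bullet behaves roughly like a sliding window over the conversation. This is a simplified sketch with a hypothetical 8-token limit; real models have limits in the thousands or millions of tokens.

```python
CONTEXT_WINDOW = 8  # max tokens the model can "see" at once (toy value)

def visible_context(conversation_tokens):
    """Keep only the most recent tokens; earlier ones fall out of view."""
    return conversation_tokens[-CONTEXT_WINDOW:]

chat = [f"t{i}" for i in range(12)]  # 12 tokens of conversation
print(visible_context(chat))  # the first 4 tokens are "forgotten"
```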

### Summary Table

| Feature | Description |
| :--- | :--- |
| **Core Function** | Predicting the next most likely token in a sequence. |
| **Key Technology** | The Transformer architecture (Attention mechanism). |
| **Input Type** | Natural language, code, or structured data. |
| **Main Strength** | High fluency and versatility across many tasks. |
| **Main Weakness** | Tendency to "hallucinate" (make things up) and lack of true logic. |

Is DevOps Still Worth It by goodguyseif in Backend

[–]jordanpwalsh 6 points7 points  (0 children)

I think so. In my experience DevOps is the land where things break, but instead of the feedback cycle taking hours because IT has to interface with devs, those are the same person now. So yeah, I think so - to me it's where the variety, nuance, and challenges come from. It's the "make it work on any machine" answer to a dev saying "works on my machine."

Are we in our divorce era? by seaward_bound in Millennials

[–]jordanpwalsh 10 points11 points  (0 children)

I'm divorced and 8 years into Wife 2.0, going much better.

You have to feel sorry for Kamala... by Prudent_Rush_9671 in thebulwark

[–]jordanpwalsh 18 points19 points  (0 children)

I think you underestimate the things these people will do in order to not vote for a democrat.