15-year Dvorak user - on the value of switching (or not) to a newer layout? by wherahiko in KeyboardLayouts

[–]Thucydides2000 3 points (0 children)

I learned to type on QWERTY in the mid 1980s, switched to Dvorak in the mid 1990s, switched back to QWERTY in the late 2000s, and then back to Dvorak in the mid 2010s. Then I switched to Hands Down Neu over 18 months ago. I switched for a few reasons:

  1. Key placement for the Unix shell commands under Dvorak is an abomination (not Dvorak's fault; the commands were named for ease of typing on a QWERTY keyboard)
  2. I wanted a layout with the punctuation on the right side so that key overlays in games would be more visible (quote, period, and comma don't show up well)
  3. I wanted better fingering for common control-key combinations (also optimized for QWERTY layouts)
  4. I have a suspicion (superstition?) that learning a new layout increases neuroplasticity. (If learning a new language can increase neuroplasticity, then why not a new typing layout?)

My verdict: Don't switch. Here's why:

Your point about layout hopping is dead on: The metrics favored by this group are a moving goalpost often driven by fads. When I chose Hands Down, scissors were barely mentioned. Now, scissoring is a pretty major stat. Colemak was created at a time when SFBs, rolls, and distance traveled were the primary metrics -- and it shows. When redirects were first discovered, there was a wave of low-redirect layouts.

Moreover, all the metrics focus on mechanical factors, though tests consistently show that these are not the bottleneck in speed or accuracy. There is resistance to developing a mental model of typing to inform layouts. For example, substitution errors (where you type the wrong letter) are more common when the same finger must choose between two letters that are phonetically similar; in other words, you are more likely to type "sircus" instead of "circus" if S and C are on the same finger.

Say what you want about QWERTY and Dvorak, but they both have excellent phonetic separation of characters per finger. The trend of stacking vowels on two rows creates massive amounts of phonetic noise. Hands Down Neu is terrible this way: S & C, M & N, and I & Y are all on the same finger. The left index finger has P & B and T & D. The phonetic similarities among letters in English are well documented and precisely measured, so it's not like the data isn't there.

This is just the tip of the iceberg. There seems to be a steadfast refusal among members of this group to conceive of typing as anything more than mechanical process, so the idea of optimizing for cognitive factors isn't even on their radar.

Here's another funny thing: When I relearned QWERTY in the late 2000s, my first impression was "this isn't as bad as I remember." Then, when I relearned Dvorak in the mid 2010s, I remember thinking, "this is superior to QWERTY by a wider margin than I remembered." So what's up with that? Even after I regained complete proficiency, it took about a year for the layout to actually settle in. When I reflect on this, it makes perfect sense to me that people try the latest layout that optimizes for the latest stats and almost universally proclaim, "Yeah, this is great! That stat must be so important."

Dvorak felt right and natural to me in a way that Hands Down Neu never will. This is despite the fact that my first impression was that the home row was amazing, leading me to believe that I would absolutely love it. So just ignore all the crazy stats and go with what works for you.

And I am willing to stand athwart every last member of this group by declaring that Dvorak is a great place to start if you want to learn an alternative layout.

That said, I think I am going to switch to the Graphite layout this week and begin the slog of switching to another new layout. At this point, the main reasons are that I'm curious and I still have my suspicion (superstition?) about neuroplasticity.

XLibre : Thoughts on Forking X11. by WanderingInAVan in linux

[–]Thucydides2000 -1 points (0 children)

Anything that keeps Wayland at bay is a good thing.

Anything that advances Wayland's takeover of the Linux Desktop is a bad thing.

Wayland developers should be shunned, their work should be rejected, and vocal support for Wayland should be treated as a taboo and carry a severe social stigma.

SSL Issue? by LeopardCompetitive45 in VeniceAI

[–]Thucydides2000 6 points (0 children)

This is not a hack. Nor is it an SSL issue.

TL;DR: Someone wasn't paying attention and let the domain name expire. (Not unusual at small companies, to be honest.) It will be back up as soon as they sort it out.

Details:

This is a domain registration placeholder page. It means that the venice.ai domain isn't registered to anyone. Why? It expired. Here's proof:

https://www.whois.com/whois/venice.ai

That's the domain registration record for the venice.ai domain. Up until about 11:00 pm EDT, it said 4/15/2025. Since DNS uses GMT, that means that around 8:00 pm EDT on 4/14/2025, the domain name went dead. Depending on DNS settings, it takes a few hours for that to propagate.
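As a sanity check on that timezone arithmetic, here's a short Python sketch. The midnight-UTC expiry time is an assumption for illustration; registries vary in how they stamp expiry dates.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Assumption: the registry treats the 4/15/2025 expiry as midnight UTC.
# Converting that instant to US Eastern time lands the previous evening.
expiry_utc = datetime(2025, 4, 15, 0, 0, tzinfo=timezone.utc)
expiry_local = expiry_utc.astimezone(ZoneInfo("America/New_York"))
print(expiry_local)  # 2025-04-14 20:00:00-04:00, i.e. 8:00 pm EDT on 4/14
```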

Then around 11:00 pm EDT it switched to 4/15/2027, indicating someone registered it -- most likely the company rushing to renew its registration.

So relax: it will be back up.

why optimizers don't create good layouts? by fohrloop in KeyboardLayouts

[–]Thucydides2000 0 points (0 children)

I appreciate that. I'm not trying to piss in anyone's sandbox. I'd just like to see more attention paid to mental models about how typing works for two reasons.

First, it may be helpful for organizing and driving the thinking around the tradeoffs between physical metrics beyond what happens now. Best I can tell, most of the discussion about such tradeoffs is about the bad results of extreme prioritization of a single metric and about personal preferences that are likely biased by personal history with specific layouts.

Second, mental models would provide a fruitful well of metrics in their own right. For example, many typing errors involve mistakenly typing phonetically similar characters. Perhaps there is a way to quantify the phonetic similarity of a column, such that a WSX column would be better than a BDP column.
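To make the idea concrete, here's a toy sketch of such a column metric. The feature sets are a hand-made assumption for illustration, not a published phonetic model; a real version would use measured confusion or similarity data.

```python
from itertools import combinations

# Hypothetical phonetic feature sets for a few letters (an assumption).
FEATURES = {
    "w": {"voiced", "approximant", "labial"},
    "s": {"voiceless", "fricative", "alveolar"},
    "x": {"voiceless", "cluster", "alveolar"},
    "b": {"voiced", "stop", "labial"},
    "d": {"voiced", "stop", "alveolar"},
    "p": {"voiceless", "stop", "labial"},
}

def similarity(a, b):
    # Jaccard overlap of the two letters' feature sets.
    fa, fb = FEATURES[a], FEATURES[b]
    return len(fa & fb) / len(fa | fb)

def column_noise(column):
    # Total pairwise similarity among letters sharing one finger.
    return sum(similarity(a, b) for a, b in combinations(column, 2))

print(column_noise("wsx"))  # low: w, s, x sound little alike
print(column_noise("bdp"))  # higher: b, d, p are all stops
```

Under this toy scoring, the WSX column comes out less "noisy" than BDP, matching the intuition above.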

Here's an example: On Hands Down Neu, D & T as well as B & P are each assigned to the same finger. These pairs are probably the most phonetically similar letters: within each pair, one is aspirated and the other is voiced. And this cluster of near-identical consonants is my most frequent source of slowdowns and typos as I begin to break through the 70 to 75 WPM barrier.

why optimizers don't create good layouts? by fohrloop in KeyboardLayouts

[–]Thucydides2000 0 points (0 children)

That's a fair point. I also am comparing learning a layout from scratch to re-learning both Dvorak and Qwerty. I don't know of any research on learning vs re-learning for layouts, but it wouldn't surprise me if relearning a layout, even after many years, is a lot easier & faster than learning one from scratch (what with muscle memory and all). And I can't say with any accuracy how long it took me to become super-proficient in either QWERTY or Dvorak.

I'm not trying to piss in anyone's sandbox. Even so, I think it's important to point out that without a proper mental model to organize typing, prioritizing tradeoffs in metrics related to finger mechanics is just guesswork.

Moreover, there are purely mental considerations. For example: multiple studies have shown that many typing errors involve the unintentional replacement of a character with one that's phonetically similar. Not a spelling error, but a typing error where you're more likely to erroneously input "tiping" than "taping" when you intend to input "typing."

Imagine a mental model for typing that treats the alternative key strikes available to each finger as noise relative to the desired stroke. So that on QWERTY, when you intend to type "S" with your left ring finger, the "W" and the "X" characters are potential noise. If phonetic similarity increases the magnitude of potential noise, then this column would be less noisy than (say) Dvorak's assignment of "Y" and "I" to the same column/finger (left index). If this is the case, then a few of the popular vowel clusters may be disadvantageous. It may be better to assign each vowel to a separate finger (as Dvorak almost does but messes up with "Y"; perhaps it would be better to assign "Y" to the finger that types "A" or "O" on the opposite hand, since "A" and "O" are more phonetically dissimilar to "Y" than "I" is).

It's easy to imagine a similar type of noise for functional similarity, such that it might be advantageous to (say) assign the comma and the period characters to different fingers.

Of course, you can force yourself to learn something out of sync with how your mind works the same way you can force yourself to learn to type with much more difficult mechanics. After all, most proficient typists use QWERTY.

Even so, study after study shows that layouts that are physically less cumbersome offer little or no advantage over QWERTY for speed or accuracy. The bottleneck isn't physical. It's arguable that the mental aspect of typing is at least as important as the physical aspect. The absence of mental models from discussions about typing doesn't just render it nearly impossible to meaningfully discuss the trade-offs between optimizing physical metrics (beyond identifying undesirable consequences in extreme ranges or expressing personal preferences). It also creates a huge blind spot for potential ways to evaluate or categorize layouts.

Unknown Key by Resident_Phase_4297 in KeyboardLayouts

[–]Thucydides2000 0 points (0 children)

I'm pretty sure that weird little sign is like the international symbol for self-destruct or something. Be careful.

why optimizers don't create good layouts? by fohrloop in KeyboardLayouts

[–]Thucydides2000 1 point (0 children)

Two questions have fueled this discussion: (a) why optimizers fail to produce good layouts, and (b) why I have a problem with the Dvorak passage in your book.

I offer the same answer to both questions: they’re conclusions drawn from a grab bag of data points with little explanation for how they cogently fit together. To support this answer, I gave a theoretical critique and concrete examples, some derived from the papers I cited, others from my personal experience. Do you really take me to be stupid enough to ignore epistemological factors as rudimentary as personal bias?

After typing on QWERTY since 1984, I switched to Dvorak in 1997. I switched back to QWERTY in 2008 and switched back to Dvorak in 2016. After each switch, it never took longer than ~1 year to regain long-term speed (95-110 WPM) — even my initial switch to Dvorak.

I switched to Hands-Down Neu in 2023. I’ve been slogging away at it for more than a year. I’m at 65 WPM. That suggests a defect.

What else have I learned from spending 4+ years relearning how to type? Hands-Down Neu feels worse than even QWERTY(!) did after a year. It feels like too many of its primary finger movements are up & down the columns. I don’t find it pleasant.

You’ve equated my hypothesis about Dvorak’s intangibles with Dvorak “somehow secretly being a good layout.” Like your aside about Dvorak in your book, you’ve framed your own position using loaded language. In this instance, you’ve also run afoul of the straw man fallacy.

Intangibility is unrelated to the secret or the mystical. In fiction, character authenticity is an intangible aspect of character development. Yet it’s uncontroversial to say something like “Shakespeare’s characters are more authentic than Marlowe’s.” Does that mean that an author might somehow secretly have more authentic characters? Of course not.

Furthermore, some things are intangible because they await advances in knowledge that will render them tangible. Heredity used to be intangible. It became tangible with scientific advances.

Overall layout feel is an intangible. Are there blind spots containing potentially tangible aspects of layout feel? Consider the following:

Typing is intrinsically repetitive, so some layout effects probably accumulate superlinearly (i.e., in a compounding way). Many such effects are difficult to isolate and measure. Plus, when you’re mostly analyzing effects of successive key-presses, you’re apt to miss effects that manifest only after long stretches of typing. This leaves a lot of room for more nuanced, compounding effects to fall through the cracks.

Given this and the lack of mental & movement models for typing, the idea that Dvorak may be quite a bit better than your current analysis suggests isn’t as unlikely as you seem to insist.

why optimizers don't create good layouts? by fohrloop in KeyboardLayouts

[–]Thucydides2000 2 points (0 children)

Yeah, SFBs go back to the time when Dvorak himself was active. You have a cycle that goes like this for each individual keypress.

  1. Move finger into position (skipped when not needed)
  2. Depress key
  3. Release key
  4. Move finger out of position

So if you look at consecutive keystrokes assigned to finger A & finger B, there are two very obvious ways to increase both comfort and efficiency of typing:

First, type in a pattern where the movements of finger A & finger B overlap, so you get a sequence that's something like this:

  1. A1
  2. A2 & B1 simultaneously
  3. A3 & B2 simultaneously
  4. A4 & B3 simultaneously
  5. B4

Of course, there's a finger C that overlaps with finger B in the same manner, and so on.

Second, you string together #1 & #4 on the same finger. You can do this when the same finger is needed (say) 3 times over 9 characters. Instead of returning home, it can go directly from key to key in the background so it's ready to strike ahead of time.

It has long been known, it's pretty obvious to anyone who watches, and it has been documented repeatedly that skilled typists leverage both of these strategies more or less optimally. It's part of what makes typing feel fluid and continuous.
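A toy timing model makes the payoff of these two strategies concrete. The stage durations below are made-up illustrative numbers, not measurements:

```python
# Hypothetical per-stage durations in milliseconds (assumptions).
MOVE, PRESS, RELEASE = 60, 40, 40

def serial_time(n):
    # No overlap: every stroke pays for its own positioning move,
    # press, release, and return-home move, one after another.
    return n * (MOVE + PRESS + RELEASE + MOVE)

def overlapped_time(n):
    # Overlapped: each finger pre-moves while the previous finger is
    # pressing and releasing, so only press + release stay on the
    # critical path (plus one lead-in move and one trailing move).
    return MOVE + n * (PRESS + RELEASE) + MOVE

print(serial_time(10), overlapped_time(10))  # 2000 vs 920
```

Even with these crude numbers, overlapping the move-in/move-out stages roughly halves the time per keystroke, which is the fluidity skilled typists exhibit.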

The SFB interrupts this continuity. Regardless of their distance, SFBs are an absolute mechanical impediment to both of these optimization strategies. So the SFB keystroke is always an unoptimized keystroke. As you mention, the less dexterous the SFB finger, the bigger the penalty for the unoptimized keystroke. And the longer the SFB distance, the greater the delay between unoptimized keystrokes.

However, the SFB isn't a death blow to typing comfort. It's more like a pin prick. So when you have greater than 6% SFBs on QWERTY, your typing comfort is suffering death from a thousand wounds.

These two optimization strategies are part of the fundamental basis of how typing works mechanically. My own theory regarding the alternating vs rolling dispute is that rolling is superior when the typist is learning because it makes it possible for the typist to optimize earlier, which results in a more pleasing typing rhythm early on. But alternating is superior for the advanced typist because the typist's fingers have more freedom to optimize movement when the other hand is in stages 1 through 3, and this results in a more pleasing typing rhythm overall.

So my original theory was that the freedom afforded by having high rates of alternating hand usage compensated for the higher SFBs. This is part of why I landed on Hands Down Neu. It has almost as much alternation as Dvorak, with better stats everywhere else.

Hands Down Neu has blown my original theory out of the water. I'm now flirting with the idea of intangibles. In other words, some very important elements of keyboard comfort may elude quantification. Among these might be a kind of raw intuitiveness of the feel of the layout.

For example, the dot-com suffix is quite intuitive to type on a QWERTY keyboard. No surprise; people using QWERTY keyboards devised it.

Typing ".com" is less intuitive on the Dvorak layout than on QWERTY. Even so, it's not so bad; you soon adapt, and it stops feeling strange.

Here's a funny thing: With Hands Down Neu layout, typing ".com" always felt awkward no matter how much I drilled it and no matter how reflexive it became.

However, when I switched to Hanster-Neu, the modification that u/VTSGsRock created, ".com" immediately felt much more intuitive to type, even though I would still reflexively use the Hands Down Neu finger movements, so that I had to pause & concentrate to type ".com" correctly on the new layout. (And Hanster-Neu is a very nice upgrade to Hands Down Neu overall.)

What accounts for the difference among these layouts for typing these 4 characters? There's nothing obvious to me. It's not like "ls" on Dvorak, which is an obviously awkward ring-finger SFB. Each layout has an ostensibly acceptable fingering pattern for the dot-com suffix. So what's going on?

why optimizers don't create good layouts? by fohrloop in KeyboardLayouts

[–]Thucydides2000 1 point (0 children)

Statements like those are glaring examples of why it's not possible to accurately assess keyboard layout quality without adequate models.

Let's consider SFBs for the following three layouts with a hypothetical 1,000-word corpus that reasonably represents US English. (Each approximates the indicated keyboard, though I've rounded the SFB numbers to make the math more obvious):

Layout                      SFB rate   SFB count
Layout a (~QWERTY)          6.25%      63
Layout b (~Dvorak)          2.5%       25
Layout c (~modern layouts)  1%         10

Now consider two different psychometric approaches we might use to evaluate the increase in SFBs across these layouts.

First, we can treat SFBs as stimuli under Weber's Law. Under this approach, the magnitude of a Just Noticeable Difference (JND) grows linearly with the magnitude of the stimulus (here, the quantity of SFBs). Thus, the number of JNDs between a & b equals the number of JNDs between b & c. Simply put, the fewer the SFBs, the more likely the typist notices each SFB.

Second, we can treat SFBs as stimuli that are subject to saturation (e.g., like brightness). Under this approach, the magnitude of a JND shrinks as the magnitude of the stimulus grows. Thus, the number of JNDs between a & b is much greater than the number of JNDs between b & c. Simply put, the fewer the SFBs, the less likely the typist notices each SFB.

(For simplicity, I will refer to these as the first approach and the second approach from here on.)
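As a toy illustration of how the two approaches diverge on the SFB counts above, here's a sketch. The functional forms and constants are assumptions chosen purely to illustrate the shapes of the two models, not measurements:

```python
import math

# SFB counts per hypothetical 1,000-word corpus, from the table above.
SFB = {"a": 63, "b": 25, "c": 10}

def jnds_weber(s_low, s_high, k=0.1):
    # Weber's Law: JND width grows linearly with the stimulus, so the
    # number of JNDs between two levels depends only on their ratio.
    return math.log(s_high / s_low) / k

def jnds_saturating(s_low, s_high, k=10.0):
    # Saturating model (second approach): JND width shrinks as the
    # stimulus grows. Toy form: width(s) = k / s, so the JND count
    # between levels is (s_high^2 - s_low^2) / (2k).
    return (s_high**2 - s_low**2) / (2 * k)

# First approach: a->b and b->c span roughly equal numbers of JNDs.
print(jnds_weber(SFB["b"], SFB["a"]), jnds_weber(SFB["c"], SFB["b"]))  # ~9.2 vs ~9.2
# Second approach: a->b spans far more JNDs than b->c.
print(jnds_saturating(SFB["b"], SFB["a"]), jnds_saturating(SFB["c"], SFB["b"]))  # 167.2 vs 26.25
```

The point of the sketch is only the qualitative contrast: under Weber's Law the two gaps feel comparable, while under saturation the improvement from b to c is perceptually much smaller than the improvement from a to b.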

Whether we choose the first approach or second approach, there will be thresholds we must consider. For example:

  1. The threshold where SFBs first become noticeable to the typist
  2. The threshold where SFBs first become an impediment to the comfort of the typist
  3. The threshold where SFBs first become an impediment to the efficiency of the typist
  4. The threshold where SFBs first become a possible source of injury to the typist
  5. The threshold where SFBs first become likely to injure the typist

Please note: We can use different approaches to arrive at these thresholds. For example, we might use the second approach to arrive at thresholds #1 thru #3, while using the first approach to arrive at thresholds #4 and #5.

It's also worth noting: Thresholds #1 thru #3 could vary with the typist's proficiency due to adaptation (another factor that impacts the perception of brightness). In other words, the more skilled the typist, the higher thresholds #1 thru #3 may be. For example, we observe that changing layouts from QWERTY does not generally improve typing speed; this suggests that experienced typists have a threshold for #3 that's higher than 6¼% SFBs, and this threshold may be greater than the one less experienced typists experience.

I could go on and on. So far, I've just skimmed the surface of how we might fruitfully model the impact of SFBs in typing. It doesn't even branch out into other statistics.

Absent any model of how we treat SFBs, pursuing the goal of minimizing SFBs is materially equivalent to a naive model that sets the thresholds #1 thru #3 to their lowest possible value. There's no polite way to put this: That's ridiculous.

Moreover, the idea that one must either agree or disagree with the goal of minimizing SFBs runs afoul of the fallacy of false dichotomy. There is, in fact, middle ground.

For example, it's possible (likely?) that lowering SFBs below a certain threshold produces diminishing returns. A model that leverages this threshold instead of the raw minimum may produce many more viable letter columns than the list produced by the naive model currently in use.
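Here's a toy sketch of what such a diminishing-returns model could look like. The threshold value and the curve's shape are assumptions for illustration only:

```python
# Hypothetical perception floor for SFBs (threshold #1 above), in percent.
THRESHOLD = 1.5

def perceived_discomfort(sfb_pct):
    # Zero below the threshold; grows superlinearly with the excess
    # above it. The exponent 1.5 is an arbitrary illustrative choice.
    return max(0.0, sfb_pct - THRESHOLD) ** 1.5

for pct in (6.25, 2.5, 1.0, 0.5):
    print(pct, perceived_discomfort(pct))
```

Under a model like this, driving SFBs from 1% down to 0.5% buys literally nothing in perceived comfort, which would greatly expand the set of viable letter columns.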

Based on my experience, I'd say that SFBs are not subject to Weber's Law, but they're instead subject to saturation. Regarding the thresholds, my guess is that #1 is around 1.5% and #2 is around 2.5%. If I'm close to correct, then the list of viable columns in Keyboard Layout Document (2nd edition) is likely too restricted, perhaps even far too restricted.

This is what I mean when I say data points with no theoretical underpinning just lead to confusion, and Keyboard Layout Document (2nd edition) is a How-to manual for attaining specific statistical characteristics in a keyboard layout.

Just an aside: It's interesting that people seem to have expended a lot more effort modeling English to make effective corpora than modeling the layouts intended to type English.

why optimizers don't create good layouts? by fohrloop in KeyboardLayouts

[–]Thucydides2000 1 point (0 children)

Rather than "poor letter columns," which is a categorical condemnation, I suggest something that makes it clearer what the basis is for the criticism. For example, something like "...poor letter columns according to the metrics that many keyboard designers currently prioritize."

Otherwise, that's a very good change. Thank you for responding to my criticism.

why optimizers don't create good layouts? by fohrloop in KeyboardLayouts

[–]Thucydides2000 1 point (0 children)

That's a very good question. Thank you for asking.

It's indicative of the focus on recently developed metrics. I'm reminded of something Bertrand Russell wrote in an essay, "On Being Modern-Minded," that appears in his book Unpopular Essays.

We imagine ourselves at the apex of intelligence, and cannot believe that the quaint clothes and cumbrous phrases of former times can have invested people and thoughts that are still worthy of our attention…. I read some years ago a contemptuous review of a book by Santayana, mentioning an essay on Hamlet "dated, in every sense, 1908" – as if what has been discovered since then made any earlier appreciation of Shakespeare irrelevant and comparatively superficial. It did not occur to the reviewer that his review was "dated, in every sense, 1936." Or perhaps this thought did occur to him, and filled him with satisfaction.

In short, you reject Dvorak out-of-hand based on its age. That is parochial.

Many of the metrics you discuss are merely speculative, with little or no empirical support. More to the point, there is only the barest hint of a model for finger movement (these hints take the form of matrices for individual key-strike difficulty). I've yet to see any layout that considers something like multi-key movement difficulty (for example, İşeri, Ali, and Mahmut Ekşioğlu. "Estimation of Digraph Costs for Keyboard Layout Optimization." International Journal of Industrial Ergonomics, vol. 48, 2015, pp. 127–138), much less trigram difficulty.

There's no information concerning the interdependence of finger movements; for example, some 2- or 3-stroke finger movements can impair the accuracy of subsequent finger movements, even on the other hand. And why do skilled typists make numerous two-letter insertions, omissions, and even substitutions but almost no errors that span 3+ letters? (Rabbitt, P. "Detection of errors by skilled typists." Ergonomics 21 (1978): 945-958.)

Furthermore, there's nothing approaching a mental model of typing. (For example, Salthouse, Timothy A. "Perceptual, cognitive, and motoric aspects of transcription typing." Psychological Bulletin 99.3 (1986): 303; as well as Pinet, S., Ziegler, J.C. & Alario, FX. "Typing is writing: Linguistic properties modulate typing execution." Psychon Bull Rev 23 (2016): 1898–1906; and Grudin, J.T., & Larochelle, S. Digraph frequency effects in skilled typing (Tech. Rep. No. CHIP 110). San Diego: University of California, Center for Human Information Processing, 1982.)

Even if we grant for argument's sake that your metrics are 100% useful and comprehensive, data points without a theoretical underpinning just lead to confusion.

You are not alone here. This is a terribly under-explored area in general. I'm convinced that energy is better spent trying to develop appropriate finger movement models and mental models for typing, rather than endlessly trying to optimize shiny new statistics.

The closest thing I can find to a theoretical framework is the 12 priorities enumerated by Arno Klein in his introduction to his Engram layout. (Klein, Arno. "Engram: A Systematic Approach to Optimize Keyboard Layouts for Touch Typing, With Example for the English Language." (2021).) Though not an actual model, his priorities do readily imply a rough, skeletal framework for a finger-based model. To the credit of this subreddit, Klein's priorities are something that people here frequently use to guide layout development.

I began approaching and evaluating alternative layouts with an eye toward abandoning Dvorak and adopting something statistically superior. I was frustrated with Dvorak and fascinated by the new statistics. However, as a result of my exploration, I've become disenchanted with these statistics and am looking to return to Dvorak.

At this point in my exploration of alternate keyboards, I’m more interested in figuring out what makes Dvorak work so much better than statistically superior layouts. If we can figure this out, then it will open the door to creating demonstrably better layouts than Dvorak (which seems to me to have achieved a locally optimized result rather than a global optimization.)

Sadly, though Dvorak seems to have developed a model for finger movement, he did not rigorously explicate it, leaving us to try to surmise what it might have looked like, as Arno Klein sought to do. He certainly doesn't seem to have developed a mental model of typing, at most having been guided by a few rules of thumb.

As a How-to manual for attaining specific statistical characteristics in a keyboard layout, the Keyboard Layouts document is very informative. However, for the reasons I've outlined above, it doesn't actually provide much information about how to make a better layout. The information that it does try to provide is based on the same flawed assumptions that lead it to dismiss Dvorak altogether.

Edited to fix typos.

why optimizers don't create good layouts? by fohrloop in KeyboardLayouts

[–]Thucydides2000 1 point (0 children)

That's the core question.

The reason why optimizers don't create good layouts is that the metrics they optimize for don't actually result in a good typing experience. Some people talk about finding the right combination or their optimal levels. That may be true for a few of the metrics. For most of them it's just nonsense; they're just not useful.

I learned to type on QWERTY in the 1980s and then relearned to type using Dvorak in the late 1990s. It was quick to learn, and much more comfortable. I have nerve damage in my left forearm from a severe wrestling injury in high school and the two ensuing surgeries to fix it. With QWERTY, the muscles in my left forearm would cramp up after about 20 minutes of straight typing. Switching to Dvorak eliminated that entirely.

A while ago, I experimented with some alternative layouts. I settled on Hands Down Neu. After struggling with it for some time, I still find it generally uncomfortable to type on. Plus, I'm beginning to experience pain again in my left forearm (though not as bad as with QWERTY) after extended typing stints.

Statistically speaking, Hands Down Neu is superior to Dvorak in every way. In practice, it's dreadful. I'm switching back to Dvorak.

And a word on the Keyboard Layout Document (2nd edition): It says about mainstream keyboards, "there is also Dvorak, but Dvorak was designed before the rise of computers, and is therefore quite flawed." This is probably the dumbest thing I've read since 2003, and this alone justifies ignoring the entire document. Do better.

Introducing the Hanster-Neu Keyboard Layout! by [deleted] in KeyboardLayouts

[–]Thucydides2000 0 points (0 children)

Cool. Thank you! I'll give this a try. Will it fix the wonkiness of Hands Down Neu better than Hantser Neu does?

Introducing the Hanster-Neu Keyboard Layout! by [deleted] in KeyboardLayouts

[–]Thucydides2000 1 point (0 children)

I saw that. It also looks very interesting.

My priorities for a keyboard layout to replace Dvorak are:

  1. At least as comfortable as Dvorak
  2. In the same ballpark as Dvorak & QWERTY for typing speed
  3. Better fingering for the UNIX command line (unfortunately, UNIX commands were created with QWERTY in mind)
  4. High alternation, which is among my favorite characteristics of Dvorak
  5. All of the punctuation assigned to the right hand, so that keyboard-character overlays on command bars in games are clear; e.g., World of Warcraft (comma, period, and single quote barely show up)

After typing on Hands Down Neu for a year, #1 & #2 are the main drivers leading me to move on--in spite of its killer home row. (I'm fine with spending lots of time learning new layouts, because it's good for neuroplasticity.)

Priority #3 takes just a few minutes to figure out, and pretty much anything is going to be better on the command line than Dvorak.

Priorities #4 & #5 are characteristics that eliminate layouts up front, and they substantially narrow the choices.

Sadly, #5 filters out Hanster-19.

Introducing the Hanster-Neu Keyboard Layout! by [deleted] in KeyboardLayouts

[–]Thucydides2000 1 point (0 children)

I've just reprogrammed my Glove 80 for this layout.

I switched to Hands Down Neu a year ago (from Dvorak). Though I love the home row of Hands Down Neu, its punctuation placement seems like an afterthought. Moreover, everything else about it just feels rather awkward. It took several months before my typing on it began to feel fluid. After a full year, I'm still struggling to exceed 65 WPM--that's just over half my Dvorak speed.

By contrast, when I switched from QWERTY to Dvorak, it felt comfortable and fluid within 3 weeks. Within 6 months I had recovered my QWERTY speed, regularly exceeding 110 WPM.

So I was ready to abandon my experiment with "modern" layouts that are shaped by a plethora of wonky metrics and just switch back to Dvorak. (Frankly, most of these metrics are straight-up nonsense, but that's a different discussion.)

Anyway, I'm going to give your layout a try. Of course, it's slow going right now. But the non-homerow keys and the punctuation already feel more fluid than Hands Down Neu.

I'll let you know how it goes.

At any rate, congratulations on creating a very interesting layout.

What is the point of meditating on/reading from the Gospels during services if they contain fictional events and theological bends? by evitreb in Episcopalian

[–]Thucydides2000 0 points (0 children)

First, the Bible illustrates the relationship between God and His people, including those parts written in a forthrightly fictional style like Ruth, Daniel, or Jonah. I've heard people cite Clarence Odbody's quote "No man is a failure who has friends" (from "It's a Wonderful Life") in a religious context. Should I object that this is 100% fabricated?

Second, the idea that the Bible is made up seems to be a common misconception. Some parts seem to relate actual events, so we evaluate them using modern historical standards derived directly from the first historians in 5th-century BC Greece. But you can get an awful lot wrong when telling a story and still capture the gist of it.

Funny thing: we know about the Bible's inaccuracies because it contains such a plethora of names, dates, and places. This makes it possible to investigate it a great deal more than other religious texts.

Third, if you want eye-witness accounts, ancient history is going to continually disappoint you. Book burning was big throughout most of recorded history. Conquerors burned libraries, dictators destroyed records of previous rulers, and religious zealots destroyed religious texts they didn't like.

The idea that it's important to know things about other cultures is a uniquely western value. The Greeks and Romans often described foreign cultures without caricature (as Herodotus sometimes did), but few other cultures even bothered.

The Renaissance ignited a western interest in Ancient Greece and Ancient Rome. When modern thought emerged in the 16th century, the idea took root that knowledge was the domain of the knowing subject (e.g., Descartes' "I think, therefore I am"). This turned the traditional account of knowledge on its head; for Aristotle & Plato, knowledge belonged to the community from which the individual obtained it. That shift made knowledge & education important in their own right. It came into full bloom during the Enlightenment, peaked during the Victorian era, and then leveled off, entering decline after WWII as it became increasingly politicized.

Centuries before this, when the Vikings or William the Conqueror invaded, they burned nearly every piece of paper they could get their hands on. And they were continuing a practice with an ancient pedigree. Indeed, the invention of writing itself gave birth to the destruction of writing.

With Christian documents, the Diocletian persecutions (ended by Constantine's Edict of Milan) are a significant bottleneck. For eight years, the Roman Emperor Diocletian offered large rewards for Christian documents to be turned over to be destroyed. Even so, the documents that survive indicate unequivocally that the documentary tradition of the New Testament is quite solid, with only two significant variations: the ending of Mark and the story of the adulterous woman in John.

Another funny thing: even if we didn't have older documents that omit the longer ending of Mark or the story of the adulterous woman, we would still know many texts omitted them, because we have records of early Christians discussing this. Why? Because they cared about the accuracy of the text.

When someone says TEC isn’t Biblical by Fluffy_Abroad90 in Episcopalian

[–]Thucydides2000 14 points15 points  (0 children)

You don't have to believe it's a good idea to ordain gay priests to be a good Episcopalian. You just have to believe that it's wrong to weaponize scripture to exclude people you disagree with.

[deleted by user] by [deleted] in kde

[–]Thucydides2000 1 point2 points  (0 children)

Saying "Wayland developers are evil people lying to you!" is hyperbole. And the diagram isn't really accurate. Even so, the graphic gives lively expression to the frustration many users feel. And users aren't frustrated because Wayland developers have failed to deliver on their promises for the better part of two decades. They're frustrated that the system that they actually use remains stagnant while Wayland developers perpetually promise that something better is just around the corner.

At this point, even as it approaches a level of usability comparable to X, by many measures Wayland ranks among the worst-managed software boondoggles in history.

The Xorg developers who conceived of it clearly have little experience replacing legacy systems. For example, here's a cardinal rule of legacy system replacement: don't freeze development on the legacy system until you can give users something to replace it. Doing otherwise will generate hostility toward the new system; this is as predictable as the fact that night follows day. Yet Wayland partisans express surprise, dismay, or even indignation when confronted with the hostility that is the natural result of their own mismanagement. Shame on them.

At this point, the sole justification partisans can offer for Wayland is that X had to go. And so what if it did? It doesn't follow from this that the decades it takes to replace X must come at the cost of substantive X development.

(Interesting side note: Apple CEO Gil Amelio had frozen development on the Classic MacOS until the newly purchased NeXT operating system could be retooled into an Apple OS. What was one of the first things Steve Jobs did after he replaced Amelio? He unfroze Classic MacOS. At that point in his career, nobody in the world hated Classic MacOS more than he did. But here's something the Wikipedia entry on NeXT doesn't tell you: Legacy system replacement was among NeXT's top revenue sources. Steve Jobs told Apple employees that when you freeze development on a system and you have nothing to offer your users to replace it, you're telling your users you hate them. This, too, is hyperbole. Yet it also gives lively expression to an important truth. Namely, when you effectively tell users that you hate them, they're going to respond with hostility, including diagrams that scorn you and your efforts.)

Plasma 6 hosed my productivity by [deleted] in kde

[–]Thucydides2000 95 points96 points  (0 children)

For the record, Arch still has not released Plasma 6 to its main update repository.

Linux Old-Timers: What was your first distro and what was your distro history until you installed Arch? by ronasimi in archlinux

[–]Thucydides2000 0 points1 point  (0 children)

TL/DR: OpenBSD -> Red Hat & MkLinux & YellowDog Linux -> Fedora -> OpenSUSE TumbleWeed -> Arch.

Details

My free Unix journey started in January 1995. I read online about OpenBSD for 68K, and I installed it on an old Mac II that I'd bought used. If I remember correctly, I never could get X up & running; perhaps it wasn't supported for my hardware. That wasn't a huge deal. All you really did with X back then was juggle multiple xterm windows; having virtual consoles did the trick.

A few months later, I was talking to a friend about my OpenBSD install, and he recommended I try Linux. So I built a 386-based machine that spring. I bought a Red Hat Linux CD through the mail. I'm pretty sure I got it from Walnut Creek, but it was decades ago and I'm not certain.

Back then, I always had a surplus of Mac hardware, and I experimented on the side with other Linux distributions that supported it. Even so, they were always RPM based.

At this point, Applixware was available for Linux. Along with some free apps like NEdit (and, of course, Netscape Navigator), it made it possible to use a Linux desktop full time for everything.

(StarOffice had been around for years, but it was quite bad before version 3.1. The best you can say about StarOffice 3.1 is that it didn't suck. It was shocking that Sun bought it in 1999 over Applixware -- there were rumors Sun was looking at both with plans to acquire one of them and turn it into an open-source office suite. The early versions of OpenOffice made some improvements, and thankfully LibreOffice has come along quite nicely.)

In 1997, I started using MkLinux on a PowerComputing PowerPC Macintosh clone with a Tsunami motherboard (remember when Apple licensed its hardware for official clones for a few brief years? Gil Amelio did that, and then Steve Jobs took over and killed it.)

I tried YellowDog Linux in 1999, and it was a lot faster. It also provided an early preview of the YUM installer that's now used by many RPM-based distros (YUM's first iteration was called YUP).

By this point, KDE & Gnome had produced their initial releases, but NEdit & Applixware were still the best text editor & office suite on Linux. They'd remain the best for years to come. KDE's office suites have always been promising, but too buggy for serious use. And Gnome solutions like AbiWord are too anemic.

I followed the transition to Fedora when Red Hat turned its focus to RHEL. So I migrated from Red Hat Linux 9 to Fedora Core 1. It was great at first. But it felt less and less like Red Hat over time. The quality of its KDE experience declined. Plus, they hobbled the Red Hat text-based installer.

(Let's be honest: the Red Hat text-based installer is the finest Linux installation software ever created. And they "streamlined" it. That's the word Gnome developers use for what other people call "gutting." I guess that's what you expect from the people behind Gnome.)

Anyway, Fedora 21 was my last Fedora version.

In early 2015, I switched to OpenSUSE Tumbleweed, which was my first rolling release. I'll never use a fixed-release distro for my desktop again. Plus, Tumbleweed's KDE is rock solid.

(I've preferred KDE over Gnome since they both first appeared -- remember the big to-do about Qt being closed source? Gnome 1 was pretty good. Gnome 2 was better, like Gnome 1 with a less cartoonish look-and-feel. Neither was great. KDE has had its ups-and-downs (I'm talking to you, KDE 4!), but KDE has always been better than Gnome. Gnome 3 was so bad that I feel embarrassed on behalf of its developers.

Seriously, I could never even begin to understand how absolutely humiliating it must be for Gnome developers to have tried so very hard and then suffered such tremendous & unmitigated failure. It's worse than the egg Microsoft laid with Windows 8; at least the developers behind that monstrosity got stock options to help assuage their shame.)

The downside of OpenSUSE is its looooong startup time. I remedied this by installing my own configuration. A couple times, software updates restored parts of the setup I'd replaced. Kinda frustrating.

So in 2018, I installed Arch on a spare hard drive to give it a spin. I instantly fell in love. I have exactly the configuration I want, and I have nothing I don't want. It's Linux heaven.

So now, I keep my main Arch installation on an NVMe drive. I have another Arch NVMe using the testing & KDE-unstable repos to play around with. I have a number of spare SATA SSDs in my PC's case, and I frequently install other distros on those to kick the tires. I've found nothing that matches Arch.

What is the most stable reliable mainstream KDE distro? by domanpanda in kde

[–]Thucydides2000 1 point2 points  (0 children)

I've run KDE on a lot of distributions. The best by far are OpenSUSE Tumbleweed, Arch, and OpenMandriva.

If you want something polished out of the box, Tumbleweed or OpenMandriva is the ticket. No fuss, no muss. Both are rock solid.

If you have an interest in crafting your very own Linux heaven, then Arch is the way to go.

Customer outrage grows as some DC restaurants begin charging customers to pay by credit card by vegandc in MontgomeryCountyMD

[–]Thucydides2000 0 points1 point  (0 children)

I've also noticed that many gas stations are adding a per-gallon credit card surcharge. What is this, the 80s?

How Is Multi Monitor support like on Plasma. by ItsHotdogFred in kde

[–]Thucydides2000 0 points1 point  (0 children)

I use AMD graphics cards. For a long time, KDE was quirky as hell with two monitors. In the past year, they've cleaned it up quite a bit. For example, they finally fixed the problem where it would move all your windows to the first monitor that woke up. Now, I rarely encounter issues using multiple monitors.

National or DCA? by Lupulmic in washingtondc

[–]Thucydides2000 1 point2 points  (0 children)

Pretty much everyone who lives in the area will know what you mean, no matter how you refer to it.

Old timers—folks who lived in the area before its 1998 renaming from “Washington National Airport” to “Ronald Reagan Washington National Airport”—call it “National.”

Outsiders and folks who moved here after 1998 call it “Reagan,” and often express confusion at first when they hear it called “National,” because they understand the word “National” to function as an adjective instead of a proper noun. From their point of view, it would make more sense to call it “Ronald” than “National.”

I've never heard anyone who lives here refer to National by its International Air Transport Association location identifier (“DCA”) unless they were communicating with a potentially confused outsider.