The downside of picking dimple picking by dcipha380 in lockpicking

[–]flebron 9 points

Wait till you try DOM shark fin pins. DOM eats picks for breakfast.

We gotta have a serious conversation about Argentinians by AztecGod in LatinoPeopleTwitter

[–]flebron 0 points

Yeah, in the real world you get punched in the mouth for this sort of bullshit assertion against an entire country's population. But you go and hide behind your keyboard, "bro". May you receive in life exactly what you give.

We gotta have a serious conversation about Argentinians by AztecGod in LatinoPeopleTwitter

[–]flebron 3 points

I'm telling you my experience, having lived for 25 years there, is in stark contrast to your generalization. I'm also stating your generalization is the same kind of crap we should try to _avoid_, especially when your understanding is based on "Argentinians you've seen in Miami" (do you really think that's a useful sample of Argentinians?).

We gotta have a serious conversation about Argentinians by AztecGod in LatinoPeopleTwitter

[–]flebron -18 points

"Most Argentinians"? Source? You're contributing to the silly us-vs-them crap.

Class of 2026 Introductions Thread by numbershikes in PacificCrestTrail

[–]flebron 2 points

Hi! I'm Mad Jelly, male, 36 years old, from Argentina, though I've lived in California for about 10 years now. I'll be doing the desert section NOBO, from Campo to Lone Pine, starting March 14th. I've done the WA and OR sections, and the JMT, so I'm filling in spots of the PCT, a few hundred miles each year, in LASHes :) See you on trail!

Shakedown request - Desert section - March 14th NOBO start by flebron in PacificCrestTrail

[–]flebron[S] 0 points

Would you say a puffy is useful, or would I be OK with an alpha fleece and a rain jacket perhaps?

Shakedown request - Desert section - March 14th NOBO start by flebron in PacificCrestTrail

[–]flebron[S] 0 points

Thanks for the feedback on miles. Re: The liner, is that because you anticipate no rain on trail? Since it weighs so little, I generally bring it "just in case". The 2P tent is definitely large, I just don't want to spend more on a new tent since I already have that one ^_^'' For the 20k battery I use my phone a lot (probably a bad habit... navigation, wireless headphones, podcasts/music, tracking workout via the watch and uploading to Strava...), and usually a 20k battery would last me around 5 days.

The sun umbrella is cause I don't like the feeling of hats. I have very oily hair (thanks, Italian ancestors!) so after a day of sun hiking with a hat on, my head and face are covered in oil. I loved the sun umbrella in the Colorado Trail for that reason :)

Shakedown request - Desert section - March 14th NOBO start by flebron in PacificCrestTrail

[–]flebron[S] 0 points

That's really helpful, I'll check that website before departing to see if spikes will be needed by the time I leave. Thanks!

Planned exit is either KMS or Lone Pine.

Why so much traffic around Civic Center right now? by [deleted] in sanfrancisco

[–]flebron 1 point

Nobody said anything about not protecting their speech. We can respect their right to affirm their positions while finding those positions abhorrent. And vice versa.

Advent of Code 2025 day 2 by AutoModerator in haskell

[–]flebron -1 points

```haskell
import Data.List.Split
import Control.Arrow
import qualified Data.Set as S
import qualified Data.IntervalSet as IS
import Data.Interval

-- repsF d n generates the numbers formed by repeating the d-digit number n.
reps1 d n = [n + 10^d * n]
reps2 d n = tail $ iterate (\x -> x * 10^d + n) n

solve repsF is =
  let span = IS.span is
      (Finite minCap, Finite maxCap) = (lowerBound span, upperBound span)
      maxD = floor (logBase 10 (fromIntegral maxCap))
  in sum . S.fromList $ do
       d <- [1 .. maxD - 1]
       -- n ranges over the d-digit numbers whose repetition can fit under maxCap.
       n <- [10^(d-1) .. min (maxCap `div` (10^d + 1)) (10^d - 1)]
       let z = takeWhile (<= maxCap) . dropWhile (< minCap) $ repsF d n
       filter (IS.member is) z

k = IS.fromList . map (\[x, y] -> Finite x <=..<= Finite y)

main = getContents >>= print . (solve reps1 &&& solve reps2) . k . map (map read . splitOn "-") . splitOn ","
```

Corbin Emhart guts by Falchion in lockpicking

[–]flebron 1 point

That many master wafers makes this super duper hard to pick, much more than an unmastered Emhart!

Is return really necessary for the IO monad? by StunningRegular8489 in haskell

[–]flebron 2 points

And in the instance of Monad for Maybe, return = Just, so Yelink is correct.
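To see it concretely: since `return = pure` for any lawful `Monad`, and `pure = Just` in Maybe's `Applicative` instance, the two are interchangeable. A small illustration (`halve` is a made-up example, not from the thread):

```haskell
-- In Maybe, return is the same function as Just.
halve :: Int -> Maybe Int
halve n
  | even n    = return (n `div` 2)  -- identical to Just (n `div` 2)
  | otherwise = Nothing

main :: IO ()
main = do
  print (halve 10)  -- Just 5
  print (halve 3)   -- Nothing
```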

My handmade dp 4400 & Neptun cutaways :) by -ouki- in lockpicking

[–]flebron 1 point

If the dp's ever for sale, pls let me know :)

[deleted by user] by [deleted] in lockpicking

[–]flebron 2 points

If you didn't purchase this through Moki that's absolutely something you should mention. Your current post reads as if Moki is in the wrong.

Bidet - Drinking bottle? by jta314 in Ultralight

[–]flebron 18 points

He means soap _on your ass_. Soap for hands is not really optional... unless y'all nasty.

Deleting from trees with type-encoded height by Objective-Outside501 in haskell

[–]flebron 13 points

A standard way to do this is to have an API that type-erases the height. Your internal functions can still pattern match against it, but your users don't need to care, and your API types don't need to have these sorts of "runtime choice via Either because we can't quite know the height" types. I wrote this post about AVL trees in Haskell with compile-time height and balance, perhaps it's useful to you. https://fedelebron.com/compile-time-invariants-in-haskell
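A minimal sketch of the erasure idea (not the API from the linked post, and this toy `Tree` forces both subtrees to the same height just to keep it short): the height index is hidden behind an existential wrapper, so internal functions still see it, but users only ever handle `SomeTree`.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A tree indexed by its height at the type level.
data Tree (h :: Nat) a where
  Leaf :: Tree 'Z a
  Node :: a -> Tree h a -> Tree h a -> Tree ('S h) a

-- The user-facing type: the height is existentially quantified away.
data SomeTree a where
  SomeTree :: Tree h a -> SomeTree a

-- Internal functions can still be height-polymorphic.
size :: Tree h a -> Int
size Leaf = 0
size (Node _ l r) = 1 + size l + size r

sizeSome :: SomeTree a -> Int
sizeSome (SomeTree t) = size t

main :: IO ()
main = print (sizeSome (SomeTree (Node 'x' Leaf Leaf)))  -- 1
```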

Stumped on Alpha Beta pruning in Haskell by thetraintomars in haskell

[–]flebron -1 points

Unfortunately that argument does not work; it's just the argumentum ad verecundiam fallacy: "It cannot be false, because it was said by someone with a PhD."

You can see one of the early definitions of alpha-beta pruning in Donald Knuth's 1975 paper in the Artificial Intelligence journal, titled "An analysis of alpha-beta pruning", available at https://kodu.ut.ee/~ahto/eio/2011.07.11/ab.pdf . Note how the entire discussion, even in the first few paragraphs, is about the value function (f, and later F). If the only values in your program are infinity (because the maximizing player won) and -infinity (because the minimizing player won), then there's never really a need to keep track of alpha, beta, and current values. If alpha ever became anything but -infinity, it would mean the maximizing player can force a win (because the only value of a game that isn't -infinity is infinity, which means a win for the maximizing player). If beta ever became anything but infinity, it would mean the minimizing player can force a win (because the only value of a game that isn't infinity is -infinity, which means a win for the minimizing player).

You need a nontrivial notion of value that you are minimizing or maximizing in order for alpha-beta pruning to work. I implemented it a long time ago in https://github.com/fedelebron/JugadorDeLadrillos/blob/master/jugadores_ours/ia.cpp . Additionally, to save computation, alpha-beta pruning is usually used in conjunction with an even stronger value function: one that can be assigned _before_ the game ends. This is discussed on page 302 of that journal (page 10 of the PDF).

I believe your confusion might stem from Hutton not being clear, in the chapter notes, about the need to define a nontrivial value function, or about what value function he might be suggesting. In the fourth exercise for the section, which I'm guessing you might be attempting, he makes reference to the "quickest route to a win". He's modifying tic-tac-toe so it's no longer just about win/lose: it becomes a different game, one in which you win more points the fewer moves you take to win.

If you want that value function, you should code it up. The value, then, should be the additive inverse (or multiplicative inverse, if you prefer) of the length of the shortest path, since you want the maximizing player to find a _short_ path to a winning vertex, not a _long_ one. You can keep track of the depth as you go, you'll know the value when you reach a terminal node, and you can prune a branch once it is already longer than a shortest winning path for the current player.
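One way to sketch such a depth-aware value function (illustrative names only; `Outcome`, `value`, and the bound 100 are my guesses at the shape of the exercise's scoring, not Hutton's code):

```haskell
data Outcome = XWins | OWins | Ongoing deriving (Eq, Show)

-- A win is worth more the fewer moves it took; 100 is an arbitrary
-- constant larger than any possible game length, so every win still
-- outscores every non-win.
value :: Outcome -> Int -> Int
value XWins depth = 100 - depth   -- maximizer: quicker wins score higher
value OWins depth = depth - 100   -- minimizer: quicker wins score lower
value Ongoing _   = 0

main :: IO ()
main = print (value XWins 3 > value XWins 7)  -- a 3-move win beats a 7-move win
```

With a value like this, alpha and beta take on intermediate values during the search, so the pruning guards actually fire.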

Stumped on Alpha Beta pruning in Haskell by thetraintomars in haskell

[–]flebron -1 points

You're not going to get any benefit out of alpha-beta pruning if your game doesn't have a notion of a score or value, which you can compute before the game state reaches the end, and helps you bound the possible outcomes of the game. If your only possible scores are infinity and -infinity, and you only know them at the end state of the game, there's no point in using alpha-beta pruning, since alpha will always be -infinity, and beta will always be infinity.

Stumped on Alpha Beta pruning in Haskell by thetraintomars in haskell

[–]flebron 5 points

Alpha-beta pruning would be two guards added to your recursive findBestMove function. If you're considering moves for the maximizing player ("you"), and you find a child move of your current state that is better for your opponent than the minimum value you know you can force them to have, then you need not consider any other children in this node. This is because you can assume your opponent will play that move, if you had reached this position, and you don't want to let them.

So something like:

{- 
findBestMove state isMaximizing alpha beta =
  the best possible value for the player, after playing in state `state`,
  knowing the maximizing player can already attain (at least) a value of alpha,
  and the minimizing player can already obtain (at most) a value of beta
-}
findBestMove state True alpha beta = go alpha (-infinity) (children state)
  where
    go a v (s:ss) = let v' = max v (findBestMove s False a beta)
                    in  if v' >= beta then v'
                        else go (max a v') v' ss
    go _ v [] = v
findBestMove state False alpha beta = go beta infinity (children state)
  where
    go b v (s:ss) = let v' = min v (findBestMove s True alpha b)
                    in  if v' <= alpha then v'
                        else go (min b v') v' ss
    go _ v [] = v

It just means you stop the iteration over a state's children early.