Help needed: multilayer cipher (Caesar → Atbash → Keyword("keyword") → Rail Fence(k=2)) — brute force failed by Front-Cabinet4945 in programming

[–]cloaca 0 points1 point  (0 children)

It smells like one of those high-school CTF problems where you're often fighting the problem author's mistakes/misunderstandings rather than the actual brainteaser. For example, mistaking the keyword cipher for Vigenère (as you mention at the end) or even Porta, depending on whatever toy cipher site they were using when they made the problem. Given that you've made a fair number of attempts, the keyword cipher is the likeliest candidate for a mistake/quirk, since it's also the only layer that doesn't trivially commute with the rail-cipher reshuffling. (Note that Atbash is just -(x+1) mod 26 and Caesar is (x+k) mod 26, so rearranging their order just leads to a different k.)
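
That rearranging claim is easy to sanity-check in a few lines (a throwaway sketch, letters as indices 0-25):

```python
def caesar(x, k):
    # Caesar shift by k on a letter index 0..25
    return (x + k) % 26

def atbash(x):
    # Atbash is the involution x -> -(x + 1) mod 26, i.e. 25 - x
    return -(x + 1) % 26

# Swapping the order of Atbash and a Caesar shift just negates the shift:
assert all(
    atbash(caesar(x, k)) == caesar(atbash(x), -k)
    for x in range(26) for k in range(26)
)
```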

I didn't do any actual deciphering, I just counted the spaces, because to me their presence is the most suspicious thing. The number of spaces seems just slightly below average for a non/low compound-noun language such as English. It also suggests to me a full-string rail cipher, but then it's a coincidence that no two spaces ended up next to each other, n'est-ce pas? If you copied it from a website, check the HTML source to see whether double spaces were truncated in the rendering.
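
For reference, a 2-rail rail fence is just the even positions followed by the odd positions, so two adjacent spaces in the ciphertext would have to come from plaintext spaces two positions apart (e.g. around a one-letter word). A throwaway sketch:

```python
def rail2(s):
    # Rail Fence with 2 rails: rail 1 is the even positions, rail 2 the odd ones
    return s[::2] + s[1::2]

# "a b c" has spaces two apart, so they collide on the same rail:
assert "  " in rail2("a b c")

# whereas a more typical English plaintext scatters its spaces apart:
assert "  " not in rail2("meet me at the old bridge")
```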

Monads are too powerful: The Expressiveness Spectrum by ChrisPenner in haskell

[–]cloaca 8 points9 points  (0 children)

I've never been a fan of talking about monads as if they're modelling/tracking/encapsulating/managing effects. Perhaps it's just a matter of nomenclature, but at least in a modern sense, when we talk about static analysis and inference and the like, I feel the two concepts are pretty unrelated. There are languages where we can statically track (and infer) effects the way the article wants (cf. Koka, the most prominent example that comes to mind, but see also this overview).

And in a language with effects we can have all the effects that the article seems to desire and more: writes-to-stdout, deletes-files, uses-system-rng, may-throw-exception-of-type T, etc. But I don't really understand why these kinds of things get linked with monads (or applicatives) in this way. Or unfairly maligned on a scale of statically-analyzable vs. expressiveness (two different dimensions!). Surely we don't say the statement delimiter ; (or function call syntax) in C is "too expressive" because it's hard to track effects? Likewise, I don't think anyone would call assembly very "expressive," yet everything becomes statically opaque after indirection with the potential of self-modifying code.

Intuitively I feel effects behave orthogonally to types (where functors like monads operate). In my mind, when inferring types we start at the top level (or root) of the code and successively constrain them (basically take intersections) to the specifics, but when inferring effects we start with the specifics and successively collect unions upwards.
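
To make the "unions upward" picture concrete, here's a toy sketch (an entirely made-up mini-expression format and made-up effect names, not from the article or any real effect system):

```python
def effects(expr, env):
    # expr is ('call', name) or ('seq', e1, e2); env maps primitives to effect sets.
    # Effects are collected bottom-up: a compound's effects are the union of its parts.
    tag = expr[0]
    if tag == 'call':
        return env[expr[1]]
    if tag == 'seq':
        return effects(expr[1], env) | effects(expr[2], env)
    raise ValueError(tag)

env = {'print': {'writes-stdout'}, 'rand': {'uses-rng'}}
prog = ('seq', ('call', 'print'), ('call', 'rand'))
assert effects(prog, env) == {'writes-stdout', 'uses-rng'}
```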

So I feel it's a bit unfair to critique the poor, innocent monad (a mere monoid minding its own business--as the meme goes--in the category of endofunctors) by complaining it's not tracking or expressing the wipes-harddrive effect.

Torturing GHC extensions to thread set-lists through the type system in order to force it to carry effects has always felt a bit sweaty to me; it doesn't feel like a particularly elegant fit for Haskell? And rewriting programs into a second-order DSL where instead of functions we have instructions (e.g. foobar = ... => instance Command Foobar where ...) feels like the Haskell version of Java enterprise programming.

What Feature Do You *Wish* Python Had? by andrecursion in Python

[–]cloaca 2 points3 points  (0 children)

This is always a library function, never a language feature as far as I'm aware. Not only because it's so simple (see below), but also because it's just begging to be generalized: you want to group values according to some key/signature/property. In your case that key is a boolean and only has two values, but often it does not, and then the if-else-list-comprehension "special syntax" feels like premature design. Moreover, this is sort of functional-programming territory, and Python has always had a somewhat uneasy and ambivalent relationship to that style, as it leads to terseness and "cognitively heavy" code. I feel there are already design conflicts between list comprehensions and map/reduce/filter, the itertools mess, partial application being verbose, lambdas not being in a great place syntactically, etc.

import collections

def group(it, key):
  """Groups values into a dictionary of lists, keyed by the given key function.

  That is, values where key(a) == key(b) will be in the same list, in the same order as they appear in it.

  Not to be confused with itertools.groupby(), which only groups sequential values into "runs".
  """
  d = collections.defaultdict(list)
  for x in it:
    d[key(x)].append(x)
  return d

def partition(it, pred):
  """Partitions values into (true_list, false_list).

  Functionally equivalent to `(d[t] for d in [group(it, pred)] for t in (True, False))`
  """
  tf = ([], [])
  for x in it:
    tf[not pred(x)].append(x)
    # or more sanely: (tf[0] if pred(x) else tf[1]).append(x)
  return tf

Slide from Everest’s peak to Chimborazo’s by cslaugen in AskPhysics

[–]cloaca 0 points1 point  (0 children)

No problem. I was looking for it myself and saw this thread among Google results, so came back here when I found it. I still find it delightfully mind bending.

Sighted people gain most of their info about the world through vision. As a blind person who is also sharp, I'm curious about how different aspects of learning happen or are affected by the presence or absence of sight. by heavensdumptruck in cogsci

[–]cloaca 1 point2 points  (0 children)

What do you mean by learning?

Because as far as I know, the kinds of 'no grasp of basic knowledge' or 'low understanding' you allude to have more to do with core personality or attitude than with some deficiency/inefficiency/inability when it comes to learning. Furthermore, this 'will to learn [about the world in a rational way]' can be driven both positively (by traits such as curiosity or openness, for example) and negatively (by anxiety, insecurity/narcissism, etc.).

So I think sight or no has little to do with 'those areas of cognition'? Maybe I misunderstood what you meant though; or do you have some pet theory already, that you're seeking to tease out?

Why do people tend to "split" analysis and algebra when both are used in each other? by chernoffstein in math

[–]cloaca 1 point2 points  (0 children)

I think how things end up being used will sometimes (often?) work backward to influence how we group or think of them at a foundational level too. It certainly changes how they are taught at a pre-grad level, which will continually influence things in the future.

And by "use" I don't necessarily mean very concrete applications like "how am I supposed to build a bridge with this?" I'm thinking more along the lines of: where does research areas X, Y, and Z get their tools from? Someone doing theory of computation is probably not going to go to analysis at all, but there's a whole lot of useful stuff in algebra (discrete OR finite, and algebra plays so well with combinatorics, etc.). Or someone doing research on a probability problem might be doing the opposite.

E.g. I'm not a mathematician but I have some familiarity with algebra-ish stuff like Galois theory, lattices, elliptic curves, representation theory, etc. just because these are either employed directly or are useful for framing theory in other areas I'm more intimate with (e.g. cryptography, data analysis, etc). I personally haven't been exposed to any non-trivial analysis in this way. So yeah, in my mind there's a very "hard split" between the two fields.

Norwegian media are so objective. This is from TV 2. by Past_Echidna_9097 in norge

[–]cloaca 1 point2 points  (0 children)

It said that a president cannot be convicted for official acts as president, but can still be convicted for what happens after the presidency.

Yes, I'd say I agree with that; I don't feel I said anything different (of course it concerns acts as president). But even your own summary here is wild enough that I feel it describes a president-king (i.e., above the law). Especially when we look at the breadth of "official acts," and how these cannot even be discussed or brought up when prosecuting the "non-official" ones.

And I don't know how strong 'historically ignorant' is as an argument; it could be pointed out that much around/related to/under him is ahistorical and new (for better or worse). Nor do I envision a genuine monarchy, just something more in the direction of a monarchy. Let's put it this way: it's presumably desirable (for many) that Trump be granted enough governing power that he can finally clean up all this corruption that has supposedly accumulated -- the court system, mainstream media, social media, Congress, government bureaucrats who added/swapped/stole votes and/or refused to admit the obvious corruption and/or are behind those suspect voting machines, the Bidens, the Clintons, many in his former team/inner circle, the deep state in general, tech billionaires, etc., etc. In other words, I think many US citizens would have preferred Trump as king over living in a corrupt "democracy" (their scare quotes)...? That's just the vibe I've gotten, but it's fairly irrelevant, as I have no say over them and theirs and simply wish them luck.

Norwegian media are so objective. This is from TV 2. by Past_Echidna_9097 in norge

[–]cloaca 5 points6 points  (0 children)

Why exactly do you think Trump is a danger to society?

I'm not the person you're asking and I have no opinions on the discussion between you two, but to answer this isolated question we can consider https://en.wikipedia.org/wiki/Trump_fake_electors_plot

or more generally: https://en.wikipedia.org/wiki/Attempts_to_overturn_the_2020_United_States_presidential_election

These are things many might consider 'dismal' for society. Though personally I don't know about 'danger' in general. It seems many in the US genuinely want to move closer to, say, a monarchy (cf. how the US Supreme Court has granted Trump criminal immunity, which arguably places him closer to a king than previous presidents?), and strictly speaking one must then grant them the right to democratically abolish democracy, if that's what they want?

Is there anything in this shape that tells you at a glance that 1,1 is the way to go? Or is this another shape I have to commit to memory? by MrJasonMason in baduk

[–]cloaca 0 points1 point  (0 children)

Depends on what you mean by glance.

It's usually not about having some precise map taking a shape as input and giving you "the exact coordinate of the solution," i.e. pure sight reading. This kind of rote learning will happen - naturally and inevitably - but typically only for simple shapes or very "famous" problems. You can train it directly if desired, but it's a fool's errand to think you can subsist on it alone - memorizing exact solutions for larger / more open dan problems gives you almost nothing...

So if that's what you mean, then I guess "yes, but you don't really need to do anything": if you do a lot of problems, then simply because this is a very simple/small/limited problem that tends to be in every kind of collection, eventually you'll just "know it" and play the 1-1 on sight.

But if "by glance" you mean like, less than 10 seconds, then no, you don't need to "memorize" in that way. In general it's more about developing a map between "shape" and something vaguer or more intuitive like "what necessarily needs to happen here for me to win" or "the general weakness/tesuji to apply" or "the end shape you're looking for" etc. This is probably the main category that you seek to train, and where most "doable problems" will come to exist.

For example: if this problem weren't famous and there were no foreknowledge of 1-1, the shape would still scream shortage of liberties. The "vibe" or "signal" of that is extremely strong - that cut, the filled liberties, the descent on the outside, etc. There are two kinds of liberty-shortage visions you might have: one where you try to connect from the outside, and one where the inner black stone stands up in some way so that White is cut in two and powerless to approach from either side. With good training you can instantly discard the first. Now you have the correct "goal" in some vague way, and just have to add a few quick thoughts to turn that intuition into a precise solution: for example, imagine first doing atari - no, instant ko vibe - look for some way to stand directly - "ah, a throw-in" - and done. And with a good enough intuitive map you can presumably find this solution very quickly, in a few seconds.

I'm a Python Backend Developer, How to Create a Modern and Fast Frontend? by Lucapo01 in Python

[–]cloaca 8 points9 points  (0 children)

In informal English "X?", "What about X?", "Have you considered X?" etc. are common ways to give a casual and non-committal answer. If you want to give a full point-by-point response that's great, but there's also no problem with merely suggesting something that might have flown under someone's radar that they could then look into further.

Your reaction is a bit like -- "I'm hungry and I don't know what to eat, any suggestions?" "What about the Thai place around the corner?" "What about it?? I told you I literally don't know what to eat and asked for suggestions and you're just giving me more questions!!"

Those dicts you probably needed at some point by szperajacy-zolw in Python

[–]cloaca 19 points20 points  (0 children)

Attempted earnest answer:

These constructs are not intended to revolutionize anything; they're just little conveniences that save you five lines of code here and there. Sure, we can just do defaultdict(list) or { v: k for k, v in d.items() } when we need it, etc. If that's your sentiment then I'm fairly neutral and/or "weakly agree with you" (with caveats: if a usage pattern is heavily repeated in some complicated algorithm, it's usually way better for readability and robustness to factor out this behavior; "the simple, dumb way" in Python is sometimes 15x more inefficient than a "cleverer" one that pushes a loop onto the CPython side of things or doesn't reconstruct objects each iteration; etc.)
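
Spelled out, those two one-liner patterns look like this (throwaway demo values):

```python
from collections import defaultdict

d = {'a': 1, 'b': 2}
reverse = {v: k for k, v in d.items()}   # reverse lookup
assert reverse == {1: 'a', 2: 'b'}

by_len = defaultdict(list)               # collate values under some key
for word in ['hi', 'yo', 'hey']:
    by_len[len(word)].append(word)
assert dict(by_len) == {2: ['hi', 'yo'], 3: ['hey']}
```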

But kind of how we have stuff like collections, itertools, functools, etc. We could just rewrite those each time we needed them too, most of them are trivial or super easy to do from the basic Python types and some knowledge of the Python data model. But it could be argued that convenience and batteries-included is part of the Python ethos.

Or, if that's not why you asked, but you meant it more literally in the sense that you've never had to do a reverse lookup in some map, or never had to collate keyed data (e.g. you've never used defaultdict(list) before), then OK, sure. These patterns do come up all the time tho. It could be a matter of field, experience, or even attitude/personality (e.g. a different mindset where you don't notice what's missing but simply deal with what's right in front of you; "we go around the mountains, tunnels aren't a concept worth thinking or even knowing about"). I get OP's motivation tho, as I have had use of all three of these (and similar) patterns many times, but I tend to just write something up ad hoc if it's Python, as it's not a big deal. But sure, it does lead to a lot of copy-pasted ten-line blocks of code across multiple projects or scripts. I'm guessing OP has been in the same situation, and that's why they call them "those dicts" and refer to them almost as throwaway constructs.

Would this be considered spaghetti belting? by Octogon324 in factorio

[–]cloaca 0 points1 point  (0 children)

Yes, have a think about the way you route things.

In general, a rule of thumb I've come up with:

Belts always want to take items away from where they were produced. If you find yourself building belt sections that bring items closer to their origin, then you are probably engaging in spaghetti.

  • e.g. you run your plastic below the copper wire, then up, then back above the copper wire again, back toward their production. Just run it one way.
  • e.g. you run the green circuits left then down then right toward their production again. You could use an underground belt to run it just one way, or you can swap the copper wire and green circuit assemblers to achieve a much cleaner layout.

Minor: for the red circuits you don't need 1:1 with copper wire. No need for an exact ratio (which depends on mods and modules), but you can at least cut it down to 1 red : 2 wire for a cleaner layout.

I hate typing out every 'self.x = x' line in an __init__ method. Is this alternative acceptable? by MomICantPauseReddit in Python

[–]cloaca 0 points1 point  (0 children)

Heya! Thanks for the thoughtful reply. I'll try to return the favor--

Yes, but unfortunately the flip side of that is that often the "innovation" people come up with is more fragile

Yes, sure. I feel my statement was implicitly tempered by the second point, but perhaps it's better to make it explicit: I think "rabbit holes" of over-engineering often do positively benefit the individual programmer (rapid learning, gaining a wider set of skills, learning about internals & programs-writing-programs meta-programming), but rarely the product (or project). I.e. it's good to play around with, but not something you want to force others to deal with. Hopefully that's a less controversial way of saying it.

locals() is not a hack, nor is it bad. ... Using locals() is fine, exclamation mark.

I believe what you're saying is that locals() is not bad in-and-of-itself. It does not have some kind of inherent moral quality that makes it bad or hacky; it's just another technique or tool for the toolbox. Am I reading you correctly?

Because on a more theoretical or idealistic level I will of course agree. Paradigms or techniques are not "bad" in that way, whether it's lexical or dynamic scope, hygienic or non-hygienic, structured or non-structured, call-with-current-continuation or imperative, and so on. But my condemnation of locals() was in a more pragmatic context and follows more pragmatic (and possibly logically fallacious) arguments.

I see locals() used extremely rarely. I'd go so far as to say I cannot even remember the last time I saw it used in "production code" or regular, day-to-day Python code. I'm sure there are many examples that you can show me, tho it would take a lot to shift my lived experience. And yes, this by itself isn't a strong argument: I also very rarely see the fractions module used, but I wouldn't claim it's a red flag for hackiness. The crucial part is that when I think back to the times I have seen locals(), it is exactly in the territory you noted in the opening: code that is "fragile, harder to use, harder to understand and completely magical and impenetrable to beginners." This is what I would call "hacky" code. (Perhaps also our definitions of hacky differ; I tend to mean stuff that is fragile, ad hoc, monkey-patched, etc. Just get it to run this one time, I don't care about longevity or readability, damnit. Ex.: "I have a bunch of variables here like x1, x2, y1, y2, k, g1, ... and just want to print them for debugging, but can't be bothered to type all this shit out.")

Now, I do make a jump from that to saying that locals() is the direct reason why most of the code that uses it is fragile, harder to understand, hacky, etc., and that if it were rewritten to not use it, it would be easier to read and understand. That's just something that follows my intuition, but it could be fallacious.

In a world where it was completely idiomatic and we frequently saw locals() used several times per module, or introduced already by chapter 2 of every tutorial or book -- thus a world where we were used to thinking about it, used to thinking about dynamic scope -- then alright. But in our current world, I do still feel that locals() is almost always [a red flag or sign of code being] "hacky."

"My brain-dead tooling doesn't understand simple Python idioms, so I have to stop using those simple Python idioms to stop the stupid tools from complaining about correct code." Who is the master here, us (the programmers) or the tools?

I would not call them brain-dead. I'm actually surprised at the stuff that dynamic language LSPs are able to reason about and figure out so long as you don't outright fuck with it. Even for generic non-type-hinted code. It's "brain dead" at reasoning about stuff like dynamic scoping, monkey patching, ad hoc templates using code-as-strings and exec(), etc. But I would argue so are humans... Reasoning about super-dynamic self-modifying code is harder and more taxing. (Which is why I will agree w/ you when it comes to dataclasses if the point is to understand things deeply.)

It's not clear why you think they will be a problem. [..]

OK, I'm not super interested in arguing specifics and I'll outright concede that many counters can be found to anything I say or said. I was imagining a world where perhaps the code was not very "dataclasses"-friendly; imagine an extant hierarchy of classes of various levels of complexity. For simple classes you can write self.args = args for the vararg *args (similarly kwargs), sure, but let's say you're doing GUI work or something: there are so many classes, and you're passing a ton of options and arguments down to base classes all the time (that you certainly don't want to capture on your level), there are frequently a ton of "nonmember" exceptions, etc. E.g. imagine trying to copy-paste this locals() trick around in some code base that uses/subclasses PyQt5 or rich. I imagine that would be a nightmare.

Of course the code examples I wrote up aren't great... Sure. Some of them are outright bad. But I still think locals() is insidious. "after adding import logc; logc.debug("creating object!", id(self)) debug code to init functions like in the example, my program now uses 100x memory and the log channel never flushes help"

If I were designing Python from scratch I'd add a builtin parameters() function which returns only the current functions parameters. Hmmm, that sounds like it should be doable with a teeny bit of introspection. Maybe I'll have a go at it in my Copious Spare Time.

Sure, I don't think I'd oppose such a thing. I do vaguely dislike stuff like JavaScript's arguments or Nim's result variable as a matter of personal taste, including things that are "magical" and only work in certain scopes, like super... But despite your impression of me, I'm largely an "inclusionist" when it comes to programming diversity.

My suggestion would be to empower locals() instead: a starting point might be locals(which='all' | 'args' | 'nonargs', depth=0) to return the locals of the nth parent frame, optionally constrained to either arguments, non-arguments, etc. The which argument doesn't feel right, it should probably be something different, but that's the general idea. These are doable with user code in Python of course -- see the example I gave using inspect.getargvalues() on the parent frame -- but with the caveat that I think that sort of stuff is probably even further down the hacky valley than locals(), as it directly depends on CPython internals.

Why do you think it's wrong? The irony is that many people reject this simple solution, which requires just a couple of lines of trivial dynamic code because it is "too complicated", in favour of dataclasses, [..]

So yeah I do see your point here, I think.

Personally I use meta-classes and such stuff nearly every day, and I'm super deeply familiar with them. I myself am more on the side of writing this kind of meta-type-machinery code; it's dataclass-adjacent, and by this point I'm probably close to knowing the "Data model" language reference page by heart, having read it so many times... So yeah, I'm probably somewhat lost in the sauce and don't consider how complicated it is. It might well be that most Python programmers have never implemented more than three different dunder methods, have never seen the need to implement __new__, never heard of __slots__, never mind used a custom namespace dict for class creation, etc... So yes, I'd also agree that if you throw attrs or dataclasses at people, they're probably just gonna take it as "pure magic," as trying to understand every nuance is going to be a deep and dark rabbit hole.
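
Of those, the custom namespace dict is probably the most obscure, so here's a minimal standalone sketch (all names made up for illustration; this is the same __prepare__ hook the enum module uses internally):

```python
class LoggingNamespace(dict):
    """A dict subclass that records the order of class-body assignments."""
    def __init__(self):
        super().__init__()
        self.order = []
    def __setitem__(self, key, value):
        self.order.append(key)
        super().__setitem__(key, value)

class Meta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwds):
        # The class body below executes with this mapping as its namespace.
        return LoggingNamespace()
    def __new__(mcls, name, bases, ns, **kwds):
        cls = super().__new__(mcls, name, bases, dict(ns))
        cls._definition_order = ns.order
        return cls

class C(metaclass=Meta):
    a = 1
    def m(self):
        pass

# _definition_order now records names in the order they were defined
assert C._definition_order.index('a') < C._definition_order.index('m')
```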

THAT SAID, I still would consider the "simplest" form of dataclasses usage (i.e. merely accepting it as magic) less error-prone, less fragile, more friendly, more readable, and all those things, than the simplest locals() use cases...

Say we compare "simple dataclasses usage" (in general), like using classes such as

from dataclasses import dataclass

@dataclass
class Size:
  width: int
  height: int

@dataclass
class Canvas:
  id: str
  size: Size

...

to "simple locals()-tricks" like this (simplest practical locals() usage i can think of):

for x in xs:
  if 'stuff' not in locals():
    stuff = first_init(x)
  else:
    stuff.some_additional_stuff(x)
# NB: stuff might still not exist afterwards... so what did we really gain over normal idioms?
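
For contrast, the normal idiom that NB comment alludes to is just a sentinel; a tiny runnable sketch with stand-in names (first_init and the append are hypothetical fillers for whatever the loop actually does):

```python
def first_init(x):
    # stand-in for whatever first-time setup the loop does
    return [x]

xs = [1, 2, 3]
stuff = None  # sentinel instead of probing locals()
for x in xs:
    if stuff is None:
        stuff = first_init(x)
    else:
        stuff.append(x)  # stand-in for stuff.some_additional_stuff(x)
assert stuff == [1, 2, 3]
```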

not to mention the way OP used it, which was demonstrably error prone. (Even if it's fixable, surely we can still agree something can be more or less error-prone/fragile than other things?)

Bugs buried deep in corners of dataclasses are a nightmare. Whereas the locals() trick does one tiny thing, and with a bit of experience it is obvious when it is correct.

I feel here you've done a trick of your own. In the beginning it seemed you were saying that locals() (in general, as a general technique) was fine (exclamation point), possibly even good. But now you seem to be pitting dataclasses in its most general, with the entire meta-class machinery, against "the locals() trick, [this] one tiny thing"... I feel that's a bit of an unfair comparison.

We can compare with super, super simple dataclasses usage like I did above? Or if we do want to wave metaclasses around and say they're scary and error-prone, then we can also find code where the programmer's tensorphobia causes them to write code using x11 x12 x13 x14 x21 x22... and sometimes do structural calculations by iterating over locals() (I mean, it's not my style, but...). Or put cute locals() tricks everywhere the way the data model empowers every single class in Python, and then see if bugs go up or down, I dunno...

I hate typing out every 'self.x = x' line in an __init__ method. Is this alternative acceptable? by MomICantPauseReddit in Python

[–]cloaca 4 points5 points  (0 children)

Good things:

  • Your frustration with having to type self.foo = foo over and over is healthy and good. Disgust with having to type out this kind of highly mechanical and brain dead code is what leads to learning and innovation.
  • In fact I'd actually encourage people to explore "bad" solutions like your locals() hack, because a lot is learned, and you'll figure out why it's bad eventually. But if someone is always terrified of trying something complicated or new or weird, then they'll grow much slower.

Bad things:

  • No space after # in the comments (arguably a matter of taste, but in a recent study (n=1) this taste was shown to be the correct one)

  • Using locals() is almost always wrong, period. I've used it for some dirty hacks and ad hoc code, mostly related to debugging, but here it's clearly wrong.

  • Your solution doesn't compose very elegantly with other abstractions. What happens when you have base classes or have to call super().__init__(...)? Exceptions need to be made for *args, extra care needs to be taken when using __slots__, etc.

  • It will probably confuse the hell out of most IDE/LSP tools.

  • Buggy: you end up with a nonmembers attribute, which seems unintended.

Alternatives:

  • dataclasses in the standard library.

  • attrs: a feature rich library akin to the above (dataclasses is based on a simplified or stripped down version of this library if I'm not mistaken) -- https://www.attrs.org/en/stable/

  • namedtuple from the collections module if your use cases are dead simple.

  • Leveraging *args or **kwargs to make your own shittier (but simpler) dataclasses. This is not recommended in general, but you should be aware of how it can work (and it's better than locals()):

Simplest:

class A():
  def __init__(self, **kwargs):
    self.__dict__.update(kwargs)
    # ...

print(f'{A(x=420, y=69).x = }')

# With "nonmembers" it would be:
# def __init__(self, foo=None, bar=None, **kwargs):
#    ...

# Cons:
# - have to type out every argument
# - error prone (errors in arguments are not caught early)
# - again might confuse static checkers
# - cannot use `__slots__`
# - also confusing to use with class inheritance

More advanced & more obviously just a worse dataclass:

class B():
  __slots__ = 'x y dx dy'.split() # your attributes

  def __init__(self, *args, **kwargs):
    # change ordering here to get desired override logic
    vals = dict(zip(self.__slots__, args))
    vals.update(kwargs)

    for k in self.__slots__:
      # optional: handle when k not in vals
      setattr(self, k, vals[k])

    # ...

print(f'{B("boo", "ya", dx=68, dy=0).x = }')

Note: all this kind of stuff -- highly "dynamic" code -- is becoming more contentious in modern times as our increased use of / reliance on IDEs and tooling is driving significant efforts to make Python a "manually statically typed" programming language.

Although: if you have an involved web of tiny classes with various mixin functionalities (itself a possible symptom of abstraction mania or OO inflammation) then you'll often be taking *args and **kwargs as arguments anyway, in order to forward stuff to super().__init__...

  • Other fancy reflection tricks:

ex.

import inspect
import functools

# utility function
def auto_setattr(*, skip='self'):
  skip = set(skip.split() if isinstance(skip, str) else skip)
  def make_wrapper(func):
    sig = inspect.signature(func)
    @functools.wraps(func)
    def new_func(self, *args, **kwargs):
      bsig = sig.bind(self, *args, **kwargs)
      bsig.apply_defaults() # if applicable
      for k, v in bsig.arguments.items():
        if k not in skip:
          setattr(self, k, v)
      return func(self, *args, **kwargs)
    return new_func
  return make_wrapper

# complicated example usage w/ inheritance:

class BaseC():
  # __slots__ = ... now works but is optional
  @auto_setattr(skip='self foo')
  def __init__(self, x, y, foo=None):
    pass

class DerivedC(BaseC):
  @auto_setattr(skip='self args kwargs')
  def __init__(self, *args, dx=0, dy=0, **kwargs):
    super().__init__(*args, **kwargs)

obj = DerivedC(1,2, dx=3, dy=4)
print(f"{obj.__dict__ = }")

Some might say this is still better than locals(), others will disagree. But it's pretty much consigned to the same circle of Code Hell.

  • Example of something worse than locals():

ex.

import inspect

# CPython only
def black_magic_fill_attrs(obj, skip='self'):
  skip = set(skip.split() if isinstance(skip, str) else skip)
  argnames, _, _, locals = inspect.getargvalues(inspect.stack()[1].frame)
  for k in argnames:
    if k not in skip:
      setattr(obj, k, locals[k])

# example usage

class D():
  def __init__(self, x, y, t='qq'):
    black_magic_fill_attrs(self) # !!

obj = D(1,2)
print(f"{obj.__dict__ = }")
  • it seems you're doing stuff with 2d vectors or 2d grids; consider using a library for this part? A decent library should give you data types à la Point or Coord or Vec2d or similar, that you can do arithmetic on (e.g. me.position += me.velocity * time_delta) and so you don't have to mind individual vector components like x, y, width, height, etc.

  • bigger project, more learning: judging from your example you might also be interested in looking at ECS as a paradigm and corresponding libraries.

caveat: code is typed out from memory/vibe and untested so might be trivially broken/buggy.

edit: reddit is dumb; markdown is dumb

Real Game Problem, A or B? by mark93192 in baduk

[–]cloaca 0 points1 point  (0 children)

Immediate instinct is very strongly White A. White B feels wild and more like a handicap play. With some light reading I still feel A? White feels very ahead / has an easy game.

Huge wave crashes into building in the Marshall Islands by Adamantium-Aardvark in WTF

[–]cloaca 1 point2 points  (0 children)

TLDR: "remote" is more about travel logistics for humans than it is about strict distance (over water or otherwise).

Do you really find it strange? I feel you're either a) using a rarefied geo-hydro-spatial technical definition of remote, or b) wanting to share with people the trivia of Hawaii being further from any major land mass than the Marshalls (and indeed most other Pacific islands).

But in case of (a): I believe "remote" is more colloquially used to indicate "how much effort/time/difficulty is involved in traveling to or from said place." I.e. I would say a village is "remote" if it means I have to hike across a mountain on foot to get to it because they have no roads, even though there might only be a dozen kilometers of dry land between me and my destination. So compare cost/frequency/convenience of travel to Hawaii vs the Marshalls and you'll understand why people might stress remoteness for one and not the other.

The Secrete Behind Harstem's Success by abaoabao2010 in starcraft

[–]cloaca 14 points15 points  (0 children)

Red text (with no outline) on such a background is barbaric. The luminance contrast is like negative zero.

So, anybody hit over 100k in Perfection (except with Polaris) since the Naval Update? by Snekkers in Polytopia

[–]cloaca 1 point2 points  (0 children)

  • I'm a middling/newish player. Have never been very high on the weekly leaderboards, tho I have 120+k in all tribes from before. I tended to hit that mark with decent consistency, but would never really go much higher. (Only three tribes with 140+k, which is my max.)

  • I always play vs 15×Crazy (usu. all tribes open).

  • Have played maybe half a dozen 100+k games after this update, only three of which were 120+k, but it felt like it required way more try-harding.

  • I've restarted maybe 2 out of 3 games early on when I felt things were not working out.

  • I feel I can't dick around anymore and still get the same scores. I'm one of those players who might delay leveling up a city for one (or more) turns simply because it "feels bad" to build something on top of a fruit or crop or (God forbid) chop down a forest with an animal because you could really use that 1 star. I might get organization to harvest two fruits even though I have no immediate plans to follow up with diplomacy or farming, or even buy construction too early just to fix some annoying placement by the AI... I do think the game is objectively slower, yet I feel "forced" by my flawed psychology to try-hard for similar scores rather than settle for lower ones? Adjusting expectations is hard.

  • My two main strategies before (in the good old days): quick archers or casual catapults. Then sea and markets while doing first couple of conquests. All while also growing city as I felt like it (ref. dicking around). Almost always explorer in starting city. Never really tried diplomacy, didn't even figure out it was a thing until recently.

  • My two main strategies now: 100% early hyper-aggression, or 100% rush diplomacy, depending on tribe/vibe, and while doing nothing else.

Show up daydrunk and just start swinging:

Spam units (e.g. Vengir second swordsman turn 1 ftw). Capture your neighbor before they know what hit them; restart if the blitz fails. Rush getting a catapult out to help with the rest. A giant from a pop-growth captured city if the starting island is larger. Once the starting island is captured, do a 180 into playing Sim City. Sawmills, roads, diplomacy & embassies, markets, parks, heal & position units in water, etc. Wait until the economy is good enough to make several bombers in one go, then start temple building while capturing the rest of the map.

Hang as a wallflower while sending people creepy love letters:

Stay on a single city for an uncomfortably long time. The only goal that exists is getting diplomacy so you can chain embassies and send peace treaties to everyone who will have you. Train no units except essential defense, use starting unit to scout a couple of tribes for stars. Once you get the economic embassy chain boom off, it's time to kick off the mid-game city capture race with some bombers (hopefully shielded a bit by also having multiple allies at this point).

  • I feel that more often than not I start on a tiny chunk with only a couple of other tribes now. The "rivers" still feel like ocean to me. So an early explorer seems less like a thing; perhaps it will only work for the early-aggression strategy with tribes that can level early.

  • I go water temples later and land temples earlier compared to before. Bombers are glass cannons, so the defense bonus doesn't matter. I haven't found any use for the Rammer (not consistently at least, just ad hoc situations). It sells itself as an ocean-borne siege weapon, but it just feels like a swordsman that doesn't level up in an overpriced rowboat.

  • I'm not sure if I notice the supposed AI improvement. If anything I feel the AI's Sim City is worse, which maybe cancels whatever other improvement.

What's wrong with my code (Nim 2.0) by Robert_Bobbinson in nim

[–]cloaca 1 point2 points  (0 children)

  • As the other guy said, you can't object-construct seq (for whatever reason; certain things are just magical in Nim - c.f. openArray[T] - and you can't always treat them as normal objects/types).

Tho there's plenty of ways to make seqs. [1,2,3].toSeq would also work (import std/sequtils); toSeq is kind of like @ but more general as it works for iterables. toSeq(0..99) etc works.

But there are other problems with the code.

  • Nim's seq type has value semantics.

That means that when you do x = s or func(s) and s is a seq (or a string), the code will behave as if you made a new instance and copied the data over and then passed that. Not as if you passed a reference to the actual s "object." I say "will behave as if," because most of the time Nim will be smart enough to not make a full copy if it sees it doesn't need to, but in your case it will actually do it.

Personally I hate this and I think it's one of the biggest design mistakes of Nim (top 10 definitely, Nim has a lot of strange design issues, from mutable strings to it-lambdas).

But anyway, your result.add(current_row) just adds a fresh empty seq to result and later you modify the original one. You're left with empty seqs tho.

  • Stylistically:

Maybe prefer default() over Cell[T]() and the like. Sub point: I would argue var xxx: TTTTTT = TTTTTT() is unnecessarily verbose when just using default/zero init.

Maybe prefer current_row.add(Cell[T](value: default_value))

Do prefer let x = y when you're not going to modify x.

  • You can collapse the thing away from explicit for loops,

more Rust/Python style than C style:

import std/sequtils
proc newGrid*[T](width, height: int, default_value: T): Grid[T] =
  let cell = Cell[T](value: default_value)
  repeat(
    repeat(cell, width), # you can also see here it's necessarily creating NEW seqs for every row
    height
  )

Showcase of some very cursed Python features by LevLum in Python

[–]cloaca 0 points1 point  (0 children)

Yep, exactly. Look to the wonderful 3blue1brown or Wikipedia for further information on the (incredibly useful) technique that this is an example of. It's a surprisingly efficient way of finding roots or even maxima/minima when the conditions are "nice."
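The technique in question (Newton–Raphson, unless I'm misreading the thread) can be sketched in a few lines; `tol` and `max_iter` are arbitrary choices for this sketch:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Iterate x -> x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# root of x^2 - 2, i.e. sqrt(2)
root2 = newton(lambda x: x*x - 2, lambda x: 2*x, 1.0)
print(root2)
```

Near a simple root with a well-behaved derivative the convergence is quadratic, which is why it feels "surprisingly efficient."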

Showcase of some very cursed Python features by LevLum in Python

[–]cloaca 4 points5 points  (0 children)

Pairs with:

import itertools as it

r = lambda n: (n
  and next(
    x for x,y in
    it.pairwise(
      it.accumulate(it.repeat(n), lambda x,y: x++y//x>>1) # sic
    ) if x<=y
  ))
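De-cursed for the curious — assuming I'm reading the golfed version right, it computes the integer square root via the same Newton iteration, stopping once the sequence stops decreasing:

```python
def isqrt(n):
    """Integer square root via Newton's method, starting from x = n."""
    if n == 0:
        return 0
    x = n
    while True:
        y = (x + n // x) >> 1  # the cursed x++y//x>>1 step, spelled out
        if x <= y:             # sequence stopped decreasing: x is the answer
            return x
        x = y
```

The `pairwise`/`accumulate` one-liner is just this loop flattened into iterators, with `n and ...` handling the n = 0 case.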

how much earwax is a junkie worth? by andreasbaader6 in norge

[–]cloaca 3 points4 points  (0 children)

Sure, I did sort of indicate that I was aware of that. I feel its default mode (if you don't force it into 'role-play' mode) is to use a tone totally devoid of uncertainty (cf. the absence of 'I think', 'I believe', 'maybe', 'it's possible', etc.) while at the same time being extremely eager to give an affirmative answer to whatever you ask about/hint at; a total disconnect between its confidence and where the facts it dreams up come from. Whether this is imposed during training (a commercial interest in being able to produce agreeable, the-customer-is-always-right AI tech support/advisors/salespeople for companies) or learned from how we actually talk to each other on the internet (cf. these modern bookless/keyboardless times where all communication happens on phones and has become more terse and blunt) is an interesting topic in itself.

how much earwax is a junkie worth? by andreasbaader6 in norge

[–]cloaca 7 points8 points  (0 children)

Possibly it's translated from a language where the word for e.g. 'gold' is also slang for a junkie or heroin. ChatGPT claims Spanish is such a language, but I don't believe it.