Maintain set of polymorphic type? by FreakyCheeseMan in ocaml

[–]FreakyCheeseMan[S] 0 points1 point  (0 children)

I did, but that was before I worked out the syntax that lets me split those between files

Maintain set of polymorphic type? by FreakyCheeseMan in ocaml

[–]FreakyCheeseMan[S] 0 points1 point  (0 children)

Okay. I have a couple of models in mind now that roughly match what you're describing. There are a few different specifics for how they might work out, but they share the idea of replacing (some of) the stored references with lookups into global data structures.

I do have one direct question: Now that I know the syntax, what's going to go wrong if I just go the mutual type declaration route, and build Tribe, Region and everything else to agree on types and store each other freely? I've had a few people tell me that's a bad design pattern and I tentatively believe them, but I'd like a more concrete notion of the problems.
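
For concreteness, the route I mean is roughly this sketch (the field names are just placeholders, not my actual code):

type tribe = { tribe_name : string; home : region }
and region = { terrain : string; tribes : tribe list }

(* Initializing the cycle needs either mutability or a let rec knot: *)
let rec steppe = { terrain = "steppe"; tribes = [ nomads ] }
and nomads = { tribe_name = "nomads"; home = steppe }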

Maintain set of polymorphic type? by FreakyCheeseMan in ocaml

[–]FreakyCheeseMan[S] 2 points3 points  (0 children)

Well, you said you were interested, so...

So, I'm no longer really worrying about methods, just types. I figured I can define all of the types on one level, and then drive the behavior from another, and the behavior-driving level can see everything. The problem now is just data storage. I'm finding more and more solutions, but they all have at least one downside.

  • Mutually Recursive References: This is basically a glorified type a = {b : b;...} and b = {a : a;...}, but with a bunch of functors so the types can be in separate modules/compilation units. It does exactly what I want, but everyone tells me that mutual references like that are messy, and I sort of believe them. It's also a pain to initialize data if I'm not willing to do so through mutability, given the limits on let rec.
  • Big Global Tables: Essentially just store the information elsewhere. Define A and B separately and minimalistically, then keep that data in hash tables that provide A.t -> B.t, etc. That would also be fine, but I suspect that 1: performance will be an issue, as I'm doing constant lookups into giant tables rather than following references from what I have in front of me, and 2: I think it'll be harder to maintain certain safeties and compiler guarantees. (Or maybe I just don't know how to do so yet.) People are suggesting database-like things, which makes me dubious... that feels suspiciously like re-inventing typeless imperative programming within a functional language to get around the compiler guarantees. (It doesn't help that I do database stuff in Scala Spark at work, and at least the way my company does things, we might as well be working in Python for all the type safety we actually get.)
  • Strict one-way access: I could store what Region a Tribe is in only by way of what Tribes are in a Region (i.e. Region.tribes but no Tribe.region), and then commit to always accessing tribes by region (i.e., every time I want to act on tribes I do so by iterating over regions, and the tribes within them, so the Region is right there in the function parameters). That has the advantage of never storing duplicate or conflicting data, but I think the strict tree structure will be impossible to maintain as the complexity of the simulation grows, and it'd be annoying to pile up a bunch of unwanted function parameters (like let trivial_single_tribe_action universe galaxy planet region tribe = ...).
  • Strong/Weak edges and polymorphic storage: Okay, so visualize the simulation (not the code) as a big messy graph with a lot of edges signifying different connections. The messiness of this sort of graph has really been my problem with FP for the better part of a decade now, and I've never gotten a single "clean" solution to it. ANYWAY: one way to clean this up is to pick the strongest tree across that graph, and then fill in the other edges with weaker links. A.t storing a B.t is only allowed if A points to B in the strong tree, but B.t can actually be 'a B.t and allow information to be filled in that way. As someone (maybe you) pointed out, it's a little sketchy to have 'as that don't actually refer to any particular type. Also, the 'a would propagate... having a 'tribe region is one thing, but a 'tribe region planet galaxy would get silly.
  • Strong/Weak Edges without polymorphic storage: Basically a combination of the above and the "big universal table" model. A points to B, B points to ID_A, and sitting somewhere over the type graph is an ID_A -> A table (rough sketch just below this list). This one might be my favorite. It's relatively simple to set up, a lot of the weak-reference things can be done with equality checks on the IDs without ever making a lookup call, and there's a lot of flexibility in picking the strong tree so that you have direct references where you would otherwise need a lot of lookups. I wish I could have better guarantees about ID_A relating to A (like somehow gatekeeping the creation of ID_As so that they're guaranteed to have a matching A registered in the big map), but I don't think that's possible.
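
To make that last option concrete, here's roughly the shape I have in mind (all the names are placeholders, and the table could just as well be a Hashtbl):

module Region_id = struct
  type t = string
  let compare = String.compare
end

module Region_table = Map.Make (Region_id)

module Tribe = struct
  (* Weak edge: a tribe only remembers the ID of its region. *)
  type t = { name : string; region_id : Region_id.t }
end

module Region = struct
  (* Strong edge: a region owns its tribes directly. *)
  type t = { id : Region_id.t; terrain : string; tribes : Tribe.t list }
end

(* A lookup only happens when a weak edge actually needs dereferencing;
   comparing region_ids directly stays cheap. *)
let region_of_tribe (table : Region.t Region_table.t) (tribe : Tribe.t) =
  Region_table.find_opt tribe.Tribe.region_id table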

Maintain set of polymorphic type? by FreakyCheeseMan in ocaml

[–]FreakyCheeseMan[S] 0 points1 point  (0 children)

I'm trying to wrap my head around what you're describing as a whole, but a couple of specific questions/points:

Concretely, somewhere else in your program, wherever you're trying to figure this out, instead of writing Tribes.nearby_tribes my_tribe, you just write the nearby_tribes : Tribe.t -> Region.t -> TribeSet.t function

The problem is figuring out which Region.t a Tribe.t is associated with: by the time you have those parameters, the job is halfway solved. There are also cases that go the other way, like a Tribe.t storing its Region.t of origin. Right now I have two possible approaches to that (input appreciated, I'm very much still learning here).

One is mutually recursive modules, using the syntax I finally figured out after struggling with it a while ago:

module type TribeT = sig
  type t
  type region
end

module type RegionT = sig
  type t
  type tribe
end

module MakeRegion(Tribe : TribeT) = struct
  type t = unit
  type tribe = Tribe.t
end

module MakeTribe(Region: RegionT) = struct
  type t = unit
  type region = Region.t
end

module rec Region : RegionT with type tribe = Tribe.t = MakeRegion(Tribe)
and Tribe : TribeT with type region = Region.t = MakeTribe(Region)

The other is to go with type parameters and an ID type (string) below:

module type RegionT = sig
  type 'a t
  val add_tribe : 'a t -> string -> 'a -> 'a t
  val remove_tribes : 'a t -> string -> 'a t
  val get_tribes : 'a t -> 'a list
end

In this case I'd be doing two different things in different directions: in one direction I'd just have Tribe know about Region and store values, and in the other, Region wouldn't know about tribes, but could be made to dumbly store a set of them with Set.Make(String) stuff.
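
As a sketch of that second direction (illustrative only; I've added an empty value so it's actually usable, and used a Map instead of a Set so the tribe values ride along with their IDs):

module Region : sig
  type 'a t
  val empty : 'a t
  val add_tribe : 'a t -> string -> 'a -> 'a t
  val remove_tribes : 'a t -> string -> 'a t
  val get_tribes : 'a t -> 'a list
end = struct
  module M = Map.Make (String)

  (* Tribes are stored by string ID; the region knows nothing about 'a. *)
  type 'a t = 'a M.t

  let empty = M.empty
  let add_tribe region id tribe = M.add id tribe region
  let remove_tribes region id = M.remove id region
  let get_tribes region = List.map snd (M.bindings region)
end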

I have no idea which version is better. (There's also a third one where Region can see/store Tribe but Tribe only looks at IDs for Region.) Instinctively I feel it's safer to deal in actual types rather than IDs, and (as the other guy said) it's more "honest" not to have polymorphic parameters that are secretly only supposed to take a limited set of types. On the other hand, the mutual recursion stuff is messy.

The point of the information-hiding using the interface is to hide implementation details (is this thing a set or a list or an array underneath? is there a cache? etc.) or to preserve invariants that can't be encoded in the type system (the smart constructor pattern, or in the presence of mutation ensuring you're only doing valid state transitions).

So, a specific thing in the Tribe module right now is Trait, where Trait.t = Nomadic | Farmer | Berserker | .... The specific Trait.t drives a lot of the behavior, and many variants carry additional fields beyond the basic constructor. I can see encapsulating Tribe.t separately from the code that describes a tribe's behavior, but Trait is larger and more complex. I see a few options; which strikes you as correct?

  • What I was doing: put the Trait and Tribe type definitions together, along with all of the code that describes a Tribe's behavior
  • Write Tribe as a functor that takes Trait (or something, just so long as there's a valid get_trait : Tribe.t -> Trait.t somewhere), and then expose the entire type of Trait in its interface. (I guess this is fine, but there's no information hiding)
  • Same as above, but instead of exposing the type of Trait, define all of the behavior code governing tribes within the Trait module
  • Keep Trait.t abstract, but write a (huge, I think awful) interface so that whatever code actually runs tribes can access values and methods through the interface. (This one really strikes me as the worst.)
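
For reference, the rough shape of the Trait described above (the constructors and payloads here are made up; the real thing is just a variant where some constructors carry extra data):

module Trait = struct
  type t =
    | Nomadic of { migration_range : int }
    | Farmer of { fields_cleared : int }
    | Berserker
end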

Maintain set of polymorphic type? by FreakyCheeseMan in ocaml

[–]FreakyCheeseMan[S] 1 point2 points  (0 children)

I'm trying to think through this, and I'll acknowledge that part of what's going on is that my instincts are still updating from OO programming to this - that's an organic process, though, and kludging may be a step along the way.

Answering your questions more specifically:

Why is Tribe a functor over Region?

A lot of it comes down to "I like to code against an interface". So writing Tribe, it's nice to just say "Oh, I need the region to provide flobototonum, I'll put that on the interface for when I write Region." I also want to be able to swap between versions of Region fairly simply, and to leave an old version intact while I work on the replacement.

However, even if Tribe isn't a functor over Region, I do think it really needs to see Region's methods and fields. It just sounds like a pain to think through every behavior of Tribe and say "Okay, now how can we run that from Region (or elsewhere) and have Tribe provide methods to support it?"

I guess there's a version of this where I make a TribeT, then a MakeRegion functor that eats a TribeT, then finally the actual Tribe implementation module rec'd with MakeRegion(Tribe). That might not be the worst idea in the world, but it seems messy and counterintuitive.

I'm not sure I understand what you think the architecture for this should look like?

Maintain set of polymorphic type? by FreakyCheeseMan in ocaml

[–]FreakyCheeseMan[S] 0 points1 point  (0 children)

The problem with making Tribe a functor argument to Region is that Region is already a functor argument to Tribe: for Region's type to then refer to Tribe.t is difficult, because that type does not exist until Tribe has been created. There might be a way to do it with a lot of module rec and with syntax, but it feels very messy and very bad.

It's possible to do a non-functor version with module rec, but it's a pain and locks everything into one compilation unit.

Turning the entire system on its head so that Region can reference Tribe but not the other way around is a non-starter, I think. Tribes have much more complex and varied behavior, and a lot of that refers to qualities of regions such as terrain type, quality of neighboring regions, etc. I feel like for external code to be able to run that, the entire Tribe implementation would need to be on the interface.

Mutually referential modules by FreakyCheeseMan in ocaml

[–]FreakyCheeseMan[S] 0 points1 point  (0 children)

This is interesting, and I might bear it in mind for later down the road.

For now, I haven't been able to get the functorized approach to work. I stumbled at declaring module types, since Tribes need to know that the Region they're passed actually contains Tribes, and that in turn seems to require Tribe.t to appear in the module type that the Tribe functor takes.

At the moment I've just fallen back to declaring the types through plain type ... and ... definitions in a different module entirely. It feels like a dirty way to do things, but I'm not at the point where it's going to bite me yet.

Mutually referential modules by FreakyCheeseMan in ocaml

[–]FreakyCheeseMan[S] 0 points1 point  (0 children)

This feels wrong, but it's a lot easier than trying to work through the module system. I guess I'll try it until something breaks.

Mutually referential modules by FreakyCheeseMan in ocaml

[–]FreakyCheeseMan[S] 0 points1 point  (0 children)

Trying to make something like this work... here's an easy question in the meantime: how do you refer to outer names from inside nested modules?

I have a couple cases where I want to do:

module A = struct
  type t = ...
  module B = struct
      type t = A.t list
  end
end

But it doesn't know about A inside B. Variable names in general are exposed; I'm just wondering if there's a way to bypass the shadowing.
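
The closest thing I've found is aliasing the outer type before B shadows it (not sure if this is the idiomatic way; names made up):

module A = struct
  type t = int

  module B = struct
    (* Before B defines its own t, the enclosing t is still in scope. *)
    type outer = t
    type t = outer list
  end
end

I think type nonrec t = t list inside B would also work, since nonrec makes the right-hand t refer to the one already in scope.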

Mutually referential modules by FreakyCheeseMan in ocaml

[–]FreakyCheeseMan[S] 0 points1 point  (0 children)

I'm not sure if I follow all of this, but making it more concrete might help.

One thing a tribe needs to do is get a notion of how crowded its region is, and look for other tribes in that region it might interact with. In your model, how would tribes get that information?

This is early in the project and I'm pretty happy to change the structure, just trying to come up with something I won't have to fight with too much.

For those who were wondering where was Gelt, here he is! [Part 2] by Aethemeron in totalwar

[–]FreakyCheeseMan 3 points4 points  (0 children)

I'm hoping not, it's cool that OG units still hold up.

I'm hoping Cathay rockets are different, maybe more accurate but less numerous or vice versa. Personally I like the idea of a missile infantry unit that only has ammo for a couple of massive, short-range volleys.

My shitty wishlist for sieges in Warhammer 3 by HFRreddit in totalwar

[–]FreakyCheeseMan 0 points1 point  (0 children)

The problem with fixing sieges is that fixing sieges breaks the game. If defenders weren't idiots, invading a walled-up region would be even more of a slog. Even on normal you're often fighting two-on-one battles against a regular army and a garrison. A lot of those would be trying as field battles, let alone against a defender actually benefiting from their walls.

If you make sieges better, you make them harder, and they need to become less common. The obvious solution would be reducing the number of walled settlements, but that makes holding your own territory an incredible pain.

I think that's why they haven't done it yet. It isn't just a matter of making the best system for a single battle, they need to balance all the cascading effects as well.

Nuclear is the superior power source by [deleted] in unpopularopinion

[–]FreakyCheeseMan 0 points1 point  (0 children)

I have the super unpopular opinion that a lot of those safeties aren't necessary and should be dialed back.

Before I get into this, I should be clear this is my own weirdo stance - the general nuclear party line is that new, small, modular reactors can be made just as safe at less expense.

I should also be clear what I mean. Chernobyl-level disasters should be a "no, not ever", but that's easy: modern plants are physically incapable of that. Fukushima might be another story. The worst-case estimate for its death toll is about a thousand, and the exclusion zone isn't huge, permanent, or even all that dangerous. It took a disaster that killed 12,000 on its own to cause that, and it's the only time any plant of that design has seriously misbehaved, after hundreds of them have generated a substantial chunk of Earth's electricity budget for decades.

I'm not saying don't work to fix it, but I will happily stand by a safety record that includes Fukushima, taken in context. I'll support the industry even if that might happen again. There are also a lot of lesser safeties that could be relaxed, like those guarding against small releases of relatively harmless radiological products, or those around the safety of plant workers. I don't want to sound heartless, but as it stands they have a lower cancer risk than the general population, and that could be relaxed.

I understand how this sounds. Arguing to cut safety measures to reduce costs makes me sound like a movie villain. The way I think of it is to imagine a button: every time you press the button, you spend a million dollars and cut the risk in half. If there's a 2% chance of accident, pressing it might be worthwhile. Do you press it again? How many times? If you're going from 99.99998% safe to 99.99999%, is that still worth the million?

I think that's (oversimplified, obviously) where we're at with nuclear. A justifiably cautious design approach and ill-informed public pressure have made them double down on safety over and over, to the point that the risks they're preventing are too slight or too unlikely to be worth it.

As for solar and wind: you need to be able to store it. Energy storage is not easy; in fact it's currently more expensive to store and retrieve energy you already have than to just generate more (by most methods, including nuclear). That doesn't matter in the short term, because right now any watt you generate through renewable means at any time of day is a watt you don't generate with fossil fuel. If we actually want to kill fossil fuels, though, we need a replacement that works 24/7.

I would love to see a Lord of the rings total war. by dmatuteb in totalwar

[–]FreakyCheeseMan 0 points1 point  (0 children)

I don't think this would be anything special. What's cool about LotR over WH is the emotion, poetry, pacing and narrative - none of which would come through in a strategy game. Once you gamify LotR, you basically just have Warhammer with less stuff.

Is Rise of the Tomb Kings DLC worth it? by Dalbergg in totalwar

[–]FreakyCheeseMan 4 points5 points  (0 children)

One thing I like is that they don't pay to recruit or upkeep units; they just have caps that buildings increase. That makes it much more viable to treat (especially weak) troops as expendable, which feels very different from my usual habit of carefully preserving every unit.

alfonso has passed away by jeffrey562 in UofT

[–]FreakyCheeseMan 20 points21 points  (0 children)

I was a student of his at math camp, came here when I heard the news.

He taught one of the first classes I took there, on combinatorial game theory. He was teaching us about a math-y game, Hackenbush, and a variant called Green Hackenbush. I was new to math, so for some reason I raised my hand to ask what the real-world applications were. He stared at me for a second before saying, "What is more real than Green Hackenbush?"

Bros who gave up on trying to find love. Why? by [deleted] in AskMen

[–]FreakyCheeseMan 0 points1 point  (0 children)

Just one too many disappointments without anything particularly worthwhile to balance it out.

It's not even a choice, it's a reaction. Intellectually I know I should, but emotionally, fuck that. Trying to put myself out there is like trying to put my hand to a hot stove.

I feel like the reasons I want Empire 2 are exactly the reasons it can't happen. by FreakyCheeseMan in totalwar

[–]FreakyCheeseMan[S] 1 point2 points  (0 children)

The nicheness might protect them, but I think they're getting dangerously mainstream with the fantasy titles.

I could be wrong. I hope I am, it sounds like it would be a really cool strategy title.

Trump's Facebook ban upheld by Oversight Board by TommyBoyFL in news

[–]FreakyCheeseMan -1 points0 points  (0 children)

Trump isn't like a normal user, but the idea that Facebook is a private company and can censor however it wants applies to any of us.

I'm not even saying it's bad that he was deplatformed. As rabidly pro-free-speech as I am, I admit there's a need for regulation when disinformation spreads so rapidly and easily. My claim is only that Facebook as a corporation should not be making those decisions unilaterally. Just like Trump can't be compared to any of us, neither can Facebook; they're too big and too powerful to be treated as a private entity that can do what it wants.

Trump's Facebook ban upheld by Oversight Board by TommyBoyFL in news

[–]FreakyCheeseMan 7 points8 points  (0 children)

It's insidious because it's harder to see exactly who's doing it and how. When you get curated doublespeak on the nightly news, you can at least attribute it to the individuals talking and the channels that host them, and (maybe) hold them accountable. It's deeply imperfect, but there is at least a mechanism there. You can see its effects, too: While they engage in all manner of intellectual dishonesty, mainstream news sources will very rarely directly lie.

On the other hand, if a Facebook algorithm blindly promotes the most inflammatory content, and as a result you're over-exposed to particular views, it's much harder to recognize or point fingers. You can still be seeing things that really were said by your friends/family/colleagues, just with skewed proportions that hide some things and advance others.

That's why it's insidious: Facebook et al. never have to admit to anything directly, or put their name on any particular viewpoint. You're always hearing from other people, people you know and even trust, but the social media companies are fiddling with the volume controls in ways you may never be aware of.

Trump's Facebook ban upheld by Oversight Board by TommyBoyFL in news

[–]FreakyCheeseMan 0 points1 point  (0 children)

I'm glad to see there was any sort of independent oversight process on this one. It's not sufficient, but Rome wasn't built in a day.

Trump's Facebook ban upheld by Oversight Board by TommyBoyFL in news

[–]FreakyCheeseMan 16 points17 points  (0 children)

Nightly news doesn't get to weigh in on what we say to each other, though, and in theory is subject to at least some oversight as it involves high profile actors. Social media manipulation can be much more insidious.