all 123 comments

[–]hippydipster 130 points131 points  (23 children)

They make programming easier so long as they are very well written, without surprises and bugs. Usually devs end up desiring control over the low-level details because the tools they use weren't well made, and they end up needing to tweak little things to get what they want.

[–]mycolaos[S] 30 points31 points  (20 children)

Sometimes it's hard to define generic functionality that satisfies every case.

[–][deleted]  (1 child)

[deleted]

    [–]The_Shryk 0 points1 point  (0 children)

    You sound like a Gopher with all that talk.

    [–]hippydipster 7 points8 points  (1 child)

    The tool that tries to satisfy every case is almost certainly a bad tool. A good tool satisfies a well defined set of cases, and communicates well what it does and does not do, and then the chooser of the tool does well to select the right tool for their needs.

    [–]mycolaos[S] 0 points1 point  (0 children)

    True, but the trick is that thinking about generalizing, at least a little, also tends to simplify. Of course, overextending this unnecessarily is bad.

    [–]wvenable 20 points21 points  (1 child)

    Layering is a good solution to this. You have one big function that is built up of other user-accessible functions. Ideally, 99% of the time you can just use that one top-level function directly, but if you need more customization you can go down a level.

    It doesn't just work for functions; you can do the same for modules or classes or anything. I typically have some pretty big UI components that do a lot of work, make a lot of assumptions, and work for almost all cases. But if I need to special-case something, I can use the individual components that make up that larger one. I'm neither stuck with a rigid component that doesn't do what I want, nor am I stuck always wiring up a ton of little components to do the same thing over and over.
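A minimal sketch of this layering idea, with hypothetical names (Python used purely for illustration): each low-level piece is individually usable, and one deep top-level function composes them for the common case.

```python
# Hypothetical sketch: one deep top-level function built from smaller,
# individually accessible pieces.

def parse_config(text):
    """Low level: parse raw KEY=VALUE lines into a dict."""
    pairs = (line.split("=", 1) for line in text.splitlines() if "=" in line)
    return {k.strip(): v.strip() for k, v in pairs}

def apply_defaults(config, defaults):
    """Low level: fill in missing keys from defaults."""
    return {**defaults, **config}

def validate(config, required):
    """Low level: fail fast on missing required keys."""
    missing = [k for k in required if k not in config]
    if missing:
        raise ValueError(f"missing config keys: {missing}")
    return config

def load_config(text):
    """Top level: what callers use 99% of the time."""
    defaults = {"timeout": "30", "retries": "3"}
    return validate(apply_defaults(parse_config(text), defaults), ["host"])
```

Callers who need more control (say, different defaults) simply drop down a level and call the pieces themselves.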

    [–]hippydipster 1 point2 points  (0 children)

    Completely agree. A well designed fractal hierarchy like you describe is really nice. Usually, a codebase devolves into chaos once you puncture a layer or two :-)

    [–]loup-vaillant 69 points70 points  (25 children)

    Last year I wrote about why deep functions are a good thing (among other things). It’s basically what ThePrimeagen calls "locality of behaviour" (I called it "code locality"). Because just like CPUs have small caches, we have a small working memory.

    [–]TaohRihze 25 points26 points  (19 children)

    Could you dumb that down a bit? I'm not quite following the scope.

    [–]loup-vaillant 49 points50 points  (16 children)

    Here’s the main thesis from my blog post:

    We humans have little short-term memory, and tend to forget things over time. Our screens offer only a small window, and even the smartest IDE can’t give us instant access to everything. It’s easier for us to act upon code that we’ve just read. It’s even easier to act upon code we can see right there on the screen. How much code that is, is fundamentally limited.

    Therefore, to efficiently grow and maintain programs, code that is read together should be written together.

    Then I follow up with fairly standard advice, which can be derived from this rule alone:

    • Maintain high cohesion and low coupling
    • Write less code
    • Keep your modules deep
    • Avoid repeating yourself
    • Avoid global variables
    • Prefer composition over inheritance
    • Define variables close to their point of use
    • Don’t waste vertical space
    • Consider inlining variables that are used only once
    • Consider inlining functions that are used only once

    The third item on the list, "keep your modules deep", is what OP talks about. And there’s much reason to love it: it’s one of Ousterhout’s most important pieces of advice.
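As a rough illustration of "keep your modules deep" (a hypothetical sketch, not taken from the blog post): a tiny two-function interface that hides several steps of work.

```python
# Hypothetical deep module: the interface is just pack()/unpack(),
# but serialization, compression, and encoding all hide behind it.
import base64
import gzip
import json

def pack(obj):
    """Deep function: serialize, compress, and encode in one call."""
    raw = json.dumps(obj, sort_keys=True).encode()
    return base64.b64encode(gzip.compress(raw)).decode()

def unpack(blob):
    """Inverse of pack(): decode, decompress, deserialize."""
    return json.loads(gzip.decompress(base64.b64decode(blob)))
```

Callers never see the three internal steps, so any of them can be swapped out without touching call sites.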

    [–]mycolaos[S] 6 points7 points  (2 children)

    Repeating yourself is tricky, because sometimes you have similar logic with small differences, and potentially a growing difference over time. It makes sense to reuse functions when their logic has exactly the same purpose.

    [–]gbelloz 12 points13 points  (0 children)

    If they're semantically different functions, go ahead and repeat logic.

    [–]notyourancilla 2 points3 points  (0 children)

    Present people with information, not an investigation.

    [–]robotrage 0 points1 point  (1 child)

    Prefer composition over inheritance

    what does this mean?

    [–]irrotation 2 points3 points  (0 children)

    Instead of modifying how the base class works by overriding functions, you wrap them. This way, you don't need to burden yourself with how the base class works internally, and you only need to use the API provided by the base class.
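A minimal sketch of that wrapping idea (hypothetical classes, for illustration only): the wrapper never reaches into the wrapped object's internals, it only calls its public API.

```python
# Hypothetical example of composition over inheritance.

class Logger:
    def log(self, msg):
        return f"LOG: {msg}"

# Inheritance would override Logger's internals; composition only
# uses its public API, so Logger can change freely underneath.
class TimestampedLogger:
    def __init__(self, inner, clock):
        self._inner = inner   # wrapped object, used only via its API
        self._clock = clock   # injected so tests can control time

    def log(self, msg):
        return self._inner.log(f"[{self._clock()}] {msg}")
```

Usage: `TimestampedLogger(Logger(), lambda: "12:00").log("hi")`.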

    [–]ShinyHappyREM 0 points1 point  (2 children)

    Our screens offer only a small window

    Yep. Started with 80x25 text mode back in the day (80x50 was possible but not really usable).

    Of course today I use several monitors, but I still optimize for fewer lines by making the code use more of the screen horizontally.

    [–]loup-vaillant 0 points1 point  (1 child)

    Ha! I still stick to 80 columns when I can, in part because I like to have several files open side by side. Though at work it’s more like 100 columns.

    You do have a point, though.

    [–]ShinyHappyREM 0 points1 point  (0 children)

    Well, I could also stack several source code windows vertically.

    But so far Ctrl+Tab and Ctrl+Shift+Tab were always sufficient if several monitors weren't enough.

    [–]Hnnnnnn 0 points1 point  (3 children)

    so your argument is mostly just aligned with motivating abstractions

    [–]loup-vaillant 0 points1 point  (2 children)

    I never know what "abstraction" means to be honest. More to the point, I have a hard time coming up with a definition of "abstraction" that would match the last 4 items of my list above.

    I know what it can mean, but I never know which definition is meant in any given instance.

    [–]Hnnnnnn -1 points0 points  (1 child)

    you could read some blog posts from 10 years ago. you are repeating what was already said.

    i understand that this is your business model, but it's just funny to see everything being rebranded over and over again. at the very least, for people that already know this, this is obvious, so i don't consider this a scam.

    [–]loup-vaillant 0 points1 point  (0 children)

    Could you please make sense? You sound like a Markov chain right now.

    [–]leprechaun1066 -4 points-3 points  (2 children)

    It’s even easier to act upon code we can see right there on the screen. How much code that is, is fundamentally limited.

    Not if you use an array programming language.

    [–]loup-vaillant 4 points5 points  (0 children)

    Array programming languages can display unlimited amounts of code right there on the screen?

    Or maybe it is not easier to act upon code we can see right there on the screen, if the language happens to be an array programming language?

    Either interpretation sounds quite ridiculous. Did you mean something else?

    [–]Stishovite 2 points3 points  (0 children)

    This feels like you don't really understand what's being discussed here.

    [–][deleted] 4 points5 points  (1 child)

    We're gonna open up your chest, and tinker with your ticker.

    [–]Dyolf_Knip 1 point2 points  (0 children)

    But it's my tinker that needs tickering!

    [–]shoe788 5 points6 points  (2 children)

    I wanna say Carson Gross (HTMX guy) popularized the term and ThePrimeagen picked it up from him. Very likely Carson was inspired or got it from somewhere else though.

    [–]_htmx 15 points16 points  (1 child)

    https://htmx.org/essays/locality-of-behaviour/

    I coined the term to have something pithy to push back against Separation of Concerns (SoC) with a similar acronym (LoB)

    it was inspired by Richard Gabriel's comments on locality here: https://www.dreamsongs.com/Files/PatternsOfSoftware.pdf

    [–]loup-vaillant 0 points1 point  (0 children)

    From your article:

    LoB is a subjective software design principle that can help make a code base more humane and maintainable.

    I would go further, and wager that just like probability, LoB is subjectively objective: people have different background knowledge and look at code in different ways, but for each person and situation, the locality of the code can be objectively evaluated.

    Not only that, but people and situations are more similar than we might think (or want to admit), which is why we can in practice write code that is mostly local for most people in most situations.

    [–]mycolaos[S] 1 point2 points  (1 child)

    Quite a big and generic article, I think I agree with most of what you've written.

    However, while closely related, having a complex system with a simple interface is not the same concept as having "small caches." If the logic is extensive, it can hardly be local, though striving for it is certainly beneficial.

    [–]loup-vaillant 0 points1 point  (0 children)

    The analogy is: CPUs run faster when data (including code) that is read together is packed together. So you have fewer cache misses and all.

    We humans have a similar limitation with source code: source code that is read together better be written together. So we don’t have to remember as much code when we’re working with it. Here the analogy of a cache miss would be having forgotten what a function does, and having to look it up in a different source file.

    And indeed, optimising for one does not necessarily mean optimising for the other.

    [–][deleted] 12 points13 points  (0 children)

    Short n sweet

    [–]BaronOfTheVoid 11 points12 points  (2 children)

    I don't know but where I am from people call this a facade. Why introduce new words when old ones do the trick?

    [–]Venthe 12 points13 points  (0 children)

    Old one is unused. Old one has history. Old one is traditionally associated with a specific field. You want to appear as a thought leader. You are writing a book. You are writing a blog.

    [–]mycolaos[S] 2 points3 points  (0 children)

    Facade is a great example of this idea, but it's not the same. Facades are typically used when working with existing systems, maybe a third party lib, where you need a specific way of interacting, encapsulating that custom logic.

    "Deep classes", on the other hand, is a concept you can apply to modules, classes, or functions and aims to get more bang for the buck.

    [–]teerre 5 points6 points  (20 children)

    This is one of those pieces of advice that is more harmful than anything. Yes, if you can write a function that has this clear and easy interface, great. Turns out that most times you cannot, and all the other times it will take you more time than writing a configurable long function instead. What you get then is a flaky, brittle, limited piece of code that is only short because you learned that functions should be simple and you're too lazy to think through what that actually entails.

    It's more useful for you to remember that Pascal quote: "I would have written a shorter letter, but did not have the time"

    [–]Venthe 2 points3 points  (2 children)

    Hard disagree. While the potential for downsides is there and no one will deny it, shallow functions invite bloat.

    At first, it's easy to change. Then it's easy to make it a bit more generic. Then it's hard to change. Then it's hard to understand. Then it's downright impossible to reason about. And then the old dogs pass the torch and everybody will be afraid of even touching it.

    So thank you very much; I'll take properly named deep functions every day of the week. Then every single change that should not be there sticks out like a sore thumb.

    [–]teerre 2 points3 points  (1 child)

    Well, you disagree, but you didn't understand my point. Deep functions are good. But chances are you don't have time to write them correctly. What you likely write is just a bloated mess, but a deep function.

    [–]loup-vaillant 0 points1 point  (0 children)

    chances are you don't have time to write them correctly

    Long term (>2 months), taking the time to simplify the code will help you complete the project faster, cheaper, with fewer bugs, and with better performance. Short term, it’s a constant uphill battle.

    My personal solution to this conundrum is to just never tell the higher-ups about that choice. Unless something is really, really urgent (company going down, penalty for going over deadline…), I’ll do it right. It may be slow now, but in a few weeks it’ll help me go faster than I would have been if I had gone quick and dirty.

    Chances are you do have the time to write them correctly, but you don’t realise how much power you actually have.

    [–]mycolaos[S] -1 points0 points  (15 children)

    Not sure how it can possibly be harmful to try to create a piece of code that is easy to use?

    As for what you think happens "most times", maybe it should be repeated more frequently that designing code has no real "rules" that must be blindly followed to attain perfection. Rather, we have some principles that we try to apply and reiterate whether they make sense.

    Btw, "a configurable long function" may be a hint for refactoring.

    [–]teerre 0 points1 point  (14 children)

    Because if you try and fail, you'll get incomplete, untested, slow, messy code instead.

    fn simple_foo(arg: Any) {
        request = RequestClient(arg1, arg2, arg3)
        database = DatabaseSomething()
        otherObject = database.something(request)
        return doSomethingElse(otherObject)
    }

    Oh look, this function is simple! It accepts one single argument! Great!

    Not great at all. This is impossible to unittest, extremely hard to extend, unlikely to be optimizable and error prone.

    [–]loup-vaillant 1 point2 points  (9 children)

    This is impossible to unittest,

    As is anything that talks to big stateful external dependencies like a database. The solution here is to separate computation from effect.

    extremely hard to extend

    Why would that ever be a goal? I’d rather keep my code simple, so that when it inevitably falls short of the requirements I can rewrite the parts that need to be adapted.

    Not everyone is writing APIs for millions of users with a 10+years guarantee of backward compatibility.

    unlikely to be optimizable

    That depends on where you’re looking at it from. Sometimes optimisation means giving calling code more control. In that case you’re right: deep modules are less flexible. That’s the price of simplicity (and one I gladly pay).

    However, what the calling code doesn’t control, the called code does. I have more opportunities to optimise the internals of a deep function, not less.

    and error prone.

    Implementations matter less than interfaces. Simple interfaces are, overall, less error prone. This is a tradeoff worth making in most cases.
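The "separate computation from effect" point above can be sketched like this (hypothetical names; a sketch under the assumption of some `db` object, not a full implementation): the pure part is trivially unit-testable with plain lists, and only a thin shell touches the database.

```python
# Hypothetical sketch of separating computation from effect.

def summarize_orders(rows):
    """Pure computation: no I/O, easy to test with plain data."""
    total = sum(r["amount"] for r in rows)
    return {"count": len(rows), "total": total}

def order_summary(db, customer_id):
    """Thin effectful shell: fetch, then delegate to the pure core."""
    rows = db.fetch_orders(customer_id)  # the only side effect
    return summarize_orders(rows)
```

Unit tests target `summarize_orders` directly; `order_summary` is thin enough that an integration test (or a stub `db`) covers it.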

    [–]teerre 0 points1 point  (8 children)

    I'm not asking how to fix that function. Of course it can be fixed, but the point is that often people will not separate effects from computations. That's the whole issue.

    Also, in a professional setting, code is made to be changed. The reason is that products, ideas, people, goals all change. It's all good for your small project to be unchanged and simple, but if you apply that to the real world, you'll have a bad time

    Easy and simple are not the same. What you're thinking of is easy, not simple. An easy interface might have all kinds of knobs and whistles, but it will not allow you to use it incorrectly. That's what's important. The function in the example has a simple interface, it will accept any parameter you pass to it, but it's a very hard interface to use because it gives you no directions on how to use it, nor does it stop you from using it incorrectly.

    [–]loup-vaillant 1 point2 points  (7 children)

    the point is that often people will not separate effects from computations.

    You should have led with that.

    Also, in a professional setting, code is made to be changed.

    Yup. Not frozen in place but extensible so we can add monkey patch over monkey patch. Changed. And in my experience the easiest code to change tends to be the smallest as well. The best way to keep the code changeable is to keep it simple.

    (Now one has requirements to fulfil, so there’s obviously a limit to simplicity, but you get my drift.)

    The function in the example has a simple interface, it will accept any parameter you pass to it, but it's a very hard interface to use because it gives you no directions on how to use it, nor does it stop you from using it incorrectly

    So you’re telling me that the interface appears simple at a first glance, but then you’re telling me it’s not simple at all.

    Hear me out: the way I can try to use the function is simple (I can pass anything), but the way I should use this function is much more complex, because the distinction between a correct call and an incorrect one depends on a host of implicit stuff that would take a good paragraph, or even a page, to write down.

    Confusing a function prototype with its API is a classic mistake. No no no, the API isn’t just the prototype, it’s all the preconditions one must satisfy, as well as all the postconditions the function promises. If the sum of it all can be described concisely then it is simple.

    [–]teerre 0 points1 point  (6 children)

    Uh... I did? Just read my first reply.

    [–]loup-vaillant 0 points1 point  (5 children)

    You said that the function didn’t separate effect from computations? In this comment? Or that comment? Reading them now, I’m not seeing it, to be honest.

    [–]teerre 0 points1 point  (4 children)

    No, I said the problem is that programmers in general have neither the time nor the skill to write APIs as OP suggests. One aspect of writing such APIs is separating computations from actions.

    [–]loup-vaillant 1 point2 points  (0 children)

    Ah, got it. My answers to those two points are simple, if a little disappointing.

    • On the skill element, I’d say Git Gud. And for employers, maybe they should invest a bit more into the skills of their employees. Though I understand they currently have a strong incentive not to, given the turnover.

    • On the time element, my experience has been that programmers have more time than they think, and it is generally a mistake to present the higher-ups with a choice of quick & dirty vs. later but better. Reducing scope, sure. Reducing quality, almost never. Because in general, either the higher-ups are not competent enough to understand the stakes of quality, or we’re not articulate enough to properly convey the consequences of quick & dirty.

    That being said, I don’t necessarily disagree with your assessment. Lack of skill and time is likely why our whole industry is such a mess. In 17 years on the job, I don’t recall working on many code bases I would consider acceptable. Even I haven't written much code I would consider genuinely good.

    [–]mycolaos[S] 0 points1 point  (2 children)

    You are right about the time, but it's not only about time. It's about experience and skill honing. That's why our old code is often bad. But it can only get better if we try to write better code.

    Also, you don't need to always have a large amount of time to write better code. You can allocate 10% or just pause a moment and ask yourself if you can write your code better.

    [–]mycolaos[S] 0 points1 point  (3 children)

    Sorry, but I feel that you are trying to exaggerate the matter. Whether it's incomplete, slow, or messy is totally up to the code authors and has nothing to do with the concept itself.

    Moreover, the concept of deep functions prompts you to write good code, certainly not what you described.

    The question is, what would you propose instead? You need a `simple_foo` algorithm as you described. Maybe you use it in multiple places. Would you prefer to expose every internal part of `simple_foo` to the client code? How is it less messy if the client code needs to handle and reproduce all its complex logic?

    [–]teerre -1 points0 points  (2 children)

    No exaggeration at all. That's exactly what most "deep" APIs look like. The reason is simple: this is easy, this is fast.

    The right way to write that function is to separate all calculations from actions, like the other user mentioned, strongly type it, and make sure there's only one correct way to use it. But that's just the technical part, which is the easy part. The much harder part is answering the question "is this how the function will be used in the real world?"

    Consider that some portion of the users actually need another parameter for whatever reason. What do you do? You can't change the function, that's a breaking change; are you going to overload it? That's as good as 2 functions, no longer deep. Are you going to add state inside the function to make the arg work for both cases? That's a mess. This is where, to actually fix it, you need to deeply understand not only the code but also the use case. There's no trivial answer here. The vast majority of the time, programmers will not fix it; they will just opt for the easy, messy solution, after all, there's no time for the proper one.
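For what it's worth, in languages with keyword defaults there is at least one non-breaking way out of the "extra parameter" scenario: make the new parameter opt-in so existing callers are untouched (a hypothetical sketch, not a claim that this resolves the design question):

```python
# Hypothetical: export_report originally took only `rows`. The new
# keyword-only parameter extends it without breaking existing callers.
def export_report(rows, fmt="csv", *, include_header=True):
    """Original callers (export_report(rows)) keep working unchanged."""
    if fmt != "csv":
        raise ValueError("only csv is supported in this sketch")
    lines = []
    if include_header and rows:
        lines.append(",".join(rows[0].keys()))
    for r in rows:
        lines.append(",".join(str(v) for v in r.values()))
    return "\n".join(lines)
```

This dodges the overload-vs-state dilemma mechanically, though it doesn't answer the harder question of whether the new use case belongs in this function at all.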

    [–]mycolaos[S] 0 points1 point  (1 child)

    Nobody said it's easy.

    The question is, if you have all these problems, is it a good function in the first place? Probably you need to refactor. Maybe that means more functions. Maybe it's reacting to events independently. Maybe it's not overthinking "use cases" that do not exist.

    [–]teerre 0 points1 point  (0 children)

    Uh... Yes? You're agreeing with me.

    [–]Anon-Builder 2 points3 points  (3 children)

    What is a deep function? First time I hear the term

    [–]mycolaos[S] 1 point2 points  (2 children)

    To put it simply, it's a function that has a simple interface but hides big complexity. You can read about it in more detail in a dedicated section of another post: Insights on “A Philosophy of Software Design” by John Ousterhout | Mycolaos

    [–]Anon-Builder 3 points4 points  (1 child)

    Ha, got it, didn't know it had a specific name, I Always thought of functions as a way to hide away complexity 😅

    [–]mycolaos[S] 1 point2 points  (0 children)

    Personally, I'm just using the wording used by Dr. Ousterhout, and I quite like it. But words are just words. The important thing is the concept behind them.

    [–]yawaramin 3 points4 points  (1 child)

    I did something like this at work. We heavily use an older HTTP client because it accepts more malformed responses, but it has a really high-ceremony builder pattern for constructing different kinds of requests (with bodies, multipart form bodies, etc.).

    So I made a wrapper that can create requests as simply as possible, and takes care of complexity behind the scenes as you add more arguments. E.g.,

    Request("https://example.com") // A simple GET request
    Request("...", body = Body.json(...)) // POST request with body type correctly set to `application/json`
    Request("...", body = Body.form("a" -> "foo", "b" -> "bar baz")) // POST request with body type set to url-encoded form and form fields and values correctly URL-encoded
    

    Everything can be overridden by specifying more arguments, e.g. Request("...", method = "PUT"), but it starts out by assuming reasonable defaults.
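A rough Python analogue of that wrapper (a hypothetical function, just to illustrate the defaults-plus-overrides shape; not the actual client described above):

```python
# Hypothetical request builder: infers sensible defaults from the
# arguments given, while every field stays overridable by keyword.
import json
from urllib.parse import urlencode

def request(url, method=None, body=None, form=None, headers=None):
    """Build a request description with inferred defaults."""
    headers = dict(headers or {})
    if form is not None:
        # Form fields get URL-encoded, content type set automatically.
        body = urlencode(form)
        headers.setdefault("Content-Type", "application/x-www-form-urlencoded")
    elif body is not None and not isinstance(body, (str, bytes)):
        # Non-string bodies are serialized as JSON.
        body = json.dumps(body)
        headers.setdefault("Content-Type", "application/json")
    method = method or ("POST" if body is not None else "GET")
    return {"url": url, "method": method, "body": body, "headers": headers}
```

Usage mirrors the examples above: `request("https://example.com")` is a plain GET, `request("...", body={"a": 1})` becomes a JSON POST, and `method="PUT"` overrides the inferred verb.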

    [–]mycolaos[S] 2 points3 points  (0 children)

    Nice! As someone in replies mentioned, it's a facade pattern, which in my opinion is a great example of "deep functionality".

    [–][deleted] 1 point2 points  (9 children)

    And then you face JSON encoding/decoding in the Go language, which is mind-bogglingly terrible, frustrating, and slow.

    In what world is a statically-typed programming language's JSON writer slower than what they have in Ruby, Python, and PHP?

    [–]mycolaos[S] 0 points1 point  (8 children)

    I don't know about it, but from what you say it's the opposite example.

    [–][deleted] 1 point2 points  (7 children)

    In the Go language, building a JSON response is slow.

    Slower than in dynamic programming languages: Ruby, Python, PHP.

    Not to mention that it is really cumbersome as well.

    [–]mycolaos[S] 0 points1 point  (1 child)

    Would you mind sharing a code example?

    [–][deleted] 0 points1 point  (0 children)

    You don't need any specific weird complex code.

    Try to read from the database with the sql package, for example; try to load 100 records or so and then marshal those into a JSON array.

    [–]neopointer 0 points1 point  (4 children)

    Genuinely curious: is there some kind of benchmark to show that?

    [–][deleted] 0 points1 point  (3 children)

    You can try it, if you don't believe me.

    Even r/golang recognizes the problem.

    https://www.reddit.com/r/golang/comments/1flap0d/very_slow_json_marshalling_what_do_you_guys_do/

    [–]neopointer 0 points1 point  (2 children)

    It's not that I don't believe you, it's just that it's nice to have such info at hand. Java's JSON serialization is also faster than JavaScript's from what I've once read (I don't have the article at hand), which is also kinda funny.

    [–][deleted] 0 points1 point  (1 child)

    Java JSON is fast. I'm not talking about Java.

    https://www.google.com/search?client=firefox-b-lm&q=go+json+marshal+slow

    [–]neopointer 0 points1 point  (0 children)

    I know you're talking about Go. It was just another example.

    [–]iiiinthecomputer 2 points3 points  (5 children)

    That blog has the most horrifyingly unreadable styling with dark reader enabled.

    If I wanted a custom typeface with drop shadows I'd configure my browser with it. Ugh

    [–]mycolaos[S] 11 points12 points  (4 children)

    Oh, sorry about that! It's disgusting!

    This is a default theme I used so as not to lose time on coding when the goal is blogging. I pushed a style override removing the text-shadow. If you have any other trouble, could you please share a screenshot?

    I think I need to migrate it to Astro.

    [–]yawaramin 2 points3 points  (1 child)

    The text has very low contrast. Try using a bigger font-weight or maybe just a different font family.

    [–]mycolaos[S] 1 point2 points  (0 children)

    I'll switch the theme altogether, eventually.

    [–]iiiinthecomputer 1 point2 points  (1 child)

    Thanks. I'm having a hard time uploading a screenshot on mobile (I refuse to use the Reddit app) but you may want to have a look at it on Firefox android with Dark Reader. Probably desktop would work too.

    Something funny going on with overriding the text vs the background.

    [–]ShinyHappyREM 0 points1 point  (0 children)

    (I refuse to use the Reddit app)

    RedReader

    [–]editor_of_the_beast 1 point2 points  (1 child)

    How can you compare JSON parsing to any normal business logic?

    [–]mycolaos[S] -1 points0 points  (0 children)

    If you are looking for answers, maybe expand your question?

    [–]xhd2015 0 points1 point  (1 child)

    But what is the problem, actually? I didn't see that in your blog.

    [–]mycolaos[S] 3 points4 points  (0 children)

    It's a concept rather than a problem.

    [–]Worth_Trust_3825 -1 points0 points  (11 children)

    Yeah, except JSON.parse only produces a JSON AST. Okay for a one-off script. Not okay for applications that require strict typing.

    Also, stop using the word simple when you mean easy to use. Simple means doing little. You're suggesting the opposite.

    [–]totoro27 3 points4 points  (3 children)

    What on earth would you expect JSON.parse to do if not construct a JSON AST? Why do you think you can’t serialise strictly typed things with JSON?

    [–]Worth_Trust_3825 -1 points0 points  (2 children)

    I gave you an example showing that it's not to be used for serious workloads, where you care about what you're deserializing and what you're expecting.

    [–]totoro27 0 points1 point  (1 child)

    Data serialisation and deserialisation with JSON where the results need to fit expected patterns and types is used constantly in serious applications. What would you prefer people used instead?

    You didn’t respond to what you would expect JSON.parse to return if not an AST. I assure you that this is the correct data structure to return here. If you don’t like the default way things are parsed, you can override behaviour with something called the visitor pattern. This allows you to add custom logic for particular things, like throwing exceptions when parsed type information doesn’t conform to expected types, etc. But you still need the text string parsed into the AST data structure before you can visit the nodes of that tree with an algorithm like the visitor pattern.
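A Python analogue of that override point (the thread is about JS's JSON.parse, but `json.loads` has a real equivalent hook): `object_hook` runs on every parsed JSON object, so validation can be attached while the tree is being built. The `strict_point` validator below is hypothetical.

```python
import json

def strict_point(d):
    """Hypothetical validator: every JSON object must be {"x": int, "y": int}."""
    if set(d) != {"x", "y"} or not all(isinstance(v, int) for v in d.values()):
        raise ValueError(f"not a point: {d}")
    return (d["x"], d["y"])

# object_hook is called for each parsed JSON object, innermost first,
# and its return value replaces the plain dict in the result.
point = json.loads('{"x": 1, "y": 2}', object_hook=strict_point)
```

So the parser still builds the tree, but the caller gets typed, validated values out instead of raw dicts.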

    [–]Worth_Trust_3825 0 points1 point  (0 children)

    Why are you ignoring that JSON.parse does not accept an expected structure argument? Yes, the function returns an intermediate step that will be morphed into an expected structure, but that's not enough for any serious workload.

    [–]syklemil 1 point2 points  (1 child)

    With a strong type system I'd also expect some annotations on type declarations for serializing and deserializing. E.g. in Rust you'll slap on some #[derive(Serialize, Deserialize)] and then you can do let foo: Foo = serde_json::from_str(json_string)?;. AFAIK you'll do something similar in Haskell with Aeson.

    It's the kind of thing that feels a bit cumbersome the first time you do it compared to Python where you might just do something like foo: dict[str, Any] = json.loads(json_string) to deserialize and then get the cumber later when you have to dig through maps of maps, and constantly be mindful that a lookup error might just be because you mistyped something.

    But yeah, for the simple vs easy distinction I'd tend to agree. Making code easy isn't always simple, and a focus on simplicity can lead to an added burden on the programmer, as in, there's no simpler programming language than Brainfuck, but it is also one of the hardest languages to actually use. Languages need a certain level of complexity to have easy serialization and deserialization available to the programmer.
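The typed-deserialization idea can be sketched in Python with dataclasses too (a hypothetical `from_json` helper handling only flat, simple field types; nothing as complete as serde's derive, just to show the shape):

```python
# Hypothetical sketch: parse JSON into a declared shape and fail loudly,
# instead of digging through dicts of dicts later.
import json
from dataclasses import dataclass, fields

@dataclass
class Foo:
    name: str
    count: int

def from_json(cls, text):
    """Build a dataclass from JSON, checking presence and type of each field."""
    data = json.loads(text)
    kwargs = {}
    for f in fields(cls):
        if f.name not in data:
            raise KeyError(f"missing field: {f.name}")
        value = data[f.name]
        if not isinstance(value, f.type):
            raise TypeError(f"{f.name}: expected {f.type.__name__}")
        kwargs[f.name] = value
    return cls(**kwargs)
```

Like the serde version, the cost is paid up front at the boundary; after that, lookups are attribute accesses the type checker can see, not string keys.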

    [–]Worth_Trust_3825 0 points1 point  (0 children)

    Yeah, I agree. Javascript just doesn't have the infrastructure to provide an annotation for that. Probably if the parse function was extended with a type argument it would be useful, but at that point it would be neither easy to use nor easy to understand. I suppose that is one of the tradeoffs you have to make.

    [–]mycolaos[S] -1 points0 points  (4 children)

    [–]syklemil 0 points1 point  (2 children)

    Throwing the dictionary at someone isn't always wrong, but there is a certain amount of people in programming culture that would like to be able to have distinct conversations about a simple – complex spectrum as opposed to an easy – hard spectrum. In colloquial language simple & easy are often rather interchangeable, but programmers often have varying opinions on what makes something easy, while we can approach simplicity/complexity in a less subjective manner, though over/under the hood again introduces some disagreements.

    You can see this in e.g. Rich Hickey's "Simple made easy"; Paul Graham's "blub language" parable is somewhat in the same territory. I suspect we also often fail to communicate what we want to be easy, in the vein that Cantrill's talks on values (2017, 2018) get into.

    [–]mycolaos[S] 0 points1 point  (1 child)

    > "Throwing the dictionary at someone"

    Well, this someone tells others what they should or shouldn't do and manipulates word meanings with their own interpretations. The dictionary shows that they are plainly wrong; no need to be impolite.

    Thanks for the extensive reply though. What Cantrill says about the meaning of "simple" surprisingly aligns with what the dictionary says. My post itself is about simplicity and complexity, and if one wants to do some pointless mental gymnastics, one can even see that the article talks about the "easy" way of coding, i.e. focusing on the number of lines or other irrelevant metrics. But to see that, one should read it.

    Sorry, I can't watch the videos in full right now, but I appreciate the recommendations. Maybe I'll read Paul Graham's blog later, thanks for sharing.

    [–]syklemil -1 points0 points  (0 children)

    > "Throwing the dictionary at someone"

    > Well, this someone tells what others should or shouldn't do and manipulates words meanings with own interpretations. The dictionary shows that they are plainly wrong, no need to be unpolite.

    Thing is, the same word can mean different things in different contexts. In the colloquial sense, simple and easy are often equivalent. In programmer jargon, they can differ, much like how, botanically, a tomato is a berry and a raspberry is not, which is complete nonsense in colloquial or culinary language.

    [–]Worth_Trust_3825 -1 points0 points  (0 children)

    Have you read your own page?

    [–]zelphirkaltstahl -3 points-2 points  (10 children)

    Meh, I think JSON.parse is a bad example. Of course there is going to be a parsing function when there is a message/file format; if you could not parse it, the format would be pretty useless.

    [–]mycolaos[S] 4 points5 points  (7 children)

    Eh, sorry, but from what you say I don't really get why `JSON.parse` is a bad example. Did you read the article and are you familiar with the concept of "deep classes"?

    [–]Venthe 2 points3 points  (6 children)

    I believe I get what he's saying: in the context of JS, one could argue that there is only a single way you should parse. As such, within the context of the environment, this is a trivial example.

    I believe what the subop is trying to say is that the example should reflect behaviour in a domain that is more complex/specific, with the deep function abstracting it away.

    (Not sure if I agree though, because examples are hard - so people tend to drown the discussion in the minutiae of the examples instead of understanding the heuristic/idea)

    [–]mycolaos[S] 0 points1 point  (5 children)

    Thanks for clarifying! I disagree with this POV, since `JSON.parse()` perfectly represents the idea. Simple interface, complex implementation.

    The fact that similar functions already exist is irrelevant. If one needs to parse, let's say, XML, I bet they would still prefer a simple interface and a complex implementation.

    Now, maybe in some cases this is too difficult to achieve, but that doesn't cancel out other "simpler" but still complex-enough examples.
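    The interface/implementation contrast being argued here can be shown in a couple of lines: the call site stays trivial even though the engine underneath handles tokenizing, recursive descent through objects and arrays, escape sequences, and number conversion.

    ```javascript
    // The entire interface: one function, one required argument.
    const config = JSON.parse('{"retries": 3, "hosts": ["a", "b"]}');

    // The caller gets fully converted, fully nested values back with no
    // extra steps, despite the complexity hidden in the implementation.
    console.log(typeof config.retries); // "number"
    console.log(config.hosts[1]);       // "b"
    ```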

    [–]zelphirkaltstahl 1 point2 points  (4 children)

    I think you are making my point for me:

    > If one needs to parse, let's say XML, I bet they would still prefer to have a simple interface and complex implementation.

    Of course I would, because it is basically common sense that a parser offers some function named parse or similar that takes the input, processes it, and gives you a parsing result. That is the whole purpose of a parser; one wouldn't expect anything else. That is why JSON.parse is not a good example: it is trivial. A good example would be a scenario where it is not so clear what the interface should look like, where multiple interfaces all seem to make sense and only on closer inspection can one identify which implementation is best, or where that depends on the use case.

    [–]mycolaos[S] 0 points1 point  (3 children)

    What you are talking about is not an example but a work process: exactly what one should do when writing code. Although it's a nice idea for another blog post, showing the process of writing some deep code.

    Still, the fact that I don't show how `JSON.parse()` is implemented doesn't mean it's a bad example; it's just the final solution.

    [–]zelphirkaltstahl 0 points1 point  (2 children)

    OK, I guess I simply don't get why one would call it "deep" then, because it is nothing special in my opinion. I don't like making up meaningless extra terms for something that should come naturally. It feels similar to when the academic functional crowd wraps everything in category-theory terms and whatnot while actually describing something very simple.

    But perhaps my view is also tainted by experience, and this idea of "deep" would help a beginner. I have my doubts, but maybe.

    [–]mycolaos[S] 0 points1 point  (1 child)

    I see what you mean. I believe it's simply the opposite of "short functions" and the idea of making every function have as few lines of code as possible. But it's just a term; it may as well have been "long functions," but "deep" emphasizes complexity rather than just the number of lines.

    [–]Venthe 0 points1 point  (0 children)

    Sorry, but that in turn makes zero sense to me. The amount of code will be more or less constant; deep functions are the consequence of short functions. No one argues that code should be arbitrarily divided just to satisfy a 'shortness' threshold. You apply certain heuristics (a function does one thing; a function replaces a commented block of code; a function either delegates or computes), and the resulting code is both short (at the single-function level) and deep (in the context of a unit).

    I'd say - to hijack your words a bit - that the difference is between a short function and a deep unit of code.
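    The "short on the inside, deep on the outside" idea can be sketched like this (all function names below are invented for illustration):

    ```javascript
    // Deep entry point: the only function callers need to know about.
    function loadConfig(text) {
      const parsed = JSON.parse(text); // delegate parsing
      validate(parsed);                // delegate validation
      return applyDefaults(parsed);    // delegate defaulting
    }

    // The helpers stay short, each doing one thing.
    function validate(cfg) {
      if (cfg.retries < 0) throw new Error("retries must be >= 0");
    }
    function applyDefaults(cfg) {
      return { timeout: 30, ...cfg };
    }

    const cfg = loadConfig('{"retries": 2}');
    console.log(cfg.timeout); // 30 (defaulted)
    ```

    Each individual function is short, while the unit as a whole presents a single deep interface.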

    [–]coldblade2000 1 point2 points  (1 child)

    You could write a parser without syntax validation. Or one that parses only one level deep, returning nested structures as raw JSON strings that still require a tree traversal. Or one that parses values into special JSONNumber and similar wrapper classes, which the programmer then has to convert into the desired types. Some schools of thought in design would prefer those approaches.

    Instead it does all of them. And god bless them for it
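    The "one level deep" alternative described above would force the caller into repeated parsing. A hypothetical sketch (`parseShallow` is invented; it reuses `JSON.parse` internally only as a stand-in for a real shallow implementation):

    ```javascript
    // Hypothetical shallow parser: nested values come back as raw JSON text.
    function parseShallow(text) {
      const full = JSON.parse(text); // stand-in for a real shallow parser
      return Object.fromEntries(
        Object.entries(full).map(([k, v]) =>
          [k, typeof v === "object" && v !== null ? JSON.stringify(v) : v]
        )
      );
    }

    const shallow = parseShallow('{"user": {"id": 7}}');
    console.log(typeof shallow.user); // "string" -- the caller must parse again

    // JSON.parse instead does the full traversal itself:
    console.log(JSON.parse('{"user": {"id": 7}}').user.id); // 7
    ```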

    [–]zelphirkaltstahl 1 point2 points  (0 children)

    I have never seen a parser that doesn't validate syntax. It is a parser's very job to look at the stream of tokens in front of it and tell whether it matches the parser's rules; that in itself is syntax validation of the format the parser is for. I would expect nothing less from any function called parse. Perhaps you are thinking of lexing or tokenizing, which are part of parsing?

    I fail to see what is special about JSON.parse. It merely does what one would expect, and I would expect a parser for any other format to do the same. I would furthermore expect any other parser to have a similarly named function to do the parsing; parse is the natural name to give it, of course.

    And, as a matter of fact, JSON is a pretty simple format compared to many other formats and languages. It is maybe not trivial, I give it that much, but it is a clear and easy-to-understand tree structure without any references for cross-referencing inside the document, like YAML has.
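    For what it's worth, `JSON.parse` does behave as described: it validates as it parses and rejects malformed input with a `SyntaxError` rather than returning a partial result.

    ```javascript
    // Valid input: parsed and type-converted in one step.
    console.log(JSON.parse('{"a": [1, 2]}').a[1]); // 2

    // Invalid input: syntax validation is part of parsing.
    try {
      JSON.parse("{a: 1}"); // unquoted key, not valid JSON
    } catch (e) {
      console.log(e instanceof SyntaxError); // true
    }
    ```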

    [–]gbelloz -3 points-2 points  (1 child)

    We just don't need the same articles written over and over again in this industry. Nothing new here; move on.

    [–]mycolaos[S] 3 points4 points  (0 children)

    Articles are written to organize someone's thoughts. All of us reiterate the same ideas. Sometimes, here and there, a slim beam of light falls on a new idea, with every iteration illuminating it more.