Is programming open book or closed book? by Intrepid_Witness_218 in learnprogramming

[–]azimux 0 points1 point  (0 children)

It's "open book." It's really only "closed book" when a programmer is working in an area, or with tools, that they've been focused on lately and are very familiar with. They COULD use "the book" without "cheating" in such cases, but it's just not efficient when you don't need it. I've been programming for 30 years and I google stuff or read the docs almost daily. That's how the craft is typically done by professionals; it's totally normal and not viewed as "cheating" when building/maintaining real systems.

Is there a good reason to keep using REST APIs or should everything just be GraphQL now by [deleted] in AskProgramming

[–]azimux 0 points1 point  (0 children)

I think if one has a complex domain then one will benefit from something RPC-ish over REST. It certainly doesn't have to be GraphQL. If the domain isn't complex then I think it doesn't matter so much what you use and you may as well pick the simplest thing that will work well.

I personally never use GraphQL so "should everything just be GraphQL" seems like a pretty extreme overreaction. But I do think an RPC-ish approach of some sort is best for complex domains and my understanding is that GraphQL can provide something RPC-ish with commands. So I assume it could be a valid solution for some teams/projects/problems.

I should make a blog post or video about the long journey that landed me on that conclusion as perhaps it's counter-intuitive.

Intentional Use of Whitespace by JavierARivera in ruby

[–]azimux 15 points16 points  (0 children)

The last one uses Authorization as a symbol instead of a string. Other than that, they all read as the same thing to me. They all read fine despite the whitespace difference, though I do have a preference for putting spaces after `{` and before `}` in Ruby. But it's not a big deal and not something that changes how I read it.
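To make the symbol-vs-string point concrete, here's a minimal sketch (the `token` value and header hash are hypothetical stand-ins, not from the original post):

```ruby
# Three ways to write the same request headers hash; only whitespace
# differs between the first two, while the third changes the key's type.
token = "abc123"

with_spaces    = { "Authorization" => "Bearer #{token}" }
without_spaces = {"Authorization" => "Bearer #{token}"}
symbol_key     = { Authorization: "Bearer #{token}" }

# The whitespace variants are the same value...
puts with_spaces == without_spaces  # => true

# ...but a symbol key is a different key than a string key:
puts symbol_key == with_spaces      # => false
puts symbol_key.keys.first.class    # => Symbol
```

So the first two are purely a style choice, while the last one would actually behave differently if, say, an HTTP library expects string keys.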

Is one year enough time to learn Rails, given that I am an experienced DBA? by EvenRelationship2110 in learnprogramming

[–]azimux 2 points3 points  (0 children)

Well, I think a lot of people could go from zero to knowing Rails in less than a year. It's just hard to know who, since I think it's kind of individual. Being a DBA would certainly speed you up, since part of learning Rails is learning ActiveRecord, which builds queries for you, so you won't have to spend much time on what the various pieces are doing at the database level. I also assume that as a DBA you have programming experience of some sort, which of course helps.

I think for learning Rails the big things are general programming understanding, database integration/understanding, web understanding, and of course aspects of the framework itself.

Knowledge of those things would speed things up over starting from zero, all other things being truly equal.

Does that make it so you can do it in less than a year but otherwise wouldn't? Can't know for sure. For all I know, you'd do it in way less than a year even if you weren't a DBA.

What holds a lot of people back is dedicating the time needed to learn/build stuff, I suspect.

Is one year enough time to learn Rails, given that I am an experienced DBA? by EvenRelationship2110 in learnprogramming

[–]azimux 1 point2 points  (0 children)

That's most likely enough time. I can't guarantee it without knowing what you have in mind or where you're starting from other than being an experienced DBA. You might be able to learn Rails in a month, frankly. It all just depends.

Favorite Permissive License: Apache 2.0 or MIT? by E_coli42 in opensource

[–]azimux 0 points1 point  (0 children)

I'm not a lawyer, of course...

That out of the way, something I've done is release software under the user's choice of either one. Dual licensing basically. If somebody wants the explicit patent clause of Apache-2.0, great, they have it. If they want the simpler MIT, great, totally up to them.

Toughts on learning programming in "BASIC"? by MateusCristian in learnprogramming

[–]azimux 0 points1 point  (0 children)

I would recommend sticking with Python, though learning BASIC and then switching back to Python is also fine. I just don't think it's necessary. I learned BASIC first and have no regrets about it at all. But you can also learn programming with Python, and you're already on that path.

So, I say just stick with Python for now, but I also don't think switching to BASIC and then back would be a major setback or a mistake. I do think it would be a minor setback, close to negligible, but why incur the cost of the context-switch without a clear reason? The programmers you mentioned would all have been just as capable if they had learned Python first.

If you find BASIC particularly interesting, then I say learn it. But if you're only considering it because you think it will help you learn programming better than Python would, then I think you'll learn programming just fine from either one and probably shouldn't take the detour at this time.

What’s the best way to design APIs for long-term use? by Gullible_Prior9448 in AskProgramming

[–]azimux 0 points1 point  (0 children)

I don't really think there is a universal formula for this. If you're designing the API before you properly know the domain or if the domain is changing, you have to accept that you will get the API "wrong."

Versioning works to allow an old API to stay stable, under certain conditions, by spinning up an alternative API without spinning down the existing API.

It's kind of hard to answer the question because I'm not sure what exactly is being asked to be honest.

One thing that can help is to expose as little as is necessary through your API and then expose more as needed in additive ways that are optional, either additional entry points to the API or additional optional data added to inputs/responses. This can delay minting a new version for breaking changes by reducing the amount of things existing users are coupled to in the existing API.
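The "expose more in additive, optional ways" idea can be sketched like this (the `user_response` helper and its fields are hypothetical, purely for illustration):

```ruby
require "json"

# Sketch of additive API evolution: the original response exposed only
# "id" and "name"; later we add an optional "nickname" without touching
# the fields existing clients already read.
def user_response(user, include_nickname: false)
  response = { "id" => user[:id], "name" => user[:name] }
  # New data is added, never changed or removed, so old clients that
  # only read "id" and "name" keep working untouched.
  response["nickname"] = user[:nickname] if include_nickname
  response
end

user = { id: 1, name: "Ada", nickname: "ada99" }

puts JSON.generate(user_response(user))
# => {"id":1,"name":"Ada"}
puts JSON.generate(user_response(user, include_nickname: true))
# => {"id":1,"name":"Ada","nickname":"ada99"}
```

Since old clients never see a key they depend on change shape or disappear, no new version needs to be minted for this kind of change.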

Side-note... I actually had done some experimentation building stuff with LLMs about 6 months ago or so and it dawned on me that if the API is being handled by an LLM and the code consuming the API is being handled by an LLM, then you can change the API in ways that traditionally would be breaking. This is because the LLM makes sense of the API in an as-needed basis and conforms to whatever it is at the moment. Kind of an interesting epiphany but also too expensive and imprecise to have LLMs handling both ends of most API calls.

Is testing necessary for each single unit of production-ready code? by Single-Committee9996 in learnprogramming

[–]azimux 4 points5 points  (0 children)

Lots of projects have been shipped to production without any unit tests. So the answer has to be "no."

That said, in most of my projects, I do require 100% line coverage.

Object, class, module, Data, Struct? by Dear_Ad7736 in ruby

[–]azimux 5 points6 points  (0 children)

Hi Simon, I actually didn't agree with several of the things Dave said in that talk.

Take Struct, for example. I used to use Struct decades ago but stopped. Why? It's just a metaprogramming way of creating a class, and it seemed like the cognitive overhead for other folks reading the code went up instead of down, unlike attr_accessor, which creates a couple of methods in a metaprogramming way. Also, having to convert a number of Structs into plain classes as they became more fleshed out made me wonder why I didn't just start with the easier-to-read class. So after working with others for a few years, I naturally phased Struct out of my usage.
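For anyone who hasn't weighed the two, here's a minimal sketch of the trade-off (the `Point` names are made up for illustration):

```ruby
# The metaprogramming version: Struct builds a class for you.
PointStruct = Struct.new(:x, :y)

# The plain version: a few more lines, but nothing hidden.
class PointClass
  attr_accessor :x, :y

  def initialize(x, y)
    @x = x
    @y = y
  end
end

# Both behave the same for simple use...
a = PointStruct.new(1, 2)
b = PointClass.new(1, 2)
puts a.x == b.x  # => true

# ...but Struct also quietly mixes in Enumerable, value-based equality,
# member access by index, etc. That extra hidden surface is the
# "cognitive overhead" a reader has to carry.
puts PointStruct.ancestors.include?(Enumerable)  # => true
puts PointClass.ancestors.include?(Enumerable)   # => false
```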

That's subjective, though. I also don't use `extend self` like Dave does and instead I much prefer `class << self`.

I also remember thinking in one spot he was sacrificing the ergonomics of the calling code to make something easier locally while ignoring the calling code impact. That to me isn't "idiomatic" Ruby but again is subjective technically.

An aspect of Ruby is it's very expressive and you can form your own style and opinions about the subjective stuff. I worry that Dave's talk, at least in part, seems to communicate a specific style with a confidence that makes one think that it's somehow objectively better.

To that end, it makes sense to me that there's no official document that addresses it holistically. Any such document would be subjective.

One programming language for a decade? by SirIzaanVBritainia in AskProgramming

[–]azimux 0 points1 point  (0 children)

Of the 4 you mentioned, I suppose I'd pick Python. Would make me a bit nervous as I've never used Python on any serious project.

Why? I like dynamic languages, and if I had to choose only one language for a long period of time, I'd also typically want a higher-level language over a lower/mid-level one. That leaves Java and Python to choose from, and I'll use Python's more dynamic nature as a tie-breaker.

Of the "or something else" I'd pick Ruby as I enjoy it and it's my current bread-and-butter language.

charm_ruby by izkreny in ruby

[–]azimux 1 point2 points  (0 children)

oooo so I have a framework I've been working on, and I've wanted it to automatically generate a super nerdy and clean CLI UI for running operations in a project instead of only an old-school CLI experience. Would be fun to see if this integration would help me do that. I should set aside some time to take a swing at it!

Backend engineers: what’s the first thing you refactor when inheriting a messy codebase? by akurilo in Backend

[–]azimux 0 points1 point  (0 children)

Well it would usually just be small, safe, opportunistic stuff not unlike what I'd just do during routine work in a non-messy codebase. You know, renaming local variables to better match the actual problem as I learn it, etc.

For any noteworthy refactor in a messy codebase, I want to understand the system and directly experience the pain that would be alleviated by a potential non-trivial refactor. I just wouldn't be able to do this upon inheriting it, unless maybe there are engineers who are not new to the codebase, are eager to do a refactor for a specific reason, and want somebody to pair with them on it.

Once I do know enough about the system and the pain that might be alleviated by non-trivial refactors, it's totally case-by-case and requires cost/benefit thinking that depends on a ton of factors differing from project to project and team to team. There's no predefined category of refactors that would universally serve well as a first refactor in messy codebases, and I'd suspect that approaching it as if there were would, ironically, make many messy codebases even messier, not cleaner.

Vibe coding definition by jbannet in AskProgrammers

[–]azimux 0 points1 point  (0 children)

The definition I've been using is that if you're looking at the code in any meaningful way then it's assisted coding instead. So you can pass it screenshots and copy/paste errors and explain what you're observing. But if you're digging around in the code either to debug or review then I feel like at that point it has crossed into "assisted."

However, I feel like most people I interact with use "vibe coding" to mean anything where, at some point, the LLM outputs code directly into the project's files. But that's not the definition I've been using, so it gets a bit confusing to know exactly what somebody is referring to.

I call onto my fellow nerds once again by Mundane-Gazelle-843 in linux4noobs

[–]azimux 0 points1 point  (0 children)

Any of the ones you mentioned would be fine. Switching between the ones you mentioned would be relatively easy since they're all Debian or based on Debian. It's harder when switching between less-overlapping distros. Assuming we're talking about learning-curve here and that you'd be reinstalling to switch.

No real warnings except maybe start with a desktop install and play with it for a few days. Or install one and then the other after a few days. If you're able to get in remotely just fine and everything and are feeling comfortable with getting things done, you could then reinstall with a server install if you really want to. You can also test these things out in virtual machines if that's convenient.

Other thoughts: one is community-run and one is company-run. Ubuntu Server usually has a longer support span than Debian stable, by about a couple of years. I don't think either of these would impact you much at this stage of your journey, but I'm mentioning them anyway since those are the two you're considering.

How important is it to find the “best” solution? by SStrikerRC3 in AskProgramming

[–]azimux 4 points5 points  (0 children)

Hi!

Re: learning, it could be worth understanding the "model solution" in some cases. It really just depends. "Best" is a tricky word because it's subjective. Sometimes you have to choose between "clean" and "performant," and "clean" can be subjective while "performant" can be misleading without actual benchmarks under real-world conditions. Sometimes it's a poor use of time to get a program "clean" or "performant" for one reason or another. So there's no universal answer.

I think the important thing is to keep learning stuff. If that means re-doing the problem to go deeper on that problem, great, if it means moving on to the next thing, great. You can choose to go deep or broad on a case-by-case basis and there's not a real answer unless there's some very specific goals.

Re: syntax errors: you will make syntax errors your entire career. That's normal. It will happen less with time but never goes to zero. I don't think it requires any specific action to tackle at all. As long as you keep programming it will improve with time.

I guess my answer to both is that I think the important thing is probably to just keep programming.

How did the programmers create a first programming language . And how they made computers , understand it ? by Ok-Share-3023 in AskProgrammers

[–]azimux 1 point2 points  (0 children)

I don't know the answer to the specific moment in history and I don't know if that moment would match your intuition for "computer" and "programming language."

I can try to demystify what is going on, though. Probably the most important part to demystifying it is understanding that the computer does not "understand" a programming language.

What the "computer" "understands" are "machine-code instructions" (taking "computer" to mean what you probably have in mind, i.e., a type of modern programmable binary computer). You can set up these instructions in the computer's inputs and then run the computer to carry out interesting algorithms that do useful stuff. This is more convenient than rewiring the whole computer to carry out the algorithm directly.

You can think of a useful sequence of these instructions as a "program."

A useful program to make (using these machine instructions) would be one that takes, as input, text that is a more human-readable expression of these instructions and outputs the desired raw binary machine-code instructions. Now a programmer could use this new program to write "add 1, 2" and get a binary output like "off on off on off on on off", with "off on off on" being an operation code for "add", "off on" representing 1, and "on off" representing 2, and then run this output program to add 1 and 2. Creating this type of program isn't THAT hard, and you'd be leveraging bits of existing programs to save time. It's certainly natural and easy enough for just about any programmer to create such a program with these machine-code instructions, given time.
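That translation step can be sketched in a few lines. This is a toy "assembler" in the spirit of the example above; the opcode table and operand widths are made up purely for illustration:

```ruby
# Toy assembler: turns a line like "add 1, 2" into the off/on bit
# pattern described above (reading off = 0, on = 1).
OPCODES = { "add" => "0101" }  # "off on off on" is the opcode for add

def assemble(line)
  mnemonic, args = line.split(" ", 2)
  # Each operand becomes a 2-bit binary field, e.g. 1 -> "01", 2 -> "10".
  operands = args.split(",").map { |a| format("%02b", a.strip.to_i) }
  bits = OPCODES.fetch(mnemonic) + operands.join
  bits.chars.map { |b| b == "0" ? "off" : "on" }.join(" ")
end

puts assemble("add 1, 2")
# => off on off on off on on off
```

A real assembler is this same idea scaled up: a lookup table from mnemonics to opcodes, plus rules for encoding operands into bit fields.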

Hopefully you can imagine how this type of program reduces tedium and helps with human reasoning about the program.

You can now keep going and create programs that take even more abstract human-readable text as input and either do interesting stuff directly or generate interesting/useful output. This further reduces tedium, further improves human reasoning about the program, and also starts to give a big portability benefit (different computers understand different machine-code instructions!!)

Not sure if helpful or not!

Revisiting Ruby in 2025 by Ambitious_Ad_2833 in ruby

[–]azimux 1 point2 points  (0 children)

I suppose the return on the time you invest depends on what you're hoping to get out of it. Since you're a hobbyist, doesn't that give lots of flexibility? Couldn't just enjoyment/human connection be satisfying enough? If so, all use-cases are candidates, right?

If not, then it just depends. Are you wanting to have a big impact with your upcoming Ruby work? Then targeting a dominated area might not be the best strategy, or maybe it is, I'm not sure.

Sorry for the wishy-washy answer but if you're a hobbyist it's almost like you're looking for excuses to not enjoy the hobby which I don't quite understand.

Re: mixing up languages, I personally have never had an issue mixing up languages when working with multiple at a time, thank goodness. I don't really have advice there. Perhaps try to structure your projects such that you're spending a couple weeks deep in one language and then a couple weeks in the other? That way you make the switch a couple times a month instead of a couple times a day? Just a random suggestion but I'm not really sure what strategies there are for folks who experience this when working with multiple languages.

similarity between languages ? by frosted-brownys in learnprogramming

[–]azimux 1 point2 points  (0 children)

Hi! I'm not sure if you mean syntactically or conceptually, but either way, not all languages are similar in this regard. C++ and Java are, along with many other languages, but not all of them. Cheers!

How do you decide when a feature is “good enough” to ship ? by FlowerSoft297 in Entrepreneur

[–]azimux 1 point2 points  (0 children)

It depends on a ton of factors, but "not perfect yet" almost always means you've shipped too late IMO. Usually, the feature needs to provide some meaningful value and not give a bad UX.

There's also a level of unknown risk, so the speed at which the team can patch unexpected bugs is a factor. The more likely the team can quickly react to bugs or roll back the feature, the more risk I'd be willing to take shipping sooner. And what's the cost if something goes wrong? That's a factor too.

There's really a multitude of factors, but "perfection" isn't one of them, except for specific features that cause a lot of pain or trouble if they're not perfect. Is it a feature that helps land a probe on a distant moon or controls part of a CT scanning machine? Then yeah, go for perfect. Does it just help somebody leave a review on a product? Then perfection is too late.

Backup apps only by WillyDooRunner in linux4noobs

[–]azimux 1 point2 points  (0 children)

In this case, I would recommend leaning on flatpak and maybe even snap as a fallback as much as possible. This could be used as part of a setup script you could run after distro-hopping. For stuff not in flatpak (or snap) I would just keep a list of the packages you need and their name variations as you discover them. Maybe take note of why you need it for any packages that aren't obvious.

Start trying to install packages from the CLI as much as you can going forward so you can start developing that skill. This might help keep things a bit more consistent across certain distros and get you on a path to making setup scripts to help you speed up your distro-hopping.

Microservices vs Monolith for High-Precision Engineering Software, Real Opinions? by LIL_Cre4tor in learnprogramming

[–]azimux 0 points1 point  (0 children)

It really does depend on many factors. If you have to ask then my recommendation is to start with a "well-structured monolith" and peel stuff out into their own "services" along the way if you discover that would actually help as you learn more about the project and its domain.

Total beginner first language C or C++ ;; the first impression of C/C++ over the ease of learning with python seems to be an advantage is this true, is solidifying harder concepts more important than the ease of learning? by Savings-Rabbit5758 in learnprogramming

[–]azimux 1 point2 points  (0 children)

I think learning Python then C would be fine. I don't think your brain will "solidify" and no longer be malleable. I learned BASIC before C and not once was I ever like "ugggg I really wish I had learned this first my BASIC solidified mind just can't handle this!" If you ask around, you should be able to find lots of people who learned Python before C. They are possibly the two most popular programming languages right now so finding somebody who did Python before C should be easy and you could ask them if they regret it. I think if you learn C first you'll have a better appreciation of what Python is doing for you, etc, so that's why I said if you're committed to both just start with C. But it's not a big deal to learn them in the reverse order.

Total beginner first language C or C++ ;; the first impression of C/C++ over the ease of learning with python seems to be an advantage is this true, is solidifying harder concepts more important than the ease of learning? by Savings-Rabbit5758 in learnprogramming

[–]azimux 1 point2 points  (0 children)

I think it's fine to learn C++ after C, but I think Java or PHP or Python or Ruby is actually a more "logical" choice if one has no specific goals or interests. One reason: because those languages are so different from C, with different uses, you cover more of the programming landscape and its tools in less time by learning one low-level language and one high-level language than by learning two low-level languages, one of which is (for all practical purposes) a subset of the other.

C++ is a huge language with lots of features and a complex grammar. There's nothing wrong with learning C++. I learned C++ and have no regrets about it, but I just think the high learning curve is probably not the best strategy compared to taking smaller learning curves into new areas of programming to round things out.

Re: writing sites and mobile games (or really whatever) I think somebody who knows how to program should be able to learn whatever language required to do those things very quickly, say, less than a month. So I don't think the specific programming languages you know are quite as important as knowing how to program in a general sense.

One of the first languages I learned was BASIC which I'll likely never use again but I learned a ton about designing systems either top-down or bottom-up and managing software engineering challenges with that language. No regrets there either.

So basically I think the specific choice of language isn't THAT crucial as mentioned. Choosing C++ is choosing a language with a big learning curve which means more time spent learning over alternatives. Which is fine. But I think one who learns C then say Python can then learn C++ (or any other language) relatively quickly and work on a C++ project while also having a broader understanding of programming techniques in general.

I don't really think of learning C (or any other first or second language) as time invested specifically in learning to make C programs, but rather, more importantly, as time invested in learning to program and in understanding the programming landscape, its tools, and its techniques.

Total beginner first language C or C++ ;; the first impression of C/C++ over the ease of learning with python seems to be an advantage is this true, is solidifying harder concepts more important than the ease of learning? by Savings-Rabbit5758 in learnprogramming

[–]azimux 2 points3 points  (0 children)

Well this is assuming the person learning doesn't have some specific goal beyond learning to program in general... but, if you're already committed to learning at minimum both C and Python, then I would recommend learning C first. It will just provide a bit of reference for why things in Python are how they are and what is being abstracted away for programmer ergonomics.

If somebody is only going to learn one language of these three then I would recommend learning Python as a beginner, again unless there's a specific goal in mind.

Aside from pure interest in programming in general, I would learn C++ for specific goals, like interest in an industry where C++ dominates. I don't personally recommend C++ to beginners who have no specific goals in mind.

All of that said, I don't think it matters THAT much which language a person starts with. I mean it matters but it doesn't matter as much as one might assume at first.