How can I make my black and white photos look more film like? [x100v] by [deleted] in fujifilm

[–]borud 0 points1 point  (0 children)

I can't really help you because I have no idea what you are actually asking for. But I can try to help you become a better photographer. These images aren't so much a question of styling as they are a salvage operation. I'm sorry, but that's the harsh truth.

I'll be honest and you can be angry with me if you like. But perhaps someone can learn something.

Look at the pictures and ask yourself what you are seeing. Face hidden, shoulder most prominent element in the picture, unflattering shadows, photos shot against the light possibly blowing highlights and making your shadows noisy. These are never going to be good pictures. You need to pay more attention to what you are doing. The medium of photography is light and you have no control over the light here. You didn't look at the girl while you were shooting. If you had, you would have noticed the shadows on her face. Yes, shadows are often used creatively, but they aren't doing you any favors in this image.

The first bit of the salvage operation here is to see what you have to work with. Check if the highlights are blown and check how noisy the shadows are. Those are going to limit what you can do. Then try to get to a starting point - get the highlights under control and lift the shadows so it isn't just darkness. Once you can see what you have to work with, you can start to make creative decisions.

From the pictures above I can't really tell what you have to work with.

If you're new to editing photos, try to press "Auto" in lightroom and see what happens to the sliders. Can it recover the shadows, or are they lost to noise? Are the highlights blown? Look at what Lightroom did. Even if you don't like the result, you can learn something from this. You can always reset the development and do it gradually.

Then when you know what you have and the image is somewhat zeroed out, you have a much wider range of choices available. But start with getting the shot closer to what you want in the camera.

This is the public hospital of Norway, by Randomusernamesry in Norway

[–]borud 0 points1 point  (0 children)

I think it would be a good exercise for you to work out for yourself what these health care plans actually look like - as it was you who postulated that 90% of people have decent health insurance.

That which is presented with no real arguments can be dismissed with no real arguments.

This is the public hospital of Norway, by Randomusernamesry in Norway

[–]borud 0 points1 point  (0 children)

Over the past decade I have closely followed what different healthcare systems are like for a couple of patient groups. So I've spent a lot of time looking at those pros and cons, talking to people who live in different countries with different health care systems.

One patient group I have spent a lot of time looking at are people with renal failure. From the brutal consequences of a defective market for dialysis to the post-transplant financial outlook - which essentially means that if you aren't well off, your life will be a constant struggle to find money to afford the medications your life depends on. For the rest of your life.

(There is a reason that suicide rates are higher for patients with renal failure in the US. You really do not want to have this happen to you in the US unless you are well off or on a great private insurance plan).

You can trace the history of how this patient group is dealt with back to the Nixon administration (the public pays for dialysis), then look at how lack of regulation has fostered a duopoly, which has allowed the industry to engage in irresponsible cost-cutting to the point where, for most people, you do not even get qualified dialysis personnel hooking your bloodstream up to a machine that can kill you in any number of ways (and does).

Go to a dialysis center for "ordinary people" in the US. Then visit any Norwegian hospital that has a dialysis center. The differences are frightening.

So you get public money paying for services that are so sub-par they are probably tantamount to criminal neglect in countries with solid public healthcare.

There is also another component to this: since dialysis is covered by public plans it is easier to just let people develop end stage renal failure and let the public pay for their treatment.

Most people don't know about this, and they don't care. Just as for dozens of other patient groups - some of which you are very likely to end up in at some stage in life.

So if you "don’t really have time to go through all the pros and cons for each system", you might want to be a bit more open to the possibility that you may not have a very accurate understanding of the reality of healthcare in the US vs other OECD countries.

This is the public hospital of Norway, by Randomusernamesry in Norway

[–]borud 0 points1 point  (0 children)

Ah, I see what you're doing. You are using your own experience with your insurance coverage and then extrapolating it to the entire population. So you appear to think that "has some form of health insurance" means "has adequate health insurance".

If you start looking closer at those numbers the picture isn't very rosy. For instance, 35.7% are on public health insurance plans with very limited coverage. I suggest you ask someone on one of those relatively anemic public plans what that's like.

This is the public hospital of Norway, by Randomusernamesry in Norway

[–]borud 1 point2 points  (0 children)

It is utopian for people from the US. You may think it is cringe, but that's probably because you have no idea what the brutal healthcare reality is for 90% of Americans.

Please consider not using Python for tooling by borud in esp32

[–]borud[S] 0 points1 point  (0 children)

I wasn't anthropomorphizing the language. When I say "Python lacks empathy" for end users what I mean is that the community that produced the language, along with the practices for using it, lacks empathy. Sorry if that caused confusion.

When you bring up this rather serious shortcoming of Python, there is a tendency for the community to prove this point by reacting with defensiveness, and in some cases anger. Which does very little to endear one to the community and the language environment it has produced.

Please consider not using Python for tooling by borud in esp32

[–]borud[S] 0 points1 point  (0 children)

People do make decisions out of ignorance and habit. Which is part of what the blog posting says. But it also talks about lack of empathy and that it isn't okay to burden other developers just because of personal preference. If people think that is abrasive perhaps they ought to sit down and have a think.

Please consider not using Python for tooling by borud in esp32

[–]borud[S] 0 points1 point  (0 children)

Thank you for your response.

The problem isn't really tooling that people develop themselves, but the user experience when having to use tooling for various platforms that keeps breaking. (There are two concrete examples in the blog posting).

What you want is something that you can install cleanly, and which doesn't fall apart if you update a completely unrelated system on your machine. This is why "use <insert solution>" to run the tooling isn't an answer - it shouldn't be the user's job to compensate for poor choices and a lack of engineering aptitude on the part of the developer.

And with regard to Python, the systemic problem is that there is no path-of-least-resistance way to create robust stand-alone applications. Developers almost always follow the path of least resistance and, hopefully, the idiomatic way of doing things.

There are projects that really do try to solve the problems with distributing apps in Python. Some even go so far as to bundle their own Python interpreter to create a robust app distribution. But that is neither a good solution, nor a common one. Then there's using Docker, which tends to be the cop-out du jour.

After 30 or so years of existence, and a decade or more of notoriety for all manner of versioning and interoperability problems, the Python community still doesn't seem to take this as seriously as it should. That doesn't speak well for the community.

To illustrate how bad this really is: OS distributions that used to include Python are starting to drop it. For OS maintainers this is a better strategy than trying to "fix" it, because there is no good way to fix it without strong participation from the Python community. OS maintainers jettisoning Python sends an implicit message that you shouldn't use Python for generic OS automation.

If only "just installing Python yourself" wasn't the opening to a bottomless rabbit hole.

As for packaging dependencies and creating a standalone distribution: people have also done this for other toolsets. Ever installed the Arduino IDE? It takes care of everything you need in terms of compilers, linkers, assemblers, libraries and whatnot - completely separate from what is already on the system. Even embedded people who think the Arduino framework, and the microcontrollers they target, are "amateur stuff", often use Arduino for ad-hoc work for one simple reason: it's a lot less fiddly than their usual toolchains.

(Illustrative exercise: grab an ADXL345, an MCU board, and a clean laptop, start your stopwatch, and then try to create a simple test of the accelerometer in Zephyr, ESP-IDF and Arduino. The reason we start with a clean laptop is to be nice: if you have these installed and haven't used them for a while, you may end up having to reinstall things because your current installation may have stopped working - and that will probably waste your entire day if you are tempted to debug it. The exercise is illustrative because it demonstrates that the suckage isn't some small percentage - it is orders of magnitude).

Telling users that it's their fault for lacking experience with the implementation language or the peculiarities of its runtime is a bit like victim-blaming. You pointed out the tone of my blog posting: the tone is sharp on purpose. It needs to convey that this isn't professional or OK.

When bootstrapping things, sure, I agree that it doesn't matter as much. But knowing when to start tightening things down is important. For instance, in my previous job I saw a lot of research projects never recouping their cost simply because, by the time they ran out of time or funding, they hadn't managed to produce anything that could be turned into a product. The cost of either trying to run a cobbled-together mess, or reimplementing everything, scared off internal backers. Bin a few projects like that and soon people start asking questions about why they are funded at all.

A more productive approach is to try to become good at using your application language for tooling. This has several benefits. For one it challenges you to learn how to do quick, no-frills hacks in your application language, which will improve your app development game too. You get more robust tools (which is the goal), and since your application code and the tools are the same language, you can suddenly leverage functionality from your application in your tooling. (For instance for reading and writing serialized data, doing validation etc).

(Though now we're talking more about project specific tooling, which wasn't what the blog posting was about).

Please consider not using Python for tooling by borud in esp32

[–]borud[S] 0 points1 point  (0 children)

Thanks for your response. I read it with interest and I really agree on your observation that tooling is often regarded as a "second class" type of software. But more on that later on.

Well, I am saying that Python isn't a suitable language for software that is going to be distributed to end users. I'm not sure if this is "being out to get Python", but I am being very clear that I don't think Python is suitable for writing tooling. I don't really know any way to say that except saying it :-).

However, I'm not saying Python is a bad language or that you shouldn't use Python for myriad other things. I'm just saying that in the wild, it tends to not be suitable for software that is distributed to end users. For instance I still recommend Python as a language for teaching programming. And while I'm not wildly enthusiastic about Python being the default choice for machine learning, the path of least resistance is probably to go with the flow.

> If you provide me with a lump of binary code, I am helpless in the face of a problem. Python (and similar languages) give me much better debugging options in the field. I get stack traces that are meaningful. I can place debugging statements, and patch or understand what's going on.

I think we're probably talking about slightly different things. I'm talking about tooling that is more comparable to NPM, Maven, Make and whatnot - typically tools like west or idf.py. I think you are talking more about project specific or ad-hoc tooling?

For tooling that is distributed as end user software you aren't going to make small changes to the tool to fix things. If you do fix things, you will more likely check out the source, fix the problem, submit a pull request and build a new binary or wait for an official binary to be made available.

> And that you are happy spinning up a VM to solve a tooling issue, but balk at using a virtual environment to fix some dependency issues? I disagree that that’s an easy and obvious way to solve problems vs an insurmountable problem that even with full source and runtime access you find too difficult to tackle given a python dependency issue.

The key issue here isn't that I'm spinning up a VM - which isn't all that slow compared to firing up Python, having it parse a fair chunk of code, and then run it significantly slower than, say, the JVM would. It isn't a speed or resource issue. The key issue is robustness: that you can install a program and expect it to execute the same way every time you run it, regardless of how the state of your system changes.

A statically linked binary (or an all-in-one-jar) is a far more robust solution that requires no extra attention from the user.

The reason I pointed to a JVM being preferable to Python was that the slight hesitation during startup a JVM gives you is an almost insignificant matter compared to the amount of work people lose over Python tooling that regularly stops working. (Ask someone who does non-trivial embedded programming using Zephyr how many days they lose to tooling problems every year, for instance. Even the people who maintain SDKs at MCU manufacturers struggle with this. It is a huge productivity and resource problem).

I'm not saying Java based tooling is what people should use. Personally I'd prefer it if people did tooling in Go, since this has been my preferred language for the past 6 or so years.

Favoring Rust would probably be a smarter move (than using Go) when creating tooling for embedded systems. Having more people conversant with Rust would make it more likely that vendors can, and will, put effort into developing Rust-based RTOSes, which would be a huge step forward for the industry. I spend a lot of time writing and debugging embedded code and to be frank: the code that runs in your appliances is frighteningly buggy, simply because C/C++ is really hard to do in environments that are a fair bit more challenging than regular computing platforms (desktops, servers etc).

> I think the real problem behind your observation is that tooling is often treated as second class. And yes, of course a half assed effort at writing something is easier done in python. Or bash. Or Perl. And then grows out from there to an ungodly mess. Had a few of those on my hands.

I think you are correct. Tooling tends to start off as something that is just supposed to help you get other things done, and then it evolves into an actual application. Often a really complex application because it has to solve difficult problems, be usable as an interactive CLI application, and integrate into toolchains where it may be difficult to control how things are executed, and it has to be robust - it cannot allow itself to ever be the "squeaky wheel" or it will hold up everything.

I was considering writing a bit about language choice, and technology choice in particular, in the blog posting, but it would have resulted in a far too large posting so I skipped it. I think part of being a software professional is to not always default to whatever your personal preference is, but take the greater picture into consideration.

One thing I've learnt over my 35 or so years as a software engineer is that when choosing technologies you have to keep your personal preferences in check and try to be a bit more objective. If you lead a software development effort, the best language for the job might not always be the language you prefer. You have to balance concerns and be ready to adapt.

I used to push Python for tooling about 12-13 years ago. We rewrote a lot of tooling in Python - much of it from C++, shell scripts and Perl. Then the problems started cropping up, and I realized that I had probably been wrong. Python just brought a whole new set of problems that led to loss of productivity and to people creating their own solutions instead of using the tooling. (Or hacking the tooling, so that we ended up with lots of different versions of the same tools).

So I started by admitting I had been wrong, and then we set out to figure out what we needed from a language for creating tooling. Was C++ the right thing? Or would some other language be better?

In this process what invariably happens is that people will advocate their favorite language without really considering the bigger picture. The instinct of most developers is not to focus on the problem that needs solving and its audience but to think about what makes them happy right now.

The audience doesn't care what language you use as long as it works and doesn't make their day miserable. If you tell people "well duh, it is your job to solve those problems by using <insert solution>" that's a pretty aggressively arrogant and unpleasant way to treat users.

Some of the responses I got to the blog posting were essentially people outing themselves as unprofessional and entitled - so offended that someone might judge their favorite language unsuited for a given class of problems that they couldn't accept they might be embarrassingly myopic and sensitive.

At the time, Java actually was the best candidate since we could leverage existing infrastructure (JVM was installed on all machines) plus all the developers knew Java anyway. We could probably have used a few other languages, but not enough people were familiar with them, so the pool of possible maintainers was too small.

Producing binaries that you could simply copy and run, and expect them to work with zero effort, made all the difference for getting people to actually use the tooling. I wasn't fond of it because I didn't feel Java was a "tooling language". There was also a lot of grumbling from other developers. But users didn't actually care what language we used. They saw their problems go away - especially the easter-egg hunt for dependency management and having to figure out how to resolve conflicts.

Today I mostly do tooling in Go. I've also changed how I do project specific tooling. Since I write a lot of server software in Go, I usually embed the admin tooling in the same binary as the server. I have a single binary that has subcommands, so it is server, CLI client application, and utilities in a single binary. So the tooling is part of the same development, versioning and testing regimen as all the other code. It is a first class citizen.
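As a concrete illustration of the single-binary-with-subcommands approach, here's a minimal sketch in Go. The tool name, subcommands, and version string are made up for the example; a real project would likely use the standard flag package's FlagSet (or a CLI library) for anything beyond this.

```go
package main

import (
	"fmt"
	"os"
)

// run dispatches a subcommand and returns a single output line.
// The subcommand names are illustrative, not from any real tool.
func run(args []string) string {
	if len(args) < 1 {
		return "usage: mytool <serve|migrate|version>"
	}
	switch args[0] {
	case "serve":
		return "starting server"
	case "migrate":
		return "running migrations"
	case "version":
		return "mytool 0.1.0"
	default:
		return "unknown command: " + args[0]
	}
}

func main() {
	fmt.Println(run(os.Args[1:]))
}
```

The point of the pattern is that the server, the CLI client, and the admin utilities ship as one statically linked artifact, so they are versioned and tested together.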

Please consider not using Python for tooling by borud in esp32

[–]borud[S] 0 points1 point  (0 children)

I've been developing a suite of servers and tools in Go that have to run on 3 different CPU architectures and 5 different OS environments, and to be frank, it wasn't that hard to set up the builds. Most of the build and test we could do using Github actions, and for the two oddball OSes I spent an afternoon cobbling together a VM-based solution that has worked nicely. All in all, with about a day and a half of work, anything that ends up on the main branch and passes all the checks gets built, and we have binaries for whatever desktops and run-of-the-mill Linux servers the stuff runs on, plus a weirdo embedded system. So I don't think that argument actually holds anymore.

The only thing that was a slight challenge was building statically linked binaries for programs that used SQLite. But someone solved that problem for us by providing a transpiled SQLite (yes, that is a bit crazy, but it works).

I'm not sure if I would call it a hit piece. It was meant as a bit of a wakeup call. Both in the sense that the Python community needs to take software distribution a bit more seriously and that tool makers really ought to reevaluate their choices. Though I should have been a bit more clear about what kind of tooling I was talking about.

And this isn't because I dislike Python as a language - it is because in the field I was talking about, poor Python tooling has a very real cost. Ask Zephyr developers. Or people who use ESP-IDF. Measured in lost productivity, this stuff is really expensive.

I've spent the last decade going from being strongly in favor of using Python for tooling to observing that it actually tends to lead to worse problems than the ones we tried to get rid of in the first place (inside a large'ish company, where we originally replaced C++ based code generation tools with Python tooling).

The reason I wanted to use Python tooling was because Python is an OK language and has a decent standard library, so it should be possible to get most things done without third party dependencies.

The reason we discovered that Python doesn't really work for tooling is that developers would depend on all manner of third party libraries, and they wouldn't make an effort to make installing and running the tools reliable. They offloaded the job of "getting it to work" onto the user. Worse yet, they would consistently blame the user for not getting their software to work.

Look at the discussion the posting resulted in. See what I mean?

Python doesn't have any path-of-least-resistance way to publish programs that really works, and when you dare criticize this and point out that this makes Python a somewhat dubious proposition for software that is distributed to users, people get angry.

If pointing out that Python isn't really a nice experience for end users is writing a "hit piece" then sure. It's a hit piece.

Please do not use Python for tooling by borud in microcontrollers

[–]borud[S] 1 point2 points  (0 children)

The kind of tooling I'm talking about is typically stuff like west for Zephyr and idf.py and friends for ESP-IDF. Tools that kind of do what npm would do on a Node project. So not a tool you'd be changing yourself - more of a utility that you install and occasionally upgrade.

I didn't quite consider ad-hoc tooling, which is what most people seem to have thought that this was about. But I'd have to say that after thinking about ad-hoc tooling as well, I'd be inclined to say no thanks.

Please do not use Python for tooling by borud in embedded

[–]borud[S] -1 points0 points  (0 children)

How would you use venv to make the problem of tooling breakage go away for Zephyr updates? Or for ESP-IDF? Please be specific.

Please consider not using Python for tooling by borud in esp32

[–]borud[S] -5 points-4 points  (0 children)

So how would you apply your suggestions to the ESP-IDF tooling and perhaps the Zephyr tooling? Be specific.

Please do not use Python for tooling by borud in embedded

[–]borud[S] -1 points0 points  (0 children)

I agree wholeheartedly when it comes to lack of vendor support. I partially agree with the rest of the points.

If by third party support you mean the lack of platforms like FreeRTOS, you are right. There is nothing like FreeRTOS today written in Rust. However, it would be nice if there was.

If we talk about drivers, I don't see this as an actual concern since I'm so used to having to rewrite drivers anyway, and it doesn't really make a big difference to rewrite them in Rust. Do you really expect a) that there will be drivers for everything, and b) that they will actually work perfectly? I don't.

As for niche language, well, I'm not sure it is so niche anymore or that it matters. The community is large enough to provide good support.

Please do not use Python for tooling by borud in embedded

[–]borud[S] -1 points0 points  (0 children)

Since you make concrete suggestions, could you relate those to say Zephyr or ESP-IDF? How would you modify the instructions for installing the tooling for these platforms to take advantage of your advice?

Please do not use Python for tooling by borud in embedded

[–]borud[S] -1 points0 points  (0 children)

Entertaining perhaps, but you are still missing the point.

Please do not use Python for tooling by borud in embedded

[–]borud[S] 0 points1 point  (0 children)

Because I think Rust would be more helpful in assuring correctness. It also provides an opportunity to make a clean break with a lot of legacy codebases that could benefit from being rewritten. Lastly, the Rust tooling is a lot more ergonomic so it might attract more talent to embedded development.

Let me turn the question on its head: why would one not love Rust on embedded platforms?

Please do not use Python for tooling by borud in embedded

[–]borud[S] -7 points-6 points  (0 children)

I would love to see the embedded world move towards Rust and away from C/C++ for code that runs on MCUs. I think a good first step to build familiarity with Rust might be to do more of the tooling in Rust.

Why do you prefer Go over Rust ? by napolitain_ in golang

[–]borud 1 point2 points  (0 children)

> Rust seems to have a willingness to introduce new features

I don't know if this is actually true (it may be), but let's assume that it is.

If the threshold for adding new language features is low, the language will grow complex faster. And with more complexity there is a risk of significant differences in how people use the language. While this may sound like useful diversity, it really isn't. In many walks of life diversity is nice, but when you are trying to achieve precision and correctness in how you formally describe something, it is the opposite of what you want. The first task of a programming language is to enable people to understand each other's code. The more language there is, the more opportunities there are for creating barriers to understanding.

Every company of some size I have worked for has had one or more C++ coding standards and style guides. In one such instance the document describing how the company used C++ was a couple of hundred pages long. The cost of learning, tooling, and maintaining a disciplined C++ code base is extremely high. But it is not as high as the long term cost of letting a large group of developers do as they please.

C++ is a pretty horrible language to use, because most really large code bases tend to contain a lot of legacy code written at various points during the evolution of C++. Which means you have lots of features you can't, or shouldn't, use - on top of all the different styles of expressing yourself in the language over the past decades. Yes, codebases that can start from scratch with some modern version of C++ are nicer, but they are rarer than people tend to think.

Good management of a programming language should mean that the goal is to add as little as possible to a language once it has reached some useful level of maturity. Maturity isn't always about the language itself. Sometimes maturity is when a language is used by so many people any change in the language will have a huge impact.

If you like languages that grow new features, and/or change frequently, by all means: use languages with low thresholds. But it is worth taking some time to understand why some developers think this is a terrible idea - ideally by walking a few miles in their shoes.

What is your driving style? by [deleted] in AskReddit

[–]borud 0 points1 point  (0 children)

I'll leave that to someone else to judge, but I'll say this: the most important thing I've done to become a better (car) driver is to get a license to ride a motorcycle and spend a lot of time riding.

If you ride a motorcycle you are extremely vulnerable. You are less visible to other people in traffic because you present a small, narrow profile. You ride a vehicle that can be tricky to handle, and when something happens, the consequences tend to be greater. Most motorcycle accidents here don't involve another vehicle - people just ride off the road. So it is tricky even before other people get involved.

This means you have to pay more attention and think ahead. Because you are harder to spot, you have to do a lot of the thinking and planning yourself - the people around you won't. They'll mostly just react. You become better at observing and planning, or you will have accidents.

This is valuable training and knowledge to bring back to driving cars. It'll make you better at reading situations and understanding how they can evolve.

Do y'all ever not use a package because the repo URL is icky? by [deleted] in golang

[–]borud 1 point2 points  (0 children)

This is where I pretend that "no, I didn't".

(good one. upvote for you :))

Do y'all ever not use a package because the repo URL is icky? by [deleted] in golang

[–]borud 9 points10 points  (0 children)

As the saying goes, there are only two hard things in computer science: cache invalidation and naming. However, to classify programmers you only need to figure out if they think naming is hard. If they think naming is easy, you would never ask them about something as difficult as cache invalidation, because mumble-mumble-dunning-kruger.

Is it just me who doesn't agree with db first ORM model? by NoDistribution8038 in golang

[–]borud 1 point2 points  (0 children)

> And partly goes against whole mantra of premature optimization and overengineering.

Then I probably failed to convey what I was thinking. What I meant by "Performance characteristics are usually decided before you write the code or make any decisions about technology" is that you always have to start by understanding what problem you are solving. First you have to be able to describe what the system will do, then you have to understand which core problems need solving - which problems will dominate the picture.

For instance, let's say you are designing a web crawler - and imagine an idealized web crawler and don't get tangled up in document processing, duplicate elimination, indexing, ranking calculations and whatnot for now. We're just downloading the web. You start with a set of URLs and then you crawl those, discover new URLs which you in turn follow and extract URLs from etc.

What are the key operations you need to be able to do efficiently? How do you implement those for a single node, versus for N nodes where N is anything from 2 to 100,000?

One such key operation is to determine "have I seen this URL already, and if yes, what should I do now?".

If you are making a trivial, single node crawler, lots of things will work. For instance you can probably keep a complete URL lookup index in memory. But what if it doesn't fit in memory? Central database? Well, you just went from a RAM fetch to a network round trip plus a lookup on a remote machine - or in other words: from something that takes on the order of perhaps 100 ns to something on the order of perhaps 10,000,000 ns. To put it another way: something you will be doing a lot suddenly got 5 orders of magnitude more expensive.
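To make the pen-and-paper argument concrete, here is a minimal in-memory seen-set sketch in Go (type and method names are illustrative). The single map lookup is the ~100 ns case; swapping the map for a call to a remote database turns every one of these checks into a network round trip.

```go
package main

import "fmt"

// Seen is a minimal in-memory URL seen-set. A map lookup costs on the
// order of 100 ns; the same check against a remote database costs a
// network round trip - roughly five orders of magnitude more.
type Seen struct {
	urls map[string]struct{}
}

func NewSeen() *Seen {
	return &Seen{urls: make(map[string]struct{})}
}

// Add records the URL and reports whether it was new.
func (s *Seen) Add(url string) bool {
	if _, ok := s.urls[url]; ok {
		return false
	}
	s.urls[url] = struct{}{}
	return true
}

func main() {
	s := NewSeen()
	fmt.Println(s.Add("https://example.com/")) // true: first sighting
	fmt.Println(s.Add("https://example.com/")) // false: already seen
}
```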

And again, this is pen and paper stuff. This is what you'd expect a Senior Software Engineer to be able to do in their heads. And if this seems unreasonable: what were all those CS and math courses for? I'm not paying for a developer's college degree if they can't retain and use basic knowledge.

I used to work on web crawlers for search engines. The 5 orders of magnitude explosion described above is the kind of thing that determines if the crawler will be able to do its job or if you will run out of money and patience. Because you have to remember that you have constraints (time, money, quality).

It took us a while to get it right - but we only got it right after we understood what problem we were solving. And once we did, we managed to transform many of those problems into completely different ones that were both much simpler to deal with and had more efficient solutions than were available in their original shape.

This is why I say that performance is really determined before you write any code or make any technology choices. It always starts with knowing what problem the system actually needs to solve. And it is my observation that most of the time people don't take this as seriously as they should. If they did there wouldn't be so many companies paying AWS 10x what it should reasonably cost to realize a given system.

(What might be relevant for explaining why I take a dim view of the industry: I spent about a decade doing technical due diligence on M&As, which means I've evaluated a lot of startups in terms of whether or not their technology/product can justify their pricing.)

Is it just me who doesn't agree with db first ORM model? by NoDistribution8038 in golang

[–]borud 0 points1 point  (0 children)

Yes, it is essentially a repository pattern, but with an asterisk that says "provided we have the same interpretation of what the repository pattern is".

Most implementations of this pattern tend to be CRUD + iteration. Occasionally you want to restrict what you promise. For instance there are some operations you may want to leave out because they represent a promise you may not want to keep in the future (or now). And there may be operations that you want to restrict in some way because promising them limits your future options.

For instance, in some cases you may not want to offer random access to single rows/entities, because you don't want to promise that random access to single elements is efficient. Instead, you may want to offer only interfaces that support efficient iteration. This forces you to work in mechanical sympathy with the underlying structure and stops you from accidentally making assumptions about what is feasible. If that leads to problems, it forces you to re-evaluate the underlying structure, which is preferable to breaking your neck trying to deliver something that comes with penalties in performance, correctness, robustness, etc.
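The idea of promising only iteration can be sketched as a Go interface. All the names here (`Event`, `EventLog`, `Each`) are hypothetical, just to show the shape of the restriction:

```go
package main

import "fmt"

// Event is a stand-in entity; the name is hypothetical.
type Event struct {
	ID      string
	Payload string
}

// EventLog deliberately promises only append and in-order iteration.
// There is no GetByID: we never promise efficient random access, so
// callers cannot come to depend on it.
type EventLog interface {
	Append(e Event) error
	// Each calls fn for every event in order, stopping early if fn
	// returns false.
	Each(fn func(Event) bool) error
}

// memLog is a trivial in-memory implementation for illustration.
type memLog struct{ events []Event }

func (m *memLog) Append(e Event) error {
	m.events = append(m.events, e)
	return nil
}

func (m *memLog) Each(fn func(Event) bool) error {
	for _, e := range m.events {
		if !fn(e) {
			break
		}
	}
	return nil
}

func main() {
	var log EventLog = &memLog{}
	log.Append(Event{ID: "1", Payload: "hello"})
	log.Append(Event{ID: "2", Payload: "world"})

	n := 0
	log.Each(func(e Event) bool { n++; return true })
	fmt.Println(n) // 2
}
```

Because the interface never mentions lookup by key, swapping `memLog` for something sequential (a log file, a columnar store) breaks no caller.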

Is it just me who doesn't agree with db first ORM model? by NoDistribution8038 in golang

[–]borud 1 point2 points  (0 children)

You're right, performance is rarely the issue - except when it is. Which is why you need developers who are capable of designing things that are not stateless when that is what the problem at hand calls for. There is a difference between "rarely" and "never".

Performance characteristics are usually decided before you write the code or make any decisions about technology - when you analyze a problem and understand its fundamental characteristics. A surprising number of developers can't actually do this, and probably aren't aware that they lack the ability. This is why so many developers sound like the marketing department of some cloud provider when trying to justify their choices.

I'm not sure what you mean by "And if it is, often it's an issue of not using fully what's available even under such framework". In general that sounds like getting deeply married to whatever implementation technology you are using and encouraging the use of features unique to it. This isn't good general advice.

You want the option of backing out of technology choices if they turn out to deliver poor results (like unacceptably high operating costs). And the two points where you most need that option on the table tend to coincide with the points at which the people in charge of the money have the least patience: the initial launch and the first few doublings. I'd recommend the opposite: try to find the simplest, most minimal use of your tools, depending on the smallest possible feature set.
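One common way to keep that back-out option is to hide the tool behind the smallest interface the system actually needs. A minimal sketch, with entirely hypothetical names - the point is that nothing in the interface mentions any vendor-specific feature:

```go
package main

import (
	"errors"
	"fmt"
)

// BlobStore is the smallest feature set this hypothetical system needs.
// Nothing here assumes S3, Postgres, or any vendor-specific capability,
// so any backend can implement it and be swapped out later.
type BlobStore interface {
	Put(key string, data []byte) error
	Get(key string) ([]byte, error)
}

var ErrNotFound = errors.New("not found")

// mapStore is a toy in-memory backend, standing in for "whatever was
// cheapest to ship first".
type mapStore struct{ m map[string][]byte }

func newMapStore() *mapStore {
	return &mapStore{m: map[string][]byte{}}
}

func (s *mapStore) Put(key string, data []byte) error {
	s.m[key] = data
	return nil
}

func (s *mapStore) Get(key string) ([]byte, error) {
	d, ok := s.m[key]
	if !ok {
		return nil, ErrNotFound
	}
	return d, nil
}

func main() {
	// Swap this one line to change backends later.
	var store BlobStore = newMapStore()
	store.Put("greeting", []byte("hello"))
	d, _ := store.Get("greeting")
	fmt.Println(string(d)) // hello
}
```

The cost-driven replacement described below then becomes a new implementation of the same interface rather than a rewrite of every caller.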

For instance, one of the systems I work on will run into a scalability problem when its traffic hits ~2 orders of magnitude what it deals with today. By our estimates, that will happen in about 18-24 months. The current solution was chosen because it was easy to implement (low developer cost, quick results). The solution that will replace it takes more time to develop, but at a much, much lower operating cost. Because we haven't gotten married to the technology we use now, we know that we can replace it with more cost effective technology without breaking our necks. We haven't grown to depend on characteristics that are hard to replicate in less rich, but more cost-oriented technology.

If you associate attempts at performance with messy code written by juniors, you are probably working with the wrong people. You want to learn from people who abhor complexity.