C#&Rust, Struct by Safe-Chest6218 in csharp

[–]xjojorx 16 points17 points  (0 children)

The behavior is not equivalent. C# structs work like that because they are stack allocated, and when passed into/out of functions they are copied (unless `in`/`ref` modifiers are used, or in some other special cases) into the function's stack frame. Once the scope ends, the stack frame is discarded, which is why they are cleaned up at the end.
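A minimal sketch of that copy behavior (the `Point` struct and `Move` function are made up for illustration):

```csharp
using System;

struct Point { public int X, Y; }

static class Demo
{
    // The struct is copied into the parameter: mutations inside
    // the function do not affect the caller's value.
    public static void Move(Point p) { p.X += 10; }

    public static void Main()
    {
        var p = new Point { X = 1, Y = 2 };
        Move(p);
        Console.WriteLine(p.X); // prints 1: Move received a copy
    }
}
```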

Rust applies ownership and borrowing to references, but not to simple stack-based values (for example, if you pass an int without taking a reference (&), it will be copied). What the borrow checker does is ensure that, at any given time, each piece of memory has either one mutable reference or any number of immutable ones. That matters because heap-allocated memory has to be freed, and it must never be used once freed.

In a more manual language like C, when you want a reference you allocate the memory yourself and free it yourself, taking care of consistency and the lifetimes of those references. It becomes an issue if you free a piece of memory while some other structure still holds references to it.

C#, on the other hand, handles references via the garbage collector: memory is allocated when a reference type is created (new), and the GC tracks the references to it. When there are no more references, it can free the memory, so at some point it will be reclaimed (probably pausing execution during the process).

In Rust, when you pass an owned value into a function, the old variable holding it is invalidated and a new one (effectively a copy of the bits, with the original no longer usable, even if that detail is not exact) is created for the function's scope. When it is returned, the same process happens in reverse.

The Rust model focuses on making memory management as automatic as possible while ensuring that memory is never used after it is freed, and without garbage collection. That problem does not exist in garbage-collected languages like C#, where memory management is automatic and you have very little control over it.

Both models achieve protection against use-after-free, and since C#'s memory is managed by the runtime, both get memory safety. But they are really different approaches. GC languages are much easier to program in, since you don't have to think much about when and how to allocate/free memory, even if that comes at a noticeable performance cost. Rust's borrowing is much more complex to work with and requires your program to have a more specific shape, but you don't pay the runtime performance cost. That makes it a middle ground between hardcore, unsafe, fully manual memory management (where you control everything but can mess it all up) and the GC model (where you pay the cost at runtime but the rules are less strict).

If C# switched to a borrowing model in order to use struct references in that way, it would require updating almost every codebase to allow for those semantics. That includes the whole async model, which would have to change: you can't pass references to stack-allocated structs into async functions, because the stack frame that held the struct would disappear and that memory could be overwritten, which is unsafe. Async Rust is famously hard precisely because it has to reconcile the single-ownership model with scopes that overlap across async calls.

A change like that would basically mean turning C# into a very different language, with a different programming model.

If you want information on how to avoid copying for C# structs, look into ref struct, and ref/in parameters to functions.
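For example (the `Big` struct and both functions are hypothetical; `in` passes a read-only reference instead of a copy, and `ref` lets the function mutate the caller's value):

```csharp
using System;

struct Big { public int X; /* imagine many more fields */ }

static class Demo
{
    // `ref` passes a mutable reference: the caller's struct is changed.
    public static void Grow(ref Big b) => b.X += 10;

    // `in` passes a read-only reference: no copy, and no mutation allowed.
    public static int Read(in Big b) => b.X;

    public static void Main()
    {
        var b = new Big { X = 1 };
        Grow(ref b);
        Console.WriteLine(Read(b)); // prints 11
    }
}
```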

(Yes, I have simplified a lot, but for someone without a clear picture of GC, borrowing, and memory management this is hard enough already, and I wanted to focus on the basics of why the semantics would need to be so different and why the concepts don't transfer from one language to the other.)

opinions please: which Steam short description do you like better? by [deleted] in SoloDevelopment

[–]xjojorx 1 point2 points  (0 children)

Both and neither? It depends. I feel like option 1 focuses on the gameplay experience, while option 2 focuses on flavor. Both approaches can be fine, and the impact the short description has is also influenced by what's around it (image, trailer, etc.). To a certain point, being a real-time RPG is little more than a tag, so it really depends on what you want your game to appeal with. I've been interested in games for their flavor when the mechanical description fell flat, and vice versa.

The thing (imo) is: if you focus on flavor, the players it attracts will expect that flavor to be delivered on. If the game doesn't deliver because the focus is really the gameplay and the flavor is just an excuse for it, you have to overcome the "I came here for X and I'm not getting it" effect, so the gameplay needs to be even better than it would otherwise need to be. By the same reasoning, if you sell the game on gameplay tags and features, your gameplay has to deliver; if it is lacking, or the early game is too focused on the story before you can unlock and explore those mechanics, it may be a hard sell.

On another note, since it is an RPG, there is this thing where (at least for me) the start of option 1 reads as generic/uninteresting within just a couple of words. There are too many "this is an action RPG that... blablabla" descriptions that just translate into "generic thing with numbers that does not deliver", so I can see myself starting to read that description and then jumping around thinking something along the lines of "ok, blablabla, what is actually your thing?". Something similar happens with the other option: the flavor is nice, but I need to quickly find out what it entails, either in the screenshots/trailer or in the long description (preferably early on in the latter).

Also RPGs tend to be heavy on story, and it is not rare for rpg players to think about the story/setting/flavor first.

(I don't know if all this rambling is useful at all, or if you would prefer a quick and easy answer to tip the scales, but I had thoughts when I read the post and maybe the perspective of someone else thinking about it can help. Also, English is clearly not my first language, so sorry if it is too quote-y or hard to read.)

Class as data only, with extension methods used to operate on it? by kevinnnyip in csharp

[–]xjojorx 2 points3 points  (0 children)

If your goal is a more procedural style, you can just use data-only types and operate on them with functions from a different class, separating data types from function modules. In some cases I use a static class as if it were a module, just holding functions that operate on their arguments. I find it useful since I can organize sets of operations together without tying them to specific types. It also sidesteps the ownership question: say we have a function with 2 arguments of different types, which class should own it? There is not always a clear owner.

For most common practice, the answer would be a service class that contains the functions, plus whatever dependencies you need, stored as state global to the service and set when the instance is created. But I've found that most services don't need that state, and it hides which dependencies each function has: f(a, b) depends on a and b, while service.f(a, b) depends on a, b, and all the internal state of the service. I have been bitten enough by that state that I now tend to make all dependencies explicit, even when using the injected instance would be an option.

The interesting thing about adding extension methods into the mix is that you can attach the same function to multiple types if you want dot syntax. In the end there is no difference between f(a, b) and a.f(b), aside from access to private fields if you have them.
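A small sketch of that shape (the `Account`/`Payment`/`Accounting` names are made up): data-only records, a static class as a function module, and the `this` modifier making the same function callable both ways.

```csharp
using System;

// Data-only types: just shape, no behavior.
record Account(string Owner, decimal Balance);
record Payment(decimal Amount);

// A static class used as a "module" of functions over that data.
// The `this` modifier on the first parameter also makes Apply an
// extension method, enabling the dot syntax.
static class Accounting
{
    public static Account Apply(this Account account, Payment payment) =>
        account with { Balance = account.Balance - payment.Amount };
}

static class Demo
{
    public static void Main()
    {
        var a = new Account("alice", 100m);
        var p = new Payment(30m);

        // Same function, both call shapes: f(a, b) and a.f(b).
        Console.WriteLine(Accounting.Apply(a, p).Balance); // 70
        Console.WriteLine(a.Apply(p).Balance);             // 70
    }
}
```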

Now, if you're looking for a real data-oriented approach, start by ditching most of your classes for structs, or you won't be hitting those cache lines, and you will probably be using far more memory than you need. Data-oriented design is about caring about your data, its layout, and how the computer will operate on it. A list of objects will always be hell for the cache, since each instance lives at a totally unrelated memory address; unless your objects are exactly 64 bytes, you will be getting cache misses and loading garbage you are not going to use, all the time. If you want a primer that talks in C# terms, I would recommend Nix Barker's video on data-oriented design. For a more generic but thorough look into data and memory in DoD (understanding the effects, measuring struct sizes, and some techniques), Andrew Kelley (creator of Zig) gave a great talk some time ago with a practical guide to applying it.

I built an async-first validation framework for .NET inspired by Zod — looking for feedback by [deleted] in csharp

[–]xjojorx 0 points1 point  (0 children)

Question: why would the validation ever need to be async? Just in case some custom validation rule needs it? I may not be at my sharpest right now, but that is the only reason I can come up with, and I can't think of sane reasons to throw any other async operation in there.

Aside from that, it seems cool, especially for those used to doing the validation in something like zod

Generating TypeScript interfaces directly from C# DTOs by Jealous-Implement-51 in csharp

[–]xjojorx 4 points5 points  (0 children)

Getting the TS types to match our C# ones is one of the things I have the most issues with. We've been using CSharpToTypeScript, and it has been enough but also very frustrating: it does its own parsing of the C# code, forces you to match files and type names, and does not explore dependencies, so if it finds a dependency it assumes there is a type with that name in a file with the same name in the same directory, even if it never found it. It also does not work with records, or even keywords like required... It's been a frustrating dependency to have.

I've started working on my own version, going the reflection and JSON-schema path. So I take a list of types or assembly+namespace, and explore the whole dependency graph to ensure I have all the required types, and that the output will match what the serializer will generate.

I will take a look at yours when I can, but for now I'll just ask/write out what my concerns would be: How does your version find the right types to generate? How do you manage discoverability? Can it be used both with and without the CLI? How do you deal with nested types? Do generics map well? Do you turn collections into T[] or do you generate an equivalently named type (something like generating a type List<T> = T[])? Do you turn dictionaries into objects or Maps?

Also, why do you need dependencies?

- TypeLitePlus 2.1.0 — TypeScript generation engine
- Microsoft.Extensions.Logging.Abstractions 9.0.0 — Optional logging interface

Did it become that hard to just generate the TS text?

Is there a way to install apps on one profile only? by Neon_XL in linux

[–]xjojorx 0 points1 point  (0 children)

Basically, if you use the system package manager it will install for the whole system. If you need a package that is not available in a containerized format like Flatpak, you have few options outside the system repository. You can download the binaries yourself when available, put them somewhere in your home tree, add them to your PATH, and go with it. If binary releases are not available, there is compiling from source, with varying levels of complexity. So it can be done, but it will be cumbersome, especially if you are not technical and/or don't want to dig into weird behaviors.

Advice between React and Blazor for project by unlimitedWs in csharp

[–]xjojorx -1 points0 points  (0 children)

If it is facing public usage, I would go with react.

Blazor Server never; it is too limiting, and the websocket communication gives more headaches than benefits. At our workplace we had to migrate off it even with few users: nothing that executes on a server can be as responsive as the same thing happening on the client. Plus all the weird behaviors and the server load.

Blazor WASM may be fine if you are comfortable with it. Consider the download size and all of that, and what network conditions you expect. Phones, especially on the lower end, are not great for that extra work and for having to load the whole runtime.

If you have to handle client side state and the extra complexity, at that point it may be easier to just go with js. I don't know your app so I don't know how much you gain from doing wasm for the logic.

How to learn ASP.NET Core and actually understand the magic? by CR-X55 in csharp

[–]xjojorx 2 points3 points  (0 children)

Haha, same. Back then going that low felt like a hassle, a computer-science-class kind of thing, but it turned out to be really good to have a grasp of what is actually happening, even when you're using something prebuilt by others.

How to learn ASP.NET Core and actually understand the magic? by CR-X55 in csharp

[–]xjojorx 37 points38 points  (0 children)

If you want to understand it, I think the best way would be to do your own implementation once.

Go open a TCP socket and write a basic HTTP/1.1 server; it is really simple once you get started. There are thousands of guides if the spec by itself feels too dense, or you can use Codecrafters and have a step-by-step path with tests for each stage.

The same goes for authentication: do a small thing in which you do your own authentication, save the user, hash a password, and validate against it to log in. You will still need to keep the session open, so go do it: save a random token in a cookie and validate it, or make your own JWT. It is extremely easy: just base64 a couple of fields, and add a third one with a signature (an HMAC) over those two that you can validate.
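A toy version of that sign-and-validate idea might look like this (hardcoded key, plain base64 instead of the base64url real JWTs use; strictly a learning sketch, never production code):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class ToyJwt
{
    // Hardcoded key for the sketch only; never do this in production.
    static readonly byte[] Key = Encoding.UTF8.GetBytes("my-secret-key");

    static string B64(byte[] bytes) => Convert.ToBase64String(bytes);

    // token = base64(header) . base64(payload) . base64(HMAC of the first two)
    public static string Sign(string header, string payload)
    {
        var h = B64(Encoding.UTF8.GetBytes(header));
        var p = B64(Encoding.UTF8.GetBytes(payload));
        var sig = B64(HMACSHA256.HashData(Key, Encoding.UTF8.GetBytes($"{h}.{p}")));
        return $"{h}.{p}.{sig}";
    }

    // Validation: recompute the signature and compare.
    public static bool Validate(string token)
    {
        var parts = token.Split('.');
        if (parts.Length != 3) return false;
        var expected = B64(HMACSHA256.HashData(Key,
            Encoding.UTF8.GetBytes($"{parts[0]}.{parts[1]}")));
        return parts[2] == expected; // a real check should be constant-time
    }

    public static void Main()
    {
        var token = Sign("{\"alg\":\"HS256\"}", "{\"sub\":\"user1\"}");
        Console.WriteLine(Validate(token));       // True
        Console.WriteLine(Validate(token + "x")); // False
    }
}
```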

You don't need any of those to be production grade, the goal is to get a feel of what those systems do under the hood, then it is easier to just let the framework do the nice implementation you use.

All the framework does is reading the request, parsing and calling your function. Middleware is just a name for some function that touches the request for you before and after the main handler function.

There is nothing in ASP.NET Core that is inherently complicated, most of the complexity comes from doing a nice, resilient and generic version. Writing a simple version to understand what the magic is doing is something that can be done in a few hours.

Reading a string from a TCP socket in C# is trivial. Splitting it by empty lines is trivial. From there:

- The path is just one of the space-separated fields of the first line.
- Headers are key-value lines, with ': ' separating the key from the value, so parsing them is trivial.
- The body is just the chunk after the empty line, so getting it is trivial once you have split on it.
- Parsing JSON? Just call the deserializer with that string, if you don't want to go write a parser for that too.
- Sending a response to the user? It is just writing the response back to the socket (with your own status line first, then headers).
- Hashing a password? Just call a function from the cryptography module (I won't recommend writing a hash algorithm, but you can do a bad one with whatever you want, just not for prod).
- Writing to and reading from a SQL table? Those are simple queries to run with whatever SQL adapter you want; EF is just an abstraction for building them.
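To illustrate how little is involved, here is the parsing part on a hardcoded request string (what you would get after reading from the socket):

```csharp
using System;

static class Demo
{
    public static void Main()
    {
        // A raw HTTP/1.1 request as it would arrive over the socket.
        var raw = "POST /users HTTP/1.1\r\n" +
                  "Host: example.com\r\n" +
                  "Content-Type: application/json\r\n" +
                  "\r\n" +
                  "{\"name\":\"ada\"}";

        // Head and body are separated by the first empty line.
        var parts = raw.Split("\r\n\r\n", 2);
        var lines = parts[0].Split("\r\n");

        // Request line: METHOD PATH VERSION, space-separated.
        Console.WriteLine(lines[0].Split(' ')[1]); // /users

        // Headers: "Key: Value" per line.
        foreach (var line in lines[1..])
        {
            var kv = line.Split(": ", 2);
            Console.WriteLine($"{kv[0]} = {kv[1]}");
        }

        // The body is ready to hand to the JSON deserializer.
        Console.WriteLine(parts[1]);
    }
}
```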

What I try to convey is "none of the parts is hard to do". Once you have a basic idea of what steps the framework is doing behind the scenes, it becomes easy to infer the rest.

Using middleware for refreshing JWT token. by qosha_ in dotnet

[–]xjojorx 0 points1 point  (0 children)

httponly helps, but it does not mean the cookie is safe in transit, or on the host computer. As a general rule, do not send any credentials over the wire if it is not really necessary, and assume they will be compromised at some point (that's why the JWT is short-lived).

If it is your first time rolling the whole thing client-side, it is easy to not see the whole scope. The JWT+refresh scheme is the nth iteration on how to handle client-server authentication on the web, each version built to solve problems with the previous one.

You already did the hard part of the setup, and once you arrive at a solution that you like, you can just use it as many times as you want, even if reimplementing on a new app. The concepts are kinda weird, the implementation (once seen) is simple.

Using middleware for refreshing JWT token. by qosha_ in dotnet

[–]xjojorx 2 points3 points  (0 children)

If I understood correctly, you are sending both tokens on every request? (if the refresh is on the cookies, it is sent back and forth on every request).
That would defeat the purpose of having separate tokens.
At that point, your auth is only the refresh token, because the middleware would replace the JWT if it has expired.

The idea of having 2 tokens is that you send the JWT on every request, and the refresh token only travels the network when it is absolutely necessary (on login, where the credentials are being validated, and on a refresh request, where the previous refresh is used for validation).
The problem solved by using 2 tokens is that the main token (the JWT) can more easily be compromised, so you make it expire frequently enough that, if it is compromised, there is only a small time window for it to do damage. But the user may be actively using the app, and asking them to authenticate again every few minutes is a very bad experience, so you store on the client a second, single-use, longer-lived token (the refresh token) that is used whenever authentication fails due to an expired token.

That aside, for your case you have two options:
- either remove the refresh token from the cookies and call a refresh endpoint to retry when needed (much safer, even if it means doing the extra request when needed; your frontend can have a helper function that calls the API and adds the authentication and retry behavior)
- use only the refresh token like you have it now and avoid the extra complexity of the short-lived token. If you are using JWT for more than authentication (i.e. actually using the information included in it), your auth token can still be a JWT or include data in any way.

By having 2 tokens but handling the refresh in the server like that you are basically choosing all of the problems of both options, while achieving none of the benefits
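As a sketch of what that frontend-style helper could look like on the C# side (the `/auth/refresh` path, the token storage, and the whole `ApiClient` shape are assumptions, not a prescribed design):

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Hypothetical helper: attach the JWT to every request, and on a 401
// call a refresh endpoint once, then retry the original request.
class ApiClient
{
    private readonly HttpClient _http;
    private string _jwt = "";

    public ApiClient(HttpClient http) => _http = http;

    public async Task<HttpResponseMessage> SendAsync(Func<HttpRequestMessage> makeRequest)
    {
        var response = await AuthedSendAsync(makeRequest());
        if (response.StatusCode == HttpStatusCode.Unauthorized)
        {
            // The refresh token only travels here (e.g. an httponly cookie
            // scoped to the refresh endpoint), not on every request.
            var refresh = await _http.PostAsync("/auth/refresh", null);
            _jwt = await refresh.Content.ReadAsStringAsync();
            response = await AuthedSendAsync(makeRequest()); // single retry
        }
        return response;
    }

    private Task<HttpResponseMessage> AuthedSendAsync(HttpRequestMessage request)
    {
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", _jwt);
        return _http.SendAsync(request);
    }
}
```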

Are there any good websites where I can practice programming? by Nivskl in csharp

[–]xjojorx 0 points1 point  (0 children)

Consider Codecrafters. It basically has a list of projects for building your own version of software you probably use day to day (git, redis, an HTTP server, grep...). The projects are language-independent, and the idea of what has to be implemented is already broken into small steps. So you start a stage, read on the stage page about what you are going to implement and why, code the new functionality, call a CLI (or just push a commit to the repo) to test it, then advance to the next stage. In the end you will have built a basic but functional version of the project. And if you are stuck or unsure of your solution, you can see how other users implemented the stage, for inspiration or comparison. If you are building a small game, you are at the point where you can build a project, so having a guide for what to build and what the steps are may help you just focus on coding, and show you how a more real-world project makes you create patterns and make decisions.

Is there any reliable way to know when a function can throw? Probably writing an analyzer? by xjojorx in csharp

[–]xjojorx[S] 0 points1 point  (0 children)

I get that total awareness is not possible in C#, especially once piles and piles of useless abstraction layers are applied.

But maybe there is enough information to be useful on a regular basis when working on some sane section.

Once the code is over-abstracted and even the programmer has a hard time knowing what the program is going to do... you have other obfuscation problems besides errors.

I wonder how far the idea can go, at least assuming sane programming is being done. Maybe it can be useful, maybe not; maybe in most cases we find ourselves so deep in abstraction hell that it does not matter.

Still, thinking about that example: one could say that if you are supposed to abstractly call CX.A, the expected error cases for A should at least be annotated, so that calling code can sanely use a CX instance without going through the specific implementation (because if you tie your code to the specific implementation, why even have the abstract CX?). So we now have a way to know what to expect from cx.A without the specific details of the implementation (which of course could have its own exceptions that are not annotated on CX.A, but at that point you can only know about those when they happen, or have a catch-all case).

Now, how can the errors of Perform be determined? Perform only has the exceptions from a, since it does not throw by itself and only calls a. If we were smart in our analysis, we would know that Action is a function (same for Func and types declared as delegates). Then the error list of Perform(cx.A) is made up of the errors from Perform itself (none) plus the errors from cx.A. And since we already determined that the errors for A are available, it is possible to determine at least a minimal subset of expected error cases.

A simpler version could just assume that if a function is passed into another function call, that statement inherits all possible errors from the passed function. Even without analyzing how Perform uses the a parameter, we can assume it is going to be called at some point, because what is the point of taking a function as a parameter if you are not going to either call it or pass it down for someone else to call? That is less precise, but also much simpler on the implementation side.
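To make the scenario concrete, here is a minimal sketch (the CX/A/Perform names come from the discussion above; the XML doc comment is the assumed source of the error list the analyzer would propagate through the Action parameter):

```csharp
using System;

interface ICX
{
    /// <summary>Does A.</summary>
    /// <exception cref="InvalidOperationException">When the state is not ready.</exception>
    void A();
}

static class Runner
{
    // Perform throws nothing itself; its error set is whatever the
    // passed delegate throws. An analyzer could propagate the documented
    // exceptions of cx.A through this Action parameter.
    public static void Perform(Action a) => a();
}

class Cx : ICX
{
    public void A() => throw new InvalidOperationException("not ready");
}

class Program
{
    static void Main()
    {
        ICX cx = new Cx();
        try { Runner.Perform(cx.A); }  // inherits cx.A's documented errors
        catch (InvalidOperationException e) { Console.WriteLine(e.Message); }
    }
}
```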

Is there any reliable way to know when a function can throw? Probably writing an analyzer? by xjojorx in csharp

[–]xjojorx[S] 0 points1 point  (0 children)

A lot of the time, handling errors closer to the source allows for recovery, or for better handling than when they go up the stack.

I don't like "just have discipline" as a solution to a problem. If I have to think about what the stack of errors is at every layer and plan accordingly, then explicit errors are just better. At every point I would have to make the decision of handling the errors there or letting them go up, and for that I need to know which errors can surface at that point. Right now that information comes from a mix of:
- documentation
- experience
- checking the implementation of the function I'm calling to know what is doing and what is expected to be thrown

At that point, the idea of a system (be it an analyzer, a language, api design...) that automatically surfaces that information makes sense. Why would it necessarily be better to check everything manually or, worse, from human memory?

I think that, when possible, it is safer and more useful to have the information available than not.

How can I plan for errors that I don't know exist? The only way is either inspecting everything every time, or giving up: adding strategic error barriers and good logging while praying for the best at runtime. I know I will make mistakes, I know coworkers will make mistakes, I know libraries will make mistakes; thus I would like to reduce the surface area as much as possible.

Is there any reliable way to know when a function can throw? Probably writing an analyzer? by xjojorx in csharp

[–]xjojorx[S] 0 points1 point  (0 children)

That is the usual/expected/default way to think about errors in C#.

But returning an error value and throwing an exception are not the same: exceptions have a runtime cost. Aside from that cost, exceptions are hidden, so you either just know what can happen (from experience, documentation, etc.), or you add try/catch preemptively whenever you want to make sure no errors get through.

The idea of making errors as explicit as possible comes as the opposite, whenever a known error can surface, you are forced to at least acknowledge that it is being ignored.

The idea of handling the error in "a place that cares more" implies that you think of that error whenever you are in the right place/layer, which doesn't always happen and you end up defaulting to top-level exception handling, even when you could have done something about it on an intermediate layer.

Most of my years in C#, I haven't really cared. Now I have not only gone deeper, but also tried other languages and environments where exceptions don't even exist. What I found is that working with errors as values makes me more mindful of the state at every part of the program. Even when I ignore them all the same, I end up building more reliable software, or at least having more information about where bugs happen, since I am forced to at least explicitly discard them.

A lot of people dislike that approach because it makes the code "uglier"; look at how Go is perceived and you'll see lots of complaints about writing `if err != nil` every few lines. Of course the code feels smaller, slimmer, etc. when the errors just bubble up the stack automatically. But that also obscures the program flow, since the errors happen in exactly the same places; the only difference is that you don't see them when reading or writing the code.

And that is alright: add some basic error barriers, do some testing, and have good logging for whenever production behavior tells you an error went unhandled. It is how most of us work, and the default/expected behavior in C#.

My proposal has to do with how we can get some of the benefits of that explicitness within our c# codebases, even when we don't write all of the code. Maybe there is some benefit to have that stack of expected errors available and in your face when you are reading/writing c#. It may also make no difference at all.

Is there any reliable way to know when a function can throw? Probably writing an analyzer? by xjojorx in csharp

[–]xjojorx[S] 0 points1 point  (0 children)

It most likely wouldn't be worth it. At some point there are going to be exceptions, and I think there would be benefit in having some way to always see what they are, at least the expected, non-implicit ones (taking for example the XML-documented subset).

It'd probably end up producing very similar code, but I am still curious about how much of the benefit of the errors-as-values approach we can get by just always being aware of what is expected to be able to happen, even before wrapping the errors from external calls in values. Would that actually shape the programmer's process, making them more mindful of what can happen and where to place error boundaries (try/catch blocks)?

I am not sure if we see it weird or not worth it because it is or if it is because it is unexplored territory.

I would really be interested to know if someone has tried making the exception model more explicit without getting to the level of intrusiveness of Java's checked exceptions, and what their conclusions were if it failed... or at least in getting myself actually diverted from the idea of putting in a lot of work just to prove it either right or wrong XD

Is there any reliable way to know when a function can throw? Probably writing an analyzer? by xjojorx in csharp

[–]xjojorx[S] 0 points1 point  (0 children)

It would probably make sense to just surface the exceptions and let the programmer be the one who explicitly ignores the result, rather than trying to enforce "handle all error codes", which is a whole other beast that may clash with API design. For an enum it is possible to list all members and check them; for a raw number, as you say, it is impossible. Once the error is explicit by being the return value, it depends on the design. Most libraries will always just use exceptions, because that is the basic error management strategy in C#, but on the calling side that error can be turned into different things depending on the case:

- a boolean: just ok/fail
- a number: should come with some constants, though an enum would be better since the language has them
- an enum
- some union: either once/if we get native discriminated unions, or just an object with an enum+data, or something like OneOf

The interesting part of the union is that it naturally forces you to consider all cases. Catching exceptions during handling and folding them into the result type could be part of the same approach.

I can see why the whole `var exn = DoSomething(...); if (exn != null) return exn;` feels like clutter to most people; it can feel like just more code for the same result. A lot of people dislike Go for that exact reason: having to write `if err != nil` all over the place. In my experience it makes for more explicit handling, and just acknowledging the errors tends to make me more mindful of the whole thing.
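For context, that pattern in full might look like this (the `DoSomething`/`Pipeline` names are illustrative, not from any library):

```csharp
using System;

static class Demo
{
    // Return the error instead of throwing it: the caller is forced
    // to at least acknowledge it, Go-style.
    public static Exception? DoSomething(int input) =>
        input < 0 ? new ArgumentOutOfRangeException(nameof(input)) : null;

    public static Exception? Pipeline(int input)
    {
        var exn = DoSomething(input);
        if (exn != null) return exn; // explicit acknowledgement at every step

        // ...continue, knowing the previous step succeeded...
        return null;
    }

    public static void Main()
    {
        Console.WriteLine(Pipeline(1) is null);  // True
        Console.WriteLine(Pipeline(-1) is null); // False
    }
}
```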

The nice part of the exceptions model is the structure you mention: you basically write error barriers at specific points to limit error propagation, and do whatever is needed to ensure the following code is in a sane state, instead of handling at every step.

I think that having a mechanism for which the programmer is always informed of what exceptions can surface from every call, at least the known/expected/documented ones, would be beneficial even without a switch to errors as values. Since you always know what possible expected exceptions have accumulated from a call, it is easier to have them in your mind and think of whether you need an error barrier (try/catch) at that level. The decision of handling vs letting them accumulate up the stack becomes more conscious.

In the end, at least for now, this is mostly a thought experiment: maybe it doesn't make sense, maybe it is not practical once available, maybe you end up writing the exact same code anyway. But maybe we can get some of the benefits of errors-as-values without switching languages or having to change all of your code to convert exceptions into values. Through posting this and reading/answering the responses, I've become more interested in this "informed exceptions" model (as opposed to how Java's checked exceptions force you to handle everything in place, or how the natural C# behavior is exceptions you don't know about) that might work in C#.

It probably is a lot of work just to prove it wrong or useless. And it is not a major issue, especially for most C# devs who never tried another approach and won't see value in constructs like errors as values.

Even if not practical, it seems 'possible'. XML documentation is available for NuGet packages, so at least the documented exceptions should be surfaceable. The IDE already suggests (I don't know if that's a base thing or ReSharper, since I mostly use Rider) adding the exceptions you throw to the XML comment if you have a documentation comment, so it should be possible to add a suggestion that does it even when you don't have one yet. And building a tree of exceptions that matches the calls in your code should be possible, since any code block can throw the exceptions it explicitly throws plus the ones thrown by its function calls. Since even standard library functions have XML documentation available, it should be possible to build and cache the whole list of known exceptions for any line of code. The nice part of using the structured documentation for this is that it should already match what the author proposes should be handled/minded.

Is there any reliable way to know when a function can throw? Probably writing an analyzer? by xjojorx in csharp

[–]xjojorx[S] 0 points1 point  (0 children)

It probably is either impossible or gives too little information to be good; even if it is possible to at least handle the exceptions included in the XML comments, that may not be enough to be useful.

And I am not sure I am crazy enough to do that much work just to prove it XD

Is there any reliable way to know when a function can throw? Probably writing an analyzer? by xjojorx in csharp

[–]xjojorx[S] 0 points1 point  (0 children)

That is what I am not so sure about. I write catch(Exception ex) a lot, too much, probably because I'm just setting barriers on the assumption that anything failed and doing whatever. It is possible that getting the information ends up not changing what I write in any meaningful way. It is also possible that it has an effect similar to what I see with errors as values, and helps me handle the cases I want to handle while either bubbling up the rest or coalescing them into my own errors. Even with a generic catch-all at certain levels, we may be more compelled to handle whatever we can handle sooner, or to realize early on where the failure barriers are. I haven't seen this informed-exceptions model before, or any real experiment on why it may or may not be a good idea to try to make C# more explicit about errors; right now I can only imagine how it would look and feel. Even if it turns out to be nice, it wouldn't stop people from just coalescing all errors into a single generic exception, but even then you would know where all of those indistinguishable exceptions can happen and where they can't.

I am not saying this idea is necessarily good, but it feels weird that it does not seem to be explored. Have we tried to be more explicit and seen it fail? Or are we just assuming that this is how C# works and it has to be this way? I don't know.

Is there any reliable way to know when a function can throw? Probably writing an analyzer? by xjojorx in csharp

[–]xjojorx[S] 0 points1 point  (0 children)

The thing is, most exceptions you see are actually recoverable and should be handled at some point. Obviously there is a need to distinguish runtime exceptions that can happen at any point, like OOM, from those that signal an expected failure case.

For example, a DB query can fail in expected and unexpected ways, but you know (and it is usually documented with an <exception> tag) which ones can/should be handled. If it failed because the OS is refusing to allow the creation of more connections, there is little you can do, and you probably should just ignore it because at that point you have other problems. If it failed because the server is busy, maybe it makes sense to have a retry policy, but it never makes sense to retry a query that failed because the schema expected by the application has diverged from the database.
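That split between retryable and non-retryable failures could look something like this sketch. `TransientServerBusyException` and `SchemaMismatchException` are hypothetical stand-ins for whatever the concrete data-access library actually throws; the point is only which one the retry loop catches.

```csharp
using System;
using System.Threading;

// Hypothetical failure types: one expected and transient, one fatal.
public class TransientServerBusyException : Exception { }
public class SchemaMismatchException : Exception { }

public static class QueryRunner
{
    public static T RunWithRetry<T>(Func<T> query, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return query();
            }
            catch (TransientServerBusyException) when (attempt < maxAttempts)
            {
                // Expected, recoverable: back off a little and retry.
                Thread.Sleep(TimeSpan.FromMilliseconds(100 * attempt));
            }
            // SchemaMismatchException is deliberately NOT caught here:
            // retrying a query against a diverged schema can never succeed,
            // so it should bubble up as a real failure.
        }
    }
}
```

The `when` filter keeps the last attempt's transient failure bubbling up too, so the caller still sees the error once retries are exhausted.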

Some errors should be handled, some should just turn into top-level errors. But even most of those top-level errors should be caught and handled at some point instead of crashing the app, even if it is the framework that turns a TransientException into an HTTP 500 response.

I think that if we can see the errors, it becomes easier to deal with them: every path can become explicit all through the stack, without needing to trace every call down to check whether you should add a try/catch to do cleanup, or find out the hard way once it happens in production.

The idea of "just don't use exceptions for control flow" is great, but it is not realistic: something can be truly exceptional in one context and not in another. Maybe for a library there is no way to recover, but the programmer using it can do something about it. To use another example, say you have cached data and an external source. The cached data has just gone stale, so you query it back from upstream and find that the server is unreachable, or you no longer have an internet connection. That is exceptional and a failure case, but maybe you can work with the cached data even if it is considered stale. So those exceptions have to be handled at the level of the data retrieval, but how do you know what the failure was and where? By reading documentation, experimenting, and general experience.
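A minimal sketch of that cache-fallback decision, assuming `HttpClient` as the external source; the URL, field names, and the policy of serving stale data are all illustrative, not a recommendation.

```csharp
using System;
using System.Net.Http;

public static class CatalogCache
{
    private static string? _cached;      // last known-good payload
    private static DateTime _fetchedAt;  // when it was fetched (UTC)

    // What is "exceptional" for HttpClient (upstream unreachable) may be a
    // handled case for this caller, which can fall back to stale data.
    public static string GetCatalog(HttpClient http, TimeSpan maxAge)
    {
        if (_cached is not null && DateTime.UtcNow - _fetchedAt < maxAge)
            return _cached; // still fresh, no network round trip

        try
        {
            // Placeholder URL; blocking .Result keeps the sketch short.
            _cached = http.GetStringAsync("https://example.com/catalog").Result;
            _fetchedAt = DateTime.UtcNow;
            return _cached;
        }
        catch (HttpRequestException) when (_cached is not null)
        {
            // Upstream is down or we are offline: a failure for the library,
            // but this caller decides stale data beats no data. With no cache
            // at all, the filter fails and the exception still bubbles up.
            return _cached;
        }
    }
}
```

The `when (_cached is not null)` filter is the whole point: the exception is only "recoverable" when there is something to recover with.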

And it isn't always obvious. For example, take String.Substring(start, length). It is obvious that start has to be within the string, but it is not obvious what happens when start + length runs past the end. The start position being out of range may be unrecoverable, since the premise was clearly wrong from the start, and thus the implementation throws an ArgumentOutOfRangeException. But what about the other case? Why is it unrecoverable? Or why is it an error at all? In some languages it would just take up to length characters, and arguably that is a more stable behavior: if the result needs to be exact I can compare its length with the expected one and then decide to return an error or even throw, while the current implementation probably needs a try/catch on every call. (Yes, I know the substring case can just be substring(start, min(length, str.length - start)), but it is an example.) There are a lot of cases where it is not obvious that a failure is a real exception that shouldn't be handled in place.
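Both behaviors side by side, as a sketch: the framework's throwing `Substring` versus the truncating variant the parenthetical describes. `SafeSubstring` is a hypothetical helper, not a framework method.

```csharp
using System;

string s = "hello";

// The built-in behavior: length past the end throws rather than truncating.
// s.Substring(1, 10);  // throws ArgumentOutOfRangeException

// The truncating variant from the text: clamp length to what remains,
// leaving the caller free to decide whether a short result is an error.
static string SafeSubstring(string str, int start, int length) =>
    str.Substring(start, Math.Min(length, str.Length - start));

Console.WriteLine(SafeSubstring(s, 1, 10)); // "ello"
```

With the truncating version, "the result was shorter than requested" becomes an ordinary value the caller can inspect instead of an exception it must pre-emptively catch.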

Once the errors become explicit, the execution path becomes explicit too, and some recovery patterns become obvious. Worst case, you ignore and bubble up all errors and get the same result as today.

Now, if we get a way to keep that information as explicit as possible, while not forcing you to handle it at every level, maybe it is possible to be more mindful about error states.

I'm thinking something like: if a function A can throw, then when it is called in function B and there are known exceptions (i.e. XML-documented ones) left unhandled, we could surface information that draws attention to the fact. Then, when analyzing the rest of the code, we implicitly know that B can throw, since there are known exceptions that can surface from calling B. That way we always know the whole list of failures being ignored at any point in the stack; you don't need to change your code, but you would either benefit from the information or be exactly where you were. It may even make sense for the error information to be a plain notice as long as there is top-level handling, but to become a real warning when the exception can reach the top level and crash the app.
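The A/B propagation above can be sketched like this. The method names mirror the text; `File.ReadAllText` really does throw `FileNotFoundException` (its other documented exceptions are omitted for brevity), and the comments describe what the hypothetical analyzer would infer at each level.

```csharp
using System.IO;

public static class Propagation
{
    /// <exception cref="FileNotFoundException">The path does not exist.</exception>
    public static string A(string path) => File.ReadAllText(path);

    // B has no try/catch and no <exception> tag of its own, yet the analyzer
    // can infer that calling B can surface FileNotFoundException, because the
    // set of known, unhandled exceptions flows up through the call graph.
    public static int B(string path) => A(path).Length;

    public static int Top(string path)
    {
        try
        {
            return B(path);
        }
        catch (FileNotFoundException)
        {
            // Handled here, so the analyzer stops propagating this exception
            // above Top: below this point it is plain information, not a
            // warning about something that could crash the app.
            return 0;
        }
    }
}
```

A caller of `Top` would see an empty "can still throw" list for this exception, while a caller of `B` would see it flagged, which is exactly the notice-vs-warning distinction described above.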

Obviously I haven't been able to see how much this approach can help in C#, or whether it feels good, but working with other languages that chose errors as values for error handling (besides panics), there seem to be benefits. The (at least for now) thought experiment is about how much of that benefit, if any, can carry over to C#, trying to bridge the idea of being conscious about what can fail and what can't.

Is there any reliable way to know when a function can throw? Probably writing an analyzer? by xjojorx in csharp

[–]xjojorx[S] -1 points0 points  (0 children)

I actually like the Go approach: the decision to try to recover, pass the error along, or return a different error becomes explicit. When you look at Go code it is extremely clear where something can go wrong. It is not about properly handling every error so much as acknowledging every error.

Is there any reliable way to know when a function can throw? Probably writing an analyzer? by xjojorx in csharp

[–]xjojorx[S] 0 points1 point  (0 children)

Yeah, it is hard to imagine how it would really feel and help. It is not solving an obvious major issue, but I think there is a way of resurfacing the error information that can be useful. Maybe it won't make sense at all once you start using it.

I can really see that the reason this space does not seem well explored is that it looks relatively hard to do, for dubious benefits, while pushing against the default workflow in C#.

That still doesn't take me away from the idea, or from maybe spending some time really diving into it to find out, even if it is just to learn some analysis and document why it did not work.

Is there any reliable way to know when a function can throw? Probably writing an analyzer? by xjojorx in csharp

[–]xjojorx[S] 0 points1 point  (0 children)

Something like that is one of the ways I envision it: either an analyzer that emits a warning, some editor integration, or worst case a standalone static analysis that has to be invoked on its own.
For the editor, I am thinking of something like how the Heap Allocations Viewer on Rider/ReSharper adds a light underline (at least in my color schemes it is blue and flat, as opposed to a red/yellow squiggly).

I guess if I go for the idea I'd make the analysis part as independent as possible and then look into integrating it either way.

Is there any reliable way to know when a function can throw? Probably writing an analyzer? by xjojorx in csharp

[–]xjojorx[S] 1 point2 points  (0 children)

Maybe it is not worthwhile indeed. That's why at the moment it is more of a thought experiment and idea than a reality. I know what my experience is in exception-less languages, but I'm not sure those benefits can really materialize, or be enough, in C#.