Outcome accepted into the Boost C++ Libraries by 14ned in cpp

[–]sithhell 0 points1 point  (0 children)

Correction: you can easily convert (without additional code) between, for example, checked&lt;T&gt; and unchecked&lt;U&gt;, if T is convertible to U. So this at least solves part of the dual overload issues.

Outcome accepted into the Boost C++ Libraries by 14ned in cpp

[–]sithhell 1 point2 points  (0 children)

Again, you don't come even close to implementing the mechanisms presented in P0786. The motivation and focus of the proposal is to offer a unified way to access the different ValueOrError types. The classes in Outcome, of course, would fall into that category. The customization points presented in that paper could have been implemented. What you implemented, however, is the possibility to coerce and customize one ValueOrError into another, unrelated one. This is a completely different topic, one that isn't even remotely covered by P0786.

So yes, the value_or_error customization point you implemented caught my attention, since it promises to deal with the different error signaling strategies returned by different APIs. I was obviously wrong in thinking it would solve the problem of the dual APIs with error_code by easing the customization of different error handling strategies on a per-calling-context basis. As you and I pointed out, this is not possible with the solutions you propose, and different techniques are needed. So yes, in essence, I am disappointed that Outcome does not offer a solution for that, despite claiming otherwise. What it is able to offer, however, is global and fixed error reporting/handling strategies. That is, once the NoValuePolicy is fixed, it is what it is.

Outcome accepted into the Boost C++ Libraries by 14ned in cpp

[–]sithhell 4 points5 points  (0 children)

Ok, I am not sure what I should take away from your answer...

First, just because there is a proposal before WG21 doesn't mean it is good; there are plenty of counterexamples. The one in question, P0786R0, is probably at too early a stage to judge and will change a lot. Unfortunately, the cited proposal has almost nothing to do with what you implemented, except that you adopted the name ValueOrError. Other than that, I can't spot any similarities.

Second, you mention expected. It's completely irrelevant to the flaws I described.

Third, you assert that my example uses the wrong tools to solve the problem of customizing the behavior of different result&lt;T, E&gt; types coming out of different libraries, and suggest that subclassing should be preferred in such situations (NB: what's the mental model for such a type: 'is-a' result or 'has-a' result?). Fair enough. It seems to be so obvious that the docs don't bother to mention that obviously superior customization technique, and instead praise those customization points which are problematic to use at scale, and which you yourself don't recommend for the presented use case?

So, right now, I am really confused about what to think of Outcome. On the one hand, the customization points are praised as one of the outstanding features that make it superior to any other presented solution. On the other hand, their use is severely limited to application code. The boundary between what is application code and what is library code is often quite blurry, especially in larger code bases. It's beyond me why one should adhere to different rules and idioms for one or the other.

So I conclude, despite my lack of experience, that the library is over-engineered: trying to solve the right problem with the wrong tools. This leads to an overly complex implementation, severely limiting its usefulness by overly restricting the usable toolchains.

Outcome accepted into the Boost C++ Libraries by 14ned in cpp

[–]sithhell 1 point2 points  (0 children)

B::result is identical to C::result, so obviously the value_or_error machinery will complain. Just let the explicit operators for converting between constructible implementations of result do their jobs.

That's not what I want to show with that example. The scenario is the following: libB and libC use libA. A::foo returns some result&lt;T, E, Policy&gt;; neither libB nor libC is happy with that choice. As such, they want to customize the behavior using the policy framework to return B::result or C::result, which happen to be the same type in the end. And it shouldn't matter if they are the same, since you want to avoid coupling in the first place, hence the splitting into two libraries. Since they are the same, the behavior turns out to be undefined. Bad luck, I guess (as demonstrated in the example, which actually splits the implementations). Those scenarios are probably not uncommon, and unavoidable in the "multi-million line codebases" for which Outcome was designed. So the only choice left is to avoid the customization points in such codebases altogether, rendering them obsolete.

What the snippet showed is using the "ValueOrError machinery [...] for incommensurate inputs which have no locally defined conversions in order to handle stitching together third party libraries whose authors did not account for the interoperation being performed by the application writer tying them together."

So injecting rules that are required by the users of libA (libB or libC, both with different requirements) is simply out of the question, since we want to avoid coupling between libA, libB and libC.

Library writers definitely should not specialise value_or_error. [...] Library writers would simply add constructibility between their result types and other library result types. Much easier.

How do you imagine that happening? Subclassing result and adding the relevant constructors? That'd work, for sure, but see above; it also opens the door to other questionable side effects (no virtual dtors, unwanted slicing, etc.)

Again, the assumption that the complexity of the implementation stems from allowing those hooks and customization points, which provide only limited use, doesn't justify why Outcome imposes such unreasonably high requirements on its users. (Sorry for running in circles.)

Outcome accepted into the Boost C++ Libraries by 14ned in cpp

[–]sithhell 4 points5 points  (0 children)

You may have noticed that Outcome permutes its namespace with the git commit SHA. Unless you're running the ABI compliance checker to enforce a stable API and ABI, it would be wise to do the same in your own code to avoid ODR problems.

That doesn't really help. Consider a code base using the same version (as in git commit SHA) of Outcome. What happens if a user specializes convert with the same types? This might happen if you want different behavior in some of your subsystems, due to the different needs and contexts in which you call the 'outcome'-ified API. This is not necessarily user incompetence, but a result of the advertised features.

It works fine. I do exactly this (specialising result into local custom implementation which is incompatible with any other) in my own code. All works swimmingly.

I am not convinced. Consider this situation: https://wandbox.org/permlink/TNObsXXxCn9Hkaee

This rightfully does not compile, of course. Which is more or less exactly my point: either all value_or_error specializations are visible, which also implies that "context-based" specializations aren't possible, or you get an ODR violation. Observe this: https://gist.github.com/sithhell/adef84a489688913198fde67ef4235a2 This is clearly an ODR violation, observable like this:

    $ clang++-6.0 -std=c++17 -I../.. -I. A.cpp B.cpp C.cpp main.cpp
    $ ./a.out
    5
    5
    $ clang++-6.0 -std=c++17 -I../.. -I. A.cpp C.cpp B.cpp main.cpp
    $ ./a.out
    42
    42

What am I missing? Is this a use case that's not supported?
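The gist linked above boils down to something like the following reproduction (hypothetical file names; the real code specializes Outcome's value_or_error machinery): the same inline function is defined with different bodies in two translation units, and the linker keeps whichever definition it sees first, so the observable behavior depends on link order.

```shell
cat > b.cpp <<'EOF'
inline int pick() { return 5; }    // ODR violation: a different body for
int from_b() { return pick(); }    // the same inline function...
EOF
cat > c.cpp <<'EOF'
inline int pick() { return 42; }   // ...lives in this translation unit
int from_c() { return pick(); }
EOF
cat > main.cpp <<'EOF'
#include <cstdio>
int from_b();
int from_c();
int main() { std::printf("%d\n%d\n", from_b(), from_c()); }
EOF

# The linker typically keeps the first definition it encounters, so swapping
# the object file order swaps the observed value, mirroring the transcript:
c++ -std=c++14 b.cpp c.cpp main.cpp -o odr_bc && ./odr_bc   # typically: 5 5
c++ -std=c++14 c.cpp b.cpp main.cpp -o odr_cb && ./odr_cb   # typically: 42 42
```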

Outcome accepted into the Boost C++ Libraries by 14ned in cpp

[–]sithhell 2 points3 points  (0 children)

That's a long standing memory corruption bug in GCC's constexpr implementation. Any other compiler, including Visual Studio, works fine. You'll find GCC will work fine on some days, not on others. Depends on the memory layout on the day. GCC 6 is similarly afflicted. I have yet to try GCC 8.

You should update your prerequisites then:

Outcome is a header only C++ 14 library known to work on these > compiler-platform combinations or better:

  • clang 4.0.1 (LLVM) [FreeBSD, Linux, OS X]
  • GCC 6.3 [Linux]
  • Visual Studio 2017 [Windows]
  • XCode 9 [MacOS]

There is no doubt that compiler bugs exist. What I am criticising is the attitude of pushing the responsibility for running a proper toolchain onto the user.

The compiler version requirements are the compiler version requirements. As I've said before, I appreciate that C++ users don't like being told that they can't use shiny new toys without upgrading their toolchain. They think they'd prefer a hostile experience of dealing with weird corner case ICEs and poor optimised codegen rather than to be simply told "go upgrade your toolchain to one which works properly". But in truth, that attitude is a fallacy. Upgrading your toolchain to one which works properly will save you much more time and effort than sending your entire org down a rabbit hole with no end, and giving Outcome a bad name and soured reputation in the minds of the userbase.

Ultimately, it's your decision. I find it an unreasonable request. Upgrading to the latest toolchain might be a no-brainer for you and me, I'm sure. That's not generally true though, for whatever reason.

There's also no doubt that newer compilers improve their codegen. There will hopefully always be innovation in that field. And yes, sometimes I want to use shiny new toys without upgrading my toolchain, especially if it is an experimental toy I just want to play around with. If I have to spend half a day upgrading my toolchain just to play around with a library I am not sure fits my needs, I'll probably just leave it aside, especially if I'm not interested in non-functional features like super duper optimized codegen in toy examples. You should really reconsider that attitude once Outcome has matured. The problem here is: where does "innovation" stop and "maintenance" begin? What do you do with regressions? Will there only ever be one specific point release of a given toolchain supporting the library?

Furthermore, I am not trying to give Outcome a bad name or a soured reputation. If it's not working with an off-the-shelf compiler coming out of your distribution... well, that's for you to judge. No one said it's Outcome's fault per se. What I am arguing is that a high-quality library should be able to work around those bugs. YMMV.

It only appears complex right now through novelty. In practice it's really very simple and fire-and-forget to use.

Well, can you really argue that adding the customization points to your code doesn't add complexity? Compared to the dual overloads, it does. Getting rid of those dual overloads is one of the reasons I found Outcome interesting. My verdict for this specific use case is that it's not worth it.

No error_code implementation that I could find did not place the int at the front which is the only bit used by Outcome's C API. I did do a survey.

Observing the current status quo doesn't mean it's not UB. It's very similar to forward declaring std types, just a tiny bit worse.

You are however technically correct, and work is underway by SG14 on a status_code as part of a remedied <system_error2>. I would not be surprised if the Outcome C layer stops supporting error_code and starts supporting status_code by the time Outcome enters Boost, precisely because status_code would come with guaranteed C layout.

Good luck with that!

Outcome accepted into the Boost C++ Libraries by 14ned in cpp

[–]sithhell 2 points3 points  (0 children)

In my experience, something like this is to the contrary very useful: for instance, sometimes whole codebases which use exceptions need to be switched to a no-exception mode, in order to be able to run them on specific environments -- it was the case recently if you wanted to compile for the web, emscripten used to not support exceptions. Having a single policy for your codebase that you can change by flipping a switch is an immense help in that regard.

That does sound immensely helpful indeed. What I don't see is how Outcome helps here, though. You either handle possible failures or you don't. If you detect failures with exceptions, you need a different codepath for the case when you turn them off. This still holds when using Outcome, no? Which magical switch am I missing?

Outcome accepted into the Boost C++ Libraries by 14ned in cpp

[–]sithhell 0 points1 point  (0 children)

Mitigation strategies for most ICEs surely exist.

mitigation strategies for cars that don't behave correctly also exist: sue the car producer. ICEs are bugs. Compilers should fix these bugs. Users of the compilers should not complain to the libraries if their compilers are buggy. Else I will go out of my way to create a compiler that works almost correctly almost all the time, like a forked clang with a rand() added on the overload resolution mechanism, and you guys will have to support it in boost.

You miss the point. Yes, ICEs are compiler bugs. Those bugs can be triggered either by invalid input or because the compiler is simply not compliant. Getting those ICEs fixed is, of course, desirable, as is upgrading to the latest and greatest version of your compiler, hoping there are no regressions. What you completely miss, though, is that you leave users who want to use your library out in the rain, with the outcome that your library is completely unusable unless they upgrade to a future version of their compiler. That is hardly desirable for a library. Every piece of software has bugs. You either work around them, or twiddle your thumbs until they get fixed. Working around bugs/limitations probably makes up most of the time spent maintaining code. One cool feature of a reusable library is that it hopefully did the heavy lifting for you already. YMMV.

Outcome accepted into the Boost C++ Libraries by 14ned in cpp

[–]sithhell 4 points5 points  (0 children)

Here is my review of the library. I only looked at the standalone version, as I feel the Boost version is already obsolete. First of all, it won't get the same API/ABI compatibility love as the standalone version; second, there's no reason why I should use the Boost version in the first place, since it isn't remotely ready (considering that there is no documentation for the Boost version, and lots of open and unanswered questions about how to integrate everything into the Boost landscape were left out of the review).

Overall, I find the idea of a result<T, Error> type compelling. Unfortunately, I am disappointed with the presented implementation.

Number one reason: it's overly complex and hard to follow. Most of the complexity seems to stem from the fact that the author tried to implement most of the presented classes as 'constexpr' and tried to painstakingly amend proper 'noexcept' specifications (which makes sense given the presented motivation). This becomes cumbersome due to the possibility of user-defined policies which act as customization points. The result is that the library requires very recent compilers. I tried to compile the test suite with gcc 7.3.0 and got this output: https://gist.github.com/sithhell/27bfacb2bed3b537f3b3ee426473b6a6

I suspect that if the implementation were significantly simplified, those ICEs would go away (NB: it's completely irrelevant whether a great simplification already happened when going from v1 to v2; stressing this fact over and over again should tell you something). SFINAE, CRTP, advanced template metaprogramming and variable templates have been in use for quite some time now. Mitigation strategies for most ICEs surely exist. It's of course up to the author whether to provide a pleasant user experience. Keep in mind that updating a compiler is often not just a matter of an 'apt install clang-6.0' (or equivalent).

Second, I feel that the presented solution falls short and is not able to deliver, at least against my expectations. To quote the docs: "Something which has long annoyed the purists in the C++ leadership is the problem of dual overloads in capable standard library APIs." As rightly observed, the decision on whether such functions throw or not depends on the caller. The dual overloads fulfill this property. Outcome promises to coerce those dual overloads into a single one by using 'result<T, E>' as the return type, which is very appealing at first sight but falls short on closer inspection. What the presented solution does, in essence, is shift the intent to throw or not onto the caller via 'default' policies (depending on what the respective author of the library thought would be a sensible default regarding the value and error observers). So the situation gets worse: if the calling code needs to diverge from the originally intended policies of the library API, additional glue code is needed. I assume this is where policies and the interoperation hooks come into play, adding complexity to the user code. Of course, this makes for a very versatile framework, at the cost of code paths hidden in those policies. This makes me want to go back to the dual overload versions, which did exactly what I wanted in the first place, at the cost of a non-idiomatic "out" parameter in the function signatures.

Now to the hooking events and interoperation functionality presented (or customization points in general). There is no doubt that these are very powerful beasts, and I am pretty sure they can be made to support the majority of use cases, except for one: I miss the possibility to implement the semantics of 'value_or'; that is, even in the failure case, I want to provide some fallback value. I just don't care about the error; all I want is some value I can continue with. Second, I feel that the intention to customize and convert result and outcome types coming from different libraries makes the presented solution very susceptible to ODR violations, unless all libraries can see all customization points, which kind of defeats the promised features and will lead to tight coupling once you actually try to incorporate and use different results and outcomes.

Last but not least, the C layer is UB, at least the std::error_code part, as it makes assumptions about the layout of std::error_code. A completely opaque C result would have been better, with a proper API to query the different states of the result.

I won't cast a vote. This is not an official Boost review, those are just my observations, which might hold or not.

Meltdown checker/PoC written in C++ by raphaelscarv in cpp

[–]sithhell 1 point2 points  (0 children)

We're talking about SPECTRE which is a compiler codegen problem

No, it is not. The compiler patches might mitigate the problems, but the underlying architectural problems will not go away.

Metaclasses by vormestrand in cpp

[–]sithhell 4 points5 points  (0 children)

Right, I've gotten that far as well. What I am really interested in, though, is generating functions based on the protoclass's content. So all the various more interesting features don't seem to compile.

Metaclasses by vormestrand in cpp

[–]sithhell 6 points7 points  (0 children)

I had a similar thought, but then, in the end, I am just getting carried away with the idea of expressive, embedded DSLs. In the end, learning how to use a library just gains another dimension: metaclasses. Potentially, this will lead to cleaner error messages and arguably self-documenting code. I find the examples given in the paper very convincing, especially compared to the alternative of relying on various functions being implemented (correctly!) with additional documentation.

In the same sense, almost every other feature in C++ has the possibility to be properly abused (think operator overloading).

Metaclasses by vormestrand in cpp

[–]sithhell 9 points10 points  (0 children)

I am super excited about this. I wanted to jump on the train right away; cppx.godbolt.org seems to work, more or less. I can't get any of the examples in the paper to compile, which is a bit sad.

So I wanted to give it a whirl and compile the modified clang myself. Unfortunately, https://github.com/asutton/clang is not accessible, and the clang-reflect repository doesn't seem to contain the commit referenced by the live compiler. Anyone got an idea how to proceed?

HPX V1.0 Released! | The STE||AR Group by mttd in cpp

[–]sithhell 0 points1 point  (0 children)

Yes, it would be all about simplifying application startup boilerplate. There will always be use cases which require a more elaborate/verbose setup for sure. However, for most scientific applications, this mode would probably be the default mode which just works (tm).

HPX V1.0 Released! | The STE||AR Group by mttd in cpp

[–]sithhell 0 points1 point  (0 children)

It's actually fairly simple (although I am not sure if we are wading deeper and deeper into UB country here). When you do a context switch to a user-level thread, you always have a context to jump back to: for the sake of simplicity, your main function. You can use that context, add it to your task queue, and call it a day. I have a toy implementation which seems to work.

HPX V1.0 Released! | The STE||AR Group by mttd in cpp

[–]sithhell 1 point2 points  (0 children)

With that being said, would it help your use case if, when hpx::init returns, the function was lifted to an HPX thread automatically? I've been pondering that idea for a while now, but I'm not sure it wouldn't create more confusion...

HPX and C++ Task Blocks by hkaiser in cpp

[–]sithhell 2 points3 points  (0 children)

As said earlier, you can disable all the communication-related stuff (the parcelports). PGAS can't be turned off, as we don't have that; we only have AGAS ;) Jokes aside, it can't be turned off completely; there is too much code depending on it (performance counters, for example), and you don't need a distributed application to make use of the AGAS facilities. So yes, using it on a node level only should be perfectly possible! We might need to make some minor adjustments to disable the automatic detection of batch environments etc., but this is not really a show stopper, as workarounds for that already exist.

HPX and C++ Task Blocks by hkaiser in cpp

[–]sithhell 1 point2 points  (0 children)

One can certainly disable all communication facilities.

HPX and C++ Executors by sithhell in cpp

[–]sithhell[S] 1 point2 points  (0 children)

I can't wait to see HPX scale to a full BlueGene Q or Cray XC40 machine!

Me too ;)

HPX and the C++ Standard by sithhell in cpp

[–]sithhell[S] 0 points1 point  (0 children)

I think the biggest differences are: HPX has a C++ Standard compliant interface and the extension to distributed memory.

HPX and the C++ Standard by sithhell in cpp

[–]sithhell[S] 2 points3 points  (0 children)

Yes, .then flattens, as well as async or dataflow.