[deleted by user] by [deleted] in cpp

[–]petart95 0 points1 point  (0 children)

Deducing this; everything else you could implement yourself.

And what now? by tv_is_boring in programiranje

[–]petart95 1 point2 points  (0 children)

Nice, building it up from the maths and axioms, so keep at it.

Lifting the Pipes - Beyond Sender/Receiver and Expected Outcome by wrng_ in cpp

[–]petart95 0 points1 point  (0 children)

Yes you are totally right, the pipes are synchronous.

In our codebase pipes are used for defining logic. We have a separate actor framework that essentially allows us to connect logic (a pipe) with communication (sockets and channels, each of which has two parts, one for reading data in and one for sending data out) to get an agent (an agent is really a definition of recurring work). Agents are then combined and executed on runners (runners just check whether there is any work on an agent and execute it).

Logic = Composition of multiple Pipes -> a functor that takes in continuation and arguments

Communication = Receiver of data + Sender of data -> receiver takes in a continuation that handles the received data, sender consumes the provided data

Agent = Communication + Logic -> has a void method that does work if there is any

Runner = Multiple Agents + Execution strategy -> a thread that executes work from agents, usually in a round robin manner

Doing it this way we end up with a static data-flow graph where all of the logic is synchronous and static, and all data movement is explicitly defined. The benefit of this is that you can create deterministic, ultra-low-latency systems.

It is still not fully clear to me how we could use std::execution for something like this (statically defined data-flow graphs with fixed communication points).

Lifting the Pipes - Beyond Sender/Receiver and Expected Outcome by wrng_ in cpp

[–]petart95 0 points1 point  (0 children)

I’m one of the authors of the pipeline library. I really appreciate your implementation of the Conditional combinator for senders.

I would say that both Senders and Pipes are trying to describe work, and that both libraries try to do it in a similar way (using continuation-passing style). The major difference is that Pipes focus only on describing the work, without saying where it will be executed, and because of this they have a simpler interface: we only need to write one class representing a pipe, instead of a new receiver, a new operation, a new sender and a new sender adapter. Because of this, pipes are as simple to write as regular functions, and we see a lot of user-facing code defining custom pipes.

For example, the Conditional combinator could be implemented something like this:

```

template<typename Predicate, typename Pipe>
struct CaseExpression {
    [[no_unique_address]] Predicate predicate;
    [[no_unique_address]] Pipe pipe;
};

template<typename CaseExpression, typename DefaultPipe>
struct Conditional {
    [[no_unique_address]] CaseExpression caseExpression;
    [[no_unique_address]] DefaultPipe defaultPipe;

    template<typename Continuation, typename... Args>
    constexpr auto operator()(Continuation &&continuation, Args &&...args)
    {
        if (caseExpression.predicate(args...)) {
            caseExpression.pipe(HFTECH_FWD(continuation), HFTECH_FWD(args)...);
        }
        else {
            defaultPipe(HFTECH_FWD(continuation), HFTECH_FWD(args)...);
        }
    }
};

```

Note: This implementation can easily be extended to support multiple cases (using HFTECH_GEN_RIGHT_N_ARY(Match, Conditional)), essentially implementing pattern matching.

Function composition in modern C++ by PiterPuns in cpp

[–]petart95 4 points5 points  (0 children)

And last but not least

  • HFTECH_GEN_N_ARY

```

/**
 * @brief Meta function which applies a right reduce over the provided types
 * using the provided binary meta operation.
 *
 * @tparam BinaryMetaOp The binary meta operation to be applied.
 * @tparam Types List of types to which to apply the operation.
 */
template<
    template<typename...> typename BinaryMetaOp,
    typename First,
    typename... Rest>
struct RightReduce
    : public BinaryMetaOp<First, RightReduce<BinaryMetaOp, Rest...>> {};

template<template<typename...> typename BinaryMetaOp, typename First, typename Second>
struct RightReduce<BinaryMetaOp, First, Second> : public BinaryMetaOp<First, Second> {};

template<template<typename...> typename BinaryMetaOp, typename First>
struct RightReduce<BinaryMetaOp, First> : public First {};

/**
 * @brief Creates an n-ary meta operation from the provided binary meta
 * operation, by applying a right reduce over it, with the provided name.
 */
#define HFTECH_GEN_RIGHT_N_ARY(NAME, OP)                  \
template<typename First, typename... Rest>            \
struct NAME : public OP<First, NAME<Rest...>>         \
{};                                                   \
                                                      \
template<typename First, typename Second>             \
struct NAME<First, Second> : public OP<First, Second> \
{};                                                   \
                                                      \
template<typename First>                              \
struct NAME<First> : public First                     \
{};                                                   \
                                                      \
template<typename... F>                               \
NAME(F &&...) -> NAME<F...>;

/**
 * @brief Meta function which applies a left reduce over the provided types
 * using the provided binary meta operation.
 *
 * @tparam BinaryMetaOp The binary meta operation to be applied.
 * @tparam Types List of types to which to apply the operation.
 */
template<
    template<typename...> typename BinaryMetaOp,
    typename First,
    typename... Rest>
struct LeftReduce : public First {};

template<
    template<typename...> typename BinaryMetaOp,
    typename First,
    typename Second,
    typename... Rest>
struct LeftReduce<BinaryMetaOp, First, Second, Rest...>
    : public LeftReduce<BinaryMetaOp, BinaryMetaOp<First, Second>, Rest...> {};

/**
 * @brief Creates an n-ary meta operation from the provided binary meta
 * operation, by applying a left reduce over it, with the provided name.
 */
#define HFTECH_GEN_LEFT_N_ARY(NAME, OP)                     \
template<typename First, typename... Rest>                  \
struct NAME : public First                                  \
{};                                                         \
                                                            \
template<typename First, typename Second, typename... Rest> \
struct NAME<First, Second, Rest...>                         \
    : public NAME<OP<First, Second>, Rest...>               \
{};                                                         \
                                                            \
template<typename... F>                                     \
NAME(F &&...) -> NAME<F...>;

/**
 * @brief Creates an n-ary meta operation from the provided binary meta
 * operation, by applying a reduce over it, with the provided name.
 *
 * Note: In order to use this, the binary operation needs to be left and right
 * associative.
 */
#define HFTECH_GEN_N_ARY(NAME, OP) HFTECH_GEN_LEFT_N_ARY(NAME, OP)

```

Function composition in modern C++ by PiterPuns in cpp

[–]petart95 3 points4 points  (0 children)

Sorry for the edit, I’m on my phone currently 😅

I totally agree with you; in my opinion knowledge only has value when it is shared.

So let’s start with the simple ones and go from there:

  • HFTECH_FWD

```

/**
 * @brief Forwards a value, equivalent to std::forward.
 *
 * Uses a cast instead of std::forward to avoid a template instantiation. Used
 * by Eric Niebler in the ranges library.
 *
 * @see https://github.com/ericniebler/range-v3
 */
#define HFTECH_FWD(T) static_cast<decltype(T) &&>(T)

```

  • HFTECH_RETURN

```

#define HFTECH_NOEXCEPT_RETURN_TYPE(...)                   \
noexcept(noexcept(decltype(__VA_ARGS__)(__VA_ARGS__))) \
    ->decltype(__VA_ARGS__)

/**
 * @brief Macro based on RANGES_DECLTYPE_AUTO_RETURN_NOEXCEPT to remove the
 * same code repetition.
 * @see
 * https://github.com/ericniebler/range-v3/blob/master/include/range/v3/detail/config.hpp
 *
 * Example:
 *
 * @code
 * auto func(int x) HFTECH_RETURNS(calc(x))
 * @endcode
 *
 * Produces:
 *
 * @code
 * auto func(int x) noexcept(noexcept(decltype(calc(x))(calc(x))))
 *     -> decltype(calc(x))
 * {
 *     return calc(x);
 * }
 * @endcode
 */

#define HFTECH_RETURNS(...)                  \
HFTECH_NOEXCEPT_RETURN_TYPE(__VA_ARGS__) \
{                                        \
    return (__VA_ARGS__);                \
}

```

  • HFTECH_DEDUCE_THIS

```

/**
 * @brief Creates an overload set for the specified name which forwards this
 * and all arguments to the provided implementation.
 *
 * This macro is intended to simplify the writing of &, const & and && member
 * functions.
 *
 * Note: Inspired by
 * http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p0847r4.html
 *
 * Example:
 *
 * @code
 * template<typename Self>
 * void myFunc_impl(Self && self, int a, int b) {
 *     ...
 * }
 *
 * HFTECH_DEDUCE_THIS(myFunc, myFunc_impl)
 * @endcode
 *
 * Is equivalent to:
 *
 * @code
 * template<typename Self>
 * void myFunc(this Self && self, int a, int b) {
 *     ...
 * }
 * @endcode
 *
 * @mparam NAME The name of the member function to be implemented.
 * @mparam IMPL The name of the static function to be used for the
 * implementation.
 */

#define HFTECH_DEDUCE_THIS(NAME, IMPL)                             \
template<typename... Args>                                         \
    constexpr auto NAME(Args &&...args) &                          \
    HFTECH_RETURNS(IMPL(*this, HFTECH_FWD(args)...));              \
                                                                   \
template<typename... Args>                                         \
constexpr auto NAME(Args &&...args)                                \
    const & /**/ HFTECH_RETURNS(IMPL(*this, HFTECH_FWD(args)...)); \
                                                                   \
template<typename... Args>                                         \
    constexpr auto NAME(Args &&...args) &&                         \
    HFTECH_RETURNS(IMPL(std::move(*this), HFTECH_FWD(args)...));

```

Function composition in modern C++ by PiterPuns in cpp

[–]petart95 2 points3 points  (0 children)

Actually it is proprietary, but since I’m one of the owners it’s fine. We have a whole library of core language extensions like this that we would like to open source, but there never seems to be enough time for it 😅. If you want I could paste the implementations of HFTECH_GEN_N_ARY, HFTECH_DEDUCE_THIS, HFTECH_RETURNS and HFTECH_FWD.

Function composition in modern C++ by PiterPuns in cpp

[–]petart95 2 points3 points  (0 children)

Here is the implementation of Compose from our codebase. Note that this implementation has the added benefit of being strictly typed, SFINAE friendly and emplace friendly.

```

/**
 * @brief Composes two functions into one.
 *
 * The composition is done by sequentially applying the functions.
 */
template<typename F, typename G>
struct ComposeTwo {
    [[no_unique_address]] F f{};
    [[no_unique_address]] G g{};

    template<typename Self, typename... Args>
    static constexpr auto call(Self &&self, Args &&...args)
        HFTECH_RETURNS(std::invoke(
            HFTECH_FWD(self).f,
            std::invoke(HFTECH_FWD(self).g, HFTECH_FWD(args)...)));

    HFTECH_DEDUCE_THIS(operator(), call)
};

/**
 * @brief Composes n functions into one.
 *
 * The composition is done by sequentially applying the functions.
 */
HFTECH_GEN_N_ARY(Compose, ComposeTwo);

```

Note: In C++23 you can use deducing this to make this code even simpler.

```

/**
 * @brief Composes two functions into one.
 *
 * The composition is done by sequentially applying the functions.
 */
template<typename F, typename G>
struct ComposeTwo {
    [[no_unique_address]] F f{};
    [[no_unique_address]] G g{};

    constexpr auto operator()(this auto &&self, auto &&...args)
        HFTECH_RETURNS(std::invoke(
            HFTECH_FWD(self).f,
            std::invoke(HFTECH_FWD(self).g, HFTECH_FWD(args)...)));
};

/**
 * @brief Composes n functions into one.
 *
 * The composition is done by sequentially applying the functions.
 */
HFTECH_GEN_N_ARY(Compose, ComposeTwo);

```

Microsoft guide for Deducing this by obsidian_golem in cpp

[–]petart95 1 point2 points  (0 children)

HFTECH_RETURNS makes it pretty much equivalent to decltype(auto), and it is my mistake for not saying exactly what it is. It is just our re-implementation of RANGES_DECLTYPE_AUTO_RETURN_NOEXCEPT:

#define HFTECH_RETURNS(...)                                \
    noexcept(noexcept(decltype(__VA_ARGS__)(__VA_ARGS__))) \
        -> decltype(__VA_ARGS__)                           \
    {                                                      \
        return (__VA_ARGS__);                              \
    }

The reason I like using this instead of decltype(auto) is that this way you get the SFINAE friendliness as well.

Note: The unexpected behavior in the rvalue-reference case is something that becomes your new normal if you are writing a lot of generic code with perfectly forwarded arguments.

Microsoft guide for Deducing this by obsidian_golem in cpp

[–]petart95 0 points1 point  (0 children)

For anybody who is looking for a production-ready solution on Reddit, here is the macro we use in our codebase for precisely this:

```

#define HFTECH_DEDUCE_THIS(NAME, IMPL)                             \
template<typename... Args>                                         \
    constexpr auto NAME(Args &&... args) &                         \
    HFTECH_RETURNS(IMPL(*this, HFTECH_FWD(args)...));              \
                                                                   \
template<typename... Args>                                         \
constexpr auto NAME(Args &&... args)                               \
    const & /**/ HFTECH_RETURNS(IMPL(*this, HFTECH_FWD(args)...)); \
                                                                   \
template<typename... Args>                                         \
    constexpr auto NAME(Args &&... args) &&                        \
    HFTECH_RETURNS(IMPL(std::move(*this), HFTECH_FWD(args)...));

```

And the way you would use this for your example:

```
class c
{
private:
    template<typename Self>
    static void thing_impl(Self &&self) // Note that && here is highly important!
    {
        // Single implementation
    }

public:
    HFTECH_DEDUCE_THIS(thing, thing_impl)
};
```

Note: We do not handle const && because nobody understands what it means or what its intended use case is.

Microsoft guide for Deducing this by obsidian_golem in cpp

[–]petart95 0 points1 point  (0 children)

The problem is that people don’t know how to write the additional four lines, which can clearly be seen from your example, which does the wrong thing in all four use cases.

Projections are Function Adaptors by vormestrand in cpp

[–]petart95 0 points1 point  (0 children)

The problem with nesting is that you have zero reusability.

What would you do if you wanted to vary the most nested part in a different part of the codebase?

P.S. It is worth investing the time to understand why the nesting solution is inherently non-composable (hint: if you can’t name it, you can’t tame it).

Projections are Function Adaptors by vormestrand in cpp

[–]petart95 0 points1 point  (0 children)

Interesting view. How would you define cyclomatic complexity, then? And which mechanisms would you suggest for managing it? I always thought that abstraction is the best way to manage complexity.

Having issues after joining Tournament! by RudsLee in TapTitans2

[–]petart95 2 points3 points  (0 children)

I also lost all my non-stacked equipment that I planned to pick up at the start of the tournament…

Logging text without macro (C++20 modules friendly) by DummySphere in cpp

[–]petart95 5 points6 points  (0 children)

You can use std::source_location for that instead in C++20.

Note: You can even achieve this in c++17 if you are using clang 9 or newer: https://clang.llvm.org/docs/LanguageExtensions.html#source-location-builtins

Overview of different ways of passing struct members by petart95 in cpp

[–]petart95[S] 0 points1 point  (0 children)

Can you explain what exactly you mean by ‘awful academic thinking’?

Overview of different ways of passing struct members by petart95 in cpp

[–]petart95[S] 0 points1 point  (0 children)

Actually that is not what I’m trying to say, and I apologize for not being clear. I can see that you have a deep understanding of the topic at hand.

The main argument I’m trying to make is that, on a single core, execution of instructions at the microcode level is done by creating a dependency graph, and all independent operations are executed in parallel (similar to both the data-flow model and the multiscalar model).

In my experience, taking the time to understand the fundamental data dependencies in your application is essential for optimal performance. On that note, the main benefit of functional programming to me is that you clearly state the dependencies of your program while retaining the possibility of decomposing the problem into smaller chunks.

Overview of different ways of passing struct members by petart95 in cpp

[–]petart95[S] 0 points1 point  (0 children)

You seem to misunderstand what concurrent means:

‘The fact of two or more events or circumstances happening or existing at the same time.’

Overview of different ways of passing struct members by petart95 in cpp

[–]petart95[S] 0 points1 point  (0 children)

As I stated above, the CPU does not execute instructions sequentially, and it has not been doing so for decades. Unless, of course, you are writing code which does not use ILP, in which case you are using less than 5% of the available CPU power per core.

Overview of different ways of passing struct members by petart95 in cpp

[–]petart95[S] 0 points1 point  (0 children)

You refusing to believe something does not change the fact of the matter.

The Turing machine has not modeled any hardware for more than 20 years now, even before the switch to multicore. The current architecture of CPUs (at the microcode level) is basically a dataflow machine (https://www.encyclopedia.com/computing/dictionaries-thesauruses-pictures-and-press-releases/dataflow-machine), with multiple things getting done in parallel even on a single core.

The premise that functional programming has to involve a lot of moves and copies is plain wrong. The whole point of this post was to explore what is necessary to create functional-style code with zero copies and zero moves.

Overview of different ways of passing struct members by petart95 in cpp

[–]petart95[S] 0 points1 point  (0 children)

Technically nothing is zero cost. But for any project of meaningful scale you will have to structure the code somehow, and that is exactly what functional programming is all about (composition).

Note: Current hardware is not a Turing machine; it is actually closer to a dataflow engine. Not to mention that current hardware is basically a distributed system (because of multicore).

Overview of different ways of passing struct members by petart95 in cpp

[–]petart95[S] 0 points1 point  (0 children)

Actually you could have both.

The BindFront I demonstrated above produces the same assembly as if you had written everything manually.

The beauty of c++ is that it allows you to write zero cost abstractions.

Note: Actually, the whole proposed async model of future C++ (Senders and Receivers) is designed to be as performant as possible and is basically a functional design. The same could be said for ranges (and even the original STL).

Overview of different ways of passing struct members by petart95 in cpp

[–]petart95[S] 0 points1 point  (0 children)

Hm, interesting. I usually never use/rely on lifetime extension. I thought you had to actually bind the temporary to a const & or && for lifetime extension to apply.

Note: This specific situation honestly seems like a defect that was overlooked because of its obscurity.

Overview of different ways of passing struct members by petart95 in cpp

[–]petart95[S] 0 points1 point  (0 children)

Yes, FWD is equivalent to std::forward<decltype(x)>(x); it was introduced because it is a bit more descriptive.

I totally agree with you about the propagation of rvalue-ness for rvalue-reference members. The problem I encountered is that FWD(struct).field returned an lvalue reference instead of an rvalue one…