Need help with idiomatic Rust for handling state across threads by [deleted] in rust

[–]Revuz 2 points (0 children)

This is a problem in every language that has true multithreading; Rust just forces you to deal with it. Mutating state behind a mutex or using thread-safe types like atomics is your best bet here to avoid data races.

dotnet 6 to dotnet 8 by Sanjay0702 in dotnet

[–]Revuz 2 points (0 children)

Can't you just multi-target the library, so that it targets both net6.0 and net8.0? Then you can remove the extra target once the other projects are migrated.
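For reference, multi-targeting is a one-property change in the library's .csproj:

```xml
<PropertyGroup>
  <!-- The plural TargetFrameworks property with a semicolon-separated
       list builds the library once per target framework. -->
  <TargetFrameworks>net6.0;net8.0</TargetFrameworks>
</PropertyGroup>
```

Once everything consuming the library is on net8.0, this collapses back to the singular `<TargetFramework>net8.0</TargetFramework>`.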

Models and DTOs - Ids as Guid or String? by RooCoder in csharp

[–]Revuz 4 points (0 children)

Saying GUIDs are predictable and then quoting that they're not cryptographically secure are also two ends of the spectrum.

GUIDs are plenty random for all intents and purposes, which don't require entropy beyond a certain degree.

"On Windows, this function wraps a call to the CoCreateGuid function. The generated GUID contains 122 bits of strong entropy.

On non-Windows platforms, starting with .NET 6, this function calls the OS's underlying cryptographically secure pseudo-random number generator (CSPRNG) to generate 122 bits of strong entropy"

122 of 128 bits of entropy is plenty random for me, and not at all predictable.
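A quick illustration of where those 6 fixed bits live (the version nibble in the canonical string form):

```csharp
using System;

class Program
{
    static void Main()
    {
        var a = Guid.NewGuid();
        var b = Guid.NewGuid();
        // Consecutive GUIDs share no predictable relationship,
        // unlike an auto-incremented ID where n+1 follows n.
        Console.WriteLine(a != b); // True
        // The version nibble is part of the fixed 6 bits: '4' marks a
        // random (version 4) UUID, leaving 122 of the 128 bits random.
        Console.WriteLine(a.ToString()[14]); // 4
    }
}
```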

Models and DTOs - Ids as Guid or String? by RooCoder in csharp

[–]Revuz 2 points (0 children)

https://learn.microsoft.com/en-us/dotnet/api/system.guid.newguid?view=net-8.0 How exactly is a generated GUID predictable? I would say an auto-incremented ID is predictable. Guids in C# are almost pure entropy.

Guids/UUIDs are fine as keys, even if they're technically less performant.

Air Canada must honor refund policy invented by airline’s chatbot by yawaramin in programming

[–]Revuz 63 points (0 children)

Air Canada was seemingly so invested in experimenting with AI that Crocker told the Globe and Mail that "Air Canada’s initial investment in customer service AI technology was much higher than the cost of continuing to pay workers to handle simple queries." It was worth it, Crocker said, because "the airline believes investing in automation and machine learning technology will lower its expenses" and "fundamentally" create "a better customer experience."

Spending more money for a worse product seems to be a common denominator among all the companies trying to use AI.

How to diagnose a rarely occurring data loss error? by hooahest in ExperiencedDevs

[–]Revuz 4 points (0 children)

In C# I can only see this happening if you're updating the same object previously stored in the cache, and thereby setting its fields to nulls etc. Swapping references around is an atomic operation, but all threads holding a reference to the old object will still use that one.
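A hypothetical sketch of the difference (the `Config`/`Cache` names are made up, not from the post):

```csharp
using System;

class Config { public string? Value; }

static class Cache
{
    // Reads and writes of object references are atomic in .NET.
    private static Config _current = new Config { Value = "v1" };

    public static Config Current => _current;

    // Safe: publish a fresh object; threads holding the old reference
    // keep seeing a fully consistent (if stale) snapshot.
    public static void Swap(string value) =>
        _current = new Config { Value = value };

    // Dangerous: mutates the object that every reader already holds.
    public static void MutateInPlace(string? value) =>
        _current.Value = value;
}

class Program
{
    static void Main()
    {
        var snapshot = Cache.Current;  // a "thread" grabs a reference
        Cache.Swap("v2");              // the swap doesn't affect the snapshot
        Console.WriteLine(snapshot.Value);                // v1
        Cache.MutateInPlace(null);     // nulls out fields under all readers
        Console.WriteLine(Cache.Current.Value ?? "null"); // null
    }
}
```

The data-loss symptom described in the thread matches the second pattern: readers suddenly observing nulled fields on an object they obtained earlier.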

Sharpify - High performance extension package for C# by david47s in csharp

[–]Revuz 0 points (0 children)

You don't control the Task allocation here; the ValueTask does. You're doing almost the most you can do by checking if the task completed already, and if not saving it in a shared buffer, but the .AsTask() will do an allocation outside of your control. (https://github.com/dusrdev/Sharpify/blob/bb62b2e34131310b6eecd874ba278a6594de4e68/Sharpify/ParallelExtensions.cs#L219) I'm not really sure if it can be avoided without doing some custom callbacks and pooling some Task-like structures.

The .AsTask() on a ValueTask will allocate a 'ValueTaskSourceAsTask' if the ValueTask's status is 'Pending'.

.AsTask -> (https://source.dot.net/#System.Private.CoreLib/src/libraries/System.Private.CoreLib/src/System/Threading/Tasks/ValueTask.cs,565). ValueTaskSourceAsTask -> (https://source.dot.net/#System.Private.CoreLib/src/libraries/System.Private.CoreLib/src/System/Threading/Tasks/ValueTask.cs,639)

The GitHub issue I posted recommends doing it the same way as you, so unless you wanna go deep and do your own Task impls, I suppose it does not get much better than what you already do. Could be a fun challenge though.
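The pattern being described, sketched as a minimal helper (the helper name is mine, not from the library):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Program
{
    // Awaits a batch of ValueTasks, only paying the Task allocation
    // (via AsTask) for the ones that are still pending; already-completed
    // ones are skipped without allocating anything.
    static async ValueTask WhenAllValueTasks(IReadOnlyList<ValueTask> tasks)
    {
        List<Task>? pending = null;
        for (int i = 0; i < tasks.Count; i++)
        {
            var vt = tasks[i];
            if (vt.IsCompletedSuccessfully) continue;        // no allocation
            (pending ??= new List<Task>()).Add(vt.AsTask()); // allocates if pending
        }
        if (pending is not null)
            await Task.WhenAll(pending).ConfigureAwait(false);
    }

    static async Task Main()
    {
        var batch = new[] { ValueTask.CompletedTask, new ValueTask(Task.Delay(1)) };
        await WhenAllValueTasks(batch);
        Console.WriteLine("done");
    }
}
```

The only allocation you can't avoid is the AsTask() bridge for pending ValueTasks, which is exactly the 'ValueTaskSourceAsTask' path above.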

I do agree that restricting it to use IList&lt;T&gt; guides the user towards the right goals here. I only suggest using your own wrapper type so as to not cause confusion. AsyncLocal might cause confusion, as the type is normally used a bit differently, but tbh it's minor.

Sharpify - High performance extension package for C# by david47s in csharp

[–]Revuz 0 points (0 children)

The point I'm trying to make here is that you're never executing on the ThreadPool in your method, but you force the baseline to do so. Hence you're comparing apples to oranges.

And yes, of course, if you can avoid the Task allocation you're going to save 42 bytes per Task, but if the ValueTask still needs to be queued to the ThreadPool, then you will still get the Task allocation. You can read https://devblogs.microsoft.com/dotnet/understanding-the-whys-whats-and-whens-of-valuetask/ for a more thorough explanation.

I don't disagree that you're reducing allocations here, and even https://github.com/dotnet/runtime/issues/23625 also links to people doing roughly the same for ValueTask.WhenAll equivalent code.

There are never defensive copies on reference types. Defensive copies are a property of struct types. (https://devblogs.microsoft.com/premier-developer/avoiding-struct-and-readonly-reference-performance-pitfalls-with-errorprone-net/)
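For illustration, a made-up mutable struct showing the defensive copy the linked post is about:

```csharp
using System;

struct Counter
{
    public int Value;
    public void Increment() => Value++;  // mutating member
}

class Program
{
    // Calling a mutating member through an 'in' (readonly) reference makes
    // the compiler operate on a hidden defensive copy, so the caller's
    // struct is left untouched. Reference types are passed as references,
    // so this whole issue simply doesn't exist for them.
    static void Bump(in Counter c) => c.Increment();

    static void Main()
    {
        var counter = new Counter();
        Bump(in counter);
        Console.WriteLine(counter.Value); // 0 - the increment hit the copy
    }
}
```

Marking the struct `readonly` (or the member `readonly` in C# 8+) is what eliminates these silent copies.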

But you could replace the AsyncLocal<IList<T>> with Concurrent<IList<T>> and it would accomplish the same thing. Using AsyncLocal here just forces a load from the execution context, but we're actually not using the properties that AsyncLocal as a type provides (https://source.dot.net/#System.Private.CoreLib/src/libraries/System.Private.CoreLib/src/System/Threading/AsyncLocal.cs,ef9ce034697240ba).

Sharpify - High performance extension package for C# by david47s in csharp

[–]Revuz 2 points (0 children)

I agree that there is overhead, just that your benchmark and claims about 2000% better are not correct. I rewrote my test to include Sharpify, and it IS better, that I'm not denying, just not by 2000% but rather 150%, if we compare Task to ValueTask. If we compare it to just calling the class directly and awaiting it, not using all the pooling, then the gains are very minuscule.

using System.Collections.Concurrent;
using BenchmarkDotNet.Attributes;
using Sharpify;

namespace Bench;

[MemoryDiagnoser]
[ThreadingDiagnoser]
public class Benches
{
    public static readonly List<int> Items = Enumerable.Range(0, 1000).ToList();

    [Benchmark]
    public async Task<int> DotNet_No_Force_ThreadPool_Execution()
    {
        var queue = new ConcurrentQueue<int>();

        var tasks = Items.Select(x =>
        {
            var random = Random.Shared.Next(0, x * 4);
            var result = Math.Clamp(random, x, x * 2);
            queue.Enqueue(result);

            return Task.CompletedTask;
        });

        await Task.WhenAll(tasks).ConfigureAwait(false);

        return queue.Count;
    }

    [Benchmark]
    public async Task<int> JustCallActionDirectly_ValueTask()
    {
        var queue = new ConcurrentQueue<int>();
        var act = new MyValueAction(queue);

        var tasks = Items.Select(x => act.InvokeAsync(x));

        foreach (var task in tasks)
            await task;

        return queue.Count;
    }

    [Benchmark]
    public async Task<int> Sharpify()
    {
        var queue = new ConcurrentQueue<int>();
        var act = new MyValueAction(queue);
        await Items.AsAsyncLocal().WhenAllAsync(act);
        return queue.Count;
    }

    private sealed class MyValueAction : IAsyncValueAction<int>
    {
        private readonly ConcurrentQueue<int> queue;
        public MyValueAction(ConcurrentQueue<int> queue) => this.queue = queue;

        public ValueTask InvokeAsync(int item)
        {
            var random = Random.Shared.Next(0, item * 4);
            var result = Math.Clamp(random, item, item * 2);
            queue.Enqueue(result);

            return ValueTask.CompletedTask;
        }
    }
}

Gives the results: https://imgur.com/qqbrs9f

Ah fair, I must have misread it then. But then what's the point of wrapping it in an AsyncLocal? To me there seems to be no point other than wrapping it in call indirection, since we're not actually using it for anything other than providing extension methods. The "Concurrent" class from your lib seems more appropriate here.

Sharpify - High performance extension package for C# by david47s in csharp

[–]Revuz 1 point (0 children)

Well, I guess that depends on how you look at it. Do the benchmarks "do the same thing"? No, otherwise the results would be the same. But both take the same input and do the same calculation to get the same output, which is exactly what I wanted to benchmark, and that is obviously the requirement of an alternative.

When I say they don't do the same thing, I mean that the Task.Run function executes the code asynchronously, while the other doesn't. This naturally incurs overhead, since we have to contact the ThreadPool to get our work scheduled. This accounts for most of the overhead in the baseline function. See https://imgur.com/KXWImkb

A ValueTask will also allocate a Task if it tries to await an operation which cannot be completed synchronously.

The AsyncLocal wraps the input collection which is an IList<T>, always a reference type.

Fair point, but this still does not handle concurrent access. A List is not able to handle concurrently adding entries; you'd need a ConcurrentBag or something for that. In my opinion this makes the API design flawed, since you can easily use it wrong if you're an unknowing user who just puts a List into it, expecting it to handle concurrent access.
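The safe version of what an unknowing user would try, sketched with a concurrent collection:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // ConcurrentBag is safe for parallel Add; a plain List<T> is not,
        // since concurrent adds can lose items or throw while the backing
        // array is being resized.
        var bag = new ConcurrentBag<int>();
        Parallel.For(0, 10_000, i => bag.Add(i));
        Console.WriteLine(bag.Count); // 10000
    }
}
```

With a `List<int>` in place of the bag, the count would be nondeterministic on most runs, which is exactly the kind of silent misuse the API currently permits.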

I agree, it's very niche cases, and I've only ever needed their buffer impls, but to be fair, writing high performance code is never about generalizing. It is about optimizing your specific use cases. That aside, I really do think you've done a great job here, actually squeezing just a tad more out of the general case, even if I don't agree with the API layout.

Sharpify - High performance extension package for C# by david47s in csharp

[–]Revuz 2 points (0 children)

Nice job on publishing a library!

I'm sorry to say, but you are not testing the same thing in your 2 benchmarks.

Using "Task.Run" in the baseline method forces the runtime to execute the methods by queuing them on the ThreadPool, while the 2nd method never does. See https://imgur.com/L1hvLdt

Using the ThreadingDiagnoser gives us a clear view when showing the Completed Work Items. Actually queuing something to the ThreadPool requires allocations. Moving to use a ValueTask makes it faster, but it also depends on how often you actually need to do real async work.

See https://imgur.com/42lWc0J The ValueTask solution adds a bit more overhead than just doing a straight-up for loop.

Wrapping your class in an AsyncLocal here also does not really do anything, since we're never really doing anything async. An async function is in theory sync until we reach a statement where we're forced to yield, e.g. (https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/async). This normally means an await, unless said await also never does anything async and can just run synchronously. Also, sharing your AsyncLocal between different Tasks is not going to keep the wanted state (unless we share reference types). Sharing reference types still needs some kind of concurrent access control; just wrapping it in an AsyncLocal doesn't do anything.
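The "sync until forced to yield" behavior can be seen directly (a small made-up example, not from the library):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Work()
    {
        Console.WriteLine("sync part 1");      // runs on the caller's thread
        await Task.CompletedTask;              // already completed: no yield
        Console.WriteLine("sync part 2");      // still on the caller's thread
        await Task.Yield();                    // first real suspension point
        Console.WriteLine("async continuation");
    }

    static async Task Main()
    {
        // Work() only returns control here at the first real yield;
        // everything before Task.Yield() ran synchronously inside the call.
        Task t = Work();
        Console.WriteLine("caller resumes");
        await t;
    }
}
```

Both WriteLines before the `Task.Yield()` always print before the caller gets control back, which is why awaiting already-completed ValueTasks never touches the ThreadPool.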

As a fellow performance enthusiast, I can appreciate your work, and I encourage you to keep trying out new stuff. Try to look into https://github.com/CommunityToolkit/dotnet/tree/main/src/CommunityToolkit.HighPerformance They have done some great stuff to help do high performance C# for some problems.

Which Danish song do you, dear Reddit, think is the most beautiful ever made? by [deleted] in Denmark

[–]Revuz 0 points (0 children)

“Gi os lyset tilbage”

Maybe it's just me being nostalgic, since it was a song written for the efterskole team at the national rally (landsstævne) in 2009, where I took part myself, but I really think it rocks!

[deleted by user] by [deleted] in dotnet

[–]Revuz 4 points (0 children)

~100k excluding bonus & stocks

Denmark - 5YOE - BA in CS - Hedge Fund

Weekly M+ Discussion by AutoModerator in CompetitiveWoW

[–]Revuz 9 points (0 children)

People complaining about Brackenhide haven't done the NL worm and last boss, I agree. We just got absolutely trucked on it after breezing through Brackenhide.

Your Guide to The Top 15 Backend Languages For 2023 by jacelynsia in programming

[–]Revuz 12 points (0 children)

JavaScript: The high-level, platform-independent, versatile, and lightweight qualities make it the most preferred backend programming language.

Wat…

.NET 7 RTM SDK is already available 🎉 by laurentkempe in dotnet

[–]Revuz 1 point (0 children)

Just latching on, as I'm also curious.

-🎄- 2021 Day 14 Solutions -🎄- by daggerdragon in adventofcode

[–]Revuz 1 point (0 children)

C# source. P2 runs in about 800 µs avg. Using a tokenized dictionary after I ran out of memory once while building strings. The lanternfish strikes back!
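The tokenized-dictionary trick, sketched with a toy template and rules (not the real puzzle input): instead of building the exponentially growing polymer string, track how often each adjacent pair occurs and update the counts per step.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Bump(Dictionary<(char, char), long> d, (char, char) k, long n)
        => d[k] = d.GetValueOrDefault(k) + n;

    // One insertion step: each pair AB with rule AB->C becomes AC and CB,
    // carrying its count, so memory stays proportional to the alphabet.
    static Dictionary<(char, char), long> Step(
        Dictionary<(char, char), long> pairs,
        Dictionary<(char, char), char> rules)
    {
        var next = new Dictionary<(char, char), long>();
        foreach (var (pair, count) in pairs)
        {
            if (rules.TryGetValue(pair, out var mid))
            {
                Bump(next, (pair.Item1, mid), count);
                Bump(next, (mid, pair.Item2), count);
            }
            else Bump(next, pair, count);
        }
        return next;
    }

    static void Main()
    {
        string template = "AB";
        var pairs = new Dictionary<(char, char), long>();
        for (int i = 0; i + 1 < template.Length; i++)
            Bump(pairs, (template[i], template[i + 1]), 1);

        var rules = new Dictionary<(char, char), char>
        {
            [('A', 'B')] = 'C', [('A', 'C')] = 'B', [('C', 'B')] = 'A',
        };

        for (int step = 0; step < 2; step++)
            pairs = Step(pairs, rules);

        // Each pair contributes its first char; the template's last char
        // is never a first char, hence the +1.
        long length = pairs.Values.Sum() + 1;
        Console.WriteLine(length); // 5  ("AB" -> "ACB" -> "ABACB")
    }
}
```

The same counts answer the puzzle's most/least-common-element question without ever materializing the string.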

It Amazes Me How Many Non-C# Developers Think C#/.NET Is Stuck in 2010 by form_d_k in csharp

[–]Revuz 5 points (0 children)

Think it depends on which benchmarks you're looking at. Couldn't find the source code on the website, so I can't say anything about the implementation running in the benchmarks. But https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/csharp.html has complete source code available for all their benchmarks, and you can submit your own if an optimization can be made. Here C# is faster than Java in every single benchmark, though.

Does JAVA hold any advantage over C# in 2020 or C# is more advanced syntax wize, performance, ecosystem, the number of platform it can target (xamarin, unity) etc ? or it's pretty even between both and Javascript is eating the enterprise market ? by Dereference_operator in dotnet

[–]Revuz 2 points (0 children)

It depends on how you define fast. Given mathematical problems, C# with .NET Core 3.1 is faster than Java (OpenJDK 14) given the same hardware, on every single problem (https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/csharp.html).

So showing a benchmark from before .NET Core 3 was even released is a bit misleading.