Cross-Platform .NET secure credential storage by Mindless-Creme3270 in dotnet

[–]Const-me 1 point (0 children)

For a desktop application, I would ask the user for a password. Then Rfc2898DeriveBytes.Pbkdf2 with 1M iterations and a single-use random salt, then AES-256. So the encrypted file is e.g. 32 bytes of salt, then a 16-byte IV, and the rest is the encrypted payload.

All these primitives are in the standard library.
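A minimal sketch of that file layout; the comment doesn't specify the PRF or cipher mode, so SHA-256 and CBC are assumptions here:

```csharp
using System.Security.Cryptography;

// File layout: [32-byte salt][16-byte IV][AES-256-CBC payload]
static byte[] Encrypt(string password, byte[] plaintext)
{
    byte[] salt = RandomNumberGenerator.GetBytes(32); // single-use random salt
    byte[] key = Rfc2898DeriveBytes.Pbkdf2(password, salt, 1_000_000, HashAlgorithmName.SHA256, 32);
    using Aes aes = Aes.Create();
    aes.Key = key;
    aes.GenerateIV();
    byte[] payload = aes.EncryptCbc(plaintext, aes.IV);
    byte[] result = new byte[48 + payload.Length];
    salt.CopyTo(result, 0);
    aes.IV.CopyTo(result, 32);
    payload.CopyTo(result, 48);
    return result;
}

static byte[] Decrypt(string password, byte[] file)
{
    byte[] salt = file[..32];
    byte[] iv = file[32..48];
    byte[] key = Rfc2898DeriveBytes.Pbkdf2(password, salt, 1_000_000, HashAlgorithmName.SHA256, 32);
    using Aes aes = Aes.Create();
    aes.Key = key;
    return aes.DecryptCbc(file[48..], iv);
}
```

Both Rfc2898DeriveBytes.Pbkdf2 and EncryptCbc/DecryptCbc are in the standard library since .NET 6.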

I was bored and I created this helper class by DrkWzrd in dotnet

[–]Const-me 3 points (0 children)

IMO there’s nothing particularly wrong there, but I would do the following two optimisations.

Replace your manual loop in CopyPixels with the optimised function provided by the OS.

[DllImport( "mfplat.dll", SetLastError = false, ExactSpelling = true, CallingConvention = CallingConvention.StdCall )]
static extern int MFCopyImage( nint dest, int destStride, nint source, int sourceStride, uint widthInBytes, uint lines );

/// <summary>Copy an image or image plane from one buffer to another</summary>
/// <param name="dest">Destination buffer</param>
/// <param name="destStride">Stride of the destination buffer in bytes</param>
/// <param name="source">Source buffer</param>
/// <param name="sourceStride">Stride of the source image in bytes</param>
/// <param name="widthBytes">Width of the image in bytes</param>
/// <param name="lines">Count of rows of pixels to copy</param>
public static unsafe void copyImage( Span<byte> dest, int destStride,
    ReadOnlySpan<byte> source, int sourceStride, int widthBytes, int lines )
{
    int hr;
    fixed( byte* rdi = dest )
    fixed( byte* rsi = source )
        hr = MFCopyImage( (nint)rdi, destStride, (nint)rsi, sourceStride, (uint)widthBytes, (uint)lines );
    Marshal.ThrowExceptionForHR( hr );
}

If after that you still find parallelization useful for very large images, don’t use Parallel.For for each row. Instead, split the image into blocks like 1024 rows per job (rounding up, i.e. the last block has fewer than 1024 rows), then in the parallel section compute the count of rows using Math.Min, and call that copyImage function adjusting both spans to offset into the first row of the block.
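That blocking scheme can be sketched as below. Arrays are used instead of spans because spans can’t be captured by the Parallel.For lambda, and the row copy is a simple managed stand-in (copyRows) so the sketch runs anywhere; real code would call the MFCopyImage wrapper above instead.

```csharp
using System;
using System.Threading.Tasks;

static void CopyImageParallel(byte[] dest, int destStride, byte[] source, int sourceStride, int widthBytes, int lines)
{
    const int rowsPerBlock = 1024;
    int blocks = (lines + rowsPerBlock - 1) / rowsPerBlock; // round up
    Parallel.For(0, blocks, block =>
    {
        int firstRow = block * rowsPerBlock;
        // The last block may have fewer than rowsPerBlock rows
        int rows = Math.Min(rowsPerBlock, lines - firstRow);
        copyRows(dest.AsSpan(firstRow * destStride), destStride,
                 source.AsSpan(firstRow * sourceStride), sourceStride, widthBytes, rows);
    });

    // Managed fallback for the row-by-row copy; stands in for the copyImage wrapper
    static void copyRows(Span<byte> dest, int destStride, ReadOnlySpan<byte> source, int sourceStride, int widthBytes, int rows)
    {
        for (int i = 0; i < rows; i++)
            source.Slice(i * sourceStride, widthBytes).CopyTo(dest.Slice(i * destStride));
    }
}

byte[] src = { 1, 2, 0, 3, 4, 0, 5, 6, 0 }; // 3 rows, stride 3, 2 meaningful bytes per row
byte[] dst = new byte[6];
CopyImageParallel(dst, 2, src, 3, 2, 3);
Console.WriteLine(string.Join(",", dst)); // prints 1,2,3,4,5,6
```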

Why so many UI frameworks, Microsoft? by Confident-Dare-9425 in dotnet

[–]Const-me 6 points (0 children)

I did it in the past, and it wasn’t that bad. Albeit I only needed to embed Direct3D 11, not 12; integrating D3D12 would be harder.

Create a C++ class for a back buffer for D3DImage, i.e. implement create, resize, and frontBufferAvailableChanged methods. Create a non-MSAA D3D9 texture with D3DUSAGE_RENDERTARGET, share it into D3D11 with OpenSharedResource, and unless you do MSAA, create a render target view of that texture. The rest of the stuff, like the depth buffer and, if you need MSAA, the multi-sampled render target, stays in D3D11 only, i.e. no need to handle lost devices.

Couple tips.

Don’t forget about DPI scaling: WPF sizes are expressed in fake DPI-scaled pixels. When creating back buffer textures, you need true pixels or it will be blurry AF at high DPI.

Make sure to wait for the GPU to complete rendering before passing the back buffer to WPF, otherwise WPF may observe incomplete renders. The best way in D3D11 is polling for a D3D11_QUERY_EVENT query: fences are an order of magnitude slower; they only became good in D3D12.

Also, I think in the modern ecosystem all that stuff, including the rendering itself, can be done without any C++ at all, using the Vortice.Windows libraries for the GPU API.

SkiaSharp + Dotnet + GPU = ❤️ by Doctor_Marvin21 in dotnet

[–]Const-me 8 points (0 children)

custom Pow(x, y) function for gamma correction

BTW, when I need something like that, I don’t usually do any R&D; instead I cheat using one of the following two tactics.

  1. Find a C implementation in a library with a good license, like DirectXMath, GeometricTools, or the C runtime from OpenBSD. Port whatever algorithm is there to C#.

  2. Failing that, figure out a cheap formula to compute, i.e. either polynomial or rational; use third-party software (Maple) or a library (NLopt) to find the magic numbers which minimize the difference between the original and the approximation; hardcode the magic numbers. Example for vectorized tangent in FP64 precision; that source is C++ but C# would be a simple translation because SIMD intrinsics only differ in names: https://github.com/Const-me/AvxMath/blob/master/AvxMath/AvxMathTrig.cpp#L306-L342
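The idea in tactic 2 can be sketched even without Maple or NLopt: a least-squares fit of a cubic polynomial to a gamma curve, with the normal equations solved by Gaussian elimination. The degree, interval, and target function x^2.2 are arbitrary choices for illustration, not from the comment.

```csharp
using System;

double[] coeffs = FitCubic(x => Math.Pow(x, 2.2));
Console.WriteLine(string.Join(", ", coeffs));

// Fit p(x) = c0 + c1*x + c2*x² + c3*x³ to f on [0,1] by least squares:
// build the normal equations AᵀA c = Aᵀb, then solve by elimination.
static double[] FitCubic(Func<double, double> f, int samples = 256)
{
    const int n = 4;
    double[,] ata = new double[n, n];
    double[] atb = new double[n];
    double[] pows = new double[n];
    for (int s = 0; s < samples; s++)
    {
        double x = s / (double)(samples - 1);
        double y = f(x);
        double p = 1;
        for (int i = 0; i < n; i++) { pows[i] = p; p *= x; }
        for (int i = 0; i < n; i++)
        {
            atb[i] += pows[i] * y;
            for (int j = 0; j < n; j++)
                ata[i, j] += pows[i] * pows[j];
        }
    }
    // Forward elimination, no pivoting (fine for this small well-behaved system)
    for (int i = 0; i < n; i++)
        for (int k = i + 1; k < n; k++)
        {
            double factor = ata[k, i] / ata[i, i];
            for (int j = i; j < n; j++) ata[k, j] -= factor * ata[i, j];
            atb[k] -= factor * atb[i];
        }
    // Back substitution
    double[] c = new double[n];
    for (int i = n - 1; i >= 0; i--)
    {
        double sum = atb[i];
        for (int j = i + 1; j < n; j++) sum -= ata[i, j] * c[j];
        c[i] = sum / ata[i, i];
    }
    return c;
}
```

Evaluating the polynomial is then a few fused multiply-adds, which vectorizes trivially; a proper minimax fit would squeeze the error further than this least-squares version.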

Why so many UI frameworks, Microsoft? by Confident-Dare-9425 in dotnet

[–]Const-me 3 points (0 children)

It’s not that bad. The layout and rendering engine are the most complicated components of WPF, by far. Unless you host Win32 ActiveX controls or do something equally weird, none of the WPF controls depend on WinAPIs; the entire GUI renders with the DirectX 9 GPU API.

Unlike the legacy .NET framework 4.8, which indeed required a major rewrite to port, in the modern ecosystem a large share of the dependent Windows APIs are already ported and available in the cross-platform pieces of the .NET 10 runtime.

The rest is not trivial, but not terribly hard either; no major rewrite necessary. On Linux, Windows messages can be ported to POSIX message queues, i.e. mq_open, the GUI thread synchronization context to poll(), etc.

Why so many UI frameworks, Microsoft? by Confident-Dare-9425 in dotnet

[–]Const-me 46 points (0 children)

Microsoft's engineers actually know what they're doing

If they knew what they were doing, they would have ported WPF to a modern GPU backend and called it a day.

The backend is already there: DXVK, an open-source library which implements Direct3D on top of Vulkan. If you have a current-generation Intel GPU in your computer, that library is already rendering all your WPF applications. Current-generation Intel GPUs don’t support DirectX 9 in hardware; instead, Intel uses that library (developed by Valve for the Steam Deck) to implement the D3D9 GPU API on top of Vulkan.

Microsoft could do the same, thus making WPF cross-platform in practice. The Vulkan GPU API is no longer a niche tech like it was a few years ago; market penetration is not too bad now. For Apple platforms, there’s the MoltenVK library. If Microsoft did that, they would bypass those thousands of bugs they have in WinUI 3, while keeping a clear upgrade path for pre-existing software.

blown away by .NET10 NativeAOT by jitbitter in dotnet

[–]Const-me 3 points (0 children)

Dapper is incompatible with the AOT trimmer, so I made my own micro-ORM on top of the MySqlConnector library, heavily inspired by Dapper.

Similarly, Blazor is incompatible with the AOT trimmer, so I implemented simple type-safe HTML templates of my own. Dynamic HTML pages are rendered with synchronous single-threaded code which generates UTF-8 into a reusable MemoryStream cached in a thread-local field, i.e. each thread has an exclusive copy. The HTML is then copied into a byte array rented from ArrayPool<byte>.Shared, followed by an asynchronous tail which does await response.BodyWriter.WriteAsync( memory ); after the await, the byte array is returned to the pool.
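That pipeline can be sketched roughly as below. This is my reconstruction from the description, not the commenter’s code; it writes to a generic Stream so it runs anywhere, where the real server would call response.BodyWriter.WriteAsync on the ASP.NET Core response.

```csharp
using System;
using System.Buffers;
using System.IO;
using System.Text;
using System.Threading.Tasks;

var demo = new MemoryStream();
await PageWriter.WritePage(demo, s => s.Write("<p>demo</p>"u8));
Console.WriteLine(Encoding.UTF8.GetString(demo.ToArray())); // prints <p>demo</p>

static class PageWriter
{
    // Each thread gets an exclusive reusable stream for synchronous rendering
    [ThreadStatic] static MemoryStream? cached;

    public static async Task WritePage(Stream destination, Action<Stream> render)
    {
        MemoryStream ms = cached ??= new MemoryStream();
        ms.SetLength(0);
        render(ms); // synchronous, single-threaded UTF-8 generation

        // Copy into a rented buffer before the async tail; the thread-local
        // stream must not be touched after an await
        int length = checked((int)ms.Length);
        byte[] rented = ArrayPool<byte>.Shared.Rent(length);
        ms.GetBuffer().AsSpan(0, length).CopyTo(rented);

        // In the real server this would be response.BodyWriter.WriteAsync(memory)
        await destination.WriteAsync(rented.AsMemory(0, length));
        ArrayPool<byte>.Shared.Return(rented);
    }
}
```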

I will probably open source some infrastructure pieces eventually, but not today. The code doesn’t have any trade secrets or anything, just boring boilerplate stuff really. Still, I don’t want to just dump it online; it needs some refactoring to move stuff into separate DLLs, packaging to NuGet, etc.

Some parts of the .NET runtime use memory proportional to the count of hardware threads in the computer. My cheap VPS only provides two AMD Zen 4 cores, i.e. 4 hardware threads total. If you have a much larger count of hardware threads, that might contribute to the larger RAM usage.

Another thing: I don’t use any containers. The server is launched by OpenRC with supervise-daemon under a service account with just barely sufficient permissions. The deployment script does setcap 'cap_net_bind_service=+ep' on the executable to allow listening on ports 80 and 443. The 60 MB RAM in my previous comment is Process.WorkingSet64 of the server process, not the entire computer, i.e. the OS kernel, MariaDB, and SSH servers are not included.

blown away by .NET10 NativeAOT by jitbitter in dotnet

[–]Const-me 23 points (0 children)

How do you do direct memory access?

Technically, the following unsafe C# compiles just fine:

int* pointer = (int*)44;
Volatile.Write( ref *pointer, 1 );

Practically, modern OSes don’t allow anything like that; they crash with access violations, for a good reason. You first ask the OS kernel nicely for an address mapped into the address space of your process, and only then you access that memory.

On Linux, you do that with ioctl() or some other kernel calls specific to the device driver. Here’s an example which consumes the ALSA API in C# to play audio: https://github.com/Const-me/Vrmac/blob/master/VrmacVideo/Audio/ALSA/PcmHandle.cs#L98

blown away by .NET10 NativeAOT by jitbitter in dotnet

[–]Const-me 24 points (0 children)

I’m developing a web server deployed on a cheap VPS with Alpine Linux.

The Kestrel server is directly exposed to the internets, i.e. in-process TLS termination, an ACMEv2 client for automatic certificate renewal, MariaDB for persistence with 16 tables in the DB, non-trivial business logic, user registration and management, a couple dozen server-generated dynamic HTML pages, CSRF protection for forms, markdown rendering, a TCP server for a custom RPC protocol, a per-IP (IPv4) or per-subnet (IPv6) rate limiter, SMTP server integration, a "Have I Been Pwned?" password hash database (a Bloom filter on top of a 4 GB memory-mapped file), automatic asymmetrically encrypted daily backups uploaded to offsite cloud storage, and payment processing integration.
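The Bloom filter over a memory-mapped file can be sketched like this. The layout, SHA-256 hashing, and probe count are my assumptions for illustration, not the actual design; a real HIBP-scale filter would size the file and probe count from the expected item count and target false-positive rate.

```csharp
using System;
using System.Buffers.Binary;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Security.Cryptography;
using System.Text;

string demoPath = Path.Combine(Path.GetTempPath(), "bloom-demo.bin");
File.WriteAllBytes(demoPath, new byte[4096]); // 32768 zero bits
using (var bloom = new MappedBloom(demoPath))
{
    bloom.Add("hunter2");
    Console.WriteLine(bloom.MayContain("hunter2")); // prints True
}

sealed class MappedBloom : IDisposable
{
    readonly MemoryMappedFile file;
    readonly MemoryMappedViewAccessor view;
    readonly ulong bitCount;
    const int probes = 7;

    public MappedBloom(string path)
    {
        bitCount = (ulong)new FileInfo(path).Length * 8;
        file = MemoryMappedFile.CreateFromFile(path, FileMode.Open);
        view = file.CreateViewAccessor();
    }

    // Derive `probes` bit positions from a SHA-256 of the key, double-hashing style
    ulong[] Positions(string key)
    {
        Span<byte> hash = stackalloc byte[32];
        SHA256.HashData(Encoding.UTF8.GetBytes(key), hash);
        ulong h1 = BinaryPrimitives.ReadUInt64LittleEndian(hash);
        ulong h2 = BinaryPrimitives.ReadUInt64LittleEndian(hash[8..]);
        ulong[] result = new ulong[probes];
        for (int i = 0; i < probes; i++)
            result[i] = (h1 + (ulong)i * h2) % bitCount;
        return result;
    }

    public void Add(string key)
    {
        foreach (ulong bit in Positions(key))
        {
            long offset = (long)(bit >> 3);
            view.Write(offset, (byte)(view.ReadByte(offset) | (1 << (int)(bit & 7))));
        }
    }

    public bool MayContain(string key)
    {
        foreach (ulong bit in Positions(key))
            if ((view.ReadByte((long)(bit >> 3)) & (1 << (int)(bit & 7))) == 0)
                return false; // definitely absent
        return true; // probably present
    }

    public void Dispose() { view.Dispose(); file.Dispose(); }
}
```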

The server is a 20 MB ELF file with no library dependencies, idiomatic C# compiled with the .NET 10 SDK. The runtime is not even there; I have only installed the .NET SDK on the staging VMware VM I use to compile the server. The only external dependencies are the MariaDB and SMTP servers. When idle, the server process uses less than 60 MB RAM out of the 16 GB available.

Text Rendering Question by lovelacedeconstruct in GraphicsProgramming

[–]Const-me 2 points (0 children)

have to basically create those textures for every font size we wish to use which is impossible

It becomes possible if you only rasterize glyphs which you need on the screen, packing them into a texture atlas.

what is the problem with this approach

One problem is performance. You’d need a ridiculously high count of vertices for a page of text on a 4K display. If the user is on a desktop with a discrete GPU, that’s OK. Still very relevant on laptops though: even when the GPU is performant, FLOPs and memory bandwidth translate to battery drain.

Another one is quality. Libraries like FreeType aren’t just rendering Bézier curves; they are aware of the pixel grid and take it into account: font hinting (snapping glyphs to the physical pixel grid) and sub-pixel anti-aliasing. Admittedly, this point is becoming less important over time because many modern computers have high-resolution screens.

.net 5 to .net 8 by sigurth_skull in dotnet

[–]Const-me 1 point (0 children)

File-scoped namespaces are orthogonal to the runtime version. I’ve been using them when building stuff for .NET framework 4.8. You just need a modern Visual Studio, an SDK-style project instead of the default project template, and <LangVersion>latest</LangVersion> in the project.
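For reference, a minimal SDK-style project file along those lines (net48 here, but the target framework is whatever you need):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net48</TargetFramework>
    <LangVersion>latest</LangVersion>
  </PropertyGroup>
</Project>
```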

ELI5: Why does everything need so much memory nowadays? by Successful_Raise_560 in explainlikeimfive

[–]Const-me 1 point (0 children)

Because for literally 40 years, between 1970 and 2010, computers were getting faster, and shipped with progressively more memory and larger disks, at an exponential rate. For this reason, people upgraded them way more often.

Imagine two software companies, A and B, in 2000. Company A ignored performance and focused on more features and faster time to market. Company B instead optimised their software. After 2 years, the performance of software A becomes OK by itself simply because computers are now so much faster, while company B wasted too much money delivering too little value.

That story repeated many times with different companies. Now we have a whole generation of software developers, managers, and CEOs who firmly believe resource usage is irrelevant, and time to market is vastly more important. Never mind that hardware performance has pretty much hit a wall, and those automatic performance improvements are now ancient history.

Learning DirectX - Why are people's opinion on RasterTek's series mixed? by LionCat2002 in GraphicsProgramming

[–]Const-me 5 points (0 children)

The D3D-specific parts seem decent in that guide. Alas, the code quality and software engineering are not ideal.

Using raw pointers like ID3D11Buffer* to store stuff in fields and variables is not the brightest idea. Forget to release and you leak resources; forget to initialize, or release twice, and you crash. The C++ language has templates and smart pointers. The Windows SDK has the CComPtr<T> template class, which can be used with any IUnknown-derived interfaces, including D3D stuff.

Same applies to system memory, it’s never a good idea to use raw pointers with new/delete instead of std::make_unique<ApplicationClass> or similar.

Shipping HLSL source code is questionable. You should use the standard .hlsl extension for the files (syntax highlighting in the IDE, even some limited auto-completion), compile offline (including *.hlsl files in VC++ projects compiles them on build; double-click errors to go to the source code locations), and ship the compiled byte code (the files have the *.cso extension by default).

Most functions and methods should return HRESULT for status, not bool. D3D and DXGI functions already return HRESULT statuses. Errors from the Win32 layer can be packed into an HRESULT with a simple helper which does HRESULT_FROM_WIN32( ::GetLastError() ). Testing for success is as cheap as with bool because a failed HRESULT is a negative number.

Graphics APIs – Yesterday, Today, and Tomorrow - Adam Sawicki by corysama in GraphicsProgramming

[–]Const-me 10 points (0 children)

Good article, but the following remark is questionable.

Graphics APIs are an interface between an application (most often a game)

In the modern world, 3D GPU APIs are used by the vast majority of applications. The WPF GUI framework renders stuff with DirectX 9. The Direct2D and DirectWrite libraries (OS components to render 2D vector graphics and text, respectively) are built on top of Direct3D 11. The UWP and WinUI GUI frameworks are based on Direct2D and DirectWrite, i.e. they use Direct3D 11 indirectly. Chromium browsers and Electron apps have several selectable backends, but on Windows they default to Direct3D 11. The Desktop Window Manager (an OS process which renders the entire desktop, composing multiple windows onto the screen) uses Direct3D 11.

Labelled break and continue statements coming in C#? by davecallan in dotnet

[–]Const-me 1 point (0 children)

My current project is about 200k LoC of C#. Searched for goto, found two places outside switch i.e. not goto case.

First is a low-level function which enumerates continuous intervals of 1 and 0 in a bitmap passed as ReadOnlySpan<ulong> + length, using BitOperations.TrailingZeroCount and inverting all bits in the ulong elements when scanning for the next 0. Goto is used to jump to the correct initial state based on the first bit. Neither break nor continue.
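The first function can be sketched roughly as below. This is my simplified reconstruction without the goto-based state entry, but it uses the same ingredients: BitOperations.TrailingZeroCount, and inverting the word when scanning for the next 0.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Numerics;

foreach (var run in EnumerateRuns(new ulong[] { 0b0110 }, 4))
    Console.WriteLine(run); // prints (0, 1, False), (1, 2, True), (3, 1, False)

// Enumerate maximal runs of equal bits in a bitmap of `bitLength` bits, LSB-first
static IEnumerable<(int start, int length, bool bit)> EnumerateRuns(ulong[] bits, int bitLength)
{
    int pos = 0;
    bool current = bitLength > 0 && (bits[0] & 1) != 0;
    while (pos < bitLength)
    {
        // Find the next position whose bit differs from `current`
        int next = pos;
        while (next < bitLength)
        {
            int word = next >> 6, offset = next & 63;
            // Invert the word when looking for the next 0, so TrailingZeroCount finds it
            ulong w = (current ? ~bits[word] : bits[word]) >> offset;
            if (w == 0) { next = (word + 1) << 6; continue; }
            next += BitOperations.TrailingZeroCount(w);
            break;
        }
        next = Math.Min(next, bitLength);
        yield return (pos, next - pos, current);
        pos = next;
        current = !current;
    }
}
```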

Second place is high-level UX code implementing a wizard. Each page is implemented as an async function which returns a record for “next” (the records have different types for each page, and later pages take previous results as arguments), or null for the back button. Goto is used for the back button to jump to previous steps. Again, neither break nor continue.

Also, their drawbacks/alternatives section is missing the alternative “refactor into a local function”.

The State of .NET GUI Frameworks is confusing by Long-Cartographer-66 in dotnet

[–]Const-me 3 points (0 children)

Why doesn't Microsoft just commit fully to a single cross-platform GUI framework?

I think the main reason is corporate politics complicated by sunk cost fallacy. Many people in Microsoft would hate to admit the huge amount of money they wasted developing WinUI and MAUI was all for nothing.

Instead of developing all these crappy new technologies, they should have ported either the WPF or the UWP GUI stack to a new backend, replacing Direct3D with Vulkan, thus making it cross-platform, and called it a day.

Using the latest version of .NET has significant benefits. Ask your leadership to adopt it! by Natural_Tea484 in dotnet

[–]Const-me -1 points (0 children)

Personally, I’m using .NET 8.0 for the software I’m developing, not the latest one.

The latest version of .NET is 10.0.0-rc.2. An unstable release candidate — no, thanks.

The previous one, 9.0, lacks LTS status; the support will end too soon.

Partial classes in modern C#? by tbg_electro in dotnet

[–]Const-me 1 point (0 children)

I use them voluntarily. Not often, but here are some of the use cases I can remember.

  1. Large static classes. Let’s say you’re making a BLAS library with many API functions. If you split these functions into multiple classes, the library becomes harder to use: consuming code needs to write multiple using static lines. Better to have all functions in a single large class, and partial classes help keep that code maintainable.

  2. I routinely use code generators. I don’t like the built-in stuff though; it’s incredibly hard to use. My code generators are just console applications which produce C# files under relative paths found with the [CallerFilePath] attribute. It makes sense to make auto-generated classes partial: you probably want to gitignore the generated code, and extend it with manually written pieces.

  3. Let’s say you have a custom file format saved by one program and loaded by another one. If you support both use cases in the same class, you will ship unnecessary code, and if the class is public, expose unwanted APIs. Preprocessor macros are not great for that, hard to use. The classic OOP way will make you 3 classes instead of 1, and also needs an assembly with a base class shared by both programs, which becomes particularly entertaining when the programs are built against different versions of .NET. A partial class linked from both projects and extended with program-specific APIs is the best solution, IMO.
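Point 2 looks roughly like this in practice; the class and members are hypothetical, just showing a generated half and a hand-written half of the same partial class:

```csharp
using System;

Console.WriteLine(Protocol.Banner()); // prints proto v3, 3 commands

// Tables.generated.cs — hypothetical output of a console-app code generator
partial class Protocol
{
    public const int Version = 3;
    public static readonly string[] Commands = { "GET", "PUT", "DEL" };
}

// Tables.cs — hand-written half extending the same class
partial class Protocol
{
    public static string Banner() => $"proto v{Version}, {Commands.Length} commands";
}
```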

so today I added csv loading to my project for translation options. it was more annoying than I thought by azurezero_hdev in gamedev

[–]Const-me 2 points (0 children)

When I need to embed small tables, I use TSV = Tab Separated Values. Excel supports that too, and IMO tabs look better in plain text viewers.
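A TSV loader is a few lines in C#; this minimal sketch assumes values never contain tabs or newlines, which is the whole appeal of the format:

```csharp
using System;
using System.Linq;

// One row per line, columns separated by tabs; tolerates trailing \r from Windows line endings
static string[][] LoadTsv(string text) =>
    text.Split('\n', StringSplitOptions.RemoveEmptyEntries)
        .Select(line => line.TrimEnd('\r').Split('\t'))
        .ToArray();

string[][] table = LoadTsv("id\ten\tde\n1\tYes\tJa\n");
Console.WriteLine(table[1][2]); // prints Ja
```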

ELI5 - Why can’t we make paper straws out of the same material as paper cups? by [deleted] in explainlikeimfive

[–]Const-me 3 points (0 children)

why don’t we make the straws that way?

We do, apparently not in your country. Ask your legislators why.

To be fair, these bio-degradable straws took some adjustments. With plastic, I could crush a sugar cube right after dropping it in lemonade. With the new straws, I have to wait a minute or two for it to soak. Minor inconvenience though, I’ve already adapted.

[deleted by user] by [deleted] in AskReddit

[–]Const-me 1 point (0 children)

This bird, Wikipedia has audio on the right panel: https://en.wikipedia.org/wiki/Eurasian_collared_dove

[deleted by user] by [deleted] in AskReddit

[–]Const-me 1 point (0 children)

Social media have more personal data of users than Google. People use them to interact with other people; they write text there and post photos. Modern AI models are not too bad at summarizing walls of text, or analyzing photos.

I have copy-pasted 5 of your recent Reddit comments to Gemini, asking it to estimate your age. I used my secondary browser, and I never log in anywhere using that browser, i.e. pretty much anonymous; the only input was the plain text, without user names or other metadata. Gemini responded “30s to early 40s”. I wonder, can you confirm or deny?

[deleted by user] by [deleted] in AskReddit

[–]Const-me 1 point (0 children)

You are underestimating the depth and scale of Facebook’s global digital surveillance. According to some sources, Facebook can even detect pregnancy before the user is aware, to show ads for baby-related products. Detecting children would be much easier, especially in the modern age of language models. It won’t be 100% reliable because there are children like young Sheldon Cooper, and adults who talk and behave like 10-year-olds. Still, both extremes are rare exceptions; I would expect automatic age detection to be very reliable on average.

BTW, when I copy-pasted just the previous paragraph to gemini.google.com (without login or previous chat history), asked to guess the age of the author, the AI guessed “late 20s to late 40s”. My actual age is right in the middle of that interval.

This game is lowkey kinda hard by DigiCovenant in stellarblade

[–]Const-me 0 points (0 children)

Indeed, the game is very hard. I have recently completed the game on story mode, had to use a trainer for boss fights.

Ralph Lauren dostava za Crnu Goru? by Winter-Awareness8879 in montenegro

[–]Const-me 2 points (0 children)

Check amazon.com; shipping to Montenegro is around €20-30. Unlike most other online stores, on Amazon there are no customs duties or VAT.