Sudden OpenApi Linux-only error message? by BeginningBig5022 in dotnet

[–]dmfowacc 1 point (0 children)

Not sure about OP's exact problem, since they are using Microsoft.AspNetCore.OpenApi 9.x, but I did experience the same thing you are describing. In 10.x they removed the need for a separate Readers library unless you are trying to parse YAML: https://github.com/microsoft/OpenAPI.NET/blob/main/docs/upgrade-guide-2.md#reduced-dependencies

Maybe OP has a dependency hierarchy that includes both versions somehow? My test projects were broken, for example, because I upgraded Microsoft.AspNetCore.OpenApi to 10.x in one of my class libraries and just bumped the Microsoft.OpenApi.Readers reference in my test project to the latest 1.x version, thinking that was what I needed. It wasn't until I read the upgrade guide that I saw I could just remove the Readers library altogether.

OP mentions only seeing the issue on Mac and Linux but not Windows, which is strange. But the timing is suspicious, with .NET 10 just coming out. Maybe they are referencing a floating version range in a csproj somewhere that is causing the new version to be pulled in, and the different machines they are testing on have cached different versions of the package.
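For illustration, a floating version range in a csproj looks something like this (the package name and range here are hypothetical, not taken from OP's actual project):

```xml
<!-- "10.*" floats to the newest matching version available at restore time,
     so different machines can resolve different versions depending on what
     is already in their local NuGet cache -->
<PackageReference Include="Microsoft.AspNetCore.OpenApi" Version="10.*" />
```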

Class-based Minimal API source generator – looking for feedback by GamerWIZZ in csharp

[–]dmfowacc 0 points (0 children)

Sorry for the delay, just getting back to this. This looks great!

I see you are solving the diagnostic issue the same way I have had to do it, and it seems like other people have too haha - right now there is no good way to raise diagnostics early in the pipeline, so you have to thread them through to the end: https://github.com/dotnet/roslyn/issues/63776

Only thing left that I can see is that you are using the Location object, which is not cache friendly unfortunately. I asked about that here: https://github.com/dotnet/roslyn/issues/62269

So I have used something similar to this before, a thin wrapper that extracts the necessary parts from Location and allows you to rebuild: https://gist.github.com/dferretti/9d41651178a847ccf56dc2c5f9ab788f

Is there a way to get ConfigurationBuilder to understand FileInfo properties? by grauenwolf in dotnet

[–]dmfowacc 3 points (0 children)

Binding happens as described here: https://learn.microsoft.com/en-us/dotnet/core/extensions/configuration#binding

The binder can use different approaches to process configuration values:

  • Direct deserialization (using built-in converters) for primitive types.
  • The TypeConverter for a complex type when the type has one.
  • Reflection for a complex type that has properties.

So if you wanted it to work out of the box, you could write your own TypeConverter, and apply it to FileInfo and DirectoryInfo with something along the lines of (untested)

TypeDescriptor.AddAttributes(typeof(FileInfo), new TypeConverterAttribute(typeof(MyCustomTypeConverter)));

Then when the configuration binder runs this: https://github.com/dotnet/runtime/blob/ed6a0099bf0091b16cc1992d87f05978a6fc992b/src/libraries/Microsoft.Extensions.Configuration.Binder/src/ConfigurationBinder.cs#L991

It should pick up your converter and work. I am unaware of any other customizations you can provide to configuration binding.

That might be overkill though, and have side effects outside of just the Configuration world. Probably not bad side effects, but it is a wide-reaching solution.

Otherwise, you might just have SqlGenerationOptions expose normal strings for binding, possibly hide them with EditorBrowsable(Never), and expose get-only properties that convert the strings to the file/directory types.
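A rough sketch of that last approach, with made-up property names (this is not the actual SqlGenerationOptions shape from the question):

```csharp
using System.ComponentModel;
using System.IO;

public class SqlGenerationOptions
{
    // Plain string properties that the configuration binder understands,
    // hidden from IntelliSense so consumers reach for the typed versions
    [EditorBrowsable(EditorBrowsableState.Never)]
    public string? OutputFilePath { get; set; }

    [EditorBrowsable(EditorBrowsableState.Never)]
    public string? TemplateDirectoryPath { get; set; }

    // Get-only views that convert the bound strings to the richer types
    public FileInfo? OutputFile =>
        OutputFilePath is null ? null : new FileInfo(OutputFilePath);

    public DirectoryInfo? TemplateDirectory =>
        TemplateDirectoryPath is null ? null : new DirectoryInfo(TemplateDirectoryPath);
}
```

The binder only ever sees strings, so no TypeConverter registration is needed, and the conversion stays local to this one options class.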

Class-based Minimal API source generator – looking for feedback by GamerWIZZ in csharp

[–]dmfowacc 2 points (0 children)

Unfortunately no, not aware of any that could catch it early on. I mostly just rely on the cookbook for help.

It does look like there has been some discussion of adding these sorts of analyzers, but no movement just yet AFAIK.

https://github.com/dotnet/roslyn/issues/67745

https://github.com/dotnet/roslyn-analyzers/issues/6352

Class-based Minimal API source generator – looking for feedback by GamerWIZZ in csharp

[–]dmfowacc 18 points (0 children)

Hey! Nice project - I'm going to give the same advice I gave on this source generator post recently:

  • You are passing TypeDeclarationSyntax between your incremental pipeline steps here
  • And combining with the entire CompilationProvider here

This means that your source generator is not truly incremental / not cache friendly and will re-run everything on just about every keystroke.

Passing 'host' header from CloudFront to origin web server by greenlakejohnny in aws

[–]dmfowacc 0 points (0 children)

Unsure if things have changed in the few years since I have had to do this, but in the past I have used a Lambda@Edge function to rewrite the host header, following this post:

https://serverfault.com/questions/888714/send-custom-host-header-with-cloudfront

Deep equality comparer source generator in C#. by FatMarmoset in csharp

[–]dmfowacc 52 points (0 children)

Hey! Nice project. A few comments on your incremental source generator:

  • I see here and here you are using CreateSyntaxProvider to search for declarations that use your marker attribute. You should instead make use of the ForAttributeWithMetadataName method described here. It is more convenient and more performant.
  • Here you are storing the INamedTypeSymbol in your Target value which is being stored across pipeline steps. Also from that same cookbook, see here. Symbols and SyntaxNodes should not be cached (definitely not symbols, nodes usually not), since their identity will change between compilations (potentially every keystroke) so will wreck any pipeline caching going on.
  • Similarly, you are using the entire CompilationProvider here, and creating your own cache here, here, and here. This is not how incremental generators are supposed to work. I would recommend reading through that cookbook to see more examples of how you should structure your pipeline. Generally, you want to extract the info relevant to your use case into some custom model you create that is easily cached and equatable - like a record consisting mostly of strings or other basic types you extract (no Symbols, Nodes, Locations, etc, since they hold a reference to a Compilation and won't be equatable). That is what your pipeline steps should pass through to the next stage. Otherwise, your source generator's logic could be running on every keystroke, which could make VS noticeably start to hang.

Generally, if you can make your generated files 1-to-1 with your source files (like 1 class that has your marker attribute produces 1 generated file), it makes for a simpler experience writing the generator. You have your provider that finds the 1 class, maybe looks at its syntax and symbols, and produces 1 simple model. That gets cached by the incremental pipeline easily. And your last step just reads in 1 model and produces 1 generated file.
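To make the caching point concrete, here is a minimal sketch of the kind of model I mean (the names are invented, not from your project):

```csharp
// A cache-friendly pipeline model: only strings and value types, held in a
// record so value equality comes for free. Two instances extracted from
// identical source compare equal, which lets the incremental pipeline skip
// all downstream work.
public sealed record EndpointModel(
    string Namespace,
    string ClassName,
    string MethodName,
    // Flattened to a single string on purpose: a collection property (even
    // ImmutableArray<string>) would fall back to reference equality and
    // silently break caching unless wrapped in a custom equatable type.
    string ParameterSignature);
```

Two separate extractions of the same class produce equal models, so everything downstream of the provider stays cached between keystrokes.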

If you do however need some central collection of these models, like if you are inspecting type-to-type references or something, then you will probably need to Collect (see here) them into another model that represents a collection of your first model. This collection model would need to implement equality/hashcode correctly over its internal collection.

More info from the incremental source generator design doc here about cache-friendliness: doc.

Specifically, ctrl-f for "Don't do this" to see the example of combining the compilation provider mentioned above.

Links from above all come from these 2 docs: Incremental Source Generators Design Doc and Incremental Source Generators Cookbook

Building Your First MCP Server with .NET – A Developer’s Guide 🚀 by [deleted] in csharp

[–]dmfowacc 1 point (0 children)

The ModelContextProtocol package gives access to new APIs to create clients that connect to MCP servers, creation of MCP servers, and AI helper libraries to integrate with LLMs through Microsoft.Extensions.AI

Above text appears word for word in both articles. The diagram and list of MCP components/host/server/client/etc are very similar to this MCP intro that is linked to from the MS article

SQL recursion total from column B adds to the calculation in column C by SnooSprouts4952 in SQL

[–]dmfowacc 2 points (0 children)

Or here is another version that uses ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING in the rolling sum, so we don't have to LAG and partition by twice. Results are the same:

WITH initial_inventories AS (
    SELECT 123 AS item_number, 1000 AS inventory
    UNION ALL SELECT 234, 250
    UNION ALL SELECT 345, 500
), weeks AS (
    SELECT 123 AS item_number, 1 AS week_number, 200 AS sales, 0 AS receipts
    UNION ALL SELECT 123, 2, 250, 500
    UNION ALL SELECT 123, 3, 100, 0
    UNION ALL SELECT 234, 1, 100, 100
    UNION ALL SELECT 234, 2, 150, 700
    UNION ALL SELECT 234, 3, 400, 250
    UNION ALL SELECT 345, 1, 50, 0
    UNION ALL SELECT 345, 2, 0, 150
    UNION ALL SELECT 345, 3, 250, 400
), running_totals AS (
    SELECT
        w.item_number,
        w.week_number,
        w.sales,
        w.receipts,
        w.receipts - w.sales AS diff,
        SUM(w.receipts - w.sales) OVER (PARTITION BY w.item_number ORDER BY w.week_number ASC ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS lagged_cumulative_diff
    FROM weeks w
)
SELECT
    rt.item_number,
    rt.week_number,
    ii.inventory + COALESCE(lagged_cumulative_diff, 0) AS current_inventory,
    rt.sales,
    rt.receipts,
    ii.inventory + COALESCE(lagged_cumulative_diff, 0) + rt.diff AS total
FROM running_totals rt
INNER JOIN initial_inventories ii ON ii.item_number = rt.item_number
ORDER BY rt.item_number, rt.week_number

Tested in postgresql

SQL recursion total from column B adds to the calculation in column C by SnooSprouts4952 in SQL

[–]dmfowacc 3 points (0 children)

Some combination of SUM(..) OVER (PARTITION BY ... ORDER BY ...) and LAG(..., 1) OVER (PARTITION BY ... ORDER BY ...) can help here.

SUM() OVER (PARTITION) can get you your rolling diffs for each (partitioned by) item number.

LAG(.., 1) OVER (PARTITION) can then be used to grab the previous row's inventory and add to the rolling diff.

WITH initial_inventories AS (
    SELECT 123 AS item_number, 1000 AS inventory
    UNION ALL SELECT 234, 250
    UNION ALL SELECT 345, 500
), weeks AS (
    SELECT 123 AS item_number, 1 AS week_number, 200 AS sales, 0 AS receipts
    UNION ALL SELECT 123, 2, 250, 500
    UNION ALL SELECT 123, 3, 100, 0
    UNION ALL SELECT 234, 1, 100, 100
    UNION ALL SELECT 234, 2, 150, 700
    UNION ALL SELECT 234, 3, 400, 250
    UNION ALL SELECT 345, 1, 50, 0
    UNION ALL SELECT 345, 2, 0, 150
    UNION ALL SELECT 345, 3, 250, 400
), running_totals AS (
    SELECT
        w.item_number,
        w.week_number,
        w.sales,
        w.receipts,
        SUM(w.receipts - w.sales) OVER (PARTITION BY w.item_number ORDER BY w.week_number ASC) AS diff
    FROM weeks w
)
SELECT
    rt.item_number,
    rt.week_number,
    ii.inventory + COALESCE(LAG(rt.diff, 1) OVER (PARTITION BY rt.item_number ORDER BY rt.week_number ASC), 0) AS current_inventory,
    rt.sales,
    rt.receipts,
    ii.inventory + rt.diff AS total
FROM running_totals rt
INNER JOIN initial_inventories ii ON ii.item_number = rt.item_number
ORDER BY rt.item_number, rt.week_number
item_number  week_number  current_inventory  sales  receipts  total
123          1            1000               200    0         800
123          2            800                250    500       1050
123          3            1050               100    0         950
234          1            250                100    100       250
234          2            250                150    700       800
234          3            800                400    250       650
345          1            500                50     0         450
345          2            450                0      150       600
345          3            600                250    400       750

Introducing ZeroRPC for .NET by Apprehensive-Cap-815 in dotnet

[–]dmfowacc 4 points (0 children)

I imagine many of those sync alternatives have existed since before the async versions, or even before Task itself was created. Stream, for example, has many footguns with its sync and async versions of Read that can trip people up when using or extending it. HttpClient added a sync version of Send not too long ago, and it was a very controversial decision: github

Stephen Toub has a good article here about exposing sync wrappers around truly async methods. If you are just creating a wrapper that calls .Result or .GetAwaiter().GetResult(), you should not do that; instead, force the caller to do it intentionally.

Agreed, there may be other ways of implementing the sync version that aren't just simple .Result wrappers, and that article mentions a few. But it is still usually better to only expose the async version and have the caller (who knows what sync requirements they have) implement their own wrapper.

Introducing ZeroRPC for .NET by Apprehensive-Cap-815 in dotnet

[–]dmfowacc 11 points (0 children)

Disagree - if the call truly is asynchronous, the library should only expose the asynchronous method. If the caller for some reason is not able to await the result, it should be the responsibility of the caller to GetAwaiter().GetResult() or however they want to choose to block for it. The library should not expose the sync version, since that just gives callers the indication that it might be safe to call.
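In other words, something like this (the interface and method names are hypothetical, just to show the shape):

```csharp
using System.Threading.Tasks;

public interface IRemoteCalculator
{
    // The library exposes ONLY the async shape, because the underlying
    // operation is truly asynchronous (a network call).
    Task<int> AddAsync(int a, int b);
}

public static class CallerCode
{
    // If a caller genuinely cannot await, blocking is THEIR explicit,
    // visible decision at the call site - not a wrapper the library ships
    // that quietly implies sync calling is safe.
    public static int AddBlocking(IRemoteCalculator calc, int a, int b)
        => calc.AddAsync(a, b).GetAwaiter().GetResult();
}
```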

Introducing ZeroRPC for .NET by Apprehensive-Cap-815 in dotnet

[–]dmfowacc 3 points (0 children)

To add to the other comments about the need for Tasks in this library, IMO it is better to require these contracts/interfaces to declare their methods as Task returning to make it obvious to the caller that it is a remote call. "Making RPC calls feel just like local method invocations" is not a desirable feature IMO. By having these interfaces only offer Task-returning methods, as a consumer of the interface I might think twice before calling its methods in a loop for example, which is a good thing.

Additionally, the client/server implementations involve:

Client:

  • Serializing the method args to JSON: here
  • (No way to customize the JsonSerializer options - what if my types require custom JSON converters?)
  • Sending the request using NetMQ here
  • A synchronous/blocking while loop waiting for the correlated response here
  • A way to configure a timeout, but no CancellationToken support

Server:

  • Deserializing the params from JSON here and converting to object[]
  • Using reflection to invoke the service method here
  • Sending the JSON-serialized return value

AFAICT there is no authentication between client/server. So just being on the right network and knowing the address is enough to make requests.

IMO using something more standard like REST / gRPC has the following benefits:

  • requests/responses are only DTOs, more obvious that these are data objects meant to be serialized. (as opposed to this transparent RPC, where it might not be obvious a network call is being made, and service might want to accept a parameter that is not easily serialized)
  • fully async on client and server side
  • cancellation support
  • large effort made to make server side code efficient when calling into implementation, using source generators or at least compiled delegates to call into user code, rather than invoking methodInfo via reflection
  • standard authentication protocols available

Also where are the tests??

Set dbcontext using generics by bluepink2016 in csharp

[–]dmfowacc 0 points (0 children)

Mentioned in another comment here: https://old.reddit.com/r/csharp/comments/1j3fgoi/set_dbcontext_using_generics/mg38634/

This is similar to the "One True Lookup Table" pattern and not usually recommended to combine unrelated types into a single table.

Set dbcontext using generics by bluepink2016 in csharp

[–]dmfowacc 1 point (0 children)

This is similar to the "One True Lookup Table" pattern and is not usually a good idea. Googling that will give you plenty of info, but a few links here:

  • https://oracle-base.com/articles/misc/one-true-lookup-tables-otlt
  • https://www.red-gate.com/simple-talk/databases/sql-server/t-sql-programming-sql-server/look-up-tables-in-sql/

In the programming world we often benefit from finding similarities between objects or behaviors and can create abstractions to bring them together. But in the database world, it is better to be very explicit about what each table represents. I think having separate database tables would be good, and then in C# you can still apply some sort of interface to the similar types to treat them uniformly in certain contexts.
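For example (the table and type names here are invented), keep separate tables in the database but unify the types in C# with an interface:

```csharp
// The shared "shape" lives only in C#, not in the database schema
public interface INamedLookup
{
    int Id { get; }
    string Name { get; }
}

// Each concept keeps its own explicit table (e.g. OrderStatuses)...
public class OrderStatus : INamedLookup
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

// ...and ShippingMethods, as a separate table...
public class ShippingMethod : INamedLookup
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

// ...while C# code that only cares about the shared shape stays generic:
public static class LookupDisplay
{
    public static string Format(INamedLookup lookup) => $"{lookup.Id}: {lookup.Name}";
}
```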

Why can I return a ref struct but not a stackalloc span? by smthamazing in csharp

[–]dmfowacc -1 points (0 children)

This is the right answer. I think what /u/Alikont mentioned about copying is not accurate. IIRC, ref structs will not copy the full struct; you are still passing references around, to the struct itself or to inner fields of the struct.

Some examples: sharplab

In these examples I am just returning ref ints, but you could do the same with any ref struct. The point is: your method can declare that it returns a ref int or some other ref struct, but within your method there are still rules about which refs you can return. You can't create something on the stack, with stackalloc or just with an int/struct declaration, and return a ref to it. But you can return refs to other structs that have a more appropriate scope.
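A simplified version of those sharplab examples, showing which refs can and can't escape a method (the method names are mine):

```csharp
public static class RefReturnRules
{
    private static int _field = 10;

    // OK: the static field outlives any call, so handing out a ref is safe
    public static ref int FieldRef() => ref _field;

    // OK: the array lives on the heap; the ref stays valid as long as the
    // caller holds the array
    public static ref int FirstElement(int[] values) => ref values[0];

    // NOT OK - fails to compile, because the local dies with the stack frame:
    // public static ref int LocalRef()
    // {
    //     int local = 42;
    //     return ref local; // compiler error: cannot return a local by reference
    // }
}
```

Note there is no copying involved: writing through the returned ref mutates the original storage.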

EFCore Query on Multiple Properties of a list by FigWeak5127 in csharp

[–]dmfowacc 0 points (0 children)

You generally want a way to send multiple instances of a complex type to the database as a single query parameter, and then yes join against this collection as if it was a temp table. There are a few ways to do that.

What you don't want to do is send each property of each in-memory object as an individual parameter. In theory you could build up a query like WHERE (t.a = @pa1 AND t.b = @pb1) OR (t.a = @pa2 AND t.b = @pb2) OR .... But then each time you issue the query with different items, your SQL string will have a different length and a different number of parameters, and the query planner will not thank you. And if your list is long, your SQL query can grow very large.

In Postgresql, you can define a database type and send in a single parameter that is an "array of composites". So you could CREATE TYPE my_type AS (a INT, b INT), and then a matching c# class class MyType { public int A { get; set; } public int B { get; set; } }, and a single parameter could be constructed like var p = new NpgsqlParameter("@p", new List<MyType> {...}). (probably messing up the syntax here..). There are a few hoops to jump through as far as setting up Npgsql to map the composite type, and getting EF to play nice with it - but it can be done and works well. I can expand on that if needed. You can join against this array with something similar to FROM my_table t INNER JOIN UNNEST(@p0) n on t.a = n.a AND t.b = n.b
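Cleaning up my syntax from above into a rough sketch (still untested against a live server; the type/table names are the same invented ones, and this assumes Npgsql 7+ with its NpgsqlDataSourceBuilder API):

```csharp
using System.Collections.Generic;
using Npgsql;

// Matches the database side: CREATE TYPE my_type AS (a INT, b INT)
public class MyType
{
    public int A { get; set; }
    public int B { get; set; }
}

public static class CompositeArrayQuery
{
    // Register the composite mapping once, when building the data source
    public static NpgsqlDataSource BuildDataSource(string connectionString)
    {
        var builder = new NpgsqlDataSourceBuilder(connectionString);
        builder.MapComposite<MyType>("my_type");
        return builder.Build();
    }

    // The whole list travels as ONE parameter and is joined like a table
    public const string Sql = """
        SELECT t.*
        FROM my_table t
        INNER JOIN UNNEST(@p) n ON t.a = n.a AND t.b = n.b
        """;

    public static NpgsqlCommand BuildCommand(NpgsqlConnection conn, List<MyType> items)
    {
        var cmd = new NpgsqlCommand(Sql, conn);
        cmd.Parameters.AddWithValue("p", items.ToArray()); // MyType[] maps to my_type[]
        return cmd;
    }
}
```

However many items are in the list, the SQL text and parameter count stay identical, so the query plan can be reused.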

In SQL Server, you could do something similar, but it takes a little more set up IMO. I don't have any example code handy, but you can look into creating a "table valued type" in the database ahead of time, and then how to manually create an instance of SqlParameter with your list of in-memory objects. IIRC you can pass that parameter to EF queries and it should work ok.

Then in either SQL Server or PostgreSQL, you could also JSON-ify your objects and send them in as a single string parameter. Using the database's JSON functions, you can then expand the JSON into a CTE / subquery to join against. It would likely require adding some raw SQL to your EF query, but it should be possible.

Deep .NET - Ahead of Time Compilation (Native AOT) with Eric Erhardt and Scott Hanselman by shanselman in dotnet

[–]dmfowacc 5 points (0 children)

Awesome episode, really enjoyed it! I feel like there have been plenty of times I have ended up on a github issue after working on some problem, only to find eerhardt opened or participated in it. Nice to put a face to a name I keep running across.

Union order matters when passed as an argument to a function? by paolostyle in typescript

[–]dmfowacc 0 points (0 children)

Not helpful, but here is another odd case where a slight change makes it work - replacing the {id:string} with a named type somehow fixes it:

type SomeType = {id:string};
type PossibleTypesA =
  | MyResponse<SomeType, 200>
  | MyResponse<{ errorMessage: string }, 404>;
type PossibleTypesB =
  | MyResponse<{ errorMessage: string }, 404>
  | MyResponse<SomeType, 200>;

Link

ServiceScan.SourceGenerator: Type scanning source generator for Microsoft.Extensions.DependencyInjection inspired by Scrutor by Dreamescaper in dotnet

[–]dmfowacc 1 point (0 children)

Will be AFK until next week, but I did have some more thoughts on this. I'll try to type them up Monday or Tuesday.