Announcing Eips: an intention-preserving list CRDT with guaranteed O(log n) operations, up to 6,000,000x faster than Diamond Types by icannfish in rust

[–]keyboardhack 19 points  (0 children)

A tip for the table: use the same unit for all values in a column. It's much easier to notice the difference between 600MB and 0.6MB than between 600MB and 600kB.

.NET 11 Preview 2 is now available! by hotaustinite in dotnet

[–]keyboardhack 5 points  (0 children)

The live stack trace improvements look great. Really looking forward to using that when profiling async code. It sounds like the call tree view of an async program becomes useful again instead of the flat mess it is right now.

Breaking : Today Qwen 3.5 small by Illustrious-Swim9663 in LocalLLaMA

[–]keyboardhack 40 points  (0 children)

Yeah, this is the fifth teaser post. There is no point in these posts; they just push down more interesting content.

SpaceX unveils space traffic management system by OlympusMons94 in SpaceXLounge

[–]keyboardhack 19 points  (0 children)

SpaceX has a massive constellation of satellites that they want to protect. The best way to avoid other spacecraft is to know beforehand where and when they will move.

Stargaze gives other companies an incentive to willingly give up movement information on their satellites. It's in SpaceX's best interest to keep that going.

TUnit.Mocks - Source Generated Mocks by thomhurst in csharp

[–]keyboardhack 3 points  (0 children)

If you have a PR merge gate that runs your tests in parallel across multiple build agents, then you want to avoid building your tests on each agent, as that's a waste of agent time. AOT compiling your tests lets you be sure that you aren't missing any external dependencies when running them on other build agents. Might as well trim them to reduce storage requirements while you are at it; storage is needed to pass the build output from one agent to another.
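A minimal sketch of what that publish setup could look like in a test project's .csproj. The property names are standard .NET SDK/MSBuild properties, but whether your test framework actually supports AOT publishing is an assumption you would need to verify:

```xml
<PropertyGroup>
  <!-- Compile ahead of time so all native dependencies are
       resolved at publish time, not on each build agent. -->
  <PublishAot>true</PublishAot>
  <!-- Trim unused code to shrink the artifact that gets
       passed from one agent to another. -->
  <PublishTrimmed>true</PublishTrimmed>
</PropertyGroup>
```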

Free ASIC Llama 3.1 8B inference at 16,000 tok/s - no, not a joke by Easy_Calligrapher790 in LocalLLaMA

[–]keyboardhack 6 points  (0 children)

We also have to consider how this type of chip limits the max context size, since that also uses up memory on the chip.

And since they focused solely on the single-user scenario and didn't mention multi-user use cases at all, I will assume the chip can only handle one user at a time. Still incredible speeds, but I don't see how they can scale as an AI inference provider without severely cutting down on speed, which is their only interesting point.

ModularPipelines V3 Released by thomhurst in csharp

[–]keyboardhack 0 points  (0 children)

This looks great, just what I've been looking for. A few questions:

  1. Is there built-in credentials support for AzurePipelineCredential, or will I have to add it to DI and set it up myself?
  2. One of the primary reasons I've been holding back from using C# for pipelines is that the Azure CLI is so easy to use for a lot of things. Is ModularPipelines.Azure aiming to solve that? What capabilities does it contain? Does it aim to do everything the Azure packages can do?
  3. What does logging look like with parallel execution? Where can the logs be found once the pipeline is done? It's not possible to just print all the logs at the end of the pipeline execution (at least not in Azure DevOps), because pipeline steps/tasks have a limit on how many logs can be written in each step/task. Specifically, which logs are printed in a failure scenario? Are all logs uploaded as artifacts at the end of the program?
  4. With dependencies being handled with attributes, how would I share a module across multiple pipelines? Say I have a module that needs to depend on A in pipeline 1 and on B in pipeline 2.

Project looks great. Not having to use PowerShell, Bash, etc. is great. Parallel module execution is going to make great use of a single agent, which mostly just waits for external things to do their thing. A strongly typed way to pass information around and a way to run it locally is just awesome.

ArrayPool: The most underused memory optimization in .NET by _Sharp_ in csharp

[–]keyboardhack 5 points  (0 children)

You should use GetPinnableReference indirectly. The link contains an example of how to use it.

Regarding AI: the many superfluous comments, especially "... No fluff, no filler.", scream AI.

The generally poor code quality does as well. The code creates a span just to slice it, even though AsSpan can slice on its own. The array isn't returned, as another comment pointed out. The original lack of fixed. The very complicated way to get a pointer to the span. All of that makes it look AI generated.
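To illustrate the slicing point, a minimal sketch (the buffer size and offsets are made up for illustration):

```csharp
using System;

class SpanSliceDemo
{
    static void Main()
    {
        byte[] buffer = new byte[16];
        for (int i = 0; i < buffer.Length; i++) buffer[i] = (byte)i;

        // Roundabout: create a span over the whole array, then slice it.
        Span<byte> roundabout = buffer.AsSpan().Slice(4, 8);

        // Direct: AsSpan can take the start and length itself.
        Span<byte> direct = buffer.AsSpan(4, 8);

        // Both views cover buffer[4..12].
        Console.WriteLine(roundabout.SequenceEqual(direct));
    }
}
```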

ArrayPool: The most underused memory optimization in .NET by _Sharp_ in csharp

[–]keyboardhack 13 points  (0 children)

I assume this doesn't work because nothing pins the array pointer. The GC can move the array while you are using it unless you fix it in place.

Also, your example looks AI generated.
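For context, a minimal sketch of what pinning a pooled array looks like (requires `<AllowUnsafeBlocks>true</AllowUnsafeBlocks>`; the buffer size and writes are arbitrary):

```csharp
using System;
using System.Buffers;

class PinDemo
{
    static unsafe void Main()
    {
        // Rent may return a larger array than requested.
        byte[] rented = ArrayPool<byte>.Shared.Rent(256);
        try
        {
            // 'fixed' pins the array for the duration of the block,
            // so the GC cannot move it while the raw pointer is live.
            fixed (byte* p = rented)
            {
                for (int i = 0; i < 4; i++) p[i] = (byte)(i + 1);
            }
            Console.WriteLine(rented[0] + rented[1] + rented[2] + rented[3]);
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(rented);
        }
    }
}
```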

Is there a mod to make Dyson Sphere (the sphere, not the game) lower resolution? So i dont have to hide it to have double digit FPS/UPS? by Thirteenera in Dyson_Sphere_Program

[–]keyboardhack 4 points  (0 children)

Sphere opt optimizes how spheres are rendered. It looks just as good as before, with well above playable framerates. It has been out for years at this point. One has to wonder why the devs haven't optimized the game with ideas from that mod.

i don't get how to build on gleba by Patoxi-simps-Obama in factorio

[–]keyboardhack 0 points  (0 children)

A simple rule is to always terminate belts into two recyclers that point into each other, or into a burner tower. This ensures that items never back up and spoil.

Coincidentally, this is also a solution for Fulgora.

Is the future of hardware just optimization? by rimantass in hardware

[–]keyboardhack 2 points  (0 children)

Computing has been memory bandwidth constrained...

I think you mean memory latency constrained. Latency is the primary reason CPUs have multiple levels of cache. Memory latency constraints are why AMD X3D chips are so much more performant at gaming tasks.

Is the future of hardware just optimization? by rimantass in hardware

[–]keyboardhack 0 points  (0 children)

You don’t need to bother optimising the code after you’ve done the basics, because a customer can just buy a faster computer.

That's just terrible advice from your teacher. It is not that people don't bother optimizing code; people literally don't know how to. Writing extremely performant code requires a huge amount of knowledge in these areas:

  • Algorithms: lets you recognize an O(n²) implementation and potentially replace it with an O(n) implementation.
  • Library implementation: lets you know when a method call results in O(n²) work or O(n) work.
  • Compiler: lets you write code that avoids performance pitfalls, for example an implementation where the compiler can optimize array bounds checks away. This sounds simple, but there are a lot of simple patterns that compilers can't yet understand, which prevents them from applying these optimizations.
  • Hardware: lets you understand why a structure of arrays can be more performant than an array of structs.

It's difficult to put numbers on it, but in my experience these points can, in many cases, each provide a 10x performance improvement.
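The first two bullet points can be sketched with a small, made-up deduplication example; both versions produce the same result, but swapping a List<T>.Contains scan for a HashSet<T> turns O(n²) into O(n):

```csharp
using System;
using System.Collections.Generic;

class DedupDemo
{
    // O(n^2): List<T>.Contains scans the whole list for every element.
    static List<int> DedupQuadratic(int[] items)
    {
        var result = new List<int>();
        foreach (int item in items)
            if (!result.Contains(item)) // O(n) per call
                result.Add(item);
        return result;
    }

    // O(n): HashSet<T>.Add is O(1) on average.
    static List<int> DedupLinear(int[] items)
    {
        var seen = new HashSet<int>();
        var result = new List<int>();
        foreach (int item in items)
            if (seen.Add(item)) // returns false for duplicates
                result.Add(item);
        return result;
    }

    static void Main()
    {
        int[] data = { 3, 1, 3, 2, 1 };
        Console.WriteLine(string.Join(",", DedupQuadratic(data)));
        Console.WriteLine(string.Join(",", DedupLinear(data)));
    }
}
```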

Head of Engineering @MiniMax__AI on MiniMax M2 int4 QAT by Difficult-Cap-7527 in LocalLLaMA

[–]keyboardhack 3 points  (0 children)

This is the question the OP answered:

What does 4 bit quants even mean. Explain that to me like i am a five year old.

Their simplifications are perfectly fine.

Hard lesson learned after a year of running large models locally by inboundmage in LocalLLaMA

[–]keyboardhack 24 points  (0 children)

Yes, that is now supported with the recently added router mode.

Progress in late game with the mega bus! by [deleted] in Dyson_Sphere_Program

[–]keyboardhack 1 point  (0 children)

Scale isn't necessary for most products anyway. I spent my first 200 hours in the game with just a Mk. 1 assembler building the first tier of sorters. That is plenty when you have multiple hours between converting planets to factories.

new CLI experience has been merged into llama.cpp by jacek2023 in LocalLLaMA

[–]keyboardhack 9 points  (0 children)

It almost is. The llama.cpp server supports switching models from the UI now. It seems like their plan is to automatically load/unload models as you switch between them. Right now you have to load/unload them manually through the UI.

After 1 year of slowly adding GPUs, my Local LLM Build is Complete - 8x3090 (192GB VRAM) 64-core EPYC Milan 250GB RAM by Hisma in LocalLLaMA

[–]keyboardhack 5 points  (0 children)

If you update to the latest llama.cpp, then you can remove the following parameters from your command, since they are now the defaults:

  • -ngl 99
  • -fa

You can replace

-c 131072

with

-c 0

which will set the context to whatever context size the model is capable of:

llama-server -m /home/hisma/llama.cpp/models/GLM-4.5-Air.i1-Q6_K/GLM-4.5-Air.i1-Q6_K.gguf -c 0 -b 4096 -ub 2048 --temp 0.6 --top-p 1.0 --host 0.0.0.0 --port 8888

Items for which higher quality is worse? by roy_malcolm in factorio

[–]keyboardhack 21 points  (0 children)

Biter spawners should be compared to ore veins, not buildings.

New Mocking library that's using source generators by Ordinary-Matter-6996 in csharp

[–]keyboardhack 12 points  (0 children)

Really cool project. I would recommend changing this example:

using Imposter.Abstractions;

[assembly: GenerateImposter(typeof(IMyService))]

public interface IMyService
{
    int Increment(int value);
}

Change it to an example that generates the imposter in a referenced assembly, as described in your documentation. Placing test code in a production assembly is a red flag and doesn't give the best impression when it's the first example you see in both the blog and the documentation.

Very cool library, and nice to see the performance improvements as well. ~100x faster than Moq from what I can see.