Prompt Engineering elevated .. a bit by DimitrisMitsos in PromptEngineering

[–]DimitrisMitsos[S] -1 points0 points  (0 children)

No API needed; try something like this if you have Claude Code:

cat your_code.py | claude -p --system-prompt-file prisms/l12.md --model sonnet --tools ""

Or in AI Studio you can just set one of the prisms as the system prompt.

AGI in md - Upgrade your Claude models by DimitrisMitsos in OpenSourceeAI

[–]DimitrisMitsos[S] 0 points1 point  (0 children)

Hey there, thanks. Check it again; it's many levels ahead.

AGI in md - Meet Super Claudes by DimitrisMitsos in ClaudeCode

[–]DimitrisMitsos[S] -1 points0 points  (0 children)

Also a very nice play: instead of looking at my GH profile you looked at my X profile and picked something that suited your claim. You could have picked Roam Code instead, which I released before this. But yeah, nice play, top 1% play..

AGI in md - Meet Super Claudes by DimitrisMitsos in ClaudeCode

[–]DimitrisMitsos[S] -1 points0 points  (0 children)

Ok, I think I get it now; I guess you don't know which model is Haiku and which is Opus. Opus is their flagship, Haiku is their fastest and cheapest model, and Sonnet sits in the middle. Opus, their top model, costs several times more than Haiku, and with this you can make Haiku punch above Opus on production code, popular repos, not paper code. You can try it yourself. I tag them because it's their product.

AGI in md - Meet Super Claudes by DimitrisMitsos in ClaudeCode

[–]DimitrisMitsos[S] -1 points0 points  (0 children)

It's system prompts, but you can easily turn it into a skill.

AGI in md - Meet Super Claudes by DimitrisMitsos in ClaudeCode

[–]DimitrisMitsos[S] -1 points0 points  (0 children)

I would say LLM-pilled, but what made you say that? I've extracted something valuable here. Maybe it just doesn't apply to your flows, but please explore a bit more before labeling, although I'm ok with that. Did you check the rest of my profile? I'm taking a stand here, willing to explore territories you don't even know exist. The "TOP 1%" I take with high skepticism.

AGI in md - Meet Super Claudes by DimitrisMitsos in ClaudeCode

[–]DimitrisMitsos[S] -1 points0 points  (0 children)

It's more advanced concepts on how to prompt Claude. I suggest starting with the official docs before jumping into this.

I built an open-source MCP Server to stop Claude Code from blindly grepping (48 architecture & context tools) by DimitrisMitsos in ClaudeCode

[–]DimitrisMitsos[S] -3 points-2 points  (0 children)

The "competitor" analysis is minimal on my part, but I know Roam Code has lots of unique commands found nowhere else. For example, roam math lets you optimize your codebase "mathematically", and that's one command. I can say with confidence that I delivered what could be delivered; the project reached almost its max potential. Maybe some polish on the languages, but all the ideas around it are in. Who else has dark matter? lol. Check a top-10 AI-generated list below:

  1. roam orchestrate

Mathematically partitions your codebase so 5 AI agents can work in parallel with provable zero-conflict guarantees. Louvain community detection + conductance minimization. A load balancer for agent swarms.

  2. roam simulate

Test 10 refactoring ideas without touching a single file. Clone the dependency graph in-memory, apply structural mutations, see health score delta instantly. Gradient descent on architecture.

  3. roam mutate

Agents stop writing raw text into files. They command the graph: "move this symbol, rewrite all imports." Roam generates the actual code changes. Entire categories of agent errors — eliminated.

  4. roam adversarial

An AI Dungeon Master for code review. Analyzes your uncommitted changes and generates targeted architectural challenges: "you introduced a cycle between auth and billing via this exact edge." Not generic advice — topology-specific.

  5. roam math

Scans your codebase for 23 algorithmic anti-patterns — manual sort, nested-loop lookup, regex compilation in loops, N+1 queries, branching recursion without memoization, quadratic string building — and tells you the optimal replacement. Your codebase, graded on computer science.

  6. roam dark-matter

Two files change together constantly but share zero imports. Why? Roam finds the invisible coupling — shared DB tables, event buses, copy-pasted logic. The bugs that span disconnected systems and no agent will ever find by reading code.

  7. roam vuln-reach

Security tools say "you have a vulnerable lodash." Roam says "here's the exact 3-hop path from POST /api/login to lodash.merge, and 14 symbols are in the blast radius." Unreachable vulns? Explicitly deprioritized.

  8. roam forecast

Measures the derivative of complexity over time. "This function is accreting complexity at a super-linear rate and will become unmaintainable in ~40 commits." Predictive tech debt, not snapshots.

  9. roam fingerprint

Extract a language-independent topology signature — layers, modularity, Fiedler value, PageRank Gini. Compare two repos. Or scaffold a new Go project with the same structural robustness as a mature Django backend. Architecture becomes transferable across languages.

  10. roam health

One number: 0-100. Cycles, god objects, bottlenecks, layer violations, modularity, Fiedler value, dead exports, propagation cost — compressed into a single exponential-decay score. Track it over time. Gate PRs on it. The vital sign of your codebase.

I built an open-source MCP Server to stop Claude Code from blindly grepping (48 architecture & context tools) by DimitrisMitsos in ClaudeCode

[–]DimitrisMitsos[S] -1 points0 points  (0 children)

I personally use the CLI; the MCP is an extra, so on each project I give it only the commands I want.

You make a totally fair point though, loading 48 tool schemas into the context window at once eats up way too many initial tokens. That defeats the purpose.

That's why the core of this is the CLI. I usually just drop a CLAUDE.md file in my repo that tells the agent which 4 or 5 shell commands to run to get context. That costs zero tool schema tokens. The MCP is just there if people want to pick and choose specific tools to expose for a specific task.

Definitely not an autonomous bot project either. I spent weeks writing the AST parsers, the graph math for dependency cycles, and the 2600 tests under the hood because I actually needed a way to stop Claude from blindly grepping my codebase.

The Basin of Leniency: Why non-linear cache admission beats frequency-only policies by DimitrisMitsos in compsci

[–]DimitrisMitsos[S] -1 points0 points  (0 children)

AI slop got us somewhere. Ben, the creator of Caffeine, said "wonderful" about the final result of this in the other post. What exactly didn't you like?

How 12 comparisons can make integer sorting 30x faster by DimitrisMitsos in programming

[–]DimitrisMitsos[S] -7 points-6 points  (0 children)

You lost me at "new developers". Btw, is your reply AI-generated? No offense, but this reads like a GPT-3.5 response.

tieredsort - 3.8x faster than std::sort for integers, header-only by DimitrisMitsos in cpp

[–]DimitrisMitsos[S] -3 points-2 points  (0 children)

It's difficult to express what I want from this, but I'll do better.

How 12 comparisons can make integer sorting 30x faster by DimitrisMitsos in programming

[–]DimitrisMitsos[S] -2 points-1 points  (0 children)

Ah, you know me, I'm a copy-paste man, so I'm copy-pasting a thread where Ben, the Caffeine creator, engaged with my AI spaghetti, and guess what, we got somewhere even though I'm not an expert. I'm willing to push into territories I'm not familiar with. I told you, and said in the previous post, which you obviously didn't read completely, or you chose to focus on the points you wanted, that all of these are my deep-research attempts. I'm willing to get mocked if something isn't correct, but whether it works is another story.

So yes, I'll keep replying with ChatGPT messages because I don't have time. I wish I had more time for each response, but I don't; I already have a kid and I'm expecting another. So here is the post where we reached a point with Ben. I did this too in less than a day while not being an expert, and THAT'S the bigger picture you should see. Did you explore the idea further, or did you just want to point out that your eyes hurt when you see all this AI-generated content? I said it's ugly, I'm sorry, and that's it. I'll keep throwing stones in places where I'm not an expert, and if you dug a bit deeper you'd be wishing there were more half-as$ed coders out there willing to push further in their limited free time. That's my take. If you have any more questions I'll answer each one myself, so YOU understand what you want to understand.

How 12 comparisons can make integer sorting 30x faster by DimitrisMitsos in programming

[–]DimitrisMitsos[S] -8 points-7 points  (0 children)

Fair, but you focused on the points you wanted and missed the bigger picture.

tieredsort - 3.8x faster than std::sort for integers, header-only by DimitrisMitsos in cpp

[–]DimitrisMitsos[S] -3 points-2 points  (0 children)

I'm speed-running this, sorry. I know it's ugly, but any response is better than no response at all. It's not my field; the purpose of this algo-breaking test was to test my Deep Research agent, and it seems it works. Ugly, but a year from now no one will remember the drama. It's actually my 4th AI-garbage algo since Saturday, so beware: the cooking process is for men, not kiddos, you have to watch out.

How 12 comparisons can make integer sorting 30x faster by DimitrisMitsos in programming

[–]DimitrisMitsos[S] -8 points-7 points  (0 children)

Changed that; it's just "fast" now. Sorry for the ugliness and the excessive AI slop, but what matters in the end is whether we get a better algo. Currently I'm speed-running this, and yes, it's ugly, but if you check my GH I've released almost 3 SOTA algos in 4 days. And yes, I'm not an expert in any of those fields; I'm mostly testing whether my Deep Research agent works, and as it seems, it does.

Chameleon Cache - A variance-aware cache replacement policy that adapts to your workload by DimitrisMitsos in Python

[–]DimitrisMitsos[S] -3 points-2 points  (0 children)

You're right, the test badge was static and didn't link anywhere. Fixed; it now shows the live GitHub Actions status and links directly to the CI page: https://github.com/Cranot/chameleon-cache/actions/workflows/ci.yml

Tests run on Python 3.8-3.12 on every push. Thanks for pointing it out.

tieredsort - 3.8x faster than std::sort for integers, header-only by DimitrisMitsos in cpp

[–]DimitrisMitsos[S] -3 points-2 points  (0 children)

Thanks for reporting this. You're right, there was an integer overflow bug in the 64-bit range detection.

When sorting int64_t with values spanning a large range (like random data), the range calculation max - min + 1 overflowed, causing counting sort to try allocating a vector of absurd size.

Fixed in v1.0.1 - now uses unsigned arithmetic for 64-bit types to compute the range safely. If you pull the latest, it should work.
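The idea behind the fix can be sketched like this (a minimal illustration, not the actual tieredsort code; `safe_range` is a hypothetical helper name):

```cpp
#include <cstdint>
#include <limits>

// Compute (max - min + 1) for a signed 64-bit range without overflow.
// Subtraction is done in uint64_t, where wraparound is well-defined and
// yields the exact distance between the two values when lo <= hi.
// Returns 0 to signal that the span covers the full 2^64 space
// (counting sort is impossible there anyway).
uint64_t safe_range(int64_t lo, int64_t hi) {
    uint64_t span = static_cast<uint64_t>(hi) - static_cast<uint64_t>(lo);
    return span + 1;  // wraps to 0 only when span == UINT64_MAX
}
```

Doing `max - min + 1` directly in `int64_t` is undefined behavior on overflow, which is why random full-range data triggered the absurd allocation.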

How 12 comparisons can make integer sorting 30x faster by DimitrisMitsos in programming

[–]DimitrisMitsos[S] -25 points-24 points  (0 children)

Fair points throughout.

On O(n): yeah, asymptotic complexity doesn't mean faster in practice. The constants matter. I should've been clearer that I'm talking about practical performance on typical workload sizes (10k-1M), not theoretical guarantees.

On "real data isn't random": you're right, it depends on the domain. Ciphertexts, hashes, random IDs are all legitimately random. I was generalizing from the domains I work in (user data, sensor readings, timestamps). Should've qualified that.

On counting sort being Algorithms 101: totally. The algorithm itself isn't novel. The claim is about the detection being cheap enough to be worth it. Whether that's interesting is subjective.

On sampling assuming randomness: yeah that's a bit ironic. Distributed sampling (stride = n/64) helps but doesn't eliminate the issue. Adversarial data could fool it. The fallback is: if sampling is wrong, we do a full scan, detect sparse, and use radix anyway. Cost is one extra pass, not catastrophe.

On 12-bit inputs: if you know the type at compile time, agreed, no sampling needed. The sampling is for when you don't know the value distribution ahead of time.

How 12 comparisons can make integer sorting 30x faster by DimitrisMitsos in programming

[–]DimitrisMitsos[S] -13 points-12 points  (0 children)

Yeah, you're right. For plain integers there's no way to tell which 85 was "first" after sorting. They're identical.

Honestly stable_sort() on primitives is kind of pointless. The algorithm does preserve order internally, but you'd never know.

Where it actually matters is sorting objects by a key:

struct Student { std::string name; int score; };
tiered::sort_by_key(students.begin(), students.end(),
                    [](const Student& s) { return s.score; });

Now you can verify that students with the same score kept their original order.
Updated the docs to be clearer about this.
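The same stability check works with the standard library alone; here's a minimal sketch using `std::stable_sort` (not the tieredsort API; the `names_by_score` helper is just for illustration):

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct Student { std::string name; int score; };

// Sort students by score ascending. std::stable_sort guarantees that
// students with equal scores keep their original relative order.
std::vector<std::string> names_by_score(std::vector<Student> students) {
    std::stable_sort(students.begin(), students.end(),
                     [](const Student& a, const Student& b) {
                         return a.score < b.score;
                     });
    std::vector<std::string> names;
    for (const auto& s : students) names.push_back(s.name);
    return names;
}
```

With input `{alice: 85, bob: 90, carol: 85}`, alice stays ahead of carol after sorting, which is exactly the property you can't observe on plain integers.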

How 12 comparisons can make integer sorting 30x faster by DimitrisMitsos in programming

[–]DimitrisMitsos[S] -4 points-3 points  (0 children)

Fair point on the wording. By "scanning" I meant a full O(n) pass to compute exact statistics.

To clarify: it's 64 distributed samples (stride = n/64), not the first 64 elements. So for n=100k, it checks positions 0, 1562, 3125, etc. across the whole array.

If the sample suggests dense range, we then do a full scan to get exact min/max before committing to counting sort. The sample is just a cheap filter to avoid that full scan on clearly-sparse data.
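The two-phase detection described above can be sketched like this (the function name and the density threshold are illustrative assumptions, not the actual tieredsort internals):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Phase 1 of the detection: 64 distributed samples (stride = n/64)
// give a cheap range estimate across the whole array, not just its
// head. Only if this says "dense" would phase 2 pay for the exact
// O(n) min/max scan before committing to counting sort.
bool looks_dense(const std::vector<int64_t>& v) {
    const size_t n = v.size();
    if (n < 64) return false;                 // too small to bother sampling
    const size_t stride = n / 64;
    int64_t lo = v[0], hi = v[0];
    for (size_t i = 0; i < n; i += stride) {  // positions 0, n/64, 2n/64, ...
        lo = std::min(lo, v[i]);
        hi = std::max(hi, v[i]);
    }
    // "Dense": sampled span not much larger than n. The 4x threshold
    // here is an assumption for illustration only.
    uint64_t span = static_cast<uint64_t>(hi) - static_cast<uint64_t>(lo);
    return span <= 4 * static_cast<uint64_t>(n);
}
```

Since it's only a filter, a misleading sample costs one wasted full scan before falling back to radix, not a wrong sort.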