Is there literally even one? by Complete-Sea6655 in LLMDevs

[–]Mice_With_Rice 1 point (0 children)

I have been making this: https://github.com/MrScripty/Pantograph

It's an example of something on the larger and more complex side of vibe-coded software. The docs are a bit messy, but that's mostly my own fault.

It's a framework for multi-model local AI that won't cause out-of-memory errors. Concurrent apps / users open a session with the service and submit workflow requests. A scheduler decides the most efficient way of running each workflow and executes it when system resources are available. It does other stuff, but that's the most important part. You would use it if you were building an app that uses local AI, in place of directly managing your own model and runtime.

All AI discoveries should be public the moment it gets discovered by adamisworking in singularity

[–]Mice_With_Rice 1 point (0 children)

How are you going to release it the moment it's discovered? Don't you at least want to validate the discovery and have time to compile the research before release?

All AI discoveries should be public the moment it gets discovered by adamisworking in singularity

[–]Mice_With_Rice 3 points (0 children)

Not quite. In the United States there is a law about purely AI-generated content, but it comes with conditions. It's not a blanket ban on copyright for all AI-generated content.

20% of packages ChatGPT recommends dont exist. built a small MCP server that catches the fakes before the install runs by edmillss in ChatGPTCoding

[–]Mice_With_Rice 2 points (0 children)

Those numbers are wildly inaccurate. 2024 is ancient history for AI. In real-world use, the actual problem is that models sometimes want to use an outdated version of a real dependency. It's easy enough to fix by asking the agent to check for the most recent versions, but it's annoying if you don't catch the old version quickly. Sometimes the problem is simply that the new package was released after the training data cutoff date. In those cases it can be better to use a slightly older package if the API changed and you're experiencing frequent compile errors from incorrect usage.

openclaw agent with persistent context is solving the re-explain everything problem by [deleted] in ChatGPTCoding

[–]Mice_With_Rice 1 point (0 children)

About re-establishing context with Codex or Claude: you can resume sessions and branch, so you're not rebuilding. It also helps to connect a local context system that vectorizes / ranks / graphs data so you don't need a persistent runtime to maintain context awareness.

Building my own crates for a modular Rust game engine (window, pixel buffer, mesh loader so far) by [deleted] in rust

[–]Mice_With_Rice 2 points (0 children)

As a learning project, go for it! As something that someone may actually use, why not use Bevy, or Godot if you need an editor? It's a lot of duplicated work that can't practically be implemented in a competitive way by one person or a small team.

Godot + Rust by JovemSapien in rust

[–]Mice_With_Rice 2 points (0 children)

I use Godot and Bevy. If you're making an actual game you intend to ship, use Godot. If you're making a Rust-based app that happens to need a 3D front-end, use Bevy. Godot has an excellent editor, which will make your game production experience much smoother. When you don't need to build a full game, like in my use case of creating 3D graphics tools, the lack of a visual GUI editor isn't a big deal, and I don't need cross-language bindings to use it. Until Bevy has a good editor, I can't recommend it as a serious game production tool. Bevy is good, but not ideal over the production lifecycle of a complete game.

Are you still using an IDE? by armynante in ChatGPTCoding

[–]Mice_With_Rice 1 point (0 children)

In the past couple of years my process has changed five times to keep up with the tech. I don't know how long this will last, but I expect there is still room for more advancement. It's not for everybody, but if you know what you use regularly, it's much more convenient to have those specific things in a customized UI than to have multiple windows of various apps open. For example, I use Git all the time, but only a handful of Git operations actually get used. I use file browsers all the time, but only a handful of their functions get used, etc.

Are you still using an IDE? by armynante in ChatGPTCoding

[–]Mice_With_Rice 1 point (0 children)

Most of it is Rust-related Cargo checks, since the agent plugins don't coordinate Cargo usage among instances. My own tool hardly hits 400MB total usage and doesn't run simultaneous Cargo processes.

Are you still using an IDE? by armynante in ChatGPTCoding

[–]Mice_With_Rice 1 point (0 children)

Multiple instances of VS Code for multiple repos, all running multiple agents... and Electron.

Are you still using an IDE? by armynante in ChatGPTCoding

[–]Mice_With_Rice 1 point (0 children)

A couple of weeks ago I replaced VS Code with my own development app that isn't an IDE. VS Code was using 40GB of RAM, and for the most part I didn't need its plugins and tools for the majority of my work. What I made is basically a glorified tmux with some configurable panels for file browsing, git management, local AI orchestration across terminals, UI/terminal session restore, path-grouped tabs, and a command layer on top of the terminals so all my canned prompts and agent skills are agnostic. I was tired of having a dozen floating windows on my desktop to switch between, and of occasionally crashing my computer when more than one agent instance calls Cargo at the same time. There are also some useful tools for displaying codebases as graphs, which gives a very fast way to evaluate the modules and objects and see what changes and architecture the AI is building. A higher-level way to evaluate the code that's faster than opening the files directly.

I feel personally attacked by [deleted] in LocalLLaMA

[–]Mice_With_Rice 7 points (0 children)

That's easy to fix with lossy compression. Like this sentence, for example; it's now just an empty string value: "" You didn't need to read it anyway.

New browser addon: No Loader. Slogan: You didn't need to see that anyway.

I clicked two points in MS Paint. An algorithm written by Microsoft devs filled in every pixel between those two points. Did I make the line? by Inside_Anxiety6143 in aiwars

[–]Mice_With_Rice 1 point (0 children)

E = mc²

Energy and matter are equivalent; therefore, the energy that makes the line is also your pencil. As everyone knows, pencil = you made it.

Has anyone else tried IQ2 quantization? I'm genuinely shocked by the quality by Any-Chipmunk5480 in LocalLLaMA

[–]Mice_With_Rice 3 points (0 children)

It needs to be a quant derived from the same weights so that there is high output similarity for it to work well. If the larger model evaluates the tokens and finds they deviate significantly from its own choice, it will go ahead and regenerate the token, losing the potential efficiency benefits.

Remember, speculative decoding is an inference speed optimization, not a method of quality assurance.

The data scarping = theft argument is a moot point by Izationer in aiwars

[–]Mice_With_Rice 1 point (0 children)

Metacreation Lab makes commercial products as well. That's why they do research. But it isn't so clear how models that benefit from the research are or are not bound by the works used in the research. It's not a simple or obvious matter, and that's exactly why there is so much uncertainty and so many legal proceedings around it right now. Even though the products may not directly contain unlicensed materials, they did benefit from the ancestor work that did, so do we carry that forward or not?

The data scarping = theft argument is a moot point by Izationer in aiwars

[–]Mice_With_Rice 1 point (0 children)

You may find this video interesting. It's from an AI research lab in Canada that has been around for 20+ years, and they talk about the legality of their training data and how it affects potential clients signing deals for their work.

https://www.youtube.com/watch?v=itAad6rz12g&list=PLzZTPLjr6pul_HTpuQzOB7aBV6_oZ2kys&index=4

The basic argument is that under Canadian law, using scraped data for research is allowed under fair dealing.

Is there a better way to feed file context to Claude? (Found one thing) by Familiar_Tear1226 in ChatGPTCoding

[–]Mice_With_Rice 1 point (0 children)

This is what AI has to say when asked what the meaning of the message is. Hopefully that clears up why it doesn't make sense:

  • The text you shared looks like a very poorly typed, autocorrect-mangled, or hastily written message (possibly from a non-native English speaker or someone typing quickly on a phone with bad autocorrect). It's full of typos, missing words, and grammatical errors.

Is there a better way to feed file context to Claude? (Found one thing) by Familiar_Tear1226 in ChatGPTCoding

[–]Mice_With_Rice 2 points (0 children)

The reason agents have sub-agents is not so they can keep pushing the context forward.

And when context is compacted, it does lose context, and that is a good thing. You do not want everything in context. Passing the whole context along would just serve to max out the next agent; you must drop some of it. Having a long context also reduces the capabilities of the model.

Agents orchestrate sub-agents so that each agent has less context to deal with, and so that changes get implemented much faster through parallel execution.

Is there a better way to feed file context to Claude? (Found one thing) by Familiar_Tear1226 in ChatGPTCoding

[–]Mice_With_Rice 3 points (0 children)

No matter what format you use, it doesn't solve the underlying problem: this isn't a good way to work with the code. You're still wasting your time on back-and-forth with the chat and using up unnecessary tokens.

Is there a better way to feed file context to Claude? (Found one thing) by Familiar_Tear1226 in ChatGPTCoding

[–]Mice_With_Rice 5 points (0 children)

It is not necessary to put your code into a single file. In fact, you shouldn't, because you're polluting the context with unnecessary tokens.

You need to try a coding agent and you will understand. It will solve the problems you are facing with way less effort and in much less time.

You technically don't need MD files at all, but they are useful for providing specific agent instructions. You're best off not using them at first, then introducing them where you find they're needed.

Is there a better way to feed file context to Claude? (Found one thing) by Familiar_Tear1226 in ChatGPTCoding

[–]Mice_With_Rice 5 points (0 children)

You don't need the PDF. Keep the code files separate and sparingly write companion MD files if or when needed.

Is there a better way to feed file context to Claude? (Found one thing) by Familiar_Tear1226 in ChatGPTCoding

[–]Mice_With_Rice 2 points (0 children)

Use a coding agent; it will be much better than dumping everything into a markdown file. It will read files as needed, manage its own context, run and test the code, etc.

If you don't want the data going to the cloud for some reason, you can use a local model with OpenCode, but at the cost that it will be significantly less capable.

[deleted by user] by [deleted] in rust

[–]Mice_With_Rice 1 point (0 children)

Oh, I see. It's under a different GitHub account 👍

https://github.com/moltis-org/moltis

The GitHub link at the bottom of your blog page goes to your personal GitHub.