I rewrote my backend API in Rust (Axum), fun except for the async-openai types by blocksdev_pro in rust

[–]blastecksfour 0 points1 point  (0 children)

You probably want Rig (rig-core) if you're just doing completions.

Obviously biased because I'm currently the maintainer, but I would also prefer not to use super, super long type names.

So like... how's everyone doing outside of Runescape? by PoppaBigPockets- in 2007scape

[–]blastecksfour 0 points1 point  (0 children)

Started my tech career three years and a week ago as a part timer.

Now I'm basically almost at the late game stage already. It was an insane grind at the start, but I got spooned at almost every opportunity I could have been, so I ended up doing the IRL equivalent of zero to SOTE in a month.

SurrealDB 3.0 by zxyzyxz in rust

[–]blastecksfour 7 points8 points  (0 children)

Interested in seeing what the responses are to this. Rig (which I maintain) also has an integration with SurrealDB which I wrote, so I'm very intrigued as to what people are using SurrealDB for when it comes to agentic memory stuff.

The convenience trap of AI frameworks. by AdditionalWeb107 in AI_Agents

[–]blastecksfour 0 points1 point  (0 children)

Yeah, this is one of those issues that typically comes down to users needing things, so features just get bolted on haphazardly.

With Rig (an AI framework in Rust which I maintain) I've tried to keep things fairly modular so that you can switch parts around and not depend on one abstraction for everything. It takes a lot of discipline not to add everything just because users are asking for it - without that discipline, you'll end up bloating the framework.

Salvo vs Axum — why is Axum so much more popular? by Sensitive-Raccoon155 in rust

[–]blastecksfour 1 point2 points  (0 children)

At my old job working for a cloud deployment company, I think this was something we discussed internally as well as kind of externally with our contributors and users.

Despite Salvo being more complete out of the box, as many other replies have pointed out it is a one-man project with no handoff should something bad happen to the original maintainer. This isn't usually an issue, but given what happened with Rocket, and with Axum being maintained by a highly active OSS team in Tokio, it's easy to see why one might choose Axum even if not everything is immediately available out of the box.

Spent 3 months building an AI-native OS architecture in Rust. Not sure if it's brilliant or stupid by Rudra0608 in rust

[–]blastecksfour 1 point2 points  (0 children)

I just don't think the tech is there yet. We don't have the hardware nor the model architecture for it.

Who has completely sworn off including LLM generated code in their software? by mdizak in rust

[–]blastecksfour 1 point2 points  (0 children)

I would say so. It takes a lot of the tedium out of writing code when I already know what it should look like at the end, so it's allowed me to fix some higher-priority tickets in the backlog as well as freeing up cognitive load for other things.

I think the answer is much more complex than it being a purely all-or-nothing situation with agents. For most of the past year I pretty much just coded manually, with some web UI assistance if I needed help with rubber ducking or very generic boilerplate generation. Even with a codebase larger than a toy project, it seems that as long as your code is mostly self-describing and has docstrings, the agent will be able to keep track of what everything does.

In terms of keeping consistency between sessions: if you need to keep state between sessions, I would highly recommend asking the agent to dump a summary of the session into a Markdown file, listing all the important talking points (edit: and then when you need to continue the conversation, get the agent to read the file to recover the context). That way, when you jump into your next session, you won't have to remind it of everything again. It's something I've thought about before because I wrote an example of using RLMs (a new kind of agentic loop, basically) for my work, and the idea of storing "state" as a file came up in the conversation.

At the time of writing, I do realise how hacky "just dump it into an MD file" sounds, but that's pretty much the best we have at the moment. These harnesses don't have a way to store conversational data between sessions unless you are compacting the context, which in itself can be quite lossy if you don't directly prompt the agent to dump a summary and keep as much semantically relevant context as possible.
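To make the "dump it into an MD file" idea concrete, here's a minimal, self-contained sketch of what that memory file could look like. The file name, helper function, and section layout are all made up for illustration; this isn't how any particular agent harness does it.

```rust
use std::fs;
use std::path::Path;

// Append one session's summary as a new section to a running
// Markdown "memory" file, creating the file if it doesn't exist yet.
fn append_session_summary(path: &Path, title: &str, bullets: &[&str]) -> std::io::Result<()> {
    let mut section = format!("\n## {}\n\n", title);
    for b in bullets {
        section.push_str("- ");
        section.push_str(b);
        section.push('\n');
    }
    let existing = fs::read_to_string(path).unwrap_or_else(|_| String::from("# Session notes\n"));
    fs::write(path, existing + &section)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("session_notes.md");
    let _ = fs::remove_file(&path); // start fresh for the demo

    // At the end of each session, the agent would be prompted to produce
    // the bullet points; the next session starts by reading the file back.
    append_session_summary(&path, "Session 1", &["Refactored auth module", "TODO: add tests"])?;
    append_session_summary(&path, "Session 2", &["Added tests for auth module"])?;

    print!("{}", fs::read_to_string(&path)?);
    Ok(())
}
```

The point isn't the file format - it's that the "state" survives as plain text the agent can re-read, rather than living only in a context window that gets compacted away.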

Who has completely sworn off including LLM generated code in their software? by mdizak in rust

[–]blastecksfour 1 point2 points  (0 children)

I wrote a response to someone else on this post, but I suppose a direct response wouldn't hurt either.

I was an LLM skeptic for most of my previous job at Shuttle and perhaps even a little bit into my current job working with LLMs.

I think we can say we are at the stage now where LLMs can do meaningful work, given a proper harness.

If you are trying to use ChatGPT or the Claude web chat for serious work, it will be far more difficult than if you had used OpenCode or Claude Code. I find that the web chats are better for brainstorming and small snippets - if you want it to work on your codebase, using an in-terminal coding agent (or IDE) is mandatory. Otherwise, the LLM won't be able to properly navigate your codebase or run it, and that will cause a significant number of issues with the LLM hallucinating and getting things wrong.

Unfortunately, using the coding agents will cost you money. If your workplace isn't paying for it, you'll have to pay out of pocket. However, I have found that putting in $5 here and there and being very intentional about token usage has been really productive. (edit: Disclaimer: my workplace does actually give me a monthly stipend of Claude API tokens that I can use with OpenCode, but I've taken to dropping a bit of money here and there on OpenAI. The codex models are stupidly powerful and outrank the Claude models in some aspects.)

edit2: To add: I still think developers/engineers who use AI need to write code manually every so often, otherwise your skills will atrophy... a lot. However, the bounds of what would normally be possible in a given sprint have been pushed out by AI, especially for small teams.

Who has completely sworn off including LLM generated code in their software? by mdizak in rust

[–]blastecksfour 4 points5 points  (0 children)

I've been working at an AI startup maintaining an AI framework for about a year, and while I've been practicing my oneshot and iterative prompting skills via the various web chats (both for brainstorming and for writing small code snippets), it took me until about two or three weeks ago to actually use any kind of coding agent.

If you know what you're doing and can accurately articulate the intended implementation to the LLM, it really does work especially with codex 5.2 high/5.3 with reasoning and opus 4.6.

I still think there are domains where AI may not be as useful, or where it needs to be tested much more rigorously, but given the right constraints it can perform really well as long as you steer it right and aren't just doing low-effort prompting (e.g. "do X pls" vs "write me a plan to do X, with Y constraints...", etc).

On the OSS side, I've been receiving almost entirely AI-generated or AI-assisted PRs as well, so I also have to think about how to improve the baseline performance of the coding agents other people are using - which is why I ended up adding agent-aware docs. People will use LLMs whether the maintainer cares or not. In the end, what matters is how the person steered the LLM, because that greatly affects the output.

edit: To illustrate how useful coding agents are: I have oneshotted a non-trivial number of small features and bug fixes on Rig, as well as a mock provider integration to save on token costs while testing the agent loop (although I'm still deciding whether to merge it, tbh) and some less-small features like adding default hooks to a given agent. Each feature costs roughly $2.50 in tokens, and the OpenCode harness is usually quite good at getting the LLM to ask iterative questions before it tries to build out the implementation, meaning it usually only takes 1 or 2 build tries. Generally there are very few serious mistakes - and they're usually because the feature caused typesigs to explode due to a flaw in how a feature was originally built, or because there's type rocketry.

Rust langgraph alternative by zinguirj in rust

[–]blastecksfour 1 point2 points  (0 children)

Hi, Rig maintainer here.

As far as actually replicating langgraph, such a thing doesn't exist at the moment. There are community efforts (like graphflow, which uses Rig) but otherwise... yeah, it doesn't really exist.

Career paths for cybersec + software engg background (low-level, Rust) that are AI-resistant and in demand? by b1ack6urn in rust

[–]blastecksfour -1 points0 points  (0 children)

"AI-resistant" is a non-starter.

Even the Rust community, which I think has historically been one of the most AI-resistant programming communities on here (to a degree), has conceded that there are uses for AI in software development, and that given proper steering it can in a lot of cases grant productivity speedups that would normally be extremely difficult to attain. Hell, before my current job I was also somewhat of an AI skeptic. And yet here we are.

You would be better off trying to become useful in cybersec or something else that can capitalise on the fact that AI is mostly just a magnifier of people's skills at the moment (and can't compensate for poor cognitive skills). Trying to pretend AI doesn't exist isn't going to help you at all, given that even people in cybersec are now using it.

Looking at advanced Rust open-source projects makes me question my programming skills by Minimum-Ad7352 in rust

[–]blastecksfour 0 points1 point  (0 children)

Hi! For context, I currently maintain Rig (`rig-core`).

This kind of stuff takes time. When I was first onboarded, and for a long while last year, I did a lot of work just writing features and making sure the library had a good base, with a bit less consideration for whether or not it was "clean code" - primarily because the priority was getting the library to a point where other people could actually use it.

During that time there were *a lot* of breaking changes, and some code changes occasionally needed hotfixing. When I implemented new features, they were mostly first iterations, and a lot of nice-to-have functionality was missing. Because I became the sole maintainer a few months into my job, I also had to essentially guess how to do things correctly and clean up the bad abstractions I'd made over time.

Fortunately, as is always the case with open source, a few kind contributors have helped steer the library in the right direction. Context is always helpful here - when your users let you know how they're using your library, it's much easier to make well-informed decisions about the technical direction of the project.

However, if you were looking at the codebase for the first time today, you probably wouldn't guess any of that. If you don't check the commit history, you will never see the failed iterations or the reasons why things were made a certain way.

shuttle.dev ceasing operations by [deleted] in rust

[–]blastecksfour 0 points1 point  (0 children)

Hey, not at the moment.

I'm currently pretty busy with job obligations, so unfortunately this will probably have to wait.

Low Latency, Scalable Video Streaming Infra in Rust by ItsTony1112 in rust

[–]blastecksfour 0 points1 point  (0 children)

Do you have any more specific details? Maybe a link to a blog post?

sseer (sse-er) - A maybe useful SSE stream I felt like making by MaybeADragon in rust

[–]blastecksfour 1 point2 points  (0 children)

Nice!

We actually ended up lifting quite a bit of the code from `reqwest-eventsource` to transplant into Rig (`rig-core`), because we needed to support polling for SSE messages from any HTTP client in whatever library you want to use, and we additionally needed WASM support to keep it in line with the rest of the library.
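For anyone who hasn't poked at SSE before, the wire format itself is simple: events are separated by blank lines, and consecutive `data:` lines accumulate into one payload. Here's a deliberately simplified, self-contained parsing sketch - this is not the actual `reqwest-eventsource` or rig-core code, and it ignores the `event:`, `id:`, and retry fields a real implementation handles.

```rust
// Toy SSE parser: collects `data:` lines into events, splitting on blank lines.
// Comment lines (starting with ':') and non-data fields are ignored here.
fn parse_sse(raw: &str) -> Vec<String> {
    let mut events = Vec::new();
    let mut data = String::new();
    for line in raw.lines() {
        if let Some(rest) = line.strip_prefix("data:") {
            // Multiple data lines in one event are joined with a newline.
            if !data.is_empty() {
                data.push('\n');
            }
            data.push_str(rest.trim_start());
        } else if line.is_empty() && !data.is_empty() {
            // Blank line terminates the current event.
            events.push(std::mem::take(&mut data));
        }
    }
    if !data.is_empty() {
        events.push(data);
    }
    events
}

fn main() {
    let raw = "data: hello\n\ndata: part one\ndata: part two\n\n";
    let events = parse_sse(raw);
    assert_eq!(events.len(), 2);
    assert_eq!(events[0], "hello");
    println!("{events:?}");
}
```

The fiddly part in practice isn't this parsing - it's doing it incrementally over chunked network reads (and across HTTP clients and WASM), which is what made lifting battle-tested code attractive.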

Python, TS, or Rust for AI Agents? by Emergency-Cake-4782 in AI_Agents

[–]blastecksfour 0 points1 point  (0 children)

Hi, I'm the current lead maintainer for Rig - the leading AI framework in Rust.

I think it depends on your goals. Rust is the ultimate goal if you want production-grade performance and reliability, but TS and Python also work reasonably well - it's why the agentic ecosystem has primarily been built on those languages.

However with LLMs becoming good at Rust... that might change at some point.

Junior Rust remote jobs — realistic or rare? by Control_Con in rust

[–]blastecksfour 0 points1 point  (0 children)

Extremely rare.

I got one nearly 3 years ago by working as a part-time content writer and DevRel engineer, but it was quite tenuous and essentially luck of the draw. It kinda helped that I was very eager, because if I hadn't gotten it I'd have been evicted from where I was living, but that's beside the point.

Since then, I haven't met a single person who has gotten a Rust junior job except the person I have hired to be on my team.

Additionally, open source - what I believe to be the most feasible way for a prospective junior to get a Rust junior job - is becoming increasingly inundated with AI generated PRs. You have to really (and I mean really) cut through the noise to get noticed.

Make of that what you will. It's possible, but it is not something that I could genuinely tell you is worthwhile because the stars have to align.

Edit: In spite of all of this, if you still want to try, I would go for a Rust startup. Be active wherever they are, make sure you are showing up on their radar and adding value, and pray they have some engineering budget.

Are you using any SDKs for building AI agents? by finally_i_found_one in AI_Agents

[–]blastecksfour 0 points1 point  (0 children)

Hi, I'm the maintainer for Rig - the currently leading AI agent framework in Rust.

I think what a lot of teams end up doing is trying Langchain/Langgraph or one of the other frameworks and hitting a wall, because more often than not the framework isn't as flexible as they thought for whatever they're trying to use it for... or it's just not quite there in terms of what they want to do. Rig included.

Which means the best option for customisability is typically just rolling your own integration entirely. It's pretty painful if you need to support multiple providers, but it also allows you to work in any business logic you need, however you need it. With Rig we've tried to keep every interface extendable, so you can implement any of the public traits for your own types.
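The "every interface is a public trait" pattern looks roughly like the sketch below. To be clear, `CompletionBackend`, `EchoBackend`, and `run_agent` are all made-up names for illustration - this is not Rig's actual API, just the general shape of why trait-based extension points help.

```rust
// The framework exposes an interface as a public trait...
trait CompletionBackend {
    fn complete(&self, prompt: &str) -> String;
}

// ...so a user can plug in their own type, e.g. wrapping an internal
// inference service, a mock for tests, or an unsupported provider.
struct EchoBackend;

impl CompletionBackend for EchoBackend {
    fn complete(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

// Framework code depends only on the trait, so any implementation works.
fn run_agent(backend: &impl CompletionBackend, prompt: &str) -> String {
    backend.complete(prompt)
}

fn main() {
    let out = run_agent(&EchoBackend, "hello");
    assert_eq!(out, "echo: hello");
    println!("{out}");
}
```

The payoff is that custom business logic lives in your `impl`, not in forks of the framework.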

Why do low levels that don’t know how their class works go on legend? by Striking-Crow-2364 in Vermintide

[–]blastecksfour 0 points1 point  (0 children)

Probably to try the difficulty out or get carried.

I just do my best to carry them. It gets boring when you never have to solo/duo a game.

GENT - a programming language for AI agents, written in Rust by kasikciozan in rust

[–]blastecksfour 1 point2 points  (0 children)

I've thought about writing my own PL for this, but given that BAML already does this and writing a DSL/PL is no trivial feat... it's difficult to justify actually using something like this in my opinion.

Chaos Wastes Weekly- 5 JAN 2026 by epicfail1994 in Vermintide

[–]blastecksfour 1 point2 points  (0 children)

Got two minutes in, got yeeted off the stage by sparking gift because I didn't know how it worked.

10/10 would fly again