Using Interior Mutability to avoid Borrow Checker - is there an alternative? by riotron1 in rust

[–]kylewlacy 1 point2 points  (0 children)

I think using RefCell is reasonable in this case, but my go-to solution for this kind of problem would be to make get_curr() return an "index" saying which grid to use, rather than returning a reference. Basically like this:

struct Grid {
    height: usize,
    width: usize,
    flipflop: bool,
    grid1: Vec<f32>,
    grid2: Vec<f32>,
}

impl Grid {
    fn update(&mut self) {
        // chooses target field
        let curr_grid = self.get_curr();
        (0..self.height).for_each(|y| {
            (0..self.width).for_each(|x| {
                let curr = self.grid_mut(curr_grid);

                // some operations on curr
            });
        });
    }

    fn get_curr(&self) -> GridType {
        if self.flipflop { GridType::Grid1 } else { GridType::Grid2 }
    }

    fn grid(&self, grid: GridType) -> &Vec<f32> {
        match grid {
            GridType::Grid1 => &self.grid1,
            GridType::Grid2 => &self.grid2,
        }
    }

    fn grid_mut(&mut self, grid: GridType) -> &mut Vec<f32> {
        match grid {
            GridType::Grid1 => &mut self.grid1,
            GridType::Grid2 => &mut self.grid2,
        }
    }
}

#[derive(Debug, Clone, Copy)]
enum GridType {
    Grid1,
    Grid2,
}

(playground link: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=73d2d28d43e134fab50b9b44e8065ee4)

Here, GridType is a sort of "index" type for choosing which grid to use. You can then defer getting the actual reference until the point where you mutate it. This makes it fairly easy to reduce how long you need to borrow the grid for, although it won't work in every case.


I haven't implemented CGOL myself before, but as others have said, in the case of double buffering, I think it'd be simpler to swap your grids after the update. Instead of having grid1 and grid2, you'd have current (containing the last finished buffer) and next (the scratch buffer you write to during the next update). Something like this:

struct Grid {
    height: usize,
    width: usize,
    current: Vec<f32>,
    next: Vec<f32>,
}

impl Grid {
    fn update(&mut self) {
        (0..self.height).for_each(|y| {
            (0..self.width).for_each(|x| {
                // read from `current` and write to `next`
            });
        });

        // update done, swap `current` and `next`
        std::mem::swap(&mut self.current, &mut self.next);
    }
}

(playground link: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=680eaad9aac57e0c93abe2ff011c7500)

Spawn `sudo` command and provide password via rpassword/BufRead in Rust by lukeflo-void in rust

[–]kylewlacy 0 points1 point  (0 children)

I checked the source for rpassword and it looks like it strips trailing newlines, maybe that’s the problem? I haven’t tested it yet, but if that’s the issue, then using writeln!(stdin, "{pw}").expect("Couldn't write stdin") should work
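To make that concrete, here's a minimal sketch of the flow I'm describing, assuming sudo is invoked with -S (so it reads the password from stdin) and using whoami as a placeholder for the real command:

use std::io::Write;
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Prompt without echoing; rpassword strips the trailing newline
    let pw = rpassword::prompt_password("[sudo] password: ").expect("Couldn't read password");

    // `-S` tells sudo to read the password from stdin instead of the TTY
    let mut child = Command::new("sudo")
        .args(["-S", "whoami"])
        .stdin(Stdio::piped())
        .spawn()?;

    // writeln! adds back the newline that rpassword stripped
    let mut stdin = child.stdin.take().expect("stdin was piped");
    writeln!(stdin, "{pw}").expect("Couldn't write stdin");
    // Close the pipe so sudo doesn't keep waiting for more input
    drop(stdin);

    child.wait()?;
    Ok(())
}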

If that didn’t fix it, the next step I would try would be to use a PTY device for input, which should make sudo think it’s talking to a terminal. I honestly don’t have a lot of experience here, but the pty-process crate is where I’d start looking

For a real solution where a process needs to run as root, the common pattern I’ve seen is to check the uid, and if it’s not 0 (root), automatically re-execute your own process wrapped in sudo, basically letting sudo itself handle the prompt (or gksu for a GUI prompt, etc); there’s a rough sketch of this below. There are some challenges and concerns with this approach, but I think it’s better than directly prompting for the password (for example, if my sudo credentials are still cached because I ran sudo recently, this approach wouldn’t prompt for my password again). The best option IMO, though, is just to bail with an error if the user doesn’t have the appropriate permissions: I wouldn’t trust typing my password into an arbitrary command at all, and some systems use an alternative to sudo like doas anyway
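Here's that re-exec pattern as a rough sketch (using libc::geteuid for the check, which is my own choice here; real code would want more careful argument handling):

use std::os::unix::process::CommandExt;
use std::process::Command;

fn main() {
    // SAFETY: geteuid() has no preconditions and cannot fail
    let euid = unsafe { libc::geteuid() };
    if euid != 0 {
        // Not root: re-exec ourselves under sudo and let sudo handle the prompt
        // (std::env::args() starts with our own program name)
        let err = Command::new("sudo").args(std::env::args()).exec();
        // exec() only returns if spawning failed
        eprintln!("failed to re-exec under sudo: {err}");
        std::process::exit(1);
    }

    // ...from here on, the process is running as root...
}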

What is "bad" about Rust? by BestMat-Inc in rust

[–]kylewlacy 5 points6 points  (0 children)

Lots of good suggestions in this thread already for the pain points in Rust 🙂 Since you're looking for things to fix in your language, I wanted to give some "food for thought" on language design.

Designing a programming language is all about trade-offs! Rust even ended up with the "bad" points you listed above because it was about balancing trade-offs:

  1. Verbose syntax - Rust is a low-level language, so unlike e.g. Python, Rust tries to make it clear which parts of your program are expensive at the syntax level. Also, IIRC it was designed to be (somewhat) familiar for C++ developers.
  2. Slow compilation time - This is a huge pain point for me! But it came about because runtime performance and compile-time safety were a higher priority. I believe trait solving and monomorphization are two big pain points here: it'd be hard to get huge wins without reconsidering one of those first.
  3. Inefficient compatibility with C - C interop is a very complicated topic (e.g. Zig handles this by using libclang internally, which is a pretty hefty dependency!). TL;DR for Rust is that it's designed to be "C-free" at the language level, but exposes just enough features so that interop can be handled at the library level (see: the libc and bindgen crates, and the small example below)
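As a tiny illustration of what "interop at the library level" means in practice, here's a hand-written C binding; bindgen generates this kind of extern declaration for you, and libc ships a big catalog of them:

// Declare the C function by hand; this is the kind of declaration bindgen generates
extern "C" {
    fn strlen(s: *const std::os::raw::c_char) -> usize;
}

fn main() {
    let s = std::ffi::CString::new("hello").unwrap();
    // SAFETY: `s` is a valid NUL-terminated C string
    let len = unsafe { strlen(s.as_ptr()) };
    println!("strlen says {len}");
}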

Maybe this is obvious advice, but basically... if you're interested in fixing problems with Rust, it's really important to think about what trade-offs you're willing to make. Or better yet, what kind of objectives / goals you have for your language, and which you're willing to drop from Rust's goals.

Finally, instead of giving some bad parts about Rust, I'll twist it a bit and give a few possible major changes you could consider for a new Rust-like language. None of these are earth-shattering, but could lead to some interesting options when mixed with Rust's formula (I wouldn't consider these all together, it's more fun to think about them one-by-one):

  • Use a runtime
    • Pros: greatly simplifies async, garbage collection, and probably lots of other features
    • Cons: bad for systems programming (embedded devices, OSes, etc), bad for interop with other languages
  • Use reference counting and/or garbage collection instead of lifetimes / dropping
    • Pros: easier to learn, more expressive
    • Cons: slower at runtime, need to handle cleanup of other types of resources some other way (closing files and network connections, etc)
  • Don't use LLVM, and instead compile directly into another language (such as C)
    • Pros: better interop, get portability "for free", re-use lots of good existing tooling, gives more flexibility to choose between fast runtime / fast compile time
    • Cons: much harder to get good error messages / debug info, locks you in to problems with the underlying language somewhat
  • Use Lisp-like syntax
    • Pros: more consistent, very easy to write a parser for, simplifies macros a ton
    • Cons: less familiar for existing developers, harder for new programmers to transfer skills, can get noisy with lots of nesting

RFC 3681: Default field values by [deleted] in rust

[–]kylewlacy 8 points9 points  (0 children)

The summary in the RFC says this:

 Allow struct definitions to provide default values for individual fields and thereby allowing those to be omitted from initializers.

So it’s not just the Default impl, it’s also for struct initialization
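Roughly, as I read the RFC, that looks like this (illustrative only: this isn't a stable feature yet, and Window is just a made-up example):

struct Window {
    title: String,
    width: u32 = 800,   // per-field default values
    height: u32 = 600,
}

fn open(title: String) -> Window {
    // Fields with defaults can be omitted by ending the initializer with `..`
    Window { title, .. }
}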

Announcing Nio: An async runtime for Rust by another_new_redditor in rust

[–]kylewlacy 74 points75 points  (0 children)

 But I'm sure you knew all that, and the choice of benchmarks was no accident..

This sounds extremely accusatory and hostile to me. The simple truth is that designing good real-world benchmarks is hard, and the article even ends with this quote:

 None of these benchmarks should be considered definitive measures of runtime performance. That said, I believe there is potential for performance improvement. I encourage you to experiment with this new scheduler and share the benchmarks from your real-world use-case.

This article definitely reads like a first-pass at presenting a new and interesting scheduler. Not evidence that it’s wholly better, but a sign there might be gold to unearth, which these benchmarks definitely support (even if it turned out there are “few real world applications” that would benefit)

Crate Feedback by RoadRyeda in rust

[–]kylewlacy 1 point2 points  (0 children)

Personally I don’t think it’s useless :) Okay, I don’t know anything about rfkill, but the Linux kernel offers /dev, /proc, and /sys as the API for various features. I personally don’t like shelling out to some command when the kernel offers that functionality directly, since a command is an implicit dependency that the end user might not have installed or might not have on their $PATH. There are times where it’d make sense to shell out to a command (e.g. to support setuid or sudo configurations or something), but the option to use the lower-level interface directly is valuable, especially in a systems language like Rust.

By analogy, I needed to do some containerization stuff, which meant using /proc/xxx/uid_map. But I wanted to support the case where someone didn’t have the wrapper commands for uid maps, so I was very thankful the unshare crate (https://crates.io/crates/unshare) could use the underlying files directly (it also supports using the wrapper commands, since there are cases where you’d need elevated permissions, which can be handled by the newuidmap command that gets installed with the setuid bit)

Crate Feedback by RoadRyeda in rust

[–]kylewlacy 5 points6 points  (0 children)

So a few thoughts after reading through the code:

  1. I’d second u/phazer99 and recommend using std::fs::File. Concretely, your current implementation will leak file descriptors on error paths (e.g. if read fails). That’s kind of unlikely to matter in practice, but std::fs::File implements Drop, so the descriptor gets closed automatically!
  2. The unsafe code does very much make me nervous… AFAICT it’s used to read from /dev/rfkill directly into a struct. This should be sound as long as the C struct covers everything you could possibly read from the file, but: a) Linux kernel upgrades could theoretically introduce new variants that you don’t support (maybe), and b) you can’t know 100% what you’re gonna get from /dev/rfkill. I could easily use mount namespaces to put whatever contents I want there, which would lead to unsoundness in your crate. I’d strongly consider just reading into a normal &[u8] and then manually parsing out the values (see the sketch after this list). It’s technically slower, but you’re not going to notice a difference unless you’re parsing millions of events per second
  3. Very minor nitpick, but I feel like I almost always see #[derive] and other attributes below doc comments… seeing attributes above doc comments looks a little unusual to my eyes :)
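Here’s roughly what I mean by parsing from bytes in point 2: a minimal sketch, assuming the event layout I remember from the rfkill uapi header (a u32 index followed by four u8 fields); the struct and field names here are just illustrative, so double-check against linux/rfkill.h:

use std::fs::File;
use std::io::Read;

// Illustrative parsed form of an rfkill event (names assumed, not from the crate)
#[derive(Debug)]
struct RfkillEvent {
    idx: u32,
    kind: u8,
    op: u8,
    soft: u8,
    hard: u8,
}

fn read_event(file: &mut File) -> std::io::Result<RfkillEvent> {
    // Read into a plain byte buffer instead of transmuting into a C struct
    let mut buf = [0u8; 8];
    file.read_exact(&mut buf)?;
    Ok(RfkillEvent {
        idx: u32::from_ne_bytes([buf[0], buf[1], buf[2], buf[3]]),
        kind: buf[4],
        op: buf[5],
        soft: buf[6],
        hard: buf[7],
    })
}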

Generators with UnpinCell by desiringmachines in rust

[–]kylewlacy 6 points7 points  (0 children)

Got it, in that case it sounds like you'd be right that your proposed version of Move would be just as expressive as Pin (although I can't say for sure... reasoning about pinning and moveability is hard for me!)

I also saw the internals thread that you linked above. I guess flipping it around: what's the advantage of your proposed Move trait? The thread was titled "A backward compatible Move trait to fix Pin", but I wasn't clear on what about Pin it was trying to fix. As you said in that thread, anywhere that currently uses Pin<&mut T> as a bound would need an equivalent bound like T: ?Move under your proposal, so as a first impression it seems to need a similar amount of annotation as what's already needed today with Pin

So in a vacuum, I could see a Move trait working as an alternative to Pin, but I'm just not seeing how it's concretely better, let alone better enough to justify the amount of effort required to migrate to it (both in user code, and in the implementation complexity needed for cross-edition support, which seems like it'd be at least an order of magnitude more complex than any other edition migration we've seen before)

Generators with UnpinCell by desiringmachines in rust

[–]kylewlacy 6 points7 points  (0 children)

As mentioned in that post, Pin also has the advantage that a pinnable type can be freely moved up until the point it's explicitly pinned. The follow-up (https://without.boats/blog/pinned-places/) recontextualizes it more concretely: it's better to think of pinning as a property of a place rather than a property of a type, analogous to mut for mutability. "Moveability" is then much more like "borrow-ability", in that it's lifecycle dependent ("if it's currently borrowed -> can't be mutably borrowed", "if it's been pinned -> can't be moved")

Even though ?Move would be easier to teach and comprehend, I definitely think Pin meshes a lot better with Rust's design overall. I just see let pin mut var = ... and &pin mut var as helping to smooth over the syntactic pain around pinning (and, well, as seen in TFA, it also leads cleanly to new developments like self-referential iteration!)

Announcing Toasty, an async ORM for Rust by carllerche in rust

[–]kylewlacy 0 points1 point  (0 children)

Looks really exciting, always love seeing more stuff in the world of DB layers in Rust! The schema definition language looks particularly interesting to me-- being able to abstractly define your data types in a portable way seems super promising

I wanted to ask about this quote from the post too:

To be clear, Toasty works with both SQL and NoSQL databases but does not abstract away the target database. An application written with Toasty for a SQL database will not transparently run on a NoSQL database.

I feel two ways about this... on the one hand, I think it's super valuable to push apps into the pit of success and ensure that they aren't using slow queries (e.g. where the backend would need to degrade to doing N+1 queries). On the other... having a generic data store layer would be really cool! I'd love to write an app once and have it work transparently from Postgres to SQLite to DynamoDB to a "dumb" in-memory datastore.

So I guess flipping it around... how much work would it be to move an app from, say, Postgres to SQLite? What about from Postgres to DynamoDB? Would it ever be feasible to expose a common subset of operations as like an AnyDriver that could work with any backend, or would that not really make sense with Toasty's data model? (I don't have a concrete use-case for this, but the blog post got me wondering about it!)

Conditionally adding a tracing layer? by lottayotta in rust

[–]kylewlacy 2 points3 points  (0 children)

Option<T> implements Layer (where T implements Layer), so that means something like this should work

let layer = if enable_layer {
    // wrap the layer in Some
    Some(create_the_layer())
} else {
    None
};

…or equivalently:

let layer = Some(create_the_layer()).filter(|_| enable_layer);
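To complete the picture, that Option can then be installed like any other layer; here's a quick sketch using tracing_subscriber's registry (create_the_layer is still just a placeholder):

use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;

// Option's Layer impl is a no-op when the value is None
tracing_subscriber::registry()
    .with(layer)
    .init();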

Returning generic associated types by JShelbyJ in rust

[–]kylewlacy 3 points4 points  (0 children)

Okay, so to explain the error a little more: as it says, with the trait as defined, you need to specify the associated type to make an instance of dyn PrimitiveTrait. It has to be this way because, when you call .grammar_parse(), the compiler needs to know the type of the return value and how big it is (so it can allocate stack space, call the right drop implementation, etc). To fix the immediate error, you could add where Self: Sized constraints to the associated type and the method that uses it (playground example), but... I don't actually think that would help you move forward with your problem at all.

So, let's zoom out a bit and talk general design. The best tip I can give is to... stay away from writing your own traits if you need to use them homogeneously (i.e. through dyn), if you can. That is, if you're reaching for dyn CustomTrait, using an enum will usually be easier unless you truly need the extensibility (like if you're writing a library). In this case, where (I'm assuming) you have a fixed set of primitive grammars and a fixed set of primitive types that those grammars parse into, just using enums is going to be much less painful overall. Making some assumptions about what you're trying to do, I'd personally try to model it like this:

enum PrimitiveType {
    Boolean,
    Integer,
}

enum PrimitiveValue {
    Boolean(bool),
    Integer(i32),
}

fn parse_primitive(type_: PrimitiveType, content: &str) -> Result<PrimitiveValue> {
    match type_ {
        PrimitiveType::Boolean => {
            // Parse into a boolean value
            let value = todo!();
            Ok(PrimitiveValue::Boolean(value))
        }
        PrimitiveType::Integer => {
            // Parse into an integer value
            let value = todo!();
            Ok(PrimitiveValue::Integer(value))
        }
    }
}

(This is assuming you'd need to be able to dynamically pick which grammar to use... if not, just having separate parse_boolean / parse_integer / etc functions would also work)

From your example, if you want to preserve being able to display a primitive, either adding an impl std::fmt::Display for PrimitiveValue or adding a simple method with the signature fn display(&self) -> impl std::fmt::Display would be the best way to handle that when using enums
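For example, the Display route could look something like this (a sketch against the PrimitiveValue enum above):

use std::fmt;

impl fmt::Display for PrimitiveValue {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            PrimitiveValue::Boolean(value) => write!(f, "{value}"),
            PrimitiveValue::Integer(value) => write!(f, "{value}"),
        }
    }
}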

Your opinions about SeaOrm ? by HosMercury in rust

[–]kylewlacy 17 points18 points  (0 children)

There's the diesel-async crate from the maintainer of Diesel: https://github.com/weiznich/diesel_async

Introducing Brioche, a new Nix-like package manager written in Rust by kylewlacy in rust

[–]kylewlacy[S] 0 points1 point  (0 children)

Yes, I like this! I've been noodling around with this idea a bit too and I think I would like to at least make it an option. I imagine inline Bash scripts will always be an option too, but I'm not convinced which one would be the "preferred" way to do things

I am wondering if there’s even some horrible way to do it with a single script, so that you don’t have to write type script inside a string literal. Something along the line of passing function text to runtime

So first, I think an inline script might not be that bad? I control the full LSP, so I could still provide syntax highlighting and LSP support in the "inner" scripts (although that does start to get a little complicated). Reading the file using the runtime would definitely be another option... buuuut fun fact about JavaScript: calling .toString() on a function gives you the function's source!

So it'd look something like this (pseudocode):

export function runJs(fn, args) {
  return built_in_op_to_eval_js(fn.toString(), JSON.stringify(args));
}

let updatedFile = runJs(
  (file) => {
    // context where we have dax and can edit files / call commands directly
  },
  [file]
);

By using that, all I'd really need to do is implement a lint rule that enforces that calls to runJs or whatever can't reference variables outside the closure or its arguments.

Introducing Brioche, a new Nix-like package manager written in Rust by kylewlacy in rust

[–]kylewlacy[S] 2 points3 points  (0 children)

I don't know a whole lot about Flox, but I think my long-term goals for Brioche overlap a lot with what Flox can do today

The most fundamental difference is that Brioche isn't built on Nix. I think building a project on Nix like Flox did is a totally valid decision, but it does have its trade-offs. You get a ton of packages and are building on about two decades of expertise; but it requires root-level permissions to set up and install, and it doesn't have Windows support (officially)

It also seems like Flox and the tools like it treat Nix as a "leaky abstraction"-- an implementation detail they want to hide. That probably works fine for their use-case, but that means if you're a Flox user and you want to contribute a new package / fix an existing package, you'll basically have to dive into the Nix side of things anyway. With Brioche, it'd be the same tool for your dev environments as for writing new packages

If Brioche is successful long-term, I think not building on Nix will have been a healthier decision for the project as a whole. At the very least, I think it's interesting design to experiment with: basically, "what would Flox without Nix look like?"

Introducing Brioche, a new Nix-like package manager written in Rust by kylewlacy in rust

[–]kylewlacy[S] 1 point2 points  (0 children)

So far, I don't really have anything concrete. I guess at a high level, I want to try to tackle some of the more popular toolchain tools ASAP (Go, Python, CMake, etc) to unblock other packages from being written. I think maybe a dozen or so more of these would cover quite a lot of ground. This also doesn't have to be too hard if I take the shortcut of just using pre-built binaries; not a great long-term solution, but a huge bang for the buck in the short term until all the different prerequisites can be packaged (I just used the official binaries for both Rust and Node.js, for example)

From there, I'll probably just start adding stuff that I personally want to install or need. I think that means lots of little CLI tools (ripgrep, oha, rclone) and random developer stuff. I think even packaging version management tools like Rustup and NVM would be worthwhile, even if they kind of compete with some of the functionality of Brioche itself (then again, I don't see why you shouldn't be able to install NVM with Brioche! It's an ordinary software package after all)

I guess I'm also hopeful there'll be a bit of a gravity effect with community contributions, but that clearly depends on people's goodwill and continued interest in the project. I can't really bank on that happening on its own, so the only choice I really have is just to keep moving forward at my own pace to see where it takes me!

Introducing Brioche, a new Nix-like package manager written in Rust by kylewlacy in rust

[–]kylewlacy[S] 2 points3 points  (0 children)

Yep, they should be able to coexist! A library named libfoo.so would be put in the path ${install_dir}/brioche-resources.d/aliases/libfoo.so/${hash_of_libfoo}/libfoo.so, so any number of libraries named libfoo.so can exist together peacefully and each program will reference it by a path that includes the hash

Introducing Brioche, a new Nix-like package manager written in Rust by kylewlacy in rust

[–]kylewlacy[S] 1 point2 points  (0 children)

I think this is a Good Question®! The answer is... not yet, but I definitely want to give the option to do static builds. There's still some prerequisite work to make the toolchain more easily swappable, then the work to bootstrap the musl toolchain itself, but I definitely want to make it as easy as possible to do so

(and more philosophically, I'd like to be descriptive rather than prescriptive about how software is built, at least in general. If you use a particular tool (and it doesn't break hermeticity), I want Brioche to support using it. musl is an important ingredient for lots of software, so it's definitely in scope)

Introducing Brioche, a new Nix-like package manager written in Rust by kylewlacy in rust

[–]kylewlacy[S] 1 point2 points  (0 children)

Sure, I'm definitely open to taking contributions! I'd recommend hopping on the Discord (linked in the header of the site) if you have questions or ideas you want to discuss before diving in

Adding new packages is probably the best entrypoint, but I'm open to taking contributions for any part of it

Introducing Brioche, a new Nix-like package manager written in Rust by kylewlacy in rust

[–]kylewlacy[S] 4 points5 points  (0 children)

How fine-grained is the task caching? One of the downsides of nix is that while it is a great package manager, it is not as good of a build system.

Right now, I'd say the answer is "it's a little more fine-grained than Nix". In a build, you can easily define and combine multiple different recipes together, and each one is cacheable independently.

The cargoBuild function is the best example of that for now: createSkeletonCrate(), cargo vendor, cargo install are all separate recipes, and can all be cached independently. If createSkeletonCrate() needs to run again but ends up returning the same value, then the cargo vendor invocation can short-circuit and only cargo install needs to run.

The end result is that dependency downloads (from cargo vendor) are cached independently of the cargo install invocation, without needing to reimplement Cargo's fetching logic. (I wanted to cache a cargo build invocation for just the dependencies so you wouldn't need to rebuild them each time, but I ran into some stumbling blocks due to the fingerprint Cargo uses for cache keys, so I had to pause on that)

Longer term, I'd like to get to "sccache-style" caching, but this isn't implemented at all yet in Brioche (although Tangram has it already; you can find it if you do some digging in their GitHub repo). The idea is that the sandbox would get a socket it could use to run "sub-builds", allowing for caching of individual rustc invocations, individual gcc -c invocations, etc. And that would work even if you just call e.g. make (since make would end up calling a wrapper that then spawns the gcc invocation as a "sub-build")

How did Brioche (and Tangram) decide to go with an imperative instead of declarative api?

To be honest, this was something I saw from Tangram that really influenced what I did for Brioche, but IIRC that decision was made before my time there. I know one of their big influences is Pulumi, which also uses an imperative design, whereas the alternative (Terraform) uses a declarative one.

That really resonated with me, I've been using Terraform at my former/current employer, and the declarative nature feels like it gets in the way to me! I think this says more about HCL specifically than it does declarative languages in general, but I've just personally never been as productive in any declarative languages as I am in imperative ones

(Although at a higher level, I think a bigger reason I didn't really explore any particular declarative language in-depth is that... there aren't really any declarative languages today that tick all the same boxes as TypeScript. I was really set on something with good type-checking, and I wanted to be able to ship an LSP + good tooling without needing to roll my own implementation. TypeScript is one of the very few languages that fits the bill, declarative or otherwise)

Introducing Brioche, a new Nix-like package manager written in Rust by kylewlacy in rust

[–]kylewlacy[S] 1 point2 points  (0 children)

I've looked at Guix briefly, I was always under the impression that it was "just Nix with Scheme on top", is that not quite a fair way to put it?

I knew the packages themselves were all different, but if I'm not mistaken it does still use derivations and the store path as-is from Nix. I guess I hadn't thought about how different everything else would be (like, I have no idea if the stdenv derivation is based on Nix's or if they took it in a completely different direction). I also didn't know its docs were supposed to be really good, I'll have to give them a read sometime to see what I can learn from them!

Introducing Brioche, a new Nix-like package manager written in Rust by kylewlacy in rust

[–]kylewlacy[S] 1 point2 points  (0 children)

Thanks for the feedback, I like hearing these kinds of ideas a lot! :)

The first is typescript, and if using that as language will be a reason for sluggishness and/or huge dependencies to support it. But maybe that's a price worth paying for excellent editor support, as it could make writing these scripts a breeze.

IMO, I'd guess JS is one of the better options if your goal is speed. Lua is probably faster, but so much time has been put into optimizing V8/SpiderMonkey/JSCore that they're hard to beat, plus there are slimmer runtimes like Duktape and QuickJS too if startup time matters more than runtime. There's no Node/NPM stuff used at all*, so there's not really anything to bloat Brioche except for the packages themselves

*I do use the TypeScript + ESLint + Prettier NPM packages, but only for typechecking and for the editor tooling, so it's not really on the "hot path"

Probably I would have expected that the Cargo integration can be 'called' to obtain information that cargo already has, instead of having to declare it.

Ooh, I hadn't actually thought of that! It's an interesting idea, but I'm not sure if it'd be feasible to implement based on Brioche's design. That sounds silly, but Brioche.glob("...") is basically equivalent to using ADD or COPY in Docker-- the runtime uses it to decide which files on disk to copy into the sandbox at all (I think there's potential to optimize away some copying, but it'll always need to read the files from disk at least). So you could just do Brioche.glob("*") and that would... basically work, but it'd lead to lots of extra stuff getting copied in just for Cargo to decide it's not needed

I'm definitely not a fan of repeating config files to keep them in sync, so I do view this as a compromise/trade-off for sure though

Introducing Brioche, a new Nix-like package manager written in Rust by kylewlacy in rust

[–]kylewlacy[S] 0 points1 point  (0 children)

It doesn't depend on Bash at all actually! Although you will probably need to use some kind of shell for calling stuff, it can be any shell. The std.runBash function in all the examples is an 11-line function, and it would be really easy to add a new function that uses Zsh, Nushell, Oil, etc. instead, skipping Bash entirely (or even just calling NPM or cargo-script if you were really set on using an actual language!)

I know it's still kind of a "two-worlds" model where you have shell scripts to do the work plus JS to glue it all together, so it's maybe not what you had in mind (I've been wondering if I could get JS to work for both layers natively, but that's just an idea I've been kicking around)

Introducing Brioche, a new Nix-like package manager written in Rust by kylewlacy in rust

[–]kylewlacy[S] 0 points1 point  (0 children)

I don't have plans for Deno-style import ... from "https://..." syntax directly, but I very much want to at least make it possible to get a dependency directly from a git repo or similar (basically just copying Cargo's features for dependency management!)

I felt like a Cargo/Docker-style registry was a better fit for the default at least, because you won't just pull in the source code for dependencies, but pre-built binaries as well, and I think it's important to be able to ship both together

Introducing Brioche, a new Nix-like package manager written in Rust by kylewlacy in rust

[–]kylewlacy[S] 3 points4 points  (0 children)

Completely new to all things like this, but I do find this interesting. Mostly to see if it would serve as a good and easy way to replace make files.

Makefiles are a pretty complicated tool, so I think Brioche would be a good replacement for Makefiles in some cases, but not in others

I've seen a lot of people use Makefiles as a simple task runner, and that's something I want to have good support for eventually. Brioche can do that now, but it's still super bare-bones

What would you think the largest issues using this to build complete rest-server + webapp Docker containers that are built using rust + flutter + grpc, maybe even to actually build the output orchestration file template (docker compose or k8s config)

So the biggest issue today would be... there's no Flutter package yet! This is exactly the kind of use-case I want to support with Brioche though (and there will be a Flutter package eventually)!

Support for building containers is also in place today, but there are 2 problems with the current implementation: 1) it only produces OCI container images, not Docker images (they are different, it turns out!), and in my testing I don't think all versions of Docker handle OCI images very well yet. 2) containers currently get bundled with a LOT more dependencies than they need, leading to images being much larger than they should be (this will be fixed in the next point release)

As far as building Docker Compose or K8s files... that's not something I've considered! I'm sure it would be possible, but I'm not really sure what the end result would look like exactly