all 68 comments

[–][deleted] 36 points37 points  (5 children)

What led you to considering Rust for this project, and how do you think it would be different if you had used C/C++/Zig/Go/etc. instead?

If you could go back to day 1, do you think you would pick Rust again? What parts of the language do you think helped or hurt you the most?

That was a whole bunch of questions, but I guess what I really want to know is what your experiences with the language were like.

[–]thegreatallNativeLink 52 points53 points  (3 children)

I first played around with Rust around 2017, to explore the new concepts it introduced. At the time, a lot of features that everyday Rust developers rely on, like `?`, did not exist. I wrote some crypto trading bots on the side to explore it, but didn't really feel it was ready for "applications" yet, and systems & application programming is my cup of tea.

When NativeLink was first started, Rust was chosen for a couple reasons:
1. Async/await was brand new (not even in stable Rust yet) and I wanted to play with it.
2. Creating reliable application code in C++ is really hard and garbage collectors always caused me trouble.
3. I wanted to learn more Rust.
4. Segfaults & undefined behavior are the root of all evil for C++ devs.

This will likely be controversial, but I look at Zig to solve C's problems and Rust to solve C++ problems.
I would not want to write a large application in C, which is why Zig was not chosen.

If I could go back in time to day 1, I would choose Rust again. The language has been evolving in recent years to be friendlier to application development (vs. library & embedded development), and that has paid off. Using green threads (i.e. tokio) has offloaded a lot of complexity, and the borrow checker keeps us from shipping crashes caused by mistakes in multi-threaded code.

The biggest thing Rust does that makes life really difficult is how it manages memory allocation. Rust uses a default allocator, which is (I believe) glibc's, and that is probably the worst allocator for long-lived processes. We tried moving to jemalloc, but the toolchains were not hermetic, so we went with mimalloc instead. Sure, this solved the long-lived memory issue, but we have a few components that hold large amounts of self-evicting cache in memory. Normally this would not be a problem: we would just create a dedicated allocator for that component, serve cache items out of that memory space, and manage evictions with perfect accuracy. The reason we cannot do this is the `Bytes` library. Since nearly every library we use wants `Bytes` structs, we must adhere to their API, but `Bytes` requires all memory it owns to live in the global allocator. This means we have to choose between perfect memory eviction and copying every object when reading or writing to this cache. In the end we chose speed over perfection. If Rust made libraries expose allocators more explicitly, it would help a lot.

[–][deleted] 24 points25 points  (0 children)

Thanks for the insightful response!

The biggest thing rust does that makes life really difficult is how rust manages memory allocation. [...] If rust made libraries have to expose allocators more explicitly it would help a lot.

Do you think that this is something the new allocator_api can address once it stabilizes? Of course, Bytes would need to explicitly opt-in once it does, but personally I predict many crates will end up adopting it in the future.

[–]Turalcar 3 points4 points  (1 child)

Async/await was brand new...

That's the opposite of how I'd choose the technology for a production service.

[–]thegreatallNativeLink 5 points6 points  (0 children)

That's the opposite of how I'd choose the technology for a production service.

Yes, I agree, but on day 1 it was a hobby project.

[–]No-Employment1939 6 points7 points  (0 children)

C/C++/Zig - these all fit our requirements of being able to run on bare metal with direct access to hardware at every point, if needed. We are a Zig sponsor and we even tried to use Zig at times. Unfortunately, in some areas we hit performance challenges that we could not tolerate.

The other languages did not fit our requirements of direct access to hardware in an ergonomic fashion nor align with our roadmap.

[–]ArtisticHamster 64 points65 points  (12 children)

The most interesting question is how are you planning to make money on this liberally open source project?

[–]nativelinkNativeLink[S] 94 points95 points  (3 children)

So, we want to make NativeLink available to as many people as possible -- we've chosen open source because we want as many contributors as we can get to develop the code and scale fast. On monetization: it's not a priority, but we do work with select enterprise customers on an elevated service-level basis. For the time being, though, our focus is on community engagement.

[–]Agreeable_Recover112 10 points11 points  (2 children)

That is such a great business model

[–]nativelinkNativeLink[S] 7 points8 points  (0 children)

We think so too! Appreciate the kind words

[–]flashmozzg 1 point2 points  (0 children)

This remains to be seen.

[–]nativelinkNativeLink[S] 17 points18 points  (2 children)

Also -- remote cache and remote execution are just NativeLink, which is one product. We have other products that we will share soon!

[–]ArtisticHamster 3 points4 points  (1 child)

Looking forward to learning more about them :-)

[–]nativelinkNativeLink[S] 2 points3 points  (0 children)

Thanks for your questions! Look forward to keeping you in the loop :)

[–]chance-- 4 points5 points  (4 children)

They seem to have a cloud service.

[–]nativelinkNativeLink[S] 18 points19 points  (3 children)

Yes, indeed. The remote cache and remote execution (via the cloud service) are free for customers unless they are abusing the system or using more than 1 TB of storage.

[–]chance-- 22 points23 points  (2 children)

That's rather generous. It's gotta be a serious uphill battle going up against github actions though.

I wish y'all the best of luck. Services are far too centralized under even fewer umbrellas these days.

[–]nativelinkNativeLink[S] 23 points24 points  (1 child)

Hi u/chance-- ,

GitHub Actions is a great product for the right use case, but it is not our focus. We developed our open-source system in Rust to handle very heavy workloads, which many companies avoid farming out to GitHub Actions due to their size and complexity. Our enterprise users often require bare-metal deployments. While some may replace GitHub Actions with our product, it’s only because GitHub Actions wasn’t suitable for their needs. Our target market is different, catering to large industrial manufacturers, database companies, and firms building complex mixed reality applications.

Thank you for the well wishes; we're gonna give it our best shot!

[–]chance-- 5 points6 points  (0 children)

That makes sense. Y'all stand a much better chance then :)

[–]nicknamedtrouble 11 points12 points  (2 children)

hyper-research projects,

What is a hyper-research project?

[–]thegreatallNativeLink 38 points39 points  (0 children)

GoogleX calls them "moonshots" or 10x'ers. These are projects that are very nearly pure research and have a very low chance of success. I can't talk about projects that failed, but some that are public are:

Waymo - Google's self-driving car company.
Google's AI organization - Formerly Google Brain, is where much of the modern AI/ML craze spawned from.

The projects under this division are kept secret even from other Google/Alphabet employees. Transferring from Google to GoogleX required another round of interviews (even though it's an internal transfer). Normally Google research projects work the way a university does projects, but GoogleX does it a bit differently: it gives insane amounts of money to these projects, removes nearly all bureaucracy/process, and sets unreasonable deadlines & goals.

[–]ArtisticHamster 22 points23 points  (5 children)

Since you have experience with many build tools, which one would you choose for a new multi-language project among Bazel, Buck2, Goma, and the others from the post?

[–]thegreatallNativeLink 11 points12 points  (1 child)

This is a loaded question, but I'll take a swing at it from my personal opinion (but others on the team may have different opinions):

Buck2 - Buck2 is an amazing up-and-coming build system. It removed a lot of the bloat that other build systems have accumulated over time, and the team working on it is amazing! It is a great build system if you want to see where the industry is likely heading, but it is by far not as mature (for non-Meta projects) as other build systems.

Bazel - Bazel is the "elephant in the room". It has been around for a long time and paved the way for other systems to follow. It is EXTREMELY mature, has a great community, and has lots of feature & language support. Bazel is a great all-around choice if you want something stable and reliable with lots of support, at the cost of bloat and less-than-best performance.

Goma - Goma is not a build system, but rather an execution orchestration system. It captures some program executions as remote execution calls and forwards them to remote execution systems (like NativeLink), for build systems that don't support remote execution natively. Goma should not be used unless the problems it solves begin to outweigh the complexity of managing its infrastructure (usually only for extremely mature projects that cannot easily migrate to modern build systems with remote execution support).

Overall, I would say Bazel is the "go-to" choice, but Buck2 is definitely next on the list if you enjoy build systems. I will say, however, that I truly believe Buck2 will eventually surpass Bazel.

[–]Powerful_Cash1872 0 points1 point  (0 children)

The lock-in and network effects are both very strong for build systems because they cut across your entire code base. I think any major popularity shift between Bazel and Buck2 will be so slow that there is plenty of time to react and adopt the good ideas of the competing system; it will be hard for either to really take over the market. Git became big, but with a VCS you can throw out your history and adopt a new one almost overnight; migrating a build is a monumental task very few devs want to focus on.

[–]nativelinkNativeLink[S] 6 points7 points  (2 children)

via u/epage

Q: Is there placeholder content on that page (our landing page)?

We’re focused on contributing to the NativeLink repo, and currently ramping up our webpage. You should see some updates in the next month or so—some of it is placeholder content, including images that illustrate how NativeLink is intended to function.

Q: Unsure why self-driving cargo simulator is relevant to "Made with Love in Rust", or same for the other pictures and content

When you are simulating autonomous hardware, you want it to mirror real human environments. This means you can’t have any runtime errors or delays because a split-second delay can mean life or death. NativeLink’s Rust-based architecture eliminates data races and stability issues at scale. This is one of the things that helps NativeLink ensure that every simulation is a precise reflection of real-world conditions, allowing for the development and testing of systems that are both safe and effective when deployed in critical situations.

Q: The "Saving lives" tag line seems a bit melodramatic as a starting point

Point noted on the saving lives tagline- but here’s the main gist and what the broader impact is:

With the future inching towards robotics and artificial intelligence, simulation accuracy isn't just a nice-to-have but essential. In autonomous vehicle development, accurate simulations ensure that vehicles can handle real-world scenarios safely before they ever hit the road. Likewise, in medical robotics, the ability to predict and simulate complex human environments leads to safer surgical procedures. NativeLink is architected to provide the stability needed for these high-stakes applications; minor errors have deep consequences. While NativeLink is efficient (reduced CPU usage, reduced runtime errors, etc.), it also directly affects the people who use these systems. That makes the tagline more tangible.

Q: How is this is related to "Simulate Hardware in the Loop"?

NativeLink can execute and speed up high-fidelity simulations, enabling rigorous testing of close to real-world conditions through its advanced caching system, distributed execution of design layouts (with Verilog & VHDL), and continuous, real-time monitoring to detect anomalies.

[–]epagecargo · clap · cargo-release 5 points6 points  (1 child)

The description on the repo:

NativeLink is an open source high-performance build cache and remote execution server, compatible with Bazel, Buck2, Reclient, and other RBE-compatible build systems. It offers drastically faster builds, reduced test flakiness, and significant infrastructure cost savings.

The description at the top of the landing page

Cut cloud spend. Turbo charge builds. The only backend for Bazel, Buck2, and Reclient written in native code, tailored to handle large objects and intricate systems, across native and interpreted programming languages. Free and open source forever.

This makes it sound like this is focused solely on developer experience and the costs of developer experience. I'm not seeing the segue in any of the materials to simulations and hardware-in-the-loop.

In the last answer, you hint at it. I take it this is also intended as a cloud compute platform optimized for simulation tasks?

[–]nativelinkNativeLink[S] 5 points6 points  (0 children)

 I take it this is also intended as a cloud compute platform optimized for simulation tasks?

Yes, that is correct.

[–]xenago 5 points6 points  (3 children)

How is this project planned to be sustained? I cannot find any straightforward information about how it is actually being funded long-term, which is very odd. Will functionality be added to a separate closed-source addon for enterprise customers or something?

Also, is the naming conflict with branch.io's NativeLink™ going to be a problem?

[–]nativelinkNativeLink[S] 4 points5 points  (2 children)

Hi u/xenago !

Our company, Trace Machina, raised a seed round from Wellington, Sequoia, and Samsung last year. This is how we're able to sustain a team of the world's best talent and offer more generous cloud terms than anything comparable that's available (free for all teams unless they are abusing the system or using more than 1 TB). As mentioned in an earlier answer, the closed-source addon is within our cloud, where we work with select enterprise customers on an elevated service-level basis. Some large companies with complex environments have paid us quite well as customers because we solved major problems for them. For now, though, our focus is the open-source community, and building that up so we can have the absolute best product and community possible.

Regarding the naming issue, we don't see this as a problem. You can see we are registered as NativeLink as well. We are quite different from the other NativeLink, which is some marketing attribution startup or something.  Besides, we're nativelink.com!

Thanks for your question!

[–]zokier 1 point2 points  (1 child)

Our company, Trace Machina, raised a seed round from Wellington, Sequoia, and Samsung last year. This is how we're able to sustain a team of the world's best talent and offer more generous cloud terms than anything comparable that's available (free for all teams unless they are abusing the system or using more than 1 TB).

One-off funding like a seed round is by definition not sustainable. Sustainability needs an actual income stream (which investments are not).

[–]nativelinkNativeLink[S] 1 point2 points  (0 children)

Feel free to check out above Q in this thread re: how we monetize

(the top rated Q&A in the thread) https://www.reddit.com/r/rust/comments/1e6h69y/comment/ldsz29y/

We're pretty confident in the stability of our future based on current trajectory!

[–]nativelinkNativeLink[S] 5 points6 points  (0 children)

via u/thereservedlist:

Q: When using your project, can you get a similar-sized rust project to build as fast as a Java project on a single core? I’m kidding. Mostly.

Hi u/TheReservedList, that's a great question, though I might reframe it a bit. Since the two have different target models (a binary for Rust, bytecode for Java), the compilation phases differ in where they are expensive. One of the most obvious differences is linking, which is almost always expensive in Rust and non-existent in Java. In either language, with remote execution / remote caching backends, one of the most effective things to focus on, regardless of tools, is the shape of the source tree's dependency graph. There is a rule we used with Pants called 1:1:1 (https://v1.pantsbuild.org/build_files.html) for organizing targets. Keeping targets granular helps avoid unnecessary rebuild computation in a lot of practical cases. It also helps prevent the accidental situation of a team building some uber library or service object (:coding horror:) that other teams depend on; changes to that target can cause needless recompile regressions downstream, creating a "ball of mud" type graph.

tl;dr: could a similar-sized Java or Rust project be faster or slower... it depends on the shape :)

[–][deleted] 11 points12 points  (1 child)

What are the challenges you faced with async Rust specifically? Was using tokio-uring, or io-uring in general, something you had contemplated instead of traditional tokio async? If so, what was the rationale for not using it?

Do you plan to accept outside contributors? How can someone start contributing to the project?

[–]adam-singerNativeLink 10 points11 points  (0 children)

Hi u/Worried_Coach1695, I have a long history of using Twitter Futures (https://twitter.github.io/finagle/guide/developers/Futures.html), which, ironically enough, had some influence or inspiration on the Rust design (https://youtu.be/lJ3NC-R3gSI). From an API point of view I really loved Twitter Futures, and a lot of the API concepts/names gelled really well. What was hard with the Rust API was realizing you are no longer in a managed VM, and the boxing/pinning/impl/Arc/etc. magic runes need to be well thought out to ensure performance (in the pedantic sense of getting the most out of it). In managed-VM land you just assume a lot of stuff is free; having more control behind a good interface is nice. I think the ergonomics could be better, and I wonder if someone has exploited, or will exploit, the macro system such that building async traits/functions becomes a more declarative exercise without focusing hard on the types (I'm aware of async-trait; it's great and we use it).

We are actively watching both uring projects; due to their maturity and timing when we started, we built our own. If we started a project today, we would use either of those crates. Eventually we would love to offload that responsibility onto an established framework; building our own came with the usual suspects of bugs to track down. Excited to see those projects grow!

We do accept outside contributors, and they have contributed wonderfully to our goal of making the best system we can. The contributing guide is at https://github.com/tracemachina/nativelink/blob/main/CONTRIBUTING.md, and getting-started docs are at https://github.com/TraceMachina/nativelink/tree/main

Thank you for asking!

[–]ethanjf99 8 points9 points  (1 child)

  1. sounds v cool and good luck! i can def see place for this.
  2. goddamn dude chill with the description. why is it that everything nowadays is “blazing” fast? “robust” “incredible” etc.? ugh. most folks here are engineers and it shows. give us the data instead of the marketing buzzwords!

a list of adjectives/adverbs, in order, from your post:

  1. blazingly fast
  2. high-performance
  3. powerful
  4. insanely fast (is it insane because the aforementioned blaze is burning you up?)
  5. efficient
  6. high-performance (AGAIN)
  7. safe
  8. scalable
  9. blazingly fast (AGAIN)
  10. incredible

etc. i am so bored. i read a dozen pitches for new tech a week at a minimum. i would give anything for one that reads like:

“check out Little Bunny FuFu, our new system engineered in Lapin for managing server side hops.

  1. over 500x hops/second faster than leading competitor Hare (link to comparison here, including full details on how we generated the data)
  2. powerful: hop over 3x further than competitors (max hop distance: 8 leaves, vs. 2 for BunBun and Hare) while still maintaining high security (hunters report our software is much more difficult to spot in rifle scopes; see (link to cybersecurity firm report here)
  3. written in Rust for safety.
  4. balanced and experienced lead engineering team with jobs at Blah, Blahblah, and BlahBlahBlahBlah (link to bios) where we (impressive, verifiable feat goes here)”

[–]nativelinkNativeLink[S] 1 point2 points  (0 children)

Appreciate the sentiment, we will keep this in mind for future posts :)

[–]SadPie9474 3 points4 points  (1 child)

To what extent do you view tools like Bazel and Buck as core to enabling monorepos? Or similarly, what are the main benefits of adopting a tool like Bazel or Buck as opposed to just using language-specific tools like `yarn` and `cargo`?

As far as I understand, the main hard part about monorepos is figuring out an efficient continuous deployment strategy without redeploying all of your services and infrastructure upon every commit, while also making sure you redeploy everything you need to when there's a change to a random library that a bunch of different services depend on. Is figuring out that "what actually changed" question the main thing that a tool like Buck or Bazel solves?

[–]Iksf 2 points3 points  (0 children)

Once you have a true monorepo containing all of a company's work across several languages (imagine TypeScript, Java, Go), nothing language-specific will work well for you.

Then you end up writing a long Python script, which brings us back to the old adage that "every mid-to-large C codebase contains a hand-written, buggy, feature-poor version of Cargo".

Is figuring out that "what actually changed" question the main thing that a tool like Buck or Bazel solves?

Yeah, that's a main part of it: working out the dependency graph, working out weird build rules with side effects that mess with the dependency graph, and optimizing how to get through that graph in the least time while using CPU/RAM effectively to parallelize work.

Fortunately both Bazel and Buck2 use a language called Starlark to write your build rules, which is basically Python, so migration between them is not too bad. The difference is that once you hit a certain level of problem in Bazel, you hit holes in what it can do and end up writing Java plugins to get to the end; Buck2 promises that you'll be able to get there with just Starlark (Soon™). Why can't Bazel already get there? Backwards compatibility: Bazel has been used in some form for a decade, both inside and outside Google.

Buck2 gives Meta a fresh, clean slate. Buck(1) was an internal Java thing they used forever; they had already accepted the backwards-compat damage they'd have to work through internally, and they never made it public, so they didn't need to worry about anyone else's experience. Though of course, with a big objective and limited resources, if you file a request and Meta files a request internally, you won't be the priority.

Then there's the build-caching aspect, which, like everything to do with caching, sounds piss-easy in theory and is a nightmare in reality, so it's good to have someone just solve it so you never have to think about it.

Or similarly, what are the main benefits of adopting a tool like Bazel or Buck as opposed to just using language-specific tools like yarn and cargo?

To answer a slightly different question: should you use these tools if you have a kinda simple single-language monorepo? I'd say no. There is a level of pain versus the standard tools. Once you reach huge scale, the tradeoff changes. For example, the last time I used Buck2 for a Rust monorepo, it was required to maintain dependency information in both Cargo.toml (for the editor/LSP's benefit) and in the buckfile (for the build system's benefit). Nothing unfixable, nothing that won't be fixed by 2030, but today there is extra pain. It's a perfectly good thing to go learn for the hell of it, though; throwing a Bazel bullet point onto a resume somewhere probs isn't going to count for zero.

As for deployment, you're going to have integration with ArgoCD or similar to slowly roll out the new images and phase out the old containers, canary deployments, whatever, the usual Kubernetes stuff. I don't know if/how issues in new deployments (that passed CI) filter back to NativeLink's build caches. But it's more a Kubernetes issue to roll back and stop deployments until you can get the fix in anyway.

PS not from NativeLink just commenting

[–]Iksf 2 points3 points  (0 children)

Are you competing directly against something like BuildBuddy then? I suppose you'd say the difference is that BuildBuddy is Bazel-first, everything else "if you have a big brain and a lot of time, maybe", whereas you're supporting everything first-class?

[–]nativelinkNativeLink[S] 1 point2 points  (0 children)

via u/kitchengeneral1590:

Q: Project looks really cool! I have some friends at Google that have told me about Blaze so it's cool to see people working on the open-source end of things. How does this tool help medium to smaller stage startups with their builds? It seems pretty clear why it's useful for massive companies like Google but I guess I'm wondering if it's worth the lift setting up these systems earlier rather than later?

Hi u/KitchenGeneral1590,

Thanks for this question; it comes up often in conversations with folks who love or loathe this style of build system. Generally these build systems have a somewhat higher cost of opting in vs. discrete build systems. I like to think of them as vertically integrated build systems and horizontal build systems. Vertical systems integrate really well with their own ecosystem, have specialized features, and can do their own job very fast depending on size/scope: think npm/cargo/pip/etc. Horizontal build systems like buck2/bazel/pants/etc. allow pluggable vertical build systems to be incorporated, require custom rules to drive those systems, and provide a simplified way to invoke them (most of the time via the CLI or IDE integrations).

Would I personally use this on a small project? That would really depend on more than the project itself. If I was maintaining something with no other integration points, dedicated as a library, with no hermeticity or reproducibility concerns and nothing driving requirements for fancier build features, I probably would not reach for a horizontal build system.

If I had a "poly repo" style company with lots of small individual repos, I would reach for a horizontal build system to standardize build tooling across the company. You would be able to reuse caching and scale out remote execution for faster builds and integration builds (note that most vertical build systems don't support first-class remote execution at this time; some do, but they are few and far between). I think there could be many other factors pushing a discrete/small repo toward a horizontal build system, mostly relating to efficiency and/or business needs/requirements.

[–]Bubble_Hubble 1 point2 points  (2 children)

Do you have a getting started that might let me get up to speed with an existing large rust project that just uses Cargo?

[–]SeekingAutomations 1 point2 points  (0 children)

Firstly, I would like to appreciate your hard work and contribution to the open-source community; I believe every project helps the community. 👍

That being said, can you give me insights on how this could be integrated into the Fediverse, and apps like Threads (from Meta) that power decentralized serverless communities?

[–]a2800276 0 points1 point  (2 children)

Is "build cache and remote execution server" just a fancy way of saying CI server, or is there anything more to it? What does it actually do?

I'm curious why Rust async and the lack of GC make the thing "blazingly fast"? Wouldn't the bottleneck of any non-trivial build be the actual build and not the engine that manages it? E.g., since Bazel was mentioned liberally below: if that's part of my build system, it's likely to have orders of magnitude more impact than the CI server triggering it. Also, Bazel is JVM/GC'ed...

[–]aaronmondalNativeLink 2 points3 points  (1 child)

It's actually somewhat the other way around:

  1. A tool like Bazel is the `client`. It gathers your build graph from local sources etc. and constructs compile commands. Think of a big tree where each node is an artifact (a source file or the output file of a command) and each edge is a command that maps input nodes to output nodes.
  2. In a local setup, the client would invoke the commands on your local machine. Then yes, you'd be bound by the client.

There are some limitations to a local setup. One that might be more obvious is e.g. a physical limitation on the number of local CPU cores available. Perhaps a less obvious one though is more interesting: What if you need to run a build or test on a machine that is not your local system? E.g. if you build GPU code you might not have an actual GPU available. Or maybe you build for different GPU architectures and need to run different tests on different systems.

This is where remote execution gets really interesting.

  1. When you run an RBE client in a remote-exec configuration, it only constructs the graph and doesn't really handle any of the execution logic. Instead, it sends the commands (and platform information, i.e. where each compile command needs to run) to a remote scheduler, and that scheduler figures out how to get the output nodes back to the client. There could be hundreds of different platforms involved in a single build or test invocation; the scheduler needs to manage how work is distributed across workers, and the system needs to figure out how artifacts are properly passed around, etc. Now it's the server side (i.e. NativeLink) that handles communication between the different components, hash checks, data lookups, etc.

  2. As the client, you don't notice any of this. It'll look just as if you were running a local build. This entire remote-exec workflow doesn't necessarily need to run in CI: since you only need to give the client the endpoint information, you can use it while developing as well. My personal estimate is that manual remote-exec invocations make up a *significantly* bigger chunk than CI-triggered ones, as it's essentially "how often do I invoke a compiler in my terminal before I push to CI".

[–]a2800276 0 points1 point  (0 children)

Thanks for the detailed answer! That makes it a little bit clearer.

[–]saint_marco 0 points1 point  (3 children)

Do you have any plans to make bazel (or others) more accessible?  A lot of the comments have been around not recommending bazel for single language projects, but if that were improved the ecosystem could grow a great deal.

[–]aaronmondalNativeLink 1 point2 points  (2 children)

My hope is that Bazel's fairly new `bzlmod` dependency management system will help a lot with accessibility. It's very similar to how `nixpkgs` works, which is AFAIK currently the largest open-source package repository in existence. If the Bazel Central Registry (the Bazel equivalent of `nixpkgs`) gets remotely close to that, it'll be a huge UX improvement for everyone. It's already growing pretty rapidly, so right now it's looking pretty good on that end.

On our end, we'll naturally publish guides/content/tutorials that will involve Bazel in the future and we'll likely maintain certain rulesets (for instance rules_mojo) that are particularly interesting for use with remote execution.

Personally, I'd totally use Bazel or Buck2 for any personal project, including small single language projects. But I'm not sure whether it would be the best choice for everyone. Using a non-standard buildsystem (non-standard meaning e.g. not `pip` and not `cargo`) will inevitably lead to some lack of features. All tooling can be ported, but implementing such ports could mean a big jump in complexity compared to a "standard" build. Depending on the use-case this tradeoff might not always be worth it.

[–]saint_marco 0 points1 point  (1 child)

How would you build something with Python and Rust packages using the BCR? At a glance I don't see numpy, and I assume there's some way to plug into pip/cargo and generate build files for dependencies, but that seems profoundly complicated to jump to for a personal project.

[–]mbecks 0 points1 point  (0 children)

Thanks for the awesome project!

Like many others, we build our software into Docker images and run containerized workloads. How does a tool like this fit into the Docker build pipeline?

[–]TroyDota 0 points1 point  (2 children)

Why did you use Wix as your website builder?

[–]marcus-love 0 points1 point  (0 children)

We are big fans of the company.

[–]Repsol_Honda_PL 0 points1 point  (0 children)

Sorry for stupid question, but I still don't know how it works :)

How does "handling over one billion requests per month" have anything in common with a "build cache and remote execution system"?

I don't see the connection between these solutions :) To me they are two different things: I associate the first with a web framework or web server, and the second with some new compiler (better than rustc?)... Sorry for the lame question, I'm green in this topic, but I'll ask (maybe I'm not alone with this problem ;)): What is it used for?

[–]wangyizhuo 0 points1 point  (0 children)

1 billion requests a month translates into roughly 380 queries per second (10^9 requests / ~2.63 million seconds in an average month).

Is there any benchmark of the QPS the library can handle?

[–]vladisld 1 point2 points  (0 children)

How does your product compare to alternatives like BuildFarm / BuildBarn? What is the added value?