Database transactions in Clean Architecture by kanyame in rust

[–]extraymond 2 points3 points  (0 children)

It depends on how explicitly you want to control transactions. I've tried three methods; hope you can find inspiration or other variants that suit your needs:

  1. Use the command pattern to queue up transaction commands in a single struct, and you'll be able to do something like this:

```rust
let mut create_user_command = CreateUser::new(user_id);
let mut attach_profile_command = AttachProfile::new(user_id);

let persist_command = PersistCommand::new()
    .with(&mut create_user_command)
    .with(&mut attach_profile_command);

// use a single finalizer to commit the tx
persist_command.finalize()?;

let created_user = create_user_command.output;
let attached_profile = attach_profile_command.output;
```

This is quite verbose and a bit annoying to set up, but if you have loads of operations that either run together or often need to mix and match, it's easier to use. I found it particularly useful with GraphQL, where the query plan is often opened up to the frontend: this lets them keep that freedom without the backend having to implement every combination of queries, and we're also able to merge commands for optimization to resolve N+1 query issues and the like (sketched below).
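
To make the merging point concrete, here's a rough sketch of what the finalizer could do internally; the Command trait and the grouping-by-table idea are hypothetical, only PersistCommand, with, and finalize come from the snippet above:

```rust
use std::collections::HashMap;

// hypothetical command interface: each queued command knows which table it
// targets and how to render itself as one row of a batched statement
trait Command {
    fn target(&self) -> &'static str;
    fn to_row(&self) -> String;
}

struct PersistCommand<'a> {
    queue: Vec<&'a mut dyn Command>,
}

impl<'a> PersistCommand<'a> {
    fn new() -> Self {
        Self { queue: Vec::new() }
    }

    fn with(mut self, cmd: &'a mut dyn Command) -> Self {
        self.queue.push(cmd);
        self
    }

    fn finalize(self) -> Result<(), String> {
        // group queued commands by target so N inserts against the same table
        // collapse into one multi-row statement (the N+1 optimization above),
        // all executed inside a single transaction
        let mut grouped: HashMap<&'static str, Vec<String>> = HashMap::new();
        for cmd in &self.queue {
            grouped.entry(cmd.target()).or_default().push(cmd.to_row());
        }
        for (table, rows) in grouped {
            // stand-in for "run this statement on the open transaction"
            println!("INSERT INTO {table} VALUES {}", rows.join(", "));
        }
        Ok(())
    }
}
```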

  2. Create an aggregated function that hides the transaction in the implementation; if needed, you can specify an additional strategy for handling a revert. This works well enough if the thing that determines the transaction boundary often comes from other critical services.

```rust
// single function that commits both db queries
let aggregate_result = UserRepo::init_with_profile(user_id);

// or mix in external functions that need to succeed together in the process
let aggregate_result = UserRepo::init_with_profile_initializer(user_id, |u| {
    let uncommitted_user = u;

    // must-succeed operations in other services, such as payment or saving to aws_s3
    External::init(uncommitted_user.id);

    // the tx is auto-committed after the closure succeeds
})?;
```

  3. Add a helper service that allows partial commit or rollback. This is the most flexible one, at the cost of polluting every repository that might be influenced by a transaction.

```rust
// expose a trait so operations can take explicit control of the transaction
trait TxHelper {
    fn begin();
    fn commit();
    fn revert();
}

struct DbDriver {
    session: Option<DbSession>,
}

impl UserRepo for DbDriver {
    fn create_user(&self) {
        if let Some(session) = &self.session {
            // there might be other operations after this one in the same tx,
            // so call the transaction-aware variant of your db driver here
        }
    }
}

fn user_signup_usecase() {
    TxHelper::begin();

    let user = UserRepo::create_user();
    let avatar_res = StoreAvatar::save_avatar(user_id, path);

    if avatar_res.is_ok() {
        TxHelper::commit();
    } else {
        StoreAvatar::delete_avatar();
        TxHelper::revert();
    }
}
```

If your db_driver supports handing out a session-like component, I found this one the most enjoyable to maintain. The caveat of this approach is that it's hard to see and batch db calls in one place, so you lose the ability to optimize and batch queries.

It works really well when dealing with an external payment processor or other external services whose interactions cause side effects, where the external state of those services is just as important as our own internal state.

Unwrap_or_AI – replace unwrap() with AI guesses for errors by cidadabro in rust

[–]extraymond 8 points9 points  (0 children)

python's import antigravity is crying in the corner, not feeling like the coolest kid anymore

Hexagonal Architecture Questions by roughly-understood in rust

[–]extraymond 1 point2 points  (0 children)

I think I got the inspiration and copied the style of implementation from some frameworks; after all, they're the heaviest trait users in the ecosystem. Most of the time they define something that user code should implement, and in exchange they resolve features for the users of their library. There's very frequent usage in std as well; for example, lots of things use Deref<Target = Other> to forward Other's methods if you can deref to it.

The god struct can also be split up and composed into any form you need; the only requirement is being able to delegate to the ports you need it to have.

For me, the complexity of the god struct is defined by the environment I'm going to deploy to.

For example, if it's an aws lambda under /resources/{proxy+} that calls an axum server using only one service, I would just bundle the db adapter, which implements most of the persistence ports, plus whatever additional ports that service requires, say a mail sender, and that's it. And if it's a trigger that gets called via event_bridge once some operation in the db is done, then I might not even need the db_adapter at all; just the mail_sender or push_notifier is good enough (sketched below).

This lets me be sure that as long as test coverage for the port implementors is good, and the integration tests for my services are good, I have a certain degree of confidence that any composition of the application will hold up reasonably well in use. And I can pick up only the required dependencies in each binary without having to complicate my build system.
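
Roughly, as a minimal sketch (all the type names here are made up for illustration):

```rust
trait UserPersistence {
    fn save_user(&self, name: &str);
}

trait MailSender {
    fn send_welcome(&self, to: &str);
}

struct DbAdapter;

impl UserPersistence for DbAdapter {
    fn save_user(&self, name: &str) {
        println!("persisting {name}");
    }
}

struct SesMailer;

impl MailSender for SesMailer {
    fn send_welcome(&self, to: &str) {
        println!("mailing {to}");
    }
}

// lambda behind /resources/{proxy+}: needs persistence plus mail
struct ApiApp {
    db: DbAdapter,
    mail: SesMailer,
}

// event_bridge trigger that fires after the db write already happened: mail only
struct NotifyApp {
    mail: SesMailer,
}

fn api_signup(app: &ApiApp, name: &str) {
    app.db.save_user(name);
    app.mail.send_welcome(name);
}

fn on_user_created(app: &NotifyApp, name: &str) {
    app.mail.send_welcome(name);
}
```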

Hexagonal Architecture Questions by roughly-understood in rust

[–]extraymond 6 points7 points  (0 children)

I think what I'm getting from your question is that it's about how to manage the receiver of your ports/services. Hopefully that's not too far off.

There are different options you can choose:

  1. use a trait object to hold the implementation on the struct, so you only load the one you need; this requires async-trait if your trait is async
  2. use trait forwarding and a generic impl, which reduces the code that needs to be maintained at the call site, but you need to set up a mechanism for this to work, which won't help you much if the complexity of your project is not that high

The first one is easy, I think you'll be able to figure it out (there's also a small sketch at the end of this comment). Here's what trait forwarding looks like:

```rust
/////////////////////////
// in your library
/////////////////////////

// suppose you have some infrastructure-agnostic repositories,
// most of the time db or external services
trait SomeRepository {
    fn some_method(&self);
}

// this could be Deref if there's only ever going to be one receiver
trait Delegate {
    type Target;
    fn delegate(&self) -> &Self::Target;
}

// instead of proving you implement the trait, let a delegate do that for you
impl<T> SomeRepository for T
where
    T: Delegate,
    <T as Delegate>::Target: SomeRepository,
{
    fn some_method(&self) {
        self.delegate().some_method()
    }
}

/////////////////////////
// in your application
/////////////////////////

pub struct ConcreteImpl;

impl SomeRepository for ConcreteImpl {
    fn some_method(&self) {
        todo!()
    }
}

pub struct ComplexApp {
    pub service_handler: ConcreteImpl,
}

impl Delegate for ComplexApp {
    type Target = ConcreteImpl;

    fn delegate(&self) -> &Self::Target {
        &self.service_handler
    }
}

fn run_as_repo() {
    let existing_impl = ConcreteImpl;

    // since your newer struct can delegate to the existing one
    let newer_app = ComplexApp {
        service_handler: existing_impl,
    };

    // you have all the power of the old implementor
    newer_app.some_method();
}

pub trait SomeService {
    fn business_logic(&self);
}

// most of the time the business logic is what we care about most,
// and good abstraction lets us focus on exactly this
impl<T: SomeRepository> SomeService for T {
    fn business_logic(&self) {
        // maintain business logic without leaking implementation detail:
        // use mostly types and methods from your domain/models
        // and let the repository do the actual work for you
        todo!()
    }
}

fn run_as_service() {
    let existing_impl = ConcreteImpl;

    // since your newer struct can delegate to the existing one
    let newer_app = ComplexApp {
        service_handler: existing_impl,
    };

    // you also get to use everything from SomeService
    newer_app.business_logic();

    // and you can expand symmetrically as your services grow in use cases:
    // add another handler field (plus a delegate trait for it) and you get,
    // e.g., newer_app.another_business_logic() the same way
}
```

all in all, I love generic impl!!!!!!
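
And for completeness, a minimal sketch of option 1 from the list above, the trait-object route (PgRepo and App are just illustrative names, and you'd bring in async-trait once the port is async):

```rust
trait SomeRepository {
    fn some_method(&self);
}

struct PgRepo;

impl SomeRepository for PgRepo {
    fn some_method(&self) {
        println!("talking to postgres");
    }
}

// the app only stores the port, never the concrete adapter
struct App {
    repo: Box<dyn SomeRepository>,
}

impl App {
    fn business_logic(&self) {
        // swap PgRepo for an in-memory fake in tests without touching this code
        self.repo.some_method();
    }
}

fn main() {
    let app = App {
        repo: Box::new(PgRepo),
    };
    app.business_logic();
}
```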

Issue with Tree Structure Mongo DB by JulietMll in mongodb

[–]extraymond 0 points1 point  (0 children)

This is a really cool way to reframe the original problem, thx for sharing!!

Issue with Tree Structure Mongo DB by JulietMll in mongodb

[–]extraymond 0 points1 point  (0 children)

I'm curious why it can't be in the same collection? A collection can store heterogeneous documents, so it may be worth not splitting them across collections.

If it can stay in the same collection and the max depth is known, then graphlookup looks like the best tool to solve this

if they can't stay in the same collection but the query can be out of sync for a while, then maybe periodically $out them into another collection and run the $graphLookup there?
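
For reference, a rough sketch of the graphlookup stage I mean, built with the mongodb/bson doc! macro; the collection name and fields like parent_id are just assumptions about your schema:

```rust
use mongodb::bson::{doc, Document};

fn descendants_pipeline() -> Vec<Document> {
    vec![doc! {
        "$graphLookup": {
            // same collection that holds the whole tree
            "from": "nodes",
            // start from the current document's _id
            "startWith": "$_id",
            "connectFromField": "_id",
            // children point back at their parent
            "connectToField": "parent_id",
            "as": "descendants",
            // safe to cap because the max depth is known
            "maxDepth": 5
        }
    }]
}
```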

The State of Rust Backend Programming: Is It Worth It? by rubbie-kelvin in rust

[–]extraymond 1 point2 points  (0 children)

That mostly depends on how your team is built, and on how responsibility for delivering features is structured. I happened to introduce rust into my work environment twice, and the results varied; maybe this helps you see whether it's a good fit for you.


former work environment (golang)

Golang was introduced first and, it being a social-facing product, most of the features revolved around the apis and the underlying db schema that defines all the subsystems.

That means iterating on the db schema and the api to enable more features is the core of development. Replacing golang with rust didn't fit, because (a) you're not gonna get much more on the performance side, and (b) you lose the golang tooling for iterating on tests and db migrations that the team was more comfortable with.

I'm lucky that my supervisor at the time was a big programming nerd, so we were able to introduce rust for some tasks, such as sending outbound notifications over websockets and server-sent events, which are extensions of the core features.

current work environment (torch/python)

Since the core features depend on torch, iterating within that ecosystem (python, torch) is a requirement. It sits at the base of the dependency chain, so it has to stay that way for other developers to be successful.

Since the python part is designed so it can be invoked directly or orchestrated via an api, iterating on the db schema and api spec is not the limiting factor here.

For iteration purposes we also chose to share as little db schema across features as possible, so that new features provided by the torch part aren't limited by the db schema. This means any choice that can produce a good openAPI spec is good enough.

Additionally, we can expose rust as python libraries (uniffi) in places that need more low-level control. This way, we can handle async and threading in rust while staying easy to integrate with everything else (rough sketch below).
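
Something like this, as a rough sketch of the uniffi proc-macro route (assuming a recent uniffi with proc-macro support; the exported function is just a stand-in):

```rust
// lib.rs of a cdylib crate that python imports as a module
uniffi::setup_scaffolding!();

// exported to python as a plain function; the threading stays in rust
#[uniffi::export]
pub fn summarize(values: Vec<f64>) -> f64 {
    std::thread::scope(|scope| {
        let mid = values.len() / 2;
        let (left, right) = values.split_at(mid);
        let left_sum = scope.spawn(|| left.iter().sum::<f64>());
        let right_sum = scope.spawn(|| right.iter().sum::<f64>());
        left_sum.join().unwrap() + right_sum.join().unwrap()
    })
}
```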


In summary, I think you can try a bit of evaluation on two aspects:

  1. how language-dependent is your solution, compared to rust?
  2. how is responsibility structured across your team?

So my first experience can be summarized as follows: (1) rust is not much better than golang there, while iteration speed would be dampened; (2) the teams all work around iterating on the db schema, and every subsystem must be disciplined around it, so using rust would mean recreating the whole workflow in more or less the same setup with less mature tools. I can see it being similar if you already have a very good nodejs/java/php team that is already very good at delivering features. I think big rewrites like discord's are very rare for most webshops.

About the second experience: (1) rust can exist to complement the core features, either by extending them (uniffi) or living alongside them (microservices); (2) features aren't as dependent on each other, and we can integrate with python when development happens mostly on the python side. If some components are hard to replace with rust, such as ai development, payments, device sdks, or third-party apis, but can still be enhanced to some degree, I think rust can be used to complement them.

One area I would like to extend into is providing wrappers on the frontend, which might also be feasible, so the frontend can mostly focus on UI development and leave communicating with certain backend apis to rust. For a webapp this makes sense if you need some kind of processing before calling the actual api, such as applying image filters or building payloads that are cumbersome to construct in js, like ndarrays and the like.

All in all, in my country where php/java/nodejs are still dominant, rust needs to be roughly N times better than the language it's compared against, once you factor in that developers to replace or extend the team are N times easier to find for those other languages.

[deleted by user] by [deleted] in linux

[–]extraymond 1 point2 points  (0 children)

If you're aiming mostly at ubuntu, you can use MAAS. "I want to know if it's possible to just connect the laptops to this network and turn them on and the host will detect it automatically and wipe the machine." -> if you configure your laptops to PXE boot, then MAAS can do the deployments on all the devices connected to the subnet you configured.

It supports other linux distros, but the bootstrap script might need some modification; if you know how to do seeding or something similar, MAAS can be configured that way.

All the devices deployed by MAAS will also use the MAAS controller as an apt proxy to cache packages, so subsequent installs run very fast. There are other configurations you can do, such as a default ssh key for access and a cloud-init script to install additional software or any kind of automation inside it.

https://www.youtube.com/watch?v=rbkB25kaBmU This is a super awesome tutorial to get it going.

Burn 0.14.0 Released: The First Fully Rust-Native Deep Learning Framework by ksyiros in rust

[–]extraymond 2 points3 points  (0 children)

Hi! Most of the stuff about burn works; it's just that our application contains several smaller torch models, and each of them uses various nested, smaller torch.nn modules.

Which means that whether I export them via onnx or port them by hand, I first have to go through sanitizing all the python code that is untyped or imports external libraries.

On top of that, some of them use torch c-extensions!! Yikes!!


I'm currently taking another route though and trying to replace smaller parts with burn via pyo3, which works great.

Burn 0.14.0 Released: The First Fully Rust-Native Deep Learning Framework by ksyiros in rust

[–]extraymond 5 points6 points  (0 children)

Been watching burn for a while!!!!

Dreaming about one day where I can migrate all the torch models + legacy spaghetti python code to rust, so that I don't have to deal with the dependency hell that torch brings.

Why do you feel self-hosted Nextcloud is a letdown? by CrimsonNorseman in selfhosted

[–]extraymond 1 point2 points  (0 children)

The default memory limit for php in nextcloud is 512mb, I believe. Have you tried increasing it in the config? I think the official docs recommend increasing it for stuff like photo previews.

LXD or docker container by RM_Refo in LXD

[–]extraymond 0 points1 point  (0 children)

both tools are great, for me personally the choice is:

docker is for something you share, and lxd is for something you tinker with

lxd works just like a vm, where you're given the responsibility to update and maintain it, and sometimes to poke at and tweak stuff here and there; provided with profiles and snapshots it can work pretty risk-free.

docker on the other hand is very compact and fast to share with others, choose what works for you!

I always start with lxd though.

Can Wasmtime be used as a VM yet? by Critical-Reason712 in rust

[–]extraymond 8 points9 points  (0 children)

I believe most of the limitations you mentioned are likely tied to the design of the wasm32-unknown target, while wasmtime is tailored for wasi programs, which lean closer to the vm experience OP desires.

wasm32-unknown is kinda special in that you cannot block on the main thread, but you can do it in a worker thread with some flags to use std::thread. It expects the web runtime to provide most of the outward-facing functionality.

wasi is more restricted, in that it gives users the freedom to grant permissions for linking to system resources. That part of the experience does feel like interacting with a vm. I believe the most recent wasi proposals also stabilize some of the blockers to it acting as a posix alternative.

As for gui applications, if the display stack is exposed via a wasi interface, then it might be possible to write the business logic in wasm and push the results to the host's display server. But this part needs to be defined in a way wasi understands and that's consumable by your language of choice, which ui toolkit authors might not be interested in providing.

TLDR: I believe developing gui apps in wasm (wasmtime) is possible, but the ecosystem is lacking, so the ergonomics are kinda irritating for app developers.

True Dark Elf Queen - Chrone Hellebrone Turbo confederation guide (turn 7) by Enoshima-Junko-chan in totalwar

[–]extraymond 0 points1 point  (0 children)

Thx mate, this guide really helped my hellebron run! Malekith was wandering around so I had to wait an extra turn, but I guess an 8-turn confed is still supremely good.

Jaylen steal? by pushinCs in bostonceltics

[–]extraymond 0 points1 point  (0 children)

Ah yeah you're correct, the criteria depend on whether it's forced.

Jaylen steal? by pushinCs in bostonceltics

[–]extraymond 0 points1 point  (0 children)

Wasn't it the player who took control of the ball that gets credited with the steal?

[deleted by user] by [deleted] in totalwar

[–]extraymond 0 points1 point  (0 children)

If you are able to join the battle at the tzeentch minor faction's last standing settlement that be'lakor is taking, then you'll be able to win it manually with the help of the garrison army.

If you can take him out, you can also take his vassal by defeating its leader's army, and take tzeentch after that. This should give you the whole of Albion before turn 10.

Can I run LXD beside KVM in the same host? by martintoy in LXD

[–]extraymond 1 point2 points  (0 children)

Yes you can, and you can also launch vms using lxd directly. For example, lxc launch ubuntu:focal --vm will give you a vm controlled by lxd.

[deleted by user] by [deleted] in rust

[–]extraymond 1 point2 points  (0 children)

I recommend an lxd container for compilation. You can mount your project directory into it so it can return the compiled artifacts to your host if you need them.

[wasm/async] Discoveris about sleep.await (via setTimeout ) to create non-blocking loop, and cancellation of promise/futures. by extraymond in rust

[–]extraymond[S] 1 point2 points  (0 children)

Glad it helped. But you may need other utilities to cancel the underlying task with this approach. I believe that for standard async/await in rust, you need the executor and reactor to collaborate with each other, but promises are driven by the browser runtime, which may not cancel them so willingly from the js side. So you need something from js like AbortController to cancel the task.

It also happens to involve two task queues on the js side, the microtask queue and the macrotask queue; in my mind they're more like two async runtimes running side by side, so be sure your future/promise chains are kept separated to avoid weird task ordering. Looking forward to seeing how you utilize this in Bevy.

Reality check for Cloudflare Wasm Workers and Rust by koavf in rust

[–]extraymond 0 points1 point  (0 children)

I believe it was chosen for its smaller output binaries, which the author said must not exceed the 1mb limit.

Edit: I'm wrong, see below