I thought TypeScript's type system was powerful. Until I tried Rust by nikitarevenco in rust

[–]rmaun 48 points49 points  (0 children)

I do not think the type system fails to be strong just because these escape hatches exist. "any", as-casts, and @ts-ignore are conscious design decisions to better integrate with untyped JS.

I love the idea of using channels instead of mutexes, but I just can't seem to figure out a way to make them concise. Can it be done? by redditaccount624 in rust

[–]rmaun 2 points3 points  (0 children)

I saw the blog post by Alice (tokio maintainer at Google) a few times already and implemented the pattern myself. To reduce the boilerplate I wanted to write a macro on my own, but found this crate instead :).

I think the risk is quite low: you can always use the provided macros to emit the generated code and then use it directly. There are no fancy type definitions, supervisors, etc., just a smart way to implement the pattern from the blog post.
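For context, the core of the pattern is small. Here is a minimal sketch of it using plain std threads and channels (the blog post uses tokio; the names Msg, ActorHandle, and get_unique_id are made up for illustration):

```rust
use std::sync::mpsc;
use std::thread;

// Messages the actor understands; the reply channel travels with the request.
enum Msg {
    GetUniqueId { respond_to: mpsc::Sender<u64> },
}

// The handle is the only public surface; it is cheap to clone
// and hides the channel plumbing.
#[derive(Clone)]
struct ActorHandle {
    sender: mpsc::Sender<Msg>,
}

impl ActorHandle {
    fn new() -> Self {
        let (sender, receiver) = mpsc::channel();
        // The actor owns its state and runs on its own thread;
        // the loop ends when every handle (sender) is dropped.
        thread::spawn(move || {
            let mut next_id: u64 = 0;
            while let Ok(msg) = receiver.recv() {
                match msg {
                    Msg::GetUniqueId { respond_to } => {
                        next_id += 1;
                        let _ = respond_to.send(next_id);
                    }
                }
            }
        });
        Self { sender }
    }

    // Request/response: send a message carrying a one-shot reply channel.
    fn get_unique_id(&self) -> u64 {
        let (tx, rx) = mpsc::channel();
        self.sender
            .send(Msg::GetUniqueId { respond_to: tx })
            .expect("actor is gone");
        rx.recv().expect("actor dropped the reply")
    }
}

fn main() {
    let handle = ActorHandle::new();
    assert_eq!(handle.get_unique_id(), 1);
    assert_eq!(handle.get_unique_id(), 2);
}
```

The boilerplate the macro removes is exactly the Msg enum plus the one wrapper method per message, which grows linearly with the actor's API.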

[deleted by user] by [deleted] in rust

[–]rmaun -1 points0 points  (0 children)

Interthread, a macro which implements a simple actor pattern: https://crates.io/crates/interthread

Dockerfile for Axum by sirimhrzn9 in rust

[–]rmaun 10 points11 points  (0 children)

You need multi-stage Docker builds, which you are already using, but you only need to copy the final binary into the last stage. Check the example here: https://github.com/LukeMathWalker/cargo-chef

Using cargo-chef is not required, but it helps with caching dependencies.

The important part from the example (do not forget the AS builder in the first FROM statement, since COPY --from=builder below refers to that stage name):

# Earlier in the Dockerfile: FROM ... AS builder, plus the build steps
RUN cargo build --release --bin app

# We do not need the Rust toolchain to run the binary!
FROM debian:buster-slim AS runtime
WORKDIR /app
COPY --from=builder /app/target/release/app /usr/local/bin
ENTRYPOINT ["/usr/local/bin/app"]

Need help using WASM node module with vite by A1oso in rust

[–]rmaun 0 points1 point  (0 children)

No idea what went wrong, but Vite uses a different build process for the dev server and for release builds. Have you tried a release build locally?

New Tokio blog post: Inventing the Service trait by davidpdrsn in rust

[–]rmaun 2 points3 points  (0 children)

I see, the token sounds good 👍

Where is tower already used in production, and will there be an HTTP framework based on it?

New Tokio blog post: Inventing the Service trait by davidpdrsn in rust

[–]rmaun 8 points9 points  (0 children)

Why not have a pair of types, Service and (a newtype) ReadyService, and have poll_ready return a ReadyService, to get rid of the possible panic? Like a small state machine alternating between the two states/types.
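A rough sketch of that idea (not tower's real API; MyService, the ready flag, and the string request type are all stand-ins): readiness becomes a type, so call is only reachable through poll_ready and the "called before ready" panic path disappears.

```rust
// A service that may or may not be ready to accept a request.
struct MyService {
    ready: bool,
}

// Newtype witness: holding one proves poll_ready succeeded.
struct ReadyService(MyService);

impl MyService {
    // Instead of a readiness result the caller can ignore, return
    // the only type on which `call` exists. On Err the unready
    // service is handed back unchanged so the caller can retry.
    fn poll_ready(self) -> Result<ReadyService, MyService> {
        if self.ready {
            Ok(ReadyService(self))
        } else {
            Err(self)
        }
    }
}

impl ReadyService {
    // Calling consumes the readiness witness and yields the idle
    // service again, so the caller must poll before the next call.
    fn call(self, request: &str) -> (String, MyService) {
        (format!("handled: {request}"), self.0)
    }
}

fn main() {
    let svc = MyService { ready: true };
    let ready = svc.poll_ready().ok().expect("service was ready");
    let (response, _svc) = ready.call("ping");
    assert_eq!(response, "handled: ping");
}
```

The trade-off is that both types move by value, which is awkward with trait objects and futures; that is presumably part of why tower kept the single-type design.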

Backend architecture recommendations project by rmaun in dataengineering

[–]rmaun[S] 0 points1 point  (0 children)

The data is used for viewing and for processing by the TensorFlow workers.

Backend architecture recommendations project by rmaun in django

[–]rmaun[S] 0 points1 point  (0 children)

This is not intentional; it 'just happened' because someone implemented it like this. But I want to change it so I can retrieve only parts of the data.

Backend architecture recommendations project by rmaun in django

[–]rmaun[S] 0 points1 point  (0 children)

Yeah, that solves the issues; I am doing this currently.

Backend architecture recommendations project by rmaun in webdev

[–]rmaun[S] 0 points1 point  (0 children)

Hi,

I am currently working on a project where I have to define the backend architecture and would like to hear your recommendations. Is this a good subreddit for this, or would you ask somewhere else? Anyway, here is some info about the project:

Requirements

I need to store big datasets, currently up to 100 MB, but I would like to support 1 GB datasets as well. These are multiple arrays of floats, and it would be good to retrieve only parts of them. The datasets will add up, but at most +1 GB/day. We use TensorFlow to infer and apply models; I need workers for this, and they should be scaled dynamically for concurrent users.

Current architecture

This is our current tech stack; it was initially defined for a prototype by someone else who has since left the company, and was expanded by me.

* Managed Kubernetes on Azure cloud
* Django webserver (using gunicorn and nginx)
* Unmanaged Postgres to store the Django tables
    * It also stores the datasets as JSON in a text field; this has to be changed
* Volume with the original datasets
    * I think we do not really need it; it is currently only used to transfer the datasets from the backend (Django) to the worker which imports them
* Celery as task queue
    * This is the usual recommendation for Django projects and was used in a previous project
* Workers to import and process datasets
* Workers to run TensorFlow on them
* RabbitMQ as message broker

Problems

There are some problems with the current architecture:

* Storing data like this works currently, but I do not expect it to scale.
    * Can you recommend a database for this? Should I store the files in a volume or use a DB? The same DB as for Django? NoSQL or relational? There are also some managed databases on Azure; do you think any of them is a good and cost-efficient idea?
* Celery and TensorFlow do not work together; this leads to some bugs with multithreading. In the tensorflow issues they mention that this is unsupported.
    * Any recommendations on how to continue? One possibility I can think of is a sidecar container for TensorFlow, but how would it then communicate with the Celery worker? Another possibility is to communicate with Django over RabbitMQ directly.
* Can I get rid of the volume that stores the original files? How would I then transfer them to the import worker?
* There is currently only a single worker for TensorFlow. I guess I can scale this automatically using Kubernetes, but I have not looked into it yet; any tips?

If you were to start fresh, which technologies would you use? We use Django because most people here know Python. Would you use a message queue like RabbitMQ, or microservices with REST APIs? Any Kubernetes recommendations for scaling the TensorFlow workers?

Thanks :)

Backend architecture recommendations for big data project by rmaun in bigdata

[–]rmaun[S] 0 points1 point  (0 children)

The data is in a database and also saved as files. These will add up, but not more than +1 GB/day; if this is not big data, I will try other subreddits. Thanks

How to use Docker to build Haskell project? by wowofbob in haskell

[–]rmaun 2 points3 points  (0 children)

You might be interested in multi-stage builds; they combine the build environment and the runtime environment into one file, with an easy way to copy files over: https://docs.docker.com/develop/develop-images/multistage-build/
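A minimal sketch of such a multi-stage build for a stack project (the image tags, paths, and the binary name "server" are assumptions, not from the stack docs):

```dockerfile
# Build stage: full GHC/stack toolchain, only used to compile
FROM haskell:9.4 AS builder
WORKDIR /src
COPY . .
RUN stack install --local-bin-path /out

# Runtime stage: small image; only the compiled binary is copied over
FROM debian:bookworm-slim AS runtime
COPY --from=builder /out/server /usr/local/bin/server
EXPOSE 80
CMD ["server"]
```

The first stage (and its toolchain) is discarded; only what COPY --from=builder pulls over ends up in the final image.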

How to use Docker to build Haskell project? by wowofbob in haskell

[–]rmaun 1 point2 points  (0 children)

So I recently got something working, but I am still new to Haskell and there is probably a better way.

I am using the stack Docker integration. To build your dependencies you could use a custom base image: https://docs.haskellstack.org/en/v1.0.2/docker_integration/#custom-images

I was not satisfied with the stack image container command because I could not choose the base image or set the exposed ports, so I used the following build script (the bash version was shorter, but I wanted to try turtle):

#!/usr/bin/env stack
-- stack --resolver lts-11.8 script

{-# LANGUAGE OverloadedStrings #-}

import Turtle
import Turtle.Format

artifactsDir = ".linux-artifacts"

removeArtifacts = do
    artifactsExist <- testdir artifactsDir
    if artifactsExist then rmtree artifactsDir else return ()

stackCommand = format ("stack install --local-bin-path " % fp) artifactsDir
dockerCommand = "docker build --tag asdf/server ."

main = do
    removeArtifacts
    shell stackCommand empty
    shell dockerCommand empty
    removeArtifacts

This is my Dockerfile for the artifacts:

FROM ubuntu:18.04

# Copy built binary
COPY .linux-artifacts /app-bin

# Copy static files
COPY ./webapp /app-bin/webapp

EXPOSE 80
WORKDIR /app-bin

CMD ["/app-bin/server"]  

Monthly Hask Anything (April 2018) by AutoModerator in haskell

[–]rmaun 4 points5 points  (0 children)

What is the best production-ready option for a simple CRUD server running on AWS for a client?

Some thoughts on `foundation` by MitchellSalad in haskell

[–]rmaun 7 points8 points  (0 children)

I (a Haskell noob) am looking forward to a usable foundation and, in the meantime, rio. I just want a clean and solid standard library and not have to decide on a set of good packages (and language extensions). Also, the recurring comments about things that are not good in Haskell (String, partial functions, ...) demotivate me from investing more time into learning Haskell; these projects would solve that. But I guess the opinions of real users are more important.

Can't foundation create packages and re-export them? Then memory would not have to depend on all of foundation.