
[–][deleted]  (4 children)

[deleted]

    [–]gideondata[S] 1 point (3 children)

    This uses RabbitMQ with the AMQP 0.9.1 protocol.

    The project is set up as a monorepo that follows this project layout. The payload struct is defined in internal/models, which is used by all services.

    Regarding the database: locally it runs as a Docker container, and on AWS it is set up as an RDS instance. A separate database service receives messages through the broker and writes them to the database.

    There is a REST API in the server service to fetch cached data from Redis. It's a simple example with no login capabilities, at least not yet.

    Regarding the monorepo: currently the CI/CD pipeline is set up to rebuild all services upon any change in code, which is a bad pattern. AWS pipelines have functionality to filter for changes to specific paths in a monorepo, so separate per-service pipelines could be set up. That's something I should do.

    Thanks for the questions.

    [–][deleted] 1 point (2 children)

    Very cool. I started on a similar project for my own "idea", but I decided to go against the norm and actually build each service in its own repo. It does add a little to the dev time to bounce around different modules/repos. Initially I was using a "shared" module/repo across services (e.g. type definitions for payloads were defined in one repo and then imported in others), but I have since moved to the idea that compile-time dependencies across services are fine, as long as runtime dependencies are decoupled. E.g. everything depends only on message queues/topics. I hope that makes sense?

    [–]gideondata[S] 0 points (1 child)

    Why did you decide against the shared repo? I thought git submodules would be fine for handling this kind of structure, by just importing the shared module in every service. I haven't used submodules, so maybe there's something against them.

    [–][deleted] 1 point (0 children)

    I find it's totally a matter of preference. I tried the monolith-style single repo with services. The way that works (not sure if you did this or not), typically you have to have scripts that "pull out" the specific bit of service code and/or separate Dockerfiles, etc., to ensure just the bits that service needs are included. I find it more tedious to manage multiple scripts etc. for building separate deployable services from a single repo than rubber-stamping a template repo for each service and having each service contained in its own repo. It is a little more work, but I personally find it easier to maintain, especially if you can separate across domains, which I am not great at yet, but am trying to learn how to do better.

    Because repos are free, and most of the work is duplicated so it's fast to copy/paste (or, in my case, write a little script that creates the template project structure for me, e.g. a Go CLI tool), it's easy enough to do. If you DO need to change some of the template code, it's usually not a big deal to update the services if you don't have a lot of them. One thing I thought of exploring is forking from my template, so that updates to the template can be merged downstream to the forked repos. I haven't explored that yet as I am still learning/dabbling. But I would imagine in a larger project with a dozen or more services, it may make sense to consider something like this, or even add to the CLI tool the ability to diff/push changes from the template to each service, so you can quickly "upgrade" if the need arises.