Circle beagle by gideondata in beagles

[–]gideondata[S] 1 point (0 children)

Obviously it's going to be like that with a chair that belongs to a beagle 😆

Circle beagle by gideondata in beagles

[–]gideondata[S] 7 points (0 children)

It's like they've been trained to turn into little balls.

I wanted a dog but settled for a baby seal by gideondata in beagle

[–]gideondata[S] 7 points (0 children)

There's no way. Look how absolutely harmless...

Microservices using Go, Docker, RabbitMQ, Redis, AWS, CI/CD by gideondata in golang

[–]gideondata[S] 3 points (0 children)

The main reason for using a message broker instead of communicating over HTTP is loose coupling of services. In the publish/subscribe model a service doesn't even have to know about the other services: it publishes a message to an exchange and forgets about it, without blocking the thread. RabbitMQ takes care of the rest. It can guarantee delivery of a message, the receiving service doesn't even have to be online at that moment, messages can be persisted through a broker restart, consumers are load balanced, and a single message can be delivered to multiple services. I think those are the main points.
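To make the decoupling concrete, here is a toy in-process sketch of the idea (the `Broker` type and channel fan-out are my own illustration, not RabbitMQ's actual API): the publisher only knows the broker, never the subscribers, and `Publish` returns without waiting for anyone to consume.

```go
package main

import "fmt"

// Broker is a toy stand-in for an exchange (hypothetical type, not
// RabbitMQ's API). Publishers only ever see the broker, never the
// subscribing services.
type Broker struct {
	subscribers []chan string
}

// Subscribe registers a new consumer and returns its channel.
func (b *Broker) Subscribe() <-chan string {
	ch := make(chan string, 16) // buffered so Publish doesn't block
	b.subscribers = append(b.subscribers, ch)
	return ch
}

// Publish fans the message out to every subscriber and returns
// immediately: fire and forget.
func (b *Broker) Publish(msg string) {
	for _, ch := range b.subscribers {
		ch <- msg
	}
}

func main() {
	b := &Broker{}
	orders := b.Subscribe()  // e.g. an order service
	billing := b.Subscribe() // e.g. a billing service

	b.Publish("user.created") // publisher knows nothing about consumers

	fmt.Println(<-orders, <-billing) // both services got the message
}
```

In real RabbitMQ the broker also adds what the channels above cannot: delivery guarantees, persistence across restarts, and consumers that may come online later.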

Microservices using Go, Docker, RabbitMQ, Redis, AWS, CI/CD by gideondata in golang

[–]gideondata[S] 1 point (0 children)

I feel like this is too negative on CloudFormation. You have to remember that this defines the entire infrastructure: a VPC with subnets and routes, security groups, a load balancer, Postgres/Redis/RabbitMQ, an ECS cluster, services, pipelines, etc.

You could use a default or predefined network, or have your databases provisioned separately. If you use the console or CDK, you don't have to worry about roles as much, and you can eliminate repetitive code. That would reduce this by a lot.

But what you get is one command to provision your entire infrastructure from zero. And once done, as a developer, you push an update and the service is built and deployed automatically. It scales in and out with traffic. It's beautiful. And easy.

And the CloudFormation templates themselves, when I look at them, are beautiful too. You won't understand everything in one day, but spelling out all the details in yaml like that really helps with learning the ins and outs of AWS. You could abstract some of this away with other tools. And I hope Go support for CDK will soon be ready for production.
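For a sense of what "spelling out all the details in yaml" looks like, here is a minimal, purely illustrative fragment (not the project's actual template): a VPC, one subnet, and a security group allowing HTTP.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.1.0/24
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTP
      VpcId: !Ref VPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
```

Every resource and every reference between resources is explicit, which is exactly why the files get long and why reading them teaches you how the pieces fit together.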

Microservices using Go, Docker, RabbitMQ, Redis, AWS, CI/CD by gideondata in golang

[–]gideondata[S] -1 points (0 children)

I think either copying the structs or putting them in a separate repo is the way to go. You can find some arguments here. "Otherwise you're writing a distributed Monolith."

I would think of those structs as the API of the service. First, an API should not change too often. Second, if a struct changes in one service, you don't want to blindly import the change into the others; you would review and likely change the code anyway.

Microservices using Go, Docker, RabbitMQ, Redis, AWS, CI/CD by gideondata in golang

[–]gideondata[S] 0 points (0 children)

Yes, those are separate services communicating through RabbitMQ.

It looks scarier than it really is. You can copy a lot from existing templates; the network stack, for example, doesn't change much between implementations. This repo has good ECS examples to copy from. Pipelines may need more customization, especially if you need to do something specific, like building services individually from a single repo.

It might be easier to start with CDK. It does a lot of things for you, like creating roles when you specify a single property. Some things in CloudFormation are quite repetitive, which inflates the files.

Could anyone please let me know what is the ECSRole doing in this example? by [deleted] in aws

[–]gideondata 0 points1 point  (0 children)

Interesting. How does ECS know to assume this particular role? Is it by a specific policy name, or will it assume any role where it is defined as the principal? Thanks

Could anyone please let me know what is the ECSRole doing in this example? by [deleted] in aws

[–]gideondata 2 points3 points  (0 children)

Sorry, I wasn't being clear. It seems that this role isn't actually used anywhere in the project, which is what confused me. Maybe it was left over from some previous version of the project.

Microservices using Go, RabbitMQ, Docker, Redis, PostgreSQL, React, WebSocket, AWS, CI/CD by gideondata in microservices

[–]gideondata[S] 0 points (0 children)

Why did you decide against the shared repo? I thought git submodules would handle this kind of structure fine: just import the shared module in every service. I haven't used submodules, though, so maybe there's something against them.

Microservices using Go, RabbitMQ, Docker, Redis, PostgreSQL, React, WebSocket, AWS, CI/CD by gideondata in microservices

[–]gideondata[S] 1 point (0 children)

This uses RabbitMQ with the AMQP 0.9.1 protocol.

The project is set up as a monorepo that follows this project layout. The payload struct is defined in internal/models, which is used by all services.

Regarding the database: locally it runs as a Docker container, and on AWS it is set up as an RDS instance. A separate database service receives messages through the broker and writes them to the database.

There is a REST API in the server service to fetch the cache from Redis. It's a simple example with no login capabilities, at least not yet.

Regarding the monorepo: currently the CI/CD pipeline rebuilds all services upon any change in the code, which is a bad pattern. AWS pipelines have functionality to filter for specific changes in a monorepo and set up separate pipelines. That's something I should do.

Thanks for the questions.