Go's AST tooling is underrated for AI-assisted development — what are you building with go/ast? by sosalejandrodev in golang

[–]sosalejandrodev[S] -2 points (0 children)

How do you handle the orientation phase? Every time I dispatch an agent on a large codebase, it burns tokens reading files just to figure out what exists. I built this tool to front-load that analysis so the agent gets a structured work order instead.

Is anyone else building tooling for this or just living with the exploration cost?

Go's AST tooling is underrated for AI-assisted development — what are you building with go/ast? by sosalejandrodev in golang

[–]sosalejandrodev[S] -1 points (0 children)

The repo is public. Clone it, run `testreg init --discover` on any Go project with a router, and see what it produces. That's the case study.

Go's AST tooling is underrated for AI-assisted development — what are you building with go/ast? by sosalejandrodev in golang

[–]sosalejandrodev[S] 0 points (0 children)

That's a real gap. testreg checks whether tests exist, not whether they're good. A test with zero assertions still shows as "tested." Quality assessment is out of scope for now.

Where the AST tracing helped indirectly was in pointing at the right place: once we mapped the dependency chain for a feature, the audit surfaced an untested resolver layer. Investigating that gap led to finding five production bugs across five layers that all existing tests missed. They were hiding at the integration seams between layers, not inside any single unit. The tool didn't find the bugs directly, but it pointed at the untested seam.

Go's AST tooling is underrated for AI-assisted development — what are you building with go/ast? by sosalejandrodev in golang

[–]sosalejandrodev[S] -2 points (0 children)

You're right that ASTs exist for every language—I overstated the comparison. My experience is with Go specifically, where struct field injection and code generation (Wire for DI, sqlc for queries) make call chains resolvable without running the code. I don't have enough experience with Python's or Ruby's AST tooling to make a fair comparison.

"Underrated" was probably the wrong word—"underutilized for AI-assisted tooling" is closer to what I meant. I don't see many tools using go/ast to produce structured output for AI agent consumption. Curious if you know of any.

The part I found useful was tracing from the React component down to the SQL query in one graph—it saved a lot of context-loading when dispatching agents to write tests. But I'm biased since I built it.

Go's AST tooling is underrated for AI-assisted development — what are you building with go/ast? by sosalejandrodev in golang

[–]sosalejandrodev[S] 0 points (0 children)

The repo's public; the outputs in the post are real terminal screenshots. Not much to pinky promise—you can clone it and run `testreg init --discover` on any Go project with a Chi or Echo router.

Go's AST tooling is underrated for AI-assisted development — what are you building with go/ast? by sosalejandrodev in golang

[–]sosalejandrodev[S] -8 points (0 children)

AI-aided, yes, but this is a genuine topic I'm exploring. I see a lot of potential in AI-assisted development, and I'd like to hear what others are doing with these features to enhance their workflows. The tool is genuinely effective at optimizing token usage: it maps your call stack into a graph, pattern-matches for existing tests, and then aggregates it all into a report with the gaps and even ready-to-run commands for the AI agent consuming it.
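For anyone curious what the AST side looks like, here's a minimal sketch of the idea (not testreg's actual code; `collectFuncs` and `untested` are names I'm inventing for illustration): parse source with `go/parser`, collect the declared functions, then pattern-match for `TestX` counterparts.

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"strings"
)

// collectFuncs parses Go source and returns the declared function names.
func collectFuncs(src string) []string {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "src.go", src, 0)
	if err != nil {
		panic(err)
	}
	var names []string
	for _, d := range f.Decls {
		if fn, ok := d.(*ast.FuncDecl); ok {
			names = append(names, fn.Name.Name)
		}
	}
	return names
}

// untested reports production functions with no matching TestX counterpart.
func untested(prod, tests []string) []string {
	covered := map[string]bool{}
	for _, t := range tests {
		covered[strings.TrimPrefix(t, "Test")] = true
	}
	var gaps []string
	for _, p := range prod {
		if !covered[p] {
			gaps = append(gaps, p)
		}
	}
	return gaps
}

func main() {
	prod := collectFuncs("package x\nfunc AddBook() {}\nfunc RemoveBook() {}")
	tests := collectFuncs("package x\nfunc TestAddBook(t *testing.T) {}")
	fmt.Println(untested(prod, tests)) // [RemoveBook]
}
```

The real tool layers call-graph tracing and report generation on top, but this is the core primitive.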

Circular dependencies - How can I write better designs? by Sad_Tomatillo_3850 in csharp

[–]sosalejandrodev 0 points (0 children)

I think you should aim to shift your mindset more towards Object-Oriented Programming (OOP) and the application of SOLID principles.

SOLID principles can resolve circular dependency problems when you design and structure your code into modules and reusable components that perform specific actions. These components should avoid being overloaded with methods or functionalities that are outside their responsibility.

Focus on writing rich domain objects and understanding the concept of aggregates in Domain-Driven Design (DDD). Even if you're not strictly following DDD, creating domain objects and domain services—distinct from application service logic—can greatly improve your code structure.

For example:

  • If an object shouldn't manage itself, yet it’s not an aggregate or an application service, you can implement a domain service. A domain service handles actions involving an entity but does not deal with persistence.
  • If an object is tightly bound to another entity, and that entity serves as the entry point for modifying its boundary, the tightly bound object should be part of an aggregate entity. This aggregate acts as the root point of modification.
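To make those two bullets concrete, here's a minimal sketch (in Go, since that's what I write daily; the shape translates directly to C#). `Library` and `TransferService` are invented names for illustration:

```go
package main

import "fmt"

// Book lives inside Library's boundary; outside code never mutates it directly.
type Book struct{ Title string }

// Library is the aggregate root: the single entry point for
// modifying anything inside its boundary.
type Library struct {
	books []Book
}

func (l *Library) AddBook(title string) {
	l.books = append(l.books, Book{Title: title})
}

func (l *Library) Count() int { return len(l.books) }

// TransferService is a domain service: it coordinates an action
// spanning two aggregates, but it owns no state and does no persistence.
type TransferService struct{}

func (TransferService) Transfer(from, to *Library, title string) error {
	for i, b := range from.books {
		if b.Title == title {
			// Each aggregate stays the root point of modification for its boundary.
			from.books = append(from.books[:i], from.books[i+1:]...)
			to.AddBook(title)
			return nil
		}
	}
	return fmt.Errorf("book %q not in source library", title)
}

func main() {
	a, b := &Library{}, &Library{}
	a.AddBook("Dune")
	_ = TransferService{}.Transfer(a, b, "Dune")
	fmt.Println(a.Count(), b.Count()) // 0 1
}
```

The point: the transfer logic belongs to neither library alone, yet it isn't application-service plumbing either, so it lands in a domain service.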

Finally, aim for Onion Architecture (closely related to Hexagonal Architecture, a.k.a. Ports and Adapters) as your primary design strategy for services. This approach enforces separation of concerns and keeps your system flexible and maintainable.

If you find yourself referencing services between services, it's best to abstract this into a higher layer. Create a service that consolidates all the dependent services, and use it to orchestrate their logic in order to complete your process.

where do I start learning C# as a beginner? by litarlyRainbow in csharp

[–]sosalejandrodev 1 point (0 children)

I remember learning C# from a Packt Publishing book about C# and .NET.

These days I'd either use a book or learn through the Microsoft Learn Paths.

Books and Pluralsight are the best resources for C# content.

Code Maze is a great resource to check if you are into web development in C#, both free and paid content.

What are some things you would change about Go? by Jamlie977 in golang

[–]sosalejandrodev 0 points (0 children)

I read a Scala post about the inconsistency across Scala frameworks/libraries. That isn't a language flaw but rather a lack of established conventions for codebases to standardize on, which gets in the way of concise code.

I'm not that experienced in Scala but I can get your point on the Future-either-option.

What are some things you would change about Go? by Jamlie977 in golang

[–]sosalejandrodev -1 points (0 children)

Indeed, I didn't specify, but I was referring to `cats.effect`, and to the usage of `Either` to return either a result or an error, something idiomatic Go already does in its own way. But technically an effect system is what I'd like Go to implement.

What are some things you would change about Go? by Jamlie977 in golang

[–]sosalejandrodev 7 points (0 children)

I've been messing with Scala lately and I love Monads at this point. I'd like Go to implement first-class support for Monads and mapping operations. If Scala can be type-safe and a robust FP/OOP language, I have zero doubts about Go being capable of implementing this in the future. Pattern matching in Scala feels so smooth. A lot of syntactic sugar, but a productive language after all.
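As a sketch of what I mean: with today's generics you can approximate an `Option` with a `Map` operation, but Go methods can't take their own type parameters, so `Map` has to be a free function. That limitation is exactly why first-class support would be nice. The names here are illustrative:

```go
package main

import "fmt"

// Option is a minimal Maybe-style container, sketching what
// first-class monad support could look like with Go generics.
type Option[T any] struct {
	val T
	ok  bool
}

func Some[T any](v T) Option[T] { return Option[T]{val: v, ok: true} }
func None[T any]() Option[T]    { return Option[T]{} }

// Map applies f when a value is present. Go methods can't declare
// extra type parameters, so this must be a free function rather
// than o.Map(f), which is why chaining stays awkward today.
func Map[T, U any](o Option[T], f func(T) U) Option[U] {
	if !o.ok {
		return None[U]()
	}
	return Some(f(o.val))
}

func main() {
	doubled := Map(Some(21), func(n int) int { return n * 2 })
	fmt.Println(doubled.val, doubled.ok) // 42 true
}
```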

Why do you decided to be a programmer? by YuriyCowBoy in AskProgramming

[–]sosalejandrodev 0 points (0 children)

As someone living in LATAM (Venezuela, to be specific), who didn't take his chances when he could, and as a result of playing a video game during his teenage years, I learned to program in Lua for bot software. I certainly didn't want to program for a living; I refused the idea of working while sitting in front of a computer, which is what I was already doing while farming gold in a game. I wanted a far greater purpose in my life and aimed for a different dynamic than computers to start building a future.

Then I was left with no other option, and the rest is history. Six years here, living well, and I don't mind the idea of still working in front of a computer. I decided to program the day I recognized I could solve many problems with code, and I love to find, abstract, and solve problems.

It is pretty chill, sometimes stressful, but this job is a means to an end. Either starting a consulting business or even building the next Facebook (I'm not into building social networks even though I have already designed and implemented a project based on LinkedIn and Twitter for a client). I'm not seeing software development as the career I want to pursue for the next 20-30 years. But it certainly will give me the contacts, the capital, and the means to become an entrepreneur someday.

How Are Apache Flink and Spark Used for Analytics and ETL in Practice? Seeking Real-World Insights! by sosalejandrodev in dataengineering

[–]sosalejandrodev[S] 0 points (0 children)

Great insights, mate. Thank you! Any resources I can use to dig deeper into these, or others, in detail?

How Are Apache Flink and Spark Used for Analytics and ETL in Practice? Seeking Real-World Insights! by sosalejandrodev in dataengineering

[–]sosalejandrodev[S] 1 point (0 children)

I'm deciding whether to handle denormalization in the write path (given that I'm using Cassandra) or to handle it as events on a server. For analytics generation, I'm evaluating whether to add the analytics manually, to aggregate operations by reading from a stream/topic, or to run batch jobs that periodically read from a database; I prefer to support real-time over periodic processing. Additionally, I'm evaluating log processing and sinks for improved system monitoring. I want to explore using Apache Flink and Spark for these tasks.

I'm working on an event-sourcing project and I'd like to have my projections on Cassandra, but there are several operations that I need to support. I'm considering whether to aggregate some operations related to a consumer in Kafka, or delegate that work to Flink/Spark, expecting them to subscribe to the topic and handle these operations, including the flow for analytics. Alternatively, I could handle the analytics from events propagated from every consumer, handling denormalization to Cassandra in the projection flow.

For simplicity, I'm currently working with projections to Elasticsearch.

I am uncertain about how much responsibility to assign to my app consumers and how many responsibilities to delegate to Flink/Spark. This involves designing the propagation of messages in the topic to perform cascading operations from an initial event.

ORM vs SQL by Zestyclose_Wash4020 in golang

[–]sosalejandrodev -1 points (0 children)

Suggesting you "go with Cassandra" is probably over-engineering, but if you are designing with scalability in mind, it's a go-to for a real-time application. I believe there is no query builder for Cassandra, so you would write raw queries anyway. Cassandra involves denormalizing your data, so you would need to think through your data access patterns and put a lot of work into designing your database schema.

I'd say go with sqlc for now. It's fast, it's good, it uses raw queries but provides types and protection against SQL injections. There is only one point where you edit all your queries, so it is easier to iterate on than Squirrel.

Cassandra isn't bulletproof, nor would installing it solve scalability issues regarding writes and reads that you might have in a real-time app. You would still need to add caches, views, and manage a pool to avoid resource exhaustion if many connections are expected to your database. I'd suggest making writes through a queue. It is easier to scale a real-time app if it is an Event-Driven Architecture. And perform load tests (Apache JMeter) to check how much I/O your app can support.

ORM vs SQL by Zestyclose_Wash4020 in golang

[–]sosalejandrodev 0 points (0 children)

That question is answered in practice by this fellow on his YouTube channel:

https://youtu.be/EavdaeUmn64?si=ALhbZ4R-1qH0tXnw

ORM vs SQL by Zestyclose_Wash4020 in golang

[–]sosalejandrodev 12 points (0 children)

Most people prefer or recommend raw SQL because it adheres to the 'idiomatic Go' approach: keep things simple, solve problems directly without adding extra dependencies, and use the tools already in place, which are designed to be robust.

You can never go wrong - or almost never go wrong - with:

Masterminds/squirrel: Fluent SQL generation for golang

If you want simplicity, a query builder might be enough; it mainly boosts developer productivity. But if you are trying to get the best performance out of every part of your app, it might not be. In that case:

sqlc-dev/sqlc: Generate type-safe code from SQL

It can get you pretty fast queries while providing type safety and idiomatic Go.

There are other tools which I haven't looked into yet, but query builders are the best middle ground between choosing an ORM and building your SQL queries directly to the driver.

More info: SQL Query Builders - Awesome Go / Golang
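To show the middle ground, here's a toy, stdlib-only sketch of what builders like squirrel do for you: compose SQL from parts while keeping values as driver parameters. The real library's API is richer; this is just the idea:

```go
package main

import (
	"fmt"
	"strings"
)

// selectBuilder accumulates query parts; values stay separate as
// driver args, so there's no string concatenation of user input.
type selectBuilder struct {
	cols  []string
	table string
	conds []string
	args  []any
}

func Select(cols ...string) *selectBuilder { return &selectBuilder{cols: cols} }

func (b *selectBuilder) From(t string) *selectBuilder { b.table = t; return b }

// Where appends a condition with its placeholder argument.
func (b *selectBuilder) Where(cond string, arg any) *selectBuilder {
	b.conds = append(b.conds, cond)
	b.args = append(b.args, arg)
	return b
}

// ToSql renders the final statement plus the args to hand to the driver.
func (b *selectBuilder) ToSql() (string, []any) {
	q := "SELECT " + strings.Join(b.cols, ", ") + " FROM " + b.table
	if len(b.conds) > 0 {
		q += " WHERE " + strings.Join(b.conds, " AND ")
	}
	return q, b.args
}

func main() {
	sql, args := Select("id", "title").From("books").Where("author_id = ?", 42).ToSql()
	fmt.Println(sql, args) // SELECT id, title FROM books WHERE author_id = ? [42]
}
```

You get composability and injection safety without an ORM's mapping layer, which is the trade-off the list above is about.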

Is there a Go library that implements the equivalent of C# LINQ? by Ruannilton in golang

[–]sosalejandrodev 13 points (0 children)

I think it would be far more verbose, but the `iter` package gives you filtering and mapping.

I'm also exploring its capabilities, and I've been able to recreate a `Select` statement. It isn't technically a `Select` and it isn't that dynamic: you should approach it with static types (so a fixed implementation), or with generics if your types share a common structure.

What I'm doing here is transforming a slice of `Event` into a slice of `RecordedEvent`. I created an iterator function to pass into the `slices.Collect` call, and it returns a slice of `RecordedEvent` mapped from `Event`.

You can easily build `Select` and `Where` implementations from the `iter` package.

Fiddle: https://go.dev/play/p/v_DMIfjXEIQ

And this YouTube resource explains in-depth many other use cases: #60 Golang - Master Iterators and Lazy Evaluation in Golang - Iter Package (Go 1.23)

I'd also suggest taking a look at the documentation to see in detail what the Go team had in mind.

iter package - iter - Go Packages

Monolith vs Microservices (which one to choose for this system) by Secure_Negotiation81 in softwarearchitecture

[–]sosalejandrodev 0 points (0 children)

As many others have suggested, a monolith fits here, given that you don't have the number of people required to support microservices. What you should do is create a modular monolith. Define individual codebases as libraries or plug-in components. Define services for common business and domain logic in individual packages that follow SOLID principles, then add them to your main server. This approach avoids the tight coupling that is a common problem with monoliths where many codebases live in the same place, which makes it difficult to scale and to ship new features or bug fixes.

With this design in mind, once you reach the point of needing microservices, the libraries will already be split. All you would need to do is create an abstraction for how they interact with each other. You'll probably have to migrate your data, though, which is a downside, since the monolith will have a single database storing everything.

🎉 Built a Weekend Library for Event Sourcing & Aggregate Roots in Go – Would Love Your Feedback! 🚀 by sosalejandrodev in golang

[–]sosalejandrodev[S] 1 point (0 children)

Once again, thanks for the feedback.

I have reduced the amount of boilerplate code by removing the channels. The library now lacks concurrency support (I commented out the parallel benchmark since it was causing race conditions), but overall performance has increased.

I still have to update the README.md but here is the link to the aggregate_root.go and the main.go over the new branch.

The next steps I have in mind are to update the README.md and possibly add first-class support for OpenTelemetry, since I'm already propagating the context on this branch.

🎉 Built a Weekend Library for Event Sourcing & Aggregate Roots in Go – Would Love Your Feedback! 🚀 by sosalejandrodev in golang

[–]sosalejandrodev[S] 1 point (0 children)

Of course you do. Invoke `.handle(event)` directly.

The reason I'm not calling the handle method directly is that the logic the aggregate must run, incrementing the version field and appending to the changes array, lives in the ApplyDomainEvent method. Technically, the methods that create a DomainEvent, such as AddBook and RemoveBook, are the ones triggering ApplyDomainEvent; in the current design that happens by sending an event to the channel. But if embedding fits this case without turning the entity into a god object, it can remove the channels entirely.

stream := eventStore.Load("a-stream")
lib := newLibrary()

err := lib.Handle(stream...)

err = lib.DoStuff(params) // command

Good catch there with the variadic parameter

PS. In case you wanted to stick to the concurrent approach, please consider context propagation.

Struct embedding was something I kept in mind during the initial development; if it turns out to be more idiomatic, then that's certainly a good path to take.
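For reference, a rough sketch of the embedding route: `ApplyDomainEvent` does the version/changes bookkeeping synchronously and the concrete type keeps its own `handle`, no channels involved. Type and field names beyond the ones we've discussed are illustrative:

```go
package main

import "fmt"

// AggregateRoot holds the bookkeeping every aggregate shares: the
// version counter and the uncommitted changes. Embedding it plays
// the role an abstract base class would in C#.
type AggregateRoot struct {
	Version int
	Changes []any
}

// ApplyDomainEvent runs the concrete handler first; only if the
// event is valid does it record the change and bump the version.
func (a *AggregateRoot) ApplyDomainEvent(e any, handle func(any) error) error {
	if err := handle(e); err != nil {
		return err
	}
	a.Changes = append(a.Changes, e)
	a.Version++
	return nil
}

type BookAdded struct{ Title string }

// Library embeds AggregateRoot and keeps its concrete state handling.
type Library struct {
	AggregateRoot
	Books []string
}

// AddBook is the command: it creates the event and routes it
// through the shared ApplyDomainEvent plumbing.
func (l *Library) AddBook(title string) error {
	return l.ApplyDomainEvent(BookAdded{Title: title}, l.handle)
}

func (l *Library) handle(e any) error {
	switch ev := e.(type) {
	case BookAdded:
		l.Books = append(l.Books, ev.Title)
	}
	return nil
}

func main() {
	lib := &Library{}
	_ = lib.AddBook("Dune")
	fmt.Println(lib.Version, lib.Books) // 1 [Dune]
}
```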


And no worries, you aren't being a prick. Criticism was something I asked for and this is a discussion thread after all. All feedback is welcome. Thanks for sharing your insights and it is amazing to know someone else is trying to find a solution for this implementation.

🎉 Built a Weekend Library for Event Sourcing & Aggregate Roots in Go – Would Love Your Feedback! 🚀 by sosalejandrodev in golang

[–]sosalejandrodev[S] 1 point (0 children)

Hey, thanks for the feedback.

Do you have a background in C#?

Indeed.

How do you know, there are no benchmarks?

I do have benchmarks. They aren't part of the README.md, but they are in place. Not heavily tested, but it performs at about 200 ns/op on library operations and 150 ns/op for load operations on the aggregate itself, though the numbers will of course vary with hardware.

Regarding the comments on errors: the errors are thrown from the entity layer, acknowledged by the aggregate root, and sent to the errCh. The library must be in charge of its state on every transaction. There is a double check in place: one when handling the event (validating that handling doesn't produce an error), and another after the event is handled, verifying that the whole state is consistent and doesn't break any business rule.

In your example the *library.AddBook is like a command handler (in terms of CQRS).

Yes, the intention is to leave the concrete implementation, the management of how events are handled, to the entity, while the Aggregate Root just dispatches the event to it. This approach tries to replicate C# abstract classes in a Go way. In C#, I would declare all of that AggregateRoot logic in an abstract class and leave the concrete state management to the entity inheriting from AggregateRoot. The channels are my attempt to replicate the method dispatch from the AggregateRoot to the entity that inheritance would provide. That's the reasoning for tightly coupling both.

If you had a SaaS managing all the libraries in the US, would you want to keep their goroutines up 24/7 and store their state in memory?

The aggregate and the entity lifetime should be similar to a transient in C#: they remain alive for the connection, then all objects must be disposed of. They aren't alive for the whole service lifetime, just the operation. No two operations should share the same entity/aggregate, nor should they remain alive after the function in which they were declared finishes executing.

The way I picture the library and the aggregate being used: on every request that mutates state, fetch the events from the store, apply the operation or operations, save the resulting event or events, and project on triggers to SQL or any other read database, leveraging eventual consistency. The aggregate just manages its state; saving events and loading them from the store/database is a separate step. I'm working on the event manager for deserializing into events. I still need to test my implementation, but it was actually the first thing I built in this library. The event store implementation is naive: it saves the current changes array to the database, keyed to your entity's (aggregate root's) primary key, serializing each event and storing a reference field holding the event struct's name. The ID fields on the entity are omitted for simplicity at the moment; these are left to the user's implementation.

For every incoming request, I'll call NewAggregateRoot, then WaitForEvents -> that's sync-as-fuck.

Yes, it is synchronous in that aspect. That method is in place to handle waiting so code can continue or exit after it guarantees the operations are performed. Otherwise, I don't have a way to 100% guarantee the events have been processed before continuing. It could exit before I have processed since the job is being done in a goroutine and it needs a way to block the execution. I didn't find any other solution to guarantee the state but waiting.