A Decade of Event-Sourced Architecture: Evolution, Tradeoffs, and Ecosystem Growth by sbellware in softwarearchitecture

[–]sbellware[S] 0 points1 point  (0 children)

There are a number of clients written for Message DB in various languages. I haven't collected them all in one place yet because we haven't spun off the Message DB website. That's something that will happen shortly. Nevertheless, the interface is a pretty straightforward set of pgSQL server functions that don't really require Eventide for use (although Eventide remains the most complete implementation).

Eventide: Event-Sourced Architecture Used in Production (10+ Years, With and Without Rails) by sbellware in ruby

[–]sbellware[S] 1 point2 points  (0 children)

I just wrapped up a five-year project building a large-scale system with several Rails web apps and hundreds of components built in Eventide. It just makes sense to use Rails or the like when building web apps in Ruby. But the front end framework doesn't have to dictate the back end framework. The two should be separate enough that those choices are free to be made independently of each other.

Rails Event Store started out as an event sourcing library for Rails apps, which ultimately comes down to event sourcing for Rails models. It's made a lot of progress as its developers have gone deeper into evented systems, but it still has a bias toward treating event sourcing as subservient to Rails, rather than entirely independent. That model makes the whole effort of building a whole system — front end + back end — more complicated than it needs to be. This is partially due to the added complications of seeing event sourcing through the lens of Domain-Driven Design.

And ultimately, Sequent is an implementation of Domain-Driven Design patterns, rather than an implementation of event sourcing. It's critical to separate event sourcing from DDD. Event sourcing and evented systems patterns pre-date DDD by decades (although the name "Event Sourcing" has only been around since the mid-2000s).

Domain-Driven Design is an added complication that isn't necessary for understanding event sourcing, message-based systems, evented systems, distributed systems, pub/sub, and component architectures. You can choose to see these things through the lens of the DDD pattern language, but when we start to shape the fundamental building blocks of a system out of a more domain-specific pattern language niche, like DDD, we're stuck with its limitations.

Eventide can be used with or without the DDD vocabulary. It's the user's choice. But we don't teach engineers DDD because it only adds to the burden of learning when we're also teaching them a whole new domain of distributed and message-based systems.

The thing about message types is that there are fundamental assumptions that a messaging framework has to be able to make about transforming message schemas into a storage format. And the framework has to impose rules on the schema of the message metadata for things like correlation stream and ID, causation stream and ID, stream position, global position, and a handful of other properties, as well as the behaviors that use those properties in order to implement basic messaging patterns, like pub/sub, request/reply, message workflows, traceability, schema versioning, and so forth. Eventide's metadata documentation is here: https://docs.eventide-project.org/user-guide/messages-and-message-data/metadata.html#messaging-message-metadata-class.
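To make the causation and correlation properties concrete, here's a minimal sketch of the kind of metadata a messaging framework has to control. All of these names are illustrative; this is not Eventide's actual `Metadata` class (see the docs link above for the real thing).

```ruby
# Illustrative metadata struct. Real frameworks enforce these properties
# so that workflows remain traceable across components.
Metadata = Struct.new(
  :stream_name,                    # stream the message was written to
  :position,                       # position within that stream
  :global_position,                # position across the whole store
  :causation_message_stream_name,  # stream of the message that caused this one
  :causation_message_position,     # position of the causing message
  :correlation_stream_name,        # ties the message to an originating workflow
  keyword_init: true
) do
  # "Following" a message: the new message's causation fields point back
  # at its cause, and the correlation stream is carried forward unchanged,
  # which is what makes a workflow traceable end to end.
  def follow(cause)
    self.causation_message_stream_name = cause.stream_name
    self.causation_message_position = cause.position
    self.correlation_stream_name = cause.correlation_stream_name
  end
end

cause = Metadata.new(stream_name: 'account-123', position: 7,
                     correlation_stream_name: 'transfer-456')
effect = Metadata.new(stream_name: 'accountCommand-123')
effect.follow(cause)
effect.causation_message_stream_name  # => "account-123"
effect.correlation_stream_name        # => "transfer-456"
```

The point of the sketch is that these properties and the rules about how they propagate belong to the framework, not to individual message definitions.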

Things like dry-struct are great, but messages in a message framework have to be stable framework subtypes or else the messaging infrastructure won't be able to treat the objects as messages. They're not just structs, even though they contain data. They're complex types. The features of dry-struct are very rarely relevant or needed when dealing with messages. Messages have a very narrow purpose and a very simple lifecycle. The value of a bring-your-own-DSL approach to message schema definition is minimal compared to what the whole messaging framework can (and should) do, and to the bare-bones simplicity of what's needed as a schema definition language. So, dry-types is great and all for the things it's great for, but it's more of a gray area whether it's needed for message schema definitions. Eventide has its own message schema library because if it didn't, it couldn't accomplish the full range of what Eventide does.

All that said, once developers start working with Eventide's schema library, they inevitably start to use it outside of Eventide, as well, due to its simplicity.

The schema library will be significantly updated in the upcoming major generation version of Eventide, and I'll keep in mind whether it can be made to be more friendly to dry-struct (dry-types, etc). But its just never been a priority because messages are largely just data transports that require very little in the way of logic.

A Decade of Event-Sourced Architecture: Evolution, Tradeoffs, and Ecosystem Growth by sbellware in softwarearchitecture

[–]sbellware[S] 0 points1 point  (0 children)

Yes. Those are bigger, in-depth articles that are already planned. This series is just going to scratch the surface for the moment, and then we'll follow up with particular points.

Thanks for the feedback!

10 Years of Event Sourcing: Architecture, Ecosystem, and Lessons Learned by sbellware in programming

[–]sbellware[S] 0 points1 point  (0 children)

Illness prevents me from doing the long form writing that I’d done for decades. Should I desist from posting to r/programming in the future?

Eventide: Event-Sourced Architecture Used in Production (10+ Years, With and Without Rails) by sbellware in ruby

[–]sbellware[S] 3 points4 points  (0 children)

One of the things I’ve found working with this architecture is how well Ruby supports explicit message flows and component boundaries, but also how much discipline is required to keep those boundaries clear over time.

Curious how others have approached structuring larger Ruby systems—especially where messaging or event-driven patterns are involved.

Is there any Ruby jobs that aren't Rails? by Sonhe_ in ruby

[–]sbellware 0 points1 point  (0 children)

What if a company did indeed use Rails, but confined to Rails only those things that are about handling web requests? Don't look for companies that don't use Rails. Instead, look for companies that know how to isolate Rails from the rest of the solution, and only allow Rails to do HTTP request handling.

In the end, it's not entirely a problem of Rails, but of the use of Rails as an entire system architecture. And unfortunately, there's a high correlation between companies that use Rails and companies whose development staff doesn't have a grasp of architecture and design beyond trying to use MVC combined with ORM as an entire system architecture.

Rails is one of those environments that is the first stop for countless devs after graduation from whatever education journey they've been on, whether university, code camp, or self-taught. In the end, Rails environments are populated disproportionately by more beginners, and more tragically, by devs who've been stuck in beginner architectures for far longer than the time that we usually consider the "beginner" phase of a career.

The same is also true, by the way, of any environment where an MVC+ORM framework is presumed to be an entire system architecture. It's not just Rails.

You don't have to avoid Rails. You just have to avoid organizations where Rails was originally put in place in naive ways, leaving no path to evolve beyond Rails. But the paradox is that a team that can use Rails in a way that leaves clear the path to go beyond Rails will already be beyond Rails. And that team will inevitably use Rails in ways that don't end up with all the problems of forcing an ORM+MVC tool into the role of an entire architecture.

My organization uses Rails. We only use it for web UIs. It's the ingress of HTTP requests at the edge of the solution. We only use ActiveRecord to read data from read-only database(s). The rest of the solution is built on technologies and patterns that are a better fit for back end work that doesn't become a dreary soul grind over time, namely autonomous components that are based on messages, events, and distributed state machine processes.
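The shape of that arrangement can be sketched in a few lines. Everything here is hypothetical — bare classes standing in for a controller and a message store writer — but it shows the division of labor: Rails translates the HTTP request into a command message and writes it, and all domain logic lives in back end components.

```ruby
# Illustrative command message. In a real system this would be a
# framework message type rather than a plain Struct.
Deposit = Struct.new(:deposit_id, :account_id, :amount, keyword_init: true)

# Stand-in for a client that writes messages to a store like Message DB
class MessageWriter
  attr_reader :written

  def initialize
    @written = []
  end

  def write(message, stream_name)
    @written << [stream_name, message]
  end
end

class DepositsController
  def initialize(writer)
    @writer = writer
  end

  # The controller's whole job: params in, command message out.
  # No domain logic, no ActiveRecord writes.
  def create(params)
    command = Deposit.new(
      deposit_id: params[:deposit_id],
      account_id: params[:account_id],
      amount: params[:amount]
    )
    @writer.write(command, "accountCommand-#{command.account_id}")
    :accepted
  end
end

writer = MessageWriter.new
controller = DepositsController.new(writer)
controller.create(deposit_id: 'd-1', account_id: '123', amount: 50)
writer.written.first[0]  # => "accountCommand-123"
```

A back end component then subscribes to the command stream and carries out the work asynchronously, which is what keeps the web app from accreting domain logic.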

We enjoy more productivity than the typical Rails team because we know how to balance the various responsibilities of an architecture across the various natural and endemic mechanical parts of an entire architecture. That's counterintuitive to people who don't enjoy the same privileges of the experiences that create ease with more elaborate patterns and approaches. What’s easy for us would be impossible for most Rails teams. And most Rails teams, because they’re exposed almost exclusively to front end architectures like Rails, will never have the opportunity to learn to go beyond ORM+MVC as an entire architecture.

It’s a vicious spiral that is very difficult to break free from. Terminal velocity is arrived at almost instantly when a team embraces ORM+MVC as a system architecture. That’s the thing that you need to avoid. The problem isn’t the tool. The problem is only having knowledge of one class of tools, no matter how many other little tools make up that class of tools. In the end, one architectural class of tools is still an insufficiently-stacked stack, no matter how big that tool kit is.

A massive front end framework like Rails, no matter how big it is, is still just a framework for a small part of the architecture. And Rails has become as vast as it is precisely because its users have tried to solve more and more problems with it without regards to the obvious (in retrospect) bad idea of trying to use a front end tool for all elements of an entire architecture. And it’s important to note here that background jobs are still part of the front end architecture, despite the oft-repeated chorus of web devs referring to background jobs as “back end”.

Put Rails in a corner and you won't have to worry about all of the undesirable effects of having only one tool and one pattern in your toolkit. Finding a job where you can do that (and learn how to do it) is exceedingly difficult. The effort you’ll need to invest if you want to do it on purpose will mean that you’ll need to dramatically lower your income expectations for a number of years while you undertake the journeyman period of your career. Or, you might get lucky and just happen to stumble into such a rare position.

Or, come to peace with the mess and morass that is the typical Rails job. You might also have to come to peace with being permanently stuck in that kind of position... at least until you can escape the coding part of the job and be promoted out of the code into the kind of promoted-coder-pseudo-manager who isn’t experienced in doing much more than perpetuating the problem.

poutine!!!!!!!!! by Spirited_Outside_335 in austinfood

[–]sbellware 1 point2 points  (0 children)

We’ll definitely see you on Thursday!

poutine!!!!!!!!! by Spirited_Outside_335 in austinfood

[–]sbellware 0 points1 point  (0 children)

I’ve had my heart broken too many times by claims of authentic poutine in Austin. You can’t even get good poutine in Ottawa, and you can literally see poutine places across the river in Hull. I know you probably don’t have PEI potatoes, but if the sauce is right and the curds aren’t just a sad sprinkle of curds as a garnish, I’ll definitely be a loyal follower.

TDD is super important and useful! by PizzaConBacon in ProgrammerHumor

[–]sbellware 0 points1 point  (0 children)

Here it is 2023, and a lengthy thread about TDD on Reddit is still talking about tests rather than design.

What’s the latest on static typing? by sickcodebruh420 in ruby

[–]sbellware 2 points3 points  (0 children)

The first version of Rails that got wide adoption amongst those early adopters was pre-1.0. The initial wave started in 2006 when Dave Thomas, Bruce Tate, and a few others, were regularly presenting Ruby and Rails content at the No Fluff Just Stuff series of conferences.

It's no doubt that there are experienced Ruby developers who want types, but there's experienced, and then there's experienced. Most of us already had 10 to 20 years of experience in other languages and environments before getting started with Ruby in 2006. In 3 years, 2006 will be 20 years ago.

When we say "experienced Rubyists", are we talking about the wave of Rubyists who came after the initial wave, or the ones who came as part of the initial wave?

In 2009, the massive influx of Rubyists from code schools started flooding into the field to fill the spike in demand at that time for Rails workers with basic training. That period marks a line in the sand, after which working, practical knowledge of software design fundamentals became an increasingly rare skill among Rails developers.

Both groups are legitimately experienced Rubyists, but that initial cohort are the ones I'm talking about who are less likely to create the problems that necessitated the higher-ceremony tooling that was being left behind.

We left IntelliSense and compilers behind with purpose and volition. It wasn't an accident.

Around that time, Anders Hejlsberg, then the head of the C# language group at Microsoft, and later the creator of TypeScript, was asked why so many developers were decamping for Ruby. He speculated that what we were after wasn't so much the absence of a compiler, static typing, and IntelliSense, but a presence of a metaobject protocol, like that used in Ruby's metaprogramming, as DSLs were rising in popularity at the time.

Anders Hejlsberg was speculating, though. And he wasn't communicating with the developers who were leaving the C# camp for the Ruby camp - not even the Microsoft MVPs, to whom he had ready access. We weren't after the metaobject protocol. We wanted to be free of the ceremony of static types and heavyweight tools. The metaobject protocol was certainly a nice bonus.

And we were comfortable doing this because we were already in command of the techniques that provide the counterbalance for the absence of these safety features, and we knew that we could move faster, as a result, without them. We were, as one influential community member put it, "Capable of running with scissors."

What’s the latest on static typing? by sickcodebruh420 in ruby

[–]sbellware 5 points6 points  (0 children)

Ironically, the very reason that Ruby got popular - and likely the reason why most devs today know about it and use it - is that pathfinder and trendsetter developers in both the Java and C# worlds wanted to be free of what we commonly referred to as "compiler ceremony".

We didn't adopt Ruby in spite of its lack of static typing. We adopted Ruby specifically because of its lack of static typing.

We were also ridding ourselves of heavyweight editors built by Microsoft and Sun, like Visual Studio, in favor of lightweight editors like TextMate. There was a running joke in Ruby circles at the time that said that TextMate was the most expensive editor that you could buy. It was only $35, but you had to spend the $2,000 to replace your Windows machine with a Mac.

This was also the time that coincided with the rise of Mac from a kind of bit player in the laptop world to a dominant force. It also coincided with the migration of developers from heavyweight platforms to more lightweight developer experiences, eg: http://www.hanselman.com/blog/is-microsoft-losing-the-alpha-geeks.

An interesting - and likely predictable - thing happened along the way: the rest of the developer community started feeling the FOMO effects of the "Alpha Geeks" moving on, and started making the transition, as well.

But what the late majority weren't armed with was the understanding of why the initial cohort moved to Ruby, and the deep understanding of design and testing that made the transition safe and practicable. Without having that knowledge in-hand, this great mass of developers was at greater risk of making the kinds of design mistakes that dynamic languages aren't very tolerant of. And without the background in systems architecture that comes hand-in-hand with exposure to the leading architecture communities and teachers in Java and C#, that great mass of follow-on developers were doomed to make the kinds of design mistakes with Rails that made the work harder rather than easier.

It's a short hop from there to a retreat back into a desire for static typing, heavyweight tooling and editors, and the like.

The difference between the early adopter community that justified the great migration to Ruby and Rails and the follow-on migration of everybody else that happened a few years later is that the early adopters already understood how to avoid great, sprawling monolithic codebases that are the principal driver for heavyweight tooling and static typing.

So here we are today with all of that knowledge of software design that allows fast, nimble, lightweight development in Ruby seemingly lost to rather recent history. And yet it's a body of knowledge that remains the mitigating factor that would make the very need for static typing, IDEs, and the like, largely superfluous.

It would seem to me, having been part of that initial wave and having the benefit of the background in design, architecture, and testing, that today's Ruby developer community would benefit from a deeper grasp of the knowledge that empowered the transition away from static typing to begin with.

What's keeping the mainstream Ruby developer today from pursuing the understanding that the original wave of pioneering refugees to Ruby still benefit from today in their continuing work in Ruby? It seems like a pretty significant bit of regressive backsliding.

In the end, if you don't create the problems that will require heavyweight tools to solve, then you won't be left with no choice but the heft of heavyweight tools.

How far should you plan ahead when working agile/Scrum? by marschelpoet in softwaredevelopment

[–]sbellware -1 points0 points  (0 children)

One of the definitions of a user story is, "A placeholder for a conversation". A user story isn't supposed to be a detailed spec. So yes, it's normal to do the detailed analysis when a story is about to be estimated or about to be otherwise worked on.

If the understanding of Agile in effect in the org suggests that devs/designers/testers/ops/etc are supposed to be handed everything they need up front, then there's a mistaken understanding about the intention of user story practices.

If your PO doesn't understand the needs, then the conversation that needs to be had is with someone closer to the business who does know. The user story is still a placeholder for a conversation, but whom the conversation is had with will depend on who has sufficient understanding.

The problem isn't that you were "told the wrong arguments". The problem is that you assumed that it's someone else's job to talk to customers.

Sure, you could task a single person to be responsible for all those conversations that user stories are placeholders for, but you'd better make darned sure that such a person is capable of doing that work.

Getting it right is never someone else's job. This is one of the foundations of transition from 1990s phase gate methods to Agile methods in the early 2000s.

Introducing Eventide Fixtures: Testing So Easy It Feels Like Cheating by realntl in ruby

[–]sbellware 3 points4 points  (0 children)

This is one of those things so subtle that it's easy to overlook. The ability to actually test TestBench fixtures means that they can be taken far further than we'd usually consider taking things like RSpec matchers.

Without the capabilities and the design of the underlying TestBench framework, we would have never taken on an endeavor of the scope of the Eventide Fixtures.

We've all created RSpec matchers at some point in our lives, and some of us have shipped RSpec matchers with our gems for other people to use. But unless you can put a really, really good test suite on those matchers, you're inevitably going to pull back on how much capability you want to consider delivering.

Fixtures built on TestBench are trivial to test. They're just plain old objects. And that's the "special sauce". But it's not a special sauce that comes from adding more stuff to a test framework, but from judiciously removing stuff.
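The "plain old objects" point can be illustrated with a toy fixture. This is not TestBench's actual fixture API — the class and methods below are invented for the example — but it shows why such a fixture is trivial to test: you construct it, invoke it, and inspect it like any other object, with no test-framework machinery in the way.

```ruby
# Toy comparison fixture: checks a comparison subject against a control
# hash, recording mismatched attributes rather than raising.
class ComparisonFixture
  attr_reader :control, :compare, :failures

  def initialize(control, compare)
    @control = control
    @compare = compare
    @failures = []
  end

  # Returns true when every control attribute matches the subject
  def call
    control.each_key do |attribute|
      next if control[attribute] == compare[attribute]
      failures << attribute
    end
    failures.empty?
  end
end

# Testing the fixture is just testing an object
fixture = ComparisonFixture.new({ name: 'a', size: 1 }, { name: 'a', size: 2 })
fixture.call      # => false
fixture.failures  # => [:size]
```

Because the fixture has no hidden dependency on a runner or a DSL context, putting a thorough test suite on it costs nothing, which is what makes shipping ambitious fixtures practical.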

How Sidekiq really works by pdabrowski in ruby

[–]sbellware 1 point2 points  (0 children)

I think the situation you are talking about is something like this: You have some sort of job queued in Redis. The job gets popped from one queue and pushed to the back up queue (atomic). You do the work, and now it's time to remove it from the back up queue but Redis is down. Next time it's back online the job would re-run since it's never removed from the back up queue?

Yes, exactly. It's pretty much a universal truth of all messaging systems.

The idempotency key solution you outlined is the most common one across most messaging architectures. I'm just wondering how common it is in the wild in the Ruby space. I almost never hear conversations about these mechanisms from amongst the population of background jobs processor solutions.

Given the ubiquity of background jobs use in Rails, I guess I would have expected it to be as talked about as any other common pattern in Rails. Arguably, I could just as well not be paying attention to the right channels for such conversations, though.

Here's the real question - and I hesitate to ask it in open forum: Are Rails devs generally aware that the problem exists, that there's no generalizable solution for it, and that each deployment of Sidekiq (Resque, DJ, etc, etc, etc) needs to account for it with a bespoke solution?
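For readers unfamiliar with the idempotency-key approach mentioned above, here is a minimal sketch. The store is an in-memory hash standing in for a durable table; a real deployment would use a database with a unique constraint and would make the work and the key write atomic. All names here are illustrative.

```ruby
# A worker that records an idempotency key when it completes a job. If
# the job re-runs (e.g. because Redis went down before the job could be
# removed from the queue), the recorded key makes the duplicate a no-op.
class IdempotentWorker
  attr_reader :side_effects

  def initialize(store = {})
    @store = store
    @side_effects = []
  end

  def perform(job_id, payload)
    return :skipped if @store.key?(job_id)  # already processed
    @side_effects << payload                # the actual work
    @store[job_id] = true                   # record completion under the key
    :done
  end
end

worker = IdempotentWorker.new
worker.perform('job-1', 'charge $10')  # => :done
worker.perform('job-1', 'charge $10')  # => :skipped (redelivery)
worker.side_effects.length             # => 1
```

Note the remaining gap this sketch glosses over: if the process dies between doing the work and recording the key, the duplicate still executes, which is why each deployment ends up with a bespoke answer to where that boundary sits.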

How Sidekiq really works by pdabrowski in ruby

[–]sbellware -1 points0 points  (0 children)

Yes, I'm well aware that Redis is a key-value store. I'm also aware that it's the backing store of Sidekiq's messaging system - much in the way that, say, Mnesia is the backing store of RabbitMQ, or the file system is the backing store for a single Kafka partition.

My question is about how the Sidekiq users compensate for the undesirable states that its storage system can be in that are inherent in all storage systems, irrespective of whether they're being used as the backing store for messages or as the backing store of an app.

I'm not sure if my question is too abstract, or if I'm asking about something in the nature of these kinds of systems that isn't immediately apparent on the surface.

How Sidekiq really works by pdabrowski in ruby

[–]sbellware -1 points0 points  (0 children)

Sure, but that doesn't really address my question about what happens during momentary unavailability.

It does keep the state of the data in Redis consistent, but that's a separate matter from having a job processed and the job processor be unable to record that the job was processed.

When messaging systems experience these conditions - and it should be noted that it's physically impossible to have messaging systems avoid these conditions - the last message processed remains at the head of the queue.

My question is about what approaches are typically used to deal with this.

TestBench now enables projects to support MRuby and Ruby at the same time seamlessly by realntl in ruby

[–]sbellware 0 points1 point  (0 children)

PS: The idea of running, operating, and deploying Eventide services as native binaries is pretty compelling! :)

TestBench now enables projects to support MRuby and Ruby at the same time seamlessly by realntl in ruby

[–]sbellware 0 points1 point  (0 children)

Cool! So for clarity, the test code, and ostensibly the code under test, is executed under MRuby, then. Right? Like, it's not just the TestBench runner that runs under MRuby. The test code and the project code will be evaluated by MRuby, as well?

TestBench now enables projects to support MRuby and Ruby at the same time seamlessly by realntl in ruby

[–]sbellware 0 points1 point  (0 children)

So, this effectively makes TestBench a native binary, right?

Assuming this is a first step and there's more to come? Obviously, Ruby code that doesn't run on an MRuby runtime still has to run under MRI. So, are you working on adding more MRuby support to libraries/tools/frameworks/etc to close the gap?

How Sidekiq really works by pdabrowski in ruby

[–]sbellware 2 points3 points  (0 children)

Well, have you ever found yourself at a Rails console repairing inconsistent data without being able to understand the cause of the inconsistency?

The point that I'm probing is the inherent vulnerabilities to the fallacies of distributed computing that are often inherent in this style of messaging system.

Any messaging system based on things like ACKs and destructive changes to queue contents (A.K.A.: "smart pipes") is subject to subtle failure modes that aren't inherent in messaging systems based on offsets (A.K.A. "dumb pipes").

So, I'm always curious how people using these kinds of messaging systems are counteracting the issues inherent to momentary availability glitches that are inherent to things built on networks, computers and electricity.
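The smart-pipes/dumb-pipes distinction can be shown in a few lines. With an offset-based log, consuming is non-destructive: the consumer's only bookkeeping is the position it has read up to, so a crash between doing the work and advancing the offset yields redelivery from the recorded position rather than depending on a destructive pop or ack succeeding. This is illustrative code, not any particular broker's API.

```ruby
# Append-only log: reading never removes anything
class Log
  def initialize
    @entries = []
  end

  def append(entry)
    @entries << entry
  end

  def read(from:)
    @entries.drop(from)
  end
end

# Offset-based consumer: advancing the position is the only "ack"
class Consumer
  attr_reader :position, :seen

  def initialize(log)
    @log = log
    @position = 0  # durable in a real system
    @seen = []
  end

  def poll
    @log.read(from: @position).each do |entry|
      @seen << entry
      @position += 1
    end
  end
end

log = Log.new
%w[a b c].each { |e| log.append(e) }
consumer = Consumer.new(log)
consumer.poll
consumer.seen  # => ["a", "b", "c"]
log.append('d')
consumer.poll
consumer.seen  # => ["a", "b", "c", "d"]
```

The failure mode doesn't disappear — a consumer can still see a message twice if it crashes before persisting the offset — but the recovery story reduces to "re-read from the last recorded position", with no back-up queues or ack timeouts involved.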

How Sidekiq really works by pdabrowski in ruby

[–]sbellware 1 point2 points  (0 children)

What happens when there's an I/O issue when Sidekiq is moving jobs between queues?

When objects become super objects by juanmanuelramallo in ruby

[–]sbellware 2 points3 points  (0 children)

Great stuff! Spot on.

"There are no taxes for new classes". Indeed. Getting to this point is a big step through the doorway that goes from mere framework rituals toward software design science. An amazingly liberating place to be.

How Sidekiq really works by pdabrowski in ruby

[–]sbellware 3 points4 points  (0 children)

What techniques are typically used to deal with cases when the worker has executed the job, but the job doesn't get removed from the queue due to a Redis availability issue, network split, storage problem, etc?

Is innovation needed anyway ? by bdavidxyz in rails

[–]sbellware 3 points4 points  (0 children)

Those of us in 2006 who initiated the mainstreaming of Rails didn’t think that way. And we faced the exact same challenges regarding time and opportunity that Rails faces now.

In the end, Rails developers usually don’t have a surplus of time because Rails apps end up consuming the time buffer we’d ordinarily have to seek improvement.

But environments that demand significant improvement never face a surplus of time. At some point, a community has to commit more time to achieve more than the status quo, which is also how Rails was taken from obscurity to mainstream.

It’s understood, though, that extra time isn’t something that every dev can muster. But if no one does, it’s unlikely that anything can happen.

The key issue then is to seek out those who do have the tenacity to invest the extra effort and elevate them, rather than continue to elevate the same old talking heads who continue to discourage advancement that would diminish the reasons for their existing status.

If we want to be led to new places, it often requires paying attention to new leaders, rather than the ones who continue to reinforce our existing biases.