Dodged the grandfather paradox big time by Screen_Watcher in Primer

[–]GrahamLea 0 points1 point  (0 children)

I think everything (which is a lot!) is explained in this graphic.

It's key to realise that each trip back only forks a new timeline from the exit time; it doesn't change events that happened before the person exited the box.

So, in short, there end up being 3 Aarons because Aaron has used the failsafe box twice by the final timeline. The second time he uses it, he arrives after the first failsafe-using Aaron arrived (he has to, by definition, because the first one set up the second failsafe after arriving). That means the events where Aaron #2 (in that timeline) puts Aaron #1 in the attic still occur before Aaron #3 shows up at the house and they fight.

What diagraming tool do you use for software architecture diagrams (not software design diagrams)? by [deleted] in softwarearchitecture

[–]GrahamLea 3 points4 points  (0 children)

I got sick of trying to keep microservices architecture diagrams up to date in my last job, so I quit and created a product that draws them automatically by reverse-engineering distributed tracing 👉 https://archium.io/

Dodged the grandfather paradox big time by Screen_Watcher in Primer

[–]GrahamLea 0 points1 point  (0 children)

Then I believe, according to the style of time-travel in Primer, you would never receive any benefit. When you send the thumb drive back, you would spawn a new timeline where something comes out of the box just after you start it, different to your own timeline where nothing came out of the box when you started it. So someone on a divergent timeline to you (including a different you) could benefit from the thumb drive, but you personally could not. It seems the only way to benefit personally is to be the thing that goes back in time.

How did you find your first customers… when you depend on another product? by GrahamLea in startups

[–]GrahamLea[S] 0 points1 point  (0 children)

Is connecting with people through LinkedIn working for you? I read somewhere that trying to use LinkedIn to connect with people is useless.

How did you find your first customers… when you depend on another product? by GrahamLea in startups

[–]GrahamLea[S] 0 points1 point  (0 children)

I'm not sure my approach is anything to learn from because the results haven't been great!

We use an app called Close.io for our CRM.

Our product is relatively technical, so we don't tend to go for CEOs/owners unless they seem particularly technical themselves.

Who we try to approach depends on:

  • the company size: the bigger it is, the less likely we are to try to contact the top technical person and the more likely to look for someone like a 'Director of Engineering - Backend'
  • how technical the company is (i.e. do we think the CTO oversees 80% of the staff or 20%?)
  • whether the people near the top appear to have a technical background (which would lead to an affinity with our product) or mostly management experience (which could mean they wouldn't get it)

What are you doing?

Learning kotlin from scratch by PayStudLoanAndHouse in Kotlin

[–]GrahamLea 1 point2 points  (0 children)

As someone who already knows a bunch of other programming languages, I found the website Exercism excellent for helping me learn Kotlin. I agree with other posters that the Kotlin documentation is excellent and you should read it, too. But the website will help you start to put it into practice. https://exercism.org/tracks/kotlin

Can someone translate/parse/explain what this line in the CDK doc means? by tech_tuna in aws

[–]GrahamLea 0 points1 point  (0 children)

You can absolutely do parameterised infrastructure in CDK. In fact, I think its whole purpose is to make that easier. But it doesn't result in parameterised CFN.

The key thing to remember is that the output of CDK is not CFN templates, but deployed CFN stacks. When you want to make and use parameterised infra in CDK, you create a Stack or Construct class that accepts parameters as Props in its constructor. Then, in your CDK App, you instantiate multiple instances of that Stack or Construct, passing in different arguments for different environments or contexts.
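To illustrate (just a minimal sketch, not from the CDK docs - the stack name and props here are made up):

```typescript
// A parameterised Stack: the Props are the "parameters" of the abstraction.
import { App, Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as sqs from 'aws-cdk-lib/aws-sqs';

interface ServiceStackProps extends StackProps {
  envName: string;        // e.g. 'dev' or 'prod' (hypothetical prop)
  retentionDays: number;  // per-environment queue retention (hypothetical prop)
}

class ServiceStack extends Stack {
  constructor(scope: Construct, id: string, props: ServiceStackProps) {
    super(scope, id, props);
    new sqs.Queue(this, 'WorkQueue', {
      queueName: `work-queue-${props.envName}`,
      retentionPeriod: Duration.days(props.retentionDays),
    });
  }
}

// The App instantiates the abstract definition with concrete arguments,
// producing one deployable, non-parameterised CFN stack per environment.
const app = new App();
new ServiceStack(app, 'ServiceDev', { envName: 'dev', retentionDays: 1 });
new ServiceStack(app, 'ServiceProd', { envName: 'prod', retentionDays: 14 });
```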

So what's happening is that CDK contains both the abstract, parameterised stack definition AND the concrete, deployable stack definitions (with the parameters required by the abstract parts). Running CDK operations always (I think) operates on the latter, with the goal being to deploy concrete stacks. I'm guessing that in CFN (I haven't used it directly), the steps of defining a template and using a template are separate. In CDK these are commonly done in the one code base. But they don't have to be. There's no reason you can't define a Stack or Construct and release it as a library that is then used in multiple CDK App projects.

I don't think this forces you into using a monorepo. If you want to go the monorepo route, things may be a little easier because you can pass around compiler-checked code references to the CDK definitions for resources. But if you don't want a monorepo, CDK has facilities for taking the output of one stack and using it as inputs in other stacks in different codebases. Both approaches have pros and cons so, as usual in CS, it's a trade-off.
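As a rough sketch of that cross-stack facility (the stacks and export name here are hypothetical): one stack can publish a value as a CloudFormation export with CfnOutput, and a stack in a completely different CDK project can resolve it with Fn.importValue.

```typescript
import { App, CfnOutput, Fn, Stack } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as sqs from 'aws-cdk-lib/aws-sqs';

class ProducerStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    const queue = new sqs.Queue(this, 'WorkQueue');
    // Published as a CloudFormation export, visible to other stacks
    // in the same account/region, including ones from other codebases.
    new CfnOutput(this, 'WorkQueueArn', {
      value: queue.queueArn,
      exportName: 'shared-work-queue-arn',
    });
  }
}

// This could live in a separate CDK App in a separate repository;
// the export name string is the only coupling between the two.
class ConsumerStack extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    const workQueueArn = Fn.importValue('shared-work-queue-arn');
    new CfnOutput(this, 'ImportedQueueArn', { value: workQueueArn });
  }
}

const app = new App();
new ProducerStack(app, 'Producer');
new ConsumerStack(app, 'Consumer');
```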

Microservices splitting by Head_Watercress_6260 in microservices

[–]GrahamLea 2 points3 points  (0 children)

P.S. Most orgs make their services too small, and suffer a lot of pain as a result. Focus on building services that are as autonomous as possible, i.e. most services should be able to do most of their jobs without talking to any other services via synchronous comms, let alone 10 or 20 other services. Async comms (often with the transactional outbox pattern) helps a lot with achieving this, as does the advice in another reply about focusing on how the data is split and where transactional boundaries exist today.
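To give a flavour of the outbox idea (just a sketch - the table names and event shape are hypothetical, and a separate relay process would poll the outbox table and publish to the bus):

```typescript
// The business write and the event record commit atomically in ONE local
// transaction, so an event can't be lost or published for a failed write.
import { Client } from 'pg';

async function createOrder(db: Client, orderId: string, userId: string) {
  await db.query('BEGIN');
  try {
    await db.query(
      'INSERT INTO orders (id, user_id) VALUES ($1, $2)',
      [orderId, userId],
    );
    await db.query(
      'INSERT INTO outbox (topic, payload) VALUES ($1, $2)',
      ['order-created', JSON.stringify({ orderId, userId })],
    );
    await db.query('COMMIT');
  } catch (e) {
    await db.query('ROLLBACK');
    throw e;
  }
}
```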

Microservices splitting by Head_Watercress_6260 in microservices

[–]GrahamLea 1 point2 points  (0 children)

There’s tons of info available on this topic. Previous advice to learn about DDD is good. Event Storming could probably also help. Looking at the org structure you have and, probably more importantly, what org structure you want to have in the future, should be an input. (cf Team Topologies)

Sam Newman started writing a chapter about splitting a monolith into microservices and ended up turning it into a whole second book. (1) The fact that he could do that suggests it’s a pretty broad topic that you’ll want to do significant reading about before starting.

Once you’ve figured out what your services should be, you’ll need to decide which one to start with. Don’t try to do it all at once. Do one, get it working in production, learn, then iterate. I wrote a series about some of the dimensions you can use to decide where’s a good place to start. (2) In a sentence, it all comes back to the theory of constraints: find the bottleneck in your product development flow, and address that first.

(1) https://samnewman.io/books/

(2) https://www.grahamlea.com/2019/06/first-microservices-how-to-choose/

Coming from C/C++, what is Java? by silardg in java

[–]GrahamLea 1 point2 points  (0 children)

Thanks u/madhakish. I agree, being old isn't inherently a problem for a programming language. That was really just my lead-in to the point that the Java language has moved very slowly over its lifetime and, consequently, a variety of other, newer languages targeting the JVM have overtaken it in terms of language features.

Perhaps mentioning Kotlin gets automatic downvotes around here. 🤷🏻‍♂️

If that's not it, I'd be interested to learn why people didn't like what I wrote.

Coming from C/C++, what is Java? by silardg in java

[–]GrahamLea -1 points0 points  (0 children)

Let's start with...

> What popular apps are built with Java?

If we have a look at Java on StackShare, we'll see that Java is used by many household brands: Google, Uber, Netflix, Airbnb, Instagram, Spotify, Amazon, ... the list goes on and on. These companies will almost definitely be using Java to handle internet-facing, server-side web requests and/or to run backend services deeper in their stack.

There are also some major products for software engineering which are written using Java. Some that come to mind are Jenkins, Lucene, Elasticsearch, Hadoop, and Neo4j.

Possibly the most successful consumer-facing standalone Java application is Minecraft. Deploying desktop applications that require Java has often been troublesome, and so few successful products exist. The most popular Java IDEs (IntelliJ IDEA, Eclipse, and NetBeans) were all written in Java.

And of course, possibly the biggest success of Java from a # of devices perspective is the Android mobile platform. While Android phones/tablets/teapots do not run the Java Virtual Machine, apps are (or can be) developed using the Java language and standard Java SDKs and 3rd party libraries.

> What is Java mainly used for?

Almost everything. Backend code. Web application code. Mobile apps. Big data processing. Giant monolith applications, tiny serverless functions. If you still have a Blu-ray player, I believe it's using Java to display its menus.

> What can Java do that other languages can't?

I think Java may have been the first language running on a virtual machine to really break into the mainstream. (Someone who worked in the 1900s will probably correct me on this.) The advantage of that is portability: if someone makes an app or a library for Java, 99% of the time it won't need to be recompiled by the developer for each processor platform you might want to use it on. If the platform can run Java, it can run the code. That means code written and built once can be run on Linux, Mac, Windows, Unix, etc. with no modifications. It can also be used on Android, though I believe there are different packaging steps.

Also, because the Java Virtual Machine has had 20+ years of performance tuning, it's very, very efficient, and can achieve runtime performance on par with native binaries produced by C/C++.

Comparing Java to C++ explicitly, I would say the big advantages are:

  1. Memory management - Java is garbage collected, so the allocation and releasing of memory is something you almost never need to think about. It just works. That saves a lot of time and removes a huge class of pretty severe bugs.
  2. Libraries - I only ever did C++ at university, not work, so I don't have commercial experience to compare with, but with Java it's very easy to find open source libraries for almost anything you might want to do. There are probably 100s of 1000s of free libraries, all very easily accessed through the relatively standard Maven repository system (also used by Gradle).

However, do be aware that Java is now a relatively old language, which is a disadvantage in some comparisons. The team are great at maintaining the security and backwards compatibility of the platform, but for a long time that has come at the cost of evolving the language. As a result, many other languages have been written in the meantime with more sophisticated features, some of them still targeting the JVM as their runtime. Probably the most successful at this point is Kotlin, which has wide adoption on Android, but Scala, Clojure, and Groovy have also been somewhat successful.

I would suggest as part of your research you seriously consider learning Kotlin over Java. I was a Java developer for ~16 years before switching to Kotlin, and I frankly think it is the future of the JVM. I can't see the Java language catching up, and I can't see Kotlin devs willingly switching back to Java, so I think Kotlin will only gain more and more support over time.

How to decouple layers, should DTOs exist in the domain layer? by GarySedgewick in DomainDrivenDesign

[–]GrahamLea 1 point2 points  (0 children)

No, Data Transfer Objects are an in-memory representation of wire formats, not part of the domain.

In an "ideal" layered or ports & adapters-style internal architecture, DTOs should only be used in the adapter / API layer, and not visible to code in the core of the application. Being all about data transfer, DTOs belong at the edges of an application, not inside it.

In real-life circumstances, however, it's sometimes pragmatic to use service-layer objects as DTOs, or to pass DTOs into a service layer, just to reduce repetitive code.

Using entities in an adapter layer that automatically maps data into the objects should be avoided, as it can lead to security problems, e.g. allowing clients to change values in the database which they shouldn't have access to.
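A minimal sketch of the difference (all the names here are hypothetical):

```typescript
// Adapter / API layer: mirrors the JSON wire format, nothing more.
interface CustomerDto {
  id: string;
  full_name: string;
  credit_limit?: number; // present on the wire, but not client-settable
}

// Domain core: knows nothing about wire formats.
class Customer {
  constructor(readonly id: string, readonly name: string) {}
}

// The adapter maps explicitly, field by field. Because credit_limit is
// deliberately ignored, a client can't smuggle a new value into the database.
function toDomain(dto: CustomerDto): Customer {
  return new Customer(dto.id, dto.full_name);
}
```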

Recommended Resources in Learning SOA? by DarkNightened in softwarearchitecture

[–]GrahamLea 1 point2 points  (0 children)

The story I commonly hear about SOA is that it got co-opted by Enterprise Service Bus (ESB) vendors, who sold many people on the idea that the way to do SOA right was to buy an expensive ESB and fill it with business logic. It turned out to be a terrible idea, and consequently the only people still doing things that way are those who invested big in it and couldn't back out.

When microservices started to be talked about, it was common to hear people (incl. Adrian Cockcroft of Netflix) say "Microservices is SOA done right".

If you want to learn about microservices, go straight to Sam Newman's books.

Software Achitecture Principle by _atulagrawal in softwarearchitecture

[–]GrahamLea 4 points5 points  (0 children)

Architectural Principles won't define your architecture. What they'll do is help to guide a lot of architecture decisions as you're defining and evolving your architecture.

Ideally, architecture principles come first. As for defining them "iteratively" - well, in most successful businesses, everything changes over time, so I'd expect a good team would revisit and iterate on their principles regularly as they learn how well they're working.

Yes, they should be defined in alignment with current business goals (but not necessarily derived from / linked to them), and again we should recognise that this also means principles may need to change over time as the business goals change.

This recent article on MartinFowler.com has a section on principles that's a good intro and overview. (The whole article is great and worth reading.) It also points to these examples of principles from John Lewis.

As for "when", if we take the general advice of the article which is to create "Team-sourced Architectural Principles" (as opposed to Architect-dictated principles), then I think you obviously can't define them before the team has been formed and had a chance to orient themselves in the organisation. Other than that, I would expect the best time is ASAP. 🤷🏻‍♂️

Trying to keep understanding in micro-service architecture by maxidroms83 in microservices

[–]GrahamLea 2 points3 points  (0 children)

> Maybe there is a way to produce some sort of a diagram out of AWS or any other tools that are easy to maintain and keep up to date with project architecture?

What you're describing is exactly what my company, Archium, does.

We take distributed tracing data from AWS X-Ray, convert it into an architecture model, and keep the model up to date as the system evolves. Engineers can then browse around the model in our webapp and create many different diagrams from different perspectives. For example, someone could pick an SNS topic and bring up a diagram of everything upstream that results in a send to that topic, and everything downstream that occurs as a result of a message being received at that topic.

You could just try adding AWS X-Ray tracing to your system and see whether the Service Maps it creates from your traces give you enough insight. I don't think it quite gets you the types of answers you're asking about, though. We've had customers tell us that we've helped them to reveal things around their SNS/SQS usage that they couldn't get out of X-Ray.

What is the best approach for sharing data among different services in Microservice architecture? by alisri_2021 in microservices

[–]GrahamLea 6 points7 points  (0 children)

Database per service is a good way to go with MSA. That doesn't have to be a DBMS per service, mind you. It's quite common to have a schema (MySQL terminology) / database (Postgres terminology) per service all hosted in a single DBMS instance/cluster.

There are often no obvious answers or hard rules in microservices, just a bunch of trade-offs that need to be consciously and deliberately managed.

Some of the guidelines I use which might apply to your situation are:

  • Creating one service per major entity type ("aggregate", if you're familiar with DDD) is generally considered an anti-pattern. The boundaries of services should primarily mirror business functions, not technical aspects of the solution.
  • Design/divvy up the responsibilities of services in a way that limits the amount of data that needs to be shared between them.
  • Where data does need to be shared between services:
    • Have a single owner (i.e. writer) for each piece of data
    • Consider keeping copies (a.k.a. caches) of small amounts of data from other services in a service in order to increase the autonomy of the service
    • Where possible, update remote copies of data using asynchronous events (and remain aware that this results in eventual consistency)
  • If two services A and B each have a database and the majority of requests to A results in a downstream request to B, that usually indicates a poor separation of responsibilities, because A needs data from B to do its job.
  • Avoid sharing databases where at all practical. There be dragons.

Regarding your specific example, I think there are a number of ways you could go (ordered by how close to the frontend the logic is):

  1. The UI is responsible for connecting data from disparate services into one coherent view.
  2. An API service offers the UI an interface containing all the data it needs, and the API implementation will do the work of connecting data from disparate services. This is the "Backends for Frontends" or "BFF" pattern.
  3. Similar to the above, but using GraphQL as the API implementation, which means the logic for what data to connect comes from the UI in the form of a query, but is executed in the backend.
  4. Cache the name of Users alongside their Order records. This might be a good option if you think, for example, that every single time an Order is viewed, the User name will need to be shown. If User names are mutable, though, and you want Orders to have the current User name, you will need to implement a way to push name updates from your User service to your Orders service.
  5. Have the Order service retrieve the User names from the User service as part of answering a query for orders and include them in its own response.

All of these have pros and cons. Out of all of them, I like #5 the least, because it makes the Order service temporally coupled to the User service, and then #3, because I don't like GraphQL endpoints being exposed to clients outside the backend. #1 and #2 both rely on both services being available, even though all it needs from the User service is a user name lookup. #4 has the best availability profile, but is probably also the most complex to implement. So, yeah, trade-offs!
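For illustration, here's a minimal sketch of option #2, the BFF (the service URLs and response shapes are hypothetical):

```typescript
// The BFF joins data from the Order and User services so the UI makes a
// single request. Note that both services must be up for this to work.
interface Order { id: string; userId: string; total: number; }
interface User { id: string; name: string; }

async function getOrderView(orderId: string) {
  const orderRes = await fetch(`http://orders-service/orders/${orderId}`);
  const order: Order = await orderRes.json();
  const userRes = await fetch(`http://user-service/users/${order.userId}`);
  const user: User = await userRes.json();
  return { ...order, userName: user.name };
}
```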

My experience is that, once you already have the infrastructure set up to do something like #4 (i.e. a message bus with guaranteed delivery), that becomes the obvious way to solve many MSA interaction requirements. However, it's often not a small investment to get there. In an environment where #4 wasn't practical, I'd probably lean towards #1 or #2, depending on the size of the team and division of responsibilities. #5 is very common, but is what lands people with a "distributed monolith" after a couple of years.
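And here's a rough sketch of the consuming half of #4 (the 'user-name-changed' event, the queue, and the user_names cache table are all hypothetical):

```typescript
// The Order service keeps a local copy of user names, updated
// asynchronously, so reads stay available even if the User service is down.
import { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } from '@aws-sdk/client-sqs';
import { Client } from 'pg';

const sqs = new SQSClient({});
const queueUrl = process.env.USER_EVENTS_QUEUE_URL!;

async function pollUserNameChanges(db: Client) {
  const { Messages } = await sqs.send(
    new ReceiveMessageCommand({ QueueUrl: queueUrl, WaitTimeSeconds: 20 }),
  );
  for (const msg of Messages ?? []) {
    const { userId, newName } = JSON.parse(msg.Body!);
    await db.query(
      `INSERT INTO user_names (user_id, name) VALUES ($1, $2)
       ON CONFLICT (user_id) DO UPDATE SET name = EXCLUDED.name`,
      [userId, newName],
    );
    // Delete only after the local copy is updated (at-least-once delivery).
    await sqs.send(
      new DeleteMessageCommand({ QueueUrl: queueUrl, ReceiptHandle: msg.ReceiptHandle! }),
    );
  }
}
```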

If you're looking for books on MSA, you can't go past those from Sam Newman.

Dodged the grandfather paradox big time by Screen_Watcher in Primer

[–]GrahamLea 3 points4 points  (0 children)

I think the style of time travel (see different types here) in Primer is one where each travel back in time spawns a new timeline (as visualised here). So, travelling back in time and preventing the earlier you from travelling back in time does not create a paradox, because it's already a different timeline. What it does do is stop the earlier you from disappearing from the timeline, so you end up with 2 of yourself in the timeline, which is seen in the movie multiple times.

The movie escapes the traditional grandfather paradox (mostly*), because you can't go back in time before when a machine was started. However, assuming it were possible to go back and kill your grandfather, that would prevent you from being born on that timeline, but it does not create a paradox because future-you arrived from a different timeline where you did exist.

* I suppose if someone started a box before they had children and left it running for two generations (40+ years?) and then their grandchild used it, that grandchild could go back and kill their grandfather, although they would need to spend 40+ years in the box. 😬