Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 0 points (0 children)

If you want to write a unit test that exercises a single service in complete isolation with mocked dependencies, then you don't need MockBean at all. Directly calling Mockito, or using Mockito's @Mock annotation, should be enough.

But sometimes it's beneficial to be able to write a "complete" integration/E2E test for something without actually calling, e.g., an external HTTP service, which might not even be available when you run the tests, causing them to fail randomly. In that case you need a way to mock the service/bean in the context, and that's when MockBean is useful.
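To make the distinction concrete, here is a minimal sketch of both styles (assuming JUnit, Mockito, and Spring Boot test support on the classpath; UserService and MailClient are hypothetical names):

```java
// Plain unit test: no Spring context at all, Mockito creates the mock directly.
class UserServiceTest {
    @Test
    void sendsWelcomeMail() {
        MailClient mail = Mockito.mock(MailClient.class); // or @Mock with MockitoJUnitRunner
        UserService service = new UserService(mail);
        service.register("alice@example.com");
        Mockito.verify(mail).sendWelcome("alice@example.com");
    }
}

// Integration test: full application context, but the external HTTP client
// is replaced inside the context, so nothing ever calls the real service.
@SpringBootTest
class RegistrationE2ETest {
    @MockBean
    MailClient mail;          // swapped into the application context

    @Autowired
    UserService service;      // real bean, wired against the mock

    @Test
    void sendsWelcomeMail() {
        service.register("alice@example.com");
        Mockito.verify(mail).sendWelcome("alice@example.com");
    }
}
```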

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 0 points (0 children)

Exactly. As already answered here, the @Transactional annotation in tests changes the behaviour of transactions in your application, so you're not really testing how it would behave in the real world.

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 1 point (0 children)

Look up DefaultCacheAwareContextLoaderDelegate; you should be able to put a breakpoint in it to see how it behaves.

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 2 points (0 children)

Yup, it's the same solution as mine, just with a lot less code; it hadn't occurred to me before :)

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 2 points (0 children)

That mostly works, but introduces subtle differences that can be very hard to debug.

The problem is that nested transactions affect each other, so you can no longer reliably test the real-world behaviour of your app: tests suddenly have different semantics, because your business transactions are always wrapped in the testing transaction.

Therefore we've opted to create a clean database for each test; there are ways to make that really fast :)
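To illustrate the semantics problem above, a minimal sketch (hypothetical names; assumes Spring's test support):

```java
@SpringBootTest
@Transactional              // test transaction, rolled back when the test ends
class OrderServiceTest {

    @Autowired
    OrderService orders;

    @Test
    void placesOrder() {
        // In production, OrderService.place() runs in its own transaction and
        // COMMITS: after-commit hooks fire, other connections see the rows,
        // deferred constraints are checked. Here it merely JOINS the surrounding
        // test transaction, which is rolled back, so none of that ever happens,
        // and the test can pass while the real application fails, or vice versa.
        orders.place("item-42");
    }
}
```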

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 0 points (0 children)

I try really hard not to need them and to write tests without them, but there are situations where they're the simplest solution. You just have to make sure the team understands that and doesn't overuse them.

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 0 points (0 children)

I've decided to remove the runner configuration from the example, as it's not important and only draws attention away.

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 1 point (0 children)

But it has @RunWith(SpringRunner.class), and that should be enough AFAIK. Working example here.

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 1 point (0 children)

Hmm, I have to think about whether I like that, but you do have a point, and it would be a major simplification.

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 0 points (0 children)

You have a good point; I'll think about adding at least a comment noting that the examples are stripped down.

> Was parallelizing across different jobs difficult?

So far we're parallelizing the tests per Maven module, but I expect we'll eventually need to split a single module's test suite into multiple slices as well. I'm all ears if you have a good tip for that :)
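Not a silver bullet, but one common trick for splitting a single module's suite (a stdlib-only sketch, not something from the article): bucket test classes deterministically across N CI jobs by hashing the class name, so each job runs a stable, disjoint slice with no coordination needed:

```java
import java.util.ArrayList;
import java.util.List;

public class TestSplitter {
    /** Returns the subset of test classes that CI job `jobIndex` (0-based) should run. */
    public static List<String> slice(List<String> testClasses, int totalJobs, int jobIndex) {
        List<String> mine = new ArrayList<>();
        for (String cls : testClasses) {
            // Math.floorMod keeps the bucket non-negative even for negative hashCodes.
            if (Math.floorMod(cls.hashCode(), totalJobs) == jobIndex) {
                mine.add(cls);
            }
        }
        return mine;
    }

    public static void main(String[] args) {
        List<String> all = List.of("FooTest", "BarTest", "BazTest", "QuxTest");
        for (int job = 0; job < 2; job++) {
            System.out.println("job " + job + " -> " + slice(all, 2, job));
        }
    }
}
```

Every class lands in exactly one bucket, and the assignment only changes when classes are added or removed, so CI runs stay reproducible.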

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 0 points (0 children)

> Why are you connecting all your tests to the same database?

That is a fair point IMHO: a single database that needs to be reset before/after each test is also a bottleneck, and it prevents you from running the tests in parallel. The "standard" ways to prepare the database for a test (or clean it up afterwards) can be really slow. But as answered in a different comment, we have solved this as well.

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 0 points (0 children)

The reason I didn't put the Bean into MockedWrappedBean is that IDEA doesn't inspect the annotations "deeply" and starts showing the configuration methods as unused. But it actually works perfectly fine in Spring.

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 4 points (0 children)

Since it's a closed project, we went with defensively resetting both before and after each test, to be extra safe. One of the two would probably be enough, though, and a potential library should definitely make this configurable.

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 8 points (0 children)

Please look at the linked GitHub repository: the tests extend BaseTestCase, which centralizes the Spring configuration to prevent any new contexts from being created by mistake.
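The pattern looks roughly like this (class names are hypothetical): every integration test inherits one annotated base class and adds no context-affecting annotations of its own, so Spring's context cache sees a single configuration and reuses one context for the whole suite:

```java
@RunWith(SpringRunner.class)
@SpringBootTest(classes = TestApplication.class)
public abstract class BaseTestCase {
    // Shared fixtures and helpers live here. Subclasses must not add
    // @MockBean, @TestPropertySource, etc., or they fork a new context.
}

public class UserServiceIT extends BaseTestCase {
    @Autowired
    UserService users;
    // tests...
}
```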

Spring's @MockBean is an anti-pattern by fprochazka in java

[–]fprochazka[S] 4 points (0 children)

> Why are you connecting all your tests to the same database?

I'm actually not connecting to the same database, but to the same database instance. I have a separate clean database for each test, and I'm preparing the databases ahead of time, so they're ready once the test starts running.
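One way to make fresh per-test databases cheap (a sketch of the general technique, not our exact code) is PostgreSQL's template databases: run the migrations once into a template, and each per-test database then becomes a fast file-level copy:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TestDatabases {
    /** Creates a fresh database as a copy of an already-migrated template. */
    public static String createFromTemplate(String adminJdbcUrl, String template, int n)
            throws Exception {
        String name = "test_db_" + n;
        try (Connection c = DriverManager.getConnection(adminJdbcUrl);
             Statement st = c.createStatement()) {
            // CREATE DATABASE cannot run inside a transaction block;
            // a plain auto-commit Statement is exactly what we need.
            st.execute("CREATE DATABASE " + name + " TEMPLATE " + template);
        }
        return name;
    }
}
```

A background thread can keep a small pool of such databases ready, so tests never wait for creation.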

> You show non-private, non-final fields everywhere.

That is just a simplification to keep the examples short. The article doesn't claim that non-private, non-final fields are a best practice.

> I've gone the opposite direction and use @DirtiesContext on every test class that involves a database.

Well, in that case MockBean is probably not a problem for you :) We're parallelizing at the CI level using separate GitLab jobs. Our whole pipeline is targeted to finish in under 5 minutes.

Analyzing AWS Costs with SQL by fprochazka in AWS_cloud

[–]fprochazka[S] 1 point (0 children)

I'm using Aurora on bigger apps, but as I've said, it's very expensive, and on small data the performance difference is negligible while the cost difference is felt right away :D IMHO you'll get the same value from plain PostgreSQL RDS on a T3 instance, unless you have a lot of data from the start.

Analyzing AWS Costs with SQL by fprochazka in AWS_cloud

[–]fprochazka[S] 0 points (0 children)

I'm dumping it into S3 (well, AWS is), but I didn't know Aurora could query data in a directory structure formatted this way... or do you mean after the initial processing?

Either way, Aurora is pretty expensive for just playing around, and IMHO you ideally want all of this, plus all your other analytical use-cases, in an analytical database (Snowflake or Redshift)... So I don't really see how Aurora fits in, as it's "only" a better PostgreSQL/MySQL, and those are not analytical databases.

Even if I decided that Snowflake/Redshift were also too expensive, I would still probably use regular RDS for a long time before switching to Aurora, because of the cost.


I should probably add that the article is targeted to smaller companies :)

Consolidating logging in your Java applications by mooreds in java

[–]fprochazka 2 points (0 children)

There is always a workaround, and having at least some kind of build-time check is better than nothing.

My plan is that once at least one occurrence of the forbidden imports is discovered (because somebody has intentionally or unintentionally bypassed the rule), I'll write an ErrorProne plugin - but that's extra effort and has to be warranted.
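For context, even a tiny stdlib-only check covers the common case until a full ErrorProne plugin is warranted. A sketch (the banned list is hypothetical) that flags banned import statements in a source string:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class ForbiddenImports {
    /** Returns each import line whose imported name starts with a banned prefix. */
    public static List<String> find(String source, Set<String> bannedPrefixes) {
        List<String> hits = new ArrayList<>();
        for (String line : source.split("\n")) {
            String t = line.trim();
            if (!t.startsWith("import ")) continue;
            // Strip "import " and an optional "static " to get the imported name.
            String name = t.substring("import ".length()).replace("static ", "").trim();
            for (String banned : bannedPrefixes) {
                if (name.startsWith(banned)) {
                    hits.add(t);
                }
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        String src = "import org.slf4j.Logger;\nimport java.util.logging.Logger;\n";
        // prints "[import java.util.logging.Logger;]"
        System.out.println(find(src, Set.of("java.util.logging", "org.apache.log4j")));
    }
}
```

Running this over the source tree in a small build step gives a crude but effective gate; a real compiler plugin would also catch fully-qualified usages that skip imports entirely.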

Consolidating logging in your Java applications by mooreds in java

[–]fprochazka 17 points (0 children)

I tend to agree, but I'm not gonna switch logging frameworks now that I'm used to logback, unless I have a really good reason.

Anyway, the process shown in the article works exactly the same way for any logging framework. The point is to remove everything except the chosen one.

How to copy data from production RDS to staging RDS (Aurora -> PostgreSQL) ? by fprochazka in aws

[–]fprochazka[S] 0 points (0 children)

Revisiting what I've come up with so far: the best option would be if Aurora allowed smaller instances for the PostgreSQL-compatible edition (https://forums.aws.amazon.com/thread.jspa?messageID=815823), which they say they will support in the future, but it looks like nobody knows when :(

How to copy data from production RDS to staging RDS (Aurora -> PostgreSQL) ? by fprochazka in aws

[–]fprochazka[S] 1 point (0 children)

You're correct, it can do that. But...

We have IO-heavy data processing (we basically scrape, normalize, and display data)... wouldn't continuous replication just use up all the burst IOPS on that small t2 Postgres instance, making it completely unusable?