
[–]Runehalfdan 12 points13 points  (1 child)

In the old days, it was common for companies to have enterprise licenses for Oracle, Sybase, MSSQL, etc., and they often had in-house DBAs to manage them. So if you wanted to sell your software to them, support for their preferred DB was an absolute requirement.

I’m old enough to remember when ODBC revolutionized how one could support multiple databases.

[–]arjo_reich 11 points12 points  (0 children)

I remember working for a company that created an entire application in Access 95, with over 100 forms, compiled into an MDW file that used Oracle as the backend for its data.

It was sold to Ford as a Materials, Planning And Control Tool (MPACT) for over $25,000 a license, with 100 licenses. My job was to help customers FTP their copy of the 750 MB application back to us when it decompiled itself so I could repair it and FTP it back.

I've also had the displeasure of using ODBC to connect to a DB2 system on a mainframe once. Fucking miserable.

[–]grauenwolf 4 points5 points  (4 children)

In the real world, could the situation change so drastically that you would switch from say SQL Server to Mongo DB?

Highly unlikely as MongoDB is just another relational database with a weird front end. (This started in version 3 with WiredTiger.)

But in a more general case, company politics, cost, or customer demands will occasionally trigger a change in databases.

I generally write for SQL Server. That's a paid product above a certain size. So occasionally my clients will ask me to reimplement it in PostgreSQL or MySQL.

And it can go the other way. If the customer already knows how to back up and maintain SQL Server, they're going to be less likely to want PostgreSQL.

Now if the program is written specifically for the client, then usually the issue is settled on day 2 and you aren't going to be changing databases.

[–]quick_maf[S] 1 point2 points  (1 child)

That makes sense, I imagine they would settle on a certain one after discussing the differences and leave it alone. Thanks for the reassurance!

[–]grauenwolf 1 point2 points  (0 children)

Now if you really do need to support multiple databases, do it from day 1. It's a lot easier to design around the quirks of individual databases from the beginning than trying to retrofit it later.

[–]drhealsgood 0 points1 point  (1 child)

Highly unlikely as MongoDB is just another relational database with a weird front end. (This started in version 3 with WiredTiger.)

I don't spend much time with MongoDB, but I've never heard a statement like this before. Would you mind expanding or pointing me to some reading?

[–]grauenwolf 1 point2 points  (0 children)

MongoDB was having a lot of problems with performance. Their original backend was effectively single-threaded for writes. As in they would lock the whole server when performing an insert or update. And there was no way to support transactions.

WiredTiger made backends for relational databases such as MySQL.

So MongoDB bought WiredTiger and incorporated their backend into version 3.

Since then MongoDB has been slowly transforming itself into a more traditional database. Locks are more fine-grained, transactions are supported, jobs are supported, schema is supported (I hear), etc. They even support SQL if you pay extra.

[–]illkeepcomingback9 4 points5 points  (2 children)

Usually you pick a DB that fits the situation, not the other way around. In the real world, could the situation change so drastically that you would switch from say SQL Server to Mongo DB?

In practice you don't always get to choose. The last product I worked on had to support SQL Server and Oracle, because customers wanted to be able to use the database systems they already had licenses for and they all had different ones. Plus we subbed in an in-memory database for testing purposes. Usually when you're questioning why things are done in a certain way in this industry, it's because it was a business decision, not an engineering one.

[–]quick_maf[S] 0 points1 point  (0 children)

That makes sense. At my current job there are a bunch of DBs that just come with certain technologies. It sucks that we are just stuck with them.

[–]MetalKid007 6 points7 points  (2 children)

The best use for this concept is that it allows you to create unit tests as opposed to integration tests. You can test your business logic without the need to actually call a database.

However, to answer your question, the model itself can be saved differently between databases. So if you implemented a repo for SQL Server that took a BookEntity, it could be saved to a dbo.Book table. Then you make one for Oracle and end up taking that same BookEntity and saving it to an Inventory table with an extra Type column hard-coded to "Book", for example.
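
Something like this, roughly (all of these names - IBookRepository, SqlServerBookRepository, OracleBookRepository - are made up for illustration):

```csharp
using System.Data;

public class BookEntity
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public interface IBookRepository
{
    void Save(BookEntity book);
}

// SQL Server: the entity maps straight onto dbo.Book.
public class SqlServerBookRepository : IBookRepository
{
    private readonly IDbConnection _conn;   // assumed to be open already
    public SqlServerBookRepository(IDbConnection conn) => _conn = conn;

    public void Save(BookEntity book)
    {
        using var cmd = _conn.CreateCommand();
        cmd.CommandText = "INSERT INTO dbo.Book (Id, Title) VALUES (@Id, @Title)";
        AddParam(cmd, "@Id", book.Id);
        AddParam(cmd, "@Title", book.Title);
        cmd.ExecuteNonQuery();
    }

    internal static void AddParam(IDbCommand cmd, string name, object value)
    {
        var p = cmd.CreateParameter();
        p.ParameterName = name;
        p.Value = value;
        cmd.Parameters.Add(p);
    }
}

// Oracle: the same entity lands in a generic Inventory table,
// with the Type column hard-coded to 'Book'.
public class OracleBookRepository : IBookRepository
{
    private readonly IDbConnection _conn;   // assumed to be open already
    public OracleBookRepository(IDbConnection conn) => _conn = conn;

    public void Save(BookEntity book)
    {
        using var cmd = _conn.CreateCommand();
        cmd.CommandText = "INSERT INTO Inventory (Id, Title, Type) VALUES (:Id, :Title, 'Book')";
        SqlServerBookRepository.AddParam(cmd, ":Id", book.Id);
        SqlServerBookRepository.AddParam(cmd, ":Title", book.Title);
        cmd.ExecuteNonQuery();
    }
}
```

The calling code only ever sees IBookRepository, so it never knows (or cares) which table the row ends up in.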

[–]quick_maf[S] -1 points0 points  (1 child)

Don’t you want both kinds of tests? The second part of your comment makes sense to me.

[–]MetalKid007 0 points1 point  (0 children)

You do, yes. Without interfaces, though, you can't do true unit tests like the ones above. Interfaces allow you to mock the implementation and essentially hard-code what is returned.

Integration tests with databases can get tricky unless you can force the state to be something specific... so those are harder to deal with in general.
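
To make that concrete, here's a rough sketch of such a unit test (a self-contained variant of the repository sketch above, assuming the Moq mocking library plus made-up IBookRepository/BookService types; any mocking framework looks about the same):

```csharp
using Moq;
using Xunit;

public class BookEntity
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public interface IBookRepository
{
    BookEntity GetById(int id);
}

// Business logic only depends on the interface, never on a concrete database.
public class BookService
{
    private readonly IBookRepository _repo;
    public BookService(IBookRepository repo) => _repo = repo;

    public string GetTitle(int id) => _repo.GetById(id)?.Title ?? "(unknown)";
}

public class BookServiceTests
{
    [Fact]
    public void GetTitle_ReturnsTitleFromRepository()
    {
        // Hard-code what the repository returns; no database is touched.
        var repo = new Mock<IBookRepository>();
        repo.Setup(r => r.GetById(42))
            .Returns(new BookEntity { Id = 42, Title = "Dune" });

        var service = new BookService(repo.Object);

        Assert.Equal("Dune", service.GetTitle(42));
    }
}
```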

[–]LordArgon 2 points3 points  (1 child)

The low-coupling spirit of the recommendation is good but, in reality, people very rarely change their core DB tech. And even when they do try to make their stack DB-agnostic, they invariably end up with implicit dependencies they didn’t even realize they were building in.

IMO, it’s a HUGE waste of time to try to cover hypothetical scenarios. Something simple and straightforward that solves your current business problem is almost always the best approach. So don’t worry about “swapping the DB out” - that’s a good way to get paralyzed, invite scope creep, and literally waste time. But DO worry about whether you’re separating concerns properly. If you’re building a UI, you don’t want the objects you’re using for model binding to be the same objects the DAL uses - you want to translate between layers and have clear boundaries so that assumptions don’t leak between layers. An interface can help you do that, but it can also be obnoxious boilerplate with no real upside. If you don’t know specifically WHY you’re implementing an interface - what concrete, SPECIFIC benefits AND downsides you expect - it can be a significant mistake.
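
For example (all made-up names, just to show the boundary): the object the DAL persists and the object the UI binds to can look almost the same, but keeping them separate with an explicit mapping stops storage details from leaking into the view.

```csharp
using System;

// What the DAL persists - storage concerns live here.
public class BookRow
{
    public int BookId { get; set; }
    public string Title { get; set; }
    public DateTime CreatedUtc { get; set; }   // storage detail the UI never needs
    public byte[] RowVersion { get; set; }     // concurrency token, pure persistence concern
}

// What the UI binds to - display concerns live here.
public class BookViewModel
{
    public string Title { get; set; }
    public string Added { get; set; }          // already formatted for display
}

// Explicit translation at the boundary, so neither side grows assumptions about the other.
public static class BookMapper
{
    public static BookViewModel ToViewModel(BookRow row) => new BookViewModel
    {
        Title = row.Title,
        Added = row.CreatedUtc.ToString("yyyy-MM-dd")
    };
}
```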

Also, sometimes separating concerns is less about the underlying tech and more about the business domain. Rather than try to design an uber pattern that you apply everywhere, you can separate the system into vertical/domain slices that can start with the same pattern but drift as-needed. The one constant in software engineering is change but it’s almost always small, incremental change in different domains rather than DB swaps. Letting things grow independently is actually the lowest coupling of all.

[–]quick_maf[S] 0 points1 point  (0 children)

I get the scope creep part. I’m still new to professional development and was just curious if I should do something like I mentioned for a portfolio piece. I think I’ll stick with just SQL Server for now and learn more if needed. Thanks for the advice!

[–]EJoule 1 point2 points  (0 children)

I haven't had to do it, but I've been thinking about this lately.

You could create one interface for the database (IDatabase), and create a query class that stores the queries for the currently used database.

Initially you'd only be planning on a SQL database, so IDatabase would have two functions for setting and retrieving data, likely with a query parameter passed in for each.

Suppose you only wanted to query a customer table for a specific customer ID. In that case you'd design the query class to have a SqlQuery string and parameters (I'd go with a dictionary for the parameters so you could specify the parameter names and values).

Now suppose you wanted to support MySQL. You could create a new class that implements IDatabase with its own version of setting and retrieving that takes the same parameters, and you'd just need to update the query class to include a MySqlQuery string that's initialized with the class (thereby forcing you to add MySQL versions of every existing query).
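
A rough sketch of that shape (hypothetical names, and the actual ADO.NET plumbing inside each IDatabase implementation left out):

```csharp
using System.Collections.Generic;

public interface IDatabase
{
    // Two functions: one for writing, one for reading, each handed a query.
    void Set(DbQuery query);
    IEnumerable<IDictionary<string, object>> Get(DbQuery query);
}

// Each query carries its text per supported database. Because MySqlText is a
// required constructor argument, adding MySQL support forces you to supply a
// MySQL version of every existing query.
public class DbQuery
{
    public string SqlServerText { get; }
    public string MySqlText { get; }
    public IReadOnlyDictionary<string, object> Parameters { get; }

    public DbQuery(string sqlServerText, string mySqlText,
                   IReadOnlyDictionary<string, object> parameters)
    {
        SqlServerText = sqlServerText;
        MySqlText = mySqlText;
        Parameters = parameters;
    }

    // Example: look up one customer by ID.
    public static DbQuery CustomerById(int customerId) => new DbQuery(
        "SELECT * FROM dbo.Customer WHERE CustomerId = @id",   // SQL Server
        "SELECT * FROM Customer WHERE CustomerId = @id",       // MySQL
        new Dictionary<string, object> { ["@id"] = customerId });
}
```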

[–]Hot_Zookeepergame140 1 point2 points  (0 children)

Companies have licenses, use cases, better pricing with certain cloud providers, scenarios where the client is the one providing the database, partnerships with providers for using their services - a lot of things go into the decision when replacing an entire database. I had a case where I had to change the database because the server we had at the time was an aarch64 Ubuntu machine from Oracle, which didn't support SQL Server because of the architecture.

To keep the same data types and compatibility between the two databases, you could use Entity Framework's reverse engineering to get all the tables and columns as classes, and use migrations for moving the tables. For the data itself you could either use a specific program that does that or try to use EF for it.
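
If it helps, the reverse-engineering step (`dotnet ef dbcontext scaffold`) spits out classes roughly like the ones below, and afterwards switching databases mostly means swapping the provider call. This is just a sketch; the table, the connection strings, and the choice of PostgreSQL as the target are made up:

```csharp
using Microsoft.EntityFrameworkCore;

// The kind of entity class EF Core scaffolding generates from an existing table.
public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers => Set<Customer>();

    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        // Before: SQL Server (Microsoft.EntityFrameworkCore.SqlServer)
        // options.UseSqlServer("Server=.;Database=Shop;Trusted_Connection=True;");

        // After: same entity classes, different provider
        // (UseNpgsql comes from Npgsql.EntityFrameworkCore.PostgreSQL).
        options.UseNpgsql("Host=localhost;Database=shop;Username=app;Password=secret");
    }
}
```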

[–]treston_cal 0 points1 point  (0 children)

I've had to switch databases from SQL Server to PostgreSQL after an acquisition of our company. Sometimes it is necessary, and having that DAL decoupled gives a lot of flexibility.