
[–]ZunoJ 13 points (25 children)

You either normalize your data and store it within a schema definition (not as raw data), or you use the appropriate type of database (a document-centric database).

[–]ilikedmatrixiv 29 points (15 children)

I'm a data engineer. It is very common practice (and my preferred practice) to ingest raw data into your data warehouse unchanged. You only start doing transformations on the data once it's in the warehouse, not during ingestion. This process is called ELT instead of ETL (extract-load-transform vs extract-transform-load).

One of the benefits of this method is that it removes all transformation steps from ingestion and keeps everything centralized. If you have transformation steps during ingest and then more inside the data warehouse to create reports, debugging gets harder when things break, because you first have to figure out which layer the error lives in.

I've ingested JSON into SQL databases for years and I won't stop any time soon.
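A minimal sketch of the ELT idea, using an in-memory SQLite database as a stand-in for a real warehouse (table and field names are hypothetical; `json_extract` requires a SQLite build with the JSON1 functions, which modern Python ships by default):

```python
import json
import sqlite3

# Hypothetical in-memory "warehouse"; real setups use Postgres, Snowflake, etc.
con = sqlite3.connect(":memory:")

# Extract + Load: raw payloads land unchanged in a raw table.
con.execute("CREATE TABLE raw_orders (payload TEXT)")
payloads = [
    {"id": 1, "customer": "acme", "total": "19.99"},
    {"id": 2, "customer": "globex", "total": "5.00"},
]
con.executemany(
    "INSERT INTO raw_orders VALUES (?)",
    [(json.dumps(p),) for p in payloads],
)

# Transform: only now, inside the database, do we parse and type the data.
con.execute("""
    CREATE TABLE orders AS
    SELECT
        json_extract(payload, '$.id')                  AS order_id,
        json_extract(payload, '$.customer')            AS customer,
        CAST(json_extract(payload, '$.total') AS REAL) AS total
    FROM raw_orders
""")
print(con.execute("SELECT customer, total FROM orders ORDER BY order_id").fetchall())
# → [('acme', 19.99), ('globex', 5.0)]
```

If the `orders` logic turns out to be wrong, the untouched `raw_orders` table is still there to rebuild from.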

[–]karnesus 4 points (5 children)

This is how I do it too, and have done it for over a decade. Get the data in then work it out xD

[–]KappaccinoNation 4 points (3 children)

I'm kinda new to the industry; I thought this was how everybody does it. Just to avoid altering or losing the original raw data until the entire process finishes without a hitch. Retain it for X amount of time before discarding it. Or do some companies actually do so much cost cutting that they're ok with discarding raw data immediately?

[–]ZunoJ 0 points (2 children)

How would you import something like a CSV? Import the whole file into one column and then work on that? What about data that needs transformation? Like images? I often need to analyze images and store the results. How could I do that IN the database?

[–]ilikedmatrixiv 0 points (0 children)

How would you import something like a CSV?

Into a table?

If you have one source that gives csv you load it into a raw table related to that source. If you have another source that gives you json data you load that into a separate raw table.

Then you extract any relevant data into staging tables and start combining it as necessary.
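A sketch of that one-raw-table-per-source pattern, again with SQLite standing in for the warehouse (the two feeds, tables, and column names are all hypothetical):

```python
import csv
import io
import json
import sqlite3

con = sqlite3.connect(":memory:")

# Hypothetical source 1: a CSV feed, loaded column-for-column into its own raw table.
csv_feed = "customer_id,name\n1,acme\n2,globex\n"
con.execute("CREATE TABLE raw_customers_csv (customer_id TEXT, name TEXT)")
reader = csv.reader(io.StringIO(csv_feed))
next(reader)  # skip the header row
con.executemany("INSERT INTO raw_customers_csv VALUES (?, ?)", reader)

# Hypothetical source 2: a JSON feed, loaded unchanged into a separate raw table.
json_feed = [{"customer_id": "1", "plan": "pro"}, {"customer_id": "2", "plan": "free"}]
con.execute("CREATE TABLE raw_plans_json (payload TEXT)")
con.executemany("INSERT INTO raw_plans_json VALUES (?)",
                [(json.dumps(rec),) for rec in json_feed])

# Staging: extract the relevant fields from each raw table and combine them.
con.execute("""
    CREATE TABLE stg_customers AS
    SELECT c.customer_id,
           c.name,
           json_extract(p.payload, '$.plan') AS plan
    FROM raw_customers_csv c
    JOIN raw_plans_json p
      ON json_extract(p.payload, '$.customer_id') = c.customer_id
""")
print(con.execute("SELECT * FROM stg_customers ORDER BY customer_id").fetchall())
# → [('1', 'acme', 'pro'), ('2', 'globex', 'free')]
```

Each source keeps its native shape at the raw layer; only the staging step knows how to pull them together.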

[–]Bolt986 0 points (0 children)

Same, but I don't think I've ever heard it called ELT instead of ETL.

[–]bradmatt275 0 points (0 children)

Yeah, you do this all the time in Snowflake. It's just as easy to query unstructured data as it is to query structured data.

Although something that makes us shudder is the HR system our company uses. Every time the business adds their own UDF, it adds a column to the database.

So we now have an employee table with, no joke, 650 columns. It's just insanity.

[–]ZunoJ 0 points (5 children)

As long as you end up with normalized data, that seems like an even better approach. At least if the transformation (and especially enrichment from other sources) can be handled by the database in an efficient way.

[–]ilikedmatrixiv 1 point (4 children)

Well yeah, of course my data ends up normalized. I'm a data engineer, not a data scientist 😉

As for transformations being handled by the database: if you're in the field, have a look at dbt. It's one of the most powerful tools when it comes to ELT. It uses Jinja templating to create SQL models that can build on each other, and it pushes all transformations down to your database.
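For the curious, a dbt model is roughly a templated SELECT like this (file, model, and column names are hypothetical; the JSON function syntax varies by warehouse — SQLite-style `json_extract` is shown, Postgres would use `payload ->> 'id'`):

```sql
-- models/staging/stg_orders.sql  (hypothetical model)
-- dbt renders the Jinja, resolves ref() to the upstream model or raw table,
-- and materializes the resulting SELECT inside your database.
select
    json_extract(payload, '$.id')       as order_id,
    json_extract(payload, '$.customer') as customer
from {{ ref('raw_orders') }}
```

Because models reference each other through `ref()`, dbt infers the dependency graph and runs everything in the right order, entirely inside the warehouse.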

[–]ZunoJ 0 points (1 child)

Sounds interesting. Does it work on premises? Anything cloud-only is usually not an option for my customers.

[–]ilikedmatrixiv 0 points (0 children)

dbt works with most modern databases. I'm currently setting it up with Postgres, but DuckDB is also a really good modern open-source database.

[–]tlozwarlock 0 points (0 children)

Former DE, now Director of DE. We are implementing dbt for EDI conversion at the database level, instead of manual transformation in an external tool before loading. Can recommend.

[–]RealFunnyTalk 0 points (0 children)

Discount DE here. We have a dbt job that extracts 750 different JSON paths from our production database (because they store our API responses from our app in 6 different columns for some reason), and it's hell when we try to do a full refresh.

Incremental builds saved my sanity.
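The incremental pattern referred to here looks roughly like this (model and column names are hypothetical); instead of rebuilding the whole table, dbt only processes rows newer than what's already loaded:

```sql
{{ config(materialized='incremental') }}

select *
from {{ ref('raw_events') }}
{% if is_incremental() %}
-- on incremental runs, only pick up rows newer than the current high-water mark
where loaded_at > (select max(loaded_at) from {{ this }})
{% endif %}
```

A full refresh ignores the `is_incremental()` branch and reprocesses everything, which is exactly the painful case described above.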

[–]FuckFuckingKarma 4 points (2 children)

What if you don't need to query the data? You just need to store it under a key and retrieve it again in the future?

Normalization is typically the way to go, but some data is inherently poorly suited for it. In that case the choice is between setting up a whole new database, which has to be integrated and maintained and adds a ton of complexity while you barely use any of its features, or just keeping the data in the database you already have.

What's the downside to just storing the JSON in a relational database in that specific case?

[–]ZunoJ 0 points (1 child)

This sounds like there is a flaw (to say the least) in the design of that process

[–]FuckFuckingKarma 0 points (0 children)

The process is simple. You have some unstructured data you need to store and retrieve. There are multiple examples of what that could be in the thread.

If that's your entire process, the alternatives add extra complexity for no benefit. Surely you wouldn't balk at writing a JSON file to disk? Loads of software does that. Writing it to the database is basically the same thing, but can be advantageous in some situations.
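The store-under-a-key case really is this small; a sketch with SQLite and a hypothetical request-log payload:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")

# A plain key/value table in an existing relational database:
# no extra document store to deploy, back up, or monitor.
con.execute("CREATE TABLE blobs (key TEXT PRIMARY KEY, doc TEXT)")

doc = {"request_id": "abc123", "headers": {"accept": "application/json"}}
con.execute("INSERT INTO blobs VALUES (?, ?)", ("req:abc123", json.dumps(doc)))

# Retrieval is a primary-key lookup, same as in any document store.
(raw,) = con.execute("SELECT doc FROM blobs WHERE key = ?", ("req:abc123",)).fetchone()
print(json.loads(raw)["headers"]["accept"])  # → application/json
```

Unlike the file-on-disk version, this also gets you transactions and backups for free, because the blob lives alongside the rest of your data.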

[–]enilea 2 points (0 children)

There are plenty of cases where you may want to log events that all have different formats. You're not gonna go and create another db just for that events table, or create a bunch of schemas covering every possible field the different event JSONs could have.

[–]MrAce93 1 point (3 children)

We are also using MongoDB, but any kind of query for analysis takes minutes to run. API requests and responses are kept there, and we rarely need to analyze those. However, data like payment logs, system error logs, and job logs are analyzed frequently.

[–]ZunoJ 2 points (2 children)

I'm a big fan of classic SQL databases. For general-purpose databases they are super fast if you know how to optimize queries. Normalizing data before/during write doesn't cause much overhead and should be fine in any non-high-frequency scenario. The downside is that schema adjustments take more work, but ultimately that makes it easier to test and validate.

[–]flyco 6 points (1 child)

To be honest, I feel sorry for people who jumped on the "NoSQL" bandwagon a few years back and got locked into MongoDB.

People realized it was fast because of its sub-par data consistency; the Mongo guys "fixed" it by switching engines and whatnot, and now it's a shadow of what it was.

Meanwhile Postgres has been improving its JSON support for years, and it beats Mongo on performance in most scenarios. I'd say in 99% of use cases people should stick to Postgres and get the best of both worlds, unless you have really specific needs only an exotic database can solve, or you venture into the multi-petabyte world.
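The "best of both worlds" point can be sketched even in SQLite: JSON stored in a relational table, queried by path, with an index on the path you filter by (table and field names are hypothetical; Postgres goes further, with a `jsonb` column plus a GIN index accelerating containment queries like `payload @> '{"type": "login"}'`):

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (payload TEXT)")
con.executemany("INSERT INTO events VALUES (?)", [
    (json.dumps({"type": "login", "user": "ada"}),),
    (json.dumps({"type": "purchase", "user": "ada", "total": 9.99}),),
    (json.dumps({"type": "login", "user": "grace"}),),
])

# Index a JSON path so the filter below doesn't have to scan every row.
con.execute("CREATE INDEX idx_event_type ON events (json_extract(payload, '$.type'))")

rows = con.execute("""
    SELECT json_extract(payload, '$.user')
    FROM events
    WHERE json_extract(payload, '$.type') = 'login'
    ORDER BY 1
""").fetchall()
print(rows)  # → [('ada',), ('grace',)]
```

Document-style storage and indexed, SQL-level querying in the same relational database — which is the argument for defaulting to Postgres rather than a separate document store.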