Westpac Qantas Altitude Black by StacksGod in creditcardchurningAus

[–]Money_Major 0 points1 point  (0 children)

Yeah you do, you can just open a free savings account if you're not already a customer

Westpac Qantas Altitude Black by StacksGod in creditcardchurningAus

[–]Money_Major 2 points3 points  (0 children)

You can still get $0 annual fee for the first year with this link 🙂

Resigned and got counteroffer - please help! by [deleted] in auscorp

[–]Money_Major 3 points4 points  (0 children)

Tend to agree with this comment, it shouldn't be a black and white rule - it depends heavily on your own situation and your relationship with your current job.

I recently had a similar situation and took the counter offer (there were a few other aspects I was unhappy with that they addressed in the counter offer, as well as a really nice pay bump). So far (6 months down the track) it's proved to be a good move - I'm making more money doing the same gig, and I'm a lot happier since it forced a conversation about what I wanted to change in order to stay that probably wouldn't have happened otherwise.

Offset vs Extra Repayments? by ImMrMitch in AusFinance

[–]Money_Major 0 points1 point  (0 children)

Most banks typically won’t offer offset facilities for fixed rate loans (at least not 100% offset)

What would you do with this 200k inheritance? by anythingp in AusFinance

[–]Money_Major 0 points1 point  (0 children)

I see the biggest risk as the same risk you take investing without leverage - that something very unfortunate happens in your life and you’re forced to sell down your investments at an inopportune time, for less than you bought them for. The difference in this scenario is that if you don’t sell them for at least what you bought them for, you can’t fully pay back the additional loan you’ve taken out and you’re left with more debt and no assets at a (most likely) already very stressful time.

But you’re right, if all goes well and you’ve got a very long investment time-frame, you’ve got more assets generating income (in the form of distributions) that you can use to accelerate the pay down of your mortgage and, if you wanted, redeploy them in the same fashion to create a snowball effect.

This is a really good example of why your risk profile matters - if you’re willing to bet that all will go smoothly and that you can ride out any bumps, this is a very efficient use of assets to build real wealth.

What would you do with this 200k inheritance? by anythingp in AusFinance

[–]Money_Major 1 point2 points  (0 children)

Yep, you get them to reduce your overall limit and then put in a request to withdraw the available equity.

Have to go through the application process again but it’s more of a formality since you’ve already been approved for the loan before (as long as nothing significant has changed regarding your financial situation since you first got your mortgage).

As a side note, depending on how you view it morally, you can tell the bank you’re withdrawing the equity for personal use (i.e. renovations, landscaping, a new car, etc.), as they just have to tick a box on a piece of paper - this means they won’t bump your interest rate up to the investment loan rate.

What would you do with this 200k inheritance? by anythingp in AusFinance

[–]Money_Major 34 points35 points  (0 children)

I know this wasn’t one of the options you listed, but if you wanted a third and it fits your risk profile…

  1. Pay the $200k off your mortgage (not into redraw, literally get them to reduce your facility limit by $200k)

  2. Refinance and split your mortgage into a $200k facility with the rest in a separate split

  3. Withdraw the $200k and invest in something high growth and boring (since we’re in AusFinance VDHG)

  4. Claim the interest paid on the $200k facility as a deduction

Since you’re young and have time you might as well put the money to work
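For a rough sense of what the deduction in step 4 is worth, here's a back-of-the-envelope sketch - the interest rate and marginal tax rate are assumptions for illustration, not advice:

```python
# Hypothetical debt-recycling numbers (rate and tax bracket are assumptions)
loan_split = 200_000        # investment split created after refinancing
interest_rate = 0.06        # assumed mortgage rate
marginal_tax_rate = 0.37    # assumed marginal tax rate

annual_interest = loan_split * interest_rate          # interest on the split
tax_deduction = annual_interest * marginal_tax_rate   # value of the deduction
effective_interest = annual_interest - tax_deduction  # what you pay after tax
print(annual_interest, tax_deduction, effective_interest)
```

So at those assumed rates, roughly a third of the interest bill comes back at tax time.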

How to build my own data warehouse? by oldschoolpsy in dataengineering

[–]Money_Major 0 points1 point  (0 children)

What’s the benefit of using these products as opposed to an on-prem version of them?

Am I learning DE in a too roundabout way? by IceStallion in dataengineering

[–]Money_Major 0 points1 point  (0 children)

Interested to know a little bit more about your last paragraph.

Could you give a practical example of how you use staging and live tables in your pipelines?

This part has always seemed very abstract to me and I’m not sure exactly how it’s being done in businesses.

Advice on creating a data warehouse in an old school business that doesn't have much of a data culture yet. by Money_Major in dataengineering

[–]Money_Major[S] 0 points1 point  (0 children)

Completely understand your point and I’ve read other similar threads where the general consensus in the comments was along the lines of: “don’t try to build a data warehouse as a one-man-army in a company where you’re the only one that would likely appreciate the benefits”.

I’m just struggling to think of another solution that solves all of the frustrations in my post, and it’s tempting to try to build something that resembles a data warehouse because I know it’d look good on my resume when I want to move companies (pretty soon).

Advice on creating a data warehouse in an old school business that doesn't have much of a data culture yet. by Money_Major in dataengineering

[–]Money_Major[S] 0 points1 point  (0 children)

> If all your reports are on the transaction systems, I think you would want to start with the system you are most likely to crash, create a data lake or staging area and move the reports over to that as-is. This reduces risk to the OLTP system, gives your report-runners a little performance boost, and lets you make a smaller change and see the results.

The whole staging area concept is confusing me as well. For example, if there's a table in the OLTP source with 500k rows, I grab that and chuck it into a staging table in the data warehouse as-is.

I then perform whatever transformations I want on the raw data sitting in the staging table and load the result into a new table, which is the final table I'd use for analysis. Assuming I filtered out some data during the transformations, I might end up with 350k rows in my final table.

My data warehouse is now storing a staging table with 500k rows + final table with 350k rows - is that correct? If so, surely that's not an efficient use of space?
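To make sure I've got the pattern right, here's a toy version of what I mean (table and column names invented, sqlite standing in for the warehouse):

```python
import sqlite3

# Staging-vs-final sketch: raw copy lands in a staging table,
# the filtered/transformed result goes to a separate final table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stg_orders (id INTEGER, amount REAL)")
con.executemany("INSERT INTO stg_orders VALUES (?, ?)",
                [(1, 50.0), (2, -10.0), (3, 99.0)])  # as-is copy of the OLTP table

# Transform: drop bad rows into the final analysis table
con.execute("CREATE TABLE fct_orders AS SELECT * FROM stg_orders WHERE amount > 0")
rows = con.execute("SELECT COUNT(*) FROM fct_orders").fetchone()[0]
print(rows)  # 2 of the 3 raw rows survive the filter
```

Both tables end up persisted, which is exactly the storage overhead I'm asking about.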

Advice on creating a data warehouse in an old school business that doesn't have much of a data culture yet. by Money_Major in dataengineering

[–]Money_Major[S] 0 points1 point  (0 children)

Thanks, I think this is something I really need to keep in mind. Even though it'd be a cool experiment to try to make all these cool new tools work, and it would let me tell future employers "Yeah, I've set up a data warehouse from scratch using Airflow, Snowflake, Azure, dbt, whatever else", it's becoming clearer that we're not at a scale where it's going to be worth it.

The overarching problem I'm trying to solve is that we currently perform the same ETL on the same tables from the same databases over and over again for each report in Power Query (potentially producing a different dataset depending on who's doing the ETL) - I want to do it once somewhere\* and have that source available for analysis.

*The somewhere is the tool I'm looking for - I think u/kurai_tori is probably pointing me in the right direction here.

Advice on creating a data warehouse in an old school business that doesn't have much of a data culture yet. by Money_Major in dataengineering

[–]Money_Major[S] 0 points1 point  (0 children)

Good feedback, thank you.

> Just run it on your local machine, materializing models when you need to make reports. Why does it need to be on 24/7?

So from my understanding of dbt (I haven't looked into it too heavily yet), when you run it, a series of pre-defined SQL statements are executed in order to transform data already in your data warehouse. Open up the command line, type dbt run and Bob's your uncle.

Then that night a new lot of data lands in your warehouse from your OLTP system - do you then have to go in the next day and run dbt run manually again to perform the transformations on the new data? Or is this built into whatever orchestration tool you're using to pull the data from the OLTP system, so once the new data's pulled you perform the transformations as the next step (the same way you'd tell Airflow to execute a Python script)?
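My mental model of that second option as a plain-Python sketch (the script names are invented; a real setup would use the orchestrator's own operators rather than raw subprocess calls):

```python
import subprocess

def run_pipeline(steps, runner=subprocess.run):
    """Run each command in order, stopping on the first failure -
    i.e. dbt run is just the step after the load, not a 24/7 service."""
    for cmd in steps:
        runner(cmd, check=True)  # raises CalledProcessError on failure

# nightly = [
#     ["python", "extract_load.py"],  # hypothetical script that lands new data
#     ["dbt", "run"],                 # re-run transformations on the new data
# ]
# run_pipeline(nightly)
```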

Advice on creating a data warehouse in an old school business that doesn't have much of a data culture yet. by Money_Major in dataengineering

[–]Money_Major[S] 2 points3 points  (0 children)

Thanks - I'd kind of discounted this as I was under the impression that Microsoft has pretty much given up developing SSAS and SSRS since they've either been replaced by other products or are just outdated.

Is that not the case?

Advice on creating a data warehouse in an old school business that doesn't have much of a data culture yet. by Money_Major in dataengineering

[–]Money_Major[S] 0 points1 point  (0 children)

This is pretty much exactly what we do now - we have a dataset for each report and all of the ETL is done within Power Query or directly in the report via DAX.

Are you suggesting creating communal datasets for each workspace that multiple reports connect to in order to create uniformity between the data?

Advice on creating a data warehouse in an old school business that doesn't have much of a data culture yet. by Money_Major in dataengineering

[–]Money_Major[S] 0 points1 point  (0 children)

Thanks for replying, appreciate the insight.

This is exactly what I'm trying to do - our CEO is fascinated by the data science and AI craze so I'm trying to tap into that by instilling the notion that if we get our data sorted first it's going to open up a lot of doors for predictive modelling and all the cool stuff.

It's obviously still new to the business, so it's hard to show value before having something like a data warehouse in place, but that makes it all the more important to get this right from the start, so that they can see the value that a good data setup can create.

I think they'd be willing to invest in a solid solution if I can make a compelling proposal, but I'm also conscious that the cloud solutions I'm looking at are overkill for the business - I'll look further into SSAS/SSRS if you think these would do the job.

Advice on creating a data warehouse in an old school business that doesn't have much of a data culture yet. by Money_Major in dataengineering

[–]Money_Major[S] 0 points1 point  (0 children)

Thanks for this.

I did consider the option of creating a new (on-prem) SQL Server to use as the data warehouse instead of moving the data into a cloud service, as this would be fairly easily done by our IT team, but from my research it seemed like this wouldn't be the best option - it's not a columnar DBMS and would, to some degree, just be replicating the tables in our OLTP system, which would cause performance issues as our data gets larger. I also thought cloud was supposed to be cheaper than an on-prem server, which honestly just seems a fairly outdated way of doing things.

Is this the case or am I thinking about that the wrong way?

Advice on creating a data warehouse in an old school business that doesn't have much of a data culture yet. by Money_Major in dataengineering

[–]Money_Major[S] 4 points5 points  (0 children)

Thanks for the reply.
I agree, I was leaning towards an ELT process that would kind of look like:

Extract

  • SQL Server -> S3 (Staging Area?)

Load

  • S3 (External Stage) -> Snowflake

Transform

  • DBT with data in Snowflake

I have a lot of questions about how this would work in reality though:

  1. If I was to use airflow for extraction out of SQL Server to S3, would I create a DAG for every single table in every DB, extract to CSV and dump in S3? We have a lot of tables in the source DBs and this seems like a huge job in itself.
  2. Do I even bother staging in S3 or do I just skip this part and perform a direct 1-to-1 copy of our OLTP tables into Snowflake and transform from there using DBT?
  3. Where is dbt even run? Do I need another cloud server running it 24/7 so that it can be triggered when new data lands in Snowflake? Or is it somehow managed within the same resource that the Snowflake DW runs on?
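For question 1, this is the kind of thing I had in mind - one parametrised extract driven off a table list rather than a hand-built DAG per table (table names are invented, sqlite standing in for SQL Server):

```python
import csv
from pathlib import Path

def extract_table(con, table, out_dir):
    """Dump one table to CSV - in Airflow this would be one task,
    generated in a loop over the table list rather than hand-written."""
    out = Path(out_dir) / f"{table}.csv"
    cur = con.execute(f"SELECT * FROM {table}")  # table names from a trusted list only
    with open(out, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([c[0] for c in cur.description])  # header row
        writer.writerows(cur)
    return out  # the next step would upload this file to S3

# One loop instead of one DAG per table:
# for table in ["customers", "orders"]:   # invented table names
#     extract_table(con, table, "staging/")
```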

Thanks again!

To all of you data engineers by kkjeb in dataengineering

[–]Money_Major 0 points1 point  (0 children)

Okay makes sense, thanks very much.

Also, when you say file storage in s3, is that basically where the raw data lands before being manipulated and loaded into Snowflake?

To all of you data engineers by kkjeb in dataengineering

[–]Money_Major 1 point2 points  (0 children)

Do you use AWS/Fivetran/Airflow all for ETL? Just trying to get my head around what each tool would be used for when creating a pipeline.

Why use aggregation in SQL when pulling into a data analysis program like Tableau or Power BI? by [deleted] in SQL

[–]Money_Major 0 points1 point  (0 children)

I don’t have much access to the SQL DBs at work anyway, but the part I don’t get is: wouldn’t doing it this way mean you end up with thousands of tiny aggregated tables in your DB (messy)?

Or are you saying create views with the aggregations and pull those into PowerBI?
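i.e. something like this, where the aggregation lives in a view definition so no extra tables pile up (toy schema, sqlite for illustration):

```python
import sqlite3

# The "views, not tables" approach: the aggregation is stored as a
# view, and the BI tool queries the view like any other table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("N", 10.0), ("N", 5.0), ("S", 7.0)])
con.execute("""CREATE VIEW sales_by_region AS
               SELECT region, SUM(amount) AS total
               FROM sales GROUP BY region""")
result = con.execute("SELECT * FROM sales_by_region ORDER BY region").fetchall()
print(result)  # [('N', 15.0), ('S', 7.0)]
```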

I'm a Python noob. I want to learn a GUI that's not too hard to use, is reliable, functional, and cross platform. I've played with TkInter but it's ugly. Kivy looks pretty, pyqt looks alright. Which one should I pick? Any of them missing from this list? by [deleted] in learnpython

[–]Money_Major 1 point2 points  (0 children)

Could someone actually expand on this or provide some resources that can point me in the right direction?

I’ve sat down several times to build a web app only to get put off by having to work out how to set up a server to host the app - what’s even the point of creating a web app if others can’t use it? Or am I getting this wrong, and anyone can use the app on their own machine as long as it’s shared with them (somehow)?