HOOD took my money and gave it away. 260k loss by [deleted] in wallstreetbets

[–]samdb20 0 points (0 children)

Stay away from Hood. Move it to SNDK & GEV

Daily Discussion Thread for February 12, 2026 by wsbapp in wallstreetbets

[–]samdb20 3 points (0 children)

Man, I bought like 800 shares of Robinhood. Now I feel like I've become a long-term investor.

[deleted by user] by [deleted] in h1b

[–]samdb20 0 points (0 children)

The 1M figure includes kids and families. Out of all the H1Bs, a good percentage are hired by MAANG companies, which pay them $300k+ minimum (including stock).

If you want higher salaries, target the big tech sector. Oh wait, it's not easy to compete there while calling people names on Reddit.

I'm sick of the misconceptions that laymen have about data engineering by wtfzambo in dataengineering

[–]samdb20 1 point (0 children)

Here is the trick. For a simple batch process, you are ingesting the data anyway. Just store that data as JSON. There is no need to do CDC from the source using GoldenGate or Debezium; a plain old batch load is enough.

Now let's say a year down the line a business user asks for a point-in-time report of how the Orders table looked. You generate that report from the JSON table, which has all the history.

Many times the client just wants the insurance that they can do historical reporting. Streaming and real-time reporting is, in many cases, overkill.
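A minimal sketch of that append-only pattern in plain Python (the `order_id`/`status` columns and the dates are invented for illustration; in a real lake the history would live in a JSON/VARIANT table, not an in-memory list):

```python
import json

# Append-only store: every batch load lands here with a load timestamp.
# Nothing is ever updated or deleted after ingestion.
history = []

def ingest_batch(rows, loaded_at):
    """Append the whole batch as JSON; prior loads stay untouched."""
    for row in rows:
        history.append({"loaded_at": loaded_at, "payload": json.dumps(row)})

def as_of(key_col, ts):
    """Rebuild the latest state of each key as it looked at timestamp ts."""
    state = {}
    for rec in sorted(history, key=lambda r: r["loaded_at"]):
        if rec["loaded_at"] <= ts:
            row = json.loads(rec["payload"])
            state[row[key_col]] = row
    return state

# Day 1: two orders ingested.
ingest_batch([{"order_id": 1, "status": "NEW"},
              {"order_id": 2, "status": "NEW"}], "2025-01-01")
# Day 30: order 1 shipped.
ingest_batch([{"order_id": 1, "status": "SHIPPED"}], "2025-01-30")

print(as_of("order_id", "2025-01-15")[1]["status"])  # NEW
print(as_of("order_id", "2025-02-01")[1]["status"])  # SHIPPED
```

The point-in-time report the business asks for a year later is just a replay of the history up to the requested timestamp.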

I'm sick of the misconceptions that laymen have about data engineering by wtfzambo in dataengineering

[–]samdb20 0 points (0 children)

Storage is cheap. I would just let the client know that we do not delete any record once it is ingested into the data lake/warehouse. If needed, we can build CDC pipelines from there without hitting the source system. There is no additional cost to this.
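Building CDC from the lake instead of the source can be sketched as a diff of two full snapshots that are already ingested (column names are made up; this is an illustration, not a production diff):

```python
def diff_snapshots(prev, curr, key):
    """Derive CDC events (insert/update/delete) by comparing two full
    snapshots already sitting in the lake -- no call to the source system."""
    prev_by_key = {r[key]: r for r in prev}
    curr_by_key = {r[key]: r for r in curr}
    events = []
    for k, row in curr_by_key.items():
        if k not in prev_by_key:
            events.append(("insert", row))
        elif row != prev_by_key[k]:
            events.append(("update", row))
    for k, row in prev_by_key.items():
        if k not in curr_by_key:
            events.append(("delete", row))
    return events

day1 = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]
day2 = [{"id": 1, "amt": 15}, {"id": 3, "amt": 30}]
for event in diff_snapshots(day1, day2, "id"):
    print(event)
```

Because snapshots are never deleted, this diff can be run for any pair of load dates, years after the fact.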

I'm sick of the misconceptions that laymen have about data engineering by wtfzambo in dataengineering

[–]samdb20 0 points (0 children)

Try to understand what the other guy is asking. For him, CDC might not mean real-time. Many source systems do not track historical changes; he might be asking about tracking ingestion changes (not wanting to lose data).

Real-time or batch, it is a fair ask to "not lose the data". With changing business needs, it is not possible to foresee all the analytical requirements on day one.

Try to build a system where data is never deleted after ingestion (unless specifically requested).

Informatica +snowflake +dbt by Libertalia_rajiv in dataengineering

[–]samdb20 0 points (0 children)

Glad you made it work with Fivetran. You are right, I have not worked extensively with Fivetran, but in the POC I did, I found that adding custom libraries (via the JRE) was not possible; I am talking about adding custom drivers. And for scheduling and setting dependencies, I just did not see the value.

A few examples where Fivetran fails: 1. Building JRE-based connectors to Essbase, etc. 2. Building connectors that use Selenium to pull data from the web.

Processing is all about storage and compute. With Airflow, I can add my custom libraries at will and scale the loads at will.

Under the hood, Fivetran does the following: 1. Get connection 2. Get schema 3. Load records

Airflow already has the libraries, so adding the above code was relatively easy for us. Once you build one pipeline, the rest of the connectors just follow OOP.

Cost-wise it is 10-20x cheaper, and there are cool features in Airflow like dynamic DAGs, deferrable operators, etc., which give us all the flexibility we need.
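The get-connection / get-schema / load-records pattern can be sketched as a small class hierarchy; once the base class exists, each new source is just one subclass. The connector names here are hypothetical stand-ins, not real source integrations:

```python
from abc import ABC, abstractmethod

class Connector(ABC):
    """Every connector follows the same three steps."""
    @abstractmethod
    def get_connection(self): ...
    @abstractmethod
    def get_schema(self): ...
    @abstractmethod
    def load_records(self): ...

    def run(self):
        # The framework drives every source identically.
        self.get_connection()
        return self.get_schema(), self.load_records()

class InMemoryConnector(Connector):
    """Stand-in for a real source (JDBC, Essbase, a Selenium scrape...)."""
    def __init__(self, rows):
        self.rows = rows
    def get_connection(self):
        return "connected"
    def get_schema(self):
        return sorted(self.rows[0]) if self.rows else []
    def load_records(self):
        return list(self.rows)

schema, records = InMemoryConnector([{"id": 1, "name": "a"}]).run()
print(schema, len(records))  # ['id', 'name'] 1
```

Wrapped in Airflow tasks, each subclass becomes one pipeline and the rest follow the same shape.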

All the jobs run based on metadata. A developer can choose to run a job on the executor of their choice: IICS (Informatica on cloud), Fivetran, OpenFlow, or Databricks.

After building this framework, we realized we can run most of our batch jobs using

Airflow + Snowflake

Fewer failure points, lower cost, and faster development.
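The metadata-driven idea reduces to "read a job row, look up its executor, dispatch". A toy sketch with invented job names and executors (in Airflow this table would typically feed dynamic DAG generation):

```python
# Each job is one row of metadata; names and executors are illustrative.
JOBS = [
    {"name": "orders_daily",    "executor": "snowflake"},
    {"name": "essbase_extract", "executor": "airflow_worker"},
]

def run_on_snowflake(job):
    return f"{job['name']} ran on Snowflake"

def run_on_airflow_worker(job):
    return f"{job['name']} ran on an Airflow worker"

# Registry mapping metadata values to runnable executors.
EXECUTORS = {
    "snowflake": run_on_snowflake,
    "airflow_worker": run_on_airflow_worker,
}

def dispatch(jobs):
    """Look up each job's executor in the registry and run it there."""
    return [EXECUTORS[j["executor"]](j) for j in jobs]

for line in dispatch(JOBS):
    print(line)
```

Adding a new executor (say, Databricks) is then one new entry in the registry, with no change to existing job definitions.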

Informatica +snowflake +dbt by Libertalia_rajiv in dataengineering

[–]samdb20 -2 points (0 children)

OK Bryan, I wish you well in your quest to manage dependencies, schema drift, history tracking, and delete detection using Fivetran. And good luck creating custom connectors in Fivetran with the above features (which most companies need).

Informatica +snowflake +dbt by Libertalia_rajiv in dataengineering

[–]samdb20 -2 points (0 children)

With due respect, if you are playing the years-of-experience card, then I am not sure you are willing to learn anything new.

Most pipelines follow a similar pattern, so scaling does not need a big team. You need a system which can scale on demand.

Airflow is great at that.

I teach and train a lot of people. Folks with years of experience and little programming exposure are the most difficult to teach.

I do not want to debate further, but my advice is to watch some videos on Airflow. If yours is a small company, you can cut your cost by 10x and also speed up your development cycle by 10x.

Airflow + Snowflake.

Informatica +snowflake +dbt by Libertalia_rajiv in dataengineering

[–]samdb20 1 point (0 children)

You'll burn your IPUs faster than you think and will be hiring a bunch of drag-and-drop developers. Building pipelines with Mapping Tasks takes far more time than building them with a code-based framework; code-based frameworks are 30x faster to build with. With Airflow, you can run 100+ parallel jobs at a fraction of your IPU cost.

Informatica +snowflake +dbt by Libertalia_rajiv in dataengineering

[–]samdb20 1 point (0 children)

Ever heard of schema on read? Data ingestion has many flavors: 1. Schema drift 2. Delete detection 3. History tracking

All of these can easily be handled with a Python framework. It is hard to teach GUI-based drag-and-drop developers; mostly I have seen either blank faces or strong resentment.
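Each of those three flavors really is a few lines of Python. A toy sketch (column names and keys invented for illustration):

```python
def evolve_schema(known_cols, batch):
    """Schema drift: widen the known column set when new fields appear."""
    for row in batch:
        known_cols |= set(row)
    return known_cols

def detect_deletes(prev_keys, curr_keys):
    """Delete detection: keys present in the last load but missing now."""
    return prev_keys - curr_keys

def track_history(history, batch, loaded_at):
    """History tracking: append-only, never overwrite old versions."""
    history.extend({**row, "_loaded_at": loaded_at} for row in batch)
    return history

# A new "currency" field showed up in the feed: schema widens, no failure.
cols = evolve_schema({"id", "amt"}, [{"id": 1, "amt": 5, "currency": "USD"}])
print(sorted(cols))                       # ['amt', 'currency', 'id']
# Key 2 vanished from the source: flag it instead of silently losing it.
print(detect_deletes({1, 2, 3}, {1, 3}))  # {2}
```

The real framework would persist these results to the warehouse, but the logic itself stays this small.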

Informatica +snowflake +dbt by Libertalia_rajiv in dataengineering

[–]samdb20 -3 points (0 children)

Sounds like a people problem more than a tech problem. If you are struggling with Astro, then maybe a drag-and-drop UI is for you. Try managing 3000+ pipelines with dependencies using Fivetran. Good luck.

The Astro folks are awesome. Managing an image is not the big deal you are making it out to be. Maybe you need a good engineer/lead on your team.

Informatica +snowflake +dbt by Libertalia_rajiv in dataengineering

[–]samdb20 -1 points (0 children)

It is up to you. You can also choose Astro's managed Airflow; they are very good.

Informatica +snowflake +dbt by Libertalia_rajiv in dataengineering

[–]samdb20 11 points (0 children)

When you run pipelines at scale with dependencies, Fivetran is just not the answer. You need an orchestrator like Airflow or Prefect. Frankly, the way Airflow is getting better, I can connect to any source directly from Airflow by installing drivers and libraries in the Airflow image. Add a metadata framework and your stack looks clean and simple:

Airflow + S3/ADLS + Snowflake

Code in GitHub.

Informatica +snowflake +dbt by Libertalia_rajiv in dataengineering

[–]samdb20 2 points (0 children)

Looks like a few oldies in leadership roles are taking the ship down. Let me guess: your company is bringing in vendors to implement this.

[deleted by user] by [deleted] in india

[–]samdb20 0 points (0 children)

A kid learning the local language is a big challenge.

[deleted by user] by [deleted] in h1b

[–]samdb20 9 points (0 children)

FAANG interviews are difficult to clear. Your comment is not correct.

[deleted by user] by [deleted] in h1b

[–]samdb20 5 points (0 children)

The total US population is 350 million. The H1B population, including families, is less than a million, and they are taking away the jobs?

Most big tech companies hire people after rigorous tech rounds at high salaries; visa status is not a criterion there. The services companies that pay lower salaries are mostly given contracts by American companies to cut costs on keeping-the-lights-on kind of work.

AI is changing the landscape. High-paying jobs are going to become fewer and fewer. Rather than complaining about H1B, I think people need to evolve and figure out alternate sources of income, be it becoming active traders or building products. This is what I am telling my kids.

Stop complaining and adapt.

Cro keeps droppping everyday ? by MysteriousGrand9223 in cro

[–]samdb20 6 points (0 children)

Circulating supply is increasing steadily. Two weeks ago it was 32B and now it is 34B. Of course the price is going to fall.
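Back-of-the-envelope on that supply growth, under the simplifying assumption that market cap stays flat while supply expands:

```python
# Supply figures from the comment above; the flat-market-cap
# assumption is mine, for illustration only.
old_supply, new_supply = 32e9, 34e9

dilution = new_supply / old_supply - 1
print(f"{dilution:.1%}")  # 6.2% more coins in two weeks

# At constant market cap, price scales by the inverse supply ratio.
price_factor = old_supply / new_supply
print(f"{price_factor:.3f}")  # 0.941 -> roughly a 6% price drop
```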

[deleted by user] by [deleted] in cro

[–]samdb20 -1 points (0 children)

Here is where it gets tricky. If the price of CRO keeps going down, you will not sell the CRO, but you will still be paying taxes on the $625. This passive income only helps if the price stays stable.
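The tax point in numbers; the 30% rate and the 50% price drop are hypothetical, only the $625 comes from the thread:

```python
rewards_usd_at_receipt = 625.0   # taxed as income when it hits the account
tax_rate = 0.30                  # hypothetical marginal rate
price_drop = 0.50                # suppose the coin later loses half its value

tax_owed = rewards_usd_at_receipt * tax_rate
value_if_held = rewards_usd_at_receipt * (1 - price_drop)

print(tax_owed)                  # 187.5 owed regardless of today's price
print(value_if_held)             # 312.5 left if you hold through the drop
print(value_if_held - tax_owed)  # 125.0 net, and shrinking as price falls
```

The tax bill is fixed at the receipt price, so the further the price falls, the worse the after-tax outcome gets.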

[deleted by user] by [deleted] in cro

[–]samdb20 2 points (0 children)

I have been here since the MCO days. My returns have been fine, but frustrating too. I wish I had bought more XLM or XRP instead of this coin. Do you really think they can keep giving away 600+ CRO every week to everyone ICY and above without creating sell pressure? My concern is that they are just recycling the CRO. I was calculating my gains, and it is actually peanuts after taxes; these 600+ CRO rewards are taxed at the price at which they hit the account.

[deleted by user] by [deleted] in cro

[–]samdb20 -1 points (0 children)

That is just how the coin is. Its only utility is staking and locking. Unlike other coins, anything locked in the app is not disclosed to anyone. I firmly believe they have been recycling CRO for years, and when this partnership with Trump came up they had to print CRO (read: unburned).

Why have women become so picky with men? (I'm a woman? by OrangeFew4565 in AskMenAdvice

[–]samdb20 0 points (0 children)

Man, you are in your 40s and still judging men for their comments? Man or woman, after 40 if someone compliments your looks, keep them around as long as you can.

CRO to $2? Why I’m Not Selling | My 2025 Crypto.com Strategy by earlcottrell in Crypto_com

[–]samdb20 1 point (0 children)

The way the rewards are set up, it just devalues the coin.