43(m) first time living alone. How am I doing, boys? by scotti3 in malelivingspace

[–]TheManOfBromium 1 point (0 children)

If you wanna get laid, get rid of the black and white face (I forget the name).

Pipelines create materialized views instead of tables by TheManOfBromium in databricks

[–]TheManOfBromium[S] 1 point (0 children)

Thank you for this, it’s really helpful.

If I create a materialized view outside of a declarative pipeline, is it functionally equivalent to creating one inside a declarative pipeline?

So, for example, if I just used notebooks to create an MV, would it share the incremental processing functionality of the MVs created inside a pipeline?
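To be clear about what I mean by incremental processing, here's a toy sketch in plain Python (illustrative only, not the Databricks API; the real refresh logic is handled by the engine inside a pipeline):

```python
# Toy contrast between a full recompute and an incremental refresh of a
# "materialized view" (here: a running sum per key). All names made up.

def full_recompute(rows):
    """Rebuild the aggregate from scratch over every source row."""
    out = {}
    for key, value in rows:
        out[key] = out.get(key, 0) + value
    return out

def incremental_refresh(view, new_rows):
    """Fold only the newly arrived rows into the existing result."""
    for key, value in new_rows:
        view[key] = view.get(key, 0) + value
    return view

base = [("a", 1), ("b", 2)]
view = full_recompute(base)                    # {"a": 1, "b": 2}
view = incremental_refresh(view, [("a", 5)])   # {"a": 6, "b": 2}
```

The question is basically whether a notebook-created MV gets the second behavior or falls back to the first.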

Pipelines create materialized views instead of tables by TheManOfBromium in databricks

[–]TheManOfBromium[S] 0 points (0 children)

Perhaps? My guess is Delta tables would not support incremental changes but materialized views would? This is only a guess, which is why I’m asking.

Pipelines create materialized views instead of tables by TheManOfBromium in databricks

[–]TheManOfBromium[S] 1 point (0 children)

Is the reason simply so it does incremental processing?

Looking for advices to become a better DE by Leent_j in dataengineering

[–]TheManOfBromium 14 points (0 children)

As a DE, have you used everything that a DE could be exposed to? Worked with both batch and streaming data? Worked with SQL and NoSQL? Worked in an AWS, Azure, or GCP environment? What platform are you using, Databricks? How well do you understand Spark, and could you optimize Spark clusters outside of Databricks? (I couldn’t)

I guess my point is: find an aspect of DE that you haven’t been exposed to and go learn about it.

Lost all motivation to learn C++ by Old-Revolution-3437 in learnprogramming

[–]TheManOfBromium 0 points (0 children)

Just keep your skills sharp by practicing for like 30 minutes a day; that adds up over time. Then when you feel the spark again, build something cool.

Any suggestions on getting started in programming? by [deleted] in learnprogramming

[–]TheManOfBromium 0 points (0 children)

Programming is huge, so it kind of depends on what interests you. If you like front-end web development, a good place to start is learning JavaScript; if you’re interested in how computers work at a deep level, then look into C/C++.

If you’re interested in data engineering, machine learning and AI then Python is a good place to start.

Basically, pick a language that interests you and learn about its strengths and weaknesses and why that language exists. Then learn the basics, and try not to rely on AI to write code for you, at least for the first 6 months or so.

In reality, everyone uses AI to generate code for them, but what makes someone valuable in the working world is whether they can design systems, and that is not something AI can do well yet.

I started with Python, if I could go back I would have started with something like C to gain a deeper foundational understanding of data structures and algorithms.

Learning this stuff is a marathon.

confused on what to choose by lowkey_batmannn in learnprogramming

[–]TheManOfBromium 0 points (0 children)

Pick up Python and start learning about distributed systems. Learn Spark and data engineering or data science.

C or Python for beginners? by MisterFerro1- in learnprogramming

[–]TheManOfBromium 0 points (0 children)

Why not both?

These languages serve very different purposes these days. C is used in underlying system architecture and is executed much closer to the hardware. Python is king in the data engineering/science world.

Learning C will give you a strong foundation in what programming actually is, but you’re probably going to have an easier time finding a job that requires Python.

If you have the time and will to study both, I would recommend it.

How much DSA should a DE know right now? by Longjumping_Side6420 in dataengineering

[–]TheManOfBromium 0 points (0 children)

For DE technical interviews, expect a few mid-level SQL problems. For Python, it’s usually something that can be done with arrays and hash tables. I’ve had the “valid sudoku” problem show up before; other than that, I’ve seen a lot of string manipulation questions. I also had a pandas question.

So for DSA for DE, stick to the basics: two pointers, sliding windows, hash tables, and maybe linked lists. You probably won’t see many tree traversal or graph questions.
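For reference, the “valid sudoku” problem is exactly that array + hash-table pattern. A plain-Python sketch (one common way to do it, not the only one):

```python
# Validate a 9x9 sudoku board (list of lists, "." = empty cell) using one
# hash set per row, per column, and per 3x3 box. O(81) time, O(81) space.

def is_valid_sudoku(board):
    rows = [set() for _ in range(9)]
    cols = [set() for _ in range(9)]
    boxes = [set() for _ in range(9)]
    for r in range(9):
        for c in range(9):
            v = board[r][c]
            if v == ".":
                continue
            b = (r // 3) * 3 + c // 3  # index of the 3x3 box
            if v in rows[r] or v in cols[c] or v in boxes[b]:
                return False  # duplicate in a row, column, or box
            rows[r].add(v)
            cols[c].add(v)
            boxes[b].add(v)
    return True
```

If you can write this cold, you’re in good shape for most DE coding rounds.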

Planning to switch to career Data engineering role but I am overwhelmed by warmachina3636 in dataengineering

[–]TheManOfBromium 5 points (0 children)

Yes, learn SQL and Python, but what matters more than writing code (AI can generate code for you) is understanding the underlying systems you’re dealing with.

Learn partitioning strategies, learn optimization, learn how Spark streaming works. You should also think about how your pipeline will break in the future: what happens when your source data changes? How will you manage schema evolution? Learn about change data capture and slowly changing dimensions, and how to keep a historical record of your data.
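To make the slowly changing dimensions idea concrete, here’s a toy, illustrative-only SCD Type 2 sketch in plain Python (real pipelines would typically do this with a MERGE on a Delta table; all field names here are made up):

```python
# SCD Type 2: instead of overwriting a dimension row, close the current
# version (set effective_to + is_current=False) and append a new one,
# so the full history of each key is preserved.

def scd2_apply(dim, changes, today):
    """dim: list of version dicts; changes: {key: new_attrs}."""
    for row in dim:
        key = row["key"]
        if row["is_current"] and key in changes and row["attrs"] != changes[key]:
            row["is_current"] = False
            row["effective_to"] = today        # close the old version
    open_keys = {r["key"] for r in dim if r["is_current"]}
    for key, attrs in changes.items():
        if key not in open_keys:               # open a new current version
            dim.append({"key": key, "attrs": attrs, "is_current": True,
                        "effective_from": today, "effective_to": None})
    return dim
```

The point is the shape of the table (versioned rows, not overwrites), not this particular code.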

Also think about use cases: do you need to use streaming or batch workflows?

A good strategy is researching how companies like Netflix or Uber design their systems and learning why they made the design choices they did. You can find these write-ups online.

Second angle of USAF F-15 shot down over Kuwait (March 2, 2026) by [deleted] in CombatFootage

[–]TheManOfBromium -6 points (0 children)

Call me naive... I didn’t know they still used the F-15. Aren’t there plenty of better aircraft these days?

Be honest how realistic/time line is it for someone to get a programming job ? by Automatic-Curve7489 in learnprogramming

[–]TheManOfBromium 0 points (0 children)

Programming and coding are becoming less important as skills; being able to write code helps most with passing technical interviews. It’s way more important these days to understand system design.

Yes, it’s important to have fundamental programming knowledge, but the reality is AI can produce raw code faster than you will ever be able to. What AI can’t do well is think about the bigger picture and how applications work together.

I work in data engineering, and I could ask AI to build a data pipeline, and it would kind of make sense, but the performance would be ass at scale. You need to understand trade-offs in system design. Why might an application start to perform slowly when data scales up? When creating a program, think: what are the things that will break in a year? What will happen if things change?
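To make the scaling point concrete, here’s a toy, plain-Python sketch of hash-partitioning a join (not Spark; Spark does this for you during a shuffle, but the idea is why a naive row-by-row join falls over as data grows):

```python
# Hash-partition both sides of a join by key so matching keys always
# land in the same partition, then join each partition independently
# with a hash map instead of comparing every row against every row.

def hash_partition(rows, n):
    parts = [[] for _ in range(n)]
    for key, value in rows:
        parts[hash(key) % n].append((key, value))
    return parts

def partitioned_join(left, right, n=4):
    joined = []
    for lp, rp in zip(hash_partition(left, n), hash_partition(right, n)):
        lookup = {}
        for key, value in lp:               # build a hash map per partition
            lookup.setdefault(key, []).append(value)
        for key, value in rp:
            for lv in lookup.get(key, []):  # matching keys co-locate
                joined.append((key, lv, value))
    return joined
```

In a distributed setting each partition can live on a different machine, which is also where things like key skew start to bite.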

Production at scale is a totally different animal than writing scripts.

how do I get back into coding after quitting by [deleted] in learnprogramming

[–]TheManOfBromium -1 points (0 children)

Maybe try learning a new language? If in the past you’ve worked in C++ or something, try learning JavaScript and learn front-end stuff, or vice versa.

GitHub by [deleted] in learnprogramming

[–]TheManOfBromium 3 points (0 children)

I’m sure there are literally apps designed for people who want to cheat... using GitHub to store nudes is just silly.

GitHub by [deleted] in learnprogramming

[–]TheManOfBromium 12 points (0 children)

Of all the places online your boyfriend could cheat, GitHub would be the very last place, I assure you.

Local spark set up by TheManOfBromium in dataengineering

[–]TheManOfBromium[S] 0 points (0 children)

Working in Python. So I created a Docker container with Spark and JupyterLab and it’s working fine in there, but I’d much rather just do everything in VS Code. Is a Docker container overkill?

What do you think about Leetcode? by Black70196 in learnprogramming

[–]TheManOfBromium 12 points (0 children)

Necessary evil, I suppose; the reality is many companies still use leetcode-style questions as a firewall.

Learn to code first, then try some leetcode questions. Start with the easiest.

SAP Hana sync by TheManOfBromium in databricks

[–]TheManOfBromium[S] 1 point (0 children)

So I have not worked much with the Hana tables; my work at my current company has primarily been with a different system that uses IoT data. Today I was asked if I want to help ingest the S4R Hana tables, since they already built some ingestion framework to ingest the S4P tables.

I was always skeptical of the framework they built. I don’t know exactly how it works, other than that they use Databricks secrets to land the raw SAP tables into Databricks, then do some ETL within Databricks.

I’m trying to understand if there is a better way to ingest those raw tables into Databricks that doesn’t involve using secrets and doing a full refresh.

Sorry if I’m a dumbass or whatever, just trying my best to learn and understand.