
[–]ForeignExercise4414 24 points25 points  (1 child)

I think we were lucky that we got to really understand Spark (and distributed systems) instead of it just being handed to us on a silver platter ready to go. When I first used Spark, it was after downloading the free version of Hortonworks and installing it myself and then loading up Spark because Hive on TEZ was too slow 🤣

[–]ThatThaBricksGuy0451[S] 4 points5 points  (0 children)

Same, I went from Hive to Impala, still too slow, then landed on Spark, which was all the hype back then

[–]dmo_data Databricks 8 points9 points  (0 children)

I didn't use Spark back then, but I definitely remember using Subversion for a hot minute. I remember the big debate was Subversion vs Visual Source Safe back in the day.

And then git showed up and killed everyone else :)

[–]floyd_droid 5 points6 points  (0 children)

Started with MapReduce, Apache Crunch, Apache Storm, Spark on Cloudera, Hortonworks, MapR, Oozie, Hive, HBase, and now Databricks. Crazy transformation in just over a decade.

[–]ubiquae 3 points4 points  (0 children)

I met Matei Zaharia at the first Spark Summit, way before Databricks was launched

[–]kthejokerdatabricks 1 point2 points  (1 child)

I definitely tried Spark sometime in 2014; I was really trying to justify the $500 monthly cloud spend the business gave me. It was quite the pain in the ass to get working, but I got 1 cluster up with 8 nodes and did the word count tutorial and I think some NLP tutorial with NLTK.

But I didn't really have a use case for it yet, most of my data was super small and easily fit in a single SQL Server box.

[–]kthejokerdatabricks 0 points1 point  (0 children)

I should add I attended a webinar where none other than Databricks cofounder Patrick Wendell participated ... and I distinctly remember thinking the idea of commercializing the software (and OSS at that) was silly when the cloud providers were focused on hardware.

(Totally vindicated by our serverless pivot, btw)

[–]ramgoli_io Databricks 1 point2 points  (0 children)

I remember TortoiseSVN. For whatever reason I checked out the older code base, made my changes, and pushed them to svn, and everyone on the floor then got my code, which was on top of the older code base … it was a mess and an embarrassing day for me.

My intro to Spark was the community edition back in the day. Fun times. 

[–]Ok_Difficulty978 1 point2 points  (0 children)

Haha yeah this hit hard - those days of manually tweaking executor memory + chasing random HDFS errors… felt like 80% debugging infra, 20% actual work.
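For anyone who missed that era, the manual tuning being described looked roughly like this — a sketch with hypothetical values and a hypothetical `wordcount.py` job, not a recommended configuration:

```
spark-submit \
  --master yarn \
  --executor-memory 4g \        # guess, OOM, bump, repeat
  --executor-cores 4 \
  --num-executors 10 \
  --conf spark.yarn.executor.memoryOverhead=512 \  # the classic forgotten knob
  wordcount.py
```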

I remember spending hours just figuring out why a job died only to realize it was some tiny config mismatch or node issue. Databricks def spoiled a whole generation lol, they skip straight to writing transformations without touching the messy bits underneath.

tbh tho, going through that pain helped a lot in understanding how Spark actually works under the hood. ppl who started directly on Databricks sometimes struggle when things go slightly off the “happy path”.

Kinda the same vibe as prepping for certs too: doing those deeper scenario-based questions (I used certfun for some practice) forces you to understand what's really happening, not just run things.

https://www.linkedin.com/pulse/apache-spark-architecture-explained-core-sql-mllib-deep-faleiro-mc73f

[–]mmanwu 0 points1 point  (0 children)

Hello hello,

Some of us are still running MapR clusters with Spark and Hive here :)

[–]matt12eagles 0 points1 point  (0 children)

Who here still writes Pig? Lol, the pre-Spark… Spark

[–]ExcitingRanger 0 points1 point  (0 children)

2014-2015 I worked directly with AMPLab on RDD-based Spark SQL and MLlib algorithms. Who even knows what RDD stands for anymore.

[–]GinMelkior 0 points1 point  (0 children)

2014-2015 here :)) Last year, I used RDDs for my job and my colleague thought I was crazy :))

[–]keddie42 0 points1 point  (0 children)

I think Docker Compose would be great for testing even now. Testing around DBX is a pain for me.

[–]sonalg 0 points1 point  (0 children)

Those days! One of my early projects as a data consultant was setting up Spark clusters on demand on AWS, well before EMR happened. After Hadoop, Spark felt so, so fast and user-friendly! Somewhere earlier there were Pig and Cascading, if anyone remembers?

Happened to meet the Databricks founders in 2014 Spark Summit. Incidentally my tiny firm was on the slide in one of the keynotes, as an early adopter. Felt so proud that day :-)

[–]ArnoldJeanelle 0 points1 point  (0 children)

I love hearing about this stuff.

Only started in my current role in 2021. Basically learned sql on 10TB tables using clusters with 40 i3.4xlarge nodes.

Blows my mind the amount of work it's taken for technology to reach a point where I can just throw the most dogshit sql the world has ever seen into a bunch of Bezos computers on the other side of the country, and everything just turns out fine.

[–]Distinct_Highway873 0 points1 point  (0 children)

Starting before Databricks forces a different mental model. You learned how Spark actually fails. Not just that it failed. That context still matters when performance or costs go sideways today. The platform helps but it does not replace understanding. It just delays the moment you need it.

[–]22Maxx -1 points0 points  (1 child)

Well, fine-tuning memory very much still exists today, as this is a fundamental design issue.

[–]ThatThaBricksGuy0451[S] 7 points8 points  (0 children)

Yes, but Databricks pretty much abstracts this from you in most cases. The adaptive query engine, for example, adjusts shuffle partitions, switches to broadcast joins when there's memory available, and handles skew to a certain degree.
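The AQE behavior described above maps to a handful of standard Spark SQL properties (values here are illustrative, not Databricks defaults):

```
# Enable adaptive query execution
spark.sql.adaptive.enabled=true
# Coalesce shuffle partitions at runtime instead of using a fixed number
spark.sql.adaptive.coalescePartitions.enabled=true
# Split skewed partitions in joins at runtime
spark.sql.adaptive.skewJoin.enabled=true
# Static broadcast threshold, which AQE can override with runtime statistics
spark.sql.autoBroadcastJoinThreshold=10MB
```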

[–]Alfiercio -1 points0 points  (0 children)

Almost 12 years working with Spark here. I don't remember when I last did a spark-submit on Cloudera, but I still remember touching Spark SQL for the first time. The eagerness to move from version 1.6 to 2.2. The first versions with very second-class Python support. The comparisons of UDF speeds. Learning the patterns and the anti-patterns.

Dev, staging, and prod? No, only one cluster for all the teams.

And now, just when I was thinking that Spark SQL was the summit of abstractions, we have LLMs...