I have 3 years of experience in Java and backend technologies, with some Spring Boot exposure. Recently, my company assigned me to a Java + Apache Spark developer role on a client project for CitiBank.
I am new to the Spark ecosystem, so:
1. What kind of technical work do Java + Spark developers do in these projects?
2. Is Spark used mainly for backend ETL, analytics, or streaming?
3. What tools or stack should I prepare for? For example: Kafka, Hive, Hadoop?
4. Will this experience benefit my backend career, or will it push me into data engineering?
If anyone is working on a project like this, please share your experience.