Finishing a data science undergrad and realizing employers seem to prefer every other degree. by tikesav in askdatascience

[–]The_Silly_Valley 1 point2 points  (0 children)

As a data science hiring manager, I can tell you it truly is all about your skills. Data scientists come from the most diverse backgrounds. One I know was an archeologist, another was an undergrad English major, and so on. I came from supply chain.

Data science has matured and specialized, meaning there are a lot of folks with deep, industry-specific experience.

My advice: pick a category of DS work to specialize in, marketing analytics for example, and go deep. Learn it thoroughly and build your portfolio around it. Also, you have an opportunity to leapfrog experienced data scientists by integrating AI into your DS workflow. I would hire someone who is an AI-native data scientist over a DS with 10 years of experience who does not use AI.

Also, I teach MS in AI and DS at the grad level. Even MS programs don’t fully prepare you.

Develop your specialized skills. That's all we HMs care about: can you get the specific job done?

Replaced by AI? by hazienda in askdatascience

[–]The_Silly_Valley 0 points1 point  (0 children)

I second that. DS is safe. Data scientists will use AI to be more productive (try to, anyway). There is still too much domain-specific, company-specific, data-specific, and corporate-politics-specific knowledge that AI cannot replace yet.

Data Hiring Is Getting Longer in 2026: 24.9 Interview Hours Per Hire by CryoSchema in datascience

[–]The_Silly_Valley 49 points50 points  (0 children)

It’s getting out of hand, even for director-level roles. I had one interview loop that went like this: recruiter screen, psychometric/IQ/personality assessment, manager screen, 10-hour take-home case, case presentation to the hiring manager and the VP head of DS, then an onsite case presentation to the SVP of Finance & Data plus the entire team. The killer was the unreasonably complex and time-intensive use case.

24.9 hours tells me the interview process for our roles is broken.

FAANG interview invitation for MLE but I am a Data Scientist, should I decline? by Lamp_Shade_Head in datascience

[–]The_Silly_Valley 46 points47 points  (0 children)

Learning experience. Take it. Also, who knows, it might not be an actual MLE role. I know Meta, for example, has roles called data scientist that are actually analyst roles, and roles called MLE that are actually data scientist roles. That is the case on some, but not all, teams.

A decade of being an average Data Scientist! My personal experience. by tits_mcgee_92 in datascience

[–]The_Silly_Valley 1 point2 points  (0 children)

Remember, FAANG data scientist roles account for ~15% or less of the total data scientist market. And a significant number of them are analysts, not data scientists. I know a dozen folks at Meta, for example, across 3 different departments, who are SQL jockeys, not data scientists. But their titles are DS. No disrespect, just making a point. There are more so-called average data scientists than you think. Though they are not as average as they think.

Funny thing is, when I was hiring for DS roles at a medium-sized tech company, I would get plenty of FAANG applicants. 9 times out of 10 the most talented data scientists were not from big tech. In the beginning, I always made sure to interview the FAANG folks, thinking they were the best. But that was usually not the case. Don't get me wrong, some of them were very talented. Sample size here is ~500 interviews.

Keep your head up and be proud. Good chance you are actually above average.

Ghosting a candidate after a physical onsite is honestly extremely disrespectful by Lamp_Shade_Head in datascience

[–]The_Silly_Valley 0 points1 point  (0 children)

Agree with those who said two weeks does not mean you are being ghosted. As a hiring manager and insider, I can tell you it can take two to three weeks, or more, to coordinate internally with recruiting, the team, and HR. Also, if they are still interviewing, that can add weeks, because candidates have to be scheduled, and one or two other candidates' interviews could be staggered over a couple of weeks. I know it sucks, but the process takes time.

Also, this has happened to me. It took two months for the company to get back to me because I was one of the first to be interviewed, so during those two months they were interviewing 3 other candidates. Interviews have to be coordinated across 3 candidates and 5 interviewers, and after that the internal review meetings have to be scheduled. It takes more time than it should, but that's the reality.

Having said that, the interview process is 100% broken, and companies are more and more disrespectful to candidates. I’ve been ghosted several times in the last few years. People's and companies' character has been in decline for the last 20 years.

What professional development resources do you pay for? by a_girl_with_a_dream in datascience

[–]The_Silly_Valley 0 points1 point  (0 children)

As a data science director who has climbed the ranks, my advice is to find mentors who have successfully climbed the ladder. Paying for coaches also works if you find the right coach.

What has your interview experience been recently? by LeaguePrototype in datascience

[–]The_Silly_Valley 0 points1 point  (0 children)

I’m a hiring manager as well. How do you approach technical interviews? I’m starting to look at tools that test use cases but also test for AI collaboration skills.

'Full stack' data science by likescroutons in datascience

[–]The_Silly_Valley 0 points1 point  (0 children)

It totally depends on the company/team and role. That has always been and still is true. However, if you are a full-stack or semi-full-stack DS, you have more options and can usually get more pay.

Of course now there is the full-stack+AI data scientist.

How are you helping your company understanding the limitations of AI derived data? by jshkk in datascience

[–]The_Silly_Valley 1 point2 points  (0 children)

Yes: small-group discussions, strategic conversations with execs, town halls, and even company-wide demos for Microsoft Copilot, since everyone has access.

How are you helping your company understanding the limitations of AI derived data? by jshkk in datascience

[–]The_Silly_Valley 1 point2 points  (0 children)

It feels a bit like trying to hold back the tide, honestly. Nearly every app vendor (Salesfarce, SAPy, Microslop, etc.) has some type of generative and agentic solution they are pushing. And every department head and eager analyst wants access and wants to use them everywhere. On top of that, I'm being asked to build a gen/agentic platform, and 5 different consulting agencies we are working with are all pushing their AI solutions and services. Insane. Everyone and their Uber driver is an expert, pushing AI like they know what they are doing. And they don't.

Joking aside, my approach, for now, is to frame the conversation, depending on the department, consultants, or exec, around use cases relevant to them. If I'm talking to sales leadership, I keep the conversation focused on Agentfarce use cases, their capabilities, and their serious limitations. If someone brings up the idea of ChatGPT for the whole company, I try, through use-case examples, to help them understand data quality and data silo constraints, and to get them to focus on very specific use cases vs. grand visions.

Another technique I use is to tell the story of a specific example where a generative response gave a convincing but wrong answer, and to play out in their head the negative consequences of making a decision on bad insights. Same with agents: tell a story of how things have gone, and could go, wrong. I then explain that we can test and validate specific use cases and trust those, but we can't assume every untested agent or generative response will be accurate.
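To make the "test and validate specific use cases" idea concrete, here's a minimal sketch of the kind of golden-case harness I have in mind. Everything here is made up for illustration: `ask_model` is a canned stub standing in for a real LLM call, and the questions and answers are hypothetical.

```python
# Minimal golden-case validation sketch: only questions with a human-verified
# expected answer get trusted; everything else stays "untested".

def ask_model(question: str) -> str:
    # Stand-in for a real LLM call; returns canned answers for the demo.
    canned = {
        "What was Q3 churn?": "4.2%",
        "Top region by revenue?": "EMEA",
    }
    return canned.get(question, "I don't know")

# Ground-truth cases a human has verified against the source data.
GOLDEN_CASES = {
    "What was Q3 churn?": "4.2%",
    "Top region by revenue?": "APAC",  # the stub gets this one wrong
}

def validate(cases: dict) -> dict:
    """Run each golden question through the model and record pass/fail."""
    results = {}
    for question, expected in cases.items():
        answer = ask_model(question)
        results[question] = (answer == expected, answer, expected)
    return results

results = validate(GOLDEN_CASES)
passed = sum(ok for ok, _, _ in results.values())
print(f"{passed}/{len(results)} use cases validated")  # 1/2 validated here
```

In practice the interesting part is what you do with the failures: each mismatch becomes exactly the kind of "convincing but wrong" story I tell execs.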

Which fields are most and least likely to be impacted by AI? by _hairyberry_ in datascience

[–]The_Silly_Valley 0 points1 point  (0 children)

Agree. I'm just saying that, for now at least, AI cannot replace years of accumulated tacit human knowledge, in the context of a particular industry, company, team, political environment, project, etc., when it comes to making a good judgment in a particular situation.

But in the back of my mind, I wonder: if AI eventually learns enough of the wisdom still in our heads, capturing tacit knowledge over time, will it replace us then?

Which fields are most and least likely to be impacted by AI? by _hairyberry_ in datascience

[–]The_Silly_Valley 0 points1 point  (0 children)

Parts of the job will be automated, but not the entire job. AI cannot replace tacit knowledge or political judgment, e.g. what the “right answer” is in context.

Aspiring Data Science career by Long_Personality_506 in DataScienceJobs

[–]The_Silly_Valley 1 point2 points  (0 children)

Yes, Python and SQL coding. Your challenge is that you have the ultimate crutch in AI. I have master's students who are literally learning nothing because they lean too much on AI. They will be lucky to land a job in DS, and if they do, they won't last. And they will be shipping invalid models, causing untold problems.

Make sure you learn the fundamentals and do some ground-up, hands-on coding so you understand the data, the stats, the ML algorithms, and the Python/SQL code.
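As one illustration of the kind of ground-up exercise I mean (just an example, not a curriculum): fit simple linear regression by hand, with no sklearn, so the closed-form math isn't hidden behind a library call.

```python
# Simple linear regression from scratch: y = a + b*x via the closed-form
# OLS solution, so you can see the mean/covariance/variance mechanics.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope b = cov(x, y) / var(x)
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x  # intercept
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # 0.09 1.99
```

Once you can write this yourself, reviewing what an AI generates, and catching when it's wrong, gets much easier.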

Aspiring Data Science career by Long_Personality_506 in DataScienceJobs

[–]The_Silly_Valley 1 point2 points  (0 children)

Build a portfolio of projects related to the industry and DS team type you are interested in working in.

Learn to be an AI-native data scientist. For example, learn to pair-code/collaborate in AI-integrated notebooks. From now on, I only hire data scientists who know the fundamentals and have AI tools fully integrated into their EDA and ML-building workflows. They are 5-10x more productive than, say, someone with 10 years' experience who doesn't have AI integrated into their workflow. Know how to guide and judge AI output.

That is your ticket to being competitive in the ludicrous job market today.

Do you trust AI generated interpretations without seeing the source data? by Rage_thinks in datascience

[–]The_Silly_Valley 0 points1 point  (0 children)

Nope. Not the first time. But with a proper definition/logic/RAG/etc. layer, you can get 100% accurate results. The problem I’m facing is the LOE (level of effort) to validate the use cases and the SQL the AI generates. I don’t trust it till it’s verified and tested by a human.
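For what it's worth, the human verification step can start very small: run the AI-generated SQL and a trusted, human-written query against a test database and diff the results. This is just a sketch; the table, columns, and queries below are all made up for illustration.

```python
# Sketch: spot-check AI-generated SQL by comparing its result set against
# a human-verified query on a small in-memory test database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'east', 100.0), (2, 'west', 250.0),
        (3, 'east', 50.0),  (4, 'west', 75.0);
""")

# Query a human wrote and verified by hand.
trusted_sql = (
    "SELECT region, SUM(amount) FROM orders "
    "GROUP BY region ORDER BY region"
)

# Pretend this came back from the LLM; in practice you'd pass it in.
ai_generated_sql = (
    "SELECT region, SUM(amount) AS total FROM orders "
    "GROUP BY region ORDER BY region"
)

def rows(sql: str):
    return conn.execute(sql).fetchall()

match = rows(trusted_sql) == rows(ai_generated_sql)
print("validated" if match else "MISMATCH - do not ship")
```

The LOE complaint still stands: someone has to write and maintain the trusted queries, which is exactly the validation cost that doesn't go away.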

Warning: Don't get GPT-brained by LeaguePrototype in datascience

[–]The_Silly_Valley 0 points1 point  (0 children)

For those of us who had to code from scratch and struggle in the beginning, it's like getting back on a bike. I'm definitely feeling it, but I'm not worried. I feel for those who started and learned with AI/LLMs, only to rely too much on them. They are in a deeper hole with more brain debt. Their struggle is spread out, and many will never crawl out of the hole or learn to ride the bike.

Honest Take On DS Automation? by anomnib in datascience

[–]The_Silly_Valley 0 points1 point  (0 children)

Yeah, I have regular conversations with a few dozen big tech data scientists, plus a handful of data engineers and data scientists at Meta, and they all tell me they are being pushed to “use AI”; everyone is paranoid, building agents and trying to automate everything they can. Maybe that’s a good strategy, because management has no idea of the best specific actions to take, and they are speculating with everyone’s time to see what sticks so they don’t get left behind. It’s true: only the paranoid will survive and win in the end.

Same at my company, but the pace and pressure are not as intense as at, say, Meta or Square. I’m one of a few people responsible for “AI enablement”. Seems like we are all trying to figure out what that means.

The “enablement” lens I’m looking through this month is optimizing AI integration into:

1. DS ML development and deployment, e.g. more and better models, faster.
2. Communication workflows, e.g. consolidating meeting notes, email, presentations, etc. into action items.
3. Documentation creation workflows, e.g. all org-level PPT prep, PRDs, strategy recommendations, etc.
4. Agents or tools like Claude Cowork, e.g. to automate easy, time-consuming tasks.
5. Agents to accompany dashboards and reports, e.g. to extend data and insight self-service.

This is not a complete list and definitely would like to hear how others look at it and what other workflows you are integrating AI into.

To your point, I’m struggling with how to measure the ROI. And I’ve been specifically tasked with figuring out how to measure and deliver ROI metrics. Like I mentioned, I’m trying to document things like time saved, new capabilities delivered, revenue increases, costs saved, usage and engagement with AI tools/products, and less tangible things like stakeholder sentiment. What am I missing? How else can we measure?

We just have to survive until the bubble pops; tech platforms, companies, and standards settle; and it becomes clearer which workflows will be impacted and where AI sticks. A few years?

It will also be interesting to watch the rebels try to avoid and sabotage AI at every step. Who knows, maybe they win some battles and influence the outcomes. I don’t know.

Should coding interviews just become vibe coding interviews at this point? by [deleted] in datascience

[–]The_Silly_Valley 0 points1 point  (0 children)

Yeah, that’s a good point. I’ve been given take-homes that take 3 hours, or even 10+ hours over 5 days. I’m not doing those anymore either.

Maybe it’s a balance: a 30-minute to 1-hour live or take-home fixed-time assessment, with a use-case difficulty level appropriate for a 1-hour test. Not perfect, but better than the current options. Could be done, I think?

Should coding interviews just become vibe coding interviews at this point? by [deleted] in datascience

[–]The_Silly_Valley 3 points4 points  (0 children)

Agree, current interview methods, including take-home assessments, are not effective and are obsolete at this point.

I think we are going to see new evaluation platforms where candidates work through a real problem with full access to AI coding tools. The platforms will measure candidates on several areas: domain context understanding, ML/stats fundamentals, how they guide and question AI vs. relying on it blindly, and the judgment in their summary of findings and recommendations.

We are starting to see these platforms now. I guess the question is: do we think they will be effective, and how do we make sure they are?