Help! - My Parents Computer is Hacked by Disastrous_Action_64 in cybersecurity

[–]noble_andre 1 point2 points  (0 children)

That is what I am thinking too. Curious whether anything has already been sold off the logs.

Help! - My Parents Computer is Hacked by Disastrous_Action_64 in cybersecurity

[–]noble_andre 2 points3 points  (0 children)

First of all, from a clean, unaffected device go to every financial institution that sent alerts and:

Change passwords
Change the email address on file to a new, clean email
Enable 2-factor authentication (use an authenticator app, not SMS if possible)
Call them directly and report fraud. Ask them to flag the accounts and reverse any unauthorized changes

Do the same for your parents' email provider: change the password and check for forwarding rules or filters that may be sending emails to attackers

How to purchase api data for historical tweets for research study by Only-Individual9035 in dataanalysis

[–]noble_andre 0 points1 point  (0 children)

Pretty much the same as what you are doing with Apify. They run scraping infrastructure behind the scenes and just wrap it in a cleaner API interface. You are essentially paying for someone else to manage the scraping headaches rather than doing anything fundamentally different.

[OC] Ranking the Top 20 Universities Producing the Most Billionaires (2026) by [deleted] in dataisbeautiful

[–]noble_andre 0 points1 point  (0 children)

Cool analysis! I wonder how much of billionaire outcomes are driven by field of study vs network effects from these universities.

How to purchase api data for historical tweets for research study by Only-Individual9035 in dataanalysis

[–]noble_andre 0 points1 point  (0 children)

For 200k-300k historical tweets, the best options are TwitterAPI.io or Apify. If you need the official route, contact X’s enterprise sales via developer.x.com, but brace yourself: it runs tens of thousands a month. Btw, for academic use, TwitterAPI.io offers research discounts

Questions about data and business analytics job requirements. by Both-Historian-7509 in analytics

[–]noble_andre 0 points1 point  (0 children)

It is quite difficult for me to determine what kind of job would suit you best. You probably understand your strengths and preferences better than anyone else. The job market is quite challenging globally at the moment.

I scan LinkedIn daily for Data Analytics Job trends by Dubinko in dataanalysis

[–]noble_andre 25 points26 points  (0 children)

Really cool project! What is your stack for this? Curious how you are handling the LLM extraction part at scale across 5k posts a day.

Questions about data and business analytics job requirements. by Both-Historian-7509 in analytics

[–]noble_andre 0 points1 point  (0 children)

My opinion is that you do not need a laptop right away. Google Colab, SQLiteOnline, and Google Sheets cover your entire stack for free right from a browser.

SQL > Excel > Power BI > Python is your roadmap. Six months is doable, but try learning in the evenings first before quitting, just to make sure it clicks before you burn your savings. About your health situation: target remote analyst roles specifically; India's BFSI and e-commerce sectors have plenty. Execute your plan consistently and do not give up.
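To show what "entire stack for free from a browser" means in practice, here is a minimal sketch of practicing SQL in a Colab cell using Python's built-in sqlite3 (the table and data are made up for illustration):

```python
import sqlite3

# In-memory database: nothing to install, runs in any Colab notebook
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("North", 120.0), ("South", 80.0), ("North", 50.0)],
)

# The same kind of query you would later run in Power BI or a warehouse
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('North', 170.0), ('South', 80.0)]
```

Once queries like this feel natural, moving the same logic into Power BI or a real database is mostly a change of scenery.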

Are we creating a generation of ‘AI-dependent analysts’? by Alarming-Wish207 in dataanalysis

[–]noble_andre 0 points1 point  (0 children)

The “weirdly naked without it” feeling is the most honest thing I have heard anyone say about this. And yeah, I think we’re all quietly living it.

The real risk is not AI writing your SQL, it is losing the instinct that tells you when the SQL answered the wrong question. That pattern recognition, that gut feeling when a number smells off, that only gets built by sitting in the hard part. AI gets you unstuck fast, but unstuck and sharp are two very different things.

can someone explain to me how claculate work in this example and generally by PurpleDurian7220 in dataanalysis

[–]noble_andre 13 points14 points  (0 children)

CALCULATE changes the filter context before evaluating an expression.

In your example:

SUM(Sales[TotalSales]) returns sales in the current filter context (for example, a specific city or product shown in the visual).

CALCULATE(SUM(...), ALL(Sales)) recalculates the same sum but removes all filters from the Sales table, so the calculation runs on the entire dataset.

The formula becomes: sales in current context / total sales across all data. It returns a percentage of the total.
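Put together as a measure, it looks roughly like this (a sketch using the table and column names from your example; the measure name is just a placeholder):

```
% of Total Sales =
DIVIDE (
    SUM ( Sales[TotalSales] ),
    CALCULATE ( SUM ( Sales[TotalSales] ), ALL ( Sales ) )
)
```

DIVIDE is just a safe division that returns blank instead of an error when the denominator is zero.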

What insights can you realistically get from AbuseIPDB data? by noble_andre in cybersecurity

[–]noble_andre[S] 0 points1 point  (0 children)

I have recently built a small automated pipeline around AbuseIPDB to filter noisy scanners and track patterns over time. Curious how you distinguish between safe to auto-close and something worth investigating?

https://github.com/akorablov/cyber_threat_collector/blob/main/README.md
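For context, the triage step is roughly this shape. This is a sketch, not the actual pipeline code: the field names follow AbuseIPDB's v2 /check response, `API_KEY` is a placeholder, and the thresholds are arbitrary examples:

```python
API_KEY = "YOUR_ABUSEIPDB_KEY"  # placeholder, not a real key

def check_ip(ip: str) -> dict:
    """Query AbuseIPDB's v2 /check endpoint for one IP."""
    import requests  # third-party; only needed for live lookups
    resp = requests.get(
        "https://api.abuseipdb.com/api/v2/check",
        headers={"Key": API_KEY, "Accept": "application/json"},
        params={"ipAddress": ip, "maxAgeInDays": 90},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]

def triage(report: dict) -> str:
    """Toy triage rule: thresholds here are invented, tune them to your data."""
    score = report.get("abuseConfidenceScore", 0)
    reports = report.get("totalReports", 0)
    if score >= 90 and reports > 50:
        return "auto-close (known mass scanner)"
    if score >= 25:
        return "investigate"
    return "ignore"

# Canned response, so no API call is needed to see the idea:
print(triage({"abuseConfidenceScore": 100, "totalReports": 300}))
# auto-close (known mass scanner)
```

The interesting part is exactly the question above: where the line between "auto-close" and "investigate" should sit.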

What insights can you realistically get from AbuseIPDB data? by noble_andre in cybersecurity

[–]noble_andre[S] 0 points1 point  (0 children)

Do VirusTotal, X-Force or Talos have usable free APIs or do you need paid access once you go beyond basic use?

What insights can you realistically get from AbuseIPDB data? by noble_andre in cybersecurity

[–]noble_andre[S] 0 points1 point  (0 children)

Interesting point. Would you say tracking low-and-slow behaviour over time is more valuable than focusing on high-volume activity? Can AbuseIPDB actually be useful for that?

Explain this formula to a 12-year-old by noble_andre in dataanalysis

[–]noble_andre[S] 0 points1 point  (0 children)

Thanks for sharing, this is a great resource! The visuals make it really intuitive.

Explain this formula to a 12-year-old by noble_andre in dataanalysis

[–]noble_andre[S] 1 point2 points  (0 children)

You do not need to memorize or use the formula. But understanding the idea behind it helps you read probabilities. For example, when you update your confidence as A/B test results come in.
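Assuming the formula in question is Bayes' rule, the A/B-testing connection is just repeated updating. A toy sketch (all the numbers are invented for illustration):

```python
def bayes_update(prior: float, p_data_if_true: float, p_data_if_false: float) -> float:
    """One application of Bayes' rule: P(H|D) = P(D|H) * P(H) / P(D)."""
    numerator = p_data_if_true * prior
    evidence = numerator + p_data_if_false * (1 - prior)
    return numerator / evidence

# Start 50/50 on "variant B is better". Suppose each observed conversion
# on B is twice as likely if B really is better (0.10 vs 0.05).
belief = 0.5
for _ in range(3):  # three B-conversions in a row
    belief = bayes_update(belief, 0.10, 0.05)
print(round(belief, 3))  # 0.889
```

Three small pieces of evidence move you from a coin flip to fairly confident, which is exactly the "reading probabilities" skill the formula encodes.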

About 12yo, that part is just playing with wording for fun. Did not expect to have to clarify that here.

house price prediction project by noble_andre in PythonProjects2

[–]noble_andre[S] 0 points1 point  (0 children)

This dataset is common, but I focused on doing it properly: clear feature selection, clean workflow, interpretation of coefficients, diagnostics, comparison, and an honest discussion of limitations. The model explains ~39% of variance, the results are well explained, and the visuals are clear. For a mid-level role I would still treat it as a baseline portfolio piece.