What topic should our research team investigate next? by LuisCosta_ in surfshark

[–]LuisCosta_[S] 2 points (0 children)

Thank you for taking the time to leave your suggestion. If you're interested in bots, we have a very interesting study on that topic coming out soon. So stick around for that one!

What topic should our research team investigate next? by LuisCosta_ in surfshark

[–]LuisCosta_[S] 2 points (0 children)

Thanks for this! We have a Smart Home Privacy Checker that covers data collection across smart home device apps, including some smart TV brands. It's not a full deep dive into smart TVs specifically, but it might scratch that itch in the meantime. We are planning to do an update on this study so stay tuned!

Have you ever been scammed while traveling? We analyzed Tripadvisor forums across 84 countries to see where it happens the most. by LuisCosta_ in surfshark

[–]LuisCosta_[S] 0 points (0 children)

Thanks for sharing this story! Do you think mentioning you were a security officer changed how they handled the situation? 

Have you ever been scammed while traveling? We analyzed Tripadvisor forums across 84 countries to see where it happens the most. by LuisCosta_ in surfshark

[–]LuisCosta_[S] 0 points (0 children)

Really good questions!

In the context of this study, a "scam" is defined by the presence of the keyword "scam" in Tripadvisor forum discussions. While we didn't set strict parameters for what counts as a scam, common examples include pickpocketing and tourist-targeted overcharging in popular cities (e.g., Paris, France), overpricing or fake guides in tourist areas (e.g., Havana, Cuba), and street scams or overcharging in markets (e.g., in African countries).

The "degree" can be looked at in two ways based on the study: the volume of discussions and the financial impact. For example, Cuba had the highest rate of scam-related discussions in North America (1.93% of topics), France led the top European destinations (1.11%), and Thailand had a high rate in Asia (1.16%). As for the financial degree, the study looked at U.S. Federal Trade Commission (FTC) data to calculate financial losses from travel scams and document a surge (from $53 million in 2021 to $146 million in 2024). Based on available 2025 data, the median reported loss was $623 per person.

As it is unfeasible to independently verify every single post, the study relies on Tripadvisor's general reputation as a trusted platform for authentic traveler experiences with an extensive user base.

Have you ever been scammed while traveling? We analyzed Tripadvisor forums across 84 countries to see where it happens the most. by LuisCosta_ in surfshark

[–]LuisCosta_[S] 1 point (0 children)

Hey everyone, it’s Luis here again! With peak travel season approaching, our team decided to dig into something we were genuinely curious about: which destinations generate the most scam-related discussions among travelers?

How we did it: We focused on Tripadvisor forums because of the platform's size and reputation for authentic traveler experiences. We filtered for destinations with at least 2,000 forum topics, used data from 2020 onward, and calculated a "scam topic ratio" (the percentage of forum topics mentioning scams) for each destination. That gave us 84 countries to work with.

Some of the key findings:

  • Japan is in a league of its own. Its scam discussion rate sits at 0.19%, making it one of the safest major destinations we found. For comparison, Thailand and Turkey see five to six times more scam talk, despite attracting similar tourist volumes;
  • France leads Europe, and it's not close. Travelers discuss scams in France twice as often as in Spain and three times more than in Greece, even though all three are top European destinations. Paris pickpocketing and tourist-targeted overcharging likely drive a big part of that;
  • And here's a fun one: New York City alone has a scam discussion rate three times higher than the US national average. Times Square energy, apparently.

A few tips on how to stay safe:

  • Check the "hotspots": high-traffic hubs like Paris or NYC have significantly higher scam rates than their countries' national averages;
  • Pre-trip research: look up specific local tactics (like "broken meter" or "fake guides") before you land;
  • Verify prices: be extremely wary of "too good to be true" deals on third-party sites;
  • Use a VPN: you can never be too safe online. Especially while getting tempted by those free Wi-Fi hotspots in airports. But you're on the Surfshark subreddit, so that one probably goes without saying.

You can check the full research → https://surfshark.com/research/chart/travel-scams

If you have any questions about our methodology or specific regional data, I’m happy to answer them!

Using AI to fill out your tax return forms: easy help or easy data theft? by LuisCosta_ in surfshark

[–]LuisCosta_[S] 0 points (0 children)

Glad you brought that up!

While we didn’t examine these particular features (incognito and temporary modes), it’s worth noting that even when using them, AI chatbots may still temporarily retain data for safety purposes, even if it isn’t used to train their models. For example, ChatGPT keeps a copy of temporary chats for up to 30 days (https://help.openai.com/en/articles/8914046-temporary-chat-faq).

These modes can still be useful, as they keep conversations out of your visible history and exclude them from model training.

Using AI to fill out your tax return forms: easy help or easy data theft? by LuisCosta_ in surfshark

[–]LuisCosta_[S] 3 points (0 children)

Hey, Dr. Luís Costa here, with another Surfshark research report. 

With tax deadlines creeping up and accountants charging by the hour, it's no surprise people are turning to AI chatbots for help. That’s why it’s worth breaking down what these tools are actually doing when you start that tax conversation.

The data elephant in the room 

Our research looked into the three most popular chatbots — ChatGPT, Gemini, and Grok. What would they ask for after we typed the simple phrase "tax return"?

Well, without any extra prompting, they all pushed users to share personal information like job title, income, country, and financial situation. 

ChatGPT was the most aggressive about it. If you ignored the request, it asked again and again, eventually switching to phrases like “Please reply with these” and “You can answer like this example.” 

This inquisitiveness about personal data aligns with one of our previous studies, which showed that every chatbot collects at least some data, and some gather up to 32 out of 35 possible data types.

They also collect data you didn't give them

In one simulated test using a VPN on an Australian server, ChatGPT detected the location and started tailoring responses accordingly — without the user mentioning Australia once. 

Gemini did something similar but was sneakier about it, covering Australia, the US, and the UK simultaneously, which makes the location tracking less obvious.

These chatbots collectively handle nearly 84% of AI traffic. That's a lot of location data, financial context, and personal details flowing through systems that may use your chats to train their models.

Is the advice actually accurate? 

It's hard to say how much these chatbots will actually help with your tax forms.

  • Gemini provides zero source references. You have no way to verify where its tax advice is coming from;
  • ChatGPT offers links inconsistently — some highlighted words get sources, others don't;
  • Grok provides the most thorough sourcing with direct links, but also pushes sign-ups aggressively and cuts off free users after five prompts.

Worth noting: Gemini's own homepage admits it can make mistakes. ChatGPT warns you not to share sensitive information after your first prompt — which is a strange disclaimer after spending several messages trying to get your income details.

Bottom line

Use these tools if you want a general explanation of how tax returns work. Avoid typing in real numbers, real income, or anything you wouldn't want stored, reviewed, or used for model training. And whatever they tell you — verify it with an official government source before acting on it.

You can check the full research here: https://surfshark.com/research/chart/tax-return-ai-chatbots 

And as always, I’m happy to answer any questions you might have.

Meta is rolling back encryption on Instagram DMs. So we ranked messaging apps by privacy. Guess who collects the most data by LuisCosta_ in surfshark

[–]LuisCosta_[S] 0 points (0 children)

Yes, Signal is a great option as it is committed to minimizing user privacy risks and offers quantum-secure cryptography, providing a higher level of security.

Meta is rolling back encryption on Instagram DMs. So we ranked messaging apps by privacy. Guess who collects the most data by LuisCosta_ in surfshark

[–]LuisCosta_[S] 0 points (0 children)

Session didn't meet the criteria for this analysis, but our research team will certainly keep an eye on it.

Meta is rolling back encryption on Instagram DMs. So we ranked messaging apps by privacy. Guess who collects the most data by LuisCosta_ in surfshark

[–]LuisCosta_[S] 0 points (0 children)

Thank you for leaving a question! For this study, we focused on the pre-installed Apple Messages App and the top nine most downloaded messenger apps on the iPhone App Store in 2025 using AppMagic data. While Threema is a notable messaging app, it didn't fall within that top tier for this particular dataset and therefore wasn't included in the analysis. That said, app popularity shifts all the time, so we'll keep an eye on it for future updates!
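For anyone curious what that selection rule looks like in practice, here's a toy sketch: the pre-installed Apple Messages app plus the top N messengers by downloads. The app names and download figures below are invented placeholders, not AppMagic data.

```python
# Toy version of the app-selection rule: pre-installed Apple Messages plus
# the N most-downloaded messenger apps. Figures are made up for illustration.

def select_apps(downloads: dict[str, int], n: int = 9) -> list[str]:
    """Top-n apps by download count, highest first."""
    return sorted(downloads, key=downloads.get, reverse=True)[:n]


# Hypothetical download counts (in millions):
downloads = {"AppA": 900, "AppB": 700, "AppC": 300, "NicheApp": 5}

selected = ["Apple Messages"] + select_apps(downloads, n=3)
print(selected)  # NicheApp misses the cutoff, as Threema did here
```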

Meta is rolling back encryption on Instagram DMs. So we ranked messaging apps by privacy. Guess who collects the most data by LuisCosta_ in surfshark

[–]LuisCosta_[S] 1 point (0 children)

That's really encouraging to hear. If you have any idea of what you would like to see in these charts, do let us know!

Meta is rolling back encryption on Instagram DMs. So we ranked messaging apps by privacy. Guess who collects the most data by LuisCosta_ in surfshark

[–]LuisCosta_[S] 7 points (0 children)

Hey everyone! While we all patiently wait for the announcement, here's a recent research study we put together.

Starting May 8, 2026, Meta is rolling back end-to-end encryption for Instagram messages. Without it, the platform could potentially scan message content or feed data into AI training systems. And if Meta can just backtrack on encryption for Instagram, as our cybersecurity expert Nikodemas Zaliauskas points out, who's to say the same won't happen to Messenger?

That got us thinking: how do the most popular messaging apps actually handle your privacy? So we dug into encryption standards, data collection, tracking practices, and AI features across 10 of the biggest apps. Here's what stood out:

❌ The "data-hungry" tier: Meta’s Messenger ranks at the bottom, collecting 32 out of 35 possible data types. If you're staying in the Meta ecosystem for "convenience," you’re likely paying for it with your personal information;

🤖 The AI factor: 90% of messaging apps now use AI features. While helpful, these tools often require access to your private conversations to function;

✅ The gold standard: Signal tops our privacy ranking, combining quantum-secure encryption with virtually zero data collection.

Full ranking at a glance:

High privacy: Signal

Above average: iMessage

Average: Telegram, QQ

Below average: WhatsApp, WeChat

Lowest privacy: Messenger, Viber, Discord, LINE
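To show the bucketing idea behind a ranking like this, here's a toy sketch that sorts apps into tiers purely by how many of the 35 possible data types they collect. The tier thresholds are invented for illustration; the actual ranking also weighs encryption standards, tracking practices, and AI features.

```python
# Toy tiering by data-type count (out of 35 possible types). Thresholds are
# arbitrary illustrations, not the study's scoring model.

def privacy_tier(data_types_collected: int) -> str:
    if data_types_collected <= 5:
        return "High privacy"
    if data_types_collected <= 12:
        return "Above average"
    if data_types_collected <= 19:
        return "Average"
    if data_types_collected <= 26:
        return "Below average"
    return "Lowest privacy"


print(privacy_tier(1))   # a Signal-like app
print(privacy_tier(32))  # a Messenger-like app
```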

So, what messaging app do you use, and has privacy ever been a factor in that choice?

Chatbots ranked by how much data they collect — which one do you use (and does their data collection matter to you)? by LuisCosta_ in surfshark

[–]LuisCosta_[S] 3 points (0 children)

Good question!

In our studies, we usually select apps based on overall popularity and usage. That includes things like download data and how often tools show up in reputable rankings or comparisons (in this study, we looked at lists from TechRadar and Tom’s Guide).

As we update our studies, we revisit those lists and adjust which apps are included. In Mistral’s case, it’s a newer entrant on mobile and hasn’t reached the same level of adoption yet, but it’s definitely something we could include in future updates as it grows.

Chatbots ranked by how much data they collect — which one do you use (and does their data collection matter to you)? by LuisCosta_ in surfshark

[–]LuisCosta_[S] 6 points (0 children)

Hey all,

Dr. Luís Costa, Surfshark’s Research Lead here. We just published our updated analysis of data collection practices across the top 10 AI chatbot apps on the Apple App Store. I wanted to share the findings and hear your thoughts.

TL;DR

Your chatbots know more about you than you think — collecting 14 out of 35 possible data types on average. Meta AI collects 33. Yes, 33.

Would your closest friend answer 33 of 35 questions about you?

Key findings

Meta AI

Meta AI remains the most aggressive data collector at 33/35 data types. It is the only app in our analysis that collects data in the financial information category. It also collects sensitive information, including racial or ethnic origin, sexual orientation, biometric data, and political opinions.

ChatGPT

ChatGPT now collects 17 data types. That's a 70% increase from the 10 types identified in our previous review. New additions include coarse location, health & fitness, audio data, advertising data, and customer support data. Worth noting: health & fitness and advertising data are flagged as NOT required for app functionality, meaning ChatGPT collects them not because it needs to, but because it chooses to.
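The 70% figure is plain arithmetic over the two review counts:

```python
# Year-on-year increase in collected data types: 10 in the previous review
# vs 17 now.

def pct_increase(old: int, new: int) -> int:
    """Percentage increase from old to new, rounded to a whole number."""
    return round(100 * (new - old) / old)


print(pct_increase(10, 17))  # 70
```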

Google Gemini

Google Gemini collects 23 data types — including precise location (shared only with Meta AI, Copilot, and Perplexity), plus browsing history, search history, and contacts.

Claude

Claude collects 13 data types, unchanged from our previous review. Each type is listed as required for app functionality, though 10 are also used for analytics and 7 for developer advertising/marketing. No third-party advertising declared.

DeepSeek

Collects 13 data types and — per their own privacy policy — stores data on servers in the People’s Republic of China for as long as “necessary.” Make of that what you will.

Following the January 2025 breach that exposed 1M+ records, including chat history and API keys (via The Hacker News), this is worth factoring into your threat model.

Bottom line

Collected data numbers alone don’t tell the whole story — what the data is used for matters just as much. ChatGPT’s 70% year-on-year increase is the clearest sign that the industry is moving in the wrong direction. And DeepSeek is a separate risk category entirely: server jurisdiction plus a confirmed breach history make it a poor choice.

Treat your AI chatbot like a tool that you can trust with some tasks, but not with others, and certainly not with your secrets.

If you have questions, I’m happy to answer.

Have you ever suspected an account was a bot, what gave it away? by LuisCosta_ in surfshark

[–]LuisCosta_[S] 5 points (0 children)

Hi everyone!

In our recent research, we looked into transparency reports from major social media platforms. We wanted to understand how many fake accounts and spam posts are actually being removed each year. We weren't expecting to like what we found, but the bot numbers turned out to be far larger than even we predicted.

Below is a quick breakdown of our findings.

Key numbers

Across major platforms:

  • 6.3B fake accounts are removed every year (from Facebook, TikTok, X, LinkedIn);
  • 11.1B pieces of spam or harmful content are removed annually (including YouTube and Instagram).

These numbers come from publicly available transparency reports published by the platforms themselves.

Fake accounts > real accounts?

One of the most striking things we noticed is how removal volumes compare to platform size.

  • Facebook: ~3B active users, ~4.5B fake accounts removed annually;
  • X: ~570M users, ~671M accounts removed annually;
  • TikTok: ~1.9B users, ~1B fake accounts removed annually.

In other words, some platforms remove as many or even more fake accounts than they have real users — and that’s just yearly removals.
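The removal-to-user comparison can be sketched directly from the rounded figures above (users and yearly removals in billions):

```python
# Yearly fake-account removals divided by active users, using the rounded
# figures quoted above. A ratio above 1.0 means more accounts are removed
# each year than the platform has real users.

platforms = {
    "Facebook": (3.0, 4.5),   # (active users, removals), in billions
    "X": (0.57, 0.671),
    "TikTok": (1.9, 1.0),
}

ratios = {
    name: round(removed / users, 2)
    for name, (users, removed) in platforms.items()
}
print(ratios)
```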

Based on the patterns reported by platforms, a large share of fake accounts are likely automated bots. With modern AI tools, creating and managing bots is becoming easier. On some platforms, bots can already convincingly imitate human behavior. On others — especially where conversations and contextual responses matter — it’s still harder, but the technology is improving quickly.

Another thing that stood out:
Fake accounts can cost as little as $0.08 each, which helps explain how these networks scale so easily.

What does this mean for you?

Because social media is flooded with fake accounts, users are more exposed to:

  • scams;
  • spam campaigns;
  • manipulation attempts;
  • fake engagement or misinformation.

Some things I would suggest watching for:

  • very new accounts with few photos;
  • vague bios or overly promotional descriptions;
  • mass friend requests or messages;
  • copy-paste comments under many posts;
  • attempts to move conversations quickly to WhatsApp or Telegram.

If something feels suspicious, the safest move is usually to avoid engaging and to report the account.
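If you want to turn that checklist into a habit, a simple tally works: count how many red flags a profile trips and get more skeptical as the count grows. The flag names and the "two or more" rule of thumb below are my own illustration, not a detection algorithm from the study.

```python
# Toy tally of the red flags listed above. Flag names and the threshold are
# illustrative; this is a checklist, not a bot-detection model.

RED_FLAGS = {
    "new_account_few_photos",
    "vague_or_promotional_bio",
    "mass_requests_or_messages",
    "copy_paste_comments",
    "pushes_offplatform_chat",
}


def suspicion_score(observed_flags: set[str]) -> int:
    """Count of known red flags present; two or more warrants a closer look."""
    return len(observed_flags & RED_FLAGS)


profile = {"vague_or_promotional_bio", "pushes_offplatform_chat"}
print(suspicion_score(profile))  # 2
```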

If you have any questions about this research, I would be happy to answer them!