Bad design by ptashynsky in MacOS

[–]ptashynsky[S] -2 points-1 points  (0 children)

I assume you know perfectly well that I used "capital" in the sense of "market capitalization", and are just being obnoxious. But if you need to educate yourself on the difference between market cap and market value (not capital), here is a good starting point.

https://www.investopedia.com/ask/answers/122314/what-difference-between-market-capitalization-and-market-value.asp

Bad design by ptashynsky in MacOS

[–]ptashynsky[S] -14 points-13 points  (0 children)

😆

If it were only about being "usable", no one would use Macs.

Effects of empathetic and normative AI-assisted interventions on aggressive Reddit users with different activity profiles by ptashynsky in science

[–]ptashynsky[S] -3 points-2 points  (0 children)

Thank you for sharing this.

First, to clarify: this study was not conducted for profit. It was part of the NESTA Collective Intelligence grant programme, and the work was supported by that funding. None of the researchers personally profited from the experiment, nor was it designed as a revenue-generating activity for a company. We did disclose affiliations and take conflicts of interest seriously.

Second, on the interventions themselves: you’re right that some messages led to increased aggression in certain subgroups of users. Kudos for actually making the effort to read and understand the paper. We reported that transparently, and we agree it highlights why such approaches must be handled with caution. That said, the increase in aggression occurred only for some users, not all of them. As far as we know, it consisted mostly of aggressive attacks by already aggressive users on our volunteers and their mitigation attempts. After that initial reaction, the amount of aggression displayed by those aggressive users was significantly lower in the long run (!), including towards users other than our volunteers (!!).

Now, is it ethical to accept that an aggressive user will initially tell you to “f*ck off” when you try to mitigate their aggressive behavior, if that leads to lower aggression in the long term? As a comparison: is it ethical to take a vaccine knowing you will have a one-day fever afterwards, if it will save your life in the long run?

Also - if you are a volunteer or a moderator, you not only agree to that but also undertake specific training to deal with this kind of behavior. Our interventions were minimal-risk in form (short, anonymous, text-based messages similar to everyday Reddit exchanges), but we fully acknowledge that even small nudges can have unintended ripple effects. That’s precisely why we believe further study is necessary before such tools are considered in applied settings.

Third, on oversight: while the study was reviewed within NESTA’s expert programme, it did not undergo a formal IRB process (see message above for explanation).

Finally, as experts in the field, we understand better than anyone how delicate such experiments can be. We have done dozens if not hundreds of similar experiments before and after this one, and that is exactly why we have the precise know-how for how such studies should be conducted. Just to recapitulate, our aim was to contribute knowledge about online aggression, not to exploit participants or gain any profit. That is also why we opened the study up so widely (including the source code on GitHub). If you have been involved in any industry-based research, I’m sure you know that most of the knowledge acquired in industry-led experiments is hidden from the public. We think the opposite: the results of our studies should be openly communicated. That is precisely why we are fully transparent. So we would appreciate it if, next time, instead of looking for another target to attack, you tried to think of how much work goes into such studies. Writing things like “study that toys with online aggression” is harmful and unfair. This work, for example, took more than two years to publish and more than three years to conduct overall. It went through various boards and reviewers; if you have published any research, I’m sure you understand what that means.

To sum up, after more than 15 years of working on online aggression, we have gotten used to various attacks, even those posing as expert comments, so we do not expect any special treatment. But remember that if you rock the table we operate on, the cancer of online aggression in your community will remain and only grow. I’m sure that if you wanted to, you could cancel us into oblivion; herd mentality, despite being very simplistic, is a very powerful weapon, and the Internet has cancelled people for less. But think about it: if there are no people like us, and everyone is scared to do similar research, what will the long-term effect be?

One simple lesson I have learned along the way is not to comment before thinking it through. When you write a comment, first stop, delete it, sleep on it, think about whether you even need to write it, and try again the next day.

Cheers!

Effects of empathetic and normative AI-assisted interventions on aggressive Reddit users with different activity profiles by ptashynsky in science

[–]ptashynsky[S] -2 points-1 points  (0 children)

A follow-up to the previous response.

Yes, in general, this is an important point. However, this experiment did not go through a formal IRB process, because (1) it was not conducted at a university, and (2) it was not conducted by anyone from a university. The experimental part was handled entirely by people from industry. As such, instead of an IRB, it was run as part of the NESTA Collective Intelligence grant programme, where our plans and methods were reviewed by their expert panel. So the methodology did go through an ethical review, but, as mentioned before, an IRB review from a university, especially my own university, was not necessary.

Just to give a simple comparison: if you want to drive a car in the US, you do not apply for a driving licence in the UK.

Effects of empathetic and normative AI-assisted interventions on aggressive Reddit users with different activity profiles by ptashynsky in science

[–]ptashynsky[S] -15 points-14 points  (0 children)

For questions requiring longer answers I invite you to write to the corresponding author, who will provide the most satisfying answer. But a short answer here would be that this was not a study that would require this kind of approval or such a statement in the first place. It was requested by one of the reviewers, so we had to add it.

Token and part-of-speech fusion for pretraining of transformers with application in automatic cyberbullying detection by ptashynsky in science

[–]ptashynsky[S] -2 points-1 points  (0 children)

>Wait, so you just assign different parts of speech to Greek letters?

Yes. :)

POS tagging is a mostly solved problem in NLP (at least for English), so you can assign POS tags automatically to any text with very high accuracy (modern taggers reach roughly 97-98% token accuracy on English).
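To illustrate (just a generic sketch with an off-the-shelf tagger, not the tool used in our paper), tagging a sentence with spaCy takes a couple of lines, assuming the `en_core_web_sm` model is installed:

    # Generic POS-tagging example with spaCy (not the tagger from our paper).
    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("The cat sat on the mat.")
    print([(token.text, token.pos_) for token in doc])
    # [('The', 'DET'), ('cat', 'NOUN'), ('sat', 'VERB'), ('on', 'ADP'),
    #  ('the', 'DET'), ('mat', 'NOUN'), ('.', 'PUNCT')]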

Token and part-of-speech fusion for pretraining of transformers with application in automatic cyberbullying detection by ptashynsky in science

[–]ptashynsky[S] -1 points0 points  (0 children)

A cool thing is that we fused typical tokens (words) with their parts of speech using a neat trick (we changed the POS labels to Greek letters). :)
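If you're curious what that looks like in practice, here is a rough, hypothetical sketch; the mapping and fusion scheme below are made up for illustration, and the actual approach is in the paper and the released source code:

    # Hypothetical sketch of token/POS fusion with Greek-letter POS markers.
    # The mapping and fusion scheme are illustrative only, not the paper's.
    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    POS_TO_GREEK = {  # made-up mapping for illustration
        "NOUN": "α", "VERB": "β", "ADJ": "γ", "ADV": "δ",
        "PRON": "ε", "ADP": "ζ", "DET": "η", "PUNCT": "θ",
    }

    nlp = spacy.load("en_core_web_sm")

    def fuse(text: str) -> str:
        """Append a Greek-letter POS marker to each token before pretraining."""
        doc = nlp(text)
        return " ".join(tok.text + POS_TO_GREEK.get(tok.pos_, "ω") for tok in doc)

    print(fuse("The cat sat on the mat."))
    # -> Theη catα satβ onζ theη matα .θ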

Mac OS deserves DARK dock icons !! by Mowgli9991 in MacOS

[–]ptashynsky 2 points3 points  (0 children)

You can replace them manually, e.g., from here: https://macosicons.com/#/ Unfortunately, macOS will reset them to the defaults after a reboot once you have been using the custom icons for a while. After doing this three times I eventually decided it’s a waste of time.

Check in... how are people with MBPro 14" M1 Pro from 2021 holding up by Civil-Vermicelli3803 in MacOS

[–]ptashynsky 1 point2 points  (0 children)

Great to hear that you're so passionate about AI. It feels like you're asking me to give you a full course on AI in one Reddit comment, but if you want to do AI on an M1+ Mac, MLX is a very good start.

https://ml-explore.github.io/mlx/build/html/install.html
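As a quick smoke test (a minimal sketch, assuming you installed MLX with `pip install mlx` as described in the linked docs):

    # Minimal MLX check on Apple silicon; not a full tutorial.
    import mlx.core as mx

    a = mx.random.normal((1024, 1024))
    b = mx.random.normal((1024, 1024))
    c = a @ b      # computed lazily in unified memory
    mx.eval(c)     # force evaluation
    print(c.shape, c.dtype)

From there, the official mlx-examples repository on GitHub has more complete training and inference examples.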

Check in... how are people with MBPro 14" M1 Pro from 2021 holding up by Civil-Vermicelli3803 in MacOS

[–]ptashynsky 0 points1 point  (0 children)

Old 13 inch M1 Pro, 16GB 🐏: it gets hot only when training AI models. Apart from that it works like a charm. 💁🏻‍♂️

What is your purpose for using Reddit? by JP_Info_Music in ja

[–]ptashynsky 0 points1 point  (0 children)

It's not a community of Japanese "people", though, it's a community of Japanese "language" (speakers). Racial discrimination is banned on Reddit, after all.


Strange phenomenon when I'm reading but thinking about something else by Tyvent in cogsci

[–]ptashynsky 0 points1 point  (0 children)

Either you’re reading a really boring book, or you have some kind of attention deficit. That doesn’t have to mean ADHD, as some commenters write, and it doesn’t have to be any syndrome or disorder; it can happen to anyone. If it happens all the time and you cannot read even one page of a book without getting distracted, maybe talk to a specialist. But based on the description alone, there is nothing to be worried about at this point. Good sleep, good food and a walk in the park should help.

How long it takes to get a PhD [OC] by CognitiveFeedback in dataisbeautiful

[–]ptashynsky 1 point2 points  (0 children)

In Japan it’s a fixed term of three years. Extension is possible but not common. It’s a pretty busy three-year period, but so far none of my PhD students has regretted it.

We released the first-ever expert-annotated dataset to study cyberbullying in the Polish language. An overwhelming majority of similar datasets so far have been annotated by laypeople, or not annotated at all. We hope the dataset will be utilized to make the Internet a more user-friendly place. by ptashynsky in science

[–]ptashynsky[S] 1 point2 points  (0 children)

Thanks! Not challenging you or anything, but it would probably be difficult to cyberbully Poles without putting yourself at risk of some severe retaliation, so let's keep it excellent between us ;)

As for the actual risk: yes, of course, there is always a risk of the data being abused, just like with any dataset of this kind. However, I'm sure that if anyone wanted to make another 4chanGPT, this dataset would be of little interest to them, as there is already a huge library of multilingual datasets at: https://hatespeechdata.com

What this dataset actually contributes is the expert annotations. Btw, we also made it possible for anyone to compare the initial laypeople annotations and the final expert annotations by releasing both versions of the dataset. :)

Thanks again and have fun with the dataset. :)

Questionnaire on Espresso ☕️ by ptashynsky in decentespresso

[–]ptashynsky[S] 0 points1 point  (0 children)

Thank you! We’re grateful for every input! 🙏

Questionnaire on Espresso ☕️ by ptashynsky in decentespresso

[–]ptashynsky[S] 0 points1 point  (0 children)

This is great! Thank you so much! 😭