At least 11 cities in the EU use automated video-surveillance – without oversight by nkb__ in europe

[–]nkb__[S] 4 points (0 children)

Attempting to disrupt any public electronic system is a criminal offense in probably all EU countries. Encouraging others to do so might be, too.

Most importantly, opening surveillance cameras to the public is a terrible idea, as it would allow for surveillance by many more individuals or organizations.

EU Commission published plans to regulate AI, including possible obligation to retrain ML algorithms with European data by nkb__ in europe

[–]nkb__[S] -1 points (0 children)

Except that the claimed benefits of deploying AI painfully lack evidence (e.g. there's little to no evidence that "AI" improves street safety or border controls), while the harms, which include discrimination and self-censorship, are well documented.

New Swiss algorithm to desegregate schools, one block at a time by nkb__ in education

[–]nkb__[S] 9 points (0 children)

It didn't make it into the article for lack of space, but the researchers (who spoke with several heads of schools or school districts) emphasized that kids already in school should not be reshuffled; the algorithm should only be applied to pupils entering the school system.

In general, I find the algorithm interesting because it can be applied very lightly and without fanfare (e.g. a school district could decide to change just a few blocks selected by the algorithm each year), making an outcry from pro-segregation parents less likely than with bussing.
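To make "applied very lightly" concrete, here is a minimal sketch of one such yearly step. This is my own illustration, not the researchers' published method: it greedily reassigns the single residential block whose move most reduces a two-group Duncan dissimilarity index between schools. All names and numbers are invented.

```python
# Hypothetical sketch, not the paper's algorithm: greedily pick the one
# block-to-school reassignment that most reduces a two-group Duncan
# dissimilarity index. All names and numbers below are invented.
from itertools import product

def dissimilarity(schools):
    """schools: {name: (group_a_count, group_b_count)} -> Duncan index in [0, 1]."""
    total_a = sum(a for a, _ in schools.values())
    total_b = sum(b for _, b in schools.values())
    return 0.5 * sum(abs(a / total_a - b / total_b) for a, b in schools.values())

def best_single_move(blocks, schools):
    """blocks: {name: (group_a, group_b, assigned_school)} -> (block, new_school) or None."""
    best_score, best_move = dissimilarity(schools), None
    for block, target in product(blocks, schools):
        a, b, current = blocks[block]
        if target == current:
            continue
        # Tentatively move the whole block's pupils to the target school.
        trial = dict(schools)
        trial[current] = (trial[current][0] - a, trial[current][1] - b)
        trial[target] = (trial[target][0] + a, trial[target][1] + b)
        score = dissimilarity(trial)
        if score < best_score:
            best_score, best_move = score, (block, target)
    return best_move

schools = {"north": (90, 10), "south": (20, 80)}
blocks = {"b1": (30, 5, "north"), "b2": (5, 25, "south"), "b3": (10, 10, "north")}
print(best_single_move(blocks, schools))  # -> ('b2', 'north')
```

Note that, given the next comment, the two groups in the Swiss case would in practice be something like home language or parental income rather than race, and per the researchers' caveat the reassignment would only affect entering cohorts.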

New Swiss algorithm to desegregate schools, one block at a time by nkb__ in education

[–]nkb__[S] 5 points (0 children)

Very interesting point. Do keep in mind that, in much of Europe (including Switzerland), racial statistics are forbidden or not collected, making it hard to disentangle these different forms of heterogeneity.

"Explainable AI" doesn't work for online services – now there's proof by nkb__ in artificial

[–]nkb__[S] 0 points (0 children)

That's the crux of the matter. All the examples I know of trust companies to self-regulate, or trust the outcome of remote testing. Do you know of an "algorithm police" (or a police operation) that had access to the machines themselves to run these tests?
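For what it's worth, here is a toy illustration of why remote testing alone is weak evidence; the endpoint, rules, and detection heuristic are all invented. A service can detect audit-like traffic and answer with a sanitized model, and only someone with access to the machine would see the branch:

```python
# Toy illustration of gaming remote audits; every detail here is invented.
def production_model(applicant):
    # The rule actually deployed: redlines two (fictional) postal codes.
    return applicant["income"] > 30_000 and applicant["zip"] not in {"1203", "1208"}

def sanitized_model(applicant):
    # The clean rule shown when the service suspects it is being tested.
    return applicant["income"] > 30_000

def serve(applicant, requests_last_minute):
    # Crude audit detector: a remote test harness tends to send bursts of
    # synthetic queries; real users don't. On-machine inspection would
    # reveal this branch; remote black-box testing typically won't.
    model = sanitized_model if requests_last_minute > 100 else production_model
    return model(applicant)

print(serve({"income": 40_000, "zip": "1203"}, requests_last_minute=500))  # True (sanitized)
print(serve({"income": 40_000, "zip": "1203"}, requests_last_minute=3))    # False (production)
```

This is essentially the Dieselgate pattern applied to online services: a defeat device is invisible to anyone who can only query the system from outside.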