nerdculture.de is one of the many independent Mastodon servers you can use to participate in the fediverse.
Be excellent to each other, live humanism, no Nazis, no hate speech. Not only for nerds, but the domain is somewhat cool. ;) No bots in general. Languages: DE, EN, FR, NL, ES, IT

Server stats: 1.2K active users

#algorithmicbias

0 posts · 0 participants · 0 posts today
axleyjc:
Oh, hell no Atlassian!

#ai #JobHunt #algorithmicbias
Fedizen ⁂ Fediverse News:
#DemocraticPoliticians Should Leave #ElonMusk’s #XTwitter.

Undermining Democratic Discourse 🤬 Since #Musk’s takeover, #Twitter has become a platform for #misinformation, #hatespeech, and #algorithmicbias, threatening democratic values.

Amplification of Extremism 📢 By removing #contentmoderation safeguards, #X allows #farright voices and #conspiracymyths to spread unchecked, distorting public debate. (1/3)
Miguel Afonso Caetano:
"After the entry into force of the Artificial Intelligence (AI) Act in August 2024, an open question is its interplay with the General Data Protection Regulation (GDPR). The AI Act aims to promote human-centric, trustworthy and sustainable AI, while respecting individuals' fundamental rights and freedoms, including their right to the protection of personal data. One of the AI Act's main objectives is to mitigate discrimination and bias in the development, deployment and use of 'high-risk AI systems'. To achieve this, the act allows 'special categories of personal data' to be processed, based on a set of conditions (e.g. privacy-preserving measures) designed to identify and to avoid discrimination that might occur when using such new technology. The GDPR, however, seems more restrictive in that respect. The legal uncertainty this creates might need to be addressed through legislative reform or further guidance."

https://www.europarl.europa.eu/thinktank/en/document/EPRS_ATA(2025)769509

#EU #AI #AIAct #GDPR #DataProtection #AlgorithmicDiscrimination #AlgorithmicBias #Privacy
ResearchBuzz: Firehose:
The Conversation: Unrest in Bangladesh is revealing the bias at the heart of Google’s search engine. “…while Google’s search results are shaped by ostensibly neutral rules and processes, research has shown these algorithms often produce biased results. This problem of algorithmic bias is again being highlighted by recent escalating tensions between India and Bangladesh and cases of […]

https://rbfirehose.com/2025/02/17/the-conversation-unrest-in-bangladesh-is-revealing-the-bias-at-the-heart-of-googles-search-engine/
PUPUWEB Blog:
As JD Vance criticizes EU's AI regulation, 12+ US states are considering algorithmic discrimination bills strikingly similar to the EU's AI Act.

#AIRegulation #AlgorithmicBias #TechPolicy #JDVance #USStates #AIAct #Discrimination #GovTech #ArtificialIntelligence
Alexia Gaudeul:
The comprehensive 110-page study, "The Impact of Human Oversight on Discrimination in AI-Supported Decision-Making," is now available.

https://data.europa.eu/doi/10.2760/0189570

This expanded report delves deeper into how human oversight can mitigate biases in AI systems, building upon the initial findings presented at ECAI 2024.

https://ebooks.iospress.nl/doi/10.3233/FAIA240598

#AI #MachineLearning #AlgorithmicBias #EthicalAI
Miguel Afonso Caetano:
"In October 2021, we sent a freedom-of-information request to the Social Insurance Agency attempting to find out more. It immediately rejected our request. Over the next three years, we exchanged hundreds of emails and sent dozens of freedom-of-information requests, nearly all of which were rejected. We went to court, twice, and spoke to half a dozen public authorities.

Lighthouse Reports and Svenska Dagbladet obtained an unpublished dataset containing thousands of applicants to Sweden’s temporary child support scheme, which supports parents taking care of sick children. Each of them had been flagged as suspicious by a predictive algorithm deployed by the Social Insurance Agency. Analysis of the dataset revealed that the agency’s fraud prediction algorithm discriminated against women, migrants, low-income earners and people without a university education.

Months of reporting — including conversations with confidential sources — demonstrate how the agency has deployed these systems without scrutiny despite objections from regulatory authorities and even its own data protection officer."

https://www.lighthousereports.com/investigation/swedens-suspicion-machine/?utm_source=pocket_shared

#Sweden #SocialInsurance #ChildSupport #Algorithms #AlgorithmicDiscrimination #AlgorithmicBias
Miguel Afonso Caetano:
"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology."

https://theconversation.com/ai-harm-is-often-behind-the-scenes-and-builds-over-time-a-legal-scholar-explains-how-the-law-can-adapt-to-respond-240080

#AI #GenerativeAI #AlgorithmicBias #AIRegulation
Stephane Bilodeau:
🚨 NEW study from Dr Graham & Dr Andrejevic of @qutdmrc, with eye-opening 👀 findings!
The computational analysis of engagement found that X's algorithm was changed in July 2024 to boost Republican-leaning accounts & Elon Musk's account during the US election.
Elon Musk's Engagement 🚀 They found a significant boost in Musk's view, retweet, and like counts around July 13th, 2024. This coincides with his Trump endorsement! 🤔

#AlgorithmicBias #USElection2024 #Twitter #X
eprints.qut.edu.au/253211/
Miguel Afonso Caetano:
"This technical report presents findings from a two-phase analysis investigating potential algorithmic bias in engagement metrics on X (formerly Twitter) by examining Elon Musk’s account against a group of prominent users and subsequently comparing Republican-leaning versus Democrat-leaning accounts. The analysis reveals a structural engagement shift around mid-July 2024, suggesting platform-level changes that influenced engagement metrics for all accounts under examination. The date at which the structural break (spike) in engagement occurs coincides with Elon Musk’s formal endorsement of Donald Trump on 13th July 2024.

In Phase One, focused on Elon Musk’s account, the analysis identified a marked differential uplift across all engagement metrics (view counts, retweet counts, and favourite counts) following the detected change point. Musk’s account not only started with a higher baseline compared to the other accounts in the analysis but also received a significant additional boost post-change, indicating a potential algorithmic adjustment that preferentially enhanced visibility and interaction for Musk’s posts.

In Phase Two, comparing Republican-leaning and Democrat-leaning accounts, we again observed an engagement shift around the same date, affecting all metrics. However, only view counts showed evidence of a group-specific boost, with Republican-leaning accounts exhibiting a significant post-change increase relative to Democrat-leaning accounts. This finding suggests a possible recommendation bias favouring Republican content in terms of visibility, potentially via recommendation mechanisms such as the "For You" feed. Conversely, retweet and favourite counts did not display the same group-specific boost, indicating a more balanced distribution of engagement across political alignments."

https://eprints.qut.edu.au/253211/

#USA #Trump #PresidentialElections #SocialMedia #Musk #Twitter #AlgorithmicBias #Algorithms #AlgorithmicRecommendation
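The method behind these findings is structural-break (change point) detection on engagement time series. A minimal sketch of that style of analysis, on synthetic data and using the ruptures library; the PELT search, penalty value, and all numbers are illustrative assumptions, not the report's actual code or data:

```python
# Sketch: detect a structural break in a daily engagement series.
# `views` is a synthetic 1-D array of daily view counts for one account;
# the PELT search and penalty are illustrative choices, not the report's
# actual methodology.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
views = np.concatenate([
    rng.normal(1_000, 100, 120),   # pre-change regime
    rng.normal(1_600, 100, 80),    # post-change regime (boosted)
])

# PELT searches for the segmentation that best fits the series.
algo = rpt.Pelt(model="rbf").fit(views)
breakpoints = algo.predict(pen=10)  # indices where segments end

print(breakpoints)  # e.g. [120, 200] -> break detected near day 120
```

Comparing pre- and post-break means per account group is then what separates a platform-wide shift (all accounts jump) from a group-specific boost (one side jumps more).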
EDRi:
Commissioner-designate Virkkunen falls into the trope of claiming that digitalisation will provide a solution to all problems, from the #TwinTransition for the #climatecrisis to improving public services and healthcare.

But what about its exclusionary effects, #AlgorithmicBias and #discrimination? 🤔
Miguel Afonso Caetano:
"Some of the starkest examples looked at how Google treats certain health questions. Google often pulls information from the web and shows it at the top of results to provide a quick answer, which it calls a Featured Snippet. Presch searched for "link between coffee and hypertension". The Featured Snippet quoted an article from the Mayo Clinic, highlighting the words "Caffeine may cause a short, but dramatic increase in your blood pressure." But when she looked up "no link between coffee and hypertension", the Featured Snippet cited a contradictory line from the very same Mayo Clinic article: "Caffeine doesn't have a long-term effect on blood pressure and is not linked with a higher risk of high blood pressure".

The same thing happened when Presch searched for "is ADHD caused by sugar" and "ADHD not caused by sugar". Google pulled up Featured Snippets that support both sides of the question, again taken from the same article. (In reality, there's little evidence that sugar affects ADHD symptoms, and it certainly doesn't cause the disorder.)"

https://www.bbc.com/future/article/20241031-how-google-tells-you-what-you-want-to-hear

#Google #Search #SearchEngines #PostModernism #Rhetorics #AlgorithmicBias
Miguel Afonso Caetano:
#EU #ContentModeration #Copyright #Algorithms #AlgorithmicBias: "This chapter offers a reflection on the topic of content moderation and bias mitigation measures in copyright law. It explores the possible links between conditional data access regimes and content moderation performed through data-intensive technologies such as fingerprinting and machine learning algorithms. In recent years, various pressing questions surrounding automated decision-making and their legal implications materialised. In European Union (EU) law, answers were provided through different regulatory interventions often based on specific legal categories, rights, and foundations contributing to the increasing complexity of interacting frameworks. Within this broader background, the chapter discusses whether current EU copyright rules may have the effect of favouring what we call the propagation of bias present in input data to the output algorithmic tools employed for content moderation. The chapter shows that a reduced availability and transparency of training data often leads to negative effects on access, verification and replication of results. These are ideal conditions for the development of bias and other types of systematic errors to the detriment of users' rights. The chapter discusses a number of options that could be employed to mitigate this undesirable effect and contextually preserve the many fundamental rights at stake."

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4913758
Harald Klinke:
New research highlights why fairness in AI can't be fully automated. Key points:
- The EU's non-discrimination laws rely on context and judicial interpretation, which are not easily automated.
- Algorithmic bias differs from human bias, lacking clear signals.
- The proposed "Conditional Demographic Disparity" metric aligns with EU standards for assessing AI fairness (see the sketch below).
https://arxiv.org/abs/2005.05906
#AI #AlgorithmicBias #EUlaw
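For a sense of what such a metric involves, here is a minimal sketch of a conditional disparity computation in the spirit of the paper's Conditional Demographic Disparity: within each stratum of a legitimate conditioning attribute (e.g. qualification level), take the protected group's share of negative decisions minus its share of positive decisions, then average across strata weighted by stratum size. The column names, toy data, and aggregation choice are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch of a conditional disparity measure in the spirit of
# Conditional Demographic Disparity (arXiv:2005.05906).
# Toy data and the weighted-average aggregation are illustrative
# assumptions, not the paper's reference implementation.
import pandas as pd

df = pd.DataFrame({
    "protected": [1, 1, 0, 0, 1, 0, 1, 0, 0, 1],  # protected-group membership
    "stratum":   ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"],  # conditioning attribute
    "rejected":  [1, 0, 0, 1, 1, 0, 1, 0, 0, 1],  # negative decision
})

def disparity(group: pd.DataFrame) -> float:
    """Protected share among rejected minus protected share among accepted."""
    rejected = group[group["rejected"] == 1]
    accepted = group[group["rejected"] == 0]
    if rejected.empty or accepted.empty:
        return 0.0
    return rejected["protected"].mean() - accepted["protected"].mean()

# Disparity within each stratum, then a size-weighted average across strata.
per_stratum = {name: disparity(g) for name, g in df.groupby("stratum")}
weights = df["stratum"].value_counts(normalize=True)
cdd = sum(per_stratum[s] * weights[s] for s in per_stratum)
print(per_stratum, round(cdd, 3))
```

A per-stratum value near zero suggests any raw disparity is explained by the conditioning attribute; persistently nonzero strata are where a closer legal assessment would focus.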
Netopia EU:
How Tech Disrupted State Services
https://netopia.eu/how-tech-disrupted-state-services/
🎯 An excellent review by Ralf Grötker covering:
1. Backend vs. frontend focus
2. Control of backend infrastructure
3. Impact on welfare states
4. Digital sovereignty and security
5. Regulation gaps
6. Commercial vs. public ownership

#algorithmicbias #backend #underseacable #Universalaccess #InternetExchangePoints #DigitalSovereignty
Miguel Afonso Caetano:
#AI #GenerativeAI #Google #Gemini #AlgorithmicBias: "Fixing stochastic systems is trickier than it looks. Drawing up guardrails for AI models is the same, and they can be subverted, unless you revert to brute-force blocking (Google has previously “fixed” image recognition software that would identify Black people as gorillas by preventing the software from recognizing any actual gorillas). Then it isn’t a stochastic system, which means that the thing that makes generative AI unique is gone.

The whole brouhaha raises an interesting question, says Chowdhury. “It is really difficult to define whether or not there is a correct answer to what images should be generated,” she says. “Relying on historical accuracy may result in the reinforcement of the exclusionary status quo. However, it could run the risk of being simply factually incorrect.”"

https://www.fastcompany.com/91034044/googles-gemini-ai-was-mocked-for-its-revisionist-history-but-it-still-highlights-a-real-problem
Miguel Afonso Caetano:
#Dating #Algorithms #AlgorithmicBias: "This outdated superficial matching, based on physical similarity, may work for some, but it misses the mark for many daters who are seeking to connect with others around shared values such as approaches to health and safety during a pandemic or alignment on climate change. Why? Perhaps the online dating industry has read the culture so well that they know our secret. We purport to be liberally minded daters who prioritize our values above all else. Yet, the hushed taboo of sexual racism, defined as personal racialized reasoning in sexual, intimate, and/or romantic partner choice or interest, connotes a set of beliefs, practices, and behaviors that provide commentary on what is considered socially acceptable desirability. Sexual racism presents a barrier to meaningful connections when we can’t see past stereotypes about groups of people.

If we think of the dating industry as a mirror of social truth, quietly reflecting sexual racism, online dating companies’ outdated approach to a socially stratified society is unsurprising. The ideas which shape and drive online dating culture, and the tech industry at large, come from a society that routinely fails to deal with social inequity at both systemic and individual levels."

https://time.com/6694129/dating-app-inequality-essay/
Miguel Afonso Caetano:
#AI #Recruiting #AlgorithmicBias #AlgorithmicDiscrimination: "Body-language analysis. Vocal assessments. Gamified tests. CV scanners. These are some of the tools companies use to screen candidates with artificial intelligence recruiting software. Job applicants face these machine prompts – and AI decides whether they are a good match or fall short.

Businesses are increasingly relying on them. A late-2023 IBM survey of more than 8,500 global IT professionals showed 42% of companies were using AI screening "to improve recruiting and human resources". Another 40% of respondents were considering integrating the technology.

Many leaders across the corporate world hoped AI recruiting tech would end biases in the hiring process. Yet in some cases, the opposite is happening. Some experts say these tools are inaccurately screening some of the most qualified job applicants – and concerns are growing the software may be excising the best candidates.

"We haven't seen a whole lot of evidence that there's no bias here… or that the tool picks out the most qualified candidates," says Hilke Schellmann, US-based author of The Algorithm: How AI Can Hijack Your Career and Steal Your Future, and an assistant professor of journalism at New York University. She believes the biggest risk such software poses to jobs is not machines taking workers' positions, as is often feared – but rather preventing them from getting a role at all."

https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination
Miguel Afonso Caetano:
#USA #AI #Algorithms #AlgorithmicBias #AlgorithmicDiscrimination: "“AI is just a model that is trained on historical data,” said Naeem Siddiqi, senior advisor at SAS, a global AI and data company, where he advises banks on credit risk.

That’s fueled by the United States’ long history of discriminatory practices in banking towards communities of colour.

“If you take biased data, all AI or any model will do is essentially repeat what you fed it,” Siddiqi said.

“The system is designed to make as many decisions as possible with as little bias and human judgment as possible to make it an objective decision. This is the irony of the situation… of course, there are some that fall through the cracks,” Siddiqi added.

It’s not just on the basis of race. Companies like Apple and Goldman Sachs have even been accused of systemically granting lower credit limits to women over men.

These concerns are generational as well. Siddiqi says such denials also overwhelmingly limit social mobility amongst younger generations, like younger millennials (those born between 1981 and 1996) and Gen Z (those born between 1997 and 2012), across all demographic groups.

That’s because the standard marker of strong financial health – including credit cards, homes and cars – when assessing someone’s financial responsibility is becoming less and less relevant. Only about half of Gen Z have credit cards. That’s a decline from all generations prior."

https://www.aljazeera.com/economy/2024/2/12/as-corporate-america-pivots-to-ai-consumers-rejected-for-loans-jobs