Deep context: my conviction that Sam Harris is an idiot, and that his idea of finding an objective measure of wellbeing is misguided from the outset. Making morality objective is like trying to make aesthetics objective -- it's just a fake way of baking in your own subjective opinions and declaring them objective.
Pull-quote:
The simplest explanation for biased algorithms is that the humans who create them have their own deeply entrenched biases. That means that despite perceptions that algorithms are somehow neutral and uniquely objective, they can often reproduce and amplify existing prejudices.
Headline: A beauty contest was judged by AI and the robots didn't like dark skin
The article also links to a related story:
"To take just one example, judges, police forces and parole officers across the US are now using a computer program to decide whether a criminal defendant is likely to reoffend or not. ... If you’re black, the chances of being judged a potential reoffender are significantly higher than if you’re white. And yet those algorithmic predictions are not borne out by evidence.
...
The big puzzle is how the bias creeps into the algorithm. We might be able to understand how if we could examine it. But most of these algorithms are proprietary and secret, so they are effectively “black boxes” – virtual machines whose workings are opaque. Yet the software inside them was written by human beings, most of whom were probably unaware that their work now has an important moral dimension."