essentialsaltes wrote, 2020-02-17 03:38 pm
You Look Like A Thing And I Love You / The Witchfinder's Sister
You Look Like A Thing And I Love You, by Janelle Shane
The book seems to be a reworking of material Shane has shared on her AI Weirdness blog. On the one hand, it shows; on the other, that makes for light, entertaining reading.
I was a little disappointed that the book's scope is limited to various flavors of machine learning (as opposed to symbolic or strong AI). Of course, the latter is a super hard problem, but it's of infinitely more interest.
But the book does reinforce the idea that machine learning AI is terrible and terrifying. If you set it the task of winning at chess and other games, it does an amazing job, because it's easy to 'reward' the program with victory or points. But when the goal is to teach an AI to, for example, act as a customer service agent, or identify what's in a picture, or drive a car down the street, it's much harder to train one when the victory condition is 'act like a human would act'.
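To make the asymmetry concrete, here's a tiny Python sketch (the function names are mine, not the book's): the game reward is one honest line, while the human-behavior 'reward' has no obvious definition at all.

```python
# A game gives a crisp training signal: you won or you didn't.
def chess_reward(outcome: str) -> float:
    return {"win": 1.0, "draw": 0.0, "loss": -1.0}[outcome]

# "Respond like a helpful human" has no such signal. Anything we
# substitute (click-through, survey scores) is a proxy, and proxies
# get gamed -- which is exactly the failure mode the book catalogs.
def customer_service_reward(transcript: str) -> float:
    raise NotImplementedError("what number means 'acted like a human'?")
```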
So the book is a compilation of more or less hilarious and horrifying failures of AI. The basic idea is to have your AI absorb training data and make a bajillion connections so that it can spit out more of the same. The problem seems to be that we're bad at setting tasks, bad at giving clean data, and bad at making the original judgments the AI is trying to emulate. One of the common examples is text-generating AIs, where you train your AI on recipes or Harry Potter novels and let it make up its own once it gets the hang of it. Another common thing of the sort is autocomplete, or Google's help in finishing your search request, which is how "Why won't my parakeet eat my diarrhea?" became a thing. Once the AI randomly latched onto that as a cromulent phrase, people making Google searches also latched onto it. That's some serious clickbait right there, and there's the problem: the Google AI takes those clicks as 'rewards' confirming that it is very accurately predicting what people were going to ask.
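The generators in the book are neural networks, but the underlying train-on-text, spit-out-more-of-the-same loop can be sketched with something as crude as a word-level Markov chain. Purely illustrative, not what Shane's models actually do:

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Record, for each word, the words that followed it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 15) -> str:
    """Walk the chain, sampling a plausible next word at each step."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # the training data never continued from here
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "stir the butter into the flour then stir the sugar into the pan"
print(generate(train(corpus), "stir"))
# e.g. "stir the sugar into the flour then stir the butter into the pan"
```

It only ever recombines what it was fed, which is why garbage or biased input comes straight back out.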
But mostly they're just odd, like the motto hearts Shane recently posted:

[image: Shane's AI-generated candy-heart mottos]
Other examples look at trying to decipher what's in a picture. For training data, the AI used people-generated descriptions of pictures and then went number crunching away. Now ask yourself, honestly: how many times have you described a picture by saying, "It has zero giraffes in it"? Probably never. But if there were a giraffe in the picture, you'd be likely to mention it. So the AI has seen a few pictures described as containing giraffes, but it has never seen a picture described as having zero giraffes in it. The result is that the AI often declares that there are giraffes in pictures that do not contain giraffes.
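You can see the mechanism with nothing but invented counts. Suppose (numbers entirely made up) giraffes appear in 5% of captioned training photos but only 0.01% of real scenes; a model that matches the training prior will cry "giraffe" about 500 times too often:

```python
# Invented toy numbers: people photograph and caption giraffes far more
# often than they encounter them, and nobody captions "zero giraffes".
training_captions = ["a giraffe"] * 50 + ["a street"] * 950
real_world_scenes = ["giraffe"] * 1 + ["street"] * 9999

train_rate = training_captions.count("a giraffe") / len(training_captions)
world_rate = real_world_scenes.count("giraffe") / len(real_world_scenes)
print(f"giraffe rate in training captions: {train_rate:.2%}")    # 5.00%
print(f"giraffe rate out in the world:     {world_rate:.4%}")    # 0.0100%
print(f"overprediction factor: {train_rate / world_rate:.0f}x")  # 500x
```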
This kind of bias in the sample has real-world consequences as well. An AI trained to make hiring or loan application decisions turns out to be very good at modelling the human-generated data and discriminating against the same kind of people the employers and banks do. Because the AIs are rather opaque black boxes, it's hard to root out such bias; we don't really know what the AI is paying attention to.
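A minimal sketch of how the bias gets baked in, assuming scikit-learn and a completely invented 'historical hiring' dataset: even when the model never sees the protected attribute itself, a correlated proxy (here, a zip-code flag) soaks up the discrimination.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
qualified = rng.integers(0, 2, n)  # actually qualified for the job
zip_flag = rng.integers(0, 2, n)   # proxy correlated with a protected group

# Invented "historical" decisions: qualified people get hired, except
# the zip-flagged group is arbitrarily rejected half the time.
hired = ((qualified == 1) & ~((zip_flag == 1) & (rng.random(n) < 0.5))).astype(int)

X = np.column_stack([qualified, zip_flag])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # strong negative weight on zip_flag: the bias is learned faithfully
```

With two features and a linear model you can at least read the bias off the coefficients; with a deep network over thousands of opaque features, you can't, which is the black-box problem.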
A related example comes from driving. The researchers thought they were teaching the AI to keep the car in the middle of the road and not drive off the sides. Instead, they seem to have taught the AI to keep the green grass at fixed locations on either side of the field of view. When the car went onto an overpass, the green disappeared and the car was flummoxed.
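In miniature (invented scene features, not the actual system), the learned shortcut looks something like this: it works perfectly on every grassy training road and has nothing to fall back on when the cue vanishes.

```python
def grass_policy(scene: dict) -> str:
    """The apparent shortcut: steer so the green bands stay at fixed
    positions in the field of view, ignoring the road itself."""
    if scene["grass_left_x"] is None:    # overpass: no grass in sight
        return "???"                     # the cue is gone; policy flummoxed
    return "steer left" if scene["grass_left_x"] > 0.2 else "steer right"

print(grass_policy({"grass_left_x": 0.3}))   # fine on grassy roads
print(grass_policy({"grass_left_x": None}))  # falls apart on the overpass
```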
A great, amusing read that shines a light on some important issues, as medical and financial data inevitably get churned through AIs like this.
---
The Witchfinder's Sister, by Beth Underdown
I found this a disappointment. Matthew Hopkins was an English witchfinder who had a brief but very nasty career. I was ready to get my hate on, but the Hopkins of this novel doesn't seem to much resemble the original. As the title suggests, the book follows Hopkins' [entirely fictional] sister Alice as she returns to join his household after she is widowed. There are dim family secrets that slowly get winkled out, but the action stays on Alice, so much of the witchfinding occurs offstage. It does provide a slow, excruciating crescendo as our narrator gets ever more closely involved with the witchfinding and the deplorable details emerge.
But the motivations provided by the backstory conflict with what we know of the real Hopkins, and the author toys with witchcraft being real, which seems like a bizarre step. If people really can and do magically murder people, it's worth finding them out. The tragedy of the witch hunts is that it was all bullshit.