Artificial intelligence as a whole is fascinating, if only for the number of layers and facets the topic has. Kate Crawford, in her book Atlas of AI, wants to map out machine learning’s reach in more depth than previous writings on the subject have managed. To do this, she covers topics from intelligent horses to lithium mining, from outsourced overseas labor to stockpiled mugshots.
AI’s quest is to become more and more optimized, but the big question Crawford wants to raise is “what is being optimized, and for whom, and who gets to decide?” Why does AI need to be hyper-efficient? What would that accomplish? Why is it that the people whose data AI is currently trained on don’t get to decide how their information is used?
Crawford argues that AI and machine learning systems are inherently political; the people that built them have biases, and the data the systems are trained on have biases. This was evident to me when Crawford discussed how it was popular among AI developers to train their facial recognition systems on thousands of mugshots. They did so with no regard for the people whose likenesses they were utilizing, and I would be willing to argue that the fact these were mugshots and not professional headshots made a difference in how blithely these photos were treated. Images of people at some of their lowest moments, sometimes visually upset or even injured, were just tossed into AI databases. Add to this the fact that these people’s stories aren’t known–it’s possible some were never even charged or indicted–and it’s evident that, to tech developers, these people weren’t people. They were just points of data.
And it’s not just government-gathered photos being fed into machines, either. Now it’s your photos and my photos, posted to social media sites, that the AI consumes. This kind of data scraping and dehumanizing affects us all. Woody Bledsoe, cited by Crawford, stated, “In the long run, AI is the only science.” With that worldview, tech companies believe they are allowed to do what they’re doing, no matter who objects.
In class, we asked ourselves, “What is intelligence, anyway? Can computers and machines truly be intelligent?” Some offered that intelligence could be measured by IQ tests or by how well someone recognizes patterns. Others thought more abstractly, claiming intelligence had to do with the notion of common sense or rational thought.
Regardless of how you choose to define intelligence, it’s likely incorrect to assume that the human brain operates like a computer, and that we could therefore construct a computer on par with the human brain one day. Crawford argues, “this belief that the mind is like a computer, and vice versa, has ‘infected decades of thinking in the computer and cognitive sciences,’ creating a kind of original sin for the field.”
We saw this disparity between human brains and current AI systems in class, when we tested both ChatGPT and Grammarly AI to see how well each could write a how-to guide for formatting an MLA-style paper in Microsoft Word. The answer: fairly well, but with a few shortcomings each, shortcomings that human minds likely would not produce.
First, ChatGPT included incorrect information, and both systems omitted critical details about correct style standards. Both systems also, at different points, told readers to check the MLA handbook for themselves instead of including the pertinent information. Some directions were unclear, and neither system included information about how to create MLA citations for a Works Cited page, arguably the most difficult part of the MLA style.
If a person were to write such a guide, it’s incredibly unlikely these problems would appear. However, even if AI isn’t human-level, it’s still being deployed across society at a startling rate. It’s therefore important that we understand the biases built into AI systems, and that we’re able to ferret out what trends and patterns AI relies on when it spits out an answer. If we can do this, we can tailor our interactions with AI systems to better achieve what we want, whether that’s information or the downsizing of machine learning’s domain.
Comments
One response to “Assumptions about–and made by–AI”
Your mention of biases in the system took me back to a reading I encountered last year called “The Ghost in the Machine” by Arthur Koestler. In our discussion of that reading, we talked about how the human consciousness behind the machine is the ghost that is actually operating the machine. Your mention of the mugshots reminded me of facial recognition technology that was mostly trained on white people. The people running the AI put their biases in, and those biases became the ghost in the machine. For reasons like this, I do not think AI will ever be able to operate like a human brain. Unless there is a way to map all of physical existence into data, along with a way to process and translate new data (the way the human senses do), AI won’t even be able to have all the data that humans have.