The concept of the real-world consequences of an online program comes up in Kate Crawford’s “Atlas of AI,” which immediately caught my attention because of how rarely it appears in conversations about AI. When someone talks about ChatGPT, they usually aren’t talking about how the development and maintenance of the program damages not only the environment but also the lives of the people who are largely forced to work within these systems to keep them running. These computer systems change not only the online landscape of the Internet but also the physical landscape of the planet.
Another interesting concept in the article was the comparison of AI to an atlas. Like an atlas, AI is attempting to catalog all human behavior and thought in a legible and accessible form. However, when we acknowledge what AI’s “maps” contain, we are faced with the reality that AI is as much a political and social endeavor as it is a technological one. The “maps” of AI are leylines of power, revealing the underlying motivations of people in power who want to maintain that power.
One of the more shocking parts of the article was the way that AI was developed using mugshot photos. In her description of the dataset maintained by NIST (the National Institute of Standards and Technology), Crawford explains how facial recognition technology developed as a system and spread into law enforcement. It is scary to think that within these systems are thousands if not millions of photos of people, taken without their consent, that are potentially being used for profiling. The people within these datasets aren’t being seen as people; they are reduced to sets of facial features and characteristics, a resource to be taken advantage of.
Another rarity when discussing AI is its variability, or rather the variability of the information being fed into AI systems, especially when it comes to language, as Crawford explains. Language is full of rules and exceptions to those rules; there is nothing static about any language. Training these systems to predict the next word in a sequence, such as a sentence or paragraph, is therefore enormously complicated and variable.
Overall, I think that Crawford’s article reveals a lot about the consequences of the development of AI and machine learning. Her points about the effects of AI on the planet and on the people forced to work to keep these systems running are compelling. I thought the most interesting part of the article was the comparison of AI to an atlas; framing our thinking that way was a great choice.
One response to “Crawford’s AI”
Bailey,
Great post! I agree that Crawford’s comparison of AI to an atlas was a good choice on her part; it’s definitely illustrative and helps us understand the broader context of her argument. It is wild that the environmental and social impacts of machine learning are often overlooked or disregarded in discussions. I feel like those are some of the most important parts, since they’re the most tangible and have some of the most direct impacts on us. Good point about language, too; it’s so varied between speakers that I’m surprised AI has been able to get as good a grasp on it as it has. Thanks for your insightful work!