Abbey Smith – Week 3

The readings from this week were extremely interesting to me. They were longer than readings I've had in past classes, but I felt hooked all the way through. I've never supported AI, and I had only a surface-level knowledge of the unethical practices surrounding it. It was nice to read these studies and truly understand the mechanics of AI and how it influences the world.

I found it very insightful in the Crawford reading to think of AI as a map, or atlas, and to consider it through different kinds of maps, for example, a topographic map that measures the spectrum of AI. It's important to go very in-depth with the topic while also looking at the surface. The reading also confirmed my opinion that AI is purely corporate and doesn't seem to benefit anyone except the elites of our society. The concept of AI was perfectly captured by the metaphor at the beginning of the article, the story of Hans the Horse. I found this analogy to be very entertaining as well as very insightful. Again, I feel like a lot of the things in this reading were "common knowledge," but they don't feel impactful until they're all laid out for me.

Of the two readings this week, I think the Hao reading was my favorite. I enjoyed the writing style and the topic a bit more. This article described more of the morals of AI rather than the mechanics, which is something I'm drawn to. A few things stood out to me: companies change their research methods to support the findings they want, and AI can be discriminatory toward people of other races and other oppressed groups. It's a reminder that AI is human-made, so it cannot avoid mistakes or faults. So, in my opinion, it really can't be trusted, and it isn't "intelligence." My favorite part of the reading was the mention of the term anthropomorphizing. In this context, it meant that artificial intelligence was only given the word "intelligence" as a marketing tool, not because it was actually intelligent. It appealed to the human mind by applying human traits to something inanimate. This is concerning, and it further proves my dislike of AI. Finally, one of the final lines of the reading describes AI as a "self-fulfilling prophecy." I think this is how I've always thought of AI but could never put it into words. I appreciated this article and how it articulated thoughts I already had.

AI is something that has concerned me for some time, and I think these readings have opened my eyes to it even more. It's something everyone should be aware of, and everyone should understand the mechanics and morals of AI.



Comments

2 responses to “Abbey Smith – Week 3”

  1. davidninja

Hans the Horse was a good opening for the article. I had read about the horse who could do math and how it got its answers by reacting to its audience, but it is interesting to read how artificial intelligence can work similarly: it provides answers for us and feeds us information based on what it already knows. It is kind of funny that the name "artificial intelligence" was chosen just to get recognition. After all these years of changing technology, like ELIZA, changing the name is really what got it the attention.

  2. LKSOC1004

I like that you brought up how "intelligence" was used as a mere marketing tool. I think that point is a lot deeper than it initially seems because it demonstrates that humans, at least in some capacity, tend to accept things confirmed by appearances as opposed to rigorous substantiation. We have the capacity to investigate things to a level that other beings simply can't fathom, yet we sometimes settle for a quick and palatable appearance and forego deeper investigation. Using the word "intelligence," combined with people's general lack of knowledge about how it actually functions, creates a believable appearance of thought or consideration. What does this mean for how we approach other subjects? And what does this mean for how we approach each other? AI has clearly exposed a major epistemological issue within the general public consciousness that we have to find a way around. I don't think that we can always ruthlessly investigate everything; sometimes, the best we can do is approximate. For example, we can't quite "know" that others are conscious, so we have to assume from their behavior and our interactions with them that they are. AI is different because we have the ability to overcome such a misconception. However, we still have to reconsider our relationship to accepting things as they seem while lacking meaningful context.
