Hello y’all,
Isn’t it funny how people have decided to humanize A.I.? Look at ELIZA, for example, a simple program meant to replicate a therapist. People quickly developed feelings for it despite knowing it was a simple program. This shows how humans feel the need to humanize others, even a program. It’s nothing new or strange, either: we humanize pets, objects, and so on. There is potential for harm, but overall it’s just something we do.
See, it’s important to remember that while we think of A.I. as something intelligent, maybe even one day reaching human intellect, we have to admit that’s extremely unlikely. Generative A.I. is about as intelligent as someone who can only communicate in a foreign language by looking words up in a dictionary. That is to say, an A.I. lacks the understanding and ability to think that people have; instead, it receives data, processes it, and outputs it.
Of course, this means A.I. could potentially be a useful tool if it did not dehumanize people along the way, which it most certainly does.
Collection of Data
Starting with the collection of data, we already run into ethical concerns. One of the first such databases is the NIST database, a collection of mugshots. Mugshots are already a bit of an ethical quandary, but because we assume they are used for justice, we typically do not have a problem with them. In this case, the database is being shared to train machine learning systems, using images that “represent personal histories, structural inequalities, and all the injustices that have accompanied the legacies of policing […]” (Crawford 94).
These images are stripped of their context and then put into a database of more images without context. Tech companies use these databases to grow and grow, with no concern for the people in them, people who never consented to this.
We must also consider how dehumanizing it is to refer to people as data. I am sure you, or someone you know, have photos of your whole life on Facebook, photos that are very personal and real. In the end, everyone is “dehumanized as just more data” (Crawford 94).
This is not to even get into bots that scrape the internet for more data to use. Or the selling of academic papers by publishers. They say that we signed off on the TOS, but is the TOS even readable? Is our data being held hostage to keep us on the platform? Have the terms switched since we first used the site? And regardless, is it ethical to become nothing more than information to train a program?
Usage of A.I.
Then we have to talk about how A.I. is being used. For one, it is fed back into the justice system: the “[Department of Defense Counter Drug Technology Development Program Office] Sponsored the Face Recognition Technology program to develop automatic face recognition for intelligence and law enforcement” (Crawford 104). This means an A.I. could recognize someone’s face and label them a criminal. Plus, the idea of surveillance everywhere just does not sit right with me.
Lastly, for the creatives: one of the things people are pushing A.I. to do is create. Companies are eager to have their writers rework a cruddy script generated by A.I. so they do not have to hire talent. They wish to remove humans from the creative process, boiling creativity down to nothing more than a collection of everything that came before.
This is all to say that, widely, A.I. has been an extremely dehumanizing piece of technology.
Does anyone else have any concerns about the collection and usage of data?
Comments
2 responses to “Ethical Concerns Towards A.I.”
Great article! You made some good points, especially about the dehumanization of people when they’re reduced to data and the new pieces of “work” companies are publishing or putting out that are obviously AI generated. I also liked your questions about companies’ Terms of Service. I don’t know anyone who actually reads those, but now, it could potentially be dangerous not to. It worries me what companies will train off of our data in the future; clearly they have no moral qualms with stealing, so I can’t imagine how much farther they’d be willing to go to make a profit. Again, thanks for your good input!
In regard to the humanization of AI, have you heard of that one program/app that uses AI to let someone talk to any fictional character or celebrity… with the character’s or celebrity’s VOICE?! My friend recently showed it to me, and I was so shocked, because I looked into it, and people were… romantically involved with an AI version of Edward from Twilight… I was shocked to say the least, 1) because that’s crazy and 2) covid me would have loved to mess with that. Your post just made me think of that, and how as humans we humanize everything. I mean, I have a stuffed animal that I am convinced has an extensive background outside of being my stuffed animal… does that make me crazy, or does it mean I have attachment issues??? Probably yes to both of those, but that’s not the point. The point is, I agree with you that as humans our default is to give life to things that don’t have a soul. I think it might be some form of coping, or maybe we are bored, or maybe we all need a little more love in our lives, and hey, it’s easier to be best friends with your pet than to go out and socialize. Anyways, your post had me thinking a lot about the human tendency to give meaning to everything, especially by humanizing objects and things. Great post!