I’ve never trusted Generative AI or AI chatbots, but I understood that they were considered the “future of technology.” They were new and exciting tools, and any argument I could level against them would inevitably get drowned out by their loyal supporters. I think of them as the One Ring from The Lord of the Rings: prolonged access corrupts the mind. You would notice it in small ways at first: creating writing prompts, generating funny images, using it like Google. As time went on, the methods of use expanded. An easy way to bypass creators. A convenient way of finishing an essay. They became less like crutches and more like wheelchairs, bearing the full weight of the work through intricate prompts.
As a student, I noticed the impact of ChatGPT immediately. Tools like this always had a way of being abused. They were an opportunity to bypass the work, a way to leave your thinking to a machine.
This was because ChatGPT and other large language models (LLMs) could generate text to respond to almost any prompt. Did the information have to be accurate? Absolutely not.
The trick with these LLMs was that they looked accurate. Accurate enough to fool a teacher or professor, perhaps even a researcher. So much so that LLMs’ false information has slithered its way into actual published articles. And what happens when people use those sources in their own papers? It’s a never-ending cycle.
The longer LLMs exist, the more our information becomes polluted with half-truths and downright lies. The truth of the matter is that these LLMs don’t care about the truth. They are not intelligent beings with any code of ethics; they learn from information scraped from all corners of the internet. When someone asks an LLM to produce sources, the LLM aims to please. It doesn’t matter if the source isn’t real. As long as it pleases the user, the LLM has done its job.
That raises another question: how will we know what to trust in the future? This chain of misinformation is bound to propagate if no solutions emerge. It will eventually run too deep to verify. That thought terrifies me. It warps the foundation of our knowledge.
There truly is no bright side with AI.