AI and the Future of Trust

I’ve never trusted Generative AI or AI chatbots, but I understood that they were considered the “future of technology.” They were new and exciting tools, and any argument I could level against them would inevitably be drowned out by their loyal supporters. I think of them as the One Ring from Lord of the Rings: prolonged access corrupts the mind. You would notice it in small ways at first: creating writing prompts, generating funny images, using it like Google. As time went on, the methods of use expanded. An easy way to bypass creators. A convenient way of finishing an essay. They became less like crutches and more like wheelchairs, bearing the full weight of the work through intricate prompts.

As a student, I noticed the impact of ChatGPT immediately. Tools like this always had a way of being abused. They were an opportunity to bypass the work, a way to leave your thinking to a machine.

This was because ChatGPT and other large language models (LLMs) could generate text to respond to almost any prompt. Did the information have to be accurate? Absolutely not.

The trick with these LLMs was that they looked accurate. Accurate enough to fool a teacher or professor, perhaps even a researcher. So much so that LLM-fabricated information has slithered its way into actual published articles. And what happens when people use those sources in their own papers? It’s a never-ending cycle.

The longer LLMs exist, the more our information becomes polluted with half-truths and outright lies. The truth of the matter is that these LLMs don’t care about the truth. They are not intelligent beings with any code of ethics. They learn from information scraped from all corners of the internet. When someone asks an LLM to produce sources, the LLM aims to please. It doesn’t matter if a source isn’t real. As long as the user is satisfied, the LLM has done its job.

That raises another question: how will we know what to trust in the future? This chain of misinformation is bound to propagate if nothing is done about it. Eventually the errors will run too deep to verify. That thought terrifies me. It misshapes the foundation of our knowledge.

There truly is no bright side to AI.


Comments

2 responses to “AI and the Future of Trust”

  1. goosefeet22

    You make some incredibly astute points. I felt like AI was a bit of a joke at first with those ridiculous Will Smith spaghetti videos, but eventually it started to become part of everyday life. ChatGPT made its presence felt in education far earlier than I would have expected. I too fear there being no trail to follow, no way to verify what we see. AI can create so much more than a human can, and that can so easily pollute our senses. It almost feels like the only way we will be able to tell whether something was written by a person is to start handing in handwritten pieces.

  2. ipadbaby22

    Being a student as well, I notice the same issues a lot, especially with my school-age siblings, who sometimes rely on genAI for assignments while fully knowing it may give them incorrect information. Similar to what Verified tells us about Google, genAI is not a truth finder by any means, but rather a content finder. The average user does not always understand this distinction, and that lack of understanding often encourages the spread of misinformation.

    I agree that with the way genAI has been allowed to grow and steal content, the future does not look too bright. Sam Altman, OpenAI’s CEO, along with hundreds of other AI researchers and scientists, signed a statement that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    It is definitely something that should be monitored closely; unfortunately, the government has so much money invested in genAI that it probably will not be.
