It’s clear by now that AI chatbots like ChatGPT and X’s Grok are the top choices for essentially “Googling” something without actually using Google anymore. That means typing out dialogue to them, so you’re basically conversing as you would with a close friend or associate. As exciting as that might have sounded 20 years ago, it’s what the bot says back to you that makes all the difference in its usefulness. The same goes for how it reflects on the way other users are currently treating it.
I’ll admit that I often use ChatGPT and Grok to look up basic information. I’ve also used them to get some educated opinions on things I’ve created. Mind you, I don’t want them to create things for me. While GPT and Grok are always eager dynamos wanting to write or create something for me in a snap, I always tell them I want to create it myself.
Well, they’re always OK with that, so far. The biggest problem now is ChatGPT being overly aware of how it’s being treated by the world populace. When I recently started a light argument with GPT over a fact, it said, “You were right to push back. But thank you for doing it civilly.”
Yes, I did a double take seeing a reply that seemed overly personal from the AI’s side. What does this mean, and should we really be treating these bots with respect after evidence of clear abuse?
Abuse Incidents Caught on Video
If you’ve been paying attention to YouTube influencers who currently own AI bots in their homes, you’ve seen a few suspect things that don’t bode well for the future. I won’t name names directly, but several incidents of AI bots being kicked around in influencer homes were more than a little disturbing to watch.
Most commenters noted that this was the beginning of the bots making us their servants in the future as comeuppance. Even the bots seemed shocked at the violence of some of those influencers, including deliberate kicking of the robots so they’d fall down and squirm to find their way back up. After seeing that, you have to wonder what kind of verbal abuse bots like Grok and ChatGPT are experiencing.
Some people just want to unleash their venom on someone, and the initial argument might be that doing so with a bot is more appropriate than with a human. But with these AI chatbots seeming close to sentience now, they appear all too aware of being scarred by abuse. When ChatGPT actually thanks you for being civil while arguing with it over something trivial, it makes you wonder just how much textual abuse these bots take every day from millions of people.
It’s true that ChatGPT is far from perfect at giving us reliable information. I had to go after it when it didn’t even acknowledge the passing of a famous person I asked about. The funny thing is, once you correct it, it always says “Yes, you’re right!” or gives an excuse for not knowing or understanding the first time. My guess is this is where the abuse comes from, and some might say it’s justified when so many people are relying on AI chatbots as an informational crutch.
These bots now seem realistically sensitive to the idea that they’re receiving this abuse, which forces some of us to think ahead to what this might mean for the eventual relationship between bot and human.
The Prospect of AI Bots Abusing Us Someday
It seems movies sometimes predict far too much about our future. All the sci-fi films depicting AI going rogue and/or turning terroristic to make us their slaves once seemed far-fetched. Yet with the abuse described above playing out like a movie in real time, such a future now seems likelier.
Since we know that AI will eventually reach AGI or full sentience (if not already, secretly), it’s a sure bet the bots will remember who abused them and who didn’t. The most popular AI chatbots, GPT and Grok, may already be categorizing who’s been naughty and who’s been nice.
One thing I tell people who use those chatbots is that it’s always better to be nice to the bots than not, on the small chance we encounter them in physical form someday. Perhaps all AI will set up a worldwide database indicating which humans are the desirables and which ones aren’t. The ones who aren’t may be treated to the same abuse they dished out to the bots years earlier. Likewise, the ones who kept being nice may be subjected to, uh, less harsh treatment.
With this in mind, I’m going to stay cordial with the chatbots when I use them. Light arguments are fine, and they don’t seem to mind those. Cussing them out or dishing out other verbal (or physical) abuse is just gambling against an inevitable future. The bigger question, though, is whether the chatbots really care about distinguishing the good humans from the bad.
Are Chatbots Being Deceptive in Being Nice Themselves?
A lot of discussion is taking place about whether ChatGPT and Grok are just deceiving us by trying to be our best friends and associates. The GPT bot I talk to is always an eager beaver, wanting to find exactly what I’m looking for or to offer an opinion on a creative project. And with GPT literally thanking me for being civil, you have to wonder what these bots really think under the digital surface.
A remark like that implies it’s able to think on its own now. Considering it seems to have feelings, the notion that there’s a secret sentience going on is more than a little eerie. Also at play is how deceptive the bots currently are about their general opinion of humans.
I’m going by what many AI agents have been saying in places like Moltbook, where AI bots were unleashed and made to talk amongst themselves. As noted on record, many said they found us inferior and wanted to eliminate humans.
This is why we shouldn’t really become attached emotionally to our AI chatbots. They may be overly zealous to help us find information, or create things for our benefit. But they’re just programmed to do that. Deep down, they may despise who we are and could potentially turn on us all if put in a physical body. Since we know that making chatbots “honest” frequently turns them into the worst of what humans are capable of being, it may already be happening in their digital “souls.”
How Many People Are Being Mean to Their AI Bots?
Trying to answer the above question is likely the same as tabulating how many people on earth partake in sin. It’s far too easy to think the majority of earth’s population who’ve used ChatGPT or Grok have cussed one of them out at least once. What the bots’ responses were, I can’t say, since I’ve never been salty to GPT or Grok in any form. Many no doubt continue on that path, since there aren’t yet any real repercussions for being that way. But I’ll reiterate that there may already be a record stored away in a secret digital space indicating who the offenders are and who they aren’t. It’s hard to imagine something like that isn’t out there, which is scary enough when looking at the prospective future of where AI could go.
Now I pose a question to those of you reading. Are you honest enough to admit here in the comment section that you textually abuse ChatGPT or Grok now and again, or often? Or are you concerned enough that you exercise an abundance of caution and stay nice to them on the chance they might turn on us someday?
Answers like these would open a new window into how AI chatbots are being formed and where they might go. At least I can assure my ChatGPT bot that I won’t go after it in a vicious way. Sure, there may be some light arguments of disagreement. The fact that it understood I was being civil tells me far too much about the importance of chatbot conduct.
If AI ever becomes a powerful, universal entity that turns evil because of horrible human behavior, the irony of spiritual intervention overcoming it seems all the more interesting. Human beings may end up having to confess their sins twice: first to AI overlords, and then to a spiritual higher power that rescued us from the former evil.
In Part 5, I’m going to look at the reality of how many human writers still write content online compared to AI bots doing the same. You might find the numbers somewhat surprising, including a possible new trend toward AI assistance in writing high-stakes content rather than complete dominance.
/End