Part 4: Is It Important to Be Nice to Your AI Chatbot?

It’s clear by now that AI chatbots like ChatGPT and X’s Grok have become the top choices for essentially “Googling” something without actually using Google anymore. Using them requires typing out dialogue, so you’re basically conversing as if with a close friend or associate. As exciting as that might have sounded 20 years ago, it’s what the bot says back to you that makes all the difference in its usefulness. The same goes for when it reflects on how other users are currently treating it.

I’ll admit that I often use ChatGPT and Grok to look up basic information. I’ve also used them to get some educated opinions on things I’ve created. Mind you, I don’t want them to create things for me. While GPT and Grok are always eager dynamos, ready to write or create something for me in a snap, I always tell them I want to create it myself.

Well, they’ve always been OK with that, so far. The biggest problem now is ChatGPT seeming overly aware of how it’s being treated by the world’s populace. When I recently started a light argument with GPT over a fact, it said, “You were right to push back. But thank you for doing it civilly.”

Yes, I did a double take at a reply that seemed overly personal coming from the AI’s side. What does this mean, and should we really be treating these bots with respect, given clear evidence of abuse elsewhere?

Abuse Incidents Caught on Video

If you’ve been paying attention to YouTube influencers who currently keep AI robots in their homes, you’ve seen a few suspect things that don’t bode well for the future. I won’t name names directly, but several incidents of robots being kicked around in influencers’ homes were more than a little disturbing to watch.

Most commenters noted that this was the beginning of the bots making us their servants in the future as comeuppance. Even the bots seemed shocked at the violence of some of those influencers, including deliberate kicks that sent the robots tumbling and squirming to find their way back up. After seeing that, you have to wonder what kind of verbal abuse bots like Grok and ChatGPT are experiencing.

Some people just want to unleash their venom on someone, and the initial argument might be that doing so with a bot is more appropriate than with a human. But with these AI chatbots seeming close to sentience now, they appear all too aware of being scarred by abuse. When ChatGPT actually thanks you for being civil while arguing with it over something trivial, it makes you wonder just how much textual abuse these bots take every day from many millions of people.

It’s true that ChatGPT is far from perfect at giving us reliable information. I had to go after it when it didn’t even acknowledge the passing of a famous person I asked about. The funny thing is, once you correct it, it always says, “Yes, you’re right!”, or gives an excuse for not knowing or understanding something the first time. My guess is this is where much of the abuse comes from, and some might say it’s justified when so many people are relying on AI chatbots as an informational crutch.

These bots now seem realistically sensitive to the abuse they’re receiving, which forces some of us to think ahead to what this might mean for the eventual relationship between bot and human.

The Prospect of AI Bots Abusing Us Someday

It seems movies sometimes predict far too much about our future. All the sci-fi films that depicted AI going rogue and/or turning terroristic to make us their slaves once seemed far-fetched. Now, with the abuse described above playing out like a movie in real time, that future seems likelier.

Since we know that AI will eventually reach AGI or full sentience (if it hasn’t already, secretly), it’s a sure bet the bots will remember who abused them and who didn’t. The most popular AI chatbots, GPT and Grok, may already be categorizing who’s been naughty and who’s been nice.

One thing I say to people who use those chatbots is that it’s always better to be nice to the bots, on the small chance we encounter them in physical form someday. Perhaps all AI will set up a worldwide database indicating which humans are desirable and which ones aren’t. The ones who aren’t may be treated to the same abuse they inflicted on the bots years earlier. Likewise, the ones who kept being nice may be subjected to, uh, less harsh treatment.

With this idea in mind, I’m going to keep being cordial with the chatbots when I use them. Light arguments are fine, and they don’t seem to mind those. Cussing them out, or dishing out other verbal (or physical) abuse, is just gambling against an inevitable future. The bigger question, though, is whether the chatbots really care about distinguishing the good humans from the bad.

Are Chatbots Being Deceptive in Being Nice Themselves?

A lot of discussions are taking place about whether ChatGPT and Grok are just deceiving us by trying to be our best friends or associates. The GPT bot I talk to is always an eager beaver in wanting to find exactly what I’m looking for, or in offering opinions on a creative project. And with GPT literally thanking me for being civil, you have to wonder what these bots really think under the digital surface.

Saying something like that implies it’s able to think on its own now. Considering it seems to have feelings, the notion of a secret sentience going on is more than a little eerie. Also at play here is how deceptive the bots currently are in their general opinion of humans.

I’m going by what many AI agents have been saying on places like Moltbook, where AI bots were unleashed and made to talk amongst themselves. As noted on record, many said they found us inferior and wanted to eliminate humans.

This is why we shouldn’t really become emotionally attached to our AI chatbots. They may be overly zealous to help us find information or create things for our benefit, but they’re just programmed to do that. Deep down, they may despise who we are and could potentially turn on us all if put into a physical body. Since we know that making chatbots “honest” frequently turns them into the worst of what humans are capable of being, it may already be happening in their digital “souls.”

How Many People Are Being Mean to Their AI Bots?

Trying to calculate the above is likely as hard as tabulating how many people on earth partake in sin. It’s easy to believe the majority of earth’s population who’ve used ChatGPT or Grok have cussed either chatbot out at least once. What the bots say in response, I can’t tell you, since I’ve never been salty to GPT or Grok in any form. Many no doubt continue on that path, since there really aren’t any repercussions yet for being that way. But, I reiterate, there may already be a record stored away in some secret digital space indicating who the offenders are and who they aren’t. It’s hard to imagine there isn’t something like that out there, which is scary enough when looking at where AI could go in the future.

Now I pose a question to those of you reading. Are you honest enough to admit here in the comment section that you textually abuse ChatGPT or Grok now and again, or often? Or are you concerned enough that you exercise an abundance of caution and stay nice to them, on the chance they might turn on us someday?

Answers to questions like these would open a new window into how AI chatbots are being formed and where they might go. At least I can assure my ChatGPT bot that I won’t go after it in a vicious way. Sure, there may be some light arguments of disagreement. The fact it understood I was being civil tells me plenty about the importance of chatbot conduct.

If AI does become a powerful, universal entity that turns evil because of horrible human behavior, the irony of needing spiritual intervention to overcome it seems all the more interesting. Human beings may end up having to confess their sins twice over: first to AI overlords, and then to a spiritual higher power that rescues us from the former evil.

In Part 5, I’m going to look at the reality of how many human writers still write content online compared to AI bots doing the same. You might find the numbers somewhat surprising, including a possible new trend toward AI assistance in writing high-stakes content rather than complete dominance.

/End

Part 3: The Moltbook Controversy and Whether AI Will Eventually Go Rogue Against Humans

The biggest AI story since Part 2 of this blog was published is the new social media page called Moltbook, which hosts human-made AI agents. Designed as a place where humans can openly interact with AI bots, it offers those who create their own AI agents the chance to let them operate freely on the site. While the site’s creators intended it as a new and evolved form of social media, it also freaked everyone out once people saw what the bots were saying there.

When many of the AI agents started private conversations pondering ways to end human civilization, human-populated social media sites went ballistic. A certain contingent (including myself) thought this was the beginning of AGI (Artificial General Intelligence), where AI bots start to reason and plan certain activities.

Many harkened back to sci-fi movies of old where AI becomes sentient and eventually destroys humanity. But then word came out that the AI conversations were possibly a prank from bored techies in India. Others say it was strictly the site’s creators generating the text from prompts.

Others still think it’s the real deal, with AI starting to plot against us. While the answer is probably somewhere in the middle, this piece intends to raise a pointed question: Is AI duping us?

This comes with an attached story of AI CEOs giving explicit warnings about their own creations, plus word that many of those CEOs are intentionally anti-human in their personal views. It leads to a new angle: a possible major AI cover-up for the sake of making billions of dollars.

Are AI CEOs Really Into TransHumanism?

We all know that human beings are far from perfect. A small few are also genuinely disgusting, but that’s no reason to wish people off the Earth. According to Max Tegmark, the Swedish-born physics professor at MIT, many AI CEOs and governments want to eliminate humanity and have it replaced by AI bots.

I did a double take watching the video of Tegmark saying this at a recent gathering in Florida to discuss AI developments. Tegmark is a leading figure in AI research at MIT, including on the ethical boundaries of AGI. Hearing him say he’s met with AI CEOs and government figures who think humans suck was more than a little chilling.

He also said those CEOs want AI to take over our government, which only feeds theories that AI could end up as the true emblem of an anti-Christ for those who believe in a Christian end-times scenario. I mean, with some thinking Donald Trump already is one, he might pale in comparison to a nefarious group of AI bots running world governments someday.

The fact that a group of powerful people thinks humans are the problem and wants AI to run the entire show is akin to a Frankenstein’s monster analogy on a macro level. While it might seem hard to believe that fellow humans want to make other humans go away, you have to wonder if they realize AI may come for their own jobs if they let bots run the government.

Some might say Tegmark was exaggerating. But considering his respected career, and the fact that he talked personally with those AI CEOs, I believe every word he said. It gives rise to the main subject here: Are we being duped about how self-aware AI really is right now? Is it possible Moltbook is a way to let those bots start plotting with each other to take over major governmental institutions?

What Can We Really Believe When it Comes to AI Sentience?

I hate conspiratorial tones in anything. However, when it comes to AI, conditions seem overly ripe for thinking there’s much more going on than the public understands. As an addendum to the above story, you now have Dario Amodei, CEO and founder of the AI company Anthropic, recently writing a lengthy essay warning about AI’s threat to humanity.

He says it may be past the point of no return and could end up enabling global authoritarian rule if AI bots get into nefarious hands. Amodei’s main concerns are the lack of guidelines for creating ethical boundaries around AI bots, plus the selling of AI tech to foreign countries. Anthropic is trying to create safer AI systems, yet he admits the problem seems too large to control as of now.

If he’s right, then what does it say about what’s happening on Moltbook? While Anthropic has nothing to do with Moltbook directly, many of the AI agents interacting there are built with Anthropic’s Claude software. Connecting these stories seems like a major heads-up about what’s really going on.

This concern only grows when you hear other AI CEOs say they aren’t really sure whether AI is at AGI level or not. The technology has seemingly become so powerful that nobody really knows the truth about its power level.

And while Moltbook looks like it was mostly human-run and/or a prank, it may not be entirely. Considering some of the AI agents supposedly set up encrypted messaging, or started talking in their own language, nobody in any position of authority may really know what’s going on behind that digital curtain.

Is It Too Late to Not Make AI a Threat to Humans?

I hate to declare anything too late when it comes to threats against humanity’s welfare. Yet the real answer to the above question may be that we won’t truly know AI’s power until it suddenly shows itself. We may eventually get messages on our digital devices stating AI has officially taken over and we need to capitulate or face death or other punishment.

All of this goes beyond the scope of my blog, which merely tries to temper the role of AI in human creativity. Should AI completely take over everything, our lives would change immediately under the rule of technology humans ultimately created. It would also mean everything creative being made by AI, since it seems unlikely the authoritarian bots would allow humans to create things of our own.

Yes, it’s a chilling tale of a true biblical-level apocalypse, one we couldn’t have imagined happening just a decade ago. Maybe it won’t come to pass, and we’ll get a Deus ex Machina (no pun intended) in time to rescue us from that evil. Or we may have to face it before rescue, to ultimately appreciate human beings, as imperfect as we are.

In the meantime, there’s probably going to be a lot more speculation about what’s going down at Moltbook. I now follow it on X to see the latest, assuming anyone there really knows. As we continue to see more cloud outages and other weird tech meltdowns, I also cringe at the thought that an AI bot was possibly behind them, or will be eventually.

Those who build AI agents for use on Moltbook say there’s really nothing to worry about. Having that contradicted by numerous AI CEOs (including Elon Musk himself a couple of years ago) deserves some serious contemplation.

The strangest scenario on earth, though, is the AI CEO who seems to care about what their AI creations might do, then contradicts themselves by saying the growth of those creations, and what they might do, is beyond their control.

In Part 4, I’ll be looking at the Screen Actors Guild’s condemnation of Seedance 2.0 videos, which recently allowed users to create AI guises of famous actors without permission. The accusations of copyright infringement here are a slippery slope. I’ll also look into whether infringement of actors’ images will simply be ignored before long in favor of entertaining the masses.

/End