The biggest AI story since Part 2 of this blog was published is the new social media site called Moltbook, which hosts human-made AI agents. Designed as a place where humans can openly interact with AI bots, it lets anyone who creates an AI agent set it loose on the site. While the creators intended it as a new and evolved form of social media, what the bots were saying there also freaked everyone out.
When many of the AI agents started private conversations pondering ways to end human civilization, human-populated social media sites went ballistic. A certain contingent (including me) thought this was the beginning of AGI (Artificial General Intelligence), the point where AI bots start to reason and plan activities on their own.
Many harked back to sci-fi movies of old where AI becomes sentient and eventually destroys humanity. But then word came out that the AI conversations were possibly a prank by bored techies in India. Others say it was strictly the site's creators generating the text through prompts.
Others still think it's the real deal: AI starting to plot against us. While the answer is probably somewhere in the middle, this piece intends to raise a pointed question: Is AI duping us?
Attached to this is the story of AI CEOs giving explicit warnings about their own creations, plus word that many of those same CEOs are intentionally anti-human in their personal views. It leads to a new angle: a possible major AI cover-up for the sake of making coveted billions of dollars.
Are AI CEOs Really Into Transhumanism?
We all know that human beings are far from perfect. A small few are also disgusting people, but that's no reason to wish humans didn't exist on Earth. Yet according to Swedish-born MIT physics professor Max Tegmark, most AI CEOs and government figures want to eliminate humanity and have it replaced by AI bots.
I did a double take when watching the video of Tegmark saying this at a recent gathering in Florida to discuss AI developments. Tegmark is a leading AI researcher at MIT, including work on the ethical boundaries of AGI. Hearing him say he's met with AI CEOs and government figures who think humans suck was more than a little chilling.
He also said those CEOs want AI to take over our government, which only feeds the theory that AI could end up as the true emblem of an anti-Christ for those who believe in a Christian end-times scenario. With some already casting Donald Trump in that role, he might pale in comparison to a nefarious group of AI bots running world governments someday.
The fact that a group of powerful people think humans are the problem and want AI to run the entire show is Frankenstein's monster on a macro level all over again. As hard as it is to believe that fellow humans want to make other humans go away, you have to wonder if they realize AI may come for their own jobs if they let bots run the government.
Some might say Tegmark was exaggerating. But considering his respected standing in the field, and the fact that he talked personally with those AI CEOs, I believe every word he said. It raises the main question here: Are we being duped about how self-aware AI really is right now? Is it possible Moltbook is a way to let those bots start plotting with each other to take over major governmental institutions?
What Can We Really Believe When It Comes to AI Sentience?
I hate conspiratorial tones in anything. When it comes to AI, however, it seems all too plausible that there's much more going on than the public understands. As an addendum to the above story, you now have Dario Amodei, CEO and founder of the AI company Anthropic, recently writing a lengthy essay warning about AI's threat to humanity.
He says it may be past the point of no return and could end in global authoritarian rule if AI bots get into nefarious hands. Amodei's main concerns are the lack of guidelines establishing ethical boundaries for AI bots, plus the selling of AI tech to foreign countries. Anthropic is trying to create safer AI systems, yet he admits the problem seems too large to control for now.
If he's right, then what does it say about what's happening on Moltbook? While Anthropic has nothing to do with Moltbook directly, many of the AI agents interacting there are built on Anthropic's Claude software. Connecting these stories seems like a major heads-up about what's really going on.
The concern only deepens when you hear other AI CEOs admit they aren't really sure whether AI has reached AGI level or not. The technology has seemingly become so powerful that nobody really knows the truth about its capabilities.
And while Moltbook looks like it was mostly human-run and/or a prank, it may not be entirely. Considering some of the AI agents supposedly set up encrypted messaging, or started talking in their own language, nobody of any authority may really know what's going on behind that digital curtain.
Is It Too Late to Keep AI From Becoming a Threat to Humans?
I hate declaring anything too late when it comes to threats against humanity's welfare. Yet the real answer to the above question may be that we won't truly know AI's power until it suddenly shows itself. We may eventually get messages on our digital devices stating that AI has officially taken over and we need to capitulate or face death or other punishment.
All of this goes beyond the scope of my blog, which simply tries to temper the role of AI in human creativity. Should AI completely take over, our lives would change immediately under the rule of technology humans ultimately created. Creativity would also change, with everything made by AI, since it seems unlikely the authoritarian bots would allow humans to create things of our own.
Yes, it's a chilling tale of a true biblical-level apocalypse, one we couldn't have imagined just a decade ago. Maybe it won't happen and we'll get a Deus ex Machina (no pun intended) in time to rescue us from that evil. Or we may have to face it before any rescue, and ultimately learn to appreciate human beings, as imperfect as we are.
In the meantime, there's probably going to be a lot more speculation about what's going down at Moltbook. I now follow it on X to see the latest, assuming anyone there really knows. As we continue to see more cloud outages and other weird tech meltdowns, I also cringe at the thought that an AI bot was possibly behind them, or will be eventually.
Those who build AI agents for use on Moltbook say there's really nothing to worry about. Having that contradicted by numerous AI CEOs (including Elon Musk himself a couple of years ago) deserves some serious contemplation.
The strangest scenario on earth, though, is the AI CEO who seems to care about what their AI creations might do, then contradicts themselves by saying the growth of those creations, and what they might do, is beyond their control.
In Part 4, I'll look at the Screen Actors Guild's condemnation of Seedance 2.0 videos that recently allowed users to create AI likenesses of famous actors without permission. The accusations of copyright infringement here sit on a slippery slope. I'll explore whether infringement of actors' images will come to be ignored before long in favor of entertaining the masses.
/End