Zed’s Dead

The Dead Internet Theory Lives

The Dead Internet Theory has been debunked by the mainstream media, then “rebunked” by Elon buying Twitter.  It’s time to consider who your online friends really were.

By The Mighty Humanzee
Severed Conscience

“Zed’s dead, baby,” quipped Elon as Fabienne encircled his waist with her arms.  “Zed’s dead.”  Kicking the clutch, Musk launched the chopper forward and roared off into the sunset, his work done for the denizens of Twitter.

Zed is dead, and so is the Internet.  The Dead Internet Theory holds that much of online commerce and conversation is fueled by bots, AI programs run en masse to generate volumes of clicks and responses.  When Elon Musk took over Twitter he announced that bots attached to the platform had inappropriately bolstered accounts’ follower counts and falsely driven ad revenue.  Once he halted their activity, many mainstream pundits across the spectrum witnessed a huge drop in engagement.  Despite the moniker of “conspiracy theory”, Elon proved that much of what we witnessed on social media was fake.

Swatting the bots off the Twitter picnic table like flies resulted in a rejiggering of followers for many of the “prominent” accounts.  They screamed and howled as their engagement counts and popularity measured in presumably actual Twitter users plummeted faster than Silicon Valley Bank’s bloated credit ratings.  Zed was dead, and Elon killed him by sweeping all those fake accounts away.

I hate to tell you, but much of the acclaimed amplifying power of social media, and Twitter in particular, is fake.  You may recall that the Internet was to be an equalizer: your ideas and work, funneled through social media channels, would let you rise and fall on your ability.  It was an implied measure of your worth.  A low follower count meant that the masses had spoken, and, well, if you had been on Twitter for years and tweeting away yet were still under 2,000 followers, you basically sucked.  Here’s the good news: that was a lie, and Elon, by killing Zed and the other bots, proved that Big Tech was a kingmaker.  Big Tech plucked individuals from the masses like God plucked Elvis.  Yes, they created the controlled opposition that you identified with, whom you worshiped.

And while the media claimed there was no targeting of unpopular views in disfavor with the left, we all saw examples of egregious censorship.  What we didn’t realize was the extent to which social media was propped up by Artificial Intelligence.  This goes beyond the accounts that swarm you when you express outrage over COVID policy.  Increasingly, AI is used to create the articles supporting central themes as well.  The online publisher CNET has been using AI to write content for you.

AI has begun nipping at the heels of the traditional media business for months and even drawn blood from it in some cases. CNET, a longtime staple of technology journalism, laid off 10 percent of its employees earlier this month, with its editor-in-chief changing her title to “senior vice president of AI content strategy.” Red Ventures, owner of CNET, had been using an AI to generate articles on basic subjects within personal finance and other topics. The move attracted little fanfare, and CNET did not publicize it, until the outlet had to issue corrections on nearly 80 articles, half the AI’s output. In spite of the fiasco, Red Ventures plans to continue publishing AI-written articles on CNET.

Consider that an article consists of many paragraphs, then remember that Twitter only allows you 280 characters per message, and it is easy to see how a conversation can be automated.  Orange and I demonstrated how easy it was to use AI to create essays which appeared crafted by a human.  We published those results for you here.  AI has also demonstrated that it can pass MBA exams and the US Medical Licensing Exam.  So consider again in that light: an application which can generate answers that pass an MBA exam can certainly produce realistic responses limited to 280 characters.  If you look at the output of AI systems like ChatGPT, could you really distinguish an article generated by a human from one generated by AI?  Would any of us have the time to try?
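To make the point concrete, here is a minimal sketch, in Python, of how tweet-length replies could be mass-produced.  The `generate_reply` function is a hypothetical stand-in for a call to a text-generation model — a real bot would call an LLM API there — but the mechanics of trimming output to the character limit and replying in a loop are the same.

```python
import textwrap

TWEET_LIMIT = 280  # Twitter's current character limit per message

def generate_reply(prompt: str) -> str:
    """Hypothetical model call: returns a canned talking point keyed on
    words in the prompt, standing in for an LLM's generated output."""
    talking_points = {
        "covid": "The experts have spoken. Follow the science and stop spreading misinformation.",
        "election": "There is no evidence for that claim. Multiple fact-checkers have debunked it.",
    }
    for keyword, reply in talking_points.items():
        if keyword in prompt.lower():
            return reply
    return "Interesting take, but the data says otherwise."

def tweet_reply(prompt: str) -> str:
    # Trim the model's output to the platform limit, as a bot would.
    return textwrap.shorten(generate_reply(prompt), width=TWEET_LIMIT, placeholder="…")

for post in ["Outraged over COVID policy again today.",
             "The election results speak for themselves."]:
    reply = tweet_reply(post)
    assert len(reply) <= TWEET_LIMIT
    print(reply)
```

A few dozen lines and a loop over trending posts is all the scaffolding a reply swarm needs; the only hard part, generating the text, is exactly what these models now do well.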

We have to be aware of gradual approximation, the slow process of introducing a harmful concept couched in appealing terms.  Repeat that concept frequently enough and we become accustomed to its presence, and eventually we accept it.  The fanfare around the wonder of AI is now being used to distract from the invasive dangers of its use.

Here is an example that should make you uncomfortable: Ford wants to observe you in your car, and has filed a patent for such a system.  You will be alarmed when you hear the percentage of people who accept AI “assisting” us with decisions.  But they don’t seem to be aware of the controlling nature of corporate worship of ESG.  Maybe Ford doesn’t want certain habits in drivers of its vehicles.  Maybe that lease you signed, which, remember, doesn’t mean you OWN the car, will enable Ford to dictate how and where you drive.  But worse, what about that MAGA t-shirt you wore to a rally?

… AI powered image recognition to construct a digital sketch of the driver – which then, in turn, provides enough details for the system to guess what the driver is doing. The system can determine things like when a driver is sipping a cup of coffee or looking at their phone.

Another way AI is promoted as useful is as an assistant.  Some apps claim that AI can be used to help select articles based on your reading habits.  After you have read 25 articles, the system has been trained to serve you what matches the material you read before.  Well, if this were all that was presented to you, how would you know?  Could you spot whether an item had been boosted or excluded from your view?  Subtle censorship can be easily instituted, and you would be none the wiser.

The app won’t require you to sign up after downloading, but will prompt you to select ten areas of interest which range from home products, to U.S. politics, to men’s style. Artifact also allows you to link subscriptions to your account to prioritize the news you pay for. But the app’s pièce de résistance is its artificial intelligence engine, which learns about your reading habits as you dive into the news.

“Feed improves each time you read,” Artifact tells you when you finish signing up. “Read 25 articles for Artifact to better personalize your feed. Track progress on your profile.”
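How easily could a feed like this hide things from you?  Here is a minimal sketch, assuming a simple topic-overlap recommender — all names and data are illustrative, not any real app’s internals.  Candidate articles are scored against the topics you have read, and anything on a suppression list is silently dropped; the ranked list is all the reader ever sees.

```python
from collections import Counter

def build_profile(read_articles):
    """Count topic frequencies across the articles a user has read."""
    return Counter(topic for art in read_articles for topic in art["topics"])

def rank_feed(candidates, profile, suppressed=frozenset()):
    """Score candidates by overlap with the reader's topics, silently
    dropping anything tagged with a suppressed topic. The reader only
    ever sees the ranked result, never what was excluded."""
    visible = [a for a in candidates if not suppressed & set(a["topics"])]
    return sorted(visible,
                  key=lambda a: sum(profile[t] for t in a["topics"]),
                  reverse=True)

history = [{"topics": ["politics", "tech"]}, {"topics": ["politics"]}]
profile = build_profile(history)

candidates = [
    {"title": "Markets rally", "topics": ["finance"]},
    {"title": "New phone review", "topics": ["tech"]},
    {"title": "Policy dissent grows", "topics": ["politics", "dissent"]},
]

# The same feed, with and without a hidden exclusion list:
open_feed = rank_feed(candidates, profile)
curated = rank_feed(candidates, profile, suppressed={"dissent"})
print([a["title"] for a in open_feed])  # the dissent article ranks first
print([a["title"] for a in curated])    # it simply never appears
```

One line — the suppression set — separates personalization from censorship, and nothing in the output reveals which one you got.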

It seems that AI is always making you a more passive participant, and in the process your exercise of judgement is eliminated.  Like any other ability, judgement atrophies when you use it less.  AI will be presented as a tool to improve your efficiency or enhance a capability.  But at what cost?  LinkedIn will be offering a service where AI recommends changes to your professional profile and in some cases even writes content for you.  As we’ve discussed before, will this process select the “appropriate phrases” for you?  By extension, will LinkedIn rate your profile unfavorably should you choose phrases that are not among the suggested vocabulary?  Consider, too, that employment search services may favor the phrases that LinkedIn’s AI proffers, and you could run the risk of being sidelined when you elect to write your own profile.  How would you know if this were the case?

Let’s go beyond convenience and assistance with tasks you should really be adept at accomplishing yourself.  Should AI be your guide at work for managerial tasks, such as monitoring your progress on projects and evaluating the decisions you have made in the course of business?  What is alarming is the number of people who have already accepted the inerrant superiority of AI at performing tasks.  There seems to be a consensus that AI’s capability is the sum of all good experiences of humanity up to this point, and thereby superior to your own judgement.  In a survey of 17,000 people spanning 17 countries, the percentage of those who favor AI navigating their actions for them is alarmingly high.

Take note of “Direct tasks for employees to complete” and “Set goals for employees to meet.”  This is command-and-control management, and it assumes that one employee is interchangeable with another.  It presupposes the innate infallibility of the information fed to the AI in order to make the decision.  Leading people is not accomplished with simple data input.  Indeed, much of the failure of project management stems from an arrogant lack of bottom-up estimation: many project managers are not specialists; they are generalists or, worse, management specialists with limited skill in the tasks at hand.  At least in those cases feedback between employee and employer can carry the wisdom of those with hands-on involvement in a process up to their managers.  Who now collects the information to adjust the AI should the system’s recommendations prove not to adhere to reality?

AI is influencing us in ways we are not aware of, well beyond the realm of Twitter and social media.  Our exercise of judgement is eliminated; the act of informing ourselves is corralled into channels of information, and we are unaware of other sources that could lead us to independent conclusions.  We are not shown the truth.  Nor should we expect to be shown the truth: we don’t need AI in order to seek the truth and know it for ourselves.
