


It’s Inevitable That All Need To Strive and Create Will Be Supplanted
What can This Is Spinal Tap tell us about the Tech Bros’ expectation that AI becomes superintelligent just by adding processing power? Is that the same as cranking it to 11? And what does it mean when the techies say “it’s inevitable that AI will be better at creative elements like music”? Tonight on Sunday Nights Radio we’re going to look at the depth of creativity and knowledge that music can impart, which demonstrates how AI lacks the dimensions of the creative activities the Tech Bros want to supplant. Do we hate our hobbies so much that music being hard is something we should eliminate for the sake of democratizing creativity? A Natural Aristocracy, as Jefferson coined it, may not be a bad thing if we learn to strive and improve. But the chant that AI is inevitable will stifle all of the great ancillary skills we acquire by pursuing hard tasks.
Favor Ability Over Pedigree. Every. Single. Time.
When we hear the word aristocracy we think of pedigree, but I am taking the phrase “natural aristocracy” from Thomas Jefferson, who used it to describe rewarding those of skill, character, and talent with prestige beyond what a landed class deserved.

How Many Instruments Can A Trio Play To Become An Orchestra?
That gear grew from Geddy’s experimentation and collaboration with his bandmates, and at first it was just the pedals that you see at his feet. That is a Moog Taurus bass pedal, which could be configured or “programmed” to produce specific sounds. Geddy has the bass-and-guitar double neck because for some of the songs that Rush wrote, they faced a challenge: a backing melody was required while the lead guitarist soloed. As Geddy describes it, when Alex Lifeson had to solo during a live performance, there would be a “hole” in the wall of sound that they achieved on their albums. If Geddy accompanied on a second guitar, they would lose the bass. If he played his bass, they would lose the backing rhythm of the guitar chords. The band members, while eager to expand their sound and complexity, did not want to add another member. The bass foot pedals freed up Geddy for other things. As he describes it, his new job, in addition to singing and playing bass or switching to guitar, was to be a “footman”.
This led quickly to other synthesizers and the Minimoog. As Geddy describes it, the Minimoog was a challenge, yet something that he could tinker with to “wrap his head around” how to PRODUCE a sound. In other words, those iconic notes from Tom Sawyer, Xanadu and other songs were not preprogrammed. They were created with trial and error as Rush wrote their music. Crafted. But in order to do so, Geddy had to learn about waveforms and how they would ultimately shape the sounds he created. Today this seems arduous, because we no longer need modular synthesizers; AI has recreated the wheel while eliminating the long hours of learning how to mix and modulate sound with dials and buttons. You don’t even need to know that different shapes of sound waves produce different results. You no longer have to turn knobs.
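To make that last point concrete, here is a minimal sketch of what “different shapes produce different results” means, written in Python with numpy and the standard-library wave module (my own choice of tools for illustration; it has nothing to do with Geddy’s actual rig). It renders the same pitch at the same volume as a sine, a square, and a sawtooth wave. Play the three files and you hear three different instruments, because only the shape of the wave differs.

```python
import wave

import numpy as np

SAMPLE_RATE = 44100   # CD-quality samples per second
FREQ = 110.0          # A2, a plausible bass-pedal note
DURATION = 2.0        # seconds of audio per file

t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
cycles = FREQ * t  # how many cycles have elapsed at each sample

# Same pitch, same volume -- only the shape of the wave differs,
# and that shape alone is what changes the timbre you hear.
waveforms = {
    "sine": np.sin(2 * np.pi * cycles),               # pure, flute-like
    "square": np.sign(np.sin(2 * np.pi * cycles)),    # hollow, reedy buzz
    "saw": 2.0 * (cycles - np.floor(cycles + 0.5)),   # bright, brassy
}

for name, samples in waveforms.items():
    pcm = (samples * 0.5 * 32767).astype(np.int16)  # 16-bit PCM, half volume
    with wave.open(f"{name}.wav", "wb") as f:
        f.setnchannels(1)           # mono
        f.setsampwidth(2)           # 2 bytes = 16 bits per sample
        f.setframerate(SAMPLE_RATE)
        f.writeframes(pcm.tobytes())
```

On a Minimoog, Geddy was doing the equivalent selecting and mixing of those shapes with oscillators, filters, and knobs, by ear and by hand, which is exactly the kind of learning the “inevitable” crowd wants to declare obsolete.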
When we destroy that drive to try, we eliminate the multitude of ancillary experience and knowledge that the singular dimension of AI will not impart. When I read the prognostications of prominent figures in AI who describe how AI will surpass the human ability to create, particularly in the field of music, I know immediately that those experts are not musicians. They show no depth of knowledge of the production of music because they only see the end product, a “song” by a famous artist that can be easily mimicked. Those who believe in the inevitable displacement of creativity by AI will describe what I have just written as a sign of desperation and hidden resignation, because I appeal to emotion. That, too, demonstrates their lack of depth regarding the process of musical creation.
Today We Should Hate All Work, Even Our Hobbies
“It’s not really enjoyable to make music now… it takes a lot of time, it takes a lot of practice, you have to get really good at an instrument or really good at a piece of production software. I think the majority of people don’t enjoy the majority of time they spend making… pic.twitter.com/zkv73Bhmi9
— Mike Patti (@mpatti) January 11, 2025
The AI evangelists, who are primarily the Silicon Valley CEOs and their cheerleaders, are just as absurd when it comes to spawning consciousness in AI. They insist that by adding more processing power, sentience will leap onto the stage. Yes, AI performs better when it has more parameters at its disposal to process requests. This essentially increases its ability to answer questions with a more eloquent response and to tackle problems of greater complexity.
It is a very impressive display of skill, but it is not intelligence. Intelligence also encompasses the ability to adapt to new stimuli, and currently LLMs need to be programmed, added to, before they incorporate new capacity. And an LLM does not come up with problems on its own; it waits for you to provide the problem to solve.
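A loose analogy, entirely my own and not a claim about how any particular model works internally: adding parameters is like raising the degree of a polynomial fit. The bigger fit reproduces what it was shown more faithfully, yet does no better, and often far worse, on input outside its training range.

```python
import numpy as np

rng = np.random.default_rng(11)  # seeded at 11, naturally

# "Training data": a simple underlying pattern observed with noise.
x_train = np.linspace(0.0, 5.0, 20)
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, x_train.size)

# A new "stimulus" well outside anything seen during training.
x_new, y_new = 9.0, np.sin(9.0)

for degree in (3, 12):  # few parameters vs. many parameters
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.abs(np.polyval(coeffs, x_train) - y_train).mean()
    new_err = abs(np.polyval(coeffs, x_new) - y_new)
    print(f"degree {degree:2d}: training error {train_err:.3f}, "
          f"error on unseen input {new_err:.3g}")

# Typical result: the higher-degree fit matches its training data better
# (more skill) while missing the unseen input by orders of magnitude more
# (no adaptation).
```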
Lab rats demonstrate a greater intelligence with respect to adaptability. In one study, rats were trained to drive. They were not genetically modified, so their genes were not cranked to 11; they possessed the innate ability to adapt to new stimulation and perform tasks outside their given function.
Confusing Skill with Intelligence
“if you scale up the size of your database and you cram into it more knowledge, more patterns and so on, you are going to be increasing its performance as measured by a memorization benchmark. That’s kind of obvious. But as you’re doing it, you are not increasing the intelligence of the system one bit. You are increasing the skill of the system. You are increasing its usefulness, its scope of applicability, but not its intelligence because skill is not intelligence. And that’s the fundamental confusion that people run into is that they’re confusing skill and intelligence.”
There Is NO Adaptation
AI can only adapt to stimuli when those stimuli fit the system for which it was designed. It is not self-learning, and it cannot adapt to just any stimulus it may encounter. A language-model AI will always be a language-model AI. It cannot, for example, teach itself to drive a car, operate machinery, or pilot a missile without a human first updating its source code to give it the ability to do these things.
https://arxiv.org/pdf/2303.07103
Where future LLMs and their extensions are concerned, things look quite different. It seems entirely possible that within the next decade, we’ll have robust systems with senses, embodiment, world models and self-models, recurrent processing, global workspace, and unified goals. (A multimodal system like Perceiver IO already arguably has senses, embodiment, a global workspace, and a form of recurrence, with the most obvious challenges for it being world models, self-models, and unified agency.) I think it wouldn’t be unreasonable to have a credence over 50 percent that we’ll have sophisticated LLM+ systems (that is, LLM+ systems with behavior that seems comparable to that of animals that we take to be conscious) with all of these properties within a decade. It also wouldn’t be unreasonable to have at least a 50 percent credence that if we develop sophisticated systems with all of these properties, they will be conscious. Those figures together would leave us with a credence of 25 percent or more. Again, you shouldn’t take the exact numbers too seriously, but this reasoning suggests that on mainstream assumptions, it’s reasonable to have a significant credence that we’ll have conscious LLM+s within a decade.