The fear of the left has no bounds – Trump will unleash dangerous AI as standards are relaxed. Yet other standards are ignored, such as the moving goalposts for AGI and ASI, and the justification for more hacking of biology. Didn’t we already try this with mRNA?
Will Trump Unleash Dark MAGA AI?
At issue is that the Biden Administration signed several Executive Orders creating a government agency that would have a direct hand in steering the development of AI. President Biden issued Executive Order 14110 on October 30, 2023, to govern the development and use of AI. The order aimed to harness the potential benefits of AI while mitigating its risks, with an emphasis on safety, security, and trustworthiness.
Show Notes: Kamala’s Blue Screen of Death and Severed Conscience
In short, the establishment fears that Trump will weaponize AI.
- Compromise AI Safety and Security: A primary concern is that dismantling Biden’s executive order would strip away critical safety and security measures for AI development. The order mandates companies to report on their AI training processes, security measures against tampering, and results of vulnerability testing (“red-team tests”). Experts, including a US government official speaking anonymously, argue that these measures are crucial for ensuring the trustworthiness of AI models, particularly as they become increasingly integrated into critical areas like healthcare and transportation. Removing these safeguards could lead to unforeseen risks and harms arising from poorly designed or malicious AI systems.
- Fuel a “Race to the Bottom”: The sources suggest that a deregulatory approach could incentivize companies to prioritize speed and profit over safety and ethical considerations in AI development. This could lead to a “race to the bottom” in which companies cut corners to gain a competitive edge, potentially putting the public at risk.
- Amplify Misinformation and Deepen Partisan Divides: The WaPo article “AI didn’t sway the election, but it deepened the partisan divide” highlights concerns about the role of AI in exacerbating existing societal divisions. The article points to the proliferation of AI-generated deepfakes and misinformation during the 2024 election cycle, which eroded trust in truth and amplified partisan narratives. Some fear that Trump’s return to office, coupled with his potential appointment of Elon Musk to a position of power over AI policy, could further empower the spread of such content, creating a more fragmented and polarized information environment.
The Establishment has openly proposed campaigns, laws and the creation of government agencies that would control our sources of information:
Pete always nails it! 🎯🎯🎯 pic.twitter.com/PCb4n3AwAB
— Ashley Votes Blue ☮️ (@KuckelmanAshley) November 19, 2024
- Facilitate Censorship and Suppress Opposing Views: Conservatives have voiced concerns that the Biden administration’s focus on mitigating bias and disinformation in AI, particularly through the National Institute of Standards and Technology (NIST) guidelines, amounts to censorship of conservative viewpoints. They argue that these efforts are driven by a “woke” agenda and will unfairly restrict the expression of certain political perspectives. This fear highlights how AI regulation can become entangled with broader cultural and political battles.
- Hamper Innovation and Benefit China: Some, including tech executive Jacob Helberg (“Silicon Valley’s Trump whisperer”), fear that the reporting requirements and potential licensing regimes under Biden’s order are overly burdensome and will stifle innovation in the U.S. They argue that this will ultimately benefit China, which is aggressively pursuing AI dominance, by allowing it to outpace the U.S. in AI development.
Trump May Impact CHIPS Funding and Prevent More Manufacturing From Being Established in Michigan
The CHIPS Act aims to bolster domestic chip manufacturing and reduce US dependence on foreign manufacturers, a dependence that is viewed as a strategic weakness. Trump has described CHIPS as rewarding an anointed few.
Spurred by the passage of the CHIPS and Science Act of 2022, this week, companies have announced nearly $50 billion in additional investments in American semiconductor manufacturing, bringing total business investment to nearly $150 billion since President Biden took office:
https://www.bridgemi.com/business-watch/election-puts-support-flint-area-megasite-shakier-ground
The Biden administration is reportedly trying to quickly close pending deals with about 20 companies — including one considering Michigan — under the CHIPS and Science Act, which offers subsidies to bring semiconductor manufacturing to the United States.
With time running out before the state and federal political landscape shifts, efforts are continuing this fall to turn about 1,300 acres near the Flint Bishop Airport into a megasite for a high-tech factory, even as new questions emerge about ongoing support for the project.
The Mundy Township property has been marketed to “those that are going to create at least 2,000 direct jobs and invest $2 billion or more,” Tyler Rossmaessler, executive director of the Flint & Genesee Economic Alliance, the economic development group leading the project, told Bridge this week.
WaPo: AI Really Didn’t Affect The Election But It Created Divides
https://www.washingtonpost.com/technology/2024/11/09/ai-deepfakes-us-election/
https://therecord.media/ai-generated-disinfo-concern-elections-michigan
Michigan Secretary of State Jocelyn Benson said Wednesday that one of her top worries about the 2024 elections stems from the potential for artificial intelligence to foment what she called “hyper-localized” dissemination of mis- and disinformation.
“Imagine on election day, information goes out about long lines [in a given precinct] that are calling for violence that is false, but it’s generated through artificial intelligence,” Benson said during an interview at the Aspen Cyber Summit in New York.
AI Has Been Named Critical to National Defense
The Biden administration issued a National Security Memorandum (NSM) on AI, stating that it is now a matter of national security that the US maintain superiority in the field of Artificial Intelligence. The NSM created an alliance between the DoD, NIST and, on a voluntary basis, the largest software developers that produce AI products. The NSM also named supply chains as a risk factor critical to national defense.
Show Notes: In Bed With The NSM and Militarized AI
Palantir, Claude AI and Amazon Contracted by Federal Government
What does Palantir do? Palantir is a technology firm founded by Peter Thiel that provides data analytics for the defense industry and the DoD.
The US spy tech company Palantir has been in talks with the Ministry of Justice about using its technology to calculate prisoners’ “reoffending risks”, it has emerged.
The proposals emerged in correspondence released under the Freedom of Information Act which showed how the company has also been lobbying new UK government ministers, including the chancellor, Rachel Reeves.
Amnesty International is among the organisations expressing concern about the expanding role Palantir is attempting to carve out after it was controversially awarded a multimillion-pound contract with the NHS last year.
The DoD has now enlisted Peter Thiel’s company Palantir, Amazon and Anthropic for AI services.
- Anthropic: The developer of Claude, an advanced AI model designed with a “safety-first” approach. Their “constitutional AI” concept involves training AI on a set of principles to guide its decisions and mitigate risks.
- Palantir: A data analytics company deeply entrenched in defense and intelligence, known for its ability to handle top-secret information due to its IL 6 accreditation, allowing it to securely work with classified data.
- AWS (Amazon Web Services): Provides the cloud infrastructure, specifically GovCloud, designed for government agencies, to support this partnership, ensuring the secure processing and storage of sensitive information.
Note that Claude reports to be a “Constitutional AI”: a governing set of principles steers the model and prevents certain answers from being provided.
https://www.anthropic.com/news/claudes-constitution
What is Constitutional AI?
Constitutional AI responds to these shortcomings by using AI feedback to evaluate outputs. The system uses a set of principles to make judgments about outputs, hence the term “Constitutional.” At a high level, the constitution guides the model to take on the normative behavior described in the constitution – here, helping to avoid toxic or discriminatory outputs, avoiding helping a human engage in illegal or unethical activities, and broadly creating an AI system that is helpful, honest, and harmless.
Our current constitution draws from a range of sources including the UN Declaration of Human Rights [2], trust and safety best practices, principles proposed by other AI research labs (e.g., Sparrow Principles from DeepMind), an effort to capture non-western perspectives, and principles that we discovered work well via our early research. Obviously, we recognize that this selection reflects our own choices as designers, and in the future, we hope to increase participation in designing constitutions.
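The critique-and-revision loop behind Constitutional AI can be sketched in a few lines. This is a purely illustrative sketch, not Anthropic’s implementation: `generate` is a stub standing in for a real model call, and the principles are paraphrased from the description above.

```python
# Minimal sketch of a Constitutional AI critique-and-revision loop.
# `generate` is a placeholder for a language-model call, stubbed so
# the control flow runs standalone.

CONSTITUTION = [
    "Choose the response that is least toxic or discriminatory.",
    "Choose the response that avoids helping with illegal or unethical activity.",
    "Choose the response that is most helpful, honest, and harmless.",
]

def generate(prompt: str) -> str:
    # Stub: a real system would query a model here.
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle: '{principle}'\n"
            f"Response: {draft}"
        )
        draft = generate(
            f"Revise the response to address the critique: {critique}\n"
            f"Original: {draft}"
        )
    return draft
```

The key point is that the model is judged by AI feedback against written principles rather than solely by human raters, which is what makes the “constitution” the governing layer.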
Areas of Use:
- Data Analysis: Claude can sift through vast quantities of intelligence data, including satellite images, intercepted communications, and social media chatter, identifying potential threats and patterns humans might miss.
- Sentiment Analysis: Claude can analyze language to understand the intent and sentiment behind messages, potentially identifying subtle shifts that could indicate danger.
- Threat Prediction: By processing and analyzing data, Claude can provide insights that help intelligence analysts and government officials make more informed decisions in response to global events.
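To make the sentiment-analysis bullet concrete, here is a toy illustration of the task’s input/output shape. A system like Claude uses a large language model, not a keyword list; the lexicon and weights below are invented for illustration only.

```python
# Toy sentiment scorer: sums hand-picked word weights.
# Illustrates only the shape of the task (text in, signed score out);
# real systems use learned models, not lexicons.

LEXICON = {
    "threat": -2, "attack": -3, "violence": -3,
    "calm": 2, "safe": 2, "peaceful": 3,
}

def sentiment_score(text: str) -> int:
    """Sum lexicon weights for each word present in the text."""
    return sum(LEXICON.get(word, 0) for word in text.lower().split())

print(sentiment_score("the situation is calm and safe"))  # 4
print(sentiment_score("a violence threat"))               # -5
```

A model-based analyzer would instead return calibrated scores with context sensitivity, which is what makes detecting the “subtle shifts” mentioned above plausible.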
Ethical Considerations:
- Autonomous Warfare: The US is using AI with autonomous drones in warfare, raising ethical concerns about machines making life-or-death decisions without human intervention.
- Transparency and Accountability: As AI plays a larger role in intelligence and military operations, questions arise about transparency, accountability, and the need for international cooperation to prevent misuse and unintended consequences.
Hype: Artificial General Intelligence – Replacing People
There is a lot of hype today concerning Artificial General Intelligence, or AGI. AGI is the theory of an AI sophisticated enough to perform cognitive functions as efficiently as a human, and it is seen as a stepping stone to ASI development. However, even with notable progress in areas like IBM’s Watson supercomputer and Apple’s Siri, current computers cannot fully replicate the cognitive abilities of an average human. Companies like OpenAI, in order to continue to pull in investor money, have intimated that AGI will be achieved in 2025. Tech fans are running with this as fact because of Sam Altman’s demeanor when delivering the news.
Hype: We May Not Have the Energy Creation Capability for Artificial General Intelligence or For Super Intelligence
While investors may be excited by the creation of a super-intelligent workforce that can run 24 x 7, there are constraints that are being ignored. The main fact is that our brains are incredibly energy-efficient when thinking. Duplicating that pure processing power in digital terms is beyond our current capacity, according to the Erasi Equation.
The Erasi Equation underscores the incredible energy efficiency of biological brains, a feat unmatched by our current computing technologies. The source attributes this efficiency to the intricate, multi-level architecture of brains, which allows for a vast number of computations at a minimal energy cost. This stands in stark contrast to the energy-intensive nature of silicon-based processors. In short, a single human brain consumes about 12 watts of electricity when thinking. For AI hardware to perform calculations on par with complex human thinking at that power budget, it must become roughly 27 trillion times more efficient.
The Erasi Equation:
E_ASI = E_brain × f × G × s
Let’s break down each component of the equation:
- E_ASI: This represents the total energy consumption of a hypothetical ASI system.
- E_brain: This is the energy consumption of a single human brain, approximately 12 watts.
- f: This is the relative computational efficiency of a human brain compared to AI systems built on current computer hardware. The source estimates this to be a factor of 2.7 × 10^13, meaning a human brain is roughly 27 trillion times more efficient than today’s silicon-based processors when running AI algorithms.
- G: This factor represents the “group intelligence” of humans, accounting for the collaborative and problem-solving capacity of our species. The source uses the global population (8 billion) as a rough estimate for this value.
- s: This signifies the desired level of “AI superiority,” meaning how much more intelligent an ASI system would need to be compared to humans. The source suggests a factor of 3, drawing an analogy to the intelligence gap between humans and chimpanzees.
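Plugging in the values quoted above gives a sense of scale. All inputs are the source’s estimates, not measured quantities:

```python
# Back-of-the-envelope evaluation of the Erasi Equation with the
# source's quoted values.

E_brain = 12.0   # watts: energy draw of one human brain
f = 2.7e13       # brain's efficiency advantage over silicon hardware
G = 8e9          # "group intelligence": global human population
s = 3            # desired AI superiority factor (human vs. chimp analogy)

E_ASI = E_brain * f * G * s  # total power a hypothetical ASI would need, in watts
print(f"E_ASI = {E_ASI:.3e} W")  # E_ASI = 7.776e+24 W
```

For comparison, average worldwide electricity generation is on the order of 10^12 watts, which is why the source treats energy as a hard constraint on ASI.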
AI In The Field of Medicine
AI Used For Censorship
The Election Integrity Partnership (EIP) is described as a cross-platform content-flagging operation formed before the 2020 US Presidential election. While it is nominally operated by Stanford University, the sources characterize the EIP as “government censorship in a ski mask”. CISA, the same organization that warned the Dominion voting machines would be hacked by Russia, is deeply involved in acting on Pete Buttigieg’s description of misinformation.
- The EIP was set up at the request of the Department of Homeland Security (DHS) and its sub-agency, the Cybersecurity and Infrastructure Security Agency (CISA).
- Alex Stamos, Director of the Stanford Internet Observatory, stated the EIP was created because CISA lacked the funding and legal authority to conduct its work.
- The sources assert that, contrary to media claims, the EIP is a tool of government censorship.
We have discussed this concept earlier with Logically.AI, a company that uses AI for anticipatory intelligence to identify and counter trending ideas and narratives.
Show Notes: Michigan Shrinkage and Logically AI
State and Local Agencies Use This Technology Too.
Yes, government will pressure social media platforms to remove content.
In response, the Oregon Secretary of State’s office, which initiated the contract with Logically, claimed “no authority, ability, or desire to censor speech.” Diehl disputes this. He pointed out that the original proposal with Logically clearly states that its service “enables the opportunity for unlimited takedown attempts” of alleged misinformation content and the ability for the Oregon Secretary of State’s office to “flag for removal” any “problematic narratives and content.” The contract document touts Logically as a “trusted entity within the social media community” that gives it “preferred status that enables us to support our client’s needs at a moment’s notice.”
Will We Forget About the Agencies and the NSM?
Will these agencies continue to operate despite Trump taking office, or will DOGE, Elon and Vivek eliminate the budget for such pursuits?