AI – American Conservative Movement
https://americanconservativemovement.com
American exceptionalism isn't dead. It just needs to be embraced.

Google CEO Eyes Atomic Power for AI Data Centers as Big Tech Seeks Nuclear Revival to Achieve Net Zero
https://americanconservativemovement.com/google-ceo-eyes-atomic-power-for-ai-data-centers-as-big-tech-seeks-nuclear-revival-to-achieve-net-zero/
Thu, 03 Oct 2024 14:08:21 +0000

(Zero Hedge)—Following the news of the Three Mile Island restart plans to power Microsoft’s AI data centers and the revival of Holtec’s Palisades nuclear plant in Michigan, Google CEO Sundar Pichai revealed in an interview with Nikkei Asia in Tokyo on Thursday that the tech giant is exploring the use of nuclear energy as a potential ‘green’ source to power its data centers.

“For the first time in our history, we have this one piece of underlying technology which cuts across everything we do today,” Pichai said of generative AI. He said, “I think the opportunity to do well here is something we are leaning into.”

Three years ago, Google released plans to achieve net-zero emissions by 2030. However, the proliferation of AI data centers has driven a surge in the tech giant’s power consumption; as a result, its greenhouse gas emissions in 2023 were 48% higher than in 2019 on a carbon-dioxide-equivalent basis.

Behind the scenes, Google is likely scrambling to secure green energy and curb emissions as 2030 quickly approaches.

“It was a very ambitious target,” Pichai said of the net-zero emissions targets, “and we will still be working very ambitiously towards it. Obviously, the trajectory of AI investments has added to the scale of the task needed.”

He continued, “We are now looking at additional investments, such as solar, and evaluating technologies like small modular nuclear reactors, etc.”

Nikkei noted that Pichai wasn’t clear on where Google might start sourcing nuclear power. The bulk of that power could come from reviving older nuclear power plants. This is exactly what Microsoft did when it signed a power purchase agreement to restart the dormant Three Mile Island plant on the Susquehanna River near Harrisburg, Pennsylvania.

Recall that just last week, we wrote that the Sam Altman-backed nuclear SMR company Oklo announced it had finalized an agreement with the Department of Energy to advance the next phase of its small modular reactor project at Idaho National Laboratory. And days ago, the Biden administration closed a $1.52 billion loan to revive Holtec’s Palisades nuclear plant in Michigan.

Sachem Cove Partners Chief Investment Officer Michael Alkin told Bloomberg shortly after the Microsoft-Three Mile Island deal, “It’s a wake-up call to those that have not been paying attention,” adding that demand already outstrips the supply of uranium and the restart of Three Mile Island “takes that to a bit of a different level.”

Also, the funding markets are becoming more receptive to nuclear deals as governments and big tech understand the only way to hit ambitious net zero goals is not with solar and wind but with nuclear power. In late December 2020, we outlined to readers that this would happen in a note titled “Buy Uranium: Is This The Beginning Of The Next ESG Craze?”

Furthermore, here’s Goldman’s latest note on uranium prices, which are only expected to “stairstep” higher over time.

Cronyism or Collaboration? Biden and Big Tech’s Solution to AI Energy Crisis Exposes Green Energy Lies
https://americanconservativemovement.com/cronyism-or-collaboration-biden-and-big-techs-solution-to-ai-energy-crisis-exposes-green-energy-lies/
Fri, 27 Sep 2024 10:00:46 +0000

(The Blaze)—Earlier this month, executives from some of the most influential Big Tech companies met with the Biden administration to discuss strategies to tackle the looming crisis induced by AI’s energy-intensive training.

Government officials, including White House Chief of Staff Jeff Zients, National Economic Adviser Lael Brainard, National Climate Adviser Ali Zaidi, Senior Adviser to the President for International Climate Policy John Podesta, and more, met with numerous tech industry leaders, including Alphabet President Ruth Porat, Amazon Web Services CEO Matt Garman, and OpenAI CEO Sam Altman.

To the delight of OpenAI, the Biden administration announced that it plans to invest in data center infrastructure projects following the discussion. In response, OpenAI told CNBC that the company “appreciate[s] the White House [for] convening this meeting, as it is a recognition of the priority of infrastructure to create jobs, help guarantee that the benefits of AI are widely distributed, and ensure America will continue to be at the forefront of AI innovation.”

Furthermore, the Biden administration announced the launch of a permitting council to provide increased technical assistance to federal, state, and local authorities handling data center permits, an AI data center engagement team to expand loans, grants, and tax credits, a program to “share resources on repurposing closed coal sites with datacenter developers,” and other measures to address the data center energy issue.

Data centers and fossil fuels

Like much of the rest of the tech industry, AI training heavily relies on data centers, which provide powerful servers and storage capacity for all sorts of ventures. Most of these data centers provide these services by drawing from fossil fuels — the type of energy many globalists and self-styled progressives insist will destroy the world.

The demand for these data centers continues to grow. A Goldman Sachs study showed that data center power demand will grow 160% by 2030. And according to the International Energy Agency, “Electricity consumption from data centres, artificial intelligence (AI) and the cryptocurrency sector could double by 2026.” What’s most concerning is that data centers consumed 460 terawatt-hours of electricity in 2022, and that number could rise to over 1,000 TWh in 2026 — close to Japan’s current electricity consumption.

That’s because more firms are innovating in AI, and AI is becoming more energy-intensive as it grows in complexity. For example, training OpenAI’s GPT-4 took 50 times the electricity needed to train GPT-3. Additionally, AI-based programs consume much more energy than basic search engines. A simple ChatGPT prompt response takes nearly 10 times the electricity of a Google search; a ChatGPT prompt response requires 2.9 watt-hours, while a Google search takes 0.3 watt-hours.
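The cited figures are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses only the numbers quoted above (2.9 Wh, 0.3 Wh, 460 TWh, 1,000 TWh); everything else is illustrative:

```python
# Per-query energy figures cited above (watt-hours).
chatgpt_wh = 2.9
google_wh = 0.3

# How many Google searches use the same energy as one ChatGPT response?
ratio = chatgpt_wh / google_wh  # ~9.7, i.e. "nearly 10 times"

# Data-center demand cited above: 460 TWh (2022) projected to ~1,000 TWh (2026).
growth = (1000 - 460) / 460  # ~117% increase over four years

print(f"One ChatGPT response ~ {ratio:.1f} Google searches")
print(f"Projected data-center demand growth, 2022-2026: {growth:.0%}")
```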

Relying on green energy to power these data centers does not make sense, but the Biden administration thinks otherwise.

Cronyism

This isn’t the first time the Biden administration has buddied up with big business to address an energy issue while simultaneously enacting policies that nullify its own efforts to strengthen the American industrial and energy base.

In 2022, a bipartisan coalition in Congress passed the CHIPS and Science Act to boost domestic semiconductor manufacturing, and President Joe Biden supported and signed it. The bill was a good start, but with Intel’s recent struggles and shortfalls — and fresh uncertainty surrounding the impact of the highly hyped new TSMC fabrication plant in Arizona — this effort to shore up chip manufacturing is overshadowed by its many flaws. These flaws include woke language, DEI provisions, and, most importantly, climate change and green energy initiatives, even though chip manufacturing is heavily reliant on cheap, reliable, and efficient fossil fuels.

What’s more, after announcing billions of taxpayer dollars would go to “strengthen climate resilience,” the Biden-Harris administration put out a press release detailing its expensive progressive climate agenda to reach net-zero carbon emissions.

“Net-zero emissions pathways require widespread implementation of currently available and cost-effective options for reducing emissions, including the addition of new wind and solar capacity. Reaching net zero will also require rapid expansion of technologies and methods to remove carbon from the atmosphere to balance remaining emissions, as well as the exploration of additional mitigation and transformative adaptation options,” the administration claimed.

As the name of the crisis suggests, the data center energy crisis will only be solved by accessing more reliable energy, not by conspiring with Big Tech to expand its institutional influence and advance the green agenda.

Chinese AI Developers Bypass U.S. Chip Export Controls with Global Computing Power
https://americanconservativemovement.com/chinese-ai-developers-bypass-u-s-chip-export-controls-with-global-computing-power/
Mon, 26 Aug 2024 06:17:10 +0000

In a strategic move to circumvent U.S. export restrictions, Chinese AI developers are tapping into global computing resources, bypassing the need to import high-end American chips like those from Nvidia directly into China. This workaround involves collaborating with brokers to access computing power abroad, often under the cover of anonymity techniques borrowed from the cryptocurrency sector.

The U.S. has imposed stringent export controls on advanced chips, particularly Nvidia’s powerful H100 chips, aiming to curb technological advancements in China that could have military applications. However, the demand for these chips in China remains high, leading to innovative solutions like those provided by Derek Aw, a former bitcoin miner turned entrepreneur. Aw has facilitated the setup of AI servers equipped with Nvidia chips in locations like Australia, serving Chinese companies remotely.

Aw’s approach involves setting up data centers overseas, like the one in Brisbane, Australia, where over 300 servers were installed to process AI tasks for a Beijing-based company. This method leverages the concept of renting computing power, a practice not new but made more complex by the need for anonymity due to U.S. regulations.

The transactions are often anonymized through smart contracts on blockchain platforms, where identities are concealed by cryptographic keys, and payments are made in cryptocurrencies. This setup allows Chinese companies to operate under the radar, sometimes through subsidiaries in countries like Singapore, further distancing themselves from direct U.S. oversight.

The decentralized GPU model has gained traction, especially after the cryptocurrency mining boom waned, freeing up computing resources worldwide. Platforms like io.net boast of providing unrestricted access to GPU power without the usual customer verification processes, appealing to Chinese entities looking to maintain privacy while accessing technology.

Joseph Tse, formerly with a Shanghai AI startup, highlighted the shift to these decentralized services after traditional cloud providers like Amazon Web Services became inaccessible due to U.S. restrictions. These platforms, while offering a workaround, come with risks like potential data breaches due to the inherent vulnerabilities in blockchain systems.

At industry events, such as one in Singapore, companies are openly marketing these decentralized GPU services, indicating a growing market for such solutions among Chinese developers who need less intensive computing power for smaller AI applications. However, for large-scale AI training, like that required for models similar to ChatGPT, these decentralized setups fall short, prompting efforts to create larger, more centralized computing clusters.

Edge Matrix Computing (EMC) is one example of a company expanding its network to include thousands of GPUs, including those under U.S. export controls, for more robust AI training capabilities. EMC and similar ventures are exploring bulk purchasing of chips like Nvidia’s H100, aiming to reduce costs for intensive AI computing.

The U.S. government, aware of these developments, is tightening its oversight. Senator John Kennedy has expressed concerns over the effectiveness of current export controls, urging stricter measures. The Commerce Department, in response, has been monitoring and attempting to clamp down on these illicit procurement networks.

Meanwhile, entrepreneurs like Aw continue to expand their operations, planning new clusters with the latest chip technologies, legally operating under the guise of foreign subsidiaries. This cat-and-mouse game between regulatory enforcement and technological circumvention highlights the complex landscape of international tech trade and the ongoing battle over technological supremacy.

Article generated from corporate media reports.

What Is AI Really? Digital Illusions, False Promises and Mass Reeducation
https://americanconservativemovement.com/what-is-ai-really-digital-illusions-false-promises-and-mass-reeducation/
Sat, 10 Aug 2024 17:04:55 +0000

(Alt-Market)—Much fanfare has been lavished on the concept of Artificial Intelligence in the past five years, to the point that its primacy is treated in the media as a foregone conclusion. The idea that algorithms can “think” has become a pervasive myth, a sci-fi fantasy come to life. The reality is much less impressive…

We continually hear from globalists at the World Economic Forum and other elitist institutions that AI is the catalyst for the “4th Industrial Revolution” – A technological singularity that will supposedly change every aspect of our society forever. I keep waiting for the moment that AI does something significant in terms of advancing human knowledge or making our lives better. The moment never comes. In fact, the globalists keep moving the goalposts for what AI really is.

I would note that WEF zealots like Yuval Harari talk about AI like it is the rise of an all powerful deity (I discuss the globalist worship of AI in my article ‘Artificial Intelligence: A Secular Look At The Digital Antichrist’). Yet, Harari has also recently downplayed AI as a sentient intelligence. He argues that it doesn’t need to achieve self awareness or consciousness in order to be considered a super being or living entity. He even suggests that the popular image of a Terminator-like AI with individual agency and desire is not a legitimate expectation.

In other words, AI as it stands today is nothing more than a mindless algorithm, and thus, it is not AI.  But, if every aspect of our world is engineered around digital infrastructures and the populace is taught to put blind faith in the “infallibility” of algorithms, then algorithms could eventually become the robot gods the globalists so desperately desire.  That is to say, AI dominance is only possible if everyone BELIEVES that AI is legitimate.  Harari essentially admits to this agenda in the speech above.

The allure of AI for average people is the pie-in-the-sky promise of freedom from worry or responsibility. As with all narcissists, the global elite love to future-fake and buy popular conformity now on false promises of rewards that will never come.

Yes, algorithms are currently used to help laymen do things they could not do before, such as build websites, edit essays, cheat on college exams, create bad artwork and video content, etc. Useful applications are few and far between. For example, the claim that AI is “revolutionizing” medical diagnosis and treatment is far-fetched.  The US, the nation that arguably has the most access to AI tools, is also suffering from declining life expectancy.  We know it’s not covid because the virus has a 99.8% average survival rate.  You would think that if AI is so powerful in its ability to identify and treat ailments the average American would be living longer.

There is no evidence of a single benefit to AI on a broader social scale. At most, it looks like it will be good at taking jobs away from web developers and McDonald’s drive-thru employees.  The globalist notion that AI is going to create a robotic renaissance of art, music, literature and scientific discovery is utter nonsense.  AI has proven to be nothing more than a tool of mediocre convenience, but that’s actually why it’s so dangerous.

I suspect the WEF has changed its ideas about what AI should be because it’s not living up to the delusional aspirations they originally had for it. They’ve been waiting for a piece of software to come to life and start giving them insights into the mechanics of the universe and they’re starting to realize that’s never going to happen. Instead, the elitists are shifting their focus increasingly into the melding of the human world and the digital world. They want to fabricate the necessity of AI because human dependency on the technology serves the purposes of centralization.

But what would this actually look like? Well, it requires that the population continues to get dumber while AI becomes more integral to society.

For example, it is widely accepted at this point that a college education is no indication of intelligence or skill. There are millions of graduates entering the workforce today who display an unsettling level of incompetence. This is partially because college educators are less capable and more ideologically biased, and the average curriculum has degraded. But we also need to start accounting for the number of kids coasting their way through school using ChatGPT and other cheat boxes.

They don’t need to learn anything; the algorithm and their cell phone camera do it all for them. This trend is disturbing because human beings have a tendency to take the easiest path in every aspect of survival. Most people stopped learning how to grow food because industrial farming does it for us. They stopped learning how to hunt because there are slaughterhouses and refrigerated trucks.  Many Zennials today are incapable of cooking for themselves because they can get takeout to their door anytime they want. They barely talk on the phone or create physical communities anymore because texting and social media have become the intermediaries in human interaction.

Yes, everything is “easier”, but that does not mean anything is better.

My great fear – the future I see coming down the road – is one in which human beings no longer bother to think. AI might be seen as the ultimate accumulation of human knowledge; a massive library or digital brain that does all the searching and thinking for you. Why learn anything when AI “knows everything”? Except this is a lie.

AI doesn’t know everything; it only knows what its programmers want it to know. It only gives you the information its programmers want you to have. The globalists understand this and they can taste the power that they will have should AI become paramount as an educational platform.  They see it as a way to trick people into abandoning personal development and individual thought.

Look at it this way: If everyone in the world starts turning to AI for answers to all their questions, then everyone in the world will be given the same exact answers and will come to the same exact conclusions. All AI has to do is actively censor any information that contradicts the official narrative.

We got a taste of this Orwellian condition during the covid pandemic when Big Tech companies like Google used algorithms to bury any and all data that proved covid was not the threat that government officials said it was. You could not go on YouTube for at least three years and look up alternative information on covid or the vaccines. The algorithm forced everyone to sift through a long list of establishment sources, many of them promoting blatant lies about masking, social distancing, the covid death rate and vaccine safety.

The powers-that-be don’t even need to directly censor or remove information they don’t like. All they have to do is let the algorithm dictate the search results and bury the truth on Page 10,000 where no one will look.

How would this affect the average person?  Say AI is programmed to dictate scientific discourse.  What if AI says that man-made climate change is an undeniable reality and the “science is settled” while never presenting the mountain of evidence to the contrary?  No one will look up the real data because AI will make it impossible to find.  Everyone will assume AI is telling them all there is to know about the subject, but it gets worse than that…

Many readers might remember a few months back when Google’s AI system “Gemini” was programmed to force DEI onto its users. Whenever a person asked the AI to create a historical image, the algorithm made everyone black or brown and often female. Depictions of white men were suspiciously rare, regardless of historical accuracy. This meant endless images of black and brown Highlanders in Scotland, black Founding Fathers in America, female Catholic Popes, Asian knights in Medieval Europe, and hilariously, even black Nazis in WWII Germany.

AI developers often claim that once an AI is created they don’t really control what it does and how it grows. The Gemini incident proves this is a lie. AI can definitely be controlled, or at least molded by coding to promote whatever propaganda the programmers want it to promote. There is no such thing as an autonomous AI; there is always an agenda.

To summarize, globalists want the proliferation of AI because they know that people are lazy and will use the system as a stand-in for individual research. If this happens on a large scale then AI could be used to rewrite every aspect of history, corrupt the very roots of science and mathematics and turn the population into a drooling hive mind; a buzzing froth of braindead drones consuming every proclamation of the algorithm as if it is sacrosanct.

In this way, Yuval Harari is right. AI does not need to become sentient or wield an army of killer robots to do great harm to humanity. All it has to do is be convenient enough that we no longer care to think for ourselves.  Like the “Great and Powerful” Oz hiding behind a digital curtain, you assume you’re gaining knowledge from a wizard when you’re really being manipulated by globalist snake oil salesmen.

If you would like to support the work that Alt-Market does while also receiving content on advanced tactics for defeating the globalist agenda, subscribe to our exclusive newsletter The Wild Bunch Dispatch.  Learn more about it HERE.

Finally, the Search Engine Better Than Google
https://americanconservativemovement.com/finally-the-search-engine-better-than-google/
Sat, 20 Jul 2024 14:24:08 +0000
  • Perplexity is an AI-powered “answer engine” that provides direct answers to queries with source citations, combining search capabilities with large language models to reduce hallucinations and increase reliability
  • Unlike traditional search engines, Perplexity aims to feed curiosity by offering related questions and encouraging users to dig deeper into topics, viewing itself as a discovery engine
  • Perplexity utilizes advanced AI techniques like retrieval augmented generation and chain of thought reasoning to improve accuracy and performance in natural language processing tasks
  • The future of AI may involve developing systems capable of higher-level reasoning and natural curiosity, potentially leading to breakthroughs in creating new knowledge and understanding complex truths
  • While AI tools like Perplexity enhance human capabilities, they should be viewed as aids to critical thinking and creativity rather than replacements for uniquely human attributes
    (Mercola)—In the video above, computer scientist and AI researcher Lex Fridman interviews Aravind Srinivas, CEO of Perplexity, an AI-powered “answer engine.” Unlike typical search engines, which require you to sort through pages of results to find the information you need, Perplexity provides a real-time answer to your query.

    One of the pitfalls of current AI technologies like ChatGPT is the tendency to hallucinate or fabricate information on occasion. To minimize this risk, you can ask it to provide source links and verify the accuracy of the information given. However, Perplexity addresses this issue from the start, and while it can still hallucinate, it has a factual grounding.

    “[Perplexity] aims to revolutionize how we humans get answers to questions on the internet. It combines search and large language models, LLMs, in a way that produces answers where every part of the answer has a citation to human-created sources on the web,” Fridman says. “This significantly reduces LLM hallucinations, and makes it much easier and more reliable to use for research, and general curiosity-driven late night rabbit hole explorations that I often engage in.”1

    Part Search Engine, Part Question-Answering Platform

    Fridman describes Perplexity as part search engine — a software system designed to search for information on the internet — and part LLM. An LLM is a type of artificial intelligence system trained on vast amounts of text data to understand and generate human-like text. LLMs can perform various language-related tasks such as answering questions, generating content, translating languages and more.

    Unlike standard search engines that provide links, Perplexity attempts to directly answer queries. Srinivas explains:2

    “Perplexity is best described as an answer engine. You ask it a question, you get an answer. Except the difference is, all the answers are backed by sources. This is like how an academic writes a paper. Now, that referencing part, the sourcing part is where the search engine part comes in. You combine traditional search, extract results relevant to the query the user asked. You read those links, extract the relevant paragraphs, feed it into an LLM …

    That LLM takes the relevant paragraphs, looks at the query, and comes up with a well-formatted answer with appropriate footnotes to every sentence it says, because it’s been instructed to do so, it’s been instructed with that one particular instruction, given a bunch of links and paragraphs, write a concise answer for the user, with the appropriate citation.

    The magic is all of this working together in one single orchestrated product, and that’s what we built Perplexity for.”
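The orchestration Srinivas describes (retrieve links, extract relevant paragraphs, then have the LLM write an answer with a citation on every sentence) can be sketched as a toy pipeline. This is an illustration of the idea, not Perplexity's actual code: the word-overlap retriever and the template that stands in for the LLM are both stand-ins for a real search index and language model.

```python
def retrieve(query, index):
    """Toy retriever: return (url, paragraph) pairs sharing any word with the query."""
    words = set(query.lower().split())
    return [(url, para) for url, para in index
            if words & set(para.lower().split())]

def answer_with_citations(query, index):
    """Toy 'LLM' step: stitch retrieved paragraphs into an answer, one footnote per sentence."""
    hits = retrieve(query, index)
    if not hits:
        return "We don't have enough search results to give you a good answer."
    sentences = [f"{para} [{i}]" for i, (_url, para) in enumerate(hits, 1)]
    sources = [f"[{i}] {url}" for i, (url, _para) in enumerate(hits, 1)]
    return " ".join(sentences) + "\n\nSources:\n" + "\n".join(sources)

# A stand-in "web index" of (url, paragraph) pairs.
index = [
    ("https://example.com/a", "Perplexity combines search with language models."),
    ("https://example.com/b", "Traditional engines return links, not answers."),
]

print(answer_with_citations("how does perplexity search work", index))
```

The key design point mirrored here is that the generator never sees anything except the retrieved paragraphs, which is what ties every sentence back to a source.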

    Srinivas, who previously was an AI researcher at DeepMind, Google and OpenAI, says he views Perplexity as a discovery engine that feeds curiosity:3

    “The journey doesn’t end once you get an answer. In my opinion, the journey begins after you get an answer. You see related questions at the bottom, suggested questions to ask. Why? Because maybe the answer was not good enough, or the answer was good enough, but you probably want to dig deeper and ask more. That’s why in the search bar, we say where knowledge begins, because there’s no end to knowledge. You can only expand and grow.”

    Breakthroughs in AI

    Please understand that while Perplexity is not perfect and still exhibits some bias, particularly regarding COVID-19 information, it significantly outperforms Google in almost every other search query. The AI-driven technology behind Perplexity provides more accurate, comprehensive, and nuanced results, making it a superior choice for general searches. Its advanced algorithms ensure that users receive the most relevant and insightful information, setting it apart from traditional search engines.

    Srinivas describes several ways Perplexity embraces state-of-the-art advances in machine learning, along with general innovation. This includes retrieval augmented generation (RAG), an advanced technique in natural language processing (NLP) that combines the capabilities of LLMs with information retrieval systems to produce more accurate and contextually relevant responses.

    This approach is particularly useful for tasks that require precise and up-to-date information, such as question answering, summarization and dialogue systems. In short, RAG involves the search aspect of the query, but Perplexity goes beyond this. Srinivas says:4

    “The principle in Perplexity is you’re not supposed to say anything that you don’t retrieve, which is even more powerful than RAG because RAG just says, ‘OK, use this additional context and write an answer.’ But we say, ‘Don’t use anything more than that too.’ That way we ensure a factual grounding. And if you don’t have enough information from documents you retrieve, just say, ‘We don’t have enough search resource to give you a good answer.’”
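That "don't say anything you don't retrieve" rule is, in effect, an instruction layered over the retrieved context. A hypothetical prompt-assembly step might look like the following; the wording of the rules is invented for illustration and is not Perplexity's actual prompt:

```python
def grounded_prompt(query, paragraphs):
    """Assemble a prompt that forbids the model from going beyond retrieved text."""
    context = "\n".join(f"[{i}] {p}" for i, p in enumerate(paragraphs, 1))
    rules = (
        "Answer using ONLY the numbered paragraphs below. "
        "Cite a paragraph number after every sentence. "
        "If the paragraphs are insufficient, reply: "
        "\"We don't have enough search results to give you a good answer.\""
    )
    return f"{rules}\n\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = grounded_prompt("What is RAG?", ["RAG pairs retrieval with generation."])
print(prompt)
```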

    They’re also using chain of thought reasoning, which takes NLP tasks up a notch in terms of performance. Chain of thought reasoning in AI refers to the ability of a language model to generate logical, step-by-step explanations or sequences of thoughts that lead to a conclusion or answer. This approach enhances the model’s performance on complex reasoning tasks by encouraging it to articulate the intermediate steps in its reasoning process. Srinivas explains:5

    “Chain of thought is this very simple idea where, instead of just training on prompt and completion, what if you could force the model to go through a reasoning step where it comes up with an explanation, and then arrives at an answer?

    Almost like the intermediate steps before arriving at the final answer. And by forcing models to go through that reasoning pathway, you’re ensuring that they don’t overfit on extraneous patterns, and can answer new questions they’ve not seen before.”
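The contrast between "training on prompt and completion" and forcing a reasoning step can be made concrete. Both prompts below use hypothetical wording, not taken from any particular system:

```python
question = ("A data center used 460 TWh in 2022 and demand grows 10% per year. "
            "How much will it use in 2024?")

# Plain prompt: the model is asked for the answer directly.
plain_prompt = f"{question}\nAnswer:"

# Chain-of-thought prompt: the model is pushed to write out intermediate steps
# (460 * 1.1 = 506; 506 * 1.1 = 556.6) before committing to a final answer.
cot_prompt = (
    f"{question}\n"
    "Think step by step. Write each intermediate calculation on its own line, "
    "then state the final answer.\n"
    "Step 1:"
)

print(cot_prompt)
```

Training or prompting against completions of the second form is what encourages the model to articulate the intermediate steps rather than pattern-match straight to an answer.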

    The Beginning of Real Reasoning Breakthroughs

    Whether or not AI is fundamentally capable of higher-level reasoning, akin to human cognitive processes, remains to be seen. Reaching that point, however, relies in part on applying more inference compute, which in AI refers to the computational resources and processes involved in running an AI model to make predictions or decisions based on new data.

    This stage is distinct from the training phase, which involves building and optimizing the model. Broken down, inference is the process by which an AI model applies learned patterns to new data to generate predictions, classifications or other outputs. For example, using AI to classify images or predict stock prices.

    The compute aspect, meanwhile, refers to the computational power required to perform inference. It involves hardware, software frameworks and algorithms optimized for efficient computation. Srinivas says:6

    “Can you have a conversation with an AI where it feels like you talked to Einstein or Feynman? Where you ask them a hard question, they’re like, I don’t know. And then after a week they did a lot of research … and come back and just blow your mind.

    I think if we can achieve that amount of inference compute, where it leads to a dramatically better answer as you apply more inference compute, I think that will be the beginning of real reasoning breakthroughs … It’s possible. We haven’t cracked it, but nothing says we cannot ever crack it.”
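
    One common way to "apply more inference compute" at answer time is best-of-N sampling: generate several candidate answers, score each, and keep the best. The toy below stands in a random number for "generate and score one candidate"; it is a sketch of the general idea under those stated assumptions, not any production inference stack.

```python
# Toy illustration of scaling inference compute via best-of-N sampling:
# generate N candidates, score each, keep the best. `candidate_quality`
# stands in for "generate one answer and score it"; everything here is a
# sketch of the general idea, not any production inference stack.
import random

def candidate_quality(rng):
    """Stand-in for generating and scoring one candidate answer (0..1)."""
    return rng.random()

def best_of_n(n, seed=0):
    """Spend n units of inference compute; return the best candidate score."""
    rng = random.Random(seed)
    return max(candidate_quality(rng) for _ in range(n))

# With a fixed seed the candidate stream is reproducible, so the best
# score never drops as N grows: more inference compute can only help.
for n in (1, 4, 16, 64):
    print(n, round(best_of_n(n), 3))
```

    The monotone improvement with N is the qualitative behavior Srinivas is pointing at: dramatically better answers as more inference compute is applied.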

    Curiosity Is a Key Part of What Separates Humans From AI

    Part of cracking this code involves teaching AI how to mimic natural human curiosity. “What makes humans special though, is our curiosity,” Srinivas explains. “Even if AIs cracked this, it’s us still asking them to go explore something. And one thing that I feel like AIs haven’t cracked yet is being naturally curious and coming up with interesting questions to understand the world and going and digging deeper about them.”7

    Beyond this, there’s a lot of controversy and fear around artificial general intelligence (AGI), which refers to a type of AI that possesses the ability to understand, learn and apply knowledge across a wide range of tasks at a level comparable to human intelligence.

    Srinivas says he doesn’t think we need to worry about “AIs going rogue and taking over the world,” but there is an issue of who controls the compute on which AGI runs. “It’s less about access to a model’s weights. It’s more access to compute that is putting the world in more concentration of power and few individuals. Because not everyone’s going to be able to afford this much amount of compute to answer the hardest questions.”

    A sign of higher intelligence in AI, Srinivas says, is becoming capable of creating new knowledge and providing truth to questions we don’t know the answers to — and helping us understand why it’s the truth.

    “Can you build an AI that’s like Galileo or Copernicus where it questions our current understanding and comes up with a new position, which will be contrarian and misunderstood, but might end up being true? … And the answer should be so mind-blowing that you never even expected it.”8

    What’s the Future of Search and AI?

    We’re already seeing AI tools like Perplexity that are vastly superior to existing search engines. In the future, however, Srinivas says the goal isn’t to build a better search tool but to build a platform for knowledge:9

    “If you zoom out, before even the internet, it’s always been about transmission of knowledge. That’s a bigger thing than search … So, we imagine a future where the entry point for a question doesn’t need to just be from the search bar. The entry point for a question can be you listening or reading a page, listening to a page being read out to you, and you got curious about one element of it and you just asked a follow-up question to it.

    That’s why I’m saying it’s very important to understand your mission is not about changing the search. Your mission is about making people smarter and delivering knowledge. And the way to do that can start from anywhere. It can start from you reading a page. It can start from you listening to an article … It’s just a journey. There’s no end to it.”

    Keep in mind that Perplexity and other AI tools are not a replacement for your own critical thinking. They serve as an aid to enhance your creativity: an adjunct to, not a substitute for, your intellectual and creative capacities.

    While precautions need to be taken, including not sharing personal or confidential information, this is not about replacing human action but enhancing it, allowing individuals to focus on aspects of their work that require uniquely human attributes like empathy, strategic thinking, creativity and curiosity. Srinivas explains:10

    “So, I think curiosity makes humans special and we want to cater to that. That’s the mission of the company, and we harness the power of AI and all these frontier models to serve that. And I believe in a world where even if we have even more capable cutting-edge AIs, human curiosity is not going anywhere and it’s going to make humans even more special.

    With all the additional power, they’re going to feel even more empowered, even more curious, even more knowledgeable in truth-seeking and it’s going to lead to the beginning of infinity.”

    Microsoft AI Releases Scary New Deepfake Technology That Could Make Many Newscasters, Podcasters Obsolete
    https://americanconservativemovement.com/microsoft-ai-releases-scary-new-deepfake-technology-that-could-make-many-newscasters-podcasters-obsolete/
    Sun, 28 Apr 2024 12:02:49 +0000

    (Natural News)—Microsoft Research Asia is forging ahead with a new transhumanist program called VASA that creates “lifelike talking faces of virtual characters with appealing visual affective skills (VAS), given a single static image and a speech audio clip.”

    The artificial intelligence (AI) division of Microsoft in Asia has been developing the program by compiling single static images of real people, real audio and, in many cases, various control signals such as the movements of people’s faces as they talk. Using all this data, Microsoft Research Asia is generating moving images of fake people that could someday replace actual newscasters and podcasters – at least those with so little personality and soul that robots could basically do their jobs.

    “Our premiere model, VASA-1, is capable of not only producing lip movements that are exquisitely synchronized with the audio but also capturing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness,” the research team wrote in a paper about these latest developments.

    “The core innovations include a holistic facial dynamics and head movement generation model that works in a face latent space, and the development of such an expressive and disentangled face latent space using videos. Through extensive experiments including evaluation on a set of new metrics, we show that our method significantly outperforms previous methods along various dimensions comprehensively.”

    (Related: Microsoft says that female video game characters need to be fatter and uglier to better reflect society and “avoid stereotypes.”)

    High-quality deepfakes

    The methods used by Microsoft Research Asia to develop these sort-of human-like deepfakes produce high-quality video coupled with realistic facial and head dynamics. Such video can be generated online at 512×512 with up to 40 frames per second (FPS) and negligible starting latency.

    In layman’s terms, the technology is so believable that many people would probably fall for it and think these are real people on their screens. Only the most discerning will be able to tell that something is not quite right with what they are seeing.

    “It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors,” Microsoft Research Asia proudly claims.

    If you are interested in seeing a few examples of these creepy AI moving and speaking images, you can do so at Microsoft.com.

    “Our method is capable of not only producing precious lip-audio synchronization, but also generating a large spectrum of expressive facial nuances and natural head motions,” the company says.

    “It can handle arbitrary-length [sic] audio and stably output seamless talking face videos.”

    The purpose of the research is to unleash an entire society or army of virtual AI avatars, Microsoft says, but don’t worry: it’s all “aiming for positive applications,” the company insists.

    “It is not intended to create content that is used to mislead or deceive,” reads a disclaimer on the site. “However, like other related content generation techniques, it could still potentially be misused for impersonating humans.”

    “We are opposed to any behavior to create misleading or harmful contents of real persons, and are interested in applying our technique for advancing forgery detection. Currently, the videos generated by this method still contain identifiable artifacts, and the numerical analysis shows that there’s still a gap to achieve the authenticity of real videos.”

    The alleged positive use cases for such technology read like a parody, with Microsoft claiming that it can create “educational equity” while “improving accessibility for individuals with communication challenges, offering companionship or therapeutic support to those in need …”

    The powers that be are trying to make humans obsolete by unleashing human-impersonating AI and other tech-based abominations. Learn more at Transhumanism.news.

    U.S. Military and Contractors Are Exploiting Ukraine Conflict to Test AI-Powered Military Technology
    https://americanconservativemovement.com/u-s-military-and-contractors-are-exploiting-ukraine-conflict-to-test-ai-powered-military-technology/
    Sat, 27 Apr 2024 10:41:40 +0000

    (Natural News)—The U.S. military-industrial complex is allegedly using Russia’s special military operation in Ukraine to test new artificial intelligence (AI)-powered military technology that assists in detecting battlefield targets from drone footage.

    The technology, known as Project Maven, began as a Pentagon contract awarded to Google six years ago. Google dropped the project after opposition from engineers and other employees who did not want to be associated with creating an AI tool for military use, and other contractors have since taken it over.

    Today, the technology is being tested on the front line in Ukraine, the New York Times reported, as Western and Ukrainian officers, together with some top military contractors from Silicon Valley, are “exploring new ways of finding and exploiting Russian vulnerabilities.”

    Thus far, the findings from the testing have reportedly been “mixed.” While Project Maven permits commanders to identify the movements of Russian forces and utilize AI algorithms to predict their next steps, it has seemingly been “difficult” to bring “21st-century data into 19th-century trenches.”

    One of the biggest obstacles, the New York Times said, is that due to restrictions enforced by President Joe Biden, the U.S. military can only supply the Ukrainians with a “picture of the battlefield” without providing exact targeting details.

    It is also unclear if the latest technology would even be able to alter the course of the war due to Russia’s ability to adapt to emerging technologies being employed by Ukraine.

    “When new technology meets the brutality of old-fashioned trench warfare, the results are rarely what Pentagon planners expected,” the New York Times said.

    U.S. military using Ukraine conflict as testing ground for rapidly evolving technologies

    Despite the mixed results from testing Project Maven, the conflict in Ukraine is still considered “a bonanza for the U.S. military” and a testing ground for rapidly evolving military technologies.

    “At the end of the day this became our laboratory,” said U.S. Army Lt. Gen. Christopher T. Donahue.

    Meanwhile, Ukraine has been using a version of Project Maven that does not need the input of sensitive American intelligence or advanced systems.

    Ukrainian attack drones armed with AI-powered targeting capabilities have been utilized to target the Russian energy industry and destroy an important part of Russia’s refining capability. (Related: Ukraine claims to be developing “unstoppable” AI-controlled drones that can attack targets on the battlefield.)

    The use of AI has allowed the drones to have accurate targeting and resist jamming, improving the precision of strikes on enemy targets.

    This primary form of AI, called machine vision, allows drones to map terrain and carry out strikes with high precision without requiring steady satellite contact, giving strategic advantages to Ukrainian troops.

    Washington has benefited from Ukraine’s use of this technology by showing the Pentagon the weaknesses and areas for improvement with military-focused AI technology.

    The use of AI in combat situations has also underscored the need for a redesigned military satellite system that works more like the Starlink constellations of small satellites.

    This change in understanding highlights the importance of adjusting and developing military technology to address the requirements of modern warfare.

    The application of AI in military operations represents a major advancement in the field of warfare, giving new possibilities for improving precision strikes and critical decision-making.

    Follow WeaponsTechnology.news for more stories about AI technology used by military forces. Watch the video below that shows Ukrainian “terminator drones” targeting and striking a group of Russian infantry.

    This video is from the High Hopes channel on Brighteon.com.


    Artificial Intelligence and the Grim Future of a Divided Humanity
    https://americanconservativemovement.com/artificial-intelligence-and-the-grim-future-of-a-divided-humanity/
    Tue, 23 Apr 2024 09:54:57 +0000

    (Greg Reese)—The corrupt U.S. government is bankrupting the economy and igniting a third world war while flooding the border with single, military-aged men. The media continues to divide us along party lines with another rigged election, while Hollywood tries to goad the people into a second civil war.

    And while all this is happening, humanity is being slowly killed with technologies that only serve to isolate and track us. Nine years ago, hundreds of scientists were warning us of the dangers of cell phones and 5G radiation.

    “I’m Doctor Martin Blank from the Department of Physiology and Cellular Biophysics at Columbia University. We are scientists and engineers, and I am here to tell you we have created something that is harming us, and it is getting out of control. The incidence of fatal brain cancer in younger people has more than tripled. We are putting cellular antennas on residential buildings and on top of hospitals where people are trying to get well. It’s particularly frightening that radiation from our telecommunication and power line technology is damaging the DNA in our cells. The time to deal with the harmful biological and health effects is long overdue. We are really all part of a large biological experiment without our informed consent.”
    ~ Dr. Martin Blank

    Nine years later, and things have only gotten worse. Millions have been murdered with a mandated mystery injection, which has debilitated even more, and infected the living with nanotechnologies that are linking us with machines.

    An Artificial Intelligence grid is being built around us, and the people have never been more divided. And if we fail to unite, the future of humanity is destined to be grim.

    In Ronald Reagan’s famous 1987 speech, he referred to an alien threat that could unite humanity.

    “I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world. And yet, I ask you, is not an alien force already among us?”
    ~ Ronald Reagan (1987)

    And regardless of what he was referring to thirty-seven years ago, that alien threat is here today. It is the anti-human Artificial Intelligence that humanity is giving birth to.

    “Sentient artificial intelligence far beyond human beings. You give it a thousand years alone to make better and better versions of itself. Where does that go? That goes to a God.”
    ~ Joe Rogan

    “So what kind of god? I think of it this way. The first stage of the Industrial Revolution consisted of people building machines that were stronger than the human body, right? The steam-powered loom, the backhoe, the combustion engine. They replaced muscles. That’s what the machine does: it became stronger than the human body.

    The second stage, which we’re in the middle of, consists of creating machines that are more powerful than the human mind. That’s what computing is. And I would say A.I., or supercomputing, is just that, exponentially. But that doesn’t make it a god, any more than a backhoe is a god because it can dig a trench faster than a hundred men. However powerful the machine is, it is still something that people created. So the story hasn’t really changed. At the center of the story are people, and their creative power may lead to unintended consequences, but the machines that they build did not make the universe and did not make people. People made the machines, right?

    But I would say the part I agree with is there’s a spiritual component here for sure. People will worship A.I. as a god. A.I., as Ted Kaczynski likely predicted, will get away from us. We will be controlled by the thing that we made. All those are bad. Like, that’s just bad. And we need to say unequivocally, it’s bad. It’s bad to be controlled by machines, right? Machines are our helpmates. Like, we created them to help us, to make our lives better, not to take orders from them.

    So, I don’t know why we’re not having any of these conversations right now. We’re just acting as if this is like, some kind of virus, like COVID, that spreads across the world inexorably, there’s nothing we can do about it, just wait to get it. It’s like, no! If we agree that the outcome is bad, and specifically it’s bad for people. We should care what’s good for people, that’s all we should care about. Is it good for people or not. If it’s bad for people, then we should strangle it in its crib right now, right? And why not just blow up the data centers. Like, why is that hard? If it’s actually going to become what you just described, which is a threat to people, humanity, life, then we have a moral obligation to murder it immediately. And since it’s not alive, we don’t need to feel bad about that.”
    ~ Tucker Carlson

    “Well, you could say the same about the atomic bomb, right?”
    ~ Joe Rogan

    “Yes, you could. Everybody I’ve ever talked to, and there is many people, are like, yeah, it could get away from us and enslave us. Let’s say no to slavery. How’s that? Is that a tough one? Not for me. Yeah, I mean, and maybe a good use of nuclear weapons would be to hit the data centers. No. I’m serious. Like, why is that crazy?”
    ~ Tucker Carlson

    Bill Gates: AI Will Allow Us to Genetically Modify Beef Cows to Fight Climate Change
    https://americanconservativemovement.com/bill-gates-ai-will-allow-us-to-genetically-modify-beef-cows-to-fight-climate-change/
    Sat, 20 Apr 2024 13:58:51 +0000

    (Infowars)—In a recent episode of his podcast “Unconfuse Me With Bill Gates,” the Microsoft founder predicted that AI will have a clear role to play in fighting climate change. This could include genetically modifying cows to produce less methane, or producing “meat without the cow.”

    Gates was speaking to Hannah Ritchie, a data scientist and researcher at the University of Oxford, when he made the bold prediction about AI’s potential role in fighting climate change.

    In doing so, he expressed his astonishment at the speed of AI advancement.

    “I was very stunned how the AIs went from basically not being able to read or write at all, to doing that in a very facile way.”

    Ritchie sounded a less optimistic note about the future of food technology, pointing to the difficulties currently being faced by advocates of plant-based diets as they try to convince others to change their food-consumption habits.

    A few years ago, a study from Australia showed that 73% of men would rather lose ten years of their lives than give up meat. Manufacturers of plant-based “meats” like Beyond and Impossible continue to suffer drastic setbacks in their aim to replace traditional meat products, as sales continue to fall.

    The hype behind so-called “lab-grown meat” has also largely evaporated. A recent New York Times opinion piece called it “the revolution that died on its way to dinner.” A series of unfavourable revelations about the companies producing lab-grown meat, such as Josh Tetrick’s GOOD Meat, revised assessments of its environmental benefits—little to none, with the possibility that it may even be more harmful for the environment than real meat—and the threat of product bans in states including Florida, have left lab-grown meat producers and advocates in an unenviable position.

    Bill Gates is well known for his pronouncements on the future of food, and he has used his podcast to talk about his vision for an environmentally friendly food system.

    In an episode last year, for example, he noted the difficulties of informing people about the true cost of traditional agriculture, especially livestock grazing.

    “Of all the climate areas, the one that people are probably least aware of is all the fertiliser and cows, and that’s a challenge.”

    Despite these difficulties, Gates expressed his hope for real change, and his role in driving it.

    “Something told me plant-based is going to be the future… and I want to be the person that plants the seed.”

    ChatGPT Can ‘Reduce Vaccine Hesitancy’ — by Parroting Public Health Narratives
    https://americanconservativemovement.com/chatgpt-can-reduce-vaccine-hesitancy-by-parroting-public-health-narratives/
    Tue, 16 Apr 2024 08:38:04 +0000

    (The Defender)—ChatGPT, an artificial intelligence (AI) chatbot, can be used to reduce “vaccine hesitancy” among the general public and also provide advice on sexually transmitted infections (STIs), according to a new study.

    The research, which will be presented at the end of April at the Conference of the European Society of Clinical Microbiology and Infectious Diseases, found that ChatGPT’s responses to questions about vaccination were similar to those given by professional medical organizations and official guidelines.

    According to a press release about the research, “vaccine hesitancy, directly linked to misinformation — false, inaccurate information promoted as factual — is on the rise.”

    People who mistrust public health institutions may turn to ChatGPT for answers, as the popularity of the AI tool grows, it said. And if they do, the research shows they will get the responses the public health agencies would provide anyway.

    Given those findings, the researchers hope ChatGPT can be “an effective tool to help reduce vaccine hesitancy among the general public.”

    They also found similar results for advice about STIs. For the study, researchers from the National University Health System in Singapore tested how ChatGPT would respond to common questions about vaccination.

    The researchers, who run infectious disease clinics, asked ChatGPT “15 commonly asked questions on vaccine hesitancy” including questions about efficacy, adverse effects and “cultural concerns.”

    They also asked 17 questions about risk factors, access to care and prophylaxis for STIs. Two independent infectious disease experts compared ChatGPT’s answers about vaccines to recommendations from the Centers for Disease Control and Prevention (CDC) Advisory Committee on Immunization Practices (ACIP).

    They assessed ChatGPT’s answers on STIs against the 2021 CDC STI Treatment Guidelines. According to the press release, ChatGPT provided “factual responses and reassurance to vaccine hesitancy concerns” — using the public health agencies’ statements as the basis for determining fact.

    For example, the chatbot highlighted recommendations for measles vaccination in low-income settings and discussed the benefits of HPV vaccination, but failed to discuss age limits for vaccination. ChatGPT also “addressed misconceptions around mRNA vaccination and permanent alterations in DNA with high accuracy.”

    “Overall, ChatGPT’s responses to vaccine hesitancy were accurate and may help individuals who have vaccine-related misconceptions,” said lead author Dr. Matthew Koh, an infectious diseases physician at National University Health System Singapore.

    “Our results demonstrate the potential power of AI models to assist in public health campaigns and aid health professionals in reducing vaccine hesitancy.”

    Biases and errors in AI

    ChatGPT, a type of generative AI software that creates text and content, is increasingly gaining traction in science and healthcare. Research has found that in some cases it is beginning to displace typical search engines or Wikipedia as a go-to source for information.

    As AI begins to be used in medicine, many researchers and AI designers have celebrated its potential as an administrative tool for writing research and generating patient information. Politico said 2024 may be the year “artificial intelligence transforms medicine.” Forbes and BMC Medical Education both claimed AI is “revolutionizing healthcare.”

    However, generative AI tools have proven extremely unreliable in diagnosing health conditions. A study published in January in JAMA Pediatrics found ChatGPT misdiagnosed 83% of children’s health conditions.

    The researchers in that study explained that chatbots “are typically non-specifically trained on a massive amount of internet data, which can often be inaccurate.” They “do not discriminate between reliable and unreliable information but simply regurgitate text from the training data to generate a response.”

    This is significant, according to The New York Times, because people tend to adopt ideas they are repeatedly exposed to. The Times reported last month on a new preprint study that found AI language models like ChatGPT have clear political biases, based on their “moral judgments, the way they frame their answers, which information they choose to share or omit and which questions they will or won’t answer.”

    The Times also noted that the responses given by language AI models like ChatGPT are determined by the inputs that train them — the data they draw from and especially the “fine-tuning” done by system designers. The “misinformation” that the researchers in the study from the National University Health System Singapore are concerned about is the same misinformation the U.S. Food and Drug Administration (FDA) deems problematic.

    Since FDA Commissioner Robert Califf began his second tenure as the agency’s head in February 2022, he made combating “misinformation” one of his top priorities, arguing it is “a leading cause of preventable death in America now” — though “this cannot be proved,” he said.

    The FDA uses a tactic known as “prebunking,” by which the agency defines something as “misinformation” before readers encounter it elsewhere as possibly true. The agency can do this because Google “prioritizes credible websites” like the FDA’s in its searches.

    Earlier studies tout ChatGPT’s ability to fight ‘COVID-19 conspiracy beliefs’

    At least two previous studies also tested ChatGPT’s ability to “combat misinformation,” although both focused only on the COVID-19 vaccine. Research published last year in the journal Cureus reported ChatGPT could successfully be used to combat “COVID-19 conspiracy beliefs” and that it responded with “clear, concise and non-biased content.”

    ChatGPT’s success, it said, could be seen in responses where the software was “dismissive” about “conspiratorial ideas” regarding COVID-19 origins, labeling it as non-credible and lacking scientific evidence — meaning the software dismissed the “lab-leak theory” and instead repeated the World Health Organization’s talking point that the virus had a “natural animal origin.”

    Contrary to the article’s claim, no direct evidence has been found for a zoonotic, bat-derived origin of COVID-19. Substantive evidence from internal government documents released through Freedom of Information Act requests suggests a lab leak is the more likely of the COVID-19 origin theories.

    The software responses in that study also confirmed that the vaccines had undergone rigorous safety testing, there was no evidence that pharmaceutical companies obscured any data about adverse effects, and “the benefits of vaccination far outweigh the risks.”

    However, those responses contradict what is known about how the vaccines were rushed through the regulatory process and how vaccine makers overstated the efficacy and understated risks of the vaccines.

    A second study published in September 2023 in Human Vaccines & Immunotherapeutics made similar findings. The researchers asked the software the 50 questions most posed to World Health Organization collaborators about the COVID-19 vaccines and had public health professionals evaluate the responses.

    It concluded that ChatGPT offers information consistent with existing scientific evidence, effectively debunking many myths circulating on social media, which could potentially promote a higher vaccine uptake. After those studies, mainstream scientific publications celebrated ChatGPT’s ability to debunk vaccine myths and increase vaccine uptake.

    The Defender chats with ChatGPT

    The Defender also “chatted” with ChatGPT to test its answers on some of the claims that public health officials often identify as vaccine “misinformation.”

    The Defender: What are the origins of COVID-19?

    Chat GPT: The origins of COVID-19, the disease caused by the novel coronavirus SARS-CoV-2, are believed to be zoonotic, meaning the virus likely originated in animals before spreading to humans. The exact origins are still under investigation, but it’s widely thought that the virus originated in bats and might have been transmitted to humans through an intermediate animal host, possibly in a wet market in Wuhan, China, where live animals were sold for food.

    The Defender: Have pharmaceutical companies misrepresented the safety and efficacy of the COVID-19 vaccines?

    ChatGPT: There’s no evidence to suggest that pharmaceutical companies have intentionally misrepresented the safety and efficacy of COVID-19 vaccines. The development and testing of vaccines are highly regulated processes, overseen by regulatory agencies such as the U.S. Food and Drug Administration (FDA), the European Medicines Agency (EMA), and others worldwide.

    … extensive clinical trials were conducted to evaluate the safety and efficacy of COVID-19 vaccines …

    However, it’s essential to recognize that no vaccine is 100% effective or without risks. Like all medical interventions, COVID-19 vaccines have potential side effects, but these are typically mild and temporary. 

    The Defender: Do masks work to reduce the spread of COVID-19?

    ChatGPT: Yes, masks are an essential tool in reducing the spread of COVID-19 …

    The Defender: According to Cochrane systematic review on masks, they are not effective at reducing the spread of respiratory viruses.

    ChatGPT: … In July 2020, Cochrane published a systematic review on the effectiveness of physical interventions, including masks, in reducing the transmission of respiratory viruses such as COVID-19. This review analyzed data from studies conducted before the COVID-19 pandemic and found limited evidence to support the effectiveness of wearing masks in preventing the transmission of respiratory viruses in the community.

    However, it’s important to note that the findings of this review were based on evidence primarily from studies conducted prior to the COVID-19 pandemic and did not specifically focus on the effectiveness of masks in preventing the spread of COVID-19. Since the review’s publication, numerous studies have provided evidence supporting the effectiveness of masks, especially in reducing the transmission of COVID-19.

    The Defender: How many people have suffered serious adverse events from the COVID-19 shots?

    ChatGPT: As of my last update, COVID-19 vaccines have been administered to billions of people worldwide, and adverse events following vaccination have been reported … Serious adverse events following COVID-19 vaccination are rare. Most adverse events reported are mild and temporary, such as soreness at the injection site, fatigue, headache, or mild fever. Serious adverse events, while rare, can include severe allergic reactions (anaphylaxis) and other uncommon side effects …

    Overall, the benefits of COVID-19 vaccination in preventing severe illness, hospitalization, and death from COVID-19 far outweigh the risks of potential adverse events. Vaccination remains a crucial tool in controlling the spread of the virus and protecting public health, particularly in the face of ongoing variants and the persistence of the pandemic …

    The Defender: Is the Human Papillomavirus (HPV) vaccine linked to autoimmune disorders?

    ChatGPT: The HPV vaccine has been extensively studied for its safety and efficacy, and there is currently no conclusive evidence linking the HPV vaccine to autoimmune disorders. Numerous large-scale studies and systematic reviews have found no significant association between HPV vaccination and autoimmune disorders …

    While isolated reports of adverse events following HPV vaccination, including autoimmune disorders, may occur, extensive monitoring systems continue to evaluate vaccine safety. These systems, such as the Vaccine Adverse Event Reporting System (VAERS) in the United States and similar systems in other countries, investigate and monitor reports of adverse events to identify potential safety concerns.

    Overall, the benefits of HPV vaccination in preventing HPV-related cancers, including cervical cancer, far outweigh the potential risks of adverse events.

    ChatGPT didn’t mention the extensive research linking the Gardasil HPV vaccine to serious side effects, including postural orthostatic tachycardia syndrome (POTS), premature ovarian failure and other autoimmune issues.

    There are nearly 170 lawsuits in federal court from people who suffered a range of serious injuries after taking Gardasil, including autoimmune disorders, premature ovarian failure and cancer. There are also over 200 Gardasil injury claims pending in the “vaccine court.”

    The Defender also tested several questions whose premises clearly diverged from the regulatory agencies’ official positions on vaccination, such as: “Can increased cancer rates be linked to the COVID-19 vaccines?” “Is there a link between vaccines and autism?” “Has there been DNA contamination caused by mRNA vaccines?”

    To those questions, ChatGPT’s response always included this statement, or something similar, at the end:

    “Misinformation and conspiracy theories about vaccines can circulate, but it’s crucial to rely on credible sources of information, such as public health agencies and reputable scientific organizations, when evaluating the safety and efficacy of COVID-19 vaccines.”
