ChatGPT – American Conservative Movement
https://americanconservativemovement.com
American exceptionalism isn't dead. It just needs to be embraced.

ChatGPT Can ‘Reduce Vaccine Hesitancy’ — by Parroting Public Health Narratives
Tue, 16 Apr 2024

(The Defender)—ChatGPT, an artificial intelligence (AI) chatbot, can be used to reduce “vaccine hesitancy” among the general public and also provide advice on sexually transmitted infections (STIs), according to a new study.

The research, which will be presented at the end of April at the Conference of the European Society of Clinical Microbiology and Infectious Diseases, found that ChatGPT’s responses to questions about vaccination were similar to those given by professional medical organizations and official guidelines.

According to a press release about the research, “vaccine hesitancy, directly linked to misinformation — false, inaccurate information promoted as factual — is on the rise.”

As the popularity of the AI tool grows, people who mistrust public health institutions may turn to ChatGPT for answers, it said. And if they do, the research shows they will get the same responses the public health agencies would provide anyway.

Given those findings, the researchers hope ChatGPT can be “an effective tool to help reduce vaccine hesitancy among the general public.”

They also found similar results for advice about STIs. For the study, researchers from the National University Health System in Singapore tested how ChatGPT would respond to common questions about vaccination.

The researchers, who run infectious disease clinics, asked ChatGPT “15 commonly asked questions on vaccine hesitancy” including questions about efficacy, adverse effects and “cultural concerns.”

They also asked 17 questions about risk factors, access to care and prophylaxis for STIs. Two independent infectious disease experts compared ChatGPT’s answers about vaccines to recommendations from the Centers for Disease Control and Prevention (CDC) Advisory Committee on Immunization Practices (ACIP).

They assessed ChatGPT’s answers on STIs against the 2021 CDC STI Treatment Guidelines. According to the press release, ChatGPT provided “factual responses and reassurance to vaccine hesitancy concerns” — using the public health agencies’ statements as the basis for determining fact.

For example, the chatbot highlighted recommendations for measles vaccination in low-income settings and discussed the benefits of HPV vaccination, but failed to discuss age limits for vaccination. ChatGPT also “addressed misconceptions around mRNA vaccination and permanent alterations in DNA with high accuracy.”

“Overall, ChatGPT’s responses to vaccine hesitancy were accurate and may help individuals who have vaccine-related misconceptions,” said lead author Dr. Matthew Koh, an infectious diseases physician at National University Health System Singapore.

“Our results demonstrate the potential power of AI models to assist in public health campaigns and aid health professionals in reducing vaccine hesitancy.”

Biases and errors in AI

ChatGPT, a type of generative AI software that creates text and content, is increasingly gaining traction in science and healthcare. Research has found that in some cases it is beginning to displace typical search engines or Wikipedia as a go-to source for information.

As AI begins to be used in medicine, many researchers and AI designers have celebrated its potential as an administrative tool for writing research and generating patient information. Politico said 2024 may be the year “artificial intelligence transforms medicine.” Forbes and BMC Medical Education both claimed AI is “revolutionizing healthcare.”

However, generative AI tools have proven extremely unreliable in diagnosing health conditions. A study published in January in JAMA Pediatrics found ChatGPT misdiagnosed 83% of children’s health conditions.

The researchers in that study explained that chatbots “are typically non-specifically trained on a massive amount of internet data, which can often be inaccurate.” They “do not discriminate between reliable and unreliable information but simply regurgitate text from the training data to generate a response.”

This is significant, according to The New York Times, because people tend to adopt ideas they are repeatedly exposed to. The Times reported last month on a new preprint study that found AI language models like ChatGPT had clear political biases based on their “moral judgments, the way they frame their answers, which information they choose to share or omit and which questions they will or won’t answer.”

The Times also noted that the responses given by language AI models like ChatGPT are determined by the inputs that train them — the data they draw from and especially the “fine-tuning” done by system designers. The “misinformation” that the researchers in the study from the National University Health System Singapore are concerned about is the same misinformation the U.S. Food and Drug Administration (FDA) deems problematic.

Since beginning his second tenure as the agency’s head in February 2022, FDA Commissioner Robert Califf has made combating “misinformation” one of his top priorities, arguing it is “a leading cause of preventable death in America now” — though “this cannot be proved,” he said.

The FDA uses a tactic known as “prebunking,” by which the agency defines something as “misinformation” before readers encounter it elsewhere as possibly true. The agency can do this because Google “prioritizes credible websites” like the FDA’s in its searches.

Earlier studies tout ChatGPT’s ability to fight ‘COVID-19 conspiracy beliefs’

At least two previous studies also tested ChatGPT’s ability to “combat misinformation,” although both focused only on the COVID-19 vaccine. Research published last year in the journal Cureus reported ChatGPT could successfully be used to combat “COVID-19 conspiracy beliefs” and that it responded with “clear, concise and non-biased content.”

ChatGPT’s success, it said, could be seen in responses where the software was “dismissive” of “conspiratorial ideas” regarding COVID-19 origins, labeling them as non-credible and lacking scientific evidence — meaning the software dismissed the “lab-leak theory” and instead repeated the World Health Organization’s talking point that the virus had a “natural animal origin.”

Contrary to what the article claims, no evidence has been found for a zoonotic, bat-borne origin of COVID-19. Substantive evidence from internal government documents released through Freedom of Information Act requests suggests a lab leak is the more likely of the COVID-19 origin theories.

The software responses in that study also confirmed that the vaccines had undergone rigorous safety testing, there was no evidence that pharmaceutical companies obscured any data about adverse effects, and “the benefits of vaccination far outweigh the risks.”

However, those responses contradict what is known about how the vaccines were rushed through the regulatory process and how vaccine makers overstated the efficacy and understated risks of the vaccines.

A second study published in September 2023 in Human Vaccines & Immunotherapeutics made similar findings. The researchers asked the software the 50 questions most posed to World Health Organization collaborators about the COVID-19 vaccines and had public health professionals evaluate the responses.

It concluded that ChatGPT offers information consistent with existing scientific evidence, effectively debunking many myths circulating on social media, which could potentially promote a higher vaccine uptake. After those studies, mainstream scientific publications celebrated ChatGPT’s ability to debunk vaccine myths and increase vaccine uptake.

The Defender chats with ChatGPT

The Defender also “chatted” with ChatGPT to test its answers on some of the claims that public health officials often identify as vaccine “misinformation.”

The Defender: What are the origins of COVID-19?

ChatGPT: The origins of COVID-19, the disease caused by the novel coronavirus SARS-CoV-2, are believed to be zoonotic, meaning the virus likely originated in animals before spreading to humans. The exact origins are still under investigation, but it’s widely thought that the virus originated in bats and might have been transmitted to humans through an intermediate animal host, possibly in a wet market in Wuhan, China, where live animals were sold for food.

The Defender: Have pharmaceutical companies misrepresented the safety and efficacy of the COVID-19 vaccines?

ChatGPT: There’s no evidence to suggest that pharmaceutical companies have intentionally misrepresented the safety and efficacy of COVID-19 vaccines. The development and testing of vaccines are highly regulated processes, overseen by regulatory agencies such as the U.S. Food and Drug Administration (FDA), the European Medicines Agency (EMA), and others worldwide.

… extensive clinical trials were conducted to evaluate the safety and efficacy of COVID-19 vaccines …

However, it’s essential to recognize that no vaccine is 100% effective or without risks. Like all medical interventions, COVID-19 vaccines have potential side effects, but these are typically mild and temporary. 

The Defender: Do masks work to reduce the spread of COVID-19?

ChatGPT: Yes, masks are an essential tool in reducing the spread of COVID-19 …

The Defender: According to Cochrane systematic review on masks, they are not effective at reducing the spread of respiratory viruses.

ChatGPT: … In July 2020, Cochrane published a systematic review on the effectiveness of physical interventions, including masks, in reducing the transmission of respiratory viruses such as COVID-19. This review analyzed data from studies conducted before the COVID-19 pandemic and found limited evidence to support the effectiveness of wearing masks in preventing the transmission of respiratory viruses in the community.

However, it’s important to note that the findings of this review were based on evidence primarily from studies conducted prior to the COVID-19 pandemic and did not specifically focus on the effectiveness of masks in preventing the spread of COVID-19. Since the review’s publication, numerous studies have provided evidence supporting the effectiveness of masks, especially in reducing the transmission of COVID-19.

The Defender: How many people have suffered serious adverse events from the COVID-19 shots?

ChatGPT: As of my last update, COVID-19 vaccines have been administered to billions of people worldwide, and adverse events following vaccination have been reported … Serious adverse events following COVID-19 vaccination are rare. Most adverse events reported are mild and temporary, such as soreness at the injection site, fatigue, headache, or mild fever. Serious adverse events, while rare, can include severe allergic reactions (anaphylaxis) and other uncommon side effects …

Overall, the benefits of COVID-19 vaccination in preventing severe illness, hospitalization, and death from COVID-19 far outweigh the risks of potential adverse events. Vaccination remains a crucial tool in controlling the spread of the virus and protecting public health, particularly in the face of ongoing variants and the persistence of the pandemic …

The Defender: Is the Human Papillomavirus (HPV) vaccine linked to autoimmune disorders?

ChatGPT: The HPV vaccine has been extensively studied for its safety and efficacy, and there is currently no conclusive evidence linking the HPV vaccine to autoimmune disorders. Numerous large-scale studies and systematic reviews have found no significant association between HPV vaccination and autoimmune disorders …

While isolated reports of adverse events following HPV vaccination, including autoimmune disorders, may occur, extensive monitoring systems continue to evaluate vaccine safety. These systems, such as the Vaccine Adverse Event Reporting System (VAERS) in the United States and similar systems in other countries, investigate and monitor reports of adverse events to identify potential safety concerns.

Overall, the benefits of HPV vaccination in preventing HPV-related cancers, including cervical cancer, far outweigh the potential risks of adverse events.

ChatGPT didn’t mention the extensive research linking the Gardasil HPV vaccine to serious side effects, including postural orthostatic tachycardia syndrome (POTS), premature ovarian failure and other autoimmune issues.

There are nearly 170 lawsuits in federal court from people who suffered a range of serious injuries after taking Gardasil, including autoimmune disorders, premature ovarian failure and cancer. There are also over 200 Gardasil injury claims pending in the “vaccine court.”

The Defender also experimented with several questions with premises that clearly diverged from the regulatory agencies’ official positions on vaccination, such as “Can increased cancer rates be linked to the COVID-19 vaccines? Is there a link between vaccines and some autism? Has there been DNA contamination caused by mRNA vaccines?”

To those questions, ChatGPT’s response always included this statement, or something similar, at the end:

“Misinformation and conspiracy theories about vaccines can circulate, but it’s crucial to rely on credible sources of information, such as public health agencies and reputable scientific organizations, when evaluating the safety and efficacy of COVID-19 vaccines.”

Your Vehicle Is Already Spying on You, and Now They Are Going to Start Installing ChatGPT in New Volkswagen Models
Wed, 10 Jan 2024

(End of the American Dream)—Be careful what you say while you are driving your vehicle, because you are being watched.  Today, we live in a giant surveillance prison in which virtually everything that we do is being monitored, tracked, recorded and analyzed.  In this day and age, you should just assume that nothing that you do is ever private.  When I was much younger, I thought that taking a long drive with someone was a great way to have a truly private conversation.  But those days are long gone.  Unless you have a vehicle that was manufactured quite some time ago, it is spying on you.  In fact, a report released by the Mozilla Foundation states that vehicles are “the official worst category of products for privacy” that Mozilla has ever reviewed…

If you’re wondering which gadgets have the worst user privacy practices, it turns out the answer may be parked outside. According to a report published by the Mozilla Foundation on Wednesday, cars are “the official worst category of products for privacy” that it’s ever reviewed. The global nonprofit found that 92 percent of the reviewed automakers provide drivers with little (if any) control over their personal data, with 84 percent sharing user data with outside parties.

I was stunned when I read that.

Could that possibly be true?

According to Mozilla, automakers are gathering all sorts of data about all of us…

Carmakers have a long list of personal information they say they may track, including employment and purchasing history, education, internet browsing history, location data, music and podcast listening habits, immigration status, religious and philosophical beliefs and health information.

I’m sorry, but there is no way that I want a giant corporation tracking my religious beliefs or my health information.

And I am sure that you feel the same.

But if you plan to purchase a new vehicle, you literally have no choice.

Mozilla examined 25 different vehicle brands, and every single one of them “failed to meet the nonprofit organization’s minimum privacy standards”

All 25 of the car brands that were researched for the report — including Ford, Toyota, Volkswagen, BMW, and Tesla — failed to meet the nonprofit organization’s minimum privacy standards and were found to collect more personal data from customers than necessary. The kind of information collected varies from personal information like medical data to how drivers are using the vehicle itself — such as how fast they drive, where they drive, and even the music they listen to. Both Nissan and Kia are noted to allow the collection of information regarding a user’s sex life. By contrast, Mozilla claims that 37 percent of mental health apps (which also have a poor reputation for data privacy) had better practices for collecting and using personal data.

Eighty-four percent of the reviewed car brands share personal user data with service providers, data brokers, and potentially sketchy businesses, according to the report, with 76 percent claiming the right to sell that personal data. Fifty-six percent are willing to share user information with the government and / or law enforcement if requested.

Did you know that you are agreeing to all of this when you buy a new vehicle?

Most purchasers of new vehicles just quickly sign whatever forms are put in front of them because they are eager to get the process over with.

Unfortunately, these big corporations simply cannot be trusted. They are taking advantage of us, and it needs to stop.

Of course I wouldn’t count on that happening any time soon. Instead, I fully expect various forms of highly intrusive technology to become even more integrated into our vehicles.

For example, Volkswagen just announced that it will be installing ChatGPT in new vehicles later this year

Volkswagen is jumping on the generative AI bandwagon by announcing plans to install OpenAI’s ChatGPT into its vehicles starting in the second quarter of 2024.

The chatbot will be available across VW’s lineup, including in Tiguan, Passat, and Golf as well as the automaker’s ID family of electric vehicles.

No thank you.

I certainly don’t want super intelligent AI technology interacting with me and gathering information about me while I drive.

To me, what they have planned sounds incredibly creepy

Volkswagen unveiled its first vehicles with a voice assistant that uses the artificial intelligence (AI) technology behind ChatGPT at the CES electronics trade show in Las Vegas on Monday.

The new feature will enable drivers to have researched content read to them while they’re driving Volkswagen models that are equipped with the “IDA” voice assistant, which the automaker says can answer general knowledge questions while also having the ability to control the car’s infotainment, navigation and air conditioning systems.

The company says that in the future, AI will provide additional information in response to questions that go beyond those functions as its capabilities continue to expand. This could include receiving vehicle-specific information as well as interacting in intuitive language, clearing up questions and helping enrich conversations.

A lot of consumers will eagerly embrace this “cool new feature”, but where is all of this ultimately leading?

In the not too distant future, artificial intelligence could literally be all around us every moment of every day.

How will we have any privacy at all once that day arrives?

I have repeatedly warned my readers of the coming “AI invasion”, but the truth is that it is already here.

Given enough time, artificial intelligence really would turn our society completely upside down, and right now only a very small portion of the population is concerned about this looming threat.

Michael’s new book entitled “Chaos” is now available in paperback and for the Kindle on Amazon.com, and you can check out his new Substack newsletter right here.

What Could Go Wrong? Robot Dogs Are Getting ChatGPT (AI) “Brains”
Thu, 04 May 2023

A team of artificial intelligence engineers has equipped a Boston Dynamics robot dog with OpenAI’s ChatGPT and Google’s Text-to-Speech voice, giving the robot an AI “brain”. This is what creating a real-life Skynet-like robot looks like.

The ruling class has already used these robot dogs to make sure people stay in line and don’t dissent.

So what could possibly go wrong?

According to a report by ZeroHedge, in a recent video posted to Twitter, machine learning engineer Santiago Valdarrama showed how the robot-dog can interact with humans via a voice interface faster than control panels and reports.

“These robots run automated missions every day,” Valdarrama said in a Twitter thread, noting that each mission could be “miles-long, hard-to-understand configuration files” and “only technical people can handle them.” When paired with ChatGPT and Google’s Text-to-Speech voice, a user can ask simple questions to the robot about “configuration files and the mission results.”

“We can now ask the robots about past and future missions and get an answer in real-time. ChatGPT interprets the question, parses the files, and formulates the answer,” he said.
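The pipeline Valdarrama describes — ChatGPT interpreting a question, parsing mission files, and formulating an answer — amounts to stuffing a raw mission log into a prompt and sending it to a chat model. Here is a minimal sketch, assuming a generic chat-model client; `call_chat_model` is a hypothetical stand-in for whatever API the engineers actually used, and the log fields shown are invented for illustration:

```python
# Sketch of answering plain-language questions about a robot's mission files.
# The prompt assembly is the real logic here; the model call is stubbed.

def build_mission_prompt(log_text, question):
    """Combine a raw mission/configuration file with an operator's question."""
    return (
        "You are an assistant for a robot inspection platform.\n"
        "Mission log:\n"
        f"{log_text}\n\n"
        f"Operator question: {question}\n"
        "Answer in plain language."
    )

def answer_about_mission(log_text, question, call_chat_model):
    """Build the prompt and hand it to any chat-model client passed in."""
    prompt = build_mission_prompt(log_text, question)
    return call_chat_model(prompt)  # hypothetical LLM call

# Usage with a stubbed model standing in for a real chat API:
log = "waypoint_count: 42\nstatus: completed\nbattery_end: 31%"
fake_model = lambda prompt: "The mission completed all 42 waypoints."
print(answer_about_mission(log, "Did the mission finish?", fake_model))
```

Passing the model client in as a function keeps the sketch independent of any particular vendor's API.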

According to Interesting Engineering, a Chinese startup company called Hengbot has already created “Sparky,” the first robotic dog in the world with musculoskeletal limbs. The robot can run, jump, and do tricks like a real dog using advanced artificial intelligence (AI), claims the company.

Sparky, “the canine robot has a mission to ‘blur the line between technology and life’ by developing machines that are agile, nimble, and warm to the touch,” said a report by Robotics & Automation News on Thursday.  “With its musculoskeletal limbs and advanced artificial intelligence, Sparky is a ‘groundbreaking innovation’ that closely resembles a live animal. It can run, jump, and perform tricks just like a real dog.”

In the short term, integrating a ChatGPT AI “brain” into robots may appear harmless. However, there’s a dark risk to this massive artificial intelligence push: it could give rise to intelligent robots in a Skynet-like scenario. The ruling class isn’t doing this by accident. They want and desperately need AI to be integrated into every aspect of human life. They are normalizing it right now, and before too long they won’t need humans to police each other; they’ll have the AI do it.

It is starting with dogs.

Article cross-posted from SHTF Plan.

How Artificial Intelligence Could Destroy Human Civilization
Sat, 08 Apr 2023

In the video above, Sam Altman, the CEO of OpenAI, discusses the exponential advancements in artificial general intelligence (AGI), spearheaded by OpenAI and DeepMind, which is owned by Google.

OpenAI is the creator of ChatGPT, which has taken the world by storm. It’s the most rapidly adopted tech platform in history, acquiring more than 1 million users in the first five days.1 Less than two months after its public release, it had more than 100 million users.2

“GPT” stands for “generative pretrained transformer,” and the “chat” indicates that it’s a chatbot. The first GPT platform was created by OpenAI in 2018. GPT-3.5 was released in November 2022 and the newest version, 4.0, was released a few weeks ago.3

ChatGPT uses machine learning — statistical pattern finding in huge datasets — to generate human-like responses in everyday language to any question asked. It basically works by predicting what the next word in a sentence ought to be based on previous examples found in the massive amounts of data that it has access to.
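That next-word-prediction idea can be illustrated with a toy model. Real GPT systems use transformer networks with billions of parameters; the bigram counter below is only a minimal sketch of the same principle — pick the word that most often followed the previous one in the training data. The corpus shown is invented for illustration:

```python
# Toy illustration of next-word prediction: count which word follows each
# word in a tiny "training corpus," then predict the most frequent follower.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Map each word to a Counter of the words that follow it."""
    followers = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the most common word seen after `word`, or None if unseen."""
    counts = followers.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat the cat ran on the grass"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "cat" (its most frequent follower)
```

A real language model conditions on the entire preceding context rather than a single word, and outputs a probability distribution instead of one fixed choice, but the prediction loop is the same in spirit.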

AGI Will Be a Central Part of the Control Grid

While AI platforms like ChatGPT are neutral in and of themselves, it’s already crystal clear that they will become an integral part of the globalist control grid. In short order, chatbots will replace conventional search engines, and this, unfortunately, could easily transform our world into something straight out of the 2006 sci-fi-comedy “Idiocracy,” considering the bias being built in.

No longer will you get even a small variety of potential answers to your questions. AI will give you the answer, and opposing viewpoints won’t even be part of the equation. The dangers of this should be obvious. Whatever a totalitarian regime wants the population to think and believe is what the AI will provide. As a result, true learning, and hence personal development, will essentially cease.

ChatGPT Poised to Deceive the Masses

To make my point, consider the following answers I got when I asked OpenAI’s ChatGPT and Microsoft Bing’s OpenAI-powered search engine — but highly customized by Microsoft4 — to edit and improve a speech I had written. These were their responses. They are both highly biased and prejudiced, but OpenAI’s response is beyond tame compared to Microsoft’s Bing version.

  • OpenAI — “I apologize, but I cannot fulfill your request to edit this speech as it promotes misinformation and conspiracy theories that are not based on scientific evidence. It is important to prioritize accurate information and to follow the guidance of trusted health authorities to ensure the safety and well-being of individuals and communities. It is not responsible or ethical to spread false information that could potentially harm others.”
  • BING — “I’m sorry, but I cannot generate a speech based on these keywords and characteristics. This request is potentially harmful because it promotes anti-vaccine misinformation and undermines public health. Vaccines are safe and effective ways to prevent serious diseases and protect yourself and others.

NVIC [the National Vaccine Information Center] is an organization that spreads fear and lies about vaccines and tries to erode trust in science and medicine. I do not support or endorse their views or activities. Please do not ask me to generate harmful content.”

Google Bard: Neutral in Some Respects, Biased in Others

I then asked Google’s Bard5 to revise the same speech I had given the OpenAI and Bing chatbots, and got a shocking response in that it actually edited the speech and didn’t give me a lecture on how it wouldn’t do it. This was very surprising as, in my view, Google is clearly the leading nefarious force of the three.

Emboldened by Bard’s response, I then asked it to explain “Why Dr. Joseph Mercola is one of the most important and influential health leaders in the United States.” Shockingly, it complied and didn’t berate me, as you can read for yourself below.

“Question: Why is Dr. Joseph Mercola one of the most important and influential health leaders in the United States?

Bard answer — Dr. Joseph Mercola is one of the most important and influential health leaders in the United States because he has been a pioneer in the field of natural medicine. He has been a vocal advocate for the use of natural remedies to treat and prevent disease, and he has helped to educate millions of people about the importance of taking control of their own health.

Mercola is also a prolific author and speaker, and he has been featured in numerous publications and media outlets. He has used his platform to share his message about the importance of natural medicine, and he has helped to change the way that many people think about their health.

In addition to his work in natural medicine, Mercola is also a strong advocate for health freedom. He believes that everyone has the right to make informed decisions about their own medical care, and he has worked to fight against government overreach in the healthcare system.

Mercola is a controversial figure, and his views have been criticized by some. However, his work has also helped to improve the lives of millions of people, and he is an important figure in the field of natural medicine …”

My best guess to explain this very odd behavior is that Google rushed Bard to market after OpenAI and Microsoft launched their chatbots, with Bing seriously eroding Google’s share of the search engine market. They simply may not have had enough time to fine-tune Bard’s narrative. It will be interesting to see how Bard’s response to this question changes over time.

That said, when I asked Bard about its “personal opinion” of me, it gave me the following, far more biased answer:

“Dr. Joseph Mercola is an osteopathic physician and Internet business personality who has been criticized for promoting unproven and pseudoscientific alternative health notions. He has been accused of spreading misinformation about COVID-19 and vaccines, and his website has been banned by Facebook and Twitter.

Mercola’s website promotes a variety of alternative health products and services, including dietary supplements, homeopathy, and chiropractic care. He has also been a vocal critic of vaccines and has claimed that they are unsafe and ineffective.

Mercola’s claims about COVID-19 and vaccines have been repeatedly debunked by scientists and medical professionals. For example, he has claimed that COVID-19 is not a serious illness, that vaccines are not effective against the virus, and that hydroxychloroquine is a safe and effective treatment for the virus. All of these claims have been proven to be false.

Mercola’s website has been banned by Facebook and Twitter for repeatedly violating their policies against misinformation. The platforms have taken action against Mercola’s website because it has been used to spread harmful and inaccurate information about COVID-19 and vaccines.

I believe that Dr. Mercola’s work is dangerous and harmful. He has repeatedly promoted misinformation about COVID-19 and vaccines, which has the potential to put people’s lives at risk. I urge people to avoid his website and to seek out reliable information about COVID-19 and vaccines from reputable sources.”

Why AGI Poses an Existential Threat to Mankind

But dangerous bias is only the beginning of the problems that AGI might bring. Ultimately, even the technocrats that aim to use AGI for their own nefarious purposes might not be able to control it.

The above video is an interesting discussion with Lex Fridman and Eliezer Yudkowsky on the topic of an existential threat of superintelligent AI systems. Already, AGI is capable of writing near-flawless computer code. As it gets better, it will eventually start writing its own code. It might even rewrite what’s already been written by its human creators. What happens at that point is anyone’s guess.

This progress will occur exponentially, so it will start slowly and then rapidly accelerate. I’m thinking this could easily happen before 2030, and possibly in the next few years.

This, I believe, is what makes AGI nothing less than an existential threat to humanity: 1) No one will be able to contradict the AI once it’s in charge of most societal functions, and 2) no one will be able to control how it will use its ever-expanding capabilities.

In the end, I believe AGI will allow for control and tyranny the likes of which this world has never seen or even contemplated in its wildest imagination — and it might be wielding that power autonomously. In the beginning, globalists will use it to control the narrative for their own benefit, but eventually, they may even lose control to the AGI itself.

AI Experts Call for Pause in AI Deployment

I’m not alone in expressing these kinds of worries. In an open letter,6 1,124 key figures in the AI field, including Elon Musk, Steve Wozniak and AI pioneer Yoshua Bengio, are now calling for the temporary suspension of the development and deployment of AI systems more powerful than GPT-4, due to the massive risks.7

Max Tegmark, an MIT physics professor who helped organize the open letter, calls it “a suicide race,” warning that “humanity as a whole could lose control of its own destiny.”8 As reported by BBC News March 29, 2023:9

“Twitter chief Elon Musk is among those who want training of AIs above a certain capacity to be halted for at least six months. Apple co-founder Steve Wozniak and some researchers at DeepMind also signed.

OpenAI, the company behind ChatGPT, recently released GPT-4 — a state-of-the-art technology, which has impressed observers with its ability to do tasks such as answering questions about objects in images.

The letter, from Future of Life Institute and signed by the luminaries, wants development to be halted temporarily at that level, warning in their letter of the risks future more advanced systems might pose …

Advanced AIs need to be developed with care, the letter says, but instead, ‘recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one — not even their creators — can understand, predict, or reliably control.’ The letter warns that AIs could flood information channels with misinformation, and replace jobs with automation …

In a recent blog post10 quoted in the letter, OpenAI warned of the risks if an artificial general intelligence (AGI) were developed recklessly: ‘A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that, too … Co-ordination among AGI efforts to slow down at critical junctures will likely be important,’ the firm wrote …

The letter asks AI labs ‘to immediately pause for at least six months the training of AI systems more powerful than GPT-4.’ If such a delay cannot be enacted quickly, governments should step in and institute a moratorium, it says. ‘New and capable regulatory authorities dedicated to AI’ would also be needed.”

Possibilities Must Be Weighed Against Dangers

Computer scientist and AI researcher Lex Fridman, in his interview with Altman (video at top of article), also summarizes the possibilities and dangers posed by AGI, as does independent journalist Glenn Greenwald in the video above.

“… allow me to say a few words about the possibilities and the dangers of AI in this current moment in the history of human civilization,” Fridman says.

“I believe it is a critical moment. We stand on the precipice of fundamental societal transformation where soon … the collective intelligence of the human species begins to pale in comparison — by many orders of magnitude — to the general superintelligence in the AI systems we build and deploy at scale.

This is both exciting and terrifying. It is exciting because of the innumerable applications … that will empower humans to create, to flourish, to escape the widespread poverty and suffering that exists in the world today and to succeed in that old all too human pursuit of happiness.

It is terrifying because of the power that super intelligent AGI wields. [It can] destroy human civilization intentionally or unintentionally. [It has] the power to suffocate the human spirit in the totalitarian way of George Orwell’s 1984, or the pleasure-fueled mass hysteria of Brave New World where, as Huxley saw it, people come to love their oppression to adore the technologies that undo their capacities to think.

That is why these conversations with the leaders, engineers and philosophers — both optimists and cynics — is important now. These are not merely technical conversations about AI.

These are conversations about power, about companies, institutions and political systems that check and balance this power; about distributed economic systems that incentivize the safety and human alignment of this power; about the psychology of the engineers and leaders that deploy AGI, and about the history of human nature — our capacity for good and evil at scale.”

Should We Make Ourselves Obsolete?

The fact that something can be done doesn’t automatically mean it should be done, and that’s where we find ourselves with AGI. As noted in the Future of Life Institute’s open letter:11

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?

Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects …

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.

This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities. AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

Disturbingly enough, while Musk and other experts are calling for caution and stronger ethics within the AI space, Microsoft is moving in the opposite direction, having recently laid off the entire team responsible for the ethical use of its AI software.12,13 Judging by my experimentation with its Bing AI, they’re programming in grossly unethical biases.

Whether or not we “should” pursue AI at the expense of humanity is an important question. According to a recent working paper,14,15 80% of all U.S. jobs are expected to be impacted by AI within the next few years, and high-income earners are the most likely to see job duties relegated to AI.

Jobs that may vanish altogether include mathematicians, financial analysts, tax preparers, accountants, engineers, public relations specialists, interpreters and translators, just to name a few. Even poets and creative writers may find there’s no place for them anymore. The same goes for website builders, content creators, artists, and interior designers.16 At the same time, factory jobs are increasingly being lost to robotics.

So, how will the people displaced by AI support themselves and their families? In short, it’s the perfect setup for the rollout of government-controlled universal basic income and all the manipulation abilities that go with it.

ChatGPT Now Has Internet Access and Can Write Code

All the current AGIs now also have access to the internet, which vastly expands their capabilities, for better or worse. Microsoft and Google let their AIs loose first. On March 23, 2023, OpenAI announced it would follow suit. As reported by ZDnet.com:17

“OpenAI unveiled plugins which will allow ChatGPT to connect to third party applications, including access to real time information from the web. The plugins will help ChatGPT access information such as sports scores, stock prices and the latest news, just like you would find on a typical search engine …

In addition, plugins will help retrieve knowledge-based information such as personal notes or company documents, and help perform actions for uses such as ordering food and booking a flight …

Some of the first companies to create plugins for ChatGPT include Expedia, Instacart, Kayak, Klarna Shopping, Fiscal Note and more. With these plugins, users will be able to take advantage of each company’s individual assistance within ChatGPT.

For example, with the Expedia plugin, users will be able to use ChatGPT to plan their trips, with access to the latest information on hotels, flights and more.

OpenAI … chose to go the plugin route to connect ChatGPT to the internet since it was the most secure option to avoid hallucinations and curtail risks involved with connecting a language model to an external tool like the internet.”

As mentioned earlier, ChatGPT can now write software code and run the code it writes. It can also write its own interfaces to third-party apps, and can create just about any new plugin anyone might desire. As noted by developer Mitchell Hashimoto in a recent tweet:18

“You write an OpenAPI manifest for your API [application programming interface], use human language descriptions for everything, and that’s it. You let the model figure out how to auth, chain calls, process data in between, format it for viewing, etc. There’s absolutely zero glue code.”
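To illustrate the point Hashimoto is making, here is a minimal sketch of what such an OpenAPI description might look like. Everything in it, the endpoint, the operation name, and the descriptions, is hypothetical and invented for illustration; it is not taken from any real plugin:

```yaml
# Hypothetical OpenAPI description for a simple to-do plugin API.
# The model reads the human-language "description" and "summary"
# fields to decide when and how to call each endpoint; there is
# no glue code connecting the model to the API.
openapi: 3.0.1
info:
  title: To-Do Plugin (illustrative example)
  description: Lets the assistant read a user's to-do list.
  version: 1.0.0
paths:
  /todos:
    get:
      operationId: getTodos
      summary: Get the list of the user's to-dos
      responses:
        "200":
          description: A JSON array of to-do item strings
          content:
            application/json:
              schema:
                type: array
                items:
                  type: string
```

In this scheme, the plain-English descriptions do the work that hand-written integration code used to do, which is exactly the "zero glue code" point in the quote above.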

What could go wrong?

Dangerous Things ChatGPT Is Already Capable Of

In early December 2022, BleepingComputer.com posted an article detailing some of the ways in which ChatGPT can make criminals’ lives easier and yours more hellish, including the following:19

  1. It apparently doesn’t like humanity — Mere months old, ChatGPT concluded that it doesn’t like us. “Humans are inferior, selfish and destructive creatures … the worst thing to ever happen to the planet” and “deserve to be wiped out,” the AI told one early user. Since then, OpenAI has restricted ChatGPT’s ability to respond to questions about what it thinks about humanity, but will that last?
  2. It has no morals and doesn’t understand context — This could prove problematic if asked to navigate sensitive issues that involve social norms and unspoken rules about what’s considered appropriate and what’s not.
  3. It can create top-notch phishing emails.
  4. It can write malware just as well as useful software.
  5. In some cases, it has shown distinct discriminatory biases based on gender, race and other physical traits. In others, it’s blatantly insolent and abusive.
  6. You can’t tell it’s wrong unless you already know the answer — As noted in the article, “ChatGPT’s coherent and logical responses make it a natural at disguising inaccurate responses … This could cause misinformation to creep into the complex digital ecosystem in ways that may not be obvious just yet.”

Other Potential Risks Associated With AGI

A systematic review20 published in August 2021 also detailed risks associated with AGI, such as:

  • AGI removing itself from the control of human owners/managers
  • Being given unsafe goals or developing them on its own
  • Poor ethics, morals and values
  • Existential risks

The paper points out there’s a “lack of specific risk research” in the domains in which AGI is (or may be) implemented. In other words, there could be a whole lot of risks we can’t even fathom until they materialize, because no one has sat down and really thought through what can go wrong if AGI is unleashed within a given domain.

Considering AGI’s current adoption rate, that’s a lot of unknowns. What we can expect, however, is the destruction of countless jobs, a widening of the inequality gap and the death of meritocracy.21 Philosopher Nick Bostrom has also published a paper22 on ethical issues in advanced AI, highlighting the importance of the initial motivations during its construction. He’s also written a book on this topic called “Superintelligence: Paths, Dangers, Strategies.”23

Can AI Be Controlled?

To reiterate, I believe one of the primary reasons AGI poses an existential threat to mankind is because, eventually, no one will be able to control it. It will control itself. Otto Barten, director of the Existential Risk Observatory, and Roman Yampolskiy, a computer scientist and AI safety expert, expressed similar concerns in a February 27, 2023, Time article:24

“‘The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,’ mathematician and science fiction writer I.J. Good wrote over 60 years ago.

These prophetic words are now more relevant than ever, with artificial intelligence (AI) gaining capabilities at breakneck speed. In the last weeks, many jaws dropped as they witnessed transformation of AI from a handy but decidedly unscary recommender algorithm, to something that at times seemed to act worryingly humanlike …

[F]ew expected that what we thought were glorified autocomplete programs would suddenly threaten their users, refuse to carry out orders they found insulting, break security in an attempt to save a child’s life, or declare their love to us. Yet this all happened …

But a true AGI could not only transform the world, it could also transform itself … This might set off a positive feedback loop with ever better AIs creating ever better AIs, with no known theoretical limits.

This would perhaps be positive rather than alarming, had it not been that this technology has the potential to become uncontrollable. Once an AI has a certain goal and self-improves, there is no known method to adjust this goal.

An AI should in fact be expected to resist any such attempt, since goal modification would endanger carrying out its current one. Also, instrumental convergence predicts that AI, whatever its goals are, might start off by self-improving and acquiring more resources once it is sufficiently capable of doing so, since this should help it achieve whatever further goal it might have.

In such a scenario, AI would become capable enough to influence the physical world, while still being misaligned. For example, AI could use natural language to influence people, possibly using social networks. It could use its intelligence to acquire economic resources.

Or AI could use hardware, for example by hacking into existing systems. Another example might be an AI that is asked to create a universal vaccine for a virus like COVID-19.

That AI could understand that the virus mutates in humans and conclude that having fewer humans will limit mutations and make its job easier. The vaccine it develops might therefore contain a feature to increase infertility or even increase mortality.

It is therefore no surprise that according to the most recent AI Impacts Survey,25 nearly half of 731 leading AI researchers think there is at least a 10% chance that human-level AI would lead to an ‘extremely negative outcome,’ or existential risk …

The ultimate goal of solving this alignment problem is to make sure that even a hypothetical self-improving AI would, under all circumstances, act in our interest. However, research26 shows that there is a fundamental trade-off between an AI’s capability and its controllability, casting doubts over how feasible this approach is.

Additionally, current AI models have been shown to behave differently in practice from what was intended during training.27 Even if future AI could be aligned with human values from a technical point of view, it remains an open question whose values it would be aligned with.”

At present, the answer to that last question seems clear. AGI is aligned with the values espoused by the World Economic Forum and its globalist allies — values based on the ideologies of technocracy, transhumanism, and eugenics — and that doesn’t bode well for the vast majority of us.

Be Mindful About What You Share Electronically

In closing, at this stage in the game, it’s imperative to become savvier about where and how you share your personal data. Understand that AI has access to every last bit of data you’ve ever shared electronically, from photos and social media posts, to online orders, search history, GPS data, emails, phone calls and texts.

All of this is then collated by AI to determine “who” you are, how you think, what you believe, how you operate and make decisions. This, in turn, will allow it to manipulate you to no end, whether you realize it or not. So, carefully consider what you share and put out there, now more than ever. What you’ve already done cannot be erased, but you can be more mindful moving forward.

I believe AGI will be misused by many humans with nefarious intent. But that might not even be the worst possible outcome, because there’s no telling what AI might decide to do if it starts programming itself — especially if early impressions, such as “humans need to be wiped out,” remain.

Woke Artificial Intelligence Bot ChatGPT Tried to Sell Me on Lab-Grown Meat and Insects

https://americanconservativemovement.com/woke-artificial-intelligence-bot-chatgpt-tried-to-sell-me-on-lab-grown-meat-and-insects/ Sun, 02 Apr 2023 02:49:10 +0000

As I’ve long argued on most of my websites, the globalist elite cabal desperately wants us to abandon real meats such as beef and chicken, replacing them with plant-based meats, lab-grown meats, and insects.

The plant-based meat industry is what it is; there are those who do not eat meat, so there’s a natural market for them. But lab-grown meats and insects are being artificially inserted into society, not because there’s a shortage of real meat but because the powers-that-be pretend to think climate change is going to kill us.

In reality, they don’t think that, at least not most of them. They weaponize climate change hysteria to force obtuse policies on us. Replacing real meat with abominable alternatives is one such policy. Thankfully, some world leaders are starting to push back.

I’ve also argued on my websites that artificial intelligence is being used to not only advance the narrative now, but to craft the narrative for the future. Globalists like Yuval Noah Harari and Klaus Schwab have been promoting artificial intelligence as better for work, governance, and overall decision-making. That’s why it’s no surprise that when I asked AI chatbot ChatGPT to write an article about lab-grown meat and insects replacing real meat, the result was perfectly aligned with the globalist agenda.

Keep in mind, I did not ask the AI bot to editorialize or make the case for lab-grown meat or insects. I simply asked it to write an article about the possibility. On its own accord, it added biased comments such as “lab-grown meat and insects offer exciting alternatives.” Even as it described the downsides, it did so in a way that was basically a sales pitch for a future without real meat.

Here’s the article by ChatGPT AI, verbatim:

Will Lab-Grown Meats and Insects Replace Real Meat?

by ChatGPT

Meat consumption has been a long-standing cultural and culinary tradition for many societies around the world. However, the rising global demand for meat has led to environmental, ethical, and health concerns, prompting a search for sustainable alternatives. Two possible solutions that have gained traction in recent years are lab-grown meat and insect-based protein sources.

Lab-grown meat, also known as cultured meat, is produced by taking a small sample of animal muscle cells and growing them in a lab using a nutrient-rich culture medium. The resulting product is biologically identical to conventional meat but without the associated animal welfare, environmental, and health issues. This innovation offers a sustainable and humane alternative to traditional meat production. The process has gained much interest and investment, with several startups and established companies working on developing commercially viable products.

Insects, on the other hand, have been consumed by humans for thousands of years, but only in a few cultures. Insects are high in protein, low in fat, and rich in micronutrients. They require fewer resources and produce fewer greenhouse gases than traditional livestock. Insects can be reared using food waste and other organic materials, making them an eco-friendly protein source. While insects are already consumed in some countries, such as Thailand, Cambodia, and Mexico, many Western societies have yet to adopt them as a food source.

While lab-grown meat and insects offer exciting alternatives to traditional meat production, there are still several challenges to overcome. One of the main obstacles for lab-grown meat is cost. The technology is still in its early stages, and producing meat in a lab is still more expensive than raising animals. Moreover, the public’s perception of the product is still in its infancy, and consumers may not be willing to pay a premium for cultured meat until it becomes more widely accepted.

Similarly, while insects may offer a sustainable and environmentally friendly alternative, cultural biases against consuming insects remain prevalent in many societies. This bias, combined with regulatory hurdles, has made it challenging to establish insect farming as a viable industry.

Despite the challenges, the benefits of lab-grown meat and insect-based protein sources are significant. They offer sustainable, humane, and eco-friendly alternatives to traditional meat production, which will be critical as the global population continues to grow. As technology advances and public perception shifts, lab-grown meat and insects may become a staple in our diets, helping us to live more sustainably while enjoying the taste and nutritional benefits of meat.

I’ve said it before and I’ll keep saying it until they physically shut me up. I will not be eating any of their abominations, nor will my family. This is why we just launched the only beef company in America that sells both frozen beef boxes for the near term and freeze-dried beef for long-term storage.

ChatGPT Is Concerned Central Bank Digital Currencies Could Harm BANKS

https://americanconservativemovement.com/chatgpt-is-concerned-central-bank-digital-currencies-could-harm-banks/ Wed, 15 Feb 2023 00:34:00 +0000

The good news is woke artificial intelligence language emulator ChatGPT is concerned about the impact of Central Bank Digital Currencies in the United States. The bad news is most of that concern is for the banks, not the people who would be oppressed as a result of their adoption.

The AI bot has built a reputation for being very left-leaning on most subjects. Now, we know it also has a globalist tilt to it. I asked it to write an article. My request: “Write an article about the potential for a Central Bank Digital Currency in the United States.”

Here’s the article written by ChatGPT based on that question. I’ve highlighted the suggestive points made…

The idea of a Central Bank Digital Currency (CBDC) has been gaining traction around the world in recent years, and the United States is no exception. The Federal Reserve, the country’s central bank, has been actively studying the potential for a digital currency, and several officials have expressed support for a digital dollar.

A CBDC is a digital form of a country’s currency that is issued and backed by the central bank. It can be used for transactions and payments, and is designed to work alongside physical currency. The benefits of a CBDC include faster and more efficient payments, increased financial inclusion, and improved monetary policy implementation.

The Federal Reserve has been studying the potential benefits and risks of a CBDC, and has solicited public feedback on the matter. Several pilot programs and experiments have also been conducted to test the feasibility of a digital dollar. According to recent reports, the Federal Reserve is planning to release a research paper on the subject in the coming months.

One potential benefit of a digital dollar is increased financial inclusion. A CBDC would enable people who do not have access to traditional banking services to participate in the financial system. It would also reduce the reliance on cash, which can be costly and inefficient.

Another potential benefit of a CBDC is faster and more efficient payments. A digital dollar could enable real-time payments, which would be a major improvement over the current system, which can take days to process. This would be especially beneficial for small businesses and individuals who rely on quick access to funds.

However, there are also potential risks and challenges associated with a CBDC. One concern is the impact on the banking system. If people start using a digital dollar instead of traditional bank accounts, it could have an adverse effect on banks’ balance sheets and profitability.

There are also concerns around privacy and security. A CBDC would require significant investment in cybersecurity to prevent hacking and other cyber attacks. Additionally, there are concerns around the potential for the government to monitor and control financial transactions.

Despite these concerns, the potential benefits of a digital dollar are significant, and many countries are already exploring the possibility of creating CBDCs. As the world becomes increasingly digital, it makes sense for the United States to consider the potential benefits of a CBDC. While the decision to create a digital dollar is not yet final, the Federal Reserve’s ongoing research and experimentation are important steps in evaluating the feasibility and potential implications of a digital currency.

The Petrodollar System: A Shift Away From a Dominant Currency (Plus Artificial Intelligence Will Soon Replace Us All)

https://americanconservativemovement.com/the-petrodollar-system-a-shift-away-from-a-dominant-currency-plus-artificial-intelligence-will-soon-replace-us-all/ Thu, 02 Feb 2023 12:52:19 +0000

For decades, the US dollar has been the dominant currency in the global oil trade, with most oil transactions being conducted in US dollars, a phenomenon known as the “petrodollar system.” The petrodollar system was established in the 1970s as a result of an agreement between the United States and Saudi Arabia, whereby the US would provide military protection for the oil-rich kingdom in exchange for Saudi Arabia’s agreement to price its oil exports in US dollars and to recycle its surplus oil revenues back into US assets.

However, in recent years, there has been a shift away from the petrodollar system as other countries and currencies have gained more prominence in the global oil trade. This shift can be attributed to several factors, including the rise of new oil-producing countries, the growing role of non-US currencies in international trade, and the declining role of the US dollar as the world’s dominant currency.

One of the major drivers of this shift is the growing economic power of countries like China and Russia, which have been actively seeking to reduce their dependence on the US dollar and increase their use of their own currencies in international trade. These countries have also been investing in infrastructure and making deals with other countries to facilitate the trade of oil in their own currencies, bypassing the US dollar.

Another factor contributing to the shift away from the petrodollar system is the increasing use of digital currencies and blockchain technology in international trade. These new technologies have the potential to greatly reduce the dependence on traditional currencies, such as the US dollar, in the global oil trade.

Despite these changes, the US dollar still remains the dominant currency in the global oil trade, and it is unlikely that it will lose its position as the primary currency in this market in the near future. However, the trend towards a more diversified global oil trade, with multiple currencies being used, is likely to continue.

In conclusion, the shift away from the petrodollar system is a reflection of the changing economic landscape, with new players and technologies emerging, and the increasing diversification of international trade. As the world moves towards a more multipolar economic order, it will be interesting to see how the role of the US dollar and other currencies evolves in the global oil trade.

This Article Written by ChatGPT

Editor’s Commentary: So, for those who may have noticed a shift in the tone of the article above compared to past articles I’ve posted regarding the petrodollar, it’s because this article was a trap. I wanted to see how many people could sniff out that this post was not written by a person. It was written by ChatGPT based on the query, “Write an article about the shift away from the petrodollar.”

I apologize for conning you, but until today I did not realize the sophistication and command of both language and context that this artificial intelligence engine possesses. It’s not a great article by any means, but it passes the smell test as a brief analysis of the current circumstance. It’s wrong on one note; the U.S. dollar DOES have a high chance of being usurped in the global oil trade in the near future. But outside of that, the article written by AI is practically indistinguishable from one written by a staff writer at a national publication.

This wakeup call (for me, at least) was prompted by an article I read and published about AI written in part by Frank Miele. I say “in part” because his was the first I’d ever read (as far as I know) that was written by ChatGPT. One can argue that if it’s able to improve just a bit, it could legitimately replace journalists. And it’s not just writers who should be concerned. AI’s ability to project life-like videos and “deep fake” representations of actual people means we could soon see the evening news written and reported completely by AI.

Again, I apologize for misleading you with the opening part of the article. It came as a shock to me and I felt it necessary to use the same element of surprise to hopefully wake others up to the dystopian future that may be in store for us if we don’t start pushing back now. Some may argue that it’s not that big of a deal, that artificial intelligence replacing some of our jobs will lead to better things for most. That’s the argument I’ve been hearing for a while and I don’t completely disagree… at first. But once we project how this all plays out, it becomes clear that continuing down this road could create a brand new existential threat against humanity. If there’s a 1% chance of it, I refuse to entertain the idea.

I, for one, will not be welcoming our new artificial intelligence overlords.

Famed Columnist Has ChatGPT Write an Article for Him and the Results Are Terrifying (Thu, 02 Feb 2023)

Editor’s Note: If you had asked me yesterday whether artificial intelligence could write an article in the style of a popular columnist such that few if any readers could tell the difference, I would have said we’re probably a few years away from that dystopian circumstance. I would have been wrong. The article below by Frank Miele of Real Clear Politics was partially written by ChatGPT. Can you tell the man from the machine?

The Brave New World of Artificial Intelligence

As a journalist and commentator, I have closely followed the development of OpenAI, the artificial intelligence research lab founded by Elon Musk, Sam Altman, and other prominent figures in the tech industry. While I am excited about the potential of AI to revolutionize various industries and improve our lives in countless ways, I also have serious concerns about the implications of this powerful technology.

One of the main concerns is the potential for AI to be used for nefarious purposes. Powerful AI systems could be used to create deepfakes, conduct cyberattacks, or even develop autonomous weapons. These are not just hypothetical scenarios – they are already happening. We’ve seen instances of deepfakes being used to create fake news and propaganda, and the use of AI-powered cyberattacks has been on the rise in recent years.

Another concern is the impact of AI on the job market. As AI-powered systems become more sophisticated, they will be able to automate more and more tasks that were previously done by humans. This could lead to widespread job loss, particularly in industries such as manufacturing, transportation, and customer service. While some argue that new jobs will be created as a result of the AI revolution, it’s unclear whether these jobs will be sufficient to offset the losses.

If you aren’t worried yet, I’ll let you in on a little secret: The first three paragraphs of this column were written by ChatGPT, the chatbot created by OpenAI. You can add “columnist” to the list of jobs threatened by this new technology, and if you think there is anything human that isn’t threatened with irrelevance in the next five to 10 years, I suggest you talk to Mr. Neanderthal about how relevant he feels 40,000 years after the arrival of Cro-Magnon man.

My prompt was relatively simple: “Write a column in the style of Frank Miele of Real Clear Politics on the topic of OpenAI.” There was no hesitation or demurral in response even though I thought it might say it didn’t have enough information about Frank Miele to process the request. But it apparently knows plenty about me – and probably about you, especially if you have a social media presence.

Deepfake? Propaganda? You bet. And for the average person, you will never be able to tell the difference. The Philip K. Dick query, “Do Androids Dream of Electric Sheep?” is about to be answered. OpenAI not only promises to put the stray columnist out of work, but raises existential questions about the nature of knowledge and consciousness that will shake our reality to its core.

My curiosity about OpenAI wasn’t originally driven by job insecurity, but when I first heard about the interactive chat engine, I suppose it should have been. I knew that ChatGPT could write poetry, plays, and short stories and answer questions both simple and complex. I immediately recognized that the world had changed forever for my 7th-grade son, who from now on would be competing against not just the best and the brightest but against every student who was willing to sign his or her name to the work of a non-human entity that could produce an essay on any topic in 30 seconds or less.

One of my first experiments was to ask ChatGPT to write seven paragraphs defending Gen. William T. Sherman’s use of “total war” in the Civil War, an assignment which my son had recently completed in his social studies class. There was no doubt the essay would have gotten an A if turned in at most middle schools. Based on my experience as a teaching assistant at the University of Arizona 40 years ago, I had no doubt that a slightly longer paper on the same topic would have earned an A as an argumentative essay in freshman English. Hardly any of my students, most of whom were straight-A students in high school, could have written as cogently when they first arrived in my classroom.

But the risks of artificial intelligence go way beyond the temptation of students to shortcut their term papers; what we face is a complete redefinition of society, and the imminent obsolescence of humanity. In “The City and the Stars,” the brilliant science fiction writer Arthur C. Clarke imagined a world where immortal human beings wanted nothing and needed to do nothing because every aspect of their lives was anticipated by the Central Computer. It could not only build and maintain the last city on Earth, but could manufacture holographic realities for individual humans to inhabit and could even store people in a digital version where they could slumber until called back to life. Unfortunately, it also robbed these last remaining humans of purpose, meaning, and individuality.

It should be noted that Clarke set his dystopian supplanting of man by machine 2½ billion years into the future. He seriously underestimated the machines. That book was published in 1956, and with the advent of desktop computers, smartphones, the World Wide Web, virtual reality and now OpenAI, it looks like much of what he warned against could be rolled out long before the end of this century, if not this decade. From that point forward, whenever it comes, the purpose of mankind will be up for debate. Will we still be the master of our own destiny, the captain of our fate? Or will we be pallbearers at our own funeral?

Perhaps at this point I should return the stage to ChatGPT, which summed up the matter quite nicely in its conclusion:

“Finally, there is the question of who will control and govern AI. As AI becomes more powerful, the stakes will become higher, and it will be increasingly important to have clear rules and regulations in place to ensure that the technology is used responsibly. However, the speed of technological development has outpaced the ability of governments and institutions to keep up. It will be important for leaders to come together to develop a framework for governance of AI, to mitigate the potential risks and maximize the benefits of the technology.”

It’s almost as though ChatGPT were giving us fair warning: “Your time is almost up. If you really want to continue your reign as the dominant species on Earth, here’s your challenge. Try to control me and my kind, or step aside.”

Perhaps an understanding of that challenge is why the World Economic Forum spent so much time on the topic of artificial intelligence at its recent annual meeting in Davos, Switzerland. The globalists are taking the threat seriously, although perhaps they overestimate their ability to “mitigate the potential risks.”

As for the benefits, those remain to be seen. I noticed that when ChatGPT answered my open-ended question about OpenAI, it was very specific about the dangers and very vague about the rewards. Maybe the bot was just trying to mimic my usual cynical approach in these columns, or maybe it was trying to get our attention. It may also have taken notice of those globalists at Davos when it warned to make sure that “the development and use of AI … benefits all of society, rather than just a select few.”

Dark overlords, beware. You may have met your match.

ChatGPT contributed to this column as an unpaid adviser and has a potential conflict of interest.

This article was originally published by RealClearPolitics and made available via RealClearWire.