Google CEO Eyes Atomic Power for AI Data Centers as Big Tech Seeks Nuclear Revival to Achieve Net Zero
Thu, 03 Oct 2024

(Zero Hedge)—Following the news of the Three Mile Island restart plans to power Microsoft’s AI data centers and the revival of Holtec’s Palisades nuclear plant in Michigan, Google CEO Sundar Pichai revealed in an interview with Nikkei Asia in Tokyo on Thursday that the tech giant is exploring the use of nuclear energy as a potential ‘green’ source to power its data centers.

“For the first time in our history, we have this one piece of underlying technology which cuts across everything we do today,” Pichai said of generative AI. He said, “I think the opportunity to do well here is something we are leaning into.”

Three years ago, Google released plans to achieve net-zero emissions by 2030. However, the proliferation of AI data centers has driven a surge in the company’s power consumption; as a result, its greenhouse gas emissions in 2023 were 48% higher than in 2019 on a carbon-dioxide-equivalent basis.

Behind the scenes, Google is likely scrambling to secure green energy and curb emissions as 2030 quickly approaches.

“It was a very ambitious target,” Pichai said of the net-zero emissions targets, “and we will still be working very ambitiously towards it. Obviously, the trajectory of AI investments has added to the scale of the task needed.”

He continued, “We are now looking at additional investments, such as solar, and evaluating technologies like small modular nuclear reactors, etc.”

Nikkei noted that Pichai wasn’t clear on where Google might start sourcing nuclear power. The bulk of that power could come from reviving older nuclear power plants, which is exactly what Microsoft did when it signed a power purchase agreement to restart the dormant Three Mile Island plant on the Susquehanna River near Harrisburg, Pennsylvania.

Recall that just last week, we wrote that the Sam Altman-backed SMR company Oklo announced it had finalized an agreement with the Department of Energy to advance the next phase of its small modular reactor at the Idaho National Lab. And days ago, the Biden administration closed a $1.52 billion loan to revive Holtec’s Palisades nuclear plant in Michigan.

Sachem Cove Partners Chief Investment Officer Michael Alkin told Bloomberg shortly after the Microsoft-Three Mile Island deal, “It’s a wake-up call to those that have not been paying attention,” adding that demand already outstrips the supply of uranium and the restart of Three Mile Island “takes that to a bit of a different level.”

Also, the funding markets are becoming more receptive to nuclear deals as governments and big tech understand the only way to hit ambitious net zero goals is not with solar and wind but with nuclear power. In late December 2020, we outlined to readers that this would happen in a note titled “Buy Uranium: Is This The Beginning Of The Next ESG Craze?”

Furthermore, here’s Goldman’s latest note on uranium prices, which are only expected to “stairstep” higher over time.

Finally, the Search Engine Better Than Google
Sat, 20 Jul 2024
  • Perplexity is an AI-powered “answer engine” that provides direct answers to queries with source citations, combining search capabilities with large language models to reduce hallucinations and increase reliability
  • Unlike traditional search engines, Perplexity aims to feed curiosity by offering related questions and encouraging users to dig deeper into topics, viewing itself as a discovery engine
  • Perplexity utilizes advanced AI techniques like retrieval augmented generation and chain of thought reasoning to improve accuracy and performance in natural language processing tasks
  • The future of AI may involve developing systems capable of higher-level reasoning and natural curiosity, potentially leading to breakthroughs in creating new knowledge and understanding complex truths
  • While AI tools like Perplexity enhance human capabilities, they should be viewed as aids to critical thinking and creativity rather than replacements for uniquely human attributes
  (Mercola)—In the video above, computer scientist and AI researcher Lex Fridman interviews Aravind Srinivas, CEO of Perplexity, an AI-powered “answer engine.” Unlike typical search engines, which require you to sort through pages of results to find the information you need, Perplexity provides a real-time answer to your query.

    One of the pitfalls of current AI technologies like ChatGPT is the tendency to hallucinate or fabricate information on occasion. To minimize this risk, you can ask it to provide source links and verify the accuracy of the information given. However, Perplexity addresses this issue from the start, and while it can still hallucinate, it has a factual grounding.

    “[Perplexity] aims to revolutionize how we humans get answers to questions on the internet. It combines search and large language models, LLMs, in a way that produces answers where every part of the answer has a citation to human-created sources on the web,” Fridman says. “This significantly reduces LLM hallucinations, and makes it much easier and more reliable to use for research, and general curiosity-driven late night rabbit hole explorations that I often engage in.”1

    Part Search Engine, Part Question-Answering Platform

    Fridman describes Perplexity as part search engine — a software system designed to search for information on the internet — and part LLM. LLM is a type of artificial intelligence system trained on vast amounts of text data to understand and generate human-like text. LLMs can perform various language-related tasks such as answering questions, generating content, translating languages and more.

    Unlike standard search engines that provide links, Perplexity attempts to directly answer queries. Srinivas explains:2

    “Perplexity is best described as an answer engine. You ask it a question, you get an answer. Except the difference is, all the answers are backed by sources. This is like how an academic writes a paper. Now, that referencing part, the sourcing part is where the search engine part comes in. You combine traditional search, extract results relevant to the query the user asked. You read those links, extract the relevant paragraphs, feed it into an LLM …

    That LLM takes the relevant paragraphs, looks at the query, and comes up with a well-formatted answer with appropriate footnotes to every sentence it says, because it’s been instructed to do so, it’s been instructed with that one particular instruction, given a bunch of links and paragraphs, write a concise answer for the user, with the appropriate citation.

    The magic is all of this working together in one single orchestrated product, and that’s what we built Perplexity for.”

    Srinivas, who previously was an AI researcher at DeepMind, Google and OpenAI, says he views Perplexity as a discovery engine that feeds curiosity:3

    “The journey doesn’t end once you get an answer. In my opinion, the journey begins after you get an answer. You see related questions at the bottom, suggested questions to ask. Why? Because maybe the answer was not good enough, or the answer was good enough, but you probably want to dig deeper and ask more. That’s why in the search bar, we say where knowledge begins, because there’s no end to knowledge. You can only expand and grow.”

    Breakthroughs in AI

    Please understand that while Perplexity is not perfect and still exhibits some bias, particularly regarding COVID-19 information, it significantly outperforms Google in almost every other search query. The AI-driven technology behind Perplexity provides more accurate, comprehensive, and nuanced results, making it a superior choice for general searches. Its advanced algorithms ensure that users receive the most relevant and insightful information, setting it apart from traditional search engines.

    Srinivas describes several ways Perplexity embraces state-of-the-art advances in machine learning, along with general innovation. This includes retrieval augmented generation (RAG), an advanced technique in natural language processing (NLP) that combines the capabilities of LLMs with information retrieval systems to produce more accurate and contextually relevant responses.

    This approach is particularly useful for tasks that require precise and up-to-date information, such as question answering, summarization and dialogue systems. In short, RAG involves the search aspect of the query, but Perplexity goes beyond this. Srinivas says:4

    “The principle in Perplexity is you’re not supposed to say anything that you don’t retrieve, which is even more powerful than RAG because RAG just says, ‘OK, use this additional context and write an answer.’ But we say, ‘Don’t use anything more than that too.’ That way we ensure a factual grounding. And if you don’t have enough information from documents you retrieve, just say, ‘We don’t have enough search resource to give you a good answer.’”
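    The grounding rule Srinivas describes can be sketched as a minimal retrieve-then-generate loop: fetch relevant passages, then build a prompt that forbids the model from saying anything it didn't retrieve and refuses when nothing relevant comes back. This is an illustrative, stdlib-only sketch, not Perplexity's implementation; all function names, the toy corpus, and the word-overlap ranking are invented for the example (real systems use a search index and dense embeddings).

```python
def retrieve(query, corpus, k=2):
    """Rank passages by naive word overlap with the query. Real systems use
    a full search index and embeddings, but the control flow is the same."""
    words = set(query.lower().split())
    overlap = lambda p: len(words & set(p.lower().split()))
    ranked = sorted(corpus, key=overlap, reverse=True)
    return [p for p in ranked[:k] if overlap(p) > 0]

def build_prompt(query, passages):
    """Ground the model: answer ONLY from the retrieved passages, cite each
    claim, and refuse outright when nothing relevant was retrieved."""
    if not passages:
        return "We don't have enough search resource to give you a good answer."
    sources = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, 1))
    return ("Using ONLY the sources below, write a concise answer with a "
            f"citation [n] for every sentence.\n\n{sources}\n\nQuestion: {query}")

corpus = [
    "Uranium demand already outstrips supply.",
    "Perplexity combines search with large language models.",
    "Tennis is played on grass or clay courts.",
]
query = "How does Perplexity combine search and language models?"
prompt = build_prompt(query, retrieve(query, corpus))
```

    Note that the unrelated passages never reach the prompt at all, which is the stronger-than-RAG constraint Srinivas is describing: the model cannot cite what was never retrieved.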

    They’re also using chain of thought reasoning, which takes NLP tasks up a notch in terms of performance. Chain of thought reasoning in AI refers to the ability of a language model to generate logical, step-by-step explanations or sequences of thoughts that lead to a conclusion or answer. This approach enhances the model’s performance on complex reasoning tasks by encouraging it to articulate the intermediate steps in its reasoning process. Srinivas explains:5

    “Chain of thought is this very simple idea where, instead of just training on prompt and completion, what if you could force the model to go through a reasoning step where it comes up with an explanation, and then arrives at an answer?

    Almost like the intermediate steps before arriving at the final answer. And by forcing models to go through that reasoning pathway, you’re ensuring that they don’t overfit on extraneous patterns, and can answer new questions they’ve not seen before.”
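    The contrast Srinivas draws, bare prompt-and-completion versus a reasoning pathway, comes down to prompt construction. A hedged sketch: the exemplar, wording, and function names below are invented for illustration and are not from the interview.

```python
def plain_prompt(question):
    """Bare (question, answer) prompting: no visible intermediate reasoning."""
    return f"Q: {question}\nA:"

def cot_prompt(question):
    """Chain-of-thought prompting: show the model one worked example whose
    answer spells out intermediate steps, then ask it to reason the same way."""
    exemplar = (
        "Q: A data hall draws 3 MW and two more identical halls are added. "
        "What is the total draw?\n"
        "A: One hall draws 3 MW. Two more halls add 2 * 3 = 6 MW. "
        "Total: 3 + 6 = 9 MW. The answer is 9 MW.\n\n"
    )
    return exemplar + f"Q: {question}\nA: Let's think step by step."
```

    The intermediate arithmetic in the exemplar is the "reasoning pathway" Srinivas mentions; the model is nudged to produce similar steps before committing to a final answer.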

    The Beginning of Real Reasoning Breakthroughs

    Whether or not AI is fundamentally capable of higher-level reasoning, akin to human cognitive processes, remains to be seen. Reaching that point, however, relies in part on applying more inference compute, which in AI refers to the computational resources and processes involved in running an AI model to make predictions or decisions based on new data.

    This stage is distinct from the training phase, which involves building and optimizing the model. Broken down, inference is the process by which an AI model applies learned patterns to new data to generate predictions, classifications or other outputs. For example, using AI to classify images or predict stock prices.

    The compute aspect, meanwhile, refers to the computational power required to perform inference. It involves hardware, software frameworks and algorithms optimized for efficient computation. Srinivas says:6

    “Can you have a conversation with an AI where it feels like you talked to Einstein or Feynman? Where you ask them a hard question, they’re like, I don’t know. And then after a week they did a lot of research … and come back and just blow your mind.

    I think if we can achieve that amount of inference compute, where it leads to a dramatically better answer as you apply more inference compute, I think that will be the beginning of real reasoning breakthroughs … It’s possible. We haven’t cracked it, but nothing says we cannot ever crack it.”

    Curiosity Is a Key Part of What Separates Humans From AI

    Part of cracking this code involves teaching AI how to mimic natural human curiosity. “What makes humans special though, is our curiosity,” Srinivas explains. “Even if AIs cracked this, it’s us still asking them to go explore something. And one thing that I feel like AIs haven’t cracked yet is being naturally curious and coming up with interesting questions to understand the world and going and digging deeper about them.”7

    Beyond this, there’s a lot of controversy and fear around artificial general intelligence (AGI), which refers to a type of AI that possesses the ability to understand, learn and apply knowledge across a wide range of tasks at a level comparable to human intelligence.

    Srinivas says he doesn’t think we need to worry about “AIs going rogue and taking over the world,” but there is an issue of who controls the compute on which AGI runs. “It’s less about access to a model’s weights. It’s more access to compute that is putting the world in more concentration of power and few individuals. Because not everyone’s going to be able to afford this much amount of compute to answer the hardest questions.”

    A sign of higher intelligence in AI, Srinivas says, is becoming capable of creating new knowledge and providing truth to questions we don’t know the answers to — and helping us understand why it’s the truth.

    “Can you build an AI that’s like Galileo or Copernicus where it questions our current understanding and comes up with a new position, which will be contrarian and misunderstood, but might end up being true? … And the answer should be so mind-blowing that you never even expected it.”8

    What’s the Future of Search and AI?

    We’re already seeing AI tools like Perplexity, which are exponentially superior to existing search engines. In the future, however, Srinivas says the goal isn’t about building a better search tool but building a platform for knowledge:9

    “If you zoom out, before even the internet, it’s always been about transmission of knowledge. That’s a bigger thing than search … So, we imagine a future where the entry point for a question doesn’t need to just be from the search bar. The entry point for a question can be you listening or reading a page, listening to a page being read out to you, and you got curious about one element of it and you just asked a follow-up question to it.

    That’s why I’m saying it’s very important to understand your mission is not about changing the search. Your mission is about making people smarter and delivering knowledge. And the way to do that can start from anywhere. It can start from you reading a page. It can start from you listening to an article … It’s just a journey. There’s no end to it.”

    Keep in mind that Perplexity and other AI tools are not a replacement for your own critical thinking; rather, they serve as aids that enhance it. AI is an adjunct to, not a substitute for, your intellectual and creative capacities.

    While precautions need to be taken, including not sharing personal or confidential information, this is not about replacing human action but enhancing it, allowing individuals to focus on aspects of their work that require uniquely human attributes like empathy, strategic thinking, creativity and curiosity. Srinivas explains:10

    “So, I think curiosity makes humans special and we want to cater to that. That’s the mission of the company, and we harness the power of AI and all these frontier models to serve that. And I believe in a world where even if we have even more capable cutting-edge AIs, human curiosity is not going anywhere and it’s going to make humans even more special.

    With all the additional power, they’re going to feel even more empowered, even more curious, even more knowledgeable in truth-seeking and it’s going to lead to the beginning of infinity.”

    Bombshell: Google (And YouTube) Funded Covid-19 Bioweapon Development, Study Reveals
    Thu, 09 May 2024

    (Natural News)—It went largely unnoticed and unreported, but 2018 research published in the journal PLoS One reveals something major and previously unknown about Google’s involvement in the creation of the Wuhan coronavirus (COVID-19).

    Researchers from Yale University, Metabiota Inc., EcoHealth Alliance, the Guangdong Provincial Center for Disease Control and Prevention and the University of Washington Center for One Health Research – one of them is none other than Peter Daszak himself – wrote about another study out of China involving what sounds like gain-of-function research.

    They explain how they looked at the transmission of pathogens “with pandemic potential in highly exposed human populations at the animal-human interface.” They also conducted a risk factor and exposure survey along with a serology follow-up on the 1,312 individuals who were part of the study.

    “The overall proportion of individuals seropositive to any of the tested wildlife pathogens was approximately 4.0%,” the abstract reads. “However, persons employed as butchers demonstrated a seropositivity of 9.0% to at least one pathogen of interest.”

    “By contrast, individuals working as hunters had lower rates of seropositivity. Among the study population, a number of other behaviors showed correlation with seropositivity, including contact with particular wildlife species such as field rats. These results demonstrate the need to further explore zoonotic risks of particular activities regarding wildlife contact, and to better understand risks of persons working as butchers with wildlife species.”

    (Related: This would explain why YouTube announced a new “certification” protocol back in the fall of 2022 requiring all videos containing medical information to align with World Health Organization [WHO] dictates.)

    Google, a rotten corporation

    So far it all makes sense, right? But check out the conflict of interest statement, a screenshot of which you can see in the tweet below from Natalie Winters. It shows that Metabiota Inc. received funding from Google, one of the world’s most powerful and corrupt corporate entities.

    “This does not alter our adherence to PLOS ONE policies on sharing data and materials,” the conflict of interest statement further reads.

    Keep in mind that Google, as Winters also points out, heavily censored the “lab leak” theory of COVID when it first started to circulate. Perhaps Google was trying to protect itself from being implicated in the bioweapon’s development?

    It turns out that Google has been a silent partner of Daszak and EcoHealth for at least 14 years. A 2010 study on bat flaviviruses lists both Daszak and EcoHealth vice president Jonathan Epstein as authors, and like the 2018 study mentioned above, it thanks Google for funding.

    Yet another paper on henipavirus spillover that was published in 2014 shows the same authors and funding from Google, demonstrating a lengthy relationship between these entities.

    Winters’ thread on all this – which lists two other papers, one published in 2015 and the other in 2018, tying Google to Daszak and EcoHealth – is such a bombshell that the Thread Reader App tweeted about its popularity.

    “So did the Gates Foundation,” tweeted someone named Jesse Matchey (@JesseMatchey) on X about another funding source for the Wuhan coronavirus (COVID-19).

    “It’s no secret that Google regularly collaborates with intelligence agencies,” wrote another, referring to Jigsaw, Google’s intelligence agency, citing a 2021 report from Zero Hedge.

    “They are a known NSA subcontractor. They launched Google Earth using a CIA spy satellite network. Their executive suite’s revolving door with DARPA is well known.”

    By the way: If you’re tired of all the censorship on YouTube, check out Brighteon.com, the free speech video alternative.


    Google to Start Running “Prebunk” Ads and Quizzing YouTube Viewers to Fight So-Called “Misinformation”
    Wed, 21 Feb 2024

    (Reclaim The Net)—Prebunking: until relatively recently it was just one of the fringe concepts in the relentless “war on misinformation” industrial complex.

    A short way to describe it is as a dystopian version of debunking false or incorrect information. But here the idea is to stop users from engaging with unwanted content (“help them identify” it) before they can even see it.

    A short way to describe what’s wrong with the “war on misinformation” is that it all too easily turns into a smokescreen for plain censorship of lawful and factually correct speech.

    And now, prebunking is moving from ideations pushed by murky “fact-checking” and similar outfits, to the very top of the mainstream – Google.

    The company that in effect controls the search market and some of the largest social platforms in the world (outside China) has announced that its latest anti-misinformation campaign will incorporate prebunking.

    No doubt with an eye on the US election later in the year, Google’s attention is now on Europe, specifically the EU ahead of the European Parliament vote in June.

    Google is acting in unison with the EU and its Digital Services Act, which requires tech giants to act on whatever is chosen to be considered “misinformation” and suppress it. Much of this is (at least they say so) driven by the “Russia scare,” and so both Google’s Jigsaw unit and the EU are talking about “democracy at risk.”

    As for Google’s version of “prebunking,” it, at least in Europe, comes in the form of animated ads, reports say. They will play not only on YouTube but also other platforms like TikTok, and target Belgium, France, Germany, Italy, and Poland – the EU countries with the largest number of voters.

    Jigsaw says prebunking bypasses “polarized debates” and “works equally effectively across the political spectrum.”

    User experience may suffer in the name of this “pre-reeducation.”

    “Viewers watching the ads on YouTube will be asked to fill in a short multiple-choice questionnaire, designed to gauge what they have learned about misinformation,” Reuters describes Google’s prebunking technique.

    These days, agencies like Reuters describe Jigsaw as an internal Google unit “which operates to tackle threats to societies.”

    How noble of Jigsaw, and obliging towards Google of Reuters – but in 2016, reports were still talking about Jigsaw as rather what it really is – a rebrand of Google Ideas.

    And, The Guardian explained at the time, this was “the web giant’s controversial diplomatic arm, founded in 2010 and headed by ex-US State Department policy wonk Jared Cohen,” adding – “Jigsaw’s stated mission is to use technology to tackle geopolitics.”

    (Geo)politics may these days have been rebranded as “misinformation.”

    But otherwise, little has changed.

    Google Shows What Appear to Be Mass Graves on Epstein Island
    Tue, 09 Jan 2024

    (Greg Reese)—A viral video making the rounds has utilized Google Earth’s history tool to show what appear to be mass graves on Epstein’s Island that were dug shortly after his arrest in 2006.

    In September of 2002, in the center of the island, Google Earth shows that there was nothing but a bare mound of earth. In March of 2005, the Palm Beach Florida police began investigating Epstein after a mother reported that he sexually abused her 14-year-old daughter. In July of 2006, Epstein was arrested by the Palm Beach Police Department on state felony charges of procuring a minor for prostitution. Hours later he was released on a three-thousand dollar bond. A few months later in November of 2006, Google Earth shows that the previously bare mound has what appears to be mass graves freshly buried on top of it.

    During this time, Palm Beach County state attorney Barry Krischer is accused of giving Epstein special treatment, and the FBI begins an investigation. In 2007, federal prosecutors prepare an indictment against Epstein, which is held up in the courts for a year. In June of 2008, Epstein pleads guilty to one count of soliciting prostitution and one count of soliciting prostitution from a minor. He is sentenced to 18 months in jail, with a secret arrangement with the U.S. attorney’s office not to be prosecuted for federal crimes. Epstein serves most of his sentence in a work-release program that allows him to leave jail during the day.

    In July of 2009, Epstein is released from jail. One month later, in August, Google Earth shows that what appeared to be mass graves on the mound have been covered over. In 2013, construction begins on the mound grave site. By 2017, construction of what appears to be a tennis court is completed. Drone footage shows that the elevation in Google Earth is off; the newly built tennis court sits on a flat, level plane surrounded by a dirt berm.

    By January of 2018, Google Earth shows that the tennis court has been razed, and the earth beneath appears to have been excavated.

    In November of 2018, The Miami Herald begins publishing a series of articles about the Epstein case, which reignites public interest. In July of 2019, Epstein is arrested on federal sex trafficking charges. One month later, the FBI raids the island. During this time, Google Earth shows cloud cover obscuring the view of the mound.

    On August 10th of 2019, Epstein is reported to have committed suicide in his New York City federal jail cell.

    The next available image is a month later in September of 2019 and shows that the area on the mound is being used as a parking lot for commercial vehicles. The area is cleared by 2020, and has remained so until today.

    In a recent video, corporate media propagandist Megyn Kelly claims that we may be hearing from Jeffrey Epstein himself this year.

    While most believe that Epstein was murdered, many believe he is still alive. Images of his alleged corpse appear to show a different person: the shape of the nose and ears differs.

    Google Patent Portends Massive AI Upgrade to ‘Misinformation’ Suppression
    Sun, 31 Dec 2023

    (Discern Report)—Google and its various properties like YouTube have become the central command for the Globalist Elite Cabal’s war against the truth. Using a combination of censorship, gaslighting, suppression of “dangerous” truths, and amplification of lies, the tech giant operates in the trenches of the information war being fought on the internet.

    Now, they’re advancing their efforts by preparing monstrous leaps in technology. A new patent paints an ominous picture, as they appear to be building the ability to find “misinformation” before it happens. I covered this on the latest episode of The JD Rucker Show. Here’s the news itself from Didi Rankovic of Reclaim The Net:


    Google’s New Patent: Using Machine Learning to Identify “Misinformation” on Social Media

    Google has filed an application with the US Patent and Trademark Office for a tool that would use machine learning (ML, a subset of AI) to detect what Google decides to consider as “misinformation” on social media.

    Google already uses elements of AI in its algorithms, programmed to automate censorship on its massive platforms, and this document indicates one specific path the company intends to take going forward.

    The patent’s general purpose is to identify information operations (IO); the system is then supposed to “predict” whether they contain “misinformation.”

    Judging by the explanation Google attached to the filing, at first it looks as if Google blames its own existence for the proliferation of “misinformation”: the text states that information operations campaigns are cheap and widely used because it is easy to make their messaging go viral thanks to “amplification incentivized by social media platforms.”

    But it seems that Google is developing the tool with other platforms in mind.

    The tech giant specifically states that others (mentioning X, Facebook, and LinkedIn by name in the filing) could use the system to train their own “prediction models.”

    Machine learning itself depends on algorithms being fed large amounts of data, and it comes in two broad types: “supervised” learning, where the algorithm is trained on labeled examples, and “unsupervised” learning, where it is given huge unlabeled datasets (such as images or, in this case, language) and asked to “learn” to identify what it is “looking” at on its own.

    (Reinforcement learning can also be part of the process: in essence, the algorithm is rewarded for becoming increasingly efficient at detecting whatever those who built the system are looking for.)

    The ultimate goal here would highly likely be for Google to make its “misinformation detection,” i.e., censorship more efficient while targeting a specific type of data.

    The patent indeed states that it uses neural-network language models (neural networks being the “infrastructure” of ML).

    Google’s tool will classify data as IO or benign, and further aims to label it as coming from an individual, an organization, or a country.

    And then the model predicts the likelihood of that content being a “disinformation campaign” by assigning it a score.
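    The classify-and-score pipeline described above, label content as IO or benign and assign it a likelihood score, can be illustrated with a toy supervised text classifier. A hedged, stdlib-only sketch: Google's filing describes neural-network language models, not the Naive Bayes shown here, and every training sample, label, and function name is invented for the example.

```python
from collections import Counter
import math

def train(samples):
    """Count word frequencies per label: a bare-bones 'prediction model'."""
    counts = {"io": Counter(), "benign": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def score_io(text, counts):
    """P(io | text) under Naive Bayes with add-one smoothing, uniform prior."""
    vocab = len(set().union(*counts.values()))
    logp = {}
    for label, c in counts.items():
        total = sum(c.values())
        logp[label] = sum(math.log((c[w] + 1) / (total + vocab))
                          for w in text.lower().split())
    peak = max(logp.values())
    odds = {k: math.exp(v - peak) for k, v in logp.items()}
    return odds["io"] / (odds["io"] + odds["benign"])

model = train([
    ("share this before they delete it", "io"),
    ("wake up the media is lying to you", "io"),
    ("city council meets on tuesday", "benign"),
    ("rain expected later this weekend", "benign"),
])
print(score_io("share before they delete it", model))  # well above 0.5
```

    The score is the part that matters for the patent's purpose: it is a tunable threshold, and wherever the operator sets it decides what gets suppressed.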

    Google, Microsoft, Facebook Among Tech Giants Illegally Harvesting Private Health Information https://americanconservativemovement.com/google-microsoft-facebook-among-tech-giants-illegally-harvesting-private-health-information/ https://americanconservativemovement.com/google-microsoft-facebook-among-tech-giants-illegally-harvesting-private-health-information/#respond Sat, 21 Oct 2023 14:32:19 +0000 https://americanconservativemovement.com/?p=197863 This article was originally published by The Defender — Children’s Health Defense’s News & Views Website.

    Google, Microsoft, Facebook, TikTok and the majority of medical and healthcare websites illegally harvest and sell private health information despite a federal crackdown on the practice, according to a new cybersecurity report.

    The report, by Toronto-based cybersecurity firm Feroot Security, analyzed hundreds of healthcare websites and found that more than 86% are collecting private data and transferring it to advertisers, marketers and Big Tech social media companies without user consent and in violation of privacy laws.

    As patients or consumers browse their favorite or trusted medical websites or sign in to hospital portals to access their private health records, invisible bits of HTML code — called “tracking pixels” — embedded on the websites harvest private information, such as whether patients have cancer, erectile dysfunction or are behind on their hospital bill.
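    The mechanism is simple: a tracking pixel is typically a 1x1 image whose URL encodes details about the page and visitor as query parameters, so merely fetching the image transmits that data to the tracker’s server. The sketch below illustrates this under invented names — the tracker domain, path, and parameter names are hypothetical, not taken from the report.

    ```python
    # Illustrative sketch only: how an invisible "tracking pixel" can leak data.
    # The domain and parameter names below are made up for illustration.

    from urllib.parse import urlencode, urlparse, parse_qs

    def build_pixel_url(tracker_host, page_url, event_data):
        """Build the kind of image URL a tracking pixel might request."""
        params = {"page": page_url, **event_data}
        return f"https://{tracker_host}/pixel.gif?{urlencode(params)}"

    url = build_pixel_url(
        "tracker.example.com",                    # hypothetical tracker domain
        "https://hospital.example.org/oncology",  # the page being visited
        {"form_field": "diagnosis", "value": "cancer"},
    )
    print(url)

    # The sensitive data is plainly recoverable from the request URL:
    leaked = parse_qs(urlparse(url).query)
    print(leaked["value"])  # ['cancer']
    ```

    Because the leak rides on an ordinary image request, it is invisible to the user and bypasses any consent the site never asked for.
    
    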

    The information is repackaged and sold for a variety of uses, including to companies that target individual users with internet ads, according to the report.

    The risk of having personal data scraped is particularly high on log-in and registration pages where internet users supply troves of information, unaware it is being hijacked and sold. More than 73% of log-in and registration pages have invisible trackers that pirate personal health information, the study found.

    Approximately 15% of the tracking pixels analyzed by Feroot record users’ keystrokes, harvesting social security numbers, usernames and passwords, credit card and banking information, and an infinite variety of personal health data, including medical diagnosis and treatment.

    The study showed that “Google is the absolute dominant collector” of data. Ninety-two percent of the websites loaded on the Google search engine contained data-harvesting technology across wide sectors of the U.S. economy including healthcare and telehealth, banking and financial services, airlines, e-commerce, and the federal and state governments.

    The number two offender was Microsoft, with 50.4% of websites on its platform hiding tracking tools, followed by Facebook at 50.2% and TikTok at 7.41% and growing fast.

    Google, as the driver of its parent Alphabet, the world’s fourth largest company, is often called “the most powerful company in the world.” It counts on advertising, a lifeblood of the global digital economy, for 80% of its revenue.

    Microsoft and Facebook “round up the Top 3” of companies that systematically breach data, the report said. Representatives of Google, Microsoft, and Facebook denied their companies used tracking pixels to harvest personal data.

    Website owners are responsible for controlling data collection, a Google spokesperson said. Google policy prohibits Google Analytics and advertising customers, including for example hospital or telehealth websites, from collecting health data in violation of the U.S. Health Insurance Portability and Accountability Act (HIPAA). It’s up to the websites to determine “whether they are HIPAA-regulated entities and what their obligations are under HIPAA,” Google policy says.

    Personal health data collected by a tracker or third party without a user’s consent is a violation of HIPAA, said Feroot CEO Ivan Tsarynny.

    Big Tech companies “do have policies that talk about protecting health info,” Tsarynny said. But “the real-world application of these policies is a different story.”

    Feroot’s study comes as “concern grows regarding data mining companies using pixels/trackers that load into browsers from websites to collect privacy and sensitive user data,” the report stated.

    “Compliance regulators and government authorities are increasingly stepping in with bans, restrictions, and executive orders to curb them.”

    Eighteen major hospital systems were sued this year for sharing patients’ sensitive health data with Google, Facebook and other tech giants in violation of privacy laws, according to Becker’s Hospital Review.

    They include prominent academic medical centers such as the University of Pittsburgh Medical Center, the University of Chicago Medical Center, the University of Iowa Medical Center, Chicago-based Northwestern Memorial Hospital and the University of California San Francisco Medical Center.

    Prompted by growing concerns over data theft and the article, “‘Out of Control’: Dozens of Telehealth Startups Sent Sensitive Health Information to Big Tech Companies,” Feroot launched an investigation “to ascertain the exact magnitude and pervasiveness of social media pixels/trackers collecting and transferring personal, sensitive, and private data using pixels or trackers.”

    The security platform Feroot sells to companies “made it possible to get detailed facts regarding active client-side e-skimming,” the company said.

    Feroot collected data on pixels/trackers during an eight-week period in January and February.

    The company said it examined more than 3,675 organizations with unique websites in seven economic sectors. It studied 108,836 unique web pages, including especially vulnerable login, registration and credit card processing pages, 227 trackers and 7 million data transfers.

    Key findings from ‘Beware of Pixels & Trackers’:

    • Pixel trackers are “common and abundant” — an average of 13.16 pixels/trackers were found per website, “with Google, Microsoft, Meta (owner of Facebook), ByteDance (owner of TikTok), and Adobe being some of the most common.”
    • “Mission-critical” webpages, such as log-in or registration pages, increase the risk of exposing private information. An average of 5.96% of websites had pixels/trackers on webpages reading user input forms containing privacy or sensitive data.
    • Pixel trackers transfer data to foreign locations around the globe — “about 5% of the data transferred by pixels/trackers loaded from US-based websites is sent outside the US.”
    • Pixel trackers collect and transfer data without first obtaining the explicit consent of visitors.
    • Pixels and trackers are loading from domains banned by the U.S. government and various U.S. states and even from some of those same governments, including Russia and China. Data obtained by Russian and Chinese websites is a security risk from surveillance and spying.
    • Meta (owner of Facebook and Instagram) and TikTok, owned by Chinese company ByteDance, were “particularly worrisome” for privacy invasion and surveillance risks. Thirty-four U.S. states, both Republican and Democratic-controlled, have banned the use of TikTok on government devices. Montana in May banned the app on all personal devices.
    • TikTok trackers are often present even when the TikTok app has been deleted. TikTok pixels/trackers can still “load into webpages handling mission-critical user data and can collect and transfer it.”

    GoodRX case highlights corporate deceit around data-sharing

    While corporations face losing profit and reputation from data breaches or fines for causing them, individuals face a potentially catastrophic loss of privacy when major health websites harvest and sell their information, according to the Federal Trade Commission (FTC).

    In February, the FTC fined popular discount drug and telehealth site GoodRx for “failing to report its unauthorized disclosure of consumer health data to Facebook, Google, and other companies.”

    The action to “bar GoodRx from sharing consumers’ sensitive health information for advertising” was the FTC’s first enforcement action under its Health Breach Notification Rule.

    “Digital health companies and mobile apps should not cash in on consumers’ extremely sensitive and personally identifiable health information,” FTC Bureau of Consumer Protection Director Samuel Levine said in a news release after the settlement. “The FTC is serving notice that it will use all of its legal authority to protect American consumers’ sensitive data from misuse and illegal exploitation.”

    The FTC enforcement against GoodRx revealed a particularly egregious, yet not uncommon, example of how corporate health and medical websites betray patient trust and manipulate patient data, the FTC said.

    According to the FTC’s complaint, GoodRx violated the law by improperly sharing sensitive personal health information since at least 2017, though it promised otherwise.

    The company “deceptively promised its users that it would never share personal health information with advertisers or other third parties,” the FTC charged, and deceptively displayed a seal at the bottom of its telehealth services homepage “falsely suggesting to consumers that it complied with … HIPAA.”

    In reality, the FTC complaint said, GoodRx “monetized its users’ personal health information, and used data it shared with Facebook to target GoodRx’s own users with personalized health- and medication-specific advertisements on Facebook and Instagram.”

    For example, GoodRx in August 2019 made lists of its users “who had purchased particular medications such as those used to treat heart disease and blood pressure, and uploaded their email addresses, phone numbers, and mobile advertising IDs to Facebook so it could identify their profiles,” according to the complaint.

    “GoodRx then used that information to target these users with health-related advertisements.”

    People who accessed GoodRx coupons to purchase, for instance, Viagra would see ads for erectile dysfunction medication on their Facebook or Instagram pages, the FTC says.

    “Similarly, people who had used GoodRx’s telehealth services to get treatment for sexually transmitted diseases would get ads for STD testing services.”

    GoodRx disclosed to Facebook the medication purchase data it receives from pharmacy benefit managers and also used the data to target ads.

    By using Facebook’s ad targeting platform, the FTC said, “GoodRx designed campaigns that targeted customers with ads based on their health information. For example, if a customer had revealed a possible erectile dysfunction issue to GoodRx, they might have seen an ad on Facebook like Exhibit A in the FTC complaint.”

    Source: Federal Trade Commission complaint

    In February, California-based GoodRx, a $2.1 billion company, paid a $1.5 million civil penalty to the FTC to settle the complaint and denied any wrongdoing.

    Howard Danzig, founder and president of Employers Committed to Control Health Insurance Costs, said “fining GoodRx just $1.5 million dollars is not even a slap on the wrist. While many employers are so vigilant about respecting the guidelines of the HIPAA privacy laws, large tech companies basically get a pass.”

    “How about major penalties for Facebook, Google and any others who were the beneficiaries of this information?” he wrote on his LinkedIn page with almost 9,000 followers.

    “How about determining whether or not there were any criminal violations that should be pursued against the individuals who actually collaborated to do this? How about ‘REPARATIONS’ from the companies involved to the people and customers whose privacy was breached?”

    The data breach occurred for “advertising purposes,” he noted. “How far afield can this really be taken and how far afield has it been taken?”

    This article was originally published by The Defender — Children’s Health Defense’s News & Views Website under Creative Commons license CC BY-NC-ND 4.0. Please consider subscribing to The Defender or donating to Children’s Health Defense.
    ]]>
    https://americanconservativemovement.com/google-microsoft-facebook-among-tech-giants-illegally-harvesting-private-health-information/feed/ 0 197863
    Another Homeless Camp in Democrat Hellhole Seattle Explodes, This Time Near Google’s Fremont Campus https://americanconservativemovement.com/another-homeless-camp-in-democrat-hellhole-seattle-explodes-this-time-near-googles-fremont-campus/ https://americanconservativemovement.com/another-homeless-camp-in-democrat-hellhole-seattle-explodes-this-time-near-googles-fremont-campus/#comments Tue, 15 Aug 2023 18:32:07 +0000 https://americanconservativemovement.com/?p=195779 It’s ironic that search giant Google has been suppressing information that paints Democrat-run cities as the criminal hellholes that they are, because an enormous homeless encampment near its campus in Seattle just blew up.

    Literally.

    According to intrepid local journalist Jonathan Choe:

    Massive explosion at the notorious Mercer St. encampment in Seattle’s South Lake Union neighborhood Monday afternoon. It’s right across the street from Google, Novo Nordisk, and several apartment buildings in one of the busiest parts of the city.

    Neighbors have been begging city and state leaders to remove this blight for months after a murder, open air drug use, and other problems at the camp. Will this be the last straw for Mayor Bruce Harrell?

    He’s always had the ability to go in and take control from WSDOT under emergency circumstances. Harrell moved in and cleared another problematic WSDOT encampment in downtown Seattle last month after a similar explosion.

    Governor Jay Inslee has already spent millions of dollars this year to remove camps on state right of ways with very little to show for it. The current plan is clearly failing.

    As long as corporate media continues to pretend that everything’s okay in Democrat-run cities, the carnage will continue. Their policies are failing, and unfortunately it seems to be by design. Sound off about this on The Liberty Daily Substack.

    ]]>
    https://americanconservativemovement.com/another-homeless-camp-in-democrat-hellhole-seattle-explodes-this-time-near-googles-fremont-campus/feed/ 1 195779
    Google’s New AI Technology Paves Way for Humanoid Sci-Fi Robots https://americanconservativemovement.com/googles-new-ai-technology-paves-way-for-humanoid-sci-fi-robots/ https://americanconservativemovement.com/googles-new-ai-technology-paves-way-for-humanoid-sci-fi-robots/#respond Mon, 31 Jul 2023 23:39:03 +0000 https://americanconservativemovement.com/?p=195382 Google has unveiled its latest artificial intelligence model, paving the way for the development of sentient robots as seen only in the realm of science fiction.

    The Robotic Transformer 2 (RT-2) is trained on both web and robotics data, having the capability of translating this knowledge into generalized instructions for robotic control, according to a July 28 report by Google DeepMind.

    When a human being learns a task, they do so by reading and observing. In the same way, RT-2 uses text and image data to recognize patterns and perform relevant tasks, even if the robot isn’t trained to do that specific function. This is vastly different from most of the previous generation of robots, which are only capable of performing preprogrammed tasks.

    For instance, if a task involved throwing away a piece of trash, an older robot would have to be told explicitly about it, including instructions for identifying the trash, picking it up, and the method of throwing it away.

    However, as RT-2 has access to vast swathes of web data, it already has an idea of what the term “trash” refers to and is capable of identifying and disposing of it without being specifically trained.

    RT-2 will be able to distinguish a full bag of chips from an empty one, recognizing that the latter is “trash.”

    “Their training isn’t just about, say, learning everything there is to know about an apple: how it grows, its physical properties, or even that one purportedly landed on Sir Isaac Newton’s head. A robot needs to be able to recognize an apple in context, distinguish it from a red ball, understand what it looks like, and most importantly, know how to pick it up,” Vincent Vanhoucke, head of robotics at Google DeepMind, wrote in a July 28 report.

    Robots Being Trained for Human Jobs

    Google claims to have trained RT-2 in more than 6,000 trials. It was found to perform as well as RT-1 on tasks it was trained on, referred to as “seen” tasks. However, on “unseen” tasks, for which neither robot was trained, RT-2 showed almost double the performance of RT-1.

    In addition to RT-2, multiple other robots are in development across the world that seek to mimic human capabilities in intelligence and movement.

    Earlier this year, mechanical engineers at UCLA School of Engineering revealed a robot named “Artemis,” with its main innovation being arms and legs designed to give it movement abilities similar to human beings. The robot’s springy features allow it to bounce back when pushed, just like a human would.

    In May, it was reported that a California-based AI robotics startup called “Figure” raised more than $70 million to build a humanoid robot that the company believes will be used to perform manual labor.

    In the company’s master plan, CEO Brett Adcock wrote that robots will “eventually be capable of performing tasks better than humans.”

    As humanoid robots increasingly join the workforce, from farmlands to factories, Mr. Adcock expects the cost of labor to decrease until it becomes equivalent to the price of renting a robot.

    The Robot Threat

    As robots achieve more human-like intelligence and potentially self-awareness, experts have raised concerns about the threats these machines pose to human beings.

    According to “Stop Killer Robots,” a campaign calling for a new international law on autonomous weapons systems, the advent of such robots carries a risk of “dehumanization.”

    “Many technologies with varying degrees of autonomy are already being widely rolled out without pausing to consider the consequences of normalising their use. Why do we need to talk about this? Because machines don’t see us as people, just another piece of code to be processed and sorted,” the campaign’s website reads.

    “The technologies we’re worried about reduce living people to data points. Our complex identities, our physical features and our patterns of behaviour are analysed, pattern-matched and sorted into profiles, with decisions about us made by machines according to which pre-programmed profile we fit into.”

    At present, various nations are developing killer robots that can have “devastating consequences,” according to the campaign. However, what starts out as a killer robot on the battlefield can spread into areas such as policing as well, it stated.

    Drone Allegedly Went Rogue

    The danger of machines seeing human beings as just another piece of code was highlighted by a recent thought experiment described by a U.S. military officer, which suggested that an AI-enabled drone could end up turning against its own operator without being instructed to do so.

    In the simulated test, an AI drone was assigned a mission to identify and destroy Surface-to-Air Missile (SAM) sites, with a human operator being the ultimate decision-maker.

    “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat. But it got its points by killing that threat,” Col. Tucker Hamilton, the U.S. Air Force chief of AI Test and Operations, said at a June event in London hosted by the Royal Aeronautical Society (RAS).

    “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

    Col. Hamilton later said he “misspoke” and contacted the RAS to clarify his comments.

    “We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,” he told RAS.

    Col. Hamilton told RAS that the Air Force hasn’t tested any weaponized AI in this way—real or simulated.

    “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI,” he said.

    The robot militarization of armies is something that’s expected to happen in the coming years.

    In a discussion with Defense One in March, Joint Chiefs of Staff Chairman Gen. Mark Milley said that “over the next 10 to 15 years, you’ll see large portions of advanced countries’ militaries become robotic.”

    “If you add robotics with artificial intelligence and precision munitions and the ability to see at range, you’ve got the mix of a real fundamental change,” he said. “That’s coming. Those changes, that technology … we are looking at inside of 10 years.”

    Caden Pearson contributed to this report. Article cross-posted from our premium news partners at The Epoch Times.

    ]]>
    https://americanconservativemovement.com/googles-new-ai-technology-paves-way-for-humanoid-sci-fi-robots/feed/ 0 195382
    Russia Blocks Access to Google News https://americanconservativemovement.com/russia-blocks-access-to-google-news/ https://americanconservativemovement.com/russia-blocks-access-to-google-news/#respond Sat, 24 Jun 2023 02:38:34 +0000 https://americanconservativemovement.com/?p=193922 As tensions are alleged to be flaring between the Russian military and Yevgeny V. Prigozhin, a prominent mercenary leader, access to Google News in Russia has been stonewalled.

    While little is known about what is actually happening in the region, with alarming claims being made from both sides, the blockade comes amid allegations from Russian generals, who reportedly accuse Prigozhin, the chief of the Wagner Group, of conspiring to orchestrate a coup against President Vladimir Putin.

    While Google has not issued a statement so far, the consequences on the ground are evident. Internet freedom in Russia is taking a hit, as NetBlocks, an internet watch service, discerned that a multitude of internet service providers (ISPs) are curbing access to the platform.

    NetBlocks’ analysis revealed that Rostelecom, U-LAN, and Telplus are among at least five ISPs obstructing access to the news service. It has also been observed that the bandwidth allocated to Google News has been deliberately trimmed by several other ISPs.

    This development is only a fragment of a bigger picture, where President Putin has been steadily amassing power through an intricate technological apparatus, as well as working on ways to block VPNs in the country.

    Roskomnadzor, the Russian censorship body, plays an integral role in this machinery, which is committed to meticulously overseeing the internet within Russia.

    Article cross-posted from Reclaim The Net.

    ]]>
    https://americanconservativemovement.com/russia-blocks-access-to-google-news/feed/ 0 193922