Joel R. McConvey, Biometric Update – American Conservative Movement (https://americanconservativemovement.com)

Creeping Tyranny: Facial Recognition Coming to LA Transit After Passenger Fatally Stabbed
https://americanconservativemovement.com/facial-recognition-coming-to-la-transit-after-passenger-fatally-stabbed/
Tue, 30 Apr 2024 07:19:53 +0000

Editor’s Note: The news article below is a mostly unbiased report about facial recognition technology advancing in cities around the world. It specifically highlights Los Angeles, but we can expect similar pushes for “common sense” uses of facial recognition in most major metros soon.

For the record, this publication adamantly opposes such moves. Many frame it as a privacy issue. We see it as a tool for near-future tyranny. Once installed, it is inevitable that this type of technology will be widely and grossly abused by both public and private entities for the sake of their “greater good.” With that said, here’s Joel R. McConvey from Biometric Update...


Transit officials in Los Angeles have declared a public safety emergency over the stabbing of a 66-year-old woman in the city’s Metro transit system, and are planning to deploy facial recognition tools to help identify repeat offenders and deter violent crime, according to reports from the Los Angeles Times and Los Angeles Daily News.

Beatings, stabbings and other violent incidents have been rising on L.A.’s public buses and trains, including four attacks in April. The perpetrator of the stabbing attack that killed Mirna Soza Arauz had a prior ban from the transit system for violent altercations. But Metro says its officers had no way of knowing that a dangerous individual was riding the train. Had facial recognition systems been in place, they might have made the match.

Calling Soza Arauz’s death “a shot across the bow,” the Metro board has given unanimous support to a motion asking the CEO to report back in two months on the feasibility of facial recognition deployments on buses and trains.

The situation is being framed in the direst of terms by those who initiated the request. “Our agency has grappled with a very real and unacceptable level of violence, illicit drug use, sales and overdoses, and a blatant disregard for the law, our code of conduct and, quite frankly, basic human decency,” says board member and Los Angeles County Supervisor Kathryn Barger. “Until we completely reverse security reality on our system, I’m concerned that we will never come back.”

FRT payments common but security use cases come with privacy concerns

Facial recognition and other biometric systems have been trialed or installed in transit systems around the world, most often for payments. Deployments in Moscow, Mumbai, Shanghai and Indonesia have differed in scale, modality and approach. For security purposes, Bogota deployed facial recognition software from Corsight AI for real-time surveillance of the city’s TransMilenio system, which resulted in six arrests. And São Paulo outfitted its 3-Red subway line with face biometrics and object detection systems that trigger alerts for security operators.

One place that transit riders will not be able to use facial recognition to pay for their rides any time soon is New York City. Gothamist reports on a new law that requires the Metropolitan Transportation Authority to “not use, or arrange for the use, of biometric identifying technology, including but not limited to facial recognition technology, to enforce rules relating to the payment of fares.”

Cautious approach to facial recognition depends on perspective

Academia has typically recommended a cautious approach to using facial recognition for law enforcement in public spaces – although that caution takes different forms and focuses. An article in the Cambridge Law Journal from December 2023 advocates for an incremental approach to regulating the technology. Per the abstract, “by analyzing legislative instruments, judicial decisions, deployment practices of UK law enforcement authorities, various procedural and policy documents, as well as available safeguards, the article suggests incremental adjustments to the existing legal framework instead of sweeping regulatory change.”

Other voices in the debate, however, argue that advances in facial recognition technology are outpacing laws and regulations, and that a swift, comprehensive response should be the government’s primary concern. In a new report entitled “Facial Recognition Technology: Current Capabilities, Future Prospects, and Governance,” the National Academies of Sciences, Engineering and Medicine “recommends consideration of federal legislation and an executive order” on facial recognition tools.

“An outright ban on all FRT under any condition is not practically achievable, may not necessarily be desirable to all, and is in any event an implausible policy, but restrictions or other regulations are appropriate for particular use cases and contexts,” says the report. “In light of the fact that FRT has the potential for mass surveillance of the population, courts and legislatures will need to consider the implications for constitutional protections related to surveillance, such as due process and search and seizure thresholds and free speech and assembly rights.”

Meanwhile, the U.S. Commission on Civil Rights has launched an investigation into facial recognition and its use by American federal agencies.

The Era of the Celebrity Deepfakes Has Begun, and It May Kill What Little Trust People Still Have
https://americanconservativemovement.com/the-era-of-the-celebrity-deepfakes-has-begun-and-it-may-kill-what-little-trust-people-still-have/
Wed, 31 Jan 2024 15:03:22 +0000

(Biometric Update)—U.S. President Joe Biden is not robocalling voters to tell them not to vote in state primaries – and Pindrop knows which AI text-to-speech (TTS) engine was used to fake his voice. A post written by the voice fraud detection firm’s CEO says its software analyzed spectral and temporal artifacts in the audio to determine that the biometric deepfake came from generative speech synthesis startup ElevenLabs.

“Pindrop’s deepfake engine analyzed the 39-second audio clip through a four-stage process,” writes CEO Vijay Balasubramaniyan. “Audio filtering & cleansing, feature extraction, breaking the audio into 155 segments of 250 milliseconds each, and continuous scoring all 155 segments of the audio.” Each segment is assigned a liveness score indicating potential artificiality.

Pindrop’s system replicates end-user listening conditions by simulating typical phone channel conditions. Using a deep neural network, it outputs low-level spectro-temporal features as a fakeprint – “a unit-vector low-rank mathematical representation preserving the artifacts that distinguish between machine-generated vs. genuine human speech.” Artifacts tend to show up more prominently in phrases with linguistic fricatives and, in the case of the Biden audio, in phrases the president is unlikely to have uttered.
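The segmentation-and-scoring step Balasubramaniyan describes – cutting the clip into 250-millisecond windows and assigning each window its own liveness score – can be sketched roughly as follows. This is an illustrative outline only: the function names and the placeholder scorer are assumptions for the sketch, not Pindrop's actual code or model.

```python
# Rough sketch of per-window liveness scoring, assuming fixed-length,
# non-overlapping 250 ms windows. liveness_score is a stand-in placeholder;
# a real detector would run a deep neural network over spectro-temporal features.

def segment_audio(samples, sample_rate, window_ms=250):
    """Split samples into consecutive, non-overlapping windows of window_ms."""
    window_len = int(sample_rate * window_ms / 1000)
    return [samples[i:i + window_len]
            for i in range(0, len(samples) - window_len + 1, window_len)]

def liveness_score(window):
    """Placeholder scorer: returns a dummy value instead of model output."""
    return 0.5

def score_clip(samples, sample_rate):
    """Assign a liveness score to every 250 ms window of the clip."""
    return [liveness_score(w) for w in segment_audio(samples, sample_rate)]

# Example: 38.75 s of silence at 8 kHz splits into exactly 155 windows,
# matching the 155 segments the article reports for the Biden clip.
scores = score_clip([0.0] * 310000, sample_rate=8000)
print(len(scores))  # 155
```

In a real pipeline the windows would first pass through the filtering and feature-extraction stages the article lists, and the per-window scores would then be aggregated into an overall verdict on the clip.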

Balasubramaniyan points out that, “even though the attackers used ElevenLabs this time, it is likely to be a different Generative AI system in future attacks.” For its part, ElevenLabs has suspended the creator of the Biden deepfake, according to Bloomberg.

The Pindrop Co-founder and CEO wrote about the potential of biometric liveness detection as a defense against deepfakes in an August Biometric Update guest post.

Cause the fakers gonna fake, fake, fake, deepfake

Few forces in the current universal order command as much attention and have as much power to cause major shifts in culture as generative AI. One such force, however, is Swifties. Fans of Taylor Swift have mustered a campaign to purge the internet of pornographic deepfakes of the iconic performer that generated millions of views on the social media network X, Elon Musk’s less-regulated incarnation of Twitter. The issue has even reached the White House, which expressed “alarm” at the circulation of the fake Swift images.

Speaking to ABC News, White House Press Secretary Karine Jean-Pierre said that “while social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people.”

In response to the concern, X temporarily paused searches for the singer’s name and pledged to help Swifties get the images taken down. The user accused of creating the images, Toronto man Zubear Abdi, has made his account private. Toronto-based music publication Exclaim! reports that Swift is considering suing Abdi.

But, it says, the Swifties may get to him first.

The bipartisan “Preventing Deepfakes of Intimate Images Act,” drafted to address the issue of sexually explicit AI-generated deepfakes, has been referred to the U.S. House Committee on the Judiciary.
