Creeping Tyranny: Facial Recognition Coming to LA Transit After Passenger Fatally Stabbed

Editor’s Note: The news article below is a mostly unbiased report about facial recognition technology advancing in cities around the world. It specifically highlights Los Angeles, but we can expect similar pushes for “common sense” uses of facial recognition in most major metros soon.

For the record, this publication adamantly opposes such moves. Many frame it as a privacy issue. We see it as a tool for near-future tyranny. Once installed, this type of technology will inevitably be widely and grossly abused by both public and private entities for the sake of their “greater good.” With that said, here’s Joel R. McConvey from Biometric Update…


Transit officials in Los Angeles have declared a public safety emergency over the stabbing of a 66-year-old woman in the city’s Metro transit system, and are planning to deploy facial recognition tools to help identify repeat offenders and deter violent crime, according to reports from the Los Angeles Times and Los Angeles Daily News.

Beatings, stabbings and other violent incidents have been rising on L.A.’s public buses and trains, including four attacks in April. The perpetrator of the stabbing attack that killed Mirna Soza Arauz had a prior ban from the transit system for violent altercations. But Metro says its officers had no way of knowing that a dangerous individual was riding the train. Had facial recognition systems been in place, they might have made the match.

Calling Soza Arauz’s death “a shot across the bow,” the Metro board has given unanimous support to a motion asking the CEO to report back in two months on the feasibility of facial recognition deployments on buses and trains.

The situation is being framed in the direst of terms by those who initiated the request. “Our agency has grappled with a very real and unacceptable level of violence, illicit drug use sales and overdoses, and a blatant disregard for the law, our code of conduct and, quite frankly, basic human decency,” says board member and Los Angeles County Supervisor Kathryn Barger. “Until we completely reverse security reality on our system, I’m concerned that we will never come back.”

FRT payments common but security use cases come with privacy concerns

Facial recognition and other biometric systems have been trialed or installed in transit systems around the world, most often for payments. Deployments in Moscow, Mumbai, Shanghai and Indonesia have differed in scale, modality and approach. For security purposes, Bogota deployed facial recognition software from Corsight AI for real-time surveillance of the city’s TransMilenio system, which resulted in six arrests. And Sao Paulo outfitted its Line 3-Red subway with face biometrics and object detection systems that trigger alerts for security operators.

One place that transit riders will not be able to use facial recognition to pay for their rides any time soon is New York City. Gothamist reports on a new law that requires the Metropolitan Transportation Authority to “not use, or arrange for the use, of biometric identifying technology, including but not limited to facial recognition technology, to enforce rules relating to the payment of fares.”

Cautious approach to facial recognition depends on perspective

Academia has typically recommended a cautious approach to using facial recognition for law enforcement in public spaces – although that caution takes different forms and focuses. An article in the Cambridge Law Journal from December 2023 advocates for an incremental approach to regulating the technology. Per the abstract, “by analyzing legislative instruments, judicial decisions, deployment practices of UK law enforcement authorities, various procedural and policy documents, as well as available safeguards, the article suggests incremental adjustments to the existing legal framework instead of sweeping regulatory change.”

Other voices in the debate, however, argue that advances in facial recognition technology are outpacing laws and regulations, and that a swift, comprehensive response should be the government’s primary concern. In a new report entitled “Facial Recognition Technology: Current Capabilities, Future Prospects, and Governance,” the National Academies of Sciences, Engineering and Medicine “recommends consideration of federal legislation and an executive order” on facial recognition tools.

“An outright ban on all FRT under any condition is not practically achievable, may not necessarily be desirable to all, and is in any event an implausible policy, but restrictions or other regulations are appropriate for particular use cases and contexts,” says the report. “In light of the fact that FRT has the potential for mass surveillance of the population, courts and legislatures will need to consider the implications for constitutional protections related to surveillance, such as due process and search and seizure thresholds and free speech and assembly rights.”

Meanwhile, the U.S. Commission on Civil Rights has launched an investigation into facial recognition and its use by American federal agencies.

The Era of the Celebrity Deepfakes Has Begun, and It May Kill What Little Trust People Still Have

(Biometric Update)—U.S. President Joe Biden is not robocalling voters to tell them not to vote in state primaries – and Pindrop knows which AI text-to-speech (TTS) engine was used to fake his voice. A post written by the voice fraud detection firm’s CEO says its software analyzed spectral and temporal artifacts in the audio to determine that the biometric deepfake came from generative speech synthesis startup ElevenLabs.

“Pindrop’s deepfake engine analyzed the 39-second audio clip through a four-stage process,” writes CEO Vijay Balasubramaniyan. “Audio filtering & cleansing, feature extraction, breaking the audio into 155 segments of 250 milliseconds each, and continuous scoring all 155 segments of the audio.” Each segment is assigned a liveness score indicating potential artificiality.
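
To make the segmentation step above concrete, here is a minimal sketch – not Pindrop’s actual code – that cuts a clip into 250 ms windows and assigns each one a liveness score. The 16 kHz sample rate and the score_liveness placeholder are assumptions; in the real system a trained deep neural network produces the score.

```python
# Minimal sketch of segment-and-score, assuming a 16 kHz clip.
# score_liveness() is a hypothetical stand-in for a real detector model.
import numpy as np

SAMPLE_RATE = 16_000           # assumed sample rate (Hz)
SEGMENT_MS = 250               # 250 ms segments, per the article

def score_liveness(segment: np.ndarray) -> float:
    """Placeholder score in [0, 1]; a real system would run a neural network here."""
    return float(np.clip(np.std(segment), 0.0, 1.0))

def score_clip(audio: np.ndarray) -> list[float]:
    hop = SAMPLE_RATE * SEGMENT_MS // 1000                        # samples per segment
    segments = [audio[i:i + hop] for i in range(0, len(audio) - hop + 1, hop)]
    return [score_liveness(s) for s in segments]                  # one score per segment

clip = np.random.randn(39 * SAMPLE_RATE)   # a 39-second clip, like the Biden robocall
print(len(score_clip(clip)))               # ~155-156 segments of 250 ms each
```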

Pindrop’s system replicates end-user listening conditions by simulating typical phone channel conditions. Using a deep neural network, it outputs low-level spectro-temporal features as a fakeprint – “a unit-vector low-rank mathematical representation preserving the artifacts that distinguish between machine-generated vs. generic human speech.” Artifacts tend to show up more prominently in phrases with linguistic fricatives and, in the case of the Biden audio, in phrases the president is unlikely to have uttered.
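
As a rough, hypothetical illustration of the “fakeprint” idea: project the extracted spectro-temporal features into a low-rank space and normalize the result to a unit vector. The random projection matrix below merely stands in for the trained deep neural network the post describes.

```python
# Illustrative only: a "fakeprint" as a unit-vector, low-rank embedding.
# W is a hypothetical stand-in for a trained network.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 1024))          # assumed: 1024-dim features -> 64-dim embedding

def fakeprint(features: np.ndarray) -> np.ndarray:
    """Low-rank projection followed by L2 normalization (unit vector)."""
    z = W @ features
    return z / np.linalg.norm(z)

feats = rng.standard_normal(1024)            # stand-in for spectro-temporal features
fp = fakeprint(feats)
print(fp.shape, round(float(np.linalg.norm(fp)), 3))   # (64,) 1.0
```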

Balasubramaniyan points out that, “even though the attackers used ElevenLabs this time, it is likely to be a different Generative AI system in future attacks.” For its part, ElevenLabs has suspended the creator of the Biden deepfake, according to Bloomberg.

The Pindrop co-founder and CEO wrote about the potential of biometric liveness detection as a defense against deepfakes in an August Biometric Update guest post.

’Cause the fakers gonna fake, fake, fake, deepfake

Few forces in the current universal order command as much attention and have as much power to cause major shifts in culture as generative AI. One such force, however, is Swifties. Fans of Taylor Swift have mustered a campaign to purge the internet of pornographic deepfakes of the iconic performer that generated millions of views on the social media network X, Elon Musk’s less-regulated incarnation of Twitter. The issue has even reached the White House, which expressed “alarm” at the circulation of the fake Swift images.

Speaking to ABC News, White House Press Secretary Karine Jean-Pierre said that “while social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people.”

In response to the concern, X temporarily paused searches for the singer’s name and pledged to help Swifties get the images taken down. The user accused of creating the images, Toronto man Zubear Abdi, has made his account private. Toronto-based music publication Exclaim! reports that Swift is considering suing Abdi.

But, it says, the Swifties may get to him first.

The bipartisan “Preventing Deepfakes of Intimate Images Act,” drafted to address the issue of sexually explicit AI-generated deepfakes, has been referred to the U.S. House Committee on the Judiciary.

Biometrics in Retail Sparks Concerns Among Consumers, Privacy Advocates

Companies are pitching biometric payments as a solution to fraud and theft. But their fast expansion is making some consumers and privacy advocates wary – especially when companies do not offer alternative payment options. By 2026, almost $5.8 trillion in payments are expected to be made using biometrics each year, according to a Goode Intelligence forecast.

The latest example comes from California. Biometrics fintech firm PopID invited scrutiny at a student event at the University of Southern California (USC) campus in Los Angeles after leaving attendees no way to purchase food inside the venue other than its facial recognition payment system, PopPay.

“[It’s] slightly coercive, because you’re not really being given a choice between normal payment methods and using your face, which is a pretty intimate subject matter,” USC student Vera Wang told student paper The Daily Trojan.

PopID explained that the company was “a paid sponsor of promotional events to market our products.” However, the move has invited questions about PopID’s commitment to privacy and data security.

The company, co-founded and seeded by food and retail conglomerate Cali Group, says it complies with the strictest law in the United States regarding facial recognition data, the Illinois Biometric Information Privacy Act. However, the student paper notes that its privacy policy states that it “cannot guarantee the security of your data transmitted to our site.”

Privacy advocates issue warnings

Like many other biometric payment companies, PopID has been busy this year, partnering with restaurants such as Steak ‘n Shake, with Tyme’s self-checkout restaurant kiosks, and with Samsung’s POS kiosks. Similar efforts are being made by Amazon, Mastercard, Clear and JPMorgan Chase. The latter piloted palm- and face-based payments at the Miami Grand Prix Formula One race in May.

January research from Research and Markets supports the idea that consumers can be reassured that biometric payments are safe. But privacy advocates have raised concerns about the risk of biometric information being stolen by identity thieves or abused by law enforcement agencies, Bloomberg Law reports.

Digital rights group Fight for the Future, for instance, has been organizing an online petition calling on grocery stores not to include Amazon’s palm-scanning technology as a payment option. The group warns that sensitive data could potentially be abused, hacked or stolen.

Cobun Zweifel-Keegan, managing director of the International Association of Privacy Professionals trade group, notes that companies usually don’t keep raw biometric information but instead store a computer’s interpretation of a physical feature, like a set of numbers.
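
In other words, what typically sits in a vendor’s database is a numeric template rather than a photograph, and matching is done on those numbers. The sketch below illustrates the general idea only; embed() is a hypothetical placeholder, not any specific company’s scheme.

```python
# Illustrative sketch: a biometric "template" as a vector of numbers, not a raw image.
# embed() is a hypothetical placeholder for a trained face/palm embedding model.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Reduce an image to a normalized 128-number template (placeholder logic)."""
    flat = image.astype(np.float64).ravel()[:128]
    flat = np.pad(flat, (0, 128 - flat.size))                  # pad if the image is tiny
    return flat / (np.linalg.norm(flat) + 1e-9)

def same_person(t1: np.ndarray, t2: np.ndarray, threshold: float = 0.8) -> bool:
    """Compare stored templates by cosine similarity instead of comparing images."""
    return float(t1 @ t2) > threshold

enrolled = embed(np.random.rand(64, 64))     # stored at enrollment: numbers, not the photo
probe = embed(np.random.rand(64, 64))        # captured at the checkout
print(same_person(enrolled, probe))
```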

But other experts, such as Jen King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, maintain that hackers or fraudsters could try to combine scans with other pieces of consumer data.

“If I look at an image of a palm, I probably can’t tell it’s you versus me necessarily,” King told Bloomberg. “But that doesn’t say it’s not identifiable, because if it wasn’t identifiable they wouldn’t be using it.”

The U.S. has seen piecemeal efforts to regulate biometric payments, including a state-level bill sponsored by New York State Senator James Skoufis. An earlier request to the Washington State Liquor and Cannabis Board to allow the use of biometrics for age verification for restricted purchases was tanked, according to a board spokesman.

Meanwhile, similar skepticism about the tech is on the rise in other parts of the world.

Australian privacy group warns about biometrics in retail

The Australian Privacy Foundation is warning that increasing CCTV usage in stores is a major concern. The government is also looking into facial recognition tools, with the Attorney-General’s Department recently completing a comprehensive review of the Privacy Act, according to the Sydney Morning Herald.

The concerns were sparked by recent investment by Australian supermarkets in surveillance following a surge in shoplifting: last year it was revealed that Kmart and Bunnings had introduced facial recognition technology in stores, prompting an investigation by the Office of the Australian Information Commissioner (OAIC).

A more recent subject of controversy is Woolworths. The supermarket chain announced it will invest $40 million in CCTV upgrades, body-worn cameras and other devices. An average Woolworths store has 62 CCTV cameras, while self-checkout desks are equipped with six to eight cameras, including an AI system that determines whether the correct items are being scanned.

The non-government organization says that while supermarket employees are likely not accessing or analyzing this data, external service providers have this capability. The prospect of the data collected at the supermarket proliferating raises the possibility that biometric technology could be applied to it after the fact, or without customers being aware of it.

“There’s the lack of reciprocity when you have technology like this. You don’t get to know what a company is doing, so you can’t even decide if you don’t want to be paranoid,” says Australian Privacy Foundation Chair David Vaile.

According to Woolworths, stock monitoring cameras record silhouettes of customers or staff, while the self-serve checkout cameras blur faces, black out PIN pads and are not viewed live. All CCTV footage is stored locally and accessed only by store team leaders and the investigation teams, along with police if necessary, while self-scan checkout footage is stored in Australia.

About the Author

Masha Borak is a technology journalist. Her work has appeared in Wired, Business Insider, Rest of World, and other media outlets. Previously she reported for the South China Morning Post in Hong Kong. Reach out to her on LinkedIn. Article cross-posted from Biometric Update.
