A student is suing Apple Inc for $1bn (£0.77bn), claiming that its in-store AI led to his mistaken arrest.
Ousmane Bah, 18, said he was accused of stealing from Apple Stores in four US states, and arrested at his home in New York last autumn.
He believes Apple's algorithms linked video footage of the thief with his name, leading to the charges.
Apple has told the BBC that it does not use facial recognition technology in its stores.
Mr Bah claims that a detective reviewed security footage from the time of one of the crimes and found the thief looked "nothing like" him.
Mr Bah had previously lost his provisional driving licence, which he believes may have been used by the thief during the robberies. The licence is not meant to be used for identification purposes, and does not include a photograph.
Mr Bah believes that Apple's algorithms are now trained to connect his name to images of the thief.
A detective with the New York Police Department allegedly told Mr Bah that the thief probably used Mr Bah's driving licence as identification during one of the robberies. The detective reportedly said that this may have caused Mr Bah to be charged with thefts committed at Apple Stores in New York, Delaware, New Jersey and Massachusetts, according to court papers.
Mr Bah said one of the charges was for the theft of Apple Pencils from a store in Boston - a city he had never visited. On the date of the robbery, he says he was attending his senior prom in New York.
Mr Bah claims that travelling to different states to respond to charges filed against him has affected his college attendance, and his grades have suffered as a result.
Apple's Face ID technology caused a stir when it was launched on the iPhone X in 2017, with commentators concerned that users' biometric data could be hacked if they used the feature. As far as is known, this is the first case against Apple that claims its facial recognition technology has been used to identify customers who have visited its stores.
Amazon: Facial recognition bias claims are 'misleading'
Amazon has defended its facial-recognition tool, Rekognition, against claims of racial and gender bias, following a study published by the Massachusetts Institute of Technology.
The researchers compared tools from five companies, including Microsoft and IBM.
While none was 100% accurate, the study found that Amazon's Rekognition tool performed the worst when it came to recognising women with darker skin.
Amazon said the study was "misleading".
The study found that Amazon had an error rate of 31% when identifying the gender of images of women with dark skin.
This compared with a 22.5% rate from Kairos, which offers a rival commercial product, and a 17% rate from IBM.
By contrast, Amazon, Microsoft and Kairos all successfully identified images of light-skinned men 100% of the time.
The tools work by assigning each potential match a probability score, indicating how confident the system is that the match is correct.
Facial-recognition tools are trained on huge datasets of hundreds of thousands of images.
But there is concern that many of these datasets are not sufficiently diverse to enable the algorithms to learn to correctly identify non-white faces.
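The scoring and thresholding described above can be illustrated with a minimal sketch. The function, the candidate names and the similarity scores are all hypothetical, and real systems compute scores from learned face embeddings rather than a lookup table:

```python
# Minimal, hypothetical sketch of thresholding facial-recognition
# probability scores. Names and scores below are illustrative only.

def best_match(probe_scores, threshold=0.99):
    """Return the highest-scoring candidate if its score clears the
    threshold, or None if no candidate does."""
    name, score = max(probe_scores.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else None

# Hypothetical scores for one probe image against a watch list.
scores = {"person_a": 0.97, "person_b": 0.64, "person_c": 0.31}

print(best_match(scores))                 # 0.97 misses the 99% bar -> None
print(best_match(scores, threshold=0.9))  # -> ('person_a', 0.97)
```

The threshold choice is the operational lever: a lower threshold flags more candidates but produces more false alarms, which is the trade-off at issue in the bias studies discussed here.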
Clients of Rekognition include a company that provides tools for US law enforcement, a genealogy service and a Japanese newspaper, according to the Amazon Web Services website.
Use with caution
In a blog post, Dr Matt Wood, general manager of artificial intelligence at AWS, highlighted several concerns about the study, including that it did not use the latest version of Rekognition.
He said the findings from MIT did not reflect Amazon's own research, which had used 12,000 images of men and women of six different ethnicities.
"Across all ethnicities, we found no significant difference in accuracy with respect to gender classification," he wrote.
He also said the company advised law enforcement to act on machine-learning facial-recognition results only when the confidence score was listed at 99% or higher, and never to use them as the sole source of identification.
"Keep in mind that our benchmark is not very challenging. We have profile images of people looking straight into a camera. Real-world conditions are much harder," said MIT researcher Joy Buolamwini in a Medium post responding to Dr Wood's criticisms.
In an earlier YouTube video, published in June 2018, MIT researchers showed various facial-recognition tools, including Amazon's, suggesting that the US TV presenter Oprah Winfrey was probably a man, based on a professional photograph of her.
"The main message is to check all systems that analyse human faces for any kind of bias. If you sell one system that has been shown to have bias on human faces, it is doubtful your other face-based products are also completely bias free," Ms Buolamwini said.
Face recognition police tools 'staggeringly inaccurate'
Police must address concerns over the use of facial recognition systems or may face legal action, the UK's privacy watchdog says.
Information Commissioner Elizabeth Denham said the issue had become a "priority" for her office.
An investigation by campaign group Big Brother Watch suggested the technology flagged up a "staggering" number of innocent people as suspects.
But police have defended the technology and say safeguards are in place.
Which police forces are using facial recognition?
Big Brother Watch submitted freedom of information requests to every police force in the UK.
Two police forces acknowledged they were currently testing facial recognition cameras.
The Metropolitan Police used facial recognition at London's Notting Hill carnival in 2016 and 2017 and at a Remembrance Sunday event.
Its system incorrectly flagged 102 people as potential suspects and led to no arrests.
In figures given to Big Brother Watch, South Wales Police said its technology had made 2,685 "matches" between May 2017 and March 2018 - but 2,451 were false alarms.
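The South Wales figures above imply a striking false-alarm rate, which can be checked directly:

```python
# False-alarm rate implied by the South Wales Police figures:
# 2,685 "matches" between May 2017 and March 2018, of which
# 2,451 were false alarms.
matches = 2685
false_alarms = 2451

false_alarm_rate = false_alarms / matches
print(f"{false_alarm_rate:.1%}")  # about 91.3% of matches were false alarms
```

In other words, roughly nine in ten of the system's matches over that period pointed at the wrong person, which is the basis for Big Brother Watch's "staggeringly inaccurate" description.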
Leicestershire Police tested facial recognition in 2015, but is no longer using it at events.
How does it work?
Police facial recognition cameras have been trialled at events such as football matches, festivals and parades.
Any potential matches are flagged for a police officer to investigate further.
How have the police forces responded?
South Wales Police has defended its use of facial recognition software and says the system has improved with time.
"When we first deployed and we were learning how to use it... some of the digital images we used weren't of sufficient quality," said Deputy Chief Constable Richard Lewis. "Because of the poor quality, it was identifying people wrongly. They weren't able to get the detail from the picture."
It said a "number of safeguards" prevented any action being taken against innocent people.
"Firstly, the operator in the van is able to see that the person identified in the picture is clearly not the same person, and it's literally disregarded at that point," said Mr Lewis.
"On a much smaller number of occasions, officers went and spoke to the individual... realised it wasn't them, and offered them the opportunity to come and see the van.
"At no time was anybody arrested wrongly, nobody's liberty was taken away from them."
'Checks and balances'
The Metropolitan Police told the BBC it was testing facial recognition to see whether it could "assist police in identifying known offenders in large events, in order to protect the wider public".
"Regarding 'false' positive matches - we do not consider these as false positive matches because additional checks and balances are in place to confirm identification following system alerts," it said in a statement.
"All alerts against the watch list are deleted after 30 days. Faces in the video stream that do not generate an alert are deleted immediately."
But Big Brother Watch said it was concerned that facial recognition cameras would affect "individuals' right to a private life and freedom of expression".
It also raised concerns that photos of any "false alarms" were sometimes kept by police for weeks.
"Automated facial recognition technology is currently used by UK police forces without a clear legal basis, oversight or governmental strategy," the group said.
What does Big Brother Watch want?
Big Brother Watch wants police to stop using facial recognition technology. It has also called on the government to make sure that the police do not keep the photos of innocent people.
Information Commissioner Elizabeth Denham said police had to demonstrate that facial recognition was "effective" and that no less intrusive methods were available.
"Should my concerns not be addressed I will consider what legal action is needed to ensure the right protections are in place for the public," said Ms Denham.
The Home Office told the BBC it plans to publish its biometrics strategy in June, and it "continues to support police to respond to changing criminal activity and new demands".
"When trialling facial recognition technologies, forces must show regard to relevant policies, including the Surveillance Camera Code of Practice and the Information Commissioner's guide," it said in a statement.