The US government's use of facial recognition technology to identify immigrants and citizens alike has drawn sustained controversy, and a recent investigation revealed that the app in question, called Mobile Fortify, is not designed to reliably verify identities. The app was approved by the Department of Homeland Security (DHS) without proper scrutiny, and its deployment has been criticized for undermining civil liberties and privacy.
According to records reviewed by Wired, Mobile Fortify has been used over 100,000 times since its launch in May 2025, but it is not capable of providing a positive identification of individuals. The technology relies on matching algorithms developed by the NEC Corporation of America, which has been shown to be inaccurate when images are taken outside controlled settings.
The app's design prioritizes speed and scale over accuracy, with a match threshold that can be adjusted dynamically based on operational factors such as system load. This means that even when the technology is not confident in a match, it may still surface a candidate match for human review.
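To make the concern concrete, the following is a minimal, purely hypothetical sketch of how a dynamically adjusted match threshold can surface low-confidence candidates. All names, numbers, and logic here are illustrative assumptions; nothing below reflects Mobile Fortify's actual implementation.

```python
# Hypothetical sketch only: names and numbers are illustrative assumptions,
# not details of Mobile Fortify or NEC's matcher.
from dataclasses import dataclass

@dataclass
class MatchCandidate:
    gallery_id: str
    score: float  # similarity score from the matcher, 0.0 to 1.0

def effective_threshold(base: float, system_load: float) -> float:
    # Assumption: under heavy load, the threshold is relaxed so more
    # candidates pass through to human review rather than being re-run.
    relaxation = 0.1 * min(system_load, 1.0)
    return max(base - relaxation, 0.0)

def surface_candidates(candidates, base_threshold=0.85, system_load=0.0):
    threshold = effective_threshold(base_threshold, system_load)
    # Once the threshold drops, lower-confidence matches are surfaced too.
    return [c for c in candidates if c.score >= threshold]

candidates = [MatchCandidate("A", 0.90), MatchCandidate("B", 0.78)]
print(len(surface_candidates(candidates, system_load=0.0)))  # strict: 1
print(len(surface_candidates(candidates, system_load=1.0)))  # relaxed: 2
```

The point of the sketch is that a threshold tied to operational conditions, rather than to matcher confidence, changes which faces a human reviewer ever sees, which is exactly the accuracy-versus-scale trade-off the investigation describes.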
DHS has used Mobile Fortify to scan the faces of targeted individuals, as well as people later confirmed to be US citizens and others who were observing or protesting enforcement activity. The data collected through Fortify is stored in databases linked by a centralized platform called the Automated Targeting System (ATS), which can be accessed by other agencies beyond CBP's control.
The use of facial recognition technology has been criticized for its invasive nature, with experts arguing that it gives a veneer of certainty when there isn't one. Senator Ed Markey has warned that DHS officials have suggested building a database to catalog people who protest or observe immigration enforcement, citing public statements and internal directives.
The investigation also found that CBP has assumed responsibility for conducting and adjudicating its own privacy reviews of its facial recognition deployments, despite federal guidelines requiring an independent privacy assessment when an agency deploys new technology. The director of the Electronic Privacy Information Center's surveillance oversight program described this approach as "cavalier" and warned that it undermines what little oversight is in place.
In response to these concerns, Senator Markey and colleagues have introduced legislation aimed at prohibiting ICE and CBP from using certain facial-recognition and biometric surveillance tools. The bill, titled the ICE Out of Our Faces Act, would prohibit the use of facial recognition technology for law-enforcement or civil-enforcement actions and would require that US citizens be given the right to opt out when collection is not for a law enforcement purpose.
Overall, the investigation highlights the need for greater transparency and oversight in the use of facial recognition technology by government agencies, particularly those responsible for immigration enforcement. It also underscores the importance of protecting civil liberties and privacy in the face of rapidly advancing technologies that can be used to surveil and identify individuals without consent.