To sort metro passengers into different channels, Beijing is reportedly planning to use facial recognition, eliminating the need for manual security checks for some riders. The city administration also plans to draw on its credit system: individuals placed on a white list will be offered expedited security clearance. The initiative arose from overcrowding in the metro and friction between passengers and metro security over slow screening procedures. Zhan Minghui, director of the Beijing Rail Traffic Control Centre, said on October 29 that the plan involves installing cameras that scan passengers' faces as they enter a subway station and sort them into different security channels. If the system flags a face as abnormal, further checks will be conducted. The criteria for creating the white list were not revealed.
"The technique aims to improve the efficiency of security checks and includes both body checks and luggage screening when large numbers of passengers enter the station," he told an urban transportation forum.
Earlier, in May, the Beijing administration said it had started deducting credit points from passengers who eat in railway carriages. The metro currently handles over 12 million trips on a typical workday, a figure expected to rise to 17 million by 2022, according to Beijing authorities. It was not clear when the system will be deployed in stations. The technology is spreading across China for almost every kind of surveillance, much of it still in planning and to be implemented gradually. Consumers seem to welcome the technology, though analysts have warned of privacy breaches. A few days ago, the Universal Studios amusement park under construction in Beijing said it will admit visitors through facial recognition instead of tickets.
A California law enforcement agency recently asked Microsoft to install facial recognition technology in its officers' cars and body cameras. Microsoft rejected the request over human rights concerns, company president Brad Smith said. According to Microsoft, the technology could lead to innocent women and minorities being unfairly held for questioning, because the underlying artificial intelligence (AI) has been trained predominantly on images of white men. Many studies and incidents in the past have documented such bias, with AI systems failing to correctly identify women and minorities.