Microsoft has outlined a set of principles for this area in a 27-page document laying out its vision of responsible artificial intelligence. Here are the key points to keep in mind.

Among several significant announcements, the Redmond company said it would limit access to the facial recognition technologies in its Azure Face API, Computer Vision, and Video Indexer platforms in order to comply with its own standards. Azure capabilities used to infer "emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup" will also be retired, according to Natasha Crampton, Microsoft's Chief Responsible AI Officer.

"We recognize that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve," Crampton explains. In other words, to avoid abuse, an AI system must be a specific solution to a specific problem and must not be given capabilities beyond the task assigned to it. Microsoft adds that it is crucial for private companies to adopt a responsible stance toward artificial intelligence, because regulation is still largely unprepared to prevent unethical uses of these technologies.

Microsoft is targeting more than facial recognition. The speech technology behind Azure AI's Custom Neural Voice feature is also affected. In that case, it was reworked to reduce discrimination after a March 2020 study found that speech recognition systems produced markedly higher error rates for Black speakers. Beyond privacy concerns, facial recognition has likewise been criticized for its high failure rates for women and people of color.