Government authorities and industries already maintain and analyze massive collections of interrelated datasets that contain highly personal information.
Take insurance companies, for example.
They collect health data and track driving behavior to personalize insurance fees, while law enforcement agencies use driver’s license photos to identify criminals and suspects.
Oh, and shopping centers? They analyze people’s facial features to improve their targeted advertising.
Datasets such as these may seem harmless, but they are usually analyzed by “black box” algorithms, where the logic and justification of the predictions are not transparent.
Additionally, it is not easy to know whether a prediction is based on false data, data acquired illegally or unethically, or data that rests on incorrect assumptions.
For example, a traffic camera might incorrectly detect you speeding and trigger the cancellation of your license, or a surveillance camera might mistake a handshake for a drug deal.
But even if the underlying data proves correct, the opacity of AI processes makes it hard to address the algorithmic bias found in many AI systems, bias that can be sexist, racist, or discriminatory against the poor.
So how exactly do you appeal a poor decision if the underlying data or the rationale for that decision is unavailable?
One research program, led by University of Melbourne associate professor Tim Miller, answered this question by creating an explainable AI.
Miller said their particular AI’s decision making would be presented in a manner that anyone could easily understand.
Transparent and open representations of AI capabilities could contribute to a more open discussion regarding the ethical implications of human-tracking technologies and their societal impacts.
So how exactly does the AI work?
Miller’s team created an interactive application that takes your photo and analyzes it to identify your demographic and personality characteristics. They call it the “Biometric Mirror.”
But that’s not all. It also analyzes your level of attractiveness, aggression, emotional stability, and even your “weirdness.”
Using an open dataset of thousands of facial images and crowd-sourced evaluations, the AI compares your face against that information.
The Biometric Mirror then assesses and displays your individual personality traits.
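The article does not describe the system’s internals, but the crowd-sourced comparison it mentions can be sketched as a simple nearest-neighbor lookup. Everything below, including the toy feature vectors, the trait names, and the averaging scheme, is an illustrative assumption, not the Biometric Mirror’s actual code.

```python
import math

# Hypothetical crowd-sourced dataset: each entry pairs a toy face
# "feature vector" with averaged rater scores for a few traits.
DATASET = [
    ([0.2, 0.5, 0.1], {"attractiveness": 6.1, "aggression": 2.3, "weirdness": 4.0}),
    ([0.8, 0.1, 0.4], {"attractiveness": 4.2, "aggression": 5.7, "weirdness": 6.5}),
    ([0.5, 0.4, 0.9], {"attractiveness": 5.0, "aggression": 3.1, "weirdness": 5.2}),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate_traits(face, k=2):
    """Average the crowd-sourced trait scores of the k most similar faces."""
    nearest = sorted(DATASET, key=lambda entry: distance(face, entry[0]))[:k]
    traits = {}
    for name in nearest[0][1]:
        traits[name] = sum(scores[name] for _, scores in nearest) / k
    return traits

# A new face is scored purely by its similarity to crowd-rated faces.
print(estimate_traits([0.3, 0.4, 0.2]))
```

The key point the sketch illustrates is that such a system reflects only the crowd’s perceptions encoded in the dataset: change the raters, and the “personality” scores change with them.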
One of your traits is then selected, let’s say your attention to detail, and Biometric Mirror asks you to imagine that this information is now being shared with someone, like your future employer or insurer.
And while the Biometric Mirror may seem to be a tool for psychological analysis, it actually only calculates the estimated public perception of personality traits based on facial appearance.
Rather, it is a research tool for understanding how people’s attitudes shift as more of their data is revealed, and participant interviews continue to surface ethical, social, and cultural concerns.
And while the debate over the ethics of AI use continues, there is a huge need for the public to take part, not only to raise questions about the boundaries of AI but also to help understand how concerns about privacy and mass surveillance could be addressed with help from tools like the Biometric Mirror.