What your doormat, body odor, or wandering gaze can tell us that we don’t already know.
Analysis of one’s personality can often produce surprising results. In my case, a recent in-depth examination of my personality revealed a guarded, self-protective attitude, a propensity for leadership (a bit heavy-handed, maybe), a tendency to play fairly, and a well-organized mental structure. Maybe there’s something to that old adage “messy desk, clean mind” after all.
This personality assessment isn’t the result of a psychometric test, à la Myers-Briggs or the Big Five. Instead, it was produced by FaceCode, an AI-powered biometric application created by Texas-based Phenometrix. To obtain the assessment, I didn’t have to answer any questions or meet anybody; all it took was an analysis of my LinkedIn profile photo.
“We’ve developed something that is beyond anything that facial recognition does,” explains behavioral anthropologist Roman Kikta, CEO of Phenometrix. “If you look at some of the best facial recognition systems, they’re examining 400 to 500 points of interest on the face. We’re looking at thousands.”
“In the lip area alone, there are 40 to 50 points of interest for us,” adds Kikta. “Today DNA companies can tell you not only who you are, where you come from, but they can also tell your disease susceptibility and your personality. We do reverse engineering. If we know what you look like, we can trace back to your genetic makeup.”
Regardless of the accuracy of its assessment, FaceCode represents the frontier of biometric applications for the recognition of body features for personal identification. Rather than seeking just to recognize a person’s face, the creators of FaceCode go beyond the superficial markers to develop an AI application that—so they claim—can uncover something of a person’s genetic makeup and map the facial physiognomy of their expressions to their DNA.
“There are all sorts of things that somebody could do, for example eye tracking of a political candidate and then coming up with conclusions about them as it has been written in some fictional literature. But that’s just plain old political smear tactics backed up by pseudoscience,” cautions Jon Callas, the director of technology projects at the Electronic Frontier Foundation, a San Francisco-based nonprofit devoted to digital rights and privacy. “An awful lot of that is nonsense; the same is true for all these companies claiming that they can do emotion detection. It doesn’t work very well. People are varied enough in what they do that one can’t really read into their minds that way. But if one could, it would be abusive.”
The market is indeed flooded with products that fail to live up to their promises. In a test run by the ACLU, for example, Amazon’s Rekognition API falsely matched eleven U.S. Congress members of color to mug shots of people who had been arrested. At the same time, the use of biometrics for identification, law enforcement, and marketing is skyrocketing, and as the number of applications grows, so do the capabilities of the systems.
But what about personal privacy?
A study published in early January in the journal Scientific Reports confirms this trend. Conducted by Stanford University organizational behavior researcher Michal Kosinski, the study found that by simply downloading an open-source facial-recognition algorithm freely available on the internet and analyzing a single photo posted by a subject on a social network or a dating site, researchers could accurately predict that person’s political leanings in 72 percent of cases for liberals and 68 percent for conservatives.
The ability to download the algorithm from the web underlines both the widespread availability of these tools and the level of sophistication they have reached. Kosinski stresses that because AI-based algorithms are progressively surpassing humans in visual processing tasks, ranging from recognizing skin cancer to detecting facial features, and now extending to the evaluation of personal traits such as sexual orientation, trustworthiness, and political orientation, they pose a growing threat to personal privacy and civil rights.
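The study’s pipeline can be pictured as two stages: a facial-recognition model reduces each photo to a fixed-length numeric “embedding,” and a simple linear classifier is then trained to predict a label from it. The following is a minimal, hypothetical sketch of that second stage; the embeddings are synthetic random vectors (not real face data), and the dimensionality, cluster separation, and learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 128  # a common face-embedding size in open-source models (assumed)

# Fake "embeddings": two noisy clusters standing in for two labels.
n = 200
X = np.vstack([rng.normal(+0.3, 1.0, (n, DIM)),
               rng.normal(-0.3, 1.0, (n, DIM))])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression trained by plain gradient descent.
w = np.zeros(DIM)
b = 0.0
lr = 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= lr * (X.T @ (p - y) / len(y))       # gradient step on weights
    b -= lr * np.mean(p - y)                 # gradient step on bias

pred = (X @ w + b) > 0.0
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

The point is how little machinery is needed once embeddings exist: the classifier itself is a few lines of arithmetic, which is why the privacy concern centers on the freely downloadable embedding models rather than on any exotic analysis.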
“Most people will not be able to understand, let alone control what types of information they are revealing about themselves through their eye movements.”
“In the corporate world, data processing and analysis methods are usually developed and applied behind closed doors. They are considered trade secrets and not revealed to the public,” says Jacob Leon Kröger, a researcher at the Berlin-based Weizenbaum Institute for the Networked Society. Kröger is the co-author with Otto Hans-Martin Lutz and Florian Müller of a revealing 2020 report on eye tracking that has been reverberating throughout the industry, consumer advocate organizations, and academia.
“I believe that pervasive eye tracking in smart glasses, laptops, smartphones, and AR/VR goggles will be a game changer,” continues Kröger. “Eye-tracking data may implicitly contain information about a user’s gender, age, ethnicity, body weight, personality traits, drug consumption habits, emotional state, skills and abilities, fears, interests, sexual preference. Eye trackers also capture eye opening and closure, ocular microtremors, pupil size, and pupil reactivity. But, unfortunately, our voluntary control over these parameters is minimal. Thus, most people will not be able to understand, let alone control what types of information they are revealing about themselves through their eye movements.”
According to Kröger, machine learning and big-data analysis can transform these eye movements into personal profiles for far-reaching purposes such as insurance risk assessment, credit scoring, and hiring decisions, all of which carry considerable discrimination risks. Could an AI program designed to dispense hiring advice be systematically biased against giving positive assessments of Black candidates if it were trained on a dataset of white faces, as in the cited case of Amazon’s Rekognition API?
Because eye tracking is the cornerstone of Mark Zuckerberg’s metaverse, it has instantly become the talk of the town. His $10 billion investment in the development of Meta, and his apparent intention to extend people tracking to the poses assumed by their bodies in the metaverse, have attracted everybody’s attention.
But unlike the cult graphic novel The Surrogates, and Bruce Willis’s cinematic version, in which humans sitting at home download themselves into robots, consumers will access the metaverse through an Oculus-style VR headset. Eye tracking is therefore essential: by knowing exactly what the consumer is looking at, the system can place the message, the product, or the center of interest in the high-resolution area of the image.
Eye-tracking technology is also essential to the future of smart glasses. Facebook, in partnership with Ray-Ban, has just launched a new line of wearable tech called Ray-Ban Stories, which embeds a camera and speakers into three classic Ray-Ban frames, including the iconic Wayfarer, enabling audio capture and playback, with text and images overlaid onto the user’s field of vision as the next frontier.
Magicians and market research
To preserve the illusion of a uniformly sharp visual field, the eyes dart around the scene in rapid jumps. Whatever lands at the center of gaze appears in fine detail, while the periphery stays blurry; elements can enter and exit it without the observer realizing. These rapid movements are called saccades, and during a saccade the eye is effectively blind. Magicians take advantage of this phenomenon to misdirect audiences. And now so do researchers and marketers.
“During this frequent short blindness, researchers can opportunistically change the world around us,” writes Avi Bar-Zeev, co-inventor of Microsoft’s HoloLens. “Like magicians, they can swap out one whole object for another, remove something, or rotate an entire (virtual) world around us. If they simply zapped an object, you’d likely notice. But if they do it while you blink or are in a saccade, you most likely won’t.”
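Detecting those windows of blindness is mechanically simple: an eye tracker samples gaze direction hundreds of times a second, computes the velocity between samples, and flags anything above a speed no fixation can produce. The sketch below is a hypothetical illustration on a synthetic gaze trace; the sampling rate and the ~30 degrees-per-second cutoff are assumptions in line with commonly cited values, not a specific vendor’s algorithm.

```python
import numpy as np

SAMPLE_RATE_HZ = 500          # assumed tracker sampling rate
VELOCITY_THRESHOLD = 30.0     # degrees/second; a commonly used cutoff

# Synthetic horizontal gaze angles (degrees): a fixation at 0 deg,
# a rapid 10-degree jump (the saccade), then a fixation at 10 deg.
fixation_a = np.zeros(100)
saccade = np.linspace(0.0, 10.0, 10)   # 10 deg over ~20 ms -> ~500 deg/s
fixation_b = np.full(100, 10.0)
gaze = np.concatenate([fixation_a, saccade, fixation_b])

# Angular velocity between consecutive samples.
velocity = np.abs(np.diff(gaze)) * SAMPLE_RATE_HZ
is_saccade = velocity > VELOCITY_THRESHOLD

print(f"samples flagged as saccade: {is_saccade.sum()}")
```

Everything flagged `True` marks a window in which a renderer could, as Bar-Zeev describes, swap or move virtual objects without the user noticing.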
“If there’s a breach of your password, you can change it. If there’s a breach of your gait or your earlobe or facial geometry data? That’s not so easily changed.”
Pervasive as eye tracking and face recognition already are, new methods of collecting unique biometric data are on the way that render any piecemeal approach to the problem of privacy and human rights ineffective, believes Peter Micek, legal counsel with the human rights advocacy group Access Now, which has just launched a public campaign to ban biometric surveillance.
Meanwhile, the increasing hacking of private and public internet domains, the hijacking of computer networks, the unreliability of face recognition, and users’ poor password hygiene are driving the development of novel biometric recognition techniques.
However, because physical biometrics can also be reproduced and hacked, the tendency is to adopt newer biometrics, such as vein and EKG recognition, for use in parallel with them. Vein biometrics identify people by the unique blood vessel pattern on the back of the hand; like fingerprints, these patterns develop before birth and remain stable throughout life, changing only in overall size. Other approaches include DNA matching (a non-intrusive technology that requires more time for authentication because samples must be collected) and facial thermography (which detects the heat patterns created by the branching of the blood vessels). Body odor, gait recognition, nailbed identification, and ear shape recognition are also under consideration, though their time to market remains unclear.
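Whatever the modality, the matching step tends to look alike: the scanned pattern is reduced to a fixed-length binary template, and a fresh scan is compared to the enrolled one by counting differing bits (the Hamming distance), with a tuned tolerance for sensor noise. The sketch below is hypothetical; the template length, noise rate, and threshold are illustrative assumptions, and the templates are synthetic bits rather than real vein or iris data.

```python
import numpy as np

rng = np.random.default_rng(1)
TEMPLATE_BITS = 256       # assumed template length
MATCH_THRESHOLD = 0.25    # assumed: fraction of differing bits tolerated

enrolled = rng.integers(0, 2, TEMPLATE_BITS)   # stored at enrollment
# A fresh scan of the same person: identical except ~5% sensor noise.
noise = rng.random(TEMPLATE_BITS) < 0.05
same_person = enrolled ^ noise
impostor = rng.integers(0, 2, TEMPLATE_BITS)   # an unrelated pattern

def hamming_fraction(a, b):
    """Fraction of template bits that differ between two templates."""
    return np.mean(a != b)

print("genuine distance :", hamming_fraction(enrolled, same_person))
print("impostor distance:", hamming_fraction(enrolled, impostor))
```

A genuine scan differs only by the noise rate, while an unrelated template disagrees on roughly half its bits, which is why a single threshold can separate the two. It also illustrates Micek’s point: the enrolled template is the secret, and unlike a password it cannot be rotated after a breach.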
“These are our most sensitive data and are often immutable data. If there’s a breach of your password, you can change it—regain control of your account. If there’s a breach of your gait or your earlobe or facial geometry data? That’s not so easily changed,” adds Micek. “Once you lose control of your biometrics, they’re gone. And, if more and more devices and services require biometrics for authentication, that poses a considerable risk for individuals and this entire ecosystem built on biometric authentication.”
So how does one resolve the escalating security war among hackers, regulatory authorities, and consumers? On this, Micek agrees with Thomas Serval, CEO of France-based Baracoda, maker of health tech solutions for the bathroom, including a digital mirror, a connected toothbrush, and other bathroom appliances. Serval says the answer should be multipronged: start at the grassroots with the institution of European-style, neighbor-friendly regulatory agencies and data protection authorities, then climb the ladder of civil responsibility to involve the community and, finally, big business. These community-based solutions, however, should not absolve industry from adopting security-by-design architectures. Because such architectures are conceived with built-in fail-safes, Serval stresses, they can fall back on alternative pathways when attacked, offering a greater degree of security to companies and the public alike.