New AI Can Determine Sexual Preference from a Photograph

Stan Ward

September 12, 2017

Elon Musk’s warning that AI poses mankind’s “biggest existential threat” seems appropriate given a recent study reporting that artificial intelligence can now accurately guess whether people are gay or straight based on photos of their faces. The study found that an algorithm deduced the sexuality of people on a dating site with up to 91% accuracy. The possibility that this technology is available and could become widespread is alarming.

The potential for such a dangerous and controversial technological development, if taken to its extreme, adds credence to Musk’s contention that AI poses “vastly more risk (to humanity) than North Korea.” True, he may have been referring to battlefield applications of AI run amok. However, in my judgment, it could apply equally to this report’s findings, given the potential for human suffering.

The Stanford University study found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time. With women, it achieved a 74% success rate. When the software reviewed five images of each person, it was even more successful – 91% of the time with men and 83% with women. The bulk of the study was based on a sample of more than 35,000 facial images that men and women publicly posted on a US dating website.

The study has major implications for society on several levels. It raises the specter of a new tool that could be used to discriminate against gay people. For repressive regimes whose laws permit punishing or incarcerating (or worse) LGBT individuals, the implications are dire.

Let’s consider another extreme. There are millions of devices worldwide capable of discerning people’s identities using facial recognition software, and billions of images stored on both social and governmental platforms. Someone using these for nefarious purposes could commit a colossal invasion of privacy against those who wish to keep their sexual orientation secret.

Interestingly, human judges of faces proved significantly less reliable than the algorithm: people accurately identified sexual orientation just 61% of the time for men and 54% of the time for women.

The study suggests that facial features can indicate sexual orientation because of exposure to certain hormones around the time of birth. This bolsters the contention that people are born gay, rather than choosing to be.

I’ve already pointed out the inherent dangers of being identified as gay in a country that outlaws homosexuality. However, this is not the only danger. Bullying among adolescents could be taken to new extremes of meanness by youngsters scrutinizing the faces of their peers. Spouses, too, could give in to fears and doubts about their partners’ true sexuality, with potentially devastating consequences. Nor is that all…

Artificial intelligence could be expanded to discover links between facial features and a range of other phenomena, such as political views, psychological conditions, or personality. Similar data-driven profiling was used by both political parties in the 2016 US presidential campaign to target and buttonhole prospective voters. Who knows where it will stop? It’s easy to imagine a scenario in which people are arrested before a crime is committed because their face has labeled them a criminal threat!

“AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face recognition company. “The question is as a society, do we want to know?”

Brackeen called the Stanford study “startlingly correct,” and opined that the study was worthwhile in exposing the dangers of AI so that the public, companies, and governments could take precautions.

It seems to me, though, that the genie is already out of the bottle. The dangers are obvious, but the remedies and precautions are too few and too late. What do you think?

Opinions are the writer’s own.

Image credit: By Neil Anton Dumas/