Google's and Facebook's Fraudulent Web Traffic Continues to Plague Advertisers, Other Businesses

Adobe found that about 28% of website traffic likely came from bots and other “non-human signals”




Alexandra Bruell



  • Web traffic is rife with bots and non-human traffic, making it difficult for ad and media businesses to understand who is visiting their sites and why, according to new findings from Adobe.

    In a recent study, Adobe found that about 28% of website traffic showed strong “non-human signals,” leading the company to believe that the traffic came from bots or click farms. The company studied traffic across websites belonging to thousands of clients.

    Adobe is currently working with a handful of clients in the travel, retail and publishing industries to identify how much of their web traffic has non-human characteristics. By weeding out that misleading data, brands can better understand what prompted consumers to follow their ads and ultimately visit their websites and buy their products.





    “It’s really about understanding your traffic at a deeper level. And not just understanding, ‘I got this many hits.’ What do those hits represent? Were they people, malicious bots, good bots?” said Dave Weinstein, director of engineering for Adobe Experience Cloud.

    While hardly the first study of online fraud, Adobe’s findings are one more indication of how the problem has roiled the fast-changing ad, media and digital commerce industries, while prompting marketers to rethink their web efforts.

    Non-human traffic can create an “inflated number that sets false expectations for marketing efforts,” said Mr. Weinstein.

    Marketers often use web traffic as a proxy for how many consumers saw their ads, and some even pay their ad vendors when people see their ads and subsequently visit their website. Knowing how much of that web traffic was non-human could change the way they pay those vendors.

    Advertisers have told Adobe that the ability to break down human and non-human traffic helps them understand which audiences matter “when they’re doing ad buying and trying to do re-marketing efforts, or things like lookalike modeling,” he said. Advertisers use lookalike modeling to reach online users or consumers who share similar characteristics to their specific audiences or customers.

    Ad buyers can also exclude visitors with non-human characteristics from future targeting segments by removing the cookies or unique web IDs that represented those visitors from their audience segments.
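    That exclusion step is, in effect, a set difference over visitor identifiers. A minimal sketch with hypothetical cookie IDs (not Adobe's or any ad platform's actual API):

```python
# Minimal sketch: pruning an audience segment by removing the cookie /
# unique web IDs of visitors flagged as non-human. IDs are hypothetical.
segment = {"cookie_001", "cookie_002", "cookie_003", "cookie_004"}
flagged_non_human = {"cookie_002", "cookie_004"}

# Set difference keeps only IDs never flagged as non-human.
cleaned_segment = segment - flagged_non_human
```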

    In addition to malicious bots, many web visits also come from website “scrapers,” such as search engines, voice assistants or travel aggregators looking for business descriptions or pricing information. Some are also from rivals “scraping” for information so they can undercut the competition on pricing.

    While bots from big search engines and aggregators tend to overtly present themselves as bots, and can easily be discounted from human web traffic, a small percentage of scrapers generate visits even if they’re not intentionally posing as visitors, said Mr. Weinstein.
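    The overt case Mr. Weinstein describes is usually handled by checking the self-declared User-Agent header on each request. A minimal first-pass sketch, with an illustrative (not exhaustive) token list; real bot detection also weighs behavioral signals:

```python
# Minimal sketch: first-pass filtering of self-identifying crawlers by
# User-Agent substring. The token list is illustrative, not exhaustive.
KNOWN_BOT_TOKENS = ("googlebot", "bingbot", "slurp", "crawler", "spider")

def is_declared_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(token in ua for token in KNOWN_BOT_TOKENS)

hits = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X)",
]
# Keep only hits whose User-Agent does not declare itself a bot.
human_hits = [ua for ua in hits if not is_declared_bot(ua)]
```

Covert scrapers that spoof a browser User-Agent slip past this check, which is why Adobe looks for deeper behavioral signals.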

    “We realized that with the growth of things like Alexa and Google Home and other assistants, increasingly more and more traffic is going to be automated in nature,” he said. “In the long term, real humans at real browsers will be a diminishing portion of traffic.”

    While there aren’t any plans to monetize a tool that can analyze non-human web traffic for clients, Adobe eventually could use it to sell something like a “bot score,” said Mr. Weinstein. For now, the company will likely just build the function into its existing analytics products.





    Facebook, faced with the reality that absolutely nobody cares about Facebook, has been found to have been lying about who uses Facebook in order to hold off its spiral down the drain!

    Mike Gamaroff of Sito Mobile: “For Facebook to over-report these metrics is pretty inexcusable.”






    Facebook’s FAKE ad metric problem is becoming Zuckerberg’s headache


    Mark Zuckerberg Photo: Getty Images

    Mark Zuckerberg has a credibility problem.

    The tech mogul’s Facebook just admitted to finding more “bugs” in the way it measures ads — and once again, those bugs benefited Facebook.

    The social-networking giant said Wednesday it has found numerous errors in the ways it calculates how many people view its ads, artificially inflating their perceived value to advertisers and publishers.

    Key metrics that Facebook has exaggerated include the weekly and monthly reach of marketers’ posts, which got inflated by 33 percent and 55 percent, respectively, as the site improperly included repeat visitors in its figures.
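    The inflation described above comes from counting visit events rather than unique people. A toy sketch of the difference, using made-up visitor IDs:

```python
# Toy sketch: counting visit events instead of unique visitors inflates
# "reach" whenever people come back. Visitor IDs are made up.
weekly_visits = ["u1", "u2", "u1", "u3", "u2", "u1"]  # repeat visitors

raw_reach = len(weekly_visits)             # counts every visit event
deduped_reach = len(set(weekly_visits))    # counts unique people
inflation = raw_reach / deduped_reach - 1  # fraction overstated
```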

    Elsewhere, Facebook admitted to exaggerating the number of full views that video ads received, as well as time spent by users reading fast-loading “Instant Articles” for publishers including The Post and the Wall Street Journal, both of which are owned by News Corp.

    Facebook insisted that the messed-up metrics — which followed the company’s admission in September that it had inflated its reporting of video viewing times to advertisers by as much as 80 percent — didn’t affect billing to publishers and advertisers.

    Nevertheless, industry insiders said the inflated figures likely swayed ad-buying decisions as Facebook competed for ad dollars with everybody from Google’s search engine to struggling newspaper sites.

    “It’s not difficult to measure views. It’s not difficult to measure engagement. It’s not difficult to measure any ad metric,” said Mike Gamaroff of Sito Mobile, an ad-targeting firm. “For Facebook to over-report these metrics is pretty inexcusable.”

    The disclosures came as Chief Executive Zuckerberg is taking heat for allowing a barrage of fake news stories to propagate across its site, a policy some critics say may have tipped the outcome of the presidential election.

    Earlier this month, Facebook shares got slammed after it told investors it expects a “meaningful” slowdown in ad-revenue growth next year as it seeks to avoid saturating users’ News Feeds with marketing posts.

    To fix the ad-metric mess, Facebook said Wednesday it will begin allowing third-party firms like comScore and Moat to vet its viewability data for display-ad campaigns, in addition to video campaigns.

    The tech giant also said it is working with TV-ratings firm Nielsen to count video views, and that it’s forming a “Measurement Council” of marketers and ad-agency execs to monitor its metrics.

    Still, Facebook stopped short of putting all of its ad measurements up for third-party verification — a stubborn refusal that continues to undermine trust in its ad data, critics say.

    “It certainly doesn’t look good,” said Mitchell Reichgut of Jun Group, a New York ad firm. “Online advertising has a history of opaque reporting, and this doesn’t help.”

    Facebook, which has more than 4 million advertisers, has been growing its ad revenue this year at more than three times the rate of the overall online ad market, according to Cantor Fitzgerald.





    Nov. 16, 2016


    Federal Trade Commission
    600 Pennsylvania Avenue, NW
    Washington, DC 20580
    Telephone: (202) 326-2222


    Advertisers in the United States have had billions of dollars squandered by the lies, misrepresentations, falsehoods and manipulations in the collusion between Google, Facebook and Twitter to rig the false impression of advertising value.

    Using fake users called “bots”, falsified “impression reports”, rigged metrics and forensically confirmed lies in their marketing, these three companies deluded customers and users in violation of ethics and laws.

    Facebook is a dead and/or dying, irrelevant platform which was converted to a political manipulation tool for Silicon Valley billionaires. As the public rejected Facebook in a massive departure of users, Facebook turned to criminality in order to survive.

    This is a disservice to its customers, to American users and to the U.S. Treasury, which has provided Facebook with billions of dollars derived from those taxpayers….

    Reporter Samantha Masunaga



    Facebook’s CIA-like Political Control System Bosses Get Pissed At Those Who Vary From The MK Ultra Plan


    By Martin Ranshoff


    Facebook exists to spy on the public and control their thoughts by controlling every word, sentence and phrase that the public is allowed to see on the internet.

    When Facebook bosses realized that people were adding their own thoughts to web postings, Facebook flipped out.

    A number of people were posting headline comments that were unfavorable to Barack Obama and Hillary Clinton, Facebook’s political monkeys.

    Facebook thereby created a fake issue about writers being upset that people had opinions in order to control all internet opinions. The “Altering Headlines” issue has nothing to do with “publishers being upset” and everything to do with Hillary Clinton and the DNC being upset that Facebook’s media control was getting real opinions attached to it!

    Facebook exists to rig the news to put politicians in office who will give cash to Zuckerberg, Plouffe and their buddies.

    Facebook Bars Advertisers from Altering News Headlines

    Social network’s change came after WSJ flagged examples of the practice

    By Jack Marshall


    Facebook on Thursday said it will stop allowing advertisers who promote news articles on the site to modify the headlines and descriptions that appear with them, a practice that some publishers say misrepresents their work.

    The social network’s change came after The Wall Street Journal contacted the company, pointing to examples of such ads.

    In June, Facebook said it would prevent its users from modifying news article headlines, descriptions and images when posting links, as part of a broader push to crack down on the spread of false or misleading information.

    But the change didn’t apply to paying advertisers, who continued to have the ability to alter these “link previews” through Facebook’s ad platform.

    The Journal found examples of how marketers had used the tactic to subtly reposition press coverage about their companies or products. In many cases the changes didn’t appear drastic, and the advertisers say they were meant to enhance clarity, not mislead readers.

    But the changes were enough to make some publishers uncomfortable. It wasn’t clear how widespread the practice was.


    A Facebook ad for Casper, including an edited link to a Business Insider article. Photo: Facebook

    A recent ad for mattress company Casper linked to a Business Insider article using the headline “How Casper is Revolutionizing the Way We Sleep.” But the Business Insider article the ad linked to carried the headline “I bought a bed from the Target-backed ‘Warby Parker of mattresses’ and I’ll never buy one in stores again.” It didn’t say anything about Casper “revolutionizing” sleep.


    Similarly, BuzzFeed published an article in 2016 about a toothbrush called Quip, with the headline “I Tried The Hipster Toothbrush That’s All Over Facebook And TBH I Loved It”.


    A Facebook ad for Quip, including an edited link to a BuzzFeed article. Photo: Facebook

    Quip subsequently purchased Facebook ads linking to the BuzzFeed article, but edited the headline to remove the words “hipster” and “TBH”, which is an acronym for “to be honest.”


    Some publishers say they’re worried their content is being presented to consumers in ways they have no knowledge of and no control over.

    “Our audience trusts and values our product reviews and editorial for their authenticity, so anything that violates the integrity of that content is concerning to us,” a BuzzFeed spokesman said, adding that the company’s legal department had contacted Quip to ask it to refrain from editing its headlines in Facebook ads in future.

    Other advertisers besides Casper have also posted edited links to Business Insider articles in their Facebook ads in recent weeks. Business Insider declined to comment.

    In a statement, a Facebook spokesman said advertisers will no longer be able to modify news headlines in this way. “While they should be able to edit links pointing to their own material, they shouldn’t be able to edit headlines on stories they didn’t create,” the spokesman said. “Advertisers will still be able to edit headlines in links when they point to their own content, and we have strict policies in place that prohibit misleading ads.”

    Publishers will also continue to be able to modify the social headlines for their own articles.

    For advertisers, editing link previews enabled them to position editorial coverage about their companies in the best light possible. It also allowed them to carefully optimize the wording of headlines for maximum impact as they typically would with most advertising they purchase.

    A Casper spokeswoman said the company decided to alter the headline in the link preview because Business Insider’s version didn’t explicitly mention the Casper brand, and because it was truncated when it appeared in the news feed because of Facebook’s character limits.

    Shane Pittson, a growth marketer at Quip, said the company edited the BuzzFeed link preview for similar reasons. Without the ability to edit link previews it would be less willing to spend money on ads promoting publishers’ content because those posts are often “unusable in their natural form,” Mr. Pittson said.




    Facebook claims a third more users in the US than people who exist

    Are we looking at an advertising house of cards?

    By Kieren McCarthy in San Francisco


    Facebook promises advertisers access to more US customers than actually exist.

    That's according to an investment analyst who has long held that Facebook is misleading the market about its actual digital reach, and recommends a "sell" on the social media giant's stock.

    Facebook has an extensive and sophisticated ad-buying system that assures potential advertisers it can reach no fewer than 41 million of a core target group of 18 to 24-year-olds in the United States.

    The only problem, analyst Brian Wieser of Pivotal Research Group pointed out in a note to customers, is that there are only 31 million of them that actually exist in the US, according to the official census data. The same gap in reality also holds for other groups, including the next most-targeted group of 25 to 34-year-olds.
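    The size of that gap is simple arithmetic on the two figures reported above:

```python
# The gap between Facebook's claimed 18-to-24 US audience and the
# census count of that age group, per Wieser's note.
claimed_reach = 41_000_000  # Facebook's stated reachable audience
census_count = 31_000_000   # US census estimate for the same group

excess = claimed_reach - census_count  # people claimed beyond the census
overstatement = excess / census_count  # roughly 0.32, i.e. ~32 per cent
```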

    There are very few independent checks on the accuracy of information and data provided by Facebook, Google and other digital companies that monetize their huge user bases by offering targeted advertising.

    Facebook is also renowned in the ad-buying business for being a black hole, particularly when it comes to things like paying for Facebook to get Facebook users to "like" your Facebook page – a circular money flow that boggles the mind.

    But for those hoping to target millennials, the company's ubiquity is a good solution to a hard-to-reach group that does not use traditional media such as TV or radio anywhere near as much. Claiming more users than actually exist is a stretch even by Facebook's standard, however.

    By design

    Of course, Zuckerberg's Monster has an answer for why this isn't a problem. The audiences it offers to advertisers are only estimates that do not match real-world figures "by design" according to the company, because they account for a "number of factors, including Facebook user behaviors, user demographics, and location data from devices."

    Say that again?

    The estimates "are designed to estimate how many people in a given area are eligible to see an ad a business might run," the company said in a statement. "They are not designed to match population or census estimates."

    How do you say 'whoops!' in Russian?

    Facebook admitted to congressional investigators it sold web ads during the US presidential election to a Russian company seeking to target American voters, it was reported Wednesday.

    What does that mean in plain English? One of two things. Either Facebook knows better than the US government who is actually living in the country (tens of millions of illegal immigrants and tourists maybe?), or the website is flooded with fake profiles. We're willing to bet on the latter.

    Despite recent often-criticized efforts to get its users to supply their real personal data, Facebook is still almost entirely reliant on users to self-report their age, sex, location and other information. There is also nothing to stop anyone from creating fake personas or profiles. So the question is: how many profiles are fake? It could be as high as half of them.

    As has been noted repeatedly in recent years, on social media you may be spending much of your time and your advertising money talking to computer-generated bots that exist solely to game systems and boost specific posts or articles, as we saw with the fake news controversy earlier this year.

    The problem is that Facebook has a massive financial incentive to present the highest possible figures. While Facebook does not charge per estimated audience but by user view, every ad view and click equates to real dollars.

    It is a fair bet that when a company claims it has 32 per cent more users than actual live people, a big chunk of your advertising dollars is going straight into the platform's pockets with nothing in return.










    FACEBOOK is now determining if you are a “fag” just by your photo.


    Remember how Obama and Hillary were huge on transgender bathrooms for mentally ill men who want to cut their penises off? It turns out only a microscopic number of Americans are into that and they mostly work at Google, Facebook and in Hollywood.


    Facebook’s AI photo analysis software can now find out if you are a “butt surfer” and try to get you to join the DNC! The American Democratic Party likes to be known as the party-of-stick-it-in-anywhere and now Facebook can hunt-down all of the homosexuals just by scanning all of the selfies on the internet! Gays look faggy according to expert Mark Zuckerberg, who has sold hundreds of millions of dollars of face scanning services to the CIA, NSA and the DNC!


    Row over AI that 'identifies gay faces'


    Image caption (Stanford University): The study created composite faces judged most and least likely to belong to homosexuals


    A facial recognition experiment that claims to be able to distinguish between gay and heterosexual people has sparked a row between its creators and two leading LGBT rights groups.

    The Stanford University study claims its software recognises facial features relating to sexual orientation that are not perceived by human observers.

    The work has been accused of being "dangerous" and "junk science".

    But the scientists involved say these are "knee-jerk" reactions.

    Details of the peer-reviewed project are due to be published in the Journal of Personality and Social Psychology.

    Narrow jaws

    For their study, the researchers trained an algorithm using the photos of more than 14,000 white Americans taken from a dating website.

    They used between one and five of each person's pictures and took people's sexuality as self-reported on the dating site.

    The researchers said the resulting software appeared to be able to distinguish between gay and heterosexual men and women.

    In one test, when the algorithm was presented with two photos where one picture was definitely of a gay man and the other heterosexual, it was able to determine which was which 81% of the time.

    With women, the figure was 71%.

    "Gay faces tended to be gender atypical," the researchers said. "Gay men had narrower jaws and longer noses, while lesbians had larger jaws."

    But their software did not perform as well in other situations, including a test in which it was given photos of 70 gay men and 930 heterosexual men.

    When asked to pick 100 men "most likely to be gay" it missed 23 of them.
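    Taken at face value, those numbers imply that 47 of the 100 picks were correct. A quick sketch of the arithmetic (a reading of the reported figures, not the study's own code):

```python
# Arithmetic behind the 70-in-1,000 test described above: 70 gay men in
# the pool, the algorithm picks its 100 "most likely" candidates, and 23
# of the 70 are missed by that top-100 list.
actual_gay = 70
picked = 100
missed = 23  # gay men not among the top-100 picks

true_positives = actual_gay - missed  # gay men correctly in the top 100
precision = true_positives / picked   # fraction of picks that were right
```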

    In its summary of the study, the Economist - which was first to report the research - pointed to several "limitations" including a concentration on white Americans and the use of dating site pictures, which were "likely to be particularly revealing of sexual orientation".

    'Reckless findings'

    On Friday, two US-based LGBT-focused civil rights groups issued a joint press release attacking the study in harsh terms.

    "This research isn't science or news, but it's a description of beauty standards on dating sites that ignores huge segments of the LGBTQ (lesbian, gay, bisexual, transgender and queer/questioning) community, including people of colour, transgender people, older individuals, and other LGBTQ people who don't want to post photos on dating sites," said Jim Halloran, chief digital officer of Glaad, a media-monitoring body.

    "These reckless findings could serve as a weapon to harm both heterosexuals who are inaccurately outed, as well as gay and lesbian people who are in situations where coming out is dangerous."


    Image caption: Campaigners raised concerns about what would happen if surveillance tech tried to make use of the study


    The Human Rights Campaign added that it had warned the university of its concerns months ago.

    "Stanford should distance itself from such junk science rather than lending its name and credibility to research that is dangerously flawed and leaves the world - and this case, millions of people's lives - worse and less safe than before," said its director of research, Ashland Johnson.

    The two researchers involved - Prof Michal Kosinski and Yilun Wang - have since responded in turn, accusing their critics of "premature judgement".

    "Our findings could be wrong... however, scientific findings can only be debunked by scientific data and replication, not by well-meaning lawyers and communication officers lacking scientific training," they wrote.

    "However, if our results are correct, Glaad and HRC representatives' knee-jerk dismissal of the scientific findings puts at risk the very people for whom their organisations strive to advocate."

    'Treat cautiously'

    Previous research that linked facial features to personality traits has become unstuck when follow-up studies failed to replicate the findings. This includes the claim that a face's shape could be linked to aggression.

    One independent expert, who spoke to the BBC, said he had concerns about the claim that the software involved in the latest study picked up on "subtle" features shaped by hormones the subjects had been exposed to in the womb.

    "These 'subtle' differences could be a consequence of gay and straight people choosing to portray themselves in systematically different ways, rather than differences in facial appearance itself," said Prof Benedict Jones, who runs the Face Research Lab at the University of Glasgow.

    It was also important, he said, for the technical details of the analysis algorithm to be published to see if they stood up to informed criticism.

    "New discoveries need to be treated cautiously until the wider scientific community - and public - have had an opportunity to assess and digest their strengths and weaknesses," he said.


    New AI can guess whether you're 'faggy' or straight from a photograph

    An algorithm deduced the sexuality of people on a dating site with up to 91% accuracy, raising tricky ethical questions

    An illustrated depiction of facial analysis technology similar to that used in the experiment. Illustration: Alamy


    Sam Levin in San Francisco




    Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.

    The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.

    The machine intelligence tested in the research, which was published in the Journal of Personality and Social Psychology and first reported in the Economist, was based on a sample of more than 35,000 facial images that men and women publicly posted on a US dating website. The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks”, meaning a sophisticated mathematical system that learns to analyze visuals based on a large dataset.

    The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.

    Human judges performed much worse than the algorithm, accurately identifying orientation only 61% of the time for men and 54% for women. When the software reviewed five images per person, it was even more successful – 91% of the time with men and 83% with women. Broadly, that means “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”, the authors wrote.
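    The paired-photo test described above amounts to scoring each image and treating the higher-scoring one as the model's pick. A toy sketch of that evaluation, with a stand-in scorer and made-up data rather than the study's neural network:

```python
# Toy sketch of pairwise evaluation: each pair holds one gay and one
# straight photo; the model's pick is the higher-scoring photo, and the
# pair counts as correct when that pick is the gay one. score() is a
# stand-in for the study's classifier; the data is invented.
def score(photo: dict) -> float:
    return photo["model_score"]

pairs = [
    ({"model_score": 0.9, "gay": True}, {"model_score": 0.4, "gay": False}),
    ({"model_score": 0.3, "gay": True}, {"model_score": 0.6, "gay": False}),
]

correct = sum(1 for a, b in pairs if (a if score(a) > score(b) else b)["gay"])
pairwise_accuracy = correct / len(pairs)
```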


    The paper suggested that the findings provide “strong support” for the theory that sexual orientation stems from exposure to certain hormones before birth, meaning people are born gay and being queer is not a choice. The machine’s lower success rate for women also could support the notion that female sexual orientation is more fluid.

    While the findings have clear limits when it comes to gender and sexuality – people of color were not included in the study, and there was no consideration of transgender or bisexual people – the implications for artificial intelligence (AI) are vast and alarming. With billions of facial images of people stored on social media sites and in government databases, the researchers suggested that public data could be used to detect people’s sexual orientation without their consent.

    It’s easy to imagine spouses using the technology on partners they suspect are closeted, or teenagers using the algorithm on themselves or their peers. More frighteningly, governments that continue to prosecute LGBT people could hypothetically use the technology to out and target populations. That means building this kind of software and publicizing it is itself controversial given concerns that it could encourage harmful applications.

    But the authors argued that the technology already exists, and its capabilities are important to expose so that governments and companies can proactively consider privacy risks and the need for safeguards and regulations.

    “It’s certainly unsettling. Like any new tool, if it gets into the wrong hands, it can be used for ill purposes,” said Nick Rule, an associate professor of psychology at the University of Toronto, who has published research on the science of gaydar. “If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad.”

    Rule argued it was still important to develop and test this technology: “What the authors have done here is to make a very bold statement about how powerful this can be. Now we know that we need protections.”

    Kosinski was not available for an interview, according to a Stanford spokesperson. The professor is known for his work with Cambridge University on psychometric profiling, including using Facebook data to make conclusions about personality. Donald Trump’s campaign and Brexit supporters deployed similar tools to target voters, raising concerns about the expanding use of personal data in elections.

    In the Stanford study, the authors also noted that artificial intelligence could be used to explore links between facial features and a range of other phenomena, such as political views, psychological conditions or personality.

    This type of research further raises concerns about the potential for scenarios like the science-fiction movie Minority Report, in which people can be arrested based solely on the prediction that they will commit a crime.

    “AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face recognition company. “The question is as a society, do we want to know?”

    Brackeen, who said the Stanford data on sexual orientation was “startlingly correct”, said there needs to be an increased focus on privacy and tools to prevent the misuse of machine learning as it becomes more widespread and advanced.

    Rule speculated about AI being used to actively discriminate against people based on a machine’s interpretation of their faces: “We should all be collectively concerned.”



    Face-reading AI will be able to detect your politics and IQ, professor says

    Professor whose study suggested technology can detect whether a person is gay or straight says programs will soon reveal traits such as criminal predisposition

     Your photo could soon reveal your political views, says a Stanford professor. Photograph: Frank Baron for the Guardian












    Sam Levin in San Francisco




    Voters have a right to keep their political beliefs private. But according to some researchers, it won’t be long before a computer program can accurately guess whether people are liberal or conservative in an instant. All that will be needed are photos of their faces.

    Michal Kosinski – the Stanford University professor who went viral last week for research suggesting that artificial intelligence (AI) can detect whether people are gay or straight based on photos – said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition.

    Using photos, AI will be able to identify people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.


    Kosinski outlined the extraordinary and sometimes disturbing applications of facial detection technology that he expects to see in the near future, raising complex ethical questions about the erosion of privacy and the possible misuse of AI to target vulnerable people.

    “The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” he said.

    Faces contain a significant amount of information, and using large datasets of photos, sophisticated computer programs can uncover trends and learn how to distinguish key traits with a high rate of accuracy. With Kosinski’s “gaydar” AI, an algorithm used online dating photos to create a program that could correctly identify sexual orientation 91% of the time with men and 83% with women, just by reviewing a handful of photos.



    Kosinski’s research is highly controversial, and faced a huge backlash from LGBT rights groups, which argued that the AI was flawed and that anti-LGBT governments could use this type of software to out gay people and persecute them. Kosinski and other researchers, however, have argued that powerful governments and corporations already possess these technological capabilities and that it is vital to expose possible dangers in an effort to push for privacy protections and regulatory safeguards, which have not kept pace with AI.


    Kosinski, an assistant professor of organizational behavior, said he was studying links between facial features and political preferences, with preliminary results showing that AI is effective at guessing people’s ideologies based on their faces.

    This is probably because political views appear to be heritable, as research has shown, he said. That means political leanings are possibly linked to genetics or developmental factors, which could result in detectable facial differences.

    Kosinski said previous studies have found that conservative politicians tend to be more attractive than liberals, possibly because good-looking people have more advantages and an easier time getting ahead in life.














    Michal Kosinski. Photograph: Lauren Bamford


    Kosinski said the AI would perform best for people who are far to the right or left and would be less effective for the large population of voters in the middle. “A high conservative score … would be a very reliable prediction that this guy is conservative.”



    Kosinski is also known for his controversial work on psychometric profiling, including using Facebook data to draw inferences about personality. The data firm Cambridge Analytica has used similar tools to target voters in support of Donald Trump’s campaign, sparking debate about the use of personal voter information in campaigns.


    Facial recognition may also be used to make inferences about IQ, said Kosinski, suggesting a future in which schools could use the results of facial scans when considering prospective students. This application raises a host of ethical questions, particularly if the AI is purporting to reveal whether certain children are genetically more intelligent, he said: “We should be thinking about what to do to make sure we don’t end up in a world where better genes means a better life.”

    Some of Kosinski’s suggestions conjure up the 2002 science-fiction film Minority Report, in which police arrest people before they have committed crimes based on predictions of future murders. The professor argued that certain areas of society already function in a similar way.

    He cited school counselors intervening when they observe children who appear to exhibit aggressive behavior. If algorithms could be used to accurately predict which students need help and early support, that could be beneficial, he said. “The technologies sound very dangerous and scary on the surface, but if used properly or ethically, they can really improve our existence.”

    There are, however, growing concerns that AI and facial recognition technologies are actually relying on biased data and algorithms and could cause great harm. It is particularly alarming in the context of criminal justice, where machines could make decisions about people’s lives – such as the length of a prison sentence or whether to release someone on bail – based on biased data from a court and policing system that is racially prejudiced at every step.

    Kosinski predicted that with a large volume of facial images of an individual, an algorithm could easily detect if that person is a psychopath or has high criminal tendencies. He said this was particularly concerning given that a propensity for crime does not translate to criminal actions: “Even people highly disposed to committing a crime are very unlikely to commit a crime.”



    He also cited an example referenced in the Economist – which first reported the sexual orientation study – that nightclubs and sports stadiums could face pressure to scan people’s faces before they enter to detect possible threats of violence.


    Kosinski noted that in some ways, this wasn’t much different from human security guards making subjective decisions about people they deem too dangerous-looking to enter.

    The law generally considers people’s faces to be “public information”, said Thomas Keenan, professor of environmental design and computer science at the University of Calgary, noting that regulations have not caught up with technology: no law establishes when the use of someone’s face to produce new information rises to the level of privacy invasion.

    Keenan said it might take a tragedy to spark reforms, such as a gay youth being beaten to death because bullies used an algorithm to out him: “Now, you’re putting people’s lives at risk.”

    Even with AI that makes highly accurate predictions, there is also still a percentage of predictions that will be incorrect.

    “You’re going down a very slippery slope,” said Keenan, “if one in 20 or one in a hundred times … you’re going to be dead wrong.”
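    Keenan’s arithmetic can be made concrete with a hypothetical back-of-envelope sketch. The population size and accuracy figures below are illustrative assumptions, not numbers from the article; the point is only that even a highly accurate classifier, applied at population scale, is wrong about a large absolute number of people.

```python
# Hypothetical illustration of Keenan's point (population size and
# accuracy rates are assumptions, not figures from the article):
# a classifier that is almost always right still misclassifies
# many people when applied to millions of them.

def wrong_predictions(population: int, accuracy: float) -> int:
    """People misclassified if each prediction is independently
    correct with probability `accuracy`."""
    return round(population * (1 - accuracy))

# "One in 20" wrong (95% accurate), across a city of 1 million people:
print(wrong_predictions(1_000_000, 0.95))  # 50000 misclassified

# Even "one in a hundred" wrong (99% accurate):
print(wrong_predictions(1_000_000, 0.99))  # 10000 misclassified
```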


    Facebook Helps Any Government Violate The Rights Of Citizens and Spy On Them For Cash

    Facebook seeks ad profits as it ruins democracy around the world
    Facebook Navigates an Internet Fractured by Governmental Controls

    How Facebook Is Changing Your Internet

    Video: Behind the scenes, Facebook is involved in high-stakes diplomatic battles across the globe that have begun fragmenting the internet itself. By Jonah M. Kessel and Paul Mozur, Sept. 17, 2017. Photo by Albert Gea/Reuters.

    On a muggy, late spring evening, Tuan Pham awoke to the police storming his house in Hanoi, Vietnam.
    They marched him to a police station and made their demand: Hand over your Facebook password. Mr. Tuan, a computer engineer, had recently written a poem on the social network called “Mother’s Lullaby,” which criticized how the communist country was run.
    One line read, “One century has passed, we are still poor and hungry, do you ask why?”
    Mr. Tuan’s arrest came just weeks after Facebook offered a major olive branch to Vietnam’s government. Facebook’s head of global policy management, Monika Bickert, met with a top Vietnamese official in April and pledged to remove information from the social network that violated the country’s laws.
    While Facebook said its policies in Vietnam have not changed, and that it has a consistent process for governments to report illegal content, the Vietnamese government was specific. The social network, officials have said, had agreed to help create a new communications channel with the government to prioritize Hanoi’s requests and remove what the regime considered inaccurate posts about senior leaders.

    Vietnam’s government has said Facebook agreed to help create a new communications channel with the government. Credit Na Son Nguyen/Associated Press
    Populous, developing countries like Vietnam are where the company is looking to add its next billion customers — and to bolster its ad business. Facebook’s promise to Vietnam helped the social media giant placate a government that had called on local companies not to advertise on foreign sites like Facebook, and it remains a major marketing channel for businesses there.
    The diplomatic game that unfolded in Vietnam has become increasingly common for Facebook. The internet is Balkanizing, and the world’s largest tech companies have had to dispatch envoys to, in effect, contain the damage such divisions pose to their ambitions.
    The internet has long had a reputation of being an anything-goes place that only a few nations have tried to tame — China in particular. But in recent years, events as varied as the Arab Spring, elections in France and confusion in Indonesia over the religion of the country’s president have awakened governments to how they have lost some control over online speech, commerce and politics on their home turf.
    Even in the United States, tech giants are facing heightened scrutiny from the government. Facebook recently cooperated with investigators for Robert S. Mueller III, the special counsel investigating Russian interference in the American presidential election. In recent weeks, politicians on the left and the right have also spoken out about the excess power of America’s largest tech companies.
    As nations try to grab back power online, a clash is brewing between governments and companies. Some of the biggest companies in the world — Google, Apple, Facebook, Amazon and Alibaba among them — are finding they need to play by an entirely new set of rules on the once-anarchic internet.
    And it’s not just one new set of rules. According to a review by The New York Times, more than 50 countries have passed laws over the last five years to gain greater control over how their people use the web.
    “Ultimately, it’s a grand power struggle,” said David Reed, an early pioneer of the internet and a former professor at the M.I.T. Media Lab. “Governments started waking up as soon as a significant part of their powers of communication of any sort started being invaded by companies.”
    Facebook encapsulates the reasons for the internet’s fragmentation — and increasingly, its consequences.
    Global Reach
    Facebook has grown to more than 1.3 billion daily users worldwide. Source: company reports, as of the second quarter of 2017.
    The company has become so far-reaching that more than two billion people — about a quarter of the world’s population — now use Facebook each month. Internet users (excluding China) spend one in five minutes online within the Facebook universe, according to comScore, a research firm. And Mark Zuckerberg, Facebook’s chief executive, wants that dominance to grow.
    But politicians have struck back. China, which blocked Facebook in 2009, has resisted Mr. Zuckerberg’s efforts to get the social network back into the country. In Europe, officials have repudiated Facebook’s attempts to gather data from its messaging apps and third-party websites.
    The Silicon Valley giant’s tussle with the fracturing internet is poised to escalate. Facebook has now reached almost everyone who already has some form of internet access, excluding China. Capturing those last users — including in Asian nations like Vietnam and African countries like Kenya — may involve more government roadblocks.
    “We understand that and accept that our ideals are not everyone’s,” said Elliot Schrage, Facebook’s vice president of communications and public policy. “But when you look at the data and truly listen to the people around the world who rely on our service, it’s clear that we do a much better job of bringing people together than polarizing them.”
    Friending China
    By mid-2016, a yearslong campaign by Facebook to get into China — the world’s biggest internet market — appeared to be sputtering.

    Facebook has tried various methods to get back into China, where the social network has been blocked since 2009. Credit Ng Han Guan/Associated Press
    Mr. Zuckerberg had wined and dined Chinese politicians, publicly showed off his newly acquired Chinese-language skills — a moment that set the internet abuzz — and talked with a potential Chinese partner about pushing the social network into the market, according to a person familiar with the talks who declined to be named because the discussions were confidential.
    At a White House dinner in 2015, Mr. Zuckerberg had even asked the Chinese president, Xi Jinping, whether Mr. Xi might offer a Chinese name for his soon-to-be-born first child — usually a privilege reserved for older relatives, or sometimes a fortune teller. Mr. Xi declined, according to a person briefed on the matter.
    But all those efforts flopped, foiling Facebook’s attempts to crack one of the most isolated pockets of the internet.
    China has blocked Facebook and Twitter since mid-2009, after an outbreak of ethnic rioting in the western part of the country. In recent years, similar barriers have gone up for Google services and other apps, like Line and Instagram.
    Even if Facebook found a way to enter China now, it would not guarantee financial success. Today, the overwhelming majority of Chinese citizens use local online services like Qihoo 360 and Sina Weibo. No American-made apps rank among China’s 50 most popular services, according to SAMPi, a market research firm.
    Chinese tech officials said that although many in the government are open to the idea of Facebook releasing products in China, there is resistance among leaders in the standing committee of the country’s Politburo, its top decision-making body.
    In 2016, Facebook took tentative steps toward embracing China’s censorship policies. That summer, Facebook developed a tool that could suppress posts in certain geographic areas, The Times reported last year. The idea was that it would help the company get into China by enabling Facebook or a local partner to censor content according to Beijing’s demands. The tool was not deployed.
    In another push last year, Mr. Zuckerberg spent time at a conference in Beijing that is a standard stop on the China government relations tour. Using his characteristic brand of diplomacy — the Facebook status update — he posted a photo of himself running in Tiananmen Square on a dangerously smoggy day. The photo drew derision on Twitter, and concerns among Chinese users about Mr. Zuckerberg’s health.


    Mark Zuckerberg, Facebook’s chief executive, on a run in Beijing in 2016. The outing set the internet abuzz as “the smog jog.” Credit Facebook/Agence France-Presse — Getty Images
    For all the courtship, things never quite worked out.
    “There’s an interest on both sides of the dance, so some kind of product can be introduced,” said Kai-Fu Lee, the former head of Google in China who now runs a venture-capital firm in Beijing. “But what Facebook wants is impossible, and what they can have may not be very meaningful.”
    This spring, Facebook tried a different tactic: testing the waters in China without telling anyone. The company authorized the release of a photo-sharing app there that does not bear its name, and experimented by linking it to a Chinese social network called WeChat.
    One factor driving Mr. Zuckerberg may be the brisk ad business that Facebook does from its Hong Kong offices, where the company helps Chinese companies — and the government’s own propaganda organs — spread their messages. In fact, the scale of the Chinese government’s use of Facebook to communicate abroad offers a notable sign of Beijing’s understanding of Facebook’s power to mold public opinion.
    Chinese state media outlets have used ad buys to spread propaganda around key diplomatic events. China’s stodgy state-run television station and the party’s mouthpiece newspaper each have far more Facebook “likes” than popular Western news brands like CNN and Fox News, a likely indication of big ad buys.
    To attract more ad spending, Facebook set up one page to show China’s state broadcaster, CCTV, how to promote on the platform, according to a person familiar with the matter. Dedicated to Mr. Xi’s international trips, the page is still regularly updated by CCTV, and has 2.7 million likes. During the 2015 trip when Mr. Xi met Mr. Zuckerberg, CCTV used the channel to spread positive stories. One post was titled “Xi’s UN address wins warm applause.”

    At a White House dinner in 2015, Mr. Zuckerberg asked the Chinese president, Xi Jinping, whether Mr. Xi might offer a Chinese name for his soon-to-be-born first child — usually a privilege reserved for older relatives, or sometimes a fortune teller. Credit Charles Ommanney/Facebook, via Associated Press
    Fittingly, Mr. Zuckerberg’s eagerness and China’s reluctance can be tracked on Facebook.
    During Mr. Xi’s 2015 trip to America, Mr. Zuckerberg posted about how the visit offered him his first chance to speak a foreign language with a world leader. The post got more than a half million likes, including from Chinese state media (despite the national ban). But on Mr. Xi’s propaganda page, Mr. Zuckerberg got only one mention — in a list of the many tech executives who met the Chinese president.
    Europe’s Privacy Pushback
    Last summer, emails winged back and forth between members of Facebook’s global policy team. They were finalizing plans, more than two years in the making, for WhatsApp, the messaging app Facebook had bought in 2014, to start sharing data on its one billion users with its new parent company. The company planned to use the data to tailor ads on Facebook’s other services and to stop spam on WhatsApp.
    A big issue: how to win over wary regulators around the world.
    Despite all that planning, Facebook was hit by a major backlash. A month after the new data-sharing deal started in August 2016, German privacy officials ordered WhatsApp to stop passing data on its 36 million local users to Facebook, claiming people did not have enough say over how it would be used. The British privacy watchdog soon followed.
    By late October, all 28 of Europe’s national data-protection authorities jointly called on Facebook to stop the practice. Facebook quietly mothballed its plans in Europe. It has continued to collect people’s information elsewhere, including the United States.
    “There’s a growing awareness that people’s data is controlled by large American actors,” said Isabelle Falque-Pierrotin, France’s privacy regulator. “These actors now know that times have changed.”
    Facebook’s retreat shows how Europe is effectively employing regulations — including tough privacy rules — to control how parts of the internet are run.

    Facebook’s international headquarters in Dublin. The company has faced regulatory pushback in Europe. Credit Aidan Crawley/Bloomberg
    The goal of European regulators, officials said, is to give users greater control over the data from social media posts, online searches and purchases that Facebook and other tech giants rely on to monitor our online habits.
    As a tech company whose ad business requires harvesting digital information, Facebook has often underestimated the deep emotions that European officials and citizens attach to the collection of such details. Those feelings date back to the Cold War, when many Europeans were routinely monitored by secret police.
    Now, regulators from Colombia to Japan are often mimicking Europe’s stance on digital privacy. “It’s only natural European regulators would be at the forefront,” said Brad Smith, Microsoft’s president and chief legal officer. “It reflects the importance they’ve attached to the privacy agenda.”
    In interviews, Facebook denied it has played fast and loose with users’ online information and said it complies with national rules wherever it operates. It questioned whether Europe’s position has been effective in protecting individuals’ privacy at a time when the region continues to fall behind the United States and China in all things digital.
    Still, the company said it respected Europe’s stance on data protection, particularly in Germany, where many citizens have long memories of government surveillance.
    “There’s no doubt the German government is a strong voice inside the European community,” said Richard Allen, Facebook’s head of public policy in Europe. “We find their directness pretty helpful.”
    Europe has the law on its side when dictating global privacy. Facebook’s non-North American users, roughly 1.8 billion people, are primarily overseen by Ireland’s privacy regulator because the company’s international headquarters is in Dublin, mostly for tax reasons. In 2012, Facebook was forced to alter its global privacy settings — including those in the United States — after Ireland’s data protection watchdog found problems while auditing the company’s operations there.
    Three years later, Europe’s highest court also threw out a 15-year-old data-sharing agreement between the region and the United States following a complaint that Facebook had not sufficiently protected Europeans’ data when it was transferred across the Atlantic. The company denies any wrongdoing.

    A Facebook event in Berlin last year. Europe, where Cold War-era suspicions over monitoring still linger, is exporting its views of privacy to other parts of the world. Credit Tobias Schwarz/Agence France-Presse — Getty Images
    And on Sept. 12, Spain’s privacy agency fined the company 1.2 million euros for not giving people sufficient control over their data when Facebook collected it from third-party websites. Watchdogs in Germany, the Netherlands and elsewhere are conducting similar investigations. Facebook is appealing the Spanish ruling.
    “Facebook simply can’t stick to a one-size-fits-all product around the world,” said Max Schrems, an Austrian lawyer who became a Facebook critic after filing the case that eventually overturned the 15-year-old data deal.
    Potentially more worrying for Facebook is how Europe’s view of privacy is being exported. Countries from Brazil to Malaysia, which are crucial to Facebook’s growth, have incorporated many of Europe’s tough privacy rules into their legislation.
    “We regard the European directives as best practice,” said Pansy Tlakula, chairwoman of South Africa’s Information Regulator, the country’s data protection agency. South Africa has gone so far as to copy whole sections, almost word-for-word, from Europe’s rule book.
    The Play for Kenya
    Blocked in China and troubled by regulators in Europe, Facebook is trying to become “the internet” in Africa. By helping people get online, subsidizing access and trying to launch satellites to beam the internet down to the markets it covets, Facebook has become a dominant force on a continent that is rapidly getting online.
    But that has given it a power that has made some in Africa uncomfortable.
    Some countries have blocked access, and outsiders have complained Facebook could squelch rival online business initiatives. Its competition with other internet companies from the United States and China has drawn comparisons to a bygone era of colonialism.
    For Kenyans like Phyl Cherop, 33, an entrepreneur in Nairobi, online life is already dominated by the social network. She abandoned her bricks-and-mortar store in a middle-class part of the city in 2015 to sell on Facebook and WhatsApp.

    Phyl Cherop, who lives in Kenya, closed her bricks-and-mortar store to sell items through Facebook. Credit Adriane Ohanesian for The New York Times
    “I gave it up because people just didn’t come anymore,” said Ms. Cherop, who sells items like designer dresses and school textbooks. She added that a stand-alone website would not have the same reach. “I prefer using Facebook because that’s where my customers are. The first thing people want to do when they buy a smartphone is to open a Facebook account.”
    As Facebook hunts for more users, the company’s aspirations have shifted to emerging economies where people like Ms. Cherop live. Less than 50 percent of Africa’s population has internet connectivity, and regulation is often rudimentary.
    Since Facebook entered Africa about a decade ago, it has become the region’s dominant tech platform. Some 170 million people — more than two-thirds of all internet users from South Africa to Senegal — use it, according to Facebook’s statistics. That is up 40 percent since 2015.
    The company has struck partnerships with local carriers to offer basic internet services — centered on those offered by Facebook — for free. It has built a pared-down version of its social network to run on the cheaper, less powerful phones that are prevalent there.


    Mr. Zuckerberg visited Lagos, Nigeria, last year. Credit Andrew Esiebo for The New York Times
    Facebook is also investing tens of millions of dollars alongside telecom operators to build a 500-mile fiber-optic internet connection in rural Uganda. In total, it is working with about 30 regional governments on digital projects.
    “We want to bring connectivity to the world,” said Jay Parikh, a Facebook vice president for engineering who oversees the company’s plans to use drones, satellites and other technology to connect the developing world.
    Facebook is racing to gain the advantage in Africa over rivals like Google and Chinese players including Tencent, in a 21st century version of the “Scramble for Africa.” Google has built fiber internet networks in Uganda and Ghana. Tencent has released WeChat, its popular messaging and e-commerce app, in South Africa.
    Facebook has already hit some bumps in its African push. Chad blocked access to Facebook and other sites during elections or political protests. Uganda also took legal action in Irish courts to force the social network to name an anonymous blogger who had been critical of the government. Those efforts failed.
    In Kenya, one of Africa’s most connected countries, there has been less pushback.
    Facebook expanded its efforts in the country of 48 million in 2014. It teamed up with Airtel Africa, a mobile operator, to roll out Facebook’s Free Basics — a no-fee version of the social network, with access to certain news, health, job and other services there and in more than 20 other countries worldwide. In Kenya, the average person has a budget of just 30 cents a day to spend on internet access.
    Free Basics now lets Kenyans use Facebook and its Messenger service at no cost, as well as read news from a Kenyan newspaper and view information about public health programs. Joe Mucheru, Kenya’s tech minister, said it at least gives his countrymen a degree of internet access.
    Still, Facebook’s plans have not always worked out. Many Kenyans with access to Free Basics rely on it only as a backup when their existing smartphone credit runs out.
    “Free Basics? I don’t really use it that often,” said Victor Odinga, 27, an accountant in downtown Nairobi. “No one wants to be seen as someone who can’t afford to get online.”

    A cybercafe in Nairobi, Kenya, earlier this year. Africa, where many people are only just beginning to get online, is a greenfield for internet companies like Facebook. Credit Adriane Ohanesian for The New York Times
    Paul Mozur reported from Hong Kong, Mark Scott from Nairobi, and Mike Isaac from San Francisco.
    Follow Paul Mozur, Mark Scott and Mike Isaac on Twitter @paulmozur @markscott82 @MikeIsaac.