
Bing autosuggest is returning disturbing suggestions relating to President Trump

Bing, which has a 33 percent share of the search market in the U.S., is appending the words "shot" and "killed" to the query "Trump should be".
Published 6th May 2018

Bing autosuggest, a feature that offers suggestions to users as they type queries into the Bing search engine, is outputting suggestions including “Trump should be killed” and “Trump should be shot”.

Bing’s suggestions for Barack Obama appear to have been more carefully moderated. A search for “Obama should be” only produced two autocomplete suggestions:

  • “Obama should be quiet”
  • “Obama should be allowed to Royal Wedding”

The full range of autocomplete suggestions produced by Bing for the query “Trump should be” include:

[Screenshot: Bing autosuggest results for the query "Trump should be"]

However, Google’s autocomplete contains many inflammatory searches related to President Obama and liberals, including:

[Screenshot: Google autocomplete suggestions relating to President Obama]

Appending every letter of the alphabet to the query “liberals are” returns more derogatory results.

[Screenshot: Google autocomplete suggestions for "liberals are"]

Google’s autocomplete suggestions are also hostile towards conservatives and particularly black conservatives. Some of the autocomplete suggestions for conservatives include:

[Screenshot: Google autocomplete suggestions relating to black conservatives]

The following seems particularly egregious because it appears for a short-tail term, i.e. a user only needs to type "conservatives need" before being prompted with the suggestion.

[Screenshot: Google autocomplete suggestion for "conservatives need"]

Google has faced a lot of criticism about its autocomplete suggestions in recent years.

A report by the Daily Mail from 2014 discovered derogatory slurs appended to searches for cities and towns in the UK, including “why is Bradford so full of P****?”

Meanwhile, Google removed results from autocomplete after an investigation by The Observer discovered Google’s autocomplete was appending a range of derogatory suggestions to the end of queries like “are Jews,” including the word “evil”.

In an official statement from 2016 about how it processes autocomplete results, Google sought to clarify that its autocomplete suggestions are based on what users are actually searching for, i.e. they are determined algorithmically.

“Users search for such a wide range of material on the web – 15% of searches we see every day are new. Because of this, terms that appear in autocomplete may be unexpected or unpleasant,” said an official Google spokesperson in 2016.
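To illustrate the point Google is making, here is a minimal sketch of what purely popularity-driven suggestion ranking looks like. All names and data are hypothetical; real autocomplete systems draw on vast query logs and many additional ranking and moderation layers.

```python
from collections import Counter

# Hypothetical query log; a real engine draws on billions of searches.
query_log = [
    "weather today", "weather tomorrow", "weather today",
    "web design", "weather radar", "web hosting",
]

def autosuggest(prefix, log, limit=3):
    """Return the most frequently logged queries starting with prefix."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [query for query, _ in counts.most_common(limit)]

print(autosuggest("wea", query_log))
```

A ranking like this surfaces whatever users type most often, offensive or not, which is why moderation has to be applied as a separate layer on top of the algorithm.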

Alongside announcing that it has increased its content monitoring team to 10,000 people, the search giant has also been keen to emphasize the part artificial intelligence plays in monitoring the content available through its platforms.

One particularly noteworthy example is that of YouTube, which has recently produced a video on the steps its technology takes to remove inappropriate content from its platform.

Despite these proclamations, Google’s inability to filter inappropriate suggestions from autocomplete perhaps signifies that its experiments with what it calls “artificial intelligence” aren’t as effective as it would like consumers to believe.

Moreover, it’s not the only tech organization that makes proclamations about artificial intelligence that aren’t backed up by results or actions.

Despite Mark Zuckerberg proclaiming during Senate hearings on Facebook’s relationship with Cambridge Analytica that “AI tools” will be the solution to many of Facebook’s content problems, the world’s largest social network has recently had to resort to sending postcards to verify political advertisers.
