Google is set to launch a fierce battle against fake news and problematic content. By putting stringent measures in place, the search engine hopes to improve users’ search experience. Project Owl is Google’s attempt to deliver authoritative content through its search suggestions and Featured Snippets answers.
Over the years, people have consistently raised eyebrows over fake news, disturbing answers and offensive search suggestions appearing at the top of Google’s search results. Google took all of that into consideration and resolved issues manually as they appeared. Since November, however, the problem turned into a nightmare as Google was flooded with complaints about fake news and manipulated content.
Project Owl is Google’s attempt to fix these issues as urgently as possible. Under the project, Google is going full throttle with the following actions:
- A new feedback form for search suggestions and formal policies about why suggestions might be removed.
- A new feedback form for “Featured Snippets” answers.
- A new emphasis on authoritative content to improve search quality.
What’s Problematic Content?
When asked, Google explained what problematic content is all about. The search giant said that searchers are increasingly inclined to look up problematic content such as rumors, urban myths, slurs and derogatory topics. This growing trend is negatively influencing the search suggestions Google offers.
Google's Plan To Attack Problematic Content:
Google says that problematic searches have long been an issue, though a relatively minor one. Over the past few months, however, they have grown into a major public relations nightmare for the company.
Following are the measures that Google has been taking to tame the risk of disturbing content.
Improving Autocomplete search suggestions:
The first change comes to Google’s ‘Autocomplete’ feature. When a user starts typing a search phrase in the search box, Google suggests several related topics that other users have frequently searched for. This is Google’s ‘Autocomplete’ feature, introduced to speed up searching.
Haven’t you noticed that as soon as you type ‘Fac’ in the Google search box, several suggestions pop up, including ‘Facebook’, that you can choose from? This saves searchers the time they would otherwise spend typing out a query. These suggestions come from the most popular past searches that match the first few letters or words someone enters.
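The basic mechanism described above — matching a typed prefix against popular past queries and ranking by frequency — can be sketched in a few lines of Python. This is a toy illustration with made-up data, not Google’s actual system:

```python
# Toy sketch of prefix-based autocomplete (illustrative only, not
# Google's implementation): candidate suggestions come from a log of
# past queries and are ranked by how often each query was searched.
from collections import Counter

def autocomplete(prefix, query_log, limit=5):
    """Return up to `limit` past queries starting with `prefix`,
    most frequently searched first (ties broken alphabetically)."""
    counts = Counter(q.lower() for q in query_log)
    matches = [q for q in counts if q.startswith(prefix.lower())]
    matches.sort(key=lambda q: (-counts[q], q))
    return matches[:limit]

# Hypothetical query log: "facebook" is the most common "fac..." search.
log = ["facebook", "facebook", "facebook",
       "factory jobs", "face wash", "faq"]
print(autocomplete("fac", log))
# → ['facebook', 'face wash', 'factory jobs']
```

A production system would use a trie or a precomputed index rather than scanning the whole log per keystroke, but the ranking idea — popularity among real users’ searches — is the same, and it is exactly why offensive queries can surface as suggestions.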
Since these autocomplete suggestions are drawn from real searches people make, they sometimes reflect the problematic topics people are researching. Google lately noticed that these suggestions could deflect people from their actual research and drift them towards shocking and disturbing topics. Look at the following screenshot. As soon as a user starts typing ‘did the hol’, autocomplete brings up some frequently searched yet disturbing suggestions such as ‘did the holocaust happen’.
Over the years, Google received many such complaints from users, but recent incidents forced it to act. Last February, Google began a limited test allowing searchers to report offensive and problematic search suggestions.
Now the test has gone live for every user. Below is a snapshot of the feature. Users can now see a new “Report inappropriate predictions” link below the search box. Clicking that link pops up a form that lets people select one or more predictions and report them under one of several categories: hateful, sexually explicit, violent or including dangerous and harmful activity, plus a catch-all “Other”. Users can also add comments on the predictions.
Improving ‘Featured Snippets’ answers:
Google has also drawn flak over the past few months for some of its ‘Featured Snippets’ results. A featured snippet displays the one result, out of many, that Google believes best answers the query. Featured snippets are especially prominent with Google Assistant on Android phones and on Google Home, where Google presents a single best answer in response to a question. In that context, a problematic or offensive answer is a serious issue.
Now, Google has introduced ways to tackle the issue. The company has rolled out an improved feedback form that accompanies Featured Snippets. Alongside the existing “Feedback” link, featured snippets now show new options as well. See the screenshot:
Previously, the form only asked whether the Featured Snippet was helpful, had something missing, was wrong or wasn’t useful. The form still lets users mark a snippet as helpful. The newly added options let users flag an answer they don’t like: hateful, racist or offensive; vulgar or sexually explicit; harmful, dangerous or violent; or misleading or inaccurate.
More emphasis on authoritative content:
Moreover, Google is also mulling ways to improve its overall search quality in order to tackle problematic Featured Snippets. Better search quality will help Google surface more authoritative content for obscure and infrequent queries. This could be a big change that eventually delivers better search results than ever.
These changes are improved versions of existing features, rolled out to make users more aware of them and their usefulness. Google hopes they can curb the peril of fake news and disturbing content, and will resolve its search quality issues in this area.