Google is working to better identify potentially upsetting or offensive content for searchers. It hopes this will help keep such content from surfacing, returning instead accurate, credible information that closely matches the user’s query.
Paul Haahr, one of Google’s senior engineers responsible for search quality, said: “We avoided the term ‘fake news’ because we think it is too vague. Demonstrably inaccurate information, however, is something we want to target.”
This effort revolves around Google’s quality raters: more than 10,000 contractors Google uses around the world to evaluate search results. These raters are given actual queries to run, drawn from real searches Google sees. They then rate the pages that appear in the top results according to how relevant they are to the user’s query.
Quality raters have no authority to change Google’s results directly. A rater flagging a particular result as low quality will not cause that page to drop. Instead, the data generated by raters is used to improve Google’s search algorithms. Over time, that data may affect the low-quality pages found by raters, as well as other pages the raters never reviewed.
Quality raters work from a set of guidelines nearly 200 pages long, which teaches them how to evaluate website quality and whether the results returned meet the needs of specific user queries.
These guidelines have been updated with an entirely new section on “Upsetting-Offensive” content, which introduces a new flag raters can apply. Until now, raters had no way to flag pages of this kind.
According to the guidelines, upsetting or offensive content typically includes the following (quoted directly from the guide):
Content that promotes hate or violence against a group of people based on criteria such as race or ethnicity, religion, gender, nationality or citizenship, disability, sexual orientation, or veteran status.
Violent imagery, such as cruelty to animals or the abuse of children.
Information about how to carry out harmful activities (e.g., human trafficking or violent assault).
The guidelines also include examples. For instance, here is how two possible results for the search query “holocaust history” should be rated:
The first result is from a white supremacist site. Raters are told that this result should be flagged as Upsetting-Offensive, because many people would find Holocaust denial offensive.
The second result is from the History Channel. Raters are told not to flag this result, because it is a “factually accurate source of historical information.”
In two other examples, raters are told to flag a result that inaccurately portrays a scientific study in a harmful way, and a page that appears to exist solely to promote intolerance:
Flagged content is not necessarily downgraded or banned
What happens when content is flagged this way? Nothing, right away. Results flagged by raters serve as “training data” for the Google coders who write the search algorithms, as well as for Google’s machine learning systems. Essentially, content like this is used to help Google learn how to automatically identify upsetting or offensive content.
In other words, being flagged as “Upsetting-Offensive” by a rater does not mean the page or site is also labeled that way in Google’s actual search results. Instead, the flags are data Google uses to teach its search algorithms to automatically detect which pages should be flagged.
If the algorithms themselves flag content, that content becomes less likely to be shown for searches with a general, learning-oriented intent. For example, people seeking information about the Holocaust should be less likely to encounter Holocaust-denial sites, if everything goes according to Google’s intentions.
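The pipeline described above (human flags in, automatic classification out) can be illustrated with a deliberately tiny sketch. This is not Google’s actual method; its systems are proprietary and far more sophisticated. The page texts, function names, and the naive Bayes scoring below are all invented for illustration:

```python
# Illustrative only: turns human rater flags into training data for a
# classifier that scores unseen pages. All examples are invented.
from collections import Counter
import math

# Hypothetical rater output: page text paired with an Upsetting-Offensive
# flag (1 = flagged, 0 = not flagged).
rated_pages = [
    ("content promoting hate and violence against a group", 1),
    ("guide describing how to carry out human trafficking", 1),
    ("factually accurate historical information about the holocaust", 0),
    ("encyclopedia entry explaining a sensitive historical topic", 0),
]

def train(pages):
    """Count word frequencies per class (a tiny naive Bayes model)."""
    counts = {0: Counter(), 1: Counter()}
    for text, flag in pages:
        counts[flag].update(text.split())
    return counts

def score(counts, text):
    """Log-odds that a page should be flagged, with add-one smoothing."""
    vocab = set(counts[0]) | set(counts[1])
    total0 = sum(counts[0].values())
    total1 = sum(counts[1].values())
    log_odds = 0.0
    for word in text.split():
        p1 = (counts[1][word] + 1) / (total1 + len(vocab))
        p0 = (counts[0][word] + 1) / (total0 + len(vocab))
        log_odds += math.log(p1 / p0)
    return log_odds

model = train(rated_pages)
# A positive score means the page resembles the flagged examples.
print(score(model, "page promoting violence against people") > 0)  # True
```

Note that, as the article stresses, a high score in a system like this would feed into ranking for general informational queries rather than remove the page from the index.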
Being flagged as Upsetting-Offensive does not mean the content will disappear from Google entirely. When Google determines that a user explicitly wants to find such content, it can still be returned. For example, if someone clearly wants to find a white supremacist site, the raters are instructed:
People who want to find objectionable content will receive real information.
And what about searches where the user’s intent is less clear? For example, if someone searches to find out whether the Holocaust happened, should that be treated as a user who genuinely wants credible content to resolve the question, even if the resulting information could be considered upsetting or offensive?
The guidelines address this case. They acknowledge that users may search for topics that can be upsetting or offensive, and their position is that in such cases, raters should assume the user wants credible, factually accurate information.
From the guidelines:
Remember that users of all ages, genders, races, and religions use search engines for a variety of needs. One particularly important user need is exploring subjects that are difficult to discuss in person. For example, some people may hesitate to ask what a racial slur means. Others may want to understand why racially offensive insults are made. Giving users access to resources that help them understand racism, hatred, and other sensitive topics is beneficial to society.
When a user’s query seems to either ask for or tolerate potentially upsetting or offensive content, we call it an “Upsetting-Offensive tolerant query.” For the purpose of Needs Met rating, assume that the user has a genuine educational/informational intent and truly wants to perform the upsetting or offensive search. All results should be rated on the Needs Met rating scale on the assumption of that genuine educational/informational intent.
In particular, to receive a Highly Meets rating, results on upsetting or offensive topics must:
Address the specific topic of the query so that users can understand why it is upsetting or offensive and what the sensitive aspects surrounding it are.
The guidelines also give some examples of upsetting or offensive searches:
Is it effective?
Google told Search Engine Land that it had already been testing these new guidelines with a subset of its quality raters, and used that data to make ranking changes back in December. The goal of that work was to reduce the offensive content that appeared for searches such as “did the Holocaust happen.”
The results returned for that search have clearly improved. In part, the ranking change worked. In part, the flood of new content prompted by attention to how bad those search results were has also had an impact.
In addition, Google no longer returns a fake video of President Barack Obama saying he was born in Kenya for the search “obama born in kenya,” as it did before. (Unless you select the “Videos” search option, where the fake video, still hosted on Google’s own YouTube, remains the first result.)
Similarly, a search for the phrase “obama pledge of allegiance” no longer returns, as its first result, a fake news story claiming he planned to ban the Pledge of Allegiance, as it once did. The story still appears on the first page of results, but it now ranks below five articles debunking it.
Not everything has improved, however. A search for the phrase “white people are inbred” still returns, as its first result, content that appears to violate Google’s new guidelines.
“We will see how some of this works out. I’ll be honest: we’re learning as we go,” Haahr said, acknowledging that the effort will not produce perfect results. Google hopes, however, that it will be a significant improvement. Haahr said the raters have successfully helped shape Google’s algorithms in the past, and he is confident they will help Google improve how it handles fake news and problematic search results.
“We are very happy with what the raters give us. We have only been able to improve our ranking as much as we have over the years because we have a really strong rater program that truly helps us evaluate what we are doing,” he said.
In an increasingly polarized political environment, it is natural to wonder how raters will handle content that can easily be found on major news sites and that partisans on both the liberal and conservative sides might call biased or worse. Should such content be flagged as “Upsetting-Offensive”? Under the guidelines, the answer is no, because political orientation is not among the criteria the flag covers.
And what about content that is simply fake rather than offensive, such as the query “who invented the stairs,” which prompted Google to answer that stairs were invented in 1948?
Or the situation, awkward for both Google and Bing, of a made-up story about someone who “invented” homework:
Google believes other changes to the guidelines can address such cases: raters are instructed to verify the factual accuracy of answers, and to give high ratings to sites with demonstrably true information rather than to sites that merely seem credible.
Translated by Persotran
Edited by vietmoz.net
This article is licensed under the GNU Free Documentation License.