Google Search's Guidance About AI-Generated Content
Google has long believed in the power of AI to transform the ability to deliver helpful information. In this post, we'll share more about how AI-generated content fits into Google's long-standing approach to showing helpful content to people on Search.
Rewarding high-quality content, however it is produced
Google's ranking systems aim to reward original, high-quality content that demonstrates the qualities of E-E-A-T: Experience, Expertise, Authoritativeness, and Trustworthiness, as shared on the How Search Works site.
Google's focus on the quality of content, rather than how content is produced, is a useful guide that has helped deliver reliable, high-quality results to users for years.
For example, about ten years ago, there were understandable concerns about a rise in mass-produced yet human-generated content. No one would have thought it reasonable for Google to ban all human-generated content in response. Instead, improving the systems to reward quality content made more sense.
Focusing on rewarding quality content has been core to Google since the beginning. It continues today, including through the ranking systems designed to surface reliable information and the helpful content system. The helpful content system was introduced last year, with the August 2022 helpful content update, to better ensure that people searching get content created primarily for people rather than for search ranking purposes.
How automation can create helpful content
Regarding automatically generated content, Google's guidance has been consistent for years. Using automation—including AI—to develop content primarily to manipulate ranking in search results violates Google's spam policies.
Google has many years of experience dealing with automation being used in an attempt to game search results. Google's spam-fighting efforts—including its SpamBrain system—will continue no matter how spam is produced.
This said, it's important to recognize that not all use of automation, including AI generation, is spam. Automation has long been used to generate helpful content, such as sports scores, weather forecasts, and transcripts. AI can power new levels of expression and creativity and be a critical tool to help people create great content for the web.
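Automated content of this kind is typically produced by filling a fixed editorial template with structured data. A minimal sketch of that long-standing pattern (the function name and all data below are hypothetical, not any Google system):

```python
# Illustrative sketch: template-based content generation, the kind of
# automation long used for weather forecasts. All data here is made up.

def render_forecast(city: str, high_c: int, low_c: int, conditions: str) -> str:
    """Fill a fixed editorial template with structured forecast data."""
    return (
        f"{city} forecast: {conditions}, with a high of {high_c}\u00b0C "
        f"and a low of {low_c}\u00b0C."
    )

# Structured input records, as might come from a data feed.
forecasts = [
    {"city": "Lisbon", "high_c": 19, "low_c": 12, "conditions": "partly cloudy"},
    {"city": "Oslo", "high_c": 4, "low_c": -2, "conditions": "light snow"},
]

for record in forecasts:
    print(render_forecast(**record))
```

The helpfulness of such output comes from the accuracy of the underlying data and the usefulness of the template to readers, not from the fact that it was automated.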
This aligns with how Google has always thought about empowering people with new technologies. Google continues taking a responsible approach while also maintaining a high bar for information quality and the overall helpfulness of content on Search.
Google's advice for creators considering AI generation
As explained, no matter how the content is produced, those seeking success in Google Search should look to produce original, high-quality, people-first content demonstrating the qualities of E-E-A-T.
Creators can learn more about the concept of E-E-A-T on Google's Creating Helpful, Reliable, People-first Content help page. In addition, Google has updated that page with some guidance about thinking in terms of Who, How, and Why content is produced.
Whether you're using AI-generated content or not, evaluating your content in this way will help you stay on course with what Google's systems seek to reward.
Google AI FAQs
To further help, here are some answers to questions you may have about AI content and Google Search:
Appropriate use of AI or automation is not against Google's guidelines. "Appropriate" here means the automation is not used to generate content primarily to manipulate search rankings, which is against Google's spam policies.
Automation has long been used in publishing to create useful content. AI can assist with and generate useful content in exciting new ways.
Poor quality content isn't a new challenge for Google Search. Google has been tackling poor-quality content created by humans and automation for years and has existing systems to determine the helpfulness of content. Other systems work to elevate original news reporting and continue to be regularly improved.
These issues exist in both human-generated and AI-generated content. However the content is produced, Google's systems look to surface high-quality information from reliable sources and not information that contradicts well-established consensus on important topics. On topics where information quality is critically important—like health, civic, or financial information—Google's systems place an even greater emphasis on reliability signals.
Google has a variety of systems, including SpamBrain, that analyze patterns and signals to help identify spam content, however it is produced.
Using AI doesn't give content any special gains. It's just content. If it is useful, helpful, original, and satisfies aspects of E-E-A-T, it might do well in Search. If it doesn't, it might not.
If you see AI as an essential way to help produce helpful and original content, it might be useful to consider. If you see it as an inexpensive, easy way to game search engine rankings, it is not worth considering.
You should consider adding accurate author bylines when readers would reasonably expect them, such as for any content where someone might think, "Who wrote this?"
As a reminder, publishers that appear in Google News should use bylines and author information. Learn more on the Google News policies page.
AI or automation disclosures are useful for content where someone might wonder, "How was this created?" Consider adding them when readers would reasonably expect it.