The Googlebot CSS and JS Notification for Dental and Medical Websites
In the blog post below, I will provide background information about the Googlebot CSS and JS notification and the robots.txt file, and then explain the status (as of 7/28/15) of the robots.txt update.
Googlebot is Crawling the Web
Google learns about content on the Internet via a program called “Googlebot”. Googlebot visits websites and webpages to see what content is present, new, or updated, in a process known as “crawling”. By crawling (exploring/learning about) web pages, Google is able to “know” what web pages are out there on the Internet, and in turn, can show the best pages when a user performs a Google search.
Robots.txt is the Gatekeeper
Robots.txt is the name of a file that sits alongside all of the other files that comprise your website. Whereas .jpg files are images and .pdf files are PDF documents, robots.txt has a special job.
The robots.txt file acts as a gatekeeper for your website, and “tells” Googlebot what information it may or may not crawl. For example, while you would certainly want Googlebot to know about your content pages, you would not want Googlebot crawling and surfacing private information in, say, a “keepOut” directory.
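To make this concrete, here is a minimal sketch of a robots.txt file that plays gatekeeper in the way described above. The “keepOut” directory name is just the hypothetical example from this post, not a real path on any site; the file itself always lives at the root of the domain (e.g., www.example.com/robots.txt):

```
# Apply these rules to all crawlers, including Googlebot
User-agent: *

# Block crawling of the hypothetical private directory
Disallow: /keepOut/
```

Note that a Disallow line only asks well-behaved crawlers not to fetch those pages; it is not a security mechanism, so truly sensitive files should be protected on the server as well.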
And since Google has now made it clear that giving Googlebot access to your website’s CSS and JavaScript files might help with search engine rankings, it’s imperative that we update the robots.txt file to “allow” Googlebot to see these files.
What is the fix, and how do I implement it?
From a technical standpoint, the fix is to add directives to the robots.txt file that allow Googlebot access to .css and .js files. But as of this writing (7/28/15), there is no consensus on the best way to balance access to these files against security concerns. As soon as more information becomes available, I will post it here and on the SHD social sites.
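As a hedged sketch of what such directives can look like (the exact patterns that make sense will vary from site to site, which is part of the open question above), one commonly suggested approach is to add explicit Allow rules for stylesheets and scripts:

```
# Rules aimed specifically at Google's crawler
User-agent: Googlebot

# Allow Googlebot to fetch any URL ending in .css or .js
Allow: /*.css$
Allow: /*.js$
```

Googlebot supports the `*` wildcard and the `$` end-of-URL anchor in robots.txt patterns, so these two lines permit it to fetch stylesheet and script files anywhere on the site without opening up other blocked directories.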