Crawl Errors
Below is a list of common reasons for Ecograder crawl and reporting errors. Where possible, we have included potential action items for each.
Most Common Reasons for Ecograder Crawl Errors
We built Ecograder to crawl any publicly available URL. If that URL uses standard web technologies, your crawl should go off without a hitch. However, there are occasional exceptions to this. Here are a few of them.
1. Bots are blocked by the website you want to crawl
Some websites actively block bot crawlers to improve security and cut down on bad bot traffic.
Action item: Unfortunately, if a website blocks bots, there isn’t much we can do about this. You won’t be able to use Ecograder on those URLs.
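If you are unsure whether a site disallows crawlers, one quick (though incomplete) check is to look at its robots.txt file. Keep in mind that many sites block bots at the firewall or CDN level instead, which robots.txt will not reveal. Below is a minimal sketch using Python's standard library; the example.com URL is a placeholder.

```python
# Minimal sketch: check whether a site's robots.txt disallows generic crawlers.
# Note: this only detects robots.txt rules; firewall/CDN-level bot blocking
# will not show up here. "https://example.com" is a placeholder URL.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()

# "*" stands for any user agent; the second argument is the page you want to scan.
if parser.can_fetch("*", "https://example.com/"):
    print("robots.txt does not block generic crawlers for this path.")
else:
    print("robots.txt disallows generic crawlers; Ecograder may be blocked too.")
```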
2. The page scan times out
Occasionally, Ecograder times out when crawling a page. This might happen for any number of reasons, from network traffic to third-party API issues.
Action item: Try scanning the page again by hitting the ‘Retry’ button. If that doesn’t work, wait a while and try again. You might also try another URL on the same website.
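If a scan keeps timing out, it can also help to confirm that the page itself responds within a reasonable time before retrying. The sketch below simply times a plain HTTP request; the URL and the 30-second timeout are assumptions, not Ecograder's actual crawl settings.

```python
# Minimal sketch: time a plain page fetch to see whether the page itself is
# slow to respond. The URL and 30-second timeout are placeholders and do not
# reflect Ecograder's actual crawl settings.
import time
from urllib.request import urlopen

url = "https://example.com/"
start = time.monotonic()
try:
    with urlopen(url, timeout=30) as response:
        response.read()
        elapsed = time.monotonic() - start
        print(f"Fetched {url} in {elapsed:.1f}s (HTTP {response.status})")
except Exception as exc:
    print(f"Request failed or timed out after {time.monotonic() - start:.1f}s: {exc}")
```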
3. Google Lighthouse renames an audit
Periodically, Google renames Lighthouse audits. The underlying data stays the same, but the audit name Ecograder looks for no longer matches, which can cause a scan to fail.
Action item: If Google Lighthouse renames an audit, you might consistently receive an error while trying to scan. If you are unable to resolve the error after retrying a scan two or three times, please contact us.
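For context, Lighthouse reports store results in an "audits" object keyed by audit id, and a tool that reads a specific id stops finding it once Google renames that audit. The sketch below illustrates the general failure mode with a purely hypothetical old/new id pair and report filename; it is not Ecograder's code.

```python
# Minimal sketch of why a renamed Lighthouse audit breaks downstream tools.
# Lighthouse reports keep results in a JSON object under "audits", keyed by
# audit id. The ids and filename below are hypothetical placeholders.
import json

with open("lighthouse-report.json") as f:
    report = json.load(f)

audits = report.get("audits", {})

# A tool that only knows the old id gets None once the audit is renamed...
value = audits.get("old-audit-id")

# ...so a fallback list of known ids is one way to stay compatible.
for audit_id in ("old-audit-id", "new-audit-id"):
    if audit_id in audits:
        value = audits[audit_id]
        break
else:
    print("Audit not found under any known id; the report format may have changed.")
```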
Green Hosting Values
Ecograder pulls data directly from The Green Web Foundation's hosting database to assess whether pages use a provider that powers its infrastructure with renewable energy. In today's modular web environment, built on many third-party services, this can be a complicated endeavor. For more information, read their post Why Does my Website Show up as Grey in the Green Web Checker?.
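If you want to double-check a domain yourself, The Green Web Foundation offers a public greencheck API. The sketch below queries it for a single hostname; the exact endpoint path and response fields shown here ("green", "hosted_by") are assumptions based on their v3 API, so consult their API documentation for current details.

```python
# Minimal sketch: query The Green Web Foundation's greencheck API for one
# hostname. The endpoint path and response fields ("green", "hosted_by") are
# assumptions; consult the foundation's API documentation for details.
import json
from urllib.request import urlopen

hostname = "example.com"  # placeholder: the domain you want to check
url = f"https://api.thegreenwebfoundation.org/api/v3/greencheck/{hostname}"

with urlopen(url, timeout=10) as response:
    result = json.load(response)

if result.get("green"):
    print(f"{hostname} appears to be green hosted by {result.get('hosted_by')}.")
else:
    print(f"{hostname} does not appear in the green hosting database.")
```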
If your Ecograder report provides inaccurate hosting information, please contact The Green Web Foundation directly. Thank you.