Crawlers (or bots) are designed to read the HTML content of web pages, but AJAX calls that fetch data asynchronously are a problem for them, because it takes some time for a page to render and show its dynamic content. AngularJS uses the same asynchronous model, which creates a problem for Google's crawlers.
Some developers create basic HTML pages with the real data and serve them from the server side when a crawler visits. We can render the same pages with PhantomJS on the server side for requests that carry the _escaped_fragment_ query parameter (Google looks for #! in our site URLs, takes everything after the #!, and adds it as the _escaped_fragment_ query parameter). For more detail, please read this blog.
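For example, a minimal PhantomJS script can load a URL, wait for the asynchronous calls to finish, and print the fully rendered HTML. This is only a sketch: the file name, the command-line argument handling, and the fixed two-second wait are assumptions for illustration, not part of the original post.

```javascript
// snapshot.js — render a URL with PhantomJS and print the resulting HTML
var page = require('webpage').create();
var system = require('system');
var url = system.args[1];

page.open(url, function (status) {
  if (status !== 'success') {
    console.log('Failed to load ' + url);
    phantom.exit(1);
  } else {
    // Give AngularJS / AJAX calls a moment to finish before capturing the DOM.
    window.setTimeout(function () {
      console.log(page.content); // fully rendered HTML snapshot
      phantom.exit();
    }, 2000);
  }
});
```

It would be run as, say, `phantomjs snapshot.js "http://example.com/#!/products/42"`, and the output is the HTML snapshot we can hand back to the crawler.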
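On the server side, a request that carries _escaped_fragment_ can then be mapped back to the original #! URL and answered with the PhantomJS snapshot. The sketch below assumes an Express app and a hypothetical renderWithPhantom helper that runs a script like the one above and resolves with the rendered HTML; both are illustrative assumptions, not part of the original post.

```javascript
// Detect Google's _escaped_fragment_ parameter, rebuild the #! URL,
// and serve a pre-rendered snapshot instead of the empty AngularJS shell.
var express = require('express');
var app = express();

app.use(function (req, res, next) {
  var fragment = req.query._escaped_fragment_;
  if (fragment === undefined) {
    return next(); // ordinary browser request: serve the AngularJS app as usual
  }
  // e.g. ?_escaped_fragment_=/products/42  ->  http://host/#!/products/42
  var originalUrl = req.protocol + '://' + req.get('host') + '/#!' + fragment;
  renderWithPhantom(originalUrl) // hypothetical helper wrapping the PhantomJS script
    .then(function (html) { res.send(html); })
    .catch(next);
});
```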