When we enter a search query, Google tries to find matching words or keywords in the websites and web pages it knows about, and then displays results ranked by relevance. The process of discovering and reading web pages so they can later be served as results is called crawling. Now that we know what crawling is, how does it happen, and who crawls the web pages?
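The idea of matching keywords and ranking by relevance can be shown with a toy sketch. This is not Google's actual ranking algorithm (which is far more sophisticated); the page texts and the simple word-count score here are purely illustrative.

```python
# Toy keyword matching: pages containing more of the query's
# keywords score higher. NOT Google's real algorithm.

def relevance(query, page_text):
    """Count how many times the query's keywords appear in the page."""
    words = page_text.lower().split()
    return sum(words.count(kw) for kw in query.lower().split())

def search(query, pages):
    """Rank pages (url -> text) by descending keyword relevance."""
    scored = [(relevance(query, text), url) for url, text in pages.items()]
    return [url for score, url in sorted(scored, reverse=True) if score > 0]

# Hypothetical pages for illustration.
pages = {
    "a.example": "python tutorial for beginners python basics",
    "b.example": "cooking recipes and kitchen tips",
    "c.example": "advanced python tips",
}
print(search("python tips", pages))  # most relevant pages first
```

Pages that contain none of the keywords are left out of the results entirely, which mirrors the basic idea that only relevant pages are displayed.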
Googlebot (also called a spider) crawls the web pages. Once crawling begins, Googlebot checks whether each URL is one it has already seen or one that has been updated since the last visit. It follows every link, reads the content of each web page, and indexes it in Google's database. When a search query arrives, the results are displayed based on their relevance to that query.
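The link-following process above can be sketched in a few lines. The "web" below is an in-memory dictionary of hypothetical URLs mapped to HTML snippets, so the example needs no network access; a real bot would fetch each URL over HTTP instead.

```python
# Minimal crawler sketch: follow links page to page, skipping URLs
# already seen -- the same basic loop a bot like Googlebot performs.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

WEB = {  # hypothetical pages standing in for the real web
    "/home": '<a href="/about">About</a> <a href="/blog">Blog</a>',
    "/about": '<a href="/home">Home</a>',
    "/blog": '<a href="/home">Home</a> <a href="/blog">Blog</a>',
}

def crawl(start):
    """Visit every reachable page once, recording each URL as seen."""
    seen, queue = set(), [start]
    while queue:
        url = queue.pop(0)
        if url in seen or url not in WEB:
            continue              # already crawled, or unknown URL
        seen.add(url)             # record the URL as crawled
        parser = LinkExtractor()
        parser.feed(WEB[url])     # read the page content
        queue.extend(parser.links)  # queue every link found
    return seen

print(sorted(crawl("/home")))
```

Note the `seen` set: without it, pages that link back to each other (like `/home` and `/about` here) would be crawled forever.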
Crawling is not something you can pay for; you cannot pay a search engine to have your website crawled. It happens according to the search engine's own algorithm and process. The index database is where the words and URLs of web pages are stored, and even each word's location on the page is recorded. Because web pages are built with HTML, search engines also scan HTML tags such as the title, meta tags, and alt attributes. Googlebot reads all of these while crawling, and they get indexed. Indexing does have limitations, however: crawlers cannot fully track or scan Flash, rich media, audio, or video content.
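A simplified sketch of such an indexer is shown below: it records each word's URL and position, and also pulls text out of the title, meta description, and image alt attributes, since those carry no visible text but still get indexed. The page HTML is a made-up example, and real search-engine indexes are far more complex.

```python
# Indexer sketch: store (url, position) for every word, including
# words found in <title>, <meta content="...">, and alt attributes.
from collections import defaultdict
from html.parser import HTMLParser

class TextCollector(HTMLParser):
    """Gather page text plus meta-description and alt-attribute words."""
    def __init__(self):
        super().__init__()
        self.words = []
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "content" in attrs:
            self.words.extend(attrs["content"].lower().split())
        if tag == "img" and "alt" in attrs:
            self.words.extend(attrs["alt"].lower().split())
    def handle_data(self, data):  # visible text, including the title
        self.words.extend(data.lower().split())

def index_page(url, html, index):
    """Record (url, position) in the index for every word on the page."""
    collector = TextCollector()
    collector.feed(html)
    for pos, word in enumerate(collector.words):
        index[word].append((url, pos))

index = defaultdict(list)
index_page(
    "/home",  # hypothetical page
    '<html><head><title>Crawling basics</title>'
    '<meta name="description" content="how crawling works"></head>'
    '<body>Search engines crawl pages'
    '<img src="bot.png" alt="google bot"></body></html>',
    index,
)
print(index["crawling"])  # every (url, position) where the word occurs
```

Storing positions alongside URLs is what lets a search engine later match phrases, not just individual words. Note that the image's pixels themselves are never indexed, only its `alt` text, which is exactly the limitation described above for rich media.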