- Crawling: This is the first step of the process. A crawler fetches pages and the links on them, continuously discovering new and updated (modified) pages on the web. The programs that do this are called spiders or crawlers (also known as bots; Google's is the Googlebot). When crawling starts, the crawler visits a list of known sites, detects links on each page, adds newly discovered pages to the list, records dead links, and updates all its stored data.
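The link-detection part of crawling can be sketched in a few lines. This is a minimal illustration using only Python's standard-library HTML parser; the sample page is made up, and a real crawler would also fetch pages over the network, respect robots.txt, and track visited URLs.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags, the way a crawler detects links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

# Hypothetical page content for illustration only.
page = '<html><body><a href="/about">About</a> <a href="https://example.com">Site</a></body></html>'
print(extract_links(page))  # ['/about', 'https://example.com']
```

Each extracted link would be added to the crawler's frontier (its to-visit queue) if it has not been seen before.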
- Indexing: After the crawling process is done, the spider stores all the collected data (webpage content, URLs, and links) in a database called the index. The search engine consults this database whenever a query needs to be answered.
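A common data structure for this step is an inverted index, which maps each word to the pages containing it. The sketch below is a simplified illustration with made-up page contents; production indexes also store positions, frequencies, and much more.

```python
from collections import defaultdict

# Hypothetical crawled pages, for illustration only.
pages = {
    "page1.html": "search engines crawl the web",
    "page2.html": "spiders index web pages",
}

def build_index(pages):
    """Map every word to the set of pages it appears on."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

index = build_index(pages)
print(sorted(index["web"]))  # ['page1.html', 'page2.html']
```

Looking up a query word in the index is then a single dictionary access instead of a scan over every stored page.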
- Relevancy: Once a search query is entered, the search engine searches the indexed database for data related to the query. Because the index holds billions of documents, the engine calculates how relevant each one is to the query, following a ranking algorithm to present accurate results to users.
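One of the simplest relevancy signals is how often the query terms appear in a document. The toy scorer below illustrates the idea with invented documents; real engines combine hundreds of signals, not just term counts.

```python
def relevance(query, text):
    """Score a document by counting occurrences of each query term."""
    words = text.lower().split()
    return sum(words.count(term) for term in query.lower().split())

# Hypothetical documents, for illustration only.
docs = {
    "a.html": "web search engines rank web pages",
    "b.html": "cooking recipes and tips",
}
query = "web search"
ranked = sorted(docs, key=lambda url: relevance(query, docs[url]), reverse=True)
print(ranked)  # ['a.html', 'b.html'] -- most relevant first
```

Sorting by the score gives the ordering the results page will display.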
- Results: After the relevancy of each document is calculated, the engine displays the results on the search results page. This is the last step of the process.
Major search engines like Google use an algorithm called PageRank, named after Google co-founder Larry Page.
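The core idea of PageRank is that a page is important if important pages link to it. The power-iteration sketch below implements the classic formulation on a tiny invented link graph; it assumes every page has at least one outgoing link, and real implementations handle dangling pages and far larger graphs.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively compute PageRank for a graph given as {page: [outgoing links]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}   # start with a uniform distribution
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Each page q passes its rank evenly to the pages it links to.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

# Hypothetical three-page web: A links to B and C, B links to C, C links to A.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
print(ranks)  # C ranks highest: it is linked to by both A and B
```

The ranks form a probability distribution (they sum to 1), modeling a "random surfer" who follows links and occasionally jumps to a random page.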