HOW A SEARCH ENGINE WORKS
A search engine is a web-based tool that enables users to locate information on the World Wide Web. Popular examples of search engines are Google, Yahoo!, and MSN Search. A search engine achieves this by examining many web pages to find matches to the user's search input.
In simple terms, a search engine is a program that identifies files in its database based on the keywords (known as the search query) entered by the user, and fetches results relevant to what the user searched for. It is not wrong to say that a search engine behaves like artificial intelligence whose algorithm is built entirely around the user's point of view: its main aim is to put the best and most relevant content in front of its users, and its whole programming is done in a way that facilitates the user and gives a better experience every time. Google is especially famous among all the search engines because people rely on it, and Google continuously works on its user experience, trying to make it ever more relevant and friendly for users searching their queries.
|SOME WELL-KNOWN SEARCH ENGINES USED WORLDWIDE|
The Best Search Engine in the World: Google
- Search Engine – Bing
- Search Engine – Baidu
- Search Engine – Yahoo!
- Search Engine – Yandex
- Search Engine – Ask
- Search Engine – DuckDuckGo
- Search Engine – Naver
|1. Crawler-based search engines|
They “crawl” or “spider” the web, then people search through what they have found. If web pages are changed, crawler-based search engines eventually find these changes, and that can affect how those pages are listed. Page titles, body copy and other elements all play a role.
|2. Human-powered directories|
A human-powered directory, such as the Open Directory Project, depends on humans for its listings. (Yahoo!, which used to be a directory, now gets its information from crawlers.) A directory gets its information from submissions, which include a short description of the entire site, or from editors who write one for the sites they review.
|3. Hybrid searches|
Today, it is extremely common for crawler-type and human-powered results to be combined when conducting a search. Usually, a hybrid search engine will favor one type of listings over another.
Every search engine has its own bot; Google's bot, for example, is called Googlebot.
GOOGLE – Googlebot
YAHOO – Slurp
BING – Bingbot
YANDEX – YandexBot
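Each of these bots identifies itself by a user-agent name, and site owners can allow or block specific bots through a robots.txt file. Below is a minimal Python sketch, using the standard library's urllib.robotparser, of how a crawler checks those rules before fetching a page; the robots.txt content and URLs here are hypothetical, for illustration only.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; a real crawler fetches this file
# from the site root before crawling anything else.
robots_txt = """\
User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Googlebot may fetch public pages but not /private/;
# every other bot is blocked entirely by the "*" rule.
print(rp.can_fetch("Googlebot", "https://example.com/page.html"))    # True
print(rp.can_fetch("Googlebot", "https://example.com/private/x"))    # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/page.html")) # False
```

Well-behaved crawlers consult these rules before every fetch, which is why the user-agent names above matter to site owners.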
In short, search engines are programs that make it easy for people to search the internet for relevant web pages. The three main functions of a search engine are collecting information about web pages, categorizing those web pages, and applying an algorithm that makes it easy for people to find relevant web pages.
|WHY WE USE GOOGLE COMPARED TO OTHER SEARCH ENGINES|
There are some statistics from which we can draw the presumption that Google is used by more users than the other search engines. Google's dominance over the other search engines comes from the fact that the results and the SERP presented to the audience are more relevant and trustworthy.
According to the data:
- Google: 80% (that's pretty impressive)
- Baidu: 11% (Chinese-language only; helped by the fact that Google is blocked in China)
- Bing: 5% (only 5% worldwide, but up to 33% in the U.S.!)
- Yahoo: 3% (11% of US market share)
Google's trust has continued to increase over the years by:
- Constantly updating their search algorithms (the technology and process for collecting relevant information)
- Focusing on the user, their intent, and their satisfaction
One of the biggest game changers behind Google's dominance worldwide is:
- Google constantly tests and improves its algorithms and AI.
Google's algorithm is a closely guarded secret that is constantly being developed using human analysis and machine learning.
The Internet is constantly growing as more and more people and businesses create websites. In this era, Google has simplified the web by creating an outstanding search engine that people trust.
|FRAMEWORK OF A SEARCH ENGINE – HOW IT WORKS|
The whole search engine is based on only three steps:
crawling, indexing, and retrieval. These three are the pillars of the search engine; we can say they are the foundation of its processing. In fact, Google, which dominates the world over the other search engines, is also based on only these three pillars, and the whole algorithm of Google is built on these steps.
CRAWLING → INDEXING → RETRIEVAL
A crawler is a kind of agent of the search engine that visits web pages across the web and analyzes them. Crawling is the process a search engine uses to gather information about websites. A crawler is a computer program that automatically searches documents on the Web; crawlers are primarily programmed for repetitive actions so that browsing is automated. Crawling helps the search engine discover updated content on the web, such as new sites or pages and changes to existing sites.
A search engine uses a program referred to as a "crawler", which follows a proper process to determine which sites to crawl and how.
In simple terms, the crawler is the agent of the search engine; in Google's case, that crawler is called Googlebot.
The term crawler comes from the first search engine on the Internet: the WebCrawler. Synonyms are "bot" or "spider." The most well-known web crawler is Googlebot.
The crawler visits web pages at particular intervals and takes a snapshot of each one. One thing is very relevant to point out here: the crawler cannot understand the rendered snapshot it takes from a web page, because it is only able to read and analyze the HTML code, which stands for Hypertext Markup Language. Since the crawler only understands this markup language, it learns from the HTML what your content is, what your niche is, and how relevant your content is on the web page. After fetching these details, the crawler's next step begins, which is called indexing.
- The crawler crawls all the websites.
- When you own a website, it has to be read in order to be stored in the Google database.
- The crawler only understands HTML, which stands for Hypertext Markup Language.
The crawler's pipeline:
1. Take a snapshot of the page
2. Decode the snapshot into HTML
3. Send it to the Google database
4. In the database, indexing starts
Entire sites or specific pages can be selectively visited and indexed. Crawlers apparently gained the name because they crawl through a site a page at a time, following the links to other pages until all pages have been read. Here the crawler's process ends; summarising its functioning, we can say the crawler performs two tasks: taking snapshots of pages and sending them to the database for indexing.
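The crawler behaviour described above, reading a page's HTML and following its links to discover further pages, can be sketched in Python with the standard library's html.parser. The HTML snippet below is a hypothetical stand-in for a fetched page; no real network request is made.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, the way a crawler
    discovers new pages to visit by following links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A tiny stand-in for a fetched page snapshot (illustrative only).
html_snapshot = """
<html><body>
  <h1>Digital Marketing Agency</h1>
  <a href="/services.html">Services</a>
  <a href="/contact.html">Contact</a>
</body></html>
"""

parser = LinkExtractor()
parser.feed(html_snapshot)
print(parser.links)  # ['/services.html', '/contact.html']
```

A real crawler would fetch each discovered link in turn, repeating the process until the whole site has been read, exactly the page-at-a-time "crawling" the name refers to.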
THE CRAWLER PERFORMS TWO TASKS –
Indexing is the process by which search engines organize information before a search to enable super-fast responses to queries.
In simple language, indexing is the process of creating an index for all fetched web pages and keeping them in a giant database from which they can later be retrieved.
Indexing is the way to get an unordered table into an order that will maximize a query's efficiency while searching.
There is one term called inverted indexing; before going deep into the functioning of indexing, you first have to know what inverted indexing is. An inverted index is a system wherein a database of text elements is compiled along with pointers to the documents that contain those elements. Then, search engines use a process called tokenization to reduce words to their core meaning, thus reducing the amount of resources needed to store and retrieve data.
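As a rough illustration of the idea (not Google's actual implementation), here is a minimal inverted index in Python with a very simplified tokenization step; the page IDs and texts are made up.

```python
import re
from collections import defaultdict

def tokenize(text):
    """Lowercase and split into word tokens -- a drastically
    simplified version of the tokenization step described above."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_inverted_index(pages):
    """Map each token to the set of page IDs that contain it."""
    index = defaultdict(set)
    for page_id, text in pages.items():
        for token in tokenize(text):
            index[token].add(page_id)
    return index

# Hypothetical crawled pages (illustrative data, not real sites).
pages = {
    "page1": "Digital marketing agency for small business",
    "page2": "Best digital camera reviews",
    "page3": "Marketing tips for agencies",
}

index = build_inverted_index(pages)
print(sorted(index["marketing"]))  # ['page1', 'page3']
print(sorted(index["digital"]))    # ['page1', 'page2']
```

The point of the structure is that, at query time, the engine looks up each query term directly instead of scanning every stored page, which is what makes the super-fast responses mentioned above possible.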
You may have noticed that errors sometimes appear on the SERP; these errors occur due to glitches in the process of finding the particular query the user searched for on the search engine.
|FUNCTIONING OF THE INDEXING|
TAKE THE SNAPSHOT
(take the information and read or decode it into HTML)
ARRANGE/INDEX ACCORDING TO THE WORDS AND EXPRESSIONS THAT DESCRIBE THE PAGES
According to the niche your content is written in, the crawler indexes the page into the Google database; before indexing, it segregates your page according to the niche of your content.
Let us take a case study.
Suppose your website is about the best digital marketing agency. To store the data, the crawler reads the niche of your website, where "digital marketing" is your seed keyword.
According to your seed keyword, the crawler places (indexes) your site in the corresponding segregated database.
|RETRIEVAL|
Retrieval is the process of accessing and fetching the most appropriate information from text based on a particular query given by the user; information retrieval is the process of satisfying user information needs that are expressed as textual queries.
Retrieval involves two processes, which are as follows. When a search query comes in, the search engine's processing starts: it compares the search string in the request with the indexed pages in the database, and then checks the relevancy of those pages, i.e., whether they match what the user is searching for.
But it is very relevant to point out here that there will be many indexed pages in the Google database, and many of them will match the search string the user typed in. To tackle this problem, the search engine calculates a relevancy score for each matching indexed page: how relevant your content is to what the user searched for. It then presents the results to the user accordingly.
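As a toy illustration of relevancy scoring (real engines combine far more sophisticated signals), the sketch below counts query-term overlap and ranks a few hypothetical pages by that score.

```python
def relevancy_score(query, page_text):
    """Count how many query terms appear in the page -- a crude
    stand-in for the far more complex scoring real engines use."""
    page_tokens = set(page_text.lower().split())
    return sum(1 for term in query.lower().split() if term in page_tokens)

# Hypothetical indexed pages that all partially match the query.
pages = {
    "page1": "best digital marketing agency in town",
    "page2": "digital camera buying guide",
    "page3": "marketing agency case studies",
}

query = "digital marketing agency"
ranked = sorted(pages, key=lambda p: relevancy_score(query, pages[p]),
                reverse=True)
print(ranked)  # ['page1', 'page3', 'page2']
```

Here page1 matches all three query terms, page3 matches two, and page2 only one, so the ordering shown on the SERP follows the scores.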
Retrieval involves one most important term to understand, which is called relevancy.
|ALGORITHM OF THE SEARCH ENGINE|
All the rankings you see on the SERP are based on a website's (R.S + P.S), its relevancy score plus its promotional score. The search engine updates its algorithm from time to time to bring the most appropriate and relevant sites to the top of the SERP, and these algorithm changes also help prevent spam sites from ranking on the SERP.
And to rank on that SERP we do SEO; this is why we do SEO, to realise the importance of the relevancy and promotional scores needed to rank 1st on the search engine.
From the above report we get a full assessment of the search engine and how it works!
😊😊😊😊😊😊😊😊😊 THANK YOU 😊😊😊😊😊😊😊😊😊