Lesson 9 - The Fetch as Google Tool
One of the most important and practical tools in the Crawl section is Fetch as Google, which helps you bring Google's spider robots into your website.
This matters most when you are looking for a way to improve the poor performance of your pages in search results.
For example, when you use media files to present your website's content, if Google's robots cannot crawl that content properly, your page will most likely not appear in search results the way it actually looks. To fix this, you can use the Fetch as Google tool so that the page is picked up by Google's spiders again.
Another use of Fetch as Google is when your website has been hacked. In that situation, the tool helps you identify the affected pages. For example, suppose the administrator of www.example.com searches for his website on Google and is surprised to find it among the results for a word such as Viagra, especially since that word has never appeared on his blog. Fortunately, his site is verified in Google Webmaster Tools, so he can use Fetch as Google to see exactly what Google has seen on it.
The tool shows him the details and content of the site's pages, and he can see exactly where the word Viagra and other spam terms appear on his website.
This happens when a malicious hacker has broken into the site and added unwanted, hidden content to it. Only Google's robots detect this content, not ordinary visitors, because the site's source code is served normally to users while Google's robots are shown something different. Detecting this problem by any means other than the Fetch as Google tool is very difficult.
To get the most out of Fetch as Google, Google recommends using it together with the HTML Suggestions and Crawl Errors tools. HTML Suggestions gives you recommendations for improving your title tags, meta descriptions, and other elements that affect how your site performs in search, while Crawl Errors shows you the pages that Google's robots have trouble crawling.
To use Fetch as Google, go to the Crawl section of Google Webmaster Tools and enter, in the text box, the URL of the part of your site you want checked. Then choose from the list how Google's robots should fetch it: select Web to see how Googlebot crawls your site for desktop, Mobile Smartphone to see how it crawls for smartphones, and, for feature phones, either Mobile cHTML (used mostly for Japanese websites) or Mobile XHTML/WML.
Then press the Fetch button to have Google's robots crawl the requested URL. You can also click Fetch and Render here so that, in addition to crawling the page, Google's robots render it as well.
Once Google's robots finish successfully, you can submit that page for indexing by clicking the Submit to Google Index button. You can use the Fetch as Google tool up to 500 times per week.
Blocked URLs
The fourth part of the Crawl section is Blocked URLs, which shows you the URLs on your website that the robots.txt file keeps Google from accessing. You created this file earlier, instructing Google's robots to stay out of certain pages of your site, usually because the content of those pages is meant to remain private.
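For instance, a robots.txt such as the following (the /private/ directory is only a placeholder) would ask all crawlers to stay out of that directory, and any URLs under it would then show up in the Blocked URLs report:
User-agent: *
Disallow: /private/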
You can also keep a page out of Google's index with a noindex directive in its meta tags. When Google's robots encounter a noindex directive in a page's meta tag, Google drops that page from its search results entirely. If the page did not contain the directive before and it was added later, Google removes the page from its search results on the first crawl after noticing the change, and the page will no longer be shown to searchers.
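A minimal example of such a directive, placed inside the head section of the page you want kept out of the index:
<!-- ask search-engine robots not to index this page -->
<meta name="robots" content="noindex">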
More about the robots.txt file
You have already seen some of the directives used inside the robots.txt file. As mentioned before, this file is meant to keep robots out of certain pages of your website. The robots of most search engines check your permission before visiting your pages; by default that permission extends to all robots, and if you want to bar them from certain pages you can tell them so with the directives in the robots.txt file.
As the name suggests, this is a plain-text file with a .txt extension, and it must sit at the top level of the domain. That is, if your website's address is www.example.com, the file must be located at www.example.com/robots.txt; if it is placed at www.example.com/blog/robots.txt, robots will ignore it.
As mentioned earlier, the robots.txt file consists of two lines: a User-agent line, which specifies which robots the directives apply to, and a Disallow line, which contains the URL path you want to keep those robots away from.
In the User-agent line you can address all search-engine robots with User-agent: * or only Google's robots with User-agent: Googlebot, and then in the Disallow line use Disallow: / to keep them from crawling your entire website. In other words, a robots.txt file that blocks all search engines from your whole site looks like this:
User-agent: *
Disallow: /
If you want to keep robots away from a specific URL, simply write the Disallow directive like this, for example:
Disallow: /private_file.html
If you want to remove a specific image from Google's search results, set up your robots.txt like this:
User-agent: Googlebot-Image
Disallow: /images/dogs.jpg
As you probably know, the / in the Disallow line means everything, so if you want to remove all of your website's images from Google's results, the robots.txt directives would look like this:
User-agent: Googlebot-Image
Disallow: /
To test that your robots.txt file is correct, go to the Crawl section of Google Webmaster Tools, click Blocked URLs, then open the Test robots.txt tab and paste the contents of your robots.txt file there.
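For example, you could paste rules such as these (the /admin/ path is only a placeholder) and then enter a URL like http://www.example.com/admin/login.html to check whether the tester reports it as blocked for Googlebot:
User-agent: *
Disallow: /admin/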