Post by account_disabled on Mar 10, 2024 7:42:20 GMT
From the picture we can piece together from the external behavior of the search results, public comments from Googlers, tests and experiments, and first principles, here's how I think JavaScript indexing is working at Google at the moment. I think there is a separate queue for JS-enabled rendering, because the computational cost of trying to run JavaScript over the entire web is unnecessary given the lack of a need for it on many, many pages. In detail, I think:

- Googlebot crawls and caches HTML and core resources regularly
- Heuristics (and probably machine learning) are used to prioritize JavaScript rendering for each page
- Some pages are indexed with no JS execution
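The two-pass architecture described above can be sketched roughly as follows. This is purely my own illustration of the idea, not Google internals: every name here (render_priority, RENDER_THRESHOLD, the scoring heuristic itself) is a hypothetical stand-in for whatever heuristics/ML Google actually uses.

```python
# Hypothetical sketch of a two-queue indexing pipeline: index raw HTML
# immediately, and only enqueue JS-heavy pages for separate rendering.
import heapq

RENDER_THRESHOLD = 0.5  # assumed cutoff: below this, skip JS rendering


def render_priority(page):
    """Crude stand-in for the heuristics/ML scoring: pages whose raw
    HTML already carries plenty of text probably don't need rendering;
    pages that are mostly script probably do."""
    script_bytes = sum(len(s) for s in page["scripts"])
    text_bytes = len(page["html_text"])
    total = script_bytes + text_bytes
    return script_bytes / total if total else 0.0


def crawl_and_index(pages, index, render_queue):
    for page in pages:
        # First pass: index the unrendered HTML immediately.
        index[page["url"]] = set(page["html_text"].split())
        # Second pass is optional and asynchronous: only queue pages
        # that look like they depend on JavaScript for their content.
        score = render_priority(page)
        if score >= RENDER_THRESHOLD:
            heapq.heappush(render_queue, (-score, page["url"]))


index, render_queue = {}, []
pages = [
    {"url": "a", "html_text": "plain static article text", "scripts": []},
    {"url": "b", "html_text": "loading", "scripts": ["app.js " * 50]},
]
crawl_and_index(pages, index, render_queue)
# Both pages get basic indexation at once; only the JS-heavy page "b"
# waits in the render queue for a second, rendered pass.
```

The point of the sketch is the decoupling: basic indexation never blocks on rendering, and the render queue can be drained at whatever rate the available computing resources allow.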
There are many pages that can probably be easily identified as not needing rendering, and others which are such a low priority that it isn't worth the computing resources. Some pages get immediate rendering, or possibly immediate basic/regular indexing along with high-priority rendering. This would enable the immediate indexation of pages in news results or other QDF results, but also allow pages that rely heavily on JS to get updated indexation when the rendering completes. Many pages are rendered async, in a separate process/queue from both crawling and regular indexing, thereby adding the page to the index for new words and phrases found only in the rendered version, in addition to the words and phrases found in the unrendered version indexed initially.
The JS rendering, in addition to adding pages to the index:

- May make modifications to the link graph
- May add new URLs to the discovery/crawling queue for Googlebot

The idea of JavaScript rendering as a distinct and separate part of the indexing pipeline is backed up by this quote from KMag, who I mentioned previously for his contributions to this HN thread (direct link, emphasis mine): "I was working on the lightweight high-performance JavaScript interpretation system that sandboxed pretty much just a JS engine and a DOM implementation that we could run on every web page."
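A render worker with those side effects might look something like the sketch below. Again, this is my own hypothetical illustration: the render() stub stands in for the sandboxed JS engine + DOM implementation KMag describes, and all function and variable names are invented for the example.

```python
# Hypothetical render-worker sketch: after rendering, merge newly
# visible words into the index, update the link graph, and feed newly
# discovered URLs back into the crawl queue.
from collections import deque


def render(url):
    """Stub for the sandboxed JS engine + DOM implementation; returns
    the post-render text and the links found in the rendered DOM."""
    return "content revealed by javascript", ["https://example.com/new-page"]


def process_rendered_page(url, index, link_graph, discovery_queue):
    rendered_text, rendered_links = render(url)
    # Add words and phrases found only in the rendered version,
    # on top of what the unrendered HTML already contributed.
    index.setdefault(url, set()).update(rendered_text.split())
    for link in rendered_links:
        # May make modifications to the link graph...
        link_graph.setdefault(url, set()).add(link)
        # ...and may add new URLs to the discovery queue for Googlebot.
        if link not in index:
            discovery_queue.append(link)


# The page was initially indexed from its unrendered HTML ("loading").
index = {"https://example.com/app": {"loading"}}
link_graph, discovery_queue = {}, deque()
process_rendered_page("https://example.com/app", index, link_graph, discovery_queue)
```

After the worker runs, the page's index entry contains both the pre-render and post-render vocabulary, and the URL discovered only in the rendered DOM is waiting for Googlebot, which matches the two side effects listed above.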