Tag: crawler

Crawl Job Execution

imbWEB
Excerpt from theoretical paper on imbWEM and Crawl Job execution

The Crawl Job consists of the web domain list and the configuration parameters. The result of the job execution, the Result Set, is fed into the index database for later use by the Company Semantic Profile (CSP) construction and enrichment procedures (Figure 1). Resource Employment features (Table 2) relate to two different levels of the architecture (Figure 4): the Job Level Context (JLC) and the Domain Level Crawl (DLC).
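The relationship between the job definition and its two execution levels can be sketched as a minimal data model. All names below are illustrative assumptions, not the imbWEM API; the DLC step is stubbed out:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CrawlJobConfig:
    # Hypothetical configuration parameters (names are assumed).
    page_load_limit: int = 100
    crawl_depth_limit: int = 3

@dataclass
class CrawlJob:
    # The Crawl Job: a web domain list plus configuration parameters.
    domains: List[str]
    config: CrawlJobConfig = field(default_factory=CrawlJobConfig)

@dataclass
class ResultSet:
    # Result of the job execution, later fed into the index database
    # for CSP construction and enrichment.
    pages_by_domain: Dict[str, List[str]] = field(default_factory=dict)

def execute_job(job: CrawlJob) -> ResultSet:
    # Job Level Context (JLC): iterate over the domain list and run
    # one Domain Level Crawl (DLC) per domain (stubbed here).
    result = ResultSet()
    for domain in job.domains:
        result.pages_by_domain[domain] = []  # DLC output would go here
    return result
```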

The Diversity Module

The Diversity Module inherits from the Frontier Ranking Module base class and sorts targets according to the estimated semantic difference (the complementary value of semantic similarity) between the Target and the already crawled content. The crawled content is represented by two collections: the Target Tokens Repository (TTR), which is a domain-level term frequency table aggregating TSTs…
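The ranking idea above can be sketched as follows, assuming cosine similarity over term frequencies as the semantic similarity measure (the paper does not fix the measure here, so this is one plausible instantiation):

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product over shared terms, normalized by vector magnitudes.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_difference(target_tokens, ttr: Counter) -> float:
    # The complementary value of semantic similarity between a
    # candidate Target and the TTR (the crawled content so far).
    return 1.0 - cosine_similarity(Counter(target_tokens), ttr)

def rank_frontier(targets, ttr: Counter):
    # Sort candidates so the most semantically different Target,
    # i.e. the one promising the most diverse content, comes first.
    return sorted(targets,
                  key=lambda t: semantic_difference(t["tokens"], ttr),
                  reverse=True)
```

A target sharing no terms with the TTR gets the maximum difference of 1.0 and is scheduled first.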

The Template Module

At the heart of this module is a procedure of page decomposition and detection of the semantic role of each extracted content block. This is the only module in the stack that evaluates links strictly using information that is immutable across the DLC process iterations. Furthermore, the alternative ranking implementation assumes that a higher position in the navigation menu hierarchy…
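The two steps above — assigning semantic roles to extracted blocks and ranking links by their menu position — might look like this in outline. The role mapping and the depth-based ranking are illustrative assumptions, not the module's actual rules:

```python
# Hypothetical mapping from an extracted block's HTML tag to a
# coarse semantic role (illustrative heuristic only).
ROLE_BY_TAG = {"nav": "navigation", "header": "header",
               "footer": "footer", "main": "content"}

def block_role(tag_name: str) -> str:
    # Page decomposition step: detect the semantic role of a block.
    return ROLE_BY_TAG.get(tag_name, "unknown")

def rank_links_by_menu_position(link_menu_depth):
    # Alternative ranking: a link placed higher in the navigation
    # menu hierarchy (smaller depth, 0 = top level) is ranked first.
    # Menu position does not change across DLC iterations, so it is
    # an immutable ranking signal.
    return sorted(link_menu_depth, key=lambda url: link_menu_depth[url])
```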

Web Crawlers – Literature review

The greatest algorithmic challenges of web crawling are relevance estimation for the loaded page and for the discovered links; both usually play a crucial role in frontier scheduling. The earliest relevant works on page importance ranking are: • PageRank [1], which defines web page relevance as a function of the link-reference page relationship, where the sum of…
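For reference, the iterative computation behind PageRank [1] can be sketched as below. This is a minimal power-iteration version with the usual damping factor and uniform handling of dangling nodes; the original formulation differs in normalization details:

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to its list of outgoing links."""
    nodes = set(links) | {t for outs in links.values() for t in outs}
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        # Each page keeps a baseline share, then receives an equal
        # fraction of the rank of every page linking to it.
        new = {u: (1 - damping) / n for u in nodes}
        for u in nodes:
            outs = links.get(u, [])
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:
                # Dangling page: spread its rank uniformly.
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank
```

Ranks form a probability distribution over pages, so they sum to 1; pages with more (and better-ranked) in-links accumulate higher scores.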