Using the DOM Tree for Content Extraction


The main information of a webpage is usually mixed with menus, advertisements, panels, and other loosely related material, and it is often difficult to isolate this information automatically. This is precisely the objective of content extraction, a research area of wide interest due to its many applications. Content extraction is useful not only for the final human user; it is also frequently used as a preprocessing stage in systems that need to extract the main content of a web document in order to avoid treating and processing useless information. Another interesting application where content extraction is particularly useful is displaying webpages on small screens such as mobile phones or PDAs. In this work we present a new content extraction technique that uses the DOM tree of the webpage to analyze the hierarchical relations among its elements. Thanks to this information, the technique achieves considerable recall and precision. Using the DOM structure for content extraction gives us the benefits of approaches based on the syntax of the webpage (characters, words, and tags), but it also provides very precise information about the related components in a block, thus producing very cohesive blocks.


💡 Research Summary

The paper addresses the problem of automatically isolating the main textual content of a web page from surrounding noise such as menus, advertisements, and other peripheral elements. While many existing approaches rely on statistical measures applied to raw characters or lines (e.g., tag‑ratio, content‑code vectors) or assume the existence of templates and prior knowledge about page structure, these methods often fail on modern, dynamically generated sites that heavily use generic container elements such as `<div>` and lack a regular hierarchical layout.

To overcome these limitations, the authors propose a technique that exploits the Document Object Model (DOM) tree, which is automatically generated by browsers for every loaded HTML document. The core contribution is the definition of the “chars‑nodes ratio” (CNR), a metric that captures the relationship between the amount of textual characters contained in a subtree and the total number of nodes (tags and text) in that subtree. Formally, for a node n, CNR(n) = (total characters in all descendant text nodes) / (total number of descendant nodes, including n itself). Because text nodes are always leaves, a node that only aggregates other tags can still obtain a high CNR if its children collectively contain a lot of text, allowing the method to recognize blocks that include non‑textual structural elements.
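The CNR definition above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' implementation: the `Node` class and its field names are assumptions, standing in for a real DOM.

```python
# Toy sketch of the chars-nodes ratio (CNR). The Node class is a stand-in
# for a real DOM; names and structure are illustrative assumptions.

class Node:
    def __init__(self, tag, children=None, text=""):
        self.tag = tag                  # element name, or "#text" for text nodes
        self.children = children or []
        self.text = text                # non-empty only for text nodes (always leaves)

def cnr(node):
    """CNR(n) = chars in descendant text nodes / nodes in subtree (incl. n)."""
    def walk(n):
        chars = len(n.text)             # text nodes contribute their characters
        count = 1                       # count the node itself
        for child in n.children:
            c, k = walk(child)
            chars += c
            count += k
        return chars, count
    chars, count = walk(node)
    return chars / count

# A <div> that only aggregates other tags still scores a high CNR
# when its children collectively contain a lot of text.
article = Node("div", [
    Node("p", [Node("#text", text="Main article body with lots of words.")]),
    Node("p", [Node("#text", text="More relevant textual content here.")]),
])
menu = Node("ul", [
    Node("li", [Node("#text", text="Home")]),
    Node("li", [Node("#text", text="About")]),
])

print(cnr(article) > cnr(menu))  # → True: the content block has the higher ratio
```

Both subtrees here happen to have five nodes, but the article block carries far more text, so its ratio dominates; this is the signal the extraction pipeline thresholds on.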

The extraction pipeline consists of three linear‑time steps.

  1. CNR Computation – A single recursive traversal (Algorithm 1) annotates each DOM node with three attributes: weight (the number of descendant nodes), textLength (the total character count of descendant text nodes, ignoring whitespace), and CNR. Nodes that are known to be irrelevant for content (e.g., `<script>`,