Complex Question Answering: Unsupervised Learning Approaches and Experiments


Complex questions that require inference and the synthesis of information from multiple documents can be seen as a kind of topic-oriented, informative multi-document summarization, where the goal is to produce a single text as a compressed version of a set of documents with a minimum loss of relevant information. In this paper, we experiment with one empirical method and two unsupervised statistical machine learning techniques, K-means and Expectation Maximization (EM), for computing the relative importance of sentences, and we compare the results of these approaches. Our experiments show that the empirical approach outperforms the other two techniques and that EM performs better than K-means. However, the performance of these approaches depends entirely on the feature set used and the weighting of these features. To measure importance and relevance to the user query, we extract different kinds of features (i.e., lexical, lexical-semantic, cosine similarity, Basic Element, and tree-kernel-based syntactic and shallow-semantic features) for each document sentence. We use a local search technique to learn the weights of the features. To the best of our knowledge, no prior study has used tree kernel functions to encode syntactic/semantic information for more complex tasks such as computing the relatedness between query sentences and document sentences in order to generate query-focused summaries (or answers to complex questions). For each of our summary-generation methods (i.e., empirical, K-means, and EM), we show the effect of syntactic and shallow-semantic features over bag-of-words (BOW) features.


💡 Research Summary

The paper tackles the problem of answering complex, multi‑sentence questions by framing it as a topic‑oriented, informative multi‑document summarization task. In this setting, a set of documents that collectively contain the answer must be compressed into a single, concise text that preserves as much relevant information as possible. To select the most important sentences for the summary, the authors experiment with three unsupervised approaches: an empirical weighting scheme, K‑means clustering, and Expectation‑Maximization (EM) applied to a Gaussian mixture model.

A central contribution of the work is the construction of a rich feature set for each candidate sentence. The authors extract seven families of features: (1) lexical bag‑of‑words (BOW) and TF‑IDF statistics, (2) lexical‑semantic relations derived from WordNet (synonyms, hypernyms, etc.), (3) cosine similarity between the query and the sentence, (4) Basic Elements (BE) that capture predicate‑argument structures, (5) syntactic tree‑kernel features that encode the shape of constituency parses, (6) shallow‑semantic tree‑kernel features that incorporate semantic role labels, and (7) a concatenation of all the above. The inclusion of tree‑kernel functions is novel for this task; they allow the system to measure similarity between query and document sentences based on structural and shallow‑semantic information rather than surface word overlap alone.
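The cosine-similarity feature (family 3) can be illustrated with a minimal sketch. This version uses plain term-frequency vectors over whitespace tokens; the paper's TF-IDF weighting and WordNet-based expansion are omitted, and the function name is illustrative rather than taken from the paper.

```python
import math
from collections import Counter

def cosine_similarity(query: str, sentence: str) -> float:
    """Cosine of the angle between bag-of-words term-frequency vectors."""
    q = Counter(query.lower().split())
    s = Counter(sentence.lower().split())
    dot = sum(q[t] * s[t] for t in q)            # shared-term contribution
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    norm_s = math.sqrt(sum(v * v for v in s.values()))
    if norm_q == 0 or norm_s == 0:               # guard against empty inputs
        return 0.0
    return dot / (norm_q * norm_s)
```

In the full system each sentence would also carry TF-IDF weights and WordNet-expanded terms, so this scalar is only one entry in the sentence's feature vector.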

Because the unsupervised models require a set of feature weights, the authors adopt a local‑search optimization procedure. Starting from random weights, each dimension is perturbed slightly, a summary is generated, and the resulting ROUGE‑2 score (computed against the DUC‑2007 reference summaries) is used as a fitness signal. If the score improves, the new weight is kept; the process repeats for a fixed number of iterations, effectively learning a weight vector that maximizes an external evaluation metric without any labeled sentence importance data.
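The weight-learning loop above can be sketched as a simple hill-climbing local search. In the paper the fitness signal is the ROUGE-2 score of the generated summary; here `fitness` is a hypothetical stand-in callable supplied by the caller, and all other names are illustrative.

```python
import random

def local_search(num_features, fitness, iterations=100, step=0.05, seed=0):
    """Hill-climbing over a feature-weight vector against an external fitness."""
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(num_features)]  # random start
    best = fitness(weights)
    for _ in range(iterations):
        i = rng.randrange(num_features)           # pick one dimension
        candidate = list(weights)
        candidate[i] += rng.uniform(-step, step)  # perturb it slightly
        score = fitness(candidate)
        if score > best:                          # keep only improving moves
            weights, best = candidate, score
    return weights, best
```

Because only improving moves are accepted, the search climbs toward a local optimum of the evaluation metric without ever needing labeled sentence-importance data.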

The three sentence‑selection methods are implemented as follows. The empirical method computes a linear combination of the feature values using the learned weights and selects the top‑N sentences. K‑means clusters the sentence vectors into K groups (K is set empirically) and picks the sentences closest to each cluster centroid, assuming those are the most representative. EM treats sentence importance as a hidden binary variable (important vs. non‑important) and iteratively updates the mixture parameters and posterior probabilities using the Expectation‑Maximization algorithm; the initial parameters are seeded with the K‑means solution.

Experiments are conducted on the DUC‑2007 dataset, which provides a complex question and a set of associated documents for each topic. Summaries are limited to 250 words, and evaluation uses ROUGE‑1, ROUGE‑2, and ROUGE‑SU4. Results show that the empirical approach consistently outperforms the other two, achieving the highest ROUGE scores. EM performs better than K‑means, indicating that probabilistic modeling of importance yields more robust sentence selection than simple distance‑based clustering. Importantly, when syntactic and shallow‑semantic tree‑kernel features are added to the basic BOW representation, ROUGE‑2 improves by roughly 3–4% and ROUGE‑SU4 by about 2–3%, demonstrating that structural information substantially enhances query‑focused summarization for complex questions.
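The core of the ROUGE‑2 metric used above is bigram-overlap recall, which can be sketched as follows. The official ROUGE toolkit additionally applies stemming, optional stopword removal, and jackknifing over multiple reference summaries, all omitted here.

```python
from collections import Counter

def rouge_2_recall(candidate: str, reference: str) -> float:
    """Clipped bigram overlap divided by the reference bigram count."""
    def bigrams(text):
        toks = text.lower().split()
        return Counter(zip(toks, toks[1:]))
    cand, ref = bigrams(candidate), bigrams(reference)
    overlap = sum(min(cand[b], ref[b]) for b in ref)  # clip repeated bigrams
    total = sum(ref.values())
    return overlap / total if total else 0.0
```

Recall-oriented scoring is what makes the 250-word limit meaningful: a summary cannot inflate its score by simply including more text.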

The authors discuss several limitations. Tree‑kernel computation is computationally intensive, which may hinder scalability to very large corpora. Moreover, the local‑search weight learning relies on an external evaluation metric, meaning that truly unsupervised deployment would still need some form of reference or proxy feedback. Future work is suggested in three directions: (1) integrating neural sentence embeddings to reduce feature dimensionality and capture deeper semantics, (2) employing kernel approximation techniques (e.g., random Fourier features) to speed up tree‑kernel calculations, and (3) extending the framework to handle richer question types by incorporating discourse‑level modeling and answer type prediction.

In summary, the paper provides a thorough comparative study of three unsupervised sentence‑selection strategies for complex‑question answering, introduces a comprehensive feature suite that includes novel syntactic and shallow‑semantic tree‑kernel representations, and demonstrates that even without supervised training data, careful feature engineering combined with simple weight‑learning heuristics can yield competitive query‑focused summaries.

