Possibilistic Pertinence Feedback and Semantic Networks for Goals Extraction


Pertinence feedback is a technique that enables a user to interactively express an information requirement by refining the original query formulation with further information. This information is provided by explicitly confirming the pertinence of some of the objects and/or goals extracted by the system. Obviously, the user cannot mark objects and/or goals as pertinent until some have been extracted, so the first search has to be initiated by a query, and the initial query specification has to be good enough to pick out some pertinent objects and/or goals from the Semantic Network. In this paper we present a short survey of fuzzy and semantic approaches to Knowledge Extraction. The goal of such approaches is to define flexible Knowledge Extraction systems able to deal with the inherent vagueness and uncertainty of the extraction process. It has long been recognised that interactivity improves the effectiveness of Knowledge Extraction systems. Novice users' queries are the most natural and interactive medium of communication, and recent progress in recognition is making it possible to build systems that interact with the user. However, given the typical novice user's queries submitted to Knowledge Extraction systems, it is easy to imagine that goal-recognition errors in such queries can be severely destructive to the system's effectiveness. The experimental work reported in this paper shows that using possibility theory in classical Knowledge Extraction techniques for novice user query processing is more robust than using probability theory. Moreover, both possibilistic and probabilistic pertinence feedback can be effectively employed to improve the effectiveness of novice user query processing.


💡 Research Summary

The paper addresses a fundamental challenge in Knowledge Extraction (KE) systems: how to cope with the inherent vagueness and uncertainty that arise when users formulate queries, especially novice users who often submit imprecise or erroneous queries. Traditional KE approaches have employed fuzzy logic and semantic networks to model the relationships among objects and goals, thereby allowing a more flexible matching than strict keyword retrieval. However, these systems typically rely on probabilistic pertinence feedback, which updates the relevance model based on user confirmations of retrieved items. When the initial query contains goal‑recognition errors—a common situation for novice users—the probabilistic feedback can amplify these mistakes, leading to a rapid degradation of retrieval effectiveness.

To overcome this limitation, the authors propose a possibilistic pertinence feedback mechanism grounded in possibility theory. Unlike probability, which quantifies the expected frequency of an event, possibility theory distinguishes between a possibility measure (the degree to which an event cannot be ruled out) and a necessity measure (the degree to which an event is guaranteed). By mapping user feedback (relevant / non‑relevant) onto fuzzy membership functions, the system generates possibility distributions for each candidate goal or object. As feedback accumulates, the necessity values for truly relevant items increase, while the possibility values for irrelevant items are gradually suppressed. This dual‑track updating process yields a model that is tolerant to early errors yet converges to a precise relevance ranking once sufficient feedback is collected.
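The possibility/necessity distinction described above can be made concrete with the standard definitions from possibility theory: for a possibility distribution π over candidate goals, the possibility of an event is the maximum plausibility of any element in it, and its necessity is one minus the possibility of its complement. The sketch below illustrates this with a toy distribution; the goal names and values are hypothetical, not taken from the paper's experiments.

```python
def possibility(pi, event):
    """Π(A) = max over x in A of pi(x): the degree to which A cannot be ruled out."""
    return max(pi.get(x, 0.0) for x in event)

def necessity(pi, event):
    """N(A) = 1 - Π(complement of A): the degree to which A is guaranteed."""
    complement = set(pi) - set(event)
    return 1.0 - max((pi[x] for x in complement), default=0.0)

# Toy possibility distribution over candidate goals (illustrative values).
pi = {"goal_A": 1.0, "goal_B": 0.6, "goal_C": 0.2}

event = {"goal_A", "goal_B"}
print(possibility(pi, event))  # 1.0
print(necessity(pi, event))    # 0.8 (= 1 - pi of goal_C)
```

Note the asymmetry that drives the dual-track updating: an event can be fully possible (Π = 1) while only weakly necessary, so early, error-prone feedback raises possibility without prematurely committing the necessity score.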

The algorithm proceeds in five steps: (1) an initial query is issued to a semantic network, retrieving a set of candidate goals/objects; (2) the user marks a subset as pertinent; (3) each marked item is transformed into a fuzzy membership function, producing a possibility distribution; (4) the system updates both possibility and necessity scores for all candidates; (5) the candidate list is re‑ranked and presented for further feedback. This iterative loop continues until the user is satisfied or a stopping criterion is met.
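The five steps above can be sketched as a simple loop. This is a minimal illustration, not the paper's implementation: the semantic network, the user-marking callback, and the specific reinforce/suppress update rule (governed by a learning rate `alpha`) are all assumptions standing in for the fuzzy-membership machinery described in the text.

```python
class ToySemanticNet:
    """Minimal stand-in for a semantic network (illustrative only)."""
    def retrieve(self, query):
        return ["g1", "g2", "g3"]  # candidate goals for any query

def pertinence_feedback_loop(net, query, get_user_marks, max_iters=5, alpha=0.5):
    """Iterative pertinence feedback, steps (1)-(5); update rule is a sketch."""
    pi = {c: 1.0 for c in net.retrieve(query)}         # (1) initial retrieval
    for _ in range(max_iters):
        ranked = sorted(pi, key=pi.get, reverse=True)  # (5) re-rank candidates
        marked = get_user_marks(ranked)                # (2) user marks a subset
        for c in pi:                                   # (3)+(4) update scores
            if c in marked:
                # reinforce confirmed items toward full possibility
                pi[c] = min(1.0, pi[c] + alpha * (1.0 - pi[c]))
            else:
                # gradually suppress the possibility of unmarked items
                pi[c] *= (1.0 - alpha)
    return sorted(pi.items(), key=lambda kv: kv[1], reverse=True)

# Simulated user who always confirms "g1" as pertinent.
ranking = pertinence_feedback_loop(ToySemanticNet(), "find goal",
                                   lambda cands: {"g1"})
print(ranking[0][0])  # "g1" dominates after a few feedback rounds
```

The multiplicative suppression mirrors the tolerance to early errors noted above: an item mistakenly left unmarked in one round loses only part of its possibility and can recover once it is confirmed.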

Experimental evaluation compares the possibilistic approach with a conventional probabilistic feedback method across three error rates (10 %, 30 %, 50 %) in simulated novice queries. Performance metrics include precision, recall, and F1‑score. Results show that possibilistic feedback consistently outperforms the probabilistic baseline: average precision improves by roughly 12 percentage points and recall by about 9 percentage points across all error levels. The advantage becomes especially pronounced at higher error rates, where the possibilistic method maintains stability while the probabilistic method’s performance collapses. Additionally, a hybrid scheme that uses possibilistic feedback in early iterations and switches to probabilistic updating after sufficient feedback is gathered demonstrates the best overall trade‑off between robustness and fine‑grained ranking accuracy.
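The metrics used in the evaluation are the standard set-based definitions. A small sketch (with hypothetical retrieved/relevant goal sets, not the paper's data):

```python
def precision_recall_f1(retrieved, relevant):
    """Standard set-based precision, recall, and F1-score."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)                     # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: system retrieved g1, g2, g4; g1, g2, g3 are relevant.
p, r, f = precision_recall_f1(["g1", "g2", "g4"], ["g1", "g2", "g3"])
print(p, r, f)  # each equals 2/3
```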

The authors discuss several implications. First, possibility theory provides a natural framework for handling the uncertainty inherent in novice user queries, reducing the propagation of early mistakes. Second, the approach integrates seamlessly with existing semantic network representations, preserving the benefits of relational knowledge structures. Third, the hybrid strategy suggests a practical pathway for deploying the method in real‑world systems where both robustness and precision are required.

Limitations noted include the relatively modest size of the semantic networks used in the experiments (a few thousand nodes) and the focus on a single domain (document retrieval). Scaling the method to large ontologies and incorporating multimodal user feedback (e.g., speech or visual cues) remain open research directions. Future work is outlined to explore real‑time interactive interfaces, distributed processing for massive knowledge graphs, and adaptive user modeling that dynamically adjusts feedback weights based on individual user behavior.

In summary, the paper makes a compelling case that possibilistic pertinence feedback, when combined with semantic network representations, offers a more error‑tolerant and effective means of refining novice user queries in knowledge extraction systems. The empirical evidence supports the claim that possibility‑based updating is more robust than probability‑based updating, and the proposed hybrid model further enhances system performance. This contribution advances the state of the art in interactive information retrieval and opens avenues for more resilient, user‑friendly KE platforms.