Tacit knowledge mining algorithm based on linguistic truth-valued concept lattice
This paper continues our research on the linguistic truth-valued concept lattice. To provide a mathematical tool for mining tacit knowledge, we establish a concrete model of the 6-ary linguistic truth-valued concept lattice and introduce a mining algorithm based on structure consistency. Specifically, we use attributes to depict knowledge, propose the 6-ary linguistic truth-valued attribute extended context and the congener context to characterize tacit knowledge, and establish necessary and sufficient conditions for forming tacit knowledge. We give algorithms for generating the linguistic truth-valued congener context and for constructing the linguistic truth-valued concept lattice.
💡 Research Summary
This paper extends formal concept analysis (FCA) by incorporating a six‑valued linguistic truth‑value (LTV) framework to mine tacit knowledge—knowledge that is implicit, experiential, and difficult to articulate. The authors begin by defining a six‑ary LTV set L = {0, a, b, c, d, 1}, where the elements represent a graded linguistic scale ranging from “very low” to “very high.” A partial order (0 ≤ a ≤ b ≤ c ≤ d ≤ 1) and lattice operations (join ∨ and meet ∧) are introduced, turning L into a bounded lattice suitable for FCA.
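The six-element chain and its lattice operations can be sketched in a few lines of Python; the names `RANK`, `leq`, `join`, and `meet` are illustrative choices, not identifiers from the paper:

```python
# The six-valued linguistic truth-value chain L = {0, a, b, c, d, 1}
# with 0 <= a <= b <= c <= d <= 1. On a chain, join is max and meet is min.

L = ["0", "a", "b", "c", "d", "1"]
RANK = {v: i for i, v in enumerate(L)}  # position of each value in the chain

def leq(x, y):
    """Partial order: x <= y iff x precedes y in the chain."""
    return RANK[x] <= RANK[y]

def join(x, y):
    """Least upper bound (v): the larger element of the chain."""
    return x if RANK[x] >= RANK[y] else y

def meet(x, y):
    """Greatest lower bound (^): the smaller element of the chain."""
    return x if RANK[x] <= RANK[y] else y

print(join("a", "c"))  # -> c
print(meet("b", "d"))  # -> b
```

Because L is totally ordered, the bounded-lattice axioms hold automatically, with 0 as the bottom and 1 as the top element.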
The core data structure is the attribute‑extended context (AEC), a triple (G, M, R) where G is a set of objects, M a set of attributes, and R : G × M → L assigns an LTV to each object‑attribute pair. Unlike classical binary contexts, the AEC captures nuanced expert assessments (e.g., “moderately reliable”) directly in the data matrix.
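A toy AEC can be written down directly as a triple of objects, attributes, and an LTV-valued matrix. The object and attribute names below are invented for illustration; the paper's case study uses different data:

```python
# Illustrative attribute-extended context (G, M, R): each object-attribute
# pair is assigned one value from L = {0, a, b, c, d, 1}.

G = ["g1", "g2", "g3"]                 # objects
M = ["durability", "reliability"]      # attributes
R = {                                  # R : G x M -> L
    ("g1", "durability"): "d",
    ("g1", "reliability"): "c",
    ("g2", "durability"): "b",
    ("g2", "reliability"): "1",
    ("g3", "durability"): "a",
    ("g3", "reliability"): "b",
}

# One object's row of graded assessments:
row = {m: R[("g2", m)] for m in M}
print(row)  # -> {'durability': 'b', 'reliability': '1'}
```

A classical binary context is the special case where R only takes the values 0 and 1.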
To expose tacit knowledge, the authors define a congener context (CC). A CC shares the same G and M as the AEC but may modify some LTV entries via transformation functions f : L → L. The crucial requirement is structure consistency: the concept lattices derived from the AEC and the CC must be isomorphic, i.e., every concept (extent, intent) in one lattice corresponds exactly to a concept in the other. This ensures that any change introduced by the transformation does not break the underlying hierarchical relationships among concepts.
Two theorems formalize the necessary and sufficient conditions for a CC to represent tacit knowledge:
- Transformation Necessity – For each attribute m, the altered value r′(g,m) in the CC must be obtained by applying a monotone lattice homomorphism f to the original value r(g,m). The homomorphism must preserve joins and meets (f(x ∨ y) = f(x) ∨ f(y), f(x ∧ y) = f(x) ∧ f(y)).
- Congener Sufficiency – If a CC constructed with such homomorphisms yields a concept lattice identical (in terms of extents and intents) to that of the original AEC, then the CC faithfully encodes the tacit knowledge hidden in the original data.
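The homomorphism condition from the first theorem is easy to check mechanically. Below is a minimal sketch that tests whether a candidate map f : L → L preserves joins and meets on the six-element chain; the example maps `identity`, `collapse`, and `swap` are invented for illustration:

```python
# Check the join/meet-preservation condition on the chain
# L = {0, a, b, c, d, 1}. On a finite chain this is equivalent to
# monotonicity of f.

L = ["0", "a", "b", "c", "d", "1"]
RANK = {v: i for i, v in enumerate(L)}
join = lambda x, y: max(x, y, key=RANK.get)
meet = lambda x, y: min(x, y, key=RANK.get)

def is_lattice_homomorphism(f):
    """True iff f(x v y) = f(x) v f(y) and f(x ^ y) = f(x) ^ f(y) for all x, y."""
    return all(
        f[join(x, y)] == join(f[x], f[y]) and f[meet(x, y)] == meet(f[x], f[y])
        for x in L for y in L
    )

identity = {v: v for v in L}
collapse = {"0": "0", "a": "0", "b": "b", "c": "b", "d": "d", "1": "1"}  # monotone
swap = {"0": "1", "a": "a", "b": "b", "c": "c", "d": "d", "1": "0"}      # not monotone

print(is_lattice_homomorphism(identity))  # -> True
print(is_lattice_homomorphism(collapse))  # -> True
print(is_lattice_homomorphism(swap))      # -> False
```

Maps like `swap`, which reverse part of the order, fail the check and therefore cannot produce a valid congener context under the necessity theorem.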
Based on these theoretical foundations, the paper proposes a two‑phase algorithm:
Phase 1 – Generating Congener Contexts
For each attribute, the algorithm enumerates all admissible transformation functions (six possibilities, including the identity). It creates candidate CCs by applying each function to the corresponding column of the AEC. After each candidate is built, the algorithm recomputes its concept lattice (using a modified Ganter‑Wille procedure that works with six‑valued joins and meets) and checks for structural consistency with the original lattice. Consistent candidates are retained as valid CCs. The computational cost of this phase is O(|M|·k), where k = 6, because each attribute is examined independently.
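The per-attribute filtering in Phase 1 can be sketched as follows. Two simplifications are assumed for illustration: the six candidate transformations are taken to be clamped upward shifts of the chain (including the identity), and the full lattice-isomorphism test is replaced by a stand-in check that the transformed column preserves the order and distinctness pattern of the original; the paper's actual admissible functions and consistency test are richer:

```python
# Phase-1 sketch: apply each candidate transformation to one attribute
# column and keep those passing a simplified structure-consistency check.

L = ["0", "a", "b", "c", "d", "1"]
RANK = {v: i for i, v in enumerate(L)}

def shift(delta):
    """Monotone map moving each value delta steps up the chain (clamped at 1)."""
    return {v: L[min(len(L) - 1, RANK[v] + delta)] for v in L}

candidates = [shift(d) for d in range(6)]  # six maps, including the identity

def preserves_distinctions(col, f):
    """Stand-in check: strict order among column entries is unchanged."""
    new = [f[v] for v in col]
    return all(
        (RANK[x] < RANK[y]) == (RANK[fx] < RANK[fy])
        for (x, fx) in zip(col, new) for (y, fy) in zip(col, new)
    )

column = ["a", "c", "d"]  # one attribute's LTV entries across three objects
valid = [f for f in candidates if preserves_distinctions(column, f)]
print(len(valid))  # -> 2 (the identity and the one-step shift)
```

Larger shifts clamp distinct high values onto 1, collapsing concepts that were separate in the original lattice, so they are rejected.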
Phase 2 – Constructing the LTV Concept Lattice
With a validated CC, the algorithm constructs the full six‑valued concept lattice. The classic Ganter‑Wille algorithm is adapted: closure operators now compute the meet of LTV values across objects (for intents) and the join across attributes (for extents). The algorithm iteratively generates all closed (extent, intent) pairs, preserving the partial order induced by the LTV lattice. In the worst case, lattice construction remains exponential (O(2^|G|)), as in standard FCA, but the additional constraints imposed by the LTV scale and the consistency checks dramatically prune the search space in practice.
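The closure-based enumeration can be sketched with a simplified "one-sided" variant in which extents are crisp object sets and intents take attribute-wise meets of LTVs; this is an assumption for illustration, a reduction of the paper's full six-valued construction, and the tiny context below is invented:

```python
# Phase-2 sketch: enumerate closed (extent, intent) pairs of a small
# LTV context by brute force over object subsets.

from itertools import combinations

L = ["0", "a", "b", "c", "d", "1"]
RANK = {v: i for i, v in enumerate(L)}
meet = lambda xs: min(xs, key=RANK.get, default="1")  # meet of no values is the top

G = ["g1", "g2", "g3"]
M = ["m1", "m2"]
R = {("g1", "m1"): "d", ("g1", "m2"): "c",
     ("g2", "m1"): "b", ("g2", "m2"): "1",
     ("g3", "m1"): "a", ("g3", "m2"): "b"}

def intent(A):
    """For each attribute, the meet of its LTVs over the objects in A."""
    return {m: meet([R[g, m] for g in A]) for m in M}

def extent(B):
    """Objects whose LTVs dominate the intent on every attribute."""
    return frozenset(g for g in G if all(RANK[R[g, m]] >= RANK[B[m]] for m in M))

concepts = set()
for r in range(len(G) + 1):
    for A in map(frozenset, combinations(G, r)):
        A_closed = extent(intent(A))  # closure of the object set A
        concepts.add((A_closed, tuple(sorted(intent(A_closed).items()))))

print(len(concepts))  # -> 5 closed pairs for this context
```

Even in this reduced setting the exponential subset loop is visible; the paper's consistency constraints serve precisely to prune this search in practice.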
The authors illustrate the approach with a small case study involving eight products evaluated on five quality attributes by domain experts. Each evaluation is recorded as a six‑valued LTV. Applying the algorithm uncovers concepts that correspond to implicit expert judgments—e.g., a hidden belief that “high durability implies high overall trust”—which are not visible in a binary FCA analysis.
While the paper successfully demonstrates the feasibility of mining tacit knowledge using a six‑valued linguistic lattice, several limitations are acknowledged. The transformation enumeration currently relies on exhaustive search, which may become costly for contexts with many attributes. No empirical evaluation on large‑scale real‑world datasets is provided, leaving open questions about scalability and runtime performance. Moreover, the choice of six linguistic levels is somewhat arbitrary; exploring other multi‑valued logics (e.g., fuzzy, intuitionistic) could broaden applicability.
In conclusion, the work contributes a novel mathematical framework that blends multi‑valued logic with FCA, introduces the notion of congener contexts to capture hidden knowledge, and supplies concrete algorithms for generating consistent transformed contexts and constructing the resulting lattices. It opens avenues for further research on efficient context transformation, integration with machine learning pipelines, and application to domains where expert tacit knowledge is critical, such as medical diagnosis, risk assessment, and knowledge‑intensive engineering design.