AI Centered on Scene Fitting and Dynamic Cognitive Network
This paper briefly analyzes the advantages and problems of mainstream AI technology and argues that, to achieve stronger artificial intelligence, the end-to-end function-computation paradigm must be replaced by a technology system centered on scene fitting. It then presents a concrete scheme, the Dynamic Cognitive Network model (DC Net). The discussion covers the following: knowledge and data across comprehensive domains are uniformly represented by a richly connected, heterogeneous Dynamic Cognitive Network built from conceptualized elements; a two-dimensional, multi-layer network structure is designed to give a unified implementation of core AI processing such as combination and generalization; the implementation differences of computer systems across scene types (open domain vs. closed domain, significant probability vs. non-significant probability) are analyzed, the open-domain, significant-probability scene is identified as the key challenge for AI, and a cognitive probability model combining bidirectional conditional probability, probability passing and superposition, and probability collapse is designed; an omnidirectional network matching-growth algorithm system, driven jointly by target and probability, is designed to integrate parsing, generation, reasoning, querying, learning, and related processing; and a principle of cognitive network optimization is proposed, with the basic framework of the Cognitive Network Learning (CNL) algorithm designed so that structure learning is the primary method and parameter learning the auxiliary. Finally, the paper analyzes the logical similarity between the implementation of the DC Net model and human intelligence.
💡 Research Summary
The paper begins by critiquing the prevailing “data‑versus‑model” paradigm that dominates contemporary AI research, especially large‑scale deep learning and reinforcement‑learning systems. While these approaches achieve impressive performance on narrowly defined benchmarks, they struggle to generalize across open‑domain or novel situations because they rely on a monolithic end‑to‑end function that is fixed after training. To overcome this limitation, the authors argue that future AI must abandon the static end‑to‑end computation model and adopt a scene‑fitting paradigm: the system should first identify the characteristics of the current problem scene and then dynamically assemble the most appropriate computational pathway.
To operationalize scene fitting, the authors introduce the Dynamic Cognitive Network (DC Net). DC Net is built around three core ideas: (1) a heterogeneous, time‑varying graph whose nodes are “conceptualized elements” and whose edges encode logical, semantic, and probabilistic relations; (2) a two‑dimensional, multi‑layer architecture in which one dimension captures the static concept‑relation graph while the other models conditional‑probability flows, allowing both bottom‑up sensory processing and top‑down reasoning to interact across layers; and (3) a cognitive probability engine that distinguishes between “significant‑probability” and “non‑significant‑probability” scenes, integrating bidirectional conditional probabilities, probability passing, superposition, and collapse. This engine enables the network to select, on the fly, the probability distribution that best matches the current context.
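To make these ideas concrete, here is a minimal Python sketch of a heterogeneous cognitive network with bidirectional conditional probabilities, probability passing with superposition, and probability collapse. All class and method names, the capped-sum superposition rule, and the collapse threshold are illustrative assumptions, not definitions from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    # Bidirectional conditional probabilities: P(head | tail) and P(tail | head).
    relation: str
    p_forward: float
    p_backward: float

@dataclass
class CognitiveNetwork:
    # Nodes are conceptualized elements; edges carry typed, probabilistic relations.
    nodes: set = field(default_factory=set)
    edges: dict = field(default_factory=dict)  # (tail, head) -> Edge

    def connect(self, tail, head, relation, p_fwd, p_bwd):
        self.nodes.update({tail, head})
        self.edges[(tail, head)] = Edge(relation, p_fwd, p_bwd)

    def propagate(self, source, activation=1.0):
        """Probability passing: push activation along forward edges,
        superposing contributions that arrive at the same node."""
        belief = {source: activation}
        frontier = [source]
        while frontier:
            node = frontier.pop()
            for (tail, head), edge in self.edges.items():
                if tail == node:
                    contribution = belief[node] * edge.p_forward
                    if head not in belief:
                        frontier.append(head)
                    # Superposition: accumulate contributions, capped at 1
                    # (a simplifying choice for this sketch).
                    belief[head] = min(1.0, belief.get(head, 0.0) + contribution)
        return belief

    def collapse(self, belief, threshold=0.5):
        """Probability collapse: commit to the nodes whose superposed belief
        exceeds a threshold, discarding the rest."""
        return {n for n, p in belief.items() if p >= threshold}
```

For example, a network connecting "bark" to "dog" and "dog" to "animal" lets activation injected at "bark" flow through both hops, after which collapse retains only the sufficiently supported concepts.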
The paper further proposes an Omnidirectional Network Matching‑Growth algorithm driven simultaneously by a target (goal) and by the probability model. When an input arrives, the algorithm searches for the sub‑graph that best matches the input and, if necessary, grows new connections to accommodate previously unseen structures. This unified matching‑growth process subsumes parsing, generation, reasoning, querying, and learning, eliminating the need for separate pipelines.
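The matching-growth idea can be sketched in a few lines: elements of an input that already exist in the network are matched, and unseen elements are grown as new nodes connected to the current target. This is a deliberately simplified Python illustration; the graph representation, the `observed_with`-style linking to a single target, and the default probability are assumptions for exposition, not the paper's algorithm.

```python
def match_or_grow(graph, elements, target, default_p=0.5):
    """graph: dict mapping node -> {neighbor: probability}.
    Elements found in the graph are 'matched'; unseen elements are 'grown'
    as new nodes linked to the target with a default probability."""
    matched, grown = [], []
    for el in elements:
        if el in graph:
            matched.append(el)
        else:
            # Growth step: admit the unseen element and connect it to the
            # current goal so later queries can reach it.
            graph[el] = {target: default_p}
            graph.setdefault(target, {})
            grown.append(el)
    return matched, grown
```

In the unified view the paper describes, parsing, generation, reasoning, and querying would all reduce to variants of this match-then-grow loop, differing only in the target that drives the search.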
Learning in DC Net is organized under the Cognitive Network Learning (CNL) framework. CNL treats structure learning as the primary objective and parameter learning as a supportive task. Structure learning employs meta‑reinforcement, graph‑neural‑network‑based transformations, and concept‑insertion/deletion operations inspired by human conceptual expansion. Parameter learning proceeds with conventional back‑propagation or Bayesian updates, but parameters are automatically re‑initialized whenever the graph structure changes, ensuring consistency between architecture and weights.
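The structure-first discipline can be illustrated with a minimal Python sketch: structural edits take priority, and an edge's parameter is re-initialized whenever the structure touching it changes. The class name, the fixed initialization value, and the simple moving-average update are hypothetical choices for this sketch, not the CNL algorithm itself.

```python
class CNLSketch:
    """Structure-first learning sketch: structure learning is primary,
    parameter learning auxiliary, and parameters are reset on any
    structural change so weights stay consistent with the topology."""

    def __init__(self, init_p=0.5):
        self.edges = {}          # (tail, head) -> probability parameter
        self.init_p = init_p

    def learn_structure(self, tail, head):
        # Primary step: create (or recreate) the connection,
        # re-initializing its parameter.
        self.edges[(tail, head)] = self.init_p

    def learn_parameter(self, tail, head, observed, lr=0.1):
        # Auxiliary step: nudge an existing parameter toward evidence.
        if (tail, head) not in self.edges:
            raise KeyError("parameter learning requires an existing edge")
        p = self.edges[(tail, head)]
        self.edges[(tail, head)] = p + lr * (observed - p)
```

The key invariant mirrored here is the one the summary describes: parameter updates never outlive the structure they were fitted to, because `learn_structure` wipes the affected parameter back to its initial value.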
A substantial portion of the discussion is devoted to drawing parallels between DC Net and human cognition. Humans constantly reconfigure their mental network when encountering new situations, rely on probabilistic judgments in ambiguous contexts, and use goal‑directed search to retrieve or generate knowledge. DC Net mirrors these capabilities through dynamic graph restructuring, bidirectional probability inference, and goal‑driven matching‑growth. The authors claim that this alignment makes DC Net a more faithful computational analogue of human intelligence than static deep‑learning models.
The paper also categorizes computational scenes into four types: open domain vs. closed domain and significant‑probability vs. non‑significant‑probability. It emphasizes that open‑domain, significant‑probability scenes constitute the core challenge for artificial general intelligence, because they require both flexible knowledge integration and reliable probabilistic reasoning. The proposed cognitive probability model and matching‑growth algorithm are specifically designed to excel in these settings.
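The 2×2 taxonomy can be made explicit with a tiny helper (the labels are paraphrases of the paper's terminology, and the function itself is purely illustrative):

```python
def classify_scene(open_domain: bool, significant_probability: bool) -> str:
    """Label one of the four scene types; the open-domain,
    significant-probability combination is the hard case the paper targets."""
    domain = "open" if open_domain else "closed"
    prob = "significant" if significant_probability else "non-significant"
    return f"{domain}-domain / {prob}-probability"
```

Under this labeling, a closed-domain, non-significant-probability scene (e.g. a fixed rule-based game) is the easy quadrant, while the open-domain, significant-probability quadrant demands the flexible integration and probabilistic reasoning described above.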
Despite its ambitious vision, the manuscript leaves several practical issues unaddressed. The computational complexity of dynamic graph growth, memory overhead for maintaining multi‑dimensional heterogeneous edges, and numerical stability of the probability collapse operation are not quantified. Moreover, the paper provides only high‑level algorithmic sketches and lacks empirical validation on benchmark tasks such as open‑domain question answering, few‑shot learning, or interactive dialogue. Integration with existing deep‑learning frameworks and hardware acceleration strategies are also omitted, raising questions about scalability.
In conclusion, the authors present a compelling argument that AI must transition from static end‑to‑end function approximation to a scene‑centric, dynamically reconfigurable cognitive network. DC Net offers a concrete architectural blueprint—heterogeneous dynamic graphs, dual‑dimensional layered design, a sophisticated cognitive probability engine, an omnidirectional matching‑growth process, and a structure‑first learning paradigm—that aims to emulate key aspects of human intelligence. While the theoretical contributions are noteworthy, future work must focus on concrete implementations, performance benchmarks, and engineering optimizations to demonstrate that DC Net can indeed deliver the promised generalization and flexibility in real‑world AI applications.