DeepDFA: Injecting Temporal Logic in Deep Learning for Sequential Subsymbolic Applications


Integrating logical knowledge into deep neural network training remains a hard challenge, especially for sequential or temporally extended domains involving subsymbolic observations. To address this problem, we propose DeepDFA, a neurosymbolic framework that integrates high-level temporal logic, expressed as Deterministic Finite Automata (DFA) or Moore Machines, into neural architectures. DeepDFA models temporal rules as continuous, differentiable layers, enabling symbolic knowledge injection into subsymbolic domains. We demonstrate how DeepDFA can be used in two key settings: (i) static image sequence classification, and (ii) policy learning in interactive non-Markovian environments. Across extensive experiments, DeepDFA outperforms traditional deep learning models (e.g., LSTMs, GRUs, Transformers) and recent neurosymbolic systems, achieving state-of-the-art results in temporal knowledge integration. These results highlight the potential of DeepDFA to bridge subsymbolic learning and symbolic reasoning in sequential tasks.


💡 Research Summary

DeepDFA introduces a novel neurosymbolic framework that embeds high‑level temporal logic—specifically deterministic finite automata (DFA) and Moore machines—directly into deep neural networks as continuous, differentiable layers. The authors observe that while deep learning excels at processing raw, subsymbolic data (e.g., images, sensor streams), it typically ignores domain knowledge expressed as logical constraints, especially when those constraints evolve over time. Existing neurosymbolic approaches either embed static logical formulas or rely on fuzzy relaxations that are not fully differentiable, and they are rarely applied to sequential decision‑making problems where the logical rules are temporal in nature.
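To make the "DFA as a differentiable layer" idea concrete, here is a minimal sketch of how a symbolic automaton could be encoded as dense tensors that a network can operate on. The example DFA (two states, two symbols, accepting strings with an even number of `b`s) is hypothetical and chosen only for illustration; the tensor shapes follow the paper's description.

```python
import numpy as np

# Hypothetical 2-state, 2-symbol DFA accepting strings with an even number of 'b's.
# States: q0 (even count, accepting), q1 (odd count). Symbols: 0 = 'a', 1 = 'b'.
n_symbols, n_states = 2, 2
delta = {(0, 0): 0, (0, 1): 1,   # from q0: 'a' -> q0, 'b' -> q1
         (1, 0): 1, (1, 1): 0}   # from q1: 'a' -> q1, 'b' -> q0

# One-hot transition tensor T[s, q, q'] = 1 iff delta(q, s) = q'.
T = np.zeros((n_symbols, n_states, n_states))
for (q, s), q_next in delta.items():
    T[s, q, q_next] = 1.0

# Final-state vector rho[q] = 1 iff q is accepting.
rho = np.array([1.0, 0.0])
```

Because every slice `T[s]` is a row-stochastic matrix, the same encoding also covers learned (probabilistic) transitions: relaxing the one-hot entries to soft values turns the DFA into a probabilistic finite automaton without changing the shapes.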

The core technical contribution is the construction of a DeepDFA layer that implements the semantics of a probabilistic finite automaton (PFA). A DFA is first translated into a three-dimensional transition tensor T ∈ ℝ^{|Σ|×|Q|×|Q|} and a final-state distribution vector ρ. For each time step t, the network receives a soft symbol vector σ̃_t (produced by a perception module such as a CNN or an RNN) and updates the state distribution q̃_t via the recurrence q̃_t = q̃_{t−1} · (∑_{σ∈Σ} σ̃_t[σ] T_σ), i.e., a step through the transition matrix expected under the soft symbol. After the final step, the acceptance probability is read out as the inner product q̃_T · ρ.
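The recurrence above can be sketched as a short forward pass. This is an illustrative reading of the described semantics, not the authors' implementation; the function name and the assumption that the automaton starts deterministically in state `q0` are mine. (In a real training setup the same operations would be written with a differentiable framework such as PyTorch; NumPy is used here only to show the arithmetic.)

```python
import numpy as np

def dfa_layer_forward(soft_symbols, T, rho, q0=0):
    """Forward pass of a probabilistic-automaton layer.

    soft_symbols: (seq_len, n_symbols) array; each row is a distribution over
                  symbols produced by the perception module.
    T:            (n_symbols, n_states, n_states) transition tensor.
    rho:          (n_states,) indicator (or probability) of accepting states.
    Returns the probability of ending in an accepting state.
    """
    n_states = T.shape[1]
    q = np.zeros(n_states)
    q[q0] = 1.0                              # start in the initial state w.p. 1
    for sigma in soft_symbols:
        # Expected transition matrix under the soft symbol: sum_s sigma[s] * T[s]
        M = np.tensordot(sigma, T, axes=1)   # shape (n_states, n_states)
        q = q @ M                            # one recurrence step
    return float(q @ rho)
```

With one-hot symbol vectors this reduces exactly to running the crisp DFA; with soft vectors it propagates the perception module's uncertainty through the automaton, which is what makes the layer end-to-end differentiable.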

