DAGAF : a directed acyclic generative adversarial framework for joint structure learning and tabular data synthesis

Petkov, Hristo and MacLellan, Calum and Dong, Feng (2025) DAGAF : a directed acyclic generative adversarial framework for joint structure learning and tabular data synthesis. Applied Intelligence. ISSN 1573-7497 (In Press) (https://doi.org/10.1007/s10489-025-06410-8)


Abstract

Understanding the causal relationships between data variables can provide crucial insights into how tabular datasets are constructed. Most existing causal structure learning methods apply a single identifiable causal model, such as the Additive Noise Model (ANM) or the Linear non-Gaussian Acyclic Model (LiNGAM), to discover the dependencies exhibited in observational data. We improve on this approach by introducing a novel dual-step framework that performs both causal structure learning and tabular data synthesis under multiple causal model assumptions. Our approach uses Directed Acyclic Graphs (DAGs) to represent causal relationships among data variables. By applying several functional causal models, including ANM, LiNGAM and the Post-Nonlinear model (PNL), we implicitly learn the DAG structure while simulating the generative process of the observational data, effectively replicating the real data distribution. A theoretical analysis supports this design by explaining the multiple loss terms that make up the framework's objective function. Experimental results demonstrate that DAGAF outperforms many existing methods in structure learning, achieving significantly lower Structural Hamming Distance (SHD) scores on both real-world and benchmark datasets (improvements of 47% on Sachs, 11% on Child, 5% on Hailfinder and 7% on Pathfinder over the state of the art), while also producing diverse, high-quality samples.
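
To make the causal model assumptions mentioned in the abstract concrete, the following minimal Python sketch (illustrative only, not the authors' implementation; the toy three-node DAG, the mechanisms, and the "estimated" graph are assumptions introduced here) simulates observational data from a DAG under ANM, LiNGAM and PNL assumptions and computes the Structural Hamming Distance used in the evaluation.

import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 3

# Toy DAG X0 -> X1 -> X2, with nodes already indexed in topological order.
# adjacency[i, j] = 1 means X_i is a parent of X_j.
adjacency = np.array([[0, 1, 0],
                      [0, 0, 1],
                      [0, 0, 0]])

def simulate(model="anm"):
    """Draw observational samples from the toy DAG under one causal model assumption."""
    X = np.zeros((n, d))
    for j in range(d):
        parents = np.flatnonzero(adjacency[:, j])
        contrib = X[:, parents].sum(axis=1)  # aggregate of parent values (illustrative mechanism)
        if model == "lingam":
            # Linear mechanism with non-Gaussian (uniform) additive noise.
            X[:, j] = 0.8 * contrib + rng.uniform(-1.0, 1.0, size=n)
        elif model == "anm":
            # Nonlinear mechanism with Gaussian additive noise.
            X[:, j] = np.sin(contrib) + rng.normal(0.0, 0.5, size=n)
        elif model == "pnl":
            # Post-nonlinear model: an invertible distortion applied after the noise is added.
            X[:, j] = np.tanh(np.sin(contrib) + rng.normal(0.0, 0.5, size=n))
    return X

def shd(estimated, truth):
    """Structural Hamming Distance: missing, extra, or reversed edges (a reversal counts once)."""
    diff = np.abs(estimated - truth)
    diff = np.clip(diff + diff.T, 0, 1)  # merge a reversed edge into a single disagreement
    return int(np.triu(diff).sum())

# Hypothetical recovered graph: keeps X0 -> X1, misses X1 -> X2, adds a spurious X0 -> X2.
estimated = np.array([[0, 1, 1],
                      [0, 0, 0],
                      [0, 0, 0]])
print(simulate("pnl").shape, "SHD =", shd(estimated, adjacency))

Under these assumptions the script prints an SHD of 2 (one missing and one extra edge); the paper's reported SHD improvements refer to the same metric computed on the Sachs, Child, Hailfinder and Pathfinder networks.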