Differentiable Simulation & AI
These notes collect ideas, formulations, and architectural patterns around differentiable physical simulation, with a focus on implicit constrained mechanics and AI-assisted solvers.
The primary motivation is to understand how modern automatic differentiation frameworks (notably JAX) enable scalable, matrix-free, fully differentiable simulation pipelines, and how learning-based components can be integrated without sacrificing physical structure.
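The matrix-free claim can be made concrete with a small sketch: in JAX, the action of a residual's Jacobian on a vector is available through forward-mode AD (`jax.jvp`) without ever assembling the Jacobian. The residual below is an illustrative placeholder, not part of the framework:

```python
import jax
import jax.numpy as jnp

def residual(u):
    # Illustrative nonlinear residual (a 1D diffusion-like stencil plus a
    # cubic term); stands in for a discretized mechanical residual.
    return jnp.roll(u, 1) - 2.0 * u + jnp.roll(u, -1) + u**3

def jacobian_vector_product(u, v):
    # Matrix-free action J(u) @ v via forward-mode AD; the Jacobian matrix
    # is never formed explicitly.
    _, jv = jax.jvp(residual, (u,), (v,))
    return jv

u = jnp.ones(8)
v = jnp.arange(8.0)
jv = jacobian_vector_product(u, v)
```

This is exactly the operator a Krylov method needs, which is why AD and matrix-free iterative solvers compose so naturally.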
Intended Audience
These notes are written for researchers in numerical simulation, computational mechanics, and scientific machine learning who are familiar with:
- Finite element methods and constrained mechanics
- Newton–Krylov solvers and iterative linear algebra
- Automatic differentiation and differentiable programming
- Modern deep learning frameworks (JAX, PyTorch, etc.)
The perspective is research-oriented: emphasis is placed on formulations, operator structure, and differentiability rather than on user-facing APIs.
Scope
The notes cover:
- Implicit time integration of nonlinear mechanical systems
- Holonomic constraints and physical interactions
- Newton–Krylov solvers and matrix-free linear algebra
- PyTree-based scene graphs and GPU-oriented architectures
- Parameter optimisation and inverse problems
- Integration of AI models as:
- surrogate simulators
- solver accelerators
- learned constitutive laws
- preconditioners and warm starts
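Several of these topics meet in a single pattern: a matrix-free Newton–Krylov step, where the linearized system is solved by a Krylov method whose matrix-vector products come from AD. A minimal sketch, with a placeholder residual standing in for an implicit time step:

```python
import jax
import jax.numpy as jnp
from jax.scipy.sparse.linalg import gmres

def residual(u):
    # Placeholder nonlinear residual; in the framework this would be the
    # implicit time-integration residual of the mechanical system.
    return u**3 + u - 1.0

def newton_krylov_step(u):
    # One matrix-free Newton step: solve J(u) du = -F(u) with GMRES, where
    # J @ v is evaluated via forward-mode AD (no matrix assembly).
    F = residual(u)
    matvec = lambda v: jax.jvp(residual, (u,), (v,))[1]
    du, _ = gmres(matvec, -F)
    return u + du

u = jnp.zeros(4)
for _ in range(10):
    u = newton_krylov_step(u)
# u converges componentwise to the real root of u**3 + u - 1
```

Preconditioners and learned warm starts slot into exactly this loop: the former through the `M` argument of the Krylov solver, the latter by replacing the zero initial guess.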
Philosophy
AI components are treated as assistive or embedded elements inside a well-posed physical solver, not as black-box replacements unless explicitly stated.
The long-term objective is a simulation stack that combines:
- robustness of implicit constrained solvers
- scalability of matrix-free (multi-)GPU execution
- flexibility of learned components
- end-to-end differentiability for inverse problems, parameter optimisation, and learning
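The last point, end-to-end differentiability, means gradients of a loss flow through the entire simulation rollout to physical parameters. A toy sketch (the explicit update below is a hypothetical stand-in for the implicit constrained solve):

```python
import jax
import jax.numpy as jnp

def step(u, stiffness):
    # Hypothetical explicit update standing in for one solver step; the
    # real framework would use an implicit constrained solve here.
    return u - 0.1 * stiffness * u

def loss(stiffness, u0, target):
    # Roll the simulator forward and compare to an observed state.
    u = u0
    for _ in range(20):
        u = step(u, stiffness)
    return jnp.sum((u - target) ** 2)

# Gradient of the loss w.r.t. the physical parameter, obtained by
# differentiating through the entire rollout.
grad_fn = jax.grad(loss)
u0 = jnp.ones(3)
target = 0.5 * jnp.ones(3)
g = grad_fn(2.0, u0, target)
```

The same mechanism supports inverse problems (recovering material parameters from observations) and training learned components embedded in the solver.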
Organization
SOFAx v2
Mathematical and numerical foundations of the differentiable simulation framework:
Mathematical Foundations:
- Overview — Framework design and key ideas
- Problem Formulation — Dynamic and discrete formulations
- Constraints & Physical Interactions — Geometric decomposition and exact constraints
- Preconditioning — Block preconditioners and Schur complement methods
- Multi-physics Extensions — Electrophysiology, fluids, and coupled systems
Numerical Methods:
- Time Integration — Implicit schemes and residual formulation
- Residual Operator — Tree-based evaluation and matrix-free operations
- Newton–Krylov Solver — Matrix-free nonlinear solver
Implementation:
- Code Architecture — Scene graph and PyTree structure
- CPU Setup — Implementation details
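The scene-graph-as-PyTree idea can be previewed with a small sketch; the nested layout below is illustrative, not the framework's actual schema:

```python
import jax
import jax.numpy as jnp

# A scene graph expressed as a nested PyTree of plain containers
# (hypothetical layout for illustration).
scene = {
    "beam": {
        "positions": jnp.zeros((4, 3)),
        "velocities": jnp.zeros((4, 3)),
    },
    "sphere": {
        "positions": jnp.ones((2, 3)),
        "velocities": jnp.zeros((2, 3)),
    },
}

# tree_map applies an update uniformly over every leaf array, so solver
# code never needs to know the concrete scene layout.
damped = jax.tree_util.tree_map(lambda x: 0.9 * x, scene)

# Flattening yields the flat list of state arrays (plus a treedef that
# records the structure) that Krylov solvers and optimizers operate on.
leaves, treedef = jax.tree_util.tree_flatten(scene)
```

Because JAX transformations (`jit`, `grad`, `vmap`) traverse PyTrees natively, the same scene structure serves both the physics solver and any learned components attached to it.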
AI for Simulation
Learning-based components integrated into the physics solver:
- Overview — Integration philosophy and opportunities
- Model Families — Taxonomy of learning-based approaches
- Neural Operators — Focus on FNO and DeepONet
- Equivariant GNNs — Importance of E(n) symmetries
- Integration Points — Where AI components plug into the solver
- Pseudo-Newton Methods — AI-driven correction loops
Advanced Topics:
- Implicit vs Unfolded — Fixed-point vs unrolled formulations
- Neural ODE Solvers — Continuous-time learned dynamics
References — Key papers and resources
Status
These notes are evolving and reflect ongoing research rather than a finished framework. No components are implemented yet.