Design Principle

Core design philosophy: Fully differentiable physical digital twin

Core Philosophy

We aim to make all parameters definable and differentiable.

RFDT is designed as a physical digital twin system in which every parameter can be precisely controlled and optimized through automatic differentiation, just as in a neural network, while remaining grounded in physical reality.

Hierarchical Architecture

The system follows a hierarchical structure that organizes all parameters:

Scene
  Object: "Building"
    Transform
      position
      rotation
    Material
      color
      metalness
    Geometry
      type
      size
  Object: "Radar"
    Transform
      position
      rotation
    Radar
      frequency
      power
    Transmitter
      gain
      beam_width

A Scene contains Objects, each Object has Components, and each Component has Fields (the actual parameters).
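As an illustration, the hierarchy above could be assembled in code roughly as follows. This is a sketch: ObjectFactory.create and scene.add are assumed method names and the numeric values are placeholders; only Scene, ObjectFactory, and the object["Component"].field access pattern are taken from elsewhere in this document.

from rfdt import Scene, ObjectFactory

scene = Scene()

# Assumed creation API; the component/field access pattern matches the docs
building = ObjectFactory.create("Building")
building["Transform"].position = [0, 0, 0]
building["Transform"].rotation = [0, 0, 0]
building["Material"].color = [0.6, 0.6, 0.6]
building["Material"].metalness = 0.8
building["Geometry"].type = "box"
building["Geometry"].size = [20, 10, 30]

radar = ObjectFactory.create("Radar")
radar["Transform"].position = [50, 0, 2]
radar["Radar"].frequency = 28e9         # Hz (placeholder value)
radar["Radar"].power = 1.0              # W (placeholder value)
radar["Transmitter"].gain = 20.0        # dBi (placeholder value)
radar["Transmitter"].beam_width = 15.0  # degrees (placeholder value)

scene.add(building)  # assumed method
scene.add(radar)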

Parameter Space

All scene parameters = Union of all object component fields

The entire parameter space of a scene is the collection of all fields from all components of all objects. This is analogous to the parameter space of a neural network, but instead of weights and biases, we have physical quantities like positions, materials, and RF properties.
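To make the analogy concrete, here is a minimal sketch of flattening that parameter space. The iteration accessors (scene.objects, obj.components, component.fields) are assumptions for illustration, not documented API.

def collect_parameters(scene):
    """Flatten every Field of every Component of every Object into one dict."""
    params = {}
    for obj in scene.objects:                             # assumed accessor
        for component in obj.components:                  # assumed accessor
            for name, field in component.fields.items():  # assumed accessor
                params[f"{obj.name}/{component.name}/{name}"] = field
    return params

# The result plays the same role as model.parameters() for a neural network:
# one flat collection over which gradients and optimizers can operate.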

Fusion of Differentiable DT and Neural Networks

A key design principle: parameters can be driven by deep learning models.

Our system allows certain parameters to be replaced by neural networks. Instead of holding fixed values, these parameters are computed by neural network models, enabling learned behaviors and adaptive systems (a sketch follows the list below).

  • Hybrid Modeling: Combine physics-based simulation with learned components
  • Parameter Networks: Neural networks that predict material properties, beam patterns, or propagation coefficients
  • End-to-End Differentiability: Gradients flow through both physical and neural components
  • Unified Optimization: Jointly optimize physical parameters and neural network weights
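As a minimal sketch of this fusion in PyTorch: a small network predicts a material's metalness instead of the field holding a fixed value. The MaterialNet architecture and the feature vector are illustrative assumptions.

import torch
import torch.nn as nn

class MaterialNet(nn.Module):
    """Predicts a metalness value in [0, 1] from a small feature vector."""
    def __init__(self, in_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 16),
            nn.ReLU(),
            nn.Linear(16, 1),
            nn.Sigmoid(),  # constrain metalness to [0, 1]
        )

    def forward(self, features):
        return self.net(features)

material_net = MaterialNet()
features = torch.tensor([0.2, 0.7, 0.1, 0.4])  # e.g. surface descriptors (illustrative)

# Instead of a fixed value, the field is driven by the network's output;
# gradients from the RF simulation can then flow back into the weights.
predicted_metalness = material_net(features)
# building["Material"].metalness = predicted_metalness  # field assignment pattern from the docs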

Physical Digital Twin

Unlike pure neural networks that learn abstract representations, RFDT parameters represent real physical quantities. Each parameter has physical meaning and units, making the system interpretable and grounded in physics while maintaining the optimization capabilities of modern deep learning frameworks. The differentiable digital twin and neural networks are deeply integrated.

Hyperparameters System

In practice, we don't always need to control or optimize every individual parameter. Some parameters may be related, or we may want to control groups of parameters through higher-level abstractions.

What are Hyperparameters?

Hyperparameters are higher-level variables that control one or more scene parameters through computational relationships. They provide:

  • Abstraction: Control multiple related parameters with a single variable
  • Constraints: Enforce physical relationships between parameters
  • Optimization: Reduce the optimization space for inverse problems
  • Differentiation: Enable gradient-based optimization through enable_grad (see the sketch below)
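As a minimal sketch of the idea using PyTorch tensors: one hyperparameter (an assumed antenna height) drives two related fields through simple computational relationships, and gradients flow back to it, which is the role enable_grad plays in the system.

import torch

# Hypothetical hyperparameter: antenna height above a rooftop
antenna_height = torch.tensor(3.0, requires_grad=True)  # analogue of enable_grad=True

roof_z = 25.0
antenna_z = roof_z + antenna_height         # would drive radar["Transform"].position[2]
beam_width = 30.0 / (1.0 + antenna_height)  # illustrative constraint on Transmitter.beam_width

# Any differentiable loss defined on the scene can now be optimized with
# respect to the single hyperparameter instead of each field it controls.
loss = (antenna_z - 30.0) ** 2 + beam_width
loss.backward()
print(antenna_height.grad)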

Visual Node Editor

The relationship between hyperparameters and component fields is defined through the visual node editor:

  • Hyperparameter Nodes: Input nodes representing controllable variables
  • Field Nodes: Output nodes representing object component fields
  • Operation Nodes: Mathematical operations connecting hyperparameters to fields
  • Gradient Flow: When enable_grad=True, gradients flow through the node graph (see the conceptual sketch below)
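Purely as a conceptual illustration of that data flow (not the editor's actual representation), a node graph can be pictured as hyperparameter nodes feeding operation nodes, which in turn write field nodes:

# Conceptual illustration only: one hyperparameter node feeding two field
# nodes through operation nodes.
node_graph = {
    "hyperparameters": ["antenna_height"],
    "operations": [
        {"op": "add", "inputs": ["antenna_height", 25.0],
         "output_field": "Radar/Transform/position.z"},
        {"op": "mul", "inputs": ["antenna_height", 2.0],
         "output_field": "Radar/Transmitter/gain"},
    ],
}
# With enable_grad=True on the hyperparameter, gradients propagate backwards
# along these edges, from the fields to the hyperparameter.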

Bidirectional Synchronization

A key design principle is the real-time, bidirectional synchronization between the UI editor and Python code:

Python → UI

Changes made in Python code are immediately reflected in the UI editor.

# Change in Python
cube["Transform"].position = [5, 0, 0]

# Viewport updates immediately
# Properties panel shows new values
# No manual refresh needed

UI → Python

Changes made in the UI editor are immediately available in Python.

# User drags object in viewport
# or edits property in panel

# Python code sees changes
pos = cube["Transform"].position
# pos is now updated value

Headless Mode

The UI editor is not required.

Python code can run completely independently in headless mode without any UI. This is essential for:

  • Batch Processing: Running large-scale simulations on servers
  • Optimization: Automated parameter sweeps and gradient-based optimization
  • Integration: Embedding in existing pipelines and workflows
  • Deployment: Production systems without UI dependencies

from rfdt import Scene, Server, ObjectFactory

# Headless mode - no UI
scene = Scene()
server = Server(scene=scene, start_editor=False)
server.start()

# Full simulation capabilities
# All parameters accessible
# Differentiable computations
# No browser required
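As a sketch of the optimization workflows headless mode is intended for: the toy simulate_received_power function below stands in for RFDT's differentiable RF simulation and is not part of the documented API.

import torch

# Toy stand-in for a differentiable simulation: received power falls off
# with distance from a target point.
def simulate_received_power(radar_position, target=torch.zeros(3)):
    distance = torch.norm(radar_position - target)
    return 1.0 / (distance ** 2 + 1e-3)

# Optimize the radar position to maximize received power at the target
position = torch.tensor([10.0, 0.0, 2.0], requires_grad=True)
optimizer = torch.optim.Adam([position], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    loss = -simulate_received_power(position)
    loss.backward()
    optimizer.step()

# In a real run, the optimized tensor would drive the scene, e.g. via
# radar["Transform"].position, so the result feeds back into the digital twin.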