
BuckiNet: ML Discovery & PLC Sensing

Updated 2 February 2026
  • BuckiNet is a dual system comprising a neural network that embeds the Buckingham Pi theorem for discovering dimensionless groups and a power-line sensor protocol for robust data collection.
  • The neural architecture leverages a dedicated Pi-layer to extract sparse, interpretable dimensionless groups, achieving high regression accuracy on canonical physics problems.
  • The power-line protocol employs a bucket-brigade design to enable deterministic, energy-efficient sensing with low latency in harsh and distributed environments.

BuckiNet is a term referring to two technically rigorous, application-specific systems: (1) a neural-network architecture designed for automated discovery of dimensionless groups in physics-informed machine learning (Bakarji et al., 2022), and (2) a power-line–based sensor-network protocol for linear, queue-style deployment in harsh environments (Santos, 2021). Despite the nominal similarity, the two are unrelated; both are documented in the academic literature and deployed or validated in distinct domains. This article details each system, focusing primarily on the neural-network BuckiNet as introduced in "Dimensionally Consistent Learning with Buckingham Pi", before summarizing key features of the power-line protocol.

1. Dimensionally Consistent BuckiNet: Neural Architecture

BuckiNet, as specified in (Bakarji et al., 2022), is a deep learning architecture embedding dimensional-analysis principles for regression or model discovery in applications lacking governing equations but exhibiting dimensional symmetries. Its core is the explicit incorporation of the Buckingham Pi theorem in the network's first ("Pi") layer, enabling the automated extraction of $n' = n - \mathrm{rank}(D_p)$ dimensionless groups $\pi_p$ from physical input variables $p \in \mathbb{R}^n$, where $D_p$ is the dimension matrix whose columns give the base-dimension exponents of each input variable.

Architectural Flow

  • Input Preprocessing: Data matrices $P \in \mathbb{R}^{m \times n}$ (inputs) and $Q \in \mathbb{R}^{m \times k}$ (outputs) are constructed from the $m$ available samples.
  • Pi-Layer (First Layer): For each input row $p$ (strictly positive so that logarithms are defined), a logarithmic mapping $z = \log p$ is followed by $u = zW$, where $W \in \mathbb{R}^{n \times n'}$ is trained. Exponentiation yields $\pi_p = \exp(u) = \exp(\log p\, W)$, a direct realization of monomial Pi-groups with exponents encoded in the columns of $W$.
  • ψ-Network (Subsequent Layers): The dimensionless groups $\pi_p$ are passed into a standard feedforward MLP with $L$ ELU-activated hidden layers, learning a nonlinear regression $\psi: \mathbb{R}^{n'} \rightarrow \mathbb{R}^{k'}$.
  • Buckingham Pi Enforcement: A soft null-space penalty encourages each column $W_{:,j}$ to reside approximately in $\mathrm{null}(D_p)$, enforcing near-dimensional consistency.
  • Output: The network output $\psi(\pi_p)$ is compared to known or computed dimensionless outputs $\pi_q$.
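The Pi-layer above reduces to a log/matmul/exp composition. The following is a minimal NumPy sketch under stated assumptions (the function name, variable ordering, and single-column $W$ are illustrative, not the authors' code):

```python
import numpy as np

def pi_layer(P, W):
    """Dimensionless monomials pi_p = exp(log(P) W); P must be strictly positive."""
    assert np.all(P > 0), "the log layer requires strictly positive inputs"
    return np.exp(np.log(P) @ W)

# Pendulum inputs p = [L, m, g, t]; the known group pi = g t^2 / L
# corresponds to a single exponent column W = [-1, 0, 1, 2]^T.
P = np.array([[2.0, 1.0, 9.81, 0.5]])
W = np.array([[-1.0], [0.0], [1.0], [2.0]])
print(pi_layer(P, W))  # ~ [[1.22625]], i.e. 9.81 * 0.5**2 / 2.0
```

In a deep-learning framework the same mapping is a dense layer applied to log-transformed inputs, followed by an elementwise exponential.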

2. Loss Function and Learning Objective

The BuckiNet loss aggregates three terms:

$$\mathcal{L}(\Theta) = \mathcal{L}_\mathrm{fit} + \lambda_\mathrm{null}\,\mathcal{L}_\mathrm{null} + \lambda_\mathrm{reg}\,\mathcal{R}(W)$$

where $\Theta$ includes both the first-layer $W$ and the MLP parameters.

  • Data Fit: $\mathcal{L}_\mathrm{fit} = \|\pi_q - \psi(\exp(\log P\, W))\|_2^2$ (MSE in output space).
  • Soft Null-Space Penalty: $\mathcal{L}_\mathrm{null} = \|D_p W\|_2^2$, promoting columns of $W$ close to the null space of $D_p$.
  • Regularization: $\mathcal{R}(W) = \alpha_1 \|W\|_1 + \alpha_2 \|W\|_2^2$, enforcing sparse and small exponents.

Hyperparameters $\lambda_\mathrm{null}$, $\alpha_1$, and $\alpha_2$ are selected based on validation performance and the desired regularity.
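The three terms can be sketched directly in NumPy. This is an illustrative reconstruction, not the authors' implementation; `psi` stands in for the trained MLP, and all names and default weights are assumptions:

```python
import numpy as np

# Three-term objective: data fit + soft null-space penalty + regularization.
def buckinet_loss(P, pi_q, W, D_p, psi, lam_null=1.0, alpha1=1e-3, alpha2=1e-3):
    pi_p = np.exp(np.log(P) @ W)            # Pi-layer output
    fit = np.mean((pi_q - psi(pi_p)) ** 2)  # data-fit MSE
    null = np.sum((D_p @ W) ** 2)           # soft null-space penalty ||D_p W||^2
    reg = alpha1 * np.abs(W).sum() + alpha2 * (W ** 2).sum()
    return fit + lam_null * null + reg

# Sanity check: with an exact null-space column and a perfect fit, only the
# regularization term survives.
D_p = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 0, -2, 1]])  # M, L, T rows
W = np.array([[-1.0], [0.0], [1.0], [2.0]])                  # pi = g t^2 / L
P = np.array([[2.0, 1.0, 9.81, 0.5]])
pi_q = np.exp(np.log(P) @ W)
print(buckinet_loss(P, pi_q, W, D_p, psi=lambda z: z))  # ~0.01 (regularization only)
```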

3. Training Algorithm and Implementation

The BuckiNet model is trained as follows:

  • Initialization: $W$ is initialized randomly (e.g., Glorot) or from the right singular vectors of $D_p$ associated with its smallest singular values; MLP weights via Xavier/Glorot.
  • Optimization: Adam or RMSProp with learning rates in $[10^{-4}, 10^{-2}]$, using either full-batch or mini-batch gradient descent depending on the sample count $m$.
  • Hyperparameter Selection: $n'$, $\lambda_\mathrm{null}$ (targeting $\|D_p W\|_2 < 10^{-3}$), $\alpha_1$, and $\alpha_2$ are chosen to ensure an interpretable, dimensionally consistent $W$.
  • Stopping Criteria: Training halts when the validation MSE plateaus and the null-space penalty meets its threshold.
  • Post-Processing: Columns of $W$ can be rounded to rational values, and each discovered Pi-group rescaled for interpretability.
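The post-processing step can be sketched with the standard library's rational arithmetic. This is one plausible realization under assumed choices (the denominator bound and the example values are illustrative):

```python
from fractions import Fraction
import numpy as np

# Snap learned exponents to nearby small rationals for interpretability.
def round_exponents(W, max_denominator=4):
    snap = np.vectorize(
        lambda x: float(Fraction(x).limit_denominator(max_denominator)))
    return snap(W)

# Hypothetical learned column close to the pendulum group g t^2 / L.
W_learned = np.array([[-0.98], [0.01], [1.02], [1.99]])
print(round_exponents(W_learned).ravel())  # [-1.  0.  1.  2.]
```

Rescaling a column after rounding leaves the group dimensionless, since any power of a dimensionless monomial is still dimensionless.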

4. Empirical Examples and Applications

BuckiNet demonstrates robust performance across several canonical problems:

  • Harmonic Oscillator (Pendulum): Inputs $p = [L, m, g, t]$; output $q = \alpha$ (dimensionless). BuckiNet with $n' = 1$ discovers $\pi_p = gt^2/L$ and fits the angular response with MSE $< 10^{-4}$.
  • Bead on Rotating Hoop: Inputs $[m, R, b, g, \omega]$; outputs are the top principal components of $x(t)$. BuckiNet ($n' = 2$) finds Pi-groups matching the classical analysis $\gamma = R\omega^2/g$, $\epsilon = m^2 g R / b^2$ to within 2% in each exponent, and achieves PCA-coefficient MSE $\sim 5 \times 10^{-3}$, outperforming a baseline MLP by $10\times$.
  • These results validate BuckiNet’s utility in discovering physically meaningful, sparse, and interpretable dimensional reductions from purely data-driven inference.
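The pendulum result can be cross-checked directly against the Pi theorem. The dimension-matrix entries below follow standard M, L, T exponents; the column ordering is an assumption for illustration:

```python
import numpy as np

# Dimension matrix for pendulum inputs p = [L, m, g, t]:
# rows are base dimensions (M, L, T), columns the input variables.
D_p = np.array([
    [0, 1, 0, 0],    # mass:   m carries M^1
    [1, 0, 1, 0],    # length: L and g carry L^1
    [0, 0, -2, 1],   # time:   g carries T^-2, t carries T^1
])
n_prime = D_p.shape[1] - np.linalg.matrix_rank(D_p)
print(n_prime)  # 1: exactly one dimensionless group, matching n' = 1

w = np.array([-1, 0, 1, 2])  # exponents of pi = g t^2 / L
print(D_p @ w)  # [0 0 0]: w lies in null(D_p)
```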

5. Strengths, Limitations, and Extension Directions

Strengths

  • Automates the identification of interpretable, sparse, dimensionless groups.
  • Embeds physical symmetries directly, yielding improved generalization and smaller networks via dimensionality reduction.
  • Integrates naturally with modern deep learning frameworks (TensorFlow, PyTorch).

Limitations

  • Requires strictly positive input data due to logarithmic layer, necessitating data shifts if negatives are present.
  • The null-space penalty is inherently soft, so careful tuning of $\lambda_\mathrm{null}$ may be required to recover precise integer or rational exponents.
  • The method is sensitive to hyperparameters and prone to local minima when multiple Pi-groups are entangled.

Potential Extensions

  • Implementing hard null-space constraints to enforce $D_p W = 0$ exactly.
  • Mixed-integer optimization for integer/rational exponents in WW.
  • Automatic determination of $n'$, especially when $k' \neq n - \mathrm{rank}(D_p)$.
  • Generalizations to temporal/spatial input fields using convolutional Pi-layers.

6. Power-Line–Based BuckiNet Protocol for Sensed Quantity Acquisition

Independently, the BuckiNet protocol (Santos, 2021) denotes a deterministic, energy-efficient, and scalable network design for linear sensor chains, primarily deployed for profile acquisition (e.g., pressure or temperature) in challenging field conditions such as oil wells.

System Architecture and Protocol Details

  • Topology: $N$ nodes arranged linearly over power-line infrastructure, with a coordinator at one end.
  • PHY Layer: 16-QAM OFDM modulation over a 25 MHz PLC spectrum with concatenated FEC; fixed-length OFDM "bucket" bursts (~0.333 ms).
  • MAC Layer: Contentionless, cyclical measurement relay ("bucket-brigade"), removing the need for token rings; asynchronous CSMA/CA for management traffic.
  • Timing: End-to-end latency for $N = 1000$ nodes is approximately 2.09 s per cycle, supporting kilometer-scale infrastructure.
  • Energy and Reliability: Each transmission at 0 dBm requires roughly 33 μJ per bucket; triple-payload error correction and neighbor-fallback self-healing achieve packet-error rates $\leq 10^{-6}$.
  • Applications: Demonstrated for oil/gas well monitoring, power-line tower integrity, underground cable surveillance, and large-scale environmental sensing.
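The quoted figures can be related with simple arithmetic, under the assumed interpretation that a cycle is a contiguous sequence of fixed-length bucket bursts (the protocol specification may schedule buckets differently):

```python
# Back-of-envelope check on the figures quoted above.
bucket_s = 0.333e-3          # fixed OFDM bucket duration
cycle_s = 2.09               # quoted end-to-end latency for N = 1000 nodes
n_nodes = 1000
energy_per_bucket_j = 33e-6  # quoted transmit energy per bucket at 0 dBm

buckets = cycle_s / bucket_s
print(round(buckets / n_nodes, 1))              # ~6.3 bucket slots per node per cycle
print(round(buckets * energy_per_bucket_j, 2))  # ~0.21 J transmit energy per cycle, chain-wide
```

The several slots per node are consistent with the redundancy implied by triple-payload error correction and neighbor-fallback relaying.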

Comparative Table

Domain            BuckiNet (NN)             BuckiNet (PLC Network)
Application       ML for Physics            Sensor Data Collection
Core Task         Pi-group Discovery        End-to-End Profile Relay
Discipline        Deep Learning, Physics    Communications, Sensing
arXiv Reference   (Bakarji et al., 2022)    (Santos, 2021)

7. Summary and Outlook

BuckiNet, in both its neural and network incarnations, exemplifies the rigorous embedding of domain structure (dimensional or topological) within data-driven and communication frameworks. In machine learning, BuckiNet has demonstrated superior performance and interpretability where traditional regression fails to impose symmetry constraints. In embedded sensing, its power-line–driven bucket-brigade approach delivers deterministic, robust collection of distributed spatial profiles. Open questions in the neural domain include the development of hard constraints and extension to field data, while network deployments demand continued validation at ever-larger spatial scales and under harsher noise regimes.

