Physics

All posts under category "Physics"

10 posts total
Sorted by date
Prioritizing Probabilities: How Hilbert Space Generates and Constrains Them

We use Bub's (2016) correlation arrays and Pitowsky's (1989b) correlation polytopes to analyze an experimental setup due to Mermin (1981) for measurements on the singlet state of a pair of spin-$\frac{1}{2}$ particles. The class of correlations allowed by quantum mechanics in this setup is represented by an elliptope inscribed in a non-signaling cube. The class of correlations allowed by local hidden-variable theories is represented by a tetrahedron inscribed in this elliptope. We extend this analysis to pairs of particles of arbitrary spin. The class of correlations allowed by quantum mechanics is still represented by the elliptope; the subclass of those allowed by local hidden-variable theories is represented by polyhedra with increasing numbers of vertices and facets that get closer and closer to the elliptope. We use these results to advocate for an interpretation of quantum mechanics like Bub's. Probabilities and expectation values are primary in this interpretation. They are determined by inner products of vectors in Hilbert space. Such vectors do not themselves represent what is real in the quantum world. They encode families of probability distributions over values of different sets of observables. As in classical theory, these values ultimately represent what is real in the quantum world. Hilbert space puts constraints on possible combinations of such values, just as Minkowski space-time puts constraints on possible spatio-temporal constellations of events. Illustrating how generic such constraints are, the equation for the elliptope derived in this paper is a general constraint on correlation coefficients that can be found in older literature on statistics and probability theory. Yule (1896) already stated the constraint; de Finetti (1937) already gave it a geometrical interpretation.
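
For three $\pm 1$-valued observables the elliptope has a compact closed form. As a quick illustration in our own notation (not quoted verbatim from the paper): it is the condition that the $3\times 3$ matrix of pairwise correlation coefficients be positive semidefinite, i.e. that its determinant be non-negative, which is the constraint Yule stated and de Finetti visualized:

```latex
% Elliptope constraint: the 3x3 matrix of pairwise correlation
% coefficients must be positive semidefinite, so its determinant
% is non-negative.
\[
  \det\begin{pmatrix}
    1 & \rho_{12} & \rho_{13} \\
    \rho_{12} & 1 & \rho_{23} \\
    \rho_{13} & \rho_{23} & 1
  \end{pmatrix}
  = 1 - \rho_{12}^{2} - \rho_{13}^{2} - \rho_{23}^{2}
    + 2\rho_{12}\rho_{13}\rho_{23} \;\geq\; 0
\]
```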

paper research

Building Social Networks with Consent: A Survey

This survey explores the literature on game-theoretic models of network formation under the hypothesis of mutual consent in link formation. The introduction of consent in link formation imposes a coordination problem on the network formation process. This survey explores the conclusions from this theory and the various methodologies to avoid the main pitfalls. The main insight originates from Myerson's work on mutual consent in link formation and his main conclusion that the empty network (the network without any links) always emerges as a strong Nash equilibrium in any game-theoretic model of network formation under mutual consent and positive link formation costs. Jackson and Wolinsky introduced a cooperative framework to avoid this main pitfall. They devised the notion of a pairwise stable network to arrive at equilibrium networks that are mainly non-trivial. Unfortunately, this notion of pairwise stability requires coordinated action by pairs of decision makers in link formation. I survey the possible solutions in a purely non-cooperative framework of network formation under mutual consent by exploring potential refinements of the standard Nash equilibrium concept to explain the emergence of non-trivial networks. This includes the notions of unilateral and monadic stability. The former is founded on advanced rational reasoning of individuals about how others would respond to one's efforts to modify the network; the latter incorporates trusting, boundedly rational behaviour into the network formation process. The survey is concluded with an initial exploration of external correlation devices as an alternative framework to address mutual consent in network formation.
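
To make pairwise stability concrete, here is a minimal sketch in Python (our own illustration, not code from the survey): a network is Jackson-Wolinsky pairwise stable when no player gains by severing one of her links and no unlinked pair would jointly consent to add theirs.

```python
from itertools import combinations

def pairwise_stable(n, links, utility):
    """Check Jackson-Wolinsky pairwise stability.

    n       -- number of players 0..n-1
    links   -- set of frozensets {i, j} for undirected links
    utility -- utility(i, links) -> float, player i's payoff in network `links`
    """
    # (1) No player benefits from severing one of her own links.
    for link in links:
        for i in link:
            if utility(i, links - {link}) > utility(i, links):
                return False
    # (2) No unlinked pair would consent to add their link:
    #     both weakly gain and at least one strictly gains.
    for i, j in combinations(range(n), 2):
        link = frozenset((i, j))
        if link in links:
            continue
        added = links | {link}
        gain_i = utility(i, added) - utility(i, links)
        gain_j = utility(j, added) - utility(j, links)
        if gain_i >= 0 and gain_j >= 0 and (gain_i > 0 or gain_j > 0):
            return False
    return True

# Hypothetical payoffs: each player nets (benefit - cost) = 0.6 per link.
u = lambda i, g: sum(1.0 - 0.4 for l in g if i in l)
print(pairwise_stable(3, {frozenset((0, 1))}, u))  # False: 0 and 2 would both gain by linking
```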

paper research

Crowther's Quantum Interpretation: Insights and Sharpening Points

We read Karen Crowther's "Another 100 Years of Quantum Interpretation?" with two practical goals. First, we spell out what she means by interpretation: an attempt to provide understanding (not just predictions), which may be representationalist or non-representationalist, and which she contrasts with deeper reductive (inter-theoretic) explanation -- especially in the quantum-gravity setting. Second, we list twelve points where the paper's physics-facing wording could be sharpened. In our view, several claims are directionally well-motivated but stated more strongly than the underlying physics supports, or they run together distinct notions (e.g. degrees of freedom, singularity, and different senses of "locality") that need careful separation. We end by suggesting that the philosophical question is genuinely worthwhile, but the physics should be phrased more cautiously so that heuristic motivation is not mistaken for strict implication.

paper research
Deep Learning in Geotechnical Engineering: A Critical Assessment of PINNs and Operator Learning

Deep learning methods -- physics-informed neural networks (PINNs), deep operator networks (DeepONet), and graph network simulators (GNS) -- are increasingly proposed for geotechnical problems. This paper tests these methods against traditional solvers on canonical problems: wave propagation and beam-foundation interaction. PINNs run 90,000 times slower than finite difference with larger errors. DeepONet requires thousands of training simulations and breaks even only after millions of evaluations. Multi-layer perceptrons fail catastrophically when extrapolating beyond training data -- the common case in geotechnical prediction. GNS shows promise for geometry-agnostic simulation but faces scaling limits and cannot capture path-dependent soil behavior. For inverse problems, automatic differentiation through traditional solvers recovers material parameters with sub-percent accuracy in seconds. We recommend: use automatic differentiation for inverse problems; apply site-based cross-validation to account for spatial autocorrelation; reserve neural networks for problems where traditional solvers are genuinely expensive and predictions remain within the training envelope. When a method is four orders of magnitude slower with less accuracy, it is not a viable replacement for proven solvers.
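
As a hedged sketch of the recommended inverse-problem workflow (a toy example of ours, not the paper's benchmark; all parameter names and values are illustrative): differentiate through a small finite-difference solve with JAX and recover a stiffness parameter by gradient descent on the data misfit.

```python
import jax
import jax.numpy as jnp

def solve_bar(E, n=50, L=1.0, load=1.0):
    """Finite-difference solution of E * u'' = -load on (0, L), u(0) = u(L) = 0."""
    h = L / (n + 1)
    A = (E / h**2) * (jnp.diag(-2.0 * jnp.ones(n))
                      + jnp.diag(jnp.ones(n - 1), 1)
                      + jnp.diag(jnp.ones(n - 1), -1))
    return jnp.linalg.solve(A, -load * jnp.ones(n))

E_true = 3.0
u_obs = solve_bar(E_true)                 # synthetic "measurements"

loss = lambda E: jnp.mean((solve_bar(E) - u_obs) ** 2)
grad_loss = jax.grad(loss)                # AD straight through the linear solve

E = 1.0                                   # initial guess
for _ in range(500):
    E = E - 1e2 * grad_loss(E)            # plain gradient descent; ad hoc step size
print(E)                                  # approaches E_true = 3.0
```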

paper research
Photonics Meets AI: An Open-Source Co-Design Toolflow

Photonics is becoming a cornerstone technology for high-performance AI systems and scientific computing, offering unparalleled speed, parallelism, and energy efficiency. Despite this promise, the design and deployment of electronic-photonic AI systems remain highly challenging due to a steep learning curve across multiple layers, spanning device physics, circuit design, system architecture, and AI algorithms. The absence of a mature electronic-photonic design automation (EPDA) toolchain leads to long, inefficient design cycles and limits cross-disciplinary innovation and co-evolution. In this work, we present a cross-layer co-design and automation framework aimed at democratizing photonic AI system development. We begin by introducing our architecture designs for scalable photonic edge AI and Transformer inference, followed by SimPhony, an open-source modeling tool for rapid EPIC AI system evaluation and design-space exploration. We then highlight advances in AI-enabled photonic design automation, including physical AI-based Maxwell solvers, a fabrication-aware inverse design framework, and a scalable inverse training algorithm for meta-optical neural networks, enabling a scalable EPDA stack for next-generation electronic-photonic AI systems.

paper research
PINNs Revolutionize Electromagnetic Wave Modeling

Physics-Informed Neural Networks (PINNs) are a methodology that aims to solve physical systems by directly embedding PDE constraints into the neural network training process. In electromagnetism, where well-established methodologies such as FDTD and FEM already exist, new methodologies are expected to provide clear advantages to be accepted. Despite their mesh-free nature and applicability to inverse problems, PINNs can exhibit deficiencies in accuracy and energy metrics when compared to FDTD solutions. This study demonstrates that hybrid training strategies can bring PINNs closer to FDTD-level accuracy and energy consistency, and presents a hybrid methodology addressing common challenges in wave propagation scenarios. The causality collapse problem in time-dependent PINN training is addressed via time marching and causality-aware weighting. To mitigate the discontinuities introduced by time marching, a two-stage interface continuity loss is applied. To suppress loss accumulation, which manifests as cumulative energy drift in electromagnetic waves, a local Poynting-based regularizer has been developed. In the developed PINN model, high field accuracy is achieved, with an average NRMSE of 0.09% and an $L^2$ error of 1.01% over time. Energy conservation is achieved on the PINN side with only a 0.024% relative energy mismatch in the 2D PEC cavity scenario. Training is performed without labeled field data, using only physics-based residual losses; FDTD is used solely for post-training evaluation. The results demonstrate that PINNs can achieve results competitive with FDTD in canonical electromagnetic examples and are a viable alternative.
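
The causality-aware weighting can be illustrated compactly. A minimal sketch under assumptions (our reconstruction of a commonly used causal-training scheme, not necessarily this paper's exact weighting): each time slice's residual loss is down-weighted by the accumulated residuals of all earlier slices, so the network must resolve early times before later ones.

```python
import jax
import jax.numpy as jnp

def causal_weights(slice_losses, eps=1.0):
    """Causality-aware weights for per-time-slice PDE residual losses.

    slice_losses -- array of mean residual losses L[0], ..., L[T-1], one per slice
    Returns w[i] = exp(-eps * sum_{k < i} L[k]): a slice only receives weight
    once the residuals at all earlier times have been driven down.
    """
    cum_prev = jnp.concatenate([jnp.zeros(1), jnp.cumsum(slice_losses)[:-1]])
    return jax.lax.stop_gradient(jnp.exp(-eps * cum_prev))

def causal_loss(slice_losses, eps=1.0):
    # Weighted total residual; the weights act as constants during backprop.
    return jnp.mean(causal_weights(slice_losses, eps) * slice_losses)
```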

paper research
q3-MuPa: Silent Scans with Physics-Infused Diffusion Models

The 3D fast silent multi-parametric mapping sequence with zero echo time (MuPa-ZTE) is a novel quantitative MRI (qMRI) acquisition that enables nearly silent scanning by using a 3D phyllotaxis sampling scheme. MuPa-ZTE improves patient comfort and motion robustness, and generates quantitative maps of T1, T2, and proton density using the acquired weighted image series. In this work, we propose a diffusion model-based qMRI mapping method that leverages both a deep generative model and physics-based data consistency to further improve the mapping performance. Furthermore, our method enables additional acquisition acceleration, allowing high-quality qMRI mapping from a fourfold-accelerated MuPa-ZTE scan (approximately 1 minute). Specifically, we trained a denoising diffusion probabilistic model (DDPM) to map MuPa-ZTE image series to qMRI maps, and we incorporated the MuPa-ZTE forward signal model as an explicit data consistency (DC) constraint during inference. We compared our mapping method against a baseline dictionary matching approach and a purely data-driven diffusion model. The diffusion models were trained entirely on synthetic data generated from digital brain phantoms, eliminating the need for large real-scan datasets. We evaluated on synthetic data, a NIST/ISMRM phantom, healthy volunteers, and a patient with brain metastases. The results demonstrated that our method produces 3D qMRI maps with high accuracy, reduced noise, and better preservation of structural details. Notably, it generalised well to real scans despite training on synthetic data alone. The combination of the MuPa-ZTE acquisition and our physics-informed diffusion model is termed q3-MuPa, a quick, quiet, and quantitative multi-parametric mapping framework, and our findings highlight its strong clinical potential.
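
The shape of the data-consistency step can be sketched generically (the `denoise_step` and `forward_model` interfaces and the step size `lam` below are hypothetical placeholders, not the actual q3-MuPa implementation): after each reverse-diffusion update of the parameter maps, the estimate is nudged toward agreement with the acquired image series via the gradient of the signal-model misfit.

```python
import jax
import jax.numpy as jnp

def dc_guided_sampling(denoise_step, forward_model, y, x_T, timesteps, lam=0.1):
    """Reverse diffusion with an explicit data-consistency (DC) correction.

    denoise_step(x, t) -- one trained DDPM reverse step (hypothetical interface)
    forward_model(x)   -- signal model mapping qMRI maps x to an image series
    y                  -- acquired weighted image series
    lam                -- DC step size (illustrative value)
    """
    dc_grad = jax.grad(lambda x: jnp.sum((forward_model(x) - y) ** 2))
    x = x_T
    for t in reversed(timesteps):
        x = denoise_step(x, t)       # generative prior update
        x = x - lam * dc_grad(x)     # pull the estimate toward the measured data
    return x
```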

paper research
Scaling Photonic AI: From Design to Co-Exploration

In this work, we identify three considerations that are essential for realizing practical photonic AI systems at scale: (1) dynamic tensor operation support for modern models rather than only weight-static kernels, especially for attention/Transformer-style workloads; (2) systematic management of conversion, control, and data-movement overheads, where multiplexing and dataflow must amortize electronic costs instead of letting ADC/DAC and I/O dominate; and (3) robustness under hardware non-idealities that become more severe as integration density grows. To study these coupled tradeoffs quantitatively, and to ensure they remain meaningful under real implementation constraints, we build a cross-layer toolchain that supports photonic AI design from early exploration to physical realization. SimPhony provides implementation-aware modeling and rapid cross-layer evaluation, translating physical costs into system-level metrics so architectural decisions are grounded in realistic assumptions. ADEPT and ADEPT-Z enable end-to-end circuit and topology exploration, connecting system objectives to feasible photonic fabrics under practical device and circuit constraints. Finally, Apollo and LiDAR provide scalable photonic physical design automation, turning candidate circuits into manufacturable layouts while accounting for routing, thermal, and crosstalk constraints.

paper research

The Logical Structure of Physical Laws: A Fixed Point Reconstruction

We formalise the self-referential definition of physical laws using monotone operators on a lattice of theories, resolving the pathologies of naive set-theoretic formulations. By invoking Tarski's fixed-point theorem, we identify physical theories as least fixed points of admissibility constraints derived from Galois connections. We demonstrate that QED and General Relativity can be represented in such a logical structure with respect to their symmetry and locality principles.
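
Tarski's theorem guarantees a least fixed point for any monotone operator on a complete lattice; on a finite powerset lattice it is reached by Kleene iteration from the bottom element. A minimal sketch of that construction (ours, with a toy admissibility operator; the paper's lattice of theories is of course far richer):

```python
def least_fixed_point(f, bottom=frozenset()):
    """Kleene iteration: bottom <= f(bottom) <= f(f(bottom)) <= ...

    For a monotone f on a finite powerset lattice, this ascending chain
    stabilizes at the least fixed point guaranteed by Tarski's theorem.
    """
    x = bottom
    while True:
        fx = f(x)
        if fx == x:
            return x
        x = fx

# Toy "admissibility" operator: a statement is admitted once all of its
# premises are; monotone by construction. The rules are hypothetical.
rules = {"A": set(), "B": {"A"}, "C": {"A", "B"}, "D": {"E"}}
close = lambda s: frozenset(p for p, prem in rules.items() if prem <= s)
print(least_fixed_point(close))  # frozenset({'A', 'B', 'C'}); 'D' is never admitted
```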

paper research
Transformer Surrogates Predict Plasma Edge Dynamics Efficiently

Accurate modeling of scrape-off layer (SOL) and divertor-edge dynamics is vital for designing plasma-facing components in fusion devices. High-fidelity edge fluid/neutral codes such as SOLPS-ITER capture SOL physics with high accuracy, but their computational cost limits broad parameter scans and long transient studies. We present transformer-based, autoregressive surrogates for efficient prediction of 2D, time-dependent plasma edge state fields. Trained on SOLPS-ITER spatiotemporal data, the surrogates forecast electron temperature, electron density, and radiated power over extended horizons. We evaluate model variants trained with increasing autoregressive horizons (1-100 steps) on short- and long-horizon prediction tasks. Longer-horizon training systematically improves rollout stability and mitigates error accumulation, enabling stable predictions over hundreds to thousands of steps and reproducing key dynamical features such as the motion of high-radiation regions. Measured end-to-end wall-clock times show the surrogate is orders of magnitude faster than SOLPS-ITER, enabling rapid parameter exploration. Prediction accuracy degrades when the surrogate enters physical regimes not represented in the training dataset, motivating future work on data enrichment and physics-informed constraints. Overall, this approach provides a fast, accurate surrogate for computationally intensive plasma edge simulations, supporting rapid scenario exploration, control-oriented studies, and progress toward real-time applications in fusion devices.
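
The multi-step training idea fits in a few lines (our illustration; `model` and the field tensors are placeholders, not the paper's architecture): rather than penalizing only the one-step prediction, the surrogate is rolled forward over a horizon on its own outputs and the whole trajectory is penalized, which is what suppresses compounding error in long rollouts.

```python
import jax
import jax.numpy as jnp

def rollout_loss(params, model, x0, targets):
    """Autoregressive multi-step training loss.

    model(params, x) -- one-step surrogate: fields at t -> fields at t+1
    x0               -- initial field state
    targets          -- ground-truth states for the next H steps (e.g. SOLPS-ITER data)
    """
    def step(x, target):
        x_next = model(params, x)        # feed the model's own prediction back in
        return x_next, jnp.mean((x_next - target) ** 2)
    _, per_step = jax.lax.scan(step, x0, targets)
    return jnp.mean(per_step)            # average error over the whole horizon
```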

paper research

Category Statistics (Total: 566)

Computer Science (514) Machine Learning (117) Artificial Intelligence (89) Computer Vision (71) Computation and Language (NLP) (62) Electrical Engineering and Systems Science (36) Cryptography and Security (24) Robotics (22) Systems and Control (22) Software Engineering (20) Mathematics (18) Statistics (17) Economics (16) Information Retrieval (15) Distributed, Parallel, and Cluster Computing (14) Human-Computer Interaction (14) Neural and Evolutionary Computing (13) Computer Science and Game Theory (11) Econometrics (11) Image and Video Processing (10) Physics (10) Sound (10) Multiagent Systems (9) Optimization and Control (8) Computational Geometry (7) Databases (7) Graphics (6) Networking and Internet Architecture (6) Quantitative Biology (6) Quantum Physics (5) Theoretical Economics (5) Computational Complexity (4) Computational Engineering, Finance, and Science (4) Computers and Society (4) Emerging Technologies (4) Information Theory (4) Methodology (4) Multimedia (4) Programming Languages (4) Quantitative Finance (4) Signal Processing (4) Audio and Speech Processing (3) Data Structures and Algorithms (3) Hardware Architecture (3) History and Philosophy of Physics (3) Logic in Computer Science (3) Neurons and Cognition (3) Social and Information Networks (3) Statistics Theory (3) Computation (2) Condensed Matter (2) Dynamical Systems (2) Formal Languages and Automata Theory (2) General Finance (2) Operating Systems (2) Optics (2) Quantitative Methods (2) Applications (1) Astrophysics (1) Combinatorics (1) Computational Physics (1) Digital Libraries (1) Disordered Systems and Neural Networks (1) General Economics (1) Genomics (1) Geophysics (1) Instrumentation and Methods for Astrophysics (1) Logic (1) Mathematical Finance (1) Mathematical Software (1) Medical Physics (1) Mesoscale and Nanoscale Physics (1) Metric Geometry (1) Other Statistics (1) Performance (1) Physics and Society (1) Plasma Physics (1) Probability (1) Trading and Market Microstructure (1)
