# Physics

###### Featured Researches

## Genetic evolution of a multi-generational population in the context of interstellar space travels -- Part I: Genetic evolution under the neutral selection hypothesis

We updated the agent-based Monte Carlo code HERITAGE, which simulates human evolution within restrictive environments such as interstellar, sub-light-speed spacecraft, to include the effects of population genetics. We incorporated a simplified -- yet representative -- model of the whole human genome with 46 chromosomes (23 pairs), containing 2110 building blocks that simulate genetic elements (loci). Each individual is endowed with his/her own diploid genome. Each locus can take 10 different allelic (mutated) forms that can be investigated. To mimic gamete production (sperm and eggs) in human individuals, we simulate the meiosis process, including crossing-over and unilateral conversions of chromosomal sequences. Mutation of the genetic information by cosmic ray bombardment is also included. In this first paper of a series of two, we use the neutral hypothesis: mutations (genetic changes) have only neutral phenotypic effects (physical manifestations), implying no natural selection on variations. We will relax this assumption in the second paper. Under this hypothesis, we demonstrate how the genetic patrimony of multi-generational crews can be affected by genetic drift and mutations. It appears that centuries-long deep space voyages have small but unavoidable effects on the genetic composition/diversity of the traveling populations, heralding substantial genetic differentiation on longer time-scales if the annual equivalent dose of cosmic ray radiation is similar to the Earth's radioactivity background at sea level. For larger doses, genomes in the final populations can deviate more strongly, with significant genetic differentiation arising within centuries.
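Genetic drift in a small, closed population of the kind discussed above can be illustrated with a minimal Wright--Fisher simulation. This sketch is a generic textbook model, not the HERITAGE code itself: it tracks the frequency of a single neutral allele, which wanders randomly between generations even with no selection acting.

```python
import random

def wright_fisher(pop_size, p0, generations, seed=0):
    """Track one neutral allele's frequency under pure drift.

    Each generation, every allele copy in the new population is drawn
    independently from the previous generation's allele pool, so the
    frequency performs a random walk even without selection.
    """
    rng = random.Random(seed)
    p = p0
    history = [p]
    for _ in range(generations):
        # Binomial sampling: copies of the allele in the next generation
        copies = sum(rng.random() < p for _ in range(pop_size))
        p = copies / pop_size
        history.append(p)
    return history

# A crew of 500 allele copies followed over ~10 generations
# (roughly a centuries-long voyage)
traj = wright_fisher(pop_size=500, p0=0.5, generations=10)
```

Smaller crews drift faster: the per-generation variance of the frequency change is p(1-p)/N, which is one reason crew size is a key design parameter for multi-generational missions.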

## On the Radiation Reaction Force

The usual radiation self-force of a point charge is obtained in a mathematically exact way, and it is pointed out that this does not imply that the spacetime motion of a point charge obeys the Lorentz--Abraham--Dirac equation.

## Predictive Factors of Kinematics in Traumatic Brain Injury from Head Impacts Based on Statistical Interpretation

Brain tissue deformation resulting from head impacts is primarily caused by rotation and can lead to traumatic brain injury. To quantify brain injury risk based on measurements of head kinematics, finite element (FE) models and various brain injury criteria based on different factors of these kinematics have been developed, but the contribution of different kinematic factors has not been comprehensively analyzed across different types of head impacts in a data-driven manner. To better design brain injury criteria, we analyzed the predictive power of rotational kinematics factors that differ in 1) derivative order (angular velocity, angular acceleration, angular jerk), 2) direction and 3) power (e.g., square-rooted, squared, cubic) of the angular velocity, based on datasets covering laboratory impacts, American football, mixed martial arts (MMA), NHTSA automobile crashworthiness tests and NASCAR crash events. Ordinary least squares regressions were built from kinematics factors to the 95% maximum principal strain (MPS95), and we compared zero-order correlation coefficients, structure coefficients, commonality analysis, and dominance analysis. The angular acceleration, magnitude, and first-power factors showed the highest predictive power for the majority of impact types, including laboratory and American football impacts, with a few exceptions (angular velocity for MMA and NASCAR impacts). The predictive power of rotational kinematics in the three directions (x: posterior-to-anterior, y: left-to-right, z: superior-to-inferior) varied with different sports and types of head impacts.
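The comparison of kinematic predictors described above can be sketched with synthetic data; the variable names and coefficients below are illustrative assumptions, not values from the study. The sketch computes the zero-order correlation of two candidate factors against a strain proxy, then fits an ordinary least squares regression from both factors jointly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for kinematic factors (illustrative only)
ang_vel = rng.normal(size=n)
ang_acc = 0.9 * rng.normal(size=n)
# Toy MPS95 proxy driven mostly by angular acceleration
strain = 0.8 * ang_acc + 0.2 * ang_vel + 0.1 * rng.normal(size=n)

def zero_order_r(x, y):
    """Pearson correlation of one predictor with the outcome."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float((x * y).mean())

# Multiple OLS fit from both factors to the strain proxy
X = np.column_stack([np.ones(n), ang_vel, ang_acc])
beta, *_ = np.linalg.lstsq(X, strain, rcond=None)
```

On this toy data the angular-acceleration factor dominates both the zero-order correlation and the fitted coefficients, mirroring the kind of comparison the study performs across real impact datasets.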

## Study on multi-fold bunch splitting in a high-intensity medium-energy proton synchrotron

Bunch splitting is an RF manipulation method of changing the bunch structure, bunch numbers and bunch intensity in the high-intensity synchrotrons that serve as the injector for a particle collider. An efficient way to realize bunch splitting is to use the combination of different harmonic RF systems, such as the two-fold bunch splitting of a bunch with a combination of fundamental harmonic and doubled harmonic RF systems. The two-fold bunch splitting and three-fold bunch splitting methods have been experimentally verified and successfully applied to the LHC/PS. In this paper, a generalized multi-fold bunch splitting method is given. The five-fold bunch splitting method using specially designed multi-harmonic RF systems was studied and tentatively applied to the medium-stage synchrotron (MSS), the third accelerator of the injector chain of the Super Proton-Proton Collider (SPPC), to mitigate the pileup effects and collective instabilities of a single bunch in the SPPC. The results show that the five-fold bunch splitting is feasible and both the bunch population distribution and longitudinal emittance growth after the splitting are acceptable, e.g., a few percent in the population deviation and less than 10% in the total emittance growth.

## P, T-odd Faraday rotation in intracavity absorption spectroscopy with molecular beam as a possible way to improve the sensitivity of the search for the time reflection noninvariant effects in nature

The present constraint on the space parity (P) and time reflection invariance (T) violating electron electric dipole moment (eEDM) is based on the observation of the electron spin precession in an external electric field using the ThO molecule. We propose an alternative approach: observation of the P, T-odd Faraday effect in an external electric field using a cavity-enhanced polarimetric scheme in combination with a molecular beam crossing the cavity. Our theoretical simulation of the proposed experiment with PbF and ThO molecular beams shows that the present constraint on the eEDM can in principle be improved by a few orders of magnitude.

## Recoil Implantation Using Gas-Phase Precursor Molecules

Ion implantation underpins a vast range of devices and technologies that require precise control over the physical, chemical, electronic, magnetic and optical properties of materials. A variant termed recoil implantation - in which a precursor is deposited onto a substrate as a thin film and implanted via momentum transfer from incident energetic ions - has a number of compelling advantages, particularly when performed using an inert ion nano-beam [Fröch et al., Nat Commun 11, 5039 (2020)]. However, a major drawback of this approach is that the implant species are limited to the constituents of solid thin films. Here we overcome this limitation by demonstrating recoil implantation using gas-phase precursors. Specifically, we fabricate nitrogen-vacancy (NV) color centers in diamond using an Ar ion beam and the nitrogen-containing precursor gases N2, NH3 and NF3. Our work expands the applicability of recoil implantation to most of the periodic table, and to applications in which thin film deposition or removal is impractical.

## Sub-seasonal forecasting with a large ensemble of deep-learning weather prediction models

We present an ensemble prediction system using a Deep Learning Weather Prediction (DLWP) model that recursively predicts key atmospheric variables with six-hour time resolution. This model uses convolutional neural networks (CNNs) on a cubed-sphere grid to produce global forecasts. The approach is computationally efficient, requiring just three minutes on a single GPU to produce a 320-member set of six-week forecasts at 1.4° resolution. Ensemble spread is primarily produced by randomizing the CNN training process to create a set of 32 DLWP models with slightly different learned weights. Although our DLWP model does not forecast precipitation, it does forecast total column water vapor, and it gives a reasonable 4.5-day deterministic forecast of Hurricane Irma. In addition to simulating mid-latitude weather systems, it spontaneously generates tropical cyclones in a one-year free-running simulation. Averaged globally and over a two-year test set, the ensemble mean RMSE retains skill relative to climatology beyond two weeks, with anomaly correlation coefficients remaining above 0.6 through six days. Our primary application is to subseasonal-to-seasonal (S2S) forecasting at lead times from two to six weeks. Current forecast systems have low skill in predicting one- or two-week-average weather patterns at S2S time scales. The continuous ranked probability score (CRPS) and the ranked probability skill score (RPSS) show that the DLWP ensemble is only modestly inferior in performance to the European Centre for Medium-Range Weather Forecasts (ECMWF) S2S ensemble over land at lead times of 4 and 5-6 weeks. At shorter lead times, the ECMWF ensemble performs better than DLWP.
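One of the skill measures quoted above, the anomaly correlation coefficient, is straightforward to compute: it is the pattern correlation of forecast and observed anomalies about a climatological reference. A minimal sketch with toy numbers (not DLWP output):

```python
import numpy as np

def anomaly_correlation(forecast, observed, climatology):
    """Anomaly correlation coefficient (ACC): correlation of forecast
    and observed anomalies relative to a climatological reference."""
    fa = forecast - climatology
    oa = observed - climatology
    return float(np.sum(fa * oa) /
                 np.sqrt(np.sum(fa ** 2) * np.sum(oa ** 2)))

# A perfect forecast has ACC = 1; climatology itself is the zero-skill baseline
obs = np.array([1.0, 3.0, 2.0, 4.0])
clim = np.array([2.0, 2.0, 2.0, 2.0])
print(anomaly_correlation(obs, obs, clim))  # prints 1.0
```

An ACC of 0.6, as cited in the abstract, is a conventional threshold for a "useful" deterministic forecast.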

## Minimum amount of energy required for the transference of electrons between atoms

The purpose of this article is to show a different way to calculate the electron transfer between atoms. It can be used as a basis for calculating the minimum electrical energy, expressed here in joules, necessary for the transfer of electrons. A table is provided giving the energy required for electron transfer in most common materials. In this article we tried to generalize the required energy; for results in a specific field or material, the reader can consult the references and collaborating workers.

## Bayesian Poroelastic Aquifer Characterization from InSAR Surface Deformation Data Part II: Quantifying the Uncertainty

Uncertainty quantification of groundwater (GW) aquifer parameters is critical for efficient management and sustainable extraction of GW resources. These uncertainties are introduced by the data, model, and prior information on the parameters. Here we develop a Bayesian inversion framework that uses Interferometric Synthetic Aperture Radar (InSAR) surface deformation data to infer the laterally heterogeneous permeability of a transient linear poroelastic model of a confined GW aquifer. The Bayesian solution of this inverse problem takes the form of a posterior probability density of the permeability. Exploring this posterior using classical Markov chain Monte Carlo (MCMC) methods is computationally prohibitive due to the large dimension of the discretized permeability field and the expense of solving the poroelastic forward problem. However, in many partial differential equation (PDE)-based Bayesian inversion problems, the data are only informative in a few directions in parameter space. For the poroelasticity problem, we prove this property theoretically for a one-dimensional problem and demonstrate it numerically for a three-dimensional aquifer model. We design a generalized preconditioned Crank--Nicolson (gpCN) MCMC method that exploits this intrinsic low dimensionality by using a low-rank based Laplace approximation of the posterior as a proposal, which we build scalably. The feasibility of our approach is demonstrated through a real GW aquifer test in Nevada. The inherently two dimensional nature of InSAR surface deformation data informs a sufficient number of modes of the permeability field to allow detection of major structures within the aquifer, significantly reducing the uncertainty in the pressure and the displacement quantities of interest.

## Using Parker Solar Probe observations during the first four perihelia to constrain global magnetohydrodynamic models

Parker Solar Probe (PSP) is providing an unprecedented view of the Sun's corona as it progressively dips closer into the solar atmosphere with each solar encounter. Each set of observations provides a unique opportunity to test and constrain global models of the solar corona and inner heliosphere and, in turn, use the model results to provide a global context for interpreting such observations. In this study, we develop a set of global magnetohydrodynamic (MHD) model solutions of varying degrees of sophistication for PSP's first four encounters and compare the results with in situ measurements from PSP, STEREO-A, and Earth-based spacecraft, with the objective of assessing which models perform better or worse. All models were primarily driven by the observed photospheric magnetic field using data from the Solar Dynamics Observatory's Helioseismic and Magnetic Imager (HMI) instrument. Overall, we find substantial differences between the model results, both in the large-scale structure of the inner heliosphere during these time periods and in the inferred time series at various spacecraft. The "thermodynamic" model, which represents the "middle ground" in terms of model complexity, appears to reproduce the observations most closely for all four encounters. Our results also contradict an earlier study that had hinted that the open flux problem may disappear nearer the Sun. Instead, our results suggest that this "missing" solar flux is still missing even at 26.9 Rs, and thus it cannot be explained by interplanetary processes. Finally, the model results were also used to provide a global context for interpreting the localized in situ measurements.

## From Ramanujan to renormalization: the art of doing away with divergences and arriving at physical results

A century ago Srinivasa Ramanujan - the great self-taught Indian genius of mathematics - died, shortly after returning from Cambridge, UK, where he had collaborated with Godfrey Hardy. Ramanujan contributed numerous outstanding results to different branches of mathematics, like analysis and number theory, with a focus on special functions and series. Here we refer to the apparently weird values which he assigned to two simple divergent series, ∑n = 1+2+3+⋯ and ∑n³ = 1³+2³+3³+⋯. These values are sensible, however, as analytic continuations corresponding to Riemann's ζ-function. Moreover, they have applications in physics: we discuss the vacuum energy of the photon field, from which one can derive the Casimir force, which has been experimentally measured. We also discuss its interpretation, which remains controversial. This is a simple way to illustrate the concept of renormalization, which is vital in quantum field theory.
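The two assignments can be written out explicitly; the arrow denotes the value obtained by analytic continuation of the Riemann ζ-function at negative integers, not an ordinary convergent sum:

```latex
\sum_{n=1}^{\infty} n \;\to\; \zeta(-1) = -\frac{1}{12},
\qquad
\sum_{n=1}^{\infty} n^{3} \;\to\; \zeta(-3) = \frac{1}{120}.
```

In the standard zeta-regularized treatment of the Casimir effect, ζ(-1) appears in the one-dimensional toy model and ζ(-3) in the parallel-plate geometry, whose attractive pressure is commonly quoted as F/A = -π²ℏc/(240 a⁴) for plate separation a.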

## A Kinetic Model for Electron-Ion Transport in Warm Dense Matter

We present a model for electron-ion transport in Warm Dense Matter that incorporates Coulomb coupling effects into the quantum Boltzmann equation of Uehling and Uhlenbeck through the use of a statistical potential of mean force. Although this model has been derived rigorously in the classical limit [S.D. Baalrud and J. Daligault, Physics of Plasmas 26, 8, 082106 (2019)], its quantum generalization is complicated by the uncertainty principle. Here we apply an existing model for the potential of mean force based on the quantum Ornstein-Zernike equation coupled with an average-atom model [C. E. Starrett, High Energy Density Phys. 25, 8 (2017)]. This potential contains correlations due to both Coulomb coupling and exchange, and the collision kernel of the kinetic theory enforces Pauli blocking while allowing for electron diffraction and large-angle collisions. By solving the Uehling-Uhlenbeck equation for electron-ion relaxation rates, we predict the momentum and temperature relaxation time and electrical conductivity of solid density aluminum plasma based on electron-ion collisions. We present results for density and temperature conditions that span the transition from classical weakly-coupled plasma to degenerate moderately-coupled plasma. Our findings agree well with recent quantum molecular dynamics simulations.

## Polarisation control of quasi-monochromatic XUV produced via resonant high harmonic generation

We present a numerical study of resonant high harmonic generation by tin ions in an elliptically-polarised laser field, along with a simple analytical model revealing the mechanism and main features of this process. We show that the yield of the resonant harmonics behaves anomalously with the fundamental field ellipticity: the drop of the resonant harmonic intensity with increasing fundamental ellipticity is much slower than for high harmonics generated through the nonresonant mechanism. Moreover, we study the polarisation properties of high harmonics generated in an elliptically-polarised field and show that the ellipticity of harmonics near the resonance is significantly higher than for those far off resonance. This suggests a prospective way to create a source of quasi-monochromatic coherent XUV with controllable ellipticity, potentially up to circular.

## Dynamic Mode Decomposition of inertial particle caustics in Taylor-Green flow

Inertial particles advected by a background flow can show complex structures. We consider inertial particles in a 2D Taylor-Green (TG) flow and characterize particle dynamics as a function of the particle's Stokes number using the dynamic mode decomposition (DMD) method applied to particle image velocimetry (PIV)-like data. We observe the formation of caustic structures and analyze them using DMD to (a) determine the Stokes number of the particles, and (b) estimate the particle Stokes number composition. Our analysis in this idealized flow will provide useful insight for analyzing inertial particles in more complex or turbulent flows. We propose that the DMD technique can be used to perform a similar analysis on an experimental system.
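The core of exact DMD (Schmid's algorithm) fits in a few lines: form snapshot pairs, project the one-step map onto the leading singular subspace, and read off its eigenvalues. The sketch below uses a known 2x2 linear system as stand-in data rather than PIV measurements, so the recovered DMD eigenvalues can be checked against the true ones.

```python
import numpy as np

def dmd_eigs(snapshots, rank):
    """Exact DMD: eigenvalues of the best-fit linear map x_k -> x_{k+1}."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Project the one-step operator onto the rank-r POD subspace
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# Stand-in data: a trajectory of a known linear system x_{k+1} = A x_k
rng = np.random.default_rng(1)
A = np.array([[0.9, -0.1],
              [0.0,  0.8]])
cols = [rng.normal(size=2)]
for _ in range(19):
    cols.append(A @ cols[-1])
data = np.column_stack(cols)

eigs = dmd_eigs(data, rank=2)  # should recover 0.9 and 0.8
```

For particle data, the magnitudes and phases of the DMD eigenvalues encode decay rates and oscillation frequencies of coherent structures, which is what ties the spectrum back to the Stokes number.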

## Inclusive education and research through African Network of Women in Astronomy and STEM for GIRLS in Ethiopia initiatives

The African Network of Women in Astronomy and the STEM for GIRLS in Ethiopia initiatives have been established with the aim of strengthening the participation of girls and women in astronomy and science in Africa and Ethiopia. We will not be able to achieve the UN Sustainable Development Goals without the full participation of women and girls in all aspects of our society, and without giving all children the same future opportunity to access education independently of their socio-economic status. In this paper both initiatives are briefly introduced.

## Deep learning-based attenuation correction in the image domain for myocardial perfusion SPECT imaging

Objective: In this work, we set out to investigate the accuracy of direct attenuation correction (AC) in the image domain for myocardial perfusion SPECT imaging (MPI-SPECT) using residual (ResNet) and UNet deep convolutional neural networks. Methods: The MPI-SPECT 99mTc-sestamibi images of 99 participants were retrospectively examined. UNet and ResNet networks were trained using SPECT non-attenuation-corrected images as input and CT-based attenuation-corrected SPECT images (CT-AC) as reference. The Chang AC approach, assuming a uniform attenuation coefficient within the body contour, was also implemented. Quantitative and clinical evaluation of the proposed methods was performed considering SPECT CT-AC images of 19 subjects as reference, using the mean absolute error (MAE) and structural similarity index (SSIM) metrics, as well as relevant clinical indices such as total perfusion deficit (TPD). Results: Overall, the deep learning solutions exhibited good agreement with the CT-based AC, noticeably outperforming the Chang method. The ResNet and UNet models resulted in an ME (count) of ??.99±16.72 and ??.41±11.8 and an SSIM of 0.99±0.04 and 0.98±0.05, respectively, while the Chang approach led to an ME and SSIM of 25.52±33.98 and 0.93±0.09. Similarly, the clinical evaluation revealed a mean TPD of 12.78±9.22 and 12.57±8.93 for the ResNet and UNet models, respectively, compared to 12.84±8.63 obtained from the reference SPECT CT-AC images, whereas the Chang approach led to a mean TPD of 16.68±11.24. Conclusion: We evaluated two deep convolutional neural networks for estimating SPECT-AC images directly from the non-attenuation-corrected images. The deep learning solutions exhibited promising potential to generate reliable attenuation-corrected SPECT images without the use of transmission scanning.

## Physics-aware, deep probabilistic modeling of multiscale dynamics in the Small Data regime

The data-based discovery of effective, coarse-grained (CG) models of high-dimensional dynamical systems presents a unique challenge in computational physics, particularly in the context of multiscale problems. The present paper offers a probabilistic perspective that simultaneously identifies predictive, lower-dimensional CG variables and their dynamics. We make use of the expressive ability of deep neural networks to represent the right-hand side of the CG evolution law. Furthermore, we demonstrate how domain knowledge that is very often available in the form of physical constraints (e.g. conservation laws) can be incorporated through the novel concept of virtual observables. Such constraints, apart from leading to physically realistic predictions, can significantly reduce the requisite amount of training data, and hence the number of computationally expensive multiscale simulations required (Small Data regime). The proposed state-space model is trained using probabilistic inference tools and, in contrast to several other techniques, does not require the prescription of a fine-to-coarse (restriction) projection nor time-derivatives of the state variables. The formulation adopted is capable of quantifying predictive uncertainty as well as of reconstructing the evolution of the full, fine-scale system, which allows the quantities of interest to be selected a posteriori. We demonstrate the efficacy of the proposed framework in a high-dimensional system of moving particles.

## A Grid-free Approach for Simulating Sweep and Cyclic Voltammetry

We present a new computational approach to simulate linear sweep and cyclic voltammetry experiments that does not require a discretized spatial grid to quantify diffusion. By coupling a Green's function solution to a standard implicit ordinary differential equation solver, we are able to simulate current and redox species concentrations using only a small grid in time. As a result, where benchmarking is possible, we find that the current method is faster than (and quantitatively identical to) established techniques. The present algorithm should help open the door to studying adsorption effects in inner-sphere electrochemistry.

## Intermolecular vibrational states far above the van der Waals minimum: combination bands of the polar N2O dimer

Infrared combination bands of the polar isomer of the N2O dimer are observed for the first time, using a tunable infrared laser source to probe a pulsed slit-jet supersonic expansion in the N2O nu1 region (~2240 cm-1). One band involves the torsional (out-of-plane) intermolecular mode and yields a torsional frequency of 19.83 cm-1 if associated with the out-of-phase fundamental (N2O nu1) vibration of the N2O monomers in the dimer. The other band, which is highly perturbed, yields an intermolecular in-plane geared bend frequency of 22.74 cm-1. The results are compared with high level ab initio calculations. The less likely alternate assignment to the in-phase fundamental would give torsional and geared bend frequencies of 17.25 and 20.16 cm-1, respectively.

## Performance of the diamond-based beam-loss monitor system of Belle II

We designed, constructed and have been operating a system based on single-crystal synthetic diamond sensors, to monitor the beam losses at the interaction region of the SuperKEKB asymmetric-energy electron-positron collider. The system records the radiation dose-rates in positions close to the inner detectors of the Belle II experiment, and protects both the detector and accelerator components against destructive beam losses, by participating in the beam-abort system. It also provides complementary information for the dedicated studies of beam-related backgrounds. We describe the performance of the system during the commissioning of the accelerator and during the first physics data taking.

## Point Cloud Transformers applied to Collider Physics

Methods for processing point cloud information have seen a great success in collider physics applications. One recent breakthrough in machine learning is the usage of Transformer networks to learn semantic relationships between sequences in language processing. In this work, we apply a modified Transformer network called Point Cloud Transformer as a method to incorporate the advantages of the Transformer architecture to an unordered set of particles resulting from collision events. To compare the performance with other strategies, we study jet-tagging applications for highly-boosted particles.

## Persistent individual bias in a voter model with quenched disorder

Many theoretical studies of the voter model (or variations thereupon) involve order parameters that are population-averaged. While enlightening, such quantities may obscure important statistical features that are only apparent on the level of the individual. In this work, we ask which factors contribute to a single voter maintaining a long-term statistical bias for one opinion over the other in the face of social influence. To this end, a modified version of the network voter model is proposed, which also incorporates quenched disorder in the interaction strengths between individuals and the possibility of antagonistic relationships. We find that a sparse interaction network and heterogeneity in interaction strengths give rise to the possibility of arbitrarily long-lived individual biases, even when there is no population-averaged bias for one opinion over the other. This is demonstrated by calculating the eigenvalue spectrum of the weighted network Laplacian using the theory of sparse random matrices.
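The spectral calculation mentioned at the end — the eigenvalue spectrum of a weighted network Laplacian — can be sketched with dense linear algebra for a small network. The parameters below are illustrative, and negative (antagonistic) weights are omitted for simplicity; the paper's sparse-random-matrix theory targets the large-network limit.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 0.1  # number of voters and sparse connection probability

# Symmetric adjacency with quenched, heterogeneous interaction strengths
mask = np.triu(rng.random((n, n)) < p, k=1)
W = np.where(mask, rng.exponential(1.0, size=(n, n)), 0.0)
W = W + W.T

# Weighted graph Laplacian L = D - W; its spectrum sets relaxation times
L = np.diag(W.sum(axis=1)) - W
eigvals = np.sort(np.linalg.eigvalsh(L))  # real and non-negative
```

Eigenvalues close to zero correspond to slowly relaxing opinion modes, which is where arbitrarily long-lived individual biases can hide even when the population-averaged bias vanishes.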

###### Popular Physics

###### `Oumuamua is not Artificial

I summarize evidence against the hypothesis that `Oumuamua is the artificial creation of an advanced civilization. An appendix discusses the flaws and inconsistencies of the "Breakthrough" proposal for laser acceleration of spacecraft to semi-relativistic speeds. Reality is much more challenging, and interesting.

###### The Newcomb--Benford law. Scale invariance and a simple Markov process based on it

The Newcomb-Benford law, also known as the first-digit law, gives the probability distribution associated with the first digit of a dataset, so that, for example, the first significant digit has a probability of 30.1% of being 1 and 4.58% of being 9. This law can be extended to the second and subsequent significant digits. In this article, an introduction to the discovery of the law, its derivation from the scale invariance property, as well as some applications and examples, are presented. Additionally, a simple model of a Markov process inspired by scale invariance is proposed. Within this model, it is proved that the probability distribution irreversibly converges to the Newcomb-Benford law, in analogy to the irreversible evolution towards equilibrium of physical systems in thermodynamics and statistical mechanics.
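The first-digit probabilities quoted above come from the closed form P(d) = log10(1 + 1/d); a couple of lines verify the 30.1% and 4.58% figures:

```python
import math

def benford_pmf(d):
    """Probability that the leading significant digit equals d (1..9)."""
    return math.log10(1 + 1 / d)

probs = {d: benford_pmf(d) for d in range(1, 10)}
# probs[1] ~ 0.301 and probs[9] ~ 0.0458; the nine values sum to 1,
# since the product of (d+1)/d over d = 1..9 telescopes to 10.
```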

###### Classical Physics

###### A Thermodynamic Framework for Additive Manufacturing, using Amorphous Polymers, Capable of Predicting Residual Stress, Warpage and Shrinkage

A thermodynamic framework has been developed for a class of amorphous polymers used in fused deposition modeling (FDM), in order to predict the residual stresses and the accompanying distortion of the geometry of the printed part (warping). When a polymeric melt is cooled, the inhomogeneous distribution of temperature causes spatially varying volumetric shrinkage resulting in the generation of residual stresses. Shrinkage is incorporated into the framework by introducing an isotropic volumetric expansion/contraction in the kinematics of the body. We show that the parameter for shrinkage also appears in the systematically derived rate-type constitutive relation for the stress. The solidification of the melt around the glass transition temperature is emulated by drastically increasing the viscosity of the melt. In order to illustrate the usefulness and efficacy of the derived constitutive relation, we consider four ribbons of polymeric melt stacked on each other such as those extruded using a flat nozzle: each layer laid instantaneously and allowed to cool for one second before another layer is laid on it. Each layer cools, shrinks and warps until a new layer is laid, at which time the heat from the newly laid layer flows and heats up the bottom layers. The residual stresses of the existing and newly laid layers readjust to satisfy equilibrium. Such mechanical and thermal interactions amongst layers result in a complex distribution of residual stresses. The plane strain approximation predicts nearly equibiaxial tensile stress conditions in the core region of the solidified part, implying that a pre-existing crack in that region is likely to propagate and cause failure of the part during service. The free-end of the interface between the first and the second layer is subjected to the largest magnitude of combined shear and tension in the plane with a propensity for delamination.

###### Acoustical characteristics of segmented plates with contact interfaces

The possibility of shifting sound energy from lower to higher frequency bands is investigated. The system configuration considered is a segmented structure having non-linear stiffness characteristics. It is proposed here that such a frequency-shifting mechanism could complement metamaterial concepts for mass-efficient sound barriers. The acoustical behavior of the material system was studied through a representative two-dimensional model consisting of a segmented plate with a contact interface. Multiple harmonic peaks were observed in response to a purely single frequency excitation, and the strength of the response was found to depend on the degree of non-linearity introduced. The lower and closer an excitation frequency was to the characteristic resonance frequencies of the base system, the stronger was the predicted higher harmonic response. The broadband sound transmission loss of these systems has also been calculated and the low frequency sound transmission loss was found to increase as the level of the broadband incident sound field increased. The present findings support the feasibility of designing material systems that transfer energy from lower frequency bands, where a sound barrier is less efficient, to higher bands where energy is more readily dissipated.

###### Biological Physics

###### Predictive Factors of Kinematics in Traumatic Brain Injury from Head Impacts Based on Statistical Interpretation

Brain tissue deformation resulting from head impacts is primarily caused by rotation and can lead to traumatic brain injury. To quantify brain injury risk based on measurements of kinematics on the head, finite element (FE) models and various brain injury criteria based on different factors of these kinematics have been developed, but the contribution of different kinematic factors has not been comprehensively analyzed across different types of head impacts in a data-driven manner. To better design brain injury criteria, the predictive power of rotational kinematics factors, which differ in 1) the derivative order (angular velocity, angular acceleration, angular jerk), 2) the direction, and 3) the power (e.g., square-rooted, squared, cubic) of the angular velocity, was analyzed based on different datasets including laboratory impacts, American football, mixed martial arts (MMA), NHTSA automobile crashworthiness tests and NASCAR crash events. Ordinary least squares regressions were built from kinematics factors to the 95% maximum principal strain (MPS95), and we compared zero-order correlation coefficients, structure coefficients, commonality analysis, and dominance analysis. The angular acceleration, the magnitude, and the first power factors showed the highest predictive power for the majority of impacts, including laboratory and American football impacts, with few exceptions (angular velocity for MMA and NASCAR impacts). The predictive power of rotational kinematics in three directions (x: posterior-to-anterior, y: left-to-right, z: superior-to-inferior) varied with different sports and types of head impacts.
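The single-factor comparison idea can be sketched as follows. The data here are synthetic stand-ins: the coefficients, ranges, and noise level are invented purely to illustrate comparing zero-order predictive power across factors, and are not taken from the paper's datasets:

```python
import numpy as np

# Synthetic stand-in data; coefficients and ranges are invented purely to
# illustrate the single-factor comparison, not taken from the paper.
rng = np.random.default_rng(0)
n = 2000
ang_vel = rng.uniform(5, 40, n)        # peak angular velocity (rad/s)
ang_acc = rng.uniform(500, 5000, n)    # peak angular acceleration (rad/s^2)
mps95 = 2e-5 * ang_acc + 5e-4 * ang_vel + rng.normal(0, 0.005, n)

def r_squared(x, y):
    """Zero-order predictive power of a single-factor OLS fit."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

# Factors differing in derivative order and in the power of the factor.
scores = {
    "ang_vel (1st power)": r_squared(ang_vel, mps95),
    "ang_acc (1st power)": r_squared(ang_acc, mps95),
    "ang_acc (squared)": r_squared(ang_acc**2, mps95),
}
print(scores)
```

In this toy setup the factor that dominates the generated strain (angular acceleration, first power) also scores the highest R², mirroring the kind of ranking the paper performs with its richer structure-coefficient, commonality, and dominance analyses.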

###### Chain ordering of phospholipids in membranes containing cholesterol: What matters?

Cholesterol (CHOL) drives lipid segregation and is thus a key player in the formation of lipid rafts and, consequently, in the ability of a cell to, e.g., enable selective agglomeration of proteins. The lipid segregation is driven by cholesterol's affinity for saturated lipids, which is directly related to the ability of cholesterol to order the individual phospholipid (PL) acyl chains. In this work, Molecular Dynamics simulations of DPPC (dipalmitoylphosphatidylcholine, a saturated lipid) and DLiPC (dilinoleoylphosphatidylcholine, an unsaturated lipid) mixtures with cholesterol are used to elucidate the underlying mechanisms of the cholesterol ordering effect. To this end, all enthalpic contributions experienced by the PL molecules are recorded as a function of the PL's acyl chain order. This involves the PL-PL and PL-cholesterol interactions, the interaction of the PLs with water, and the interleaflet interaction. This systematic analysis allows one to unravel differences between saturated and unsaturated lipids in terms of the different interaction factors. It turns out that cholesterol's impact on chain ordering stems not only from direct interactions with the PLs but is also indirectly present in the other energy contributions. Furthermore, the analysis sheds light on the relevance of the entropic contributions related to the degrees of freedom of the acyl chain.
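The acyl chain order referred to above is conventionally quantified by an order parameter of the form S = 0.5⟨3cos²θ − 1⟩, with θ the angle between a chain bond and the membrane normal. A minimal sketch, assuming bond vectors have already been extracted from the trajectory:

```python
import numpy as np

def chain_order_parameter(bond_vecs, normal=(0.0, 0.0, 1.0)):
    """Acyl-chain order parameter S = 0.5*<3cos^2(theta) - 1>, where theta
    is the angle between each chain bond vector and the membrane normal."""
    n = np.asarray(normal) / np.linalg.norm(normal)
    b = np.asarray(bond_vecs, dtype=float)
    cos_t = (b @ n) / np.linalg.norm(b, axis=1)
    return 0.5 * np.mean(3 * cos_t**2 - 1)

# Limiting cases: bonds along the normal -> S = 1; bonds in-plane -> S = -0.5.
print(chain_order_parameter([[0, 0, 1.5], [0, 0, 1.5]]))   # -> 1.0
print(chain_order_parameter([[1.0, 0, 0], [0, 1.0, 0]]))   # -> -0.5
```

Cholesterol's ordering effect shows up as a shift of this parameter toward 1 for the saturated chains it packs against.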

###### The Spanning Tree Model for the Assembly Kinetics of RNA Viruses

We present a simple kinetic model for the assembly of small single-stranded RNA viruses that can be used to carry out analytical packaging contests between different types of RNA molecules. The RNA selection mechanism is purely kinetic and based on small differences between the assembly energy profiles. RNA molecules that win these packaging contests are characterized by having a minimum "Maximum Ladder Distance" and a maximum "Wrapping Number". The former is a topological invariant that measures the "branchiness" of the genome molecule while the latter measures the ability of the genome molecule to maximally associate with the capsid proteins. The model can also be used to study the applicability of the theory of nucleation and growth to viral assembly, which breaks down with increasing strength of the RNA-protein interaction.
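If the stems ("ladders") of a folded RNA are viewed as nodes of a tree, the maximum ladder distance behaves like a longest path through that tree, so a branchier fold has a smaller value. The sketch below illustrates that analogy only with a generic tree-diameter computation; it is not the authors' implementation:

```python
from collections import deque

def tree_diameter(adj):
    """Diameter of a tree by double BFS; a stand-in for the maximum ladder
    distance when nodes represent the stems ('ladders') of an RNA fold."""
    def farthest(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        node = max(dist, key=dist.get)
        return node, dist[node]

    a, _ = farthest(next(iter(adj)))
    _, d = farthest(a)
    return d

# A star-like ("branchy") fold vs. a linear fold with the same stem count:
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(tree_diameter(star), tree_diameter(chain))   # -> 2 4
```

The star-like fold has the smaller diameter, matching the abstract's claim that winners of packaging contests minimize the Maximum Ladder Distance, i.e. are maximally branched.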

###### Accelerator Physics

###### Study on multi-fold bunch splitting in a high-intensity medium-energy proton synchrotron

Bunch splitting is an RF manipulation method of changing the bunch structure, bunch numbers and bunch intensity in the high-intensity synchrotrons that serve as the injector for a particle collider. An efficient way to realize bunch splitting is to use the combination of different harmonic RF systems, such as the two-fold bunch splitting of a bunch with a combination of fundamental harmonic and doubled harmonic RF systems. The two-fold bunch splitting and three-fold bunch splitting methods have been experimentally verified and successfully applied to the LHC/PS. In this paper, a generalized multi-fold bunch splitting method is given. The five-fold bunch splitting method using specially designed multi-harmonic RF systems was studied and tentatively applied to the medium-stage synchrotron (MSS), the third accelerator of the injector chain of the Super Proton-Proton Collider (SPPC), to mitigate the pileup effects and collective instabilities of a single bunch in the SPPC. The results show that the five-fold bunch splitting is feasible and both the bunch population distribution and longitudinal emittance growth after the splitting are acceptable, e.g., a few percent in the population deviation and less than 10% in the total emittance growth.

###### Space-charge effects in low-energy flat-beam transforms

Flat-beam transforms (FBTs) provide a technique for controlling the emittance partitioning between the beam's two transverse dimensions. To date, nearly all FBT studies have been in regimes where the beam's own space-charge effects can be ignored, such as in applications with high-brightness electron linacs where the transform occurs at high, relativistic, energies. Additionally, FBTs may provide a revolutionary path to high power generation at high frequencies in vacuum electron devices where the beam emittance is currently becoming a limiting factor, which is the focus of this paper. Electron beams in vacuum electron devices operate both at a much lower energy and a much higher current than in accelerators and the beam's space charge forces can no longer be ignored. Here we analyze the effects of space-charge in FBTs and show there are both linear and nonlinear forces and effects. The linear effects can be compensated by retuning the FBT and by adding additional quadrupole elements. The nonlinear effects lead to an ultimate dilution of the lower recovered emittance and will lead to an eventual power limitation for high-frequency traveling-wave tubes and other vacuum electron devices.

###### Adaptive deep learning for time-varying systems with hidden parameters: Predicting changing input beam distributions of compact particle accelerators

Machine learning (ML) tools such as encoder-decoder deep convolutional neural networks (CNN) are able to extract relationships between inputs and outputs of large complex systems directly from raw data. For time-varying systems the predictive capabilities of ML tools degrade as the systems are no longer accurately represented by the data sets with which the ML models were trained. Re-training is possible, but only if the changes are slow and if new input-output training data measurements can be made online non-invasively. In this work we present an approach to deep learning for time-varying systems in which adaptive feedback based only on available system output measurements is applied to encoded low-dimensional dense layers of encoder-decoder type CNNs. We demonstrate our method in developing an inverse model of a complex charged particle accelerator system, mapping output beam measurements to input beam distributions while both the accelerator components and the unknown input beam distribution quickly vary with time. We demonstrate our results using experimental measurements of the input and output beam distributions of the HiRES ultra-fast electron diffraction (UED) microscopy beam line at Lawrence Berkeley National Laboratory. We show how our method can be used to aid both physics and ML-based surrogate online models to provide non-invasive beam diagnostics and we also demonstrate how our method can be used to automatically track the time varying quantum efficiency map of a particle accelerator's photocathode.
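The core idea, feedback on the encoded latent variables using only output measurements, can be caricatured with a frozen linear "decoder". Everything below (shapes, values, learning rate) is invented for illustration; the paper's decoder is the trained half of an encoder-decoder CNN:

```python
import numpy as np

# Frozen linear "decoder" standing in for the decoder half of a trained
# encoder-decoder CNN; all shapes and values are invented for illustration.
rng = np.random.default_rng(1)
D = rng.normal(size=(6, 2))          # latent dim 2 -> output dim 6
decode = lambda z: D @ z

z_true = np.array([0.8, -0.3])       # latent state of the drifting system
y_meas = decode(z_true)              # the only available measurement

# Adaptive feedback: nudge the low-dimensional latent vector to match the
# measured outputs -- no retraining and no new input-output training pairs.
z = np.zeros(2)
lr = 0.05
for _ in range(2000):
    err = decode(z) - y_meas
    z -= lr * D.T @ err              # gradient of 0.5*||decode(z) - y_meas||^2

print(z)   # the feedback tracks the drifting latent state
```

Because only the low-dimensional dense layer is adjusted, the tracking remains cheap and non-invasive even as the underlying system drifts, which is the property the paper exploits.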

###### Atomic Physics

###### P, T-odd Faraday rotation in intracavity absorption spectroscopy with molecular beam as a possible way to improve the sensitivity of the search for the time reflection noninvariant effects in nature

The present constraint on the space parity (P) and time reflection invariance (T) violating electron electric dipole moment (eEDM) is based on the observation of electron spin precession in an external electric field using the ThO molecule. We propose an alternative approach: observation of the P, T-odd Faraday effect in an external electric field using a cavity-enhanced polarimetric scheme in combination with a molecular beam crossing the cavity. Our theoretical simulation of the proposed experiment with PbF and ThO molecular beams shows that the present constraint on the eEDM can in principle be improved by a few orders of magnitude.

###### Observation of Microwave Shielding of Ultracold Molecules

Harnessing the potential wide-ranging quantum science applications of molecules will require control of their interactions. Here, we use microwave radiation to directly engineer and tune the interaction potentials between ultracold calcium monofluoride (CaF) molecules. By merging two optical tweezers, each containing a single molecule, we probe collisions in three dimensions. The correct combination of microwave frequency and power creates an effective repulsive shield, which suppresses the inelastic loss rate by a factor of six, in agreement with theoretical calculations. The demonstrated microwave shielding shows a general route to the creation of long-lived, dense samples of ultracold molecules and evaporative cooling.

###### Synthetic dimension-induced conical intersections in Rydberg molecules

We observe a series of conical intersections in the potential energy curves governing both the collision between a Rydberg atom and a ground-state atom and the structure of Rydberg molecules. By employing the electronic energy of the Rydberg atom as a synthetic dimension we circumvent the von Neumann-Wigner theorem. These conical intersections can occur when the Rydberg atom's quantum defect is similar in size to the electron–ground-state atom scattering phase shift divided by π, a condition satisfied in several commonly studied atomic species. The conical intersections have an observable consequence in the rate of ultracold l-changing collisions of the type Rb(nf) + Rb(5s) → Rb(nl > 3) + Rb(5s). In the vicinity of a conical intersection, this rate is strongly suppressed, and the Rydberg atom becomes nearly transparent to the ground-state atom.

###### Applied Physics

###### Recoil Implantation Using Gas-Phase Precursor Molecules

Ion implantation underpins a vast range of devices and technologies that require precise control over the physical, chemical, electronic, magnetic and optical properties of materials. A variant termed recoil implantation - in which a precursor is deposited onto a substrate as a thin film and implanted via momentum transfer from incident energetic ions - has a number of compelling advantages, particularly when performed using an inert ion nano-beam [Fröch et al., Nat Commun 11, 5039 (2020)]. However, a major drawback of this approach is that the implant species are limited to the constituents of solid thin films. Here we overcome this limitation by demonstrating recoil implantation using gas-phase precursors. Specifically, we fabricate nitrogen-vacancy (NV) color centers in diamond using an Ar ion beam and the nitrogen-containing precursor gases N2, NH3 and NF3. Our work expands the applicability of recoil implantation to most of the periodic table, and to applications in which thin film deposition or removal is impractical.

###### Synchronous Inductor Switched Energy Extraction Circuits for Triboelectric Nanogenerator

Triboelectric nanogenerator (TENG), a class of mechanical-to-electrical energy transducers, has emerged as a promising solution to self-power Internet of Things (IoT) sensors, wearable electronics, etc. The use of synchronous switched energy extraction circuits (EECs) as an interface between a TENG and a battery load can deliver a multi-fold energy gain over simple Full Wave Rectification (FWR). This paper presents a detailed analysis of the Parallel and Series Synchronous Switched Harvesting on Inductor (P-SSHI and S-SSHI) EECs to derive the energy delivered to the battery load and compare it with the standard FWR (a third circuit) in a common analytical framework, under both realistic and ideal conditions. Further, the optimal value of the battery load that maximizes the output, and the upper bound beyond which charging is not feasible, are derived for all three considered circuits. These closed-form results, derived with general TENG electrical parameters and first-order circuit non-idealities, shed light on the physics of the modeling and guide the choice and design of EECs for any given TENG. The derived analytical results are verified against PSpice-based simulations as well as experimentally measured values.

###### Compensation of Multicore Fiber Skew Effects for Radio over Fiber mmWave Antenna Beamforming

In 5G networks, a Radio over Fiber architecture utilizing multicore fibers can be adopted for the transmission of mmWave signals feeding phased array antennas. The mmWave signals undergo phase shifts imposed by optical true time delay networks to provide squint-free beams. Multicore fibers are used to transfer the phase-shifted optical signals. However, the intercore static skew of these fibers, if not compensated, distorts the radiation pattern. We propose an efficient method to compensate the differential delays without full equalization of the transmission path lengths, reducing the power loss and complexity. Statistical analysis shows that, regardless of the skew distribution, the frequency response can be estimated with respect to the rms skew delays. Simulation analysis of the complete Radio over Fiber and RF link validates the method.
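A minimal sketch of why the rms skew, rather than the full distribution, controls the frequency response: the coherent sum over cores with residual delays is compared against a small-skew quadratic approximation. The carrier frequency and skew magnitude are illustrative assumptions, not values from the paper:

```python
import numpy as np

def beam_response(freq_hz, skews_s):
    """Magnitude of the coherent sum over cores with residual skew delays."""
    return abs(np.exp(-2j * np.pi * freq_hz * skews_s).mean())

rng = np.random.default_rng(2)
f = 28e9        # assumed 28 GHz mmWave carrier (illustrative)
rms = 1e-12     # assumed 1 ps residual rms skew (illustrative)
gauss = rng.normal(0, rms, 10000)
unif = rng.uniform(-np.sqrt(3) * rms, np.sqrt(3) * rms, 10000)  # same rms

# Small-skew approximation depends only on the rms value, not the shape:
approx = 1 - 2 * (np.pi * f * rms) ** 2
print(beam_response(f, gauss), beam_response(f, unif), approx)
```

Gaussian and uniform skews with the same rms give essentially the same response, consistent with the distribution-free estimate quoted in the abstract.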

###### Atmospheric and Oceanic Physics

###### Sub-seasonal forecasting with a large ensemble of deep-learning weather prediction models

We present an ensemble prediction system using a Deep Learning Weather Prediction (DLWP) model that recursively predicts key atmospheric variables with six-hour time resolution. This model uses convolutional neural networks (CNNs) on a cubed-sphere grid to produce global forecasts. The approach is computationally efficient, requiring just three minutes on a single GPU to produce a 320-member set of six-week forecasts at 1.4° resolution. Ensemble spread is primarily produced by randomizing the CNN training process to create a set of 32 DLWP models with slightly different learned weights. Although our DLWP model does not forecast precipitation, it does forecast total column water vapor, and it gives a reasonable 4.5-day deterministic forecast of Hurricane Irma. In addition to simulating mid-latitude weather systems, it spontaneously generates tropical cyclones in a one-year free-running simulation. Averaged globally and over a two-year test set, the ensemble mean RMSE retains skill relative to climatology beyond two weeks, with anomaly correlation coefficients remaining above 0.6 through six days. Our primary application is subseasonal-to-seasonal (S2S) forecasting at lead times from two to six weeks. Current forecast systems have low skill in predicting one- or two-week-average weather patterns at S2S time scales. The continuous ranked probability score (CRPS) and the ranked probability skill score (RPSS) show that the DLWP ensemble is only modestly inferior in performance to the European Centre for Medium-Range Weather Forecasts (ECMWF) S2S ensemble over land at lead times of 4 and 5-6 weeks. At shorter lead times, the ECMWF ensemble performs better than DLWP.
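The CRPS mentioned above has a standard empirical form for a finite ensemble, E|X − y| − ½E|X − X′|, where X, X′ are independently drawn ensemble members and y the verifying observation. A minimal sketch:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS of a scalar ensemble forecast:
    E|X - y| - 0.5 * E|X - X'| over ensemble members X, X'."""
    m = np.asarray(members, dtype=float)
    return np.abs(m - obs).mean() - 0.5 * np.abs(m[:, None] - m[None, :]).mean()

# A collapsed (deterministic) ensemble reduces CRPS to absolute error,
# while spread among members reduces the penalty for a near-miss.
print(crps_ensemble([3.0, 3.0, 3.0], 5.0))   # -> 2.0
print(crps_ensemble([4.0, 6.0], 5.0))        # -> 0.5
```

The second term rewards ensemble spread, which is why a well-dispersed 320-member ensemble can outscore a sharper but overconfident forecast.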

###### Can Existing Theory Predict the Response of Tropical Cyclone Intensity to Idealized Landfall?

Tropical cyclones cause significant inland hazards, including wind damage and freshwater flooding, that depend strongly on how storm intensity evolves at and after landfall. Existing theoretical predictions for the time-dependent and equilibrium response of storm intensity have been tested over the open ocean but have not yet been applied to storms after landfall. Recent work examined the transient response of the tropical cyclone low-level wind field to instantaneous surface roughening or drying in idealized axisymmetric f-plane simulations. Here, experiments testing combined surface roughening and drying with varying magnitudes of each are used to test theoretical predictions for the intensity response. The transient response to combined surface forcings can be reproduced by the product of their individual responses, in line with traditional potential intensity theory. Existing intensification theory is generalized to weakening and found capable of reproducing the time-dependent inland intensity decay. The initial (0-10 min) rapid decay of near-surface wind caused by surface roughening is not captured by existing theory but can be reproduced by a simple frictional spin-down model, in which the decay rate is a function of surface roughness. Finally, the theory is shown to compare well with the prevailing empirical decay model for real-world storms. Overall, results indicate the potential for existing theory to predict how tropical cyclone intensity evolves after landfall.
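The frictional spin-down idea can be caricatured by a quadratic-drag decay law. The closed form below is a generic sketch under that assumption, with illustrative drag coefficients and layer depth, not the paper's calibrated model:

```python
import numpy as np

def spindown(v0, t, cd, h=1000.0):
    """Closed-form solution of the quadratic-drag spin-down model
    dV/dt = -Cd*V^2/h (drag coefficient and layer depth are illustrative)."""
    return v0 / (1 + cd * v0 * t / h)

v0 = 50.0                          # m/s near-surface wind at landfall
t = np.arange(0, 601, 60.0)        # the first 10 minutes, in seconds
rough = spindown(v0, t, cd=0.01)   # rougher land surface -> faster decay
smooth = spindown(v0, t, cd=0.003)
print(rough[-1], smooth[-1])
```

Integrating dV/dt = −C_d V²/h gives V(t) = V₀/(1 + C_d V₀ t/h), so the initial decay rate scales directly with the roughness-dependent drag coefficient, capturing the rapid 0-10 min wind drop that full intensity theory misses.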

###### Universal Wind Profile for Conventionally Neutral Atmospheric Boundary Layers

Conventionally neutral atmospheric boundary layers (CNBLs), which are characterized by zero surface potential temperature flux and are capped by an inversion of potential temperature, are frequently encountered in nature. Therefore, predicting the wind speed profiles of CNBLs is relevant for weather forecasting, climate modeling, and wind energy applications. However, previous attempts to predict the velocity profiles in CNBLs have had limited success due to the complicated interplay between buoyancy, shear, and Coriolis effects. Here, we utilize ideas from classical Monin-Obukhov similarity theory in combination with a local scaling hypothesis to derive an analytic expression for the stability correction function ψ = -c_ψ (z/L)^{1/2} for CNBLs, where c_ψ = 4.2 is an empirical constant, z is the height above ground, and L is the local Obukhov length based on the potential temperature flux at that height. An analytic expression for this flux is also derived using dimensional analysis and a perturbation method approach. We find that the derived profile agrees very well with the velocity profile in the entire boundary layer obtained from high-fidelity large eddy simulations of typical CNBLs.
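Under the usual Monin-Obukhov form U(z) = (u*/κ)[ln(z/z₀) − ψ], a correction of the form ψ = −c_ψ √(z/L) can be evaluated directly. The friction velocity, roughness length, and Obukhov length below are illustrative values, not taken from the paper:

```python
import numpy as np

KAPPA = 0.4   # von Karman constant

def wind_profile(z, ustar, z0, L, c_psi=4.2):
    """Log law with a CNBL stability correction psi = -c_psi*sqrt(z/L):
    U(z) = (ustar/KAPPA) * (ln(z/z0) - psi)."""
    psi = -c_psi * np.sqrt(z / L)
    return (ustar / KAPPA) * (np.log(z / z0) - psi)

z = np.array([10.0, 50.0, 100.0])            # heights (m), illustrative
u_log = (0.3 / KAPPA) * np.log(z / 0.1)      # neutral log law for reference
u_cnbl = wind_profile(z, ustar=0.3, z0=0.1, L=500.0)
print(u_cnbl - u_log)    # the correction grows with height
```

Because ψ is negative, the correction adds to the neutral log law and grows as √z, which is the square-root departure the derivation predicts.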

###### General Physics

###### Minimum amount of energy required for the transference of electrons between atoms

The purpose of this article is to show a different way to calculate the transfer of electrons between atoms. It can be used as a basis for calculating the minimum electrical energy, expressed here in joules, necessary for the transfer of electrons. A table is provided that lists the electric transfer required for most materials. In this article we attempt to generalize the required energy; for a result in a specific field or material, the reader can consult the references and collaborating workers.

###### What sunspots are whispering about covid-19?

Several studies point to the antimicrobial effects of extremely low frequency (ELF) electromagnetic fields. Such fields have accompanied life from the very beginning, and it is possible that they played a significant role in its emergence and evolution. However, the literature on the biological effects of ELF electromagnetic fields is controversial, and we still lack an understanding of the complex mechanisms that make such effects, observed in many experiments, possible. The Covid-19 pandemic has shown how fragile we are in the face of powerful processes operating in the biosphere. We believe that understanding the role of ELF electromagnetic fields in regulating the biosphere is important in our fight against Covid-19, and research in this direction should be intensified.

###### A new proposal to the extension of complex numbers

We propose an extension of the complex numbers as a new domain in which new concepts, such as negative and imaginary probabilities, can be defined. The unit of the new space is defined as the solution of an equation unsolvable in the complex domain: |z|² = z*z = i. The existence of an unsolvable equation in a closed domain such as the complex numbers leads to the definition of a new type of multiplication, so as not to violate the fundamental theorem of algebra. The definition of the new space also requires the inclusion of a new mapping operation, so that the absolute value of the new extended number is real and positive. We study properties of the vector space such as positive-definiteness, linearity, and conjugate symmetry.

###### Geophysics

###### Bayesian Poroelastic Aquifer Characterization from InSAR Surface Deformation Data Part II: Quantifying the Uncertainty

Uncertainty quantification of groundwater (GW) aquifer parameters is critical for efficient management and sustainable extraction of GW resources. These uncertainties are introduced by the data, model, and prior information on the parameters. Here we develop a Bayesian inversion framework that uses Interferometric Synthetic Aperture Radar (InSAR) surface deformation data to infer the laterally heterogeneous permeability of a transient linear poroelastic model of a confined GW aquifer. The Bayesian solution of this inverse problem takes the form of a posterior probability density of the permeability. Exploring this posterior using classical Markov chain Monte Carlo (MCMC) methods is computationally prohibitive due to the large dimension of the discretized permeability field and the expense of solving the poroelastic forward problem. However, in many partial differential equation (PDE)-based Bayesian inversion problems, the data are only informative in a few directions in parameter space. For the poroelasticity problem, we prove this property theoretically for a one-dimensional problem and demonstrate it numerically for a three-dimensional aquifer model. We design a generalized preconditioned Crank–Nicolson (gpCN) MCMC method that exploits this intrinsic low dimensionality by using a low-rank based Laplace approximation of the posterior as a proposal, which we build scalably. The feasibility of our approach is demonstrated through a real GW aquifer test in Nevada. The inherently two dimensional nature of InSAR surface deformation data informs a sufficient number of modes of the permeability field to allow detection of major structures within the aquifer, significantly reducing the uncertainty in the pressure and the displacement quantities of interest.
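The plain pCN proposal that the gpCN method generalizes keeps the Gaussian prior invariant, so the accept/reject step involves only the data misfit. A one-dimensional toy sketch, where a made-up quadratic misfit stands in for the poroelastic forward model:

```python
import numpy as np

# 1-D toy preconditioned Crank-Nicolson sampler. The Gaussian prior N(0,1)
# is invariant under the proposal, so the accept ratio involves only the
# data-misfit term (here a made-up quadratic standing in for the
# poroelastic forward model).
rng = np.random.default_rng(3)

def misfit(x):
    return 0.5 * ((x - 1.0) / 0.5) ** 2   # hypothetical negative log-likelihood

beta, x = 0.3, 0.0
chain = []
for _ in range(20000):
    prop = np.sqrt(1 - beta**2) * x + beta * rng.normal()   # pCN proposal
    if np.log(rng.uniform()) < misfit(x) - misfit(prop):    # accept/reject
        x = prop
    chain.append(x)
samples = np.array(chain[2000:])
print(samples.mean(), samples.std())   # posterior has mean 0.8, variance 0.2
```

In this conjugate toy case the posterior is known in closed form (mean 0.8, variance 0.2), so the chain can be checked directly; the gpCN variant of the paper replaces the identity prior covariance with a low-rank Laplace approximation of the posterior.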

###### A Joint Inversion-Segmentation approach to Assisted Seismic Interpretation

Structural seismic interpretation and quantitative characterization are historically intertwined processes. The latter provides estimates of properties of the subsurface which can be used to aid structural interpretation alongside the original seismic data and a number of other seismic attributes. In this work, we redefine this process as an inverse problem which tries to jointly estimate subsurface properties (i.e., acoustic impedance) and a piecewise segmented representation of the subsurface based on user-defined macro-classes. By inverting for these quantities simultaneously, the inversion is primed with prior knowledge about the regions of interest, whilst at the same time constraining this belief with the actual seismic measurements. As the proposed functional is separable in the two quantities, these are optimized in an alternating fashion, where each subproblem is solved using a Primal-Dual algorithm. Subsequently, each class is used as input to a workflow which aims to extract the perimeter of the detected shapes and to produce unique horizons. The effectiveness of the proposed method is illustrated through numerical examples on synthetic and field datasets.
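The alternating strategy can be caricatured on a separable least-squares toy functional. This generic sketch omits the Primal-Dual inner solver and the segmentation constraints of the paper; the matrices and dimensions are invented:

```python
import numpy as np

# Alternating minimization of a separable least-squares functional
# J(x, y) = ||A x + B y - d||^2: a generic caricature of alternating between
# the impedance update and the segmentation update (the Primal-Dual inner
# solver and segmentation constraints of the paper are omitted).
rng = np.random.default_rng(5)
A = rng.normal(size=(30, 4))
B = rng.normal(size=(30, 3))
x_true, y_true = rng.normal(size=4), rng.normal(size=3)
d = A @ x_true + B @ y_true          # noise-free synthetic data

x, y = np.zeros(4), np.zeros(3)
for _ in range(2000):
    x = np.linalg.lstsq(A, d - B @ y, rcond=None)[0]   # x-subproblem
    y = np.linalg.lstsq(B, d - A @ x, rcond=None)[0]   # y-subproblem

resid = np.linalg.norm(A @ x + B @ y - d)
print(resid)
```

Each subproblem is solved exactly with the other variable frozen, so the objective is monotonically non-increasing, which is the basic property the joint inversion-segmentation scheme relies on.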

###### Seismic-ionospheric disturbances in ionospheric TEC and plasma parameters associated with the 14 July 2019 Mw 7.2 Laiwui earthquake detected by the GPS and CSES

In this study, through cross-validated analysis of total electron content (TEC) data from the global ionospheric map (GIM) derived from GPS and plasma parameter data recorded by the China Seismo-Electromagnetic Satellite (CSES), signatures of seismic-ionospheric perturbations related to the 14 July 2019 Mw 7.2 Laiwui earthquake were detected. After excluding solar and geomagnetic activity, three positive temporal anomalies were found around the epicenter 1 day, 3 days, and 8 days before the earthquake (14 July 2019), along with a negative anomaly 6 days after the earthquake, which agrees well with the TEC spatial variations in latitude-longitude-time (LLT) maps. To further confirm the anomalies, the ionospheric plasma parameters (electron, O+ and He+ densities) recorded by the Langmuir probe (LAP) and Plasma Analyzer Package (PAP) onboard CSES were analyzed using the moving mean method (MMM), which also showed remarkable enhancements along the orbits around the epicenter on day 2, day 4, and day 7 before the earthquake. To make the investigation more convincing, the disturbed orbits were compared with their four nearest revisiting orbits, which indeed indicates the existence of plasma parameter anomalies associated with the Laiwui earthquake. All these results illustrate that the unusual ionospheric perturbations observed by GPS and CSES are highly associated with the Mw 7.2 Laiwui earthquake, which also strongly indicates the existence of pre-seismic ionospheric anomalies over the earthquake region.
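A simplified stand-in for a moving-mean anomaly test: flag samples that leave a mean ± k·std band built from the preceding window. The window length, threshold, and synthetic TEC series below are illustrative choices, not the paper's exact MMM configuration:

```python
import numpy as np

def moving_mean_anomalies(series, window=27, k=3.0):
    """Flag samples that leave the mean +/- k*std band of the preceding
    window; a simplified stand-in for the moving mean method (MMM).
    Window length and threshold are illustrative choices."""
    flags = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        if abs(series[i] - past.mean()) > k * past.std():
            flags.append(i)
    return flags

rng = np.random.default_rng(4)
tec = 20 + rng.normal(0, 0.5, 60)   # synthetic quiet-time TEC series (TECU)
tec[45] += 5.0                      # injected positive anomaly
flags = moving_mean_anomalies(tec)
print(flags)
```

Building the band only from preceding days mimics the pre-seismic detection setting, where the anomaly must stand out against the recent quiet-time background.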

###### Space Physics

###### Using Parker Solar Probe observations during the first four perihelia to constrain global magnetohydrodynamic models

Parker Solar Probe (PSP) is providing an unprecedented view of the Sun's corona as it progressively dips closer into the solar atmosphere with each solar encounter. Each set of observations provides a unique opportunity to test and constrain global models of the solar corona and inner heliosphere and, in turn, use the model results to provide a global context for interpreting such observations. In this study, we develop a set of global magnetohydrodynamic (MHD) model solutions of varying degrees of sophistication for PSP's first four encounters and compare the results with in situ measurements from PSP, STEREO-A, and Earth-based spacecraft, with the objective of assessing which models perform better or worse. All models were primarily driven by the observed photospheric magnetic field using data from Solar Dynamics Observatory's Helioseismic and Magnetic Imager (HMI) instrument. Overall, we find that there are substantial differences between the model results, both in terms of the large-scale structure of the inner heliosphere during these time periods, as well as in the inferred time-series at various spacecraft. The "thermodynamic" model, which represents the "middle ground" in terms of model complexity, appears to reproduce the observations most closely for all four encounters. Our results also contradict an earlier study that had hinted that the open flux problem may disappear nearer the Sun. Instead, our results suggest that this "missing" solar flux is still missing even at 26.9 Rs, and thus it cannot be explained by interplanetary processes. Finally, the model results were also used to provide a global context for interpreting the localized in situ measurements.

###### Kinetic features for the identification of Kelvin-Helmholtz vortices in *in situ* observations

The identification of the boundaries of Kelvin-Helmholtz vortices in observational data has been addressed by searching for single-spacecraft small-scale signatures. A recent hybrid Vlasov-Maxwell simulation of the Kelvin-Helmholtz instability has pointed out clear kinetic features which uniquely characterize the vortex during both the nonlinear and turbulent stages of the instability. We compare the simulation results with *in situ* observations of Kelvin-Helmholtz vortices by the Magnetospheric MultiScale satellites. We find good agreement between simulation and observations. In particular, the edges of the vortex are associated with strong current sheets, while the center is characterized by a low value of the magnitude of the total current density and a strong deviation of the ion distribution function from a Maxwellian distribution. We also find a significant temperature anisotropy parallel to the magnetic field inside the vortex region and strong agyrotropies near the edges. We suggest that these kinetic features can be useful for the identification of Kelvin-Helmholtz vortices in *in situ* data.

###### Turbulence/wave transmission at an ICME-driven shock observed by Solar Orbiter and Wind

Solar Orbiter observed an interplanetary coronal mass ejection (ICME) event at 0.8 AU on 2020 April 19. The ICME was also observed by Wind at 1 AU on 2020 April 20. An interplanetary shock wave was driven in front of the ICME. We focus on the transmission of the magnetic fluctuations across the shock and analyze the characteristic wave modes of solar wind turbulence near the shock observed by both spacecraft. The ICME event is characterized by a magnetic-helicity-based technique. The shock normal is determined by the magnetic coplanarity method for Solar Orbiter and using a mixed coplanarity approach for Wind. The power spectra of magnetic field fluctuations are generated by applying both a fast Fourier transform and Morlet wavelet analysis. To understand the nature of waves observed near the shock, we use the normalized magnetic helicity as a diagnostic parameter. The wavelet reconstructed magnetic field fluctuation hodograms are used to further study the polarization properties of waves. We find that the ICME-driven shock observed by Solar Orbiter and Wind is a fast forward oblique shock with a more perpendicular shock angle at 1 AU. After the shock crossing, the magnetic field fluctuation power increases. Most of the magnetic field fluctuation power resides in the transverse fluctuations. In the vicinity of the shock, both spacecraft observe right-hand polarized waves in the spacecraft frame. The upstream wave signatures fall in a relatively broad and low-frequency band, which might be attributed to low-frequency MHD waves excited by the streaming particles. For the downstream magnetic wave activity, we find oblique kinetic Alfvén waves with frequencies near the proton cyclotron frequency in the spacecraft frame. The frequency of the downstream waves increases by a factor of 7-10 due to the shock compression and the Doppler effect.
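The normalized magnetic helicity diagnostic can be computed from two transverse field components. A minimal sketch in the Matthaeus-Goldstein spirit (not the authors' pipeline), verified on a circularly polarized test wave; the sign of the result encodes the rotation sense and depends on the component ordering convention:

```python
import numpy as np

def normalized_helicity(by, bz, dt):
    """Reduced normalized magnetic helicity spectrum sigma_m(f) from two
    transverse magnetic field components (Matthaeus-Goldstein-style
    diagnostic; the sign convention depends on the component ordering)."""
    By, Bz = np.fft.rfft(by), np.fft.rfft(bz)
    power = np.abs(By) ** 2 + np.abs(Bz) ** 2
    sigma = 2 * np.imag(np.conj(By) * Bz) / (power + 1e-30)
    return np.fft.rfftfreq(len(by), dt), sigma

# Circularly polarized test wave: |sigma_m| should be ~1 at the wave frequency.
dt, f0 = 0.01, 2.0
t = np.arange(0, 10, dt)
freqs, sigma = normalized_helicity(np.cos(2 * np.pi * f0 * t),
                                   np.sin(2 * np.pi * f0 * t), dt)
peak = sigma[np.argmin(abs(freqs - f0))]
print(peak)
```

Values of |σ_m| near 1 at a given frequency flag nearly circularly polarized wave activity, which is how the helicity spectrum separates the upstream and downstream wave populations discussed above.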

###### History and Philosophy of Physics

###### From Ramanujan to renormalization: the art of doing away with divergences and arriving at physical results

A century ago Srinivasa Ramanujan - the great self-taught Indian genius of mathematics - died, shortly after returning from Cambridge, UK, where he had collaborated with Godfrey Hardy. Ramanujan contributed numerous outstanding results to different branches of mathematics, like analysis and number theory, with a focus on special functions and series. Here we refer to apparently weird values which he assigned to two simple divergent series, $\sum_n n$ and $\sum_n n^3$. These values are sensible, however, as analytic continuations, which correspond to Riemann's ζ-function. Moreover, they have applications in physics: we discuss the vacuum energy of the photon field, from which one can derive the Casimir force, which has been experimentally measured. We also discuss its interpretation, which remains controversial. This is a simple way to illustrate the concept of renormalization, which is vital in quantum field theory.
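The two divergent-series assignments mentioned in this abstract can be made explicit via the analytic continuation of the ζ-function; a standard worked sketch (textbook values, including the ideal-plate Casimir force that the regularized vacuum energy leads to):

```latex
% Ramanujan's assignments as zeta-function values (analytic continuation):
\sum_{n=1}^{\infty} n \;\longrightarrow\; \zeta(-1) = -\tfrac{1}{12},
\qquad
\sum_{n=1}^{\infty} n^{3} \;\longrightarrow\; \zeta(-3) = \tfrac{1}{120}.
% Regularizing the photon vacuum energy in this way yields the Casimir force
% per unit area between ideal conducting plates at separation d:
\frac{F}{A} = -\frac{\pi^{2}\hbar c}{240\, d^{4}}.
```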

###### The Arc of the Data Scientific Universe

In this paper I explore the scaffolding of normative assumptions that supports Sabina Leonelli's implicit appeal to the values of epistemic integrity and the global public good that conjointly animate the ethos of responsible and sustainable data work in the context of COVID-19. Drawing primarily on the writings of sociologist Robert K. Merton, the thinkers of the Vienna Circle, and Charles Sanders Peirce, I make some of these assumptions explicit by telling a longer story about the evolution of social thinking about the normative structure of science from Merton's articulation of his well-known norms (those of universalism, communism, organized skepticism, and disinterestedness) to the present. I show that while Merton's norms and his intertwinement of these with the underlying mechanisms of democratic order provide us with an especially good starting point to explore and clarify the commitments and values of science, Leonelli's broader, more context-responsive, and more holistic vision of the epistemic integrity of data scientific understanding, and her discernment of the global and biospheric scope of its moral-practical reach, move beyond Merton's schema in ways that effectively draw upon important critiques. Stepping past Merton, I argue that a combination of situated universalism, methodological pluralism, strong objectivity, and unbounded communalism must guide the responsible and sustainable data work of the future.

###### Why Do You Think It is a Black Hole?

This paper analyzes the experiment presented in 2019 by the Event Horizon Telescope (EHT) Collaboration that unveiled the first image of the supermassive black hole at the center of galaxy M87. The very first question asked by the EHT Collaboration was: What is the compact object at the center of galaxy M87? Does it have a horizon? Is it a Kerr black hole? In order to answer these questions, the EHT Collaboration first endorsed the working hypothesis that the central object is a black hole described by the Kerr metric, i.e. a spinning Kerr black hole as predicted by classical general relativity. They chose this hypothesis based on previous research and observations of the galaxy M87. After having adopted the Kerr black hole hypothesis, the EHT Collaboration proceeded to test it. They confronted this hypothesis with the data collected in the 2017 EHT experiment. They then compared the Kerr rotating black hole hypothesis with alternative explanations and finally found that their hypothesis was consistent with the data. In this paper, I describe the complex methods used to test the spinning Kerr black hole hypothesis. I conclude this paper with a discussion of the implications of the findings presented here with respect to Hawking radiation.

###### Plasma Physics

###### A Kinetic Model for Electron-Ion Transport in Warm Dense Matter

We present a model for electron-ion transport in Warm Dense Matter that incorporates Coulomb coupling effects into the quantum Boltzmann equation of Uehling and Uhlenbeck through the use of a statistical potential of mean force. Although this model has been derived rigorously in the classical limit [S.D. Baalrud and J. Daligault, Physics of Plasmas 26, 8, 082106 (2019)], its quantum generalization is complicated by the uncertainty principle. Here we apply an existing model for the potential of mean force based on the quantum Ornstein-Zernike equation coupled with an average-atom model [C. E. Starrett, High Energy Density Phys. 25, 8 (2017)]. This potential contains correlations due to both Coulomb coupling and exchange, and the collision kernel of the kinetic theory enforces Pauli blocking while allowing for electron diffraction and large-angle collisions. By solving the Uehling-Uhlenbeck equation for electron-ion relaxation rates, we predict the momentum and temperature relaxation time and electrical conductivity of solid density aluminum plasma based on electron-ion collisions. We present results for density and temperature conditions that span the transition from classical weakly-coupled plasma to degenerate moderately-coupled plasma. Our findings agree well with recent quantum molecular dynamics simulations.

###### Drift reduced Landau fluid model for magnetized plasma turbulence simulations in BOUT++ framework

The drift-reduced Landau fluid six-field turbulence model within the BOUT++ framework has recently been upgraded. In particular, the new model employs a new normalization, adds a volumetric flux-driven source option, a Landau fluid closure for the parallel heat flux, and a Laplacian inversion solver able to capture n=0 axisymmetric mode evolution in realistic tokamak configurations. These improvements substantially extend the model's capability to study a wider range of tokamak edge phenomena, and are essential for building a fully self-consistent edge turbulence model capable of both transient (e.g., ELM, disruption) and transport time-scale simulations.

###### Axisymmetric dynamo action produced by differential rotation, with anisotropic electrical conductivity and anisotropic magnetic permeability

The effect on dynamo action of an anisotropic electrical conductivity conjugated to an anisotropic magnetic permeability is considered. Not only is the dynamo fully axisymmetric, but it requires only a simple differential rotation, which twice challenges the well-established dynamo theory. Stability analysis is conducted entirely analytically, leading to an explicit expression of the dynamo threshold. The results show a competition between the anisotropy of electrical conductivity and that of magnetic permeability, the dynamo effect becoming impossible if the two anisotropies are identical. For isotropic electrical conductivity, Cowling's neutral point argument does imply the absence of an azimuthal component of current density, but does not prevent the dynamo effect as long as the magnetic permeability is anisotropic.

###### Optics

###### Polarisation control of quasi-monochromatic XUV produced via resonant high harmonic generation

We present a numerical study of resonant high harmonic generation by tin ions in an elliptically-polarised laser field, along with a simple analytical model revealing the mechanism and main features of this process. We show that the yield of the resonant harmonics behaves anomalously with the fundamental field ellipticity: the drop of the resonant harmonic intensity with the fundamental ellipticity is much slower than for high harmonics generated through the nonresonant mechanism. Moreover, we study the polarisation properties of high harmonics generated in an elliptically-polarised field and show that the ellipticity of harmonics near the resonance is significantly higher than for those far off the resonance. This suggests a prospective way to create a source of quasi-monochromatic coherent XUV with controllable ellipticity, potentially up to circular polarisation.

###### In-situ diagnostic of femtosecond probes for high resolution ultrafast imaging

Ultrafast imaging is essential in physics and chemistry to investigate the femtosecond dynamics of nonuniform samples or of phenomena with strong spatial variations. It relies on observing the phenomena induced by an ultrashort laser pump pulse using an ultrashort probe pulse at a later time. Recent years have seen the emergence of very successful ultrafast imaging techniques of single non-reproducible events with extremely high frame rates, based on wavelength or spatial frequency encoding. However, further progress in ultrafast imaging towards high spatial resolution is hampered by the lack of characterization of weak probe beams. Because of the difference in group velocities between pump and probe in the bulk of the material, the determination of the absolute pump-probe delay depends on the sample position. In addition, pulse-front tilt is a widespread issue, unacceptable for ultrafast imaging, but conventionally very difficult to evaluate for low-intensity probe pulses. Here we show that a pump-induced micro-grating generated from the electronic Kerr effect provides a detailed in-situ characterization of a weak probe pulse, resolving both issues. Our approach is valid for any transparent medium and for any probe pulse polarization and wavelength. Because it is nondestructive and fast to implement, this in-situ probe diagnostic can be repeated to calibrate experimental conditions, particularly when complex wavelength, spatial frequency or polarization encoding is used. We anticipate that this technique will enable previously inaccessible spatiotemporal imaging in all fields of ultrafast science and high field physics at the micro- and nanoscale.

###### Pulse interactions in weakly nonlinear coherent optical communication links

The intrachannel interaction of pulses in weakly nonlinear coherent optical fiber lines is theoretically investigated. It is shown that the main contribution to the perturbation of the optical field comes from resonant interactions of ordered triplets of pulses. The structure of triplets is determined. The weight contributions of such interactions are calculated. A classification of interactions using Loeschian numbers is proposed. Using computer simulation, the dependence of the average energy of the optical field perturbations on the distance is shown. Based on the performed analysis, an effective algorithm is proposed for assessing the perturbations resulting from intrachannel interaction.

###### Fluid Dynamics

###### Dynamic Mode Decomposition of inertial particle caustics in Taylor-Green flow

Inertial particles advected by a background flow can show complex structures. We consider inertial particles in a 2D Taylor-Green (TG) flow and characterize the particle dynamics as a function of the particle's Stokes number using the dynamic mode decomposition (DMD) method on particle image velocimetry (PIV)-like data. We observe the formation of caustic structures and analyze them using DMD to (a) determine the Stokes number of the particles, and (b) estimate the particle Stokes number composition. Our analysis in this idealized flow will provide useful insight for analyzing inertial particles in more complex or turbulent flows. We propose that the DMD technique can be used to perform a similar analysis on an experimental system.
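For readers unfamiliar with DMD, the core algorithm reduces to a truncated SVD plus a small eigenproblem. A minimal sketch of exact DMD follows; it is illustrative only — the paper's PIV-like preprocessing and the Taylor-Green particle data are not reproduced, and the linear toy system used here is an assumption for demonstration:

```python
import numpy as np

def dmd(X, Xp, r):
    """Exact DMD: eigenvalues/modes of the best-fit linear map Xp ~= A X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]          # rank-r truncation
    Atilde = U.conj().T @ Xp @ Vh.conj().T / s     # A projected onto the POD basis
    eigvals, W = np.linalg.eig(Atilde)             # DMD eigenvalues
    modes = (Xp @ Vh.conj().T / s) @ W             # exact DMD modes
    return eigvals, modes

# Snapshots of a linear system x_{k+1} = A x_k with known eigenvalues 0.9 and 0.5
A = np.diag([0.9, 0.5])
X = np.column_stack([np.linalg.matrix_power(A, k) @ np.ones(2) for k in range(10)])
eigvals, modes = dmd(X[:, :-1], X[:, 1:], r=2)
```

On data generated by a linear map, the DMD eigenvalues recover the map's spectrum exactly; for the nonlinear particle dynamics of the paper they instead characterize the dominant oscillation/growth content of the caustic structures.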

###### Impact of wall modeling on kinetic energy stability for the compressible Navier-Stokes equations

Affordable, high order simulations of turbulent flows on unstructured grids at very high Reynolds numbers require wall models for efficiency. However, different wall models have different accuracy and stability properties. Here, we develop a kinetic energy stability estimate to investigate the stability of wall model boundary conditions. Using this norm, two wall models are studied: a popular equilibrium stress wall model, which is found to be unstable, and the dynamic slip wall model, which is found to be stable. These results are extended to the discrete case using the summation-by-parts (SBP) property of the discontinuous Galerkin method. Numerical tests show that while the equilibrium stress wall model is accurate but unstable, the dynamic slip wall model is inaccurate but stable.
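The SBP property invoked in this abstract can be illustrated with a minimal example (a standard second-order finite-difference SBP operator; the paper's discontinuous Galerkin operators are not given here, so this is purely illustrative): the discrete derivative D = P⁻¹Q satisfies Q + Qᵀ = B = diag(−1, 0, …, 0, 1), which mimics integration by parts and is what underlies discrete kinetic-energy estimates.

```python
import numpy as np

n = 8                      # number of grid points
h = 1.0 / (n - 1)          # uniform grid spacing on [0, 1]

# Diagonal norm (quadrature) matrix P and difference matrix Q
P = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5          # boundary closures

D = np.linalg.solve(P, Q)               # SBP first-derivative operator D = P^{-1} Q

# SBP property: Q + Q^T = diag(-1, 0, ..., 0, 1) (discrete integration by parts)
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)

# The operator is exact for linear functions, boundary rows included
x = np.linspace(0.0, 1.0, n)
assert np.allclose(D @ (2 * x + 1), 2.0)
```

Because the boundary terms are isolated in B, any boundary condition (such as a wall model) enters the energy estimate only through the two endpoint contributions, which is what makes the stability analysis tractable.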

###### The propagation and decay of a coastal vortex on a shelf

A coastal eddy is modelled as a barotropic vortex propagating along a coastal shelf. If the vortex speed matches the phase speed of any coastal trapped shelf wave mode, a shelf wave wake is generated, leading to a flux of energy from the vortex into the wave field. Using a simple shelf geometry, we determine analytic expressions for the wave wake and the leading order flux of wave energy. By considering the balance of energy between the vortex and the wave field, this energy flux is then used to make analytic predictions for the evolution of the vortex speed and radius under the assumption that the vortex structure remains self-similar. These predictions are examined in the asymptotic limit of small rotation rate and shelf slope and tested against numerical simulations. If the vortex speed does not match the phase speed of any shelf wave, steady vortex solutions are expected to exist. We present a numerical approach for finding these nonlinear solutions and examine the parameter dependence of their structure.

###### Physics Education

###### Inclusive education and research through African Network of Women in Astronomy and STEM for GIRLS in Ethiopia initiatives

The African Network of Women in Astronomy and STEM for GIRLS in Ethiopia initiatives have been established with the aim of strengthening the participation of girls and women in astronomy and science in Africa and Ethiopia. We will not be able to achieve the UN Sustainable Development Goals without the full participation of women and girls in all aspects of our society, and without giving all children the same future opportunity to access education independently of their socio-economic status. In this paper both initiatives are briefly introduced.

###### Students' understanding of gravity using the rubber sheet analogy: an Italian experience

General Relativity (GR) represents the most recent theory of gravity, on which all modern astrophysics is based, including some of the most astonishing results of physics research. Nevertheless, its study is limited to university courses, while being ignored at the high school level. To introduce GR in high school, one of the approaches that can be used is the so-called rubber sheet analogy, i.e. comparing space-time to a rubber sheet which deforms under a weight. In this paper we analyze the efficacy of an activity for high school students held at the Department of Mathematics and Physics of Roma Tre University that adopts the rubber sheet analogy to address several topics related to gravity. We present the results of the questionnaires we administered to over 150 Italian high school students who participated in this activity, in order to investigate their understanding of the topics treated.

###### Levitation, oscillations, and wave propagation in a stratified fluid

We present an engaging levitation experiment that students can perform at home or in a simple laboratory using everyday objects. A cork, modified to be slightly denser than water, is placed in a jug containing tap water and coarse kitchen salt delivered at the bottom without stirring. The salt gradually diffuses and establishes a stable density stratification of the water, the bottom layers being denser than the top ones. During the dissolution of the salt, the cork slowly rises to an increasing height, where at any instant its density is balanced by that of the surrounding water. If the cork is gently pushed off its temporary equilibrium position, it experiences a restoring force and starts to oscillate. Students can perform many different measurements of the phenomena involved and tackle non-trivial physical issues related to the behaviour of a macroscopic body immersed in a stratified fluid. Despite its simplicity, this experiment makes it possible to introduce various theoretical concepts of relevance for the physics of the atmosphere and stars, and offers students the opportunity to get acquainted with a simple system that can serve as a model for understanding complex phenomena such as oscillations at the Brunt-Väisälä frequency and the propagation of internal gravity waves in a stratified medium.

###### Medical Physics

###### Deep learning-based attenuation correction in the image domain for myocardial perfusion SPECT imaging

Objective: In this work, we set out to investigate the accuracy of direct attenuation correction (AC) in the image domain for myocardial perfusion SPECT imaging (MPI-SPECT) using two deep convolutional neural networks, ResNet and UNet. Methods: The MPI-SPECT 99mTc-sestamibi images of 99 participants were retrospectively examined. UNet and ResNet networks were trained using SPECT non-attenuation-corrected images as input and CT-based attenuation-corrected SPECT images (CT-AC) as reference. The Chang AC approach, considering a uniform attenuation coefficient within the body contour, was also implemented. Quantitative and clinical evaluation of the proposed methods was performed considering SPECT CT-AC images of 19 subjects as reference, using the mean absolute error (MAE) and structural similarity index (SSIM) metrics, as well as relevant clinical indices such as total perfusion deficit (TPD). Results: Overall, the deep learning solution exhibited good agreement with the CT-based AC, noticeably outperforming the Chang method. The ResNet and UNet models resulted in an ME (count) of ??.99±16.72 and ??.41±11.8 and an SSIM of 0.99±0.04 and 0.98±0.05, respectively, while the Chang approach led to an ME and SSIM of 25.52±33.98 and 0.93±0.09, respectively. Similarly, the clinical evaluation revealed a mean TPD of 12.78±9.22 and 12.57±8.93 for the ResNet and UNet models, respectively, compared to 12.84±8.63 obtained from the reference SPECT CT-AC images. In contrast, the Chang approach led to a mean TPD of 16.68±11.24. Conclusion: We evaluated two deep convolutional neural networks to estimate SPECT-AC images directly from the non-attenuation-corrected images. The deep learning solutions exhibited promising potential to generate reliable attenuation-corrected SPECT images without the use of transmission scanning.

###### Task-based assessment of binned and list-mode SPECT systems

In SPECT, list-mode (LM) format allows storing data at higher precision compared to binned data. There is significant interest in investigating whether this higher precision translates to improved performance on clinical tasks. Towards this goal, in this study, we quantitatively investigated whether processing data in LM format, and in particular, the energy attribute of the detected photon, provides improved performance on the task of absolute quantification of region-of-interest (ROI) uptake in comparison to processing the data in binned format. We conducted this evaluation study using a DaTscan brain SPECT acquisition protocol, conducted in the context of imaging patients with Parkinson's disease. This study was conducted with a synthetic phantom. A signal-known exactly/background-known-statistically (SKE/BKS) setup was considered. An ordered-subset expectation-maximization algorithm was used to reconstruct images from data acquired in LM format, including the scatter-window data, and including the energy attribute of each LM event. Using a realistic 2-D SPECT system simulation, quantification tasks were performed on the reconstructed images. The results demonstrated improved quantification performance when LM data was used compared to binning the attributes in all the conducted evaluation studies. Overall, we observed that LM data, including the energy attribute, yielded improved performance on absolute quantification tasks compared to binned data.
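The reconstruction algorithm named in this abstract is an ordered-subsets variant of expectation maximization. A minimal sketch of the underlying MLEM update for a simple binned system model follows (illustrative only — the paper's list-mode formulation, scatter-window handling, and per-event energy attribute are not reproduced; OSEM applies the same update to subsets of the measurement rows):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """MLEM update: x <- x * A^T(y / (A x)) / A^T 1, preserving non-negativity."""
    x = np.ones(A.shape[1])                  # uniform initial image
    sens = A.T @ np.ones(A.shape[0])         # sensitivity (normalization) image
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)      # forward projection, guarded
        x *= (A.T @ (y / proj)) / sens       # multiplicative update
    return x

# Tiny system matrix: 4 measurements of a 3-pixel image (hypothetical toy data)
A = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 1.]])
x_true = np.array([1., 2., 3.])
x_rec = mlem(A, A @ x_true)                  # noiseless, consistent data
```

With consistent, noiseless data the iteration converges to the non-negative solution; the quantification tasks in the paper are then performed on such reconstructed images.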

###### Multimodal microscopy for characterization of amyloid-β plaque biomarkers in an animal model of Alzheimer's disease

Given the long subclinical stage of Alzheimer's disease (AD), the study of biomarkers is relevant both for early diagnosis and for the fundamental understanding of the pathophysiology of AD. Biomarkers provided by amyloid-β (Aβ) plaques have led to an increasing interest in characterizing this hallmark of AD due to its promising potential. In this work, we characterize Aβ plaques by label-free multimodal imaging: we combine two-photon excitation autofluorescence (TPEA), second harmonic generation (SHG), spontaneous Raman scattering (SpRS), coherent anti-Stokes Raman scattering (CARS), and stimulated Raman scattering (SRS) to describe and compare high-resolution images of Aβ plaques in brain tissues of an AD mouse model. Comparing images from the single-laser techniques, we discuss the origin of the SHG, which can be used to locate the plaque core reliably. We study both the core and the halo with vibrational microscopy and compare SpRS and SRS microscopies at different frequencies. We also combine SpRS spectroscopy with SRS microscopy and present two core biomarkers unexplored with SRS microscopy: phenylalanine and amide B. We provide high-resolution SRS images with the spatial distribution of these biomarkers in the plaque and compare them with images of the amide I distribution. The obtained spatial correlation corroborates the feasibility of these biomarkers in the study of Aβ plaques. Furthermore, since amide B enables rapid imaging, we discuss its potential as a novel fingerprint for diagnostic applications.

###### Computational Physics

###### Physics-aware, deep probabilistic modeling of multiscale dynamics in the Small Data regime

The data-based discovery of effective, coarse-grained (CG) models of high-dimensional dynamical systems presents a unique challenge in computational physics, particularly in the context of multiscale problems. The present paper offers a probabilistic perspective that simultaneously identifies predictive, lower-dimensional CG variables as well as their dynamics. We make use of the expressive ability of deep neural networks to represent the right-hand side of the CG evolution law. Furthermore, we demonstrate how domain knowledge that is very often available in the form of physical constraints (e.g. conservation laws) can be incorporated with the novel concept of virtual observables. Such constraints, apart from leading to physically realistic predictions, can significantly reduce the requisite amount of training data, i.e. the number of computationally expensive multiscale simulations required (Small Data regime). The proposed state-space model is trained using probabilistic inference tools and, in contrast to several other techniques, requires neither the prescription of a fine-to-coarse (restriction) projection nor time derivatives of the state variables. The formulation adopted is capable of quantifying the predictive uncertainty and of reconstructing the evolution of the full, fine-scale system, which allows the quantities of interest to be selected a posteriori. We demonstrate the efficacy of the proposed framework in a high-dimensional system of moving particles.

###### Introduction to Machine Learning for the Sciences

This is an introductory machine learning course specifically developed with STEM students in mind. We discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, and linear regression. We continue with an introduction to both basic and advanced neural network structures such as conventional neural networks, (variational) autoencoders, generative adversarial networks, restricted Boltzmann machines, and recurrent neural networks. Questions of interpretability are discussed using the examples of dreaming and adversarial attacks.
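The first method listed in this abstract, principal component analysis, reduces to a centered SVD; a minimal sketch (not taken from the course notes, purely illustrative):

```python
import numpy as np

def pca(X, k):
    """Project data (rows = samples) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                         # center each feature
    U, s, Vh = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vh[:k].T, Vh[:k], s                 # scores, components, singular values

# Points exactly on the line y = 2x: one component captures all the variance
X = np.array([[t, 2.0 * t] for t in range(5)], dtype=float)
scores, components, s = pca(X, 1)
```

For data lying on a line, the second singular value vanishes and the rank-1 projection reconstructs the centered data exactly, which is the sense in which PCA finds the "directions of maximal variance".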

###### Projection-based model reduction of dynamical systems using space-time subspace and machine learning

This paper considers the creation of parametric surrogate models for applications in science and engineering where the goal is to predict high-dimensional spatiotemporal output quantities of interest, such as pressure, temperature and displacement fields. The proposed methodology develops a low-dimensional parametrization of these quantities of interest using space-time bases combined with machine learning methods. In particular, the space-time solutions are sought in a low-dimensional space-time linear trial subspace that can be obtained by computing tensor decompositions of the usual state-snapshot data. The mapping between the input parameters and the basis expansion coefficients (or generalized coordinates) is approximated using four different machine learning techniques: multivariate polynomial regression, k-nearest neighbors, random forests and neural networks. The relative costs and effectiveness of the four machine learning techniques are explored through three engineering problems: steady heat conduction, unsteady heat conduction and an unsteady advective-diffusive-reactive system. Numerical results demonstrate that the proposed method performs well in terms of both accuracy and computational cost, and highlight the important point that the amount of model training data available in an engineering setting is often much less than in other machine learning applications, making it essential to incorporate knowledge from physical models. In addition, the simpler machine learning techniques are seen to perform better than the more elaborate ones.
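The basis-plus-regression pipeline described in this abstract can be sketched in a few lines. This is an illustrative assumption, not the paper's implementation: the space-time tensor decomposition is replaced by a plain SVD, the parameter is a single scalar, and only the polynomial-regression variant of the four techniques is shown.

```python
import numpy as np

def build_surrogate(params, snapshots, r, deg=2):
    """POD basis from snapshots + polynomial map: parameter -> basis coefficients."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    basis = U[:, :r]                               # r-dimensional linear trial subspace
    coeffs = basis.T @ snapshots                   # generalized coordinates, shape (r, m)
    feats = np.vander(params, deg + 1)             # polynomial features of the parameter
    W, *_ = np.linalg.lstsq(feats, coeffs.T, rcond=None)
    def predict(p):
        fp = np.vander(np.atleast_1d(float(p)), deg + 1)
        return basis @ (fp @ W).ravel()            # reconstruct the full field
    return predict

# Synthetic fields u(x; p) = p*sin(x) + p^2*cos(x): rank 2, quadratic in p
x = np.linspace(0.0, 2.0 * np.pi, 50)
params = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
snaps = np.column_stack([p * np.sin(x) + p**2 * np.cos(x) for p in params])
surrogate = build_surrogate(params, snaps, r=2)
```

Because the synthetic fields lie exactly in a two-dimensional subspace and depend polynomially on the parameter, the surrogate reproduces unseen parameter values essentially exactly; for real engineering fields both the truncation and the regression introduce error, which is what the paper's comparisons quantify.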

###### Chemical Physics

###### A Grid-free Approach for Simulating Sweep and Cyclic Voltammetry

We present a new computational approach to simulate linear sweep and cyclic voltammetry experiments that does not require a discretized grid in space to quantify diffusion. By using a Green's function solution coupled to a standard implicit ordinary differential equation solver, we are able to simulate current and redox species concentrations using only a small grid in time. As a result, where benchmarking is possible, we find that the current method is faster (and quantitatively identical) to established techniques. The present algorithm should help open the door to studying adsorption effects in inner sphere electrochemistry.

###### Exploring Avenues Beyond Revised DSD Functionals: I. range separation, with xDSD as a special case

We have explored the use of range separation as a possible avenue for further improvement on our revDSD minimally empirical double hybrid functionals. Such ωDSD functionals encompass the XYG3 type of double hybrid (i.e., xDSD) as a special case for ω → 0. As in our previous studies, the large and chemically diverse GMTKN55 benchmark suite was used for evaluation. Especially when using the D4 rather than D3BJ dispersion model, xDSD has a slight performance advantage in WTMAD2. As found previously, PBEP86 is the winning combination for the semilocal parts. xDSDn-PBEP86-D4 marginally outperforms the previous 'best in class' ωB97M(2) Berkeley double hybrid, but without range separation and using fewer than half the number of empirical parameters. Range separation turns out to offer only marginal further improvements on GMTKN55 itself. While ωB97M(2) still yields better performance for small-molecule thermochemistry, this is outweighed in WTMAD2 by superior performance of the new functionals for conformer equilibria. Results for two external test sets with pronounced static correlation effects may indicate that range-separated double hybrids are more resilient to such effects.

###### Quantum mechanical study of the attosecond nonlinear Fourier transform spectroscopy of carbon dioxide

Attosecond nonlinear Fourier transform (NFT) pump-probe spectroscopy is an experimental technique which allows investigation of electronic excitation, ionization, and unimolecular dissociation processes. NFT spectroscopy utilizes ultrafast multiphoton ionization in the extreme ultraviolet spectral range and detects the dissociation products of the unstable ionized species. In this paper, a quantum mechanical description of NFT spectra is suggested, based on second order perturbation theory in the molecule-light interaction and high level ab initio calculations of CO2 and CO2+ in the Franck-Condon zone. The calculations capture the characteristic features of the available experimental NFT spectra of CO2. Approximate analytic expressions are derived and used to assign the calculated spectra in terms of participating electronic states and harmonic photon frequencies. The developed approach provides a convenient framework within which the origin and the significance of near-harmonic and non-harmonic NFT spectral lines can be analyzed. The framework is scalable, and the spectra of di- and triatomic species as well as the dependences on the control parameters can be predicted semi-quantitatively.

###### Atomic and Molecular Clusters

###### Intermolecular vibrational states far above the van der Waals minimum: combination bands of the polar N2O dimer

Infrared combination bands of the polar isomer of the N2O dimer are observed for the first time, using a tunable infrared laser source to probe a pulsed slit-jet supersonic expansion in the N2O nu1 region (~2240 cm-1). One band involves the torsional (out-of-plane) intermolecular mode and yields a torsional frequency of 19.83 cm-1 if associated with the out-of-phase fundamental (N2O nu1) vibration of the N2O monomers in the dimer. The other band, which is highly perturbed, yields an intermolecular in-plane geared bend frequency of 22.74 cm-1. The results are compared with high level ab initio calculations. The less likely alternate assignment to the in-phase fundamental would give torsional and geared bend frequencies of 17.25 and 20.16 cm-1, respectively.

###### Single-shot electron imaging of dopant-induced nanoplasmas

We present single-shot electron velocity-map images of nanoplasmas generated from doped helium nanodroplets and neon clusters by intense near-infrared and mid-infrared laser pulses. We report a large variety of signal types, most crucially depending on the cluster size. The common feature is a two-component distribution for each single-cluster event: A bright inner part with nearly circular shape corresponding to electron energies up to a few eV, surrounded by an extended background of more energetic electrons. The total counts and energy of the electrons in the inner part are strongly correlated and follow a simple power-law dependence. Deviations from the circular shape of the inner electrons observed for neon clusters and large helium nanodroplets indicate non-spherical shapes of the neutral clusters. The dependence of the measured electron energies on the extraction voltage of the spectrometer indicates that the evolution of the nanoplasma is significantly affected by the presence of an external electric field. This conjecture is confirmed by molecular dynamics simulations, which reproduce the salient features of the experimental electron spectra.

###### Feynman-Enderlein Path Integral for Single-Molecule Nanofluidics

I present a photon statistics method for quasi-one-dimensional, sub-diffraction-limited nanofluidic motions of single molecules, based on the Feynman-Enderlein path integral approach. The theory is validated on a Monte Carlo simulation platform to provide a fundamental understanding of Knudsen-type flow and diffusion of single-molecule fluorescence in liquid. The distribution of single-molecule burst sizes can be precise enough to detect molecular interactions. This theoretical study accounts for several fundamental aspects of single-molecule nanofluidics, such as electrodynamics, photophysics, and multi-molecular events/molecular shot noise. I study two molecule sizes, with hydrodynamic radii of 2 nm and 20 nm, driven by a wide range of flow velocities. The study reports distinctly different velocity-dependent nanofluidic regimes that have not been reported previously, either theoretically or experimentally. Experimental single-molecule fluorescence bursts inside all-silica nanofluidic channels are used to validate the robustness of the method. The method is not restricted to single-molecule environments with uniform electrodynamic interactions and can also be used to investigate non-uniform single-molecule electrodynamic interactions related to refractive-index mismatch. This fundamental investigation of single-molecule nanofluidics has the potential to accelerate the progress of dynamic and complex single-molecule experiments, such as dynamic heterogeneity, biomolecular interactions of misfolded proteins, and nanometric cavity electrodynamics.
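
As a toy illustration of the kind of Monte Carlo validation described above (not the paper's actual platform), the sketch below simulates a molecule drifting and diffusing along a 1D nanochannel through a Gaussian detection profile, with Poisson-distributed photon detection. Every parameter value is an assumption for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameters (assumptions, not the paper's values):
D, v, dt = 1e-12, 1e-3, 1e-6           # diffusion (m^2/s), drift (m/s), step (s)
w, rate0 = 0.5e-6, 1e5                 # beam waist (m), peak count rate (1/s)

def burst_size():
    """Photons detected during one transit through the focal region."""
    x, photons = -3 * w, 0
    while x < 3 * w:
        # Euler-Maruyama step: deterministic drift plus Brownian kick
        x += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        # Poisson photon detection, rate set by the Gaussian beam profile
        photons += rng.poisson(rate0 * np.exp(-2 * x**2 / w**2) * dt)
    return photons

bursts = np.array([burst_size() for _ in range(100)])
print(bursts.mean())                   # faster flow -> shorter transit -> smaller bursts
```

Increasing the drift velocity `v` shortens the transit time and shrinks the mean burst size, which is the velocity dependence the abstract refers to.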

###### Instrumentation and Detectors

###### Performance of the diamond-based beam-loss monitor system of Belle II

We designed, constructed, and have been operating a system based on single-crystal synthetic diamond sensors to monitor beam losses at the interaction region of the SuperKEKB asymmetric-energy electron-positron collider. The system records radiation dose rates at positions close to the inner detectors of the Belle II experiment and, by participating in the beam-abort system, protects both the detector and accelerator components against destructive beam losses. It also provides complementary information for dedicated studies of beam-related backgrounds. We describe the performance of the system during the commissioning of the accelerator and during the first physics data taking.

###### First Application of Large Reactivity Measurement through Rod Drop Based on Three-Dimensional Space-Time Dynamics

Reactivity measurement is an essential part of a zero-power physics test, which is critical to reactor design and development. The rod drop experimental technique is used to measure the control rod worth in a zero-power physics test. The conventional rod drop technique is limited by the spatial effect and by the difference between the calculated static reactivity and the measured dynamic reactivity; thus, the method must be improved. In this study, we propose a modified rod drop technique that constrains the detector neutron flux shape function, based on three-dimensional space-time dynamics, to reduce the reactivity perturbation, together with a new method for calculating the detector neutron flux shape function. Correction factors were determined using the Monte Carlo N-Particle transport code and a transient analysis code for pressurized water reactors at the Ulsan National Institute of Science and Technology and Xi'an Jiaotong University, and a large reactivity of over 2000 pcm was measured using the modified technique. We evaluate the accuracy of the modified technique, study the influence of the correction factors on the modification, and investigate how constraining the shape function reduces the reactivity perturbation caused by the difference between the calculated neutron flux and its true value. The new method is used to calculate the detector neutron flux shape function while avoiding the calculation of the neutron detector response function (weighting factor).
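
To illustrate the basic physics underlying a rod-drop measurement (not the modified technique itself), the sketch below integrates one-delayed-group point kinetics through a step reactivity insertion and recovers it from the prompt flux drop; all kinetic parameters are illustrative assumptions:

```python
# One-group point-kinetics sketch of a rod-drop measurement.
# Parameter values are illustrative, not those of the reactors in the paper.
beta, Lam, lam = 0.0065, 2e-5, 0.08    # delayed fraction, generation time (s), decay (1/s)
rho_drop = -0.02                        # -2000 pcm step reactivity insertion
dt, T = 1e-5, 0.05                      # Euler step and simulated time (s)

n, C = 1.0, beta / (lam * Lam)          # critical steady state (n normalized to 1)
for _ in range(int(T / dt)):            # explicit Euler after the rod drop
    dn = ((rho_drop - beta) / Lam * n + lam * C) * dt
    dC = (beta / Lam * n - lam * C) * dt
    n, C = n + dn, C + dC

# Prompt-jump approximation: n1/n0 = beta / (beta - rho), inverted for rho
rho_est = beta * (1.0 - 1.0 / n)
print(rho_est)                          # close to the inserted -0.02
```

The recovered value matches the inserted reactivity closely here because point kinetics has no spatial effects; the spatial corrections discussed in the abstract address exactly what this zero-dimensional model leaves out.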

###### Demonstration of a uniform, high-pressure, high-temperature gas cell with a dual frequency comb absorption spectrometer

Accurate absorption models for gases at high pressure and temperature support advanced optical combustion diagnostics and aid in the study of harsh planetary atmospheres. Developing and validating absorption models for these applications requires recreating the extreme temperature and pressure conditions of these environments in static, uniform, well-known conditions in the laboratory. Here, we present the design of a new gas cell to enable reference-quality absorption spectroscopy at high pressure and temperature. The design centers on a carefully controlled quartz sample cell housed at the core of a pressurized ceramic furnace. The half-meter sample cell is relatively long compared to past high-pressure and -temperature absorption cells, and is surrounded by a molybdenum heat spreader that enables high temperature uniformity over the full length of the absorbing gas. We measure the temperature distribution of the sample gas using in situ thermocouples, and fully characterize the temperature uniformity across a full matrix of conditions up to 1000 K and 50 bar. The results demonstrate that the new design enables highly uniform and precisely known conditions across the full absorbing path length. Uniquely, we test the new gas cell with a broadband, high-resolution dual frequency comb spectrometer that enables highly resolved absorption spectroscopy across a wide range of temperature and pressure conditions. With this system, we measure the spectrum of CO2 between 6800 and 7000 cm-1 at pressures between 0.2 and 20 bar, and temperatures up to 1000 K. The measurements reveal discrepancies from spectra predicted by the HITRAN2016 database with a Voigt line shape at both low- and high-pressure conditions. These results motivate future work to expand absorption models and databases to accurately model high-pressure and -temperature spectra in combustion and planetary science research.
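
For context on the line-shape comparison: the Voigt profile is the convolution of a Gaussian (Doppler-broadened) and a Lorentzian (pressure-broadened) component. A minimal numerical sketch with illustrative widths (the assumed values below are not from the paper):

```python
import numpy as np

nu = np.linspace(-5.0, 5.0, 4001)              # detuning grid (cm-1)
dnu = nu[1] - nu[0]
sigma, gamma = 0.05, 0.2                       # Doppler / pressure widths (assumptions)

# Area-normalized Gaussian and Lorentzian components
gauss = np.exp(-nu**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
lorentz = gamma / np.pi / (nu**2 + gamma**2)

# Direct numerical convolution approximates the Voigt line shape
voigt = np.convolve(gauss, lorentz, mode="same") * dnu

area = voigt.sum() * dnu
print(area)                                    # ~1, up to truncated Lorentzian tails
```

At high pressure the Lorentzian width dominates, and deviations from this simple convolution (e.g. collisional narrowing and speed-dependent effects) are what drive the HITRAN2016 discrepancies the abstract reports.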

###### Data Analysis Statistics and Probability

###### Point Cloud Transformers applied to Collider Physics

Methods for processing point cloud information have seen great success in collider physics applications. One recent breakthrough in machine learning is the use of Transformer networks to learn semantic relationships between sequences in language processing. In this work, we apply a modified Transformer network, called the Point Cloud Transformer, to bring the advantages of the Transformer architecture to the unordered set of particles resulting from collision events. To compare its performance with other strategies, we study jet-tagging applications for highly boosted particles.
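
The key property that makes attention suitable for unordered particle sets is that a pooled self-attention output is permutation invariant. A minimal NumPy sketch with random toy weights (this is an illustration of the principle, not the Point Cloud Transformer architecture itself):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                            # feature dimension (assumption)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(X):
    """Self-attention over an unordered particle set X (n_particles, d),
    followed by a sum pool, so the jet-level output does not depend on
    the order in which particles are listed."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(d), axis=-1)   # pairwise attention weights
    return (A @ V).sum(axis=0)                   # order-independent pooling

X = rng.standard_normal((5, d))                  # a toy "jet" of 5 particles
perm = rng.permutation(5)
print(np.allclose(attention_pool(X), attention_pool(X[perm])))  # True
```

Because reordering the rows of `X` only permutes the attention rows before the sum, the pooled jet representation is identical for any particle ordering.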

###### Sequence-based Machine Learning Models in Jet Physics

Sequence-based modeling broadly refers to algorithms that act on data represented as an ordered set of input elements. In particular, machine learning algorithms with sequences as inputs have seen successful applications to important problems, such as Natural Language Processing (NLP) and speech signal modeling. The use of this class of models in collider physics leverages their ability to act on data with variable sequence lengths, such as the constituents inside a jet. In this document, we explore the application of Recurrent Neural Networks (RNNs) and other sequence-based neural network architectures to classify jets, regress jet-related quantities, and build a physics-inspired jet representation, in connection with jet clustering algorithms. In addition, alternatives to sequential data representations are briefly discussed.
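
A toy illustration of the variable-length property highlighted above: a plain RNN encoder (random weights and arbitrary sizes, purely an assumption for demonstration) maps jets with different numbers of constituents to one fixed-size representation:

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_h = 4, 8                        # per-constituent features, hidden size (assumptions)
Wx = rng.standard_normal((d_in, d_h)) * 0.1
Wh = rng.standard_normal((d_h, d_h)) * 0.1

def encode(seq):
    """Plain RNN: folds a variable-length array of per-constituent
    feature vectors into a single fixed-size jet representation."""
    h = np.zeros(d_h)
    for x in seq:                       # sequence length can differ per jet
        h = np.tanh(x @ Wx + h @ Wh)
    return h

# Three toy "jets" with 3, 7, and 12 constituents respectively
jets = [rng.standard_normal((n, d_in)) for n in (3, 7, 12)]
reps = np.stack([encode(j) for j in jets])
print(reps.shape)                       # (3, 8): fixed-size embedding per jet
```

The fixed-size embeddings can then feed a downstream classifier or regressor regardless of how many constituents each jet had.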

###### Generalized asymptotic formulae for estimating statistical significance in high energy physics analyses

Within the framework of likelihood-based statistical tests for high energy physics measurements, we derive generalized expressions for estimating the statistical significance of discovery using the asymptotic approximations of Wilks and Wald for a variety of measurement models. These models include arbitrary numbers of signal regions, control regions, and Gaussian constraints. We extend our expressions to use the representative or "Asimov" dataset proposed by Cowan et al. such that they are made Monte Carlo-free. While many of the generalized expressions are complicated and often involve solving systems of coupled, multivariate equations, we show these expressions reduce to closed-form results under simplifying assumptions. We also validate the predicted significance using Monte Carlo toys in select cases.
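
For the simplest case such expressions reduce to — a single counting experiment with expected signal s on background b — the Asimov-dataset approach of Cowan et al. gives the well-known closed form Z = sqrt(2((s+b)ln(1+s/b) − s)); a minimal sketch:

```python
import math

def asimov_significance(s, b):
    """Median discovery significance for a single counting experiment
    with expected signal s over background b (closed-form Asimov
    result from Cowan et al.)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# For s << b this approaches the familiar s / sqrt(b) approximation,
# and lies slightly below it for finite s/b.
print(asimov_significance(10.0, 100.0))
```

This single-bin case is the simplifying limit of the generalized multi-region expressions in the paper, which otherwise require solving coupled multivariate equations.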

###### Physics and Society

###### Persistent individual bias in a voter model with quenched disorder

Many theoretical studies of the voter model (or variations thereupon) involve order parameters that are population-averaged. While enlightening, such quantities may obscure important statistical features that are only apparent on the level of the individual. In this work, we ask which factors contribute to a single voter maintaining a long-term statistical bias for one opinion over the other in the face of social influence. To this end, a modified version of the network voter model is proposed, which also incorporates quenched disorder in the interaction strengths between individuals and the possibility of antagonistic relationships. We find that a sparse interaction network and heterogeneity in interaction strengths give rise to the possibility of arbitrarily long-lived individual biases, even when there is no population-averaged bias for one opinion over the other. This is demonstrated by calculating the eigenvalue spectrum of the weighted network Laplacian using the theory of sparse random matrices.
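
As a rough illustration of the spectral approach mentioned at the end of the abstract, the sketch below builds a sparse, symmetric random interaction matrix with heterogeneous (and partly antagonistic) weights and computes the weighted Laplacian spectrum. All network parameters are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 0.03                       # network size and link density (assumptions)

# Quenched disorder: sparse symmetric interactions with log-normally
# distributed strengths and a small fraction of antagonistic (negative) links.
mask = np.triu(rng.random((n, n)) < p, 1)
m = int(mask.sum())
signs = np.where(rng.random(m) < 0.1, -1.0, 1.0)
W = np.zeros((n, n))
W[mask] = rng.lognormal(0.0, 1.0, size=m) * signs
W = W + W.T

# Weighted network Laplacian: its eigenvalues set the relaxation rates of
# individual opinions, and near-zero modes signal long-lived individual bias.
L = np.diag(W.sum(axis=1)) - W
eig = np.sort(np.linalg.eigvalsh(L))
print(eig[:3])                         # the zero mode (constant vector) is always present
```

The constant vector is always in the Laplacian kernel, so one eigenvalue is exactly zero; it is the density of additional near-zero modes, controlled by sparsity and weight heterogeneity, that governs how long individual biases persist.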

###### Random Choices can Facilitate the Solving of Collective Network Coloring Problems by Artificial Agents

Global coordination is required to solve a wide variety of challenging collective action problems, from network colorings to the tragedy of the commons. A recent empirical study shows that the presence of a few noisy autonomous agents can greatly improve the collective performance of humans in solving networked color coordination games. To provide further analytical insight into the role of behavioral randomness, here we study how myopic artificial agents attempt to solve similar network coloring problems using decision update rules that are based only on local information but allow random choices at various stages of their heuristic reasoning. We consider agents distributed over a random bipartite network, which is guaranteed to be solvable with two colors. Using agent-based simulations and theoretical analysis, we show that the resulting efficacy in resolving color conflicts depends on the specific implementation of the agents' random behavior, including the fraction of noisy agents and the decision stage at which noise is introduced. Moreover, behavioral randomness can be finely tuned to the specific underlying population structure, such as network size and average network degree, to produce advantageous results in finding collective coloring solutions. Our work demonstrates that distributed greedy optimization algorithms exploiting local information should be deployed in combination with occasional exploration via random choices in order to overcome local minima and achieve global coordination.
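
A minimal agent-based sketch of the mechanism described above: myopic greedy recoloring with a tunable fraction `eps` of random moves, on an even cycle (a bipartite, two-colorable network). All parameter values are illustrative assumptions:

```python
import random

random.seed(7)
n = 20                                   # even cycle: bipartite, 2-colorable
nbrs = {i: ((i - 1) % n, (i + 1) % n) for i in range(n)}
color = {i: random.randint(0, 1) for i in range(n)}
eps = 0.1                                # fraction of random ("noisy") moves

def conflicts():
    """Number of edges whose endpoints share a color."""
    return sum(color[i] == color[(i + 1) % n] for i in range(n))

steps = 0
while conflicts() and steps < 200_000:
    i = random.randrange(n)
    if random.random() < eps:
        color[i] = random.randint(0, 1)  # occasional random exploration
    else:
        cands = [0, 1]
        random.shuffle(cands)            # random tie-breaking
        # myopic greedy move: pick the color minimizing local conflicts
        color[i] = min(cands, key=lambda c: sum(c == color[j] for j in nbrs[i]))
    steps += 1

print(conflicts(), steps)
```

Pure greedy updates can freeze at locally stable conflict configurations ("domain walls" on the cycle); the occasional random choices let the dynamics escape these local minima and reach the global coloring.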

###### The scaling of social interactions across animal species

Social animals self-organise into groups to increase protection against predators and productivity. One-to-one interactions are the building blocks of these emergent social structures and may correspond to friendship, grooming, and communication, among other social relations. These structures should be robust to failures and provide efficient communication to compensate for the costs of forming and maintaining social contacts, but the specific purpose of each social interaction regulates the evolution of the respective social networks. We collate 611 animal social networks and show that the number of social contacts E scales with group size N as a super-linear power law E = C N^β for various species of animals, including humans, other mammals, and non-mammals. We identify that the power-law exponent β varies according to the social function of the interactions as β = 1 + a/4, with a ∈ {1, 2, 3, 4}. By fitting a multi-layer model to our data, we observe that the cost to cross social groups also varies according to social function. Relatively low costs are observed for physical contact, grooming, and group membership, which lead to small groups with high and constant social clustering. Offline friendship shows similar patterns, while online friendship shows weak social structures. The intermediate case of spatial proximity (β = 1.5, with a clustering dependency on network size quantitatively similar to friendship) suggests that proximity interactions may be as relevant for the spread of infectious diseases as for social processes like friendship.
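
Power-law exponents like β are typically estimated by linear regression in log-log space, since E = C N^β implies log E = log C + β log N. A minimal sketch on synthetic data (the true exponent and noise level below are assumptions for demonstration, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "group size vs number of social contacts" data following
# E = C * N**beta with multiplicative log-normal noise (all assumptions).
beta_true, C_true = 1.5, 0.8
N = rng.integers(5, 500, size=300).astype(float)
E = C_true * N**beta_true * rng.lognormal(0.0, 0.1, size=N.size)

# A power law is linear in log-log space, so an ordinary
# least-squares fit of log E on log N recovers the exponent.
beta_hat, logC_hat = np.polyfit(np.log(N), np.log(E), 1)
print(beta_hat)   # close to the true value 1.5
```

An exponent above 1 means the number of contacts grows faster than the group itself, which is the super-linear scaling the paper reports across species.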
