To make fusion energy a viable resource for the world’s energy grid, researchers need to understand the turbulent motion of plasmas: a mix of ions and electrons swirling around in reactor vessels. The plasma particles, following magnetic field lines in toroidal chambers known as tokamaks, must be confined long enough for fusion devices to produce significant gains in net energy, a challenge when the hot edge of the plasma (over 1 million degrees Celsius) is just centimeters away from the much cooler solid walls of the vessel.
Abhilash Mathews, a PhD candidate in the Department of Nuclear Science and Engineering working at MIT’s Plasma Science and Fusion Center (PSFC), believes this plasma edge to be a particularly rich source of unanswered questions. A turbulent boundary, it is central to understanding plasma confinement, fueling, and the potentially damaging heat fluxes that can strike material surfaces — factors that impact fusion reactor designs.
To better understand edge conditions, scientists focus on modeling turbulence at this boundary using numerical simulations that help predict the plasma’s behavior. However, “first principles” simulations of this region are among the most challenging and time-consuming computations in fusion research. Progress could accelerate if researchers developed “reduced” computer models that run much faster, provided those models come with quantified levels of accuracy.
For decades, tokamak physicists have regularly used a reduced “two-fluid theory” rather than higher-fidelity models to simulate boundary plasmas in experiments, despite long-standing uncertainty about its accuracy. In a pair of recent publications, Mathews begins directly testing the accuracy of this reduced plasma turbulence model in a new way: he combines physics with machine learning.
“A successful theory is supposed to predict what you're going to observe,” explains Mathews, “for example, the temperature, the density, the electric potential, the flows. And it’s the relationships between these variables that fundamentally define a turbulence theory. What our work essentially examines is the dynamic relationship between two of these variables: the turbulent electric field and the electron pressure.”
In the first paper, published in Physical Review E, Mathews employs a novel deep-learning technique that uses artificial neural networks to build representations of the equations governing the reduced fluid theory. With this framework, he demonstrates a way to compute the turbulent electric field, consistent with the reduced fluid theory, from just an electron pressure fluctuation in the plasma. Models commonly used to relate the electric field to pressure tend to break down when applied to turbulent plasmas, but this one remains robust even to noisy pressure measurements.
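Mathews’ approach belongs to the family of physics-informed neural networks, in which the governing equations enter the training loss through automatic differentiation, so the network is penalized both for missing the data and for violating the physics. The sketch below is a minimal illustration of that general idea only, assuming PyTorch and using a toy one-dimensional relation with synthetic pressure data; it is not the reduced two-fluid system from the paper.

```python
# Minimal physics-informed neural network (PINN) sketch. Illustrative only:
# the "physics residual" is a toy stand-in, NOT the reduced two-fluid
# equations analyzed in the paper.
import torch
import torch.nn as nn

class PlasmaNet(nn.Module):
    """Maps space-time coordinates (x, t) to two fields: [p_e, phi]."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 2),  # outputs: electron pressure p_e, potential phi
        )

    def forward(self, xt):
        return self.net(xt)

model = PlasmaNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic, noisy "observations" of electron pressure (placeholder data).
xt_data = torch.rand(256, 2)
pe_data = torch.sin(3.0 * xt_data[:, :1]) + 0.05 * torch.randn(256, 1)

# Collocation points where the model equation is enforced.
xt_phys = torch.rand(1024, 2, requires_grad=True)

for step in range(2000):
    opt.zero_grad()

    # Data loss: fit the network's pressure output to noisy measurements.
    loss_data = ((model(xt_data)[:, :1] - pe_data) ** 2).mean()

    # Physics loss: penalize the residual of a toy relation coupling phi to
    # p_e (here, d(phi)/dx = d(p_e)/dx), computed by automatic differentiation.
    out = model(xt_phys)
    pe, phi = out[:, :1], out[:, 1:]
    dphi = torch.autograd.grad(phi.sum(), xt_phys, create_graph=True)[0]
    dpe = torch.autograd.grad(pe.sum(), xt_phys, create_graph=True)[0]
    loss_phys = ((dphi[:, :1] - dpe[:, :1]) ** 2).mean()

    (loss_data + loss_phys).backward()
    opt.step()

# phi is never observed: the physics loss couples it to the pressure data,
# so the network recovers it (here, up to an additive constant).
```

The design point mirrors the article: because the embedded equation ties the unobserved field to the observed one, the network can infer a quantity it was never shown directly, even from noisy measurements.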
In the second paper, published in Physics of Plasmas, Mathews investigates this connection further, contrasting the reduced model against higher-fidelity turbulence simulations. Direct comparisons of turbulence across models like this had previously been difficult, if not impossible, to evaluate precisely, making this test a first of its kind. Mathews finds that in plasmas relevant to existing fusion devices, the turbulent fields predicted by the reduced fluid model are consistent with high-fidelity calculations. In this sense, the reduced turbulence theory works. But to fully validate it, “one should check every connection between every variable,” says Mathews.
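For a sense of what “consistent” can mean quantitatively, one simple diagnostic is a normalized root-mean-square difference between the fields the two models predict. The snippet below is a hedged sketch with synthetic arrays; the metric and data are illustrative assumptions, not the comparison methodology used in the papers.

```python
# Sketch: quantifying agreement between turbulent-field predictions from two
# models via a normalized RMS difference. Arrays here are synthetic.
import numpy as np

def normalized_rms_difference(field_a: np.ndarray, field_b: np.ndarray) -> float:
    """RMS of (field_a - field_b), normalized by the RMS amplitude of field_b."""
    return float(np.sqrt(np.mean((field_a - field_b) ** 2))
                 / np.sqrt(np.mean(field_b ** 2)))

# Placeholder 2D snapshots of the turbulent electric potential: one from a
# higher-fidelity run, one from a reduced two-fluid run.
rng = np.random.default_rng(0)
phi_hifi = rng.standard_normal((128, 128))
phi_fluid = phi_hifi + 0.1 * rng.standard_normal((128, 128))  # small discrepancy

print(f"normalized RMS difference: {normalized_rms_difference(phi_fluid, phi_hifi):.3f}")
```

A small value of this metric would indicate that the reduced model reproduces the higher-fidelity field to within the stated fraction of its typical amplitude.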
Mathews’ advisor, Principal Research Scientist Jerry Hughes, notes that plasma turbulence is notoriously difficult to simulate, more so than the familiar turbulence seen in air and water. “This work shows that, under the right set of conditions, physics-informed machine-learning techniques can paint a very full picture of the rapidly fluctuating edge plasma, beginning from a limited set of observations. I’m excited to see how we can apply this to new experiments, in which we essentially never observe every quantity we want.”
These physics-informed deep-learning methods open new avenues for testing old theories and for expanding what can be observed from new experiments. David Hatch, a research scientist at the Institute for Fusion Studies at the University of Texas at Austin, believes these applications are the start of a promising new technique.
“Abhi’s work is a major achievement with the potential for broad application,” he says. “For example, given limited diagnostic measurements of a specific plasma quantity, physics-informed machine learning could infer additional plasma quantities in a nearby domain, thereby augmenting the information provided by a given diagnostic. The technique also opens new strategies for model validation.”
Mathews sees exciting research ahead. “Translating these techniques into fusion experiments for real edge plasmas is one goal we have in sight, and work is currently underway,” he says. “But this is just the beginning.”
Mathews was supported in this work by the Manson Benedict Fellowship, Natural Sciences and Engineering Research Council of Canada, and U.S. Department of Energy Office of Science under the Fusion Energy Sciences program.
Source/Credit: Massachusetts Institute of Technology / Paul Rivenberg