Feed aggregator

Zeroing in on Decarbonization

Research Computing - MGHPCC Feed -

Doctoral candidate Nestor Sepulveda of MIT’s Department of Nuclear Science and Engineering (NSE) is using MGHPCC research computing resources to help chart a path toward decarbonization.

Read this story at MIT News

To avoid the most destructive consequences of climate change, the world’s electric energy systems must stop producing carbon by 2050. It seems like an overwhelming technological, political, and economic challenge — but not to Nestor Sepulveda.

“My work has shown me that we do have the means to tackle the problem, and we can start now,” he says. “I am optimistic.”

Sepulveda’s research, first as a master’s student and now as a doctoral candidate in the MIT Department of Nuclear Science and Engineering (NSE), involves complex simulations that describe potential pathways to decarbonization. In work published last year in the journal Joule, Sepulveda and his co-authors made a powerful case for using a mix of renewable and “firm” electricity sources, such as nuclear energy, as the least costly, and most likely, route to a low- or no-carbon grid.

These insights, which flow from a unique computational framework blending optimization and data science, operations research, and policy methodologies, have attracted interest from The New York Times and The Economist, as well as from such notable players in the energy arena as Bill Gates. For Sepulveda, the attention could not come at a more vital moment.

“Right now, people are at extremes: on the one hand worrying that steps to address climate change might weaken the economy, and on the other advocating a Green New Deal to transform the economy that depends solely on solar, wind, and battery storage,” he says. “I think my data-based work can help bridge the gap and enable people to find a middle point where they can have a conversation.”

An optimization tool

The computational model Sepulveda is developing to generate this data, the centerpiece of his dissertation research, was sparked by classroom experiences at the start of his NSE master’s degree.

“In courses like Nuclear Technology and Society [22.16], which covered the benefits and risks of nuclear energy, I saw that some people believed the solution for climate change was definitely nuclear, while others said it was wind or solar,” he says. “I began wondering how to determine the value of different technologies.”

Recognizing that “absolutes exist in people’s minds, but not in reality,” Sepulveda sought to develop a tool that might yield an optimal solution to the decarbonization question. His inaugural effort in modeling focused on weighing the advantages of utilizing advanced nuclear reactor designs against exclusive use of existing light-water reactor technology in the decarbonization effort.

“I showed that in spite of their increased costs, advanced reactors proved more valuable to achieving the low-carbon transition than conventional reactor technology alone,” he says. This research formed the basis of Sepulveda’s master’s thesis in 2016, for a degree spanning NSE and the Technology and Policy Program. It also informed the MIT Energy Initiative’s report, “The Future of Nuclear Energy in a Carbon-Constrained World.”

The right stuff

Sepulveda comes to the climate challenge armed with a lifelong commitment to service, an appetite for problem-solving, and grit. Born in Santiago, he enlisted in the Chilean navy, completing his high school and college education at the national naval academy.

“Chile has natural disasters every year, and the defense forces are the ones that jump in to help people, which I found really attractive,” he says. He opted for the most difficult academic specialty, electrical engineering, over combat and weaponry. Early in his career, the climate change issue struck him, he says, and for his senior project, he designed a ship powered by hydrogen fuel cells.

After he graduated, the Chilean navy rewarded his performance with major responsibilities in the fleet, including outfitting a $100 million amphibious ship intended for moving marines and for providing emergency relief services. But Sepulveda was anxious to focus fully on sustainable energy, and petitioned the navy to allow him to pursue a master’s at MIT in 2014.

It was while conducting research for this degree that Sepulveda confronted a life-altering health crisis: a heart defect that led to open-heart surgery. “People told me to take time off and wait another year to finish my degree,” he recalls. Instead, he decided to press on: “I was deep into ideas about decarbonization, which I found really fulfilling.”

After graduating in 2016, he returned to naval life in Chile, but “couldn’t stop thinking about the potential of informing energy policy around the world and making a long-lasting impact,” he says. “Every day, looking in the mirror, I saw the big scar on my chest that reminded me to do something bigger with my life, or at least try.”

Convinced that he could play a significant role in addressing the critical carbon problem if he continued his MIT education, Sepulveda successfully petitioned naval superiors to sanction his return to Cambridge, Massachusetts.

Simulating the energy transition

Since resuming studies here in 2018, Sepulveda has wasted little time. He is focused on refining his modeling tool to play out the potential impacts and costs of increasingly complex energy technology scenarios on achieving deep decarbonization. This has meant rapidly acquiring knowledge in fields such as economics, math, and law.

“The navy gave me discipline, and MIT gave me flexibility of mind — how to look at problems from different angles,” he says.

With mentors and collaborators such as Associate Provost and Japan Steel Industry Professor Richard Lester and MIT Sloan School of Management professors Juan Pablo Vielma and Christopher Knittel, Sepulveda has been tweaking his models. His simulations, which can involve more than 1,000 scenarios, factor in existing and emerging technologies, uncertainties such as the possible emergence of fusion energy, and different regional constraints, to identify optimal investment strategies for low-carbon systems and to determine what pathways generate the most cost-effective solutions.
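The core question such simulations answer — which mix of technologies meets demand at least cost under a carbon constraint — can be illustrated with a toy linear program. The technologies, costs, availability limits, and carbon cap below are invented for illustration only; they are not taken from Sepulveda's framework or the Joule paper:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single-period capacity-mix problem: choose how much annual
# energy (MWh) to source from each technology to serve demand at least
# cost under a carbon cap. All numbers are illustrative.
techs = ["solar", "wind", "gas", "nuclear"]
cost = np.array([40.0, 50.0, 60.0, 90.0])   # $/MWh (illustrative)
co2 = np.array([0.0, 0.0, 0.4, 0.0])        # tCO2/MWh (illustrative)
demand = 100.0                               # MWh to serve
carbon_cap = 0.0                             # tCO2 allowed (deep decarbonization)

# Variable renewables are resource-limited; "firm" resources are not.
upper = [40.0, 30.0, None, None]             # per-technology energy limits (MWh)

# linprog solves: min cost @ x  s.t.  A_ub @ x <= b_ub,  bounds on x.
A_ub = np.vstack([-np.ones(4),               # -sum(x) <= -demand  (meet demand)
                  co2])                      #  co2 @ x <= carbon_cap
b_ub = np.array([-demand, carbon_cap])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, u) for u in upper])

for name, mwh in zip(techs, res.x):
    print(f"{name:8s}{mwh:6.1f} MWh")
print(f"total cost ${res.fun:,.0f}")
```

With a zero-carbon cap, the solver fills the renewable limits first and covers the remaining demand with the expensive firm resource — a one-line version of the paper's point that firm low-carbon capacity becomes valuable precisely where variable renewables run out.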

“The idea isn’t to say we need this many solar farms or nuclear plants, but to look at the trends and value the future impact of technologies for climate change, so we can focus money on those with the highest impact, and generate policies that push harder on those,” he says.

Sepulveda hopes his models won’t just lead the way to decarbonization, but do so in a way that minimizes social costs. “I come from a developing nation, where there are other problems like health care and education, so my goal is to achieve a pathway that leaves resources to address these other issues.”

As he refines his computations with the help of MIT’s massive computing clusters, Sepulveda has been building a life in the United States. He has found a vibrant Chilean community at MIT and discovered local opportunities for venturing out on the water, such as summer sailing on the Charles.

After graduation, he plans to leverage his modeling tool for the public benefit, through direct interactions with policy makers (U.S. congressional staffers have already begun to reach out to him), and with businesses looking to bend their strategies toward a zero-carbon future.

It is a future that weighs even more heavily on him these days: Sepulveda is expecting his first child. “Right now, we’re buying stuff for the baby, but my mind keeps going into algorithmic mode,” he says. “I’m so immersed in decarbonization that I sometimes dream about it.”

Story image: Nestor Sepulveda (photo credit: Gretchen Ertl)

Related Publication

Nestor A. Sepulveda, Jesse D. Jenkins, Fernando J. de Sisternes, Richard K. Lester (2018), The Role of Firm Low-Carbon Electricity Resources in Deep Decarbonization of Power Generation, Joule, doi: 10.1016/j.joule.2018.08.006

Professor discovers way to differentiate individual black holes


Astrophysicists at UMass Dartmouth use computing resources at the MGHPCC to study black hole “hair”!

Read this story at UMass Dartmouth News

The black holes of Einstein’s theory of relativity can be described by three parameters: their mass, spin angular momentum, and electric charge. Since two black holes that share these parameters cannot be distinguished, regardless of how they were made, black holes are said to “have no hair”: they have no additional attributes that can be used to tell them apart.

Physics Professor Gaurav Khanna and Professor Lior Burko of Georgia Gwinnett College recently published a paper in Physical Review Research that highlights how they are able to take measurements to discover a black hole’s hair.

Khanna and Burko used computationally intensive numerical simulations to generate their results. The simulations ran in parallel on dozens of high-end Nvidia graphics processing units (GPUs), each with over 5,000 cores. “Each of these GPUs can perform as many as 7 trillion calculations per second; however, even with such computational capacity the simulations took many weeks to complete,” said Khanna.

The team showed that for nearly extreme black holes, hair is a transient feature: at first such black holes behave as extreme ones would, but eventually they behave as regular, non-extreme black holes do. Burko summarized the result, saying, “Nearly extreme black holes that attempt to regrow hair will lose it and become bald again.” The team also discusses how gravitational-wave observatories such as LIGO/Virgo or LISA could observe this behavior as a smoking-gun signature of nearly extreme black holes.

Story image: Artist rendition of the capture and tidal disruption of a star by a massive black hole via UMass Dartmouth News


Related Publication

Lior M. Burko, Gaurav Khanna, and Subir Sabharwal (2019), Transient scalar hair for nearly extreme black holes, Phys. Rev. Research, doi: 10.1103/PhysRevResearch.1.033106


University of Massachusetts Research Computing

New model helps pave the way to bringing clean fusion energy down to Earth


Turbulence — the unruly swirling of fluid and air that mixes coffee and cream and can rattle airplanes in flight — causes heat loss that weakens efforts to reproduce on Earth the fusion that powers the sun and stars. Now scientists have modeled a key source of the turbulence found in a fusion experiment at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), paving the way for improving similar experiments to capture and control fusion energy. The research used computer resources at the MGHPCC.

Read this story at DOE Science News

State of the art simulations

The research, led by Juan Ruiz Ruiz while a graduate student at the Massachusetts Institute of Technology (MIT) and working with PPPL researchers Walter Guttenfelder and Yang Ren, used state-of-the-art simulations to zero in on the source of the turbulence that produces heat loss. The findings predicted results consistent with experiments on the National Spherical Torus Experiment (NSTX) fusion device at PPPL, pinpointing the source as microscopic turbulent eddies. Driving these eddies is the gradient, or variation, in the electron temperature in the core of the plasma, the so-called electron temperature gradient (ETG).

Fusion combines light elements in the form of plasma — the hot, electrically charged state of matter composed of free electrons and atomic nuclei — that generates massive amounts of energy. Scientists around the world are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.

The recent findings confirmed theories of when ETG can be a main driver of electron heat loss, known as electron thermal transport, which whips up the heat loss in spherical tokamaks such as NSTX. The consistency of the simulation with experimental data gives confidence “that the simulation contains the necessary physics to explain the loss of heat,” said Ruiz Ruiz, now a postdoctoral research assistant at the University of Oxford and first author of a paper reporting the results in Plasma Physics and Controlled Fusion.

The results apply to a type of H-mode, or high-confinement, experiment on the spherical NSTX, which is shaped more like a cored apple than the doughnut-like shape of more widely used conventional tokamaks. Understanding the source of electron thermal transport is a top priority for confining heat in future fusion facilities, and particularly in spherical tokamaks, which lose most of their heat through such transport in high-performance H-mode plasmas.

A little like radar

Ruiz Ruiz reached his conclusion by simulating a diagnostic called “high-k scattering” that NSTX researchers used to measure turbulence in the experiment. The technique scatters microwave radiation into the plasma, with the scattered radiation carrying information about the turbulence in the core. The process works a little like radar or sonar, Ruiz Ruiz says. The radiation bounces off objects — plasma eddies the size of electron orbits in this case — and reflects back their movement and positions.

Comparing the simulated and measured data called for painstakingly filtering and analyzing the vast output produced by the simulation code that Ruiz Ruiz used. “It takes a huge amount of effort to do an apples-to-apples comparison of the measured and simulated turbulence,” said Guttenfelder, who co-advised Ruiz Ruiz with an MIT professor. “Juan did just about the most thorough job you could do to show that the model is consistent with all the experimental data.”

Similar methods could be used to confirm the source of heat loss on the upgraded NSTX, called the NSTX-U, and the Mega Ampere Spherical Tokamak (MAST) in the United Kingdom, Ruiz Ruiz said. “That could demonstrate the ability of the simulations to accurately forecast the loss of heat — and therefore the performance — of spherical tokamaks,” he said.

Support for this work comes from the DOE Office of Science. Computer simulations were conducted at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science user facility at the Lawrence Berkeley National Laboratory, and at the Massachusetts Green High Performance Computing Center, operated jointly by five Massachusetts research universities.

PPPL, on Princeton University’s Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas — ultra-hot, charged gases — and to developing practical solutions for the creation of fusion energy. The Laboratory is managed by the University for the U.S. Department of Energy’s Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit energy.gov/science.


Related Publication

J Ruiz Ruiz, W Guttenfelder, A E White, N T Howard, J Candy, Y Ren, D R Smith, N F Loureiro, C Holland and C W Domier (2019), Validation of gyrokinetic simulations of a National Spherical Torus eXperiment H-mode plasma and comparisons with a high-k scattering synthetic diagnostic, Plasma Physics and Controlled Fusion, doi: 10.1088/1361-6587/ab4742

Story Image: National Spherical Torus Experiment – Wikipedia

The Giant in our Stars


Harvard astronomers using computers housed at the MGHPCC have discovered the largest known coherent gaseous structure in our galaxy.

Astronomers at Harvard University have discovered a monolithic, wave-shaped gaseous structure – the largest ever seen in our galaxy – made up of interconnected stellar nurseries. Dubbed the “Radcliffe Wave” in honor of the collaboration’s home base, the Radcliffe Institute for Advanced Study, the discovery transforms a 150-year-old vision of nearby stellar nurseries as an expanding ring into one featuring an undulating, star-forming filament that reaches trillions of miles above and below the galactic disk.

The work, published in Nature, was enabled by a new analysis of data from the European Space Agency’s Gaia spacecraft, launched in 2013 with the mission of precisely measuring the position, distance, and motion of the stars. The research team combined the super-accurate Gaia data with other measurements to construct a detailed, 3D map of interstellar matter in the Milky Way, and noticed an unexpected pattern in the spiral arm closest to Earth.

Computations were performed on Harvard University’s Odyssey computing cluster, housed at the MGHPCC.

Read more at the Harvard Gazette

Story image: In this illustration, the “Radcliffe Wave” data is overlaid on an image of the Milky Way galaxy. Image from the WorldWide Telescope, courtesy of Alyssa Goodman via The Harvard Gazette.

Related Publications:

João Alves, Catherine Zucker, Alyssa A. Goodman, Joshua S. Speagle, Stefan Meingast, Thomas Robitaille, Douglas P. Finkbeiner, Edward F. Schlafly and Gregory M. Green (2020), A Galactic-scale gas wave in the solar neighborhood, Nature, doi: 10.1038/s41586-019-1874-z

Catherine Zucker, Joshua S. Speagle, Edward F. Schlafly, Gregory M. Green, Douglas P. Finkbeiner, Alyssa Goodman, João Alves (2020), A compendium of distances to molecular clouds in the Star Formation Handbook, arXiv: 2001.00591 [astro-ph.GA]


Harvard University Research Computing