Scientific Frontline: Computer Science
Showing posts with label Computer Science. Show all posts

Tuesday, November 29, 2022

Breaking the scaling limits of analog computing

MIT researchers have developed a technique that greatly reduces the error in an optical neural network, which uses light to process data instead of electrical signals. With their technique, the larger an optical neural network becomes, the lower the error in its computations. This could enable them to scale these devices up so they would be large enough for commercial uses.
Credit: SFLORG stock photo

As machine-learning models become larger and more complex, they require faster and more energy-efficient hardware to perform computations. Conventional digital computers are struggling to keep up.

An analog optical neural network could perform the same tasks as a digital one, such as image classification or speech recognition, but because computations are performed using light instead of electrical signals, optical neural networks can run many times faster while consuming less energy.

However, these analog devices are prone to hardware errors that can make computations less precise. Microscopic imperfections in hardware components are one cause of these errors. In an optical neural network that has many connected components, errors can quickly accumulate.

Even with error-correction techniques, due to fundamental properties of the devices that make up an optical neural network, some amount of error is unavoidable. A network that is large enough to be implemented in the real world would be far too imprecise to be effective.

MIT researchers have overcome this hurdle and found a way to effectively scale an optical neural network. By adding a tiny hardware component to the optical switches that form the network’s architecture, they can reduce even the uncorrectable errors that would otherwise accumulate in the device.
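
To make the accumulation problem concrete, here is a toy simulation (not the MIT team's technique; the network size, error scale, and random matrices below are invented for illustration) showing how small per-component imperfections compound as a linear optical network gets deeper:

```python
# Toy illustration of error accumulation in a deep linear optical network.
# Not the MIT hardware fix; just the baseline problem it addresses.
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR decomposition of a random complex matrix yields a unitary layer.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

def cascade_error(n_modes=8, depth=50, sigma=1e-3):
    """Relative output error of a depth-layer cascade whose layers are each
    perturbed by Gaussian noise of scale sigma, standing in for microscopic
    fabrication imperfections in the optical switches."""
    x = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
    ideal, noisy = x.copy(), x.copy()
    for _ in range(depth):
        u = random_unitary(n_modes)
        ideal = u @ ideal
        noisy = (u + sigma * rng.normal(size=u.shape)) @ noisy
    return np.linalg.norm(noisy - ideal) / np.linalg.norm(ideal)

for depth in (10, 100, 1000):
    print(depth, cascade_error(depth=depth))
# The relative error grows with depth, which is why uncorrectable
# per-switch errors matter more as the network scales up.
```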

Tuesday, November 22, 2022

Researchers use blockchain to increase electric grid resiliency

A team led by Raymond Borges Hink has developed a method using blockchain to protect communications between electronic devices in the electric grid, preventing cyberattacks and cascading blackouts.
Photo Credit: Genevieve Martin/ORNL, U.S. Dept. of Energy

Although blockchain is best known for securing digital currency payments, researchers at the Department of Energy’s Oak Ridge National Laboratory are using it to track a different kind of exchange: It’s the first time blockchain has ever been used to validate communication among devices on the electric grid.

The project is part of the ORNL-led Darknet initiative, funded by the DOE Office of Electricity, to secure the nation’s electricity infrastructure by shifting its communications to increasingly secure methods.

Cyber risks have increased with two-way communication between grid power electronics equipment and new edge devices ranging from solar panels to electric car chargers and intelligent home electronics. By providing a trust framework for communication among electrical devices, an ORNL research team led by Raymond Borges Hink is increasing the resilience of the electric grid. The team developed a framework to detect unusual activity, including data manipulation, spoofing and illicit changes to device settings. These activities could trigger cascading power outages as breakers are tripped by protection devices.
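
The ORNL framework itself is far more sophisticated, but the core blockchain idea it builds on, chaining cryptographic hashes so that any tampering with a recorded device message or setting change becomes detectable, can be sketched in a few lines of Python (device names and settings below are made up):

```python
# Minimal sketch of a hash-chained ledger for grid-device messages.
# Illustrative only; not the ORNL implementation.
import hashlib
import json
import time

def make_block(prev_hash, payload):
    """Bundle a device message with the hash of the previous block."""
    block = {"prev": prev_hash, "time": time.time(), "payload": payload}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash; any tampered message breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", {"device": "recloser-12", "setting": "trip@600A"})]
chain.append(make_block(chain[-1]["hash"], {"device": "inverter-7", "setting": "curtail 10%"}))
print(verify_chain(chain))                    # True: untampered history
chain[0]["payload"]["setting"] = "trip@900A"  # spoofed setting change
print(verify_chain(chain))                    # False: tampering detected
```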

Sunday, November 20, 2022

Electronic/Photonic Chip Sandwich Pushes Boundaries of Computing and Data Transmission Efficiency

Image: The chip sandwich: an electronics chip (the smaller chip on the top) integrated with a photonics chip, sitting atop a penny for scale.
Photo Credit: Arian Hashemi Talkhooncheh

Engineers at Caltech and the University of Southampton in England have collaboratively designed an electronics chip integrated with a photonics chip (which uses light to transfer data)—creating a cohesive final product capable of transmitting information at ultrahigh speed while generating minimal heat.

Though the two-chip sandwich is unlikely to find its way into your laptop, the new design could influence the future of data centers that manage very high volumes of data communication.

"Every time you are on a video call, stream a movie, or play an online video game, you're routing data back and forth through a data center to be processed," says Caltech graduate student Arian Hashemi Talkhooncheh (MS '16), lead author of a paper describing the two-chip innovation that was published in the IEEE Journal of Solid-State Circuits. "There are more than 2,700 data centers in the U.S. and more than 8,000 worldwide, with towers of servers stacked on top of each other to manage the load of thousands of terabytes of data going in and out every second."

Just as your laptop heats up on your lap while you use it, the towers of servers in data centers that keep us all connected also heat up as they work, just at a much greater scale. Some data centers are even built underwater to cool the whole facility more easily. The more efficient they can be made, the less heat they will generate, and ultimately, the greater the volume of information that they will be able to manage.

Tuesday, November 15, 2022

Solving brain dynamics gives rise to flexible machine-learning models

Studying the brains of small species recently helped MIT researchers better model the interaction between neurons and synapses — the building blocks of natural and artificial neural networks — into a class of flexible, robust machine-learning models that learn on the job and can adapt to changing conditions.
Image Credit: Ramin Hasani/Stable Diffusion

Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species: a class of flexible, robust machine-learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks like driving and flying. The flexibility of these “liquid” neural nets made them a natural fit for our connected world, yielding better decision-making for many tasks involving time-series data, such as brain and heart monitoring, weather forecasting, and stock pricing.

But these models become computationally expensive as their number of neurons and synapses increases, and they require clunky computer programs to solve their underlying, complicated math. And all of this math, similar to many physical phenomena, becomes harder to solve with size, requiring many small computational steps to arrive at a solution.

Now, the same team of scientists has discovered a way to alleviate this bottleneck by solving the differential equation behind the interaction of two neurons through synapses, unlocking a new type of fast and efficient artificial intelligence algorithm. These models have the same characteristics as liquid neural nets — flexible, causal, robust, and explainable — but are orders of magnitude faster and scalable. This type of neural net could therefore be used for any task that involves getting insight into data over time, as they’re compact and adaptable even after training — while many traditional models are fixed.
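
The speed-up comes from replacing step-by-step numerical integration with an explicit formula. As a heavily simplified illustration (a single linear leaky neuron, not the actual liquid-network equations), compare Euler integration with the closed-form solution of the same differential equation:

```python
# Simplified illustration of why a closed-form solution beats step-by-step
# integration: a single leaky neuron relaxing toward input A with time
# constant tau, i.e. dx/dt = (A - x)/tau. The real liquid/closed-form
# networks solve a richer, nonlinear version of this idea.
import numpy as np

tau, A, x0, T = 0.5, 1.0, 0.0, 2.0

def euler(n_steps):
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        x += dt * (A - x) / tau          # many tiny steps
    return x

def closed_form():
    return A + (x0 - A) * np.exp(-T / tau)   # one evaluation

print(euler(10), euler(10000), closed_form())
# Euler needs thousands of small steps to approach what the formula gives instantly.
```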

Wednesday, November 2, 2022

Study urges caution when comparing neural networks to the brain

Image Credits: Christine Daniloff | Massachusetts Institute of Technology

Neural networks, a type of computing system loosely modeled on the organization of the human brain, form the basis of many artificial intelligence systems for applications such as speech recognition, computer vision, and medical image analysis.

In the field of neuroscience, researchers often use neural networks to try to model the same kind of tasks that the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. However, a group of researchers at MIT is urging that more caution should be taken when interpreting these models.

In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems.

“What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT.

Tuesday, October 11, 2022

Bristol researchers make important breakthrough in quantum computing


Researchers from the University of Bristol, quantum start-up Phasecraft, and Google Quantum AI have revealed properties of electronic systems that could be used for the development of more efficient batteries and solar cells.

The findings, published in Nature Communications today, describe how the team has taken an important first step towards using quantum computers to determine low-energy properties of strongly correlated electronic systems that cannot be solved by classical computers. They did this by developing the first truly scalable algorithm for observing ground-state properties of the Fermi-Hubbard model on a quantum computer. The Fermi-Hubbard model is a way of discovering crucial insights into the electronic and magnetic properties of materials.

Modeling quantum systems of this form has significant practical implications, including the design of new materials that could be used in the development of more effective solar cells and batteries, or even high-temperature superconductors. However, doing so remains beyond the capacity of the world’s most powerful supercomputers. The Fermi-Hubbard model is widely recognized as an excellent benchmark for near-term quantum computers because it is the simplest materials system that includes non-trivial correlations beyond what is captured by classical methods. Approximately producing the lowest-energy (ground) state of the Fermi-Hubbard model enables the user to calculate key physical properties of the model.
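
For readers who want the model written down: in standard notation, with hopping amplitude t, on-site repulsion U, fermionic creation and annihilation operators c†, c, and number operators n, the Fermi-Hubbard Hamiltonian on a lattice is

```latex
H \;=\; -t \sum_{\langle i,j \rangle,\,\sigma}
        \left( c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma} \right)
      \;+\; U \sum_{i} n_{i\uparrow}\, n_{i\downarrow}
```

Approximately preparing the ground state of this Hamiltonian on quantum hardware is what lets the team read off the model’s key physical properties.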

In the past, researchers have only succeeded in solving small, highly simplified Fermi-Hubbard instances on a quantum computer. This research shows that much more ambitious results are possible. Leveraging a new, highly efficient algorithm and better error-mitigation techniques, they successfully ran an experiment that is four times larger – and consists of 10 times more quantum gates – than anything previously recorded.

Thursday, October 6, 2022

As ransomware attacks increase, new algorithm may help prevent power blackouts

Saurabh Bagchi, a Purdue professor of electrical and computer engineering, develops ways to improve the cybersecurity of power grids and other critical infrastructure.
Credit: Purdue University photo/Vincent Walter

Millions of people could suddenly lose electricity if a ransomware attack just slightly tweaked energy flow onto the U.S. power grid.

No single power utility company has enough resources to protect the entire grid, but maybe all 3,000 of the grid’s utilities could fill in the most crucial security gaps if there were a map showing where to prioritize their security investments.

Purdue University researchers have developed an algorithm to create that map. Using this tool, regulatory authorities or cyber insurance companies could establish a framework that guides the security investments of power utility companies to parts of the grid at greatest risk of causing a blackout if hacked.
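
The Purdue algorithm itself models attacker behavior and grid physics in much more detail, but the flavor of a risk-ranked investment map can be sketched with a simple prioritization: score each asset by how likely it is to be compromised and how much load would be lost if it were, then spend a fixed budget greedily. All numbers below are invented for illustration:

```python
# Toy sketch of a risk-ranked security-investment map (not the Purdue algorithm).
substations = [
    # name, estimated compromise likelihood, load lost if hacked (MW), hardening cost ($M)
    ("sub_A", 0.30, 800, 2.0),
    ("sub_B", 0.10, 2500, 5.0),
    ("sub_C", 0.45, 300, 1.0),
    ("sub_D", 0.20, 1200, 3.0),
]

budget = 6.0  # $M available across all utilities

# Rank by expected blackout impact avoided per dollar spent.
ranked = sorted(substations, key=lambda s: (s[1] * s[2]) / s[3], reverse=True)

plan, spent = [], 0.0
for name, likelihood, load_mw, cost in ranked:
    if spent + cost <= budget:
        plan.append(name)
        spent += cost

print("Harden first:", plan, "| budget spent:", spent, "$M")
```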

Power grids are a type of critical infrastructure, which is any network – whether physical like water systems or virtual like health care record keeping – considered essential to a country’s function and safety. The biggest ransomware attacks in history have happened in the past year, affecting most sectors of critical infrastructure in the U.S. such as grain distribution systems in the food and agriculture sector and the Colonial Pipeline, which carries fuel throughout the East Coast.

Thursday, September 22, 2022

Conventional Computers Can Learn to Solve Tricky Quantum Problems

Hsin-Yuan (Robert) Huang
Credit: Caltech

There has been a lot of buzz about quantum computers and for good reason. The futuristic computers are designed to mimic what happens in nature at microscopic scales, which means they have the power to better understand the quantum realm and speed up the discovery of new materials, including pharmaceuticals, environmentally friendly chemicals, and more. However, experts say viable quantum computers are still a decade away or more. What are researchers to do in the meantime?

A new Caltech-led study in the journal Science describes how machine learning tools, run on classical computers, can be used to make predictions about quantum systems and thus help researchers solve some of the trickiest physics and chemistry problems. While this notion has been proposed before, the new report is the first to mathematically prove that the method works in problems that no traditional algorithms could solve.
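
As a cartoon of the idea (the paper's actual construction uses rigorous "classical shadow" representations and comes with mathematical guarantees; this sketch does not), a classical regression model can learn to map a Hamiltonian's parameters to a ground-state property from a handful of examples:

```python
# Cartoon of "classical ML predicts quantum ground-state properties": a 2x2
# Hamiltonian H(x) = x*sigma_x + (1-x)*sigma_z, whose ground-state energy we
# can compute exactly, stands in for data that would come from experiments.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

sigma_x = np.array([[0, 1], [1, 0]], dtype=float)
sigma_z = np.array([[1, 0], [0, -1]], dtype=float)

def ground_energy(x):
    H = x * sigma_x + (1 - x) * sigma_z
    return np.linalg.eigvalsh(H)[0]        # lowest eigenvalue

# "Training data": a few parameter settings and their measured property.
x_train = np.linspace(0, 1, 12).reshape(-1, 1)
y_train = np.array([ground_energy(x) for x in x_train.ravel()])

model = KernelRidge(kernel="rbf", gamma=5.0, alpha=1e-3).fit(x_train, y_train)

x_test = np.array([[0.37], [0.81]])
print(model.predict(x_test))
print([ground_energy(x) for x in x_test.ravel()])   # exact values for comparison
```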

"Quantum computers are ideal for many types of physics and materials science problems," says lead author Hsin-Yuan (Robert) Huang, a graduate student working with John Preskill, the Richard P. Feynman Professor of Theoretical Physics and the Allen V. C. Davis and Lenabelle Davis Leadership Chair of the Institute for Quantum Science and Technology (IQIM). "But we aren't quite there yet and have been surprised to learn that classical machine learning methods can be used in the meantime. Ultimately, this paper is about showing what humans can learn about the physical world."

Tuesday, September 20, 2022

Supercomputing and 3D printing capture the aerodynamics of F1 cars

A photo of the 3D color printed McLaren 17D Formula One front wing endplate. The colors visualize the complex flow a fraction of a millimeter away from the wing's surface.
Photo credit: KAUST

In Formula One race car design, the manipulation of airflow around the car is the most important factor in performance. A 1% gain in aerodynamic performance can mean the difference between first place and a forgotten finish, which is why teams employ hundreds of people and spend millions of dollars perfecting this manipulation.

Of special interest is the design of the front wing endplate, which is critical for the drag and lift of the car. Dr. Matteo Parsani, associate professor of applied mathematics and computational science at King Abdullah University of Science and Technology (KAUST), has led a multidisciplinary team of scientists and engineers to simulate and 3D color print the solution of the McLaren 17D Formula One front wing endplate. The work is the result of a massive high-performance computing simulation, with contributing expertise from research scientist Dr. Lisandro Dalcin of the KAUST Extreme Computing Research Center (ECRC), directed by Dr. David Keyes, as well as the Advanced Algorithm and Numerical Simulations Lab (AANSLab) and the Prototyping and Product Development Core Lab (PCL).

Wednesday, September 7, 2022

As threats to the U.S. power grid surge

WVU Lane Department of Computer Science and Electrical Engineering students Partha Sarker, Paroma Chatterjee and Jannatul Adan discuss a power grid simulation project led by Anurag Srivastava, professor and department chair, in the GOLab.
Photo credit: Brian Persinger | WVU

The electrical grid faces a mounting barrage of threats that could trigger a butterfly effect – floods, superstorms, heat waves, cyberattacks, not to mention its own ballooning complexity and size – that the nation is unprepared to handle, according to one West Virginia University scientist.

But Anurag Srivastava, professor and chair of the Lane Department of Computer Science and Electrical Engineering, has plans to prevent and respond to potential power grid failures, thanks to a pair of National Science Foundation-funded research projects.

“In the grid, we have the butterfly effect,” Srivastava said. “This means that if a butterfly flutters its wings in Florida, that will cause a windstorm in Connecticut because things are synchronously connected, like dominos. In the power grid, states like Florida, Connecticut, Illinois and West Virginia are all part of the eastern interconnection and linked together.

“If a big event happens in the Deep South, it is going to cause a problem up north. To stop that, we need to detect the problem area as soon as possible and gracefully separate that part out so the disturbance does not propagate through the whole.”

Monday, August 1, 2022

Artificial Intelligence Edges Closer to the Clinic

TransMED can help predict the outcomes of COVID-19 patients, generating predictions from different kinds of clinical data, including clinical notes, laboratory tests, diagnosis codes and prescribed drugs. TransMED is also unique in its ability to transfer what it learns from existing diseases to better predict and reason about the progression of new and rare diseases.
Credit: Shannon Colson | Pacific Northwest National Laboratory

The beginning of the COVID-19 pandemic presented a huge challenge to healthcare workers. Doctors struggled to predict how different patients would fare under treatment against the novel SARS-CoV-2 virus. Deciding how to triage medical resources when presented with very little information took a mental and physical toll on caregivers as the pandemic progressed.

To ease this burden, researchers at Pacific Northwest National Laboratory (PNNL), Stanford University, Virginia Tech, and John Snow Labs developed TransMED, a first-of-its-kind artificial intelligence (AI) prediction tool aimed at addressing issues caused by emerging or rare diseases.

“As COVID-19 unfolded over 2020, it brought a number of us together into thinking how and where we could contribute meaningfully,” said chief scientist Sutanay Choudhury. “We decided we could make the most impact if we worked on the problem of predicting patient outcomes.”

“COVID presented a unique challenge,” said Khushbu Agarwal, lead author of the study published in Nature Scientific Reports. “We had very limited patient data for training an AI model that could learn the complex patterns underlying COVID patient trajectories.”

The multi-institutional team developed TransMED to address this challenge, analyzing data from existing diseases to predict outcomes of an emerging disease.
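
TransMED's architecture is not reproduced here, but the transfer-learning pattern it relies on, pretraining on plentiful records from existing diseases and then fine-tuning on scarce records from the new one, looks roughly like this in PyTorch (all data, layer sizes, and labels below are synthetic placeholders):

```python
# Illustrative transfer-learning pattern (not TransMED itself): pretrain on
# plentiful "existing disease" records, fine-tune on scarce "new disease" records.
import torch
from torch import nn

torch.manual_seed(0)

def make_data(n, shift=0.0):
    # Synthetic stand-in for patient features and binary outcomes.
    x = torch.randn(n, 32) + shift
    y = (x[:, :4].sum(dim=1) > 0).float().unsqueeze(1)
    return x, y

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.BCEWithLogitsLoss()

def train(x, y, params, epochs, lr):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# 1) Pretrain on a large cohort drawn from existing diseases.
x_src, y_src = make_data(5000)
train(x_src, y_src, model.parameters(), epochs=200, lr=1e-3)

# 2) Fine-tune only the final layer on a small cohort of the emerging disease.
x_tgt, y_tgt = make_data(100, shift=0.3)
train(x_tgt, y_tgt, model[-1].parameters(), epochs=100, lr=1e-3)
```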

Wednesday, July 27, 2022

New sensing platform deployed at controlled burn site, could help prevent forest fires

Argonne scientists conduct a controlled burn on the Konza prairie in Kansas using the Sage monitoring system. 
Resized Image using AI by SFLORG
Credit: Rajesh Sankaran/Argonne National Laboratory.

Smokey Bear has lots of great tips about preventing forest fires. But how do you stop one that’s started before it gets out of control? The answer may lie in pairing multichannel sensing with advanced computing technologies provided by a new platform called Sage.

Sage offers a one-of-a-kind combination: multiple types of sensors paired with computing “at the edge” and embedded machine-learning algorithms that enable scientists to process the enormous amounts of data generated in the field without having to transfer it all back to the laboratory. Computing “at the edge” means that data is processed where it is collected, in the field, while machine-learning algorithms are computer programs that train themselves to recognize patterns.
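
In practice, computing at the edge means each node decides locally which data matter. A stripped-down version of that loop might look like the following, where the smoke-score function is a made-up placeholder for Sage's embedded models:

```python
# Stripped-down edge-processing loop: analyze sensor readings locally and
# transmit only detections, not the raw data stream. smoke_score() is a
# placeholder for an on-device model (e.g., a small CNN on camera frames).
import random

ALERT_THRESHOLD = 1.0

def read_sensor():
    # Placeholder for real field instrumentation.
    return {"particulates": random.uniform(0, 50), "humidity": random.uniform(10, 90)}

def smoke_score(frame):
    # Toy heuristic standing in for an embedded ML model.
    return frame["particulates"] / (frame["humidity"] + 1e-6)

for _ in range(1000):                      # continuous sampling loop
    frame = read_sensor()
    if smoke_score(frame) > ALERT_THRESHOLD:
        # Only this small alert record leaves the device, not every raw frame.
        print("ALERT", frame)
```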

Sage is funded by the National Science Foundation and developed by the Northwestern-Argonne Institute for Science and Engineering (NAISE), a partnership between Northwestern University and the U.S. Department of Energy’s Argonne National Laboratory.

Thursday, June 23, 2022

Robots play with play dough


The inner child in many of us feels an overwhelming sense of joy when stumbling across a pile of the fluorescent, rubbery mixture of water, salt, and flour that put goo on the map: play dough. (Even if this happens rarely in adulthood.)

While manipulating play dough is fun and easy for 2-year-olds, the shapeless sludge is hard for robots to handle. Machines have become increasingly reliable with rigid objects, but manipulating soft, deformable objects comes with a laundry list of technical challenges, and most importantly, as with most flexible structures, if you move one part, you’re likely affecting everything else.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stanford University recently let robots take their hand at playing with the modeling compound, but not for nostalgia’s sake. Their new system learns directly from visual inputs to let a robot with a two-fingered gripper see, simulate, and shape doughy objects. “RoboCraft” could reliably plan a robot’s behavior to pinch and release play dough to make various letters, including ones it had never seen. With just 10 minutes of data, the two-finger gripper rivaled human counterparts that teleoperated the machine — performing on-par, and at times even better, on the tested tasks.

Wednesday, June 22, 2022

Where Once Were Black Boxes, NIST’s New LANTERN Illuminates

How do you figure out how to alter a gene so that it makes a usefully different protein? The job might be imagined as interacting with a complex machine (at left) that sports a vast control panel filled with thousands of unlabeled switches, which all affect the device’s output somehow. A new tool called LANTERN figures out which sets of switches — rungs on the gene’s DNA ladder — have the largest effect on a given attribute of the protein. It also summarizes how the user can tweak that attribute to achieve a desired effect, essentially transmuting the many switches on our machine’s panel into another machine (at right) with just a few simple dials.
Credit: B. Hayes/NIST

Researchers at the National Institute of Standards and Technology (NIST) have developed a new statistical tool that they have used to predict protein function. Not only could it help with the difficult job of altering proteins in practically useful ways, but it also works by methods that are fully interpretable — an advantage over the conventional artificial intelligence (AI) that has aided with protein engineering in the past.

The new tool, called LANTERN, could prove useful in work ranging from producing biofuels to improving crops to developing new disease treatments. Proteins, as building blocks of biology, are a key element in all these tasks. But while it is comparatively easy to make changes to the strand of DNA that serves as the blueprint for a given protein, it remains challenging to determine which specific base pairs — rungs on the DNA ladder — are the keys to producing a desired effect. Finding these keys has been the purview of AI built of deep neural networks (DNNs), which, though effective, are notoriously opaque to human understanding.
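
LANTERN's actual model is richer (it learns a low-dimensional latent space with a nonlinear readout), but the interpretability it offers comes from forcing mutation effects through a small, human-readable summary. A bare-bones analogue, with synthetic data, is a linear fit whose coefficients read directly as per-site mutation effects:

```python
# Bare-bones analogue of an interpretable mutation-effect model (not LANTERN
# itself): each fitted coefficient is directly readable as the estimated
# contribution of mutating one site, with no black box to probe afterwards.
import numpy as np

rng = np.random.default_rng(1)
n_variants, n_sites = 200, 10

# Each row marks which sites of a protein carry a mutation in that variant.
X = rng.integers(0, 2, size=(n_variants, n_sites)).astype(float)

# Synthetic "true" per-site effects plus measurement noise.
true_effects = rng.normal(size=n_sites)
y = X @ true_effects + 0.1 * rng.normal(size=n_variants)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for site, (est, true) in enumerate(zip(coef, true_effects)):
    print(f"site {site}: estimated {est:+.2f}  (true {true:+.2f})")
```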

Thursday, June 16, 2022

Can computers understand complex words and concepts?

 A depiction of semantic projection, which can determine the similarity between two words in a specific context. This grid shows how similar certain animals are based on their size.
Credit: Idan Blank/UCLA 

In “Through the Looking Glass,” Humpty Dumpty says scornfully, “When I use a word, it means just what I choose it to mean — neither more nor less.” Alice replies, “The question is whether you can make words mean so many different things.”

The study of what words really mean is ages old. To perceive a word’s meaning, the human mind must parse a web of detailed, flexible information and exercise sophisticated common sense.

Now, a newer problem related to the meaning of words has emerged: Scientists are studying whether artificial intelligence can mimic the human mind to understand words the way people do. A new study by researchers at UCLA, MIT and the National Institutes of Health addresses that question.

The paper, published in the journal Nature Human Behaviour, reports that artificial intelligence systems can indeed learn very complicated word meanings, and the scientists discovered a simple trick to extract that complex knowledge. They found that the AI system they studied represents the meanings of words in a way that strongly correlates with human judgment.

The AI system the authors investigated has been frequently used in the past decade to study word meaning. It learns to figure out word meanings by “reading” astronomical amounts of content on the internet, encompassing tens of billions of words.
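
The image caption above refers to semantic projection: placing word vectors on a line defined by two poles of a concept (say, "small" to "large") and reading off where each word falls. A minimal sketch of that idea, with tiny random vectors standing in for real pretrained embeddings such as word2vec or GloVe, looks like this:

```python
# Semantic projection sketch: score words along a concept axis defined by
# antonym poles. The random vectors here are placeholders for embeddings
# that would normally be loaded from a pretrained model.
import numpy as np

rng = np.random.default_rng(2)
vocab = ["small", "tiny", "large", "huge", "mouse", "elephant", "dog"]
vecs = {w: rng.normal(size=50) for w in vocab}   # placeholder embeddings

# Concept axis: difference between the "large" pole and the "small" pole.
axis = (vecs["large"] + vecs["huge"]) / 2 - (vecs["small"] + vecs["tiny"]) / 2
axis /= np.linalg.norm(axis)

def size_score(word):
    return float(vecs[word] @ axis)   # projection onto the size axis

for animal in ["mouse", "dog", "elephant"]:
    print(animal, round(size_score(animal), 3))
# With real embeddings, elephant would score higher than dog, and dog higher than mouse.
```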

Researchers develop the world's first ultra-fast photonic computing processor using polarization

Photonic computing processor chip
Credit: June Sang Lee

New research uses multiple polarization channels to carry out parallel processing – enhancing computing density by several orders of magnitude over conventional electronic chips.

In a paper published in Science Advances, researchers at the University of Oxford describe a method that uses the polarization of light to maximize information storage density and computing performance in nanowires.

Light has an exploitable property – different wavelengths of light do not interact with each other – a characteristic used by fiber optics to carry parallel streams of data. Similarly, different polarizations of light do not interact with each other either. Each polarization can be used as an independent information channel, enabling more information to be stored in multiple channels, hugely enhancing information density.
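
The independence of polarization channels can be seen with simple Jones-vector algebra: bits encoded on horizontal and vertical polarization add into one beam, and each stream is recovered by projecting back onto its own axis with no crosstalk. This is only a numerical illustration of the principle, not a model of the Oxford device:

```python
# Jones-vector illustration of why orthogonal polarizations act as independent
# data channels: two bit streams share one beam and separate out cleanly.
import numpy as np

H = np.array([1.0, 0.0])   # horizontal polarization basis state
V = np.array([0.0, 1.0])   # vertical polarization basis state

bits_h = np.array([1, 0, 1, 1, 0])
bits_v = np.array([0, 1, 1, 0, 1])

# Each time slot carries the sum of the two polarization components.
beam = bits_h[:, None] * H + bits_v[:, None] * V

recovered_h = beam @ H     # project onto H: recovers the first stream exactly
recovered_v = beam @ V     # project onto V: recovers the second stream exactly
print(recovered_h, recovered_v)   # matches bits_h and bits_v with zero crosstalk
```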

First author and DPhil student June Sang Lee, Department of Materials, University of Oxford said: ‘We all know that the advantage of photonics over electronics is that light is faster and more functional over large bandwidths. So, our aim was to fully harness such advantages of photonics combining with tunable material to realize faster and denser information processing.’

Monday, June 13, 2022

AI platform enables doctors to optimize personalized chemotherapy dose

Research team behind the PRECISE.CURATE trial (from left) Prof Dean Ho, Dr Agata Blasiak, Dr Raghav Sundar, Ms Anh Truong
Credit/Source: National University of Singapore

Based on a pilot clinical trial, close to 97% of dose recommendations by CURATE.AI were accepted by clinicians; some patients were prescribed optimal doses that were around 20% lower on average

A team of researchers from the National University of Singapore (NUS), in collaboration with clinicians from the National University Cancer Institute, Singapore (NCIS), which is part of the National University Health System (NUHS), has reported promising results using CURATE.AI, an artificial intelligence (AI) tool that identifies optimal, personalized chemotherapy doses and helps clinicians prescribe them.

Based on a pilot clinical trial – called PRECISE.CURATE – involving 10 patients in Singapore who were diagnosed with advanced solid tumors and predominantly metastatic colorectal cancers, clinicians accepted close to 97% of doses recommended by CURATE.AI, with some patients receiving optimal doses that were approximately 20% lower on average. These early outcomes are a promising step forward for the potential of truly personalizing oncology, where drug doses can be adjusted dynamically during treatment.

Developed by Professor Dean Ho and his team, CURATE.AI is an optimization platform that harnesses a patient’s clinical data, which includes drug type, drug dose and cancer biomarkers, to generate an individualized digital profile which is used to customize the optimal dose during the course of chemotherapy treatment.
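
As a stripped-down illustration of individualized dose calibration (not CURATE.AI's actual procedure, which includes clinical safety constraints and a richer profile), one can fit a small dose-response curve to a single patient's own recent cycles and pick the lowest dose that keeps a biomarker at or below a target. All numbers below are invented:

```python
# Stripped-down sketch of individualized dose calibration from a patient's
# own recent data. Invented numbers; not CURATE.AI's calibration.
import numpy as np

# (dose in mg, biomarker reading after that cycle) for a single patient
doses      = np.array([400.0, 500.0, 600.0, 700.0])
biomarkers = np.array([9.1,   7.4,   6.8,   6.6])

coeffs = np.polyfit(doses, biomarkers, deg=2)   # small individualized map
predict = np.poly1d(coeffs)

target = 7.0                                    # desired biomarker ceiling
candidates = np.arange(350, 750, 10)
ok = candidates[predict(candidates) <= target]
recommended = ok.min() if ok.size else candidates.max()
print("recommended dose:", recommended, "mg")   # lowest dose meeting the target
```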

Monday, June 6, 2022

Hallucinating to better text translation

Source/Credit: MIT
As babies, we babble and imitate our way to learning languages. We don’t start off reading raw text, which requires fundamental knowledge and understanding about the world, as well as the advanced ability to interpret and infer descriptions and relationships. Rather, humans begin our language journey slowly, by pointing and interacting with our environment, grounding our words and their meaning in the context of the physical and social world. Eventually, we can craft full sentences to communicate complex ideas.

Similarly, when humans begin learning and translating into another language, the incorporation of other sensory information, like multimedia, paired with the new and unfamiliar words, like flashcards with images, improves language acquisition and retention. Then, with enough practice, humans can accurately translate new, unseen sentences in context without the accompanying media; however, imagining a picture based on the original text helps.

This is the basis of a new machine learning model, called VALHALLA, by researchers from MIT, IBM, and the University of California at San Diego, in which a trained neural network sees a source sentence in one language, hallucinates an image of what it looks like, and then uses both to translate into a target language. The team found that their method demonstrates improved accuracy of machine translation over text-only translation. Further, it provided an additional boost for cases with long sentences, under-resourced languages, and instances where part of the source sentence is inaccessible to the machine translator.

Friday, June 3, 2022

Great timing, supercomputer upgrade led to successful forecast of volcanic eruption

Former Illinois graduate student Yan Zhan, left, professor Patricia Gregg and research professor Seid Koric led a team that produced the fortuitous forecast of the 2018 Sierra Negra volcanic eruption five months before it occurred. 
Photo by Michelle Hassel

In the fall of 2017, geology professor Patricia Gregg and her team had just set up a new volcanic forecasting modeling program on the Blue Waters and iForge supercomputers. Simultaneously, another team was monitoring activity at the Sierra Negra volcano in the Galapagos Islands, Ecuador. One of the scientists on the Ecuador project, Dennis Geist of Colgate University, contacted Gregg, and what happened next was the fortuitous forecast of the June 2018 Sierra Negra eruption five months before it occurred.

Initially developed on an iMac computer, the new modeling approach had already garnered attention for successfully recreating the unexpected eruption of Alaska’s Okmok volcano in 2008. Gregg’s team, based out of the University of Illinois Urbana-Champaign and the National Center for Supercomputing Applications, wanted to test the model’s new high-performance computing upgrade, and Geist’s Sierra Negra observations showed signs of an imminent eruption.

“Sierra Negra is a well-behaved volcano,” said Gregg, the lead author of a new report of the successful effort. “Meaning that, before eruptions in the past, the volcano has shown all the telltale signs of an eruption that we would expect to see like groundswell, gas release and increased seismic activity. This characteristic made Sierra Negra a great test case for our upgraded model.”

Monday, May 30, 2022

Frontier supercomputer debuts as world’s fastest, breaking exascale barrier


The Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory earned the top ranking today as the world’s fastest on the 59th TOP500 list, with 1.1 exaflops of performance. The system is the first to achieve an unprecedented level of computing performance known as exascale, a threshold of a quintillion calculations per second.

Frontier features a theoretical peak performance of 2 exaflops, or two quintillion calculations per second, making it ten times more powerful than ORNL’s Summit system. The system leverages ORNL’s extensive expertise in accelerated computing and will enable scientists to develop critically needed technologies for the country’s energy, economic and national security, helping researchers address problems of national importance that were impossible to solve just five years ago.

“Frontier is ushering in a new era of exascale computing to solve the world’s biggest scientific challenges,” ORNL Director Thomas Zacharia said. “This milestone offers just a preview of Frontier’s unmatched capability as a tool for scientific discovery. It is the result of more than a decade of collaboration among the national laboratories, academia and private industry, including DOE’s Exascale Computing Project, which is deploying the applications, software technologies, hardware and integration necessary to ensure impact at the exascale.”
