Scientific Frontline: Computer Science

Tuesday, October 11, 2022

Bristol researchers make important breakthrough in quantum computing


Researchers from the University of Bristol, quantum start-up Phasecraft, and Google Quantum AI have revealed properties of electronic systems that could be used for the development of more efficient batteries and solar cells.

The findings, published today in Nature Communications, describe how the team has taken an important first step toward using quantum computers to determine low-energy properties of strongly correlated electronic systems that cannot be solved by classical computers. They did this by developing the first truly scalable algorithm for observing ground-state properties of the Fermi-Hubbard model on a quantum computer. The Fermi-Hubbard model is a way of gaining crucial insights into the electronic and magnetic properties of materials.

Modeling quantum systems of this form has significant practical implications, including the design of new materials that could be used in the development of more effective solar cells and batteries, or even high-temperature superconductors. However, doing so remains beyond the capacity of the world’s most powerful supercomputers. The Fermi-Hubbard model is widely recognized as an excellent benchmark for near-term quantum computers because it is the simplest materials system that includes non-trivial correlations beyond what is captured by classical methods. Approximately producing the lowest-energy (ground) state of the Fermi-Hubbard model enables the user to calculate key physical properties of the model.
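For reference, the Fermi-Hubbard Hamiltonian itself is compact. In its standard textbook form (not spelled out in the article) it reads

H = -t \sum_{\langle i,j \rangle, \sigma} \left( c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow},

where t is the hopping amplitude between neighboring lattice sites, U is the on-site repulsion, and n_{i\sigma} = c_{i\sigma}^{\dagger} c_{i\sigma} counts electrons of spin \sigma on site i. The strongly correlated regime the article refers to is the one where U is comparable to or larger than t, which is exactly where classical methods struggle.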

In the past, researchers have only succeeded in solving small, highly simplified Fermi-Hubbard instances on a quantum computer. This research shows that much more ambitious results are possible. Leveraging a new, highly efficient algorithm and better error-mitigation techniques, they successfully ran an experiment that is four times larger – and consists of 10 times more quantum gates – than anything previously recorded.

Thursday, October 6, 2022

As ransomware attacks increase, new algorithm may help prevent power blackouts

Saurabh Bagchi, a Purdue professor of electrical and computer engineering, develops ways to improve the cybersecurity of power grids and other critical infrastructure.
Credit: Purdue University photo/Vincent Walter

Millions of people could suddenly lose electricity if a ransomware attack just slightly tweaked energy flow onto the U.S. power grid.

No single power utility company has enough resources to protect the entire grid, but maybe all 3,000 of the grid’s utilities could fill in the most crucial security gaps if there were a map showing where to prioritize their security investments.

Purdue University researchers have developed an algorithm to create that map. Using this tool, regulatory authorities or cyber insurance companies could establish a framework that guides the security investments of power utility companies to parts of the grid at greatest risk of causing a blackout if hacked.
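The article does not spell out how the algorithm works, so the following is only a loose, hypothetical illustration of what a "priority map" amounts to in code: rank grid assets by the product of how likely a component is to be compromised and how severe the resulting blackout would be, then direct limited security spending to the top of that list. The asset names and numbers are invented.

# Hypothetical illustration only -- not the Purdue algorithm.
# Rank grid assets by (probability of compromise) x (blackout impact if compromised),
# then allocate a utility's limited security budget to the highest-risk assets first.

assets = [
    # (name, compromise_probability, blackout_impact_in_customers)
    ("substation_A", 0.30, 1_200_000),
    ("control_center_B", 0.05, 5_000_000),
    ("feeder_C", 0.60, 40_000),
]

def risk(asset):
    _, p, impact = asset
    return p * impact  # expected number of customers affected

for name, p, impact in sorted(assets, key=risk, reverse=True):
    print(f"{name}: expected impact {p * impact:,.0f} customers")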

Power grids are a type of critical infrastructure, which is any network – whether physical like water systems or virtual like health care record keeping – considered essential to a country’s function and safety. The biggest ransomware attacks in history have happened in the past year, affecting most sectors of critical infrastructure in the U.S. such as grain distribution systems in the food and agriculture sector and the Colonial Pipeline, which carries fuel throughout the East Coast.

Thursday, September 22, 2022

Conventional Computers Can Learn to Solve Tricky Quantum Problems

Hsin-Yuan (Robert) Huang
Credit: Caltech

There has been a lot of buzz about quantum computers and for good reason. The futuristic computers are designed to mimic what happens in nature at microscopic scales, which means they have the power to better understand the quantum realm and speed up the discovery of new materials, including pharmaceuticals, environmentally friendly chemicals, and more. However, experts say viable quantum computers are still a decade away or more. What are researchers to do in the meantime?

A new Caltech-led study in the journal Science describes how machine learning tools, run on classical computers, can be used to make predictions about quantum systems and thus help researchers solve some of the trickiest physics and chemistry problems. While this notion has been proposed before, the new report is the first to mathematically prove that the method works in problems that no traditional algorithms could solve.

"Quantum computers are ideal for many types of physics and materials science problems," says lead author Hsin-Yuan (Robert) Huang, a graduate student working with John Preskill, the Richard P. Feynman Professor of Theoretical Physics and the Allen V. C. Davis and Lenabelle Davis Leadership Chair of the Institute for Quantum Science and Technology (IQIM). "But we aren't quite there yet and have been surprised to learn that classical machine learning methods can be used in the meantime. Ultimately, this paper is about showing what humans can learn about the physical world."

Tuesday, September 20, 2022

Supercomputing and 3D printing capture the aerodynamics of F1 cars

A photo of the 3D color printed McLaren 17D Formula One front wing endplate. The colors visualize the complex flow a fraction of a millimeter away from the wing's surface.
Photo credit: KAUST

In Formula One race car design, the manipulation of airflow around the car is the most important factor in performance. A 1% gain in aerodynamics performance can mean the difference between first place and a forgotten finish, which is why teams employ hundreds of people and spend millions of dollars perfecting this manipulation.

Of special interest is the design of the front wing endplate, which is critical for the drag and lift of the car. Dr. Matteo Parsani, associate professor of applied mathematics and computational science at King Abdullah University of Science and Technology (KAUST), has led a multidisciplinary team of scientists and engineers to simulate and 3D color print the flow solution over the McLaren 17D Formula One front wing endplate. The work is the result of a massive high-performance computing simulation, with contributing expertise from research scientist Dr. Lisandro Dalcin of the KAUST Extreme Computing Research Center (ECRC), directed by Dr. David Keyes, as well as the Advanced Algorithm and Numerical Simulations Lab (AANSLab) and the Prototyping and Product Development Core Lab (PCL).

Wednesday, September 7, 2022

As threats to the U.S. power grid surge

WVU Lane Department of Computer Science and Electrical Engineering students Partha Sarker, Paroma Chatterjee and Jannatul Adan discuss a power grid simulation project led by Anurag Srivastava, professor and department chair, in the GOLab.
Photo credit: Brian Persinger | WVU

The electrical grid faces a mounting barrage of threats that could trigger a butterfly effect – floods, superstorms, heat waves, cyberattacks, not to mention its own ballooning complexity and size – that the nation is unprepared to handle, according to one West Virginia University scientist.

But Anurag Srivastava, professor and chair of the Lane Department of Computer Science and Electrical Engineering, has plans to prevent and respond to potential power grid failures, thanks to a pair of National Science Foundation-funded research projects.

“In the grid, we have the butterfly effect,” Srivastava said. “This means that if a butterfly flutters its wings in Florida, that will cause a windstorm in Connecticut because things are synchronously connected, like dominos. In the power grid, states like Florida, Connecticut, Illinois and West Virginia are all part of the eastern interconnection and linked together.

“If a big event happens in the Deep South, it is going to cause a problem up north. To stop that, we need to detect the problem area as soon as possible and gracefully separate that part out so the disturbance does not propagate through the whole.”

Monday, August 1, 2022

Artificial Intelligence Edges Closer to the Clinic

TransMED can help predict the outcomes of COVID-19 patients, generating predictions from different kinds of clinical data, including clinical notes, laboratory tests, diagnosis codes and prescribed drugs. TransMED is also unique in its ability to transfer learning from existing diseases to better predict and reason about the progression of new and rare diseases.
Credit: Shannon Colson | Pacific Northwest National Laboratory

The beginning of the COVID-19 pandemic presented a huge challenge to healthcare workers. Doctors struggled to predict how different patients would fare under treatment against the novel SARS-CoV-2 virus. Deciding how to triage medical resources when presented with very little information took a mental and physical toll on caregivers as the pandemic progressed.

To ease this burden, researchers at Pacific Northwest National Laboratory (PNNL), Stanford University, Virginia Tech, and John Snow Labs developed TransMED, a first-of-its-kind artificial intelligence (AI) prediction tool aimed at addressing issues caused by emerging or rare diseases.

“As COVID-19 unfolded over 2020, it brought a number of us together into thinking how and where we could contribute meaningfully,” said chief scientist Sutanay Choudhury. “We decided we could make the most impact if we worked on the problem of predicting patient outcomes.”

“COVID presented a unique challenge,” said Khushbu Agarwal, lead author of the study published in Nature Scientific Reports. “We had very limited patient data for training an AI model that could learn the complex patterns underlying COVID patient trajectories.”

The multi-institutional team developed TransMED to address this challenge, analyzing data from existing diseases to predict outcomes of an emerging disease.
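The article does not give implementation details, but the transfer-learning recipe it describes — pretrain on plentiful records from existing diseases, then adapt to the small amount of data available for a new disease — can be sketched roughly as follows. The class and variable names are illustrative, not taken from TransMED.

# Hypothetical sketch of transfer learning for patient-outcome prediction.
# Names (OutcomeModel, n_features, etc.) are illustrative, not from TransMED.
import torch
import torch.nn as nn

class OutcomeModel(nn.Module):
    def __init__(self, n_features, n_outcomes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.head = nn.Linear(64, n_outcomes)

    def forward(self, x):
        return self.head(self.encoder(x))

model = OutcomeModel(n_features=200, n_outcomes=3)

# 1) Pretrain on abundant records from existing diseases (e.g., pneumonia, sepsis).
# 2) Freeze the shared encoder and fine-tune only the outcome head on the
#    limited COVID-19 cohort, so the scarce new-disease data is not overfit.
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)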

Wednesday, July 27, 2022

New sensing platform deployed at controlled burn site, could help prevent forest fires

Argonne scientists conduct a controlled burn on the Konza prairie in Kansas using the Sage monitoring system. 
Credit: Rajesh Sankaran/Argonne National Laboratory.

Smokey Bear has lots of great tips about preventing forest fires. But how do you stop one that’s started before it gets out of control? The answer may lie in pairing multichannel sensing with advanced computing technologies provided by a new platform called Sage.

Sage offers a one-of-a-kind combination: multiple types of sensors paired with computing "at the edge" and embedded machine learning algorithms, which together let scientists process the enormous amounts of data generated in the field without having to transfer it all back to the laboratory. Computing "at the edge" means that data is processed where it is collected, in the field, while machine learning algorithms are computer programs that train themselves to recognize patterns.
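In software terms, the edge pattern described above is straightforward: run a small model right next to the sensor and transmit only compact results. A hypothetical sketch of such a loop follows; the function names and the 0.9 threshold are illustrative and are not part of Sage.

# Hypothetical edge-computing loop -- illustrative only, not Sage's actual API.
import random
import time

def read_camera_frame():
    return object()            # stand-in for a frame from a field camera

def smoke_probability(frame):
    return random.random()     # stand-in for a small on-device ML model

def send_alert(summary):
    print("alert:", summary)   # only a few bytes go upstream, not raw video

while True:
    frame = read_camera_frame()
    p = smoke_probability(frame)   # inference happens in the field
    if p > 0.9:
        send_alert({"event": "possible_smoke", "confidence": round(p, 2)})
    time.sleep(10)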

Sage is funded by the National Science Foundation and developed by the Northwestern-Argonne Institute for Science and Engineering (NAISE), a partnership between Northwestern University and the U.S. Department of Energy’s Argonne National Laboratory.

Thursday, June 23, 2022

Robots play with play dough


The inner child in many of us feels an overwhelming sense of joy when stumbling across a pile of the fluorescent, rubbery mixture of water, salt, and flour that put goo on the map: play dough. (Even if this happens rarely in adulthood.)

While manipulating play dough is fun and easy for 2-year-olds, the shapeless sludge is hard for robots to handle. Machines have become increasingly reliable with rigid objects, but manipulating soft, deformable objects comes with a laundry list of technical challenges, and most importantly, as with most flexible structures, if you move one part, you’re likely affecting everything else.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stanford University recently let robots take their hand at playing with the modeling compound, but not for nostalgia’s sake. Their new system learns directly from visual inputs to let a robot with a two-fingered gripper see, simulate, and shape doughy objects. “RoboCraft” could reliably plan a robot’s behavior to pinch and release play dough to make various letters, including ones it had never seen. With just 10 minutes of data, the two-finger gripper rivaled human counterparts that teleoperated the machine — performing on-par, and at times even better, on the tested tasks.

Wednesday, June 22, 2022

Where Once Were Black Boxes, NIST’s New LANTERN Illuminates

How do you figure out how to alter a gene so that it makes a usefully different protein? The job might be imagined as interacting with a complex machine (at left) that sports a vast control panel filled with thousands of unlabeled switches, which all affect the device’s output somehow. A new tool called LANTERN figures out which sets of switches — rungs on the gene’s DNA ladder — have the largest effect on a given attribute of the protein. It also summarizes how the user can tweak that attribute to achieve a desired effect, essentially transmuting the many switches on our machine’s panel into another machine (at right) with just a few simple dials.
Credit: B. Hayes/NIST

Researchers at the National Institute of Standards and Technology (NIST) have developed a new statistical tool that they have used to predict protein function. Not only could it help with the difficult job of altering proteins in practically useful ways, but it also works by methods that are fully interpretable — an advantage over the conventional artificial intelligence (AI) that has aided with protein engineering in the past.

The new tool, called LANTERN, could prove useful in work ranging from producing biofuels to improving crops to developing new disease treatments. Proteins, as building blocks of biology, are a key element in all these tasks. But while it is comparatively easy to make changes to the strand of DNA that serves as the blueprint for a given protein, it remains challenging to determine which specific base pairs — rungs on the DNA ladder — are the keys to producing a desired effect. Finding these keys has been the purview of AI built of deep neural networks (DNNs), which, though effective, are notoriously opaque to human understanding.
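In broad strokes — this is a schematic reading of the description above, not NIST's code — LANTERN's appeal is that it compresses thousands of mutation "switches" into a handful of interpretable dials and then maps those dials smoothly to the measured protein property. A toy sketch of that structure, with invented dimensions and a placeholder surface:

# Schematic only -- not NIST's LANTERN implementation.
import numpy as np

n_mutations, n_latent = 2000, 3          # thousands of switches -> a few dials
W = np.random.randn(n_mutations, n_latent) * 0.01   # the real method learns this from data

def latent_dials(mutation_vector):
    # Compress a 0/1 vector of mutations into a few interpretable coordinates.
    return mutation_vector @ W

def predicted_property(mutation_vector, surface):
    # A smooth surface (fit to measurements in the real method) maps dials to the trait.
    return surface(latent_dials(mutation_vector))

variant = np.zeros(n_mutations)
variant[[12, 857]] = 1.0                 # a variant carrying two mutations
print(predicted_property(variant, surface=lambda z: float(np.sum(z ** 2))))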

Thursday, June 16, 2022

Can computers understand complex words and concepts?

 A depiction of semantic projection, which can determine the similarity between two words in a specific context. This grid shows how similar certain animals are based on their size.
Credit: Idan Blank/UCLA 

In “Through the Looking Glass,” Humpty Dumpty says scornfully, “When I use a word, it means just what I choose it to mean — neither more nor less.” Alice replies, “The question is whether you can make words mean so many different things.”

The study of what words really mean is ages old. The human mind must parse a web of detailed, flexible information and use sophisticated common sense to perceive words' meanings.

Now, a newer problem related to the meaning of words has emerged: Scientists are studying whether artificial intelligence can mimic the human mind to understand words the way people do. A new study by researchers at UCLA, MIT and the National Institutes of Health addresses that question.

The paper, published in the journal Nature Human Behavior, reports that artificial intelligence systems can indeed learn very complicated word meanings, and the scientists discovered a simple trick to extract that complex knowledge. They found that the AI system they studied represents the meanings of words in a way that strongly correlates with human judgment.

The AI system the authors investigated has been frequently used in the past decade to study word meaning. It learns to figure out word meanings by “reading” astronomical amounts of content on the internet, encompassing tens of billions of words.
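The figure caption above refers to "semantic projection": scoring words along a human-interpretable axis, such as animal size, by projecting their vectors onto a direction defined by anchor words. A minimal sketch of the trick follows; the vectors here are random placeholders rather than the embeddings used in the study.

# Minimal sketch of semantic projection -- placeholder vectors, not the study's embeddings.
import numpy as np

rng = np.random.default_rng(0)
embed = {w: rng.normal(size=300) for w in
         ["large", "small", "elephant", "mouse", "horse"]}

# Define a "size" axis as the difference between embeddings of opposing anchor words,
# then score each animal by projecting its vector onto that axis.
size_axis = embed["large"] - embed["small"]
size_axis /= np.linalg.norm(size_axis)

for animal in ["elephant", "mouse", "horse"]:
    print(animal, float(embed[animal] @ size_axis))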

Researchers develop the world's first ultra-fast photonic computing processor using polarization

Photonic computing processor chip
Credit: June Sang Lee

New research uses multiple polarization channels to carry out parallel processing – enhancing computing density by several orders of magnitude over conventional electronic chips.

In a paper published in Science Advances, researchers at the University of Oxford have developed a method using the polarization of light to maximize information storage density and computing performance using nanowires.

Light has an exploitable property – different wavelengths of light do not interact with each other – a characteristic used by fiber optics to carry parallel streams of data. Similarly, different polarizations of light do not interact with each other either. Each polarization can be used as an independent information channel, enabling more information to be stored in multiple channels and hugely enhancing information density.

First author and DPhil student June Sang Lee, Department of Materials, University of Oxford said: ‘We all know that the advantage of photonics over electronics is that light is faster and more functional over large bandwidths. So, our aim was to fully harness such advantages of photonics combining with tunable material to realize faster and denser information processing.’

Monday, June 13, 2022

AI platform enables doctors to optimize personalized chemotherapy dose

Research team behind the PRECISE.CURATE trial (from left) Prof Dean Ho, Dr Agata Blasiak, Dr Raghav Sundar, Ms Anh Truong
Credit/Source: National University of Singapore

Based on a pilot clinical trial, close to 97% of dose recommendations by CURATE.AI were accepted by clinicians; some patients were prescribed optimal doses that were around 20% lower on average

A team of researchers from the National University of Singapore (NUS), in collaboration with clinicians from the National University Cancer Institute, Singapore (NCIS), which is part of the National University Health System (NUHS), has reported promising results with CURATE.AI, an artificial intelligence (AI) tool that helps clinicians identify optimal, personalized chemotherapy doses for patients.

Based on a pilot clinical trial – called PRECISE.CURATE – involving 10 patients in Singapore who were diagnosed with advanced solid tumors, predominantly metastatic colorectal cancers, clinicians accepted close to 97% of doses recommended by CURATE.AI, with some patients receiving optimal doses that were approximately 20% lower on average. These early outcomes are a promising step toward truly personalized oncology, where drug doses can be adjusted dynamically during treatment.

Developed by Professor Dean Ho and his team, CURATE.AI is an optimization platform that harnesses a patient’s clinical data, which includes drug type, drug dose and cancer biomarkers, to generate an individualized digital profile which is used to customize the optimal dose during the course of chemotherapy treatment.
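The article does not give the model's exact form, but published CURATE.AI work fits a small, patient-specific dose-response curve from that individual's own treatment history and then reads the recommended dose off it. The sketch below illustrates that general idea with a simple quadratic fit, made-up numbers, and a made-up biomarker target; it is not the clinical tool.

# Hedged sketch of an individualized dose-response fit -- illustrative numbers only.
import numpy as np

# One patient's own history: chemotherapy dose (mg) vs. measured tumor biomarker.
doses      = np.array([400.0, 500.0, 600.0])
biomarkers = np.array([62.0, 48.0, 41.0])

# Fit a small patient-specific curve (quadratic here) from only this patient's data.
coeffs = np.polyfit(doses, biomarkers, deg=2)

# Recommend the candidate dose whose predicted biomarker is closest to a clinical target.
candidates = np.arange(300.0, 700.0, 10.0)
target = 45.0
predicted = np.polyval(coeffs, candidates)
print("suggested dose:", candidates[np.argmin(np.abs(predicted - target))], "mg")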

Monday, June 6, 2022

Hallucinating to better text translation

Source/Credit: MIT
As babies, we babble and imitate our way to learning languages. We don’t start off reading raw text, which requires fundamental knowledge and understanding about the world, as well as the advanced ability to interpret and infer descriptions and relationships. Rather, humans begin our language journey slowly, by pointing and interacting with our environment, basing our words and perceiving their meaning through the context of the physical and social world. Eventually, we can craft full sentences to communicate complex ideas.

Similarly, when humans begin learning and translating into another language, the incorporation of other sensory information, like multimedia, paired with the new and unfamiliar words, like flashcards with images, improves language acquisition and retention. Then, with enough practice, humans can accurately translate new, unseen sentences in context without the accompanying media; however, imagining a picture based on the original text helps.

This is the basis of a new machine learning model, called VALHALLA, by researchers from MIT, IBM, and the University of California at San Diego, in which a trained neural network sees a source sentence in one language, hallucinates an image of what it looks like, and then uses both to translate into a target language. The team found that their method demonstrates improved accuracy of machine translation over text-only translation. Further, it provided an additional boost for cases with long sentences, under-resourced languages, and instances where part of the source sentence is inaccessible to the machine translator.
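Schematically — this is a reading of the description above, not the authors' code — VALHALLA has three stages at inference time: encode the source sentence, hallucinate discrete visual tokens from the text alone, and decode the target sentence from both streams. The function names below are placeholders.

# Schematic of the VALHALLA-style pipeline described above -- not the authors' code.
def encode_text(source_sentence):
    ...  # text encoder over the source-language tokens

def hallucinate_image(text_state):
    ...  # autoregressively predict discrete visual tokens from the text alone

def decode_translation(text_state, visual_tokens):
    ...  # multimodal decoder attends to both streams to emit the target sentence

def translate(source_sentence):
    text_state = encode_text(source_sentence)
    visual_tokens = hallucinate_image(text_state)   # no real photo needed at test time
    return decode_translation(text_state, visual_tokens)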

Friday, June 3, 2022

Great timing, supercomputer upgrade led to successful forecast of volcanic eruption

Former Illinois graduate student Yan Zhan, left, professor Patricia Gregg and research professor Seid Koric led a team that produced the fortuitous forecast of the 2018 Sierra Negra volcanic eruption five months before it occurred. 
Photo by Michelle Hassel

In the fall of 2017, geology professor Patricia Gregg and her team had just set up a new volcanic forecasting modeling program on the Blue Waters and iForge supercomputers. Simultaneously, another team was monitoring activity at the Sierra Negra volcano in the Galapagos Islands, Ecuador. One of the scientists on the Ecuador project, Dennis Geist of Colgate University, contacted Gregg, and what happened next was the fortuitous forecast of the June 2018 Sierra Negra eruption five months before it occurred.

Initially developed on an iMac computer, the new modeling approach had already garnered attention for successfully recreating the unexpected eruption of Alaska’s Okmok volcano in 2008. Gregg’s team, based out of the University of Illinois Urbana-Champaign and the National Center for Supercomputing Applications, wanted to test the model’s new high-performance computing upgrade, and Geist’s Sierra Negra observations showed signs of an imminent eruption.

“Sierra Negra is a well-behaved volcano,” said Gregg, the lead author of a new report of the successful effort. “Meaning that, before eruptions in the past, the volcano has shown all the telltale signs of an eruption that we would expect to see like groundswell, gas release and increased seismic activity. This characteristic made Sierra Negra a great test case for our upgraded model.”

Monday, May 30, 2022

Frontier supercomputer debuts as world’s fastest, breaking exascale barrier


The Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory earned the top ranking today as the world’s fastest on the 59th TOP500 list, with 1.1 exaflops of performance. The system is the first to achieve an unprecedented level of computing performance known as exascale, a threshold of a quintillion calculations per second.

Frontier features a theoretical peak performance of 2 exaflops, or two quintillion calculations per second, making it ten times more powerful than ORNL’s Summit system. The system leverages ORNL’s extensive expertise in accelerated computing and will enable scientists to develop critically needed technologies for the country’s energy, economic and national security, helping researchers address problems of national importance that were impossible to solve just five years ago.
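To put "exascale" in perspective, a quick back-of-the-envelope comparison helps; the laptop figure below is an assumed round number (about 100 gigaflops), not a benchmark.

# Back-of-the-envelope scale comparison; the laptop speed is an assumed round number.
frontier_flops = 1.1e18          # 1.1 exaflops, the measured TOP500 result
laptop_flops   = 1e11            # assume ~100 gigaflops for an ordinary laptop

seconds = frontier_flops / laptop_flops
print(f"{seconds:.2e} laptop-seconds per Frontier-second")       # 1.1e7 s
print(f"= about {seconds / 86400:.0f} days of laptop time")      # ~127 days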

“Frontier is ushering in a new era of exascale computing to solve the world’s biggest scientific challenges,” ORNL Director Thomas Zacharia said. “This milestone offers just a preview of Frontier’s unmatched capability as a tool for scientific discovery. It is the result of more than a decade of collaboration among the national laboratories, academia and private industry, including DOE’s Exascale Computing Project, which is deploying the applications, software technologies, hardware and integration necessary to ensure impact at the exascale.”

Friday, May 27, 2022

Same symptom – different cause?

Head of the LipiTUM research group Dr. Josch Konstantin Pauling (left) and PhD student Nikolai Köhler (right) interpret the disease-related changes in lipid metabolism using a newly developed network.
Credit: LipiTUM

Machine learning is playing an ever-increasing role in biomedical research. Scientists at the Technical University of Munich (TUM) have now developed a new method of using molecular data to extract subtypes of illnesses. In the future, this method can help to support the study of larger patient groups.

Nowadays doctors define and diagnose most diseases on the basis of symptoms. However, that does not necessarily mean that the illnesses of patients with similar symptoms will have identical causes or demonstrate the same molecular changes. In biomedicine, one often speaks of the molecular mechanisms of a disease. This refers to changes in the regulation of genes, proteins or metabolic pathways at the onset of illness. The goal of stratified medicine is to classify patients into various subtypes at the molecular level in order to provide more targeted treatments.

New machine learning algorithms can help extract disease subtypes from large pools of patient data. They are designed to independently recognize patterns and correlations in extensive clinical measurements. The LipiTUM junior research group, headed by Dr. Josch Konstantin Pauling of the Chair for Experimental Bioinformatics, has developed an algorithm for this purpose.
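The group's own method is network-based and is detailed in their publication; purely as a generic illustration of the underlying idea — letting an algorithm group patients by molecular measurements rather than by symptoms — here is a minimal clustering sketch on synthetic data. It is not the LipiTUM algorithm.

# Generic illustration of molecular patient stratification -- not the LipiTUM method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic "lipidomics" table: 100 patients x 40 measured lipid species.
measurements = rng.normal(size=(100, 40))
measurements[:50, :5] += 2.0          # one hidden molecular subtype

X = StandardScaler().fit_transform(measurements)
subtype = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("patients per candidate subtype:", np.bincount(subtype))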

Friday, May 20, 2022

Neuromorphic Memory Device Simulates Neurons and Synapses​

A neuromorphic memory device consisting of bottom volatile and top nonvolatile memory layers emulating neuronal and synaptic properties, respectively
Credit: KAIST

Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward completing the goal of neuromorphic computing designed to rigorously mimic the human brain with semiconductor devices.

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of neurons and synapses that make up the human brain. Inspired by the cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated. However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses still remains a challenge. To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms of humans by introducing the neuron-synapse interactions in a single memory cell, rather than the conventional approach of electrically connecting artificial neuronal and synaptic devices.

Artificial intelligence predicts patients’ race from their medical images

Researchers demonstrated that medical AI systems can easily learn to recognize racial identity in medical images, and that this capability is extremely difficult to isolate or mitigate.
 Credit: Massachusetts Institute of Technology

The miseducation of algorithms is a critical problem; when artificial intelligence mirrors unconscious thoughts, racism, and biases of the humans who generated these algorithms, it can lead to serious harm. Computer programs, for example, have wrongly flagged Black defendants as twice as likely to reoffend as someone who’s white. When an AI used cost as a proxy for health needs, it falsely named Black patients as healthier than equally sick white ones, as less money was spent on them. Even AI used to write a play relied on using harmful stereotypes for casting.

Removing sensitive features from the data seems like a viable tweak. But what happens when it’s not enough?

Wednesday, May 18, 2022

A component for brain-inspired computing

Scientists aim to perform machine-learning tasks more efficiently with processors that emulate the working principles of the human brain.
Image: Unsplash

Researchers from ETH Zurich, Empa and the University of Zurich have developed a new material for an electronic component that can be used in a wider range of applications than its predecessors. Such components will help create electronic circuits that emulate the human brain and that are more efficient than conventional computers at performing machine-learning tasks.

Compared with computers, the human brain is incredibly energy-efficient. Scientists are therefore drawing on how the brain and its interconnected neurons function for inspiration in designing innovative computing technologies. They foresee that these brain-inspired computing systems will be more energy-efficient than conventional ones, as well as better at performing machine-learning tasks.

Much like neurons, which are responsible for both data storage and data processing in the brain, scientists want to combine storage and processing in a single type of electronic component, known as a memristor. Their hope is that this will help to achieve greater efficiency because moving data between the processor and the storage, as conventional computers do, is the main reason for the high energy consumption in machine-learning applications.

Tuesday, May 17, 2022

New Approach Allows for Faster Ransomware Detection

Photo credit: Michael Geiger

Engineering researchers have developed a new approach for implementing ransomware detection techniques, allowing them to detect a broad range of ransomware far more quickly than previous systems.

Ransomware is a type of malware. When a system is infiltrated by ransomware, the ransomware encrypts that system’s data – making the data inaccessible to users. The people responsible for the ransomware then extort the affected system’s operators, demanding money from the users in exchange for granting them access to their own data.
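The researchers' specific detection technique is not described in this excerpt. One generic signal that many detectors watch for, though, is the sudden appearance of high-entropy (encrypted-looking) data being written to disk, since freshly encrypted files have nearly uniform byte distributions. The sketch below illustrates only that generic entropy check, with an arbitrary 7.5-bit threshold.

# Generic entropy check on written data -- an illustration of one common detection
# signal, not the technique developed by the researchers in this article.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plaintext = b"quarterly report draft " * 100
ciphertext_like = bytes(range(256)) * 8    # near-uniform bytes, like encrypted output

for label, blob in [("plaintext", plaintext), ("encrypted-looking", ciphertext_like)]:
    e = shannon_entropy(blob)
    print(label, round(e, 2), "suspicious" if e > 7.5 else "ok")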

Ransomware extortion is hugely expensive, and instances of ransomware extortion are on the rise. The FBI reports receiving 3,729 ransomware complaints in 2021, with costs of more than $49 million. What’s more, 649 of those complaints were from organizations classified as critical infrastructure.
