Scientific Frontline: Computer Science
Showing posts with label Computer Science.

Monday, August 1, 2022

Artificial Intelligence Edges Closer to the Clinic

TransMED can help predict the outcomes of COVID-19 patients, generating predictions from different kinds of clinical data, including clinical notes, laboratory tests, diagnosis codes and prescribed drugs. TransMED is also unique in its ability to transfer what it learns from existing diseases to better predict and reason about the progression of new and rare diseases.
Credit: Shannon Colson | Pacific Northwest National Laboratory

The beginning of the COVID-19 pandemic presented a huge challenge to healthcare workers. Doctors struggled to predict how different patients would fare under treatment against the novel SARS-CoV-2 virus. Deciding how to triage medical resources when presented with very little information took a mental and physical toll on caregivers as the pandemic progressed.

To ease this burden, researchers at Pacific Northwest National Laboratory (PNNL), Stanford University, Virginia Tech, and John Snow Labs developed TransMED, a first-of-its-kind artificial intelligence (AI) prediction tool aimed at addressing issues caused by emerging or rare diseases.

“As COVID-19 unfolded over 2020, it brought a number of us together into thinking how and where we could contribute meaningfully,” said chief scientist Sutanay Choudhury. “We decided we could make the most impact if we worked on the problem of predicting patient outcomes.”

“COVID presented a unique challenge,” said Khushbu Agarwal, lead author of the study published in Nature Scientific Reports. “We had very limited patient data for training an AI model that could learn the complex patterns underlying COVID patient trajectories.”

The multi-institutional team developed TransMED to address this challenge, analyzing data from existing diseases to predict outcomes of an emerging disease.
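
To make the transfer-learning idea concrete, here is a minimal sketch in Python using entirely synthetic feature matrices and outcome labels: a simple classifier is pretrained on plentiful records from existing diseases and then warm-started on a small COVID-19 cohort. The train_logreg helper and all data are illustrative assumptions; this is not TransMED's actual model.

```python
# Illustrative transfer learning for patient-outcome prediction (not TransMED itself).
# Pretrain a simple logistic-regression model on plentiful records from existing
# diseases, then fine-tune the same weights on a small COVID-19 cohort.
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Gradient-descent logistic regression; reuses w for warm-started fine-tuning."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Hypothetical data: 5,000 patients with existing diseases, 100 COVID-19 patients.
X_pre, y_pre = rng.normal(size=(5000, 20)), rng.integers(0, 2, 5000)
X_covid, y_covid = rng.normal(size=(100, 20)), rng.integers(0, 2, 100)

w = train_logreg(X_pre, y_pre)                      # learn general clinical patterns
w = train_logreg(X_covid, y_covid, w=w, lr=0.01)    # adapt to the new, data-poor disease
print("fine-tuned weights (first three):", w[:3])
```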

Wednesday, July 27, 2022

New sensing platform deployed at controlled burn site, could help prevent forest fires

Argonne scientists conduct a controlled burn on the Konza prairie in Kansas using the Sage monitoring system. 
Resized Image using AI by SFLORG
Credit: Rajesh Sankaran/Argonne National Laboratory.

Smokey Bear has lots of great tips about preventing forest fires. But how do you stop one that’s started before it gets out of control? The answer may lie in pairing multichannel sensing with advanced computing technologies provided by a new platform called Sage.

Sage offers a one-of-a-kind combination: multiple types of sensors paired with computing “at the edge” and embedded machine learning algorithms, which together let scientists process the enormous amounts of data generated in the field without having to transfer it all back to the laboratory. Computing “at the edge” means that data is processed where it is collected, in the field, while machine learning algorithms are computer programs that train themselves to recognize patterns.
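
To picture what processing “at the edge” buys, here is a minimal sketch with made-up pixel values and a stand-in for the embedded model: raw sensor frames are reduced on the device to a short list of likely smoke events, so only those summaries, not the raw data, need to travel back to the laboratory. Nothing here is Sage's actual software.

```python
# Illustrative edge processing (not the actual Sage pipeline): reduce raw sensor
# readings to compact event summaries on-device so only kilobytes, not gigabytes,
# travel back to the lab.
def smoke_score(frame):
    # Stand-in for an embedded ML model; here, just the fraction of "hazy" pixels.
    return sum(1 for px in frame if px > 200) / len(frame)

def process_at_edge(frames, threshold=0.3):
    events = []
    for t, frame in enumerate(frames):
        score = smoke_score(frame)
        if score > threshold:              # transmit only likely smoke events
            events.append({"t": t, "score": round(score, 2)})
    return events

frames = [[10, 20, 30, 250, 240], [5, 10, 20, 30, 40], [210, 230, 250, 240, 220]]
print(process_at_edge(frames))   # -> [{'t': 0, 'score': 0.4}, {'t': 2, 'score': 1.0}]
```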

Sage is funded by the National Science Foundation and developed by the Northwestern-Argonne Institute for Science and Engineering (NAISE), a partnership between Northwestern University and the U.S. Department of Energy’s Argonne National Laboratory.

Thursday, June 23, 2022

Robots play with play dough


The inner child in many of us feels an overwhelming sense of joy when stumbling across a pile of the fluorescent, rubbery mixture of water, salt, and flour that put goo on the map: play dough. (Even if this happens rarely in adulthood.)

While manipulating play dough is fun and easy for 2-year-olds, the shapeless sludge is hard for robots to handle. Machines have become increasingly reliable with rigid objects, but manipulating soft, deformable objects comes with a laundry list of technical challenges. Most importantly, as with most flexible structures, moving one part is likely to affect everything else.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stanford University recently let robots try their hand at playing with the modeling compound, but not for nostalgia’s sake. Their new system learns directly from visual inputs to let a robot with a two-fingered gripper see, simulate, and shape doughy objects. “RoboCraft” could reliably plan a robot’s behavior to pinch and release play dough to make various letters, including ones it had never seen. With just 10 minutes of data, the two-finger gripper rivaled human counterparts who teleoperated the machine, performing on par with them, and at times even better, on the tested tasks.
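
For readers curious what “learning to simulate and shape” can look like in code, here is a deliberately tiny sketch of a learn-a-dynamics-model-then-plan loop, with a 1-D particle “dough”, a hand-written stand-in for the learned dynamics, and a random-sampling planner. None of the functions or constants come from RoboCraft itself.

```python
# Illustrative "learn dynamics, then plan" loop in the spirit of RoboCraft
# (placeholder physics, not the published model).
import numpy as np

rng = np.random.default_rng(1)

def predict(particles, pinch):
    # Stand-in learned dynamics: a pinch at location x pulls nearby particles toward x.
    d = pinch - particles
    return particles + 0.5 * d * np.exp(-np.abs(d))

def cost(particles, target):
    # How far the current shape is from the desired shape.
    return float(np.mean((np.sort(particles) - np.sort(target)) ** 2))

def plan(particles, target, n_candidates=50):
    """Sample candidate pinch locations and keep the one the model says helps most."""
    best = None
    for pinch in rng.uniform(-1, 1, n_candidates):
        c = cost(predict(particles, pinch), target)
        if best is None or c < best[0]:
            best = (c, pinch)
    return best[1]

dough = rng.uniform(-1, 1, 20)          # 1-D "dough" particle positions
target = np.linspace(-0.2, 0.2, 20)     # desired compact shape
for _ in range(10):                      # plan a pinch, execute it, repeat
    dough = predict(dough, plan(dough, target))
print("final shaping cost:", round(cost(dough, target), 4))
```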

Wednesday, June 22, 2022

Where Once Were Black Boxes, NIST’s New LANTERN Illuminates

How do you figure out how to alter a gene so that it makes a usefully different protein? The job might be imagined as interacting with a complex machine (at left) that sports a vast control panel filled with thousands of unlabeled switches, which all affect the device’s output somehow. A new tool called LANTERN figures out which sets of switches — rungs on the gene’s DNA ladder — have the largest effect on a given attribute of the protein. It also summarizes how the user can tweak that attribute to achieve a desired effect, essentially transmuting the many switches on our machine’s panel into another machine (at right) with just a few simple dials.
Credit: B. Hayes/NIST

Researchers at the National Institute of Standards and Technology (NIST) have developed a new statistical tool that they have used to predict protein function. Not only could it help with the difficult job of altering proteins in practically useful ways, but it also works by methods that are fully interpretable — an advantage over the conventional artificial intelligence (AI) that has aided with protein engineering in the past.

The new tool, called LANTERN, could prove useful in work ranging from producing biofuels to improving crops to developing new disease treatments. Proteins, as building blocks of biology, are a key element in all these tasks. But while it is comparatively easy to make changes to the strand of DNA that serves as the blueprint for a given protein, it remains challenging to determine which specific base pairs — rungs on the DNA ladder — are the keys to producing a desired effect. Finding these keys has been the purview of AI built of deep neural networks (DNNs), which, though effective, are notoriously opaque to human understanding.
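
As a rough illustration of what an interpretable alternative to a DNN can look like (not LANTERN's actual model), the sketch below fits one weight per mutation site to synthetic variant data and reports which sites carry the largest estimated effect, the kind of “which switches matter most” summary described above. The data, the ridge penalty, and the chosen site indices are all made up.

```python
# Sketch of an interpretable mutation-to-attribute model (not LANTERN itself):
# given measurements of a protein attribute for many variants, estimate which
# mutation "switches" carry the largest effect.
import numpy as np

rng = np.random.default_rng(2)
n_variants, n_sites = 500, 200
X = rng.integers(0, 2, size=(n_variants, n_sites)).astype(float)  # 0/1 mutation flags
true_w = np.zeros(n_sites)
true_w[[5, 42, 117]] = [2.0, -1.5, 1.0]                # only three sites really matter
y = X @ true_w + rng.normal(0, 0.1, n_variants)        # measured protein attribute

# Ridge regression: one human-readable weight per site.
lam = 1.0
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_sites), X.T @ y)
top = np.argsort(-np.abs(w_hat))[:3]
print("sites with the largest estimated effect:", top, np.round(w_hat[top], 2))
```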

Thursday, June 16, 2022

Can computers understand complex words and concepts?

 A depiction of semantic projection, which can determine the similarity between two words in a specific context. This grid shows how similar certain animals are based on their size.
Credit: Idan Blank/UCLA 

In “Through the Looking Glass,” Humpty Dumpty says scornfully, “When I use a word, it means just what I choose it to mean — neither more nor less.” Alice replies, “The question is whether you can make words mean so many different things.”

The study of what words really mean is ages old. To perceive a word’s meaning, the human mind must parse a web of detailed, flexible information and apply sophisticated common sense.

Now, a newer problem related to the meaning of words has emerged: Scientists are studying whether artificial intelligence can mimic the human mind to understand words the way people do. A new study by researchers at UCLA, MIT and the National Institutes of Health addresses that question.

The paper, published in the journal Nature Human Behaviour, reports that artificial intelligence systems can indeed learn very complicated word meanings, and the scientists discovered a simple trick to extract that complex knowledge. They found that the AI system they studied represents the meanings of words in a way that strongly correlates with human judgment.

The AI system the authors investigated has been frequently used in the past decade to study word meaning. It learns to figure out word meanings by “reading” astronomical amounts of content on the internet, encompassing tens of billions of words.
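
The figure caption above mentions “semantic projection”. A minimal sketch of that trick, using made-up 3-D vectors in place of real pretrained embeddings, projects each animal's vector onto an axis running from “small” to “large” to read off a context-specific size score.

```python
# Minimal sketch of semantic projection with toy vectors: project word embeddings
# onto an axis defined by antonyms (small -> large) to recover a context-specific
# attribute such as animal size. Real studies use pretrained embeddings trained on
# billions of words; these 3-D vectors are invented purely for illustration.
import numpy as np

emb = {
    "small":    np.array([ 1.0, 0.1, 0.0]),
    "large":    np.array([-1.0, 0.2, 0.1]),
    "mouse":    np.array([ 0.9, 0.8, 0.3]),
    "dog":      np.array([ 0.1, 0.7, 0.4]),
    "elephant": np.array([-0.9, 0.6, 0.5]),
}

size_axis = emb["large"] - emb["small"]
size_axis /= np.linalg.norm(size_axis)          # unit vector for the "size" context

for animal in ("mouse", "dog", "elephant"):
    score = float(emb[animal] @ size_axis)      # position along the small->large axis
    print(f"{animal:9s} size score: {score:+.2f}")
```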

Researchers develop the world's first ultra-fast photonic computing processor using polarization

Photonic computing processor chip
Credit: June Sang Lee

New research uses multiple polarization channels to carry out parallel processing, enhancing computing density by several orders of magnitude over conventional electronic chips.

In a paper published in Science Advances, researchers at the University of Oxford have developed a method using the polarization of light to maximize information storage density and computing performance using nanowires.

Light has an exploitable property: different wavelengths of light do not interact with each other, a characteristic used by fiber optics to carry parallel streams of data. Similarly, different polarizations of light do not interact with each other either. Each polarization can be used as an independent information channel, enabling more information to be stored in multiple channels and hugely enhancing information density.

First author and DPhil student June Sang Lee, Department of Materials, University of Oxford said: ‘We all know that the advantage of photonics over electronics is that light is faster and more functional over large bandwidths. So, our aim was to fully harness such advantages of photonics combining with tunable material to realize faster and denser information processing.’

Monday, June 13, 2022

AI platform enables doctors to optimize personalized chemotherapy dose

Research team behind the PRECISE.CURATE trial (from left) Prof Dean Ho, Dr Agata Blasiak, Dr Raghav Sundar, Ms Anh Truong
Credit/Source: National University of Singapore

Based on a pilot clinical trial, close to 97% of dose recommendations by CURATE.AI were accepted by clinicians; some patients were prescribed optimal doses that were around 20% lower on average

A team of researchers from the National University of Singapore (NUS), in collaboration with clinicians from the National University Cancer Institute, Singapore (NCIS), which is part of the National University Health System (NUHS), has reported promising results using CURATE.AI, an artificial intelligence (AI) tool that helps clinicians identify optimal and personalized chemotherapy doses for patients.

Based on a pilot clinical trial – called PRECISE.CURATE – involving 10 patients in Singapore who were diagnosed with advanced solid tumors and predominantly metastatic colorectal cancers, clinicians accepted close to 97% of doses recommended by CURATE.AI, with some patients receiving optimal doses that were approximately 20% lower on average. These early outcomes are a promising step forward for the potential of truly personalizing oncology, where drug doses can be adjusted dynamically during treatment.

Developed by Professor Dean Ho and his team, CURATE.AI is an optimization platform that harnesses a patient’s clinical data, which includes drug type, drug dose and cancer biomarkers, to generate an individualized digital profile which is used to customize the optimal dose during the course of chemotherapy treatment.
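
A hedged sketch of the “individualized digital profile” idea: fit a low-order curve to one patient's own dose and biomarker history, then pick the dose the curve predicts is best. The quadratic form, the dose grid, and every number below are assumptions for illustration, not the clinical protocol used in PRECISE.CURATE.

```python
# Hedged sketch of an individualized dose-response profile in the spirit of
# CURATE.AI: fit a low-order curve to one patient's own (dose, biomarker) history
# and choose the dose that minimizes the tumor biomarker. All numbers are made up.
import numpy as np

doses     = np.array([100.0, 80.0, 120.0, 90.0])   # past chemotherapy doses (mg)
biomarker = np.array([11.0, 10.2, 13.5, 9.8])      # tumor marker after each cycle

coeffs = np.polyfit(doses, biomarker, deg=2)        # patient-specific quadratic profile
candidate = np.linspace(60, 130, 71)                # doses to consider next
predicted = np.polyval(coeffs, candidate)
best = candidate[np.argmin(predicted)]
print(f"recommended next dose: {best:.0f} mg (predicted marker {predicted.min():.1f})")
```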

Monday, June 6, 2022

Hallucinating to better text translation

Source/Credit: MIT

As babies, we babble and imitate our way to learning languages. We don’t start off reading raw text, which requires fundamental knowledge and understanding about the world, as well as the advanced ability to interpret and infer descriptions and relationships. Rather, humans begin our language journey slowly, by pointing and interacting with our environment, grounding our words in, and perceiving their meaning through, the context of the physical and social world. Eventually, we can craft full sentences to communicate complex ideas.

Similarly, when humans begin learning and translating into another language, the incorporation of other sensory information, like multimedia, paired with the new and unfamiliar words, like flashcards with images, improves language acquisition and retention. Then, with enough practice, humans can accurately translate new, unseen sentences in context without the accompanying media; however, imagining a picture based on the original text helps.

This is the basis of a new machine learning model, called VALHALLA, by researchers from MIT, IBM, and the University of California at San Diego, in which a trained neural network sees a source sentence in one language, hallucinates an image of what it looks like, and then uses both to translate into a target language. The team found that their method demonstrates improved accuracy of machine translation over text-only translation. Further, it provided an additional boost for cases with long sentences, under-resourced languages, and instances where part of the source sentence is inaccessible to the machine translator.
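
To make the data flow concrete, here is a structural sketch with stub functions: encode the source text, predict a “hallucinated” visual representation from the text alone, and have the decoder condition on both streams. The encoders, the visual tokens, and the toy vocabulary are all placeholders; only the three-stage flow mirrors the description above, and the printed output is meaningless.

```python
# Structural sketch of the VALHALLA idea (stub functions, not the real model):
# encode the source sentence, hallucinate visual tokens from text alone, then
# decode into the target language using both streams.
def encode_text(sentence):
    # Toy deterministic token encoder (stands in for a trained text encoder).
    return [sum(ord(c) for c in tok) % 1000 for tok in sentence.split()]

def hallucinate_visual_tokens(text_codes, n_tokens=4):
    # Stand-in for the trained image-token predictor.
    return [(sum(text_codes) + i) % 1000 for i in range(n_tokens)]

def decode(text_codes, visual_tokens, vocab=("la", "maison", "est", "rouge")):
    # Toy decoder: each output token depends on a text code AND a visual token.
    out = []
    for i, c in enumerate(text_codes):
        v = visual_tokens[i % len(visual_tokens)]
        out.append(vocab[(c + v) % len(vocab)])
    return " ".join(out)

source = "the house is red"
codes = encode_text(source)
visual = hallucinate_visual_tokens(codes)
print(decode(codes, visual))   # toy output; only the pipeline shape is the point
```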

Friday, June 3, 2022

Great timing, supercomputer upgrade led to successful forecast of volcanic eruption

Former Illinois graduate student Yan Zhan, left, professor Patricia Gregg and research professor Seid Koric led a team that produced the fortuitous forecast of the 2018 Sierra Negra volcanic eruption five months before it occurred. 
Photo by Michelle Hassel

In the fall of 2017, geology professor Patricia Gregg and her team had just set up a new volcanic forecasting modeling program on the Blue Waters and iForge supercomputers. Simultaneously, another team was monitoring activity at the Sierra Negra volcano in the Galapagos Islands, Ecuador. One of the scientists on the Ecuador project, Dennis Geist of Colgate University, contacted Gregg, and what happened next was the fortuitous forecast of the June 2018 Sierra Negra eruption five months before it occurred.

Initially developed on an iMac computer, the new modeling approach had already garnered attention for successfully recreating the unexpected eruption of Alaska’s Okmok volcano in 2008. Gregg’s team, based out of the University of Illinois Urbana-Champaign and the National Center for Supercomputing Applications, wanted to test the model’s new high-performance computing upgrade, and Geist’s Sierra Negra observations showed signs of an imminent eruption.

“Sierra Negra is a well-behaved volcano,” said Gregg, the lead author of a new report of the successful effort. “Meaning that, before eruptions in the past, the volcano has shown all the telltale signs of an eruption that we would expect to see like groundswell, gas release and increased seismic activity. This characteristic made Sierra Negra a great test case for our upgraded model.”

Monday, May 30, 2022

Frontier supercomputer debuts as world’s fastest, breaking exascale barrier


The Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory earned the top ranking today as the world’s fastest on the 59th TOP500 list, with 1.1 exaflops of performance. The system is the first to achieve an unprecedented level of computing performance known as exascale, a threshold of a quintillion calculations per second.

Frontier features a theoretical peak performance of 2 exaflops, or two quintillion calculations per second, making it ten times more powerful than ORNL’s Summit system. The system leverages ORNL’s extensive expertise in accelerated computing and will enable scientists to develop critically needed technologies for the country’s energy, economic and national security, helping researchers address problems of national importance that were impossible to solve just five years ago.

“Frontier is ushering in a new era of exascale computing to solve the world’s biggest scientific challenges,” ORNL Director Thomas Zacharia said. “This milestone offers just a preview of Frontier’s unmatched capability as a tool for scientific discovery. It is the result of more than a decade of collaboration among the national laboratories, academia and private industry, including DOE’s Exascale Computing Project, which is deploying the applications, software technologies, hardware and integration necessary to ensure impact at the exascale.”

Friday, May 27, 2022

Same symptom – different cause?

Head of the LipiTUM research group Dr. Josch Konstantin Pauling (left) and PhD student Nikolai Köhler (right) interpret the disease-related changes in lipid metabolism using a newly developed network.
Credit: LipiTUM

Machine learning is playing an ever-increasing role in biomedical research. Scientists at the Technical University of Munich (TUM) have now developed a new method of using molecular data to extract subtypes of illnesses. In the future, this method can help to support the study of larger patient groups.

Nowadays doctors define and diagnose most diseases on the basis of symptoms. However, that does not necessarily mean that the illnesses of patients with similar symptoms will have identical causes or demonstrate the same molecular changes. In biomedicine, one often speaks of the molecular mechanisms of a disease. This refers to changes in the regulation of genes, proteins or metabolic pathways at the onset of illness. The goal of stratified medicine is to classify patients into various subtypes at the molecular level in order to provide more targeted treatments.

To extract disease subtypes from large pools of patient data, new machine learning algorithms can help. They are designed to independently recognize patterns and correlations in extensive clinical measurements. The LipiTUM junior research group, headed by Dr. Josch Konstantin Pauling of the Chair for Experimental Bioinformatics, has developed an algorithm for this purpose.
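
As a generic illustration of extracting subtypes from molecular measurements (not the LipiTUM group's own algorithm), the sketch below clusters synthetic patient lipid profiles into two candidate subtypes with a hand-rolled k-means.

```python
# Generic illustration of extracting disease subtypes by clustering patients'
# molecular profiles (k-means stand-in; the LipiTUM method is more involved).
# All data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
# 60 patients x 10 lipid species, drawn from two hidden subtypes.
subtype_means = np.array([[1.0] * 10, [-1.0] * 10])
labels_true = rng.integers(0, 2, 60)
X = subtype_means[labels_true] + rng.normal(0, 0.5, (60, 10))

def kmeans(X, k=2, iters=20):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[assign == j].mean(axis=0) for j in range(k)])
    return assign

assign = kmeans(X)
print("patients per candidate subtype:", np.bincount(assign))
```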

Friday, May 20, 2022

Neuromorphic Memory Device Simulates Neurons and Synapses​

A neuromorphic memory device consisting of bottom volatile and top nonvolatile memory layers emulating neuronal and synaptic properties, respectively
Credit: KAIST

Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward completing the goal of neuromorphic computing designed to rigorously mimic the human brain with semiconductor devices.

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of neurons and synapses that make up the human brain. Inspired by the cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated. However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses still remains a challenge. To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms of humans by introducing the neuron-synapse interactions in a single memory cell, rather than the conventional approach of electrically connecting artificial neuronal and synaptic devices.
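
A software analogue may help picture the neuron-synapse pairing in a single cell: below, a leaky integrate-and-fire “neuron” (volatile state) drives potentiation of a “synaptic” weight (nonvolatile state) whenever it fires. The constants and the learning rule are arbitrary illustrations, not the behavior of the KAIST device.

```python
# Software analogue of a combined neuron-synapse cell (the real device implements
# this in hardware with volatile and nonvolatile memory layers; constants are arbitrary).
def simulate(input_spikes, weight=0.5, threshold=1.0, leak=0.9, lr=0.05):
    v = 0.0                                # "volatile" membrane potential (neuron)
    out = []
    for s in input_spikes:
        v = leak * v + weight * s          # leaky integration of weighted input
        if v >= threshold:                 # fire and reset
            out.append(1)
            v = 0.0
            weight += lr * (1 - weight)    # "nonvolatile" synapse potentiates on firing
        else:
            out.append(0)
    return out, weight

spikes = [1, 0, 1, 1, 0, 1, 1, 1]
out, w = simulate(spikes)
print("output spikes:", out, "final synaptic weight:", round(w, 3))
```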

Artificial intelligence predicts patients’ race from their medical images

Researchers demonstrated that medical AI systems can easily learn to recognize racial identity in medical images, and that this capability is extremely difficult to isolate or mitigate.
 Credit: Massachusetts Institute of Technology

The miseducation of algorithms is a critical problem; when artificial intelligence mirrors unconscious thoughts, racism, and biases of the humans who generated these algorithms, it can lead to serious harm. Computer programs, for example, have wrongly flagged Black defendants as twice as likely to reoffend as someone who’s white. When an AI used cost as a proxy for health needs, it falsely named Black patients as healthier than equally sick white ones, as less money was spent on them. Even AI used to write a play relied on using harmful stereotypes for casting.

Removing sensitive features from the data seems like a viable tweak. But what happens when it’s not enough?

Wednesday, May 18, 2022

A component for brain-inspired computing

Scientists aim to perform machine-learning tasks more efficiently with processors that emulate the working principles of the human brain.
Image: Unsplash

Researchers from ETH Zurich, Empa and the University of Zurich have developed a new material for an electronic component that can be used in a wider range of applications than its predecessors. Such components will help create electronic circuits that emulate the human brain and that are more efficient than conventional computers at performing machine-learning tasks.

Compared with computers, the human brain is incredibly energy-efficient. Scientists are therefore drawing on how the brain and its interconnected neurons function for inspiration in designing innovative computing technologies. They foresee that these brain-inspired computing systems will be more energy-efficient than conventional ones, as well as better at performing machine-learning tasks.

Much like neurons, which are responsible for both data storage and data processing in the brain, scientists want to combine storage and processing in a single type of electronic component, known as a memristor. Their hope is that this will help to achieve greater efficiency because moving data between the processor and the storage, as conventional computers do, is the main reason for the high energy consumption in machine-learning applications.
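
A short sketch of why combining storage and processing saves data movement: in a memristor crossbar, the stored conductances are the weights, and applying input voltages yields the matrix-vector product directly as output currents, with no separate memory fetch. The numbers below are arbitrary.

```python
# Conceptual sketch of in-memory computing with a memristor crossbar: a matrix of
# programmed conductances G multiplies an input voltage vector V in a single analog
# step (I = G @ V, by Ohm's and Kirchhoff's laws). Values are arbitrary.
import numpy as np

G = np.array([[0.2, 0.5, 0.1],     # conductances stored in the memristor array
              [0.7, 0.3, 0.4]])    # (these ARE the weights; no separate memory fetch)
V = np.array([1.0, 0.5, 0.2])      # input voltages applied to the columns

I = G @ V                          # output currents read on the rows
print("row currents:", I)          # matrix-vector product computed "in memory"
```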

Tuesday, May 17, 2022

New Approach Allows for Faster Ransomware Detection

Photo credit: Michael Geiger

Engineering researchers have developed a new approach for implementing ransomware detection techniques, allowing them to detect a broad range of ransomware far more quickly than previous systems.

Ransomware is a type of malware. When a system is infiltrated by ransomware, the ransomware encrypts that system’s data – making the data inaccessible to users. The people responsible for the ransomware then extort the affected system’s operators, demanding money from the users in exchange for granting them access to their own data.

Ransomware extortion is hugely expensive, and instances of ransomware extortion are on the rise. The FBI reports receiving 3,729 ransomware complaints in 2021, with costs of more than $49 million. What’s more, 649 of those complaints were from organizations classified as critical infrastructure.
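
The article does not spell out the researchers' detection technique, so purely as a generic illustration of one widely used signal: ransomware produces bursts of high-entropy (encrypted-looking) file writes, which a monitor can score as below. The threshold and sample data are assumptions.

```python
# Generic illustration only (not the approach described in the study): score write
# buffers by Shannon entropy, since encrypted data approaches 8 bits per byte while
# ordinary documents score much lower.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold=7.5) -> bool:
    # Plain text tends to score roughly 4-5 bits/byte; ciphertext approaches 8.
    return shannon_entropy(data) > threshold

plain = b"quarterly report: revenue grew modestly in Q3" * 20
random_like = bytes((i * 197 + 31) % 256 for i in range(2000))   # stand-in ciphertext
print(looks_encrypted(plain), looks_encrypted(random_like))      # False True
```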

Saturday, April 30, 2022

Researchers Create Self-Assembled Logic Circuits from Proteins


In a proof-of-concept study, researchers have created self-assembled, protein-based circuits that can perform simple logic functions. The work demonstrates that it is possible to create stable digital circuits that take advantage of an electron’s properties at quantum scales.

One of the stumbling blocks in creating molecular circuits is that as the circuit size decreases the circuits become unreliable. This is because the electrons needed to create current behave like waves, not particles, at the quantum scale. For example, on a circuit with two wires that are one nanometer apart, the electron can “tunnel” between the two wires and effectively be in both places simultaneously, making it difficult to control the direction of the current. Molecular circuits can mitigate these problems, but single-molecule junctions are short-lived or low-yielding due to challenges associated with fabricating electrodes at that scale.

“Our goal was to try and create a molecular circuit that uses tunneling to our advantage, rather than fighting against it,” says Ryan Chiechi, associate professor of chemistry at North Carolina State University and co-corresponding author of a paper describing the work.

Friday, April 29, 2022

Fermilab engineers develop new control electronics for quantum computers that improve performance and cut costs

Gustavo Cancelo led a team of Fermilab engineers to create a new compact electronics board: it has the capabilities of an entire rack of equipment, is compatible with many designs of superconducting qubits, and costs a fraction as much.
Photo: Ryan Postel, Fermilab

When designing a next-generation quantum computer, a surprisingly large problem is bridging the communication gap between the classical and quantum worlds. Such computers need specialized control and readout electronics to translate back and forth between the human operator and the quantum computer’s languages — but existing systems are cumbersome and expensive.

However, a new system of control and readout electronics, known as Quantum Instrumentation Control Kit, or QICK, developed by engineers at the U.S. Department of Energy’s Fermi National Accelerator Laboratory, has proved to drastically improve quantum computer performance while cutting the cost of control equipment.
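
As a rough picture of what qubit control electronics do (this is not QICK firmware or its API), the sketch below tabulates a Gaussian-envelope pulse at an intermediate frequency, the kind of waveform such a board's digital-to-analog converters play out to drive a qubit. The sample rate, frequency, and pulse width are assumed values.

```python
# Illustration only (not QICK code): tabulate a Gaussian-envelope control pulse at an
# intermediate frequency, as a qubit control board's DAC would play it out.
import numpy as np

fs = 1e9                   # 1 GS/s sample rate (assumed)
f_if = 100e6               # 100 MHz intermediate frequency (assumed)
length = 200               # number of samples
t = np.arange(length) / fs
envelope = np.exp(-0.5 * ((t - t.mean()) / 20e-9) ** 2)   # 20 ns Gaussian width
pulse = envelope * np.cos(2 * np.pi * f_if * t)            # modulated drive waveform

print("peak amplitude:", round(pulse.max(), 3), "samples:", len(pulse))
```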

“The development of the Quantum Instrumentation Control Kit is an excellent example of U.S. investment in joint quantum technology research with partnerships between industry, academia and government to accelerate pre-competitive quantum research and development technologies,” said Harriet Kung, DOE deputy director for science programs for the Office of Science and acting associate director of science for high-energy physics.

Tuesday, April 12, 2022

Cloud server leasing can leave sensitive data up for grabs


Renting space and IP addresses on a public server has become standard business practice, but according to a team of Penn State computer scientists, current industry practices can lead to "cloud squatting," which can create a security risk, endangering sensitive customer and organization data intended to remain private.

Cloud squatting occurs when a company, such as a bank, leases space and IP addresses (unique addresses that identify individual computers or computer networks) on a public server, uses them, and then releases the space and addresses back to the public server company, a standard pattern seen every day. The public server company, such as Amazon, Google, or Microsoft, then assigns the same addresses to a second company. If this second company is a bad actor, it can receive information coming into those addresses that was intended for the original company (for example, when a customer unknowingly uses an outdated link to interact with their bank) and use that information to its advantage. This is cloud squatting.
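
One practical, generic defense that follows from this description (not necessarily the Penn State team's methodology) is to audit DNS records against the cloud addresses an organization still leases; any record pointing at a released IP is a candidate for cloud squatting. The hostnames and addresses below are fictitious.

```python
# Generic defensive check (not the study's methodology): flag DNS records that still
# point at cloud IP addresses the organization no longer leases. All names and
# addresses are made up (203.0.113.0/24 is a documentation range).
dns_records = {
    "app.example-bank.com": "203.0.113.10",
    "api.example-bank.com": "203.0.113.25",
    "old.example-bank.com": "203.0.113.99",   # server released months ago
}
currently_leased_ips = {"203.0.113.10", "203.0.113.25"}

stale = {host: ip for host, ip in dns_records.items()
         if ip not in currently_leased_ips}
print("records pointing at IPs we no longer control:", stale)
```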

"There are two advantages to leasing server space," said Eric Pauley, doctoral candidate in computer science and engineering. "One is a cost advantage, saving on equipment and management. The other is scalability. Leasing server space offers an unlimited pool of computing resources so, as workload changes, companies can quickly adapt." As a result, the use of clouds has grown exponentially, meaning almost every website a user visits takes advantage of cloud computing.

Saturday, April 9, 2022

‘Frustrated’ nanomagnets order themselves through disorder

Source/Credit: Yale University

Extremely small arrays of magnets with strange and unusual properties can order themselves by increasing entropy, or the tendency of physical systems to disorder, a behavior that appears to contradict standard thermodynamics—but doesn’t.

“Paradoxically, the system orders because it wants to be more disordered,” said Cristiano Nisoli, a physicist at Los Alamos and coauthor of a paper about the research published in Nature Physics. “Our research demonstrates entropy-driven order in a structured system of magnets at equilibrium.”

The system examined in this work, known as tetris spin ice, was studied as part of a long-standing collaboration between Nisoli and Peter Schiffer at Yale University, with theoretical analysis and simulations led at Los Alamos and experimental work led at Yale. The research team includes scientists from a number of universities and academic institutions.

Nanomagnet arrays, like tetris spin ice, show promise as circuits of logic gates in neuromorphic computing, a leading-edge computing architecture that closely mimics how the brain works. They also have possible applications in a number of high-frequency devices using “magnonics” that exploit the dynamics of magnetism on the nanoscale.

Thursday, April 7, 2022

Engineered crystals could help computers run on less power

Researchers at the University of California, Berkeley, have created engineered crystal structures that display an unusual physical phenomenon known as negative capacitance. Incorporating this material into advanced silicon transistors could make computers more energy efficient.
Credit: UC Berkeley image by Ella Maru Studio

Computers may be growing smaller and more powerful, but they require a great deal of energy to operate. The total amount of energy the U.S. dedicates to computing has risen dramatically over the last decade and is quickly approaching that of other major sectors, like transportation.

In a study published online this week in the journal Nature, University of California, Berkeley, engineers describe a major breakthrough in the design of a component of transistors — the tiny electrical switches that form the building blocks of computers — that could significantly reduce their energy consumption without sacrificing speed, size or performance. The component, called the gate oxide, plays a key role in switching the transistor on and off.

“We have been able to show that our gate-oxide technology is better than commercially available transistors: What the trillion-dollar semiconductor industry can do today — we can essentially beat them,” said study senior author Sayeef Salahuddin, the TSMC Distinguished professor of Electrical Engineering and Computer Sciences at UC Berkeley.

This boost in efficiency is made possible by an effect called negative capacitance, which helps reduce the amount of voltage that is needed to store charge in a material. Salahuddin theoretically predicted the existence of negative capacitance in 2008 and first demonstrated the effect in a ferroelectric crystal in 2011.
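
A back-of-the-envelope calculation shows why negative capacitance helps. Treating the gate oxide and the semiconductor channel as capacitors in series, 1/C_total = 1/C_ox + 1/C_s; if the gate oxide contributes an effectively negative C_ox of suitable magnitude, the series combination exceeds C_s and more of the applied gate voltage reaches the channel. The values below are arbitrary.

```python
# Back-of-the-envelope series-capacitance comparison illustrating the negative
# capacitance effect. Units and values are arbitrary.
def series_capacitance(c_ox, c_s):
    return 1.0 / (1.0 / c_ox + 1.0 / c_s)

c_s = 1.0                                                        # channel capacitance
print("ordinary oxide:      ", series_capacitance(2.0, c_s))     # < c_s: voltage divided
print("negative capacitance:", series_capacitance(-2.0, c_s))    # > c_s: voltage amplified
```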
