Scientific Frontline: Artificial Intelligence
Showing posts with label Artificial Intelligence.

Tuesday, September 6, 2022

Artificial intelligence against child cancer

Stefan Posch
Photo Credit: Uni Halle / Markus Scholz

"Artificial Professor" is the nickname for a new research project at the University Hospital Leipzig and at the Martin Luther University Halle-Wittenberg (MLU). A team of doctors and bioinformatics wants to use self-learning software to significantly improve the therapy of lymphatic cancer (Hodgkin's lymphoma) in children. The second phase of the multi-year project recently began with 40,000 euros in funding from the Mitteldeutsche Kinderkrebsforschung Foundation.

Children affected by lymphatic cancer can now be cured with modern treatment methods such as chemotherapy and radiation in 95 percent of cases. However, intensive treatment in childhood often leads to late effects. Radiation in particular increases the risk of developing a second cancer later in life. Long-term studies show substantial excess mortality in adulthood due to secondary diseases such as cancer or heart disease.

The physicians' primary goal is therefore to use only as little treatment as necessary. The artificial-intelligence-based data analysis developed in the project is intended to help optimize therapy for each individual patient. In the first phase, the researchers prepared a unique data set for the big data analysis: for years, a network of 270 pediatric cancer clinics in 21 countries has been sending anonymized data from PET imaging examinations to Leipzig. The three-dimensional image series show how well individual therapies work and how the tumor tissue develops over time.

Monday, September 5, 2022

A Novel Approach to Creating Tailored Odors and Fragrances Using Machine Learning


Can we use machine learning methods to predict the sensing data of odor mixtures and design new smells? A new study by researchers from Tokyo Tech does just that. The novel method is bound to have applications in the food, health, beauty, and wellness industries, where odors and fragrances are of keen interest.

The sense of smell is one of the basic senses of animal species. It is critical for finding food, sensing attraction, and detecting danger. Humans detect smells, or odorants, with olfactory receptors expressed in olfactory nerve cells. These olfactory impressions of odorants on nerve cells are associated with their molecular features and physicochemical properties. This makes it possible to tailor odors to create an intended odor impression. Current methods can only predict olfactory impressions from the physicochemical features of odorants; they cannot predict the sensing data, which is indispensable for actually creating smells.

To tackle this issue, scientists from Tokyo Institute of Technology (Tokyo Tech) have employed the innovative strategy of solving the inverse problem. Instead of predicting the smell from molecular data, this method predicts molecular features based on the odor impression. This is achieved using standard mass spectrum data and machine learning (ML) models. "We used a machine-learning-based odor predictive model that we had previously developed to obtain the odor impression. Then we predicted the mass spectrum from odor impression inversely based on the previously developed forward model," explains Professor Takamichi Nakamoto, who leads the research effort at Tokyo Tech. The findings have been published in PLoS One.
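The paper has the full pipeline; as a rough, self-contained sketch of the general pattern it describes (train a forward model from mass-spectrum features to odor impression, then invert it by searching for a spectrum whose predicted impression matches a target), here is a toy Python example. The dimensions, the ridge-regression forward model and the optimizer are illustrative assumptions, not the authors' actual method.

```python
import numpy as np
from sklearn.linear_model import Ridge
from scipy.optimize import minimize

# Toy data: each row pairs mass-spectrum features with odor impression scores.
# Dimensions and values are made up purely for illustration.
rng = np.random.default_rng(0)
n_samples, n_spectrum_bins, n_odor_descriptors = 200, 50, 5
X_spectrum = rng.random((n_samples, n_spectrum_bins))
W_true = rng.normal(size=(n_spectrum_bins, n_odor_descriptors))
Y_odor = X_spectrum @ W_true + 0.05 * rng.normal(size=(n_samples, n_odor_descriptors))

# "Forward" model: predict the odor impression from a mass spectrum.
forward = Ridge(alpha=1.0).fit(X_spectrum, Y_odor)

# "Inverse" step: search for a spectrum whose predicted impression
# matches a desired target impression.
target_impression = Y_odor[0]          # pretend this is the smell we want

def mismatch(spectrum):
    pred = forward.predict(spectrum.reshape(1, -1))[0]
    return np.sum((pred - target_impression) ** 2)

x0 = X_spectrum.mean(axis=0)           # start from an "average" spectrum
result = minimize(mismatch, x0, bounds=[(0, 1)] * n_spectrum_bins)
print("designed spectrum (first 5 bins):", result.x[:5].round(3))
```

In the study, the forward model is the group's previously developed odor predictor and the inputs are standard mass spectra; the sketch only shows how an inverse design loop can sit on top of any forward predictor.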

Friday, September 2, 2022

How Artificial Intelligence can explain its decisions

They have brought together the seemingly incompatible inductive approach of machine learning with deductive logic: Stephanie Schörner, Axel Mosig and David Schuhmacher (from left).
Credit: RUB, Marquard

If an algorithm detects a tumor in a tissue sample, it does not reveal how it arrived at that result, which makes it hard to trust. Bochum researchers are therefore taking a new approach.

Artificial intelligence (AI) can be trained to recognize whether a tissue image contains a tumor. How it reaches its decision has so far remained hidden. A team from the Research Center for Protein Diagnostics, or PRODI for short, at the Ruhr University Bochum is developing a new approach with which the decision of an AI can be explained and thus made trustworthy. The researchers led by Prof. Dr. Axel Mosig report in the journal "Medical Image Analysis".

Bioinformatician Axel Mosig cooperated with Prof. Dr. Andrea Tannapfel, head of the Institute of Pathology, the oncologist Prof. Dr. Anke Reinacher-Schick from the St. Josef Hospital of the Ruhr University, and the biophysicist and PRODI founding director Prof. Dr. Klaus Gerwert. The group developed a neural network, i.e. an AI, that can classify whether a tissue sample contains a tumor or not. To do this, they fed the AI many microscopic tissue images, some of which contained tumors while others were tumor-free.

Thursday, September 1, 2022

New methodology predicts coronavirus and other infectious disease threats to wildlife

The rate at which emerging wildlife diseases infect humans has steadily increased over the last three decades. Viral outbreaks, such as the global coronavirus pandemic and the recent monkeypox outbreak, have heightened the urgent need for disease ecology tools to forecast when and where disease outbreaks are likely. A University of South Florida assistant professor helped develop a methodology that will do just that – predict disease transmission from wildlife to humans and from one wildlife species to another, and determine who is at risk of infection.

The methodology is a machine-learning approach that identifies the influence of variables, such as location and climate, on known pathogens. Using only small amounts of information, the system is able to identify community hot spots at risk of infection on both global and local scales.
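The article does not describe the model's internals; as a hedged illustration of the general approach it names (a classifier trained on location and climate variables whose learned importances and risk scores can be inspected), the following toy sketch uses a random forest on invented features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented example data: rows are host/site records, columns are predictors
# such as latitude, mean temperature, rainfall, and a host trait.
rng = np.random.default_rng(1)
features = ["latitude", "mean_temp_c", "rainfall_mm", "host_body_mass_g"]
X = rng.random((500, len(features)))
y = (X[:, 1] + 0.5 * X[:, 2] + 0.2 * rng.random(500) > 1.0).astype(int)  # 1 = pathogen detected

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Which variables drive predicted infection risk?
for name, importance in zip(features, model.feature_importances_):
    print(f"{name:>18}: {importance:.2f}")

# Score new sites and flag potential "hot spots".
new_sites = rng.random((5, len(features)))
risk = model.predict_proba(new_sites)[:, 1]
print("predicted risk per site:", risk.round(2))
```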

“Our main goal is to develop this tool for preventive measures,” said co-principal investigator Diego Santiago-Alarcon, assistant professor of integrative biology. “It’s difficult to have an all-purpose methodology that can be used to predict infections across all the diverse parasite systems, but with this research, we contribute to achieving that goal.”

With help from researchers at the Universidad Veracruzana and the Instituto de Ecología, both located in Mexico, Santiago-Alarcon examined three host-pathogen systems – avian malaria, birds with West Nile virus and bats with coronavirus – to test the reliability and accuracy of the models generated by the methodology.

Soaking up the sun with artificial intelligence

Machine learning methods are being developed at Argonne to advance solar energy research with perovskites.
Credit: Maria Chan/ Argonne National Laboratory

The sun continuously transmits trillions of watts of energy to the Earth. It will be doing so for billions more years. Yet, we have only just begun tapping into that abundant, renewable source of energy at affordable cost.

Solar absorbers are materials used to convert this energy into heat or electricity. Maria Chan, a scientist at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, has developed a machine learning method for screening many thousands of compounds as solar absorbers. Her co-author on this project was Arun Mannodi-Kanakkithodi, a former Argonne postdoc who is now an assistant professor at Purdue University.

“We are truly in a new era of applying AI and high-performance computing to materials discovery.” — Maria Chan, scientist, Center for Nanoscale Materials

“According to a recent DOE study, by 2035, solar energy could power 40% of the nation’s electricity,” said Chan. ​“And it could help with decarbonizing the grid and provide many new jobs.”

Tuesday, August 23, 2022

Machine learning algorithm predicts how to get the most out of electric vehicle batteries

Credit: (Joenomias) Menno de Jong from Pixabay 

The researchers, from the University of Cambridge, say their algorithm could help drivers, manufacturers and businesses get the most out of the batteries that power electric vehicles by suggesting routes and driving patterns that minimize battery degradation and charging times.

The team developed a non-invasive way to probe batteries and get a holistic view of battery health. These results were then fed into a machine learning algorithm that can predict how different driving patterns will affect the future health of the battery.
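As a rough sketch of that two-stage idea (turn probe measurements into health features, then learn how usage affects future degradation), here is a toy regression example; the feature names, data and model choice are assumptions for illustration only, not the Cambridge team's pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Invented features standing in for non-invasive "battery health" probes
# plus a usage descriptor; the target is future capacity fade (%).
rng = np.random.default_rng(2)
n = 400
impedance_low_freq = rng.random(n)
impedance_high_freq = rng.random(n)
avg_discharge_rate = rng.random(n)          # proxy for driving pattern
future_fade = 5 * impedance_low_freq + 3 * avg_discharge_rate + rng.normal(0, 0.3, n)

X = np.column_stack([impedance_low_freq, impedance_high_freq, avg_discharge_rate])
X_train, X_test, y_train, y_test = train_test_split(X, future_fade, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out cells:", round(model.score(X_test, y_test), 2))

# Compare two hypothetical driving patterns for the same cell state.
gentle = model.predict([[0.4, 0.5, 0.2]])[0]
aggressive = model.predict([[0.4, 0.5, 0.9]])[0]
print(f"predicted fade: gentle {gentle:.1f}% vs aggressive {aggressive:.1f}%")
```

A model of this shape is what lets routing or charging recommendations be compared: each candidate usage pattern is scored for its predicted effect on future battery health.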

"This method could unlock value in so many parts of the supply chain, whether you’re a manufacturer, an end user, or a recycler, because it allows us to capture the health of the battery beyond a single number"
Alpha Lee

If developed commercially, the algorithm could be used to recommend routes that get drivers from point to point in the shortest time without degrading the battery, for example, or recommend the fastest way to charge the battery without causing it to degrade. The results are reported in the journal Nature Communications.

The health of a battery, whether it’s in a smartphone or a car, is far more complex than a single number on a screen. “Battery health, like human health, is a multi-dimensional thing, and it can degrade in lots of different ways,” said first author Penelope Jones, from Cambridge’s Cavendish Laboratory. “Most methods of monitoring battery health assume that a battery is always used in the same way. But that’s not how we use batteries in real life. If I’m streaming a TV show on my phone, it’s going to run down the battery a whole lot faster than if I’m using it for messaging. It’s the same with electric cars – how you drive will affect how the battery degrades.”

Researchers develop the first AI-based method for dating archaeological remains

Credit: Unsplash

By analyzing DNA with the help of artificial intelligence (AI), an international research team led by Lund University in Sweden has developed a method that can accurately date human remains up to ten thousand years old.

Accurately dating ancient humans is key when mapping how people migrated during world history.

The standard dating method since the 1950s has been radiocarbon dating. The method, which is based on the ratio between two different carbon isotopes, has revolutionized archaeology. However, the technology is not always completely reliable in terms of accuracy, which makes it complicated to map ancient peoples, how they moved and how they are related.

In a new study published in Cell Reports Methods, a research team has developed a dating method that could be of great interest to archaeologists and paleogenomicists.

“Unreliable dating is a major problem, resulting in vague and contradictory results. Our method uses artificial intelligence to date genomes via their DNA with great accuracy,” says Eran Elhaik, researcher in molecular cell biology at Lund University.
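The study's actual tool is not reproduced here; as a toy illustration of the basic idea (supervised learning that maps genetic variant data to known ages, then dates new genomes), consider the following sketch with entirely synthetic genotypes.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Toy stand-in: each row is an ancient genome encoded as allele counts (0/1/2)
# at a panel of variant sites; the target is the radiocarbon-dated age in years.
rng = np.random.default_rng(3)
n_genomes, n_sites = 300, 200
genotypes = rng.integers(0, 3, size=(n_genomes, n_sites))
age_years = 10_000 * genotypes[:, :10].mean(axis=1) / 2 + rng.normal(0, 300, n_genomes)

# Train a regressor on dated genomes and estimate how well it would date new ones.
model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, genotypes, age_years, cv=5, scoring="neg_mean_absolute_error")
print("mean absolute dating error (years):", int(-scores.mean()))
```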

Thursday, August 18, 2022

A new neuromorphic chip for AI on the edge, at a small fraction of the energy and size of today’s compute platforms

 The NeuRRAM chip is an innovative neuromorphic chip
Credit: David Baillot/University of California San Diego

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of AI applications, all at a fraction of the energy consumed by general-purpose AI computing platforms.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, and range from smart watches, to VR headsets, smart earbuds, smart sensors in factories and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as state-of-the-art “compute-in-memory” chips, an innovative class of hybrid chips that run computations in memory, but it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are a lot bulkier and typically are constrained to using large data servers operating in the cloud.

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

Wednesday, August 17, 2022

Machine learning meets medicine in the fight against superbugs

Scanning electron micrograph of a human neutrophil ingesting MRSA.
Image: National Institute of Allergy and Infectious Diseases, National Institutes of Health on Flickr, CC BY-NC 2.0

MRSA is an antibiotic-resistant staph infection that can be deadly for those in hospital care or with weakened immune systems. Staphylococcus aureus bacteria live in the nose without necessarily producing any symptoms but can also spread to other parts of the body, leading to persistent infections. Management of MRSA is long-term and laborious, so any steps to optimize treatments and reduce re-infections will benefit patients. This new research can predict how effective different treatments will be by combining patient data with estimates of how MRSA moves between different parts of the body. The study was published in the Journal of the Royal Society Interface.

The researchers compared data from 2000 patients with MRSA after hospital visits. In one group, patients were given standard information about how to treat MRSA and prevent its spread. The second group followed a more intensive ‘decolonization’ protocol to eliminate MRSA through wound disinfection, cleaning the armpits and groin, and using nasal spray. Both groups were tested for MRSA on different body parts at various time points over nine months.

The current state-of-the-art in medical research often involves comparing two groups in this way, to see if an intervention or treatment could be effective. The new study added another element: a mathematical model that looked at the interactions between treatments and body parts. 'The model shows how MRSA moves between body parts,' says senior author Pekka Marttinen, professor at Aalto University and the Finnish Center for Artificial Intelligence FCAI. 'It can help us optimize the combination of treatments and even predict how new treatments would work before they have been tested on patients.'
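As a loose, hedged analogue of such a model, the sketch below projects colonization forward through an invented matrix of weekly movement rates between body sites; the sites, rates and time scale are illustrative assumptions, not the published model.

```python
import numpy as np

# Toy transition matrix: the chance that colonization at one body site
# persists or moves to another site over one week. Values are invented.
sites = ["nose", "skin", "throat", "wound"]
P = np.array([
    [0.80, 0.10, 0.05, 0.05],   # from nose
    [0.15, 0.70, 0.05, 0.10],   # from skin
    [0.10, 0.05, 0.80, 0.05],   # from throat
    [0.05, 0.15, 0.05, 0.75],   # from wound
])

# Start with colonization detected only in the nose.
state = np.array([1.0, 0.0, 0.0, 0.0])

# Project forward roughly nine months of weekly steps.
for week in range(36):
    state = state @ P
print(dict(zip(sites, state.round(2))))

# A "decolonization" intervention could be modeled by damping the rates out of
# treated sites, then re-running the projection to compare predicted outcomes.
```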

Friday, August 12, 2022

Two Monumental Milestones Achieved in CT Imaging

Conventional chest CT image (left side) of the human airways compared to the new and improved PCD-CT system (right side). The image produced with the PCD-CT system shows better delineation of the bronchial walls. Preliminary studies showed that the PCD-CT system allowed radiologists to see smaller airways than with standard CT systems.
Image credit: Cynthia McCollough, Mayo Clinic, Rochester, Minnesota.

Two biomedical imaging technologies developed with support from the National Institute of Biomedical Imaging and Bioengineering (NIBIB) have been cleared for clinical use by the Food and Drug Administration (FDA). Both technologies offer advances in computed tomography (CT).

In one of these developments, project lead Cynthia McCollough, Ph.D., director of Mayo Clinic’s CT Clinical Innovation Center and her team helped develop the first photon-counting detector (PCD)-CT system, which is superior to current CT technology. CT imaging has been an immense clinical asset for diagnosing many diseases and injuries. However, since its introduction into the clinic in 1971, the way that the CT detector converts x-rays to electrical signals has remained essentially the same. Photon-counting detectors operate using a fundamentally different mechanism than any prior CT detector ever has.

“This is the first major imaging advancement cleared by the FDA for CT in a decade,” stated Behrouz Shabestari, Ph.D., director of the division of Health Informatics Technologies. “The impact of this development will be far-reaching and provide clinicians with more detailed information for medical diagnoses.”

A CT scan is obtained when an x-ray beam rotates around a patient, allowing x-rays to pass through the patient. As the x-rays leave the patient a picture is taken by a detector and the information is transmitted to a computer for further processing. “Standard CT detectors use a two-step process, where x-rays are turned into light and then light is converted to an electrical signal,” explained Cynthia McCollough. “The photon-counting detector uses a one-step process where the x-ray is immediately transformed into an electrical signal.”

AI could help patients with chronic pain avoid opioids

Image by Andrea from Pixabay

Cognitive behavioral therapy is an effective alternative to opioid painkillers for managing chronic pain. But getting patients to complete those programs is challenging, especially because psychotherapy often requires multiple sessions and mental health specialists are scarce.

A new study suggests that pain CBT supported by artificial intelligence renders the same results as guideline-recommended programs delivered by therapists, while requiring substantially less clinician time, making this therapy more accessible.

“Chronic pain is incredibly common: back pain, osteoarthritis, migraine headaches and more. Because of pain, people miss work, develop depression, some people drink more alcohol than is healthy, and chronic pain is one of the main drivers of the opioid epidemic,” said John Piette, a professor at the University of Michigan’s School of Public Health and senior research scientist at the Veterans Administration.

“We’re very excited about the results of this study, because we were able to demonstrate that we can achieve pain outcomes that are at least as good as standard cognitive behavioral therapy programs, and maybe even better. And we did that with less than half the therapist time as guideline-recommended approaches.”

Traditionally, CBT is delivered by a therapist in 6 to 12 weekly in-person sessions that target patients’ behaviors, help them cope mentally and assist them in regaining functioning.

Wednesday, August 10, 2022

AI May Come to the Rescue of Future Firefighters

A view from NIST's Burn Observation Bubble (BOB) of a burning structure during an experiment, one minute before flashover. 
Credit: NIST

In firefighting, the worst flames are the ones you don’t see coming. Amid the chaos of a burning building, it is difficult to notice the signs of impending flashover — a deadly fire phenomenon wherein nearly all combustible items in a room ignite suddenly. Flashover is one of the leading causes of firefighter deaths, but new research suggests that artificial intelligence (AI) could provide first responders with a much-needed heads-up.

Researchers at the National Institute of Standards and Technology (NIST), the Hong Kong Polytechnic University and other institutions have developed a Flashover Prediction Neural Network (FlashNet) model to forecast the lethal events precious seconds before they erupt. In a new study published in Engineering Applications of Artificial Intelligence, FlashNet boasted an accuracy of up to 92.1% across more than a dozen common residential floorplans in the U.S. and came out on top when going head-to-head with other AI-based flashover predicting programs.

Flashovers tend to suddenly flare up at approximately 600 degrees Celsius (1,100 degrees Fahrenheit) and can then cause temperatures to shoot up further. To anticipate these events, existing research tools either rely on constant streams of temperature data from burning buildings or use machine learning to fill in the missing data in the likely event that heat detectors succumb to high temperatures.
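FlashNet's architecture is detailed in the paper; purely as a toy of the general setup it describes (predicting imminent flashover from a short window of heat-detector temperature readings), the following sketch trains a small feed-forward classifier on synthetic temperature traces. Every number in it is invented.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Toy data: each sample is 30 time steps of temperature readings (degC) from a
# handful of room detectors; the label marks whether the trace is heading
# toward flashover-level temperatures. All values are synthetic.
rng = np.random.default_rng(4)
n_samples, n_detectors, n_steps = 1000, 4, 30
ramp = rng.uniform(5, 20, size=(n_samples, n_detectors, 1))      # degC per step
temps = 100 + ramp * np.arange(n_steps) + rng.normal(0, 5, size=(n_samples, n_detectors, n_steps))
labels = (temps[:, :, -1].max(axis=1) > 550).astype(int)          # crude "approaching ~600 degC" proxy

X = temps.reshape(n_samples, -1)
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 2))
```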

Until now, most machine learning-based prediction tools, including one the authors previously developed, have been trained to operate in a single, familiar environment. In reality, firefighters are not afforded such luxury. As they charge into hostile territory, they may know little to nothing about the floorplan, the location of fire or whether doors are open or closed.

Tuesday, August 9, 2022

How water turns into ice — with quantum accuracy

Researchers at Princeton University combined artificial intelligence and quantum mechanics to simulate what happens at the molecular level when water freezes. The result is the most complete simulation yet of the first steps in ice “nucleation,” a process important for climate and weather modeling.  
Video by Pablo Piaggi, Princeton University

A team based at Princeton University has accurately simulated the initial steps of ice formation by applying artificial intelligence (AI) to solving equations that govern the quantum behavior of individual atoms and molecules.

The resulting simulation describes how water molecules transition into solid ice with quantum accuracy. This level of accuracy, once thought unreachable due to the amount of computing power it would require, became possible when the researchers incorporated deep neural networks, a form of artificial intelligence, into their methods. The study was published in the journal Proceedings of the National Academy of Sciences.
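The team's deep neural network potentials are far more sophisticated, but the underlying idea (fit a fast surrogate that reproduces expensive quantum-mechanical energies from descriptors of each atom's surroundings, then evaluate it inside a simulation) can be sketched with synthetic data; the descriptors and energies below are stand-ins, not real water data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Toy stand-in for a machine-learned potential: map a descriptor of a
# molecule's local environment to the energy an expensive quantum
# calculation would give. All data here are synthetic.
rng = np.random.default_rng(7)
n_configs, n_descriptor = 2000, 12
descriptors = rng.random((n_configs, n_descriptor))
quantum_energy = np.sin(descriptors[:, 0] * 3) + descriptors[:, 1] ** 2 + 0.01 * rng.normal(size=n_configs)

X_train, X_test, y_train, y_test = train_test_split(descriptors, quantum_energy, random_state=0)
potential = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
potential.fit(X_train, y_train)
print("R^2 vs held-out quantum energies:", round(potential.score(X_test, y_test), 2))

# Once trained, a surrogate like this can be evaluated millions of times inside
# a molecular-dynamics loop at a tiny fraction of the quantum cost.
```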

“In a sense, this is like a dream come true,” said Roberto Car, Princeton’s Ralph W. *31 Dornte Professor in Chemistry, who co-pioneered the approach of simulating molecular behaviors based on the underlying quantum laws more than 35 years ago. “Our hope then was that eventually we would be able to study systems like this one, but it was not possible without further conceptual development, and that development came via a completely different field, that of artificial intelligence and data science.”

The ability to model the initial steps in freezing water, a process called ice nucleation, could improve the accuracy of weather and climate modeling, as well as of other processes like flash-freezing food.

The new approach enables the researchers to track the activity of hundreds of thousands of atoms over time periods thousands of times longer than in earlier studies, albeit still just fractions of a second.

Car co-invented the approach to using underlying quantum mechanical laws to predict the physical movements of atoms and molecules. Quantum mechanical laws dictate how atoms bind to each other to form molecules, and how molecules join with each other to form everyday objects.

Monday, August 1, 2022

Artificial Intelligence Edges Closer to the Clinic

TransMED can help predict the outcomes of COVID-19 patients, generating predictions from different kinds of clinical data, including clinical notes, laboratory tests, diagnosis codes and prescribed drugs. TransMED is also unique in its ability to transfer what it learns from existing diseases to better predict and reason about the progression of new and rare diseases.
Credit: Shannon Colson | Pacific Northwest National Laboratory

The beginning of the COVID-19 pandemic presented a huge challenge to healthcare workers. Doctors struggled to predict how different patients would fare under treatment against the novel SARS-CoV-2 virus. Deciding how to triage medical resources when presented with very little information took a mental and physical toll on caregivers as the pandemic progressed.

To ease this burden, researchers at Pacific Northwest National Laboratory (PNNL), Stanford University, Virginia Tech, and John Snow Labs developed TransMED, a first-of-its-kind artificial intelligence (AI) prediction tool aimed at addressing issues caused by emerging or rare diseases.

“As COVID-19 unfolded over 2020, it brought a number of us together into thinking how and where we could contribute meaningfully,” said chief scientist Sutanay Choudhury. “We decided we could make the most impact if we worked on the problem of predicting patient outcomes.”

“COVID presented a unique challenge,” said Khushbu Agarwal, lead author of the study published in Nature Scientific Reports. “We had very limited patient data for training an AI model that could learn the complex patterns underlying COVID patient trajectories.”

The multi-institutional team developed TransMED to address this challenge, analyzing data from existing diseases to predict outcomes of an emerging disease.

Thursday, July 28, 2022

AI tackles the challenge of materials structure prediction


The researchers, from the University of Cambridge and Linköping University, have designed a way to predict the structure of materials given their constituent elements. The results are reported in the journal Science Advances.

The arrangement of atoms in a material determines its properties. The ability to predict this arrangement computationally for different combinations of elements, without having to make the material in the lab, would enable researchers to quickly design and improve materials. This paves the way for advances such as better batteries and photovoltaics.

However, there are many ways that atoms can ‘pack’ into a material: some packings are stable, others are not. Determining the stability of a packing is computationally intensive, and calculating every possible arrangement of atoms to find the best one is not practical. This is a significant bottleneck in materials science.
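The published method couples machine learning with structure searching far more carefully; as a minimal sketch of the bottleneck-easing idea (learn a cheap surrogate for stability from a few expensive reference calculations, then use it to screen a large pool of candidate packings), consider the toy example below, with invented descriptors and energies.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy setup: represent each candidate "packing" by a simple descriptor vector
# (e.g., density plus averaged neighbor distances) and learn to predict its
# energy from a small set of expensive reference calculations.
rng = np.random.default_rng(5)
n_reference, n_candidates, n_features = 100, 10_000, 6
X_ref = rng.random((n_reference, n_features))
energy_ref = -2.0 * X_ref[:, 0] + X_ref[:, 1] ** 2 + 0.05 * rng.normal(size=n_reference)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_ref, energy_ref)

# Screen a large pool of cheaply generated candidates and keep the lowest-energy
# ones for full (expensive) stability calculations.
X_candidates = rng.random((n_candidates, n_features))
predicted_energy = surrogate.predict(X_candidates)
best = np.argsort(predicted_energy)[:10]
print("indices of most promising packings:", best)
```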

“This materials structure prediction challenge is similar to the protein folding problem in biology,” said Dr Alpha Lee from Cambridge’s Cavendish Laboratory, who co-led the research. “There are many possible structures that a material can ‘fold’ into. Except the materials science problem is perhaps even more challenging than biology because it considers a much broader set of elements.”

Thursday, June 23, 2022

Robots play with play dough


The inner child in many of us feels an overwhelming sense of joy when stumbling across a pile of the fluorescent, rubbery mixture of water, salt, and flour that put goo on the map: play dough. (Even if this happens rarely in adulthood.)

While manipulating play dough is fun and easy for 2-year-olds, the shapeless sludge is hard for robots to handle. Machines have become increasingly reliable with rigid objects, but manipulating soft, deformable objects comes with a laundry list of technical challenges, and most importantly, as with most flexible structures, if you move one part, you’re likely affecting everything else.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stanford University recently let robots try their hand at playing with the modeling compound, but not for nostalgia’s sake. Their new system learns directly from visual inputs to let a robot with a two-fingered gripper see, simulate, and shape doughy objects. “RoboCraft” could reliably plan a robot’s behavior to pinch and release play dough to make various letters, including ones it had never seen. With just 10 minutes of data, the two-finger gripper rivaled human counterparts who teleoperated the machine, performing on par, and at times even better, on the tested tasks.

Wednesday, June 22, 2022

Where Once Were Black Boxes, NIST’s New LANTERN Illuminates

How do you figure out how to alter a gene so that it makes a usefully different protein? The job might be imagined as interacting with a complex machine (at left) that sports a vast control panel filled with thousands of unlabeled switches, which all affect the device’s output somehow. A new tool called LANTERN figures out which sets of switches — rungs on the gene’s DNA ladder — have the largest effect on a given attribute of the protein. It also summarizes how the user can tweak that attribute to achieve a desired effect, essentially transmuting the many switches on our machine’s panel into another machine (at right) with just a few simple dials.
Credit: B. Hayes/NIST

Researchers at the National Institute of Standards and Technology (NIST) have developed a new statistical tool that they have used to predict protein function. Not only could it help with the difficult job of altering proteins in practically useful ways, but it also works by methods that are fully interpretable — an advantage over the conventional artificial intelligence (AI) that has aided with protein engineering in the past.

The new tool, called LANTERN, could prove useful in work ranging from producing biofuels to improving crops to developing new disease treatments. Proteins, as building blocks of biology, are a key element in all these tasks. But while it is comparatively easy to make changes to the strand of DNA that serves as the blueprint for a given protein, it remains challenging to determine which specific base pairs — rungs on the DNA ladder — are the keys to producing a desired effect. Finding these keys has been the purview of AI built of deep neural networks (DNNs), which, though effective, are notoriously opaque to human understanding.
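LANTERN's actual model is described in the NIST work; as a loose analogue of the "many switches to a few dials" idea, the sketch below compresses binary mutation vectors into two latent components and fits a plainly inspectable linear model on them. The data, the use of PCA and the two-component choice are illustrative assumptions, not LANTERN itself.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Toy data: each variant is a binary vector marking which of 50 possible
# mutations it carries; the measured attribute could be brightness or activity.
rng = np.random.default_rng(6)
n_variants, n_mutations = 400, 50
mutations = rng.integers(0, 2, size=(n_variants, n_mutations))
true_direction = rng.normal(size=n_mutations)
attribute = mutations @ true_direction + 0.1 * rng.normal(size=n_variants)

# Compress the many mutation "switches" into two latent "dials", then fit a
# simple, fully inspectable linear model on those dials.
model = make_pipeline(PCA(n_components=2), LinearRegression())
model.fit(mutations, attribute)
dials = model.named_steps["pca"].transform(mutations[:3])
print("latent dial settings for first three variants:\n", dials.round(2))
print("effect of each dial on the attribute:", model.named_steps["linearregression"].coef_.round(2))
```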

Tuesday, May 24, 2022

AI reveals unsuspected math underlying search for exoplanets

Artist’s concept of a sun-like star (left) and a rocky planet about 60% larger than Earth in orbit in the star’s habitable zone. Gravitational microlensing has the ability to detect such planetary systems and determine the masses and orbital distances, even though the planet itself is too dim to be seen. 
Image credit: NASA Ames/JPL-Caltech/T. Pyle

Artificial intelligence (AI) algorithms trained on real astronomical observations now outperform astronomers in sifting through massive amounts of data to find new exploding stars, identify new types of galaxies and detect the mergers of massive stars, accelerating the rate of new discovery in the world’s oldest science.

But AI, also called machine learning, can reveal something deeper, University of California, Berkeley, astronomers found: unsuspected connections hidden in the complex mathematics arising from general relativity — in particular, how that theory is applied to finding new planets around other stars.

In a paper appearing this week in the journal Nature Astronomy, the researchers describe how an AI algorithm developed to more quickly detect exoplanets when such planetary systems pass in front of a background star and briefly brighten it — a process called gravitational microlensing — revealed that the decades-old theories now used to explain these observations are woefully incomplete.

Friday, May 20, 2022

Artificial intelligence predicts patients’ race from their medical images

Researchers demonstrated that medical AI systems can easily learn to recognize racial identity in medical images, and that this capability is extremely difficult to isolate or mitigate.
 Credit: Massachusetts Institute of Technology

The miseducation of algorithms is a critical problem; when artificial intelligence mirrors unconscious thoughts, racism, and biases of the humans who generated these algorithms, it can lead to serious harm. Computer programs, for example, have wrongly flagged Black defendants as twice as likely to reoffend as someone who’s white. When an AI used cost as a proxy for health needs, it falsely named Black patients as healthier than equally sick white ones, as less money was spent on them. Even AI used to write a play relied on using harmful stereotypes for casting.

Removing sensitive features from the data seems like a viable tweak. But what happens when it’s not enough?
