Scientific Frontline: Artificial Intelligence
Showing posts with label Artificial Intelligence. Show all posts

Monday, May 8, 2023

AI Predicts Future Pancreatic Cancer

Pancreatic cancer cells
Image Credit: National Cancer Institute

An artificial intelligence tool has successfully identified people at the highest risk for pancreatic cancer up to three years before diagnosis using solely the patients’ medical records, according to new research led by investigators at Harvard Medical School and the University of Copenhagen, in collaboration with VA Boston Healthcare System, Dana-Farber Cancer Institute, and the Harvard T.H. Chan School of Public Health.

The findings, published May 8 in Nature Medicine, suggest that AI-based population screening could be valuable in finding those at elevated risk for the disease and could expedite the diagnosis of a condition found all too often at advanced stages, when treatment is less effective and outcomes are dismal, the researchers said. Pancreatic cancer is one of the deadliest cancers in the world, and its toll is projected to increase.

Currently, there are no population-based tools to screen broadly for pancreatic cancer. Those with a family history and certain genetic mutations that predispose them to pancreatic cancer are screened in a targeted fashion. But such targeted screenings can miss other cases that fall outside of those categories, the researchers said.

“One of the most important decisions clinicians face day to day is who is at high risk for a disease, and who would benefit from further testing, which can also mean more invasive and more expensive procedures that carry their own risks,” said study co-senior investigator Chris Sander, faculty member in the Department of Systems Biology in the Blavatnik Institute at HMS. “An AI tool that can zero in on those at highest risk for pancreatic cancer who stand to benefit most from further tests could go a long way toward improving clinical decision-making.”

Monday, May 1, 2023

‘Raw’ data show AI signals mirror how the brain listens and learns

Researchers found strikingly similar signals between the brain and artificial neural networks. The blue line is the brain wave recorded while humans listen to a vowel; the red line is the artificial neural network’s response to the same vowel. Both signals are raw, meaning no transformations were applied.
Illustration Credit: Courtesy Gasper Begus

New research from the University of California, Berkeley, shows that artificial intelligence (AI) systems can process signals in a way that is remarkably similar to how the brain interprets speech, a finding scientists say might help explain the black box of how AI systems operate.

Using a system of electrodes placed on participants’ heads, scientists with the Berkeley Speech and Computation Lab measured brain waves as participants listened to a single syllable — “bah.” They then compared that brain activity to the signals produced by an AI system trained to learn English.

“The shapes are remarkably similar,” said Gasper Begus, assistant professor of linguistics at UC Berkeley and lead author on the study published recently in the journal Scientific Reports. “That tells you similar things get encoded, that processing is similar.”

A side-by-side comparison graph of the two signals shows that similarity strikingly.

Tuesday, April 18, 2023

Study shows how machine learning can identify social grooming behavior from acceleration signals in wild baboons

Photo Credit: Charl Durand

Scientists from Swansea University and the University of Cape Town have tracked social grooming behavior in wild baboons using collar-mounted accelerometers.

The study, published in the journal Royal Society Open Science, is the first to successfully calculate grooming budgets using this method, which opens up new avenues for future research.

Using collars containing accelerometers built at Swansea University, the team recorded the activities of baboons in Cape Town, South Africa, identifying and quantifying general activities such as resting, walking, foraging and running, and also the giving and receiving of grooming.

A supervised machine learning algorithm was trained on acceleration data matched to video recordings of the baboons and successfully recognized the giving and receiving of grooming with high overall accuracy.

The team then applied their machine learning model to acceleration data collected from 12 baboons to quantify grooming and other behaviors continuously throughout the day and night-time.
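To illustrate the general approach, here is a hedged sketch, not the Swansea team’s actual pipeline: a minimal classifier assigns each acceleration burst to the labelled behavior whose summary features it most resembles. The features (mean dynamic body acceleration and its variability) and all numbers are invented for illustration.

```python
# Illustrative sketch (not the authors' pipeline): a nearest-centroid
# classifier labelling behavior bursts from two hypothetical summary
# features of collar acceleration data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for video-labelled acceleration bursts:
# (mean dynamic body acceleration, its standard deviation).
classes = {"rest": (0.1, 0.02), "groom": (0.4, 0.10), "walk": (1.0, 0.30)}

def simulate(label, n=200):
    mu, sd = classes[label]
    return np.column_stack([rng.normal(mu, 0.03, n), rng.normal(sd, 0.01, n)])

train = {label: simulate(label) for label in classes}
centroids = {label: pts.mean(axis=0) for label, pts in train.items()}

def predict(x):
    # Assign the burst to the class with the nearest feature centroid.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Evaluate on fresh synthetic bursts.
correct = total = 0
for label in classes:
    for x in simulate(label, 100):
        correct += predict(x) == label
        total += 1
print(f"accuracy on synthetic bursts: {correct / total:.2f}")
```

In the real study, the training labels came from acceleration data time-matched to video recordings, and the fitted model was then applied to unlabelled data recorded around the clock.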

Friday, April 14, 2023

Personalized Gut Microbiome Analysis for Colorectal Cancer Classification with Explainable AI

Explainable AI offers a promising solution for finding links between diseases and certain species of gut bacteria, finds a research team at Tokyo Tech. Using a concept borrowed from game theory, the researchers developed a framework that reveals which bacterial species are closely associated with colorectal cancer in individual subjects, providing a more reliable way to find and characterize disease subgroups and identify biomarkers in the gut microbiome.

The gut microbiome comprises a complex population of different bacterial species that are essential to human health. In recent years, scientists across several fields have found that changes in the gut microbiome can be linked to a wide variety of diseases, notably colorectal cancer (CRC). Multiple studies have revealed that a higher abundance of certain bacteria, such as Fusobacterium nucleatum and Parvimonas micra, is typically associated with CRC progression.
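The game-theory concept in question is the Shapley value: a feature’s contribution to one subject’s prediction is its average marginal effect over all orderings in which features could be added. A brute-force sketch with an invented toy risk model (the study’s actual framework and model are more elaborate) makes the idea concrete:

```python
# Hedged sketch of the Shapley-value idea behind such explainable-AI
# frameworks. The risk function and species abundances are invented.
from itertools import permutations
from math import factorial

def risk(a):
    # Toy risk score: F. nucleatum dominates, "other" is irrelevant.
    return 2.0 * a["F_nucleatum"] + 0.5 * a["P_micra"]

def shapley(subject, baseline, f):
    species = list(subject)
    phi = dict.fromkeys(species, 0.0)
    for order in permutations(species):
        current, prev = dict(baseline), f(baseline)
        for s in order:
            current[s] = subject[s]       # add this species' observed abundance
            phi[s] += f(current) - prev   # its marginal contribution
            prev = f(current)
    n = factorial(len(species))
    return {s: v / n for s, v in phi.items()}

baseline = {"F_nucleatum": 0.0, "P_micra": 0.0, "other": 0.0}
subject = {"F_nucleatum": 3.0, "P_micra": 1.0, "other": 2.0}
phi = shapley(subject, baseline, risk)
print(phi)  # F. nucleatum carries most of this subject's predicted risk
```

Because the attribution is computed per subject, scores like these can reveal which bacteria drive the prediction for an individual, which is what enables the subgrouping the researchers describe.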

Thursday, April 13, 2023

AI Tool Predicts Colon Cancer Survival, Treatment Response

New AI tool accurately predicts both overall survival and disease-free survival after colorectal cancer diagnosis.
Image Credit: bodymybody

A new artificial intelligence model designed by researchers at Harvard Medical School and National Cheng Kung University in Taiwan could bring much-needed clarity to doctors delivering prognoses and deciding on treatments for patients with colorectal cancer, the second deadliest cancer worldwide.

Solely by looking at images of tumor samples — microscopic depictions of cancer cells — the new tool accurately predicts how aggressive a colorectal tumor is, how likely the patient is to survive with and without disease recurrence, and what the optimal therapy might be for them.

Having a tool that answers such questions could help clinicians and patients navigate this wily disease, which often behaves differently even among people with similar disease profiles who receive the same treatment — and could ultimately spare some of the 1 million lives that colorectal cancer claims every year.

Thursday, March 30, 2023

AI predicts enzyme function better than leading tools

An Illinois research team created an AI tool to predict an enzyme’s function from its sequence using the campus network and resource group servers. Pictured, from left: Tianhao You, Haiyang (Ocean) Cui, Huimin Zhao and Guangde Jiang.   
Photo Credit: Fred Zwicky

A new artificial intelligence tool can predict the functions of enzymes based on their amino acid sequences, even when the enzymes are unstudied or poorly understood. The researchers said the AI tool, dubbed CLEAN, outperforms the leading state-of-the-art tools in accuracy, reliability and sensitivity. Better understanding of enzymes and their functions would be a boon for research in genomics, chemistry, industrial materials, medicine, pharmaceuticals and more.

“Just like ChatGPT uses data from written language to create predictive text, we are leveraging the language of proteins to predict their activity,” said study leader Huimin Zhao, a University of Illinois Urbana-Champaign professor of chemical and biomolecular engineering. “Almost every researcher, when working with a new protein sequence, wants to know right away what the protein does. In addition, when making chemicals for any application – biology, medicine, industry – this tool will help researchers quickly identify the proper enzymes needed for the synthesis of chemicals and materials.”

The researchers will publish their findings in the journal Science and make CLEAN accessible online March 31.
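CLEAN itself learns protein embeddings with contrastive training; as a much cruder stand-in, the flavor of sequence-based function prediction can be sketched by comparing k-mer profiles of a query against labelled reference sequences. The sequences and EC numbers below are invented for illustration.

```python
# Toy sketch of sequence-based function prediction (NOT the CLEAN method):
# represent each sequence by 2-mer counts and assign the EC label of the
# most similar labelled reference by cosine similarity.
from collections import Counter

def kmer_vector(seq, k=2):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def similarity(a, b):
    shared = set(a) | set(b)
    dot = sum(a[m] * b[m] for m in shared)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb)

# Hypothetical labelled references (sequence -> EC class).
reference = {
    "MKVLAAGLLALA": "EC 3.1.1.-",
    "GGHHEEGGHHEE": "EC 1.1.1.-",
}

def predict_ec(query):
    vec = kmer_vector(query)
    best = max(reference, key=lambda r: similarity(vec, kmer_vector(r)))
    return reference[best]

print(predict_ec("MKVLAAGLLSLA"))  # closest to the first reference
```

The point of the sketch is only the shape of the task: sequence in, predicted enzyme function out. CLEAN’s advantage comes from learned embeddings that capture far more than raw k-mer composition.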

Machine learning models rank predictive risks for Alzheimer’s disease

Xiaoyi Raymond Gao, PhD Associate Professor
Photo Credit: Courtesy of Ohio State University

Once adults reach age 65, the threshold age for the onset of Alzheimer’s disease, the extent of their genetic risk may outweigh age as a predictor of whether they will develop the fatal brain disorder, a new study suggests. 

The study, published recently in the journal Scientific Reports, is the first to construct machine learning models with genetic risk scores, non-genetic information and electronic health record data from nearly half a million individuals to rank risk factors in order of how strong their association is with eventual development of Alzheimer’s disease.

Researchers used the models to rank predictive risk factors for two populations from the UK Biobank: White individuals aged 40 and older, and a subset of those adults who were 65 or older. 

Results showed that age – which constitutes one-third of total risk by age 85, according to the Alzheimer’s Association – was the biggest risk factor for Alzheimer’s in the entire population, but for the older adults, genetic risk as determined by a polygenic risk score was more predictive. 

“We all know Alzheimer’s disease is a later-onset disease, so we know age is an important risk factor. But when we consider risk only for people age 65 or older, then genetic information captured by a polygenic risk score ranks higher than age,” said lead study author Xiaoyi Raymond Gao, associate professor of ophthalmology and visual sciences and of biomedical informatics in The Ohio State University College of Medicine. “That means it’s really important to consider genetic information when we work on Alzheimer’s disease.” 
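One common way to rank predictors, shown here as a hedged sketch with invented data and a toy model rather than the study’s actual machinery, is permutation importance: shuffle one predictor at a time and measure how much the model’s accuracy drops.

```python
# Illustrative permutation-importance ranking. The cohort, weights, and
# outcome model are synthetic; in this toy setup the polygenic risk score
# is constructed to outweigh age, mirroring the study's older-adult result.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
age = rng.normal(0, 1, n)
prs = rng.normal(0, 1, n)          # standardized polygenic risk score
noise_feat = rng.normal(0, 1, n)   # an unrelated marker

# Synthetic outcome in which PRS matters more than age.
logit = 2.0 * prs + 0.8 * age
y = (logit + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([age, prs, noise_feat])
names = ["age", "polygenic_risk_score", "unrelated_marker"]
weights = np.array([0.8, 2.0, 0.0])  # stands in for a fitted model

def accuracy(X):
    return ((X @ weights > 0).astype(int) == y).mean()

base = accuracy(X)
drops = {}
for j, name in enumerate(names):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break this feature's link to y
    drops[name] = base - accuracy(Xp)

ranking = sorted(drops, key=drops.get, reverse=True)
print(ranking)  # the PRS ranks above age in this synthetic cohort
```

The study’s models and data are far richer, but the underlying question is the same: which variables, when removed from play, cost the model the most predictive power.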

Thursday, March 23, 2023

Can Artificial Intelligence Predict Spatiotemporal Distribution of Dengue Fever Outbreaks with Remote Sensing Data?

Image Credit: Sophia University

Researchers train a machine learning model with climatic and epidemiological remote sensing data to predict the spatiotemporal distribution of disease outbreaks

Cases of dengue fever and other zoonotic diseases will keep increasing owing to climate change, and prevention via early warning is one of our best options against them. Recently, researchers combined a machine learning model with remote sensing climatic data and information on past dengue fever cases in Chinese Taiwan, with the aim of predicting likely outbreak locations. Their findings highlight the hurdles to this approach and could facilitate more accurate predictive models.

Outbreaks of zoonotic diseases, which are those transmitted from animals to humans, are globally on the rise owing to climate change. In particular, the spread of diseases transmitted by mosquitoes is very sensitive to climate change, and Chinese Taiwan has seen a worrisome increase in the number of cases of dengue fever in recent years.

As with most known diseases, the popular saying “an ounce of prevention is worth a pound of cure” rings true for dengue fever. Since there is still no safe and effective vaccine available to everyone on a global scale, dengue fever prevention efforts rely on limiting places where mosquitoes can lay their eggs and giving people early warning when an outbreak is likely. However, thus far, there are no mathematical models that can accurately predict the location of dengue fever outbreaks ahead of time.

Wednesday, March 22, 2023

Shining a light into the “black box” of AI

With no insight into how AI algorithms work or what influences their results, the “black box” nature of AI technology raises important questions about trustworthiness.
Illustration Credit: Gerd Altmann

An international team led by UNIGE, HUG and NUS has developed an innovative method for evaluating AI interpretability methods, with the aim of deciphering the basis of AI reasoning and possible biases.

 Researchers from the University of Geneva (UNIGE), the Geneva University Hospitals (HUG), and the National University of Singapore (NUS) have developed a novel method for evaluating the interpretability of artificial intelligence (AI) technologies, opening the door to greater transparency and trust in AI-driven diagnostic and predictive tools. The innovative approach sheds light on the opaque workings of so-called "black box" AI algorithms, helping users understand what influences the results produced by AI and whether the results can be trusted. This is especially important in situations that have significant impacts on the health and lives of people, such as using AI in medical applications. The research carries particular relevance in the context of the forthcoming European Union Artificial Intelligence Act which aims to regulate the development and use of AI within the EU. The findings have recently been published in the journal Nature Machine Intelligence.

Wednesday, March 15, 2023

“Denoising” a Noisy Ocean

Study lead author Ella Kim (pink helmet) helps deploy a HARP instrument package.
Photo Credit: Ana Širović

Come mating season, fishes off the California coast sing songs of love in the evenings and before sunrise. They vocalize not so much as lone crooners but in choruses, in some cases loud enough to be heard from land. It’s a technique of romance shared by frogs, insects, whales, and other animals when the time is right.

For most of these vocal arrangements, the choruses are low-frequency. They’re hard to distinguish from other low-frequency noise, such as the sounds of ships passing in the night.

Biologists, however, have long been interested in listening in on these choruses to understand fish behavior, with an ultimate goal in mind: identifying spawning seasons can inform fisheries management and thereby help preserve fish populations and ocean health.

Now scientists at Scripps Institution of Oceanography at UC San Diego and colleagues have developed a way for computers to sift through sounds collected by field acoustic recording packages known as HARPs and process them faster than even the most trained human analysts. The method represents a major advance in the field of signal processing with uses beyond marine environments.

Wednesday, March 1, 2023

AI offers ‘paradigm shift’ in Stanford study of brain injury

Models discovered by the Constitutive Artificial Neural Network outperform existing models for brain tissue.
Image Credit: Ellen Kuhl

By helping researchers choose among thousands of available computational models of mechanical stress on the brain, AI is yielding powerful new insight on traumatic brain injury.

From the gridiron to the battlefield, the study of traumatic brain injury has exploded in recent years. Crucial to understanding brain injury is the ability to model the mechanical forces that compress, stretch, and twist the brain tissue and cause damage that ranges from fleeting to fatal.

Researchers at Stanford University now say they have tapped artificial intelligence to produce a profoundly more accurate model of how deformations translate into stresses in the brain and believe that their approach could reveal a more definitive understanding of when and why concussion sometimes leads to lasting brain damage, and other times not.

“The problem in brain modeling to date is that the brain is not a homogeneous tissue – it’s not the same in every part of the brain. Yet, trauma is often pervasive,” said Ellen Kuhl, professor of mechanical engineering, director of the Living Matter Lab, and senior author of a new study appearing in the journal Acta Biomaterialia. “The brain is also ultrasoft, much like Jell-O, which makes both testing and modeling physical effects on the brain very challenging.”

Monday, February 27, 2023

Hackers could try to take over a military aircraft; can a cyber shuffle stop them?

Sandia National Laboratories cybersecurity expert Chris Jenkins sits in front of a whiteboard with the original sketch of the moving target defense idea for which he is the team lead. When the COVID-19 pandemic hit, Jenkins began working from home, and his office whiteboard remained virtually undisturbed for more than two years.
Photo Credit: Craig Fritz

A cybersecurity technique that shuffles network addresses like a blackjack dealer shuffles playing cards could effectively befuddle hackers gambling for control of a military jet, commercial airliner or spacecraft, according to new research. However, the research also shows these defenses must be designed to counter increasingly sophisticated algorithms used to break them.

Many aircraft, spacecraft and weapons systems have an onboard computer network known as military standard 1553, commonly referred to as MIL-STD-1553, or even just 1553. The network is a tried-and-true protocol for letting systems like radar, flight controls and the heads-up display talk to each other.

Securing these networks against a cyberattack is a national security imperative, said Chris Jenkins, a Sandia National Laboratories cybersecurity scientist. If a hacker were to take over 1553 midflight, he said, the pilot could lose control of critical aircraft systems, and the impact could be devastating.

Jenkins is not alone in his concerns. Many researchers across the country are designing defenses for systems that utilize the MIL-STD-1553 protocol for command and control. Recently, Jenkins and his team at Sandia partnered with researchers at Purdue University in West Lafayette, Indiana, to test an idea that could secure these critical networks.
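The shuffling idea can be sketched in a few lines. This is a conceptual illustration only, with hypothetical device names and no claim to match Sandia’s implementation; the address range follows MIL-STD-1553, which allots remote terminal addresses 0 through 30 (31 is reserved for broadcast).

```python
# Conceptual sketch of a moving target defense: periodically re-shuffle
# the mapping from logical devices to on-bus addresses so that any address
# an attacker has learned quickly goes stale.
import random

devices = ["radar", "flight_controls", "heads_up_display", "nav"]

def reshuffle(devices, rng):
    # MIL-STD-1553 remote terminal addresses run 0-30 (31 is broadcast).
    addresses = rng.sample(range(31), len(devices))
    return dict(zip(devices, addresses))

rng = random.Random(0)
table = reshuffle(devices, rng)
stale = table["radar"]           # what an eavesdropper might have learned

table = reshuffle(devices, rng)  # epoch rolls over: fresh random mapping
print(table)                     # the stale address is now likely useless
```

The research’s second finding is visible even in this toy: if the shuffle is predictable (a weak random source, a short re-key interval), a sophisticated algorithm can learn the pattern, so the randomization itself must resist analysis.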

Let's get wasted and apply some deep thinking to rubbish

Photo Credit: John Cameron

Artificial intelligence has made a giant leap into our rubbish bins thanks to new technology being deployed at the University of South Australia.

Using algorithms to analyze data from smart bin sensors, UniSA PhD student Sabbir Ahmed is designing a deep learning model to predict where waste is accumulating in cities and how often public bins should be cleared.

“Sensors in the public smart bins can give us a lot of information about how busy specific locations are, what type of rubbish is being disposed of and even how much methane gas is being produced from food waste in bins,” Ahmed says.

“All that data can be fed into a neural network model to predict where bins in parks, shopping centers and other public places are likely to fill up quickly and, conversely, which locations are rarely visited.

“This can help councils to optimize their waste management services, schedule bin clearances and even relocate rarely used bins to where they are needed most.”

Wednesday, February 22, 2023

Infants Outperform AI in “Commonsense Psychology”

New Study Shows How Infants Are More Adept at Spotting Motivations that Drive Human Behavior

Infants outperform artificial intelligence in detecting what motivates other people’s actions, finds a new study by a team of psychology and data science researchers. Its results, which highlight fundamental differences between cognition and computation, point to shortcomings in today’s technologies and where improvements are needed for AI to more fully replicate human behavior. 

“Adults and even infants can easily make reliable inferences about what drives other people’s actions,” explains Moira Dillon, an assistant professor in New York University’s Department of Psychology and the senior author of the paper, which appears in the journal Cognition. “Current AI finds these inferences challenging to make.”

“The novel idea of putting infants and AI head-to-head on the same tasks is allowing researchers to better describe infants’ intuitive knowledge about other people and suggest ways of integrating that knowledge into AI,” she adds.

“If AI aims to build flexible, commonsense thinkers like human adults become, then machines should draw upon the same core abilities infants possess in detecting goals and preferences,” says Brenden Lake, an assistant professor in NYU’s Center for Data Science and Department of Psychology and one of the paper’s authors.

Monday, February 20, 2023

Researchers aim to bring humans back into the loop, as AI use and misuse rises

Image Credit: Gerd Altmann

Artificial intelligence is dominating headlines—enabling new innovations that drive business performance—yet the negative implications for society are an afterthought.

How can humans get back into the loop in the quest toward a better society for all?

A trans-Atlantic team of researchers, including two from the University of Michigan, has reviewed information systems research on what’s known as the “Fourth Industrial Revolution” and found an overwhelming focus on technology-enabled business benefits.

The focus means far less attention is being paid to societal implications—what the researchers refer to as “the increasing risk and damage to humans.”

“We’re talking about AI the wrong way—focusing on technology not people—moving us away from the things we want, such as better medications, elder care and safety regulations, and toward the things we don’t, like harmful deepfakes, job losses and biased decision making,” said Nigel Melville, associate professor of technology and operations at U-M’s Ross School of Business and design science program director.

Thursday, February 16, 2023

AI could improve mental health care

Photo Credit: SHVETS production

When talking to psychologists or doctors about their mental health, patients are often asked to rate their feelings on a rating scale; this is currently how depression and anxiety are diagnosed. However, a new study from Lund University in Sweden shows that allowing patients to describe their experience in their own words is potentially more precise and is preferred by patients. The Lund researchers have developed an AI tool that could help doctors analyze their patients’ answers.

The study, published in PLOS ONE, shows that clinicians and patients differ somewhat: clinicians often prefer rating scales when diagnosing a patient (e.g., “little interest in doing things”: not at all, sometimes, often, daily), whereas patients prefer free language (e.g., “describe your mental health”).

The researchers surveyed a group of 150 patients with self-diagnosed depression or anxiety, posing the same questions to a control group of 150 other participants.

Wednesday, February 15, 2023

AI with infrared imaging enables precise colon cancer diagnostics

Klaus Gerwert, Stephanie Schörner and Frederik Großerüschkamp (from left) want to improve the diagnosis of colon cancer with the help of artificial intelligence.
Photo Credit: © RUB, Marquard

Artificial intelligence and infrared imaging automatically classify tumors and are faster than previous methods.

The immense progress in therapy options over the past few years has significantly improved the chances of recovery for patients with colon cancer. However, these new approaches, such as immunotherapy, require a precise diagnosis so that they can be tailored to the individual. Researchers at the Center for Protein Diagnostics (PRODI) at Ruhr University Bochum use artificial intelligence in combination with infrared imaging to tailor colon cancer therapy to the individual patient. The label-free and automatable method can complement existing pathological analyses. The team led by Prof. Dr. Klaus Gerwert reported its findings in the European Journal of Cancer on January 28, 2023.

Deep insights into human tissue within an hour

The PRODI team has been developing a new method of digital imaging for several years: so-called label-free infrared (IR) imaging measures the genomic and proteomic composition of the examined tissue, i.e., it provides molecular information based on the infrared spectra. This information is decoded using artificial intelligence and displayed as false-color images. For this purpose, the researchers use image analysis methods from the field of deep learning.

Monday, February 13, 2023

VISTA X-62 Advancing Autonomy and Changing the Face of Air Power

The X-62A VISTA Aircraft flying above Edwards Air Force Base, California.
Photo Credit: Kyle Brasier, U.S. Air Force

The Lockheed Martin VISTA X-62A, a one-of-a-kind training aircraft, was flown by an artificial intelligence agent for more than 17 hours recently, representing the first time AI engaged on a tactical aircraft.

VISTA, short for Variable In-flight Simulation Test Aircraft, is changing the face of air power at the U.S. Air Force Test Pilot School (USAF TPS) at Edwards Air Force Base in California.

VISTA is a one-of-a-kind training airplane developed by Lockheed Martin Skunk Works® in collaboration with Calspan Corporation for the USAF TPS. Built on open systems architecture, VISTA is fitted with software that allows it to mimic the performance characteristics of other aircraft.

"VISTA will allow us to parallelize the development and test of cutting-edge artificial intelligence techniques with new uncrewed vehicle designs," said Dr. M. Christopher Cotting, U.S. Air Force Test Pilot School director of research. "This approach, combined with focused testing on new vehicle systems as they are produced, will rapidly mature autonomy for uncrewed platforms and allow us to deliver tactically relevant capability to our warfighter."

Efficient technique improves machine-learning models’ reliability

Researchers from MIT and the MIT-IBM Watson AI Lab have developed a new technique that can enable a machine-learning model to quantify how confident it is in its predictions, but does not require vast troves of new data and is much less computationally intensive than other techniques.
Image Credit: MIT News, iStock
Creative Commons Attribution Non-Commercial No Derivatives license

Powerful machine-learning models are being used to help people tackle tough problems such as identifying disease in medical images or detecting road obstacles for autonomous vehicles. But machine-learning models can make mistakes, so in high-stakes settings it’s critical that humans know when to trust a model’s predictions.

Uncertainty quantification is one tool that improves a model’s reliability; the model produces a score along with the prediction that expresses a confidence level that the prediction is correct. While uncertainty quantification can be useful, existing methods typically require retraining the entire model to give it that ability. Training involves showing a model millions of examples so it can learn a task. Retraining then requires millions of new data inputs, which can be expensive and difficult to obtain, and also uses huge amounts of computing resources.

Researchers at MIT and the MIT-IBM Watson AI Lab have now developed a technique that enables a model to perform more effective uncertainty quantification, while using far fewer computing resources than other methods, and no additional data. Their technique, which does not require a user to retrain or modify a model, is flexible enough for many applications.
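What “a score along with the prediction” means can be shown with the simplest possible baseline, softmax confidence; this is a hedged sketch of the general concept, and the MIT technique is far more sophisticated (and better calibrated) than this.

```python
# Minimal illustration of uncertainty quantification on top of a trained
# classifier: convert raw logits to probabilities and report the top-class
# probability as a confidence score.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_confidence(logits, labels):
    probs = softmax(logits)
    i = max(range(len(probs)), key=probs.__getitem__)
    return labels[i], probs[i]

labels = ["no obstacle", "pedestrian", "vehicle"]
print(predict_with_confidence([0.2, 3.1, 0.9], labels))  # confident call
print(predict_with_confidence([1.0, 1.1, 0.9], labels))  # low confidence
```

A downstream user, human or autonomous system, can then act on the prediction only when the confidence clears a threshold, which is exactly the high-stakes use case the article describes.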

Friday, February 3, 2023

Robots and A.I. team up to discover highly selective catalysts

Close up of the semi-automated synthesis robot used to generate training data
Photo Credit: ICReDD

Researchers used a chemical synthesis robot and a computationally cost-effective A.I. model to successfully predict and validate highly selective catalysts.

Artificial intelligence (A.I.) has made headlines recently with the advent of ChatGPT’s language processing capabilities. Creating a similarly powerful tool for chemical reaction design remains a significant challenge, especially for complex catalytic reactions. To help address this challenge, researchers at the Institute for Chemical Reaction Design and Discovery and the Max Planck Institut für Kohlenforschung have demonstrated a machine learning method that utilizes advanced yet efficient 2D chemical descriptors to accurately predict highly selective asymmetric catalysts—without the need for quantum chemical computations.  

“There have been several advanced technologies which can ‘predict’ catalyst structures, but those methods often required large investments of calculation resources and time; yet their accuracy was still limited,” said joint first author Nobuya Tsuji. “In this project, we have developed a predictive model which you can run even with an everyday laptop PC.”
