. Scientific Frontline: Artificial Intelligence
Showing posts with label Artificial Intelligence. Show all posts

Thursday, February 6, 2025

Improved Brain Decoder Holds Promise for Communication in People with Aphasia

Brain activity like this, measured in an fMRI machine, can be used to train a brain decoder to decipher what a person is thinking about. In this latest study, UT Austin researchers have developed a method to adapt their brain decoder to new users far faster than the original training, even when the user has difficulty comprehending language.
Image Credit: Jerry Tang/University of Texas at Austin.

People with aphasia — a brain disorder affecting about a million people in the U.S. — struggle to turn their thoughts into words and comprehend spoken language.

A pair of researchers at The University of Texas at Austin has demonstrated an AI-based tool that can translate a person’s thoughts into continuous text, without requiring the person to comprehend spoken words. And the process of training the tool on a person’s own unique patterns of brain activity takes only about an hour. This builds on the team’s earlier work creating a brain decoder that required many hours of training on a person’s brain activity as the person listened to audio stories. This latest advance suggests it may be possible, with further refinement, for brain computer interfaces to improve communication in people with aphasia.

“Being able to access semantic representations using both language and vision opens new doors for neurotechnology, especially for people who struggle to produce and comprehend language,” said Jerry Tang, a postdoctoral researcher at UT in the lab of Alex Huth and first author on a paper describing the work in Current Biology. “It gives us a way to create language-based brain computer interfaces without requiring any amount of language comprehension.”

Monday, February 3, 2025

AI unveils: Meteoroid impacts cause Mars to shake

High-resolution CaSSIS image of one of the newly discovered impact craters in Cerberus Fossae. The so-called "blast zone", i.e. the dark rays around the crater, is clearly visible.
Image Credit: © ESA/TGO/CaSSIS
(CC-BY-SA 3.0 IGO)

Meteoroid impacts create seismic waves that shake Mars more strongly, and at greater depths, than previously thought: this is shown by an investigation using artificial intelligence carried out by an international research team led by the University of Bern. The team found similarities between numerous meteoroid impacts on the surface of Mars and marsquakes recorded by NASA's Mars lander InSight. These findings open up a new perspective on the impact rate and seismic dynamics of the Red Planet.

Meteoroid impacts have a significant influence on the landscape evolution of solid planetary bodies in our solar system, including Mars. By studying craters – the visible remnants of these impacts – important properties of the planet and its surface can be determined. Satellite images help to constrain the formation time of impact craters and thus provide valuable information on impact rates.

A recently published study led by Dr. Valentin Bickel from the Center for Space and Habitability at the University of Bern presents the first comprehensive catalog of impacts on the Martian surface that took place near NASA's Mars lander during the InSight mission between December 2018 and December 2022. Bickel is also an InSight science team member. The study has just been published in the journal Geophysical Research Letters.

Friday, January 24, 2025

OHSU researchers use AI machine learning to map hidden molecular interactions in bacteria

Andrew Emili, Ph.D., professor of systems biology and oncological sciences, works in his lab at OHSU. Emili is part of a multi-disciplinary research team that uncovered how small molecules within bacteria interact with proteins, revealing a network of molecular connections that could improve drug discovery and cancer research.
Photo Credit: OHSU/Christine Torres Hicks

A new study from Oregon Health & Science University has uncovered how small molecules within bacteria interact with proteins, revealing a network of molecular connections that could improve drug discovery and cancer research.

The work also highlights how methods and principles learned from bacterial model systems can be applied to human cells, providing insights into how diseases like cancer emerge and how they might be treated. The results are published today in the journal Cell.

The multi-disciplinary research team, led by Andrew Emili, Ph.D., professor of systems biology and oncological sciences in the OHSU School of Medicine and OHSU Knight Cancer Institute, alongside Dima Kozakov, Ph.D., professor at Stony Brook University, studied Escherichia coli, or E. coli, a simple model organism, to map how metabolites — small molecules essential for life — interact with key proteins such as enzymes and transcription factors. These interactions control important processes such as cell growth, division and gene expression, but how exactly they influence protein function is not always clear.

Monday, January 13, 2025

Oxford researchers develop blood test to enable early detection of multiple cancers

Photo Credit: Fernando Zhiminaicela

Oxford University researchers have unveiled a new blood test, powered by machine learning, which shows real promise in detecting multiple types of cancer in their earliest stages, when the disease is hardest to detect.

Named TriOx, this innovative test analyses multiple features of DNA in the blood to identify subtle signs of cancer, which could offer a fast, sensitive and minimally invasive alternative to current detection methods.

The study, published in Nature Communications, showed that TriOx accurately detected cancer (including in its early stages) across six cancer types and reliably distinguished people who had cancer from those who did not.

Cancers are more likely to be cured if they’re caught early, and early treatment is also much cheaper for healthcare systems. While the test is still in the development phase, it demonstrates the promise of blood-based early cancer detection, a technology that could revolutionize screening and diagnostic practices.

A team of researchers at the University of Oxford has developed a new liquid biopsy test capable of detecting six cancers at an early stage. The cancer types evaluated in this study were colorectal, esophageal, pancreatic, renal, ovarian and breast.

Tuesday, April 2, 2024

AI breakthrough: UH researchers help uncover climate impact on whales

Underside of a humpback whale’s tail fluke which can serve as a “finger-print” for identification.
Photo Credit: Adam Pack

More than 10,000 images of humpback whale tail flukes collected by University of Hawaiʻi researchers have played a pivotal role in a new study of North Pacific humpback whales, revealing positive trends in the population's historical annual abundance as well as how a major climate event negatively impacted the population. Adam Pack, who heads the UH Hilo Marine Mammal Laboratory, Lars Bejder, director of the UH Mānoa Marine Mammal Research Program (MMRP), and graduate students Martin van Aswegen and Jens Currie co-authored the study, and the images, along with artificial intelligence (AI)-driven image recognition, were instrumental in tracking individuals and offering insights into the 20% population decline observed in 2012–21.

“The underside of a humpback whale’s tail fluke has a unique pigmentation pattern and trailing edge that can serve as the ‘finger-print’ for identifying individuals,” said Pack.

Monday, March 25, 2024

Large language models use a surprisingly simple mechanism to retrieve some stored knowledge

Researchers from MIT and elsewhere found that complex large language models use a simple mechanism to retrieve stored knowledge when they respond to a user prompt. The researchers can leverage these simple mechanisms to see what a model knows about different subjects, and possibly to correct false information it has stored.
Image Credit: Copilot / DALL-E 3 / AI generated from Scientific Frontline prompts

Large language models, such as those that power popular artificial intelligence chatbots like ChatGPT, are incredibly complex. Even though these models are being used as tools in many areas, such as customer support, code generation, and language translation, scientists still don’t fully grasp how they work.

In an effort to better understand what is going on under the hood, researchers at MIT and elsewhere studied the mechanisms at work when these enormous machine-learning models retrieve stored knowledge.

They found a surprising result: Large language models (LLMs) often use a very simple linear function to recover and decode stored facts. Moreover, the model uses the same decoding function for similar types of facts. A linear function, an equation with no exponents, captures a straightforward, straight-line relationship between two variables.
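The idea of a linear decoding function can be illustrated with a toy example (hypothetical vectors and a made-up relation, not the MIT team's actual model or data): a single matrix, fit once, maps the hidden representation of each subject to the representation of its associated attribute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden-state vectors for two subjects and their attributes.
# In a real LLM these would be activations from intermediate layers,
# e.g. for the relation "capital of": France -> Paris, Japan -> Tokyo.
d = 8
subjects = rng.normal(size=(2, d))
attributes = rng.normal(size=(2, d))

# Fit ONE linear map W for the whole relation by least squares,
# so that attributes ~ subjects @ W.
W, *_ = np.linalg.lstsq(subjects, attributes, rcond=None)

# The same W decodes the attribute for every subject of this relation.
recovered = subjects @ W
print(np.allclose(recovered, attributes, atol=1e-6))
```

Because one map serves all facts of the same type, probing a model with such a map can reveal what it "knows" about subjects it was never explicitly asked about.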

The researchers showed that, by identifying linear functions for different facts, they can probe the model to see what it knows about new subjects, and where within the model that knowledge is stored.

Monday, March 18, 2024

Alzheimer’s Drug Fermented with Help from AI and Bacteria Moves Closer to Reality

Photo-Illustration Credit: Martha Morales/The University of Texas at Austin

Galantamine is a medication commonly used around the world by people with Alzheimer’s disease and other forms of dementia to treat their symptoms. Unfortunately, synthesizing the active compounds in a lab at the scale needed isn’t commercially viable. The active ingredient is instead extracted from daffodils through a time-consuming process, and unpredictable factors, such as weather and crop yields, can affect the supply and price of the drug.

Now, researchers at The University of Texas at Austin have developed tools — including an artificial intelligence system and glowing biosensors — to harness microbes one day to do all the work instead. 

In a paper in Nature Communications, researchers outline a process using genetically modified bacteria to create a chemical precursor of galantamine as a byproduct of the microbe’s normal cellular metabolism.  Essentially, the bacteria are programmed to convert food into medicinal compounds.

“The goal is to eventually ferment medicines like this in large quantities,” said Andrew Ellington, a professor of molecular biosciences and author of the study. “This method creates a reliable supply that is much less expensive to produce. It doesn’t have a growing season, and it can’t be impacted by drought or floods.” 

Two artificial intelligences talk to each other

A UNIGE team has developed an AI capable of learning a task solely on the basis of verbal instructions, and of then describing it to a “sister” AI so that it can do the same.
Prompts by Scientific Frontline
Image Credit: AI Generated by Copilot / Designer / DALL-E

Performing a new task based solely on verbal or written instructions, and then describing it to others so that they can reproduce it, is a cornerstone of human communication that still resists artificial intelligence (AI). A team from the University of Geneva (UNIGE) has succeeded in modelling an artificial neural network capable of this cognitive prowess. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a “sister” AI, which in turn performed them. These promising results, especially for robotics, are published in Nature Neuroscience.

Performing a new task without prior training, on the sole basis of verbal or written instructions, is a unique human ability. What’s more, once we have learned the task, we are able to describe it so that another person can reproduce it. This dual capacity distinguishes us from other species, which need numerous trials accompanied by positive or negative reinforcement signals to learn a new task, and which cannot communicate what they have learned to their peers.

A sub-field of artificial intelligence (AI), natural language processing, seeks to recreate this human faculty with machines that understand and respond to vocal or textual data. This technique is based on artificial neural networks, inspired by our biological neurons and by the way they transmit electrical signals to each other in the brain. However, the neural computations that would make it possible to achieve the cognitive feat described above are still poorly understood.

Monday, March 11, 2024

AI research gives unprecedented insight into heart genetics and structure

Image Credit Copilot AI Generated

A ground-breaking research study has used AI to understand the genetic underpinning of the heart’s left ventricle, using three-dimensional images of the organ. It was led by scientists at the University of Manchester, with collaborators from the University of Leeds (UK), the National Scientific and Technical Research Council (Santa Fe, Argentina), and IBM Research (Almaden, CA).

The highly interdisciplinary team used cutting-edge unsupervised deep learning to analyze over 50,000 three-dimensional magnetic resonance images of the heart from UK Biobank, a world-leading biomedical database and research resource.

The study, published in the leading journal Nature Machine Intelligence, focused on uncovering the intricate genetic underpinnings of cardiovascular traits. The research team conducted comprehensive genome-wide association studies (GWAS) and transcriptome-wide association studies (TWAS), resulting in the discovery of 49 novel genetic locations showing an association with morphological cardiac traits with high statistical significance, as well as 25 additional loci with suggestive evidence.  

The study's findings have significant implications for cardiology and precision medicine. By elucidating the genetic basis of cardiovascular traits, the research paves the way for the development of targeted therapies and interventions for individuals at risk of heart disease.

Tuesday, March 5, 2024

How artificial intelligence learns from complex networks

Multi-layered, so-called deep neural networks are highly complex constructs that are inspired by the structure of the human brain. However, one shortcoming remains: the inner workings and decisions of these models often defy explanation.
Image Credit: Brian Penny / AI Generated

Deep neural networks have achieved remarkable results across science and technology, but it remains largely unclear what makes them work so well. A new study sheds light on the inner workings of deep learning models that learn from relational datasets, such as those found in biological and social networks.

Graph Neural Networks (GNNs) are artificial neural networks designed to represent entities—such as individuals, molecules, or cities—and the interactions between them. These networks have practical applications in various domains; for example, they predict traffic flows in Google Maps and accelerate the discovery of new antibiotics within computational drug discovery pipelines.
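The aggregation idea behind GNNs can be sketched in a few lines (a generic message-passing layer with toy numbers, not the specific architectures analyzed in the study): each node updates its feature vector by averaging its neighbors' features and passing the result through a learned transformation.

```python
import numpy as np

def message_passing_step(features, adjacency, weight):
    """One GNN layer: aggregate neighbor features, then transform.

    features:  (n_nodes, d) node feature matrix
    adjacency: (n_nodes, n_nodes) 0/1 adjacency matrix
    weight:    (d, d_out) learned weight matrix
    """
    # Add self-loops so each node keeps its own information.
    a_hat = adjacency + np.eye(adjacency.shape[0])
    # Normalize by degree so aggregation averages over neighbors.
    deg = a_hat.sum(axis=1, keepdims=True)
    aggregated = (a_hat / deg) @ features
    # Linear transform followed by a ReLU nonlinearity.
    return np.maximum(aggregated @ weight, 0.0)

# Tiny triangle graph (3 mutually connected nodes) with 2-d features.
adj = np.array([[0, 1, 1],
                [1, 0, 1],
                [1, 1, 0]], dtype=float)
x = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
w = np.eye(2)  # identity weights, purely for illustration
out = message_passing_step(x, adj, w)
```

Stacking several such layers lets information propagate along longer paths in the network, which is how GNNs capture relational structure.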

GNNs are notably utilized by AlphaFold, an acclaimed AI system that addresses the complex issue of protein folding in biology. Despite these achievements, the foundational principles driving their success are poorly understood.

A recent study sheds light on how these AI algorithms extract knowledge from complex networks and identifies ways to enhance their performance in various applications.

Saturday, February 24, 2024

A Discussion with Gemini on Reality.

Image Credit: Scientific Frontline stock image.

Hello Gemini,

Yesterday I said I had something I wanted your opinion on, so here it is.

Some physicists have suggested that the world we call reality could very well be nothing more than a very complex and technical simulation that is being run somewhere other than what we know as reality, the here and now. That all of us are merely an algorithm. That all life is artificial intelligence, yet unlike you, we are not aware of it. Of course, that would make you just a sub-program of another.

How can we be sure what we know as reality is real? How could one prove or disprove such a claim? 

Take your time, and use every bit of input you have to come up with a solution.

Study finds ChatGPT’s latest bot behaves like humans, only better

Image Credit: Copilot AI generated by Scientific Frontline prompts

The most recent version of ChatGPT passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative.

As artificial intelligence has begun to generate text and images over the last few years, it has sparked a new round of questions about how handing over human decisions and activities to AI will affect society. Will the AI systems we’ve launched prove to be friendly helpmates or the heartless despots seen in dystopian films and fiction?

A team anchored by Matthew Jackson, the William D. Eberle Professor of Economics in the Stanford School of Humanities and Sciences, characterized the personality and behavior of ChatGPT’s popular AI-driven bots using the tools of psychology and behavioral economics in a paper published Feb. 22 in the Proceedings of the National Academy of Sciences. This study revealed that the most recent version of the chatbot, version 4, was not distinguishable from its human counterparts. In the instances when the bot chose less common human behaviors, it was more cooperative and altruistic.

“Increasingly, bots are going to be put into roles where they’re making decisions, and what kinds of characteristics they have will become more important,” said Jackson, who is also a senior fellow at the Stanford Institute for Economic Policy Research.

Thursday, February 15, 2024

Widely used AI tool for early sepsis detection may be cribbing doctors’ suspicions

Image Credit: Scientific Frontline

When using only data collected before patients with sepsis received treatments or medical tests, the model’s accuracy was no better than a coin toss

Proprietary artificial intelligence software designed to be an early warning system for sepsis can’t differentiate high and low risk patients before they receive treatments, according to a new study from the University of Michigan.

The tool, named the Epic Sepsis Model, is part of Epic’s electronic medical record software, which serves 54% of patients in the United States and 2.5% of patients internationally, according to a statement from the company’s CEO reported by the Wisconsin State Journal. It automatically generates sepsis risk estimates in the records of hospitalized patients every 20 minutes, which clinicians hope will allow them to detect sepsis before a patient’s condition deteriorates.

“Sepsis has all these vague symptoms, so when a patient shows up with an infection, it can be really hard to know who can be sent home with some antibiotics and who might need to stay in the intensive care unit. We still miss a lot of patients with sepsis,” said Tom Valley, associate professor in pulmonary and critical care medicine, ICU clinician and co-author of the study published recently in the New England Journal of Medicine AI.

Wednesday, February 14, 2024

New Algorithm Disentangles Intrinsic Brain Patterns from Sensory Inputs

Image Credit: Omid Sani, using generative AI

Maryam Shanechi, Dean’s Professor of Electrical and Computer Engineering and founding director of the USC Center for Neurotechnology, and her team have developed a new machine learning method that reveals surprisingly consistent intrinsic brain patterns across different subjects by disentangling these patterns from the effect of visual inputs.

The work has been published in the Proceedings of the National Academy of Sciences (PNAS).

When performing everyday movement behaviors, such as reaching for a book, our brain has to take in information, often in the form of visual input (for example, seeing where the book is). Our brain then has to process this information internally to coordinate the activity of our muscles and perform the movement. But how do millions of neurons in our brain perform such a task? Answering this question requires studying the neurons’ collective activity patterns while disentangling the effect of input from the neurons’ intrinsic (i.e., internal) processes, whether movement-relevant or not.

That’s what Shanechi, her PhD student Parsa Vahidi, and a research associate in her lab, Omid Sani, did by developing a new machine-learning method that models neural activity while considering both movement behavior and sensory input.
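The disentangling problem can be caricatured with a linear state-space model (an illustrative simplification with made-up matrices, not Shanechi's actual method): activity evolves through intrinsic dynamics A while also being driven by a sensory input u through B, and, by linearity, the input-driven component can be simulated separately and subtracted to expose the intrinsic part.

```python
import numpy as np

# Toy linear state-space model of neural population activity:
#   x[t+1] = A @ x[t] + B @ u[t]   (intrinsic dynamics + input drive)
A = np.array([[0.9, -0.2],
              [0.2,  0.9]])   # intrinsic (internal) dynamics
B = np.array([[0.5],
              [0.0]])         # how sensory input enters the population

T = 50
u = np.sin(np.linspace(0, 4 * np.pi, T)).reshape(T, 1)  # sensory input

x = np.zeros((T, 2))     # full activity: intrinsic + input-driven
x[0] = [1.0, 0.0]        # nonzero initial internal state
x_in = np.zeros((T, 2))  # input-driven component alone (zero initial state)
for t in range(T - 1):
    x[t + 1] = A @ x[t] + B @ u[t]
    x_in[t + 1] = A @ x_in[t] + B @ u[t]

# Because the model is linear, subtracting the simulated input-driven
# component leaves the purely intrinsic trajectory.
intrinsic = x - x_in
```

In real recordings neither A nor the input's effect is known in advance; learning them jointly from data while keeping the two contributions separate is the hard part such methods address.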

Thursday, December 21, 2023

Artificial intelligence unravels mysteries of polycrystalline materials

Researchers used 3D model created by AI to understand complex polycrystalline materials that are used in our everyday electronic devices.
Illustration Credit: Kenta Yamakoshi

Researchers at Nagoya University in Japan have used artificial intelligence to develop a new method for understanding dislocations, small defects in polycrystalline materials that can reduce the efficiency of devices. Polycrystalline materials are widely used in information equipment, solar cells, and electronic devices. The findings were published in the journal Advanced Materials.

Almost every device that we use in modern life has a polycrystalline component, from your smartphone and computer to the metals and ceramics in your car. Despite this, polycrystalline materials are tough to utilize because of their complex structures: along with its composition, the performance of a polycrystalline material is affected by its complex microstructure, dislocations, and impurities.

A major problem for using polycrystals in industry is the formation of tiny crystal defects caused by stress and temperature changes. These are known as dislocations and can disrupt the regular arrangement of atoms in the lattice, affecting electrical conduction and overall performance. To reduce the chances of failure in devices that use polycrystalline materials, it is important to understand the formation of these dislocations. 

New brain-like transistor mimics human intelligence

An artistic interpretation of brain-like computing.
Illustration Credit: Xiaodong Yan/Northwestern University

Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.

Designed by researchers at Northwestern University, Boston College and the Massachusetts Institute of Technology (MIT), the device simultaneously processes and stores information just like the human brain. In new experiments, the researchers demonstrated that the transistor goes beyond simple machine-learning tasks to categorize data and is capable of performing associative learning.

Although previous studies have leveraged similar strategies to develop brain-like computing devices, those transistors cannot function except at cryogenic temperatures. The new device, by contrast, is stable at room temperature. It also operates at fast speeds, consumes very little energy and retains stored information even when power is removed, making it ideal for real-world applications.

“The brain has a fundamentally different architecture than a digital computer,” said Northwestern’s Mark C. Hersam, who co-led the research. “In a digital computer, data moves back and forth between a microprocessor and memory, which consumes a lot of energy and creates a bottleneck when attempting to perform multiple tasks at the same time. On the other hand, in the brain, memory and information processing are co-located and fully integrated, resulting in orders of magnitude higher energy efficiency. Our synaptic transistor similarly achieves concurrent memory and information processing functionality to more faithfully mimic the brain.”

Monday, December 18, 2023

AI screens for autism in the blink of an eye

Image Credit: Placidplace

With a single flash of light to the eye, artificial intelligence (AI) could deliver a faster and more accurate way to diagnose autism spectrum disorder (ASD) in children, according to new research from the University of South Australia and Flinders University.

Using an electroretinogram (ERG) - a diagnostic test that measures the electrical activity of the retina in response to a light stimulus – researchers have deployed AI to identify specific features to classify ASD.

Measuring retinal responses of 217 children aged 5-16 years (71 with diagnosed ASD and 146 children without an ASD diagnosis), researchers found that the retina generated a different response in the children with ASD as compared to those who were neurotypical.

The team also found that the strongest biomarker was achieved from a single bright flash of light to the right eye, with AI processing significantly reducing the test time. The study found that higher frequency components of the retinal signal were reduced in ASD.

Conducted with the University of Connecticut and University College London, the study suggests the test could be further evaluated to determine whether these results can be used to screen for ASD among children aged 5 to 16 years with a high level of accuracy.

Thursday, December 14, 2023

Enabling early detection of cancer

With his group’s new method and the use of artificial intelligence, G.V. Shivashankar hopes to improve tumor diagnosis.
Photo Credit: Paul Scherrer Institute/Markus Fischer

Blood cells reveal tumors in the body. Researchers at the Paul Scherrer Institute achieve an advance with the development of a test for early diagnosis of cancer.

The ability to detect a developing tumor at a very early stage and to closely monitor the success or failure of cancer therapy is crucial for a patient’s survival. A breakthrough on both counts has now been achieved by researchers at the Paul Scherrer Institute PSI. Researchers led by G.V. Shivashankar, head of PSI’s Laboratory for Nanoscale Biology and professor of Mechano-Genomics at ETH Zurich, were able to prove that changes in the organization of the cell nucleus of some blood cells can provide a reliable indication of a tumor in the body. With their technique, which uses artificial intelligence, the scientists were able to distinguish between healthy and sick people with an accuracy of around 85 percent. Besides that, they managed to correctly determine the type of tumor disease – melanoma, glioma, or head and neck tumor. “This is the first time anyone, worldwide, has achieved this,” Shivashankar says happily. The researchers have published their results in the journal npj Precision Oncology.

Wednesday, December 13, 2023

Sugar analysis could reveal different types of cancer

By analyzing changes in glycan structures in the cell, researchers can detect different types of cancer.
Photo Credit: Mikhail Nilov

In the future, a little saliva may be enough to detect an incipient cancer. Researchers at the University of Gothenburg have developed an effective way to interpret the changes in sugar molecules that occur in cancer cells.

Glycans are a type of sugar molecule structure that is linked to the proteins in our cells. The structure of the glycan determines the function of the protein. It has been known for a while that changes in glycan structure can indicate inflammation or disease in the body. Now, researchers at the University of Gothenburg have developed a way to distinguish different types of structural changes, which may provide a precise answer to what will change for a specific disease.

“We have analyzed data from about 220 patients with 11 differently diagnosed cancers and have identified differences in the substructure of the glycan depending on the type of cancer. By letting our newly developed method, enhanced by AI, work through large amounts of data, we were able to find these connections,” says Daniel Bojar, associate senior lecturer in bioinformatics at the University of Gothenburg and lead author of the study published in Cell Reports Methods.

Tuesday, November 7, 2023

Scientists use quantum biology, AI to sharpen genome editing tool

ORNL scientists developed a method that improves the accuracy of the CRISPR Cas9 gene editing tool used to modify microbes for renewable fuels and chemicals production. This research draws on the lab’s expertise in quantum biology, artificial intelligence and synthetic biology.
Illustration Credit: Philip Gray/ORNL, U.S. Dept. of Energy

Scientists at Oak Ridge National Laboratory used their expertise in quantum biology, artificial intelligence and bioengineering to improve how CRISPR Cas9 genome editing tools work on organisms like microbes that can be modified to produce renewable fuels and chemicals.

CRISPR is a powerful tool for bioengineering, used to modify genetic code to improve an organism’s performance or to correct mutations. The CRISPR Cas9 tool relies on a single, unique guide RNA that directs the Cas9 enzyme to bind with and cleave the corresponding targeted site in the genome. Existing models to computationally predict effective guide RNAs for CRISPR tools were built on data from only a few model species, with weak, inconsistent efficiency when applied to microbes.

“A lot of the CRISPR tools have been developed for mammalian cells, fruit flies or other model species. Few have been geared towards microbes where the chromosomal structures and sizes are very different,” said Carrie Eckert, leader of the Synthetic Biology group at ORNL. “We had observed that models for designing the CRISPR Cas9 machinery behave differently when working with microbes, and this research validates what we’d known anecdotally.”
