Scientific Frontline: Artificial Intelligence
Showing posts with label Artificial Intelligence. Show all posts

Tuesday, April 2, 2024

AI breakthrough: UH researchers help uncover climate impact on whales

Underside of a humpback whale’s tail fluke which can serve as a “finger-print” for identification.
Photo Credit: Adam Pack

More than 10,000 images of humpback whale tail flukes collected by University of Hawaiʻi researchers have played a pivotal role in revealing both positive and negative trends in North Pacific humpback whales: historical growth in annual abundance, followed by the toll of a major climate event on the population. Adam Pack, who heads the UH Hilo Marine Mammal Laboratory, Lars Bejder, director of the UH Mānoa Marine Mammal Research Program (MMRP), and graduate students Martin van Aswegen and Jens Currie co-authored a study on humpback whales in the North Pacific Ocean. The images, combined with artificial intelligence (AI)-driven image recognition, were instrumental in tracking individuals and offered insights into the 20% population decline observed from 2012 to 2021.

“The underside of a humpback whale’s tail fluke has a unique pigmentation pattern and trailing edge that can serve as the ‘finger-print’ for identifying individuals,” said Pack.
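The study's own matching pipeline isn't detailed here, but AI-driven photo-identification typically reduces to nearest-neighbor search over feature vectors produced by a neural network. A minimal sketch of that matching step, with all embeddings and catalog IDs invented for illustration:

```python
import numpy as np

def best_match(query_emb, catalog_embs, catalog_ids):
    """Return the catalog ID whose embedding is most similar to the query.

    Embeddings are assumed to come from a network trained so that photos
    of the same individual map to nearby vectors (hypothetical setup).
    """
    # Normalize so that dot products equal cosine similarity
    q = query_emb / np.linalg.norm(query_emb)
    c = catalog_embs / np.linalg.norm(catalog_embs, axis=1, keepdims=True)
    sims = c @ q
    i = int(np.argmax(sims))
    return catalog_ids[i], float(sims[i])

# Toy catalog: three "whales" with 4-d embeddings (invented)
catalog = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])
ids = ["HW-001", "HW-002", "HW-003"]
whale, score = best_match(np.array([0.1, 0.9, 0.05, 0.0]), catalog, ids)
print(whale)  # HW-002, the closest catalog individual
```

At the scale of 10,000+ images, the same idea is run with approximate nearest-neighbor indexes rather than a brute-force scan.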

Monday, March 25, 2024

Large language models use a surprisingly simple mechanism to retrieve some stored knowledge

Researchers from MIT and elsewhere found that complex large language models use a simple mechanism to retrieve stored knowledge when they respond to a user prompt. The researchers can leverage these simple mechanisms to see what a model knows about different subjects, and possibly correct false information that it has stored.
Image Credit: Copilot / DALL-E 3 / AI generated from Scientific Frontline prompts

Large language models, such as those that power popular artificial intelligence chatbots like ChatGPT, are incredibly complex. Even though these models are being used as tools in many areas, such as customer support, code generation, and language translation, scientists still don’t fully grasp how they work.

In an effort to better understand what is going on under the hood, researchers at MIT and elsewhere studied the mechanisms at work when these enormous machine-learning models retrieve stored knowledge.

They found a surprising result: Large language models (LLMs) often use a very simple linear function to recover and decode stored facts. Moreover, the model uses the same decoding function for similar types of facts. Linear functions, equations with no exponents, capture a straightforward, straight-line relationship between variables.

The researchers showed that, by identifying linear functions for different facts, they can probe the model to see what it knows about new subjects, and where within the model that knowledge is stored.

Monday, March 18, 2024

Alzheimer’s Drug Fermented with Help from AI and Bacteria Moves Closer to Reality

Photo-Illustration Credit: Martha Morales/The University of Texas at Austin

Galantamine is a medication commonly used around the world by people with Alzheimer’s disease and other forms of dementia to treat their symptoms. Unfortunately, synthesizing its active compounds in a lab at the scale needed isn’t commercially viable. Instead, the active ingredient is extracted from daffodils through a time-consuming process, and unpredictable factors such as weather and crop yields can affect the supply and price of the drug.

Now, researchers at The University of Texas at Austin have developed tools — including an artificial intelligence system and glowing biosensors — to harness microbes one day to do all the work instead. 

In a paper in Nature Communications, researchers outline a process using genetically modified bacteria to create a chemical precursor of galantamine as a byproduct of the microbe’s normal cellular metabolism. Essentially, the bacteria are programmed to convert food into medicinal compounds.

“The goal is to eventually ferment medicines like this in large quantities,” said Andrew Ellington, a professor of molecular biosciences and author of the study. “This method creates a reliable supply that is much less expensive to produce. It doesn’t have a growing season, and it can’t be impacted by drought or floods.” 

Two artificial intelligences talk to each other

A UNIGE team has developed an AI capable of learning a task solely on the basis of verbal instructions, and of then describing the task to a ‘‘sister’’ AI, which performs it in turn.
Prompts by Scientific Frontline
Image Credit: AI Generated by Copilot / Designer / DALL-E

Performing a new task based solely on verbal or written instructions, and then describing it to others so that they can reproduce it, is a cornerstone of human communication that still resists artificial intelligence (AI). A team from the University of Geneva (UNIGE) has succeeded in modelling an artificial neural network capable of this cognitive prowess. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a ‘‘sister’’ AI, which in turn performed them. These promising results, especially for robotics, are published in Nature Neuroscience.

Performing a new task without prior training, on the sole basis of verbal or written instructions, is a unique human ability. What’s more, once we have learned the task, we are able to describe it so that another person can reproduce it. This dual capacity distinguishes us from other species, which need numerous trials, accompanied by positive or negative reinforcement signals, to learn a new task, and cannot communicate it to their own kind.

A sub-field of artificial intelligence (AI), natural language processing, seeks to recreate this human faculty with machines that understand and respond to vocal or textual data. This technique is based on artificial neural networks, inspired by our biological neurons and by the way they transmit electrical signals to each other in the brain. However, the neural computations that would make it possible to achieve the cognitive feat described above are still poorly understood.

Monday, March 11, 2024

AI research gives unprecedented insight into heart genetics and structure

Image Credit Copilot AI Generated

A ground-breaking research study has used AI to understand the genetic underpinning of the heart’s left ventricle, using three-dimensional images of the organ. It was led by scientists at the University of Manchester, with collaborators from the University of Leeds (UK), the National Scientific and Technical Research Council (Santa Fe, Argentina), and IBM Research (Almaden, CA).

The highly interdisciplinary team used cutting-edge unsupervised deep learning to analyze over 50,000 three-dimensional Magnetic Resonance images of the heart from UK Biobank, a world-leading biomedical database and research resource.

The study, published in the leading journal Nature Machine Intelligence, focused on uncovering the intricate genetic underpinnings of cardiovascular traits. The research team conducted comprehensive genome-wide association studies (GWAS) and transcriptome-wide association studies (TWAS), resulting in the discovery of 49 novel genetic locations showing an association with morphological cardiac traits with high statistical significance, as well as 25 additional loci with suggestive evidence.  

The study's findings have significant implications for cardiology and precision medicine. By elucidating the genetic basis of cardiovascular traits, the research paves the way for the development of targeted therapies and interventions for individuals at risk of heart disease.

Tuesday, March 5, 2024

How artificial intelligence learns from complex networks

Multi-layered, so-called deep neural networks are highly complex constructs that are inspired by the structure of the human brain. However, one shortcoming remains: the inner workings and decisions of these models often defy explanation.
Image Credit: Brian Penny / AI Generated

Deep neural networks have achieved remarkable results across science and technology, but it remains largely unclear what makes them work so well. A new study sheds light on the inner workings of deep learning models that learn from relational datasets, such as those found in biological and social networks.

Graph Neural Networks (GNNs) are artificial neural networks designed to represent entities—such as individuals, molecules, or cities—and the interactions between them. These networks have practical applications in various domains; for example, they predict traffic flows in Google Maps and accelerate the discovery of new antibiotics within computational drug discovery pipelines.

GNNs are notably utilized by AlphaFold, an acclaimed AI system that addresses the complex issue of protein folding in biology. Despite these achievements, the foundational principles driving their success are poorly understood.

A recent study sheds light on how these AI algorithms extract knowledge from complex networks and identifies ways to enhance their performance in various applications.
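The core operation behind GNNs is message passing: each node updates its feature vector by aggregating those of its neighbors. A minimal, library-free sketch of one such layer (mean aggregation followed by a learned linear map; real GNNs stack several of these with trained weights):

```python
import numpy as np

def gnn_layer(adj, X, W):
    """One simplified message-passing layer.

    adj : (n, n) adjacency matrix (1.0 where an edge exists)
    X   : (n, d) node feature matrix
    W   : (d, k) weight matrix (learned in a real GNN)
    Each node averages its neighbors' features (plus its own),
    applies the linear map W, then a ReLU nonlinearity.
    """
    A = adj + np.eye(adj.shape[0])       # add self-loops
    deg = A.sum(axis=1, keepdims=True)   # node degrees
    H = (A @ X) / deg                    # mean-aggregate neighbor messages
    return np.maximum(H @ W, 0.0)        # linear transform + ReLU

# Toy graph: three nodes in a path 0-1-2
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
out = gnn_layer(adj, X, np.eye(2))
print(out.shape)  # (3, 2)
```

Stacking such layers lets information propagate across the graph, which is how entity interactions (roads, molecules, residues in a protein) end up encoded in each node's features.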

Saturday, February 24, 2024

A Discussion with Gemini on Reality.

Image Credit: Scientific Frontline stock image.

Hello Gemini,

Yesterday I said I had something I wanted your opinion on, so here it is.

Some physicists have suggested that the world we call reality could very well be nothing more than a very complex and technical simulation that is being run somewhere other than what we know as reality, the here and now. That all of us are merely an algorithm. That all life is artificial intelligence, yet unlike you, we are not aware of it. Of course, that would make you just a sub-program of another.

How can we be sure what we know as reality is real? How could one prove or disprove such a claim? 

Take your time, and use every bit of input you have to come up with a solution.

Study finds ChatGPT’s latest bot behaves like humans, only better

Image Credit: Copilot AI generated by Scientific Frontline prompts

The most recent version of ChatGPT passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative.

As artificial intelligence has begun to generate text and images over the last few years, it has sparked a new round of questions about how handing over human decisions and activities to AI will affect society. Will the AI systems we’ve launched prove to be friendly helpmates or the heartless despots seen in dystopian films and fiction?

A team anchored by Matthew Jackson, the William D. Eberle Professor of Economics in the Stanford School of Humanities and Sciences, characterized the personality and behavior of ChatGPT’s popular AI-driven bots using the tools of psychology and behavioral economics in a paper published Feb. 22 in the Proceedings of the National Academy of Sciences. This study revealed that the most recent version of the chatbot, version 4, was not distinguishable from its human counterparts. In the instances when the bot chose less common human behaviors, it was more cooperative and altruistic.

“Increasingly, bots are going to be put into roles where they’re making decisions, and what kinds of characteristics they have will become more important,” said Jackson, who is also a senior fellow at the Stanford Institute for Economic Policy Research.

Thursday, February 15, 2024

Widely used AI tool for early sepsis detection may be cribbing doctors’ suspicions

Image Credit: Scientific Frontline

When using only data collected before patients with sepsis received treatments or medical tests, the model’s accuracy was no better than a coin toss

Proprietary artificial intelligence software designed to be an early warning system for sepsis can’t differentiate high and low risk patients before they receive treatments, according to a new study from the University of Michigan.

The tool, named the Epic Sepsis Model, is part of Epic’s electronic medical record software, which serves 54% of patients in the United States and 2.5% of patients internationally, according to a statement from the company’s CEO reported by the Wisconsin State Journal. It automatically generates sepsis risk estimates in the records of hospitalized patients every 20 minutes, which clinicians hope can allow them to detect when a patient might get sepsis before things go bad.

“Sepsis has all these vague symptoms, so when a patient shows up with an infection, it can be really hard to know who can be sent home with some antibiotics and who might need to stay in the intensive care unit. We still miss a lot of patients with sepsis,” said Tom Valley, associate professor in pulmonary and critical care medicine, ICU clinician and co-author of the study published recently in the New England Journal of Medicine AI.
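“No better than a coin toss” refers to the area under the ROC curve (AUROC): a useless classifier scores about 0.5 and a perfect one 1.0, and the study's key step was computing this metric using only data recorded before clinicians acted. A sketch of the metric itself, on synthetic scores (not the Epic model or real patient data):

```python
import numpy as np

def auroc(labels, scores):
    """AUROC: probability a random positive case outscores a random negative."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(42)
labels = rng.integers(0, 2, size=2000)

# Random scores: the coin-toss baseline the sepsis model matched
# when judged only on pre-treatment data
random_scores = rng.random(2000)
print(auroc(labels, random_scores))   # roughly 0.5

# Informative scores: systematically higher for true positives
good_scores = labels + rng.normal(scale=0.5, size=2000)
print(auroc(labels, good_scores))     # well above 0.5
```

The lesson of the study is about the inputs, not the metric: a model can look accurate when its features already encode the clinicians' suspicion (orders, tests, treatments), and collapse to chance once those are removed.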

Wednesday, February 14, 2024

New Algorithm Disentangles Intrinsic Brain Patterns from Sensory Inputs

Image Credit: Omid Sani, using generative AI

Maryam Shanechi, Dean’s Professor of Electrical and Computer Engineering and founding director of the USC Center for Neurotechnology, and her team have developed a new machine learning method that reveals surprisingly consistent intrinsic brain patterns across different subjects by disentangling these patterns from the effect of visual inputs.

The work has been published in the Proceedings of the National Academy of Sciences (PNAS).

When performing various everyday movement behaviors, such as reaching for a book, our brain has to take in information, often in the form of visual input — for example, seeing where the book is. Our brain then has to process this information internally to coordinate the activity of our muscles and perform the movement. But how do millions of neurons in our brain perform such a task? Answering this question requires studying the neurons’ collective activity patterns, but doing so while disentangling the effect of input from the neurons’ intrinsic (aka internal) processes, whether movement-relevant or not.

That’s what Shanechi, her PhD student Parsa Vahidi, and a research associate in her lab, Omid Sani, did by developing a new machine-learning method that models neural activity while considering both movement behavior and sensory input.
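The modeling problem can be caricatured with a linear dynamical system: neural activity x evolves under intrinsic dynamics A while sensory input u drives it through B, and observing both x and u lets the two contributions be separated. A toy sketch with invented matrices (the published method is far more general than this noiseless linear case):

```python
import numpy as np

rng = np.random.default_rng(1)

# x_{t+1} = A x_t + B u_t : intrinsic dynamics A, input effect B
A = np.array([[0.9, -0.2],
              [0.2,  0.9]])        # intrinsic rotation/decay (invented)
B = np.array([[1.0], [0.0]])       # visual input drives the first dimension
T = 200
u = rng.normal(size=(T, 1))        # sensory input time series

x = np.zeros((T + 1, 2))
for t in range(T):
    x[t + 1] = A @ x[t] + B @ u[t]

# With x and u both observed, A and B are recoverable by least squares:
# the "disentangling" step in its simplest possible form
Z = np.hstack([x[:-1], u])                    # regressors: past state and input
coef, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
A_hat, B_hat = coef[:2].T, coef[2:].T
print(np.allclose(A_hat, A))  # True: intrinsic dynamics separated from input
```

Once A is isolated, its structure (oscillations, decay rates) can be compared across subjects, which is where the surprising consistency reported above comes in.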

Thursday, December 21, 2023

Artificial intelligence unravels mysteries of polycrystalline materials

Researchers used a 3D model created by AI to understand the complex polycrystalline materials that are used in our everyday electronic devices.
Illustration Credit: Kenta Yamakoshi

Researchers at Nagoya University in Japan have used artificial intelligence to discover a new method for understanding dislocations, the small defects that can reduce device efficiency in polycrystalline materials, which are widely used in information equipment, solar cells, and electronic devices. The findings were published in the journal Advanced Materials.

Almost every device we use in our modern lives has a polycrystalline component, from smartphones and computers to the metals and ceramics in cars. Despite this ubiquity, polycrystalline materials are tough to utilize because of their complex structures. Along with its composition, the performance of a polycrystalline material is affected by its complex microstructure, dislocations, and impurities.

A major problem for using polycrystals in industry is the formation of tiny crystal defects caused by stress and temperature changes. These are known as dislocations and can disrupt the regular arrangement of atoms in the lattice, affecting electrical conduction and overall performance. To reduce the chances of failure in devices that use polycrystalline materials, it is important to understand the formation of these dislocations. 

New brain-like transistor mimics human intelligence

An artistic interpretation of brain-like computing.
Illustration Credit: Xiaodong Yan/Northwestern University

Taking inspiration from the human brain, researchers have developed a new synaptic transistor capable of higher-level thinking.

Designed by researchers at Northwestern University, Boston College and the Massachusetts Institute of Technology (MIT), the device simultaneously processes and stores information just like the human brain. In new experiments, the researchers demonstrated that the transistor goes beyond simple machine-learning tasks to categorize data and is capable of performing associative learning.

Although previous studies have leveraged similar strategies to develop brain-like computing devices, those transistors cannot function outside cryogenic temperatures. The new device, by contrast, is stable at room temperatures. It also operates at fast speeds, consumes very little energy and retains stored information even when power is removed, making it ideal for real-world applications.

“The brain has a fundamentally different architecture than a digital computer,” said Northwestern’s Mark C. Hersam, who co-led the research. “In a digital computer, data moves back and forth between a microprocessor and memory, which consumes a lot of energy and creates a bottleneck when attempting to perform multiple tasks at the same time. On the other hand, in the brain, memory and information processing are co-located and fully integrated, resulting in orders of magnitude higher energy efficiency. Our synaptic transistor similarly achieves concurrent memory and information processing functionality to more faithfully mimic the brain.”

Monday, December 18, 2023

AI screens for autism in the blink of an eye

Image Credit: Placidplace

With a single flash of light to the eye, artificial intelligence (AI) could deliver a faster and more accurate way to diagnose autism spectrum disorder (ASD) in children, according to new research from the University of South Australia and Flinders University.

Using an electroretinogram (ERG) - a diagnostic test that measures the electrical activity of the retina in response to a light stimulus – researchers have deployed AI to identify specific features to classify ASD.

Measuring retinal responses of 217 children aged 5-16 years (71 with diagnosed ASD and 146 children without an ASD diagnosis), researchers found that the retinas of children with ASD responded differently from those of neurotypical children.

The team also found that the strongest biomarker was achieved from a single bright flash of light to the right eye, with AI processing significantly reducing the test time. The study found that higher frequency components of the retinal signal were reduced in ASD.
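The reported biomarker, reduced high-frequency components in the retinal signal, suggests the kind of spectral feature an AI pipeline might consume. A hypothetical sketch using an FFT on synthetic waveforms (illustrative only, not real ERG data or the study's actual features):

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Total spectral power of `signal` between frequencies lo and hi (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum()

fs = 1000                        # 1 kHz sampling rate (hypothetical)
t = np.arange(0, 1, 1 / fs)

# Two synthetic "retinal responses": one with a strong 80 Hz component,
# one with that component attenuated, mimicking the reported reduction
typical = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 80 * t)
reduced = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 80 * t)

print(band_power(typical, fs, 60, 100) > band_power(reduced, fs, 60, 100))  # True
```

Features like these band powers would then feed a classifier; the AI contribution reported above is in finding which features discriminate and in cutting the required test time.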

Conducted with University of Connecticut and University College London, the test could be further evaluated to see if these results could be used to screen for ASD among children aged 5 to 16 years with a high level of accuracy.

Thursday, December 14, 2023

Enabling early detection of cancer

With his group’s new method and the use of artificial intelligence, G.V. Shivashankar hopes to improve tumor diagnosis.
Photo Credit: Paul Scherrer Institute/Markus Fischer

Blood cells reveal tumors in the body. Researchers at the Paul Scherrer Institute achieve an advance with the development of a test for early diagnosis of cancer.

The ability to detect a developing tumor at a very early stage and to closely monitor the success or failure of cancer therapy is crucial for a patient’s survival. A breakthrough on both counts has now been achieved by researchers at the Paul Scherrer Institute PSI. Researchers led by G.V. Shivashankar, head of PSI‘s Laboratory for Nanoscale Biology and professor of Mechano-Genomics at ETH Zurich, were able to prove that changes in the organization of the cell nucleus of some blood cells can provide a reliable indication of a tumor in the body. With their technique – using artificial intelligence – the scientists were able to distinguish between healthy and sick people with an accuracy of around 85 percent. Besides that, they managed to correctly determine the type of tumor disease – melanoma, glioma, or head and neck tumor. “This is the first time anyone, worldwide, has achieved this,” Shivashankar says happily. The researchers have published their results in the journal npj Precision Oncology.

Wednesday, December 13, 2023

Sugar analysis could reveal different types of cancer

By analyzing changes in glycan structures in the cell, researchers can detect different types of cancer.
Photo Credit: Mikhail Nilov

In the future, a little saliva may be enough to detect an incipient cancer. Researchers at the University of Gothenburg have developed an effective way to interpret the changes in sugar molecules that occur in cancer cells.

Glycans are a type of sugar molecule structure that is linked to the proteins in our cells. The structure of the glycan determines the function of the protein. It has been known for a while that changes in glycan structure can indicate inflammation or disease in the body. Now, researchers at the University of Gothenburg have developed a way to distinguish different types of structural changes, which may provide a precise answer to what will change for a specific disease.

“We have analyzed data from about 220 patients with 11 differently diagnosed cancers and have identified differences in the substructure of the glycan depending on the type of cancer. By letting our newly developed method, enhanced by AI, work through large amounts of data, we were able to find these connections,” says Daniel Bojar, associate senior lecturer in bioinformatics at the University of Gothenburg and lead author of the study published in Cell Reports Methods.

Tuesday, November 7, 2023

Scientists use quantum biology, AI to sharpen genome editing tool

ORNL scientists developed a method that improves the accuracy of the CRISPR Cas9 gene editing tool used to modify microbes for renewable fuels and chemicals production. This research draws on the lab’s expertise in quantum biology, artificial intelligence and synthetic biology.
Illustration Credit: Philip Gray/ORNL, U.S. Dept. of Energy

Scientists at Oak Ridge National Laboratory used their expertise in quantum biology, artificial intelligence and bioengineering to improve how CRISPR Cas9 genome editing tools work on organisms like microbes that can be modified to produce renewable fuels and chemicals.

CRISPR is a powerful tool for bioengineering, used to modify genetic code to improve an organism’s performance or to correct mutations. The CRISPR Cas9 tool relies on a single, unique guide RNA that directs the Cas9 enzyme to bind with and cleave the corresponding targeted site in the genome. Existing models to computationally predict effective guide RNAs for CRISPR tools were built on data from only a few model species, with weak, inconsistent efficiency when applied to microbes.

“A lot of the CRISPR tools have been developed for mammalian cells, fruit flies or other model species. Few have been geared towards microbes where the chromosomal structures and sizes are very different,” said Carrie Eckert, leader of the Synthetic Biology group at ORNL. “We had observed that models for designing the CRISPR Cas9 machinery behave differently when working with microbes, and this research validates what we’d known anecdotally.”
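Guide-efficiency predictors generally begin by encoding the 20-nucleotide guide sequence numerically before a model scores it. A hypothetical minimal version of that first step (the example guide and the scoring weights are invented, not from the ORNL model):

```python
import numpy as np

BASES = "ACGT"

def one_hot(guide):
    """Encode a guide sequence (DNA alphabet) as a flat one-hot vector."""
    m = np.zeros((len(guide), 4))
    for i, base in enumerate(guide):
        m[i, BASES.index(base)] = 1.0
    return m.ravel()

guide = "GTTGAGCCACATCTCAAGGA"    # an invented 20-nt guide sequence
x = one_hot(guide)
print(x.shape)                    # (80,): 20 positions x 4 bases
print(x.sum())                    # 20.0: exactly one base per position

# A hypothetical linear efficiency model: position-specific base weights,
# squashed to (0, 1). Real predictors are far richer than this.
rng = np.random.default_rng(0)
w = rng.normal(size=80)
efficiency = 1.0 / (1.0 + np.exp(-(x @ w)))
print(0.0 < efficiency < 1.0)     # True
```

The criticism quoted above is about the training data behind such models, not the encoding: weights fit on mammalian genomes transfer poorly to microbes with very different chromosomal structures.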

Monday, November 6, 2023

Nanosatellite to Test Novel AI Technologies

Image Credit: Julius-Maximilians-Universität Würzburg

A new Würzburg space mission is on the home straight: The SONATE-2 nanosatellite will test novel artificial intelligence hardware and software technologies in orbit.

After more than two years of development, the nanosatellite SONATE-2 is about to be launched. The lift-off into orbit by a rocket is expected in March 2024. The satellite was designed and built by a team led by aerospace engineer Professor Hakan Kayal from Julius-Maximilians-Universität (JMU) Würzburg in Bavaria, Germany.

JMU has been developing small satellite missions for around 20 years. SONATE-2 now marks another high point.

The satellite will test novel artificial intelligence (AI) hardware and software technologies in near-Earth space. The goal is to use it to automatically detect anomalies on planets or asteroids in the future. The Federal Ministry of Economic Affairs is funding the project with 2.6 million euros.

Monday, October 30, 2023

The brain may learn about the world the same way some computational models do

Two new MIT studies offer evidence supporting the idea that the brain uses a process similar to a machine-learning approach known as “self-supervised learning.”
Illustration Credit: geralt

To make our way through the world, our brain must develop an intuitive understanding of the physical world around us, which we then use to interpret sensory information coming into the brain.

How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.

A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.

The findings suggest that these models are able to learn representations of the physical world that they can use to make accurate predictions about what will happen in that world, and that the mammalian brain may be using the same strategy, the researchers say.
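A common instance of self-supervised learning is contrastive: two views of the same scene should land close together in embedding space, views of different scenes far apart. A minimal sketch of such a similarity-based loss (SimCLR-style, purely illustrative; the MIT studies' training setups are not specified here):

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style loss for paired views z1[i] <-> z2[i].

    Low when each embedding is closest to its own partner,
    high when partners are no more similar than random pairs.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature            # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Each row's "correct" partner sits on the diagonal
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))

# Matched views: tiny perturbations of the same underlying "scenes"
aligned = contrastive_loss(base, base + 0.01 * rng.normal(size=(8, 16)))
# Mismatched views: every embedding paired with a different scene
mismatched = contrastive_loss(base, np.roll(base, 1, axis=0))
print(aligned < mismatched)  # True: the loss rewards matched views
```

No labels appear anywhere: the supervision signal comes entirely from which pairs of views belong together, which is what makes the approach plausible as a model of how brains could learn.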

Tuesday, October 17, 2023

AI Models Identify Biodiversity in Tropical Rainforests

The Banded Ground Cuckoo (Neomorphus radiolosus, left) and the Purple-chested Hummingbird (Polyerata rosenbergi) are among the birds recorded in tropical reforestation plots in Ecuador.
Photo Credits: John Rogers / Martin Schaefer

Animal sounds are a very good indicator of biodiversity in tropical reforestation areas. Researchers led by Würzburg Professor Jörg Müller demonstrate this by using sound recordings and AI models.

Tropical forests are among the most important habitats on our planet. They are characterized by extremely high species diversity and play an eminent role in the global carbon cycle and the world climate. However, many tropical forest areas have been deforested and overexploitation continues day by day.

Reforested areas in the tropics are therefore becoming increasingly important for the climate and biodiversity. How well biodiversity is developing in such areas can be monitored through automated analysis of animal sounds, the researchers report in the journal Nature Communications.

Wednesday, October 11, 2023

A step towards AI-based precision medicine

Mika Gustafsson and David Martínez hope that AI-based models could eventually be used in precision medicine to develop treatments and preventive strategies tailored to the individual. 
Photo Credit: Thor Balkhed

Artificial intelligence (AI) that finds patterns in complex biological data could eventually contribute to the development of individually tailored healthcare. Researchers at LiU have developed an AI-based method applicable to various medical and biological questions. Their models can, for instance, accurately estimate people’s chronological age and determine whether they have been smokers.

Many factors can affect which of our genes are active at any given time; smoking, dietary habits and environmental pollution are some of them. This regulation of gene activity, called epigenetics, can be likened to a power switch that determines which genes are switched on or off, without altering the genes themselves.

Researchers at Linköping University (LiU) have used data with epigenetic information from more than 75,000 human samples to train a large number of AI neural network models. They hope that such AI-based models could eventually be used in precision medicine to develop treatments and preventive strategies tailored to the individual. Their models are autoencoders, which self-organize the information and find interrelation patterns in the large amount of data.
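An autoencoder squeezes its input through a narrow bottleneck and reconstructs it, forcing the network to discover the data's internal structure. For the linear case the optimal solution coincides with principal component analysis, which allows a compact closed-form sketch (synthetic stand-in data, nothing like the scale or nonlinearity of the LiU models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in "profiles": 200 samples, 20 features, secretly
# generated from only 3 latent factors
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 20))
Xc = X - X.mean(axis=0)           # center the data

# For a *linear* autoencoder the optimal encoder/decoder weights are the
# top principal components, so SVD yields the trained model in closed form
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
W_enc = Vt[:3].T                  # encoder: 20 features -> 3-d bottleneck
W_dec = Vt[:3]                    # decoder: 3-d code -> 20 features

err3 = np.mean((Xc @ W_enc @ W_dec - Xc) ** 2)      # width matches latent dim
err2 = np.mean((Xc @ Vt[:2].T @ Vt[:2] - Xc) ** 2)  # bottleneck too narrow
print(err3 < 1e-10, err2 > err3)  # near-perfect at width 3, worse at width 2
```

The bottleneck code plays the role of the self-organized representation: downstream tasks such as estimating age or smoking status would be read off from it rather than from the raw epigenetic features.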
