Scientific Frontline: Artificial Intelligence

Monday, January 30, 2023

Earth likely to cross critical climate thresholds even if emissions decline

Already, the world is 1.1 degrees Celsius hotter on average than it was before fossil fuel combustion took off in the 1800s. More extreme rainfall and flooding are among the litany of impacts from that warming.
Photo Credit: Chris Gallagher

Artificial intelligence provides new evidence our planet will cross the global warming threshold of 1.5 degrees Celsius within 10 to 15 years. Even with low emissions, we could see 2 C of warming. But a future with less warming remains within reach.

A new study has found that emission goals designed to achieve the world’s most ambitious climate target – 1.5 degrees Celsius above pre-industrial levels – may in fact be required to avoid more extreme climate change of 2 degrees Celsius.

The study, published Jan. 30 in Proceedings of the National Academy of Sciences, provides new evidence that global warming is on track to reach 1.5 degrees Celsius (2.7 degrees Fahrenheit) above pre-industrial averages in the early 2030s, regardless of how much greenhouse gas emissions rise or fall in the coming decade.

The new “time to threshold” estimate results from an analysis that employs artificial intelligence to predict climate change using recent temperature observations from around the world.
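The paper itself is not a software release, but the underlying idea, training a regression network on spatial temperature-anomaly maps to output the number of years until a warming threshold is crossed, can be pictured in a few lines. Everything below (architecture, map resolution, labels) is an illustrative stand-in, not the authors' model:

```python
# Hedged sketch: a toy CNN regressor mapping a global temperature-anomaly
# map to "years until the 1.5 C threshold is crossed". Synthetic data only;
# the study's actual architecture and training data are not reproduced here.
import torch
import torch.nn as nn

class TimeToThreshold(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, x):
        return self.net(x)

model = TimeToThreshold()
maps = torch.randn(32, 1, 36, 72)   # fake 5-degree lat/lon anomaly maps
years = torch.rand(32, 1) * 30      # fake "years to threshold" labels
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                  # tiny illustrative training loop
    loss = nn.functional.mse_loss(model(maps), years)
    opt.zero_grad()
    loss.backward()
    opt.step()
```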

Friday, January 27, 2023

A.I. used to predict space weather like Coronal Mass Ejections

 Dr Andy Smith of Northumbria University
Photo Credit: Northumbria University/Simon Veit-Wilson.

A physicist from Northumbria University has received over £500,000 to create AI that will safeguard the Earth from destructive space storms.

Coronal mass ejections, huge eruptions from the Sun, can send plasma hurtling toward Earth at high speeds. These space storms can cause severe disruptions to power grids and communication systems.

With our increasing reliance on technology, solar storms pose a serious threat to our everyday lives, leading to severe space weather being added to the UK National Risk Assessment for the first time in 2011.

Dr Smith and his team analyzed huge amounts of data from satellites and space missions over the last 20 years to gain a better understanding of the conditions under which storms are likely to occur.
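As a rough illustration of this kind of forecasting, one common pattern is to train a classifier on windowed solar-wind measurements to flag whether a storm follows. The feature names and data below are invented for the sketch and are not from the Northumbria project:

```python
# Hedged sketch of one common storm-forecasting pattern: train a classifier
# on solar-wind features (speed, density, IMF Bz) to predict whether a
# geomagnetic storm follows. Features and labels here are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))   # columns: speed, density, Bz (scaled)
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=2000) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```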

Machine learning identifies drugs that could potentially help smokers quit

Penn State College of Medicine researchers helped identify eight medications that may be repurposed to help people quit smoking. A team of more than 70 researchers contributed to the analysis of genetic and smoking behavior data from more than 1.3 million people.
Image Credit: Scientific Frontline

Medications like dextromethorphan, used to treat coughs caused by cold and flu, could potentially be repurposed to help people quit smoking cigarettes, according to a study by Penn State College of Medicine and University of Minnesota researchers. They developed a novel machine learning method, in which computer programs analyze data sets for patterns and trends, to identify the drugs, and said that some of them are already being tested in clinical trials.

Cigarette smoking is a risk factor for cardiovascular disease, cancer and respiratory diseases, and accounts for nearly half a million deaths in the United States each year. While smoking behaviors can be learned and unlearned, genetics also plays a role in a person’s risk for engaging in those behaviors. The researchers found in a prior study that people with certain genes are more likely to become addicted to tobacco.

Using genetic data from more than 1.3 million people, Dajiang Liu, Ph.D., professor of public health sciences and of biochemistry and molecular biology, and Bibo Jiang, Ph.D., assistant professor of public health sciences, co-led a large multi-institution study that used machine learning to study these large data sets, which include specific data about a person’s genetics and their self-reported smoking behaviors.
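In miniature, and purely as an illustration, the repurposing logic amounts to intersecting trait-associated genes with a drug-target table. The gene and drug entries below are placeholders rather than findings of the study:

```python
# Hedged sketch: intersect genes linked to a trait with a drug-target table
# to surface candidate medications. Gene symbols and target lists are
# illustrative placeholders, not results from the Penn State analysis.
trait_genes = {"CHRNA5", "CYP2A6", "DRD2"}          # illustrative gene symbols
drug_targets = {
    "dextromethorphan": {"CHRNA5", "SIGMAR1"},
    "varenicline": {"CHRNA5", "CHRNB2"},
    "ibuprofen": {"PTGS1", "PTGS2"},
}
candidates = {d for d, targets in drug_targets.items() if targets & trait_genes}
print(sorted(candidates))   # drugs hitting at least one trait-linked gene
```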

Friday, January 20, 2023

MIT researchers develop an AI model that can detect future lung cancer risk

Researchers from Massachusetts General Hospital and MIT stand in front of a CT scanner at MGH, where some of the validation data was generated. Left to right: Regina Barzilay, Lecia Sequist, Florian Fintelmann, Ignacio Fuentes, Peter Mikhael, Stefan Ringer, and Jeremy Wohlwend
 Photo Credit: Guy Zylberberg.

The name Sybil has its origins in the oracles of Ancient Greece, also known as sibyls: feminine figures who were relied upon to relay divine knowledge of the unseen and the omnipotent past, present, and future. Now, the name has been excavated from antiquity and bestowed on an artificial intelligence tool for lung cancer risk assessment being developed by researchers at MIT's Abdul Latif Jameel Clinic for Machine Learning in Health, Mass General Cancer Center (MGCC), and Chang Gung Memorial Hospital (CGMH).

Lung cancer is the deadliest cancer in the world, responsible for 1.7 million deaths in 2020, more than the next three deadliest cancers combined.

"It’s the biggest cancer killer because it’s relatively common and relatively hard to treat, especially once it has reached an advanced stage,” says Florian Fintelmann, MGCC thoracic interventional radiologist and coauthor on the new work. “In this case, it’s important to know that if you detect lung cancer early, the long-term outcome is significantly better. Your five-year survival rate is closer to 70 percent, whereas if you detect it when it’s advanced, the five-year survival rate is just short of 10 percent.” 
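For readers curious what such a tool's interface might look like, here is a hypothetical sketch of a network that takes a CT volume and emits one risk score per future year. The shapes, layers and training details below are stand-ins, not Sybil itself:

```python
# Hedged sketch: what a longitudinal risk model's interface might look like,
# a network taking a CT volume and emitting one risk score per future year.
# All shapes, layers and the horizon are hypothetical, not Sybil's design.
import torch
import torch.nn as nn

class RiskModel(nn.Module):
    def __init__(self, horizon_years=6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(8, horizon_years)

    def forward(self, ct_volume):
        return torch.sigmoid(self.head(self.encoder(ct_volume)))  # per-year risk

ct = torch.randn(1, 1, 32, 64, 64)   # stand-in low-dose CT volume
print(RiskModel()(ct))               # six risk-like scores, one per year
```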

Thursday, January 19, 2023

Why faces might not be as attention-grabbing as we think

Data from the study’s 30 participants revealed they looked at the faces of just 16 per cent of the people they walked past.
Photo Credit: John Cameron

Research combining wearable eye-tracking technology and AI body detection software suggests our eyes aren’t drawn to the faces of passers-by as much as previously thought.

Faces are key to everyday social interaction. Just a brief glance can give us important signals about someone’s emotional state, intentions and identity that help us to navigate our social world.

But researchers studying social attention – how we notice and process the actions and behaviors of others in social contexts – have been mostly limited to lab-based studies where participants view social scenes on computer screens. Now, researchers from the School of Psychology at UNSW Science have developed a new approach that could enable more studies of social attention in natural settings.

The novel method correlates eye-movement data from wearable eye-tracking glasses with analysis from an automatic face and body detection algorithm to record when and where participants looked when fixating on other people. The methodology, detailed in the journal Scientific Reports, could have a range of future applications in settings from clinical research to sports science.
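The core matching step can be pictured as a simple geometric test: does the gaze point recorded for a video frame fall inside any face bounding box the detector returned for that frame? The data structures below are hypothetical; the published pipeline is more elaborate:

```python
# Hedged sketch of the matching step: deciding, frame by frame, whether a
# gaze fixation from the eye tracker lands inside a face box returned by a
# detector. Data structures are hypothetical simplifications.
from dataclasses import dataclass

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float

    def contains(self, x: float, y: float) -> bool:
        return self.x1 <= x <= self.x2 and self.y1 <= y <= self.y2

def fixations_on_faces(fixations, face_boxes_per_frame):
    """Count fixations whose gaze point falls inside any detected face box."""
    hits = 0
    for frame, (gx, gy) in fixations:   # (frame index, gaze x/y in pixels)
        if any(b.contains(gx, gy) for b in face_boxes_per_frame.get(frame, [])):
            hits += 1
    return hits

# toy usage: one fixation on a face, one off it
boxes = {0: [Box(100, 50, 160, 120)]}
print(fixations_on_faces([(0, (120, 80)), (0, (300, 200))], boxes))  # -> 1
```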

Monday, December 19, 2022

Scientists use machine learning to gain unprecedented view of small molecules

Metabolites are extremely small – the diameter of a human hair is 100,000 nanometers, while that of a glucose molecule is approximately one nanometer.
Illustration Credit: Matti Ahlgren/Aalto University.

A new tool to identify small molecules offers benefits for diagnostics, drug discovery and fundamental research.

A new machine learning model will help scientists identify small molecules, with applications in medicine, drug discovery and environmental chemistry. Developed by researchers at Aalto University and the University of Luxembourg, the model was trained with data from dozens of laboratories to become one of the most accurate tools for identifying small molecules.

Thousands of different small molecules, known as metabolites, transport energy and transmit cellular information throughout the human body. Because they are so small, metabolites are difficult to distinguish from each other in a blood sample analysis – but identifying these molecules is important to understand how exercise, nutrition, alcohol use and metabolic disorders affect wellbeing.
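To see why identification is hard, consider the classic baseline such models improve on: matching a query spectrum against a reference library by similarity of binned peak intensities (assuming, as is typical for metabolite identification, that the measurements are tandem mass spectra). The peaks below are invented, and the Aalto/Luxembourg model is far more sophisticated than this sketch:

```python
# Hedged sketch: a classic baseline for small-molecule identification, namely
# comparing a query MS/MS spectrum against a reference library by cosine
# similarity of binned peak intensities. Peak lists here are fabricated.
import numpy as np

def bin_spectrum(peaks, n_bins=1000, max_mz=1000.0):
    vec = np.zeros(n_bins)
    for mz, intensity in peaks:
        vec[min(int(mz / max_mz * n_bins), n_bins - 1)] += intensity
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

library = {
    "glucose":  bin_spectrum([(97.0, 1.0), (127.0, 0.4)]),
    "fructose": bin_spectrum([(89.0, 1.0), (119.0, 0.6)]),
}
query = bin_spectrum([(97.1, 0.9), (127.2, 0.5)])
best = max(library, key=lambda name: float(query @ library[name]))
print("best match:", best)
```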

Thursday, December 15, 2022

Artificial Intelligence in Veterinary Medicine Raises Ethical Challenges

Chimmi (Chimichanga) a few hours before having his spleen removed due to a mass. Detected by Hi-Def Ultrasound by a radiologist. 7/2021
Photo Credit: Heidi-Ann Fourkiller

Use of artificial intelligence (AI) is increasing in the field of veterinary medicine, but veterinary experts caution that the rush to embrace the technology raises some ethical considerations.

“A major difference between veterinary and human medicine is that veterinarians have the ability to euthanize patients – which could be for a variety of medical and financial reasons – so the stakes of diagnoses provided by AI algorithms are very high,” says Eli Cohen, associate clinical professor of radiology at NC State’s College of Veterinary Medicine. “Human AI products have to be validated prior to coming to market, but currently there is no regulatory oversight for veterinary AI products.”

In a review for Veterinary Radiology and Ultrasound, Cohen discusses the ethical and legal questions raised by veterinary AI products currently in use. He also highlights key differences between veterinary AI and AI used by human medical doctors.

Tuesday, December 13, 2022

AI model predicts if a covid-19 test might be positive or not

Xingquan “Hill” Zhu, Ph.D., (left) senior author and a professor; and co-author Magdalyn E. Elkin, a Ph.D. student, both in FAU’s Department of Electrical Engineering and Computer Science.
Photo Credit: Florida Atlantic University

COVID-19 and its latest Omicron strains continue to cause infections across the country as well as globally. Serology (blood) and molecular tests are the two most commonly used methods for rapid COVID-19 testing. Because the two types of test use different mechanisms, their results can differ significantly: molecular tests measure the presence of viral SARS-CoV-2 RNA, while serology tests detect the presence of antibodies triggered by the SARS-CoV-2 virus.

Currently, there is no existing study on the correlation between serology and molecular tests and which COVID-19 symptoms play a key role in producing a positive test result. A study from Florida Atlantic University’s College of Engineering and Computer Science using machine learning provides important new evidence for understanding how molecular tests and serology tests are correlated, and which features are the most useful in distinguishing between positive and negative COVID-19 test outcomes.

Researchers from the College of Engineering and Computer Science trained five classification algorithms to predict COVID-19 test results. They created an accurate predictive model using easy-to-obtain symptom features, such as fever, temperature and number of days post-symptom onset, along with demographic features such as age and gender.
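A minimal sketch of that workflow, with synthetic stand-in data rather than the FAU dataset, might look like this:

```python
# Hedged sketch: training classifiers on symptom/demographic features to
# predict a test result, in the spirit of the FAU study. Features and
# labels below are synthetic placeholders, not the study's data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "days_post_onset": rng.integers(0, 14, 1000),
    "fever": rng.integers(0, 2, 1000),
    "temperature": rng.normal(37.2, 0.8, 1000),
    "age": rng.integers(18, 90, 1000),
    "gender": rng.integers(0, 2, 1000),
})
y = ((df["fever"] == 1) & (df["temperature"] > 37.5)).astype(int)  # toy label

for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    print(type(clf).__name__, cross_val_score(clf, df, y, cv=5).mean())
```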

Monday, December 12, 2022

Fossil-Sorting Robots Will Help Researchers Study Oceans, Climate


Researchers have developed and demonstrated a robot capable of sorting, manipulating, and identifying microscopic marine fossils. The new technology automates a tedious process that plays a key role in advancing our understanding of the world’s oceans and climate – both today and in the prehistoric past.

“The beauty of this technology is that it is made using relatively inexpensive off-the-shelf components, and we are making both the designs and the artificial intelligence software open source,” says Edgar Lobaton, co-author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. “Our goal is to make this tool widely accessible, so that it can be used by as many researchers as possible to advance our understanding of oceans, biodiversity and climate.”

The technology, called Forabot, uses robotics and artificial intelligence to physically manipulate the remains of organisms called foraminifera, or forams, so that those remains can be isolated, imaged and identified.

Forams are protists, neither plant nor animal, and have been prevalent in our oceans for more than 100 million years. When forams die, they leave behind their tiny shells, mostly less than a millimeter wide. These shells give scientists insights into the characteristics of the oceans as they existed when the forams were alive. For example, different types of foram species thrive in different kinds of ocean environments, and chemical measurements can tell scientists about everything from the ocean’s chemistry to its temperature when the shell was being formed.
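The "identify" step is, at heart, image classification. Below is a generic sketch of adapting a standard CNN backbone to foram species classes; it is not Forabot's actual open-source network, and the pretrained weights are omitted so the sketch runs offline:

```python
# Hedged sketch: the "identify" stage as standard image classification with
# a common CNN backbone (pretrained weights omitted so this runs offline).
# Forabot's published network and class list may differ.
import torch
import torch.nn as nn
from torchvision.models import resnet18

n_species = 6                    # placeholder number of foram classes
model = resnet18(weights=None)   # untrained backbone for the sketch
model.fc = nn.Linear(model.fc.in_features, n_species)

images = torch.randn(4, 3, 224, 224)   # stand-in foram images
logits = model(images)
print(logits.argmax(dim=1))            # predicted species indices
```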

Friday, December 9, 2022

Aging is driven by unbalanced genes


Northwestern University researchers have discovered a previously unknown mechanism that drives aging.

In a new study, researchers used artificial intelligence to analyze data from a wide variety of tissues, collected from humans, mice, rats and killifish. They discovered that the length of genes can explain most molecular-level changes that occur during aging.

All cells must balance the activity of long and short genes. The researchers found that longer genes are linked to longer lifespans, and shorter genes are linked to shorter lifespans. They also found that aging genes change their activity according to length. More specifically, aging is accompanied by a shift in activity toward short genes. This causes the gene activity in cells to become unbalanced.
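One simple way to quantify such a length-activity imbalance, shown here with synthetic numbers rather than the study's data, is to correlate each gene's length with its change in expression between young and old samples:

```python
# Hedged sketch: correlate gene length with the change in expression between
# young and old samples. Numbers are simulated to mimic the reported effect
# (longer genes losing activity with age); this is not the study's pipeline.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
gene_length = rng.lognormal(mean=10, sigma=1, size=5000)   # bp, fake
delta_expr = -0.3 * np.log(gene_length) + rng.normal(size=5000)

rho, p = spearmanr(gene_length, delta_expr)
print(f"Spearman rho = {rho:.2f} (negative => activity shifts to short genes)")
```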

Surprisingly, this finding was near universal. The researchers uncovered this pattern across several animals, including humans, and across many tissues (blood, muscle, bone and organs, including liver, heart, intestines, brain and lungs) analyzed in the study.

The new finding potentially could lead to interventions designed to slow the pace of — or even reverse — aging.

Neural Network Learned to Create a Molecular Dynamics Model of Liquid Gallium

The melt viscosity determines the choice of casting mode, ingot formation conditions and other parameters.
Photo Credit: Ilya Safarov

Scientists at the Institute of Metallurgy of the Ural Branch of the Russian Academy of Sciences and Ural Federal University have developed a method for determining the viscosity of liquid metals theoretically, with high precision, using a trained artificial neural network. The method was successfully tested by building a deep-learning potential for liquid gallium, which allowed the scientists to significantly increase the spatiotemporal scale of the simulation. The resulting molecular dynamics model of liquid gallium is particularly accurate; previous calculations were notoriously inaccurate, especially in the low-temperature range. An article describing the research was published in the journal Computational Materials Science.

"First, liquids are in principle difficult to be described theoretically. The reason, in our opinion, lies in the absence of a simple initial approximation for this class of systems (for example, the initial approximation for gases is the ideal gas model). Secondly, the atomistic calculation of viscosity requires processing of a large volume of statistical data and, at the same time, a large accuracy of description of the potential energy surface and forces acting on atoms. Direct calculations cannot achieve such an effect. Thirdly, gallium in the liquid state is difficult to describe theoretically, because, due to certain features, its structure differs from that of most other metals," explains Vladimir Filippov, Senior Researcher at the Department of Rare Metals and Nanomaterials at UrFU, research participant and co-author of the article.

Tuesday, November 29, 2022

Machine learning model builds on imaging methods to better detect ovarian lesions

(From left) The top row shows an ultrasound image of a malignant lesion, the blood oxygen saturation, and hemoglobin concentration. The bottom row is an ultrasound image of a benign lesion, the blood oxygen saturation, and hemoglobin concentration.
Image Credit: Zhu lab

Although ovarian cancer is the deadliest type of cancer for women, only about 20% of cases are found at an early stage, as there is no reliable screening test and few early symptoms to prompt one. Additionally, ovarian lesions are so difficult to diagnose accurately that more than 80% of women who undergo surgery to have lesions removed and tested show no sign of cancer.

Quing Zhu, the Edwin H. Murty Professor of Biomedical Engineering at Washington University in St. Louis’ McKelvey School of Engineering, and members of her lab have applied a variety of imaging methods to diagnose ovarian cancer more accurately. Now, they have developed a new machine learning fusion model that uses existing ultrasound features of ovarian lesions to train the model to recognize whether a lesion is benign or cancerous from reconstructed images taken with photoacoustic tomography. Machine learning has traditionally focused on single-modality data, but recent findings have shown that multimodal machine learning performs more robustly than unimodal methods. In a pilot study of 35 patients with more than 600 regions of interest, the model’s accuracy was 90%.
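Schematically, a fusion model of this kind has two branches, one embedding the hand-crafted ultrasound features and one embedding the reconstructed photoacoustic image, whose outputs are concatenated before a benign/malignant classification head. The layer sizes below are illustrative, not the Zhu lab's architecture:

```python
# Hedged sketch of a two-branch fusion classifier: one branch embeds
# hand-crafted ultrasound features, the other a reconstructed photoacoustic
# image; concatenated embeddings feed a benign/malignant head. Illustrative
# layer sizes only, not the published architecture.
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    def __init__(self, n_us_features=16):
        super().__init__()
        self.img_branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(8 * 16, 32), nn.ReLU(),
        )
        self.feat_branch = nn.Sequential(nn.Linear(n_us_features, 32), nn.ReLU())
        self.head = nn.Linear(64, 2)   # benign vs malignant logits

    def forward(self, image, us_features):
        z = torch.cat([self.img_branch(image), self.feat_branch(us_features)], dim=1)
        return self.head(z)

model = FusionModel()
out = model(torch.randn(2, 1, 64, 64), torch.randn(2, 16))
print(out.shape)   # torch.Size([2, 2])
```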

Friday, November 25, 2022

Improving AI training for edge sensor time series


Engineers at the Tokyo Institute of Technology (Tokyo Tech) have demonstrated a simple computational approach for improving the way artificial intelligence classifiers, such as neural networks, can be trained based on limited amounts of sensor data. The emerging applications of the internet of things often require edge devices that can reliably classify behaviors and situations based on time series. However, training data is difficult and expensive to acquire. The proposed approach promises to substantially increase the quality of classifier training, at almost no extra cost.

In recent times, the prospect of having huge numbers of Internet of Things (IoT) sensors quietly and diligently monitoring countless aspects of human, natural, and machine activities has gained ground. As our society becomes more and more hungry for data, scientists, engineers, and strategists increasingly hope that the additional insight which we can derive from this pervasive monitoring will improve the quality and efficiency of many production processes, also resulting in improved sustainability.

The world in which we live is incredibly complex, and this complexity is reflected in a huge multitude of variables that IoT sensors may be designed to monitor. Some are natural, such as the amount of sunlight, moisture, or the movement of an animal, while others are artificial, for example, the number of cars crossing an intersection or the strain applied to a suspended structure like a bridge. What these variables all have in common is that they evolve over time, creating what is known as time series, and that meaningful information is expected to be contained in their relentless changes. In many cases, researchers are interested in classifying a set of predetermined conditions or situations based on these temporal changes, as a way of reducing the amount of data and making it easier to understand. For instance, measuring how frequently a particular condition or situation arises is often taken as the basis for detecting and understanding the origin of malfunctions, pollution increases, and so on.
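Although the Tokyo Tech paper proposes its own technique, the problem it addresses can be illustrated with the simplest remedy for scarce time series: cheap augmentations such as jitter, scaling and small time shifts that multiply each labeled example. The sketch below is this generic baseline, not the published method:

```python
# Hedged sketch: generic time-series augmentations (scaling, jitter, time
# shift) often used to stretch scarce sensor data before classifier training.
# This illustrates the problem setting, not the Tokyo Tech technique itself.
import numpy as np

def augment(series, rng):
    s = series * rng.normal(1.0, 0.1)              # random amplitude scaling
    s = s + rng.normal(0.0, 0.05, size=s.shape)    # additive jitter
    return np.roll(s, rng.integers(-5, 6))         # small time shift

rng = np.random.default_rng(4)
x = np.sin(np.linspace(0, 6 * np.pi, 200))         # one labeled example
augmented = [augment(x, rng) for _ in range(10)]   # ten extra "examples"
print(len(augmented), augmented[0].shape)
```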

Thursday, November 24, 2022

Engineers improve electrochemical sensing by incorporating machine learning

Aida Ebrahimi, Thomas and Sheila Roell Early Career Assistant Professor of Electrical Engineering and assistant professor of biomedical engineering, and Vinay Kammarchedu, recipient of the 2022-23 Milton and Albertha Langdon Memorial Graduate Fellowship in Electrical Engineering, developed a new approach to improve the performance of electrochemical biosensors by combining machine learning with multimodal measurement.
Photo Credit: Kate Myers | Pennsylvania State University

Combining machine learning with multimodal electrochemical sensing can significantly improve the analytical performance of biosensors, according to new findings from a Penn State research team. These improvements may benefit noninvasive health monitoring, such as testing that involves saliva or sweat. The findings were published this month in Analytica Chimica Acta.

The researchers developed a novel analytical platform that enabled them to selectively measure multiple biomolecules using a single sensor, saving space and reducing complexity as compared to the usual route of using multi-sensor systems. In particular, they showed that their sensor can simultaneously detect small quantities of uric acid and tyrosine — two important biomarkers associated with kidney and cardiovascular diseases, diabetes, metabolic disorders, and neuropsychiatric and eating disorders — in sweat and saliva, making the developed method suitable for personalized health monitoring and intervention.

Many biomarkers have similar molecular structures or overlapping electrochemical signatures, making it difficult to detect them simultaneously. Leveraging machine learning to measure multiple biomarkers can improve the accuracy and reliability of diagnostics and, as a result, improve patient outcomes, according to the researchers. Further, sensing with a single device saves resources and reduces the biological sample volume needed for tests, which is critical when clinical samples are scarce.
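On the data-analysis side, the task can be framed as multi-output regression from features pooled across measurement modes to the concentrations of the two analytes. Everything below is synthetic and illustrative; the Penn State platform and models are not reproduced:

```python
# Hedged sketch: multi-output regression from features pooled across several
# electrochemical measurement modes to two analyte concentrations (labeled
# uric_acid and tyrosine as placeholders). Synthetic data throughout.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 12))   # features pooled across modalities
true_w = rng.normal(size=(12, 2))
Y = X @ true_w + 0.1 * rng.normal(size=(500, 2))   # [uric_acid, tyrosine]

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
reg = RandomForestRegressor().fit(X_tr, Y_tr)
print("R^2 on held-out data:", reg.score(X_te, Y_te))
```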

Wednesday, November 23, 2022

Machine learning gives nuanced view of Alzheimer’s stages


A Cornell-led collaboration used machine learning to pinpoint the most accurate means, and timelines, for anticipating the advancement of Alzheimer’s disease in people who are either cognitively normal or experiencing mild cognitive impairment.

The modeling showed that predicting the future decline into dementia for individuals with mild cognitive impairment is easier and more accurate than it is for cognitively normal, or asymptomatic, individuals. At the same time, the researchers found that the predictions for cognitively normal subjects are less accurate for longer time horizons, but for individuals with mild cognitive impairment, the opposite is true.

The modeling also demonstrated that magnetic resonance imaging (MRI) is a useful prognostic tool for people in both stages, whereas tools that track molecular biomarkers, such as positron emission tomography (PET) scans, are more useful for people experiencing mild cognitive impairment.
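The experimental design, though not the Cornell models themselves, can be sketched as evaluating classifier accuracy across several prediction horizons. The features and labels below are synthetic placeholders:

```python
# Hedged sketch: evaluating how prediction accuracy varies with time horizon,
# with placeholder imaging-derived features. Synthetic data; this illustrates
# the study design, not the Cornell models or their results.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(600, 10))        # stand-in imaging-derived features
for horizon in (1, 2, 4):             # years until assessed decline
    # toy labels get noisier at longer horizons
    y = (X[:, 0] + rng.normal(scale=horizon, size=600) > 0).astype(int)
    acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
    print(f"{horizon}-year horizon: accuracy {acc:.2f}")
```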

The team’s paper, “Machine Learning Based Multi-Modal Prediction of Future Decline Toward Alzheimer’s Disease: An Empirical Study,” published in PLOS ONE. The lead author is Batuhan Karaman, a doctoral student in the field of electrical and computer engineering.

Monday, November 21, 2022

A possible game changer for next generation microelectronics

Magnetic fields created by skyrmions in a two-dimensional sheet of material composed of iron, germanium and tellurium.
Image Credit: Argonne National Laboratory.

Magnets generate invisible fields that attract certain materials. A common example is fridge magnets. Far more important to our everyday lives, magnets also can store data in computers. Exploiting the direction of the magnetic field (say, up or down), microscopic bar magnets each can store one bit of memory as a zero or a one — the language of computers.

Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory want to replace the bar magnets with tiny magnetic vortices. These vortices, called skyrmions, are as tiny as billionths of a meter and form in certain magnetic materials. They could one day usher in a new generation of microelectronics for memory storage in high performance computers.

“We estimate the skyrmion energy efficiency could be 100 to 1000 times better than current memory in the high-performance computers used in research.” — Arthur McCray, Northwestern University graduate student working in Argonne’s Materials Science Division

“The bar magnets in computer memory are like shoelaces tied with a single knot; it takes almost no energy to undo them,” said Arthur McCray, a Northwestern University graduate student working in Argonne’s Materials Science Division (MSD). And if any bar magnets malfunction due to some disruption, the others will be affected as well.

Tuesday, November 15, 2022

Prehistoric predator? Artificial intelligence says no

Artificial intelligence has proven vital in identifying a mysterious Aussie dinosaur
Image Credit: Dr Anthony Romilio

Artificial intelligence has revealed that prehistoric footprints thought to be made by a vicious dinosaur predator were in fact from a timid herbivore.

In an international collaboration, University of Queensland paleontologist Dr Anthony Romilio used AI pattern recognition to re-analyze footprints from the Dinosaur Stampede National Monument, south-west of Winton in Central Queensland.

“Large dinosaur footprints were first discovered back in the 1970s at a track site called the Dinosaur Stampede National Monument, and for many years they were believed to be left by a predatory dinosaur, like Australovenator, with legs nearly two meters long,” said Dr Romilio.

“The mysterious tracks were thought to be left during the mid-Cretaceous Period, around 93 million years ago.

“But working out what dino species made the footprints exactly – especially from tens of millions of years ago – can be a pretty difficult and confusing business.

Solving brain dynamics gives rise to flexible machine-learning models

Studying the brains of small species recently helped MIT researchers better model the interaction between neurons and synapses — the building blocks of natural and artificial neural networks — into a class of flexible, robust machine-learning models that learn on the job and can adapt to changing conditions.
Image Credit: Ramin Hasani/Stable Diffusion

Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species: a class of flexible, robust machine learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks, like driving and flying. The flexibility of these “liquid” neural nets meant boosting the bloodline to our connected world, yielding better decision-making for many tasks involving time-series data, such as brain and heart monitoring, weather forecasting, and stock pricing.

But these models become computationally expensive as their number of neurons and synapses increases, and they require clunky computer programs to solve their underlying, complicated math. And all of this math, like many physical phenomena, becomes harder to solve with size, meaning many small computational steps are needed to arrive at a solution.

Now, the same team of scientists has discovered a way to alleviate this bottleneck by solving the differential equation behind the interaction of two neurons through synapses to unlock a new type of fast and efficient artificial intelligence algorithm. These models have the same characteristics as liquid neural nets (flexible, causal, robust, and explainable) but are orders of magnitude faster and scalable. This type of neural net could therefore be used for any task that involves getting insight into data over time, as such nets are compact and adaptable even after training, while many traditional models are fixed.
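The speedup comes from replacing stepwise numerical integration with a closed-form expression. A linear toy equation makes the contrast concrete; the MIT models solve a much richer neuron-synapse equation, for which this stands in only loosely:

```python
# Hedged sketch of the bottleneck and the fix: the linear toy neuron
# dx/dt = -(x - a)/tau has the closed form x(t) = a + (x0 - a)*exp(-t/tau).
# A stepwise solver needs many small steps; the closed form is one
# evaluation. The MIT models solve a richer neuron-synapse equation.
import numpy as np

tau, a, x0, t_end = 0.5, 1.0, 0.0, 2.0

# ODE-solver style: many Euler steps
n_steps = 10_000
dt = t_end / n_steps
x = x0
for _ in range(n_steps):
    x += dt * (-(x - a) / tau)

# closed form: a single expression
x_closed = a + (x0 - a) * np.exp(-t_end / tau)
print(f"Euler: {x:.6f}  closed-form: {x_closed:.6f}")
```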

Wednesday, November 2, 2022

Study urges caution when comparing neural networks to the brain

Image Credits: Christine Daniloff | Massachusetts Institute of Technology

Neural networks, a type of computing system loosely modeled on the organization of the human brain, form the basis of many artificial intelligence systems, for applications such as speech recognition, computer vision, and medical image analysis.

In the field of neuroscience, researchers often use neural networks to try to model the same kind of tasks that the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. However, a group of researchers at MIT is urging that more caution should be taken when interpreting these models.

In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems.

“What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT.

Thursday, October 27, 2022

Step by step


Berkeley researchers may be one step closer to making robot dogs our new best friends. Using advances in machine learning, two separate teams have developed cutting-edge approaches to shorten in-the-field training times for quadruped robots, getting them to walk — and even roll over — in record time.

In a first for the robotics field, a team led by Sergey Levine, associate professor of electrical engineering and computer sciences, demonstrated a robot learning to walk without prior training from models and simulations in just 20 minutes. The demonstration marks a significant advancement, as this robot relied solely on trial and error in the field to master the movements necessary to walk and adapt to different settings.

“Our work shows that training robots in the real world is more feasible than previously thought, and we hope, as a result, to empower other researchers to start tackling more real-world problems,” said Laura Smith, a Ph.D. student in Levine’s lab and one of the lead authors of the paper posted on arXiv.

In past studies, robots of comparable complexity required several hours to weeks of data input to learn to walk using reinforcement learning (RL). Often, they also were trained in controlled lab settings, where they learned to walk on relatively simple terrain and received precise feedback about their performance.
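The trial-and-error principle behind reinforcement learning can be shown with tabular Q-learning on a toy two-state task. The Berkeley robots use deep, far more sample-efficient actor-critic methods on real hardware, so this is only the idea in miniature:

```python
# Hedged sketch of reinforcement learning's trial-and-error principle:
# tabular Q-learning on a toy two-state task, learning from reward alone
# with no prior model. This is not the Berkeley team's algorithm.
import numpy as np

rng = np.random.default_rng(6)
Q = np.zeros((2, 2))   # states x actions
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Toy dynamics: action 1 in state 0 'walks forward' for reward."""
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    return (state + 1) % 2, reward

state = 0
for _ in range(2000):
    action = rng.integers(2) if rng.random() < eps else int(Q[state].argmax())
    nxt, r = step(state, action)
    Q[state, action] += alpha * (r + gamma * Q[nxt].max() - Q[state, action])
    state = nxt
print(Q.round(2))      # Q[0, 1] dominates: the agent prefers "walking"
```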
