Mastodon Scientific Frontline: Artificial Intelligence

Friday, November 25, 2022

Improving AI training for edge sensor time series


Engineers at the Tokyo Institute of Technology (Tokyo Tech) have demonstrated a simple computational approach for improving the way artificial intelligence classifiers, such as neural networks, can be trained based on limited amounts of sensor data. The emerging applications of the internet of things often require edge devices that can reliably classify behaviors and situations based on time series. However, training data is difficult and expensive to acquire. The proposed approach promises to substantially increase the quality of classifier training, at almost no extra cost.

In recent times, the prospect of having huge numbers of Internet of Things (IoT) sensors quietly and diligently monitoring countless aspects of human, natural, and machine activities has gained ground. As our society becomes more and more hungry for data, scientists, engineers, and strategists increasingly hope that the additional insight which we can derive from this pervasive monitoring will improve the quality and efficiency of many production processes, also resulting in improved sustainability.

The world in which we live is incredibly complex, and this complexity is reflected in a huge multitude of variables that IoT sensors may be designed to monitor. Some are natural, such as the amount of sunlight, moisture, or the movement of an animal, while others are artificial, for example, the number of cars crossing an intersection or the strain applied to a suspended structure like a bridge. What these variables all have in common is that they evolve over time, creating what is known as time series, and that meaningful information is expected to be contained in their relentless changes. In many cases, researchers are interested in classifying a set of predetermined conditions or situations based on these temporal changes, as a way of reducing the amount of data and making it easier to understand. For instance, measuring how frequently a particular condition or situation arises is often taken as the basis for detecting and understanding the origin of malfunctions, pollution increases, and so on.
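To make the classification task concrete, the sketch below trains a small neural-network classifier on a handful of synthetic sensor windows. It illustrates the general setting of learning from limited time-series data, not the Tokyo Tech training method itself; all signals, labels, and window lengths are invented for the example.

```python
# Minimal sketch (not the Tokyo Tech method): classifying fixed-length
# sensor windows with a small neural network, using synthetic data to
# stand in for scarce real recordings.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_window(condition, length=128):
    """Hypothetical sensor window: a noisy sine whose frequency encodes the condition."""
    t = np.linspace(0, 1, length)
    freq = 3 if condition == 0 else 7
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=length)

# Only a few labeled examples per class, mimicking expensive data collection.
X = np.stack([make_window(c) for c in (0, 1) for _ in range(30)])
y = np.array([c for c in (0, 1) for _ in range(30)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```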

Thursday, November 24, 2022

Engineers improve electrochemical sensing by incorporating machine learning

Aida Ebrahimi, the Thomas and Sheila Roell Early Career Assistant Professor of Electrical Engineering and assistant professor of biomedical engineering, and Vinay Kammarchedu, recipient of the 2022-23 Milton and Albertha Langdon Memorial Graduate Fellowship in Electrical Engineering, developed a new approach to improve the performance of electrochemical biosensors by combining machine learning with multimodal measurement.
Photo Credit: Kate Myers | Pennsylvania State University

Combining machine learning with multimodal electrochemical sensing can significantly improve the analytical performance of biosensors, according to new findings from a Penn State research team. These improvements may benefit noninvasive health monitoring, such as testing that involves saliva or sweat. The findings were published this month in Analytica Chimica Acta.

The researchers developed a novel analytical platform that enabled them to selectively measure multiple biomolecules using a single sensor, saving space and reducing complexity as compared to the usual route of using multi-sensor systems. In particular, they showed that their sensor can simultaneously detect small quantities of uric acid and tyrosine — two important biomarkers associated with kidney and cardiovascular diseases, diabetes, metabolic disorders, and neuropsychiatric and eating disorders — in sweat and saliva, making the developed method suitable for personalized health monitoring and intervention.

Many biomarkers have similar molecular structures or overlapping electrochemical signatures, making them difficult to detect simultaneously. Leveraging machine learning to measure multiple biomarkers can improve the accuracy and reliability of diagnostics and, as a result, improve patient outcomes, according to the researchers. Further, sensing multiple biomarkers with the same device saves resources and reduces the biological sample volume needed per test, which is critical when clinical samples are available only in small amounts.
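As a rough illustration of the idea (not the Penn State pipeline), the sketch below trains a regressor to recover two analyte concentrations from a simulated signal in which their peaks overlap; the peak positions, noise level, and concentration ranges are all hypothetical.

```python
# Illustrative sketch only: regressing two analyte concentrations from a
# simulated electrochemical trace in which their peaks overlap.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
potentials = np.linspace(0, 1, 100)

def voltammogram(c_uric, c_tyr):
    """Hypothetical current trace: two overlapping Gaussian peaks plus noise."""
    peak1 = c_uric * np.exp(-((potentials - 0.45) ** 2) / 0.01)
    peak2 = c_tyr * np.exp(-((potentials - 0.55) ** 2) / 0.01)
    return peak1 + peak2 + 0.02 * rng.normal(size=potentials.size)

conc = rng.uniform(0.1, 1.0, size=(300, 2))            # (uric acid, tyrosine)
signals = np.stack([voltammogram(u, t) for u, t in conc])

X_train, X_test, y_train, y_test = train_test_split(signals, conc, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out traces:", round(model.score(X_test, y_test), 3))
```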

Wednesday, November 23, 2022

Machine learning gives nuanced view of Alzheimer’s stages


A Cornell-led collaboration used machine learning to pinpoint the most accurate means, and timelines, for anticipating the advancement of Alzheimer’s disease in people who are either cognitively normal or experiencing mild cognitive impairment.

The modeling showed that predicting the future decline into dementia for individuals with mild cognitive impairment is easier and more accurate than it is for cognitively normal, or asymptomatic, individuals. At the same time, the researchers found that the predictions for cognitively normal subjects are less accurate for longer time horizons, but for individuals with mild cognitive impairment, the opposite is true.

The modeling also demonstrated that magnetic resonance imaging (MRI) is a useful prognostic tool for people in both stages, whereas tools that track molecular biomarkers, such as positron emission tomography (PET) scans, are more useful for people experiencing mild cognitive impairment.

The team’s paper, “Machine Learning Based Multi-Modal Prediction of Future Decline Toward Alzheimer’s Disease: An Empirical Study,” was published in PLOS ONE. The lead author is Batuhan Karaman, a doctoral student in the field of electrical and computer engineering.
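A schematic example of this kind of horizon-specific, multi-modal prediction is sketched below; the features, labels, and model are stand-ins invented for illustration and do not reproduce the study’s pipeline.

```python
# Schematic sketch, not the study's model: predicting conversion to dementia
# within a chosen time horizon from combined (hypothetical) MRI and PET features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 400
mri = rng.normal(size=(n, 5))        # e.g., regional volumes / cortical thickness
pet = rng.normal(size=(n, 3))        # e.g., amyloid or tau tracer uptake
X = np.hstack([mri, pet])

# Synthetic label: converts within the horizon if a weighted risk score is high.
risk = 1.2 * mri[:, 0] + 0.8 * pet[:, 1] + rng.normal(scale=0.5, size=n)
y = (risk > np.quantile(risk, 0.7)).astype(int)

clf = LogisticRegression(max_iter=1000)
print("cross-validated AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```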

Monday, November 21, 2022

A possible game changer for next generation microelectronics

Magnetic fields created by skyrmions in a two-dimensional sheet of material composed of iron, germanium and tellurium.
Image Credit: Argonne National Laboratory.

Magnets generate invisible fields that attract certain materials. A common example is fridge magnets. Far more important to our everyday lives, magnets also can store data in computers. Exploiting the direction of the magnetic field (say, up or down), microscopic bar magnets each can store one bit of memory as a zero or a one — the language of computers.

Scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory want to replace the bar magnets with tiny magnetic vortices. These vortices, called skyrmions, form in certain magnetic materials and measure mere billionths of a meter across. They could one day usher in a new generation of microelectronics for memory storage in high-performance computers.

“We estimate the skyrmion energy efficiency could be 100 to 1000 times better than current memory in the high-performance computers used in research.” — Arthur McCray, Northwestern University graduate student working in Argonne’s Materials Science Division

“The bar magnets in computer memory are like shoelaces tied with a single knot; it takes almost no energy to undo them,” said Arthur McCray, a Northwestern University graduate student working in Argonne’s Materials Science Division (MSD). And any bar magnet that malfunctions due to some disruption will affect the others.

Tuesday, November 15, 2022

Prehistoric predator? Artificial intelligence says no

Artificial intelligence has proven vital in identifying a mysterious Aussie dinosaur
Image Credit: Dr Anthony Romilio

Artificial intelligence has revealed that prehistoric footprints thought to be made by a vicious dinosaur predator were in fact from a timid herbivore.

In an international collaboration, University of Queensland paleontologist Dr Anthony Romilio used AI pattern recognition to re-analyze footprints from the Dinosaur Stampede National Monument, south-west of Winton in Central Queensland.

“Large dinosaur footprints were first discovered back in the 1970s at a track site called the Dinosaur Stampede National Monument, and for many years they were believed to be left by a predatory dinosaur, like Australovenator, with legs nearly two meters long,” said Dr Romilio.

“The mysterious tracks were thought to be left during the mid-Cretaceous Period, around 93 million years ago.

“But working out what dino species made the footprints exactly – especially from tens of millions of years ago – can be a pretty difficult and confusing business.”

Solving brain dynamics gives rise to flexible machine-learning models

Studying the brains of small species recently helped MIT researchers better model the interaction between neurons and synapses — the building blocks of natural and artificial neural networks — into a class of flexible, robust machine-learning models that learn on the job and can adapt to changing conditions.
Image Credit: Ramin Hasani/Stable Diffusion

Last year, MIT researchers announced that they had built “liquid” neural networks, inspired by the brains of small species: a class of flexible, robust machine learning models that learn on the job and can adapt to changing conditions, for real-world safety-critical tasks like driving and flying. The flexibility of these “liquid” neural nets made them a boon for our connected world, yielding better decision-making for many tasks involving time-series data, such as brain and heart monitoring, weather forecasting, and stock pricing.

But these models become computationally expensive as their number of neurons and synapses increases, and they require clunky computer programs to solve their underlying, complicated math. And all of this math, like that describing many physical phenomena, becomes harder to solve as the models grow, meaning the computer must take many small steps to arrive at a solution.

Now, the same team of scientists has discovered a way to alleviate this bottleneck by solving the differential equation behind the interaction of two neurons through synapses, unlocking a new type of fast and efficient artificial intelligence algorithm. These models have the same characteristics as liquid neural nets — flexible, causal, robust, and explainable — but are orders of magnitude faster and scalable. This type of neural net could therefore be used for any task that involves getting insight into data over time, as the networks are compact and adaptable even after training — while many traditional models are fixed.
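The speed-up can be illustrated with a toy analogy (this is not the team’s actual formulation): a single leaky state obeying dx/dt = -x/τ + I can be advanced either with thousands of small solver steps or with its exact closed-form solution evaluated once.

```python
# Toy analogy only (not the MIT closed-form networks): a leaky state
# dx/dt = -x/tau + I advanced with many small Euler steps versus its
# exact closed-form solution, which is what makes the latter fast.
import math

tau, I, x0, T = 0.5, 1.0, 0.0, 2.0

# Numerical route: many small Euler steps.
x, dt = x0, 1e-4
for _ in range(int(T / dt)):
    x += dt * (-x / tau + I)

# Closed-form route: one evaluation, no stepping.
x_exact = x0 * math.exp(-T / tau) + I * tau * (1.0 - math.exp(-T / tau))

print(f"Euler ({int(T / dt)} steps): {x:.5f}   closed form (1 step): {x_exact:.5f}")
```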

Wednesday, November 2, 2022

Study urges caution when comparing neural networks to the brain

Image Credits: Christine Daniloff | Massachusetts Institute of Technology

Neural networks, a type of computing system loosely modeled on the organization of the human brain, form the basis of many artificial intelligence systems for applications such as speech recognition, computer vision, and medical image analysis.

In the field of neuroscience, researchers often use neural networks to try to model the same kind of tasks that the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. However, a group of researchers at MIT is urging that more caution should be taken when interpreting these models.

In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems.

“What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT.

Thursday, October 27, 2022

Step by step


Berkeley researchers may be one step closer to making robot dogs our new best friends. Using advances in machine learning, two separate teams have developed cutting-edge approaches to shorten in-the-field training times for quadruped robots, getting them to walk — and even roll over — in record time.

In a first for the robotics field, a team led by Sergey Levine, associate professor of electrical engineering and computer sciences, demonstrated a robot learning to walk without prior training from models and simulations in just 20 minutes. The demonstration marks a significant advancement, as this robot relied solely on trial and error in the field to master the movements necessary to walk and adapt to different settings.

“Our work shows that training robots in the real world is more feasible than previously thought, and we hope, as a result, to empower other researchers to start tackling more real-world problems,” said Laura Smith, a Ph.D. student in Levine’s lab and one of the lead authors of the paper posted on arXiv.

In past studies, robots of comparable complexity required several hours to weeks of data input to learn to walk using reinforcement learning (RL). Often, they also were trained in controlled lab settings, where they learned to walk on relatively simple terrain and received precise feedback about their performance.
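As a cartoon of the trial-and-error idea only — not the Berkeley method, which uses deep reinforcement learning on a physical robot — the sketch below performs simple random-search hill climbing on hypothetical policy parameters, keeping only perturbations that improve a made-up “forward progress” reward.

```python
# Cartoon of learning by interaction: random perturbations of policy parameters
# are kept only when a hypothetical "forward progress" reward improves.
import numpy as np

rng = np.random.default_rng(3)

def rollout_reward(params):
    """Hypothetical stand-in for running a policy on a robot and measuring progress."""
    target = np.array([0.6, -0.2, 0.9])          # unknown 'good gait' parameters
    return -np.sum((params - target) ** 2) + 0.05 * rng.normal()

params = np.zeros(3)
best = rollout_reward(params)
for trial in range(200):
    candidate = params + 0.1 * rng.normal(size=3)
    r = rollout_reward(candidate)
    if r > best:                                  # keep changes that help
        params, best = candidate, r

print("learned parameters:", np.round(params, 2), " reward:", round(best, 3))
```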

Thursday, October 20, 2022

Reprogrammable materials selectively self-assemble

With just a random disturbance that energizes the cubes, they selectively self-assemble into a larger block. 
Credit: MIT CSAIL

While automated manufacturing is ubiquitous today, it was once a nascent field birthed by inventors such as Oliver Evans, who is credited with creating the first fully automated industrial process, in a flour mill he built and gradually automated in the late 1700s. The processes for creating automated structures or machines are still very top-down, requiring humans, factories, or robots to do the assembling and making.

However, the way nature does assembly is ubiquitously bottom-up; animals and plants are self-assembled at a cellular level, relying on proteins to self-fold into target geometries that encode all the different functions that keep us ticking. For a more bio-inspired, bottom-up approach to assembly, then, human-architected materials need to do better on their own. Making them scalable, selective, and reprogrammable in a way that could mimic nature’s versatility means some teething problems, though.

Now, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have attempted to get over these growing pains with a new method: introducing magnetically reprogrammable materials that they coat different parts with — like robotic cubes — to let them self-assemble. Key to their process is a way to make these magnetic programs highly selective about what they connect with, enabling robust self-assembly into specific shapes and chosen configurations.

Monday, October 17, 2022

A Machine Learning-Based Solution Could Help Firefighters Circumvent Deadly Backdrafts

NIST researchers conducted hundreds of fire experiments to find out what conditions make a room ripe for backdraft and fed the data to a machine learning algorithm. The result was a backdraft-predicting computer model. The NIST team plans to incorporate the model into handheld devices that firefighters could use to take simple measurements through small openings in a room.

A lack of oxygen can reduce even the most furious flame to smoldering ash. But when fresh air rushes in, say after a firefighter opens a window or door to a room, the blaze may be suddenly and violently resurrected. This explosive phenomenon, called backdraft, can be lethal and has been challenging for firefighters to anticipate.

Now, researchers at the National Institute of Standards and Technology (NIST) have hatched a plan for informing firefighters of what dangers lie behind closed doors. The team obtained data from hundreds of backdrafts in the lab to use as a basis for a model that can predict backdrafts. The results of a new study, described at the 2022 Suppression, Detection and Signaling Research and Applications Conference, suggest that the model offers a viable solution to make predictions based on particular measurements. In the future, the team seeks to implement the technology into small-scale devices that firefighters could deploy in the field to avoid or adapt to dangerous conditions.

Currently, firefighters look for visual indicators of a potential backdraft, including soot-stained windows, smoke puffing through small openings and the absence of flames. If the cues are present, they may vent the room by creating holes in its ceiling to reduce their risk. If not, they may charge right in. Ultimately, first responders must rely on their eyes in a hazy environment to guess the correct action. And guessing wrong could come at a steep cost.
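The sketch below gives a rough sense of what a measurement-driven predictor could look like; it is not NIST’s model, and the features (gas temperature, unburned-fuel fraction, oxygen fraction), their ranges, and the labeling rule are invented for illustration.

```python
# Illustrative only: a classifier trained on hypothetical room measurements
# to flag conditions that favor a backdraft.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 600
temperature = rng.uniform(100, 700, n)      # degrees C, hypothetical range
fuel_fraction = rng.uniform(0.0, 0.25, n)   # unburned fuel gas fraction
oxygen = rng.uniform(0.02, 0.21, n)         # oxygen volume fraction

# Synthetic rule standing in for lab data: hot, fuel-rich, oxygen-starved rooms are risky.
backdraft = ((temperature > 400) & (fuel_fraction > 0.1) & (oxygen < 0.1)).astype(int)

X = np.column_stack([temperature, fuel_fraction, oxygen])
X_train, X_test, y_train, y_test = train_test_split(X, backdraft, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```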

Wednesday, October 12, 2022

An AI model reveals how the body’s defense system recognizes skin cancer

Boosting the body’s own defense system has proven to be a particularly effective therapy for skin cancer.
Photo credit: National Cancer Institute

The artificial intelligence model could be utilized to enable more effective care for skin cancer patients and could lead to similar breakthroughs in the diagnosis and treatment of other cancers.

Researchers from the University of Helsinki, HUS Comprehensive Cancer Center, Aalto University and Stanford University have developed an artificial intelligence model that predicts which skin cancer patients will benefit from a treatment that activates the immune defense system. In practice, the AI model makes it possible to diagnose skin cancer with a blood test, determine the prognosis and target therapies increasingly accurately.

The skin cancer–related study was published in the esteemed Nature Communications journal.

The right medication for the right patient

Boosting the body’s own defense system has proven to be a particularly effective therapy for skin cancer. The problem with therapies that activate the immune system is the variation between patient groups: while some patients can be said to be cured, others gain no benefit from the treatment at all.

“Prior research has been unable to provide doctors with tools that would predict who will benefit from treatment that activates the defense system. The correct targeting of therapies is extremely important, since drug therapies are expensive and serious adverse effects fairly common,” says physician and doctoral researcher Jani Huuhtanen from the University of Helsinki and Aalto University.

Tuesday, October 11, 2022

Team uses digital cameras, machine learning to predict neurological disease

From left, Richard Sowers, Rachneet Kaur and Manuel Hernandez led the development of a new approach for identifying people with multiple sclerosis or Parkinson’s disease. Their method involves videotaping the hips and lower extremities of individuals walking on a treadmill and allowing a machine-learning algorithm to differentiate gait abnormalities associated with each of these neurological conditions.
Photo credit: Fred Zwicky

In an effort to streamline the process of diagnosing patients with multiple sclerosis and Parkinson’s disease, researchers used digital cameras to capture changes in gait – a symptom of these diseases – and developed a machine-learning algorithm that can differentiate those with MS and PD from people without those neurological conditions.

Their findings are reported in the IEEE Journal of Biomedical and Health Informatics.

The goal of the research was to make the process of diagnosing these diseases more accessible, said Manuel Hernandez, a University of Illinois Urbana-Champaign professor of kinesiology and community health who led the work with graduate student Rachneet Kaur and industrial and enterprise systems engineering and mathematics professor Richard Sowers.

Currently, patients must wait – sometimes for years – to get an appointment with a neurologist to make a diagnosis, Hernandez said. And people in rural communities often must travel long distances to a facility where their condition can be assessed. To be able to gather gait information using nothing more than a digital camera and have that data assessed online could allow clinicians to do a quick screening that sends to a specialist only those deemed likely to have a neurological condition.
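For illustration only — this is not the Illinois team’s pipeline — the sketch below classifies hypothetical gait features (stride time, its variability, and left-right asymmetry, as might be derived from tracked hip and ankle positions) into control, MS-like, and PD-like groups.

```python
# Schematic sketch: classifying hypothetical gait features into
# control, MS-like, and PD-like groups.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_per_class = 60
means = {0: [1.0, 0.05, 0.02],   # control: stride time, variability, asymmetry
         1: [1.1, 0.12, 0.05],   # MS-like pattern (hypothetical)
         2: [0.9, 0.08, 0.08]}   # PD-like pattern (hypothetical)

X = np.vstack([rng.normal(means[c], 0.03, size=(n_per_class, 3)) for c in means])
y = np.repeat(list(means), n_per_class)

clf = SVC(kernel="rbf", C=1.0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```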

Monday, October 10, 2022

Claims AI can boost workplace diversity are ‘spurious and dangerous’, researchers argue

Co-author Dr Eleanor Drage testing the 'personality machine' built by Cambridge undergraduates.
  Credit: Eleanor Drage

Recent years have seen the emergence of AI tools marketed as an answer to lack of diversity in the workforce, from use of chatbots and CV scrapers to line up prospective candidates, through to analysis software for video interviews.

Those behind the technology claim it cancels out human biases against gender and ethnicity during recruitment, instead using algorithms that read vocabulary, speech patterns and even facial micro-expressions to assess huge pools of job applicants for the right personality type and “culture fit”.

However, in a new report published in Philosophy and Technology, researchers from Cambridge’s Centre for Gender Studies argue these claims make some uses of AI in hiring little better than an “automated pseudoscience” reminiscent of physiognomy or phrenology: the discredited beliefs that personality can be deduced from facial features or skull shape.

They say it is a dangerous example of “techno solutionism”: turning to technology to provide quick fixes for deep-rooted discrimination issues that require investment and changes to company culture.

Mathematical Formula Tackles Complex Moral Decision-Making in AI

Photo credit: Andy Kelly.

An interdisciplinary team of researchers has developed a blueprint for creating algorithms that more effectively incorporate ethical guidelines into artificial intelligence (AI) decision-making programs. The project was focused specifically on technologies in which humans interact with AI programs, such as virtual assistants or “carebots” used in healthcare settings.

“Technologies like carebots are supposed to help ensure the safety and comfort of hospital patients, older adults and other people who require health monitoring or physical assistance,” says Veljko Dubljević, corresponding author of a paper on the work and an associate professor in the Science, Technology & Society program at North Carolina State University. “In practical terms, this means these technologies will be placed in situations where they need to make ethical judgments.

“For example, let’s say that a carebot is in a setting where two people require medical assistance. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first. How does the carebot decide which patient is assisted first? Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?

“Previous efforts to incorporate ethical decision-making into AI programs have been limited in scope and focused on utilitarian reasoning, which neglects the complexity of human moral decision-making,” Dubljević says. “Our work addresses this and, while I used carebots as an example, is applicable to a wide range of human-AI teaming technologies.”

Thursday, October 6, 2022

As ransomware attacks increase, new algorithm may help prevent power blackouts

Saurabh Bagchi, a Purdue professor of electrical and computer engineering, develops ways to improve the cybersecurity of power grids and other critical infrastructure.
Credit: Purdue University photo/Vincent Walter

Millions of people could suddenly lose electricity if a ransomware attack just slightly tweaked energy flow onto the U.S. power grid.

No single power utility company has enough resources to protect the entire grid, but maybe all 3,000 of the grid’s utilities could fill in the most crucial security gaps if there were a map showing where to prioritize their security investments.

Purdue University researchers have developed an algorithm to create that map. Using this tool, regulatory authorities or cyber insurance companies could establish a framework that guides the security investments of power utility companies to parts of the grid at greatest risk of causing a blackout if hacked.

Power grids are a type of critical infrastructure, which is any network – whether physical like water systems or virtual like health care record keeping – considered essential to a country’s function and safety. The biggest ransomware attacks in history have happened in the past year, affecting most sectors of critical infrastructure in the U.S. such as grain distribution systems in the food and agriculture sector and the Colonial Pipeline, which carries fuel throughout the East Coast.
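A toy version of the prioritization idea is sketched below; the component names, compromise probabilities, blackout impacts, and hardening costs are all invented, and the greedy rule is only a stand-in for the Purdue algorithm.

```python
# Toy illustration: allocate a fixed security budget greedily to the grid
# components whose protection buys the largest reduction in expected
# blackout impact per dollar (all numbers hypothetical).
budget = 100.0
components = [  # (name, compromise probability, blackout impact, cost to harden)
    ("substation_A", 0.30, 900.0, 40.0),
    ("control_center", 0.10, 2000.0, 60.0),
    ("feeder_12", 0.50, 150.0, 10.0),
    ("substation_B", 0.20, 400.0, 25.0),
]

# Expected-risk reduction per unit cost, assuming hardening removes the compromise risk.
ranked = sorted(components, key=lambda c: (c[1] * c[2]) / c[3], reverse=True)

plan, remaining = [], budget
for name, p, impact, cost in ranked:
    if cost <= remaining:
        plan.append(name)
        remaining -= cost

print("harden first:", plan, " budget left:", remaining)
```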

Repurposing existing drugs to fight new COVID-19 variants

Photo Credit: Myriam Zilles

MSU researchers are using big data and AI to identify current drugs that could be applied to treat new COVID-19 variants

Finding new ways to treat the novel coronavirus and its ever-changing variants has been a challenge for researchers, especially when the traditional drug development and discovery process can take years. A Michigan State University researcher and his team are taking a high-tech approach to determine whether drugs already on the market can pull double duty in treating new COVID variants.

“The COVID-19 virus is a challenge because it continues to evolve,” said Bin Chen, an associate professor in the College of Human Medicine. “By using artificial intelligence and really large data sets, we can repurpose old drugs for new uses.”

Chen built an international team of researchers with expertise on topics ranging from biology to computer science to tackle this challenge. First, Chen and his team turned to publicly available databases to mine for the unique coronavirus gene expression signatures from 1,700 host transcriptomic profiles that came from patient tissues, cell cultures and mouse models. These signatures revealed the biology shared by COVID-19 and its variants.
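One common way to use such signatures — sketched below purely for illustration, and not necessarily the exact scoring used by Chen’s team — is to rank candidate drugs by how strongly their induced expression changes anticorrelate with the disease signature; all profiles here are synthetic.

```python
# Simplified signature-reversal scoring: drugs whose expression changes
# anticorrelate with the disease signature are candidates for repurposing.
import numpy as np

rng = np.random.default_rng(6)
n_genes = 500
disease_signature = rng.normal(size=n_genes)          # up/down regulation in disease samples

drug_profiles = {                                     # hypothetical drug-induced changes
    "drug_reverser": -0.8 * disease_signature + 0.3 * rng.normal(size=n_genes),
    "drug_neutral": rng.normal(size=n_genes),
    "drug_mimic": 0.8 * disease_signature + 0.3 * rng.normal(size=n_genes),
}

scores = {name: float(np.corrcoef(disease_signature, prof)[0, 1])
          for name, prof in drug_profiles.items()}

# Most negative correlation = strongest predicted reversal of the disease state.
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: correlation {score:+.2f}")
```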

Tuesday, October 4, 2022

Scientists chart how exercise affects the body

MIT and Harvard Medical School researchers mapped out many of the cells, genes, and cellular pathways that are modified by exercise or high-fat diet.
Photo Credit: Gabin Vallet

Exercise is well-known to help people lose weight and avoid gaining it. However, identifying the cellular mechanisms that underlie this process has proven difficult because so many cells and tissues are involved.

In a new study in mice that expands researchers’ understanding of how exercise and diet affect the body, MIT and Harvard Medical School researchers have mapped out many of the cells, genes, and cellular pathways that are modified by exercise or high-fat diet. The findings could offer potential targets for drugs that could help to enhance or mimic the benefits of exercise, the researchers say.

“It is extremely important to understand the molecular mechanisms that are drivers of the beneficial effects of exercise and the detrimental effects of a high-fat diet, so that we can understand how we can intervene, and develop drugs that mimic the impact of exercise across multiple tissues,” says Manolis Kellis, a professor of computer science in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Broad Institute of MIT and Harvard.

The researchers studied mice with high-fat or normal diets that were either sedentary or given the opportunity to exercise whenever they wanted. Using single-cell RNA sequencing, the researchers cataloged the responses of 53 types of cells found in skeletal muscle and two types of fatty tissue.

Mouse-human comparison shows unsuspected functions of the thalamus

With mathematical models, Bochum and US researchers have simulated processes in the brain of mice and humans.
Credit: RUB, Marquard

Researchers have reproduced brain functions of mice and humans in computer models. Artificial intelligence could learn from this.

For a long time, the thalamus was considered a brain region primarily responsible for processing sensory stimuli. Recent studies have provided growing evidence that it is also a central switchboard in cognitive processes. Neuroscience researchers led by Prof. Dr. Burkhard Pleger in Collaborative Research Center 874 at the Ruhr University Bochum, together with a team from the Massachusetts Institute of Technology (MIT, USA), observed learning processes in the brains of mice and humans and reproduced them in mathematical models. They were able to show that the mediodorsal nucleus of the thalamus plays a decisive role in cognitive flexibility. They report their findings in the journal PLOS Computational Biology.

Monday, October 3, 2022

AI boosts usability of paper-making waste products

Photo and graphic with birch tree by J. Löfgren

In a new and exciting collaboration with the Department of Bioproducts and Biosystems, researchers in the CEST group have published a study demonstrating how artificial intelligence (AI) can boost the production of renewable biomaterials. Their publication focuses on the extraction of lignin, an organic polymer that together with cellulose makes up the cell walls of plants. As a side-product of papermaking, lignin is produced in large quantities around the world but seldom used as anything other than cheap fuel. Developing valuable materials and chemicals from lignin would consequently be a big step towards a sustainable society.

A key challenge for the valorization of lignin is to find the right experimental extraction conditions. These include things like the temperature in the hot-water reactor where the wood is processed, the reaction time and the ratio of wood to water. These conditions not only affect the amount of lignin that can be extracted, but also the physical and chemical properties of the extracted lignin itself. Therefore, knowing how to choose the right experimental conditions is important since the more lignin can be extracted the better, and different lignin-based products may require lignin with different properties.
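The sketch below shows, under invented numbers, how a surrogate model could map extraction conditions to predicted lignin yield and suggest the next experiment; it is a generic illustration, not the CEST group’s published workflow.

```python
# Hedged sketch: a surrogate regressor maps extraction conditions to a
# hypothetical lignin yield, then a grid search picks promising conditions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Hypothetical past experiments: (temperature C, time h, wood:water ratio) -> yield
X = rng.uniform([140, 0.5, 0.05], [220, 4.0, 0.30], size=(25, 3))
y = (0.02 * X[:, 0] + 1.5 * X[:, 1] - 8.0 * X[:, 2]
     + 0.3 * rng.normal(size=25))                      # made-up yield response

model = make_pipeline(StandardScaler(),
                      GaussianProcessRegressor(normalize_y=True)).fit(X, y)

# Candidate conditions on a coarse grid; pick the one with highest predicted yield.
temps, times, ratios = np.meshgrid(np.linspace(140, 220, 9),
                                   np.linspace(0.5, 4.0, 8),
                                   np.linspace(0.05, 0.30, 6))
candidates = np.column_stack([temps.ravel(), times.ravel(), ratios.ravel()])
best = candidates[np.argmax(model.predict(candidates))]
print("suggested next experiment (T, t, ratio):", np.round(best, 2))
```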

Saturday, October 1, 2022

Machine learning may enable bioengineering of the most abundant enzyme

Photo Credit: Melissa Askew

A Newcastle University study has for the first time shown that machine learning can predict the biological properties of the most abundant enzyme on Earth - Rubisco.

Rubisco (Ribulose-1,5-bisphosphate carboxylase/oxygenase) is responsible for providing carbon for almost all life on Earth. Rubisco functions by converting CO2 from the Earth’s atmosphere into organic carbon matter, which is essential to sustain most life on Earth.

For some time now, natural variation has been observed among the Rubisco proteins of land plants, and modelling studies have shown that transplanting Rubisco proteins with certain functional properties could increase the amount of atmospheric CO2 that crop plants can take up and store.

Study lead author, Wasim Iqbal, a PhD researcher at Newcastle University’s School of Natural and Environmental Sciences, part of Dr Maxim Kapralov’s group, developed a machine learning tool which can predict the performance properties of numerous land plant Rubisco proteins with surprisingly good accuracy. The hope is that this tool will enable the hunt for a ‘supercharged’ Rubisco protein that can be bioengineered into major crops such as wheat.
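As a minimal sketch of the general approach — not the Newcastle tool — the example below encodes synthetic protein sequences by amino-acid composition and regresses a made-up kinetic property; every sequence and value is fabricated for illustration.

```python
# Illustrative only: predicting a hypothetical Rubisco kinetic property from
# simple amino-acid composition features of synthetic sequences.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Fraction of each amino acid in the sequence: a crude, hypothetical encoding."""
    return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

# Synthetic stand-ins for land-plant Rubisco sequences and a measured kinetic trait.
sequences = ["".join(rng.choice(list(AMINO_ACIDS), size=480)) for _ in range(120)]
X = np.stack([composition(s) for s in sequences])
kcat = 3.0 + 20.0 * X[:, AMINO_ACIDS.index("K")] + 0.1 * rng.normal(size=120)

model = RandomForestRegressor(n_estimators=300, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, kcat, cv=5).mean().round(2))
```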
