Showing posts with label Artificial Intelligence. Show all posts

Thursday, August 18, 2022

A new neuromorphic chip for AI on the edge, at a small fraction of the energy and size of today’s compute platforms

 The NeuRRAM chip is an innovative neuromorphic chip
Credit: David Baillot/University of California San Diego

An international team of researchers has designed and built a chip that runs computations directly in memory and can run a wide variety of AI applications, all at a fraction of the energy consumed by general-purpose AI computing platforms.

The NeuRRAM neuromorphic chip brings AI a step closer to running on a broad range of edge devices, disconnected from the cloud, where they can perform sophisticated cognitive tasks anywhere and anytime without relying on a network connection to a centralized server. Applications abound in every corner of the world and every facet of our lives, ranging from smart watches and VR headsets to smart earbuds, smart sensors in factories and rovers for space exploration.

The NeuRRAM chip is not only twice as energy efficient as state-of-the-art “compute-in-memory” chips, an innovative class of hybrid chips that run computations in memory, but it also delivers results that are just as accurate as conventional digital chips. Conventional AI platforms are far bulkier and are typically constrained to large data servers operating in the cloud.
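The core trick behind compute-in-memory chips is that a memory array can double as a matrix-vector multiplier: weights stay put as cell conductances, inputs arrive as row voltages, and Ohm's and Kirchhoff's laws do the arithmetic. A rough, purely illustrative sketch of that idea (the numbers and the `crossbar_mvm` helper are hypothetical, not NeuRRAM's actual circuit):

```python
# Toy sketch of compute-in-memory: weights are stored as memory-cell
# conductances G, inputs are applied as row voltages V, and each
# column's output current is a dot product (Ohm's law, summed by
# Kirchhoff's current law). All values are arbitrary illustration.
def crossbar_mvm(conductances, voltages):
    """Column current I_j = sum_i V_i * G[i][j]."""
    cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(cols)]

G = [[0.5, 0.1],   # 2x2 crossbar of cell conductances
     [0.2, 0.3]]
V = [1.0, 2.0]     # input voltages, one per row
currents = crossbar_mvm(G, V)
print(currents)    # column currents = the matrix-vector product
```

Because the multiply-accumulate happens where the weights already live, no energy is spent shuttling them between separate memory and compute units, which is where the efficiency gain of this chip class comes from.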

In addition, the NeuRRAM chip is highly versatile and supports many different neural network models and architectures. As a result, the chip can be used for many different applications, including image recognition and reconstruction as well as voice recognition.

Wednesday, August 17, 2022

Machine learning meets medicine in the fight against superbugs

Scanning electron micrograph of a human neutrophil ingesting MRSA.
Image: National Institute of Allergy and Infectious Diseases, National Institutes of Health on Flickr, CC BY-NC 2.0

MRSA is an antibiotic-resistant staph infection that can be deadly for those in hospital care or with weakened immune systems. Staphylococcus aureus bacteria live in the nose without necessarily producing any symptoms but can also spread to other parts of the body, leading to persistent infections. Management of MRSA is long-term and laborious, so any steps to optimize treatments and reduce re-infections will benefit patients. This new research can predict how effective different treatments will be by combining patient data with estimates of how MRSA moves between different parts of the body. The study was published in the Journal of the Royal Society Interface.

The researchers compared data from 2000 patients with MRSA after hospital visits. In one group, patients were given standard information about how to treat MRSA and prevent its spread. The second group followed a more intensive ‘decolonization’ protocol to eliminate MRSA through wound disinfection, cleaning the armpits and groin, and using nasal spray. Both groups were tested for MRSA on different body parts at various time points over nine months.

The current state-of-the-art in medical research often involves comparing two groups in this way, to see if an intervention or treatment could be effective. The new study added another element: a mathematical model that looked at the interactions between treatments and body parts. 'The model shows how MRSA moves between body parts,' says senior author Pekka Marttinen, professor at Aalto University and the Finnish Center for Artificial Intelligence FCAI. 'It can help us optimize the combination of treatments and even predict how new treatments would work before they have been tested on patients.'
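As an illustration of the kind of model described here, a toy Markov chain can mimic MRSA moving between body sites over repeated check-ups. The sites, transition probabilities, and time window below are invented for illustration; the study fits such parameters from real patient data.

```python
# Toy Markov chain for MRSA colonization moving between body sites.
# States and transition probabilities are invented, not fitted values.
STATES = ["nose", "skin", "cleared"]
T = [
    [0.7, 0.2, 0.1],  # from nose: stay, spread to skin, clear
    [0.3, 0.5, 0.2],  # from skin: move to nose, stay, clear
    [0.0, 0.0, 1.0],  # cleared is absorbing under treatment
]

def step(dist, T):
    """One check-up interval: new_dist[j] = sum_i dist[i] * T[i][j]."""
    n = len(dist)
    return [sum(dist[i] * T[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]   # patient starts colonized in the nose
for _ in range(9):        # roughly the study's nine-month window
    dist = step(dist, T)
print(round(dist[2], 3))  # probability of having cleared MRSA
```

With fitted transition probabilities, the same machinery lets researchers ask "what if" questions, such as how clearance changes when a treatment lowers one particular transition rate.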

Friday, August 12, 2022

Two Monumental Milestones Achieved in CT Imaging

Conventional chest CT image (left side) of the human airways compared to the new and improved PCD-CT system (right side). The image produced with the PCD-CT system shows better delineation of the bronchial walls. Preliminary studies showed that the PCD-CT system allowed radiologists to see smaller airways than with standard CT systems.
Image credit: Cynthia McCollough, Mayo Clinic, Rochester, Minnesota.

Two biomedical imaging technologies developed with support from the National Institute of Biomedical Imaging and Bioengineering (NIBIB) have been cleared for clinical use by the Food and Drug Administration (FDA). Both technologies offer advances in computed tomography (CT).

In one of these developments, project lead Cynthia McCollough, Ph.D., director of Mayo Clinic’s CT Clinical Innovation Center, and her team helped develop the first photon-counting detector (PCD)-CT system, which is superior to current CT technology. CT imaging has been an immense clinical asset for diagnosing many diseases and injuries. However, since its introduction into the clinic in 1971, the way a CT detector converts x-rays to electrical signals has remained essentially the same. Photon-counting detectors operate on a fundamentally different mechanism from any prior CT detector.

“This is the first major imaging advancement cleared by the FDA for CT in a decade,” stated Behrouz Shabestari, Ph.D., director of the division of Health Informatics Technologies. “The impact of this development will be far-reaching and provide clinicians with more detailed information for medical diagnoses.”

A CT scan is obtained when an x-ray beam rotates around a patient, allowing x-rays to pass through the patient. As the x-rays leave the patient a picture is taken by a detector and the information is transmitted to a computer for further processing. “Standard CT detectors use a two-step process, where x-rays are turned into light and then light is converted to an electrical signal,” explained Cynthia McCollough. “The photon-counting detector uses a one-step process where the x-ray is immediately transformed into an electrical signal.”
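McCollough's two-step versus one-step distinction can be caricatured in a few lines of code. The photon energies and counting threshold below are invented numbers, and real detectors of course work in hardware, not software:

```python
# Caricature of the two detector read-out schemes.
photons_kev = [25, 40, 40, 55, 70, 70, 70]  # invented photon energies

# Conventional energy-integrating detector: x-rays -> light -> one
# summed electrical signal; per-photon information is lost.
integrated_signal = sum(photons_kev)

# Photon-counting detector: each photon above a noise threshold is
# converted directly into a countable electrical pulse, and its energy
# can be sorted into bins for additional diagnostic information.
threshold_kev = 20
counted = [e for e in photons_kev if e > threshold_kev]
counts = len(counted)
bins = {"low": sum(1 for e in counted if e < 50),
        "high": sum(1 for e in counted if e >= 50)}
print(integrated_signal, counts, bins)
```

Keeping per-photon energy information, rather than a single summed signal, is what lets the PCD-CT system resolve finer structures such as the small airways described above.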

AI could help patients with chronic pain avoid opioids

Image by Andrea from Pixabay
Cognitive behavioral therapy is an effective alternative to opioid painkillers for managing chronic pain. But getting patients to complete those programs is challenging, especially because psychotherapy often requires multiple sessions and mental health specialists are scarce.

A new study suggests that pain CBT supported by artificial intelligence renders the same results as guideline-recommended programs delivered by therapists, while requiring substantially less clinician time, making this therapy more accessible.

“Chronic pain is incredibly common: back pain, osteoarthritis, migraine headaches and more. Because of pain, people miss work, develop depression, some people drink more alcohol than is healthy, and chronic pain is one of the main drivers of the opioid epidemic,” said John Piette, a professor at the University of Michigan’s School of Public Health and senior research scientist at the Veterans Administration.

“We’re very excited about the results of this study, because we were able to demonstrate that we can achieve pain outcomes that are at least as good as standard cognitive behavioral therapy programs, and maybe even better. And we did that with less than half the therapist time as guideline-recommended approaches.”

Traditionally, CBT is delivered by a therapist in 6 to 12 weekly in-person sessions that target patients’ behaviors, help them cope mentally and assist them in regaining functioning.

Wednesday, August 10, 2022

AI May Come to the Rescue of Future Firefighters

A view from NIST's Burn Observation Bubble (BOB) of a burning structure during an experiment, one minute before flashover. 
Credit: NIST

In firefighting, the worst flames are the ones you don’t see coming. Amid the chaos of a burning building, it is difficult to notice the signs of impending flashover — a deadly fire phenomenon wherein nearly all combustible items in a room ignite suddenly. Flashover is one of the leading causes of firefighter deaths, but new research suggests that artificial intelligence (AI) could provide first responders with a much-needed heads-up.

Researchers at the National Institute of Standards and Technology (NIST), the Hong Kong Polytechnic University and other institutions have developed a Flashover Prediction Neural Network (FlashNet) model to forecast the lethal events precious seconds before they erupt. In a new study published in Engineering Applications of Artificial Intelligence, FlashNet boasted an accuracy of up to 92.1% across more than a dozen common residential floorplans in the U.S. and came out on top when going head-to-head with other AI-based flashover predicting programs.

Flashovers tend to suddenly flare up at approximately 600 degrees Celsius (1,100 degrees Fahrenheit) and can then cause temperatures to shoot up further. To anticipate these events, existing research tools either rely on constant streams of temperature data from burning buildings or use machine learning to fill in the missing data in the likely event that heat detectors succumb to high temperatures.
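A crude, purely illustrative baseline (nothing like FlashNet's neural network) makes the prediction task concrete: extrapolate recent temperature readings and warn if the trend would cross the roughly 600 degrees Celsius flashover regime within a short lead time. The sampling interval, warning horizon, and example readings are invented:

```python
# Crude threshold heuristic, not FlashNet: extrapolate the recent
# temperature trend and warn if it would reach ~600 deg C, the regime
# where flashover tends to flare up, within the warning horizon.
def flashover_warning(temps_c, dt_s=5.0, horizon_s=30.0, limit_c=600.0):
    """temps_c: recent readings taken dt_s seconds apart, newest last."""
    if len(temps_c) < 2:
        return False
    rate = (temps_c[-1] - temps_c[-2]) / dt_s    # deg C per second
    projected = temps_c[-1] + rate * horizon_s   # linear extrapolation
    return projected >= limit_c

print(flashover_warning([450, 480, 520]))  # fast rise: warns
print(flashover_warning([300, 302, 303]))  # slow rise: no warning
```

The hard part, and the reason for machine learning, is that real detectors fail in the heat, so the model must make such calls from incomplete and noisy data across unfamiliar floorplans.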

Until now, most machine learning-based prediction tools, including one the authors previously developed, have been trained to operate in a single, familiar environment. In reality, firefighters are not afforded such luxury. As they charge into hostile territory, they may know little to nothing about the floorplan, the location of fire or whether doors are open or closed.

Tuesday, August 9, 2022

How water turns into ice — with quantum accuracy

Researchers at Princeton University combined artificial intelligence and quantum mechanics to simulate what happens at the molecular level when water freezes. The result is the most complete simulation yet of the first steps in ice “nucleation,” a process important for climate and weather modeling.  
Video by Pablo Piaggi, Princeton University

A team based at Princeton University has accurately simulated the initial steps of ice formation by applying artificial intelligence (AI) to solving equations that govern the quantum behavior of individual atoms and molecules.

The resulting simulation describes how water molecules transition into solid ice with quantum accuracy. This level of accuracy, once thought unreachable due to the amount of computing power it would require, became possible when the researchers incorporated deep neural networks, a form of artificial intelligence, into their methods. The study was published in the journal Proceedings of the National Academy of Sciences.

“In a sense, this is like a dream come true,” said Roberto Car, Princeton’s Ralph W. *31 Dornte Professor in Chemistry, who co-pioneered the approach of simulating molecular behaviors based on the underlying quantum laws more than 35 years ago. “Our hope then was that eventually we would be able to study systems like this one, but it was not possible without further conceptual development, and that development came via a completely different field, that of artificial intelligence and data science.”

The ability to model the initial steps in freezing water, a process called ice nucleation, could improve accuracy of weather and climate modeling as well as other processes like flash-freezing food.

The new approach enables the researchers to track the activity of hundreds of thousands of atoms over time periods thousands of times longer than in earlier studies, albeit still just fractions of a second.

Car co-invented the approach to using underlying quantum mechanical laws to predict the physical movements of atoms and molecules. Quantum mechanical laws dictate how atoms bind to each other to form molecules, and how molecules join with each other to form everyday objects.

Monday, August 1, 2022

Artificial Intelligence Edges Closer to the Clinic

TransMED can help predict the outcomes of COVID-19 patients, generating predictions from different kinds of clinical data, including clinical notes, laboratory tests, diagnosis codes and prescribed drugs. TransMED is also unique in its ability to transfer what it learns from existing diseases to better predict and reason about the progression of new and rare diseases.
Credit: Shannon Colson | Pacific Northwest National Laboratory

The beginning of the COVID-19 pandemic presented a huge challenge to healthcare workers. Doctors struggled to predict how different patients would fare under treatment against the novel SARS-CoV-2 virus. Deciding how to triage medical resources when presented with very little information took a mental and physical toll on caregivers as the pandemic progressed.

To ease this burden, researchers at Pacific Northwest National Laboratory (PNNL), Stanford University, Virginia Tech, and John Snow Labs developed TransMED, a first-of-its-kind artificial intelligence (AI) prediction tool aimed at addressing issues caused by emerging or rare diseases.

“As COVID-19 unfolded over 2020, it brought a number of us together into thinking how and where we could contribute meaningfully,” said chief scientist Sutanay Choudhury. “We decided we could make the most impact if we worked on the problem of predicting patient outcomes.”

“COVID presented a unique challenge,” said Khushbu Agarwal, lead author of the study published in Nature Scientific Reports. “We had very limited patient data for training an AI model that could learn the complex patterns underlying COVID patient trajectories.”

The multi-institutional team developed TransMED to address this challenge, analyzing data from existing diseases to predict outcomes of an emerging disease.
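The transfer-learning idea at the heart of this approach can be sketched in miniature: pretrain a predictor on plentiful records from existing diseases, then fine-tune it on the few records available for the emerging one. The logistic-regression model and all data below are synthetic stand-ins, not TransMED's actual architecture or data:

```python
import math, random

# Miniature transfer learning: pretrain on an "existing disease",
# fine-tune on a tiny "emerging disease" dataset. Everything here is
# synthetic and purely illustrative.
random.seed(0)

def predict(w, x):
    """Logistic regression probability of a poor outcome."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def train(w, data, lr=0.1, epochs=200):
    """Plain stochastic gradient ascent on the log-likelihood."""
    for _ in range(epochs):
        for x, y in data:
            p = predict(w, x)
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

# Source task: feature 0 marks poor outcomes; other features are noise.
source = ([([1.0, random.random(), random.random()], 1) for _ in range(50)]
          + [([0.0, random.random(), random.random()], 0) for _ in range(50)])
target = [([1.0, 0.2, 0.9], 1), ([0.0, 0.8, 0.1], 0)]  # tiny target set

w = train([0.0, 0.0, 0.0], source)        # pretrain on existing disease
w = train(w, target, lr=0.05, epochs=50)  # fine-tune on emerging disease
print(predict(w, [1.0, 0.5, 0.5]))        # risk estimate for a new patient
```

The payoff is the one Agarwal describes: when the emerging disease offers too little data to train a model from scratch, patterns borrowed from related, data-rich diseases can still anchor useful predictions.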

Thursday, July 28, 2022

AI tackles the challenge of materials structure prediction


The researchers, from Cambridge and Linköping Universities, have designed a way to predict the structure of a material given its constituent elements. The results are reported in the journal Science Advances.

The arrangement of atoms in a material determines its properties. The ability to predict this arrangement computationally for different combinations of elements, without having to make the material in the lab, would enable researchers to quickly design and improve materials. This paves the way for advances such as better batteries and photovoltaics.

However, there are many ways that atoms can ‘pack’ into a material: some packings are stable, others are not. Determining the stability of a packing is computationally intensive, and calculating every possible arrangement of atoms to find the best one is not practical. This is a significant bottleneck in materials science.
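The bottleneck can be made concrete with a toy search: enumerate candidate packings, score each with a pair potential, and keep the most stable. The Lennard-Jones potential and one-dimensional chain below are purely illustrative; real structure prediction must search an astronomically larger space of three-dimensional arrangements and compositions, which is why brute force fails and AI is attractive.

```python
# Toy structure search: pick the spacing of a short one-dimensional
# chain of atoms that minimizes a Lennard-Jones pair energy.
def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy; minimum of -eps at r = 2**(1/6)*sigma."""
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def chain_energy(spacing, n_atoms=10):
    """Total pair energy of n_atoms on a line with uniform spacing."""
    return sum(lj(spacing * (j - i))
               for i in range(n_atoms) for j in range(i + 1, n_atoms))

candidates = [0.90 + 0.01 * k for k in range(40)]  # spacings 0.90..1.29
best = min(candidates, key=chain_energy)           # exhaustive search
print(round(best, 2))  # most stable spacing among the candidates
```

Even this one-parameter example needs forty energy evaluations; with realistic quantum-mechanical energies and full 3-D freedom, each evaluation is expensive and the number of candidates explodes combinatorially.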

“This materials structure prediction challenge is similar to the protein folding problem in biology,” said Dr Alpha Lee from Cambridge’s Cavendish Laboratory, who co-led the research. “There are many possible structures that a material can ‘fold’ into. Except the materials science problem is perhaps even more challenging than biology because it considers a much broader set of elements.”

Thursday, June 23, 2022

Robots play with play dough


The inner child in many of us feels an overwhelming sense of joy when stumbling across a pile of the fluorescent, rubbery mixture of water, salt, and flour that put goo on the map: play dough. (Even if this happens rarely in adulthood.)

While manipulating play dough is fun and easy for 2-year-olds, the shapeless sludge is hard for robots to handle. Machines have become increasingly reliable with rigid objects, but manipulating soft, deformable objects comes with a laundry list of technical challenges, and most importantly, as with most flexible structures, if you move one part, you’re likely affecting everything else.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Stanford University recently let robots try their hand at playing with the modeling compound, but not for nostalgia’s sake. Their new system learns directly from visual inputs to let a robot with a two-fingered gripper see, simulate, and shape doughy objects. “RoboCraft” could reliably plan a robot’s behavior to pinch and release play dough to make various letters, including ones it had never seen. With just 10 minutes of data, the two-finger gripper rivaled human counterparts that teleoperated the machine — performing on-par, and at times even better, on the tested tasks.

Wednesday, June 22, 2022

Where Once Were Black Boxes, NIST’s New LANTERN Illuminates

How do you figure out how to alter a gene so that it makes a usefully different protein? The job might be imagined as interacting with a complex machine (at left) that sports a vast control panel filled with thousands of unlabeled switches, which all affect the device’s output somehow. A new tool called LANTERN figures out which sets of switches — rungs on the gene’s DNA ladder — have the largest effect on a given attribute of the protein. It also summarizes how the user can tweak that attribute to achieve a desired effect, essentially transmuting the many switches on our machine’s panel into another machine (at right) with just a few simple dials.
Credit: B. Hayes/NIST

Researchers at the National Institute of Standards and Technology (NIST) have developed a new statistical tool that they have used to predict protein function. Not only could it help with the difficult job of altering proteins in practically useful ways, but it also works by methods that are fully interpretable — an advantage over the conventional artificial intelligence (AI) that has aided with protein engineering in the past.

The new tool, called LANTERN, could prove useful in work ranging from producing biofuels to improving crops to developing new disease treatments. Proteins, as building blocks of biology, are a key element in all these tasks. But while it is comparatively easy to make changes to the strand of DNA that serves as the blueprint for a given protein, it remains challenging to determine which specific base pairs — rungs on the DNA ladder — are the keys to producing a desired effect. Finding these keys has been the purview of AI built of deep neural networks (DNNs), which, though effective, are notoriously opaque to human understanding.

Tuesday, May 24, 2022

AI reveals unsuspected math underlying search for exoplanets

Artist’s concept of a sun-like star (left) and a rocky planet about 60% larger than Earth in orbit in the star’s habitable zone. Gravitational microlensing has the ability to detect such planetary systems and determine the masses and orbital distances, even though the planet itself is too dim to be seen. 
Image credit: NASA Ames/JPL-Caltech/T. Pyle

Artificial intelligence (AI) algorithms trained on real astronomical observations now outperform astronomers in sifting through massive amounts of data to find new exploding stars, identify new types of galaxies and detect the mergers of massive stars, accelerating the rate of new discovery in the world’s oldest science.

But AI, also called machine learning, can reveal something deeper, University of California, Berkeley, astronomers found: unsuspected connections hidden in the complex mathematics arising from general relativity — in particular, how that theory is applied to finding new planets around other stars.

In a paper appearing this week in the journal Nature Astronomy, the researchers describe how an AI algorithm developed to more quickly detect exoplanets when such planetary systems pass in front of a background star and briefly brighten it — a process called gravitational microlensing — revealed that the decades-old theories now used to explain these observations are woefully incomplete.
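The observable behind gravitational microlensing is well established: for a point lens, a background star is magnified by A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)), where u is the lens-source angular separation in Einstein radii. The short sketch below evaluates that standard light-curve formula; the impact parameter is an arbitrary example value.

```python
import math

# Standard point-source, point-lens microlensing magnification
# (the Paczynski light curve):
#   A(u) = (u**2 + 2) / (u * sqrt(u**2 + 4)),
# with u the lens-source separation in Einstein radii.
def magnification(u):
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

# As the lens transits, u(t) dips to the impact parameter u0 and the
# background star briefly brightens. u0 = 0.1 is an arbitrary example.
def u_of_t(t, u0=0.1):
    return math.sqrt(u0 * u0 + t * t)

peak = magnification(u_of_t(0.0))  # magnification at closest approach
print(round(peak, 2))
```

Degeneracies in inverting such light curves, i.e., multiple lens configurations producing nearly identical brightening, are exactly the kind of hidden mathematical structure the Berkeley team's AI algorithm surfaced.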

Friday, May 20, 2022

Artificial intelligence predicts patients’ race from their medical images

Researchers demonstrated that medical AI systems can easily learn to recognize racial identity in medical images, and that this capability is extremely difficult to isolate or mitigate.
 Credit: Massachusetts Institute of Technology

The miseducation of algorithms is a critical problem; when artificial intelligence mirrors unconscious thoughts, racism, and biases of the humans who generated these algorithms, it can lead to serious harm. Computer programs, for example, have wrongly flagged Black defendants as twice as likely to reoffend as someone who’s white. When an AI used cost as a proxy for health needs, it falsely named Black patients as healthier than equally sick white ones, as less money was spent on them. Even AI used to write a play relied on using harmful stereotypes for casting.

Removing sensitive features from the data seems like a viable tweak. But what happens when it’s not enough?
