|The Scientific Frontline Communication Center / Technology News|
Sandia team developing right-sized reactor
A smaller scale, economically efficient nuclear reactor that could be mass-assembled in factories and supply power for a medium-size city or military base has been designed by Sandia National Laboratories.
The exportable, proliferation-resistant “right-sized reactor” was conceived by a Sandia research team led by Tom Sanders.
Sanders has been collaborating with numerous Sandians to advance the small reactor concept into an integrated design that incorporates intrinsic safeguards, security and safety. This opens the way for possible export of the reactor to developing countries that lack the infrastructure to support large power sources. The smaller design also reduces the need for recipient countries to develop an advanced nuclear regulatory framework.
Incorporated into the design, said team member Gary Rochau, is what is referred to as “nuke-star,” an integrated monitoring system that provides the exporters of such technologies a means of assuring the safe, secure, and legitimate use of nuclear technology.
“This small reactor would produce somewhere in the range of 100 to 300 megawatts of thermal power and could supply energy to remote areas and developing countries at lower costs and with a manufacturing turnaround period of two years as opposed to seven for its larger relatives,” Sanders said. “It could also be a more practical means to implement nuclear base load capacity comparable to natural gas-fired generating stations and with more manageable financial demands than a conventional power plant.”
At about half the size of a fairly large office building, a right-sized reactor facility would be considerably smaller than conventional nuclear power plants in the U.S., which typically produce 3,000 megawatts of power.
With approximately 85 percent of the design efforts completed for the reactor core, Sanders and his team are seeking an industry partner through a cooperative research and development agreement (CRADA). The CRADA team will be able to complete the reactor design and enhance the plant side, which is responsible for turning the steam into electricity.
Team member Steve Wright is doing research using internal Sandia Laboratory Directed Research and Development (LDRD) program funding. The right-sized reactor is expected to operate at efficiencies greater than those of any current design, ultimately giving it the greatest return on investment.
“In the past, concerns over nuclear proliferation and waste stymied and eventually brought to a halt nuclear energy R&D in the United States and caused constraints on U.S. supply industries that eventually forced them offshore,” Sanders said. “Today the prospects of nuclear proliferation, terrorism, global warming and environmental degradation have resulted in growing recognition that a U.S.-led nuclear power enterprise can prevent proliferation while providing a green source of energy to a developing country.”
Sanders said developing countries around the world have notified the International Atomic Energy Agency (IAEA) of their intent to enter the nuclear playing field. This technology will provide a large, ready market for properly scaled, affordable power systems. The right-sized nuclear power system is poised to have the right combination of features to meet export requirements, cost considerations and waste concerns.
The reactor system is built around a small uranium core, submerged in a tank of liquid sodium. The liquid sodium is piped through the core to carry the heat away to a heat exchanger also submerged in the tank of sodium. In the Sandia system, the reactor heat is transferred to a very efficient supercritical CO2 turbine to produce electricity.
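The article quotes the reactor's thermal output; the electricity delivered depends on the conversion efficiency of the supercritical-CO2 turbine. As a rough sketch, using the article's 100 to 300 megawatt thermal range and an assumed ~45 percent conversion efficiency (a figure typical of published supercritical-CO2 Brayton-cycle studies, not a Sandia number):

```python
# Hypothetical sketch: electric output of a small sodium-cooled reactor
# coupled to a supercritical-CO2 turbine. The 100-300 MW thermal range is
# from the article; the 45% conversion efficiency is an assumption.

def electric_output_mw(thermal_mw, cycle_efficiency=0.45):
    """Convert reactor thermal power (MWt) to electric power (MWe)
    for a given thermal-to-electric conversion efficiency."""
    return thermal_mw * cycle_efficiency

low = electric_output_mw(100)    # ≈ 45 MWe at the low end
high = electric_output_mw(300)   # ≈ 135 MWe at the high end
```

Under these assumptions the design would span roughly the output of a small municipal utility, consistent with the article's "medium-size city or military base" framing.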
These smaller reactors would be factory built and mass-assembled, with potential production of 50 a year. They all would have the exact same design, allowing for quick licensing and deployment. Mass production will keep the costs down, possibly to as low as $250 million per unit. Just as Henry Ford revolutionized the automobile industry with mass production of automobiles via an assembly line, the team’s concept would revolutionize the current nuclear industry, Sanders said.
Because the right-sized reactors are breeder reactors — meaning they generate their own fuel as they operate — they are designed to have an extended operational life and only need to be refueled once every couple of decades, which helps alleviate proliferation concerns. The reactor core is replaced as a unit and “in effect is a cartridge core for which any intrusion attempt is easily monitored and detected,” Sanders said. The reactor system has no need for fuel handling. Conventional nuclear power plants in the U.S. have their reactors refueled once every 18 months.
Sanders said much of the reactor technology needed for the smaller fission machines has been demonstrated through 50 years of operating experimental breeder reactors in Idaho. In addition, he said, Sandia is one of a handful of research facilities that has the capability to put together a project of this magnitude. The project would tap into the Labs’ expertise in complex systems engineering involving high performance computing systems for advanced modeling and simulation, advanced manufacturing, robotics and sensors as well as its experience in moving from research to development to deployment.
“Sandia operates one of three nuclear reactors and the only fuel-critical test facility remaining in the DOE complex,” Sanders said. “It is the nation’s lead laboratory for the development of all radiation-hardened semiconductor components as well as the lead lab for testing these components in extreme radiation environments.”
The goal of the right-sized reactors is to produce electricity at less than five cents per kilowatt hour, making them economically comparable to gas turbine systems.
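A back-of-the-envelope check shows the five-cent goal is plausible given the quoted $250 million unit cost. The capital figure is from the article; the plant rating, capacity factor, amortization period, and operations cost below are illustrative assumptions, not Sandia figures:

```python
# Simple levelized-cost sketch for the right-sized reactor. Only the
# $250 million unit cost comes from the article; every other input is
# an illustrative assumption.

def cost_per_kwh(capital_usd, rating_kw, capacity_factor, years, annual_om_usd):
    """Levelized cost: (straight-line amortized capital + O&M) divided by
    annual generation. Ignores discounting and fuel, since a breeder core
    is refueled only once every couple of decades."""
    annual_kwh = rating_kw * 8760 * capacity_factor
    annual_cost = capital_usd / years + annual_om_usd
    return annual_cost / annual_kwh

# Assumed: 100 MWe unit, 90% capacity factor, 30-year life, $10M/yr O&M
estimate = cost_per_kwh(250e6, 100_000, 0.90, 30, 10e6)
# roughly $0.023 per kWh under these assumptions, inside the 5-cent target
```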
Sanders said the smaller reactors will probably be built initially to provide power to military bases, both in the U.S. and outside the country.
|Source: Sandia National Laboratories Permalink: http://www.sflorg.com/comm_center/tech/p917_15.html Time Stamp: 8/27/2009 at 7:15:40 PM UTC|
Supercomputer at Oak Ridge Smashes Sustained Petaflops Record
"Jaguar" Sets Historical Milestone Surpassing the Sustained Petaflops Speed Barrier on Two Scientific Applications
Global supercomputer leader Cray Inc. (NASDAQ: CRAY) today announced the Cray XT supercomputer at the Department of Energy's (DOE) Oak Ridge National Laboratory (ORNL) set a new world record for computer speed with sustained performance of over a petaflops (quadrillion mathematical calculations per second) on two scientific applications. Sustained performance on real-world applications is the most critical measure of supercomputing performance. This allows scientists and engineers to dramatically increase the size, realism and complexity of simulations used to address fundamental scientific problems.
An ORNL research team recorded an unprecedented sustained performance of 1.35 petaflops on a superconductivity application used in nanotechnology and materials science research on the 1.64 petaflops system, nicknamed "Jaguar." The team's simulation ran on over 150,000 of Jaguar's 180,000-plus processing cores. The latest simulations on Jaguar were the first in which the team had enough computing power to move beyond ideal, perfectly ordered materials to the imperfect materials that typify the real world. Research into the nature of materials promises to revolutionize many areas of modern life, from power generation and transmission to transportation to the production of faster, smaller, more versatile computers and storage devices.
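The arithmetic behind the sustained-performance figure is worth spelling out. Dividing the 1.35 petaflops by the roughly 150,000 cores used gives the per-core sustained rate, and comparing against the 1.64 petaflops peak shows how efficient the run was (the core count is the article's round figure, so these are approximations):

```python
# Derived figures from the Jaguar superconductivity run, using only
# numbers quoted in the article.

sustained_flops = 1.35e15        # 1.35 petaflops sustained
cores_used = 150_000             # "over 150,000" of Jaguar's cores

per_core_gflops = sustained_flops / cores_used / 1e9
# ≈ 9.0 sustained gigaflops per core

fraction_of_peak = 1.35 / 1.64   # sustained rate vs. 1.64 PF system peak
# ≈ 0.82, i.e. about 82% of peak on a real application
```

Sustaining over 80 percent of peak on a real-world code, rather than on a synthetic benchmark, is what makes the result notable.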
The petaflops barrier was broken on a second application with 1.05 petaflops of sustained performance. The new performance levels for this application, a first-principles material science computer model used to perform studies involving the interactions between a large number of atoms, are expected to support advancements in magnetic storage.
"Compute performance has a direct impact on our ability to tackle the greatest engineering and scientific challenges we face, resulting in the breakthroughs that change society," said Dr. Thomas Zacharia, Oak Ridge National Laboratory associate director for computing and computational sciences. "The scalability, reliability and upgradeability of the Cray XT4 have made Jaguar an increasingly powerful computing resource for our researchers and scientists. This upgrade will enable even greater achievements in today's most important areas of science."
Jaguar was recently upgraded to a peak 1.64 petaflops and is now the world's fastest supercomputer for open scientific computing. The upgrade represents a major milestone in a four-year project, begun in 2004, between DOE, ORNL and Cray. The new petaflops machine will make it possible to address some of the most challenging scientific problems in areas such as climate modeling, renewable energy, materials science, fusion and combustion. Annually, 80 percent of Jaguar's resources are allocated through DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, a competitively selected, peer reviewed process open to researchers from universities, industry, government and non-profit organizations.
This marks the third successive major speed barrier broken by a Cray supercomputer. A Cray machine was first to achieve sustained gigaflops speed (one billion calculations per second) on a full 64-bit application in 1989, and another Cray computer broke the teraflops barrier (one trillion calculations per second) on a similar application in 1998.
"We congratulate the two application teams and Oak Ridge National Laboratory for shattering the petaflops performance barrier on these important scientific codes," said Cray CEO and President Peter Ungaro. "This milestone is a great reminder of what organizations can achieve when great determination and talent meet up with great supercomputing technology. On behalf of Cray, I'd also like to thank our major partners in building Jaguar, AMD and Data Direct Networks, for helping us achieve this historical milestone."
"It is only when we transcend the boundaries of what is considered possible that true technology advancements are achieved," said Alex Bouzari, CEO and Co-founder, Data Direct Networks. "We are proud to have partnered with Cray and AMD to help ORNL with this record-breaking accomplishment that will alter the future trajectory of supercomputing."
Jaguar uses over 45,000 of the latest quad-core AMD Opteron processors and features 362 terabytes of memory and a 10-petabyte DDN file system. The computer has 578 terabytes per second of memory bandwidth and I/O bandwidth of 284 gigabytes per second. The current upgrade is the result of an addition of 200 Cray XT5 cabinets to the existing 84 cabinets of the Cray XT4 Jaguar system. During the third quarter of 2008, Cray successfully delivered all of the cabinets for the petaflops system to ORNL ahead of schedule. The upgraded system is now undergoing acceptance testing, which is expected to conclude in late 2008 or early 2009.
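The published specifications are internally consistent, which a few lines of arithmetic confirm. All inputs below are numbers quoted in the article; the ratios are derived:

```python
# Consistency check on the published Jaguar specifications.

sockets = 45_000                 # "over 45,000" quad-core Opterons
cores = sockets * 4              # 180,000 cores, matching "180,000-plus"

peak_flops = 1.64e15             # 1.64 petaflops peak
mem_bandwidth = 578e12           # 578 TB/s aggregate memory bandwidth

peak_gflops_per_core = peak_flops / cores / 1e9   # ≈ 9.1 GF per core
bytes_per_flop = mem_bandwidth / peak_flops       # ≈ 0.35 bytes/flop
```

The roughly 0.35 bytes of memory bandwidth per peak flop is the kind of balance figure that determines how well memory-bound scientific codes sustain performance on the machine.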
Throughout its series of upgrades, Jaguar has maintained a consistent programming model for the users. This programming model allows users to continue to evolve their existing codes rather than write new ones. Applications that ran on previous versions of Jaguar can be recompiled, tuned for efficiency and then run on the new machine.
|Source: Cray Inc. Permalink: http://www.sflorg.com/comm_center/tech/p735_14.html Time Stamp: 11/17/2008 at 5:12:10 PM UTC|
Tiny Solar Cells Built to Power Microscopic Machines
Some of the tiniest solar cells ever built have been successfully tested as a power source for even tinier microscopic machines. An article in the inaugural issue of the Journal of Renewable and Sustainable Energy (JRSE), published by the American Institute of Physics (AIP), describes an inch-long array of 20 of these cells -- each one about a quarter the size of a lowercase "o" in a standard 12-point font.
The cells were made of an organic polymer and were joined together in an experiment aimed at proving their ability to power tiny devices that can be used to detect chemical leaks and for other applications, says Xiaomei Jiang, who led the research at the University of South Florida.
Traditional solar cells, such as the commercial type installed on rooftops, use a brittle backing made of silicon, the same sort of material upon which computer chips are built. By contrast, organic solar cells rely upon a polymer that has the same electrical properties as silicon wafers but can be dissolved and printed onto flexible material.
"I think these materials have a lot more potential than traditional silicon," says Jiang. "They could be sprayed on any surface that is exposed to sunlight -- a uniform, a car, a house."
Jiang and her colleagues fabricated their array of 20 tiny solar cells as a power source for running a microscopic sensor for detecting dangerous chemicals and toxins. The detector, known as a microelectromechanical system (MEMS) device, is built with carbon nanotubes and has already been tested using ordinary DC power supplied by batteries. When fully powered and hooked into a circuit, the carbon nanotubes can sensitively detect particular chemicals by measuring the electrical changes that occur when chemicals enter the tubes. The type of chemical can be distinguished by the exact change in the electrical signal.
The device needs a 15-volt power source to work; so far, Jiang's solar cell array can provide about half of that, up to 7.8 volts in laboratory tests. The next step, she says, is to optimize the device to increase the voltage and then connect the miniature solar array to the carbon nanotube chemical sensors. Jiang estimates the team will be able to demonstrate this level of power with its next-generation solar array by the end of the year.
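The per-cell numbers follow directly from the quoted figures. The 7.8-volt output and 15-volt requirement are from the article; the series-connection assumption is ours:

```python
import math

# Per-cell arithmetic for the 20-cell organic solar array, assuming the
# cells are wired in series so their voltages add.

cells = 20
array_volts = 7.8                     # measured in Jiang's lab tests
required_volts = 15.0                 # what the MEMS sensor needs

volts_per_cell = array_volts / cells  # ≈ 0.39 V per cell

# Cells needed at the same per-cell voltage to reach 15 V:
cells_needed = math.ceil(required_volts / volts_per_cell)   # 39 cells
```

So under this assumption, either roughly doubling the cell count or raising the per-cell voltage would close the gap, which matches the two optimization paths the article describes.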
The article "Fabrication of organic solar array for applications in microelectromechanical systems" by Xiaomei Jiang was published in the Journal of Renewable and Sustainable Energy in November 2008.
|Source: American Institute of Physics Permalink: http://www.sflorg.com/comm_center/tech/p693_13.html Time Stamp: 11/7/2008 at 4:05:59 PM UTC|
New Process Promises Bigger, Better Diamond Crystals
Researchers at the Carnegie Institution have developed a new technique for improving the properties of diamonds—not only adding sparkle to gemstones, but also simplifying the process of making high-quality diamond for scalpel blades, electronic components, even quantum computers. The results are published in the October 27-31 online edition of the Proceedings of the National Academy of Sciences.
A diamond may be forever, but the very qualities that make it a superior material for many purposes (its hardness, optical clarity, and resistance to chemicals, radiation, and electrical fields) can also make it a difficult substance to work with. Defects can be purged by a heating process called annealing, but this can turn diamond to graphite, the soft, grey form of carbon used in pencil leads. To prevent graphitization, diamond treatments have previously required high pressures (up to 60,000 times atmospheric pressure) during annealing, but high-pressure/high-temperature annealing is expensive, and there are limits on the size and quantity of diamonds that can be treated.
Yu-fei Meng, Chih-shiue Yan, Joseph Lai, Szczesny Krasnicki, Haiyun Shu, Thomas Yu, Qi Liang, Ho-kwang Mao, and Russell Hemley of the Carnegie Institution’s Geophysical Laboratory used a method called chemical vapor deposition (CVD) to grow synthetic diamonds for their experiments. Unlike other methods, which mimic the high pressures deep within the earth where natural diamonds are formed, the CVD method produces single-crystal diamonds at low pressure. The resulting diamonds, which can be grown very rapidly, have precisely controlled compositions and comparatively few defects.
The Carnegie team then annealed the diamonds at temperatures up to 2000° C using a microwave plasma at pressures below atmospheric pressure. The crystals, originally yellow-brown when produced at very high growth rates, turned colorless or light pink. Despite the absence of stabilizing pressure there was minimal graphitization. Using analytical methods such as photoluminescence and absorption spectroscopy, the researchers were also able to identify the specific crystal defects that caused the color changes. In particular, the rosy pink color is produced by structures called nitrogen-vacancy (NV) centers, where a nitrogen atom takes the place of a carbon atom at a position in the crystal lattice next to a vacant site.
“This low-pressure/high-temperature annealing enhances the optical properties of this rapidly grown CVD single-crystal diamond,” says Meng. “We see a significant decrease in the amount of light absorbed across the spectrum from ultraviolet to visible and infrared. We were also able to determine that the decrease arises from the changes in defect structure associated with hydrogen atoms incorporated in the crystal lattice during CVD growth.”
“It is striking to see brown CVD diamonds transformed by this cost-efficient method into clear, pink-tinted crystals,” says Yan. And because the researchers pinpointed the cause of the color changes in their diamonds, “Our work may also help the gem industry to distinguish natural from synthetic diamond.”
“The most exciting aspect of this new annealing process is the unlimited size of the crystals that can be treated. The breakthrough will allow us to push to kilocarat diamonds of high optical quality,” says coauthor Ho-kwang Mao. Because the method does not require a high-pressure press, it promises faster processing and can decolorize more types of diamonds than current high-pressure annealing methods. It also places no restriction on the size or number of crystals treated, since it is not limited by the chamber size of a press, and the microwave unit is significantly less expensive than a large high-pressure apparatus.
“The optimized process will produce better diamond for new-generation high-pressure devices and window materials with improved optical properties in the ultraviolet to infrared range,” concludes laboratory director Russell Hemley. “It has the advantage of being applicable in CVD reactors as a subsequent treatment after growth.”
The high-quality, single crystal diamond made possible by the new process has a wide variety of applications in science and technology, such as the use of diamond crystals as anvils in high-pressure research and in optical applications that take advantage of diamond’s exceptional transparency. Among the more exotic future applications of the pink diamonds made in this way is quantum computing, which could use the diamonds’ NV centers for storing quantum information.
|Source: Carnegie Institution of Washington Permalink: http://www.sflorg.com/comm_center/tech/p657_12.html Time Stamp: 10/28/2008 at 3:32:25 PM UTC|
Princeton Has New High-Speed Connection to ESnet’s Dynamic ESnet4 Network
The U.S. Department of Energy’s Energy Sciences Network (ESnet) just improved its Internet connections to several institutions on Princeton University’s Forrestal Campus, including the Princeton Plasma Physics Laboratory (PPPL), the High Energy Physics (HEP) Group within the Physics Department at Princeton University, and the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory (GFDL).
Now researchers around the globe can access data from these science facilities with increasing speeds and scalability, helping enable international collaborations on bandwidth-intensive applications and experiments.
“This is a great achievement,” says Steve Cotter, head of ESnet. “With the availability of cutting-edge instruments and supercomputers, scientists around the world are collaborating to carry out large experiments that produce tremendous amounts of data. This upgrade links Princeton’s physics researchers to that data through our robust and reliable network, ESnet4, via point-to-point dedicated circuits and IP services at multiple gigabit per second speeds.”
The Princeton network upgrade took approximately five months to complete, and involved running fiber optic cabling underground from the Forrestal Campus outside Princeton, New Jersey, along Route 1 to South Brunswick, then to Philadelphia, where it is transported across the ESnet infrastructure to ESnet's main point of presence in McLean, Va.
On the Princeton campus, PPPL’s Internet connection is now operating at 10 gigabits per second, significantly faster than its previous speed of 155 megabits per second, roughly a 6,400 percent improvement in performance. ESnet’s international connectivity will facilitate collaborations on world-class facilities, including the future ITER fusion reactor in France and existing fusion energy facilities such as the superconducting tokamaks in Korea (KSTAR) and in China (EAST).
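The percentage figure follows from the two quoted link speeds, both of which are from the article:

```python
# Checking the quoted improvement for PPPL's upgraded link.

old_bps = 155e6     # previous link: 155 megabits per second
new_bps = 10e9      # new link: 10 gigabits per second

speedup = new_bps / old_bps                 # ≈ 64.5x
percent_improvement = (speedup - 1) * 100   # ≈ 6,350%, consistent with the
                                            # article's rounded "6,400 percent"
```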
Meanwhile, the upgrade brought a new 1 gigabit circuit to GFDL, providing high-speed access to other ESnet sites such as the Oak Ridge National Laboratory’s Leadership Computing Facility, where climate simulations will be run. The HEP Group in the Physics Department also received its own 1 gigabit circuit, allowing it to access data from Europe’s Large Hadron Collider (LHC). Based at the European Center for Nuclear Research (CERN) in Switzerland, the LHC is the world’s largest particle accelerator. When it begins smashing together beams of protons to search for new particles and forces, and beams of heavy nuclei to study new states of matter, over 15 million gigabytes of data per year will be distributed to researchers across the globe. The ESnet4 network plays a significant role in providing U.S. researchers access to this data.
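The quoted 15 million gigabytes per year translates into a substantial average data rate, which puts the 1 gigabit circuits in context. The yearly volume is from the article; the unit conversions are standard:

```python
# Average data rate implied by the LHC's 15 million gigabytes per year.

bytes_per_year = 15e6 * 1e9           # 15 million gigabytes
seconds_per_year = 365.25 * 24 * 3600

avg_bytes_per_sec = bytes_per_year / seconds_per_year   # ≈ 475 MB/s
avg_gigabits_per_sec = avg_bytes_per_sec * 8 / 1e9      # ≈ 3.8 Gb/s
# That aggregate rate is shared among many sites worldwide, so a 1 Gb/s
# circuit serves one institution's share of the stream.
```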
“This world-class network capability places the Princeton institutes on par with the upper echelon of research institutions, and allows researchers to collaborate with institutions around the world at speeds necessary to conduct large scale science,” says Joe Burrescia, General Manager for ESnet.
This upgrade is a collaborative effort involving the U.S. Department of Energy (DOE), National Oceanic and Atmospheric Administration (NOAA), the University of Pennsylvania, and Princeton University. Internet2’s regional connector, MAGPI, based at the University of Pennsylvania, will coordinate and manage the multi-agency consortium that connects Princeton to the ESnet4 network. The DOE and NOAA equally shared the cost of the fiber installation to Princeton institutes, while the University contributes to the on-campus cost of the optical equipment.
About ESnet and Berkeley Lab
Berkeley Lab is a U.S. Department of Energy national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California.
|For more information about ESnet visit http://www.es.net/ . Source: Berkeley Laboratory Computing Science Permalink: http://www.sflorg.com/comm_center/tech/655_11.html Time Stamp: 10/27/2008 at 7:04:54 PM UTC|
Sandia manages new DOE renewable energy program
DOE to invest up to $24 million for breakthrough solar energy products
Sandia National Laboratories has been chosen as project manager of a new Department of Energy renewable energy program called Solar Energy Grid Integration Systems (SEGIS). The project will involve 12 industry teams from around the country. DOE will invest up to $24 million in FY08 and beyond on the project, depending on the availability of funds.
The program will provide critical research and development funding to develop less expensive, higher performing products to enhance the value of solar photovoltaics (PV) systems to homeowners, business owners, and the nation’s electric utilities. These projects are part of President Bush’s Solar America Initiative, which aims to make solar energy cost-competitive with conventional forms of electricity by 2015.
“We are pleased to have the opportunity to lead this large effort that promises to be an important component of our country’s energy strategy for years to come,” says Margie Tatro, director of Sandia’s Fuel and Water Systems Center. “Increasing the use of alternative and clean energy technologies such as solar is critical to diversifying the nation’s energy sources and reducing our dependence on foreign oil.”
The SEGIS funding opportunity was announced in November 2007. The projects selected for awards focus on collaborative research and development with U.S. industry teams to develop products that will enable photovoltaics to become a more integral part of household, commercial, and utility intelligent energy systems.
A recent DOE news release announcing SEGIS cites examples of research teams working together to develop intelligent system controls that integrate solar systems with utility infrastructures and traditional building energy management.
DOE and Sandia selected 12 industry teams to participate in the first slate of cost-shared collaborative contracts focusing on conceptual design of hardware components and market analysis.
For these 12 winning projects, $2.9 million in DOE funding is leveraging $1.7 million in industry investment. The plan is to award follow-on contracts in FY09 and beyond, subject to the availability of funds, for projects demonstrating the most promising technology advances and the highest likelihood of commercial success. Combined with DOE’s investment of up to $24 million and an overall industry investment of up to $16 million, more than $40 million in total could be invested in these SEGIS projects, with future funding subject to appropriations from Congress.
SEGIS contracts awardees include Apollo Solar, EMTEC, Enphase, General Electric, Nextek Power Systems, Petra Solar, Princeton Power, Premium Power, PV Powered, Smart Spark, Florida Solar Energy Center of the University of Central Florida, and VPT Energy Inc.
|Source: Sandia National Laboratories Permalink: http://www.sflorg.com/comm_center/tech/p602_10.html Time Stamp: 10/7/2008 at 4:52:59 PM UTC|
NERSC Releases Software Test for Its Next Supercomputer
The Department of Energy’s National Energy Research Scientific Computing Center (NERSC) is looking for a new supercomputer, but is not willing to spend millions of dollars on just any machine. The computer scientists and engineers want to know that their new supercomputer can reliably handle a diverse scientific workload, so they’ve developed the Sustained System Performance (SSP) Benchmarks, a comprehensive test for any system they consider.
The benchmarks were released in conjunction with the NERSC-6 request for proposals on September 4. This version of SSP marks the first time that both vendors and the performance research community can easily access all applications and test cases.
The NERSC-6 system will be the next major system acquisition to support the DOE Office of Science computational challenges. Operated by Lawrence Berkeley National Laboratory, NERSC is the flagship scientific computing facility for DOE’s Office of Science and a world leader in accelerating scientific discovery through computation, including the deployment and support of large-scale resources.
“SSP is a key part of NERSC’s comprehensive evaluation for large-scale systems,” says William Kramer, General Manager at NERSC. “We look for balanced systems that expand our computational and analytics capability by assessing the systems’ abilities to provide sustained performance, effective work dispatching, reliability, consistency and usability, for the entire range of the Office of Science computational challenges.”
Instead of peak performance estimates, that is, the number of teraflop/s that could potentially be performed, NERSC scientists and engineers are concerned with the actual number of teraflop/s that the system will achieve in tackling a scientific problem. NERSC staff refer to this concept as sustained performance, and measure it using the SSP.
The SSP suite consists of seven applications and associated inputs, which span a wide range of science disciplines, algorithms, concurrencies and scaling methods. Kramer notes that this benchmark provides a fair way to compare systems that are introduced with different time frames and technologies. The test also provides a powerful method to assess sustained price/performance for the systems under consideration.
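Metrics of this kind typically aggregate measured per-application rates so that no single code dominates the score. The sketch below is illustrative only: the seven-application structure follows the article, but the geometric-mean aggregation and the rates are placeholder assumptions, not NERSC's actual SSP formula or measurements:

```python
# Illustrative sustained-performance aggregate over several applications.
# The geometric mean is a common choice for combining benchmark rates
# because it is not skewed by one unusually fast code; this is a sketch,
# not NERSC's published SSP definition.

def sustained_aggregate(rates_tflops):
    """Geometric mean of per-application sustained rates (Tflop/s)."""
    product = 1.0
    for r in rates_tflops:
        product *= r
    return product ** (1.0 / len(rates_tflops))

# Placeholder sustained rates for seven hypothetical applications:
rates = [12.0, 8.0, 15.0, 6.0, 10.0, 9.0, 11.0]
score = sustained_aggregate(rates)
# the aggregate always lies between the slowest and fastest rate
```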
“NERSC-6 will be the system that provides the best value overall for supporting the DOE computational workload, taking into account Performance, Effectiveness, Reliability, Consistency and Usability, summed up in the acronym PERCU,” says Kramer.
An updated version of the Effective System Performance (ESP) test, developed to encourage and assess improved job launching and resource management, was also released with the request for proposals, as were the other tests and benchmarks NERSC uses to assess large-scale systems.
The new SSP suite can be downloaded from
Information about the NERSC-6 request for proposals is at
Information about ESP can be found at
The NERSC Center currently serves 3,000 scientists at national laboratories and universities across the country researching problems in combustion, climate modeling, fusion energy, materials science, physics, chemistry and computational biology. Established in 1974, the NERSC Center has long been a leader in providing systems, services and expertise to advance computational science throughout the DOE research community. NERSC is managed by Lawrence Berkeley National Laboratory for DOE. For more information about the NERSC Center, go to http://www.nersc.gov/.
|Source: Berkeley Lab's Computing Sciences Permalink: http://www.sflorg.com/comm_center/tech/p545_09.html Time Stamp: 9/12/2008 at 6:45:35 PM UTC|
Researcher develops inference technique that estimates how many people will fall sick in an epidemic
Tool focuses on anthrax and smallpox outbreaks
Imagine an outbreak of a disease like SARS (severe acute respiratory syndrome) that could become an epidemic affecting thousands of people. Wouldn’t it be helpful to know early in the epidemic how fast the disease would spread and how many people may be infected so that the medical community could be prepared to treat them?
Sandia National Laboratories/California researcher Jaideep Ray has developed a computer model that can do just that.
In his third year of internal Laboratory Directed Research & Development (LDRD) funding, Ray has figured out a way to determine the number of people likely to be infected and die from noncommunicable illnesses like anthrax — ailments that could be caused by a potential bioterrorist attack — as well as communicable diseases like smallpox.
“In the past, decision makers were only able to observe — watch people get sick, go to the hospital, and maybe die,” Ray says. “They had no idea how many people would get sick tomorrow or two days from now.”
He came to this realization in 2004 when he was working on a project for the Department of Defense where he developed a computer model that had decision makers responding to an epidemic at a naval base.
“It struck me that we were going about this completely backwards,” Ray says.
He proposed an LDRD project where he would develop mathematical tools that, using information from the first days of an epidemic, would estimate how many people were going to get sick during the course of the epidemic.
He spent the next three years working on the software and in the middle of 2007 successfully developed a model that could infer the characteristics of a bioterrorism-related epidemic of a noncommunicable disease like anthrax. These inferences were drawn from observations of people with symptoms of anthrax exposure collected over the first three to five days of an epidemic. He is within a few months of refining a computer model that would do the same for communicable diseases.
Russian anthrax outbreak
Ray says that characterizing diseases requires observations of real outbreaks and then building computer models around them. He did this for a 1979 anthrax outbreak in Sverdlovsk (renamed Yekaterinburg after the fall of the Soviet Union), a city of about 1.2 million people in central Russia. Initially the Soviets said the victims contracted the disease by eating anthrax-contaminated meat or by having contact with dead animals. After the Cold War ended, American physicians reviewed documents published by pathologists who performed autopsies during the epidemic, confirming the pathogen was airborne. Records showed that 80 people were infected, most of them by inhaling the pathogen. A total of 68 died of the disease.
Using the computer program, Ray ran the data obtained from hospital records of people who became sick in the early days of the epidemic. The program automatically tried many combinations of the unknown number of infected people and the time and dose of anthrax exposure until it got as close to the real observation as possible. In the final runs, using data from the first nine days of the 42-day outbreak, the model inferred that almost certainly fewer than 100 people had been infected, with the most probable number around 55.
That was “pretty close” to the real event, he says. The program, which also estimated the time of the release and the dose of anthrax inhaled, took 10 minutes to run.
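The search the article describes — trying combinations of outbreak size and exposure time until the predictions match the early observations — can be sketched as a simple grid search over a log-normal incubation-period model. The parameter values and case counts below are illustrative placeholders, not data from the Sverdlovsk study or from Ray's actual software:

```python
import math

def lognorm_cdf(t, mu, sigma):
    """CDF of a log-normal incubation period (t in days since exposure)."""
    if t <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2))))

def expected_cases(n_infected, release_day, days, mu=2.3, sigma=0.4):
    """Expected cumulative symptomatic cases on each observation day."""
    return [n_infected * lognorm_cdf(d - release_day, mu, sigma) for d in days]

def grid_search(observed, days):
    """Try combinations of outbreak size and release day; keep the best fit."""
    best = None
    for n in range(10, 201, 5):                       # candidate total infections
        for release in [0.5 * x for x in range(-10, 1)]:  # candidate release days
            pred = expected_cases(n, release, days)
            err = sum((p - o) ** 2 for p, o in zip(pred, observed))
            if best is None or err < best[0]:
                best = (err, n, release)
    return best[1], best[2]

# Hypothetical early-outbreak data: cumulative cases observed on days 1..9
days = list(range(1, 10))
observed = [2, 5, 9, 14, 20, 26, 31, 36, 40]
n_est, t_est = grid_search(observed, days)
print(n_est, t_est)
```

Ray's model is Bayesian and returns a probability distribution over outbreak sizes rather than a single best fit, but the core idea — forward-simulating candidate parameters and scoring them against the first days of case data — is the same.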
“If they had had this program in 1979 the Soviet officials would have known that this was going to be a small outbreak,” Ray says. “Instead they got into a panic and vaccinated 50,000 to 60,000 people — the whole southern end of the city.”
Nigerian smallpox epidemic
After proving the software worked, he turned his attention to communicable diseases, specifically smallpox. He modeled a documented 1967 smallpox outbreak in the town of Abakaliki, Nigeria, which broke out in a fundamentalist sect, the Faith Tabernacle Church (FTC). The sect consisted of 120 people who lived in nine different compounds alongside 177 of their nonsectarian neighbors. The FTC members mixed closely within their compounds and across compounds at church services four times a week and during social visits.
A small girl first introduced the disease into the population. It spread rapidly in her compound and jumped to other compounds via the church and social visits. The sect members refused medical treatment and did not quarantine the sick and contagious members. While the World Health Organization (WHO) monitored the outbreak and kept records of who got sick and when, it did not record the dates of recovery or deaths of the infected people.
Of the 32 people who became infected during the epidemic, 30 were FTC members.
Differentiating the communicable disease model from the noncommunicable disease model is the importance of social networks. Communicable diseases spread faster through people in closer proximity. For example, close family members of an infected small child would have a higher probability of contracting the disease than someone who lives in another compound or house.
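The proximity effect can be illustrated with a toy calculation. Each infectious contact gives a susceptible person an independent daily chance of infection, so the overall daily risk is one minus the probability of escaping every contact. The per-contact transmission probabilities below are made-up values for illustration, not parameters from Ray's model:

```python
# Illustrative per-contact daily transmission probabilities, strongest for
# household members and weakest for brief contact at church (assumed values).
P_HOUSEHOLD = 0.10
P_COMPOUND = 0.03
P_CHURCH = 0.01

def daily_infection_prob(infectious_contacts):
    """Probability a susceptible person is infected today, given the
    transmission probability of each of their infectious contacts."""
    p_escape = 1.0
    for p in infectious_contacts:
        p_escape *= (1.0 - p)
    return 1.0 - p_escape

# A household member of an infected child faces three infectious close
# contacts; a resident of another compound meets one case only at church.
close = daily_infection_prob([P_HOUSEHOLD] * 3)
distant = daily_infection_prob([P_CHURCH])
print(round(close, 3), round(distant, 3))
```

Inferring the network is the hard part: the case data alone must reveal which contact structure (and which transmission probabilities) best explain who got sick and when.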
Ray says the challenge is that making inferences about social networks is hard. There is a tendency for the inference mechanism to quickly “settle down” into one of a few possible network configurations. He estimates that it will take about four to six months to overcome this “stickiness” of the inference mechanism.
As of today, these inference techniques can work with incomplete observations. Using data from only the first 40 days of the three-month epidemic, Ray was able to infer characterizations close to the “true” course of the outbreak.
“These preliminary results are useful and encouraging,” Ray says. “Within a few months we should be able to remove the simplifications and perform inferences with models which are even more reflective of the actual spread of the disease.”
|Image Caption: Sandia researcher Jaideep Ray has developed a model that can determine in the first few days of an epidemic how fast the disease will spread and how many people may be infected. Image Credit: Randy Wong Source: Sandia National Laboratories Permalink: http://www.sflorg.com/comm_center/tech/p492_08.html Time Stamp: 8/14/2008 at 3:45:18 AM UTC|