Friday, October 08, 2010

A Step Toward Lead-Free Electronics


Research published October 4 by materials engineers from the University of Leeds could help pave the way towards 100% lead-free electronics.
The work, carried out at the UK's synchrotron facility, Diamond Light Source, reveals the potential of a new artificial material to replace lead-based ceramics in countless electronic devices, ranging from inkjet printers and digital cameras to hospital ultrasound scanners and diesel fuel injectors.
European regulations now bar the use of most lead-containing materials in electronic and electrical devices. Ceramic crystals known as 'piezoelectrics' are currently exempt from these regulations but this may change in the future, owing to growing concerns over the disposal of lead-based materials.
Piezoelectric materials generate an electrical field when pressure is applied, and vice-versa. In gas igniters on ovens and fires, for example, piezoelectric crystals produce a high voltage when they are hit with a spring-loaded hammer, generating a spark across a small gap that lights the fuel.
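For a sense of why a hammer tap can generate a spark-producing voltage, here is a minimal back-of-envelope sketch; the piezoelectric coefficient, applied force, and capacitance below are illustrative assumptions rather than values from the Leeds study.

```python
# Back-of-envelope: voltage generated by a piezoelectric gas igniter.
# All numbers are illustrative assumptions, not measured values.
d33 = 300e-12          # piezoelectric charge coefficient, C/N (typical of PZT)
force = 1000.0         # force from the spring-loaded hammer, N (assumed)
capacitance = 50e-12   # capacitance of the crystal and spark-gap electrodes, F (assumed)

charge = d33 * force             # charge generated on the crystal faces, C
voltage = charge / capacitance   # V = Q / C

print(f"charge ~{charge * 1e9:.0f} nC, voltage ~{voltage / 1e3:.0f} kV")
# ~300 nC across ~50 pF gives several kilovolts -- enough to arc across a small gap.
```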
The most common piezoelectric material is a ceramic crystal called lead zirconate titanate, or PZT.
Using a high intensity X-ray beam at the Diamond Light Source, the University of Leeds researchers have now shown that a simple, lead-free ceramic could potentially do the same job as PZT.
"With the 'Extreme Conditions' beamline at Diamond we were able to probe the interior of the lead-free ceramic- potassium sodium bismuth titanate (KNBT) to learn more about its piezoelectric properties. We could see the changes in crystal structure actually happening while we applied the electric field," said Tim Comyn, lead investigator on the project."
"PZT is the best material for the job at the moment, because it has the greatest piezoelectric effect, good physical durability, and can be radically tailored to suit particular applications," said Adam Royles, PhD student on the project. "The lead-free ceramic that we have been studying is lightweight and can be used at room temperature. This could make it an ideal choice for many applications."
In the medical field, PZT is used in ultrasound transducers, where it both generates the sound waves and detects the returning echoes, which a computer converts into an image. Piezoelectric ceramics also hold great potential for efficient energy harvesting, a possible solution for a clean sustainable energy source in the future.
The Leeds team will continue to work at Diamond, using state-of-the-art detectors to study how the electric-field-induced transformation unfolds at high speed (1,000 times per second) and under various conditions.
The results of the work are published online in the journal Applied Physics Letters.

Geothermal Mapping Project Reveals Large, Green Energy Source in West Virginia


This illustration demonstrates subsurface temperatures at various depths in West Virginia from 4.5 km to 7.5 km, indicating the hottest geothermal resources for further exploration. 
New research produced by Southern Methodist University's Geothermal Laboratory, funded by a grant from Google.org, suggests that the temperature of the Earth beneath the state of West Virginia is significantly higher than previously estimated and capable of supporting commercial baseload geothermal energy production.
Geothermal energy is the use of the Earth's heat to produce heat and electricity. "Geothermal is an extremely reliable form of energy, and it generates power 24/7, which makes it a baseload source like coal or nuclear," said David Blackwell, Hamilton Professor of Geophysics and Director of the SMU Geothermal Laboratory.
The SMU Geothermal Laboratory has increased its estimate of West Virginia's geothermal generation potential to 18,890 megawatts (assuming a conservative 2% thermal recovery rate). The new estimate represents a 75 percent increase over estimates in MIT's 2006 "The Future of Geothermal Energy" report and exceeds the state's total current generating capacity, primarily coal based, of 16,350 megawatts.
Researchers from SMU's Geothermal Laboratory will present a detailed report on the discovery at the 2010 Geothermal Resources Council annual meeting in Sacramento, Oct. 24-27. A summary of the report is available online (http://smu.edu/smunews/geothermal/documents/west-virginia-temperatures.asp).
The West Virginia discovery is the result of new detailed mapping and interpretation of temperature data derived from oil, gas, and thermal gradient wells -- part of an ongoing project to update the Geothermal Map of North America that Blackwell produced with colleague Maria Richards in 2004. Temperatures below the Earth's surface almost always increase with depth, but the rate of increase (the thermal gradient) varies due to factors such as the thermal properties of the rock formations.
"By adding 1,455 new thermal data points from oil, gas, and water wells to our geologic model of West Virginia, we've discovered significantly more heat than previously thought," Blackwell said. "The existing oil and gas fields in West Virginia provide a geological guide that could help reduce uncertainties associated with geothermal exploration and also present an opportunity for co-producing geothermal electricity from hot waste fluids generated by existing oil and gas wells."
The high temperature zones beneath West Virginia revealed by the new mapping are concentrated in the eastern portion of the state. Starting at depths of 4.5 km (about 15,000 feet), temperatures reach over 150°C (300°F), which is hot enough for commercial geothermal power production.
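A quick back-of-envelope reading of those figures in terms of the thermal gradient; the surface temperature and the linear extrapolation below are simplifying assumptions, not part of the SMU analysis.

```python
# Rough geothermal-gradient sketch from the depths and temperatures quoted above.
surface_temp_c = 12.0    # assumed mean annual surface temperature, deg C
temp_at_4p5_km = 150.0   # temperature quoted at 4.5 km depth, deg C
depth_km = 4.5

gradient_c_per_km = (temp_at_4p5_km - surface_temp_c) / depth_km
print(f"implied gradient: ~{gradient_c_per_km:.0f} C/km")                    # ~31 C/km

# Extrapolating the same (assumed linear) gradient to the 7.5 km depth shown on the map:
print(f"temperature at 7.5 km: ~{surface_temp_c + gradient_c_per_km * 7.5:.0f} C")  # ~240 C
```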
Traditionally, commercial geothermal energy production has depended on high temperatures in existing subsurface reservoirs to produce electricity, requiring unique geological conditions found almost exclusively in tectonically active regions of the world, such as the western United States. Newer technologies and drilling methods can be used to develop resources in wider ranges of geologic conditions. Three non-conventional geothermal resources that can be developed in areas with little or no tectonic activity or volcanism such as West Virginia are:
  • Low-Temperature Hydrothermal -- Energy is produced from areas with naturally occurring high fluid volumes at temperatures ranging from 80°C (176°F) to 150°C (300°F) using advanced binary cycle technology. Low-Temperature systems have been developed in Alaska, Oregon, and Utah.
  • Geopressure and Co-produced Fluids Geothermal -- Oil and/or natural gas produced together with hot geothermal fluids drawn from the same well. Geopressure and Co-produced Fluids systems are currently operating or under development in Wyoming, North Dakota, Utah, Louisiana, Mississippi, and Texas.
  • Enhanced Geothermal Systems (EGS) -- Areas with low natural rock permeability but high temperatures of more than 150°C (300°F) are "enhanced" by injecting fluid and other reservoir engineering techniques. EGS resources are typically deeper than hydrothermal and represent the largest share of total geothermal resources. EGS is being pursued globally in Germany, Australia, France, the United Kingdom, and the U.S. EGS is being tested in deep sedimentary basins similar to West Virginia's in Germany and Australia.
"The early West Virginia research is very promising," Blackwell said, "but we still need more information about local geological conditions to refine estimates of the magnitude, distribution, and commercial significance of their geothermal resource."
Zachary Frone, an SMU graduate student researching the area said, "More detailed research on subsurface characteristics like depth, fluids, structure and rock properties will help determine the best methods for harnessing geothermal energy in West Virginia." The next step in evaluating the resource will be to locate specific target sites for focused investigations to validate the information used to calculate the geothermal energy potential in this study.
The team's work may also shed light on other similar geothermal resources. "We now know that two zones of Appalachian age structures are hot -- West Virginia and a large zone covering the intersection of Texas, Arkansas, and Louisiana known as the Ouachita Mountain region," said Blackwell. "Right now we don't have the data to fill in the area in between," Blackwell continued, "but it's possible we could see similar results over an even larger area."
Blackwell thinks the finding opens exciting possibilities for the region. "The proximity of West Virginia's large geothermal resource to east coast population centers has the potential to enhance U.S. energy security, reduce CO2 emissions, and develop high paying clean energy jobs in West Virginia," he said.
SMU's Geothermal Laboratory conducted this research through funding provided by Google.org's RE<C (Renewable Energy Cheaper than Coal) initiative.

New Graphene Fabrication Method Uses Silicon Carbide Templates to Create Desired Growth


Graphene transistors
Researchers at the Georgia Institute of Technology have developed a new "templated growth" technique for fabricating nanometer-scale graphene devices. The method addresses what had been a significant obstacle to the use of this promising material in future generations of high-performance electronic devices.
The technique involves etching patterns into the silicon carbide surfaces on which epitaxial graphene is grown. The patterns serve as templates directing the growth of graphene structures, allowing the formation of nanoribbons of specific widths without the use of e-beams or other destructive cutting techniques. Graphene nanoribbons produced with these templates have smooth edges that avoid electron-scattering problems.
"Using this approach, we can make very narrow ribbons of interconnected graphene without the rough edges," said Walt de Heer, a professor in the Georgia Tech School of Physics. "Anything that can be done to make small structures without having to cut them is going to be useful to the development of graphene electronics because if the edges are too rough, electrons passing through the ribbons scatter against the edges and reduce the desirable properties of graphene."
The new technique has been used to fabricate an array of 10,000 top-gated graphene transistors on a 0.24 square centimeter chip -- believed to be the highest density of graphene devices reported so far.
The research was reported Oct. 3 in the advance online edition of the journal Nature Nanotechnology. The work was supported by the National Science Foundation, the W.M. Keck Foundation and the Nanoelectronics Research Initiative Institute for Nanoelectronics Discovery and Exploration (INDEX).
In creating their graphene nanostructures, De Heer and his research team first use conventional microelectronics techniques to etch tiny "steps" -- or contours -- into a silicon carbide wafer. They then heat the contoured wafer to approximately 1,500 degrees Celsius, which initiates melting that polishes any rough edges left by the etching process.
They then use established techniques for growing graphene from silicon carbide by driving off the silicon atoms from the surface. Instead of producing a consistent layer of graphene one atom thick across the surface of the wafer, however, the researchers limit the heating time so that graphene grows only on the edges of the contours.
To do this, they take advantage of the fact that graphene grows more rapidly on certain facets of the silicon carbide crystal than on others. The width of the resulting nanoribbons is proportional to the depth of the contour, providing a mechanism for precisely controlling the nanoribbons. To form complex graphene structures, multiple etching steps can be carried out to create a complex template, de Heer explained.
"By using the silicon carbide to provide the template, we can grow graphene in exactly the sizes and shapes that we want," he said. "Cutting steps of various depths allows us to create graphene structures that are interconnected in the way we want them to be.
In nanometer-scale graphene ribbons, quantum confinement makes the material behave as a semiconductor suitable for creation of electronic devices. But in ribbons a micron or more wide, the material acts as a conductor. Controlling the depth of the silicon carbide template allows the researchers to create these different structures simultaneously, using the same growth process.
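The two relationships described above (ribbon width set by the depth of the etched step, and a confinement gap that shrinks as ribbons widen) can be sketched numerically. The sidewall facet angle and the band-gap prefactor below are assumptions drawn from typical literature values, not figures from the Georgia Tech paper.

```python
import math

# 1) Ribbon width from step depth: graphene grows on the inclined sidewall facet,
#    so the ribbon width is roughly the facet length, depth / sin(facet angle).
facet_angle_deg = 25.0   # assumed sidewall facet angle on SiC
depth_nm = 20.0          # assumed etch depth
width_nm = depth_nm / math.sin(math.radians(facet_angle_deg))
print(f"{depth_nm:.0f} nm step -> ~{width_nm:.0f} nm ribbon")    # ~47 nm

# 2) Confinement gap: for narrow ribbons the gap roughly scales as Eg ~ a / W,
#    with a prefactor on the order of 0.2-1 eV*nm (assumed here to be 0.5 eV*nm).
a_ev_nm = 0.5
for w_nm in (40.0, 1000.0):   # the two widths mentioned in the article
    gap_mev = 1000.0 * a_ev_nm / w_nm
    print(f"W = {w_nm:6.0f} nm -> Eg ~ {gap_mev:5.1f} meV")
# A 40 nm ribbon retains a measurable gap, while a micron-wide ribbon's gap is
# negligible, so it behaves as a conductor.
```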
"The same material can be either a conductor or a semiconductor depending on its shape," noted de Heer, who is also a faculty member in Georgia Tech's National Science Foundation-supported Materials Research Science and Engineering Center (MRSEC). "One of the major advantages of graphene electronics is to make the device leads and the semiconducting ribbons from the same material. That's important to avoid electrical resistance that builds up at junctions between different materials."
After formation of the nanoribbons -- which can be as narrow as 40 nanometers -- the researchers apply a dielectric material and metal gate to construct field-effect transistors. While successful fabrication of high-quality transistors demonstrates graphene's viability as an electronic material, de Heer sees them as only the first step in what could be done with the material.
"When we manage to make devices well on the nanoscale, we can then move on to make much smaller and finer structures that will go beyond conventional transistors to open up the possibility for more sophisticated devices that use electrons more like light than particles," he said. "If we can factor quantum mechanical features into electronics, that is going to open up a lot of new possibilities."
De Heer and his research team are now working to create smaller structures, and to integrate the graphene devices with silicon. The researchers are also working to improve the field-effect transistors with thinner dielectric materials.
Ultimately, graphene may be the basis for a generation of high-performance devices that will take advantage of the material's unique properties in applications where the higher cost can be justified. Silicon will continue to be used in applications that don't require such high performance, de Heer said.
"This is another step showing that our method of working with epitaxial graphene on silicon carbide is the right approach and the one that will probably be used for making graphene electronics," he added. "This is a significant new step toward electronics manufacturing with graphene."
In addition to those already mentioned, the research has involved M. Sprinkle, M. Ruan, Y. Hu, J. Hankinson, M. Rubio-Roy, B. Zhang, X. Wu and C. Berger.

2010 Nobel Prize in Chemistry: Creating Complex Carbon-Based Molecules Using Palladium


New ways of linking carbon atoms together have allowed scientists to make medicines and better electronics.
The Royal Swedish Academy of Sciences has awarded the Nobel Prize in Chemistry for 2010 to Richard F. Heck, Ei-ichi Negishi and Akira Suzuki for developing new ways of linking carbon atoms together that have allowed scientists to make medicines and better electronics.
American citizen Richard F. Heck, 79, of the University of Delaware in Newark, Delaware, Japanese citizens Akira Suzuki, 80, of Hokkaido University in Sapporo, Japan, and Ei-ichi Negishi, 75, of Purdue University in West Lafayette, Indiana, will share the 10 million Swedish crowns ($1.5 million) award for their development of "palladium-catalyzed cross couplings in organic systems."
Carbon, the atom that forms the backbone of the molecules of living organisms, is usually very stable, and it can be difficult to chemically synthesize large carbon-containing molecules in the laboratory. In the Heck reaction, Negishi reaction and Suzuki reaction, carbon atoms meet on a palladium atom, which acts as a catalyst. The carbon atoms attach to the palladium atom and are thus positioned close enough to each other for chemical reactions to start. This allows chemists to synthesize large, complex carbon-containing molecules.
The Academy said it's a "precise and efficient" tool that is used by researchers worldwide, "as well as in the commercial production of for example pharmaceuticals and molecules used in the electronics industry."
Great art in a test tube
Organic chemistry has developed into an art form where scientists produce marvelous chemical creations in their test tubes. Humankind benefits from this in the form of medicines, ever-more precise electronics and advanced technological materials. The Nobel Prize in Chemistry 2010 awards one of the most sophisticated tools available to chemists today.
This year's Nobel Prize in Chemistry is awarded to Richard F. Heck, Ei-ichi Negishi and Akira Suzuki for the development of palladium-catalyzed cross coupling. This chemical tool has vastly improved the possibilities for chemists to create sophisticated chemicals -- for example, carbon-based molecules as complex as those created by nature itself.
Carbon-based (organic) chemistry is the basis of life and is responsible for numerous fascinating natural phenomena: colour in flowers, snake venom and bacteria-killing substances such as penicillin. Organic chemistry has allowed man to build on nature's chemistry, making use of carbon's ability to provide a stable skeleton for functional molecules. This has yielded new medicines and revolutionary materials such as plastics.
In order to create these complex chemicals, chemists need to be able to join carbon atoms together. However, carbon is stable and carbon atoms do not easily react with one another. The first methods used by chemists to bind carbon atoms together were therefore based upon various techniques for rendering carbon more reactive. Such methods worked when creating simple molecules, but when synthesizing more complex molecules chemists ended up with too many unwanted by-products in their test tubes.
Palladium-catalyzed cross coupling solved that problem and provided chemists with a more precise and efficient tool to work with. In the Heck reaction, Negishi reaction and Suzuki reaction, carbon atoms meet on a palladium atom, whereupon their proximity to one another kick-starts the chemical reaction.
Palladium-catalyzed cross coupling is used in research worldwide, as well as in the commercial production of for example pharmaceuticals and molecules used in the electronics industry.

Possible Green Replacement for Asphalt Derived from Petroleum to Be Tested on Iowa Bike Trail


Christopher Williams used his Iowa State University and Institute for Transportation lab to study and develop asphalt mixtures made from bio-oil fractions.
Iowa State University's Christopher Williams was just trying to see if adding bio-oil to asphalt would improve the hot- and cold-weather performance of pavements. What he found was a possible green replacement for asphalt derived from petroleum.
That finding will move from Williams' laboratory at the Institute for Transportation's Asphalt Materials and Pavements Program at Iowa State to a demonstration project this fall. The project will pave part of a Des Moines bicycle trail with an asphalt mixture containing what is now known as Bioasphalt.
If the demonstration and other tests go well, "This would be great stuff for the state of Iowa," said Williams, an associate professor of civil, construction and environmental engineering.
He said that's for a lot of reasons: Asphalt mixtures derived from plants and trees could replace petroleum-based mixes. That could create a new market for Iowa crop residues. It could be a business opportunity for Iowans. And it saves energy and money because Bioasphalt can be mixed and paved at lower temperatures than conventional asphalt.
Bio-oil is created by a thermochemical process called fast pyrolysis. Corn stalks, wood wastes or other types of biomass are quickly heated without oxygen. The process produces a liquid bio-oil that can be used to manufacture fuels, chemicals and asphalt plus a solid product called biochar that can be used to enrich soils and remove greenhouse gases from the atmosphere.
Robert C. Brown -- an Anson Marston Distinguished Professor of Engineering, the Gary and Donna Hoover Chair in Mechanical Engineering and the Iowa Farm Bureau director of Iowa State's Bioeconomy Institute -- has led research and development of fast pyrolysis technologies at Iowa State. Three of his former graduate students -- Jared Brown, Cody Ellens and Anthony Pollard, all December 2009 graduates -- have established a startup company, Avello Bioenergy Inc., that specializes in pyrolysis technology that improves, collects and separates bio-oil into various liquid fractions.
Williams used bio-oil fractions provided by Brown's fast pyrolysis facility at Iowa State's BioCentury Research Farm to study and develop Bioasphalt. That research was supported by the Iowa Energy Center and the Iowa Department of Transportation.
Avello has licensed the Bioasphalt technology from the Iowa State University Research Foundation Inc. and has produced oak-based bio-oil fractions for the bike trail project using funding from the Iowa Department of Economic Development. Williams said the project will include a mix of 5 percent Bioasphalt.
Jeb Brewer, the city engineer for the City of Des Moines, said the Bioasphalt will be part of phase two of the Waveland Trail on the city's northwest side. The 10-foot-wide trail will run along the west side of Glendale Cemetery from University Avenue to Franklin Avenue.
Brewer said the demonstration project is a good fit for the city.
"We have a fairly active program for finding ways to conserve energy and be more sustainable," he said. "We're interested in seeing how this works out and whether it can be part of our toolbox to create more sustainable projects."
Contractors involved in the Bioasphalt demonstration project are Elder Corp. of Des Moines, Bituminous Materials and Supplies of Des Moines and Grimes Asphalt and Paving Corp. of Grimes with the Asphalt Paving Association of Iowa supporting the project.
Iowa State's Williams said a successful demonstration would lead to more pavement tests containing higher and higher percentages of Bioasphalt.
"This demonstration project is a great opportunity," he said. "We're introducing a green technology into a green environment in Des Moines. And it's a technology that's been developed here in Iowa."

Fuel Cells in Operation: A Closer Look


In a basic solid oxide fuel cell, diagrammed at left, the cathode on one side of the electrolyte ionizes oxygen, which flows through the electrolyte to the anode (left), where fuel is oxidized to free electrons. In the model cell built for the APXPS experiment, all the components are on the same side of the electrolyte and can be reached by the x-ray beam.
Measuring a fuel cell's overall performance is relatively easy, but measuring its components individually as they work together is a challenge. That's because one of the best experimental techniques for investigating the details of an electrochemical device while it's operating, x-ray photoelectron spectroscopy (XPS), traditionally works only in a vacuum, while fuel cells need gases under pressure to function.
Now a team of scientists from the University of Maryland, the U.S. Department of Energy's Sandia National Laboratories, and DOE's Lawrence Berkeley National Laboratory has used a new kind of XPS, called ambient-pressure XPS (APXPS), to examine every feature of a working solid oxide electrochemical cell. The tests were made while the sample cell operated in an atmosphere of hydrogen and water vapor at one millibar pressure (about one-thousandth atmospheric pressure) and at very high temperatures, up to 750° Celsius (1,382 degrees Fahrenheit).
"Our team, led by Bryan Eichhorn of the Department of Chemistry and Biochemistry at the University of Maryland, combined the expertise in fuel cells at U Maryland, the experience of our Sandia Lab colleagues in collecting electrochemical data, and Berkeley Lab's own development of a method for doing x-ray photoelectron spectroscopy in situ," says Zahid Hussain of Berkeley Lab's Advanced Light Source (ALS). "Together we were able to measure the fundamental properties of a solid oxide fuel cell under realistic operating conditions."
The researchers report their results in the November 2010 issue of Nature Materials, in an article now published online.
How a solid oxide fuel cell works
Like a battery, a fuel cell is a device that uses chemical reactions to produce electricity. Unlike a battery, a fuel cell won't run down as long as it's supplied with fuel and oxidant from outside. The main components are two electrodes, an anode and a cathode, separated by an electrolyte.
In a solid oxide cell (SOC) the cycle begins at the cathode, which ionizes oxygen (usually from air) by adding free electrons. These oxygen ions then flow through the solid oxide electrolyte (from which the SOC gets its name), often a material known as yttria-stabilized zirconia. High temperature is needed to maintain good conduction of oxygen ions through the electrolyte.
The oxygen ions travel through the electrolyte to reach the anode, where they oxidize the fuel. (The fuel may be pure hydrogen gas or a hydrocarbon.) Electrons freed by oxidation form the current in the device's electrical circuit and eventually return to the cathode. Unused fuel or other compounds, plus water formed from the positive hydrogen ions and negative oxygen ions, exit the fuel cell.
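For a hydrogen-fueled cell, the standard textbook half-reactions behind this cycle are:

  Cathode (reduction):  O2 + 4 e- -> 2 O^2-
  Anode (oxidation):    H2 + O^2- -> H2O + 2 e-
  Overall:              2 H2 + O2 -> 2 H2O

The electrons released at the anode are what flow through the external circuit back to the cathode, giving the current the cell delivers.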
For the APXPS experiment, the University of Maryland collaborators built a model fuel cell that combined the essential elements of an SOC in a special miniaturized design less than two millimeters in length. Except for the electrolyte of yttria-stabilized zirconia, which formed the base of the device, the various components were thin films measuring from 30 nanometers (billionths of a meter) up to 300 nanometers thick.
Says the University of Maryland's Eichhorn, "We designed and fabricated solid oxide electrochemical cells that provided precise dimensional control of all the components, while providing full optical access to the entire cell from anode to cathode."
Instead of stacking the components as in a real fuel cell, the sample's arrangement was a planar design that placed all the components on the same side of the electrolyte, so the x-ray beam from the ALS could reach them. This allowed direct measurement of local chemical states and electric potentials at surfaces and interfaces during the cell's operation.
Introducing ambient-pressure x-ray photoelectron spectroscopy
Photoemission occurs when light ejects electrons from a material. By collecting the emitted electrons and analyzing their energies and trajectories, photoelectron spectroscopy establishes exactly what elements are in the material and their chemical and electronic states within narrow regions. At the Advanced Light Source, intense x-ray light is used to explore what happens at or near the surface of materials: the only photoelectrons that can escape are from atoms near the surface.
The APXPS system begins by shining the x-ray beam on the sample fuel cell inside a chamber at the ambient pressure of the gas needed for it to operate. The emitted electrons then travel through chambers pumped to lower pressure, finally entering the high-vacuum chamber of the detector. By itself this arrangement would lose emitted electrons at every stage because of their spreading trajectories, leaving a signal too weak to be useful. So Berkeley Lab researchers developed a system of "lenses" -- not made of glass but of electric fields -- to capture and refocus the emitted electrons at each stage, preventing excessive loss.
"This is what allows us to find out what's happening within small regions on the surface of a sample in the presence of a gas," says Hendrik Bluhm of Berkeley Lab's Chemical Sciences Division, one of the inventors of APXPS, which was awarded a coveted R&D 100 Award in 2010. "Using the APXPS instruments at the ALS's molecular environmental science beamline, 11.0.2., and the chemical and materials science beamline, 9.3.2, we can spatially correlate the catalytic activity with the electrical electrical potentials across the different components of the model fuel cells."
Says Zhi Liu of the ALS, "At first we weren't sure we could use this technique with an operating fuel cell, because we had to bring it to 750° C -- an extreme temperature for such ambient pressure experiments. Few people have done it before. Now we're able to perform this kind of analysis routinely."
Michael Grass of the ALS says, "What you need to know to improve any kind of fuel cell is where the inefficiencies are -- places where energy is being lost compared to what theoretically should be possible. By scanning across the surface of the cell while it was operating, we could directly measure both the inefficiencies and the chemical states associated with them."
A new way to study electrochemistry in action
With their model SOC, the Maryland-Sandia-Berkeley Lab team saw details never seen before in an operating fuel cell. Where an overall measurement gave only the fuel cell's total losses in potential energy, the APXPS measurements found the local potential losses associated with the interfaces of electrode and electrolyte, as well as with charge transport within the ceria electrode. The sum of the losses was equal to the cell's total loss, or inefficiency.
"The in situ XPS experiments at 750 C allowed us to pinpoint the electroactive regions, measure length scales of electron transport through mixed ionic-electronic conductors, and map out potential losses across the entire cell," Eichhorn says. "Others have suggested similar experiments in the past, but it was the remarkable facilities and scientific expertise at the ALS that facilitated these challenging measurements for the first time."
APXPS can provide this kind of fundamental information to solid oxide fuel cell designers, information not available using any other technique. New fuel cell designs are already taking advantage of this new way to study fuel cells in operation.
"Measuring fundamental properties in operating solid oxide electrochemical cells by using in situ x-ray photoelectron spectroscopy," by Chunjuan Zhang, Michael Grass, Anthony McDaniel, Steven DeCaluwe, Farid El Gabaly, Zhi Liu, Kevin McCarty, Roger Farrow, Mark Linne, Zahid Hussain, Gregory Jackson, Hendrik Bluhm, and Bryan Eichhorn, appears in the November, 2010 issue of Nature Materials and is available to subscribers in advance online publication. Zhang, DeCaluwe, Jackson, and Eichhorn are with the University of Maryland. McDaniel, Gabaly, McCarty, Farrow, and Linne are with Sandia National Laboratories. Grass, Liu, Hussain, and Bluhm are with Berkeley Lab.

Greatest Warming Is in the North, but Biggest Impact on Life Is in the Tropics, New Research Shows


New research finds that even though the temperature increase has been smaller in the tropics, the impact of warming on life could be much greater there than in colder climates.
In recent decades documented biological changes in the far Northern Hemisphere have been attributed to global warming, changes from species extinctions to shifting geographic ranges. Such changes were expected because warming has been fastest in the northern temperate zone and the Arctic.
But new research published in the Oct. 7 edition of Nature adds to growing evidence that, even though the temperature increase has been smaller in the tropics, the impact of warming on life could be much greater there than in colder climates.
The study focused on ectothermic, or cold-blooded, organisms (those whose body temperature approximates the temperature of their surroundings). Researchers used nearly 500 million temperature readings from more than 3,000 stations around the world to chart temperature increases from 1961 through 2009, then examined the effect of those increases on metabolism.
"The expectation was that physiological changes would also be greatest in the north temperate-Arctic region, but when we ran the numbers that expectation was flipped on its head," said lead author Michael Dillon, an assistant professor of zoology and physiology at the University of Wyoming.
Metabolic changes are key to understanding some major impacts of climate warming because a higher metabolic rate requires more food and more oxygen, said co-author Raymond Huey, a University of Washington biology professor. If, for example, an organism has to spend more time eating or conserving energy, it might have less time and energy for reproduction.
"Metabolic rate tells you how fast the animal is living and thus its intensity of life," Huey said.
Using a well-documented, century-old understanding that metabolic rates for cold-blooded animals increase faster the warmer the temperature, the researchers determined that the effects on metabolism will be greatest in the tropics, even though that region has the smallest actual warming. Metabolic impacts will be less in the Arctic, even though it has shown the most warming. In essence, organisms in the tropics show greater effects because they start at much higher temperatures than animals in the Arctic.
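A minimal numerical sketch of that reasoning, assuming a simple Q10-style exponential dependence of metabolic rate on temperature (the Q10 value, baseline temperatures and warming amounts below are illustrative assumptions, not figures from the Nature study):

```python
# Why a small warming in the warm tropics can matter more than a larger warming
# in the cold Arctic: metabolic rate rises roughly exponentially with temperature,
# so the absolute change is bigger when the starting temperature is already high.
Q10 = 2.0   # assumed: metabolic rate doubles for every 10 C increase

def relative_metabolic_rate(temp_c):
    """Relative metabolic rate under a simple Q10 model (illustrative only)."""
    return Q10 ** (temp_c / 10.0)

cases = {"Arctic": (0.0, 3.0), "Tropics": (25.0, 1.0)}  # (baseline C, warming C), assumed
for region, (t0, warming) in cases.items():
    before = relative_metabolic_rate(t0)
    after = relative_metabolic_rate(t0 + warming)
    print(f"{region:7s}: rate {before:.2f} -> {after:.2f}  (absolute change {after - before:.2f})")
# Arctic:  1.00 -> 1.23 (change ~0.23)
# Tropics: 5.66 -> 6.06 (change ~0.41) -- larger, despite the smaller warming.
```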
Dillon and co-author George Wang of the Max Planck Institute for Developmental Biology in Tübingen, Germany, sifted through temperature data maintained by the National Oceanic and Atmospheric Administration's National Climatic Data Center. They came up with readings from 3,186 stations that met their criteria of recording temperature at least every six hours during every season from 1961 through 2009. The stations, though not evenly spaced, represented every region of the globe except Antarctica.
The data, the scientists said, reflect temperature changes since 1980 that are consistent with other recent findings that show the Earth is getting warmer. Temperatures rose fastest in the Arctic, not quite as fast in the northern temperate zone and even more slowly in the tropics.
"Just because the temperature change in the tropics is small doesn't mean the biological impacts will be small," Huey said. "All of the studies we're doing suggest the opposite is true."
In fact, previous research from the University of Washington has indicated that small temperature changes can push tropical organisms beyond their optimal body temperatures and cause substantial stress, while organisms in temperate and polar regions can tolerate much larger increases because they already are used to large seasonal temperature swings.
The scientists say the effects of warming temperatures in the tropics have largely been ignored because temperature increases have been much greater farther north and because so few researchers work in the tropics.
"I think this argues strongly that we need more studies of the impacts of warming on organisms in the tropics," Dillon said.
The work was funded in part by the National Science Foundation.

Real Price of Each Pack of Cigarettes Is Nearly $150, Spanish Study Finds


The real cost of cigarettes is far higher than the retail price when premature death is taken into account.
Researchers from the Polytechnic University of Cartagena (UPCT) estimate that each pack of cigarettes really costs €107 for men and €75 for women, when premature death is taken into account. These figures confirm previous studies, and are of key importance in the cost-benefit analysis of smoking-prevention policies.
"One of the conclusions of the article is that the price one pays for each pack of cigarettes at a newsstand is only a very small price of the true price that smokers pay for their habit," says Ángel López Nicolás, co-author of the study that has been published in the Revista Española de Salud Pública and a researcher at the UPCT.
"Given that tobacco consumption raises the risk of death in comparison with non-smokers, it can be assigned a premature death cost for people who do smoke," the researcher explains.
According to the study, the average cost of a pack of cigarettes is not in fact €3-4, but €107 for male smokers and €75 for female smokers.
The study questions the axiom of classic economics on "consumer sovereignty," saying that those who smoke do not do so because the pleasure of smoking is greater than its cost, but rather because of the addictive power of nicotine and their failure to understand its true cost.
In order to determine the mortality cost associated with tobacco consumption in Spain, the experts used the so-called Value of a Statistical Life (VSL), in other words the amount that people are prepared to pay in order to reduce their risk of death. The study puts the average VSL at €2.91 million. "For smokers this is €3.78 million," López Nicolás explains.
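To see roughly how a per-pack figure of that size relates to the VSL, here is an illustrative back-solve; the smoking history assumed below (one pack a day for 40 years) is hypothetical and is not taken from the UPCT study.

```python
# Illustrative back-solve: what a ~107 euro per-pack mortality cost implies over a
# hypothetical smoking career, compared with the smokers' VSL of 3.78 million euros.
cost_per_pack_eur = 107.0      # per-pack mortality cost for men (from the study)
packs_per_day = 1.0            # assumed
years_smoking = 40             # assumed
vsl_smokers_eur = 3.78e6       # from the study

lifetime_packs = packs_per_day * 365 * years_smoking
lifetime_cost = cost_per_pack_eur * lifetime_packs

print(f"lifetime packs: {lifetime_packs:,.0f}")
print(f"implied lifetime mortality cost: {lifetime_cost / 1e6:.2f} million euros")
print(f"share of smokers' VSL: {lifetime_cost / vsl_smokers_eur:.0%}")
# ~14,600 packs -> ~1.56 million euros, i.e. roughly 40% of the estimated VSL.
```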
"But one must not confuse the cost of premature death with the cost of healthcare. The cost of premature death is borne by the smokers themselves," López points out.
The team also drew on data on workers from the European Community Household Panel (ECHP) for the 1996-2001 period, and on the results of the Ministry of Labour and Immigration's Survey on Occupational Accidents.
Understanding the costs helps to prevent smoking
"The estimated cost of premature death from a pack of cigarettes is a key element in the cost-benefit analysis of policies designed to prevent and control smoking," the researchers say.
In this sense, the study indicates that the taxes and smoking restrictions imposed in public places strengthen smokers' self-control mechanisms. According to the study, "smoking prevention and control policies could generate considerable social benefits, since the wellbeing losses associated with tobacco consumption are much greater than suggested by the external costs."
"Despite the law on healthcare measures to combat smoking having come into effect in 2006, more can still be done in Spain on measures to control tobacco consumption," the experts conclude.

Wednesday, October 06, 2010

Human waste power plant goes online in the UK

The new biogas plant, sited next to the Didcot sewage works in Oxfordshire, has been officially opened by Energy and Climate Change Secretary Chris Huhne

The biomethane project we featured in May, which turns human waste into green gas, has now gone live. The project is now converting the treated sewage of 14 million Thames Water customers into clean, green gas and is pumping that gas into people's homes.
The new biogas plant – sited next to the Didcot sewage works in Oxfordshire – has been officially opened by Energy and Climate Change Secretary Chris Huhne, who said: "It's not every day that a Secretary of State can announce that, for the first time ever in the UK, people can cook and heat their homes with gas generated from sewage. This is an historic day for the companies involved, for energy from waste technologies, and for progress to increase the amount of renewable energy in the UK."
The plant is hoped to be the first of many such installations. The process starts when one of Thames Water's 14 million customers flushes the loo. The waste makes its way to the Didcot sewage works to begin its treatment and/or recycling. The solids, or sludge, go on to be warmed up in huge vats so that bacteria can break down any biodegradable material in a process known as anaerobic digestion.
The end result of this process is biogas, which is further cleaned up before being fed into the gas grid. It takes around 20 days from flush to finish for the process to complete, and it will produce enough renewable gas to supply up to 200 homes.
The average person is said to produce about 30 kg (66 lb, dry weight) of sludge every year. This means that if all the 9,600 waste treatment facilities in the UK similarly processed sewage from the whole population, it could meet the annual gas demand of over 200,000 homes. A study by National Grid has indicated that up to 15 per cent of domestic gas needs could be met by biomethane as soon as the year 2020.
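A back-of-envelope check of how those figures hang together; the UK population, typical household gas demand, and the implied energy yield per kilogram of dry sludge below are assumptions for illustration, not numbers from Thames Water or National Grid.

```python
# Rough consistency check of the '200,000 homes' claim.
uk_population = 62e6             # assumed, circa 2010
sludge_kg_per_person = 30.0      # dry weight per year (from the article)
household_gas_kwh = 16500.0      # assumed annual gas demand of a typical UK home
homes_claimed = 200000

total_sludge_kg = uk_population * sludge_kg_per_person
implied_yield = homes_claimed * household_gas_kwh / total_sludge_kg
print(f"implied biomethane yield: ~{implied_yield:.1f} kWh per kg of dry sludge")
# ~1.8 kWh/kg -- plausible, given that dry sludge holds a few kWh/kg of chemical
# energy and anaerobic digestion recovers only part of it as biogas.
```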
Martin Baggs of Thames Water said: "We already produce GBP15 million [US$23.8 million] a year of electricity by burning biogas from the 2.8 billion liters [739.7 million US gallons] a day of sewage produced by our 14 million customers. Feeding this renewable gas directly into the gas grid is the logical next step in our 'energy from waste' business. What we have jointly achieved at Didcot is a sign of what is to come."
The joint venture between Thames Water, British Gas and Scotia Gas Networks is seen as an important move towards low carbon gas production in the UK. According to Gearóid Lane of British Gas, the project "is just one part of a bigger project, which will see us using brewery and food waste and farm slurry to generate gas to heat our British Gas homes."
The biogas project took six months to complete at a cost of GBP2.5 million (US$3.9 million).

NASA Mission 'E-Minus' One Month to Comet Flyby


Logo of NASA's EPOXI mission, which is just one month away from its encounter with comet Hartley 2.
Fans of space exploration are familiar with the term T-minus, which NASA uses as a countdown to a rocket launch. But what of those noteworthy mission events where you already have a spacecraft in space, as with the upcoming flyby of a comet?
"We use 'E-minus' to help with our mission planning," said Tim Larson, EPOXI mission project manager at NASA's Jet Propulsion Laboratory in Pasadena, Calif. "The 'E' stands for encounter, and that is exactly what is going to happen one month from today, when our spacecraft has a close encounter with comet Hartley 2."
The EPOXI mission's Nov. 4 encounter with Hartley 2 will be only the fifth time in history that a comet has been imaged close-up. At point of closest approach, the spacecraft will be about 700 kilometers (435 miles) from the comet.
"Hartley 2 better not blink, because we'll be screaming by at 12.3 kilometers per second (7.6 miles per second), said Larson.
One month out, the spacecraft is closing the distance with the comet at a rate of 976,000 kilometers (607,000 miles) per day. As it gets closer, the rate of closure will increase to a little over 1,000,000 kilometers (620,000 miles) per day.
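The closing rate follows directly from the flyby speed; a quick unit-conversion check:

```python
# 12.3 km/s expressed as a daily closing rate.
speed_km_per_s = 12.3
seconds_per_day = 86_400
print(f"{speed_km_per_s * seconds_per_day:,.0f} km/day")   # ~1,062,700 km/day
# That matches 'a little over 1,000,000 kilometers per day' near encounter; the
# current 976,000 km/day reflects a slightly lower relative speed one month out.
```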
For those interested in what the "T-minus" stands for in a NASA countdown to a rocket launch -- it translates to "Time-minus." For example, when a rocket is getting ready for liftoff, it will be lifting off at a specific time. If that time is 45 seconds away, it is said to be "T-minus 45 seconds and counting."
EPOXI is an extended mission that utilizes the already "in-flight" Deep Impact spacecraft to explore distinct celestial targets of opportunity. The name EPOXI itself is a combination of the names for the two extended mission components: the extrasolar planet observations, called Extrasolar Planet Observations and Characterization (EPOCh), and the flyby of comet Hartley 2, called the Deep Impact Extended Investigation (DIXI). The spacecraft will continue to be referred to as "Deep Impact."
NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the EPOXI mission for NASA's Science Mission Directorate, Washington. The University of Maryland, College Park, is home to the mission's principal investigator, Michael A'Hearn. Drake Deming of NASA's Goddard Space Flight Center, Greenbelt, Md., is the science lead for the mission's extrasolar planet observations. The spacecraft was built for NASA by Ball Aerospace & Technologies Corp., Boulder, Colo.

Wind Farms Extend Growing Season in Certain Regions


Many wind farms, especially in the Midwestern United States, are located on farmland. According to Roy, the nocturnal warming effect could offer farmland some measure of frost protection and may even slightly extend the growing season. 
Wind power is likely to play a large role in the future of sustainable, clean energy, but wide-scale adoption has remained elusive. Now, researchers have characterized wind farms' effects on local temperatures and proposed strategies for mitigating those effects, increasing the potential to expand wind farms into a utility-scale energy resource.
Led by University of Illinois professor of atmospheric sciences Somnath Baidya Roy, the research team will publish its findings in the Proceedings of the National Academy of Sciences. The paper will appear in the journal's Online Early Edition this week.
Roy first proposed a model describing the local climate impact of wind farms in a 2004 paper. But that and similar subsequent studies have been based solely on models because of a lack of available data. In fact, no field data on temperature were publicly available for researchers to use, until Roy met Neil Kelley at a 2009 conference. Kelley, a principal scientist at the National Wind Technology Center, part of the National Renewable Energy Laboratory, had collected temperature data at a wind farm in San Gorgonio, Calif., for more than seven weeks in 1989.
Analysis of Kelley's data corroborated Roy's modeling studies and provided the first observation-based evidence of wind farms' effects on local temperature. The study found that the area immediately surrounding turbines was slightly cooler during the day and slightly warmer at night than the rest of the region.
As a small-scale modeling expert, Roy was most interested in determining the processes that drive the daytime cooling and nocturnal warming effects. He identified an enhanced vertical mixing of warm and cool air in the atmosphere in the wake of the turbine rotors. As the rotors turn, they generate turbulence, like the wake of a speedboat motor. Upper-level air is pulled down toward the surface while surface-level air is pushed up, causing warmer and cooler air to mix.
The question for any given wind-farm site then becomes, will warming or cooling be the predominant effect?
"It depends on the location," Roy said. "For example, in the Great Plains region, the winds are typically stronger at night, so the nocturnal effect may dominate. In a region where daytime winds are stronger -- for example a sea breeze -- then the cooling effect will dominate. It's a very location-specific thing."
Many wind farms, especially in the Midwestern United States, are located on farmland. According to Roy, the nocturnal warming effect could offer farmland some measure of frost protection and may even slightly extend the growing season.
Understanding the temperature effects and the processes that cause them also allows researchers to develop strategies to mitigate wind farms' impact on local climate. The group identified two possible solutions. First, engineers could develop low-turbulence rotors. Less turbulence would not only lead to less vertical mixing and therefore less climate impact, but also would be more efficient for energy generation. However, research and development for such a device could be a costly, labor-intensive process.
The second mitigation strategy is locational. Turbulence from the rotors has much less consequence in an already turbulent atmosphere. The researchers used global data to identify regions where temperature effects of large wind farms are likely to be low because of natural mixing in the atmosphere, providing ideal sites.
"These regions include the Midwest and the Great Plains as well as large parts of Europe and China," Roy said. "This was a very coarse-scale study, but it would be easy to do a local-scale study to compare possible locations."
Next, Roy's group will generate models looking at both temperature and moisture transport using data from and simulations of commercial rotors and turbines. They also plan to study the extent of the thermodynamic effects, both in terms of local magnitude and of how far downwind the effects spread.
"The time is right for this kind of research so that, before we take a leap, we make sure it can be done right," Roy said. "We want to identify the best way to sustain an explosive growth in wind energy over the long term. Wind energy is likely to be a part of the solution to the atmospheric carbon dioxide and the global warming problem. By indentifying impacts and potential mitigation strategies, this study will contribute to the long-term sustainability of wind power."

NASA's WISE Mission Warms Up but Keeps Chugging Along


Cosmic Rosebud: This image shows a cosmic rosebud blossoming with new stars. The stars, called the Berkeley 59 cluster, are the blue dots to the right of the image center. They are ripening out of the dust cloud from which they formed, and at just a few million years old, are young on stellar time scales.
After completing its primary mission to map the infrared sky, NASA's Wide-field Infrared Survey Explorer, or WISE, has reached the expected end of its onboard supply of frozen coolant. Although WISE has 'warmed up,' NASA has decided the mission will still continue. WISE will now focus on our nearest neighbors -- the asteroids and comets traveling together with our solar system's planets around the sun.
"Two of our four infrared detectors still work even at warmer temperatures, so we can use those bands to continue our hunt for asteroids and comets," said Amy Mainzer of NASA's Jet Propulsion Laboratory in Pasadena, Calif. Mainzer is the principal investigator of the new phase of the mission, now known as the NEOWISE Post-Cryogenic Mission. It takes its name from the acronym for a near-Earth object, NEO, and WISE. A cryogen is a coolant used to make the detectors more sensitive. In the case of WISE, the cryogen was frozen hydrogen.
WISE launched Dec. 14, 2009, from Vandenberg Air Force Base in California aboard a Delta II launch vehicle. Its 40-centimeter (16-inch) infrared telescope scans the skies from an Earth-circling orbit crossing the poles. It has already snapped more than 1.8 million pictures at four infrared wavelengths. Currently, the survey has covered the sky about one-and-one-half times, producing a vast catalogue containing hundreds of millions of objects, from near-Earth asteroids to cool stars called "brown dwarfs," to distant, luminous galaxies.
To date, WISE has discovered 19 comets and more than 33,500 asteroids, including 120 near-Earth objects, which are those bodies with orbits that pass relatively close to Earth's path around the sun. More discoveries regarding objects outside our solar system, such as the brown dwarfs and luminous galaxies, are expected.
"The science data collected by WISE will be used by the scientific community for decades," said Jaya Bajpayee, the WISE program executive in the Astrophysics Division of NASA's Science Mission Directorate, at the agency's headquarters in Washington. "It will also provide a sky map for future observatories like NASA's James Webb Space Telescope."
The NEOWISE Post-Cryogenic Mission is designed to complete the survey of the solar system and finish the second survey of the rest of the sky at its new warmer temperature of about minus 203 degrees Celsius (minus 334 degrees Fahrenheit) using its two shortest-wavelength detectors. The survey extension will last one to four months, depending on early results.
NEOWISE will also keep observing other targets, such as the closest brown dwarfs to the sun. In addition, data from the second sky scan will help identify objects that have moved in the sky since they were first detected by WISE. This allows astronomers to pick out the brown dwarfs closest to our sun. The closer the object is, the more it will appear to move from our point of view.
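That "closer objects appear to move more" point is the usual proper-motion relation; a small sketch with assumed transverse speeds and distances (not WISE measurements):

```python
# Proper motion falls off inversely with distance:
#   mu [arcsec/yr] = v_t [km/s] / (4.74 * d [parsecs])
def proper_motion_arcsec_per_yr(v_transverse_km_s, distance_pc):
    return v_transverse_km_s / (4.74 * distance_pc)

v_t = 30.0                       # assumed transverse speed, km/s
for d_pc in (5.0, 20.0, 100.0):  # assumed distances, parsecs
    mu = proper_motion_arcsec_per_yr(v_t, d_pc)
    print(f"d = {d_pc:5.0f} pc -> {mu:.2f} arcsec/yr")
# A brown dwarf a few parsecs away shifts by of order an arcsecond per year,
# roughly 20 times more than the same motion seen from 100 pc, so a second sky
# pass months later readily picks out the nearest objects.
```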
The WISE science team now is analyzing millions of objects captured in the images, including many never seen before. A first batch of WISE data, covering more than half the sky, will be released to the astronomical community in spring 2011, with the rest to follow about one year later.
"WISE has provided a guidebook to the universe with thousands of targets worth viewing with a large telescope," said Edward (Ned) Wright, WISE principal investigator from UCLA. "We're working on figuring out just how far away the brown dwarfs are, and how luminous the galaxies are."
A gallery of WISE images is available at: http://www.nasa.gov/mission_pages/WISE/multimedia/gallery/gallery-index.html.
JPL manages the Wide-field Infrared Survey Explorer for the Science Mission Directorate. The science instrument was built by the Space Dynamics Laboratory in Logan, Utah, and the spacecraft was built by Ball Aerospace & Technologies Corp. in Boulder, Colo. Science operations and data processing take place at the Infrared Processing and Analysis Center at the California Institute of Technology in Pasadena.

Tuesday, October 05, 2010

Painless Way to Achieve Huge Energy Savings: Stop Wasting Food

A team of researchers recently reported that Americans waste so much food that eliminating the waste entirely would save the nation the energy equivalent of at least 350 million barrels of oil annually (about 2 percent of the nation's yearly energy budget). Commenting on the study findings, Michael Webber, associate director of the University of Texas at Austin's Center for International Energy and Environmental Policy, stated, "As a nation we're grappling with energy issues. A lot more energy goes into food than people realize."
Scientists have identified a way that the United States could immediately save the energy equivalent of about 350 million barrels of oil a year — without spending a penny or putting a ding in the quality of life: Just stop wasting food.
Their study, reported in ACS' semi-monthly journal Environmental Science & Technology, found that it takes the equivalent of about 1.4 billion barrels of oil to produce, package, prepare, preserve and distribute a year's worth of food in the United States.
Michael Webber and Amanda Cuéllar note that food contains energy and requires energy to produce, process, and transport. Estimates indicate that between 8 and 16 percent of energy consumption in the United States went toward food production in 2007. Despite this large energy investment, the U.S. Department of Agriculture estimates that people in the U.S. waste about 27 percent of their food. The scientists realized that the waste might represent a largely unrecognized opportunity to conserve energy and help control global warming.
Their analysis of wasted food and the energy needed to ready it for consumption concluded that the U.S. wasted about 2030 trillion BTU of energy in 2007, or the equivalent of about 350 million barrels of oil. That represents about 2 percent of annual energy consumption in the U.S.
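Two quick consistency checks on those figures; the only number added here is the standard conversion of roughly 5.8 million BTU per barrel of crude oil.

```python
# Consistency checks on the food-waste energy figures quoted above.
btu_per_barrel = 5.8e6        # standard conversion, BTU per barrel of crude oil
wasted_energy_btu = 2030e12   # 2,030 trillion BTU (from the study)

barrels = wasted_energy_btu / btu_per_barrel
print(f"wasted energy: ~{barrels / 1e6:.0f} million barrels of oil equivalent")  # ~350 million

# Cross-check: ~350 million barrels wasted out of ~1.4 billion barrels used for food overall.
print(f"wasted share of food-system energy: {350e6 / 1.4e9:.0%}")
# ~25%, in line with the USDA estimate that about 27% of food is wasted.
```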
"Consequently, the energy embedded in wasted food represents a substantial target for decreasing energy consumption in the U.S.," the article notes. "The wasted energy calculated here is a conservative estimate both because the food waste data are incomplete and outdated and the energy consumption data for food service and sales are incomplete."

Physicists Control Chemical Reactions Mechanically


Giovanni Zocchi, UCLA professor of physics (right), and UCLA graduate student Hao Qu.
UCLA physicists have taken a significant step in controlling chemical reactions mechanically, an important advance in nanotechnology, UCLA physics professor Giovanni Zocchi and colleagues report.
Chemical reactions in the cell are catalyzed by enzymes, which are protein molecules that speed up reactions. Each protein catalyzes a specific reaction. In a chemical reaction, two molecules collide and exchange atoms; the enzyme is the third party, the "midwife to the reaction."
But the molecules have to collide in a certain way for the reaction to occur. The enzyme binds to the molecules and lines them up, forcing them to collide in the "right" way, so the probability that the molecules will exchange atoms is much higher.
"Instead of just watching what the molecules do, we can mechanically prod them," said Zocchi, the senior author of the research.
To do that, Zocchi and his graduate students, Chiao-Yu Tseng and Andrew Wang, attached a controllable molecular spring made of DNA to the enzyme. The spring is about 10,000 times smaller than the diameter of a human hair. They can mechanically turn the enzyme on and off and control how fast the chemical reaction occurs. In their newest research, they attached the molecular spring at three different locations on the enzyme and were able to mechanically influence different specific steps of the reaction.
They published their research in the journal Europhysics Letters, a publication of the European Physical Society, in July.
"We have stressed the enzyme in different ways," Zocchi said. "We can measure the effect on the chemical reaction of stressing the molecule this way or that way. Stressing the molecule in different locations produces different responses. If you attach the molecular spring in one place, nothing much happens to the chemical reaction, but you attach it to a different place and you affect one step in the chemical reaction. Then you attach it to a third place and affect another step in this chemical reaction."
Zocchi, Tseng and Wang studied the rate of the chemical reactions and reported in detail what happened to the steps of the reactions as they applied mechanical stress to the enzyme at different places.
"Standing on the shoulders of 50 years of structural studies of proteins, we looked beyond the structural description at the dynamics, specifically the question of what forces -- and applied where -- have what effect on the reaction rates," Zocchi said.
In a related second paper, Zocchi and his colleagues reached a surprising conclusion in solving a longstanding physics puzzle.
When one bends a straight tree branch or a straight rod by compressing it longitudinally, the branch or rod at first remains straight and does not bend until a certain critical force is exceeded. At the critical force, it does not bend a little -- it suddenly buckles and bends a lot.
"This phenomenon is well known to any child who has made bows from hazelnut bush branches, for example, which are typically quite straight. To string the bow, you have to press down on it hard to buckle it, but once it is bent, you need only a smaller force to keep it so," Zocchi said.
The UCLA physicists studied the elastic energy of their DNA molecular spring when it is sharply bent.
"Such a short double-stranded DNA molecule is somewhat similar to a rod, but the elasticity of DNA at this scale was not known," Zocchi said. "What is the force the DNA molecular spring is exerting on the enzyme? We have answered this question.
"We find there is a similar bifurcation with this DNA molecule. It goes from being bent smoothly to having a kink. When we bend this molecule, there is a critical force where there is a qualitative difference. The molecule is like the tree branch and the rod in this respect. If you're just a little below the threshold, the system has one kind of behavior; if you're just a little above the threshold force, the behavior is totally different. The achievement was to measure directly the elastic energy of this stressed molecule, and from the elastic energy characterize the kink."
Co-authors on this research are UCLA physics graduate students Hao Qu, Chiao-Yu Tseng and Yong Wang and UCLA associate professor of chemistry and biochemistry Alexander Levine, who is a member of the California NanoSystems Institute at UCLA. The research was published in April, also in the journal Europhysics Letters.
"We can now measure for any specific DNA molecule what the elastic energy threshold for the instability is," Zocchi said. "I see beauty in this important phenomenon. How is it possible that the same principle applies to a tree branch and to a molecule? Yet it does. The essence of physics is finding common behavior in systems that seem very different."
While Zocchi's research may have applications for medicine and other fields, he emphasizes the advance in knowledge itself.
"There is value in science that adds to our knowledge and helps us understand our world, apart from the value of future applications," he said. "I study problems that I find interesting, where I think I can make a contribution. Why study a particular problem rather than another? Perhaps for the same reason a painter chooses a particular landscape. Perhaps we see beauty there."

Microbes Engineered for Low-Cost Production of Anti-Cancer Drug, Taxol


Artist's 3-D rendering of E. coli bacteria.
MIT researchers and collaborators from Tufts University have now engineered E. coli bacteria to produce large quantities of a critical compound that is a precursor to the cancer drug Taxol, originally isolated from the bark of the Pacific yew tree. The engineered bacteria can produce 1,000 times more of the precursor, known as taxadiene, than any other engineered microbial strain.
The technique, described in the Oct. 1 issue of Science, could bring down the manufacturing costs of Taxol and also help scientists discover potential new drugs for cancer and other diseases such as hypertension and Alzheimer's, said Gregory Stephanopoulos, who led the team of MIT and Tufts researchers and is one of the senior authors of the paper.
"If you can make Taxol a lot cheaper, that's good, but what really gets people excited is the prospect of using our platform to discover other therapeutic compounds in an era of declining new pharmaceutical products and rapidly escalating costs for drug development," said Stephanopoulos, the W.H. Dow Professor of Chemical Engineering at MIT.
Taxol, also known as paclitaxel, is a powerful cell-division inhibitor commonly used to treat ovarian, lung and breast cancers. It is also very expensive -- about $10,000 per dose, although the cost of manufacturing that dose is only a few hundred dollars. (Patients usually receive one dose.)
Two to four Pacific yew trees are required to obtain enough Taxol to treat one patient, so in the 1990s, bioengineers came up with a way to produce it in the lab from cultured plant cells, or by extracting key intermediates from plant material like the needles of the decorative yew. These methods generate enough material for patients, but do not produce sufficient quantities for synthesizing variants that may be far more potent for treating cancer and other diseases. Organic chemists have succeeded in synthesizing Taxol in the lab, but these methods involve 35 to 50 steps and have a very low yield, so they are not economical. Also, they follow a different pathway than the plants, which makes it impossible to produce the pathway intermediates and change them to make new, potentially more powerful variations.
"By mimicking nature, we can now begin to produce these intermediates that the plant makes, so people can look at them and see if they have any therapeutic properties," said Stephanopoulos. Moreover, they can synthesize variants of these intermediates that may have therapeutic properties for other diseases.
The complex metabolic sequence that produces Taxol involves at least 17 intermediate steps and is not fully understood. The team's goal was to optimize production of the first two Taxol intermediates, taxadiene and taxadien-5-alpha-ol. E. coli does not naturally produce taxadiene, but it does synthesize a compound called IPP, which is two steps away from taxadiene. Those two steps normally occur only in plants. MIT postdoctoral associate Ajikumar Parayil recognized that the key to more efficient production is a well-integrated pathway that does not allow potentially toxic intermediates to accumulate. To accomplish this, researchers took a two-pronged approach in engineering E. coli to produce taxadiene.
First, the team focused on the IPP pathway, which has eight steps, and determined that four of those reactions were bottlenecks in the synthesis -- that is, there is not enough enzyme at those steps, so the entire process is slowed down. Parayil then engineered the bacteria to express multiple copies of those four genes, eliminating the bottlenecks and speeding up IPP production.
To get E. coli to convert IPP to taxadiene, the researchers added two plant genes, modified to function in bacteria, that code for the enzymes needed to perform the reactions. They also varied the number of copies of the genes to find the most efficient combination. These methods allowed the researchers to boost taxadiene production 1,000 times over levels achieved by other researchers using engineered E. coli, and 15,000 times over a control strain of E. coli to which they just added the two necessary plant genes but did not optimize gene expression of either pathway.
Following taxadiene synthesis, the researchers advanced the pathway by adding one more critical step towards Taxol synthesis: the conversion of taxadiene to taxadien-5-alpha-ol. This is the first time that taxadien-5-alpha-ol has been produced in microbes. There are still several more steps to go before achieving synthesis of the intermediate baccatin III, from which Taxol can be chemically synthesized. "Though this is only a first step, it is a very promising development and certainly supports this approach and its potential," said Blaine Pfeifer, assistant professor of chemical and biological engineering at Tufts and an author of the Science paper.
Now that the researchers have achieved taxadiene synthesis, there are still another 15 to 20 steps to go before they can generate Taxol. In this study, they showed that they can perform the first of those steps.
Stephanopoulos and Pfeifer expect that if this technique can eventually be used to manufacture Taxol, it would reduce significantly the cost to produce one gram of the drug. Researchers could also experiment with using these bacteria to create other useful chemicals such as fragrances, flavors and cosmetics, said Pfeifer.
Development of the new technology was funded by the Singapore-MIT Alliance, National Institutes of Health and a Milheim Foundation Grant for Cancer Research. MIT has filed a patent on the technology and new strain of E. coli, and the researchers are considering licensing the technology or starting a new company to commercialize it, said Stephanopoulos.

Experts Urge Making Cigarettes Non-Addictive a Research Priority


Experts say a nicotine reduction strategy should be an urgent research priority because of its potential to profoundly reduce the death and disease from tobacco use.
After a major review of scientific information, six leading tobacco research and policy experts have concluded that a nicotine reduction strategy should be an urgent research priority because of its potential to profoundly reduce the death and disease from tobacco use.
Their findings were published in the journal Tobacco Control.
According to this new report, reducing the amount of nicotine in cigarettes to non-addictive levels could have a significant public health impact on prevention and smoking cessation. Over time, the move could dramatically reduce the number of annual deaths related to cigarette smoking by decreasing adolescent experimentation with cigarettes, preventing a progression to addiction, and by reducing dependence on tobacco among currently addicted smokers of all ages.
Dorothy Hatsukami, Ph.D., University of Minnesota Medical School, and Mitch Zeller, J.D., Pinney Associates in Bethesda, MD, led the overall effort as co-chairs of the National Cancer Institute's Tobacco Harm Reduction Network. They convened several meetings of researchers, policy makers, tobacco control advocates and government representatives that explored the science base for a nicotine reduction strategy.
Currently, about 44 million adults in the United States (roughly 20 percent) smoke cigarettes. Other research cited by the authors had found that reducing nicotine to non-addictive levels could potentially reduce smoking prevalence to about 5 percent.
"Nicotine addiction sustains tobacco use. Quitting tobacco can be as difficult to overcome as heroin or cocaine addiction," said Hatsukami, director of the University of Minnesota's Tobacco Use Research Center and the Masonic Cancer Center's Cancer Control and Prevention Research Program "Reducing the nicotine in cigarettes to a level that is non-addicting could have a profound impact on reducing death and disability related to cigarettes and improving overall public health."
Hatsukami adds that studies to date have found that a substantial reduction in the nicotine content of cigarettes does not lead smokers to smoke more of the lower-nicotine cigarettes, because it is harder to compensate for very low nicotine intake.
"In addition, studies have shown a significantly lower number of cigarettes are smoked when low-nicotine cigarettes are used, resulting in eventual abstinence in a considerable number of smokers," she said.
"Imagine a world where the only cigarettes that kids could experiment with would neither create nor sustain addiction," Zeller said. "The public health impact of this would be enormous if we can prevent youthful experimentation from progressing to regular smoking, addiction, and the resulting premature disease and death later. Reducing the nicotine content in cigarettes may be a very effective way to accomplish this major impact," he added.
Hatsukami, Zeller, and their colleagues recommend engaging scientific, research and government agencies to conduct the necessary research and set priorities and goals as the next step toward determining the feasibility of a nicotine reduction approach.
This study was funded by the National Cancer Institute, the National Institute on Drug Abuse and the American Legacy Foundation. Other authors of this paper include Drs. Kenneth Perkins (University of Pittsburgh), Mark LeSage (Minneapolis Medical Research Foundation and University of Minnesota), David Ashley (formerly at the Centers for Disease Control and Prevention, now at the Food and Drug Administration), Jack Henningfield (Pinney Associates), Neal Benowitz (University of California, San Francisco) and Cathy Backinger (National Cancer Institute).

Acidification of Oceans May Contribute to Global Declines of Shellfish


Images of 36-day-old M. mercenaria grown under different levels of CO2: <250, 390, 750, and 1,500 ppm.
The acidification of the Earth's oceans due to rising levels of carbon dioxide (CO2) may be contributing to a global decline of clams, scallops and other shellfish by interfering with the development of shellfish larvae, according to two Stony Brook University scientists, whose findings are published online and in the current issue of Proceedings of the National Academy of Sciences (PNAS).
Professor Christopher J. Gobler, Ph.D., and Ph.D. candidate Stephanie C. Talmage of the School of Marine and Atmospheric Sciences at Stony Brook conducted experiments to evaluate the impacts of past, present and future ocean acidification on the larvae of two commercially valuable shellfish: the Northern quahog, or hard clam, and the Atlantic bay scallop. The ability of both to produce shells partly depends on ocean water pH. Previous studies have shown that increases in atmospheric CO2 levels can lower the ocean's pH level, causing it to become more acidic.
"In general, the study of ocean acidification on marine animals is a relatively new field. Ocean acidification has been going on since the dawn of the Industrial Revolution but it has been investigated as a process for less than a decade," Dr. Gobler said. "People have known about rising levels of CO2 and have been talking about that for decades but had originally assumed the oceans would be able to maintain their pH while they were absorbing this CO2." The largest contributor to CO2 in the atmosphere and oceans is the burning of fossil fuels, Dr. Gobler said.
While previous studies have demonstrated that shellfish are sensitive to the increases in CO2 projected for the future, "the extent to which the rise in CO2 that has occurred since the dawn of the Industrial Revolution has impacted these populations is poorly understood," the researchers wrote.
While studying the impact of rising global temperatures on shellfish, the researchers shifted their focus to another worldwide threat. "Temperatures have risen about 8 percent since the dawn of the Industrial Revolution but carbon dioxide is up 40 percent, increasing over 100 parts per million," Dr. Gobler said.
The researchers reported that larvae grown at approximately pre-industrial CO2 concentrations of 250 ppm had higher survival rates, grew faster and had thicker and more robust shells than those grown at the modern concentration of about 390 ppm. In addition, larvae that were grown at CO2 concentrations projected to occur later this century developed malformed and eroded shells. The findings may provide insight into future evolutionary pressures of ocean acidification on marine species that form calcium carbonate shells, the authors wrote.
"CO2 entering the ocean decreases the availability of carbonate ions (CO3−2) and reduces ocean pH, a process known as ocean acidification," the authors wrote. "These changes in ocean chemistry may have dire consequences for ocean animals that produce hard parts made from calcium carbonate (CaCO3)."
Other calcifying organisms impacted by ocean acidification include coccolithophores, coral reefs, crustose coralline algae, echinoderms, foraminifera, and pteropods, the authors wrote.
In their experiments with the Northern quahog, Mercenaria mercenaria, and the Atlantic bay scallop, Argopecten irradians, the scientists introduced different levels of CO2 gas to filtered seawater taken from Shinnecock Bay, NY, USA. Shellfish larvae grown under near preindustrial levels of CO2 (250 ppm) displayed the highest rates of metamorphosis, growth, and survival, they found. Those grown under higher levels developed thinner shells. The high CO2 "severely altered the development of the hinge structure of early stage bivalves. As CO2 levels increased from approximately 250 to 1,500 ppm, there were dramatic declines in the size, integrity, and connectedness of the hinge." That can impact the ability of the shellfish to feed, they wrote. This research was conducted on the Southampton campus of Stony Brook.
"Our findings regarding the effects of future CO2 levels on larval shellfish are consistent with recent investigations of ocean acidification demonstrating that calcifying organisms will experience declines in survival and growth, as well as malformed CaCO3 shells and hard parts," they wrote. "However, our examination of the development of larval shellfish at levels of CO2 present before the industrialization of the planet provides important insight regarding the potential effects ocean acidification has had on calcifying organisms during the past two hundred years."
Together with rising global temperatures, pollution and algae blooms, ocean acidification can have a devastating impact on shellfish populations, Dr. Gobler said. "There are a lot of efforts right now to bring back our shellfish in New York and around the world, but this study demonstrates this could be more difficult than anticipated," he said. While some threats such as overharvesting can be dealt with through limits on licenses and off-limits areas, "when you're dealing with a global phenomenon it's harder to counteract."

NASA's EPOXI Mission Sets Up for Comet Flyby


NASA's Deep Impact/EPOXI spacecraft flew past Earth on June 27, 2010, to get a boost from Earth's gravity. It is now on its way to comet Hartley 2, depicted in this artist's concept, with a planned flyby this fall. 
On Sept. 29, 2010, navigators and mission controllers for NASA's EPOXI mission watched their computer screens as, 23.6 million kilometers (14.7 million miles) away, their spacecraft successfully performed its 20th trajectory correction maneuver. The maneuver refined the spacecraft's orbit, setting the stage for its flyby of comet Hartley 2 on Nov. 4. Time of closest approach to the comet was expected to be about 10:02 a.m. EDT (7:02 a.m. PDT).
The trajectory correction maneuver began at 2 p.m. EDT (11 a.m. PDT), when the spacecraft fired its engines for 60 seconds, changing the spacecraft's velocity by 1.53 meters per second (3.4 mph).
"We are about 23 million miles and 36 days away from our comet," said EPOXI project manager Tim Larson of NASA's Jet Propulsion Laboratory in Pasadena, Calif. "I can't wait to see what Hartley 2 looks like."
On Nov. 4, the spacecraft will fly past the comet at a distance of about 700 kilometers (435 miles). It will be only the fifth time in history that a spacecraft has been close enough to image a comet's nucleus, and the first time in history that two comets have been imaged with the same instruments and same spatial resolution.
"We are imaging the comet every day, and Hartley 2 is proving to be a worthy target for exploration," said Mike A'Hearn, EPOXI principal investigator from the University of Maryland, College Park.
EPOXI is an extended mission that utilizes the already "in flight" Deep Impact spacecraft to explore distinct celestial targets of opportunity. The name EPOXI itself is a combination of the names for the two extended mission components: the extrasolar planet observations, called Extrasolar Planet Observations and Characterization (EPOCh), and the flyby of comet Hartley 2, called the Deep Impact Extended Investigation (DIXI). The spacecraft will continue to be referred to as "Deep Impact."
NASA's Jet Propulsion Laboratory, Pasadena, Calif., manages the EPOXI mission for NASA's Science Mission Directorate, Washington. The University of Maryland, College Park, is home to the mission's principal investigator, Michael A'Hearn. Drake Deming of NASA's Goddard Space Flight Center, Greenbelt, Md., is the science lead for the mission's extrasolar planet observations. The spacecraft was built for NASA by Ball Aerospace & Technologies Corp., Boulder, Colo.

Dual Nature of Dew: Researcher Measures the Effect of Dew on Desert Plants


Dew on plant
When the scientific and traditional worlds collide, they do so in the most surprising ways. Classical meteorological and plant science has, in the last century, assumed that dew negatively affects plant life, leading to rot and fungus. But in some traditions, dew is welcomed as an important source of water for vegetation and plant life, celebrated in poetry and prayer.
Now Prof. Pinhas Alpert of Tel Aviv University's Department of Geophysics and Planetary Sciences and his colleagues have developed an explanation for this perplexing paradox. According to the scientific literature, he says, dew that accumulates through the night has a negative effect on vegetation and fruits because it creates a "spongy" effect. But in a recent issue of the Water Resources Journal, Prof. Alpert demonstrates that dew is an important water source for plant life in climates such as those of the Eastern Mediterranean and parts of the U.S. Great Basin Desert.
"Semi-arid zones are dry for over half the year," he explains. "Dew is therefore an important source of moisture in the air. It surrounds the plant leaves nearly every morning for approximately two to three hours past sunrise." This finding, he says, explains why dew is such an important part of certain traditions.
Creating the ideal conditions for growth
A plant's growth is based on photosynthesis, employing stomata, the small openings in vegetation and fruit leaves that absorb carbon dioxide. The combination of water, carbon dioxide in the air and sunlight help a plant to produce sugars which allow it to grow. In temperate zones, most of a plant's growth occurs in the middle of the day, when the most sunlight is available.
But there are climatic influences as well. According to Prof. Alpert, plants in a semi-arid zone close these stomatal openings at midday as a defense mechanism, to avoid losing moisture when the weather is at its driest. When this happens, photosynthesis and plant growth cannot take place.
For these reasons, the early-morning hours -- and not those of midday -- are the period of maximum growth for plants in the Eastern Mediterranean region, Prof. Alpert says. And it's all due to the dew. "In the early morning, dew surrounds the leaves of a plant with moisture, and the plant does not close its stomata. Therefore, it can grow."
Life-giving dew
In order to research the effect of dew on plant life, Prof. Alpert and his fellow scientists studied the interaction between the leaves of a plant and the air. They measured how much moisture departs from the leaves and how much carbon dioxide enters them at various times of the day.
These findings explain a very old paradox, says Prof. Alpert. Despite its negative reputation in other climates, dew has been idealized in the Eastern Mediterranean for its ability to help fruits and vegetables grow in a dry and inhospitable region.

Turning Waste Heat Into Power


A "forest" of molecules holds the promise of turning waste heat into electricity. UA physicists discovered that because of quantum effects, electron waves traveling along the backbone of each molecule interfere with each other, leading to the buildup of a voltage between the hot and cold electrodes (the golden structures on the bottom and top).
What do a car engine, a power plant, a factory and a solar panel have in common? They all generate heat -- a lot of which is wasted.
University of Arizona physicists have discovered a new way of harvesting waste heat and turning it into electrical power.
The technology, based on a theoretical model of a so-called molecular thermoelectric device, holds great promise for making cars, power plants, factories and solar panels more efficient, to name a few possible applications. In addition, more efficient thermoelectric materials would make ozone-depleting chlorofluorocarbons, or CFCs, obsolete.
The research group led by Charles Stafford, associate professor of physics, published its findings in the September issue of the scientific journal, ACS Nano.
"Thermoelectricity makes it possible to cleanly convert heat directly into electrical energy in a device with no moving parts," said lead author Justin Bergfield, a doctoral candidate in the UA College of Optical Sciences.
"Our colleagues in the field tell us they are pretty confident that the devices we have designed on the computer can be built with the characteristics that we see in our simulations."
"We anticipate the thermoelectric voltage using our design to be about 100 times larger than what others have achieved in the lab," Stafford added.
Catching the energy lost through waste heat has been on the wish list of engineers for a long time but, so far, a concept for replacing existing devices that is both more efficient and economically competitive has been lacking.
Unlike existing heat-conversion devices such as refrigerators and steam turbines, the devices of Bergfield and Stafford require no mechanics and no ozone-depleting chemicals. Instead, a rubber-like polymer sandwiched between two metals acting as electrodes can do the trick.
Car or factory exhaust pipes could be coated with the material, less than 1 millionth of an inch thick, to harvest energy otherwise lost as heat and generate electricity.
The physicists take advantage of the laws of quantum physics, a realm not typically tapped into when engineering power-generating technology. To the uninitiated, the laws of quantum physics appear to fly in the face of how things are "supposed" to behave.
The key to the technology lies in a quantum law physicists call wave-particle duality: Tiny objects such as electrons can behave either as a wave or as a particle.
"In a sense, an electron is like a red sports car," Bergfield said. "The sports car is both a car and it's red, just as the electron is both a particle and a wave. The two are properties of the same thing. Electrons are just less obvious to us than sports cars."
Bergfield and Stafford discovered the potential for converting heat into electricity when they studied polyphenyl ethers, molecules that spontaneously aggregate into polymers, long chains of repeating units. The backbone of each polyphenyl ether molecule consists of a chain of benzene rings, which in turn are built from carbon atoms. The chain link structure of each molecule acts as a "molecular wire" through which electrons can travel.
"We had both worked with these molecules before and thought about using them for a thermoelectric device," Bergfield said, "but we hadn't really found anything special about them until Michelle Solis, an undergrad who worked on independent study in the lab, discovered that, low and behold, these things had a special feature."
Using computer simulations, Bergfield then "grew" a forest of molecules sandwiched between two electrodes and exposed the array to a simulated heat source.
"As you increase the number of benzene rings in each molecule, you increase the power generated," Bergfield said.
The secret to the molecules' capability to turn heat into power lies in their structure: Like water reaching a fork in a river, the flow of electrons along the molecule is split in two once it encounters a benzene ring, with one flow of electrons following along each arm of the ring.
Bergfield designed the benzene ring circuit in such a way that in one path the electron is forced to travel a longer distance around the ring than the other. This causes the two electron waves to be out of phase once they reunite upon reaching the far side of the benzene ring. When the waves meet, they cancel each other out in a process known as quantum interference. When a temperature difference is placed across the circuit, this interruption in the flow of electric charge leads to the buildup of an electric potential -- voltage -- between the two electrodes.
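A minimal numerical sketch of that two-path picture (an illustrative toy model, not the group's actual transport calculation): if the two arms of the ring pass electron waves of equal amplitude with a relative phase phi, the transmitted intensity scales as |1 + e^(i*phi)|², which drops to zero when the paths arrive exactly out of phase.

    import numpy as np

    def two_path_transmission(phi):
        """Relative transmission for two equal-amplitude paths with phase offset phi."""
        amplitude = 1 + np.exp(1j * phi)      # sum of the two path amplitudes
        return abs(amplitude) ** 2 / 4        # normalized so in-phase paths transmit fully

    for phi in (0.0, np.pi / 2, np.pi):
        print(f"phase difference {phi:.2f} rad -> relative transmission {two_path_transmission(phi):.2f}")
    # In-phase arms transmit fully; a half-wave mismatch (phi = pi) cancels the flow completely,
    # which is the interference the UA design uses to interrupt the charge flow and build up a voltage.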
Wave interference is a concept exploited by noise-cancelling headphones: Incoming sound waves are met with counter waves generated by the device, wiping out the offending noise.
"We are the first to harness the wave nature of the electron and develop a concept to turn it into usable energy," Stafford said.
Much like solid-state computer memory compared with a spinning hard drive, the UA-designed thermoelectric devices have no moving parts. By design, they are self-contained, easier to manufacture and easier to maintain than currently available technology.
"You could just take a pair of metal electrodes and paint them with a single layer of these molecules," Bergfield said. "That would give you a little sandwich that would act as your thermoelectric device. With a solid-state device you don't need cooling agents, you don't need liquid nitrogen shipments, and you don't need to do a lot of maintenance."
"You could say, instead of Freon gas, we use electron gas," Stafford added.
"The effects we see are not unique to the molecules we used in our simulation," Bergfield said. "Any quantum-scale device where you have a cancellation of electric charge will do the trick, as long as there is a temperature difference. The greater the temperature difference, the more power you can generate."
Molecular thermoelectric devices could help solve an issue currently plaguing photovoltaic cells harvesting energy from sunlight.
"Solar panels get very hot and their efficiency goes down," Stafford said. "You could harvest some of that heat and use it to generate additional electricity while simultaneously cooling the panel and making its own photovoltaic process more efficient."
"With a very efficient thermoelectric device based on our design, you could power about 200 100-Watt light bulbs using the waste heat of an automobile," he said. "Put another way, one could increase the car's efficiency by well over 25 percent, which would be ideal for a hybrid since it already uses an electrical motor."
So, next time you watch a red sports car zip by, think of the hidden power of the electron and how much more efficient that sports car could be with a thermoelectric device wrapped around its exhaust pipe.
Funding for this research was provided by the University of Arizona physics department.

Fungal Spores Travel Farther by Surfing Their Own Wind


The spore cups, or apothecia, of the fungus Sclerotinia sclerotiorum are about a half-centimeter across and produce thousands of spores throughout the spring and summer. The fungus is often hidden in fields, where it infects crops ranging from peanuts and cabbage to sunflowers.
Long before geese started flying in chevron formation or cyclists learned the value of drafting, fungi discovered an aerodynamic way to reduce drag on their spores so as to spread them as high and as far as possible.
One fungus, the destructive Sclerotinia sclerotiorum, spews thousands of spores nearly simultaneously to form a plume that reduces drag to nearly zero and even creates a wind that carries many of the spores 20 times farther than a single spore could travel alone, according to a new study by mathematicians and biologists from the University of California, Berkeley, Harvard University and Cornell University.
"In the Tour de France, riders form a peloton that can reduce air drag by 40 percent," said co-lead author Marcus Roper, a postdoctoral researcher in the Department of Mathematics at UC Berkeley and at Lawrence Berkeley National Laboratory. "The ascospores of Sclerotinia do the peloton perfectly, reducing air drag to zero and sculpting a flow of air that carries them even farther."
Presumably, this strategy helps the fungi get their spores off the ground into the foliage of their host plants, or into airstreams that can carry them to host plants, the scientists say.
Co-lead author Agnese Seminara, a postdoctoral researcher and theoretical physicist in Harvard's School of Engineering and Applied Sciences, added: "I realized that the spores behaved much like cloud droplets. To follow their paths, I adapted algorithms I had developed to describe cloud formation."
Roper, Seminara, and colleagues report the findings in the early online edition of the journal Proceedings of the National Academy of Sciences (PNAS).
"These findings could have implications for methods of controlling the spread of fungal pathogens," said senior author Anne Pringle, associate professor of organismic and evolutionary biology at Harvard. "Sclerotinia alone costs U.S. farmers on the order of $1 billion annually, including costs of controlling the fungus and crop losses. Research directed at understanding how to disrupt the cooperative ejection of spores may provide novel tools for the control of these fungal pathogens."
Researchers in the field of bioballistics -- how plants, fungi and animals accelerate seeds, spores or even parts of their body to high speed -- have found an amazing variety of techniques to overcome friction with the air, the main limitation for small spores and seeds.
"Understanding how Sclerotinia is discharging its spores and getting them onto the plants will eventually lead us to new ways of looking at plant architecture," said co-author Helene Dillard, a plant pathologist who heads Cornell University's Cooperative Extension and is associate dean of the College of Agriculture and Life Sciences. "When plant breeders are developing new varieties of crops -- such as beans, cabbage or sunflowers -- they can keep in mind how Sclerotinia gets the spores to reach their targets, which is usually the flowers."
Scientists have recognized for more than 100 years that many spore-producing fungi -- the ascomycetes -- release their spores in plumes that carry them long distances. More than 50 years ago, scientists noted that these spore plumes create a wind of their own, but the physics of the plumes was not understood, Roper said. In addition, little work has been done on how seeds or spores cooperate to improve dispersal to new environments.
With training in the mathematics and physics of fluid flow, Roper and Seminara decided to investigate in collaboration with Pringle, a Harvard mycologist.
For the current PNAS paper, the researchers used high-speed video to clock the speed of spores ejected by Sclerotinia, finding that they are expelled at a speed of about 8.4 meters per second (19 miles per hour). However, because the spores are so small -- 10 microns long -- air drag brings them to a stop in a mere 3 millimeters. When thousands of spores are ejected at the same time, however, some can travel more than 100 millimeters, or 4 inches.
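That millimeter-scale range for a lone spore follows from simple Stokes drag on a tiny sphere. A minimal estimate (assuming a roughly spherical spore 10 microns across with the density of water and standard air viscosity; none of these auxiliary values is quoted in the paper):

    import math

    # Back-of-envelope stopping distance of a single ejected spore under Stokes drag
    radius = 5e-6             # m, half of the ~10-micron spore size (sphere assumed)
    density = 1000.0          # kg/m^3, assuming the spore is about as dense as water
    air_viscosity = 1.8e-5    # Pa*s, dynamic viscosity of air near room temperature
    launch_speed = 8.4        # m/s, ejection speed measured in the study

    mass = (4.0 / 3.0) * math.pi * radius**3 * density
    drag = 6.0 * math.pi * air_viscosity * radius     # Stokes' law: F = -drag * v
    tau = mass / drag                                 # time scale on which drag stops the spore
    stopping_distance = launch_speed * tau            # from integrating v(t) = v0 * exp(-t / tau)

    print(f"stopping distance ~ {stopping_distance * 1000:.1f} mm")   # ~2.6 mm, consistent with the ~3 mm reported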
These high-speed video images enabled Roper and Seminara to model spore plume movement precisely with standard equations of hydrodynamics. They showed that the thousands of spores ejected at the same time quickly eliminate all drag and allow the spores to travel about a centimeter, by which time the wind generated by the spores captures and whisks them to a speed of 60 centimeters per second. Their upward motion is stopped only by gravity, Roper said.
The added range from "hydrodynamic cooperation" allows fungi on the ground to shoot their spores into flowers or plant wounds, whence they can quickly spread throughout the plant and kill it.
Often called white mold, Sclerotinia rot, or wilt, the fungus attacks more than 400 species of plants, Dillard said, including beans, sunflowers, soybeans, canola and peanuts, and can wipe out entire fields. In spring and summer, the fungus produces cups (apothecia) about one-half centimeter across that spew spores into the air to infect plants. The fungus produces overwintering seed-like bodies called sclerotia on the infected plant tissues.
"It grows across a cabbage head and produces these small sclerotia that look like mouse droppings," Dillard says. "The sclerotia fall on the ground, and are then in position to initiate the infection process the following year."
The researchers were also curious how fungi manage to eject their spores simultaneously. To investigate this, they grew another mold, a coprophilic fungus from the genus Ascobolus, on horse dung and focused their high-speed video camera on the two-millimeter, cup-shaped fruiting body containing tens of thousands of spore sacs (asci), each containing eight spores. They found that, while the spore sac that ejects first seems to be random, after the first one or two go off, a wave of ejection travels outward as successive rings of spore sacs rupture in sequence. Because this happens in one-tenth of a second, the ejection seems simultaneous.
"What looks like a plume is actually a series of sheets going off," Roper said.
By tweaking their mathematical model to take account of this, Roper and Seminara discovered that cooperative ejection in sheets is a highly effective method for shooting spores long distances. The scientists continue to investigate how spore ejection is initiated, and whether and how spores can cheat to make sure that they get ejected farther than their companions.
Other authors of the paper are Mahesh M. Bandi of Harvard and Ann Cobb of Cornell. The work was funded by a Miller Institute for Basic Research in Science Fellowship to Roper, a Marie Curie Fellowship from the European Union Framework 7 to Seminara, and Harvard University.