Friday, October 29, 2010

Uranium in Groundwater? 'Fracking' Mobilizes Uranium in Marcellus Shale


UB geologist Tracy Bank and colleagues found that uranium and hydrocarbons in Marcellus shale are not just physically, but also chemically, bound. 
Scientific and political disputes over drilling Marcellus shale for natural gas have focused primarily on the environmental effects of pumping millions of gallons of water and chemicals deep underground to blast through rocks to release the natural gas.
But University at Buffalo researchers have now found that that process -- called hydraulic fracturing or "fracking"-- also causes uranium that is naturally trapped inside Marcellus shale to be released, raising additional environmental concerns.
The research will be presented at the annual meeting of the Geological Society of America in Denver on Nov. 2.
Marcellus shale is a massive rock formation that stretches from New York through Pennsylvania, Ohio and West Virginia, and which is often described as the nation's largest source of natural gas.
"Marcellus shale naturally traps metals such as uranium and at levels higher than usually found naturally, but lower than manmade contamination levels," says Tracy Bank, PhD, assistant professor of geology in UB's College of Arts and Sciences and lead researcher. "My question was, if they start drilling and pumping millions of gallons of water into these underground rocks, will that force the uranium into the soluble phase and mobilize it? Will uranium then show up in groundwater?"
To find out, Bank and her colleagues at UB scanned the surfaces of Marcellus shale samples from Western New York and Pennsylvania. Using sensitive chemical instruments, they created a chemical map of the surfaces to determine the precise location in the shale of the hydrocarbons, the organic compounds containing natural gas.
"We found that the uranium and the hydrocarbons are in the same physical space," says Bank. "We found that they are not just physically -- but also chemically -- bound.
"That led me to believe that uranium in solution could be more of an issue because the process of drilling to extract the hydrocarbons could start mobilizing the metals as well, forcing them into the soluble phase and causing them to move around."
When Bank and her colleagues reacted samples in the lab with surrogate drilling fluids, they found that the uranium was indeed being solubilized.
In addition, she says, when the millions of gallons of water used in hydraulic fracturing come back to the surface, they could contain uranium contaminants, potentially polluting streams and other ecosystems and generating hazardous waste.
The research required the use of very sophisticated methods of analysis, including one called Time-of-Flight Secondary Ion Mass Spectrometry, or ToF-SIMS, in the laboratory of Joseph A. Gardella Jr., Larkin Professor of Chemistry at UB.
The UB research is the first to map samples using this technique, which identified the precise location of the uranium.
"Even though at these levels, uranium is not a radioactive risk, it is still a toxic, deadly metal," Bank concludes. "We need a fundamental understanding of how uranium exists in shale. The more we understand about how it exists, the more we can better predict how it will react to 'fracking.'"

MIT develops solar-powered, portable desalination system

MIT researchers have developed a portable, solar-powered water desalination system that could provide water in disaster zones and remote regions around the globe
Researchers from MIT's Field and Space Robotics Laboratory (FSRL) have designed a portable, solar-powered desalination system that is cost-effective and easy to assemble, and that can bring drinkable water to disaster zones and remote regions around the globe.
Relief efforts in the aftermath of large-scale natural disasters often list water among the very first priorities: such was the case in the Haiti earthquake back in January. When coping with disasters of this scale, the ability to obtain drinkable water locally, such as by desalinating seawater, dramatically improves the effectiveness of rescue efforts.
Desalination systems, however, are usually quite large and need a lot of energy to operate. These situations instead call for a quick, effective way to turn seawater into drinkable water on site, with a small, portable system that doesn't need an external source of electrical power.
The system developed by the MIT researchers does exactly this, and its characteristics make it particularly well suited to assisting people in emergency situations. It's designed so it can be cost-effectively assembled from standard parts and put into operation within hours, without the need for technicians. The same design means the apparatus could also find use in remote areas where supplying energy and clean water can be logistically complex, such as desert locations or small villages in developing countries.
Photovoltaic panels power high-pressure pumps that push seawater through a filtering membrane. Unlike conventional solar-powered desalination systems that run on battery power when direct sunlight is not available, this system can operate efficiently even in cloudy conditions. Algorithms in the system's computer can change variables such as the power of the pump or the position of the valves to maximize water output in response to changing weather and current water demand.
As a result, the prototype can yield as much as 80 gallons of water a day in a variety of weather conditions, while a larger version of the unit, which would only cost about US$8,000 to construct, could provide about 1,000 gallons of water per day. Because of its compact dimensions, the team estimated that one C-130 cargo airplane could transport two dozen desalination units, enough to provide water for 10,000 people.
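The article does not describe the control software in detail; the toy sketch below, with invented function names and numbers, only illustrates the kind of perturb-and-observe loop that could trade pump power against the available solar budget to maximize output, as described above.

```python
# Illustrative sketch only: a perturb-and-observe loop that adjusts pump power to
# maximize freshwater output under a changing solar budget. The toy model below
# stands in for the real plant; none of these names come from the MIT system.

def permeate_flow(pump_kw, available_kw):
    """Toy plant model: output rises with pump power but collapses if the pump
    demands more than the photovoltaic panels can currently supply."""
    if pump_kw > available_kw:
        return 0.0
    return 100.0 * pump_kw / (1.0 + pump_kw)   # liters per hour, made up

def adjust_pump(pump_kw, available_kw, step=0.05):
    """One perturb-and-observe step: keep the change if it improved output."""
    trial = max(0.1, pump_kw + step)
    if permeate_flow(trial, available_kw) >= permeate_flow(pump_kw, available_kw):
        return trial, step
    return pump_kw, -step            # otherwise reverse direction next time

pump, step = 0.5, 0.05
for hour, sun_kw in enumerate([2.0, 1.5, 0.8, 1.2]):   # hypothetical solar budget
    pump, step = adjust_pump(pump, sun_kw, step)
    print(f"hour {hour}: available {sun_kw} kW, pump set to {pump:.2f} kW")
```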
The researchers are now working on improving the system's efficiency even further and on changing its design to make it more durable. The research was funded by MIT's Center for Clean Water and Clean Energy and the King Fahd University of Petroleum and Minerals.

Wednesday, October 27, 2010

Scented Consumer Products Shown to Emit Many Unlisted Chemicals


The sweet smell of fresh laundry may contain a sour note. Widely used fragranced products -- including those that claim to be "green" -- give off many chemicals that are not listed on the label, including some that are classified as toxic.
A study led by the University of Washington discovered that 25 commonly used scented products emit an average of 17 chemicals each. Of the 133 different chemicals detected, nearly a quarter are classified as toxic or hazardous under at least one federal law. Only one emitted compound was listed on a product label, and only two were publicly disclosed anywhere. The article is published online in the journal Environmental Impact Assessment Review.
"We analyzed best-selling products, and about half of them made some claim about being green, organic or natural," said lead author Anne Steinemann, a UW professor of civil and environmental engineering and of public affairs. "Surprisingly, the green products' emissions of hazardous chemicals were not significantly different from the other products."
More than a third of the products emitted at least one chemical classified as a probable carcinogen by the U.S. Environmental Protection Agency, and for which the EPA sets no safe exposure level.
Manufacturers are not required to disclose any ingredients in cleaning supplies, air fresheners or laundry products, all of which are regulated by the Consumer Product Safety Commission. Neither these nor personal care products, which are regulated by the Food and Drug Administration, are required to list ingredients used in fragrances, even though a single "fragrance" in a product can be a mixture of up to several hundred ingredients, Steinemann said.
So Steinemann and colleagues have used chemical sleuthing to discover what is emitted by the scented products commonly used in homes, public spaces and workplaces.
The study analyzed air fresheners including sprays, solids and oils; laundry products including detergents, fabric softeners and dryer sheets; personal care products such as soaps, hand sanitizers, lotions, deodorant and shampoos; and cleaning products including disinfectants, all-purpose sprays and dish detergent. All were widely used brands, with more than half being the top-selling product in its category.
Researchers placed a sample of each product in a closed glass container at room temperature and then analyzed the surrounding air for volatile organic compounds, small molecules that evaporate off a product's surface. They detected chemical concentrations ranging from 100 micrograms per cubic meter (the minimum value reported) to more than 1.6 million micrograms per cubic meter.
The most common emissions included limonene, a compound with a citrus scent; alpha-pinene and beta-pinene, compounds with a pine scent; ethanol; and acetone, a solvent found in nail polish remover.
All products emitted at least one chemical classified as toxic or hazardous. Eleven products emitted at least one probable carcinogen according to the EPA. These included acetaldehyde, 1,4-dioxane, formaldehyde and methylene chloride.
The only chemical listed on any product label was ethanol, and the only additional substance listed on a chemical safety report, known as a material safety data sheet, was 2-butoxyethanol.
"The products emitted more than 420 chemicals, collectively, but virtually none of them were disclosed to consumers, anywhere," Steinemann said.
Because product formulations are confidential, it was impossible to determine whether a chemical came from the product base, the fragrance added to the product, or both.
Tables included with the article list all chemicals emitted by each product and the associated concentrations, although they do not disclose the products' brand names.
"We don't want to give people the impression that if we reported on product 'A' and they buy product 'B,' that they're safe," Steinemann said. "We found potentially hazardous chemicals in all of the fragranced products we tested."
The study establishes the presence of various chemicals but makes no claims about the possible health effects. Two national surveys published by Steinemann and a colleague in 2009 found that about 20 percent of the population reported adverse health effects from air fresheners, and about 10 percent complained of adverse effects from laundry products vented to the outdoors. Among asthmatics, such complaints were roughly twice as common.
The Household Product Labeling Act, currently being reviewed by the U.S. Senate, would require manufacturers to list ingredients in air fresheners, soaps, laundry supplies and other consumer products. Steinemann says she is interested in fragrance mixtures, which are included in the proposed labeling act, because of the potential for unwanted exposure, or what she calls "secondhand scents."
As for what consumers who want to avoid such chemicals should do in the meantime, Steinemann suggests using simpler options such as cleaning with vinegar and baking soda, opening windows for ventilation and using products without any fragrance.
"In the past two years, I've received more than 1,000 e-mails, messages, and telephone calls from people saying: 'Thank you for doing this research, these products are making me sick, and now I can start to understand why,'" Steinemann said.
Steinemann is currently a visiting professor in civil and environmental engineering at Stanford University. Co-authors are Ian MacGregor and Sydney Gordon at Battelle Memorial Institute in Columbus, Ohio; Lisa Gallagher, Amy Davis and Daniel Ribeiro at the UW; and Lance Wallace, retired from the U.S. Environmental Protection Agency. The research was partially funded by Seattle Public Utilities.

Six New Isotopes of the Superheavy Elements Discovered


The six new isotopes placed on the chart of heavy nuclides.
A team of scientists at the U.S. Department of Energy's Lawrence Berkeley National Laboratory has detected six never-before-seen isotopes of the superheavy elements 104 through 114. Starting with the creation of a new isotope of the yet-to-be-named element 114, the researchers observed successive emissions of alpha particles that yielded new isotopes of copernicium (element 112), darmstadtium (element 110), hassium (element 108), seaborgium (element 106), and rutherfordium (element 104). Rutherfordium ended the chain when it decayed by spontaneous fission.
Information gained from the new isotopes will contribute to a better understanding of the theory of nuclear shell structure, which underlies predictions of an "Island of Stability," a group of long-lasting isotopes thought to exist amidst a sea of much shorter-lived, intrinsically unstable isotopes of the superheavy elements.
The group that found the new isotopes is led by Heino Nitsche, head of the Heavy Element Nuclear and Radiochemistry Group in Berkeley Lab's Nuclear Science Division (NSD) and professor of chemistry at the University of California at Berkeley. Ken Gregorich, a senior staff scientist in NSD, is responsible for the group's day-to-day research operation at the 88-inch Cyclotron and the Berkeley Gas-filled Separator, the instrument used to isolate and identify the new isotopes. Paul Ellison of NSD, a graduate student in the UC Berkeley Department of Chemistry, formally proposed and managed the experiment and was first author of the paper reporting the results in the 29 October 2010 issue of Physical Review Letters.
"We were encouraged to try creating new superheavy isotopes by accelerating calcium 48 projectiles with Berkeley Lab's 88-Inch Cyclotron and bombarding plutonium 242 targets inside the Berkeley Gas-filled Separator here," Nitsche says. "This was much the same set-up we used a year ago to confirm the existence of element 114."
The 20-member team included scientists from Berkeley Lab, UC Berkeley, Lawrence Livermore National Laboratory, Germany's GSI Helmholtz Center for Heavy Ion Research, Oregon State University, and Norway's Institute for Energy Technology. Many of its members were also on the team that first confirmed element 114 in September of 2009. Ten years earlier scientists at the Joint Institute for Nuclear Research in Dubna, Russia, had isolated element 114 but it had not been confirmed until the Berkeley work. (Elements heavier than 114 have been seen but none have been independently confirmed.)
The nuclear shell game
Nuclear stability is thought to be based in part on shell structure -- a model in which protons and neutrons are arranged in increasing energy levels in the atomic nucleus. A nucleus whose outermost shell of either protons or neutrons is filled is said to be "magic" and therefore stable. The possibility of finding "magic" or "doubly magic" isotopes of superheavy elements (with both proton and neutron outer shells completely filled) led to predictions of a region of enhanced stability in the 1960s.
The challenge is to create such isotopes by bombarding target nuclei rich in protons and neutrons with a beam of projectiles having the right number of protons, and also rich in neutrons, to yield a compound nucleus with the desired properties. The targets used by the Berkeley researchers were small amounts of plutonium 242 (242Pu) mounted on the periphery of a wheel less than 10 centimeters in diameter, which was rotated to disperse the heat of the beam.
Gregorich notes that calcium 48 (48Ca), which has a doubly magic shell structure (20 protons and 28 neutrons), "is extremely rich in neutrons and can combine with plutonium" -- which has 94 protons -- "at relatively low energies to make compound nuclei. It's an excellent projectile for producing compound nuclei of element 114."
Ellison says, "There's only a very low probability that the two isotopes will interact to form a compound nucleus. To make it happen, we need very intense beams of calcium on the target, and then we need a detector that can sift through the many unwanted reaction products to find and identify the nuclei we want by their unique decay patterns." The 88-Inch Cyclotron's intense ion beams and the Berkeley Gas-filled Separator, designed specifically to sweep away unwanted background and identify desired nuclear products, are especially suited to this task.
Element 114 itself was long thought to lie in the Island of Stability. Traditional models predicted that if an isotope of 114 having 184 neutrons (²⁹⁸114) could be made, it would be doubly magic, with both its proton and neutron shells filled, and would be expected to have an extended lifetime. The isotopes of 114 made so far have many fewer neutrons, and their half-lives are measured in seconds or fractions of a second. Moreover, modern models predict the proton magic number to be 120 or 126 protons. Therefore, where ²⁹⁸114 would actually fall inside the region of increased stability is now in question.
"Making ²⁹⁸114 probably won't be possible until we build heavy ion accelerators capable of accelerating beams of rare projectile isotopes more intense than any we are likely to achieve in the near future," says Nitsche. "But in the meantime we can learn much about the nuclear shell model by comparing its theoretical predictions to real observations of the isotopes we can make."
The team that confirmed element 114 observed nuclei of two isotopes, ²⁸⁶114 and ²⁸⁷114, which decayed in a tenth of a second and half a second respectively. In a subsequent collaboration with researchers at the GSI Helmholtz Center for Heavy Ion Research, two more isotopes, ²⁸⁸114 and ²⁸⁹114, were made; these decayed in approximately two-thirds of a second and two seconds respectively.
While these times aren't long, they're long enough for spontaneous fission to terminate the series of alpha decays. Alpha particles have two protons and two neutrons -- essentially they are helium nuclei -- and many heavy nuclei commonly decay by emitting alpha particles to form nuclei two protons and two neutrons lighter on the chart of the nuclides. By contrast, spontaneous fission yields much lighter fragments.
A new strategy
So this year the Berkeley group decided to make new isotopes using a unique strategy: instead of trying to add more neutrons to 114, they would look for isotopes with fewer neutrons. Their shorter half-lives should make it possible for new isotopes to be formed by alpha emission before spontaneous fission could interrupt the process.
"This was a very deliberate strategy," says Ellison, "because we hoped to track the isotopes that resulted from subsequent alpha decays farther down into the main body of the chart of nuclides, where the relationships among isotope number, shell structure, and stability are better understood. Through this connection, and by observing the energy of the alpha decays, we could hope to learn something about the accuracy of predictions of the shell structure of the heaviest elements."
The sum of protons and neutrons of 48Ca and 242Pu is 114 protons and 176 neutrons. To make the desired "neutron poor" ²⁸⁵114 nucleus, one having only 171 neutrons, first required a beam of 48Ca projectiles whose energy was carefully adjusted to excite the resulting compound nucleus enough for five neutrons to "evaporate."
"The process of identifying what you've made comes down to tracking the time between decays and decay energies," says Ellison. As a check against possible mistakes, the data from the experiment were independently analyzed using separate programs devised by Ellison, Gregorich, and team member Jacklyn Gates of NSD.
In this way, after more than three weeks of running the beam, the researchers observed one chain of decays from the desired neutron-light 114 nucleus. The first two new isotopes, ²⁸⁵114 itself and copernicium 281 produced by its alpha decay, lived less than a fifth of a second before emitting alpha particles. The third new isotope, darmstadtium 277, lived a mere eight-thousandths of a second. Hassium 273 lasted a third of a second. Seaborgium 269 survived three minutes and five seconds before emitting an alpha particle. Finally, after another two and a half minutes, rutherfordium 265 decayed by spontaneous fission.
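The chain described above is easy to book-keep: each alpha emission removes two protons and two neutrons, so the atomic number falls by two and the mass number by four at every step. The isotope names and numbers below come from the article; the script itself is just a convenience.

```python
# Walk the alpha-decay chain reported in the article: each emission removes
# two protons and two neutrons (atomic number drops by 2, mass number by 4).
names = {114: "element 114", 112: "copernicium", 110: "darmstadtium",
         108: "hassium", 106: "seaborgium", 104: "rutherfordium"}

Z, A = 114, 285                      # start from the neutron-poor mass-285 isotope
while Z >= 104:
    print(f"{names[Z]:>13}  Z={Z}  A={A}  N={A - Z}")
    Z, A = Z - 2, A - 4              # alpha emission
# Prints 285-114, copernicium 281, darmstadtium 277, hassium 273, seaborgium 269
# and finally rutherfordium 265, which ended the chain by spontaneous fission.
```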
Ellison says, "In the grand scheme, the theoretical predictions were pretty good" when the actual measurements were compared to the decay properties predicted by modern nuclear models. "But there were small-scale interesting differences."
In particular, the heaviest new isotopes, those of 114 and copernicium, showed smaller energies associated with the alpha decay than theory predicts. These discrepancies can be used to refine the theoretical models used to predict the stability of the superheavy elements.
As Gregorich puts it, "our new isotopes are on the western shore of the Island of Stability" -- the shore that's less stable, not more. Yet the discovery of six new isotopes, reaching in an unbroken chain of decays from element 114 down to rutherfordium, is a major step toward better understanding the theory underlying exploration of the region of enhanced stability that is thought to lie in the vicinity of element 114 -- and possibly beyond.
This research was supported by the DOE Office of Science and the National Nuclear Security Administration.

Stable Way to Store the Sun's Heat: Storing Thermal Energy in Chemical Could Lead to Advances in Storage and Portability


A molecule of fulvalene diruthenium, seen in diagram, changes its configuration when it absorbs heat, and later releases heat when it snaps back to its original shape.
Researchers at MIT have revealed exactly how a molecule called fulvalene diruthenium, which was discovered in 1996, works to store and release heat on demand. This understanding, reported in a paper published on Oct. 20 in the journal Angewandte Chemie, should make it possible to find similar chemicals based on more abundant, less expensive materials than ruthenium, and this could form the basis of a rechargeable battery to store heat rather than electricity.
The molecule undergoes a structural transformation when it absorbs sunlight, putting it into a higher-energy state where it can remain stable indefinitely. Then, triggered by a small addition of heat or a catalyst, it snaps back to its original shape, releasing heat in the process. But the team found that the process is a bit more complicated than that.
"It turns out there's an intermediate step that plays a major role," said Jeffrey Grossman, the Carl Richard Soderberg Associate Professor of Power Engineering in the Department of Materials Science and Engineering. In this intermediate step, the molecule forms a semi-stable configuration partway between the two previously known states. "That was unexpected," he said. The two-step process helps explain why the molecule is so stable, why the process is easily reversible and also why substituting other elements for ruthenium has not worked so far.
In effect, explained Grossman, this process makes it possible to produce a "rechargeable heat battery" that can repeatedly store and release heat gathered from sunlight or other sources. In principle, Grossman said, a fuel made from fulvalene diruthenium, when its stored heat is released, "can get as hot as 200 degrees C, plenty hot enough to heat your home, or even to run an engine to produce electricity."
Compared to other approaches to solar energy, he said, "it takes many of the advantages of solar-thermal energy, but stores the heat in the form of a fuel. It's reversible, and it's stable over a long term. You can use it where you want, on demand. You could put the fuel in the sun, charge it up, then use the heat, and place the same fuel back in the sun to recharge."
In addition to Grossman, the work was carried out by Yosuke Kanai of Lawrence Livermore National Laboratory, Varadharajan Srinivasan of MIT's Department of Materials Science and Engineering, and Steven Meier and Peter Vollhardt of the University of California, Berkeley.
The problem of ruthenium's rarity and cost still remains as "a dealbreaker," Grossman said, but now that the fundamental mechanism of how the molecule works is understood, it should be easier to find other materials that exhibit the same behavior. This molecule "is the wrong material, but it shows it can be done," he said.
The next step, he said, is to use a combination of simulation, chemical intuition, and databases of tens of millions of known molecules to look for other candidates that have structural similarities and might exhibit the same behavior. "It's my firm belief that as we understand what makes this material tick, we'll find that there will be other materials" that will work the same way, Grossman said.
Grossman plans to collaborate with Daniel Nocera, the Henry Dreyfus Professor of Energy and Professor of Chemistry, to tackle such questions, applying the principles learned from this analysis in order to design new, inexpensive materials that exhibit this same reversible process. The tight coupling between computational materials design and experimental synthesis and validation, he said, should further accelerate the discovery of promising new candidate solar thermal fuels.
Funding: The National Science Foundation and an MIT Energy Initiative seed grant.

NASA's Kepler Mission Changing How Astronomers Study Distant Stars


The Kepler spacecraft is continuously observing 170,000 stars in the Cygnus-Lyra region of the Milky Way galaxy.
The quantity and quality of data coming back from NASA's Kepler Mission is changing how astronomers study stars, said Iowa State University's Steve Kawaler.
"It's really amazing," said Kawaler, an Iowa State professor of physics and astronomy. "It's as amazing as I feared. I didn't appreciate how hard it is to digest all the information efficiently."
The Kepler spacecraft, he said, "is a discovery machine."
Kepler launched March 6, 2009, from Florida's Cape Canaveral Air Force Station. The spacecraft is orbiting the sun carrying a photometer, or light meter, to measure changes in star brightness. The photometer includes a telescope 37 inches in diameter connected to a 95 megapixel CCD camera. That instrument is continually pointed at the Cygnus-Lyra region of the Milky Way galaxy. Its primary job is to use tiny variations in the brightness of the stars within its view to find earth-like planets that might be able to support life.
The Kepler Asteroseismic Investigation is also using data from that photometer to study stars. The investigation is led by a four-member steering committee: Kawaler, Chair Ron Gilliland of the Space Telescope Science Institute based in Baltimore, Jorgen Christensen-Dalsgaard and Hans Kjeldsen, both of Aarhus University in Aarhus, Denmark.
And Kepler has already buried the star-studiers in data.
Kawaler, who has served as director of a ground-based research consortium called the Whole Earth Telescope, said one year of data from Kepler will be the equivalent of about 300 years of data from the Whole Earth Telescope.
Kawaler has had a hand in turning some of that data into eight scientific papers that have been published or are in the process of being published. At Iowa State, he shares the analysis work with undergraduate Sheldon Kunkel; graduate students Bert Pablo and Riley Smith; visiting scientist Andrzej Baran of Krakow, Poland; and nearly 50 astronomers from around the world who are part of the Working Group on Compact Pulsators.
Some of the data describe a binary star system -- two stars held together by their gravity and orbiting a common center of mass. In this case, one star is a white dwarf, a star in the final stages of its life cycle; the other is a subdwarf B star, a star in an intermediate stage of development. Kepler not only returned information about the star system's velocity and mass, but also data providing a new demonstration of Einstein's Theory of Relativity.
Kawaler said when the subdwarf's orbit sends it toward Earth, Kepler detects 0.2 percent more light than when it moves away from Earth. This very slight difference is one effect of Einstein's Special Theory of Relativity: the theory predicts a very small increase in the overall brightness when the star is moving towards us (and a decrease when moving away). This relativistic "beaming" is a very small effect that has been accurately measured for stars for the first time with Kepler.
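As a rough back-of-the-envelope estimate not taken from the article, the bolometric Doppler-beaming signal is often approximated as a fractional flux change of about 4 v/c, so a 0.2 percent brightness modulation would correspond to a line-of-sight orbital velocity on the order of 150 km/s; the exact coefficient depends on the star's spectrum and Kepler's bandpass.

```python
# Rough estimate of the orbital velocity implied by a 0.2% Doppler-beaming signal,
# using the common bolometric approximation dF/F ~= 4 * v / c. The factor 4 is an
# assumption; the true coefficient depends on the star's spectrum and the filter.
C = 299_792.458          # speed of light, km/s
delta_f_over_f = 0.002   # 0.2 percent brightness modulation (from the article)

v = delta_f_over_f / 4 * C
print(f"implied line-of-sight orbital velocity ~ {v:.0f} km/s")   # ~150 km/s
```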
Kawaler said another Kepler advantage is its ability to collect data on a lot of stars. It is expected to continuously observe about 170,000 stars for at least three and a half years.
That gives researchers a much better idea about the average star, Kawaler said.
In the past, researchers analyzed a few interesting stars at a time. "But here, we're learning more about star fundamentals by studying the average guys. The large number of stars we're getting data from gives us a much more accurate picture of stars."
Kepler, for example, is giving researchers a better picture of red giant stars by more precisely measuring their oscillations or changes in brightness. Studies of those star quakes can answer questions about the interior properties of stars such as their density, temperature and composition. It's similar to how geologists study earthquakes to learn about the Earth's interior.
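The core of such an oscillation analysis is a power spectrum of the brightness time series: the modes show up as peaks whose frequencies and spacings constrain the star's interior. The sketch below is purely illustrative and uses synthetic data, not Kepler photometry.

```python
import numpy as np

# Illustrative only: build a fake light curve with two oscillation modes plus
# noise, then recover the mode frequencies from its power spectrum. Real
# asteroseismology uses Kepler photometry and far more careful peak fitting.
cadence_s = 60.0
t = np.arange(0, 10 * 86400, cadence_s)            # ten days at one-minute cadence
modes_uhz = [120.0, 135.0]                         # hypothetical mode frequencies (microhertz)
flux = sum(1e-4 * np.sin(2 * np.pi * f * 1e-6 * t) for f in modes_uhz)
flux += np.random.default_rng(0).normal(0, 5e-5, t.size)   # photometric noise

power = np.abs(np.fft.rfft(flux)) ** 2
freq_uhz = np.fft.rfftfreq(t.size, d=cadence_s) * 1e6
peaks = freq_uhz[np.argsort(power)[-2:]]           # two strongest peaks
print(np.sort(np.round(peaks, 1)))                 # should sit near 120 and 135
```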
Our sun will evolve into a red giant in about five billion years. It will exhaust its hydrogen fuel, expand enormously and shine hundreds of times brighter than it does today. After that, it will be similar to the stars that Kawaler's group has been studying.
Thanks to Kepler, "We're understanding these stars better," Kawaler said. "And that's a very exciting thing, because these stars represent the future of our own sun."

Consuming Polyunsaturated Fatty Acids May Lower the Incidence of Gum Disease


Periodontitis, a common inflammatory disease in which gum tissue separates from teeth, leads to accumulation of bacteria and potential bone and tooth loss. Although traditional treatments concentrate on the bacterial infection, more recent strategies target the inflammatory response. In an article in the November issue of the Journal of the American Dietetic Association, researchers from Harvard Medical School and Harvard School of Public Health found that dietary intake of polyunsaturated fatty acids (PUFAs) like fish oil, known to have anti-inflammatory properties, shows promise for the effective treatment and prevention of periodontitis.
"We found that n-3 fatty acid intake, particularly docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA), are inversely associated with periodontitis in the US population," commented Asghar Z. Naqvi, MPH, MNS, Department of Medicine, Beth Israel Deaconess Medical Center. "To date, the treatment of periodontitis has primarily involved mechanical cleaning and local antibiotic application. Thus, a dietary therapy, if effective, might be a less expensive and safer method for the prevention and treatment of periodontitis. Given the evidence indicating a role for n-3 fatty acids in other chronic inflammatory conditions, it is possible that treating periodontitis with n-3 fatty acids could have the added benefit of preventing other chronic diseases associated with inflammation, including stoke as well."
Using data from the National Health and Nutrition Examination Survey (NHANES), a nationally representative survey with a complex multistage, stratified probability sample, investigators found that dietary intake of the PUFAs DHA and EPA was associated with a decreased prevalence of periodontitis, although linolenic acid (LNA) did not show this association.
The study involved over 9,000 adults who participated in NHANES between 1999 and 2004 who had received dental examinations. Dietary DHA, EPA and LNA intake were estimated from 24-hour food recall interviews and data regarding supplementary use of PUFAs were captured as well. The NHANES study also collected extensive demographic, ethnic, educational and socioeconomic data, allowing the researchers to take other factors into consideration that might obscure the results.
The prevalence of periodontitis in the study sample was 8.2%. There was an approximately 20% reduction in periodontitis prevalence among subjects who consumed the highest amounts of dietary DHA. The reduction associated with EPA was smaller, while the association with LNA was not statistically significant.
In an accompanying commentary, Elizabeth Krall Kaye, PhD, Professor, Boston University Henry M. Goldman School of Dental Medicine, notes that three interesting results emerged from this study. One was that significantly reduced odds of periodontal disease were observed at relatively modest intakes of DHA and EPA. Another result of note was the suggestion of a threshold dose; that is, there seemed to be no further reduction in odds of periodontal disease conferred by intakes at the highest levels. Third, the results were no different when dietary plus supplemental intakes were examined. These findings are encouraging in that they suggest it may be possible to attain clinically meaningful benefits for periodontal disease at modest levels of n-3 fatty acid intakes from foods.
Foods that contain significant amounts of polyunsaturated fats include fatty fish like salmon, peanut butter, margarine, and nuts.

Tuesday, October 26, 2010

Unexpected Findings of Lead Exposure May Lead to Treating Blindness


Donald A. Fox, a professor of vision sciences in UH’s College of Optometry (UHCO)
Some unexpected effects of lead exposure that may one day help prevent and reverse blindness have been uncovered by a University of Houston (UH) professor and his team.
Donald A. Fox, a professor of vision sciences in UH's College of Optometry (UHCO), described his team's findings in a paper titled "Low-Level Gestational Lead Exposure Increases Retinal Progenitor Cell Proliferation and Rod Photoreceptor and Bipolar Cell Neurogenesis in Mice," published recently online in Environmental Health Perspectives and soon to be published in the print edition.
The study suggests that lead, or a new drug that acts like lead, could transform human embryonic retinal stem cells into neurons that would be transplanted into patients to treat retinal degenerations.
"We saw a novel change in the cellular composition of the retina in mice exposed to low levels of lead during gestation. The retina contained more cells in the rod vision pathway than normal or than we expected," said Fox, who also is a professor of biology and biochemistry, pharmacology and health and human performance. "The rod photoreceptors and bipolar cells in this pathway are responsible for contrast and light/dark detection. These new findings directly relate to the supernormal retinal electrophysiological changes seen in children, monkeys and rats with low-level gestational lead exposure."
Fox said these effects occur at blood lead levels at or below 10 micrograms per deciliter, the current low-level of concern by the Centers for Disease Control and Prevention. Because the effects occur below the "safe level," Fox says it raises more questions about what should be considered the threshold level for an adverse effect of lead on the brain and retina.
Fox has studied lead toxicity for 35 years, specifically as it relates to its effects on the brain and retina of children. His interest in gestational lead exposure started in 1999, when he and colleague Stephen Rothenberg studied a group of children in Mexico City whose mothers had lead exposure throughout their pregnancies. The study was funded to measure the adverse effects of lead poisoning on the nervous system of children born in Mexico City -- a city that has elevated levels of lead in the air due to the use of leaded gasoline, as well as continued use of lead-containing pottery and glassware for food preparation. The study was funded by the U.S. Environmental Protection Agency and the Mexican government and was published in 2002 in the journal Investigative Ophthalmology and Visual Sciences.
Supported by a $1.7 million National Institutes of Health (NIH) grant, Fox and his group set out to find possible reasons for this supernormal retinal response in children. The researchers employed rat and mouse models that covered the three levels of lead found in the blood of the Mexico City mothers -- some below, some right at and some higher than the CDC "safe level." The researchers exposed rodents to lead throughout pregnancy and the first 10 days of life, which is a time period equivalent to human gestation.
Fox said that the early-born retinal progenitor cells give rise to four neuron types, which were not affected by lead exposure. The later-born retinal progenitor cells, he said, give rise to two types of neurons and a glial cell. Surprisingly, only the late-born neurons increased in number. The glial cells, which nurture neurons and sometimes protect them from disease, were not changed at all. The rats and mice both had "bigger, fatter retinas," according to Fox. Interestingly, the lower and moderate doses of lead produced a larger increase in cell number than the high lead dose.
"This is really a novel and highly unexpected result, because lead exposure after birth or during adulthood kills retinal and brain cells, but our study showed that low-level lead exposure during gestation caused cells to proliferate, increased neurons and did not affect glia," Fox said. "So, gestational exposure produces an exact opposite to what was previously shown by our lab and others. It also shows that the timing of chemical exposure during development is just as important as the amount of exposure."
This brought the researchers to a crossroads. On the one hand, the retina is not built to have all these extra cells and, according to unpublished data from Fox's mouse studies, the retinas will start to degenerate as the mice age. This suggests that the retinas of the children from the original Mexico City study should be examined as they might start to degenerate when they are 40 years of age.
"This work has long-term implications in retinal degeneration and diseases where photoreceptors die. If we can figure out how low-level lead increases the number of retinal progenitor cells and selectively produces photoreceptors and bipolar cells, then perhaps a drug can be created to help those with degenerative retinal diseases that eventually cause blindness," Fox said. "Researchers may be able to use lead as tool in transforming embryonic retinal stem cells into rods and bipolar cells that could be transplanted into diseased retinas, ultimately saving sight and reversing blindness."
Fox said that more research is needed before such a potential drug could be developed to mimic the effects of lead. Ideally, this drug would induce human embryonic retinal stem cells to form rods and bipolars that could be transplanted into patients to treat early stages of retinal degeneration.
In addition to Fox and research assistant professor Dr. Weimin Xiao in the UHCO, the research team consisted of a number of Fox's current and former students, including Anand Giddabasappa, a former Ph.D. student and now UHCO alumnus; Jerry E. Johnson, a UH alumnus from the department of biology and biochemistry, former post-doctoral fellow in Fox's lab and now an assistant professor at UH-Downtown; and current Ph.D. students W. Ryan Hamilton, Shawntay Chaney and Shradha Mukherjee.
Supplementing the NIH research project grant, this study also was funded by National Eye Institute training and core grants and a National Institute for Occupational Safety and Health educational resource grant.

Global Warming to Bring More Intense Storms to Northern Hemisphere in Winter and Southern Hemisphere Year Round


Stormy sea in the Southern hemisphere. More intense storms will occur in the Southern Hemisphere throughout the year, whereas in the Northern Hemisphere, the change in storminess will depend on the season -- with more intense storms occurring in the winter and weaker storms in the summer.
Weather systems in the Southern and Northern hemispheres will respond differently to global warming, according to an MIT atmospheric scientist's analysis that suggests the warming of the planet will affect the availability of energy to fuel extratropical storms, or large-scale weather systems that occur at Earth's middle latitudes. The resulting changes will depend on the hemisphere and season, the study found.
More intense storms will occur in the Southern Hemisphere throughout the year, whereas in the Northern Hemisphere, the change in storminess will depend on the season -- with more intense storms occurring in the winter and weaker storms in the summer. The responses are different because even though the atmosphere will get warmer and more humid due to global warming, not all of the increased energy of the atmosphere will be available to power extratropical storms. It turns out that the changes in available energy depend on the hemisphere and season, according to the study, published in the Proceedings of the National Academy of Sciences.
Fewer extratropical storms during the summer in the Northern Hemisphere could lead to increased air pollution, as "there would be less movement of air to prevent the buildup of pollutants in the atmosphere," says author Paul O'Gorman, the Victor P. Starr Career Development Assistant Professor of Atmospheric Science in MIT's Department of Earth, Atmospheric and Planetary Sciences. Likewise, stronger storms year-round in the Southern Hemisphere would lead to stronger winds over the Antarctic Ocean, which would impact ocean circulation. Because the ocean circulation redistributes heat throughout the world's oceans, any change could impact the global climate.
O'Gorman's analysis examined the relationship between storm intensity and the amount of energy available to create the strong winds that fuel extratropical storms. After analyzing data compiled between 1981 and 2000 on winds in the atmosphere, he noticed that the energy available for storms depended on the season. Specifically, it increased during the winter, when extratropical storms are strong, and decreased during the summer, when they are weak.
Because this relationship could be observed in the current climate, O'Gorman was confident that available energy would be useful in relating temperature and storminess changes in global-warming simulations for the 21st century. After analyzing these simulations, he observed that changes in the energy available for storms were linked to changes in temperature and storm intensity, which depended on the season and hemisphere. He found that available energy increased throughout the year for the Southern Hemisphere, which led to more intense storms. But for the Northern Hemisphere, O'Gorman observed that available energy increased during the winter and decreased during the summer.
This makes sense, O'Gorman says, because the changes in the strength of extratropical storms depend on where in the atmosphere the greatest warming occurs; if the warming is greatest in the lower part of the atmosphere, this tends to create stronger storms, but if it is greatest higher up, this leads to weaker storms. During the Northern Hemisphere summer, the warming is greatest at higher altitudes, which stabilizes the atmosphere and leads to less intense storms.
Although the analysis suggests that global warming will result in weaker Northern Hemisphere storms during the summer, O'Gorman says that it's difficult to determine the degree to which those storms will weaken. That depends on the interaction between the atmosphere and the oceans, and for the Northern Hemisphere, this interaction is linked to how quickly the Arctic Ocean ice disappears. Unfortunately, climate scientists don't yet know the long-term rate of melting.

Energy Saving Lamp Is Eco-Winner: Swiss Researcher Evaluates Environmental Friendliness of Light Sources

In a new study, researchers with Switzerland's Empa have investigated the ecobalances of various household light sources. In doing so they took into account not only energy consumption but also the manufacture and disposal processes. They also evaluated usage with different electrical power mixes. The clear winner is the compact fluorescent lamp, commonly known as the energy saving lamp.
Since Sept. 1, 2009 the sale and import of incandescent light bulbs -- more accurately known as tungsten filament bulbs -- with the lowest energy efficiency classifications F and G have been banned in Switzerland. In addition, on the same day Switzerland also adopted the EU's incandescent light bulb ban, which legislates a step-by-step phasing-out of these inefficient light sources. In accordance with the new EU rules, 100 Watt bulbs were banned on September 1st, 2009, and a year later all bulbs rated between 75 and 100 Watts will be withdrawn from the market. After another year's transition period, all bulbs rated at 60 Watts and above will be banned, and finally, on September 1st, 2012, no more conventional incandescent light bulbs will be allowed to be sold. These regulations have met with resistance from many quarters, with a great deal of criticism being directed at compact fluorescent lamps (CFLs), often called energy saving lamps. One of the main concerns of opponents of these light sources is the fact that they contain mercury.
Roland Hischier, Tobias Welz and Lorenz Hilty, of Empa's "Technology and Society" Laboratory, have examined in detail the different lighting methods currently in use in order to find out which source of illumination is in actual fact the most environmentally friendly. They investigated four different kinds of lamp: the classical incandescent bulb, halogen lamps, fluorescent tubes and energy saving lamps. In order to evaluate the total effect of a lamp on the environment over its entire life, the researchers prepared a life cycle analysis for each kind. This takes into consideration the raw material and energy consumption of a lamp during its complete life cycle, from production and usage to final disposal. The total ecological burden can, for example, be represented by so-called "eco indicator points" (EIPs). The total point tally is a measure of the sum of all the damage the product in question inflicts on human health and the environment, as well as the usage of resources incurred during its manufacture.
Production and disposal play an insignificant role
The first result the Empa scientists uncovered was that the proportion of the total environmental effect caused by the production of all the lamps was small. Using the Swiss electrical power mix as a basis, the manufacture of an incandescent bulb, for example, was responsible for just one per cent of its total environmental effect. By comparison, the production of an energy saving lamp, at 15 per cent of the total, is significantly higher, but still a minor share. The reason energy saving lamps have a larger manufacturing footprint is the electronic circuitry they contain. Using the European power mix (which includes a significant fraction of electricity generated by coal-fired power stations) as a basis for calculation leads to much lower values for incandescent bulbs and energy saving lamps of 0.3 per cent and four per cent respectively.
The method of disposal of the lamps at the end of their useful life is also not an important factor in the overall ecobalance calculation. In fact, in the case of energy saving lamps the environmental effects are reduced by as much as 15 per cent when they are recycled instead of being incinerated. But even when they are incinerated in a waste disposal facility, the much criticized mercury release is quantitatively insignificant. This is because the overwhelming proportion of mercury in the environment is emitted by fossil fuel burning power stations.
The scale of this phenomenon becomes clear by taking a coal-fired power generation plant as an example. Depending on whether it uses brown coal or anthracite as fuel, a power station emits some 0.042 to 0.045 milligrams of mercury for every kilowatt-hour of energy it produces. A plant generating 1000 megawatts of power therefore releases 42 to 45 grams of mercury into the atmosphere every hour. By comparison, since 2005 compact fluorescent lamps sold in Europe may contain a maximum of only 5 milligrams of mercury. In other words, a coal-fired power station emits the same quantity of mercury every hour as is contained in 8400 to 9000 energy saving lamps.
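That comparison is simple arithmetic; the short script below reproduces it using the figures quoted in the paragraph above.

```python
# Reproduce the article's mercury comparison.
hg_per_kwh_mg = (0.042, 0.045)      # mercury emitted per kWh, brown coal vs anthracite
plant_kw = 1_000_000                # a 1000 MW coal-fired plant
cfl_hg_mg = 5.0                     # EU limit per compact fluorescent lamp since 2005

for mg_per_kwh in hg_per_kwh_mg:
    grams_per_hour = mg_per_kwh * plant_kw / 1000.0   # the plant produces plant_kw kWh each hour
    lamps = grams_per_hour * 1000.0 / cfl_hg_mg
    print(f"{grams_per_hour:.0f} g of mercury per hour = {lamps:.0f} lamps' worth")
# -> 42 g/h (8400 lamps) and 45 g/h (9000 lamps), matching the article.
```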
It all depends on the use
By far the greatest environmental effects are caused by actually using the lamps. An important factor here is the source of power used, since an incandescent lamp run on electricity generated by a hydroelectric plant is less polluting than an energy saving lamp running on the European power mix. "By choosing to power lamps with electricity generated in an environmentally friendly way one can achieve more in ecological terms than by simply replacing incandescent bulbs with compact fluorescent lamps," clarifies Roland Hischier.
But energy saving lamps do have an ecological advantage. This is shown by the "environmental break-even point," the operating time after which the energy saving lamp's lower power consumption has offset its more demanding manufacture. Using the European power mix, which is produced mainly by fossil fuel powered generation plants, the break-even point is reached very quickly -- after some 50 hours -- due to the significantly higher power consumption of the tungsten filament bulb. With the Swiss power mix this point is reached after 187 hours. But with a typical lifetime of about 10,000 hours for a compact fluorescent energy saving lamp (compared to some 1,000 hours for an incandescent bulb), the purchase of such a lamp pays for itself very quickly in an ecological sense.
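The study's actual eco-indicator values are not given in the article, so the sketch below uses invented placeholder numbers purely to show how such a break-even time follows from the manufacturing burden and the per-hour burden of operation.

```python
# Hypothetical illustration of the "environmental break-even point": the hour at
# which the CFL's lower running burden has paid back its costlier manufacture.
# All EIP values below are invented for illustration; only the wattages and the
# structure of the calculation reflect the comparison described in the article.
bulb_w, cfl_w = 60.0, 13.0              # comparable incandescent vs energy saving lamp
eip_per_kwh = 20.0                      # burden of one kWh for a given power mix (hypothetical)
bulb_mfg_eip, cfl_mfg_eip = 1.0, 15.0   # manufacturing burdens (hypothetical)

extra_manufacture = cfl_mfg_eip - bulb_mfg_eip
savings_per_hour = (bulb_w - cfl_w) / 1000.0 * eip_per_kwh
print(f"break-even after about {extra_manufacture / savings_per_hour:.0f} hours of use")
# A dirtier power mix (larger eip_per_kwh) brings the break-even sooner, which is
# the pattern the Empa study reports (about 50 h EU mix vs 187 h Swiss mix).
```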

Monday, October 25, 2010

Researchers working on batteries smaller than a grain of salt

DARPA's proposed batteries would be smaller than these grains of salt
As development of micro- and nano-scale devices continues to advance, so does the need for an equally-tiny method of powering them. There’s not much point in developing a surveillance micro air vehicle the size of a housefly, for instance, if it requires a watch battery in order to fly. That’s why DARPA (the U.S. Defense Advanced Research Projects Agency) is funding a project to create really tiny batteries. Just how tiny are we talking, here? Well, they’re aiming for something smaller than a grain of salt.
Jane Chang, an engineer from the University of California, Los Angeles, is designing the electrolyte that will allow the charge to flow between electrodes in such batteries. “We're trying to achieve the same power densities, the same energy densities as traditional lithium ion batteries, but we need to make the footprint much smaller,” she stated.
In order to do so, she has coated well-ordered micro-pillars or nano-wires with lithium aluminosilicate, an electrolyte material. The structures are fabricated to maximize their surface-to-volume ratio, for maximum energy density. The lithium aluminosilicate was applied through a process of atomic layer deposition, in which layers of a material just one atom thick are deposited onto a surface.
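To get a feel for why pillar or wire geometries help, one can compare the coated area of a pillar array with a flat film of the same footprint. The dimensions in this sketch are invented for illustration and are not the UCLA team's values.

```python
import math

# Hypothetical geometry: how much electrode/electrolyte area an array of coated
# micro-pillars packs into the same footprint as a flat film. Dimensions are
# invented for illustration only.
footprint_um = 100.0        # side of a square footprint, micrometers
pillar_r_um = 0.5           # pillar radius
pillar_h_um = 20.0          # pillar height
pitch_um = 2.0              # center-to-center spacing

pillars = int(footprint_um / pitch_um) ** 2
flat_area = footprint_um ** 2
pillar_area = flat_area + pillars * 2 * math.pi * pillar_r_um * pillar_h_um
print(f"area gain from pillars: {pillar_area / flat_area:.1f}x")   # prints roughly 17x here
```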
The electrodes have also been developed, but a fully functioning salt-sized battery has yet to be assembled, and probably won't be for some time.
Chang presented her work at the AVS 57th International Symposium & Exhibition this week in Albuquerque, New Mexico.

New Nano Techniques Integrate Electron Gas-Producing Oxides With Silicon


In cold weather, many children can't resist breathing onto a window and writing in the condensation. Now imagine the window as an electronic device platform, the condensation as a special conductive gas, and the letters as lines of nanowires.
A team led by University of Wisconsin-Madison Materials Science and Engineering Professor Chang-Beom Eom has demonstrated methods to harness essentially this concept for broad applications in nanoelectronic devices, such as next-generation memory or tiny transistors. The discoveries were published Oct. 19 by the journal Nature Communications.
Eom's team has developed techniques to produce structures based on electronic oxides that can be integrated on a silicon substrate -- the most common electronic device platform.
"The structures we have developed, as well as other oxide-based electronic devices, are likely to be very important in nanoelectronic applications, when integrated with silicon," Eom says.
The term "oxide" refers to a compound with oxygen as a fundamental element. Oxides include millions of compounds, each with unique properties that could be valuable in electronics and nanoelectronics.
Usually, oxide materials cannot be grown on silicon because oxides and silicon have different, incompatible crystal structures. Eom's technique combines single-crystal epitaxy, postannealing and etching to create a process that permits the oxide structure to reside on silicon -- a significant accomplishment that solves a very complex challenge.
The new process allows the team to form a structure that puts three-atom-thick layers of lanthanum-aluminum-oxide in contact with strontium-titanium-oxide and then put the entire structure on top of a silicon substrate.
These two oxides are important because an "electron gas" forms at the interface of their layers, and a scanning probe microscope can make this gas layer conductive. The tip of the microscope is dragged along the surface with nanometer-scale accuracy, leaving behind a pattern of electrons that forms the one-nanometer-thick conducting gas layer. Using the tip, Eom's team can "draw" lines of these electrons and form conducting nanowires. The researchers also can "erase" those lines to take away conductivity in a region of the gas.
In order to integrate the oxides on silicon, the crystals must have a low level of defects, and researchers must have atomic control of the interface. More specifically, the top layer of strontium-titanium-oxide has to be totally pure and match up with a totally pure layer of lanthanum-oxide at the bottom of the lanthanum-aluminum-oxide; otherwise, the gas layer won't form between the oxide layers. Finally, the entire structure has been tuned to be compatible with the underlying silicon.
Eom's team includes UW-Madison Physics Professor Mark Rzchowski, postdocs and graduate students in materials science and engineering and physics, as well as collaborators from the University of Michigan, Ann Arbor, and the University of Pittsburgh, Pennsylvania. The National Science Foundation supports the research.

Chemical Engineers Use Gold to Discover Breakthrough for Creating Biorenewable Chemicals


University of Virginia chemical engineers Robert J. Davis and Matthew Neurock have uncovered the key features that control the high reactivity of gold nanoparticles in a process that oxidizes alcohols in water. The research is an important first step in unlocking the potential of using metal catalysts for developing biorenewable chemicals.
The scientific discovery could one day serve as the foundation for creating a wide range of consumer products from biorenewable carbon feedstocks, as opposed to the petroleum-based chemicals currently being used as common building blocks for commodities such as cosmetics, plastics, pharmaceuticals and fuels.
The researchers' paper on the subject appears in the journal Science.
The U.Va. researchers have shown that gold -- the most inert of all metals -- has high catalytic reactivity when placed in alkaline water. They studied the mechanism for oxidizing ethanol and glycerol into acids, such as acetic acid and glyceric acid, which are used in everything from food additives to glues, by using gold and platinum as catalysts.
"We've shown that by better understanding the oxidation chemistry on gold and other metal catalysts, we can begin to outline a path for developing a range of different reactions needed to transition from a petroleum-based chemical industry to one that uses biorenewable carbon feedstocks," said Davis, principal investigator on the research paper and professor and chair of the Department of Chemical Engineering in U.Va.'s School of Engineering and Applied Science.
By using water to help oxidize the alcohols with oxygen in the air as opposed to using expensive inorganic oxidants and harmful organic solvents, the growing field of biorenewable chemicals aims to offer a more sustainable, environmentally safe alternative to traditional petrochemical processes.
Until the completion of the U.Va. group's research, it wasn't fully understood how water can play an important role in the oxidation catalysis of alcohols. In the past, catalysis in water hasn't been a major issue for the chemical industry: Because petroleum and many petroleum products aren't water-soluble, water hasn't generally been considered to be a useful solvent.
The researchers, all from the Department of Chemical Engineering in U.Va.'s Engineering School, combined concepts in electrochemistry and catalysis to uncover the critical factors in the oxidation of alcohols to chemical intermediates.
The research also required merging experimental lab work led by Davis with Neurock's expertise in the theory of catalytic chemistry. Graduate students Bhushan N. Zope and David D. Hibbitts were essential members of the investigative teams.

Nanotube Thermopower: Efforts to Store Energy in Carbon Nanotubes


When weighing options for energy storage, different factors can be important, such as energy density or power density, depending on the circumstances. Generally batteries -- which store energy by separating chemicals -- are better for delivering lots of energy, while capacitors -- which store energy by separating electrical charges -- are better for delivering lots of power (energy per time). It would be nice, of course, to have both.
At the AVS 57th International Symposium & Exhibition, which takes place this week at the Albuquerque Convention Center in New Mexico, Michael Strano and his colleagues at MIT are reporting on efforts to store energy in thin carbon nanotubes by adding fuel along the length of each tube; the stored chemical energy can later be turned into electricity by heating one end of the nanotubes. This thermopower process works as follows: the heat sets off a chain reaction, and a wave of conversion travels down the nanotubes at a speed of about 10 m/s.
"Carbon nanotubes continue to teach us new things -- thermopower waves as a first discovery open a new space of power generation and reactive wave physics," Strano says.
A typical lithium ion battery has a power density of 1 kW/kg. Although the MIT researchers have yet to scale up their nanotube materials, they obtain discharge pulses with power densities around 7 kW/kg.
Strano is also reporting new results on experiments exploiting carbon nanopores of unprecedented size, 1.7 nm in diameter and 500 microns long.
"Carbon nanopores," he says, "allow us to trap and detect single molecules and count them one by one," the first time this has been done. And this was at room temperature.
The single molecules under study can move across the nanotubes one at a time in a process called coherence resonance. "This has never been shown before for any inorganic system to date," says Strano, "but it underpins the workings of biological ion channels."

Plants Play Larger Role Than Thought in Cleaning Up Air Pollution, Research Shows


Poplars, aspens, other trees provide extensive "ecosystem services." 
Vegetation plays an unexpectedly large role in cleansing the atmosphere, a new study finds.
The research, led by scientists at the National Center for Atmospheric Research (NCAR) in Boulder, Colo., uses observations, gene expression studies, and computer modeling to show that deciduous plants absorb about a third more of a common class of air-polluting chemicals than previously thought.
The new study, results of which are being published in Science Express, was conducted with co-authors from the University of Northern Colorado and the University of Arizona. It was supported in part by the National Science Foundation (NSF), NCAR's sponsor.
"Plants clean our air to a greater extent than we had realized," says NCAR scientist Thomas Karl, the lead author. "They actively consume certain types of air pollution."
The research team focused on a class of chemicals known as oxygenated volatile organic compounds (oVOCs), which can have long-term impacts on the environment and human health.
"The team has made significant progress in understanding the complex interactions between plants and the atmosphere," says Anne-Marie Schmoltner of NSF's Division of Atmospheric and Geospace Sciences, which funded the research.
The compounds form in abundance in the atmosphere from hydrocarbons and other chemicals that are emitted from both natural sources--including plants--and sources related to human activities, including vehicles and construction materials.
The compounds help shape atmospheric chemistry and influence climate.
Eventually, some oVOCs evolve into tiny airborne particles, known as aerosols, that have important effects on both clouds and human health.
By measuring oVOC levels in a number of ecosystems in the United States and other countries, the researchers determined that deciduous plants appear to be taking up the compounds at an unexpectedly fast rate--as much as four times more rapidly than previously thought.
The uptake was especially rapid in dense forests and most evident near the tops of forest canopies, which accounted for as much as 97 percent of the oVOC uptake that was observed.
Karl and his colleagues then tackled a follow-up question: How do plants absorb such large quantities of these chemicals?
The scientists moved their research into their laboratories and focused on poplar trees. The species offered a significant advantage in that its genome has been sequenced.
The team found that when the study trees were under stress, either because of a physical wound or because of exposure to an irritant such as ozone pollution, they began sharply increasing their uptake of oVOCs.
At the same time, changes took place in expression levels of certain genes that indicated heightened metabolic activity in the poplars.
The uptake of oVOCs, the scientists concluded, appeared to be part of a larger metabolic cycle.
Plants can produce chemicals to protect themselves from irritants and repel invaders such as insects, much as a human body may increase its production of white blood cells in reaction to an infection.
But these chemicals, if produced in enough quantity, can become toxic to the plant itself.
In order to metabolize these chemicals, the plants start increasing the levels of enzymes that transform the chemicals into less toxic substances.
At the same time, as it turns out, the plant draws down more oVOCs, which can be metabolized by the enzymes.
"Our results show that plants can actually adjust their metabolism and increase their uptake of atmospheric chemicals as a response to various types of stress," says Chhandak Basu of the University of Northern Colorado, a co-author.
"This complex metabolic process within plants has the side effect of cleansing our atmosphere."
Once they understood the extent to which plants absorb oVOCs, the research team fed the information into a computer model that simulates chemicals in the atmosphere worldwide.
The results indicated that, on a global level, plants are taking in 36 percent more oVOCs than had previously been accounted for in studies of atmospheric chemistry.
Additionally, since plants are directly removing the oVOCs, fewer of the compounds are evolving into aerosols.
"This really transforms our understanding of some fundamental processes taking place in our atmosphere,"

Value-Added Sulfur Scrubbing: Converting Acid Rain Chemicals Into Useful Products

Power plants that burn fossil fuels remain the main source of electricity generation across the globe. Modern power plants have scrubbers to remove sulfur compounds from their flue gases, which has helped reduce the problem of acid rain. Now, researchers in India have devised a way to convert the waste material produced by the scrubbing process into value-added products.
They describe details in the International Journal of Environment and Pollution.
Fossil fuels contain sulfur compounds that are released as sulfur dioxide during combustion. As a result, flue gas desulfurisation (FGD) has become mandatory in most of the developed world. There are numerous methods, but most are based on wet limestone and caustic scrubbing. Wet limestone scrubbing generates large quantities of solid gypsum waste, while wet caustic scrubbing generates alkaline waste containing an aqueous mixture of bisulfite, sulfite and sulfate. Sulfate can be removed from water by desalination processes such as reverse osmosis and ion exchange, but these are expensive.
Rima Biswas of the National Environmental Engineering Research Institute (NEERI), in Nagpur, India, and colleagues have designed a chemo-biological approach for treating the sulfate-rich effluent generated during wet scrubbing of flue gas emissions from fossil fuel fired power plants. The technique involves microbial sulfate reduction in an anaerobic up-flow packed-bed bioreactor, with ethanol as the carbon source essential for microbial growth.
The team found that more than 90% of the total equivalent sulfate present in the effluent was reduced to sulfide, at a rate of up to 3 kilograms of sulfate per cubic meter per day. In this form the waste can be easily converted into elemental sulfur for industrial use or into metal sulfide nanoparticles for research.
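For a rough sense of scale, the reported conversion and volumetric rate can be plugged into a simple sizing estimate. The sketch below reads the rate as roughly 3 kilograms of sulfate reduced per cubic meter of reactor volume per day, and the daily sulfate load is a made-up figure for illustration only:

    # Back-of-envelope sizing for the up-flow packed-bed sulfate-reducing bioreactor.
    # The >90% conversion and the ~3 kg sulfate per cubic meter per day rate are the
    # figures quoted above; the daily sulfate load is a hypothetical example.

    sulfate_load_kg_per_day = 300.0   # hypothetical sulfate load in the scrubber effluent
    conversion = 0.90                 # fraction of sulfate reduced to sulfide (reported >90%)
    volumetric_rate = 3.0             # kg sulfate reduced per cubic meter of reactor per day

    sulfate_reduced = sulfate_load_kg_per_day * conversion   # kg of sulfate reduced per day
    reactor_volume = sulfate_reduced / volumetric_rate       # cubic meters of packed bed needed

    print(f"Approximate packed-bed volume required: {reactor_volume:.0f} m^3")  # ~90 m^3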

Sunday, October 24, 2010

NASA-Engineered Collision Spills New Moon Secrets


Peter Schultz and graduate student Brendan Hermalyn analyzed data from bits of the Moon's surface kicked up by a NASA-engineered collision. They found unexpected complexity -- and traces of silver.
Scientists led by Brown University are offering the first detailed explanation of the crater formed when a NASA rocket slammed into the Moon last fall and information about the composition of the lunar soil at the poles that never has been sampled. The findings are published in a set of papers in Science stemming from the successful NASA mission, called LCROSS for Lunar CRater Observing and Sensing Satellite.
Mission control at NASA Ames sent the emptied upper stage of a rocket crashing into the Cabeus crater near the Moon's south pole last October. A second spacecraft followed to analyze the ejected debris for signs of water and other constituents of the super-chilled lunar landscape.
In one of the papers, Brown planetary geologist Peter Schultz and graduate student Brendan Hermalyn, along with NASA scientists, write that the cloud kicked up by the rocket's impact showed the Moon's soil and subsurface is more complex than believed: Not only did the lunar regolith -- the soil -- contain water, it also harbored other compounds, such as hydroxyl, carbon monoxide, carbon dioxide, ammonia, free sodium, and, in a surprise, silver.
Combined, the assortment of volatiles -- the chemical elements weakly attached to regolith grains -- gives scientists clues where they came from and how they got to the polar craters, many of which haven't seen sunlight for billions of years and are among the coldest spots in the solar system.
Schultz, lead author on the Science paper detailing the impact crater and the ejecta cloud, thinks many of the volatiles originated with the billions-of-years-long fusillade of comets, asteroids and meteoroids that have pummeled the Moon. He thinks an assortment of elements and compounds, deposited in the regolith all over the Moon, could have been quickly liberated by later small impacts or could have been heated by the sun, supplying them with energy to escape and move around until they reached the poles, where they became trapped beneath the shadows of the frigid craters.
"This place looks like it's a treasure chest of elements, of compounds that have been released all over the Moon," Schultz said, "and they've been put in this bucket in the permanent shadows."
Schultz believes the variety of volatiles found in Cabeus crater's soil implies a kind of tug of war between what is being accumulated and what is being lost to the tenuous lunar atmosphere.
"There's a balance between delivery and removal," explained Schultz, who has been on the Brown faculty since 1984 and has been studying the Moon since the 1960s. "This suggests the delivery is winning. We're collecting material, not simply getting rid of it."
Astronauts sent as part of NASA's Apollo missions found trace amounts of silver, along with gold, on the near-side (Earth-facing side) of the Moon. The discovery of silver at Cabeus crater suggests that silver atoms throughout the moon migrated to the poles. Nevertheless, the concentration detected from Cabeus "doesn't mean we can go mining for it," Schultz said.
The rocket's impact within Cabeus produced a crater 70 to 100 feet in diameter and tossed up lunar material from as deep as six feet. The plume of debris kicked up by the impact reached more than a half-mile above the floor of Cabeus, high enough to rise into sunlight, where its properties could be measured for almost four minutes by a variety of spectroscopic instruments. The amount of ejecta measured was almost two tons, the scientists report. The scientists also noted a slight delay, lasting roughly one-third of a second, in the flash generated after the collision. This indicated to them that the surface struck may be different from the loose, almost crunchy surface trod by the Apollo astronauts.
"If it had been simply lunar dust, then it would have heated up immediately and brightened immediately," Schultz said. "But this didn't happen."
The scientists also noticed a one-half-mile, near-vertical column of ejecta still returning to the surface. Even better, the LCROSS spacecraft was able to observe the plume as it followed on the heels of the crashing rocket. Schultz and Hermalyn had observed such a plume when conducting crater-impact experiments using hollow spheres (that mimicked the rocket that crashed into Cabeus) at the NASA Ames Vertical Gun Range in California before the LCROSS impact.
"This was not your ordinary impact," Hermalyn said. "So in order to understand what we were going to see (with LCROSS) and maybe what effects that would have on the results, we had to do all these different experiments."
Even though the mission has been judged a success, Schultz said it posed at least as many questions as it answered.
"There's this archive of billions of years (in the Moon's permanently shadowed craters)," Schultz said. "There could be clues there to our Earth's history, our solar system, our galaxy. And it's all just sitting there, this hidden history, just begging us to go back."
Contributing authors on the paper include Anthony Colaprete, Kimberly Ennico, Mark Shirley, and William Marshall, all from NASA Ames Research Center in California. NASA funded the research.

Light on Silicon Better Than Copper?


This is Nan Jokerst, left, and Sabarni Palit in the lab.
Step aside copper and make way for a better carrier of information -- light.
As good as the metal has been in zipping information from one circuit to another on silicon inside computers and other electronic devices, optical signals can carry much more, according to Duke University electrical engineers. So the engineers have designed and demonstrated microscopically small lasers integrated with thin-film light guides on silicon that could replace the copper in a host of electronic products.
The structures on silicon not only contain tiny light-emitting lasers, but connect these lasers to channels that accurately guide the light to its target, typically another nearby chip or component. This new approach could help engineers who, in their drive to create tinier and faster computers and devices, are studying light as the basis for the next-generation information carrier.
The engineers believe they have solved some of the unanswered riddles facing scientists trying to create and control light at such a miniscule scale.
"Getting light onto silicon and controlling it is the first step toward chip scale optical systems," said Sabarni Palit, who this summer received her Ph.D. while working in the laboratory of Nan Marie Jokerst, J.A. Jones Distinguished Professor of Electrical and Computer Engineering at Duke's Pratt School of Engineering.
The results of the team's experiments, which were supported by the Army Research Office, were published online in the journal Optics Letters.
"The challenge has been creating light on such a small scale on silicon, and ensuring that it is received by the next component without losing most of the light," Palit said.
"We came up with a way of creating a thin film integrated structure on silicon that not only contains a light source that can be kept cool, but can also accurately guide the wave onto its next connection," she said. "This integration of components is essential for any such chip-scale, light-based system."
The Duke team developed a method of taking the thick substrate off a laser and bonding this thin-film laser to silicon. The lasers are about one one-hundredth of the thickness of a human hair. These lasers are connected to other structures by laying down a microscopic layer of polymer that covers one end of the laser and runs off in a channel to other components. Each layer of the laser and light channel is given its specific characteristics, or functions, through nano- and micro-fabrication processes and by selectively removing portions of the substrate with chemicals.
"In the process of producing light, lasers produce heat, which can cause the laser to degrade," Sabarni said. "We found that including a very thin band of metals between the laser and the silicon substrate dissipated the heat, keeping the laser functional."
For Jokerst, the ability to reliably facilitate individual chips or components that "talk" to each other using light is the next big challenge in the continuing process of packing more processing power into smaller and smaller chip-scale packages.
"To use light in chip-scale systems is exciting," she said. "But the amount of power needed to run these systems has to be very small to make them portable, and they should be inexpensive to produce. There are applications for this in consumer electronics, medical diagnostics and environmental sensing."
The work on this project was conducted in Duke's Shared Materials Instrumentation Facility, which, like similar facilities in the semiconductor industry, allows the fabrication of intricate materials in a totally "clean" setting. Jokerst is the facility's executive director.
Other members of the team were Duke's Mengyuan Huang, as well as Dr. Jeremy Kirch and Professor Luke Mawst from the University of Wisconsin at Madison.

Proton Mechanism Used by Flu Virus to Infect Cells Discovered


Mei Hong of Iowa State University and the Ames Laboratory, left, and Fanghao Hu of Iowa State used solid-state nuclear magnetic resonance spectroscopy to investigate the proton channel that connects a flu virus to a healthy cell.
The flu virus uses a shuttle mechanism to relay protons through a channel in a process necessary for the virus to infect a host cell, according to a research project led by Mei Hong of Iowa State University and the Ames Laboratory.
The findings are published in the Oct. 22 issue of the journal Science.
Hong, an Iowa State professor of chemistry and an associate of the U.S. Department of Energy's Ames Laboratory, said her research team used solid-state nuclear magnetic resonance (NMR) spectroscopy to determine the structure and workings of the proton channel that connects the flu virus to a healthy cell.
She said a full understanding of that mechanism could help medical researchers design drugs that stop protons from moving through the channel.
That proton channel is an important part of the life cycle of a flu virus. The virus begins an infection by attaching itself to a healthy cell. The healthy cell surrounds the virus and takes it inside through a process called endocytosis. Once inside the cell, the virus uses a protein called M2 to open a channel. Protons from the healthy cell flow through the channel into the virus and raise its acidity. That triggers the release of the virus' genetic material into the healthy cell. The virus then hijacks the healthy cell's resources to replicate itself.
Hong and her research team -- Fanghao Hu, an Iowa State doctoral student in chemistry; and Wenbin Luo, a former Iowa State doctoral student who is now a spectroscopist research associate at Penn State University -- focused their attention on the structure and dynamics of the proton-selective amino acid residue, a histidine in the transmembrane part of the protein, to determine how the channel conducts protons. Their work was supported by grants from the National Science Foundation and the National Institutes of Health.
Two models had been proposed for the proton-conducting mechanism:
  • A "shutter" channel that expands at the charged histidine because of electrostatic repulsion, thus allowing a continuous hydrogen-bonded water chain that takes protons into the virus.
  • Or a "shuttle" model featuring histidine rings that rearrange their structure in some way to capture protons and relay them inside.
Hong's research team found that the histidine rings reorient by 45 degrees more than 50,000 times per second in the open state, but are immobile in the closed state. The energy barrier for the open-state ring motion agrees well with the energy barrier for proton conduction, which suggests that the M2 channel dynamically shuttles the protons into the virus. The chemists also found that the histidine residue forms multiple hydrogen bonds with water, which helps it to dissociate the extra proton.
"The histidine acts like a shuttle," Hong said. "It picks up a proton from the exterior and flips to let it get off to the interior."
The project not only provided atomic details of the proton-conducting apparatus of the flu virus, but also demonstrated the abilities of solid-state NMR.
"The structural information obtained here is largely invisible to conventional high-resolution techniques," the researchers wrote in their Science paper, "and demonstrates the ability of solid-state NMR to elucidate functionally important membrane protein dynamics and chemistry."

Energy Revolution Key to Complex Life: Depends on Mitochondria, Cells' Tiny Power Stations


Artist's rendering of basic cell structure, including mitochondria.

The evolution of complex life is strictly dependent on mitochondria, the tiny power stations found in all complex cells, according to a new study by Dr Nick Lane, from UCL (University College London), and Dr William Martin, from the University of Dusseldorf.
"The underlying principles are universal. Energy is vital, even in the realm of evolutionary inventions," said Dr Lane, UCL Department of Genetics, Evolution and Environment. "Even aliens will need mitochondria."
For 70 years scientists have reasoned that the evolution of the nucleus was the key to complex life. Now, in work published in Nature, Lane and Martin reveal that in fact mitochondria were fundamental to the development of complex innovations like the nucleus because of their function as power stations in the cell.
"This overturns the traditional view that the jump to complex 'eukaryotic' cells simply required the right kinds of mutations. It actually required a kind of industrial revolution in terms of energy production," explained Dr Lane.
At the level of our cells, humans have far more in common with mushrooms, magnolias and marigolds than we do with bacteria. The reason is that complex cells like those of plants, animals and fungi have specialized compartments including an information centre, the nucleus, and power stations -- mitochondria. These compartmentalised cells are called 'eukaryotic', and they all share a common ancestor that arose just once in four billion years of evolution.
Scientists now know that this common ancestor, 'the first eukaryote', was a lot more sophisticated than any known bacterium. It had thousands more genes and proteins than any bacterium, despite sharing other features, like the genetic code. But what enabled eukaryotes to accumulate all these extra genes and proteins? And why don't bacteria bother?
By focusing on the energy available per gene, Lane and Martin showed that an average eukaryotic cell can support an astonishing 200,000 times more genes than bacteria.
"This gives eukaryotes the genetic raw material that enables them to accumulate new genes, big gene families and regulatory systems on a scale that is totally unaffordable to bacteria," said Dr Lane. "It's the basis of complexity, even if it's not always used."
"Bacteria are at the bottom of a deep chasm in the energy landscape, and they never found a way out," explained Dr Martin. "Mitochondria give eukaryotes four or five orders of magnitude more energy per gene, and that enabled them to tunnel straight through the walls of the chasm."
The authors went on to address a second question: why can't bacteria just compartmentalise themselves to gain all the advantages of having mitochondria? They often made a start but never got very far.
The answer lies in the tiny mitochondrial genome. These genes are needed for cell respiration, and without them eukaryotic cells die. If cells get bigger and more energetic, they need more copies of these mitochondrial genes to stay alive.
Bacteria face exactly the same problem. They can deal with it by making thousands of copies of their entire genome -- as many as 600,000 copies in the case of giant bacterial cells like Epulopiscium, an extreme case that lives only in the unusual guts of surgeonfish. But all this DNA has a big energetic cost that cripples even giant bacteria -- stopping them from turning into more complex eukaryotes. "The only way out," said Dr Lane, "is if one cell somehow gets inside another one -- an endosymbiosis."
Cells compete among themselves. When living inside other cells they tend to cut corners, relying on their host cell wherever possible. Over evolutionary time, they lose unnecessary genes and become streamlined, ultimately leaving them with a tiny fraction of the genes they started out with: only the ones they really need.
The key to complexity is that these few remaining genes weigh almost nothing. Calculate the energy needed to support a normal bacterial genome in thousands of copies and the cost is prohibitive. Do it for the tiny mitochondrial genome and the cost is easily affordable, as shown in the Nature paper. The difference is the amount of DNA that could be supported in the nucleus, not as repetitive copies of the same old genes, but as the raw material for new evolution.
"If evolution works like a tinkerer, evolution with mitochondria works like a corps of engineers," said Dr Martin.
The trouble is that, while cells within cells are common in eukaryotes, which often engulf other cells, they're vanishingly rare in more rigid bacteria. And that, Lane and Martin conclude, may well explain why complex life -- eukaryotes -- only evolved once in all of Earth's history.

Measuring Changes in Rock: Research Looks at Effect of Captured and Stored Carbon Dioxide on Minerals


The capture and storage of carbon dioxide in deep geologic formations, a strategy for minimizing the impacts of greenhouse gases on global warming, may currently be technologically feasible. But one key question that must be answered is the ability of subsurface materials to maintain their integrity in the presence of supercritical carbon dioxide -- a fluid state, reached at high temperature and pressure, in which the carbon dioxide behaves partly like a gas and partly like a liquid.
A research team at the Pacific Northwest National Laboratory has developed tools in EMSL, a national user facility at PNNL, to study the effects of supercritical CO2 on minerals commonly found in potential storage sites. They are presenting their results at the AVS 57th International Symposium & Exhibition, which takes place this week at the Albuquerque Convention Center in New Mexico.
"The mechanisms of surface interactions with carbon dioxide under these conditions are unknown," says Scott Lea of PNNL. "We need to know if the carbon dioxide can dry out the clay minerals, creating cracks or have other interactions that could create pores in the rock."
Because carbon dioxide will be stored at pressures many times greater than atmospheric pressure, the integrity of the rock must be assured.
The same temperature and pressure conditions create a challenge for researchers trying to observe changes in rock samples as they occur. The PNNL group will present a high-pressure atomic force microscope (AFM) head that can integrate with existing commercial systems. The new AFM is designed to handle pressures up to 1500 psi. The presentation will show that the AFM head is capable of operating at the temperatures and pressures required to maintain carbon dioxide in a supercritical state, and that its noise levels are low enough to observe the atomic-scale topographic changes caused by chemical reactions that may occur between mineral substances and supercritical CO2.
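As a quick check on that specification, converting 1500 psi into SI units and comparing it with the textbook critical point of carbon dioxide (about 7.38 MPa and 31.1 degrees C) shows the cell is rated well above the critical pressure; the snippet below is just that unit conversion:

    # Quick unit check: does a 1500 psi rating exceed the critical pressure of CO2?
    # The 1500 psi figure is quoted above; the critical constants are textbook values.

    PSI_TO_MPA = 6.894757e-3          # 1 psi expressed in megapascals

    afm_cell_rating_psi = 1500.0
    afm_cell_rating_mpa = afm_cell_rating_psi * PSI_TO_MPA   # ~10.3 MPa

    co2_critical_pressure_mpa = 7.38   # MPa
    co2_critical_temperature_c = 31.1  # degrees Celsius

    print(f"AFM cell rating: {afm_cell_rating_mpa:.1f} MPa "
          f"vs CO2 critical pressure {co2_critical_pressure_mpa} MPa")
    # ~10.3 MPa > 7.38 MPa, so the cell can hold CO2 above its critical pressure,
    # provided the temperature also stays above ~31 C.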

Small Is Beautiful in Hydroelectric Power Plant Design: Invention Could Enable Renewable Power Generation at Thousands of Unused Sites


TU Muenchen civil engineer Albert Sepp (left) and Professor Peter Rutschmann are co-developers of the shaft power plant design. The power plant, most of which lies concealed below the riverbed, is designed to let fish pass along with the water. 
Hydroelectric power is the oldest and the "greenest" source of renewable energy. In Germany, the potential would appear to be completely exploited, while large-scale projects in developing countries are eliciting strong criticism due to their major impact on the environment. Researchers at Technische Universitaet Muenchen (TUM) have developed a small-scale hydroelectric power plant that solves a number of problems at the same time: The construction is so simple, and thereby cost-efficient, that the power generation system is capable of operating profitably in connection with even modest dam heights.
Moreover, the system is concealed in a shaft, minimizing the impact on the landscape and waterways. There are thousands of locations in Europe where such power plants would be viable, in addition to regions throughout the world where hydroelectric power remains an untapped resource.
In Germany, hydroelectric power accounts for some three percent of the electricity consumed -- a long-standing figure that was not expected to change in any significant way. After all, the good locations for hydroelectric power plants have long since been developed. In a number of newly industrialized nations, huge dams are being discussed that would flood settled landscapes and destroy ecosystems. In many underdeveloped countries, the funds and engineering know-how that would be necessary to bring hydroelectric power on line are not available.
Smaller power stations entail considerable financial input and are also not without negative environmental impact. Until now, the use of hydroelectric power in connection with a relatively low dam height meant that part of the water had to be guided past the dam by way of a so-called bay-type power plant -- a design with inherent disadvantages:
  • The large size of the plant, which includes concrete construction for the diversion of water and a power house, involves high construction costs and destruction of natural riverside landscapes.
  • Each plant is a custom-designed, one-off project. In order to achieve the optimal flow conditions at the power plant, the construction must be planned individually according to the dam height and the surrounding topography. How can an even flow of water to the turbines be achieved? How will the water be guided away from the turbines in its further course?
  • Fish-passage facilities need to be provided to help fish bypass the power station. In many instances, their downstream passage does not succeed as the current forces them in the direction of the power plant. Larger fish are pressed against the rakes protecting the intake of the power plant, while smaller fish can be injured by the turbine.
A solution to all of these problems has now been demonstrated, in the small-scale hydroelectric power plant developed as a model by a team headed by Prof. Peter Rutschmann and Dipl.-Ing. Albert Sepp at the Oskar von Miller-Institut, the TUM research institution for hydraulic and water resources engineering. Their approach incurs very little impact on the landscape. Only a small transformer station is visible on the banks of the river. In place of a large power station building on the riverside, a shaft dug into the riverbed in front of the dam conceals most of the power generation system. The water flows into a box-shaped construction, drives the turbine, and is guided back into the river underneath the dam. This solution has become practical due to the fact that several manufacturers have developed generators that are capable of underwater operation -- thereby dispensing with the need for a riverbank power house.
The TUM researchers still had additional problems to solve: how to prevent undesirable vortex formation where water suddenly flows downward; and how to best protect the fish. Rutschmann and Sepp solved two problems with a single solution -- by providing a gate in the dam above the power plant shaft. In this way, enough water flows through to enable fish to pass. At the same time, the flow inhibits vortex formation that would reduce the plant's efficiency and increase wear and tear on the turbine.
The core of the concept is not optimizing efficiency, however, but optimizing cost: Standardized pre-fabricated modules should make it possible to order a "power plant kit" just like ordering from a catalog. "We assume that the costs are between 30 and 50 percent lower by comparison with a bay-type hydropower plant," Peter Rutschmann says. The shaft power plant is capable of operating economically given a low "head" of water of only one to two meters, while a bay-type power plant requires at least twice this head of water. Series production could offer an additional advantage: In the case of wider bodies of water, several shafts could be dug next to each other -- also at different points in time, as determined by demand and available financing.
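For a sense of the output such a low head implies, the standard hydropower relation P = eta * rho * g * Q * h can be applied. The flow rate and efficiency in the sketch below are assumed values for illustration, not figures from the TUM design; only the one-to-two-meter head range comes from the article.

    # Rough output estimate for one low-head shaft using P = eta * rho * g * Q * h.
    # The 1-2 m head range comes from the article; the flow rate and efficiency
    # below are assumed values for illustration only.

    rho = 1000.0   # water density, kg/m^3
    g = 9.81       # gravitational acceleration, m/s^2
    eta = 0.8      # assumed overall turbine/generator efficiency

    Q = 10.0       # assumed flow through one shaft, m^3/s

    for head_m in (1.0, 1.5, 2.0):
        power_kw = eta * rho * g * Q * head_m / 1000.0
        print(f"head = {head_m:.1f} m  ->  about {power_kw:.0f} kW")
    # roughly 80-160 kW per shaft under these assumed conditions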
Investors can now consider locations for the utilization of hydropower that had hardly been interesting before. This potential has gained special significance in light of the EU Water Framework Directive. The directive stipulates that fish obstacles are to be removed even in smaller rivers. In Bavaria alone, there are several thousand existing transverse structures, such as weirs, that will have to be converted, many of which also meet the prerequisites for shaft power plants. Construction of thousands of fish ladders would not only cost billions but would also load the atmosphere with tons of climate-altering greenhouse gas emissions. If in the process shaft power plants with fish gates and additional upstream fish ladders were installed, investors could shoulder the costs and ensure the generation of climate-friendly energy over the long term -- providing enough power for smaller communities from small, neighborhood hydroelectric plants.
Shaft power plants could also play a significant role in developing countries. "Major portions of the world's population have no access to electricity at all," Rutschmann notes. "Distributed, local power generation by lower-cost, easy-to-operate, low-maintenance power plants is the only solution." For cases in which turbines are not financially feasible, Rutschmann has already come up with an alternative: "It would be possible to use a cheap submersible pump and run it in reverse -- something that also works in our power plant."

Biodegradable Foam Plastic Substitute Made from Milk Protein and Clay


Amid ongoing concern about plastic waste accumulating in municipal landfills, and reliance on imported oil to make plastics, scientists are reporting development of a new ultra-light biodegradable foam plastic material made from two unlikely ingredients: The protein in milk and ordinary clay.
The new substance could be used in furniture cushions, insulation, packaging, and other products, they report in the ACS' Biomacromolecules, a monthly journal.
David Schiraldi and colleagues explain that 80 percent of the protein in cow milk is a substance called casein, which already finds uses in making adhesives and paper coatings. But casein is not very strong, and water can wash it away. To beef up casein, and boost its resistance to water, the scientists blended in a small amount of clay and a reactive molecule called glyceraldehyde, which links casein's protein molecules together.
The scientists freeze-dried the resulting mixture, removing the water to produce a spongy aerogel, one of a family of substances so light and airy that they have been termed "solid smoke." To make the gossamer foam stronger, they cured it in an oven, then tested its sturdiness. They concluded that it is strong enough for commercial uses, and biodegradable, with almost a third of the material breaking down within 30 days.