Friday, September 17, 2010

Low Carbon Hemp House Put to the Test


HemPod at the University of Bath

Used to make paper, clothing and car body panels, hemp could also be used to build environmentally-friendly homes of the future, say researchers at the University of Bath.
A consortium, led by the BRE (Building Research Establishment) Centre for Innovative Construction Materials based at the University, has constructed a small building on the Claverton campus out of hemp-lime to test its properties as a building material.
Called the "HemPod," this one-storey building has highly insulating walls made from the chopped woody core, or shiv, of the industrial hemp plant mixed with a specially developed lime-based binder.
The hemp shiv traps air in the walls, and the hemp itself is porous, making the walls exceptionally well insulated. The lime-based binder sticks the hemp together, protects it and makes the building material highly fire resistant.
The industrial hemp plant takes in carbon dioxide as it grows, and the lime render absorbs even more of the climate change gas, effectively giving the building an extremely low carbon footprint.
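The claim that the lime "absorbs even more" carbon dioxide comes from the chemistry of lime curing: hydrated lime takes up CO2 from the air as it hardens into calcium carbonate. The sketch below is a rough upper bound, assuming the binder behaves like pure hydrated lime and carbonates completely, neither of which is exactly true for the specially developed binder described here.

    # Rough upper bound (assumes pure, fully carbonated hydrated lime):
    # Ca(OH)2 + CO2 -> CaCO3 + H2O
    molar_mass_caoh2 = 74.1  # g/mol, calcium hydroxide
    molar_mass_co2 = 44.0    # g/mol, carbon dioxide

    co2_per_kg_lime = molar_mass_co2 / molar_mass_caoh2
    print(f"~{co2_per_kg_lime:.2f} kg of CO2 absorbed per kg of hydrated lime")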
Dr Mike Lawrence, Research Officer from the University's Department of Architecture & Civil Engineering, explained: "Whilst there are already some houses in the UK built using hemp and lime, the HemPod will be the first hemp-lime building to be constructed purely for scientific testing.
"We will be closely monitoring the house for 18 months using temperature and humidity sensors buried in the walls, measuring how quickly heat and water vapour travels through them.
"The walls are breathable and act as a sort of passive air-conditioning system, meaning that the internal humidity is kept constant and the quality of the air within the house is very good. The walls also have a 'virtual thermal mass' because of the remarkable pore structure of hemp shiv combined with the properties of the lime binder, which means the building is much more thermally efficient and the temperature inside the house stays fairly constant."
Professor Pete Walker, Director of the BRE Centre for Innovative Construction Materials, added: "The aim of the project is to provide some robust data to persuade the mainstream building industry to use this building material more widely.
"Hemp grows really quickly; it only takes the area the size of a rugby pitch to grow enough hemp in three months to build a typical three-bedroom house.
"Using renewable crops to build houses can also provide economic benefits to rural areas by opening up new agricultural markets. Farmers can grow hemp during the summer as a break crop between their main food crops, it doesn't need much water and can be grown organically.
"Every part of the plant can be used, so there's no waste -- the shiv is used for building, the fibres can make car panels, clothing or paper, and the seeds can be used for food or oil. So it's a very efficient, renewable material.
"Lime has been used in construction for millennia, and combining it with industrial hemp is a significant development in the effort to make construction more sustainable."
Environmentally-friendly building materials are often more expensive than traditional materials, but the Renewable House project (www.renewable-house.co.uk), funded by the Department of Energy and Climate Change (DECC) and the National Non-Food Crops Centre (NNFCC), demonstrated a cost of around £75,000 (excluding foundations) to build a three-bedroom Code 4 house from hemp-lime, making it competitive with conventional bricks and mortar.
The project is sponsored by the Department for Environment, Food & Rural Affairs (Defra) under the Renewable Materials LINK Programme, and brings together a team of nine partners comprising: University of Bath, BRE Ltd, Feilden Clegg Bradley Studios, Hanson UK, Hemp Technology, Lhoist Group, Lime Technology, the NNFCC and Wates Living Space.

3-D Computer Simulations Help Envision Supernovae Explosions


The new 3-D simulations like this one are based on the idea that the collapsing star itself is not sphere-like, but distinctly asymmetrical and affected by a host of instabilities.

For scientists, supernovae are true superstars -- massive explosions of huge, dying stars that shine light on the shape and fate of the universe.
For a brief burst of time, supernovae can radiate more energy than the sun will emit in its lifetime. Releasing the energy of some 2,500 trillion trillion nuclear weapons, they can outshine entire galaxies, producing some of the biggest explosions ever seen and helping track distances across the cosmos.
Now, a Princeton-led team has found a way to make computer simulations of supernovae exploding in three dimensions, which may lead to new scientific insights.
Even though these mammoth explosions have been observed for thousands of years, for the past 50 years researchers have struggled to mimic the step-by-step destructive action on computers. Researchers argue that such simulations, even crude ones, are important, as they can lead to new information about the universe and help address this longstanding problem in astrophysics.
The new 3-D simulations are based on the idea that the collapsing star itself is not sphere-like, but distinctly asymmetrical and affected by a host of instabilities in the volatile mix surrounding its core.
"I think this is a big jump in our understanding of how these things can explode," said Adam Burrows, a professor of astrophysical sciences at Princeton, who led the research. "In principle, if you could go inside the supernovae to their centers, this is what you might see."
Writing in the Sept. 1 issue of The Astrophysical Journal, Burrows -- along with first author Jason Nordhaus, a postdoctoral research fellow at Princeton, and Ann Almgren and John Bell from the Lawrence Berkeley National Laboratory in California -- reports that the Princeton team has developed simulations that are beginning to match the massive blow-outs astronomers have witnessed when gigantic stars die.
In the past, simulated explosions represented in one and two dimensions often stalled, leading scientists to conclude that their understanding of the physics was incorrect or incomplete. This team used the same guiding physics principles, but used supercomputers that were many times more powerful, employing a representation in three dimensions that allowed the various multidimensional instabilities to be expressed.
"It may well prove to be the case that the fundamental impediment to progress in supernova theory over the last few decades has not been lack of physical detail, but lack of access to codes and computers with which to properly simulate the collapse phenomenon in 3-D," the team wrote. "This could explain the agonizingly slow march since the 1960s toward demonstrating a robust mechanism of explosion."
Birth of a supernova
Supernovae are the primary source of heavy elements in the cosmos. Their brightness is so consistently intense that supernovae have been used as "standard candles" or gauges, acting as yardsticks indicating astronomical distances.
Most result from the death of single stars much more massive than the sun.
As a star ages, it exhausts its supplies of hydrogen and helium fuel at its core. If it still has enough mass and pressure to fuse carbon and produce other, heavier elements, it gradually becomes layered like an onion, with the heaviest elements at its center. Once its core exceeds a certain mass, it begins to implode. In the squeeze, the core heats up and grows even more dense.
"Imagine taking something as massive as the sun, then compacting it to something the size of the Earth," Burrows said. "Then imagine that collapsing to something the size of Princeton."
What comes next is even more mysterious.
At some point, the implosion reverses. Astrophysicists call it "the bounce." The core material stiffens up, acting like what Burrows calls a "spherical piston," emitting a shock wave of energy. Neutrinos, nearly massless particles that rarely interact with matter, are emitted too. The shock wave and the neutrinos are invisible.
Then, very visibly, there is a massive explosion, and the star's outer layers are ejected into space. This highly perceptible stage is what observers see as the supernova. What's left behind is an ultra-dense object called a neutron star. Sometimes, when an ultramassive star dies, a black hole is created instead.
Scientists have a sense of the steps leading to the explosion, but there is no agreed upon fundamental process about what happens during the "bounce" phase when the implosion at the core reverses direction. Part of the difficulty is that no one can see what is happening on the inside of a star. During this phase, the star looks undisturbed. Then, suddenly, a blast wave erupts on the surface. Scientists don't know what occurs to make the central region of the star instantly unstable. The emission of neutrinos is believed to be related, but no one is sure how or why.
"We don't know what the mechanism of explosion is," Burrows said. "As a theorist who wants to get to root causes, this is a natural problem to explore."
Multiple scientific approaches to solve the problem
The scientific visualization employed by the research team is an interdisciplinary effort combining astrophysics, applied mathematics and computer science. The endeavor renders three-dimensional phenomena as computer-generated images. In general, researchers employ visualization techniques with the aim of making realistic renderings of quantitative information including surfaces, volumes and light sources. Time is often an important component, making the images dynamic as well.
To do their work, Burrows and his colleagues came up with mathematical values representing the energetic behaviors of stars by using mathematical representations of fluids in motion -- the same partial differential equations solved by geophysicists for climate modeling and weather forecasting. To solve these complex equations and simulate what happens inside a dying star, the team used an advanced computer code called CASTRO that took into account factors that changed over time, including fluid density, temperature, pressure, gravitational acceleration and velocity.
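In their simplest, inviscid form, the "fluids in motion" equations referred to here are the compressible Euler equations with a gravitational source term (CASTRO's full equation set, with neutrino and nuclear physics included, is considerably richer):

\[
\begin{aligned}
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) &= 0,\\
\frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u}\mathbf{u}) + \nabla p &= \rho \mathbf{g},\\
\frac{\partial (\rho E)}{\partial t} + \nabla \cdot \big[(\rho E + p)\,\mathbf{u}\big] &= \rho\, \mathbf{u} \cdot \mathbf{g},
\end{aligned}
\]

closed by an equation of state relating the pressure p to the density, temperature and composition.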
The calculations took months to process on supercomputers at Princeton and the Lawrence Berkeley Laboratory.
The simulations are not an end unto themselves, Burrows noted. Part of the learning process is viewing the simulations and connecting them to real observations. In this case, the most recent simulations are uncannily similar to the explosive behavior of stars in their death throes witnessed by scientists. In addition, scientists often learn from simulations and see behaviors they had not expected.
"Visualization is crucial," Burrows said. "Otherwise, all you have is merely a jumble of numbers. Visualization via stills and movies conjures the entire phenomenon and brings home what has happened. It also allows one to diagnose the dynamics, so that the event is not only visualized, but understood."
The research was funded by the U.S. Department of Energy and the National Science Foundation.

Optical Chip Enables New Approach to Quantum Computing


This is the photonic chip next to a UK penny. The chip contains micrometer and sub-micrometer features and guides light through a network of waveguides. The output of this network can be seen on the surface of the chip.

An international research group led by scientists from the University of Bristol has developed a new approach to quantum computing that could soon be used to perform complex calculations that cannot be done by today's computers.
Scientists from Bristol's Centre for Quantum Photonics have developed a silicon chip that could be used to perform complex calculations and simulations using quantum particles in the near future. The researchers believe that their device represents a new route to a quantum computer -- a powerful type of computer that uses quantum bits (qubits) rather than the conventional bits used in today's computers.
Unlike conventional bits or transistors, which can be in one of only two states at any one time (1 or 0), a qubit can be in several states at the same time and can therefore be used to hold and process a much larger amount of information at a greater rate.
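One way to see the gap, sketched below under the usual textbook idealisation: a register of n classical bits is described by n values, whereas a general n-qubit state needs 2^n complex amplitudes to write down on a classical machine.

    import numpy as np

    # Number of complex amplitudes needed to describe an n-qubit state classically
    for n in (1, 2, 10, 30):
        print(f"{n} qubits -> {2 ** n} complex amplitudes (vs {n} classical bits)")

    # For example, a random normalised 3-qubit state vector:
    state = np.random.randn(8) + 1j * np.random.randn(8)
    state /= np.linalg.norm(state)  # probabilities of the 8 outcomes sum to 1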
"It is widely believed that a quantum computer will not become a reality for at least another 25 years," says Professor Jeremy O'Brien, Director of the Centre for Quantum Photonics. "However, we believe, using our new technique, a quantum computer could, in less than ten years, be performing calculations that are outside the capabilities of conventional computers."
The technique developed in Bristol uses two identical particles of light (photons) moving along a network of circuits in a silicon chip to perform an experiment known as a quantum walk. Quantum walk experiments using one photon have been done before and can even be modelled exactly by classical wave physics. However, this is the first time a quantum walk has been performed with two particles and the implications are far-reaching.
"Using a two-photon system, we can perform calculations that are exponentially more complex than before," says O'Brien. "This is very much the beginning of a new field in quantum information science and will pave the way to quantum computers that will help us understand the most complex scientific problems."
In the short term, the team expect to apply their new results immediately for developing new simulation tools in their own lab. In the longer term, a quantum computer based on a multi-photon quantum walk could be used to simulate processes which themselves are governed by quantum mechanics, such as superconductivity and photosynthesis.
"Our technique could improve our understanding of such important processes and help, for example, in the development of more efficient solar cells," adds O'Brien. Other applications include the development of ultra-fast and efficient search engines, designing high-tech materials and new pharmaceuticals.
The leap from using one photon to two photons is not trivial because the two particles need to be identical in every way and because of the way these particles interfere, or interact, with each other. There is no direct analogue of this interaction outside of quantum physics.
"Now that we can directly realize and observe two-photon quantum walks, the move to a three-photon, or multi-photon, device is relatively straightforward, but the results will be just as exciting" says O'Brien. "Each time we add a photon, the complexity of the problem we are able to solve increases exponentially, so if a one-photon quantum walk has 10 outcomes, a two-photon system can give 100 outcomes and a three-photon system 1000 solutions and so on."
The group, which includes researchers from Tohoku University, Japan, the Weizmann Institute in Israel and the University of Twente in the Netherlands, now plans to use the chip to perform quantum mechanical simulations. The researchers are also planning to increase the complexity of their experiment not only by adding more photons but also by using larger circuits.
The research is published in the journal Science.

Surprisingly Complicated Molecule Found in Outer Space


Diacetylene cation, a molecular ion made up of two hydrogen atoms and four carbon atoms, has been discovered in translucent interstellar clouds.

In interstellar clouds of extremely low density, scientists have found a molecule with an unexpectedly complicated structure. The discovery will force a rethink of the chemical processes occurring in the apparently empty regions of the galaxy.
Translucent interstellar clouds are penetrated by highly energetic ultraviolet and cosmic radiation, which can break apart any chemical species it meets. However, a group of scientists, its core formed by Polish astrophysicists and astrochemists, managed to observe in such clouds a molecule made up of an unexpectedly large number of atoms: the diacetylene cation. Its discovery in these low-density clouds of gas and dust may contribute to solving the oldest unsolved puzzle of spectroscopy. Studies were conducted mostly using an 8-metre telescope at the Paranal Observatory in Chile by a group of scientists from the Nicolaus Copernicus University (NCU) in Toruń, Poland, the European Southern Observatory (ESO), the Institute of Physical Chemistry of the Polish Academy of Sciences (IPC PAS, Warsaw), and the Seoul National University in Korea. The group is headed by Prof. Jacek Krełowski from the NCU's Astronomical Centre.
The density of translucent interstellar clouds is extremely low. "The dilution of matter in such clouds corresponds to the density obtained by spreading a single glass of air through an empty cube whose face equals the area of a small country. This is much less than the best vacuum produced in a lab," explains one of the co-discoverers, Assoc. Prof. Robert Kołos from the Laboratory Astrochemistry Group of the Institute of Physical Chemistry of the PAS. However, since interstellar clouds are huge, spanning dozens of light years, their gas molecules have a chance to interact with the penetrating radiation. Spectroscopy is the field of science that deals with radiation-matter interactions.
Molecules absorb and emit photons of specific energies only (and thus of specific wavelengths) corresponding to differences between energy levels typical for a given species. Consequently, as a result of interactions with diluted gases in translucent clouds, common in our and other galaxies, starlight that reaches the Earth is slightly changed. It lacks the waves of certain length -- those absorbed by intervening interstellar atoms and molecules.
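The relation behind "specific energies... specific wavelengths" is simply E = hc/λ, so each missing wavelength in a stellar spectrum points to a definite energy gap in some intervening atom or molecule. A quick sketch using an illustrative visible wavelength (not any particular interstellar band):

    h = 6.626e-34    # Planck constant, J s
    c = 2.998e8      # speed of light, m/s
    eV = 1.602e-19   # joules per electronvolt

    wavelength_nm = 500.0  # illustrative value only
    energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{wavelength_nm:.0f} nm light carries photons of ~{energy_eV:.2f} eV")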
In the 1920s astrophysicists observed that light was absorbed by the interstellar medium in a manner that could not be explained by the presence of the very simple components of interstellar gas known at that time. Today, with the use of radio waves, it is possible to detect quite large molecules -- the record holder is the cyanopolyyne HC11N, which comprises 13 atoms -- but these are created inside dense, non-transparent clouds where they are protected from disruptive radiation.
"The peculiar optical properties of translucent clouds, connected with the presence of the so-called Diffuse Interstellar Bands DIB, have been a mystery for nearly 90 years. They are even called the longest standing unsolved problem of all spectroscopy," says Prof. Krełowski, an authority in the field of optical spectroscopy of interstellar medium. The recent discovery allowed a new band to be added to the DIB set and, at the same time, to be identified as originating from the diacetylene cation H-CC-CC-H+. "Diacetylene is a species unexpectedly big for translucent clouds. So far the compounds of no more than three atoms have been found there: carbon C3 and hydrogen H3+. In order to explain the presence of diacetylene cation we will have to revisit the existing astrochemical models," adds Assoc. Prof. Kołos.
Asymmetric molecules -- such as the cyanopolyyne mentioned above, a linear sequence of carbon atoms with hydrogen at one end and nitrogen at the other -- are able to emit or absorb electromagnetic waves in the radiofrequency range. The high symmetry of the diacetylene cation makes it invisible to radio telescopes, but the present optical observations suggest that this species is quite a common component of the interstellar medium. It is detected not only in the two galaxy regions that are especially rich in carbon, but also in averaged data coming from a dozen other lines of sight.
Following the detection of diacetylene cation it may be supposed that there are more diffuse interstellar bands generated by similar, symmetric molecules. "It seems probable that the DIB puzzle will soon be largely solved," sums up Prof. Krełowski.

Global Initiative Underway to Preserve Yam Biodiversity


A technician inspecting in vitro conserved yam at IITA's Genetic Resources Center

A world yam collection in Nigeria will provide the ultimate rescue for African yam diversity, part of an initiative to conserve critical crop collections backed by the Global Crop Diversity Trust.
Farmers and crop scientists worldwide are engaged in an ambitious new effort to add 3,000 yam samples to international genebanks with the aim of saving the diversity of a crop that is consumed by 60 million people on a daily basis in Africa alone, according to an announcement from the Global Crop Diversity Trust.
In almost all the countries of the African yam belt, a large number of potentially important yam varieties are preserved only in fields, where they are in danger of being wiped out by pests and diseases, as well as by more common disasters such as fire or flooding. For example, a large fire recently destroyed a yam collection in Togo. Civil conflicts have also resulted in collections being destroyed.
Yam varieties gathered from West and Central African countries through the project are being sent to the International Institute for Tropical Agriculture (IITA) in Ibadan, Nigeria, where tissue samples of the crop will eventually be frozen at ultra-low temperatures in liquid nitrogen -- a technique known as cryoconservation -- which offers the most secure form of long-term storage currently available. The majority of the world's crops can be conserved over long periods simply by drying the seeds and storing them under cold, dry conditions. However, a significant number of crops, including yams, cannot be stored so easily and must be conserved as vegetative material in tissue culture.
Farmers in West Africa's "yam belt," which includes the countries of Nigeria, Côte d'Ivoire, Ghana, Benin and Togo, produce more than 90 percent of the world's yams. The project, however, will also include yam varieties collected in the Philippines, Vietnam, Costa Rica, the Caribbean and several Pacific nations. It is the first worldwide effort to conserve yam species and cultivars. The project is funded with support from the UN Foundation and the Bill and Melinda Gates Foundation.
"This opportunity to protect an incredibly wide variety of yams allows us to feel more reassured that the unique diversity of yam will be safely secured and available to future generations," said Alexandre Dansi, a yam expert at the University of Abomey-Calavi in Benin.
For Benin, which sits squarely in the buckle of the yam belt, yam is an integral part of culture and community life. The large tubers, weighing up to 70 kilos, are a common sight in roadside markets. Dansi has worked with producers to catalogue about 250 discrete types of yams and more than 1,000 named yam varieties. He is collaborating with farmers to document additional varieties. According to farmers' reports, many traditional varieties are disappearing from their production zones because of high susceptibility to pests and diseases, poor soils, low soil moisture, weeds and drought, which make them less productive or more costly to grow than other crops such as cassava.
Through Dansi's work, Benin already has sent 847 yam samples to the IITA. At IITA, the tubers will be grown out in fields, and cuttings taken for conservation in the lab as part of an international collection that already contains about 3,200 yam samples from West Africa.
Thousands of years of cultivation have resulted in a wide diversity of yam varieties existing in farmers' fields, particularly in West Africa. In some parts of Africa (mainly Benin and Nigeria), yams are still being domesticated from wild tubers found in the forest. The popularity of the crop remains high with consumers, and sellers get a high price in urban markets. However, yams remain relatively under-researched despite their potential to lift farmers out of poverty in one of the world's poorest regions. Using the collection now being assembled to find valuable traits that provide disease resistance and higher yields is key to improving farmers' fortunes.
"It's really akin to putting money in the bank," said Cary Fowler, executive director of the Trust. "All crops routinely face threats from plant pests, disease, or shifting weather patterns, and a country's ability to breed new varieties to overcome these challenges is directly tied to what they have in the bank, not just in terms of financial resources but in terms of the diversity in their crop collections."
The yam project is part of a broader effort involving major crop species worldwide in which the Trust is helping partners in 68 countries -- including 38 in Africa alone -- rescue and regenerate more than 80,000 endangered accessions in crop collections and send duplicates to international genebanks and the Svalbard Global Seed Vault in the Arctic Circle.
For yams, reproduced through vegetative propagation, IITA offers the only long-term form of conservation. Conserving the crop requires extracting tissue in the laboratory and freezing it in liquid nitrogen. However, the technique demands careful research and a staff of dedicated skilled technicians. Most African countries cannot afford to give their yam diversity this kind of attention.
At IITA, the DNA of the samples coming from locations around the world will also be analyzed to get a better sense of the genetic diversity contained in various collections. This is not, however, an academic exercise. It helps the genebank managers avoid keeping too many copies of the same material. It also helps the search for valuable genes that can provide the traits needed to deal with diseases or climate change.
"This project is fascinating because it involves the most traditional and the most advanced techniques of crop conservation. We would like to deploy the best tools science has to offer to secure centuries of yam cultivation," said Dominique Dumet, head of the Genetic Resources Center (GRC) at IITA.
IITA also will be offering a stable and safe haven for yam collections that sometimes must endure unusual stress. For example, Côte d'Ivoire will be sending 5,050 yam samples to IITA for conservation from a collection that, after the civil war in 2002, had to be moved from Bouaké in the north to Abidjan.
"We are building up our collection again, but some varieties were lost," said Amani Kouakou, a scientist at Cote D'Ivoire's Centre National de Recherché Agronomique. "We welcome the chance to share the material with IITA and discover new materials that we have never cultivated in this country."
Meanwhile, in Benin, Dansi is using the project as an opportunity to work with farmers to test and characterise materials, exchange varieties and techniques between different yam-growing regions in Benin, and build up better community storage barns for keeping the tubers in good health until the next planting season.
"The security we now have is reassuring and allows us to focus on other things, like working with farmers to improve yields," Dansi said. "And on top of that we can now ask IITA for interesting yams from other parts of the world that we may never have seen before in Benin."

Scientists Document Fate of Deep Hydrocarbon Plumes in Gulf Oil Spill


Gulf oil spill.

In the aftermath of the Deepwater Horizon disaster in the Gulf of Mexico, a team of scientists led by UC Santa Barbara's David Valentine and Texas A&M University's John Kessler embarked on a research cruise with an urgent mission: determining the fate and impact of hydrocarbon gases escaping from a deep-water oil spill.
The spill provided a rare opportunity for Valentine, a professor in the Department of Earth Science at UCSB, and Kessler, an assistant professor in the Department of Oceanography at Texas A&M, to study the behavior of methane and other natural gases in the murky depths of the Gulf of Mexico.
"We were fortunate to have the expertise and opportunity to get to the heart of this national disaster and to contribute meaningfully to understanding what was happening in the deep ocean during the spill," Valentine, the lead author, said. "As a scholar, it is rare that your intellectual domain comes so abruptly to the forefront of the national consciousness. Circumstances aside, I feel that it reflects well on our country's scientific and educational systems that we foster such expertise."
The scientists conducted their tests June 11-21, less than two months after the Deepwater Horizon platform exploded, causing the largest marine oil spill in history. Their team conducted its experiments as close as 1,500 feet from the epicenter of the active spill, using underwater sampling devices and sensors to measure hydrocarbons and oxygen depletion at various depths, and to collect water samples to study the biodegradation of natural gas and the associated blooms of bacteria.
Their research was funded by the National Science Foundation and the Department of Energy, and was conducted on the Research Vessel Cape Hatteras.
The team reported several new findings in their study. At the time they sampled in June, they identified four large plumes of suspended hydrocarbons that had been moved by deep currents in different directions from the leaking well. Since each plume originated from the well at a different time, the scientists were able to compare the chemicals and isotopes to determine what compounds were preferentially respired by the bacteria.
What they found was surprising: Three specific hydrocarbon gases -- ethane, propane and butane -- were responsible for most of the respiration and oxygen loss observed in the deep plumes. They further identified the dominant bacteria present in the plumes and suggested some of the organisms were targeting the natural gases.
The team also found that methane gas, the single most abundant compound spilled by Deepwater Horizon, was initially consumed very slowly, but that the rate increased as other gases were depleted. They estimate that ultimately two-thirds of the bacterial productivity and respiration in the deep-water plumes will be linked to natural gas.
"Because the Deepwater Horizon rig accident occurred almost a mile deep, the slow migration of petroleum from that depth allowed time for dissolution of volatile hydrocarbons such as methane, ethane, propane, and butane," Valentine said. "Had it occurred in shallower water, these gases would have certainly escaped into the atmosphere. This gas trapping will go down as one of the distinguishing hallmarks of a deep oil spill."
Kessler added: "Dissolving these gases in the ocean is a bit of a double-edged sword. On the one hand, these gases influenced both the air quality and the radiative budget of the atmosphere, so trapping them within the ocean is a good thing. But their eventual marine biodegradation leads to the consumption of dissolved oxygen, which is an annual problem in the northern Gulf of Mexico. Fingerprinting the main chemicals responsible for the majority of the oxygen reductions associated with this spill is extremely helpful as we decide how to deal with this and other such events."
The scientists found that propane and ethane were the primary foodstuffs for microbial respiration, accounting for up to 70 percent of the observed oxygen depletion in fresh plumes. They further suggest that butane accounts for much of the remainder. They learned that the ratio of methane to ethane and propane varied substantially throughout the deep plumes and served as the indicator of bacterial activity.
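A rough way to see why gas respiration dominates the oxygen budget is the oxidation stoichiometry of each alkane. The sketch below gives the complete-oxidation upper bound; real bacteria divert some carbon into biomass, so the true oxygen demand per mole is somewhat lower.

    # Complete oxidation: CnH(2n+2) + (3n+1)/2 O2 -> n CO2 + (n+1) H2O
    gases = {"methane": 1, "ethane": 2, "propane": 3, "butane": 4}
    for name, n in gases.items():
        o2_demand = (3 * n + 1) / 2
        print(f"{name}: up to {o2_demand} mol O2 consumed per mol of gas")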
The scientists concluded that deep oil and gas spills elicit a distinctive microbial response because of the trapped gases. They also suggest that bacterial blooms may turn their attention to oil after the gas is depleted as many bacteria can cross over with their metabolism. However, they caution that the metabolism of these bugs has not been established and that the specific bacteria that bloomed may only target specific compounds in oil.
While the results of this study suggest that ethane, propane and butane plumes may disappear quickly, methane may not, due to its relatively slow consumption rate. This suggests the potential for a much longer-lived methane plume in the deep ocean, with unknown consequences. To address this issue, Valentine and Kessler are currently leading another expedition supported by the National Oceanic and Atmospheric Administration in an attempt to determine the longer-term fate of methane and oil in the deep Gulf waters.
Joining Valentine on this research from UCSB were Molly Redmond, Stephanie Mendes, Monica Heintz, Christopher Farwell, Franklin Kinnaman and Christine Villanueva. Other Texas A&M researchers included Lei Hu, Shari Yvon-Lewis, Mengran Du, Eric Chan and Fenix Garcia Tigreros.

Scientists Reveal Battery Behavior at the Nanoscale


A new electrochemical strain microscopy (ESM) technique developed at Oak Ridge National Laboratory can map lithium ion flow through a battery’s cathode material. This 1 micron x 1 micron composite image demonstrates how regions on a cathode surface display varying electrochemical behaviors when probed with ESM.
As industries and consumers increasingly seek improved battery power sources, cutting-edge microscopy performed at the Department of Energy's Oak Ridge National Laboratory is providing an unprecedented perspective on how lithium-ion batteries function.
A research team led by ORNL's Nina Balke, Stephen Jesse and Sergei Kalinin has developed a new type of scanning probe microscopy called electrochemical strain microscopy (ESM) to examine the movement of lithium ions through a battery's cathode material. The research is published in Nature Nanotechnology.
"We can provide a detailed picture of ionic motion in nanometer volumes, which exceeds state-of-the-art electrochemical techniques by six to seven orders of magnitude," Kalinin said. Researchers achieved the results by applying voltage with an ESM probe to the surface of the battery's layered cathode. By measuring the corresponding electrochemical strain, or volume change, the team was able to visualize how lithium ions flowed through the material. Conventional electrochemical techniques, which analyze electric current instead of strain, do not work on a nanoscale level because the electrochemical currents are too small to measure, Kalinin explained.
"These are the first measurements, to our knowledge, of lithium ion flow at this spatial resolution," Kalinin said.
Lithium-ion batteries, which power electronic devices from cell phones to electric cars, are valued for their low weight, high energy density and recharging ability. Researchers hope to extend the batteries' performance by lending engineers a finely tuned knowledge of battery components and dynamics.
"We want to understand -- from a nanoscale perspective -- what makes one battery work and one battery fail. This can be done by examining its functionality at the level of a single grain or an extended defect," Balke said.
The team's ESM imaging can display features such as individual grains, grain clusters and defects within the cathode material. The high-resolution mapping showed, for example, that the lithium ion flow can concentrate along grain boundaries, which could lead to cracking and battery failure. Researchers say these types of nanoscale phenomena need to be examined and correlated to overall battery functionality.
"Very small changes at the nanometer level could have a huge impact at the device level," Balke said. "Understanding the batteries at this length scale could help make suggestions for materials engineering."
Although the research focused on lithium-ion batteries, the team expects that its technique could be used to measure other electrochemical solid-state systems, including other battery types, fuel cells and similar electronic devices that use nanoscale ionic motion for information storage.
"We see this method as an example of the kinds of higher dimensional scanning probe techniques that we are developing at CNMS that enable us to see the inner workings of complex materials at the nanoscale," Jesse said. "Such capabilities are particularly relevant to the increasingly important area of energy research."
Balke, Jesse and Kalinin are research scientists at ORNL's Center for Nanophase Materials Science. The research team includes Nancy Dudney, Yoongu Kim and Leslie Adamczyk from ORNL's Materials Sciences and Technology Division. The key theoretical results in the work were obtained by Anna Morozovska and Eugene Eliseev at the National Academy of Science of Ukraine and Tony Chung and Edwin Garcia at Purdue University.
This research was supported as part of the Fluid Interface Reactions, Structures and Transport Center, an Energy Frontier Research Center funded by the Department of Energy, Office of Science.

Tiny MAVs May Someday Explore and Detect Environmental Hazards


Recent prototype of the Harvard Microrobotic Fly, a three-centimeter wingspan flapping-wing robot.
Dr. Robert Wood of Harvard University, a researcher sponsored by the Air Force Office of Scientific Research, is leading the way in what could become the next phase of high-performance micro air vehicles (MAVs) for the Air Force.
His basic research is on track to evolve into robotic, insect-scale devices for monitoring and exploration of hazardous environments, such as collapsed structures, caves and chemical spills.
"We are developing a suite of capabilities which we hope will lead to MAVs that exceed the capabilities of existing small aircraft. The level of autonomy and mobility we seek has not been achieved before using robotic devices on the scale of insects," said Wood.
Wood and his research team are trying to understand how wing design can impact performance for an insect-size, flapping-wing vehicle. Their insights will also influence how such agile devices are built, powered and controlled.
"A big emphasis of our AFOSR program is the experimental side of the work," said Wood. "We have unique capabilities to create, flap and visualize wings at the scales and frequencies of actual insects."
The researchers are constructing wings and moving them at high frequencies, recreating trajectories similar to those of an insect's wings. They are also able to measure multiple force components, and they can observe the fluid flow around wings flapping more than 100 times per second.
Performing experiments at such a small scale presents significant engineering challenges beyond the study of the structure-function relationships for the wings.
"Our answer to the engineering challenges for these experiments and vehicles is a unique fabrication technique we have developed for creating wings, actuators, thorax and airframe at the scale of actual insects and evaluating them in fluid conditions appropriate for their scale," he said.
They are also performing high-speed stereoscopic motion tracking, force measurements and flow visualization; the combination allows a unique perspective on what is going on in these complex systems.

Neutrons Helping Researchers Unlock Secrets to Cheaper Ethanol


New research could help scientists identify the most effective pretreatment strategy and lower the cost of biomass conversion
New insight into the structure of switchgrass and poplars is fueling discussions that could result in more efficient methods to turn biomass into biofuel.
Researchers from the Department of Energy's Oak Ridge National Laboratory and Georgia Tech used small-angle neutron scattering to probe the structural impact of an acid pretreatment of lignocellulose from switchgrass. Pretreatment is an essential step to extract cellulose, which can then be converted, through a series of enzymatic steps, into sugars and then ethanol. The findings, published in Biomacromolecules, could help scientists identify the most effective pretreatment strategy and lower the cost of the biomass conversion process.
"My hope is that this paper and subsequent discussions about our observations will lead to a better understanding of the complex mechanisms of lignocellulose breakdown," said co-author Volker Urban of ORNL's Chemical Sciences Division.
A key finding is that native switchgrass that has been pretreated with hot dilute sulfuric acid undergoes significant morphological changes. While the data demonstrate that the switchgrass materials are very similar at length scales greater than 1,000 angstroms, the materials are profoundly different at shorter lengths. An angstrom is equal to 1/10th of a nanometer.
Specifically, Urban and colleagues discovered that the diameter of the crystalline portion of a cellulose fibril increases from about 21 angstroms before treatment to 42 angstroms after treatment. Also, they learned that lignin concurrently undergoes a redistribution process and forms aggregates, or droplets, which are 300 angstroms to 400 angstroms in size.
"Our study suggests that hot dilute sulfuric acid pretreatment effectively decreases recalcitrance by making cellulose more accessible to enzymes through lignin redistribution and hemi-cellulose removal," Urban said. Recalcitrance refers to a plant's robustness, or natural defenses to being chemically dismantled.
Unfortunately, the apparent increase in cellulose microfibril diameter may indicate a cellulose re-annealing that would be counterproductive and may limit the efficiency of the dilute sulfuric acid pretreatment process, the researchers reported.
"Ultimately, the ability to extract meaningful structural information from different native and pretreated biomass samples will enable evaluation of various pretreatment protocols for cost-effective biofuels production," Urban said.
Small-angle neutron scattering measurements were performed at ORNL's High Flux Isotope Reactor and analyzed using the unified fit approach, a mathematical model that allows simultaneous evaluation of the different levels of hierarchical organization present in biomass.
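For readers curious what such a model looks like, below is a minimal single-level version of a unified (Beaucage-type) fit of the general kind used for hierarchically structured scatterers; the parameter values are placeholders, not those fitted in the ORNL/Georgia Tech study.

    import numpy as np
    from scipy.special import erf

    def unified_fit(q, G, Rg, B, P):
        # Guinier term for one structural level plus a power-law term damped at low q
        guinier = G * np.exp(-(q * Rg) ** 2 / 3.0)
        power_law = B * (erf(q * Rg / np.sqrt(6.0)) ** 3 / q) ** P
        return guinier + power_law

    q = np.logspace(-3, 0, 200)  # scattering vector, 1/angstrom
    intensity = unified_fit(q, G=1.0, Rg=20.0, B=1e-4, P=4.0)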
Other authors of the paper were Sai Venkatesh Pingali, William Heller, Joseph McGaughey, Hugh O'Neill, Dean Myles and Barbara Evans of ORNL and Marcus Foston and Arthur Ragauskas of Georgia Tech. Support for the research and for HFIR was provided by DOE's Office of Science.

Electron Switch Between Molecules Points Way to New High-Powered Organic Batteries


This is an illustration of an assembled set of different molecules. These molecules meet, exchange electrons and then disassemble because chloride ions, which are represented as green spheres, are present. If these chloride ions are removed, the entire process can be reversed.
The development of new organic batteries -- lightweight energy storage devices that work without the need for toxic heavy metals -- has a brighter future now that chemists have discovered a new way to pass electrons back and forth between two molecules.
The research is also a necessary step toward creating artificial photosynthesis, where fuel could be generated directly from the sun, much as plants do.
University of Texas at Austin chemists Christopher Bielawski and Jonathan Sessler led the research, which was published in Science.
When molecules meet, they often form new compounds by exchanging electrons. In some cases, the electron transfer process creates one molecule with a positive charge and one molecule with a negative charge. Molecules with opposite charges are attracted to each other and can combine to form something new.
In their research, the chemists created two molecules that could meet and exchange electrons but not unite to form a new compound.
"These molecules were effectively spring-loaded to push apart after interacting with each other," says Bielawski, professor of chemistry. "After electron transfer occurs, two positively charged molecules are formed which are repelled by each other, much like magnets held in a certain way will repel each other. We also installed a chemical switch that allowed the electron transfer process to proceed in the opposite direction."
Sessler adds, "This is the first time that the forward and backward switching of electron flow has been accomplished via a switching process at the molecular scale." Sessler is the Roland K. Pettit Centennial Chair in Chemistry at The University of Texas at Austin and a visiting professor at Yonsei University.
Bielawski says this system gives important clues for making an efficient organic battery. He says understanding the electron transfer processes in these molecules provides a way to design organic materials for storing electrical energy that could then be retrieved for later use.
"I would love it if my iPhone was thinner and lighter, and the battery lasted a month or even a week instead of a day," says Bielawski. "With an organic battery, it may be possible. We are now starting to get a handle on the fundamental chemistry needed to make this dream a commercial reality."
The next step, he says, is to demonstrate these processes can occur in a condensed phase, like in a film, rather than in solution.
Organic batteries are made of organic materials instead of heavy metals. They could be lightweight, could be molded into any shape, have the potential to store more energy than conventional batteries and could be safer and cheaper to produce.
The molecular switch could also be a step toward developing a technology that mimics plants' ability to harvest light and convert it to energy. With such a technology, fuel could be produced directly from the sun, rather than through a plant mediator, such as corn.
"I am excited about the prospect of coupling this kind of electron transfer 'molecular switch' with light harvesting to go after what might be an improved artificial photosynthetic device," says Sessler. "Realizing this dream would represent a big step forward for science."
Bielawski and Sessler credit graduate student Jung Su Park for his detailed work growing crystals of the two molecules. Other collaborators include graduate student Elizabeth Karnas from The University of Texas at Austin, Professor Shunichi Fukuzumi at Osaka University and Professor Karl Kadish at the University of Houston.

Chandra Finds Evidence for Stellar Cannibalism


The composite image on the left shows X-ray and optical data for BP Piscium (BP Psc), a more evolved version of our Sun about 1,000 light years from Earth. Chandra X-ray Observatory data are colored in purple, and optical data from the 3-meter Shane telescope at Lick Observatory are shown in orange, green and blue. BP Psc is surrounded by a dusty and gaseous disk and has a pair of jets several light years long blasting out of the system. A close-up view is shown by the artist's impression on the right. For clarity, a narrow jet is shown, but the actual jet is probably much wider, extending across the inner regions of the disk. Because of the dusty disk, the star's surface is obscured in optical and near-infrared light. Therefore, the Chandra observation is the first detection of this star at any wavelength.

Evidence that a star has recently engulfed a companion star or a giant planet has been found using NASA's Chandra X-ray Observatory. The likely existence of such a "cannibal" star provides new insight into how stars and the planets around them may interact as they age.
The star in question, known as BP Piscium (BP Psc), appears to be a more evolved version of our Sun, but with a dusty and gaseous disk surrounding it. A pair of jets several light years long blasting out of the system in opposite directions has also been seen in optical data. While the disk and jets are characteristics of a very young star, several clues -- including the new results from Chandra -- suggest that BP Psc is not what it originally appeared to be.
Instead, astronomers have suggested that BP Psc is an old star in its so-called red giant phase. And, rather than being hallmarks of its youth, the disk and jets are, in fact, remnants of a recent and catastrophic interaction whereby a nearby star or giant planet was consumed by BP Psc.
When stars like the Sun begin to run out of nuclear fuel, they expand and shed their outer layers. Our Sun, for example, is expected to swell so much that it nearly reaches, or possibly engulfs, Earth as it becomes a red giant star.
"It appears that BP Psc represents a star-eat-star Universe, or maybe a star-eat-planet one," said Joel Kastner of the Rochester Institute of Technology, who led the Chandra study. "Either way, it just shows it's not always friendly out there."
Several pieces of information have led astronomers to rethink how old BP Psc might be. First, BP Psc is not located near any star-forming cloud, and there are no other known young stars in its immediate vicinity. Secondly, in common with most elderly stars, its atmosphere contains only a small amount of lithium. Thirdly, its surface gravity appears to be too weak for a young star and instead matches that of an old red giant.
Chandra adds to this story. Young, low-mass stars are brighter than most other stars in X-rays, and so X-ray observations can be used as a sign of how old a star may be. Chandra does detect X-rays from BP Psc, but at a rate that is too low to be from a young star. Instead, the X-ray emission rate measured for BP Psc is consistent with that of rapidly rotating giant stars.
The spectrum of the X-ray emission -- that is how the amount of X-rays changes with wavelength -- is consistent with flares occurring on the surface of the star, or with interactions between the star and the disk surrounding it. The magnetic activity of the star itself might be generated by a dynamo caused by its rapid rotation. This rapid rotation can be caused by the engulfment process.
"It seems that BP Psc has been energized by its meal," said co-author Rodolfo (Rudy) Montez Jr., also from the Rochester Institute of Technology.
The star's surface is obscured throughout the visible and near-infrared bands, so the Chandra observation represents the first detection at any wavelength of BP Psc itself.
"BP Psc shows us that stars like our Sun may live quietly for billions of years," said co-author David Rodriguez from UCLA, "but when they go, they just might take a star or planet or two with them."
Although any close-in planets were presumably devastated when BP Psc turned into a giant star, a second round of planet formation might be occurring in the surrounding disk, hundreds of millions of years after the first round. A new paper using observations with the Spitzer Space Telescope has reported possible evidence for a giant planet in the disk surrounding BP Psc. This might be a newly formed planet or one that was part of the original planetary system.
"Exactly how stars might engulf other stars or planets is a hot topic in astrophysics today," said Kastner. "We have many important details that we still need to work out, so objects like BP Psc are really exciting to find."
These results appeared in The Astrophysical Journal Letters. Other co-authors on the study were Nicolas Grosso of the University of Strasbourg, Ben Zuckerman from UCLA, Marshall Perrin from the Space Telescope Science Institute, Thierry Forveille of the Grenoble Astrophysics Laboratory in France and James Graham from the University of California, Berkeley.

Selfishness Can Sometimes Help the Common Good, Yeast Study Finds


3-D rendering of yeast.

Scientists have overturned the conventional wisdom that cooperation is essential for the well-being of the whole population, finding proof that slackers can sometimes help the common good.

The researchers, from Imperial College London, the Universities of Bath and Oxford, University College London and the Max Planck Institute for Evolutionary Biology, studied populations of yeast and found that a mixture of 'co-operators' and 'cheats' grew faster than a more utopian population of 'co-operators' alone.
In the study, the 'co-operator' yeast produce a protein called invertase that breaks down sugar (sucrose) to give food (glucose) that is available to the rest of the population. The 'cheats' eat the broken down sugar but don't make invertase themselves, and so save their energy.
The study, which appears in the journal PLoS Biology, published by the Public Library of Science, used both laboratory experiments and a mathematical model to understand why and how a little "selfishness" can benefit the whole population.
Professor Laurence Hurst, Royal Society-Wolfson Research Merit Award Holder at the University of Bath, explained: "We found that yeast used sugar more efficiently when it was scarce, and so having 'cheats' in the population stopped the yeast from wasting their food.
"Secondly we found that because yeast cannot tell how much sucrose is available to be broken down, they waste energy making invertase even after there is no sugar left. This puts a brake on population growth. But if most of the population are 'co-operators' and the remainder are 'cheats', not all of the population is wasting their energy and limiting growth.
"For these effects to matter, we found that 'co-operators' needed to be next to other 'co-operators' so they get more of the glucose they produce. If any of these three conditions were changed, the 'cheats' no longer benefitted the population."
Dr Ivana Gudelj, NERC Advanced Fellow and Lecturer in Applied Mathematics at Imperial College London, added: "Our work illustrates that the commonly used language of 'co-operators' and 'cheats' could in fact obscure the reality.
"When the addition of more invertase producers reduces the fitness of all, it is hard to see invertase production as co-operation, even if it behaves in a more classical co-operative manner, benefitting all, when rare."
The researchers suggest similar situations may exist in other species where 'cheats' help rather than hinder the population.
This study was funded by the Royal Society, the Natural Environment Research Council (NERC) and Conacyt.

MicroFusion Reactor lets you home-brew ethanol

The E-Fuel MicroFueler, used in conjunction with the MicroFusion Reactor

A lot of people try to lessen the load on the local landfill by putting their organic waste in a compost heap, but soon there may be something else they can do with it – feed it to an E-Fuel MicroFusion Reactor. The new device, so we’re told, takes cellulosic waste material and breaks it down to nothing but sugar water and lignin powder within two minutes. The lignin powder can be used by pharmaceutical manufacturers (although it’s not clear how you’d get it to them), while the sugar water can be distilled into ethanol fuel. That’s where one of E-Fuel’s other products, the MicroFueler, comes in.
Aimed at both home users and businesses, the MicroFueler has been around since 2009. It distills sugar water obtained from organic waste into ready-to-use E-Fuel100 ethanol, which it can pump right into your car. You could also use the fuel in a generator, to provide household electricity. Unlike other ethanol production processes, the MicroFueler does not involve combustion, so is reportedly safe. While it can directly process sugar-rich liquids such as waste alcohol, it needs help breaking down cellulosic materials such as vegetable matter and wood... hence the need for prior processing by the MicroFusion Reactor.
E-Fuel appears to be holding off on releasing more information on the Reactor (including photos) until the product financing is in place.
In the meantime, should you be interested in the MicroFueler, you can purchase one from the company for US$9,995, plus at least $1,995 for a fuel tank. A $9.95 monthly network subscription fee is also required.

Smart home sensors use electrical wiring as an antenna



Smart homes of the future will automatically adapt to their surroundings, using an array of sensors to record everything from the building’s temperature and humidity to the light level and air quality. One hurdle impeding the development of such intelligent homes is power: today’s wireless devices either transmit a signal only a few feet or consume so much energy that they need frequent battery replacements. Researchers have now developed sensors that run on extremely low power by using a home’s electrical wiring as a giant antenna to transmit information.
The technology devised by researchers at the University of Washington and the Georgia Institute of Technology uses a home’s copper electrical wiring as a giant antenna to receive wireless signals at a set frequency, allowing for wireless sensors that run for decades on a single watch battery. A low-power sensor placed within 10 to 15 feet of electrical wiring can use the antenna to send data to a single base station plugged in anywhere in the home.

SNUPI

The device is called Sensor Nodes Utilizing Powerline Infrastructure, or SNUPI. It originated when Shwetak Patel, a UW assistant professor of computer science and of electrical engineering, and co-author Erich Stuntebeck were doctoral students at Georgia Tech and worked with thesis adviser Gregory Abowd to develop a method using electrical wiring to receive wireless signals in a home. They discovered that home wiring is a remarkably efficient antenna at 27 megahertz. Since then, Patel's team at the UW has built the actual sensors and refined this method.
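A rough plausibility check, ours rather than the researchers', helps explain why household wiring works well at that frequency: at 27 MHz the wavelength is about 11 meters, so the runs of copper wire threading through a house are a sizeable fraction of a wavelength long.

# Back-of-the-envelope check on the 27 MHz figure (our reasoning, not the paper's analysis).
c = 299_792_458   # speed of light, m/s
f = 27e6          # SNUPI's operating frequency, Hz

wavelength = c / f
print(f"Wavelength at 27 MHz: {wavelength:.1f} m")       # ~11.1 m
print(f"Quarter wavelength:   {wavelength / 4:.1f} m")   # ~2.8 m -- a few meters of wire
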
"Here, we can imagine this having an out-of-the-box experience where the device already has a battery in it, and it's ready to go and run for many years," Patel said. Users could easily sprinkle dozens of sensors throughout the home, even behind walls or in hard-to-reach places like attics or crawl spaces.

Prototype testing

In testing the system, sensors were placed in five locations in each room of a 3,000-square-foot house. The team found that only five percent of the house was out of range, compared to 23 percent when using over-the-air communication at the same power level. They also discovered that the sensors can transmit near bathtubs because the electrical grounding wire is typically tied to the copper plumbing pipes, that a lamp cord plugged into an outlet acts as part of the antenna, and that outdoor wiring can extend the sensors' range outside the home.
While traditional wireless systems can have trouble sending signals through walls, this system actually does better around walls that contain electrical wiring. Even more impressive, according to the researchers, SNUPI needs less than one percent of the power for data transmission that the next most efficient design requires.
"Existing nodes consumed the vast majority of their power, more than 90 percent, in wireless communication," said lead student researcher, Gabe Cohn. "We've flipped that. Most of our power is consumed in the computation, because we made the power for wireless communication almost negligible."
The existing prototype uses UW-built custom electronics and consumes less than 1 milliwatt of power when transmitting, with less than 10 percent of that devoted to communication. Depending on the attached sensor, the device could run continuously for 50 years, much longer than the decade-long shelf life of its battery.
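Those figures imply a tiny average power budget. As a rough sanity check, and assuming a typical coin-cell capacity of 225 mAh at 3 V (our assumption, not a number quoted by the team), the battery would have to be drained at only a couple of microwatts on average to last 50 years, which in turn means the node, drawing around a milliwatt when active, can only be awake a small fraction of a percent of the time.

# Rough average-power budget implied by a 50-year life on a watch battery.
# The 225 mAh / 3 V capacity is an assumed typical coin-cell figure.
capacity_wh = 0.225 * 3.0            # 0.675 Wh
hours_in_50_years = 50 * 365.25 * 24

avg_power_w = capacity_wh / hours_in_50_years
print(f"Average draw for a 50-year life: {avg_power_w * 1e6:.1f} microwatts")   # ~1.5 uW

max_duty_cycle = avg_power_w / 1e-3  # node draws ~1 mW when actively transmitting
print(f"Implied maximum active duty cycle: {max_duty_cycle:.3%}")               # ~0.15%
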
"Basically, the battery will start to decompose before it runs out of power," Patel said.
The team suggests longer-term applications might consider using more costly medical-grade batteries, which have a longer shelf life. The team is also looking to reduce the power consumption even further so no battery would be needed. They say they're already near the point where solar energy or body motion could provide enough energy. The researchers are commercializing the base technology, which they believe could be used as a platform for a variety of sensing systems.
The technology, which could be used in home automation or medical monitoring, doesn’t interfere with electricity flow or with other emerging systems that use electrical wiring to transmit Ethernet signal between devices plugged into two outlets.
It will be presented this month at the Ubiquitous Computing conference in Copenhagen, Denmark.

Tuesday, September 14, 2010

Sun and Volcanic Eruptions Pace North Atlantic Climate Swings


The upper panel shows variations in North Atlantic Ocean basin-wide sea surface temperatures in a simulation that includes historical variations in total solar irradiance and volcanic aerosols (blue), and in a simulation that, in addition to these natural external 'forcings', also includes anthropogenic 'forcings' for the last 150 years (red). Up to the year 1900, the blue curve is consistent with available temperature observations, whereas only the red curve matches the observed temperature evolution in the 20th century. The lower panel shows variations in the large-scale ocean circulation in the Atlantic (black) and the dates of major volcanic eruptions.

A study presented in Nature Geoscience suggests that changes in solar intensity and volcanic eruptions act as a metronome for temperature variations in the North Atlantic climate.

A research team from the Bjerknes Centre for Climate Research in Bergen, Norway, has studied the climate in the North Atlantic region over the past 600 years using the Bergen Climate Model and the observed temperature evolution. They point to changes in the solar intensity and explosive volcanic eruptions as important causes for climate variations in the North Atlantic during this period.
The sun, volcanoes or ocean currents?
The traditional and common view is that climate variations in the North Atlantic lasting a decade or more are governed by changes in the large-scale ocean circulation. The presented analysis supports this common perception, but only when the climate effects of changes in solar intensity and volcanic eruptions are left out.
When the scientists include actual changes in the solar forcing and the climate effect of volcanic eruptions in their model, they find a strong causal link between these external factors and variations in the Atlantic surface temperature. In particular, the study highlights volcanic eruptions as important for long-term variations in the Atlantic climate, both through their strong cooling effect and through their direct impact on atmosphere and ocean circulation.
Regional climate variations and ocean temperatures
A wide range of regional climate variations of high societal importance have been linked to temperature variations in the North Atlantic. These include periods of prolonged droughts in the U.S. such as the 'Dust Bowl', changes in European summer temperatures, long-term changes in the East-Asian monsoon and variations in the intensity of Atlantic hurricanes. The governing mechanisms behind such long-term variations are, however, not well understood.
The study provides new insight into the causes and timing of long-term variations in the Atlantic and, consequently, into the potential for developing decadal prediction schemes for the Atlantic climate.
Warming during the 20th century
The study also shows that the observed warming in the North Atlantic during the 20th century cannot be explained by the solar and volcanic activity alone. In the model, the increased emissions of CO2 and other well-mixed greenhouse gases to the atmosphere since the onset of the industrial revolution have to be included in order to simulate the observed temperature evolution.

No Dead Zones Observed or Expected as Part of BP Deepwater Horizon Oil Spill


Oil spill

The National Oceanic and Atmospheric Administration (NOAA), the U.S. Environmental Protection Agency (EPA) and the Office of Science and Technology Policy (OSTP) released a report September 7, 2010 that showed dissolved oxygen levels have dropped by about 20 percent from their long-term average in the Gulf of Mexico in areas where federal and independent scientists previously reported the presence of subsurface oil. Scientists from agencies involved in the report attribute the lower dissolved oxygen levels to microbes using oxygen to consume the oil from the BP Deepwater Horizon oil spill.
These dissolved oxygen levels, measured within 60 miles of the wellhead, have stabilized and are not low enough to become "dead zones." A dead zone is an area of very low dissolved oxygen that cannot support most life. Dead zones are commonly observed in the nearshore waters of the western and northern Gulf of Mexico in summer, but not normally in the deep water layer (3,300 -- 4,300 feet) where the lowered oxygen areas in this study occurred. Dead zones, also known as hypoxic areas, are defined in marine waters as areas in which dissolved oxygen concentrations are below 2 mg/L (1.4 ml/L).
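The two figures in that threshold are the same number expressed in different units; converting between them just takes the density of oxygen gas, about 1.43 mg per mL under standard conditions.

# Converting dissolved-oxygen readings between the two units used in the report.
MG_PER_ML_O2 = 1.429   # mass of 1 mL of oxygen gas, in mg (standard conditions)

def mg_per_l_to_ml_per_l(mg_per_l):
    return mg_per_l / MG_PER_ML_O2

print(f"{mg_per_l_to_ml_per_l(2.0):.1f} mL/L")   # hypoxia threshold: 2 mg/L is ~1.4 mL/L
print(f"{mg_per_l_to_ml_per_l(3.5):.1f} mL/L")   # lowest reading: 3.5 mg/L works out to roughly 2.4-2.5 mL/L
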
"All the scientists working in the Gulf have been carefully watching dissolved oxygen levels because excess carbon in the system might lead to a dead zone. While we saw a decrease in oxygen, we are not seeing a continued downward trend over time," said Steve Murawski, Ph.D., NOAA's Chief Scientist for Fisheries and the head of the Joint Analysis Group. "None of the dissolved oxygen readings have approached the levels associated with a dead zone and as the oil continues to diffuse and degrade, hypoxia becomes less of a threat."
Since the Deepwater Horizon incident began, EPA and NOAA have systematically monitored dissolved oxygen levels along with other parameters from the sea surface to about 5,000 feet deep near the spill site. Data from 419 locations sampled on multiple expeditions by nine ships -- NOAA Ships Gordon Gunter, Henry Bigelow, Nancy Foster and Thomas Jefferson and the research vessels Brooks McCall, Ferrel, Jack Fitz, Ocean Veritas and Walton Smith -- over a three-month period, were analyzed for this report.
The JAG report does not specifically address the question of the rate of biodegradation of oil, which cannot be determined looking only at dissolved oxygen data. But it references a recently published peer reviewed study conducted by researchers at the Department of Energy's Lawrence Berkeley National Laboratory. Using field sampling and laboratory experiments, the scientists found that half-lives of some components of the oil were in the range of 1.2 to 6.1 days. Their results suggest that the light components of the oil are being rapidly degraded by microbes. This report also did not find hypoxic oxygen levels.
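Half-lives in that range mean the light components vanish quickly; a simple exponential-decay calculation (ours, for illustration) shows how little would remain after a month.

# Fraction of a compound remaining after t days, given its half-life.
def fraction_remaining(t_days, half_life_days):
    return 0.5 ** (t_days / half_life_days)

for half_life in (1.2, 6.1):   # the range of half-lives reported for light oil components
    print(f"half-life {half_life} days: {fraction_remaining(30, half_life):.2g} remaining after 30 days")
# -> about 3e-08 (essentially gone) and 0.033 (about 3%) respectively
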
The report released September 7 documenting moderately low dissolved oxygen levels is also consistent with a study recently published by academic researchers led by the Woods Hole Oceanographic Institution and supported by the National Science Foundation and NOAA, who also reported they did not find dead zones where they found subsurface oil.
"It is good news that dissolved oxygen has not reached hypoxic levels in these deepwater environs," said Shere Abbott, Associate Director for Environment at the Office of Science and Technology Policy. "This work testifies to the nation's commitment to applying the best science and technology -- directly through federal agencies and indirectly through the support of cutting-edge academic research -- to understand environmental impacts in the Gulf and in all our treasured ocean ecosystems."
Dissolved oxygen levels reported by the JAG were measured using a range of different instruments and methods including dissolved oxygen sensors and Winkler titrations. The dissolved oxygen data underwent preliminary quality control (QC) and quality assurance (QA) described in Appendix 4 of the report.
Dissolved oxygen levels are continuously monitored as part of the EPA monitoring protocols required for the use of subsea dispersants. If concentrations had fallen below 2 mg/L (1.4 mL/L) -- hypoxic levels -- the Unified Command would have considered discontinuing the use of subsurface dispersants. Based on evidence to date, dissolved oxygen did not decrease to these levels. The lowest dissolved oxygen measured was 3.5 mg/L (2.5 mL/L), which is above hypoxic levels. However, this report does not discuss the broad ecosystem consequences of hydrocarbons released into the environment.

Funneling Solar Energy: Antenna Made of Carbon Nanotubes Could Make Photovoltaic Cells More Efficient


This filament containing about 30 million carbon nanotubes absorbs energy from the sun as photons and then re-emits photons of lower energy, creating the fluorescence seen here. The red regions indicate highest energy intensity, and green and blue are lower intensity.

Using carbon nanotubes (hollow tubes of carbon atoms), MIT chemical engineers have found a way to concentrate solar energy 100 times more than a regular photovoltaic cell. Such nanotubes could form antennas that capture and focus light energy, potentially allowing much smaller and more powerful solar arrays.
"Instead of having your whole roof be a photovoltaic cell, you could have little spots that were tiny photovoltaic cells, with antennas that would drive photons into them," says Michael Strano, the Charles and Hilda Roddey Associate Professor of Chemical Engineering and leader of the research team.
Strano and his students describe their new carbon nanotube antenna, or "solar funnel," in the Sept. 12 online edition of the journal Nature Materials. Lead authors of the paper are postdoctoral associate Jae-Hee Han and graduate student Geraldine Paulus.
Their new antennas might also be useful for any other application that requires light to be concentrated, such as night-vision goggles or telescopes.
Solar panels generate electricity by converting photons (packets of light energy) into an electric current. Strano's nanotube antenna boosts the number of photons that can be captured and transforms the light into energy that can be funneled into a solar cell.
The antenna consists of a fibrous rope about 10 micrometers (millionths of a meter) long and four micrometers thick, containing about 30 million carbon nanotubes. Strano's team built, for the first time, a fiber made of two layers of nanotubes with different electrical properties -- specifically, different bandgaps.
In any material, electrons can exist at different energy levels. When a photon strikes the surface, it excites an electron to a higher energy level, which is specific to the material. The interaction between the energized electron and the hole it leaves behind is called an exciton, and the difference in energy levels between the hole and the electron is known as the bandgap.
The inner layer of the antenna contains nanotubes with a small bandgap, and nanotubes in the outer layer have a higher bandgap. That's important because excitons like to flow from high to low energy. In this case, that means the excitons in the outer layer flow to the inner layer, where they can exist in a lower (but still excited) energy state.
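How strong is that preference? A simple Boltzmann estimate, which is our illustration rather than the paper's analysis, shows that even a modest energy step heavily biases excitons toward the lower-energy core at room temperature; the 0.1 eV offset below is an assumed round number, not a value from the study.

import math

# Boltzmann estimate of how strongly excitons favour the lower-bandgap core.
kT_room = 0.0259        # thermal energy at ~300 K, in eV
bandgap_offset = 0.10   # assumed energy drop from outer to inner layer, in eV (illustrative)

ratio = math.exp(bandgap_offset / kT_room)
print(f"Equilibrium preference for the lower-energy core: roughly {ratio:.0f} to 1")   # ~48 to 1
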
Therefore, when light energy strikes the material, all of the excitons flow to the center of the fiber, where they are concentrated. Strano and his team have not yet built a photovoltaic device using the antenna, but they plan to. In such a device, the antenna would concentrate photons before the photovoltaic cell converts them to an electrical current. This could be done by constructing the antenna around a core of semiconducting material.
The interface between the semiconductor and the nanotubes would separate the electron from the hole, with electrons being collected at one electrode touching the inner semiconductor, and holes collected at an electrode touching the nanotubes. This system would then generate electric current. The efficiency of such a solar cell would depend on the materials used for the electrode, according to the researchers.
Strano's team is the first to construct nanotube fibers in which they can control the properties of different layers, an achievement made possible by recent advances in separating nanotubes with different properties.
While the cost of carbon nanotubes was once prohibitive, it has been coming down in recent years as chemical companies build up their manufacturing capacity. "At some point in the near future, carbon nanotubes will likely be sold for pennies per pound, as polymers are sold," says Strano. "With this cost, the addition to a solar cell might be negligible compared to the fabrication and raw material cost of the cell itself, just as coatings and polymer components are small parts of the cost of a photovoltaic cell."
Strano's team is now working on ways to minimize the energy lost as excitons flow through the fiber, and on ways to generate more than one exciton per photon. The nanotube bundles described in the Nature Materials paper lose about 13 percent of the energy they absorb, but the team is working on new antennas that would lose only 1 percent.
Funding: National Science Foundation Career Award, MIT Sloan Fellowship, the MIT-Dupont Alliance and the Korea Research Foundation.

Glasperlenspiel: Scientists Propose New Test for Gravity


A beam of laser light (red) should be able to cause a glass bead of approximately 300 nanometers in diameter to levitate, and the floating bead would be exquisitely sensitive to the effects of gravity. Moving a large heavy object (gold) to within a few nanometers of the bead could allow the team to test the effects of gravity at very short distances.

A new experiment proposed by physicists at the National Institute of Standards and Technology (NIST) may allow researchers to test the effects of gravity with unprecedented precision at very short distances -- a scale at which exotic new details of gravity's behavior may be detectable.
Of the four fundamental forces that govern interactions in the universe, gravity may be the most familiar, but ironically it is the least understood by physicists. While gravity's influence is well-documented on bodies separated by astronomical or human-scale distances, it has been largely untested at very close scales -- on the order of a few millionths of a meter -- where electromagnetic forces often dominate. This lack of data has sparked years of scientific debate.
"There are lots of competing theories about whether gravity behaves differently at such close range," says NIST physicist Andrew Geraci, "But it's quite difficult to bring two objects that close together and still measure their motion relative to each other very precisely."
In an attempt to sidestep the problem, Geraci and his co-authors have envisioned an experiment that would suspend a small glass bead in a laser beam "bottle," allowing it to move back and forth within the bottle. Because there would be very little friction, the motion of the bead would be exquisitely sensitive to the forces around it, including the gravity of a heavy object placed nearby.
According to the research team, the proposed experiment would permit the testing of gravity's effects on particles separated by 1/1,000 the diameter of a human hair, which could ultimately allow Newton's law to be tested with a sensitivity 100,000 times better than existing experiments.
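To put that separation in perspective, assuming a typical hair diameter of about 75 micrometers (our assumption; NIST doesn't specify one), 1/1,000 of a hair works out to a gap of under a tenth of a micrometer.

# Rough size of the test separation: 1/1,000 of a human hair's diameter.
hair_diameter_m = 75e-6   # assumed typical hair diameter (~75 micrometers)
separation_m = hair_diameter_m / 1000
print(f"Test separation: about {separation_m * 1e9:.0f} nanometers")   # ~75 nm
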
Actually realizing the scheme -- detailed in a new paper in Physical Review Letters -- could take a few years, co-author Scott Papp says, in part because of trouble with friction, the old nemesis of short-distance gravity research. Previous experiments have placed a small object (like this experiment's glass bead) onto a spring or short stick, an approach that creates much more friction than laser suspension would introduce, but the NIST team's idea comes with its own issues.
"Everything creates some sort of friction," Geraci says. "We have to make the laser beams really quiet, for one thing, and then also eliminate all the background gas in the chamber. And there will undoubtedly be other sources of friction we have not yet considered."
For now, Geraci says, the important thing is to get the idea in front of the scientific community.
"Progress in the scientific community comes not just from individual experiments, but from new ideas," he says. "The recognition that this system can lead to very precise force measurements could lead to other useful experiments and instruments."

Carbon Mapping Breakthrough


A new high-resolution airborne and satellite mapping approach provides detailed information on carbon stocks in Amazonia. This image shows an area of road building and development adjacent to primary forest in red tones, and secondary forest regrowth in green tones.
By integrating satellite mapping, airborne-laser technology, and ground-based plot surveys, scientists from the Carnegie Institution's Department of Global Ecology, with colleagues from the World Wildlife Fund and in coordination with the Peruvian Ministry of the Environment (MINAM), have revealed the first high-resolution maps of carbon locked up in tropical forest vegetation and emitted by land-use practices.

These new maps pave the way for accurate monitoring of carbon storage and emissions for the proposed United Nations initiative on Reduced Emissions from Deforestation and Degradation (REDD). The study is published in the September 6, 2010, early edition of the Proceedings of the National Academy of Sciences.
The United Nations REDD initiative could create financial incentives to reduce carbon emissions from deforestation and degradation. However, this and similar carbon monitoring programs have been hindered by a lack of accurate, high-resolution methods to account for changes in the carbon stored in vegetation and lost through deforestation, selective logging, and other land-use disturbances. The new high-resolution mapping method will have a major impact on the implementation of REDD in tropical regions around the world.
The study covered over 16,600 square miles of the Peruvian Amazon -- an area about the size of Switzerland. The researchers used a four-step process: They mapped vegetation types and disturbance by satellite; developed maps of 3-D vegetation structure using a LiDAR system (light detection and ranging) from the fixed-wing Carnegie Airborne Observatory; converted the structural data into carbon density using a small network of field plots on the ground; and integrated the satellite and LiDAR data for high-resolution maps of stored and emitted carbon. The scientists combined historical deforestation and degradation data with 2009 carbon stock information to calculate emissions from 1999-2009 for the Madre de Dios region.
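The key calibration step, turning LiDAR canopy structure into carbon density using a handful of field plots, can be sketched with a toy example. Everything below (plot values, canopy heights and the simple linear fit) is invented for illustration; the actual study uses far richer data and models.

# Toy version of steps 2-4: fit carbon density to canopy height using field plots,
# then apply the fit across a LiDAR height grid. All numbers are invented.
plots = [(10, 40), (20, 85), (30, 130), (40, 170)]   # (canopy height m, carbon t C/ha) from field plots

# Least-squares fit of carbon_density = a * height + b
n = len(plots)
sx = sum(h for h, _ in plots)
sy = sum(c for _, c in plots)
sxx = sum(h * h for h, _ in plots)
sxy = sum(h * c for h, c in plots)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

lidar_heights = [[35, 32, 5], [28, 0, 3], [31, 30, 29]]   # toy LiDAR canopy heights; low values ~ cleared land

carbon_map = [[max(a * h + b, 0) for h in row] for row in lidar_heights]
for row in carbon_map:
    print([f"{c:.0f}" for c in row])   # per-pixel carbon density, t C/ha
# Emissions are then estimated by differencing such maps over time (here, 1999-2009).
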
"We found that the total regional forest carbon storage was about 395 million metric tons and emissions reached about 630,000 metric tons per year," explained lead author Greg Asner. "But what really surprised us was how carbon storage differed among forest types and the underlying geology, all in very close proximity to one another. For instance, where the local geology is up to 60 million years old, the vegetation retains about 25% less carbon than the vegetation found on geologically younger, more fertile surfaces. We also found an important interaction between geology, land use, and emissions. These are the first such patterns to emerge from the Amazon forest."
The scientists also found that the paving of the Interoceanic Highway, combined with selective logging and gold mining, increased deforestation emissions by more than 61% by 2009, while degradation emissions doubled. Forest degradation increased regional carbon emissions by 47% over deforestation alone. However, the researchers were able to detect an 18% offset to these regional emissions in forests regrowing on previously cleared and now abandoned lands.
Members of the Peruvian government participated throughout the research process to familiarize themselves with the new method. In doing so, they aimed to assess the method's advantages, evaluate deforestation and forest disturbance, and determine carbon stocks in an environmentally critical area of Madre de Dios, Peru. "A valuable opportunity has opened for MINAM to count on Carnegie's scientific and technical support. This will strengthen our ability to monitor the Amazon forest, build experience in improving the interpretation of the country's environmental and land management conditions, and contribute to the establishment of the REDD mechanism," says Doris Rueda, director of Land Management at MINAM.
To support REDD, the Intergovernmental Panel on Climate Change (IPCC) issued baseline carbon density estimates for different biomes of the world, while also encouraging higher resolution approaches. When used for the Peruvian study area, the IPCC baseline estimate for carbon storage is 587 million metric tons. Based on the new Carnegie approach, the estimated total is 395 million metric tons. Under REDD-type programs, however, the high-resolution accuracy of the new approach would yield more credit per ton of carbon, thereby providing financial incentives for slowing deforestation and degradation.
Carnegie scientists are expanding their demonstration and training efforts in the high-resolution mapping technique with the governments of Ecuador and Colombia.
The research was supported by the Government of Norway, the Gordon and Betty Moore Foundation, the W. M. Keck Foundation, and William R. Hearst III.

Artificial pressure-sensitive skin created from nanowires

An artist's impression of an artificial hand covered with the e-skin

Using a process described as “a lint roller in reverse,” engineers from the University of California, Berkeley, have created a pressure-sensitive electronic artificial skin from semiconductor nanowires. This “e-skin,” as it’s called, could one day be used to allow robots to perform tasks that require both grip and a delicate touch, or to provide a sense of touch in patients’ prosthetic limbs.
"Humans generally know how to hold a fragile egg without breaking it," said Ali Javey, associate professor of electrical engineering and computer sciences and head of the UC Berkeley research team. "If we ever wanted a robot that could unload the dishes, for instance, we'd want to make sure it doesn't break the wine glasses in the process. But we'd also want the robot to be able to grip a stock pot without dropping it."
Some previous attempts at artificial skin have used organic materials, as they are flexible and relatively easy to process. Their main drawback has been that they are poor semiconductors, so devices using them would require large amounts of power. The e-skin, however, is made from inorganic single-crystalline semiconductors. It requires only a small amount of power, is reportedly more chemically stable than its organic counterparts, and maintains a high degree of flexibility thanks to its wire-strip construction.
The team created the e-skin by growing germanium/silicon nanowires on a cylindrical drum, which was then rolled across a sticky polyimide film (although they stated that a variety of other materials would also work). The nanowires stuck to the film in a controlled, orderly fashion, forming the platform for the complete e-skin. The researchers also experimented with another process, in which the nanowires were first grown on a flat substrate, then transferred to the film by rubbing.
The test sheet of e-skin measured 7 x 7 centimeters (2.76 x 2.76 inches) and was divided into an 18 x 19 pixel matrix, with each pixel containing a transistor composed of hundreds of nanowires. The outside surface of the sheet was then coated with pressure-sensitive rubber. Using just 5 volts of power, the e-skin was shown to detect pressures ranging from 0 to 15 kilopascals, which is about the range people exert when typing or holding an object. It was also able to maintain its integrity after more than 2,000 bending cycles.
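Those dimensions put the spatial resolution at a few millimeters per pixel, as a quick calculation shows.

# Spatial resolution implied by the prototype's dimensions.
sheet_mm = 70.0               # 7 cm per side
pixels_x, pixels_y = 18, 19
print(f"Pixel pitch: {sheet_mm / pixels_x:.1f} x {sheet_mm / pixels_y:.1f} mm")   # ~3.9 x 3.7 mm
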
"This is the first truly macroscale integration of ordered nanowire materials for a functional system – in this case, an electronic skin," said study lead author Kuniharu Takei. "It's a technique that can be potentially scaled up. The limit now to the size of the e-skin we developed is the size of the processing tools we are using."
The research is described in the current issue of the journal Nature Materials.

Student-built E-Quickie electric vehicle draws energy wirelessly from the road


Over the last couple of years there have been a number of wireless chargers hitting the market, such as the Powermat and the WildCharge. These are designed to keep mobile devices charged and ready without dealing with the hassle of cords and connections. The technology has also been proposed as a way to recharge vehicles while they are parked without having to plug them in, while some companies are looking at charging cars while they are moving from electrical conductors embedded in the road. Now, a group of students in Germany has taken that idea and run with it by building an electric vehicle called the E-Quickie that runs on wireless power transmission.
Looking a bit like a recumbent bike with a driver’s cabin, the E-Quickie was built by students at the Karlsruhe University of Applied Sciences (HsKA) to investigate the practicality of a wirelessly powered electric vehicle. It gets its energy from electric conducting paths on the ground with receivers underneath the car taking energy from the tracks through electric induction and directing it to the car’s electrical hub drive.
One student working group took care of setting up the racing track, which was provided by the firm SEW, in Bruchsal. Two other teams were dedicated to the vehicle’s energy absorption and the safety of the entire system.
They designed the individual vehicle components, such as the steering and braking system and the chassis, using high-tech materials. Keeping the vehicle’s weight to a minimum and optimizing its aerodynamics were also important factors in designing the outer skin of the body, for which the students used carbon fiber. Before construction, all of the components and finally the whole vehicle were optimized by computer in a virtual wind tunnel.
The end result was a three-wheeled vehicle that weighs just 60kg (132lb). However, Prof. Jürgen Walter from the faculty of Mechanical Engineering and Mechatronics and head of the project is confident this can be reduced to 40kg (88lb) through further optimization.
“With other vehicle types you have a weight ratio between driver and vehicle of 1:10/1:15. We’re aiming for a ratio of 1:2 through further development of the E-Quickie,” said Walter.
Even though the vehicle’s motor produces only 2kW (about 2.7 horsepower), its light weight means it is still able to reach a speed of 50km/h (31mph). And even though the vehicle draws its power from the track, it still has batteries onboard. However, these serve only as a buffer and are therefore much smaller than those found in other electric cars, which draw their energy from batteries exclusively.
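A rough drag-and-rolling-resistance estimate suggests why 2kW is comfortably enough for 50km/h in such a light vehicle. The coefficients below are assumed typical values for a small, streamlined three-wheeler, not figures published by the HsKA team.

# Rough cruise-power estimate at 50 km/h; all coefficients are assumptions.
m = 60 + 75      # vehicle (60 kg) plus an assumed 75 kg driver, kg
v = 50 / 3.6     # 50 km/h in m/s
c_rr = 0.006     # assumed rolling-resistance coefficient
cd_a = 0.3       # assumed drag area Cd*A, m^2
rho = 1.2        # air density, kg/m^3
g = 9.81         # gravitational acceleration, m/s^2

rolling = c_rr * m * g
aero = 0.5 * rho * cd_a * v ** 2
power_w = (rolling + aero) * v
print(f"Estimated cruise power at 50 km/h: ~{power_w:.0f} W")   # ~0.6 kW, well under the 2 kW available
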
“The aim was not only to show how quickly you can move around with the E-Quickie, but most of all how energy efficient the car is,” explains Walter. “We went to the start with half-filled batteries and returned with full ones.” What, then, are the batteries for in this system of energy transfer? As soon as the car leaves the electrical conductor tracks, the power supply to the motor is interrupted. “That’s when the small accumulators on board the E-Quickie step in as an energy buffer,” explains Walter, “for example when it’s driven into the garage.”
The team has already achieved success: On May 19-20 this year, the students took part in the Karlsruhe E-Meile, completing 40 laps on the 222-meter (728-ft) conductor track. The team plans to use the test track at the HsKA campus to continue optimizing the vehicle for reduced energy consumption and weight.

Weird Solar Device of the Day: Solight Concept for Indoor Plants

A new concept design by Lee Ju Won is the ultimate middle man. It sticks to a window to harness the power of the sun to run LEDs that provide light to potted plants that sit under it. Um, what's wrong with this picture?


Yanko writes, "The urban landscape makes it difficult for us humans to experience the bliss of natural sunlight at home, thanks to the towering high-rises. So how can we expect our houseplants to flourish or pamper them with the natural goodness of sun? Unless of course, we get sneaky and trap some solar power in the "Solight" and then hood it over the plant; trick it into thinking it's getting some natural sunshine! Clever idea and no one will know how you manage to keep those plants happy...because you can stash the Solight away when not required!"
Wait.... If you can get enough sunlight from a window to charge a battery, then the plant you place in that window can get enough sunlight to grow. Could you not just skip this step and place the plant in the window instead?
Or say the light coming in is too weak so you'd want the extra light from the LEDs... But then the light would also be so weak that it'd take days to get enough charge to run grow lights for a few hours.
Something about this design just doesn't add up. With the amount of embodied energy in a product like this, anyone with a green thumb and a green mind would be better off just setting their plants on a table under the window, or crafting a window box on the sill.
The Solight is a good example of how solar power can be put in some very odd places.

Sunday, September 12, 2010

Biodiesel from Sewage Sludge Within Pennies a Gallon of Being Competitive

Existing technology can produce biodiesel fuel from municipal sewage sludge that is within a few cents a gallon of being competitive with conventional diesel refined from petroleum, according to an article in ACS' Energy & Fuels. Sludge is the solid material left behind from the treatment of sewage at wastewater treatment plants.
David M. Kargbo points out in the article that demand for biodiesel has led to the search for cost-effective biodiesel feedstocks, or raw materials. Soybeans, sunflower seeds and other food crops have been used as raw materials but are expensive. Sewage sludge is an attractive alternative feedstock -- the United States alone produces about seven million tons of it each year. Sludge is a good source of raw materials for biodiesel. To boost biodiesel production, sewage treatment plants could use microorganisms that produce higher amounts of oil, Kargbo says. That step alone could increase biodiesel production to the 10 billion gallon mark, which is more than triple the nation's current biodiesel production capacity, the report indicates.
The report, however, cautions that to realize these commercial opportunities, huge challenges still exist, including collecting the sludge, separating the biodiesel from other materials, maintaining biodiesel quality, preventing soap formation during production, and navigating regulatory concerns.
With the challenges addressed, "Biodiesel production from sludge could be very profitable in the long run," the report states. "Currently the estimated cost of production is $3.11 per gallon of biodiesel. To be competitive, this cost should be reduced to levels that are at or below [recent] petro diesel costs of $3.00 per gallon."
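Put another way, the gap to close is about 11 cents a gallon, or under four percent of the estimated production cost.

# How far sludge biodiesel is from the quoted petro-diesel benchmark.
sludge_cost = 3.11   # estimated production cost, $/gallon
petro_cost = 3.00    # petro-diesel benchmark, $/gallon

gap = sludge_cost - petro_cost
print(f"Gap: ${gap:.2f}/gallon ({gap / sludge_cost:.1%} of the production cost)")   # $0.11, ~3.5%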