Saturday, September 25, 2010

Green Machine: Wind farms make like a fish


Green machine is our weekly column on the latest advances in environmental technologies

Can the flicking tails of schooling fish help squeeze more power out of a wind farm? A group of aeronautical engineers seem to think so.
Inspired by the turbulence created by fish schools, they are now testing whether it's possible to position wind turbines so that they help each other – and so boost a farm's energy output.
Wind farm turbines tend to fall into two types. Three-bladed, horizontal-axis wind turbines (HAWTs) are more common – the world's biggest HAWT farm opened off the coast of Kent, UK, this week. However, a new breed of vertical-axis wind turbine (VAWT) is on the up and up.
VAWTs feature a vertical shaft around which airstream-intercepting wings swirl; the "lift" generated by the wind yanks the turbine around tangentially.
Their advantage? They can harvest airflow from all directions, plus are cheaper to maintain than standard turbines because their generators and gearboxes are on or near the ground.
One downside is that an individual VAWT is less efficient than a typical three-bladed HAWT, says aeronautical engineer John Dabiri at Caltech in Pasadena.
However, HAWTs are not perfect. They have to be spaced far apart – up to 10 times their rotor diameter – to prevent one turbine's blades from slowing the wind too much for its neighbours. That means building them offshore or where there is plenty of land.
VAWTs churn up less of the surrounding air. So Dabiri and colleagues wanted to find out whether packing them together as tightly as possible could rival the overall efficiency of a typical wind farm.

Thought of school

As he worked through the calculations, inspiration struck Dabiri: "It occurred to me that the equations I was writing were the same as I had previously seen in models of the vortices in a school of fish. From there it was just connecting the dots, conceptually speaking," he says.
One reason fish swim en masse is that it allows them to travel two to six times further than when swimming solo. In 1973, Daniel Weihs of the University of Cambridge (Nature, DOI: 10.1038/241290a0) suggested that the vortices shed by the tails of fish in a school set the surrounding water moving faster, so that neighbouring fish get a speed boost.
Could turbines in air help each other too? By modifying the hydrodynamic fish equations for aerodynamics and then feeding them into a fluid dynamics model, Dabiri's team was able to work out optimum turbine placements.
A VAWT farm positioned in this way could generate 10 times the power of a HAWT farm of the same area. "We're proposing that VAWTs can be a more effective solution even for large-scale utility power generation," he says.
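To get a feel for the land-area argument, here is a rough back-of-the-envelope comparison. The rotor sizes, spacings and power ratings below are illustrative assumptions made for this sketch, not figures from Dabiri's study.
```python
# Rough comparison of power per unit of land area (illustrative numbers only).

def power_density(rated_power_w, spacing_m):
    """Rated power divided by the square of land each turbine claims."""
    return rated_power_w / spacing_m**2   # watts per square metre

# Large HAWT: ~100 m rotor, spaced ~10 rotor diameters apart.
hawt = power_density(rated_power_w=2.5e6, spacing_m=10 * 100.0)

# Small VAWT: ~1.2 m rotor, packed roughly 6 rotor diameters apart.
vawt = power_density(rated_power_w=1.2e3, spacing_m=6 * 1.2)

print(f"HAWT farm: ~{hawt:.1f} W per square metre of land")
print(f"VAWT farm: ~{vawt:.1f} W per square metre of land")
print(f"ratio: ~{vawt / hawt:.0f}x")
```
With these made-up but plausible numbers, the densely packed VAWT array comes out roughly an order of magnitude ahead per unit of land, which is the scale of gain the Caltech model predicts.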

Class test

Now the team are putting their idea to the test. This summer, the Caltech researchers operated six vertical-axis turbines, each around 10 metres tall and mounted on a mobile platform, at a site north of Los Angeles.
The platforms allowed the team to move the turbines to test the most efficient wind farm layouts. They compared their results to those from a HAWT farm on a similar plot of land.
For now, Dabiri is keeping the results close to his chest – saying only that they are "promising" – but he has been encouraged enough to press ahead with patent applications and to seek further tests. "The next step will hopefully be a larger-scale field demonstration," he says.
Robert Blake of the University of British Columbia, Canada, who was guest editor of the journal issue in which the work appeared, says the study is "innovative" and "illustrates that biological systems can provide good models for engineering."
Thorsteinn Sigfusson, director-general of the Innovation Center Iceland in Reykjavik, adds: "In countries like my own where the wind is unsteady and changes direction quickly [VAWTs] could be a major contribution. I look forward to hearing how their larger scale tests go."

Insight Into the Impacts of Too Much Communication


A visual representation of the mathematics outlined in the researchers' paper.


Individuals within a networked system coordinate their activities by communicating to each other information such as their position, speed, or intention. At first glance, it seems that more of this communication will increase the harmony and efficiency of the network. However, scientists at Rensselaer Polytechnic Institute have found that this is only true if the communication and its subsequent action are immediate.
Using statistical physics and network science, the researchers were able to find something very fundamental about synchronization and coordination: if there are sustained delays in communication between just two or three parts of a system, performance of the entire system will eventually collapse. The findings apply to any network system where individuals interact with each other to collectively create a better outcome. This ranges from a flock of birds suddenly dodging to the right in one unified movement to avoid a predator to balancing load in large-scale computer networks to the spread of a rumor throughout an online social network.
The findings were published last month in Physical Review Letters.
Previous studies by the researchers have revealed that the minute interactions between neighboring individuals, referred to as nodes, are the foundation for overall network performance. The fast, accurate, and balanced movement of information between neighboring nodes is what prevents the birds from scattering and allows a story to accurately spread on the Web.
But, as is frequently the case in real-world scenarios, what happens when the information from your neighbor is not up to date? What occurs when there are delays in the transmission or processing of the information between neighbors? The researchers utilized stochastic differential equations, a type of mathematical equation used to model the time evolution of complex systems with random variables, to determine what happens when delays are introduced into the system.
"When there are no delays, the more you communicate with your neighbor, the better global performance becomes," said corresponding author for the paper and Associate Professor of Physics, Applied Physics, and Astronomy Gyorgy Korniss. "If there are delays, for a while performance will increase, but even if you work harder to better communicate with your neighbors, eventually performance will decrease until it reaches zero.
"Understanding the impact of delays can enable network operators to know when less communication effort can actually be more efficient for overall performance."
Their equations show that the larger the delay between nodes, the faster the overall coordination of the system will deteriorate. The work also reveals that, even with delays, there is a window of time where increasing communication will improve performance.
But, after a point, you also need to know when to "shut up," Korniss explained. After a certain period of poor communication, he said, no matter how fast or accurate you attempt to make your future communication, all communication is counterproductive.
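The collapse Korniss describes can be reproduced with a toy model. The sketch below is not the authors' code -- the ring network, noise level and parameter values are assumptions -- but it integrates a noisy, linearly coupled system in which each node reacts to delayed information from its neighbours, and it tracks the spread of node states around the network average. With no delay, stronger coupling always tightens the spread; with a fixed delay, pushing the coupling past a threshold makes the spread blow up.
```python
# Toy model of coordination with communication delays (illustrative only).
# Each node relaxes toward its two ring neighbours, but sees their states
# with a fixed delay. The "spread" (variance of states around the network
# mean) measures how well the system stays coordinated.
import numpy as np

def mean_spread(coupling, delay_steps, n_nodes=50, dt=0.01,
                n_steps=4000, noise=1.0, seed=0):
    rng = np.random.default_rng(seed)
    history = np.zeros((delay_steps + 1, n_nodes))  # stores x(t - tau) ... x(t)
    spreads = []
    for _ in range(n_steps):
        x, x_delayed = history[-1], history[0]
        # Coupling to left and right neighbours, evaluated on delayed states.
        drift = -coupling * (2 * x_delayed - np.roll(x_delayed, 1)
                             - np.roll(x_delayed, -1))
        x_new = x + drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_nodes)
        history = np.vstack([history[1:], x_new])
        spreads.append(np.var(x_new))
    return float(np.mean(spreads[n_steps // 2:]))

for delay_steps in (0, 50):                  # delay of 0.0 or 0.5 time units (dt = 0.01)
    for coupling in (0.5, 1.0, 2.0):
        w = mean_spread(coupling, delay_steps)
        print(f"delay={delay_steps * 0.01:.2f}  coupling={coupling:.1f}  spread={w:.3g}")
```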
"Our conclusion that coordination can sometimes be restored by decreasing node connectivity offers an important perspective on today's world with its abundance of connectivity in social and technological systems, raising the question of their stability," said study co-author Boleslaw Szymanski, Rensselaer's Claire & Roland Schmitt Distinguished Professor of Computer Science. Szymanski also serves as director of the Social Cognitive Network Academic Research Center (SCNARC) at Rensselaer.
The work, which is part of SCNARC, could be extended to real-life cases such as a social or economic network. An example could be predicting the response of global markets to the trading of specific stocks, according to the researchers. The equations could someday help network operators to get the biggest payoff from each communication and develop an even stronger understanding of the power of the individual in mass communication.
First author for the paper was physics graduate student David Hunt. The research was funded by the Defense Threat Reduction Agency (DTRA) and by the Army Research Laboratory (ARL) through SCNARC, part of the Network Science Collaborative Technology Alliance (NS-CTA).

New Nanomesh Material Created: Silicon-Based Film May Lead to Efficient Thermoelectric Devices


Top: A scanning electron microscope image shows the grid of tiny holes in the nanomesh material. Bottom: In this drawing, each sphere represents a silicon atom in the nanomesh. The colorful bands show the temperature differences on the material, with red being hotter and blue being cooler.
Computers, light bulbs, and even people generate heat -- energy that ends up being wasted. With a thermoelectric device, which converts heat to electricity and vice versa, you can harness that otherwise wasted energy. Thermoelectric devices are touted for use in new and efficient refrigerators, and other cooling or heating machines. But present-day designs are not efficient enough for widespread commercial use or are made from rare materials that are expensive and harmful to the environment.
Researchers at the California Institute of Technology (Caltech) have developed a new type of material -- made out of silicon, the second most abundant element in Earth's crust -- that could lead to more efficient thermoelectric devices. The material -- a type of nanomesh -- is composed of a thin film with a grid-like arrangement of tiny holes. This unique design makes it difficult for heat to travel through the material, lowering its thermal conductivity to near silicon's theoretical limit. At the same time, the design allows electricity to flow as well as it does in unmodified silicon.
"In terms of controlling thermal conductivity, these are pretty sophisticated devices," says James Heath, the Elizabeth W. Gilloon Professor and professor of chemistry at Caltech, who led the work. A paper about the research will be published in the October issue of the journal Nature Nanotechnology.
A major strategy for making thermoelectric materials energy efficient is to lower the thermal conductivity without affecting the electrical conductivity, which is how well electricity can travel through the substance. Heath and his colleagues had previously accomplished this using silicon nanowires -- wires of silicon that are 10 to 100 times narrower than those currently used in computer microchips. The nanowires work by impeding heat while allowing electrons to flow freely.
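The trade-off Heath describes is conventionally captured by the dimensionless thermoelectric figure of merit ZT: the square of the Seebeck coefficient times the electrical conductivity times the absolute temperature, divided by the thermal conductivity. The short sketch below uses placeholder values -- not measurements from the Caltech paper -- to show why cutting the thermal conductivity while holding the electrical properties steady pays off directly.
```python
# Thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa.
# Placeholder values for illustration -- not data from the Caltech work.

def zt(seebeck, elec_conductivity, thermal_conductivity, temperature=300.0):
    """S in V/K, sigma in S/m, kappa in W/(m*K), T in kelvin."""
    return seebeck**2 * elec_conductivity * temperature / thermal_conductivity

S = 200e-6       # V/K, placeholder Seebeck coefficient
sigma = 1.0e5    # S/m, placeholder electrical conductivity
for kappa in (150.0, 15.0, 1.5):   # W/(m*K): bulk-silicon-like, reduced, strongly reduced
    print(f"kappa = {kappa:6.1f} W/(m*K)  ->  ZT = {zt(S, sigma, kappa):.3f}")
```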
In any material, heat travels via phonons -- quantized packets of vibration that are akin to photons, which are themselves quantized packets of light waves. As phonons zip along the material, they deliver heat from one point to another. Nanowires, because of their tiny sizes, have a lot of surface area relative to their volume. And since phonons scatter off surfaces and interfaces, it is harder for them to make it through a nanowire without bouncing astray. As a result, a nanowire resists heat flow but remains electrically conductive.
But creating narrower and narrower nanowires is effective only up to a point. If the nanowire is too small, it will have so much relative surface area that even electrons will scatter, causing the electrical conductivity to plummet and negating the thermoelectric benefits of phonon scattering.
To get around this problem, the Caltech team built a nanomesh material from a 22-nanometer-thick sheet of silicon. (One nanometer is a billionth of a meter.) The silicon sheet is converted into a mesh -- similar to a tiny window screen -- with a highly regular array of 11- or 16-nanometer-wide holes that are spaced just 34 nanometers apart.
Instead of scattering the phonons traveling through it, the nanomesh changes the way those phonons behave, essentially slowing them down. The properties of a particular material determine how fast phonons can go, and it turns out that -- in silicon at least -- the mesh structure lowers this speed limit. As far as the phonons are concerned, the nanomesh is no longer silicon at all. "The nanomesh no longer behaves in ways typical of silicon," says Slobodan Mitrovic, a postdoctoral scholar in chemistry at Caltech. Mitrovic and Caltech graduate student Jen-Kan Yu are the first authors on the Nature Nanotechnology paper.
When the researchers compared the nanomesh to the nanowires, they found that -- despite having a much higher surface-area-to-volume ratio -- the nanowires were still twice as thermally conductive as the nanomesh. The researchers suggest that the decrease in thermal conductivity seen in the nanomesh is indeed caused by the slowing down of phonons, and not by phonons scattering off the mesh's surface. The team also compared the nanomesh to a thin film and to a grid-like sheet of silicon with features roughly 100 times larger than the nanomesh; both the film and the grid had thermal conductivities about 10 times higher than that of the nanomesh.
Although the electrical conductivity of the nanomesh remained comparable to that of regular, bulk silicon, its thermal conductivity was reduced to near the theoretical lower limit for silicon. And the researchers say they can lower it even further. "Now that we've shown that we can slow the phonons down," Heath says, "who's to say we can't slow them down a lot more?"
The researchers are now experimenting with different materials and arrangements of holes in order to optimize their design. "One day, we might be able to engineer a material where you not only can slow the phonons down, but you can exclude the phonons that carry heat altogether," Mitrovic says. "That would be the ultimate goal."
The other authors on the paper are Caltech graduate students Douglas Tham and Joseph Varghese. The research was funded by the Department of Energy, the Intel Foundation, a Scholar Award from the King Abdullah University of Science and Technology, and the National Science Foundation.

Pair of Aluminum Atomic Clocks Reveal Einstein's Relativity at a Personal Scale


NIST physicists compared a pair of the world's best atomic clocks to demonstrate that you age faster when you stand just a couple of steps higher on a staircase.
Scientists have known for decades that time passes faster at higher elevations -- a curious aspect of Einstein's theories of relativity that previously has been measured by comparing clocks on the earth's surface and a high-flying rocket.
Now, physicists at the National Institute of Standards and Technology (NIST) have measured this effect at a more down-to-earth scale of 33 centimeters, or about 1 foot, demonstrating, for instance, that you age faster when you stand a couple of steps higher on a staircase.
Described in the Sept. 24 issue of Science, the difference is much too small for humans to perceive directly -- adding up to approximately 90 billionths of a second over a 79-year lifetime -- but may provide practical applications in geophysics and other fields.
Similarly, the NIST researchers observed another aspect of relativity -- that time passes more slowly when you move faster -- at speeds comparable to a car travelling about 20 miles per hour, a more comprehensible scale than previous measurements made using jet aircraft.
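Both effects follow from textbook relativity, and the quoted figures are easy to check: the fractional frequency shift from height is roughly g*h/c^2, and the fractional slowing from motion is roughly v^2/(2*c^2). A short worked example (the 33-centimetre, 79-year and 20-mile-per-hour figures come from the article; the constants are standard):
```python
# Back-of-the-envelope check of the relativistic shifts quoted in the article.
g = 9.81            # m/s^2, surface gravity
c = 2.998e8         # m/s, speed of light

# Gravitational time dilation: fractional rate difference ~ g*h/c^2
h = 0.33                                  # m, height difference between the clocks
grav_shift = g * h / c**2
lifetime_s = 79 * 365.25 * 24 * 3600      # 79 years in seconds
print(f"fractional gravitational shift: {grav_shift:.2e}")
print(f"accumulated over 79 years: {grav_shift * lifetime_s * 1e9:.0f} ns")

# Time dilation from motion: fractional slowing ~ v^2 / (2 c^2)
v = 20 * 0.44704                          # 20 mph in m/s
motion_shift = v**2 / (2 * c**2)
print(f"fractional shift at 20 mph: {motion_shift:.1e}")
```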
NIST scientists performed the new "time dilation" experiments by comparing operations of a pair of the world's best experimental atomic clocks. The nearly identical clocks are each based on the "ticking" of a single aluminum ion (electrically charged atom) as it vibrates between two energy levels over a million billion times per second. One clock keeps time to within 1 second in about 3.7 billion years and the other is close behind in performance. The two clocks are located in different laboratories at NIST and connected by a 75-meter-long optical fiber.
NIST's aluminum clocks -- also called "quantum logic clocks" because they borrow logical decision-making techniques from experimental quantum computing -- are precise and stable enough to reveal slight differences that could not be seen until now. The clocks operate by shining laser light on the ions at optical frequencies, which are higher than the microwave frequencies used in today's standard atomic clocks based on the cesium atom.
Optical clocks could someday lead to time standards 100 times more accurate than today's standard clocks.
The aluminum clocks can detect small relativity-based effects because of their extreme precision and high "Q factor" -- a quantity that reflects how reliably the ion absorbs and retains optical energy in changing from one energy level to another -- says NIST postdoctoral researcher James Chin-Wen Chou, first author of the paper.
"We have observed the highest Q factor in atomic physics," Chou says. "You can think about it as how long a tuning fork would vibrate before it loses the energy stored in the resonating structure. We have the ion oscillating in sync with the laser frequency for about 400 thousand billion cycles."
The NIST experiments focused on two scenarios predicted by Einstein's theories of relativity. First, when two clocks are subjected to unequal gravitational forces due to their different elevations above the surface of the Earth, the higher clock -- experiencing a smaller gravitational force -- runs faster. Second, when an observer is moving, a stationary clock's tick appears to last longer, so the clock appears to run slow. Scientists refer to this as the "twin paradox," in which a twin sibling who travels on a fast-moving rocket ship would return home younger than the other twin. The crucial factor is the acceleration (speeding up and slowing down) of the travelling twin in making the round-trip journey.
NIST scientists observed these effects by making specific changes in one of the two aluminum clocks and measuring the resulting differences in the two ions' relative ticking rates, or frequencies.
In one set of experiments, scientists raised one of the clocks by jacking up the laser table to a height one-third of a meter (about a foot) above the second clock. Sure enough, the higher clock ran at a slightly faster rate than the lower clock, exactly as predicted.
The second set of experiments examined the effects of altering the physical motion of the ion in one clock. (The ions are almost completely motionless during normal clock operations.) NIST scientists tweaked the one ion so that it gyrated back and forth at speeds equivalent to several meters per second. That clock ticked at a slightly slower rate than the second clock, as predicted by relativity. The moving ion acts like the traveling twin in the twin paradox.
Such comparisons of super-precise clocks eventually may be useful in geodesy, the science of measuring the Earth and its gravitational field, with applications in geophysics and hydrology, and possibly in space-based tests of fundamental physics theories, suggests physicist Till Rosenband, leader of NIST's aluminum ion clock team.
NIST scientists hope to improve the precision of the aluminum clocks even further, as much as 10-fold, through changes in ion trap geometry and better control of ion motion and environmental interference. The aim is to measure differences in timekeeping well enough to measure heights to an accuracy of 1 centimeter, a performance level suitable for making geodetic measurements. The paper suggests that optical clocks could be linked to form a network of "inland tidal gauges" to measure the distance from the earth's surface to the geoid (the surface of the earth's gravity field that matches the global mean sea level). Such a network could be updated far more frequently than current techniques allow.

Computer Simulations of Real Earthquakes Made Available to Worldwide Network


Researchers have created videos of earthquakes that incorporate both real data and computer simulations known as synthetic seismograms. These simulations fill the gaps between the actual ground motion recorded at specific locations in the region, providing a more complete view of the earthquake. This still image is from a video that illustrates the January 2010 earthquake that devastated Haiti

A Princeton University-led research team has developed the capability to produce realistic movies of earthquakes based on complex computer simulations that can be made available worldwide within hours of a disastrous upheaval.
The videos show waves of ground motion spreading out from an epicenter. In making them widely available, the team of computational seismologists and computer scientists aims to aid researchers working to improve understanding of earthquakes and develop better maps of the Earth's interior.
"In our view, this could truly change seismic science," said Princeton's Jeroen Tromp, a professor of geosciences and applied and computational mathematics, who led the effort. "The better we understand what happens during earthquakes, the better prepared we can be. In addition, advances in understanding seismic waves can aid basic science efforts, helping us understand the underlying physics at work in the Earth's interior. These visualizations, we believe, will add greatly to the research effort.''
In a scientific paper describing the system, which appeared online Sept. 16 and will be published in the October 2010 Geophysical Journal International, the team describes how it creates the videos. The movies will be made available for free to scientists and members of the public and news organizations interested in featuring such images on television and the Internet. The easily downloadable videos can be viewed at: global.shakemovie.princeton.edu. They tell the story in a language that is easy to understand, said Tromp, who also is the director of the Princeton Institute for Computational Science and Engineering (PICSciE).
When an earthquake takes place, data from seismograms measuring ground motion are collected by a worldwide network of more than 1,800 seismographic stations operated by members of the international Federation of Digital Seismograph Networks. The earthquake's location, depth and intensity also are determined. The ShakeMovie system at Princeton will now collect these recordings automatically using the Internet.
The scientists will input the recorded data into a computer model that creates a "virtual earthquake." The videos will incorporate both real data and computer simulations known as synthetic seismograms. These simulations fill the gaps between the actual ground motion recorded at specific locations in the region, providing a more complete view of the earthquake.
The animations rely on software that produces numerical simulations of seismic wave propagation in sedimentary basins. The software computes the motion of the Earth in three dimensions based on the actual earthquake recordings, as well as what is known about the subsurface structure of the region. The shape of underground geological structures in the area -- details that are not themselves recorded on seismograms -- is key, Tromp said, as these structures can greatly affect wave motion by bending, speeding, slowing or simply reflecting energy. The simulations are created on a parallel processing computer cluster built and maintained by PICSciE and on a computer cluster located at the San Diego Supercomputer Center.
After the three-dimensional simulations are computed, the software program plugs in data capturing surface motion, including displacement, velocity and acceleration, and maps it onto the topography of the region around the earthquake. The movies then are automatically published via the ShakeMovie portal. An e-mail also is sent to subscribers, including researchers, news media and the public.
The simulations will be made available to scientists through the data management center of the Incorporated Research Institutions for Seismology (IRIS) in Seattle. The organization distributes global scientific data to the seismological community via the Internet. Scientists can visit the IRIS website and download information. Due to the research team's work, they now will be able to compare seismograms directly with synthetic versions.
Advanced computing power made the synthetic seismograms possible, according to Dennis McRitchie, another author on the paper and a lead high-performance computing analyst for Princeton's Office of Information Technology. "This is computationally intensive -- it takes five hours to produce a 100-minute simulation," McRitchie said. The effort to numerically solve the differential equations that govern how the waves propagate through these complicated earth models requires 384 computers operating in parallel to analyze and process the numbers.
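To give a flavour of what "numerically solving the differential equations" of wave propagation involves, here is a deliberately tiny one-dimensional finite-difference sketch. The velocity model, source and station positions are invented for illustration; this is not the ShakeMovie code, which solves the full three-dimensional problem on a parallel cluster.
```python
# Toy 1-D seismic wave simulation (illustration only). A pulse leaves a
# "source", crosses a slower sedimentary basin, and is recorded at two
# synthetic stations, showing how structure delays and reshapes the waves.
import numpy as np

nx, dx, dt, n_steps = 600, 100.0, 0.01, 1600     # grid points, spacing (m), step (s), steps
speed = np.full(nx, 5000.0)                      # hard rock: 5 km/s
speed[350:450] = 1500.0                          # soft basin between 35 and 45 km: 1.5 km/s

u = np.exp(-0.5 * ((np.arange(nx) - 50) / 5.0) ** 2)   # initial pulse near x = 5 km
u_prev = u.copy()                                       # zero initial velocity
stations = {"station A (25 km)": 250, "station B (55 km)": 550}
records = {name: [] for name in stations}

for step in range(n_steps):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]            # discrete second derivative
    u_next = 2 * u - u_prev + (speed * dt / dx) ** 2 * lap
    u_next[0] = u_next[-1] = 0.0                        # simple fixed boundaries
    u_prev, u = u, u_next
    for name, idx in stations.items():
        records[name].append(u[idx])

for name, rec in records.items():
    rec = np.abs(np.array(rec))
    arrival = np.argmax(rec > 0.05) * dt                # first time the pulse is felt
    print(f"{name}: first arrival ~{arrival:4.1f} s, peak motion {rec.max():.2f}")
```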
When an earthquake occurs, seismic waves are generated that propagate away from the fault rupture and course along the Earth's surface. The videos show the up-and-down motion of the waves in red (up) and blue (down). Strong red waves indicate rapid upward motion. Strong blue waves indicate the Earth's surface is moving quickly downward. The simulation shows that the waves are of uneven strength in different areas, depending on the quality of the soil and the orientation of the earthquake fault. When the waves pass through soft, sedimentary soils, they slow down and gain strength. Waves speed up through hard rock, lessening the impact on surface areas above. A clock in the video shows the time since the earthquake occurred.
The ShakeMovie portal showing earthquakes around the world is similar to one maintained at the California Institute of Technology that routinely does simulations of seismic events in the Southern California region.
Earthquake movies will be available for download about 1.5 hours after the occurrence of a quake of magnitude 5.5 or greater.
In addition to Tromp and McRitchie, other Princeton scientists on the paper include Ebru Bozdag and Daniel Peter, postdoctoral fellows, and Hejun Zhu, a graduate student, all in the Department of Geosciences. The development of the simulations also involved staff at PICSciE as well as Robert Knight, a lead high-performance computing analyst and others in Princeton's Office of Information Technology.
Other authors on the paper include: Dimitri Komatitsch of the Université de Pau et des Pays de l'Adour in France; Vala Hjorleifsdottir of the Lamont-Doherty Earth Observatory of Columbia University; Qinya Liu of the University of Toronto; Paul Friberg of Instrumental Software Technologies in New York; and Chad Trabant and Alex Hutko of IRIS.
The research was funded by the National Science Foundation.

Mimicking Nature, Water-Based 'Artificial Leaf' Produces Electricity


Just as real leaves do, water-gel-based solar devices can act like solar cells to produce electricity, new research shows

A team led by a North Carolina State University researcher has shown that water-gel-based solar devices -- "artificial leaves" -- can act like solar cells to produce electricity. The findings prove the concept for making solar cells that more closely mimic nature. They also have the potential to be less expensive and more environmentally friendly than the current standard-bearer: silicon-based solar cells.
The bendable devices are composed of water-based gel infused with light-sensitive molecules -- the researchers used plant chlorophyll in one of the experiments -- coupled with electrodes coated by carbon materials, such as carbon nanotubes or graphite. The light-sensitive molecules get "excited" by the sun's rays to produce electricity, similar to plant molecules that get excited to synthesize sugars in order to grow, says NC State's Dr. Orlin Velev, Invista Professor of Chemical and Biomolecular Engineering and the lead author of a paper published online in the Journal of Materials Chemistry describing this new generation of solar cells.
Velev says that the research team hopes to "learn how to mimic the materials by which nature harnesses solar energy." Although synthetic light-sensitive molecules can be used, Velev says naturally derived products -- like chlorophyll -- are also easily integrated in these devices because of their water-gel matrix.
Now that they've proven the concept, Velev says the researchers will work to fine-tune the water-based photovoltaic devices, making them even more like real leaves.
"The next step is to mimic the self-regenerating mechanisms found in plants," Velev says. "The other challenge is to change the water-based gel and light-sensitive molecules to improve the efficiency of the solar cells."
Velev even imagines a future where roofs could be covered with soft sheets of similar electricity-generating artificial-leaf solar cells.
"We do not want to overpromise at this stage, as the devices are still of relatively low efficiency and there is a long way to go before this can become a practical technology," Velev says. "However, we believe that the concept of biologically inspired 'soft' devices for generating electricity may in the future provide an alternative for the present-day solid-state technologies."
Researchers from the Air Force Research Laboratory and Chung-Ang University in Korea co-authored the study. The study was funded by the Air Force Research Laboratory and the U.S. Department of Energy. The work is part of NC State's universitywide nanotechnology program, Nano@NC State.
NC State's Department of Chemical and Biomolecular Engineering is part of the university's College of Engineering.

Thursday, September 23, 2010

Titanium foams replace injured bones.



Flexible yet rigid like a human bone, and immediately capable of bearing loads: a new kind of implant, made of titanium foam, resembles the inside of a bone in its structural configuration. Not only does this make it less stiff than conventional solid implants; it also promotes ingrowth into the surrounding bone.
The greater one's responsibilities, the more a person grows. The same principle applies to human bone: the greater the forces it bears, the thicker the tissue it develops, while parts of the skeleton subject to lesser strains tend to have lower bone density. Mechanical stress stimulates the growth of the bone matrix. Medical professionals will soon be able to exploit this effect more efficiently, so that implants bond to their patients' bones on a more sustained and stable basis. To do so, however, the bone replacement must be shaped in a manner that fosters ingrowth -- featuring pores and channels into which blood vessels and bone cells can grow unimpeded. Among implant materials, the titanium alloy Ti6Al4V is the material of choice: it is durable, stable, resilient, and well tolerated by the body. But it is somewhat difficult to manufacture: at high temperatures titanium reacts with oxygen, nitrogen and carbon, which makes it brittle and prone to fracture and also limits the range of viable production processes.
There are still no established processes for producing complex internal structures, which is why solid titanium implants are primarily used for defects in load-bearing bones. Admittedly, many of these have structured surfaces that give bone cells a firm hold, but the resulting bond remains delicate. Moreover, the properties of solid implants differ from those of the human skeleton: they are substantially stiffer and therefore carry most of the load. "The adjacent bone bears hardly any load any more, and even deteriorates in the worst case. Then the implant becomes loose and has to be replaced," explains Dr.-Ing. Peter Quadbeck of the Fraunhofer Institute for Manufacturing Technology and Advanced Materials IFAM in Dresden. Quadbeck coordinates the "TiFoam" project, which has yielded a titanium-based material for a new generation of implants. The foam-like structure of the material resembles the spongiosa found inside bone.
The titanium foam is produced by a powder-metallurgical molding process that has already proven its worth in the industrial production of ceramic filters for aluminum casting. Open-cell polyurethane (PU) foams are saturated with a suspension of a binding agent and a fine titanium powder, which adheres to the cellular struts of the foam. The PU and binder are then driven off by heating, leaving a replica of the foam structure that is finally sintered. "The mechanical properties of titanium foams made this way closely approach those of the human bone," reports Quadbeck. "This applies foremost to the balance between extreme durability and minimal rigidity." The former is an important precondition for use in bone, which has to sustain the forces of both weight and motion; bone-like rigidity allows stress to be transmitted to the surrounding bone and, as new bone cells form, fosters healing around the implant. Consequently, load can and should be applied to the implant immediately after insertion.
In the "TiFoam" project, the research partners concentrated on demonstrating the viability of titanium foam for replacement of defective vertebral bodies. The foam is equally suitable for "repairing" other severely stressed bones. In addition to the materials scientists from the Fraunhofer institutes IFAM and IKTS -- the Institute for Ceramic Technologies and Systems in Dresden -- physicians from the medical center at the Technical University of Dresden and from several companies were involved in developing the titanium foam. Project partner InnoTERE already announced that it would soon develop and manufacture "TiFoam"-based bone implants.

New Luggage Inspection Methods Identify Liquid Explosives


Liquid explosives are easy to produce. As a result, terrorists can use the chemicals for attacks -- on aircraft, for instance. In the future, new detection systems at airport security checkpoints will help track down these dangerous substances. Researchers are currently testing equipment.
To most air travelers, it is an annoying fact of life: the prohibition of liquids in carry-on luggage. Under aviation security regulations introduced in Europe in November 2006, passengers who wish to take liquids such as creams, toothpaste or sunscreen on board must do so in containers no larger than 100 ml (roughly 3.4 fluid oz.). The EU provisions came in response to attempted attacks by terrorist suspects using liquid explosives on trans-Atlantic flights in August 2006. Now, travelers have reason to hope that the prohibition will be lifted. On November 19, 2009, the EU Regulatory Committee of the Member States passed a proposal to this effect issued by the EU Commission. Under the terms of the proposal, the prohibition of liquids will be lifted in two phases. First, beginning April 29, 2011, passengers in transit will be permitted to take liquids along with them. Under the second phase, beginning April 29, 2013, the limit on quantities of liquids will be lifted altogether. The EU Commission intends to introduce legislation to this effect this August. In the future, security checkpoints will feature equipment that can reliably distinguish between liquid explosives and harmless substances such as cola, perfume or shampoo.
This is also the intention of the European Civil Aviation Conference (ECAC), which lays down standardized detection procedures and inspection routines for liquid explosives. The explosives tests are being carried out by the Fraunhofer Institute for Chemical Technology ICT in Pfinztal. The German Federal Ministry of the Interior has officially designated the institute as a German Testing Center. The researchers there are working in cooperation with the German Federal Police. "In our safety laboratory, we can carry out the experiments under all of the safety conditions we would find in the field," remarks Dr. Dirk Röseling, a researcher at ICT. "Either on their own or at the invitation of ECAC, the manufacturers bring their detection equipment to our lab, where they show us how to operate it and then leave. Then we begin with testing."
But how do these experiments work? In their partially remote-controlled experimental facilities, researchers at the safety laboratories first manufacture explosives according to specifications provided by the ECAC. Security services provide the organization with lists of substances to use in manufacturing explosives. Then, the detection equipment must automatically identify the liquid explosive -- as well as any harmless substances -- as such. For instance, the equipment must not identify shampoo as an explosive and set off a false alarm. Depending on the scenario involved, individualized testing methods and systems are required: If open containers need to be inspected, for example, then the sensors detect the vapors given off. If luggage screeners need to scan unopened bottles in a tub, on the other hand, then x-ray equipment is used. The experts forward the findings of their tests either directly to the manufacturers of the equipment, or to the German Federal Police, which in turn passes the results along to the ECAC. The ECAC then notifies the companies of whether or not their equipment is suitable for certification. "In the past, luggage screening has only identified metals and solid explosives. The screening equipment of the future will also identify liquid explosives. Initial tests at the Frankfurt Airport have already successfully been completed," Röseling summarizes.
The researcher and his team presented details of the test scenarios and methods at the Future Security conference in Berlin, September 7 to 9, 2010.

Martian Methane Lasts Less Than a Year


Top: Methane concentrations in autumn (first martian year observed). Peak emissions fall over Tharsis (home to the Solar System's largest volcano, Olympus Mons), the Arabia Terrae plains and the Elysium region, also the site of volcanoes. Bottom: True color map of Mars.
Methane in the atmosphere of Mars lasts less than a year, according to a study by Italian scientists. Sergio Fonti (Università del Salento) and Giuseppe Marzo (NASA Ames) have used observations from NASA's Mars Global Surveyor spacecraft to track the evolution of the gas over three martian years.
They are presenting their results at the European Planetary Science Congress 2010 in Rome.
"Only small amounts of methane are present in the martian atmosphere, coming from very localised sources. We've looked at changes in concentrations of the gas and found that there are seasonal and also annual variations. The source of the methane could be geological activity or it could be biological -- we can't tell at this point. However, it appears that the upper limit for methane lifetime is less than a year in the martian atmosphere," said Fonti.
Levels of methane are highest in autumn in the northern hemisphere, with localised peaks of 70 parts per billion, although methane can be detected across most of the planet at this time of year. There is a sharp decrease in winter, with only a faint band remaining between 40 and 50 degrees north. Concentrations start to build again in spring and rise more rapidly in summer, spreading across the planet.
"One of the interesting things that we've found is that in summer, although the general distribution pattern is much the same as in autumn, there are actually higher levels of methane in the southern hemisphere. This could be because of the natural circulation occurring in the atmosphere, but has to be confirmed by appropriate computer simulations," said Fonti.
There are three regions in the northern hemisphere where methane concentrations are systematically higher: Tharsis and Elysium, the two main volcano provinces, and Arabia Terrae, which has high levels of underground water ice. Levels are highest over Tharsis, where geological processes, including magmatism, hydrothermal and geothermal activity could be ongoing.
"It's evident that the highest concentrations are associated with the warmest seasons and locations where there are favourable geological -- and hence biological -- conditions such as geothermal activity and strong hydration. The higher energy available in summer could trigger the release of gases from geological processes or outbreaks of biological activity," said Fonti.
The mechanisms for removing methane from the atmosphere are also not clear. Photochemical processes would not break down the gas quickly enough to match the observations. However, wind-driven processes can add strong oxidisers to the atmosphere, such as the highly reactive salt perchlorate, which could soak up methane much more rapidly.
Martian years are nearly twice as long as Earth years. The team used observations from the Thermal Emission Spectrometer (TES) on Mars Global Surveyor between July 1999 and October 2004, which corresponds to three martian years. The team studied one of the characteristic spectral features of methane in nearly 3 million TES observations, averaging data together to eliminate noise.
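The payoff from averaging is the standard square-root law: uncorrelated noise in an average of N spectra shrinks roughly as 1/sqrt(N), so a weak but persistent absorption feature eventually emerges. A minimal sketch with made-up numbers (not TES data):
```python
# Why averaging millions of spectra helps: uncorrelated noise in the average
# falls as 1/sqrt(N), so a weak, fixed absorption dip eventually stands out.
# The depth and noise figures below are invented for illustration.
import math

dip_depth = 0.02           # assumed fractional depth of a weak spectral feature
noise_per_spectrum = 1.0   # assumed noise amplitude in a single observation

for n_spectra in (1, 100, 10_000, 3_000_000):
    residual_noise = noise_per_spectrum / math.sqrt(n_spectra)
    snr = dip_depth / residual_noise
    print(f"N = {n_spectra:>9,d}   residual noise ~ {residual_noise:.1e}   SNR ~ {snr:5.1f}")
```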
"Our study is the first time that data from an orbiting spectrometer has been used to monitor methane over an extended period. The huge TES dataset has allowed us to follow the methane cycle in the martian atmosphere with unprecedented accuracy and completeness. Our observations will be very useful in constraining the origins and significance of martian methane," said Fonti.
Methane was first detected in the martian atmosphere by ground based telescopes in 2003 and confirmed a year later by ESA's Mars Express spacecraft. Last year, observations using ground based telescopes showed the first evidence of a seasonal cycle.
The atmosphere on Mars consists of 95% carbon dioxide, 3% nitrogen, 1.6% argon, and contains traces of oxygen and water, as well as methane.

New Computer-Tomography Method Visualizes Nano-Structure of Bones


Schematic of the novel nano-tomography method that opens the door to computed tomography examinations of minute structures at nanometer resolutions.

A novel nano-tomography method developed by a team of researchers from the Technische Universitaet Muenchen, the Paul Scherrer Institute and ETH Zurich opens the door to computed tomography examinations of minute structures at nanometer resolutions. Three-dimensional detailed imaging of fragile bone structures becomes possible. The first nano-CT images were published in Nature on Sept. 23, 2010. This new technique will facilitate advances in both life sciences and materials sciences.
Osteoporosis, a medical condition in which bones become brittle and fragile from a loss of density, is among the most common diseases in aging bones: In Germany around a quarter of the population aged over 50 is affected. Patients' bone material shrinks rapidly, leading to a significantly increased risk of fracture. In clinical research to date, osteoporosis is diagnosed almost exclusively by establishing an overall reduction in bone density. This approach, however, gives little information about the associated, and equally important, local structure and bone density changes. Franz Pfeiffer, professor for Biomedical Physics at the Technische Universitaet Muenchen (TUM) and head of the research team, has resolved the dilemma: "With our newly developed nano-CT method it is now possible to visualize the bone structure and density changes at high resolutions and in 3D. This enables us to do research on structural changes related to osteoporosis on a nanoscale and thus develop better therapeutic approaches."
During development, Pfeiffer's team built on X-ray computed tomography (CT). The principle is well established -- CT scanners are used every day in hospitals and medical practices for the diagnostic screening of the human body. In the process the human body is X-rayed while a detector records from different angles how much radiation is being absorbed. In principle it is nothing more than taking multiple X-ray pictures from various directions. A number of such pictures are then used to generate digital 3D images of the body's interior using image processing.
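That reconstruction step -- turning many absorption profiles taken from different angles into a cross-sectional image -- can be illustrated with standard filtered back-projection. The sketch below uses scikit-image's test phantom and Radon-transform routines; it shows ordinary absorption CT, not the team's diffraction-based nano-CT.
```python
# Conventional CT in a nutshell: project a test object at many angles to get a
# sinogram, then recover the cross-section by filtered back-projection.
# This illustrates ordinary absorption CT, not the new nano-CT method.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)       # standard test cross-section
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=angles)             # simulated absorption profiles
reconstruction = iradon(sinogram, theta=angles)   # filtered back-projection

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"{len(angles)} projections, reconstruction RMS error: {rms_error:.4f}")
```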
The newly developed method measures not only the overall beam intensity absorbed by the object under examination at each angle, but also those parts of the X-ray beam that are deflected in different directions -- "diffracted" in the language of physics. Such a diffraction pattern is generated for every point in the sample. This supplies additional information about the exact nanostructure, as X-ray radiation is particularly sensitive to the tiniest of structural changes. "Because we have to take and process so many individual pictures with extreme precision, it was particularly important during the implementation of the method to use high-brilliance X-ray radiation and fast, low-noise pixel detectors -- both available at the Swiss Light Source (SLS)," says Oliver Bunk, who was responsible for the requisite experimental setup at the PSI synchrotron facilities in Switzerland.
The diffraction patterns are then processed using an algorithm developed by the team. TUM researcher Martin Dierolf, lead author of the Nature article, explains: "We developed an image reconstruction algorithm that generates a high-resolution, three-dimensional image of the sample using over one hundred thousand diffraction patterns. This algorithm takes into account not only classical X-ray absorption, but also the significantly more sensitive phase shift of the X-rays." A showcase example of the new technique was the examination of a 25-micrometer, superfine bone specimen of a laboratory mouse -- with surprisingly exact results. The so-called phase contrast CT pictures show even the smallest variations in the specimen's bone density with extremely high precision: Cross-sections of cavities where bone cells reside and their roughly 100 nanometer-fine interconnection network are clearly visible.
"Although the new nano-CT procedure does not achieve the spatial resolution currently available in electron microscopy, it can -- because of the high penetration of X-rays -- generate three-dimensional tomography images of bone samples," comments Roger Wepf, director of the Electron Microscopy Center of the ETH Zurich (EMEZ). "Furthermore, the new nano-CT procedure stands out with its high precision bone density measurement capacity, which is particularly important in bone research." This method will open the door to more precise studies on the early phase of osteoporosis, in particular, and evaluation of the therapeutic outcomes of various treatments in clinical studies.
The new technique is also very interesting for non-medical applications: Further fields of application include the development of new materials in materials science and the characterization of semiconductor components. Ultimately, the nano-CT procedure may also be transferred to novel, laser-based X-ray sources, such as the ones currently under development at the Cluster of Excellence "Munich-Centre for Advanced Photonics" (MAP) and at the recently approved large-scale research project "Centre for Advanced Laser Applications" (CALA) on the TUM-Campus Garching near Munich.

Wednesday, September 22, 2010

Art of Dividing: Researchers Decode Function and Protein Content of the Centrosome


(A) During cell division, chromosomes (red) are distributed evenly by thread-like structures (microtubules) emanating from the centrosome (green). (B) Inactivation of a centrosomal protein causes abnormal organisation of the mitotic spindle microtubules and a faulty distribution of chromosomes. © MPI for Molecular Genetics / B. Lange


A basic requirement for growth and life of a multicellular organism is the ability of its cells to divide. Chromosomes in the cells duplicate and are then distributed among the daughter cells. This distribution is organized by a protein complex made up of several hundred different proteins, called the centrosome. In cancer cells, the centrosome often assumes an unnatural shape or is present in uncontrolled numbers. The reasons for this were previously largely unknown.
Scientists at the Max Planck Institute for Molecular Genetics in Berlin, together with colleagues at the German Cancer Research Center in Heidelberg and at the Leibniz Institute for Age Research -- Fritz Lipmann Institute in Jena, have investigated the functions of the different centrosomal components. The researchers led by Bodo Lange now present their results in the EMBO Journal, detailing the centrosome's components and their functions. Their work extends our knowledge of regulation of cell division and opens the door to new investigations into cancer development.
As part of their research, the scientists examined centrosomes of the fruit fly Drosophila as well as those from human cells. "The fruit fly is a terrific system for investigating the centrosome, because the basic mechanisms of cell division are very similar between fly and human," Bodo Lange, the research group leader, explains.
The group isolated centrosomes from the eggs of fruit flies and, using mass spectrometry methods, identified more than 250 different proteins making up this complex. These components were then subjected to targeted inactivation through RNA interference (RNAi), to discover their role in the structure of the centrosome and in chromosome distribution. The scientists were able to determine the protein functions quantitatively through the use of state-of-the-art automated and robotic microscopes. They found a whole series of proteins responsible for the separation of chromosomes and for the number and structure of centrosomes. As these characteristics are often disrupted in cancer cells, the researchers believe their findings will have a significant impact on the understanding of cell division and the development of cancerous diseases.
The work of these scientists has brought new insight into the abnormalities seen in cancer cells. "Based on our findings, we hope to be able to unravel regulatory networks in the future, which will help to target and interfere with the division of cancer cells," says Lange.

Your Body Recycling Itself -- Captured on Film


This image shows UBR-box recognition of an arginine residue at the beginning of a protein (blue) targeted for degradation. The structural integrity of the UBR box depends on zinc (grey) and a histidine residue (red) that is mutated in Johanson-Blizzard syndrome.

Our bodies recycle proteins, the fundamental building blocks that enable cell growth and development. Proteins are made up of a chain of amino acids, and scientists have known since the 1980s that the first one in the chain determines the lifetime of a protein. McGill researchers have finally discovered how the cell identifies this first amino acid -- and caught it on camera.
"There are lots of reasons cells recycle proteins -- fasting, which causes loss of muscle, growth and remodeling during development, and normal turnover as old proteins are replaced to make new ones," explained lead researcher, Dr. Kalle Gehring, from McGill's Department of Biochemistry. "One way that cells decide which proteins to degrade is the presence of a signal known as an N-degron at the start of the protein. By X-ray crystallography, we discovered that the N-degron is recognized by the UBR box, a component of the cells' recycling system."
This powerful technique can pinpoint the exact location of atoms, and it enabled the team to capture an image of the UBR box, providing insight into this incredibly tiny yet essential part of our bodies' chemical mechanics.
Aside from representing a major advance in our understanding of the life cycle of proteins, the research has important repercussions for Johanson-Blizzard syndrome, a rare disease that causes deformations and mental retardation. This syndrome is caused by a mutation in the UBR box that causes it to lose an essential zinc atom. Better understanding of the structure of the UBR box may help researchers develop treatments for this syndrome.
The research was published in Nature Structural & Molecular Biology and received funding from the Canadian Institutes of Health Research.

Quantum Computing Closer Than Ever: Scientists Using Lasers to Cool and Control Molecules


A new method for laser cooling could help pave the way for using individual molecules as information bits in quantum computing.

Ever since audiences heard Goldfinger utter the famous line, "No, Mr. Bond; I expect you to die," as a laser beam inched its way toward James Bond and threatened to cut him in half, lasers have been thought of as white-hot beams of intensely focused energy capable of burning through anything in their path.
Now a team of Yale physicists has used lasers for a completely different purpose, employing them to cool molecules down to temperatures near what's known as absolute zero, about -460 degrees Fahrenheit. Their new method for laser cooling, described in the online edition of the journal Nature, is a significant step toward the ultimate goal of using individual molecules as information bits in quantum computing.
Currently, scientists use either individual atoms or "artificial atoms" as qubits, or quantum bits, in their efforts to develop quantum processors. But individual atoms don't communicate as strongly with one another as is needed for qubits. On the other hand, artificial atoms -- which are actually circuit-like devices made up of billions of atoms that are designed to behave like a single atom -- communicate strongly with one another, but are so large they tend to pick up interference from the outside world. Molecules, however, could provide an ideal middle ground.
"It's a kind of Goldilocks problem," said Yale physicist David DeMille, who led the research. "Artificial atoms may prove too big and individual atoms may prove too small, but molecules made up of a few different atoms could be just right."
In order to use molecules as qubits, physicists first have to be able to control and manipulate them -- an extremely difficult feat, as molecules generally cannot be picked up or moved without disturbing their quantum properties. In addition, even at room temperature molecules have a lot of kinetic energy, which causes them to move, rotate and vibrate.
To overcome the problem, the Yale team pushed the molecules using the subtle kick delivered by a steady stream of photons, or particles of light, emitted by a laser. Using laser beams to hit the molecules from opposite directions, they were able to reduce the random velocities of the molecules. The technique is known as laser cooling because temperature is a direct measurement of the velocities in the motion of a group of molecules. Reducing the molecules' motions to almost nothing is equivalent to driving their temperatures to virtually absolute zero.
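The numbers behind that "subtle kick" are easy to estimate: each scattered photon changes a molecule's velocity by the recoil velocity h/(wavelength x mass), so stopping a molecule takes tens of thousands of scattering events. A rough sketch, assuming a strontium monofluoride molecule, the roughly 663-nanometre transition used in SrF laser-cooling experiments, and an illustrative beam velocity:
```python
# Rough numbers behind laser cooling of SrF (illustrative estimate only).
# Each scattered photon changes the molecule's velocity by the recoil velocity
# v_recoil = h / (wavelength * mass); temperature is essentially a measure of
# the spread of these velocities (k_B * T ~ m * v^2).
h = 6.626e-34      # J*s, Planck constant
k_B = 1.381e-23    # J/K, Boltzmann constant
amu = 1.661e-27    # kg per atomic mass unit

m_SrF = (88 + 19) * amu        # strontium-88 plus fluorine-19
wavelength = 663e-9            # m, approximate SrF cooling transition
v_beam = 150.0                 # m/s, assumed forward speed of the molecular beam

v_recoil = h / (wavelength * m_SrF)
n_scatters = v_beam / v_recoil

print(f"velocity kick per photon: {v_recoil * 1000:.1f} mm/s")
print(f"photon scatters needed to stop a {v_beam:.0f} m/s molecule: ~{n_scatters:,.0f}")
print(f"temperature scale of {v_beam:.0f} m/s motion: ~{m_SrF * v_beam**2 / k_B:.0f} K")
print(f"temperature scale of one recoil kick: ~{m_SrF * v_recoil**2 / k_B * 1e6:.1f} microkelvin")
```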
While scientists had previously been able to cool individual atoms using lasers, the achievement by the Yale team represents the first time that lasers have been used to cool molecules, which present unique challenges because of their more complex structures.
The team used the molecule strontium monofluoride in their experiments, but DeMille believes the technique will also prove successful with other molecules. Beyond quantum computing, laser cooling molecules has potential applications in chemistry, where near-absolute-zero temperatures could induce currently inaccessible reactions via a quantum mechanical process known as "quantum tunneling." DeMille also hopes to use laser cooling to study particle physics, where precise measurements of molecular structure could give clues as to the possible existence of exotic, as-yet-undiscovered particles.
"Laser cooling of atoms has created a true scientific revolution. It is now used in areas ranging from basic science such as Bose-Einstein condensation, all the way to devices with real-world impacts such as atomic clocks and navigation instruments," DeMille said. "The extension of this technique to molecules promises to open an exciting new range of scientific and technological applications."
Other authors of the paper include Edward Shuman and John Barry (both of Yale University).

Tuesday, September 21, 2010

Biofuel from Inedible Plant Material Easier to Produce Following Enzyme Discovery


Fluorescent microscope image showing a cross-section of stems from a beech tree, where sugars are locked away.

Researchers funded by the Biotechnology and Biological Sciences Research Council (BBSRC) have discovered key plant enzymes that normally make the energy stored in wood, straw, and other non-edible parts of plants difficult to extract. The findings, published in Proceedings of the National Academy of Sciences, can be used to improve the viability of sustainable biofuels that do not adversely affect the food chain.
The team based at the University of Cambridge, and now part of the BBSRC Sustainable Bioenergy Centre (BSBEC), has identified and studied the genes for two enzymes that toughen wood, straw and stalks and so make it difficult to extract sugars to make bioethanol or other plant-derived products. This knowledge can now be used in crop breeding programs to make non-edible plant material that requires less processing, less energy and fewer chemicals for conversion to biofuels or other renewable products, and therefore has an even lower overall impact on atmospheric carbon.
By deepening our understanding of plant structures, the research also makes it more economically viable to produce sustainable biofuels from the inedible by-products of crops.
Lead researcher Professor Paul Dupree said: "There is a lot of energy stored in wood and straw in the form of a substance called lignocellulose. We wanted to find ways of making it easier to get at this energy and extract it in the form of sugars that can be fermented to produce bioethanol and other products."
Lignocellulose is an important component of plants, giving them strength and rigidity. One of the main components of lignocellulose is called xylan. Xylan in wood and straw represents about a third of the sugars that could potentially be used to make bioethanol, but it is locked away. Releasing the energy from lignocellulose is an important challenge to tackle as it will allow the production of fuels from plants in a sustainable way that does not affect the food chain.
Professor Dupree continued: "What we didn't want to do was end up with floppy plants that can't grow properly, so it was important to find a way of making xylan easier to break down without having any major effects."
The team studied Arabidopsis plants (a plant that is easy to study in the laboratory) that lack two of the enzymes that build the xylan part of lignocellulose in plants. They found that although the stems of the plants are slightly weaker than normal, they grow normally and reach a normal size. They also tested how easy it is to extract sugars from these plants and discovered that it takes less effort to convert all the xylan into sugar.
Professor Dupree concluded: "The next stage will be to work with our colleagues who are developing new varieties of bioenergy crops such as willow and miscanthus grass to see if we can breed plants with these properties and to use our discovery to develop more sustainable processes for generating fuels from crop residues. We expect to work closely with industrial collaborators to see how we can quickly transfer this research into real applications for transport fuels."
Duncan Eggar, BBSRC Bioenergy Champion said: "As oil reserves deplete, we must urgently find alternatives to oil-based fuels, plastics, lubricants, and other products. This research is a good example where understanding the fundamental biology of plants gives us the foundation to use plants to produce a raft of important products.
"We know that we can store a tremendous amount of energy from the sun in the form of plant material and at the same time capture atmospheric carbon dioxide. Working in consort with the other five hubs of the BBSRC Sustainable Bioenergy Centre, this research is aimed at improving our ability to release energy stored in plants in a form that is usable in normal everyday applications."

End of Microplates? Novel Electronic Biosensing Technology Could Facilitate New Era of Personalized Medicine


The new electronic microplate is shown in front of the technology it aims to replace, the conventional microplate

The multi-welled microplate, long a standard tool in biomedical research and diagnostic laboratories, could become a thing of the past thanks to new electronic biosensing technology developed by a team of microelectronics engineers and biomedical scientists at the Georgia Institute of Technology.
Essentially arrays of tiny test tubes, microplates have been used for decades to simultaneously test multiple samples for their responses to chemicals, living organisms or antibodies. Fluorescence or color changes in labels associated with compounds on the plates can signal the presence of particular proteins or gene sequences.
The researchers hope to replace these microplates with modern microelectronics technology, including disposable arrays containing thousands of electronic sensors connected to powerful signal processing circuitry. If they're successful, this new electronic biosensing platform could help realize the dream of personalized medicine by making possible real-time disease diagnosis -- potentially in a physician's office -- and by helping select individualized therapeutic approaches.
"This technology could help facilitate a new era of personalized medicine," said John McDonald, chief research scientist at the Ovarian Cancer Institute in Atlanta and a professor in the Georgia Tech School of Biology. "A device like this could quickly detect in individuals the gene mutations that are indicative of cancer and then determine what would be the optimal treatment. There are a lot of potential applications for this that cannot be done with current analytical and diagnostic technology."
Fundamental to the new biosensing system is the ability to electronically detect markers that differentiate between healthy and diseased cells. These markers could be differences in proteins, mutations in DNA or even specific levels of ions that exist at different amounts in cancer cells. Researchers are finding more and more differences like these that could be exploited to create fast and inexpensive electronic detection techniques that don't rely on conventional labels.
"We have put together several novel pieces of nanoelectronics technology to create a method for doing things in a very different way than what we have been doing," said Muhannad Bakir, an associate professor in Georgia Tech's School of Electrical and Computer Engineering. "What we are creating is a new general-purpose sensing platform that takes advantage of the best of nanoelectronics and three-dimensional electronic system integration to modernize and add new applications to the old microplate application. This is a marriage of electronics and molecular biology."
The three-dimensional sensor arrays are fabricated using conventional low-cost, top-down microelectronics technology. Though existing sample preparation and loading systems may have to be modified, the new biosensor arrays should be compatible with existing work flows in research and diagnostic labs.
"We want to make these devices simple to manufacture by taking advantage of all the advances made in microelectronics, while at the same time not significantly changing usability for the clinician or researcher," said Ramasamy Ravindran, a graduate research assistant in Georgia Tech's Nanotechnology Research Center and the School of Electrical and Computer Engineering.
A key advantage of the platform is that sensing will be done using low-cost, disposable components, while information processing will be done by reusable conventional integrated circuits connected temporarily to the array. Ultra-high density spring-like mechanically compliant connectors and advanced "through-silicon vias" will make the electrical connections while allowing technicians to replace the biosensor arrays without damaging the underlying circuitry.
Separating the sensing and processing portions allows fabrication to be optimized for each type of device, notes Hyung Suk Yang, a graduate research assistant also working in the Nanotechnology Research Center. Without the separation, the types of materials and processes that can be used to fabricate the sensors are severely limited.
The sensitivity of the tiny electronic sensors can often be greater than that of current systems, potentially allowing diseases to be detected earlier. Because the sample wells will be substantially smaller than those of current microplates -- allowing a smaller form factor -- they could permit more testing to be done with a given sample volume.
The technology could also facilitate use of ligand-based sensing that recognizes specific genetic sequences in DNA or messenger RNA. "This would very quickly give us an indication of the proteins that are being expressed by that patient, which gives us knowledge of the disease state at the point-of-care," explained Ken Scarberry, a postdoctoral fellow in McDonald's lab.
So far, the researchers have demonstrated a biosensing system with silicon nanowire sensors in a 16-well device built on a one-centimeter by one-centimeter chip. The nanowires, just 50 by 70 nanometers, differentiated between ovarian cancer cells and healthy ovarian epithelial cells at a variety of cell densities.
Silicon nanowire sensor technology can be used to simultaneously detect large numbers of different cells and biomaterials without labels. Beyond that versatile technology, the biosensing platform could accommodate a broad range of other sensors -- including technologies that may not exist yet. Ultimately, hundreds of thousands of different sensors could be included on each chip, enough to rapidly detect markers for a broad range of diseases.
"Our platform idea is really sensor agnostic," said Ravindran. "It could be used with a lot of different sensors that people are developing. It would give us an opportunity to bring together a lot of different kinds of sensors in a single chip."
Genetic mutations can lead to a large number of different disease states that can affect a patient's response to disease or medication, but current labeled sensing methods are limited in their ability to detect large numbers of different markers simultaneously.
Mapping single nucleotide polymorphisms (SNPs), variations that account for approximately 90 percent of human genetic variation, could be used to determine a patient's propensity for a disease, or their likelihood of benefitting from a particular intervention. The new biosensing technology could enable caregivers to produce and analyze SNP maps at the point-of-care.
Though many technical challenges remain, the ability to screen for thousands of disease markers in real-time has biomedical scientists like McDonald excited.
"With enough sensors in there, you could theoretically put all possible combinations on the array," he said. "This has not been considered possible until now because making an array large enough to detect them all with current technology is probably not feasible. But with microelectronics technology, you can easily include all the possible combinations, and that changes things."
Papers describing the biosensing device were presented at the Electronic Components and Technology Conference and the International Interconnect Technology Conference in June 2010. The research has been supported in part by the National Nanotechnology Infrastructure Network (NNIN), Georgia Tech's Integrative BioSystems Institute (IBSI) and the Semiconductor Research Corporation.

Data Clippers to Set Sail to Enhance Future Planetary Missions


An artist’s illustration of the data clippers

A new golden age of sailing may be about to begin -- in space. Future missions to explore the outer planets could employ fleets of 'data clippers': manoeuvrable spacecraft equipped with solar sails that would ship vast quantities of scientific data back to Earth.
According to Joel Poncy of Thales Alenia Space, the technology could be ready in time to support mid-term missions to the moons of Jupiter and Saturn. Poncy will be presenting an assessment of data clippers at the European Planetary Science Congress (EPSC) 2010 in Rome on Sept. 20, 2010.
"Space-rated flash memories will soon be able to store the huge quantities of data needed for the global mapping of planetary bodies in high resolution. But a full high-res map of, say, Europa or Titan, would take several decades to download from a traditional orbiter, even using very large antennae. Downloading data is the major design driver for interplanetary missions. We think that data clippers would be a very efficient way of overcoming this bottleneck," said Poncy.
Poncy and his team at Thales Alenia Space have carried out a preliminary assessment for a data clipper mission. Their concept is for a clipper to fly close to a planetary orbiter, take on its stored data and then fly by Earth, at which point terabytes of data could be downloaded to the ground station. A fleet of data clippers cruising around the Solar System could provide support for an entire suite of planetary missions.
"We have looked at the challenges of a data clipper mission and we think that it could be ready for a launch in the late 2020s. This means that the technology should be included now in the roadmap for future missions, and this is why we are presenting this study at EPSC," said Poncy.
Poncy's team have assessed the communications systems and tracking devices that a data clipper would need, as well as the flyby conditions and pointing accuracy required for the massive data transfers. Recent advances in technology mean that spacecraft propelled by solar sails, which use radiation pressure from photons emitted by the Sun, or electric sails, which harness the momentum of the solar wind, can now be envisaged for mid-term missions. The Japanese Space Agency, JAXA, is currently testing a solar sail mission, IKAROS.
"Using the Sun as a propulsion source has the considerable advantage of requiring no propellant on board. As long as the hardware doesn't age too much and the spacecraft is manoeuvrable, the duration of the mission can be very long. The use of data clippers could lead to a valuable downsizing of exploration missions and lower ground operation costs -- combined with a huge science return. The orbiting spacecraft would still download some samples of their data directly to Earth to enable real-time discoveries and interactive mission operations. But the bulk of the data is less urgent and is often processed by scientists much later. Data clippers could provide an economy delivery service from the outer Solar System, over and over again," said Poncy.

Magical BEANs: New Nano-Sized Particles Could Provide Mega-Sized Data Storage


This schematic shows enthalpy curves sketched for the liquid, crystalline and amorphous phases of a new class of nanomaterials called "BEANs" for Binary Eutectic-Alloy Nanostructures.

The ability of phase-change materials to readily and swiftly transition between different phases has made them valuable as a low-power source of non-volatile or "flash" memory and data storage. Now an entire new class of phase-change materials has been discovered by researchers with the Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC) Berkeley that could be applied to phase change random access memory (PCM) technologies and possibly optical data storage as well. The new phase-change materials -- nanocrystal alloys of a metal and semiconductor -- are called "BEANs," for binary eutectic-alloy nanostructures.
"Phase changes in BEANs, switching them from crystalline to amorphous and back to crystalline states, can be induced in a matter of nanoseconds by electrical current, laser light or a combination of both," says Daryl Chrzan, a physicist who holds joint appointments with Berkeley Lab's Materials Sciences Division and UC Berkeley's Department of Materials Science and Engineering. "Working with germanium tin nanoparticles embedded in silica as our initial BEANs, we were able to stabilize both the solid and amorphous phases and could tune the kinetics of switching between the two simply by altering the composition."
Chrzan is the corresponding author of a paper reporting the results of this research, published in the journal Nano Letters under the title "Embedded Binary Eutectic Alloy Nanostructures: A New Class of Phase Change Materials."
Co-authoring the paper with Chrzan were Swanee Shin, Julian Guzman, Chun-Wei Yuan, Christopher Liao, Cosima Boswell-Koller, Peter Stone, Oscar Dubon, Andrew Minor, Masashi Watanabe, Jeffrey Beeman, Kin Yu, Joel Ager and Eugene Haller.
"What we have shown is that binary eutectic alloy nanostructures, such as quantum dots and nanowires, can serve as phase change materials," Chrzan says. "The key to the behavior we observed is the embedding of nanostructures within a matrix of nanoscale volumes. The presence of this nanostructure/matrix interface makes possible a rapid cooling that stabilizes the amorphous phase, and also enables us to tune the phase-change material's transformation kinetics."
A eutectic alloy is a metallic material that melts at the lowest possible temperature for its mix of constituents. Germanium tin is a eutectic alloy that the investigators consider a prototypical phase-change material because it can exist at room temperature in either a stable crystalline state or a metastable amorphous state. Chrzan and his colleagues found that when germanium tin nanocrystals were embedded within amorphous silica, the nanocrystals formed a bilobed nanostructure that was half crystalline metallic and half crystalline semiconductor.
"Rapid cooling following pulsed laser melting stabilizes a metastable, amorphous, compositionally mixed phase state at room temperature, while moderate heating followed by slower cooling returns the nanocrystals to their initial bilobed crystalline state," Chrzan says. "The silica acts as a small and very clean test tube that confines the nanostructures so that the properties of the BEAN/silica interface are able to dictate the unique phase-change properties."
While they have not yet directly characterized the electronic transport properties of the bilobed and amorphous BEAN structures, Chrzan and his colleagues expect from studies on related systems that the transport and optical properties of the two structures will be substantially different, and that these differences will be tunable through composition alterations.
"In the amorphous alloyed state, we expect the BEAN to display normal, metallic conductivity," Chrzan says. "In the bilobed state, the BEAN will include one or more Schottky barriers that can be made to function as a diode. For purposes of data storage, the metallic conduction could signify a zero and a Schottky barrier could signify a one."
Chrzan and his colleagues are now investigating whether BEANs can sustain repeated phase-changes and whether the switching back and forth between the bilobed and amorphous structures can be incorporated into a wire geometry. They also want to model the flow of energy in the system and then use this modeling to tailor the light/current pulses for optimum phase-change properties.
The in situ transmission electron microscopy characterizations of the BEAN structures were carried out at Berkeley Lab's National Center for Electron Microscopy, one of the world's premier centers for electron microscopy and microcharacterization.
Berkeley Lab is a U.S. Department of Energy (DOE) national laboratory located in Berkeley, California. It conducts unclassified scientific research and is managed by the University of California for the DOE Office of Science.

Plastic That Grows On Trees


PNNL researchers can now take cellulose to HMF in one step, a process that might someday replace crude oil to make fuel and plastics
Some researchers hope to turn plants into a renewable, nonpolluting replacement for crude oil. To achieve this, scientists have to learn how to convert plant biomass into a building block for plastics and fuels cheaply and efficiently. In new research, chemists have successfully converted cellulose -- the most common plant carbohydrate -- directly into the building block called HMF in one step.
The result builds upon earlier work by researchers at the Department of Energy's Pacific Northwest National Laboratory. In that work scientists produced HMF from simple sugars derived from cellulose. In this new work, researchers developed a way to bypass the sugar-forming step and go straight from cellulose to HMF. This simple process generates a high yield of HMF and allows the use of raw cellulose as feed material.
"In biomass like wood, corn stover and switchgrass, cellulose is the most abundant polymer that researchers are trying to convert to biofuels and plastics," said chemist Z. Conrad Zhang, who led the work while at PNNL's Institute for Interfacial Catalysis.
HMF, also known as 5-hydroxymethylfurfural, can be used as a building block for plastics and "biofuels" such as gasoline and diesel, essentially the same fuels processed from crude oil. In previous work, PNNL researchers used a chemical and a solvent known as an ionic liquid to convert the simple sugars into HMF.
The chemical, a metal chloride known as chromium chloride, converted sugar into highly pure HMF. But to be able to feed cellulosic biomass directly from nature, the team still needed to break down cellulose into simple sugars -- Zhang and colleagues wanted to learn how to skip that step.
The ionic liquid has the added benefit of being able to dissolve cellulose, which, as anyone who's boiled leafy vegetables knows, can be stringy and hard to dissolve. Compounds called catalysts speed up the conversion of cellulose to HMF. After trying different metal chloride catalysts in the ionic solvent, the team found a pair that worked well: a combination of copper chloride and chromium chloride at temperatures under 120 degrees Celsius broke down the cellulose without creating a lot of unwanted byproducts.
In additional experiments, the team tested how well their method compared to acid, a common way to break down cellulose. The metal chloride-ionic liquid system worked ten times faster than the acid and at much lower temperatures. In addition, the paired metal chloride catalysts allowed Zhang's research team to avoid using a mineral acid, another compound under investigation that is known to degrade HMF.
Optimizing their method, the team found that they could consistently achieve a high yield of HMF: the single-step process converted about 57 percent of the sugar content in the cellulose feedstock to HMF. The team recovered more than 90 percent of the HMF formed, and the final product was 96 percent pure.
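Taken together, those figures imply that roughly half of the feedstock's sugar content ends up as recovered HMF. The small Python sketch below makes the arithmetic explicit; the exact yield basis (molar versus mass) is not specified here, so treat it purely as illustrative bookkeeping.

    # Illustrative bookkeeping of the quoted figures; the exact yield basis is not specified
    # in the text, so this is a simplification rather than a process calculation.
    conversion = 0.57   # fraction of the cellulose's sugar content converted to HMF
    recovery   = 0.90   # "more than 90%" of the HMF formed was recovered
    purity     = 0.96   # purity of the final recovered product

    overall = conversion * recovery
    print(f"~{overall:.0%} of the feedstock's sugar recovered as HMF, at {purity:.0%} purity")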
In addition, the metal chlorides and ionic liquid could be reused multiple times without losing their effectiveness. Being able to recycle the materials will lower the cost of HMF production.
"This paper is a tremendous breakthrough. By combining the cellulose-breakdown and sugar-conversion steps, we are very close to a single-step method of converting raw biomass into a new platform chemical -- a chemical you can readily turn into a transportation fuel or for synthesis of plastics and other useful materials," said PNNL geochemist and study coauthor Jim Amonette. "Advances like this can help reduce our dependence on fossil fuels."
This research was supported by Pacific Northwest National Laboratory-directed research funding.

Chemists Discover Method to Create High-Value Chemicals from Biomass


Iowa State University's Ronald Holtan, left, and Walter Trahanovsky are using high-pressure vessels like this to create high-value chemicals from biomass.

Iowa State University researchers have found a way to produce high-value chemicals such as ethylene glycol and propylene glycol from biomass rather than petroleum sources.
Walter Trahanovsky, an Iowa State professor of chemistry who likes to write out the chemical structures of compounds when he talks about his science, was looking to produce sugar derivatives from cellulose and other forms of biomass using high-temperature chemistry. And so he and members of his research group studied the reactions of cellulosic materials in alcohols at high temperatures and pressures.
They analyzed the products of the reactions using nuclear magnetic resonance spectroscopy. Early experiments produced the expected sugar derivatives. Additional work, however, clearly revealed significant yields of ethylene glycol and propylene glycol.
"It was a real surprise," Trahanovsky said. "These products were unexpected, so we never looked for them. But they were always there."
Uses for ethylene glycol include auto antifreeze, polyester fabrics and plastic bottles. Propylene glycol has many uses, including as a food additive, a solvent in pharmaceuticals, a moisturizer in cosmetics and as a coolant in liquid cooling systems.
Conversion of biomass to fuels and other chemicals can require strong acids or other harsh and expensive compounds. These processes also generate chemical wastes that have to be collected for safe disposal.
The Iowa State researchers say they have found a technology that is simpler yet effective and also better for the environment.
"There is potential here," said Trahanovsky. "It's not a wild dream to think this could be developed into a practical process."
The biomass conversion process is based on the chemistry of supercritical fluids, fluids that are heated under pressure until their liquid and gas phases merge. In this case, Trahanovsky said the key results are significant yields of ethylene glycol, propylene glycol and other chemicals with low molecular weights. He said the process also produces alkyl glucosides and levoglucosan that can be converted into glucose for ethanol production or other uses.
All this happens without the use of any expensive reagents such as acids, enzymes, catalysts or hydrogen gas, Trahanovsky said. The process even works when there are impurities in the biomass.
The Iowa State University Research Foundation Inc. has filed for a patent of the technology.
The research has been supported by grants from the Iowa Energy Center. Other Iowa State researchers who have contributed to the project include Ronald Holtan, a postdoctoral research associate in chemistry; Norm Olson, the project manager of the Iowa Energy Center's BECON facility near Nevada; Joseph Marshall, a former graduate student; and Alyse Hurd and Kyle Quasdorf, former undergraduate students.
Trahanovsky said the research team is still working to develop and improve the conversion technology.
And he does think the technology could be useful to industry.
"The starting materials for this are cheap," Trahanovsky said. "And the products are reasonably high-value chemicals."

Synthetic Fuels Research Aims to Reduce Oil Dependence

Researchers at Purdue University have developed a facility aimed at learning precisely how coal and biomass are broken down in reactors called gasifiers as part of a project to strengthen the scientific foundations of the synthetic fuel economy.
"A major focus is to be able to produce a significant quantity of synthetic fuel for the U.S. air transportation system and to reduce our dependence on petroleum oil for transportation," said Jay Gore, the Reilly University Chair Professor of Combustion Engineering at Purdue.
The research is part of work to develop a system for generating large quantities of synthetic fuel from agricultural wastes, other biomass or coal that would be turned into a gas using steam and then converted into a liquid fuel.
Other aims are to learn how to generate less carbon dioxide than conventional synthetic-fuel processing methods while increasing the yield of liquid fuel by adding hydrogen into the coal-and-biomass-processing reactor, a technique pioneered by Rakesh Agrawal, Purdue's Winthrop E. Stone Distinguished Professor of Chemical Engineering.
Researchers are using the facility to learn how coal and biomass "gasify" when exposed to steam under high pressure in order to improve the efficiency of the gasification process.
"We want to show that our system is flexible for using coal and biomass," Gore said. "The aim is to create a sustainable synthetic fuel economy. What's daunting is the size of the problem -- how much oil we need -- how much energy we need."
Findings published last year showed that carbon dioxide emissions might be reduced by 40 percent using the technique. New findings will be detailed in a research paper to be presented at a January meeting of the American Institute of Aeronautics and Astronautics in Orlando.
The research is based at the university's Maurice J. Zucrow Laboratories.
Synthetic fuels currently are being blended with petroleum fuels for performance improvement in automobile and aircraft applications and also are used in equipment trials in commercial aircraft. However, new techniques are needed to reduce the cost and improve the efficiency of making the fuels.
"At the right price, synthetic fuels could replace fossil fuels in all conceivable applications," Gore said.
The 2-meter-tall stainless steel reactor is part of a system that borrows technology from aerospace applications, including a "spark igniter" used in space shuttle engines. Materials inside the spark igniter may briefly reach temperatures of up to 3,000 degrees Celsius, or more than 5,400 degrees Fahrenheit -- hot enough to burn holes in steel.
The researchers also integrated an advanced optical diagnostics system: A laser is transmitted through a window in the stainless steel vessel, passing through the gases being processed inside. An optical sensor on the other side of the vessel decodes the light to determine the precise temperature and composition of the gases.
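One common way such laser measurements work is absorption spectroscopy, where the fraction of light that survives the path through the gas reveals how much of an absorbing species is present via the Beer-Lambert law. The Python sketch below illustrates that general idea; the Purdue system's actual diagnostic technique is not specified here, and the cross-section and path length are assumed values.

    # General Beer-Lambert illustration: I = I0 * exp(-sigma * n * L), so the species' number
    # density n can be recovered from the transmitted fraction. Values are assumptions.
    import math

    def number_density(I_transmitted, I_incident, cross_section_cm2, path_length_cm):
        """Number density (per cm^3) of an absorber, from transmitted vs incident intensity."""
        return math.log(I_incident / I_transmitted) / (cross_section_cm2 * path_length_cm)

    # Example: 60% transmission over a 10 cm path with an assumed 1e-19 cm^2 cross-section.
    print(f"{number_density(0.6, 1.0, 1e-19, 10.0):.2e} molecules per cm^3")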
"It's a modular design, so the optical diagnostics part can be moved to various points to analyze how the gasification proceeds," said Robert Lucht, the Ralph and Bettye Bailey Professor of Combustion in Mechanical Engineering.
Doctoral students also designed a special feeder to transport the coal or biomass into the reactor vessel.
"One of the challenges is feeding this at high pressure -- about 10 atmospheres," Gore said. "This sort of feeder could not be bought off the shelf, so it had to be specially designed."
Gore and Lucht are working with faculty from Purdue's schools of Aeronautics and Astronautics and Chemical Engineering and other Purdue faculty members, as well as mechanical engineering doctoral students Anup Sane, Indraneel Sircar, Rohan Gejji and Brent Rankin.
Students are working on doctoral theses on the system's mechanical design, the optical diagnostics and approaches for integrating aerospace-related technologies.
The work is funded by the U.S. Air Force Office of Scientific Research.
The project also involves Li Qiao, an assistant professor of aeronautics and astronautics.

'Nanosprings' Offer Improved Performance in Biomedicine, Electronics


Researchers at Oregon State University have successfully loaded biological molecules onto "nanosprings," an advance that could have many industrial and biomedical applications.

Researchers at Oregon State University have reported the successful loading of biological molecules onto "nanosprings" -- a type of nanostructure that has gained significant interest in recent years for its ability to maximize surface area in microreactors.
The findings, announced in the journal Biotechnology Progress, may open the door to important new nanotech applications in production of pharmaceuticals, biological sensors, biomedicine or other areas.
"Nanosprings are a fairly new concept in nanotechnology because they create a lot of surface area at the same time they allow easy movement of fluids," said Christine Kelly, an associate professor in the School of Chemical, Biological and Environmental Engineering at OSU.
"They're a little like a miniature version of an old-fashioned, curled-up phone cord," Kelly said. "They make a great support on which to place reactive catalysts, and there are a variety of potential applications."
The OSU researchers found a way to attach enzymes to silicon dioxide nanosprings so that they function as biological catalysts to facilitate other chemical reactions. They might be used, for instance, to create a biochemical sensor that can react to a toxin far more quickly than other approaches.
"The ability to attach biomolecules on these nanosprings, in an efficient and environmentally friendly way, could be important for a variety of sensors, microreactors and other manufacturing applications," said Karl Schilke, an OSU graduate student in chemical engineering and principal investigator on the study.
The work was done in collaboration with the University of Idaho Department of Physics and GoNano Technologies of Moscow, Idaho, a commercial producer of nanosprings. Nanosprings are being explored for such uses as hydrogen storage, carbon cycling and lab-on-chip electronic devices. The research was also facilitated by the Microproducts Breakthrough Institute, a collaboration of OSU and the Pacific Northwest National Laboratory.
"An increasingly important aspect of microreactor and biosensor technology is the development of supports that can be easily coated with enzymes, antibodies, or other biomolecules," the researchers wrote in their report.
"These requirements are neatly met by nanosprings, structures that can be grown by a chemical vapor deposition process on a wide variety of surfaces," they said. "This study represents the first published application of nanosprings as a novel and highly efficient carrier for immobilized enzymes in microreactors."

At the Crossroads of Chromosomes: Study Reveals Structure of Cell Division’s Key Molecule


Human chromosome, with conventional nucleosomes containing the major form of the histones (green), and localization of the centromere histone H3 variant, CENP-A (red)
On average, one hundred billion cells in the human body divide over the course of a day. Most of the time the body gets it right, but sometimes problems in cell replication can lead to abnormalities in chromosomes, resulting in many types of disorders, from cancer to Down syndrome.
Now, researchers at the University of Pennsylvania's School of Medicine have defined the structure of a key molecule that plays a central role in how DNA is duplicated and then moved correctly and equally into two daughter cells to produce two exact copies of the mother cell. Without this molecule, entire chromosomes could be lost during cell division.
Ben Black, PhD, assistant professor of Biochemistry and Biophysics, and Nikolina Sekulic, PhD, a postdoctoral fellow in the Black lab, report in the Sept. 16 issue of Nature the structure of the CENP-A molecule, which defines a part of the chromosome called the centromere. This is a constricted area to which specialized molecules called spindle fibers attach that help pull daughter cells apart during cell division.
"Our work gives us the first high-resolution view of the molecules that control genetic inheritance at cell division," says Black. "This is a big step forward in a puzzle that biologists have been chipping away at for over 150 years."
Investigators have known for the last 15 years that part of cell division is controlled not by the DNA sequence itself but by epigenetic processes, the series of actions that affect the protein spools around which DNA is tightly bound. Those spools are built of histone proteins, and chemical changes to these spool proteins can either loosen or tighten their interaction with DNA. Epigenetic changes alter the readout of the genetic code, in some cases ramping a gene's expression up or down. In the case of the centromere, an epigenetic mark flags the site where spindle fibers attach, independently of the underlying DNA sequence. CENP-A has long been suspected to be the key epigenetic marker protein.
However, what hasn't been known is how CENP-A epigenetically marks the centromere to direct inheritance. The Black team found the structural features that give CENP-A the ability to mark the centromere's location on each chromosome. This is important because without CENP-A, or the centromere mark it creates, the entire chromosome -- and all of the genes it houses -- is lost at cell division.
In this study, Black solved CENP-A's structure to determine how it specifically marks the centromere on each chromosome and surmise from that how the epigenetic mark is copied correctly in each cell division. They found that CENP-A changes the shape of the nucleosome of which it's a part, also making it more rigid than other nucleosomes without CENP-A. The nucleosome is the combination of DNA wound around a histone protein core -- the DNA thread wrapped around the histone spool. The CENP-A nucleosome is copied several times to create a unique epigenetic area, different from the rest of the chromosome. CENP-A replaces histone H3 in the nucleosomes located at the centromere.
This CENP-A centromere identifier attracts other proteins, and in cell division builds a massive structure, the kinetochore, for pulling the duplicated chromosomes apart during cell division.
Besides the major advance in the understanding of the molecules driving human inheritance, this work also brings about the exciting prospect that the key epigenetic components are now in hand to engineer clinically useful artificial chromosomes that will be inherited alongside our own natural chromosomes -- and with the same high fidelity, says Black.
Co-authors are graduate student Emily A. Bassett and research specialist Danielle J. Rogers. The work was funded by the National Institute of General Medical Sciences, the Burroughs Wellcome Fund, the Rita Allen Foundation, the American Cancer Society, and the American Heart Association.