Saturday, October 02, 2010

Underwater Robot Swims Free Thanks to Wireless Controller


A waterproof controller designed and built by York University researchers is allowing an underwater robot to go "wireless" in a unique way.
AQUA, an amphibious, otter-like robot, is small and nimble, with flippers rather than propellers, designed for intricate data collection from shipwrecks and reefs.
The robot, a joint project of York, McGill and Dalhousie universities, can now be controlled wirelessly using a waterproof tablet built at York. While underwater, divers can program the tablet to display tags onscreen, similar to barcodes read by smartphones. The robot's on-board camera then scans these two-dimensional tags to receive and carry out commands.
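As a rough illustration of the tag idea, the Python sketch below encodes a command ID into a small black-and-white grid and decodes it again. The command set, tag layout and encoding are hypothetical stand-ins for illustration only; the article does not describe the actual AQUATablet tag format.

```python
# Hypothetical sketch of a tag-based command scheme: a command ID is drawn
# as a square bit matrix with a solid border (so the camera can locate and
# orient the tag), and decoded back from the inner payload cells.

COMMANDS = ["hover", "surface", "turn_left", "turn_right", "follow_diver"]

def encode_tag(command_id, bits=4):
    """Render a command ID as a 4x4 bit matrix; the border stays solid."""
    payload = [(command_id >> i) & 1 for i in range(bits)]
    tag = [[1] * 4 for _ in range(4)]
    tag[1][1], tag[1][2], tag[2][1], tag[2][2] = payload
    return tag

def decode_tag(tag):
    """Recover the command ID from the inner 2x2 payload cells."""
    payload = [tag[1][1], tag[1][2], tag[2][1], tag[2][2]]
    return sum(bit << i for i, bit in enumerate(payload))

tag = encode_tag(COMMANDS.index("turn_right"))
assert COMMANDS[decode_tag(tag)] == "turn_right"
```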
Cutting the cord on underwater robots has been a longstanding challenge for scientists; water interferes with radio signals, hindering traditional wireless communication via modem. Tethered communication is cumbersome and can create safety issues for divers.
"Having a robot tethered to a vehicle above water creates a scenario where communication between the diver, robot, and surface operator becomes quite complicated," says Michael Jenkin, professor in York's Faculty of Science & Engineering and co-author of the forthcoming paper, Swimming with Robots: Human Robot Communication at Depth.
"Investigating a shipwreck, for example, is a very delicate operation and the diver and robot need to be able to react quickly to changes in the environment. An error or a lag in communication could be dangerous," Jenkin says.
Realizing there was no device on the market that fit the bill, Jenkin and his team at York's Centre for Vision Research, including the paper's lead author, MSc student Bart Verzijlenberg, set to work constructing a prototype. The resulting device, fittingly dubbed AQUATablet, is watertight to a depth of 60 feet. Aluminum housing with a clear acrylic cover protects the tablet computer, which can be controlled by a diver using toggle switches and on-screen prompts.
"A diver at 60 feet can actually teleoperate AQUA 30-40 feet deeper. Needless to say this is much easier on the diver, physically, and much safer," Jenkin says.
The tablet also allows divers to command the robot much as if they were using a video game joystick; turn the tablet right and AQUA turns right, too. In this mode, the robot is connected to the tablet by a slim length of optical cable, circumventing many of the issues of a robot-to-surface tether. The optical cable also allows AQUA to provide video feedback from its camera to the operator. In a totally wireless mode, the robot acknowledges prompts by flashing its on-board light. Its cameras can be used to build 3-D models of the environment which can then be used to guide the robot to particular tasks.
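As a sketch of how such a tilt-to-command mapping might work, consider the following Python fragment; the deadband, scaling and names are illustrative assumptions, not details from the paper.

```python
# Hypothetical joystick-style mapping: tablet roll angle -> normalized yaw
# command sent to the robot (-1 = full left, +1 = full right). A deadband
# ignores small unintentional tilts while the diver holds the tablet.

DEADBAND_DEG = 5.0
MAX_TILT_DEG = 45.0

def tilt_to_yaw_command(roll_deg):
    if abs(roll_deg) < DEADBAND_DEG:
        return 0.0
    scaled = (abs(roll_deg) - DEADBAND_DEG) / (MAX_TILT_DEG - DEADBAND_DEG)
    return min(1.0, scaled) * (1.0 if roll_deg > 0 else -1.0)

print(tilt_to_yaw_command(20.0))   # gentle right turn (~0.38)
print(tilt_to_yaw_command(-50.0))  # hard left, clamped to -1.0
```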
"This is a huge improvement on [a robot] having to travel to the surface to communicate with its operators," Jenkin says.
In the past, divers have used laminated flashcards to visually communicate with robots while underwater. However, these limit the diver to a pre-set sequence of commands.
"It's impossible to anticipate everything you're going to want the robot to do once you get underwater. We wanted to develop a system where we could create commands on the fly, in response to the environment," he says.
Jenkin and Verzijlenberg's paper will be presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Taiwan.
Jenkin and Verzijlenberg are two of the researchers based in York's new state-of-the-art Sherman Health Science Research Centre, which officially opened on Sept. 14, 2010. Jenkin leads the Canadian Centre for Field Robotics, which is based on the building's main level. The centre is supported by a grant from the Canada Foundation for Innovation (CFI). The AQUA project is funded in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). York's Centre for Vision Research is part of the Faculty of Health.

Three Solid-State Qubits Entangled: Big Step Toward Quantum Error Correction


The quantum entanglement of three solid-state qubits, or quantum bits, represents the first step towards quantum error correction, a crucial aspect of future quantum computing.
The rules that govern the world of the very small, quantum mechanics, are known for being bizarre. One of the strangest tenets is something called quantum entanglement, in which two or more objects (such as particles of light, called photons) become inextricably linked, so that measuring certain properties of one object reveals information about the other(s), even if they are separated by thousands of miles. Einstein found the consequences of entanglement so unpalatable he famously dubbed it "spooky action at a distance."
Now a team led by Yale researchers has harnessed this counterintuitive aspect of quantum mechanics and achieved the entanglement of three solid-state qubits, or quantum bits, for the first time. Their accomplishment, described in the Sept. 30 issue of the journal Nature, is a first step towards quantum error correction, a crucial aspect of future quantum computing.
"Entanglement between three objects has been demonstrated before with photons and charged particles," said Steven Girvin, the Eugene Higgins Professor of Physics & Applied Physics at Yale and an author of the paper. "But this is the first three-qubit, solid-state device that looks and feels like a conventional microprocessor."
The new result builds on the team's development last year of the world's first rudimentary solid-state quantum processor, which they demonstrated was capable of executing simple algorithms using two qubits.
The team, led by Robert Schoelkopf, the William A. Norton Professor of Applied Physics & Physics at Yale, used artificial "atoms" -- actually made up of a billion aluminum atoms that behave as a single entity -- as their qubits. These "atoms" can occupy two different energy states, akin to the "1" and "0" or "on" and "off" states of regular bits used in conventional computers. The strange laws of quantum mechanics, however, allow for qubits to be placed in a "superposition" of these two states at the same time, resulting in far greater information storage and processing power.
In this new study, the team was able to achieve an entangled state by placing the three qubits in a superposition of two possibilities -- all three were either in the 0 state or the 1 state. They were able to attain this entangled state 88 percent of the time.
With the particular entangled state the team achieved, they also demonstrated for the first time the encoding of quantum information from a single qubit into three qubits using a so-called repetition code. "This is the first step towards quantum error correction, which, as in a classical computer, uses the extra qubits to allow the computer to operate correctly even in the presence of occasional errors," Girvin said.
Such errors might include a cosmic ray hitting one of the qubits and switching it from a 0 to a 1 state, or vice versa. By replicating the qubits, the computer can confirm whether all three are in the same state (as expected) by checking each one against the others.
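The classical analogue of this repetition code is easy to simulate; the Python sketch below encodes one bit into three, flips each copy independently with some probability, and decodes by majority vote. It is only a classical stand-in: a real quantum code detects errors through parity checks without directly measuring the encoded state, and the error rate here is an arbitrary choice.

```python
import random

def encode(bit):
    """Repetition code: copy one (classical) bit into three."""
    return [bit, bit, bit]

def add_noise(bits, p_flip=0.05):
    """Flip each copy independently with probability p_flip."""
    return [b ^ 1 if random.random() < p_flip else b for b in bits]

def decode(bits):
    """Majority vote: the code fails only if two or more copies flipped."""
    return 1 if sum(bits) >= 2 else 0

trials = 100_000
failures = sum(decode(add_noise(encode(0))) != 0 for _ in range(trials))
# With p = 0.05 the uncorrected error rate is 5%, while the coded failure
# rate is 3*p**2*(1-p) + p**3, about 0.73%.
print(f"observed failure rate: {failures / trials:.3%}")
```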
"Error correction is one of the holy grails in quantum computing today," Schoelkopf said. "It takes at least three qubits to be able to start doing it, so this is an exciting step."
Other authors of the paper include Leonardo DiCarlo, Matthew Reed, Luyan Sun, Blake Johnson, Jerry Chow and Michel Devoret (all of Yale University); and Jay Gambetta (University of Waterloo).

IBEX Finds Surprising Changes at Solar Boundary


Roughly the size of a card table, the Interstellar Boundary Explorer is the latest in NASA's series of low-cost, rapidly developed Small Explorers spacecraft.
When NASA launched the Interstellar Boundary Explorer (IBEX) on October 19, 2008, space physicists held their collective breath for never-before-seen views of a collision zone far beyond the planets, roughly 10 billion miles away. That's where the solar wind, an outward rush of charged particles and magnetic fields continuously spewed by the Sun, runs into the flow of particles and fields that permeates interstellar space in our neighborhood of the Milky Way galaxy.
No spacecraft had ever imaged the collision zone, which occurs in a region known as the heliosheath, because it emits no light. But the two detectors on IBEX are designed to "see" what the human eye cannot. The interaction of the solar wind and interstellar medium creates energetic neutral atoms of hydrogen, called ENAs, that zip away from the heliosheath in all directions. Some of these atoms pass near Earth, where IBEX records their arrival direction and energy. As the spacecraft slowly spins, the detectors gradually build up pictures of the ENAs as they arrive from all over the sky.
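Conceptually, the map-making amounts to binning each detected atom by its arrival direction as the spinning spacecraft sweeps the sky. A toy Python sketch with made-up detections and arbitrary bin sizes:

```python
import numpy as np

# Accumulate stand-in ENA detections into a coarse all-sky count map,
# binned by ecliptic longitude and latitude. All numbers are illustrative.
lon_edges = np.linspace(0, 360, 61)    # 6-degree longitude bins
lat_edges = np.linspace(-90, 90, 31)   # 6-degree latitude bins

rng = np.random.default_rng(0)
arrival_lon = rng.uniform(0, 360, size=10_000)
arrival_lat = np.degrees(np.arcsin(rng.uniform(-1, 1, size=10_000)))

sky_map, _, _ = np.histogram2d(arrival_lon, arrival_lat,
                               bins=[lon_edges, lat_edges])
print(sky_map.shape)  # (60, 30) counts per sky bin
```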
Mission scientists got their first surprise six months after launch, once the spacecraft had scanned enough overlapping strips of sky to create a complete 360° map. Instead of recording a relatively even distribution all the way around, as expected, IBEX found that the counts of ENAs -- and thus the strength of the interaction in the heliosheath -- varied dramatically from place to place. The detectors even discovered a long, enhanced "ribbon," accentuated by an especially intense hotspot or "knot," arcing across the sky. (IBEX Explores Galactic Frontier, Releases First-Ever All-Sky Map)
Now scientists have finished assembling a second complete sweep around the sky, and IBEX has again delivered an unexpected result: the map has changed significantly. Overall, the intensity of ENAs has dropped 10% to 15%, and the hotspot has diminished and spread out along the ribbon. Details of these findings appear in the September 27th issue of the Journal of Geophysical Research (Space Physics).
"We thought we might detect small changes occurring gradually throughout the Sun's 11-year-long activity cycle, but not over just 6 months," notes David McComas (Southwest Research Institute), principal investigator for the IBEX mission and the paper's lead author. "These observations show that the interaction of the Sun with the interstellar medium is far more dynamic and variable than anyone envisioned."
In the past, space physicists had little notion of what to expect along the boundary where the Sun's own magnetic bubble, the heliosphere, meets interstellar space. Even though the solar wind travels outward at roughly a million miles per hour, it still takes about a year to reach the heliosphere's edge. Also, the encounter zone within the heliosheath is believed to be several billion miles thick (roughly Pluto's distance from the Sun). Finally, the ENAs take another six months to many years to complete the return trip back to Earth, depending on their direction and energy.
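A quick back-of-the-envelope check of the outbound figure, using the roughly 10-billion-mile distance quoted earlier:

```python
# Solar wind travel time to the heliosheath, from the numbers in the article.
boundary_miles = 10e9    # ~10 billion miles
solar_wind_mph = 1e6     # ~1 million miles per hour

years = boundary_miles / solar_wind_mph / 24 / 365.25
print(f"outbound trip: {years:.2f} years")  # ~1.1 years, i.e. "about a year"
```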
With ENAs starting out from such a wide range of distances and traveling back toward Earth at different speeds, IBEX mission scientists had expected that any highs and lows in intensity arising within the heliosheath would be hopelessly smeared out in the spacecraft's all-sky maps. So they're elated by the variations and changes seen so far by IBEX. These early results hint that the solar wind and the interstellar flow might be interacting in a thinner layer than many researchers had imagined possible.
McComas says the dropoff in intensity between the two all-sky maps perhaps makes sense, because the Sun is only now emerging from an unusually long period of very low activity and a correspondingly weak solar wind. The fewer the solar-wind particles that reached the heliosheath in recent years, the fewer the ENAs that got created. "We didn't plan it this way," says McComas, "but it's an almost perfect situation, in that we're seeing the interaction in its simplest state -- before trying to interpret what turns out to be a much more complicated interaction than anticipated."
If IBEX remains healthy, and if the team gets approval to continue well past its planned two-year mission, then the changes it's seeing in the distant heliosheath should become more dramatic as solar activity ramps up later in this decade.
"The surprising results from IBEX show that there is still exciting science that can be done with small missions," comments Eric Christian, a member of the spacecraft's research team and the program's Deputy Mission Scientist at the Goddard Space Flight Center. "This is clearly a huge success for the Explorer program." IBEX is one of a dozen Explorer-class missions operated by NASA's Science Mission Directorate.
"The public might think that scientists make measurements and instantly know what's going on, but that is not how science really works," McComas observes. "We thought the outer heliosphere would be stable over time -- and IBEX is showing us that it's not. This is changing the game completely."

Growing Nanowires Horizontally Yields New Benefit: 'Nano-LEDs'


Graphic illustrates a single row of nanowires (cylinders with red tops) with fin-shaped nanowalls extending outward. The transmission electron microscope image shows four rows of nanowires and their corresponding nanowalls, nicknamed "nano-LEDs" because they emit light when electrically charged. The distance across the micrograph is approximately the diameter of a human hair.
While refining their novel method for making nanoscale wires, chemists at the National Institute of Standards and Technology (NIST) discovered an unexpected bonus -- a new way to create nanowires that produce light similar to that from light-emitting diodes (LEDs). These "nano-LEDs" may one day have their light-emission abilities put to work serving miniature devices such as nanogenerators or lab-on-a-chip systems.
Nanowires typically are "grown" by the controlled deposition of molecules -- zinc oxide, for example -- from a gas onto a base material, a process called chemical vapor deposition (CVD). Most CVD techniques form nanowires that rise vertically from the surface like brush bristles. Because the wire only contacts the substrate at one end, it tends not to share characteristics with the substrate material -- a less-than-preferred trait because the exact composition of the nanowire will then be hard to define. Vertical growth also produces a dense forest of nanowires, making it difficult to find and re-position individual wires of superior quality. To remedy these shortcomings, NIST chemists Babak Nikoobakht and Andrew Herzing developed a "surface-directed" method for growing nanowires horizontally across the substrate.
Like many vertical growth CVD methods, the NIST fabrication technique uses gold as a catalyst for crystal formation. The difference is that the gold deposited in the NIST method is heated to 900 degrees Celsius (1,652 degrees Fahrenheit), converting it to a nanoparticle that serves as growth site and medium for the crystallization of zinc oxide molecules. As the zinc oxide nanocrystal grows, it pushes the gold nanoparticle along the surface of the substrate (in this experiment, gallium nitride) to form a nanowire that grows horizontally across the substrate and so exhibits properties strongly influenced by its base material.
In recent work published in ACS Nano, Nikoobakht and Herzing increased the thickness of the gold catalyst nanoparticle from less than 8 nanometers to approximately 20 nanometers. The change resulted in nanowires that grew a secondary structure, a shark-like "dorsal fin" (referred to as a "nanowall") where the zinc oxide portion is electron-rich and the gallium nitride portion is electron-poor. The interface between these two materials -- known as a p-n heterojunction -- allows electrons to flow across it when the nanowire-nanowall combination is charged with electricity. In turn, the movement of electrons produces light, which led the researchers to dub the device a "nano-LED."
Unlike previous techniques for producing heterojunctions, the NIST "surface-directed" fabrication method makes it easy to locate individual heterojunctions on the surface. This feature is especially useful when a large number of heterojunctions must be grouped in an array so that they can be electrically charged as a light-emitting unit.
Transmission electron microscope (TEM) examination of the zinc oxide-gallium nitride nanowires and nanowalls revealed few structural defects in the nanowires and very distinct p-n heterojunctions in the nanowalls, both affirmations of the effectiveness of the NIST "surface directed" fabrication method.
Nikoobakht and Herzing hope to improve the nano-LEDs in future experiments using better geometry and material designs, and then apply them in the development of light sources and detectors useful in photonic devices or lab-on-a-chip platforms.

Thursday, September 30, 2010

Carbon Nanoobjects to Facilitate the Construction of Futuristic Power Sources


Modern electrodes covered with carbon nanolayers are generated in the Institute of Physical Chemistry of the Polish Academy of Sciences.
Scientists from the Institute of Physical Chemistry of the Polish Academy of Sciences in Warsaw are working on electrodes that have surfaces covered with layers of carbon nanoparticles and enzymes. These electrodes can be used to produce modern sensors and power sources, including such futuristic ones as biological fuel cells installed inside the human body and fueled by substances contained in blood.
One of the most popular methods of covering surfaces with nanoparticles is the Layer-by-Layer (LbL) method, known since 1997. In this method, a substrate is covered with successive layers of objects carrying opposite electric charges. It is applied in particular to create three-dimensional structures on electrode surfaces, made either of polymers only or of alternating layers of polymers and nanoparticles. "For some time it has been known that many electrode reactions proceed faster, more efficiently and more selectively on surfaces covered by, for instance, nanoparticles of gold or carbon. So, we decided to construct structures consisting of nanoparticles only and examine how they affect the properties of electrodes after they have been further modified by enzymes," says Prof. Marcin Opałło from the Institute of Physical Chemistry of the Polish Academy of Sciences (IPC PAS).

Electrodes covered by thin layers of carbon nanoparticles could be applied to, among other things, biological fuel cells used as power sources for medical devices placed in the human body. Currently, replacing the power source in a device such as a pacemaker requires invasive surgery. Scientists worldwide have therefore been trying to create a cell fuelled by substances dissolved in blood: glucose, for example, together with an oxidizing agent -- oxygen -- which is also present in blood. The task is difficult because a conductive support must be found that allows the enzyme to be deposited permanently, in such a way that it exchanges electrons directly with the electrode. This was finally achieved at the IPC PAS by depositing carbon nanoparticles on the electrode. "The result is surprising because the enzyme usually requires additional compounds -- electron shuttles dissolved in the solution. There are simply no such compounds in our experiments," says Katarzyna Szot, a PhD student at the IPC PAS. It is particularly important that the electrodes under study work in solutions with a composition similar to that of blood plasma.
Carbon nanoparticles used in the experiments carried out in the IPC PAS are smaller than 10 nanometers. In the context of future applications, it is important that they are cheap and easily accessible. The process of covering the substrate -- in the experiments these are glass plates with a layer of an electric conductor -- is simple and quick. The plate is immersed for one minute in the suspension of carbon nanoparticles, then it is taken out, rinsed, moved to another suspension to deposit a subsequent layer, and all the actions are repeated several times. The finished carbon layers are about 100 nanometers thick.
As the carbon nanoparticles themselves are small, the spaces between them are also very small, which hinders access to active surfaces located deeper in the layer. As a result, the electrodes are not as efficient as they could be. In the future, it will probably be possible to improve their properties by modifying the layer-deposition process. At the Institute of Physical Chemistry of the PAS, experiments are just beginning on depositing carbon nanoparticle layers in the presence of small polystyrene balls with diameters of several hundred nanometers. After each layer has been created, the scientists intend to coat it with polymer to strengthen the structure mechanically, and then wash away the balls. The multilayer structure obtained in this way would have larger pores facilitating the access of oxygen, which would increase the reaction efficiency.
Electrochemical sensors will probably become another application area for electrodes with carbon nanoparticle layers -- e.g. for determining the level of dopamine in the presence of ascorbic acid and uric acid. This problem is considered serious in analytical chemistry, since the latter two substances hinder the analysis of samples because their electrochemical signals overlap with dopamine's. Covering the electrodes with nanoparticles allows the signals to be separated and the sensitivity to be increased.
In addition to carbon nanolayers, scientists from Prof. Opałło's team are creating, in a similar way, three-dimensional structures from nanoparticles of metals and metal oxides, carbon nanotubes and nanoparticles of modified glass.

New Oil Detection Technique


CSIRO scientist, Sean Forrester, demonstrates how easily the technique can be used to detect oil in soil.
CSIRO scientists have developed a revolutionary technique for the rapid on-site detection and quantification of petroleum hydrocarbons (commonly derived from crude oil) in soil, silt, sediment, or rock.
Developed in collaboration with waste technology specialist, Ziltek Pty Ltd, the technique means that the presence of petroleum hydrocarbons can now be quantified simply by using a hand-held infrared spectrometer to take readings at the site of interest, without the need to take samples or perform any kind of processing.
The technique could be used for oil exploration purposes. It will also be particularly useful in assessing and monitoring contaminated sites such as coastal land following off-shore oil spills and industrial sites planned for urban redevelopment.
"Petroleum hydrocarbons are a valuable resource, but can also be pretty nasty environmental contaminants," says CSIRO scientist, Sean Forrester.
"They can remain in the environment for extended periods of time and can be harmful to wildlife, plants and humans. Better tools to detect them makes a rapid response possible."
The technique uses an infrared signal to detect the presence of petroleum hydrocarbons in samples.
By contrast, current methods use sampling and processing techniques that are labour intensive, time consuming, require sensitive equipment and are not well suited to on-site analysis.
"The ability of this new technique to rapidly detect the presence of contaminants at the site has the potential to provide significant cost advantages, in terms of reduced testing costs and the avoidance of delays," Mr Forrester says.
"Rapid analysis allows immediate measures to be undertaken to prevent further contamination or to limit contaminant spread."
A significant portion of the time and financial costs involved in assessing and remediating contaminated sites is consumed by monitoring and analysis.
By decreasing analysis time and reducing costs this new technique can assist in the fast and effective identification of oil and other petroleum products in the environment, as well as treatment and protection of environmental assets threatened by petroleum contamination.

One-Dimensional Window on Superconductivity, Magnetism: Atoms Are Proxies for Electrons in Ultracold Optical Emulator


Rice University graduate student Yean-an Liao created a precise analog of a one-dimensional superconducting wire by trapping ultracold lithium atoms in a grid of laser beams.

A Rice University-led team of physicists is reporting the first success in a three-year effort to build a precision simulator for superconductors using a grid of intersecting laser beams and ultracold atomic gas.
The research appears in the journal Nature. Using lithium atoms cooled to within a few billionths of a degree of absolute zero and loaded into optical tubes, the researchers created a precise analog of a one-dimensional superconducting wire.
Because the atoms in the experiment are so cold, they behave according to the same quantum mechanical rules that dictate how electrons behave. That means the lithium atoms can serve as stand-ins for electrons, and by trapping and holding the lithium atoms in beams of light, researchers can observe how electrons would behave in particular types of superconductors and other materials.
"We can tune the spacing and interactions among these ultracold atoms with great precision, so much so that using the atoms to emulate exotic materials like superconductors can teach us some things we couldn't learn by studying the superconductors themselves," said study co-author Randy Hulet, a Rice physicist who's leading a team of physicists at Rice and six other universities under the Defense Advanced Research Projects Agency's (DARPA) Optical Lattice Emulator (OLE) program.
In the Nature study, Hulet, Cornell University physicist Erich Mueller, Rice graduate students and postdoctoral researchers Yean-an Liao, Sophie Rittner, Tobias Paprotta, Wenhui Li and Gutherie Partridge and Cornell graduate student Stefan Baur created an emulator that allowed them to simultaneously examine superconductivity and magnetism -- phenomena that do not generally coexist.
Superconductivity occurs when electrons flow in a material without the friction that causes electrical resistance. Superconductivity usually happens at very low temperatures when pairs of electrons join together in a dance that lets them avoid the subatomic bumps that cause friction.
Magnetism derives from one of the basic properties of all electrons -- the fact that they rotate around their own axis. This property, which is called "spin," is inherent; like the color of someone's eyes, it never changes. Electron spin also comes in only two orientations, up or down, and magnetic materials are those where the number of electrons with up spins differs from the number with down spins, leaving a "net magnetic moment."
"Generally, magnetism destroys superconductivity because changing the relative number of up and down spins disrupts the basic mechanism of superconductivity," Hulet said. "But in 1964, a group of physicists predicted that a magnetic superconductor could be formed under an exotic set of circumstances where a net magnetic moment arose out of a periodic pattern of excess spins and pairs."
Dubbed the "FFLO" state in honor of the theorists who proposed it -- Fulde, Ferrell, Larkin and Ovchinnikov -- this state of matter has defied conclusive experimental observation for 46 years. Hulet said the new study paves the way for direct observation of the FFLO state.
"The evidence that we've gathered meets the criteria of the FFLO state, but we can't say for certain that we have observed it. To do that, we need to precisely measure the distribution of velocities of the pairs to confirm that they follow the FFLO relationship. We're working on that now."
The research was funded by the Army Research Office with funds from the DARPA OLE program, the National Science Foundation, the Office of Naval Research, the Welch Foundation and the Keck Foundation.

Wednesday, September 29, 2010

Putting Waste Heat from Electronics to Good Use

A scanning electron microscope image and a rendering of Caltech's silicon nanomesh
Researchers at two different institutions have recently announced the development of technologies for converting waste heat from electronics into something useful. At the California Institute of Technology (Caltech), they’ve created a silicon nanomesh film that could collect heat from electric appliances such as computers or refrigerators and convert it to electricity. Meanwhile, their colleagues at Ohio State University (OSU) have been working with a semiconducting material that has the capacity to turn waste heat from computers into additional processing power.

Silicon nanomesh

The Caltech scientists claim that their material is much more efficient and/or inexpensive, as well as more environmentally friendly, than other thermoelectric solutions proposed to date. It takes the form of a 22-nanometer-thick sheet of silicon, containing a window screen-like matrix of 11- or 16-nanometer-wide holes that are spaced 34 nanometers apart.
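Those numbers are enough to estimate how porous the film is. The sketch below assumes a square lattice of circular holes, an assumption made purely for illustration (the actual hole arrangement is not specified here):

```python
import math

# Open-area fraction of the nanomesh for a square lattice of circular holes.
pitch_nm = 34.0
for hole_nm in (11.0, 16.0):
    open_fraction = math.pi * (hole_nm / 2) ** 2 / pitch_nm ** 2
    print(f"{hole_nm:.0f} nm holes: {open_fraction:.1%} of the film is open")
# 11 nm holes: ~8.2% open; 16 nm holes: ~17.4% open
```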
The design significantly lowers the film’s thermal conductivity, meaning heat can’t easily travel through it and escape. At the same time, however, electricity can still travel through it well. Lowered thermal conductivity combined with decent electrical conductivity has always been one of the goals of designers of thermoelectric devices.
Heat travels via packets of vibration known as phonons, and the nanomesh slows those phonons down, so their energy can be harvested before it chaotically disperses throughout the material. The researchers are now experimenting with different arrangements of holes, and different materials.

Thermo-spintronics

For some time now, researchers around the world have been trying to develop electronics that utilize spinning electrons to read and write data – a concept known as spintronics. Unlike traditional circuits, spintronics would supposedly not create any heat. At OSU, scientists have been experimenting with using a semiconducting material to convert heat into electron-spinning energy. This means, theoretically, that the heat generated by a computer could be used to provide more processing power or memory for that same computer.
In 2008, researchers at Japan's Tohoku University showed how heat could be converted into spin polarization. The not-entirely-understood phenomenon is known as the spin-Seebeck effect. In that demonstration, the researchers used a simple piece of metal.
The OSU researchers have duplicated the results from Japan, but using semiconducting gallium manganese arsenide, which would be better-suited for use in computers.
Such a “thermo-spintronic” approach would simultaneously address two challenges facing computer designers, namely waste heat removal, and the difficulty in obtaining more computing power without creating more heat.

Solar Cells Thinner Than Wavelengths of Light Hold Huge Power Potential


This schematic diagram of a thin film organic solar cell shows the top layer, a patterned, roughened scattering layer, in green. The organic thin film layer, shown in red, is where light is trapped and electrical current is generated. The film is sandwiched between two layers that help keep light contained within the thin film.
Ultra-thin solar cells can absorb sunlight more efficiently than the thicker, more expensive-to-make silicon cells used today, because light behaves differently at scales around a nanometer (a billionth of a meter), say Stanford engineers. They calculate that by properly configuring the thicknesses of several thin layers of films, an organic polymer thin film could absorb as much as 10 times more energy from sunlight than was thought possible.

In the smooth, white, bunny-suited clean-room world of silicon wafers and solar cells, it turns out that a little roughness may go a long way, perhaps all the way to making solar power an affordable energy source, say Stanford engineers.
Their research shows that light ricocheting around inside the polymer film of a solar cell behaves differently when the film is ultra thin. A film that's nanoscale-thin and has been roughed up a bit can absorb more than 10 times the energy predicted by conventional theory.
The key to overcoming the theoretical limit lies in keeping sunlight in the grip of the solar cell long enough to squeeze the maximum amount of energy from it, using a technique called "light trapping." It's the same as if you were using hamsters running on little wheels to generate your electricity -- you'd want each hamster to log as many miles as possible before it jumped off and ran away.
"The longer a photon of light is in the solar cell, the better chance the photon can get absorbed," said Shanhui Fan, associate professor of electrical engineering. The efficiency with which a given material absorbs sunlight is critically important in determining the overall efficiency of solar energy conversion. Fan is senior author of a paper describing the work published online by Proceedings of the National Academy of Sciences.
Light trapping has been used for several decades with silicon solar cells and is done by roughening the surface of the silicon to cause incoming light to bounce around inside the cell for a while after it penetrates, rather than reflecting right back out as it does off a mirror. But over the years, no matter how much researchers tinkered with the technique, they couldn't boost the efficiency of typical "macroscale" silicon cells beyond a certain amount.
Eventually the scientists realized that there was a physical limit related to the speed at which light travels within a given material.
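That conventional limit is usually quoted as an absorption-enhancement factor of 4n², where n is the material's refractive index. A quick illustration, with representative index values that are not taken from the paper:

```python
# Conventional ("ergodic") light-trapping limit: enhancement factor ~ 4*n^2.
for material, n in [("crystalline silicon", 3.5), ("organic polymer", 1.8)]:
    print(f"{material} (n = {n}): conventional limit ~{4 * n ** 2:.0f}x")
# The Stanford result described below is a further ~12-fold gain relative to
# this kind of macroscale limit, achieved in deep-subwavelength films.
```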
But light has a dual nature, sometimes behaving as a solid particle (a photon) and other times as a wave of energy, and Fan and postdoctoral researcher Zongfu Yu decided to explore whether the conventional limit on light trapping held true in a nanoscale setting. Yu is the lead author of the PNAS paper.
"We all used to think of light as going in a straight line," Fan said. "For example, a ray of light hits a mirror, it bounces and you see another light ray. That is the typical way we think about light in the macroscopic world.
"But if you go down to the nanoscales that we are interested in, hundreds of millionths of a millimeter in scale, it turns out the wave characteristic really becomes important."
Visible light has wavelengths around 400 to 700 nanometers (billionths of a meter), but even at that small scale, Fan said, many of the structures that Yu analyzed had a theoretical limit comparable to the conventional limit proven by experiment.
"One of the surprises with this work was discovering just how robust the conventional limit is," Fan said.
It was only when Yu began investigating the behavior of light inside a material of deep subwavelength-scale -- substantially smaller than the wavelength of the light -- that it became evident to him that light could be confined for a longer time, increasing energy absorption beyond the conventional limit at the macroscale.
"The amount of benefit of nanoscale confinement we have shown here really is surprising," said Yu. "Overcoming the conventional limit opens a new door to designing highly efficient solar cells."
Yu determined through numerical simulations that the most effective structure for capitalizing on the benefits of nanoscale confinement was a combination of several different types of layers around an organic thin film.
He sandwiched the organic thin film between two layers of material -- called "cladding" layers -- that acted as confining layers once the light passed through the upper one into the thin film. Atop the upper cladding layer, he placed a patterned rough-surfaced layer designed to send the incoming light off in different directions as it entered the thin film.
By varying the parameters of the different layers, he was able to achieve a 12-fold increase in the absorption of light within the thin film, compared to the macroscale limit.
Nanoscale solar cells offer savings in material costs, as the organic polymer thin films and other materials used are less expensive than silicon and, being nanoscale, the quantities required for the cells are much smaller.
The organic materials also have the advantage of being manufactured in chemical reactions in solution, rather than needing high-temperature or vacuum processing, as is required for silicon manufacture.
"Most of the research these days is looking into many different kinds of materials for solar cells," Fan said. "Where this will have a larger impact is in some of the emerging technologies; for example, in organic cells."
"If you do it right, there is enormous potential associated with it," Fan said.
Aaswath Raman, a graduate student in applied physics, also worked on the research and is a coauthor of the paper.
The project was supported by funding from the King Abdullah University of Science and Technology, which supports the Center for Advanced Molecular Photovoltaics at Stanford, and by the U.S. Department of Energy.

Scientists Obtain 'Unobtainium' for NASA's Next Space Observatory


Doug McGuffey is pictured here standing next to the Integrated Science Instrument Module (ISIM) Flight Structure.
Imagine building a car chassis without a blueprint or even a list of recommended construction materials.
In a sense, that's precisely what a team of engineers at the NASA Goddard Space Flight Center in Greenbelt, Md., did when they designed a one-of-a-kind structure that is one of nine key new technology systems of the Integrated Science Instrument Module (ISIM). Just as a chassis supports the engine and other components in a car, the ISIM will hold four highly sensitive instruments, electronics, and other shared instrument systems flying on the James Webb Space Telescope, NASA's next flagship observatory.
From scratch -- without past experience to help guide them -- the engineers designed the ISIM structure out of a never-before-manufactured composite material and proved through testing that it could withstand the super-cold temperatures it would encounter when the observatory reached its orbit 1.5 million kilometers (930,000 miles) from Earth. In fact, the ISIM structure survived temperatures that plunged as low as 27 Kelvin (-411 degrees Fahrenheit), colder than the surface of Pluto.
"It is the first large, bonded composite spacecraft structure to be exposed to such a severe environment," said Jim Pontius, ISIM lead mechanical engineer.
The 26-day test was specifically carried out to check whether the car-sized structure contracted and distorted as predicted when it cooled from room temperature to frigid extremes -- very important since the science instruments must maintain a specific location on the structure to receive light gathered by the telescope's 6.5-meter (21.3-foot) primary mirror. If the structure shrank or distorted in an unpredictable way due to the cold, the instruments no longer would be in position to gather data about everything from the first luminous glows following the big bang to the formation of star systems capable of supporting life.
"The tolerances are much looser on the Hubble Space Telescope," said Ray Ohl, a Goddard optical engineer who leads ISIM's optical integration and test. "The optical requirements for Webb are even more difficult to meet than those on Hubble."
Despite repeated cycles of testing, the truss-like assembly designed by Goddard engineers did not crack. The structure shrank as predicted by only 170 microns -- the width of a needle -- when it reached 27 Kelvin (-411 degrees Fahrenheit), comfortably within the design requirement of about 500 microns. "We certainly wouldn't have been able to realign the instruments on orbit if the structure moved too much," said ISIM Structure Project Manager Eric Johnson. "That's why we needed to make sure we had designed the right structure."
Obtaining the Unobtainium

Achieving the milestone was just one of many firsts for the Goddard team. On almost every level, "we pushed the technology envelope, from the type of material we would use to build ISIM to how we would test it once it was assembled," Pontius added. "The technology challenges are what attracted people to the program."
One of the first challenges the team tackled after NASA had named Goddard as the lead center to design and develop ISIM was identifying a structural material that would assure the instruments' precise cryogenic alignment and stability, yet survive the extreme gravitational forces experienced during launch.
An exhaustive search of the technical literature for a possible candidate material yielded nothing, leaving the team with only one alternative -- developing its own as-yet-to-be-manufactured material, which team members jokingly referred to as "unobtainium." Through mathematical modeling, the team discovered that by combining two composite materials, it could create a carbon fiber/cyanate-ester resin system that would be ideal for fabricating the structure's square tubes, which measure 75 mm (3 inches) across.
How then would engineers attach these tubes? Again through mathematical modeling, the team found it could bond the pieces together using a combination of nickel-alloy fittings, clips, and specially shaped composite plates joined with a novel adhesive process, smoothly distributing launch loads while holding the instruments in precise locations -- a difficult engineering challenge because different materials react differently to changes in temperature.
"We engineered from the small pieces to the big pieces testing along the way to see if the failure theories were correct. We were looking to see where the design could go wrong," Pontius explained. "By incorporating the lessons learned into the final flight structure, we met the requirements and test validated our building-block approach."
Making Cold, Colder

The test inside Goddard's Space Environment Simulator -- a three-story thermal-vacuum chamber that simulates the temperature and vacuum conditions found in space -- presented its own set of technological hurdles. "We weren't sure we could get the simulator cold enough," said Paul Cleveland, a technical consultant at Goddard involved in the project. For most spacecraft, the simulator's ability to cool down to 100 Kelvin (-279.7 degrees Fahrenheit) is cold enough. Not so for the Webb telescope, which will endure a constant temperature of 39 Kelvin (-389.5 degrees Fahrenheit) when it reaches its deep-space orbit.
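The temperature conversions quoted throughout this article are easy to verify:

```python
# Kelvin to Fahrenheit, checking the figures cited in the text.
def kelvin_to_fahrenheit(k):
    return k * 9.0 / 5.0 - 459.67

for k in (27, 39, 100):
    print(f"{k} K = {kelvin_to_fahrenheit(k):.1f} F")
# 27 K = -411.1 F, 39 K = -389.5 F, 100 K = -279.7 F
```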
The group engineered a giant shroud, shaped like a tuna can and cooled by helium gas, and inserted it inside the 27-foot-diameter chamber. "When you get down to these temperatures, the physics change," Cleveland said. Anything, including wires or small gaps in the chamber, can create an intractable heat source. "It's a totally different arena," he added. "One watt can raise the temperature by 20 degrees Kelvin. We had to meticulously close the gaps."
With the gaps closed and the ISIM safely lowered into the helium shroud, technicians began sucking air from the chamber to create a vacuum. They activated the simulator's nitrogen panels to cool the chamber to 100 Kelvin (-279.7 degrees Fahrenheit) and began injecting helium gas inside the shroud to chill the ISIM to the correct temperature.
To measure ISIM's reaction as it cooled to the sub-freezing temperatures, the team used a technique called photogrammetry, the science of making precise measurements by means of photography. However, using the technique wasn't so cut-and-dried when carried out in a frosty, airless environment, Ohl said. To protect two commercial-grade cameras from extreme frostbite, team members placed the equipment inside specially designed protective canisters and attached the camera assemblies to the ends of a motorized boom.
As the boom made nearly 360-degree sweeps inside the helium shroud, the cameras snapped photos through a gold-coated glass window of reflective, hockey puck-shaped targets bolted onto ISIM's composite tubes. From the photos, the team could precisely determine whether the targets moved, and if so, by how much.
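The bookkeeping behind that last step is straightforward. A sketch with made-up coordinates, assuming each target's 3-D position has been reconstructed from the photos before and after cooldown:

```python
import numpy as np

# Per-target displacement from photogrammetry, in microns. The coordinates
# below (in millimeters) are invented stand-ins, scaled so the motion matches
# the ~170-micron shrinkage reported in the article.
warm = np.array([[0.0, 0.0, 0.0], [750.00, 0.0, 0.0], [0.0, 750.00, 0.00]])
cold = np.array([[0.0, 0.0, 0.0], [749.83, 0.0, 0.0], [0.0, 749.83, 0.01]])

displacement_um = np.linalg.norm(cold - warm, axis=1) * 1000.0
print(displacement_um)  # ~[0, 170, 170] microns
```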
"It passed with flying colors," Pontius said, referring to the negligible shrinkage. "This test was a huge success for us."
With the critical milestone test behind them, team members say their work likely will serve NASA in the future. Many future science missions will also operate in deep space, and therefore would have to be tested under extreme cryogenic conditions. In the meantime, though, the facility will be used to test other Webb telescope systems, including the backplane, the structure to which the Webb telescope's 18 primary mirror segments are bolted when the observatory is assembled. "We need to characterize its bending at cryogenic temperatures," Ohl said.

Nanocatalyst Is a Gas


This is an atomic-level image of tungsten oxide nanoparticles (green circles) on zirconia support. The other circles show the less-active forms of tungsten oxide.
A nanoparticle-based catalyst developed at Rice University may give that tiger in your tank a little more roar.
A new paper in the Journal of the American Chemical Society details a process by Rice Professor Michael Wong and his colleagues that should help oil refineries make the process of manufacturing gasoline more efficient and better for the environment.
In addition, Wong said, it could produce higher-octane gasoline and save money for an industry in which a penny here and a penny there add millions to the bottom line.
Wong's team at Rice, in collaboration with labs at Lehigh University, the Centre for Research and Technology Hellas and the DCG Partnership of Texas, reported this month that sub-nanometer clusters of tungsten oxide lying on top of zirconium oxide are a highly efficient catalyst that turns straight-chain molecules of n-pentane, one of many hydrocarbons in gasoline, into better-burning branched pentane.
While the catalytic capabilities of tungsten oxide have long been known, it takes nanotechnology to maximize their potential, said Wong, a Rice professor of chemical and biomolecular engineering and of chemistry.
After the initial separation of crude oil into its basic components -- including gasoline, kerosene, heating oil, lubricants and other products -- refineries "crack" (by heating) heavier byproducts into molecules with fewer carbon atoms that can also be made into gasoline. Catalysis, a chemical process, further refines these hydrocarbons.
That's where Wong's discovery comes in. Refineries strive to make better catalysts, he said, although "compared with the academic world, industry hasn't done much in terms of new synthesis techniques, new microscopy, new biology, even new physics. But these are things we understand in the context of nanotechnology.
"We have a way to make a better catalyst that will improve the fuels they make right now. At the same time, a lot of existing chemical processes are wasteful in terms of solvents, precursors and energy. Improving a catalyst can also make the chemical process more environmentally friendly. Knock those things out, and they gain efficiencies and save money."
Wong and his team have worked for several years to find the proper mix of active tungsten oxide nanoparticles and inert zirconia. The key is to disperse nanoparticles on the zirconia support structure at the right surface coverage. "It's the Goldilocks theory -- not too much, not too little, but just right," he said. "We want to maximize the amount of these nanoparticles on the support without letting them touch.
"If we hit that sweet spot, we can see an increase of about five times in the efficiency of the catalyst. But this was very difficult to do."
No wonder. The team had to find the right chemistry, at the right high temperature, to attach particles a billionth of a meter wide to grains of zirconium oxide powder. With the right mix, the particles react with straight-chain n-pentane molecules, rearranging their five carbon and 12 hydrogen atoms in a process called isomerization.
Now that the catalyst formula is known, making the catalyst should be straightforward for industry. "Because we're not developing a whole new process -- just a component of it -- refineries should be able to plug this into their systems without much disruption," Wong said.
Maximizing gasoline is important as the world develops new sources of energy, he said. "There's a lot of talk about biofuels as a significant contributor in the future, but we need a bridge to get there. Our discovery could help by stretching current fuel-production capabilities."
Co-authors of the paper are Nikolaos Soultanidis, a Rice chemical engineering graduate student in Wong's lab; Israel Wachs, Wu Zhou and Christopher Kiely of Lehigh University; Antonis Psarras and Eleni Iliopoulou of the Centre for Research and Technology Hellas; and Alejandro Gonzalez of the DCG Partnership, Pearland, Texas.
The National Science Foundation's Nanoscale Interdisciplinary Research Team Program supported the project, with additional support from SABIC Americas and 3M.

Tuesday, September 28, 2010

Is the Shweeb Ready for Prime Time?

The name Shweeb derives from the German "schweben," meaning to float or hover -- a logical name for a device invented by a New Zealander living in Tokyo. Commenters had issues when we wrote about it last year, but somebody likes it: it just won a million bucks in Google's Project 10^100 competition. Google writes:
Shweeb is a concept for short to medium distance, urban personal transport, using human-powered vehicles on a monorail. We are providing $1 million to fund research and development to test Shweeb's technology for an urban setting.
There are benefits over a regular bike, including an aerodynamic shell, protection from the weather, and the increased speed that comes from the drafting effect when the Shweeb cars move together.
But there are problems too, such as what happens if you get stuck behind someone very slow, or the previous user was really sweaty and smelly. More at Shweeb, via Inhabitat
I think that Chris Hardwicke's Velo-City, a network of elevated bicycle highways, is perhaps a better option. It is "a high speed, all season, pollution free, ultra-quiet transit system that makes people healthier. Using an infrastructure of elevated cycle tracks, velo-city creates a network across the City." You can pass, and you can use your own bike.
But personal preferences aside, it is a great step forward for human-powered transport.

Every Argument You Ever Wanted to Have About Geoengineering



Geoengineering a disaster?
Ah, geoengineering -- the topic every climate-conscious lady or gent has a strong opinion on. It brings up all those fun questions: If things get bad enough, like if man continues to emit greenhouse gases into the atmosphere at alarming rates -- can we use science to hack the planet? Is it worth investing big research bucks in, just in case? What would be the best route: Something innocuous like white roofing or tree planting projects, or something more sci-fi wet dream-esque, like launching hordes of tiny reflectors into space? Or would funding geoengineering divert resources from the more important task of seriously curbing emissions in the first place, and give folks the impression that cutting carbon isn't a priority? Or is the whole thing just too damn dangerous and unwieldy to warrant discussion in the first place? I've got all the answers, after the jump.
Kidding, of course. But there's one thing for certain that can be picked out of the whole confounding debate -- this is something we need to think long and hard about right now. And not just because we may have to deploy one of these geoengineering technologies -- but because we may need to know how to fight ardently against them should a proposed fix get snatched up and touted as the solution to all our woes.
I'm especially worried about this happening here in the United States -- we have a long, storied tradition of preferring quick fixes and feats of engineering marvel over making inconvenient decisions and implementing sound, far-reaching policy. If the right ever relents in its near-universal climate ignorance, I especially fear that an answer like geoengineering would be the preferred solution -- as it would be with many on the left as well, no doubt.
So let's talk about it. Last week, Slate ran a series of articles under the Future Tense banner examining geoengineering inside and out (all but the hard science of it). They looked at the political dilemma geoengineering would impose on Washington, the merits of at least having a conversation about the topic, and the dangers it poses -- both bureaucratic and to humanity in general -- and even whipped up a slightly goofy but fun interactive app where you can play around with the different geoengineering proposals. The series culminated in a symposium held on the subject today.
All of them are well worth reading (well, except one, which I will instead paraphrase for you here: Climate change is complicated, and is tied to the global economy. Geoengineering is complicated too, and should not be treated like a silver bullet. Not exactly revelatory.). Most illuminating to me was the historian's take on the legacy of geoengineering so far -- and yes, there is indeed already a history of GE, it's just not widely known or touted by its advocates. It turns out that during our little Cold War phase of toying with hydrogen bombs, we and Russia did a little geoengineering already -- to disastrous effect.
Also interesting in another piece was the idea that funding GE federally could create a self-perpetuating 'research' bureaucracy that siphons precious resources from more worthy sectors -- like climate change mitigation and adaptation, for example.
A couple of the articles point out, rightfully so, that actually managing geoengineering would be near-impossible without a functioning, cooperative global governance behind the controls, and nobody has any clue how that would be achieved -- we can't even ratify a treaty establishing emissions reduction goals, remember?
The series eventually boils down to the familiar premises, which we should continue to analyze and explore: Worth funding as a Plan B, or not? It's risky, yes, but so is failing to counter steadily rising emissions. It's dangerous, sure, but might we have no choice?
My feeling is (and the general sense I get from the articles) that GE may at some point be a reality worth considering, to certain degrees (bring on the white roofs and the tree-planting) -- but for now, our efforts should be focused, laser-like, on mitigation. It's not too late to stop catastrophic climate change from occurring, and mitigation is infinitely safer and more predictable than most of the suggestions in the GE realm. And if we could feasibly build the political will behind geoengineering, hypothetically we should be able to do so for serious carbon-reducing efforts (though GE of course has the distinct benefit of not conflicting with fossil fuel interests ...). Let's be realistic -- let's do what we know could work: Reduce emissions, and eventually the greenhouse gas concentration in the atmosphere, and slowly bring the planet's climate back towards its natural equilibrium.

Physics Breakthrough: Fast-Moving Neutral Atom Isolated and Captured


Physicists have developed a technique to entrap a fast-moving neutral atom -- and have also seen and photographed this atom for the first time.


In a major physics breakthrough, University of Otago scientists in New Zealand have developed a technique to consistently isolate and capture a fast-moving neutral atom -- and have also seen and photographed this atom for the first time.
The entrapment of the rubidium 85 atom is the result of a three-year research project funded by the Foundation for Research, Science and Technology, and has already prompted worldwide interest in the new science that will flow from the breakthrough.
A team of four researchers from Otago's Physics Department, led by Dr Mikkel F. Andersen, used laser cooling technology to dramatically slow a group of rubidium 85 atoms. A laser beam, or "optical tweezers," was then deployed to isolate and hold one atom -- at which point it could be photographed through a microscope.
The researchers then proved they could reliably and consistently produce individual trapped atoms -- a major step towards using the atoms to build next-generation, ultra-fast quantum-logic computers, which harness the potency of atoms to perform complex information-processing tasks.
Dr Andersen says that unlike conventional silicon-based computers which generally perform one task at a time, quantum computers have the potential to perform numerous long and difficult calculations simultaneously; they also have the potential to break secret codes that would usually prove too complex.
"Our method provides a way to deliver those atoms needed to build this type of computer, and it is now possible to get a set of ten atoms held or trapped at the one time.
"You need a set of 30 atoms if you want to build a quantum computer that is capable of performing certain tasks better than existing computers, so this is a big step towards successfully doing that," he says.
"It has been the dream of scientists for the past century to see into the quantum world and develop technology on the smallest scale -- the atomic scale.
"What we have done moves the frontier of what scientists can do and gives us deterministic control of the smallest building blocks in our world," Dr Andersen says.
The results of the landmark study have been announced in the journal Nature Physics.
Dr Andersen says that within three weeks of the first laboratory experiment successfully trapping the atom, new experiments previously not thought possible were underway.
The next step is to try and generate a "state of entanglement" between the atoms, a kind of atomic romance which lasts the distance, he says.
"We need to generate communication between the atoms where they can feel each other, so when they are apart they stay entangled and don't forget each other even from a distance. This is the property that a quantum computer uses to do tasks simultaneously," says Dr Andersen.
One atom is so tiny that 10 billion side by side would make a metre in length. Atoms usually move at the speed of sound, making them difficult to manipulate.
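Both scale claims check out with a little arithmetic. A minimal sketch in Python (our own numbers, not the researchers'; rubidium-85's mass and a 300 K room temperature are assumptions for illustration):

    import math

    # Implied size of a single atom from "10 billion side by side per metre":
    atoms_per_metre = 1e10
    print(f"Implied atom diameter: {1e9 / atoms_per_metre:.1f} nm")   # ~0.1 nm

    # Thermal speed of a rubidium-85 atom at room temperature (300 K):
    k_B = 1.381e-23            # Boltzmann constant, J/K
    m_rb85 = 85 * 1.661e-27    # approximate atomic mass, kg
    v = math.sqrt(3 * k_B * 300 / m_rb85)
    print(f"Thermal speed at 300 K: {v:.0f} m/s")   # ~300 m/s, near the speed of sound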
Unlike ions, neutral atoms such as rubidium 85 are notoriously difficult to pin down because they cannot be held by electric fields. To date, only two other types of neutral atom have been seen and photographed by scientists: rubidium 87 and caesium 133.
Dr Andersen says that for him personally, the breakthrough has been a major milestone.
"I learnt at elementary school that it is impossible to see a single atom through a microscope. Well, my elementary school teacher was wrong," he says.
The other members of Dr Andersen's team are Tzahi Grünzweig, Andrew Hilliard and Matt McGovern.

Gigantic Mirror for X-Radiation in Outer Space


Study of the planned X-ray telescope IXO.
It is set to become the largest X-ray telescope ever: the International X-Ray Observatory (IXO), planned in a cooperation between NASA, ESA and Japan's aerospace exploration agency JAXA, will be launched into space in 2021 and provide the world with brand-new information about black holes and, thus, about the origin of the universe.
Its dimensions are gigantic: the mirror surface alone, which is to capture, for example, the cosmic X-radiation of black holes, will be 1300 square metres in size. It will consist of commercially available silicon wafers with millimetre-sized pores underneath. The quality of these "hidden" surfaces will be tested at the Physikalisch-Technische Bundesanstalt (PTB) with a monochromatic X-ray pencil beam. The new measuring device has been installed at PTB's synchrotron radiation laboratory at BESSY II in Berlin-Adlershof.
eROSITA will do the preliminary work. The German-Russian experiment, under the auspices of the Max Planck Institute for Extraterrestrial Physics, will be launched into space in 2013. With the aid of a bundle of seven X-ray telescopes, eROSITA will search the whole sky for a specific kind of black hole: supermassive black holes that formed at the dawn of the universe -- probably even before the first stars. Scientists expect that, among other things, approximately three million new black holes will be found with this mission, providing for the first time a complete overview of the formation and development of supermassive black holes. IXO will then be responsible for their systematic investigation.
In addition, the new space telescope is to provide much new information about neutron stars and stellar black holes -- the second type of black hole, which forms when especially massive stars explode. Because such a venture is extremely expensive, the space agencies of the USA, Europe and Japan decided in 2008 to pursue this joint project rather than three individual missions.
IXO can capture the X-radiation of very distant black holes because this kind of radiation penetrates cosmic dust -- the most common obstacle along the way -- unhindered. For that purpose, the mirror in the telescope must be very large, yet light. IXO will have a single mirror with a collecting area of approx. 3 m2, a focal length of 20 m and an angular resolution of better than 5 arc seconds. Because of the required grazing radiation incidence, the total mirror surface must be approx. 1300 m2. To build such a large surface that is both stable and light, the undersides of commercially available, highly polished silicon wafers will be given ribs so that the wafers can be stacked into rigid blocks. This forms pores with a cross-section of approximately 1 mm2, inside which the radiation is reflected off the surface of the wafer below. With respect to tangent errors and roughness, the quality of these "hidden" surfaces cannot be inspected from above as usual, but must be determined in the intended application geometry, with X-ray reflection at grazing incidence angles of approximately one degree. To investigate the reflecting surface of single pores, an X-ray pencil beam is required.
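The gap between 1300 m2 of physical mirror and roughly 3 m2 of collecting area follows from the grazing geometry, and a back-of-the-envelope estimate shows why (our own arithmetic, assuming a single reflection at the ~1° grazing angle quoted above; real pore optics lose more):

    import math

    grazing_deg = 1.0     # approximate grazing incidence angle from the article
    mirror_m2 = 1300.0    # total physical mirror surface
    projected = mirror_m2 * math.sin(math.radians(grazing_deg))
    print(f"Geometrically projected area: {projected:.0f} m^2")   # ~23 m^2
    # Reflection losses, pore walls and packing efficiency shrink this further,
    # toward the ~3 m^2 effective collecting area quoted for IXO.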
At the "X-ray pencil beam facility" (XPBF) at PTB's synchrotron radiation laboratory at BESSY II, which has recently been extended within the scope of a research cooperation with ESA, a monochromatic pencil beam with a typical diameter of 50 µm and a divergency of less than one arc second is now available for this purpose. It will characterize the X-ray lens systems for IXO at three different photon energies, i.e. at 1 keV, 2.8 keV and 7.6 keV. The lens systems can be adjusted or turned with a hexapod in vacuum with reproducibilities of 2 µm or below 1", respectively. The direct beam and the reflected beam are registered with a spatial resolving detector based on CCD at a distance of 5 m or 20 m from the lens system. For the last-mentioned distance, which corresponds to the intended focal length of IXO, a vertical movement of the CCD detector by more than 2 m has been implemented. First test measurements at this distance were already performed in May 2010, complete commissioning of the extended XPBF is planned for the beginning of November 2010.

Rain or Shine, Researchers Find New Ways to Forecast Large Photovoltaic Power Plant Output


Sandia researcher Josh Stein studies how clouds impact large-scale solar photovoltaic (PV) power plants.
Sandia National Laboratories researchers have developed a new system to monitor how clouds affect large-scale solar photovoltaic (PV) power plants. By observing cloud shape, size and movement, the system provides a way for utility companies to predict and prepare for fluctuations in power output due to changes in weather. The resulting models will provide utility companies with valuable data to assess potential power plant locations, ramp rates and power output.
Sandia researchers' work is currently focused at the 1.2-megawatt La Ola Solar Farm on the Hawaiian island of Lana'i. La Ola is the state's largest solar power system, and can produce enough power to supply up to 30 percent of the island's peak electric demand, which is one of the highest rates of solar PV power penetration in the world. Understanding variability of such a large plant is critical to ensuring that power output is reliable and that output ramp rates remain manageable.
"As solar power continues to develop and take up a larger percentage of grids nationwide, being able to forecast power production is going to become more and more critical," said Chris Lovvorn, director of alternative energy of Castle & Cooke Resorts, LLC, which owns 98 percent of the island. "Sandia's involvement and insight has been invaluable in our efforts to meet 100 percent of the island's energy needs with renewable resources."
The effects of clouds on small PV arrays are well-documented, but there is little research on how large-scale arrays interact and function under cloud cover. A small system can be completely covered by a cloud, which drastically reduces its power output, but what's less well understood is what happens when only part of a large system is covered by a moving cloud shadow, while the rest stays in sunlight.
"Our goal is to get to the point where we can predict what's going to happen at larger scale plants as they go toward hundreds of megawatts. To do that, you need the data, and the opportunity was available at La Ola," said Sandia researcher Scott Kuszmaul.
The high penetration of PV power on Lana'i, combined with the sun and cloud mix at the 10-acre La Ola plant, provides an optimal environment for prediction and modeling research. Research could not interfere with the ongoing operations of the plant, which currently sells power to Maui Electric Company (MECO), so Sandia engineers connected 24 small, nonintrusive sensors to the plant's PV panels and used a radio frequency network to transmit data. The sensors took readings at one-second intervals to provide researchers with unprecedented detail about cloud direction and coverage activity.
A radio frequency transmission system has the added benefit of being portable. "Currently, a utility company that wants to build a large solar PV power plant might have a lot of questions about the plant's output and variability at a proposed site. Work being done at the La Ola plant is leading to new methods that eventually can be used to answer these questions," said Sandia researcher Josh Stein. "These techniques will allow a developer to place a sensor network at a proposed site, make measurements for a period of time and use that to predict plant output variability."
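To make the ramp-rate idea concrete, here is a minimal sketch of the calculation a utility might run on one-second output samples (the data below are synthetic stand-ins, not La Ola measurements, and the code is ours rather than Sandia's):

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic one-second samples of plant output (MW) over one hour.
    power_mw = 1.2 * np.clip(0.8 + np.cumsum(rng.normal(0, 0.002, 3600)), 0.0, 1.0)

    window = 60   # one-minute ramp window, in one-second samples
    ramps = power_mw[window:] - power_mw[:-window]
    print(f"Largest one-minute drop: {ramps.min():.3f} MW")
    print(f"Largest one-minute rise: {ramps.max():.3f} MW")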
La Ola was commissioned in December 2008 by Castle & Cooke Resorts, LLC, and SunPower Corp., a manufacturer of high-efficiency solar cells. The project uses SunPower's Tracker technology. Panels rotate on a single axis to follow the sun, which increases energy capture by up to 25 percent. Since February, Sandia Labs has held a cooperative research and development agreement (CRADA) with SunPower to conduct research on integrating large-scale PV systems into the grid. The CRADA is funded with about $1 million of combined U.S. Department of Energy and SunPower funding and is expected to achieve significant results, which will be disseminated through joint publications over the next two years.
For more information about Sandia's photovoltaic work, please visit: www.sandia.gov/pv.

A Shot to the Heart: Nanoneedle Delivers Quantum Dots to Cell Nucleus


University of Illinois researchers developed a nanoneedle that releases quantum dots directly into the nucleus of a living cell when a small electrical charge is applied. The quantum dots are tracked to gain information about conditions inside the nucleus.
Getting an inside look at the center of a cell can be as easy as a needle prick, thanks to University of Illinois researchers who have developed a tiny needle to deliver a shot right to a cell's nucleus.
Understanding the processes inside the nucleus of a cell, which houses DNA and is the site for transcribing genes, could lead to greater comprehension of genetics and the factors that regulate expression. Scientists have used proteins or dyes to track activity in the nucleus, but those can be large and tend to be sensitive to light, making them hard to use with simple microscopy techniques.
Researchers have been exploring a class of nanoparticles called quantum dots, tiny specks of semiconductor material only a few molecules big that can be used to monitor microscopic processes and cellular conditions. Quantum dots offer the advantages of small size, bright fluorescence for easy tracking, and excellent stability in light.
"Lots of people rely on quantum dots to monitor biological processes and gain information about the cellular environment. But getting quantum dots into a cell for advanced applications is a problem," said professor Min-Feng Yu, a professor of mechanical science and engineering.
Getting any type of molecule into the nucleus is even trickier, because it's surrounded by an additional membrane that prevents most molecules in the cell from entering.
Yu worked with fellow mechanical science and engineering professor Ning Wang and postdoctoral researcher Kyungsuk Yum to develop a nanoneedle that also served as an electrode that could deliver quantum dots directly into the nucleus of a cell -- specifically to a pinpointed location within the nucleus. The researchers can then learn a lot about the physical conditions inside the nucleus by monitoring the quantum dots with a standard fluorescent microscope.
"This technique allows us to physically access the internal environment inside a cell," Yu said. "It's almost like a surgical tool that allows us to 'operate' inside the cell."
The group coated a single nanotube, only 50 nanometers wide, with a very thin layer of gold, creating a nanoscale electrode probe. They then loaded the needle with quantum dots. A small electrical charge releases the quantum dots from the needle. This provides a level of control not achievable by other molecular delivery methods, which involve gradual diffusion throughout the cell and into the nucleus.
"Now we can use electrical potential to control the release of the molecules attached on the probe," Yu said. "We can insert the nanoneedle in a specific location and wait for a specific point in a biologic process, and then release the quantum dots. Previous techniques cannot do that."
Because the needle is so small, it can pierce a cell with minimal disruption, while other injection techniques can be very damaging to a cell. Researchers also can use this technique to accurately deliver the quantum dots to a very specific target to study activity in certain regions of the nucleus, or potentially other cellular organelles.
"Location is very important in cellular functions," Wang said. "Using the nanoneedle approach you can get to a very specific location within the nucleus. That's a key advantage of this method." The new technique opens up new avenues for study. The team hopes to continue to refine the nanoneedle, both as an electrode and as a molecular delivery system.
They hope to explore using the needle to deliver other types of molecules as well -- DNA fragments, proteins, enzymes and others -- that could be used to study a myriad of cellular processes.
"It's an all-in-one tool," Wang said. "There are three main types of processes in the cell: chemical, electrical, and mechanical. This has all three: It's a mechanical probe, an electrode, and a chemical delivery system."
The team's findings will appear in the Oct. 4 edition of the journal Small. The National Institutes of Health and the National Science Foundation supported this work.

Quantum Information Systems: Researchers Convert Signals to Telecom Wavelengths, Increase Memory Times


Researchers Alex Radnaev and Jacob Blumoff (standing) and Yaroslav Dudin collect data for a study of quantum information systems at the Georgia Institute of Technology.

Using optically dense, ultra-cold clouds of rubidium atoms, researchers have made advances in three key elements needed for quantum information systems -- including a technique for converting photons carrying quantum data to wavelengths that can be transmitted long distances on optical fiber telecom networks.


The developments move quantum information networks -- which securely encode information by entangling photons and atoms -- closer to a possible prototype system.

Researchers at the Georgia Institute of Technology reported the findings Sept. 26 in the journal Nature Physics, and in a manuscript submitted for publication in the journal Physical Review Letters. The research was sponsored by the Air Force Office of Scientific Research, the Office of Naval Research and the National Science Foundation.

The advances include:

* Development of an efficient, low-noise system for converting photons carrying quantum information at infrared wavelengths to longer wavelengths suitable for transmission on conventional telecommunications systems. The researchers have demonstrated that the system, believed to be the first of its kind, maintains the entangled information during conversion to telecom wavelengths -- and back down to the original infrared wavelengths.
* A significant improvement in the length of time that a quantum repeater -- which would be necessary to transmit the information -- can maintain the information in memory. The Georgia Tech team reported memory lasting as long as 0.1 second, 30 times longer than previously reported for systems based on cold neutral atoms and approaching the quantum memory goal of at least one second -- long enough to transmit the information to the next node in the network (see the quick estimate after this list).
* An efficient, low-noise system able to convert photons of telecom wavelengths back to infrared wavelengths. Such a system would be necessary for detecting entangled photons transmitted by a quantum information system.
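A quick estimate shows why a 0.1-second memory is already in the right regime (our own figures; light in fiber travels at roughly two-thirds of its vacuum speed, a standard approximation not taken from the paper):

    speed_in_fiber = 2.0e8   # m/s, roughly two-thirds the vacuum speed of light
    memory_time = 0.1        # seconds, the storage time reported above
    print(f"Fiber distance covered: {speed_in_fiber * memory_time / 1e3:.0f} km")
    # ~20,000 km -- far beyond typical repeater spacing, so the memory can
    # comfortably outlast a photon's hop to the next node.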

"This is the first system in which such a long memory time has been integrated with the ability to transmit at telecom wavelengths," said Brian Kennedy, a co-author of the Nature Physics paper and a professor in the Georgia Tech School of Physics. "We now have the crucial aspects needed for a quantum repeater."

The conversion technique addresses a long-standing issue facing quantum networks: the wavelengths most useful for creating quantum memory aren't the best for transmitting that information across optical telecommunications networks. Wavelengths of approximately 1.3 microns can be transmitted in optical fiber with the lowest absorption, but the ideal wavelength for storage is 795 nanometers.

The wavelength conversion takes place in a sophisticated system that uses a cloud of rubidium atoms packed closely together in gaseous form to maximize the likelihood of interaction with photons entering the samples. Two separate laser beams excite the rubidium atoms, which are held in a cigar-shaped magneto-optical trap about six millimeters long. The setup creates a four-wave mixing process that changes the wavelength of photons entering it.
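Whatever the details of the pump scheme (the article does not give the pump wavelengths), energy conservation fixes the frequency bookkeeping. A small illustration:

    c = 2.998e8   # speed of light, m/s
    f_storage = c / 795e-9   # ~377 THz, the 795 nm memory wavelength
    f_telecom = c / 1.3e-6   # ~231 THz, the 1.3 micron telecom wavelength
    print(f"Difference the pump fields must supply: {(f_storage - f_telecom) / 1e12:.0f} THz")
    # In four-wave mixing the two pump beams make up exactly this difference,
    # so total photon energy is conserved while the wavelength is shifted.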

"One photon of infrared light going in becomes one photon of telecom light going out," said Alex Kuzmich, an associate professor in the Georgia Tech School of Physics and another of the Nature Physics paper's co-authors. "To preserve the quantum entanglement, our conversion is done at very high efficiency and with low noise."

By changing the shape, size and density of the rubidium cloud, the researchers have been able to boost efficiency as high as 65 percent. "We learned that the efficiency of the system scales up rather quickly with the size of the trap and the number of atoms," Kuzmich said. "We spent a lot of time to make a really dense optical sample. That dramatically improved the efficiency and was a big factor in making this work."

The four-wave mixing process does not add noise to the signal, which allows the system to maintain the information encoded onto photons by the quantum memory. "There are multiple parameters that affect this process, and we had to work hard to find the optimal set," noted Alexander Radnaev, another co-author of the Nature Physics paper.

Once the photons are converted to telecom wavelengths, they move through optical fiber -- and loop back into the magneto-optical trap. They are then converted back to infrared wavelengths for testing to verify that the entanglement has been maintained. That second conversion turns the rubidium cloud into a photon detector that is both efficient and low in noise, Kuzmich said.

Quantum memory is created when laser light is directed into a cloud of rubidium atoms confined in an optical lattice. The energy excites the atoms, and the photons scattered from the atoms carry information about that excitation. In the new Georgia Tech system, these photons carrying quantum information are then fed into the wavelength conversion system.

The research team took two different approaches to extending the quantum memory lifetime, both of which sought to mix the two levels of atoms involved in encoding the quantum information. One approach, described in the Nature Physics paper, used an optical lattice and a two-photon process. The second approach, described in the Physical Review Letters submission, used a magnetic field approach pioneered by researchers at the National Institute of Standards and Technology.

The general purpose of quantum networking is to distribute entangled qubits -- quantum bits whose measured values of "0" or "1" are correlated with each other -- over long distances. The qubits would travel as photons across the optical networks that make up the existing global telecommunications system.

Because of loss in the optical fiber that makes up these networks, repeaters must be installed at regular intervals to boost the signals. For carrying qubits, these repeaters will need quantum memory to receive the photonic signal, store it briefly, and then produce another signal that will carry the data to the next node, and on to its final destination.

"This is another significant step toward improving quantum information systems based on neutral atoms," Kuzmich said. "For quantum repeaters, most of the basic steps have now been made, but achieving the final benchmarks required for an operating system will require intensive optical engineering efforts."

In addition to those already mentioned, the research team included Y.O. Dudin, R. Zhao, H.H. Jen, J.Z. Blumoff and S.D. Jenkins.

Monday, September 27, 2010

Magnetic Anomalies: New Type of Solar Wind Interaction With Airless Bodies in Our Solar System


Spatial variation of the energetic neutral hydrogen flux over the magnetic anomaly close to the Gerasimovic crater. (a) The high-energy hydrogen flux shows a ~50% reduction inside the magnetic anomaly compared to the surrounding area. (b) Hydrogen flux at lower energies of 30-100 eV fills the magnetic anomaly. (c) The albedo (reflectivity) map of the Moon with the spacecraft trajectories (white lines).

Scientists have discovered a new type of solar wind interaction with airless bodies in our solar system. Magnetized regions called magnetic anomalies, mostly on the far side of the Moon, were found to strongly deflect the solar wind, shielding the Moon's surface. This will help scientists understand the solar wind behaviour near the lunar surface and how water may be generated in its upper layer.
Observational evidence for these findings is being presented by Drs. Yoshifumi Futaana and Martin Wieser at the European Planetary Science Congress in Rome.
Atmosphere-less bodies interact with the solar wind quite differently from the Earth: their surfaces are exposed without any shielding by a dense atmosphere or magnetosphere. They are therefore heavily weathered by meteoroids and the solar wind, which form a very rough and chaotic surface layer called regolith. Until now, the solar wind was thought to be completely absorbed by this regolith. However, recent explorations of Earth's moon by the Chang'E-1, Kaguya and Chandrayaan-1 spacecraft have revealed that the interaction is not that simple.
A significant flux of high-energy particles was found to originate from the lunar surface, most probably solar wind reflected directly off the Moon's regolith. "These results may change dramatically the way we understood the solar wind-regolith interaction so far. Since the solar wind is one potential source of water on the Moon, we need to make better models of the lunar hydrogen circulation in order to understand how water molecules form in its upper layers," says Dr. Futaana of the Swedish Institute of Space Physics. "Also, it will be possible to remotely investigate the solar wind-surface interaction on other airless bodies, such as the Martian moon Phobos or Mercury, by imaging the energetic hydrogen atoms that are reflected back to space when the solar wind hits their surface," he adds.
The current investigation was carried out with the Sub-keV Atom Reflecting Analyzer instrument which was developed in a collaboration between Sweden, India, Switzerland and Japan and flown onboard the Indian Chandrayaan-1 spacecraft. Scientists have mapped for the first time the energetic hydrogen atoms coming from the Moon, and found that up to one fifth of the solar wind protons reaching the lunar surface are reflected back to space.
This may be a general feature of the atmosphere-less bodies, such as Mercury, meteorites and several moons of the giant planets. "In fact, during the close encounter of the European Mars Express spacecraft with Phobos in 2008, we detected signatures of reflected solar wind protons also from the surface of Martian moon Phobos," says Dr. Futaana.
However, when Chandrayaan-1 flew over a magnetic anomaly (a magnetized region on the Moon's surface), the scientists detected significantly fewer reflected hydrogen atoms, meaning that the solar wind had not reached the lunar surface. In fact, the solar wind was found to be strongly deflected by an aggregation of magnetic anomalies in the southern hemisphere of the lunar far side. "We detected a strong flux of deflected solar wind protons. This clearly indicates that magnetic anomalies can shield the lunar surface from the incoming solar wind, in the same way as the magnetospheres of several planets in our solar system," says Dr. Futaana.
"It all depends on how strong the solar wind "blows." When the solar wind pressure is low, this "mini-magnetosphere" expands causing stronger shielding," concludes Dr. Wieser, also of the Swedish Institute of Space Physics.

How Molecules Escape from Cell's Nucleus: Key Advance in Using Microscopy to Reveal Secrets of Living Cells


Real-Time mRNA Export: Messenger RNA molecules (green structures) passing through the nuclear pore (red) from the nucleus to the cytoplasm.

By constructing a microscope apparatus that achieves resolution never before possible in living cells, researchers at Albert Einstein College of Medicine of Yeshiva University have illuminated the molecular interactions that occur during one of the most important "trips" in all of biology: the journey of individual messenger ribonucleic acid (RNA) molecules from the nucleus into the cytoplasm (the area between the nucleus and the cell membrane) so that proteins can be made.
The results, published in the September 15 online edition of Nature, mark a major advance in the use of microscopes for scientific investigation (microscopy). The findings could lead to treatments for disorders such as myotonic dystrophy in which messenger RNA gets stuck inside the nucleus of cells.
Robert Singer, Ph.D., professor and co-chair of anatomy and structural biology, professor of cell biology and neuroscience and co-director of the Gruss-Lipper Biophotonics Center at Einstein, is the study's senior author. His co-author, David Grünwald, is at the Kavli Institute of Nanoscience at Delft University of Technology, The Netherlands. Prior to their work, the limit of microscopy resolution was 200 nanometers (billionths of a meter), meaning that molecules closer together than that could not be distinguished as separate entities in living cells. In this paper, the researchers improved that resolution limit 10-fold, successfully differentiating molecules only 20 nanometers apart.
Protein synthesis is arguably the most important of all cellular processes. The instructions for making proteins are encoded in the deoxyribonucleic acid (DNA) of genes, which reside on chromosomes in the nucleus of a cell. In protein synthesis, the DNA instructions of a gene are transcribed, or copied, onto messenger RNA; these molecules of messenger RNA must then travel out of the nucleus and into the cytoplasm, where amino acids are linked together to form the specified proteins.
Molecules shuttling between the nucleus and cytoplasm are known to pass through protein complexes called nuclear pores. After tagging messenger RNA molecules with a yellow fluorescent protein (which appears green in the accompanying image) and tagging the nuclear pore with a red fluorescent protein, the researchers used high-speed cameras to film messenger RNA molecules as they traveled across the pores. The Nature paper reveals the dynamic and surprising mechanism by which nuclear pores "translocate" messenger RNA molecules from the nucleus into the cytoplasm: this is the first time their pore transport has been seen in living cells in real time.
"Up until now, we'd really had no idea how messenger RNA travels through nuclear pores," said Dr. Singer. "Researchers intuitively thought that the squeezing of these molecules through a narrow channel such as the nuclear pore would be the slow part of the translocation process. But to our surprise, we observed that messenger RNA molecules pass rapidly through the nuclear pores, and that the slow events were docking on the nuclear side and then waiting for release into the cytoplasm."
More specifically, Dr. Singer found that single messenger RNA molecules arrive at the nuclear pore and wait for 80 milliseconds (80 thousandths of a second) to enter; they then pass through the pore breathtakingly fast -- in just 5 milliseconds; finally, the molecules wait on the other side of the pore for another 80 milliseconds before being released into the cytoplasm.
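Tallying those reported dwell times makes the surprise plain (simple arithmetic on the figures above, nothing more):

    dock_ms, transit_ms, release_ms = 80, 5, 80   # dwell times reported above
    total_ms = dock_ms + transit_ms + release_ms
    print(f"Total pore passage: {total_ms} ms; transit share: {transit_ms / total_ms:.0%}")
    # The squeeze through the pore itself is only ~3% of the journey;
    # the waiting on either side dominates.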
The waiting periods observed in this study, and the observation that 10 percent of messenger RNA molecules sit for seconds at nuclear pores without gaining entry, suggest that messenger RNA could be screened for quality at this point.
"Researchers have speculated that messenger RNA molecules that are defective in some way, perhaps because the genes they're derived from are mutated, may be inspected and destroyed before getting into the cytoplasm or a short time later, and the question has been, 'Where might that surveillance be happening?'," said Dr. Singer. "So we're wondering if those messenger RNA molecules that couldn't get through the nuclear pores were subjected to a quality control mechanism that didn't give them a clean bill of health for entry."
In previous research, Dr. Singer studied myotonic dystrophy, a severe inherited disorder marked by wasting of the muscles and caused by a mutation involving repeated DNA sequences of three nucleotides. Dr. Singer found that in the cells of people with myotonic dystrophy, messenger RNA gets stuck in the nucleus and can't enter the cytoplasm. "By understanding how messenger RNA exits the nucleus, we may be able to develop treatments for myotonic dystrophy and other disorders in which messenger RNA transport is blocked," he said.
The paper, "In Vivo Imaging of Labelled Endogenous β-actin mRNA during Nucleocytoplasmic Transport," was published in the September 15 online edition of Nature.

Mystery of Disappearing Martian Carbon Dioxide Ice Solved?


Sudden reappearance of the carbon dioxide ice signature between "solar longitudes" 59.2° and 60.2° (which corresponds to a time lapse of approximately two Martian days) in the spiral troughs structure of the North polar cap.

Scientists may have solved the mystery of the carbon dioxide ice disappearance early in the Northern Martian springs followed later by its sudden reappearance, revealing a very active water cycle on the planet. Dr. Bernard Schmitt and Mr. Thomas Appéré are reporting their results about water ice mobility during Martian Year 28, at the European Planetary Science Congress in Rome.
Seasonal ice deposits are one of the most important Martian meteorological processes, playing a major role in the planet's water cycle. Every Martian year, alternately during northern and southern winter, a significant part of the atmosphere condenses on the surface in the form of frost and snow. These seasonal ice deposits, which can be up to one metre thick, are mainly composed of carbon dioxide with minor amounts of water and dust. During spring, the deposits sublimate (vaporize), becoming a substantial source of water vapour, particularly in the planet's northern hemisphere.
Dr. Schmitt and his colleagues Thomas Appéré and Dr. Sylvain Douté at the Laboratoire de Planétologie de Grenoble, France, have analyzed data taken with the OMEGA instrument on board Mars Express for two northern Martian regions. Before the Mars Express mission (ESA), the evolution of the seasonal deposits had been monitored through the albedo (reflectivity) and temperature changes of the surface, as the ice deposits appear much brighter and are colder than the surrounding defrosted terrains. "But we couldn't resolve their exact composition or how they were distributed on the planet. Near-infrared observations, such as the OMEGA data, are much better for detecting strong signatures of water and carbon dioxide ice," says Mr Appéré.
The first Martian region the scientists observed is located on Gemina Lingula, a northern plateau, where a peculiar evolution of the carbon dioxide ice deposits was observed. "During spring the ice signature disappeared from our data, but the surface temperature was still cold enough to sustain plenty of CO2 ice. We concluded that a thick layer of something else, either dust or water ice, was overlaid. If it was dust then it would also hide the water ice and the surface of the planet would become darker. Neither of these happened, so we concluded that a layer of water ice was hiding the CO2 ice. We had to wait until the weather got warm enough on Mars for the water to vaporize as well, and then the carbon dioxide signatures re-appeared in our data," explains Dr Schmitt.
But where does this layer of water ice come from? Soon after spring sunrise, the solar radiation hitting the surface of Mars warms the CO2 ice in the top layer enough to make it vaporize. But water ice needs higher temperatures to sublimate, so a fine-grained layer of water ice gradually forms, hiding the carbon dioxide ice still lying beneath it. "A layer only two tenths of a millimetre thick is enough to completely hide the CO2 ice. Also, some water that has been vaporized at lower, warmer Martian latitudes condenses as it moves northward and may be cold-trapped on top of the CO2 ice," says Mr. Appéré.
The second region analysed by the team is located in the spiral troughs structure of the North permanent cap. A similar situation was observed, but here the carbon dioxide ice re-appeared very quickly after its initial disappearance. "This hide-and-seek game didn't make much sense to us. It wasn't cold enough for CO2 ice to condense again, nor warm enough for water ice to sublimate," explains Dr. Schmitt. "We concluded that somehow the water ice layer was removed. The topography of the North permanent Martian cap is well-suited to the formation of strong katabatic (downhill) winds. Aymeric Spiga used a model from the Laboratoire de Météorologie Dynamique du CNRS to simulate those winds, and he indeed confirmed the sudden re-appearances of CO2 ice where strong katabatic winds blow," says Mr. Appéré.
Dr. Schmitt concludes: "To decipher the present and past water cycles on Mars and improve our weather models on the planet one needs to have a good understanding of the seasonal ice deposits dynamics, how they change in space and time. We are confident that our results will make a significant contribution in this direction."

Sunday, September 26, 2010

New 'Light Switch' Chloride Binder Developed


When azobenzene units (blue) at the ends of the receptor are struck with UV light, the receptor unfolds the helix and releases the chloride ion (green).
Chemists at Indiana University Bloomington have designed a molecule that binds chloride ions -- but can be conveniently compelled to release the ions in the presence of ultraviolet light.
Reporting in the Journal of the American Chemical Society (online August 30), IU Bloomington chemist Amar Flood and Ph.D. student Yuran Hua explain how they designed the molecule, how it works and, just as importantly, how they know it works.
"One of the things we like most about this system is that the mechanism is predictable -- and it functions in the way we propose," said Flood, who led the project.
Chloride, the anion of the element chlorine, is relatively common on Earth, ubiquitous in seawater and in the bodies of living organisms.
"We have two main goals with this research," Flood said. "The first is to design an effective and flexible system for the removal of toxic, negatively charged ions from the environment or industrial waste. The second goal is to develop scientific and even medical applications. If a molecule similar to ours could be made water soluble and non-toxic, it could, say, benefit people with cystic fibrosis, who have a problem with chloride ions accumulating outside of certain cells."
Many organic molecules exist that can bind positively charged ions, or cations, and this has much to do with the fact that it is easy to synthesize organic molecules with negatively charged parts. Synthesizing organic molecules that bind negatively charged ions, or anions, like chloride, presents special challenges.
The binding molecule, or "foldamer," that Flood and Hua designed is both a folding molecule and a (small) polymer, meaning its constituent parts can be synthesized with relative ease. Under visible light at 436 nanometers (nm), the foldamer prefers a tight spiral structure that allows specially configured residues to interact with each other, improving stability and creating an attractive pocket for chloride. In the presence of ultraviolet light (365 nm), the foldamer absorbs energy and the tight spiral is destabilized, weakening the chloride binding pocket and freeing the chloride to re-enter the solution.
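The switch works because the two wavelengths deliver meaningfully different photon energies. A quick hc/λ comparison (our own arithmetic with standard constants; the link to azobenzene isomerization is described above and in the figure caption):

    h = 6.626e-34    # Planck constant, J*s
    c = 2.998e8      # speed of light, m/s
    eV = 1.602e-19   # joules per electron-volt
    for label, lam in [("436 nm visible (binding state)", 436e-9),
                       ("365 nm UV (release state)", 365e-9)]:
        print(f"{label}: {h * c / (lam * eV):.2f} eV per photon")
    # The UV photon (~3.40 eV) carries ~0.56 eV more than the visible one
    # (~2.84 eV), enough to flip the azobenzene ends and unfold the helix.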
The "light switch" properties of the foldamer could make it an invaluable tool to biochemists and molecular biologists who seek to adjust the availability of chloride in their experiments by simply turning a UV light emitter on or off.
The foldamer is not quite ready for that, however. It can only be dissolved at present in organic (fatty) solutions, whereas living systems operate mostly in water-based solutions.
"That's the direction we're headed," Flood said. "It actually wouldn't be that difficult to modify the molecule so that it is water soluble. But first we need to make sure it does all the things we want it to do."
In their JACS paper, Flood said he and Hua wanted to bring synthetic chemistry together with modern diagnostic approaches to demonstrate the efficacy of their foldamer.
"A lot of the ideas in our paper have been floating around for some time," Flood said. "The idea of a foldamer that binds anions, the idea of a foldamer that you can isomerize with light, the idea of receptor that can bind anions ... But none of the prior work uses conductivity to show that the chloride concentrations actually go up and down as intended. What's new is that we've put all these things together. We think we have something here that allows us to raise our heads to the great research that's preceded us."
Flood and Hua used an electrical conductivity test to show that when voltage is applied to the solution containing chloride ions and the binding molecule, electricity flows more freely in the presence of UV light, when the binder is relaxed and chloride is disassociated from it. That was proof, Flood said, that the foldamer was working as intended.
"My training is in building molecular machines," Flood said. "I create machines that do what we want them to do -- and to show what's possible in chemical and biological laboratory science."
The binding molecule Flood and Hua describe is an improvement on a previous binder developed by Flood and then-postdoctoral fellow Yongjun Li that was also an oligomer of sorts but did not fold. This previous iteration of the chloride binder was closed and donut-shaped, using space restrictions and strategically placed atoms to yield a binding pocket with a special affinity for chloride.
Funding and support for this research were provided by the U.S. Department of Energy's Office of Science and the Camille Dreyfus Teacher-Scholar Award.