About us:
Our company is KNK DECORATION. We provide sticker services such as sticker cutting, digital printing, screen printing, neon boxes, sign making and other promotional materials.
We handle cutting stickers, indoor/outdoor printing, neon boxes, fabrication of traffic signs (DLLAJR standard), and more.

Tuesday, March 25, 2008

Atom 'Noise' May Help Design Quantum Computers

ScienceDaily (Mar. 3, 2007) — As if building a computer out of rubidium atoms and laser beams weren't difficult enough, scientists sometimes have to work as if blindfolded: The quirks of quantum physics can cause correlations between the atoms to fade from view at crucial times.
What to do? Focus on the noise patterns. Building on earlier work by other groups, physicists at the National Institute of Standards and Technology (NIST) have found that images of "noise" in clouds of ultracold atoms trapped by lasers reveal hidden structural patterns, including spacing between atoms and cloud size.

The technique, described in the Feb. 23 issue of Physical Review Letters,* was demonstrated in an experiment to partition about 170,000 atoms in an "optical lattice," produced by intersecting laser beams that are seen by the atoms as an array of energy wells arranged like an egg carton. By loading just one atom into each well, for example, scientists can create the initial state of a hypothetical quantum computer using neutral atoms to store and process information.

The atoms first are cooled to form a Bose-Einstein condensate (BEC), a unique form of matter in which all the atoms are in the same quantum state and completely indistinguishable. The optical lattice lasers then are slowly turned on and the BEC undergoes a transformation in which the atoms space out evenly in the lattice. More intense light creates deeper wells until each atom settles into its own lattice well. But during this transition, scientists lose their capability to see and measure key quantum correlations among the atoms.

Key structures are visible, however, in composite images of the noise patterns, which reveal not only atom spacing but also cloud size and how much of the BEC has undergone the transition.

In the NIST experiments, the BEC was placed in an optical lattice at various laser intensities. The lattice was turned off, and scientists took pictures of the expanding cloud of atoms after 20 to 30 milliseconds. To identify and enhance the noise signal, scientists looked for identical bumps and wiggles in the images and made composites of about 60 images by identifying and overlaying matching patterns. Lead author Ian Spielman likens the technique to listening to a noisy ballroom: While it may be impossible to hear specific conversations, correlations in noise can show where people (or atoms) are located in relation to each other, and the volume of noise can indicate the size of the ballroom (or atomic cloud), Spielman says.
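The idea can be sketched in a few lines of code. The following is a cartoon of noise-correlation imaging under invented parameters (image size, lattice spacing, atom numbers and noise levels are all hypothetical), not the NIST momentum-space analysis: averaging the two-dimensional autocorrelation of many noisy images keeps the structure that repeats at the lattice spacing, while uncorrelated detection noise washes out.

```python
# Cartoon of noise-correlation imaging (illustrative only, not the NIST
# analysis): average the 2-D autocorrelation of many noisy images of atoms
# pinned to a lattice. Correlations at the lattice spacing survive the
# average; uncorrelated detection noise does not.
import numpy as np

rng = np.random.default_rng(0)
size, spacing, n_images = 128, 8, 60          # hypothetical image parameters

y, x = np.mgrid[:size, :size]
envelope = np.exp(-((x - size / 2) ** 2 + (y - size / 2) ** 2) / (2 * 30.0 ** 2))

def autocorrelation(img):
    """2-D autocorrelation via FFT (Wiener-Khinchin theorem)."""
    f = np.fft.fft2(img - img.mean())
    return np.fft.fftshift(np.fft.ifft2(np.abs(f) ** 2).real)

composite = np.zeros((size, size))
for _ in range(n_images):
    # Random shot-to-shot site occupations on a grid of lattice sites,
    # modulated by a smooth cloud envelope, plus detection noise.
    occupation = np.zeros((size, size))
    occupation[::spacing, ::spacing] = rng.poisson(5.0, (size // spacing, size // spacing))
    img = envelope * occupation + 0.1 * rng.normal(size=(size, size))
    composite += autocorrelation(img)
composite /= n_images

# Peaks in `composite` appear at displacements that are multiples of
# `spacing`, revealing the lattice period; the width of the central
# structure tracks the cloud size set by `envelope`.
```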

The authors are affiliated with the Joint Quantum Institute, a new collaborative venture of NIST and the University of Maryland. The work was partially supported by the Disruptive Technology Office, an agency of the U.S. intelligence community that funds unclassified research on information systems, and by the Office of Naval Research.

* I.B. Spielman, W.D. Phillips and J.V. Porto. 2007. The Mott insulator transition in a two dimensional atomic Bose gas. Physical Review Letters. Feb. 23.

Adapted from materials provided by National Institute of Standards and Technology.

Stunt Doubles: Ultracold Atoms Could Replicate The Electron 'Jitterbug'

ScienceDaily (Mar. 13, 2008) — Ultracold atoms moving through a carefully designed arrangement of laser beams will jiggle slightly as they go, two NIST scientists have predicted.* If observed, this never-before-seen "jitterbug" motion would shed light on a little-known oddity of quantum mechanics arising from Paul Dirac's 80-year-old theory of the electron.
Dirac's theory, which successfully married the principles of Einstein's relativity to the quantum property of electrons known as spin, famously predicted that the electron must have an antiparticle, subsequently discovered and named the positron. More enigmatically, the Dirac theory indicates that an isolated electron moving through empty space will vibrate back and forth. But this shaking--named Zitterbewegung from the German for 'trembling motion'--is so rapid and so tiny in amplitude that it has never been directly observed.

Jay Vaishnav and Charles Clark of the Joint Quantum Institute, a partnership of NIST and the University of Maryland, have devised an experimental arrangement in which atoms are made to precisely mimic the behavior of electrons in Dirac's theory. The atoms will show Zitterbewegung--but with vibrations that are slow enough and large enough to be detected.

Vaishnav and Clark's proposal begins with an atom--rubidium-87 is an example--that has a 'tripod' arrangement of electron energy levels, consisting of one higher energy level above three equal-energy lower levels. Suppose, say the researchers, that such atoms are placed in a region crisscrossed by lasers at specific frequencies. Two pairs of laser beams face each other, creating a pattern of standing waves, while a third laser beam is set perpendicular to the other two.

Given the proper frequencies of light, a perfectly stationary "tripod" atom at the intersection will have the energy of its upper state and one of the three lower states slightly changed. To a moving atom, however, the electromagnetic field will look a little different, and in that case the energies of the other two lower states also change slightly.

Remarkably, those two states, moving in this particular arrangement of laser light, are governed by an equation that's exactly analogous to the Dirac equation for the two spin states of an electron moving in empty space. In particular, as the atom moves, it flips back and forth between the two states, and that flipping is accompanied by a jiggling back and forth of the atom's position--a version of Zitterbewegung with a frequency measured in megahertz, a hundred trillion times slower than the vibration of a free electron.
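For reference, the comparison rests on the standard textbook result for a free Dirac electron; the expression below is that general result, not the trapped-atom Hamiltonian of the proposal (which replaces the electron's mass gap with a laser-induced coupling).

```latex
% Free Dirac Hamiltonian and the Heisenberg-picture position operator
H = c\,\boldsymbol{\alpha}\cdot\mathbf{p} + \beta m c^{2},
\qquad
\mathbf{x}(t) = \mathbf{x}(0) + \frac{c^{2}\mathbf{p}}{H}\,t
  + \frac{i\hbar c}{2H}\left(\boldsymbol{\alpha}(0) - \frac{c\mathbf{p}}{H}\right)
    \left(e^{-2iHt/\hbar} - 1\right)
```

The last term is the Zitterbewegung: for a free electron it oscillates at roughly 2mc^2/h, about 2 x 10^20 Hz, with an amplitude of order 10^-13 m, which is why the effect has never been observed directly.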

Other arrangements of lasers and atoms have been used to cleanly simulate a variety of quantum systems, says Vaishnav. Examples include recent studies of the mechanisms of quantum magnetism and high-temperature superconductivity.** What's unusual about this new proposal, she adds, is that it offers a simulation of a fundamental elementary particle in free space and may offer access to an aspect of electron behavior that would otherwise remain beyond observational scrutiny.

* J.Y. Vaishnav and C.W. Clark. Observation of zitterbewegung in ultracold atoms. Presented at the March Meeting of the American Physical Society, March 10, 2008, New Orleans, La., Session A14.00003.

** I. Bloch. Towards quantum magnetism with ultracold quantum gases in optical lattices. Presented at the March Meeting of the American Physical Society, March 12, 2008, New Orleans, La., Session P7.00003. and A.M. Rey. Probing and controlling quantum magnetism with ultra-cold atoms. Presented at the March Meeting of the American Physical Society, March 12, 2008, New Orleans, La., Session P7.00004.

Adapted from materials provided by National Institute of Standards and Technology.

Hyper-entangled Photons: 'Superdense' Coding Gets Denser

ScienceDaily (Mar. 25, 2008) — The record for the amount of information sent by a single photon has been broken by researchers at the University of Illinois. Using the direction of "wiggling" and "twisting" of a pair of hyper-entangled photons, they have beaten a fundamental limit on the channel capacity for dense coding with linear optics.
"Dense coding is arguably the protocol that launched the field of quantum communication," said Paul Kwiat, a John Bardeen Professor of Physics and Electrical and Computer Engineering. "Today, however, more than a decade after its initial experimental realization, channel capacity has remained fundamentally limited as conceived for photons using conventional linear elements."

In classical coding, a single photon will convey only one of two messages, or one bit of information. In dense coding, a single photon can convey one of four messages, or two bits of information.

"Dense coding is possible because the properties of photons can be linked to one another through a peculiar process called quantum entanglement," Kwiat said. "This bizarre coupling can link two photons, even if they are located on opposite sides of the galaxy."

Using linear elements, however, the standard protocol is fundamentally limited to convey only one of three messages, or 1.58 bits. The new experiment surpasses that threshold by employing pairs of photons entangled in more ways than one (hyper-entangled). As a result, additional information can be sent and correctly decoded to achieve the full power of dense coding.
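The quoted capacities follow from simple counting: channel capacity is the base-2 logarithm of the number of messages the receiver can reliably distinguish. Standard dense coding uses the four Bell states of polarization, but a Bell analyzer built from linear optical elements alone can sort them into at most three distinguishable outcomes, which is where the 1.58-bit ceiling comes from.

```latex
C_{\text{ideal dense coding}} = \log_2 4 = 2 \ \text{bits per photon},
\qquad
C_{\text{linear-optics Bell analysis}} = \log_2 3 \approx 1.58 \ \text{bits per photon}
```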

Kwiat, graduate student Julio Barreiro and postdoctoral researcher Tzu-Chieh Wei (now at the University of Waterloo) describe their recent experiment in a paper accepted for publication in the journal Nature Physics, and posted on its Web site.

Through the process of spontaneous parametric down conversion in a pair of nonlinear crystals, the researchers first produce pairs of photons simultaneously entangled in polarization, or "wiggling" direction, and in orbital angular momentum, or "twisting" direction. They then encode a message in the polarization state by applying birefringent phase shifts with a pair of liquid crystals.

"While hyper-entanglement in spin and orbital angular momentum enables the transmission of two bits with a single photon," Barreiro said, "atmospheric turbulence can cause some of the quantum states to easily decohere, thus limiting their likely communication application to satellite-to-satellite transmissions."

Adapted from materials provided by University of Illinois at Urbana-Champaign.

Loopy Photons Clarify 'Spookiness' Of Quantum Physics

ScienceDaily (Mar. 19, 2008) — Researchers at the National Institute of Standards and Technology (NIST) and the Joint Quantum Institute (NIST/University of Maryland) have developed a new method for creating pairs of entangled photons, particles of light whose properties are interlinked in a very unusual way dictated by the rules of quantum physics. The researchers used the photons to test fundamental concepts in quantum theory.
In the experiment, the researchers send a pulse of light into both ends of a twisted loop of optical fiber. Pairs of photons of the same color traveling in either direction will, every so often, interact in a process known as "four-wave mixing," converting into two new, entangled photons, one that is redder and the other that is bluer than the originals.

Although the fiber's twist means that pairs emerging from one end are vertically polarized (having electric fields that vibrate up and down) while pairs from the other end are horizontally polarized (vibrating side to side), the setup makes it impossible to determine which path the newly created photon pairs took. Since the paths are indistinguishable, the weird rules of quantum physics say that the photon pairs actually will be in both states--horizontal and vertical polarization--at the same time, until someone measures one, at which point both photons must choose one specific, and identical, state.

This "spooky action at a distance" is what caused Einstein to consider quantum mechanics to be incomplete, prompting debate for the past 73 years over the concepts of "locality" and "realism." Decades of experiments have demonstrated that measurements on pairs of entangled particles don't agree with the predictions made by "local realism," the concept that processes occurring at one place have no immediate effect on processes at another place (locality) and that the particles have definite, preexisting properties (called "hidden variables") even without being measured (realism).

Experiments so far have ruled out locality and realism as a combination. But could a theory assuming only one of them be correct? Nonlocal hidden variables (NLHV) theories would allow for the possibility of hidden variables but would concede nonlocality, the idea that a measurement on a particle at one location may have an immediate effect on a particle at a separate location.

Measuring the polarizations of the pairs of entangled particles in their setup, the researchers showed that the results did not agree with the predictions of certain NLHV theories but did agree with the predictions of quantum mechanics. In this way, they were able to rule out certain NLHV theories. Their results agree with other groups that have performed similar experiments.

* J. Fan, M.D. Eisaman and A. Migdall, Bright phase-stable broadband fiber-based source of polarization-entangled photon pairs. Physical Review A 76, 043836 (2007).

** M.D. Eisaman, E.A. Goldschmidt, J. Chen, J. Fan and A. Migdall. Experimental test of non-local realism using a fiber-based source of polarization-entangled photon pairs. Physical Review A., upcoming.

Adapted from materials provided by National Institute of Standards and Technology.

Mass Measurement Technique Uncovers New Iron Isomer

ScienceDaily (Mar. 24, 2008) — A ground state atomic nucleus can be something of a black box, masking subtle details about its structure behind the aggregate interplay of its protons and neutrons. This is one reason nuclear scientists are so keenly interested in isomers -- relatively long-lived excited-state nuclei that more easily give up their structural secrets to experimentalists.
For years, gamma ray spectroscopy has been one of the only reliable means of studying isomers. But now scientists have a new tool at their disposal. In a paper that will be published in Physical Review Letters, researchers at Michigan State University's National Superconducting Cyclotron Laboratory (NSCL) report the first ever discovery of a nuclear isomer by Penning trap mass spectrometry.

The concept of excitation applies across physics and chemistry to everything from molecules to atoms to nuclei. Consider the basic physics behind a neon light. When voltage is applied across a tube filled with neon gas, the electrons orbiting the neon nuclei briefly are excited to higher energy levels before they come crashing back down to their ground states, releasing visible light.

Nucleons, the protons and neutrons that comprise atomic nuclei, can similarly be raised to higher energy levels. Most resulting excited-state nuclei exist on the briefest of timescales, with lifetimes often measured in trillionths of a second, before the nucleons decay to lower energy states, releasing various forms of radiation. However, some of these excited-state nuclei are quite stable and can exist for much longer periods of time, from fractions of seconds to millions of years.

These relatively long-lived nuclei are called isomers, which are the focus of intense scrutiny by nuclear scientists. Among the open questions about isomers: For which combinations of protons and neutrons can they exist? What are their properties? How long do they live? And what is their excitation energy (the energy required to raise their nucleons to higher energy levels)?

The discovery of the new iron isomer came while using NSCL's Low-Energy Beam and Ion Trap (LEBIT) device to make precision measurements of rare isotopes that are close, in terms of numbers of protons and neutrons, to nickel-68, a particularly enigmatic isotope.

With 28 protons and 40 neutrons, nickel-68 displays some of the characteristics of doubly magic nuclei, so named because they have just the right number of protons and neutrons to completely fill all the energy states, or shells, they occupy. (According to the nuclear shell model, protons and neutrons in most nuclei occupy different energetic shells, completely filling the low-lying states and only partially filling higher states; in doubly magic nuclei, all occupied shells are filled.) However, nuclei with slightly fewer protons and neutrons than nickel-68 reveal pronounced changes in structure, which generally is not the case for isotopes nearby other doubly magic nuclei.

"We have no good idea what is happening in this nuclear region, so more measurements are needed," said Georg Bollen, NSCL professor and co-author of the paper.

The experiment was conducted at NSCL's Coupled Cyclotron Facility, which produced various neutron-rich isotopes of iron and cobalt, including iron-65, with 26 protons and 39 neutrons. (The most abundant stable iron isotope on Earth has 26 protons and 30 neutrons.) These isotopes, produced by smashing beams of germanium nuclei traveling at half the speed of light into thin target material, were brought nearly to rest in a helium gas cell.

Next, the isotopes were guided by a series of electric fields into two ion traps. One was a Penning trap, a device commonly used in atomic and nuclear physics to precisely measure mass. A Penning trap catches and retains charged particles in a strong magnetic field. Responding to this field, captured particles move in what's known as a cyclotron motion, the frequency of which is directly related to the mass of the particle.
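As a rough illustration of why measuring a frequency amounts to measuring a mass (a sketch with placeholder numbers, not the LEBIT analysis): the cyclotron frequency of a trapped ion is f_c = qB/(2*pi*m), so a nuclear state carrying extra excitation energy E is heavier by E/c^2 and circles the trap slightly more slowly.

```python
# Minimal sketch of the Penning-trap mass relation (illustrative numbers only).
import math

Q_E = 1.602176634e-19        # elementary charge, C
U_TO_KG = 1.66053906660e-27  # atomic mass unit, kg
C_LIGHT = 2.99792458e8       # speed of light, m/s

def cyclotron_freq(mass_u, charge_units, b_tesla):
    """Cyclotron frequency in Hz for an ion of given mass (u) and charge."""
    return charge_units * Q_E * b_tesla / (2 * math.pi * mass_u * U_TO_KG)

# Hypothetical inputs: a singly charged A=65 ion in an assumed 9.4 T field,
# with an assumed 500 keV excitation energy (placeholders, not the published
# LEBIT values).
b, q, m_ground_u = 9.4, 1, 64.945          # ~ mass of iron-65 in u
excitation_kev = 500.0
delta_m_u = excitation_kev * 1e3 * Q_E / (C_LIGHT ** 2) / U_TO_KG

f_ground = cyclotron_freq(m_ground_u, q, b)
f_isomer = cyclotron_freq(m_ground_u + delta_m_u, q, b)
print(f"ground state : {f_ground:,.1f} Hz")
print(f"isomer       : {f_isomer:,.1f} Hz  (lower by {f_ground - f_isomer:.2f} Hz)")
```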

During the experiment, Bollen and his collaborators observed two distinct frequencies associated with the trapped iron-65 particles. They concluded that the heavier of the two was a previously unknown isomer of iron-65.

NSCL is the first laboratory in the world to stop fast beams of nuclei such that they can be trapped in space and studied with high precision. Bollen, one of the experts in this discipline at the interface between atomic and nuclear physics, helped to design and build ISOLTRAP, the first Penning trap spectrometer for the study of the mass of short-lived nuclei at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland.

"The nuclear region we looked at still has lots of uncertainty, but we were successful in adding an intriguing new piece of information," said Bollen. "And we did so by going beyond gamma ray spectroscopy, the classical means of studying isomers; finding isomers by weighing nuclei with very high precision bears interesting prospects for future studies."

The research was supported by the National Science Foundation and Michigan State University.

Adapted from materials provided by National Superconducting Cyclotron Laboratory at Michigan State University.

Ultrahigh-energy Cosmic Rays Are From Extremely Far Away

ScienceDaily (Mar. 25, 2008) — Final results from the University of Utah's High-Resolution Fly's Eye cosmic ray observatory show that the most energetic particles in the universe rarely reach Earth at full strength: because they come from great distances, most of them collide along the way with radiation left over from the birth of the universe.
The findings are based on nine years of observations at the now-shuttered observatory on the U.S. Army's Dugway Proving Ground. They confirm a 42-year-old prediction -- known as the Greisen-Zatsepin-Kuzmin (GZK) "cutoff," "limit" or "suppression" -- about the behavior of ultrahigh-energy cosmic rays, which carry more energy than any other known particle.

The idea is that most -- but not all -- cosmic ray particles with energies above the GZK cutoff cannot reach Earth because they lose energy when they collide with "cosmic microwave background radiation," which was discovered in 1965 and is the "afterglow" of the "big bang" physicists believe formed the universe 13 billion years ago.

The GZK limit's existence was first predicted by Kenneth Greisen of Cornell University while visiting the University of Utah in 1966, and independently by Georgiy Zatsepin and Vadim Kuzmin of Moscow's Lebedev Institute of Physics.

"It has been the goal of much of ultrahigh-energy cosmic ray physics for the past 40 years to find this cutoff or disprove it," says physics Professor Pierre Sokolsky, dean of the University of Utah College of Science and leader of the study by a collaboration of 60 scientists from seven research institutions. "For the first time in 40 years, that question is answered: there is a cutoff."

That conclusion, based on 1997-early 2006 observations at the High Resolution Fly's Eye cosmic ray observatory (nicknamed HiRes) in Utah's western desert, has been bolstered by the new Auger cosmic ray observatory in Argentina. During a cosmic ray conference in Merida, Mexico, last summer, Auger physicists outlined preliminary, unpublished results showing that the number of ultrahigh-energy cosmic rays reaching Earth drops sharply above the cutoff.

So both the HiRes and Auger findings contradict Japan's now-defunct Akeno Giant Air Shower Array (AGASA), which observed roughly 10 times more of the highest-energy cosmic rays -- and thus suggested there was no GZK cutoff.

Cosmic Rays: Far Out

Last November, the Auger observatory collaboration -- to which Sokolsky also belongs -- published a study suggesting that the highest-energy cosmic rays come from active galactic nuclei, or AGNs: the hearts of extremely active galaxies believed to harbor supermassive black holes.

AGNs are distributed throughout the universe, so confirmation that the GZK cutoff is real suggests that if ultrahigh-energy cosmic rays are spewed out by AGNs, they primarily are very distant from the Earth -- at least in Northern Hemisphere skies viewed by the HiRes observatory. University of Utah physics Professor Charlie Jui, a co-author of the new study, says that means the sources lie beyond our "local" supercluster of galaxies, at distances of at least 150 million light years from Earth, or roughly 870 billion billion miles.

However, unpublished results from HiRes do not find the same correlation that Auger did between ultrahigh-energy cosmic rays and active galactic nuclei. So there still is uncertainty about the true source of extremely energetic cosmic rays.

"We still don't know where they're coming from, but they're coming from far away," Sokolsky says. "Now that we know the GZK cutoff is there, we have to look at sources much farther out."

In addition to the University of Utah, High Resolution Fly's Eye scientists are from Los Alamos National Laboratory in New Mexico, Columbia University in New York, Rutgers University -- the State University of New Jersey, Montana State University in Bozeman, the University of Tokyo and the University of New Mexico, Albuquerque.

Messengers from the Great Beyond

Cosmic rays, discovered in 1912, are subatomic particles: the nuclei of mostly hydrogen (bare protons) and helium, but also of some heavier elements such as oxygen, carbon, nitrogen or even iron. The sun and other stars emit relatively low-energy cosmic rays, while medium-energy cosmic rays come from exploding stars.

The source of ultrahigh-energy cosmic rays has been a mystery for almost a century. The recent Auger observatory results have given the edge to the popular theory that they originate from active galactic nuclei. These particles are 100 million times more energetic than anything produced by particle smashers on Earth. The energy of one such subatomic particle has been compared with that of a lead brick dropped on a foot or a fast-pitched baseball hitting the head.

"Quite apart from arcane physics, we are talking about understanding the origin of the most energetic particles produced by the most energetic acceleration process in the universe," Sokolsky says. "It's a question of how much energy the universe can pack into these extraordinarily tiny particles known as cosmic rays. ... How high the energy can be in principle is unknown. By the time they get to us, they have lost that energy."

He adds: "Looking at energy processes at the very edge of what's possible in the universe is going to tell us how well we understand nature."

Ultrahigh-energy cosmic rays are considered to be those above about 1 billion billion electron volts (1 times 10 to the 18th power).

The most energetic cosmic ray ever found was detected over Utah in 1991 and carried an energy of 300 billion billion electron volts (3 times 10 to the 20th power). It was detected by the University of Utah's original Fly's Eye observatory, which was built at Dugway during 1980-1981 and improved in 1986. A better observatory was constructed during 1994-1999 and named the High Resolution Fly's Eye.
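The baseball comparison quoted earlier can be checked with a line of arithmetic; the conversion below uses the 1991 event's energy from the article together with assumed baseball parameters (the mass and speed are placeholders of mine).

```python
# Back-of-envelope check of the "fast-pitched baseball" comparison.
EV_TO_J = 1.602e-19

cosmic_ray_ev = 3e20                     # the 1991 Utah event, eV (from the article)
cosmic_ray_j = cosmic_ray_ev * EV_TO_J   # ~48 J carried by a single nucleus

ball_mass_kg = 0.145                     # regulation baseball (assumed)
ball_speed_ms = 25.0                     # ~90 km/h pitch (assumed)
ball_j = 0.5 * ball_mass_kg * ball_speed_ms ** 2   # ~45 J

print(f"cosmic ray : {cosmic_ray_j:.0f} J")
print(f"baseball   : {ball_j:.0f} J")
```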

Jui says that during its years of operation, HiRes detected only four of the highest-energy cosmic rays -- those with energies above 100 billion billion electron volts. AGASA detected 11, even though it was only one-fourth as sensitive as HiRes.

The new study covers HiRes operations during 1997 through 2006, and cosmic rays above the GZK cutoff of 60 billion billion electron volts (6 times 10 to the 19th power). During that period, the observatory detected 13 such cosmic rays, compared with 43 that would be expected without the cutoff. So the detection of only 13 indicates the GZK limit is real, and that most ultrahigh-energy cosmic rays are blocked by cosmic microwave background radiation so that few reach Earth without losing energy.
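To see why 13 observed events against 43 expected is decisive, one can assume simple Poisson counting statistics (a simplification of mine, not the collaboration's full statistical treatment) and ask how likely such a deficit would be by chance.

```python
# How unlikely is 13 observed vs 43 expected if there were no cutoff?
# Assumes simple Poisson counting statistics.
from scipy.stats import poisson

expected, observed = 43, 13
p_value = poisson.cdf(observed, expected)   # P(X <= 13 | mean 43)
print(f"P(<= {observed} events | {expected} expected) = {p_value:.1e}")
# Roughly 1e-7, i.e. about a 5-sigma deficit under this simple model.
```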

The discrepancy between HiRes Fly's Eye and AGASA is thought to stem from their different methods for measuring cosmic rays.

HiRes used multifaceted (like a fly's eye) sets of mirrors and photomultiplier tubes to detect faint ultraviolet fluorescent flashes in the sky generated when incoming cosmic ray particles hit Earth's atmosphere. Sokolsky and University of Utah physicist George Cassiday won the prestigious 2008 Panofsky Prize for developing the method.

HiRes measured a cosmic ray's energy and direction more directly and reliably than AGASA, which used a grid-like array of "scintillation counters" on the ground.

The Search Goes On

University of Tokyo, University of Utah and other scientists now are using the new $17 million Telescope Array cosmic ray observatory west of Delta, Utah, which includes three sets of fluorescence detectors and 512 table-like scintillation detectors spread over 400 square miles -- in other words, the two methods that produced conflicting results at HiRes and AGASA. One goal is to figure out why ground detectors gave an inflated count of the number of ultrahigh-energy cosmic rays.

The Telescope Array also will try to explain an apparent shortage in the number of cosmic rays at energies about 10 times lower than the GZK cutoff. This ankle-shaped dip in the cosmic ray spectrum is a deficit of cosmic rays at energies of about 5 billion billion electron volts.

Sokolsky says there is debate over whether the "ankle" represents cosmic rays that run out of "oomph" after being spewed by exploding stars in our galaxy, or the loss of energy predicted to occur when ultrahigh-energy cosmic rays from outside our galaxy collide with the big bang's afterglow, generating electrons and antimatter positrons.

The Telescope Array and Auger observatories will keep looking for the source of rare ultrahigh-energy cosmic rays that evade the big bang afterglow and reach Earth.

"The most reasonable assumption is they are coming from a class of active galactic nuclei called blazars," Sokolsky says.

Such a galaxy center is suspected to harbor a supermassive black hole with the mass of a billion or so suns. As matter is sucked into the black hole, nearby matter is spewed outward in the form of a beam-like jet. When such a jet is pointed at Earth, the galaxy is known as a blazar.

"It's like looking down the barrel of a gun," Sokolsky says. "Those guys are the most likely candidates for the source of ultrahigh-energy cosmic rays."

The journal Physical Review Letters published the results Friday, March 21. The new study's 60 co-authors include Sokolsky, Jui and 31 other University of Utah faculty members, postdoctoral fellows and students: Rasha Abbasi, Tareq Abu-Zayyad, Monica Allen, Greg Archbold, Konstantin Belov, John Belz, S. Adam Blake, Olga Brusova, Gary W. Burt, Chris Cannon, Zhen Cao, Weiran Deng, Yulia Fedorova, Richard C. Gray, William Hanlon, Petra Huntemeyer, Benjamin Jones, Kiyoung Kim, the late Eugene Loh, Melissa Maestas, Kai Martens, John N. Matthews, Steffanie Moore, Kevin Reil, Robertson Riehle, Douglas Rodriguez, Jeremy D. Smith, R. Wayne Springer, Benjamin Stokes, Stanton Thomas, Jason Thomas and Lawrence Wiencke.

Adapted from materials provided by University of Utah.

Evolution Of Aversion: Why Even Children Are Fearful Of Snakes

ScienceDaily (Feb. 28, 2008) — Some of the oldest tales and wisest mythology allude to the snake as a mischievous seducer, dangerous foe or powerful iconoclast; however, the legend surrounding this proverbial predator may not be based solely on fantasy. As scientists from the University of Virginia recently discovered, the common fear of snakes may well be intrinsic.
Evolutionarily speaking, early humans who were capable of detecting and surviving natural dangers passed that advantage on. The same can be said of the common fear of certain animals, such as spiders and snakes: the ancestors of modern humans were either remarkably lucky or extraordinarily good at detecting and avoiding threats such as a venomous snake.

Psychologists Vanessa LoBue and Judy DeLoache were able to show this phenomenon by examining the ability of adults and children to pinpoint snakes among other nonthreatening objects in pictures.

“We wanted to know whether preschool children, who have much less experience with natural threats than adults, would detect the presence of snakes as quickly as their parents,” LoBue explained. “If there is an evolved tendency in humans for the rapid detection of snakes, it should appear in young children as well as their elders.”

Preschool children and their parents were shown nine color photographs on a computer screen and were asked to find either the single snake among eight flowers, frogs or caterpillars, or the single nonthreatening item among eight snakes. As the study surprisingly shows, parents and their children identified snakes more rapidly than they detected the other stimuli, despite the gap in age and experience.

LoBue and DeLoache also found that children and adults who don't fear snakes are just as quick at identifying them as children and adults who do fear snakes, indicating that the rapid visual detection of snakes may be a universal human ability, present whether or not a fear of snakes has been acquired through learning or experience.

LoBue and DeLoache explain that their study does not prove an innate fear of snakes, only that humans, including young children, seem to have an innate ability to quickly identify a snake from among other things. One of their previous studies indicated that humans also have a profound ability to identify spiders from among non-threatening flora and fauna. LoBue has also shown that people are very good at quickly detecting threats of many types, including aggressive facial expressions.

The results, which appear in the March 2008 issue of Psychological Science, a journal of the Association for Psychological Science, may provide the first evidence of an adapted, visually-stimulated fear mechanism in humans.

Adapted from materials provided by Association for Psychological Science.

Unlocking The Psychology Of Snake And Spider Phobias

ScienceDaily (Mar. 24, 2008) — University of Queensland researchers have unlocked new evidence that could help them get to the bottom of our most common phobias and their causes.
Hundreds of thousands of people count snakes and spiders among their fears, and while scientists have previously assumed we possess an evolutionary predisposition to fear the unpopular animals, researchers at UQ's School of Psychology may have proved otherwise.

According to Dr Helena Purkis, the results of the UQ study could provide an unprecedented insight into just why the creepy creatures are so widely feared.

“Previous research shows we react differently to snakes and spiders than to other stimuli, such as flowers or mushrooms, or even other dangerous animals….or cars and guns, which are also much more dangerous,” Dr Purkis said.

“[In the past, this] has been explained by saying that people are predisposed by evolution to fear certain things, such as snakes and spiders, that would have been dangerous to our ancestors.

“[However], people tend to be exposed to a lot of negative information regarding snakes and spiders, and we argue this makes them more likely to be associated with phobia.”

In the study, researchers compared the responses to stimuli of participants with no particular experience with snakes and spiders, to that of snake and spider experts.

“Previous research has argued that snakes and spiders attract preferential attention (they capture attention very quickly) and that during this early processing a negative (fear) response is generated… as an implicit and indexed subconscious [action],” Dr Purkis said.

“We showed that although everyone preferentially attends to snakes or spiders in the environment as they are potentially dangerous, only inexperienced participants display a negative response.”

The study is the first to establish a clear difference between preferential attention and the accompanying emotional response: that is, that you can preferentially attend to something without a negative emotional response being elicited.

Dr Purkis said the findings could significantly increase understanding about the basic cognitive and emotional processes involved in the acquisition and maintenance of fear.

“If we understand the relationship between preferential attention and emotion it will help us understand how a stimulus goes from being perceived as potentially dangerous, to eliciting an emotional response and to being associated with phobia,” she said.

“[This] could give us some information about the way people need to deal with snakes and spiders in order to minimise negative emotional responses.”

Researchers are now planning a follow-up study, which will test their theory that love and fear, or phobia, involve the same basic attention mechanism.

“We are interested in testing animal stimuli for animal lovers to see whether these stimuli, a dog for a breeder for instance, have access to preferential attention [in the same way as snakes and spiders do for those with phobias of them].

“I am also interested in the difference that we saw in our previous work between preferential attention, and the emotional response that is elicited after this initial processing."

The study calls for volunteers who work with or own dogs, cats, horses, cattle, snakes and spiders and also general members of the public who will form a control group.

“I also need people who are allergic to dogs or cats, people who are apprehensive of snakes and spiders, and people who have no fear of snakes and spiders but don't explicitly work with them,” Dr Purkis said.

“[Additionally, we're looking to get in touch with] people who are willing to have their pets (dogs, cats, horses, cattle, snakes, spiders) photographed for use as experimental stimuli.”

Adapted from materials provided by University of Queensland.

Killer Military Robots Pose Latest Threat To Humanity, Robotics Expert Warns

ScienceDaily (Feb. 28, 2008) — A robotics expert at the University of Sheffield has issued stark warnings over the threat posed to humanity by new robot weapons being developed by powers worldwide.

In a keynote address to the Royal United Services Institute (RUSI), Professor Noel Sharkey, from the University's Department of Computer Science, expressed his concern that we are beginning to see the first steps towards an international robot arms race. He warned that it may not be long before robots become a standard terrorist weapon to replace the suicide bomber.

Many nations are now involved in developing the technology for robot weapons, with the US Department of Defense (DoD) being the most significant player. According to the Unmanned Systems Roadmap 2007-2013 (published in December 2007), the US proposes to spend an estimated $4 billion by 2010 on unmanned systems technology, with total spending expected to rise above $24 billion.

Over 4,000 robots are currently deployed on the ground in Iraq, and by October 2006 unmanned aircraft had flown 400,000 flight hours. Currently there is always a human in the loop to decide on the use of lethal force. However, this is set to change, with the US giving priority to autonomous weapons - robots that will decide where, when and whom to kill.

Robot weapons programmes are also under way in Europe and in other allied countries such as Canada, South Korea, South Africa, Singapore and Israel. China, Russia and India are likewise developing unmanned aerial combat vehicles. The US DoD report is unsure about the extent of activity in China, but admits that the country has a strong infrastructure capability for parallel developments in robot weapons.

Professor Sharkey, who is famously known for his roles as chief judge on the TV series Robot Wars and as onscreen expert for the BBC's TechnoGames, said: "The trouble is that we can't really put the genie back in the bottle. Once the new weapons are out there, they will be fairly easy to copy. How long is it going to be before the terrorists get in on the act?"

"With the current prices of robot construction falling dramatically and the availability of ready-made components for the amateur market, it wouldn't require a lot of skill to make autonomous robot weapons."

Professor Sharkey is reluctant to explain how such robots could be made but he points out that a small GPS guided drone with autopilot could be made for around £250.

The robotics expert is also concerned with a number of ethical issues that arise from the use of autonomous weapons. He added: "Current robots are dumb machines with very limited sensing capability. What this means is that it is not possible to guarantee discrimination between combatants and innocents or a proportional use of force as required by the current Laws of War.

"It seems clear that there is an urgent need for the international community to assess the risks of these new weapons now rather than after they have crept their way into common use."

Professor Sharkey's talk was at a one-day conference at RUSI in Whitehall on 27 February 2008.

Adapted from materials provided by University of Sheffield, via EurekAlert!, a service of AAAS.

Wireless Networks That Build Themselves

ScienceDaily (Mar. 14, 2008) — From traffic lights to mobile phones, small computers are all around us. Enabling these ‘embedded systems’ to create wireless communications networks automatically will have profound effects in areas from emergency management to healthcare and traffic control.
Networks of mobile sensors and other small electronic devices have huge potential. Applications include emergency management, security, helping vulnerable people to live independently, traffic control, warehouse management, and environmental monitoring.

One scenario investigated by European researchers was a road-tunnel fire. With fixed communications destroyed and the tunnel full of smoke, emergency crews would normally struggle to locate the seat of the blaze and people trapped in the tunnel.

Wireless sensors could cut through the chaos by providing the incident control room with information on visibility, temperatures, and the locations of vehicles and people. Firefighters inside the tunnel could then receive maps and instructions through handheld terminals or helmet-mounted displays.

For this vision to become reality, mobile devices have to be capable of forming self-organising wireless networks spanning a wide variety of communications technologies. Developing software tools to make this possible was the task of the RUNES project.

Intelligent networking

‘Ad-hoc’ mobile networks are very different from the wireless computer networks in homes and offices, explains Dr Lesley Hanna, a consultant and dissemination manager for RUNES. Without a human administrator, an ad-hoc network must assemble itself from any devices that happen to be nearby, and adapt as devices move in and out of wireless range. And where office networks use powerful computers with separate routers, the building blocks of ad-hoc mobile networks are low-power devices that must do their own wireless routing, forwarding signals from other devices that would otherwise be out of radio range.
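What "doing their own wireless routing" means in practice can be illustrated with a toy route-discovery sketch; this is a generic breadth-first search over made-up node positions, not the RUNES middleware.

```python
# Toy illustration of multi-hop forwarding in an ad-hoc network
# (generic breadth-first route discovery; node names and positions are made up).
import math
from collections import deque

def neighbours(nodes, radio_range):
    """Which nodes can hear each other directly, given (x, y) positions in metres."""
    links = {name: set() for name in nodes}
    for a, (ax, ay) in nodes.items():
        for b, (bx, by) in nodes.items():
            if a != b and math.hypot(ax - bx, ay - by) <= radio_range:
                links[a].add(b)
    return links

def find_route(links, src, dst):
    """Breadth-first search: the route a flooded route-request would discover."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

# With a 120 m radio range the gateway cannot hear the far sensor directly,
# so its traffic relays through the intermediate motes.
nodes = {"gateway": (0, 0), "mote_a": (100, 10), "mote_b": (190, 0), "sensor": (290, 5)}
links = neighbours(nodes, radio_range=120)
print(find_route(links, "gateway", "sensor"))   # ['gateway', 'mote_a', 'mote_b', 'sensor']
```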

A typical network could contain tens or even hundreds of these ‘embedded systems’, ranging from handheld computers down to ‘motes’: tiny units each equipped with a sensor, a microcontroller and a radio that can be scattered around an area to be monitored. Other devices could be mounted at fixed points, carried by robots, or worn as ‘smart clothing’ or ‘body area networks’.

Wireless standards are not the issue: most mobile devices use common protocols, such as GSM, Wi-Fi, Bluetooth and ZigBee. The real challenge, suggests Hanna, is to build self-managing networks that work reliably on a large scale, with a variety of operating systems and low-power consumption.

Middleware and more

The EU-funded RUNES project (Reconfigurable Ubiquitous Networked Embedded Systems) brought together 21 partners in nine countries. Although RUNES was led by Ericsson, it had an academic bias, with twice as many universities as industrial partners, and most of the resulting software is publicly available.

RUNES set out to create middleware: software that bridges the gap between the operating systems used by the mobile sensor nodes, and high-level applications that make use of data from the sensors. RUNES middleware is modular and flexible, allowing programmers to create applications without having to know much about the detailed working of the network devices supplying the data. This also makes it easy to incorporate new kinds of mobile device, and to re-use applications.

Interoperability was a challenge, partly because embedded systems themselves are so varied. At one end of the spectrum are powerful environments, such as Java, while at the other are simple systems designed for wireless sensors. For devices with small memories, RUNES developed middleware modules that can be uploaded, used to carry out specific tasks, and then overwritten.

Project partners also worked on an operating system and a simulator. Contiki is an open-source operating system designed for networked, embedded systems with small amounts of memory. Simics, a simulator allowing large networks to be tested in ways that are impractical with real hardware, is commercially available from project partner Virtutech.

Taking the plunge

The tunnel fire scenario was valuable in demonstrating what networks of this kind can achieve. Using real sensor nodes, routers, gateways and robots developed during the project, a demonstration setup showed how, for instance, a robot router can manoeuvre itself to cover a gap in the network’s wireless coverage.

“A lot of people have been looking at embedded systems networking, but so far there has been a reluctance to take the plunge commercially,” says Hanna. “RUNES’ open-source model is an excellent way to stimulate progress, and it should generate plenty of consultancy work for the academic partners.”

Adapted from materials provided by ICT Results.

Computers Show How Bats Classify Plants According To Their Echoes

ScienceDaily (Mar. 24, 2008) — Researchers have developed a computer algorithm that can imitate the bat's ability to classify plants using echolocation. The study represents a collaboration between machine learning scientists and biologists studying bat orientation.
To detect plants, bats emit ultrasonic pulses and decipher the various echoes that return. Bats use plants daily as food sources and landmarks for navigation between foraging sites. Plant echoes are highly complex signals due to numerous reflections from leaves and branches. Classifying plants or other intricate objects, therefore, has been considered a troublesome task for bats and the scientific community was far from understanding how they do it.

Now, a research group in Tübingen, Germany, including University of Tübingen researchers Yossi Yovel, Peter Stilz and Hans-Ulrich Schnitzler, and Matthias Franz from the Max Planck Institute of Biological Cybernetics, has demonstrated that this process of plant classification is not as difficult as previously thought.

The group used a sonar system to emit bat-like, frequency-modulated ultrasonic pulses. The researchers recorded thousands of echoes from live plants of five species. An algorithm that uses the time-frequency information of these echoes was able to classify plants with high accuracy. This new algorithm also provides hints toward which echo characteristics might be best understood by the bats.
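In outline, such a pipeline can be sketched as follows; this is a generic time-frequency classification example on synthetic echoes (the signals, parameters and SVM classifier choice are all placeholders), not the published algorithm or data.

```python
# Generic sketch of time-frequency echo classification (synthetic placeholder
# echoes, not the recordings or algorithm from Yovel et al. 2008).
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
FS = 250_000                 # assumed sampling rate for ultrasonic recordings, Hz
N_PER_CLASS, N_CLASSES = 100, 5

def fake_echo(cls):
    """Synthetic stand-in for a plant echo: a class-dependent decaying tone plus noise."""
    t = np.arange(2048) / FS
    tone = np.sin(2 * np.pi * (30_000 + 5_000 * cls) * t) * np.exp(-t * 3_000)
    return tone + 0.5 * rng.normal(size=t.size)

def features(echo):
    """Flatten a log-spectrogram (time-frequency representation) into a feature vector."""
    _, _, sxx = spectrogram(echo, fs=FS, nperseg=256, noverlap=128)
    return np.log(sxx + 1e-12).ravel()

X = np.array([features(fake_echo(c)) for c in range(N_CLASSES) for _ in range(N_PER_CLASS)])
y = np.repeat(np.arange(N_CLASSES), N_PER_CLASS)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```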

According to the group, these results improve our understanding of how bats accomplish this fascinating feat of classification, without having to enter the bat's brain.

Journal reference: Yovel Y, Franz MO, Stilz P, Schnitzler H-U (2008). Plant Classification from Bat-Like Echolocation Signals. PLoS Comput Biol 4(3): e1000032. doi:10.1371/journal.pcbi.1000032

Adapted from materials provided by Public Library of Science, via EurekAlert!, a service of AAAS.

Saturday, March 15, 2008

First Humanoid Robot That Will Develop Language May Be Coming Soon

ScienceDaily (Mar. 4, 2008) — iCub, a one metre-high baby robot which will be used to study how a robot could quickly pick up language skills, will be available next year.
Professor Chrystopher Nehaniv and Professor Kerstin Dautenhahn at the University of Hertfordshire’s School of Computer Science are working with an international consortium led by the University of Plymouth on ITALK (Integration and Transfer of Action and Language Knowledge in Robots), which begins on 1 March.

ITALK aims to teach the robot to speak by employing the same methods used by parents to teach their children. Professor Nehaniv and Professor Dautenhahn, who are European leaders in Artificial Intelligence and Human Robot Interaction, will conduct experiments in human and robot language interaction to enable the robot to converse with humans.

Typical experiments with the iCub robot will include activities such as inserting objects of various shapes into the corresponding holes in a box, serialising nested cups and stacking wooden blocks. Next, the iCub will be asked to name objects and actions so that it acquires basic phrases such as "robot puts stick on cube".

Professor Nehaniv said: “Our approach is that the robot will use what it learns individually and socially from others to bootstrap the acquisition of language, and will use its language abilities in turn to drive its learning of social and manipulative abilities. This creates a positive feedback cycle between using language and developing other cognitive abilities. Like a child learning by imitation of its parents and interacting with the environment around it, the robot will master basic principles of structured grammar, like negation, by using these abilities in context.”

The scientific and technological research developed during the project is expected to have a significant impact, within the next ten years, on the future generation of interactive robotic systems and on Europe's leadership role in this area.

Speaking about the research, Professor Dautenhahn said: “iCub will take us a stage forward in developing robots as social companions. We have studied issues such as how robots should look and how close people will want them to approach and now, within a year, we will have the first humanoid robot capable of developing language skills.”

Adapted from materials provided by University of Hertfordshire.

Mind Over Body: New Hope For Quadriplegics

ScienceDaily (Mar. 12, 2008) — Around 2.5 million people worldwide are wheelchair bound because of spinal injuries. Half of them are quadriplegic, paralysed from the neck down. European researchers are now offering them new hope thanks to groundbreaking technology that uses brain signals alone to control computers, artificial limbs and even wheelchairs.

People left paralysed by spinal injuries or suffering from neurodegenerative diseases could regain a degree of independence thanks to a new type of non-intrusive brain-computer interface, or BCI, developed by the MAIA project.

Using electrical signals emitted by the brain and picked up by electrodes attached to the user’s scalp, the system allows people to operate devices and perform tasks that previously they could only dream of. So far, the team, led by the IDIAP Research Institute in Switzerland, has carried out a series of successful trials in which users have been able to manoeuvre a wheelchair around obstacles and people using brainpower alone.

“We have demonstrated that it is possible for someone to control a complex mechanical device with their minds, and this opens up all sorts of possibilities,” says MAIA coordinator José del R. Millán.

Though BCIs, for people with impaired movement and for other uses, have been under development for many years, they have had varying degrees of success, largely because of the difficulties of turning brain signals into accurate mechanical movement. What sets the EU-funded MAIA system apart is that it does not rely on the human brain alone to do all the work, instead incorporating artificial intelligence into the device being used.

Intelligence meets artificial intelligence

A person using the MAIA BCI to control a wheelchair, for example, only has to think about going straight ahead or turning left and the chair follows their command. However, they do not have to worry about colliding with obstacles – even moving ones such as people – because the wheelchair itself monitors and reacts to its environment.

“A user can tell the chair to go straight ahead, but it will not just randomly roll in that direction if there is a wall or a flight of stairs in the way,” Millán notes. “What we have done is combine the intelligence of the person with the artificial intelligence of the device.”

In a sense, the artificial intelligence embedded in the chair acts much like a human’s subconscious. People, for example, do not consciously send commands to every muscle in each leg in order to walk and do not think where to step to avoid an obstacle – they do it subconsciously. Similarly, a wheelchair-bound user of the MAIA BCI simply has to send the signal to go in a certain direction and the chair figures out how to get there.

But the user always stays in control!
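The division of labour described above can be illustrated with a toy shared-control blend (a sketch of mine, not the MAIA controller): the chair's own obstacle-avoidance command is weighted more heavily as an obstacle gets closer, and the decoded mental command dominates otherwise.

```python
# Toy shared-control blend (illustrative only, not the MAIA controller).
# The decoded mental command is mixed with an obstacle-avoidance command;
# the closer the obstacle, the more the chair's own intelligence intervenes.
def blend_command(user_turn, avoid_turn, obstacle_dist_m,
                  safe_dist_m=2.0, min_dist_m=0.5):
    """Return a steering command in [-1, 1] (negative = left, positive = right).

    user_turn  : steering decoded from the brain-computer interface
    avoid_turn : steering proposed by the obstacle-avoidance layer
    """
    # Autonomy weight rises from 0 (obstacle far away) to 1 (dangerously close).
    if obstacle_dist_m >= safe_dist_m:
        w = 0.0
    elif obstacle_dist_m <= min_dist_m:
        w = 1.0
    else:
        w = (safe_dist_m - obstacle_dist_m) / (safe_dist_m - min_dist_m)
    return (1 - w) * user_turn + w * avoid_turn

# User thinks "straight ahead" (0.0) while a wall 0.8 m away suggests a right turn.
print(blend_command(user_turn=0.0, avoid_turn=0.6, obstacle_dist_m=0.8))  # about 0.48
```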

Keeping the user in control

“We wanted to see how much of the movement was down to the user’s brain signals and how much was due to the intelligence of the chair. It turned out that the wheelchair intervened between 10 and 40 percent of the time depending on the user and the environment.

“In one demonstration in which someone was manoeuvring the chair for six hours, the computer intelligence kicked in more frequently later on as the person became increasingly tired and made more mistakes,” Millán says.

Importantly, the chair can recognise from the user’s brain signals if it has made a mistake, and, through tactile devices similar to the vibrators used in mobile phones, it can send feedback to users about the direction they are going that enhances their sense of awareness beyond the visual.

Millán notes that the same technology could be applied to artificial limbs to allow quadriplegics to pick up objects or unlock a door. By using the BCI to interact with computer systems, meanwhile, they could control the lighting in their homes, surf the internet, or change the channels on the TV. Those simpler brain-computer interactions, which have the potential to become the basis for commercial systems sooner, will be the focus of a follow-up EU project called TOBI that is due to begin in September and which will also be led by Millán.

“For a wheelchair, such as the one developed in MAIA, to reach the market would take extensive trials to prove that the technology is robust enough. We can’t have it breaking down when someone is in the middle of the street,” Millán notes.

Carrying out such validation trials remains a goal of the project partners who are actively seeking further funding and investment to continue their work.

Adapted from materials provided by ICT Results.

Ethanol Imports From Latin America May Help US Meet Energy Goals

ScienceDaily (Mar. 5, 2008) — Latin American nations could become important suppliers of ethanol for world markets in coming decades, according to an Oak Ridge National Laboratory study released recently.
The ORNL study* highlights the importance of Brazil's dynamic sugarcane industry in future world trade in fuel ethanol.

A team of ORNL researchers led by Keith Kline and Gbadebo Oladosu projected that Brazil, Argentina, Colombia and members of the Caribbean Basin Initiative could produce sufficient feedstock for more than 30 billion gallons of ethanol per year by 2017, which would represent a six-fold increase over current production. Nearly 40 percent of the projected supply in 2017 is based on the potential to use new technology to produce advanced biofuels from cellulosic feedstock using crop residues and forestry byproducts.

"Current feedstock production, based on traditional crops such as sugarcane, soybeans and palm oil, has the potential to double or triple by 2017 in some cases," said Oladosu, the lead economist for the study. "Supply growth is derived from increasing the area cultivated, supplemented by improving yields and farming practices."

Although it was not a focus for this research, the researchers highlighted implications for potential land use change.

The ORNL report assembles historic data on feedstock production for multiple countries and crops and calculates future production and the potential supplies available for export. Included in the report are detailed graphs, tables and disaggregated data for feedstock supplies under a range of future growth possibilities.

"The supply projections provide analysts and policymakers with better data on which to base decisions," Kline said. "The potential for future biofuel feedstock production in Latin America offers interesting opportunities for the U.S. and developing nations."

Results suggest that an increasing portion of U.S. fossil fuel imports that now arrive from distant nations in Africa, the Middle East and Asia could be replaced by renewable biofuels from neighbors in the Americas.

Paul Leiby, an ORNL expert on energy security, notes that ethanol from trading partners in this hemisphere could offer many mutual benefits: more reliable and diversified U.S. fuel supply, improved rural livelihoods in Latin America, reductions in greenhouse gas emissions and the expanded availability of biofuel in many urban markets via delivery at coastal ports.

"Biofuel imports complement domestic biofuel production and diminish reliance on oil, the price of which is unstable and strongly influenced by the OPEC cartel," Leiby said. "Even if imported, biofuels can improve our energy security by reducing oil imports and expanding our base of independent fuel sources. Best of all, American consumers could pay less at the pump during energy emergencies."

ORNL's study focuses on assessing future potential for feedstock production in Argentina, Brazil, Canada, China, Colombia, India, Mexico and the Caribbean Basin region. Countries were selected based on their potential to impact world biofuel markets, proximity to the U.S. and other criteria. The research team hopes to expand the analysis to include additional nations in Asia and Africa over the coming year.

The report, available at the website noted below, provides supply curves for selected countries and feedstocks projected to 2012, 2017 and 2027. Highlights include:

* If the total projected feedstock supply calculated as "available" for export or biofuel in 2017 from these countries were converted to biofuel, it would represent the equivalent of about 38 billion gallons of gasoline.
* Sugarcane and bagasse, the solid residue after juices are pressed from the sugarcane stalk, form the bulk of potential future feedstock supplies, representing about two-thirds of the total available for export or biofuel in 2017.
* Soybeans are next in importance in terms of available supply potential in 2017, representing about 18 percent of the total.
* Most future supplies of corn and wheat are projected to be allocated to food and feed and would not be available for biofuels. Canada may be an exception because government programs will likely cause these crops to be used as feedstock to meet its domestic biofuel targets over the coming decade.
* In the various countries assessed, recent changes in national policies and laws are catalyzing investments in biofuel industries to meet targets for fuel blending that generally fall in the 5 percent to 10 percent range.
* Social and environmental concerns associated with the expansion of feedstock production are considered in the report, including land availability and efforts to establish systems for certification of sustainable production.
* Sugarcane dominates potential supply among the crops studied, while bagasse -- the crushed stalk residue from sugarcane processing -- and forest industry residues are the principal sources among potential cellulosic supplies.

*The report, "Biofuel Feedstock Assessment for Selected Countries," presents findings from research conducted in support of a larger study of "Worldwide Potential to Produce Biofuels with a focus on U.S. Imports" by the Department of Energy.

Also contributing to the ORNL report were Bob Perlack, Amy Wolfe and Virginia Dale of ORNL's Environmental Sciences Division and Matt McMahon, a summer intern from Appalachian State University.

This research was jointly funded by DOE's Office of Policy and International Affairs and the Office of Energy Efficiency and Renewable Energy, Biomass Programs. UT-Battelle manages Oak Ridge National Laboratory for the Department of Energy.

Adapted from materials provided by DOE/Oak Ridge National Laboratory.

Nuclear Fuel Performance Milestone Achieved

ScienceDaily (Mar. 13, 2008) — Researchers at the U.S. Department of Energy's Idaho National Laboratory, in partnership with three other science and engineering powerhouses, reached a major domestic milestone relating to nuclear fuel performance on March 8.
David Petti, Sc.D., technical director for the INL research, says the team used reverse engineering methods to help turn the fuel test failures of the early 1990s into successes in 2008. "We wanted to close this loop for the high-temperature gas reactor fuels community," he said. "We wanted to put more science into the tests and take the process and demonstrate its success."

The research is key in supporting reactor licensing and operation for high-temperature reactors such as the Next Generation Nuclear Plant and similar reactors envisioned for subsequent commercial energy production.

"Hats off to the R&D fuels team on this major milestone," said Greg Gibbs, Next Generation Nuclear Plant Project director. "This is a major accomplishment in demonstrating TRISO fuel safety. This brings us one step closer to licensing a commercially-capable, high-temperature gas reactor that will be essentially emission free, help curb the rising cost of energy and help to achieve energy security for our country."

The work is a team effort of more than 40 people from INL, The Babcock & Wilcox Company, General Atomics and Oak Ridge National Laboratory.

"I salute the team effort that made the research the success it is today," said David Hill, INL deputy laboratory director for Science and Technology. "I saw the research start while I was part of the ORNL team, and to see it succeed today is hugely satisfying and a tribute to everyone involved."

The team has now set its sights on reaching its next major milestone -- achievement of a 12-14 percent burnup* expected later this calendar year.

Research details

The research to improve the performance of coated-particle nuclear fuel met an important milestone by reaching a burnup of 9 percent without any fuel failure. Raising the burnup level of fuel in a nuclear reactor reduces the amount of fuel required to produce a given amount of energy, reduces the volume of used fuel generated, and improves the overall economics of the reactor system.

After U.S. coated-particle fuel performance difficulties in the 1990s and a shift in national priorities, research on this type of fuel was curtailed for a time. Funding for the research resumed in 2003 as part of the DOE Advanced Gas Reactor fuel development and qualification program.

The team studied the very successful technology developed by the Germans for this fuel in the 1980s and decided to make the carbon and silicon carbide layers of the U.S. particle coatings more closely resemble the German model. The changes resulted in success that has matched the historical German level.

INL's Advanced Test Reactor (ATR) was a key enabler of the successful research. The ATR was used to irradiate and heat the fuel so that researchers could observe its response. Each fuel kernel is coated with layers of carbon and silicon compounds. The resulting microspheres are packed into compacts one-half inch wide by two inches long, which are then placed in graphite inside the reactor for testing. The fuel element is closely monitored while inside the test reactor to track its behavior.

*Burnup is a measure of the neutron irradiation of the fuel. Higher burnup allows more of the fissile 235U, and of the plutonium bred from the 238U, to be utilized, reducing the uranium requirements of the fuel cycle.
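
As a rough, illustrative conversion (ours, not INL's): if the 9 percent figure is read as the fraction of initial heavy-metal atoms fissioned (FIMA), as is customary for TRISO particle fuel, and each fission is assumed to release about 200 MeV, the equivalent energy per tonne of heavy metal is approximately

\[
0.09 \times \frac{10^{6}\,\mathrm{g}}{238\,\mathrm{g\,mol^{-1}}} \times 6.02\times10^{23}\,\mathrm{mol^{-1}} \times 200\,\mathrm{MeV}
\;\approx\; 7.3\times10^{15}\,\mathrm{J}
\;\approx\; 85\ \mathrm{GWd\ per\ tonne\ of\ heavy\ metal}.
\]

This is a back-of-the-envelope sketch for orientation only; the INL release does not quote burnup in energy-per-mass units.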

Adapted from materials provided by DOE/Idaho National Laboratory, via EurekAlert!, a service of AAAS.

Toward A Healthier Food For Fido: Corn Provides Promising Fiber Alternative

ScienceDaily (Feb. 27, 2008) — In addition to helping fill gasoline tanks with alcohol-based fuel, corn may have a new role in filling Fido's bowl with more healthful food, nutritional biochemists in Illinois are reporting. They found that corn fiber shows promise as a more economical and healthier ingredient in dog food than some of the fibers now in use.
George Fahey and colleagues point out that the fiber content of dog food varies widely and is often of inferior quality. Many dog foods use fiber from sugar beet pulp. Corn fiber -- available in large amounts as a byproduct of ethanol production -- is an attractive alternative. However, researchers have little information on corn fiber's effects in dogs.

In the new study, researchers studied digestion, food intake, and fecal characteristics in dogs fed either a special food containing corn fiber or a standard food containing beet fiber. Substituting corn fiber for beet fiber "does not dramatically impact nutrient digestibility, food intake, or fecal production and characteristics," the researchers say. Corn fiber should therefore be considered a promising fiber alternative for use in dog food, they note. Previous studies suggest that corn fiber in animal food could have beneficial effects in reducing risks of obesity and diabetes.

The study "Chemical Composition, in Vitro Fermentation Characteristics, and in Vivo Digestibility Responses by Dogs to Select Corn Fibers" is scheduled for the March 26 issue of ACS' Journal of Agricultural and Food Chemistry. doi: 10.1021/jf073073b

Adapted from materials provided by American Chemical Society, via EurekAlert!, a service of AAAS.

Fight Against Obesity: Increase Cells' Energy Consumption With Mitochondrial Uncoupling?

ScienceDaily (Mar. 15, 2008) — With obesity still on the increase, it appears that the main weapon in the fight against it - reducing energy intake by eating less - is ineffective. There is an evident need for new treatment strategies that address the opposite side of the energy balance: increasing energy expenditure. Researchers at Maastricht University have now found a way to increase cells' energy consumption: mitochondrial uncoupling.

PhD candidate Sander Wijers and his colleagues Patrick Schrauwen, Prof. Wim Saris and Wouter van Marken Lichtenbelt have shown that this process occurs naturally in human skeletal muscle cells during exposure to mild cold. They carried out muscle biopsies on 11 lean, healthy male subjects under both normal and mildly cold conditions. Their results could lead to the development of drugs that stimulate mitochondrial uncoupling, and thus contribute to obesity treatment.

Fats and sugars are broken down in the mitochondria, the energy factories of the cells. ATP - the energy source used, for example, when muscles contract, and for many other cellular processes - is formed using the energy released in this breakdown. In some cases, such as during exposure to cold, not all of the energy released from sugars and fats is used to produce ATP; part of it is dissipated as heat, reducing the ATP available for cellular processes. This phenomenon is called mitochondrial uncoupling. Fats and sugars are still broken down in uncoupled mitochondria, but the energy released is not entirely used for cellular processes. More energy is therefore required to carry out the same physical functions.
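
A simple energy-balance sketch (an illustration of the reasoning above, not a result from the study) makes the point explicit. If the chemical energy released by oxidizing fats and sugars is split between ATP synthesis and heat,

\[
E_{\text{substrate}} = E_{\text{ATP}} + E_{\text{heat}} = \frac{E_{\text{ATP}}}{\eta}, \qquad 0 < \eta \le 1,
\]

where \(\eta\) is the fraction of released energy captured as ATP, then uncoupling lowers \(\eta\), so more substrate must be oxidized to meet the same ATP demand, with the surplus dissipated as heat.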

Further genomic and proteomic research is required to identify the proteins responsible for uncoupling in skeletal muscle mitochondria. The animal proteins UCP1, UCP2, UCP4 and UCP5 detected in tests appear not to exist in human muscle tissue. And although UCP3 is found in human muscles, it seems to be involved primarily in fatty acid metabolism, not in mitochondrial uncoupling.

Citation: Wijers SLJ, Schrauwen P, Saris WHM, van Marken Lichtenbelt WD (2008) Human Skeletal Muscle Mitochondrial Uncoupling Is Associated with Cold Induced Adaptive Thermogenesis. PLoS One 3(3): e1777. doi:10.1371/journal.pone.0001777 http://www.plosone.org/doi/pone.0001777

Adapted from materials provided by Public Library of Science, via EurekAlert!, a service of AAAS.

Coolest Winter Since 2001 For U.S., Globe, According To NOAA Data

ScienceDaily (Mar. 15, 2008) — The average temperature across both the contiguous U.S. and the globe during climatological winter (December 2007-February 2008) was the coolest since 2001, according to scientists at NOAA’s National Climatic Data Center in Asheville, N.C. In terms of winter precipitation, Pacific storms, bringing heavy precipitation to large parts of the West, produced high snowpack that will provide welcome runoff this spring.
U.S. Winter Temperature Highlights

In the contiguous United States, the average winter temperature was 33.2°F (0.6°C), which was 0.2°F (0.1°C) above the 20th century average – yet still ranks as the coolest since 2001. It was the 54th coolest winter since national records began in 1895.

Winter temperatures were warmer than average from Texas to the Southeast and along the Eastern Seaboard, while cooler-than-average temperatures stretched from much of the upper Midwest to the West Coast.

With higher-than-average temperatures in the Northeast and South, the contiguous U.S. winter temperature-related energy demand was approximately 1.7 percent lower than average, based on NOAA’s Residential Energy Demand Temperature Index.

U.S. Winter Precipitation Highlights

Winter precipitation was much above average from the Midwest to parts of the West, notably Kansas, Colorado and Utah. Although moderate-to-strong La Niña conditions were present in the equatorial Pacific, the winter was unusual for the above-average rain and snowfall in the Southwest, where La Niña typically brings drier-than-average conditions.

During January alone, 170 inches of snow fell at the Alta ski area near Salt Lake City, Utah, more than twice the normal amount for the month, eclipsing the previous record of 168 inches that fell in 1967. At the end of February, seasonal precipitation for the 2008 Water Year, which began on October 1, 2007, was well above average over much of the West.

Mountain snowpack exceeded 150 percent of average in large parts of Colorado, New Mexico, Arizona, and Oregon at the end of February. Spring runoff from the above-average snowpack in the West is expected to be beneficial in drought-plagued areas.

Record February precipitation in the Northeast helped make the winter the fifth wettest on record for the region. New York had its wettest winter, while Pennsylvania, Connecticut, Vermont and, farther west, Colorado had their second wettest.

Snowfall was above normal in northern New England, where some locations posted all-time record winter snow totals. Concord, N.H., received 100.1 inches, which was 22.1 inches above the previous record set during the winter of 1886-87. Burlington, Vt., received 103.2 inches, which was 6.3 inches above the previous record set during the winter of 1970-71.

While some areas of the Southeast were wetter than average during the winter, overall precipitation for the region was near average. At the end of February, two-thirds of the Southeast remained in some stage of drought, with more than 25 percent in extreme-to-exceptional drought.

Drought conditions intensified in Texas with areas experiencing drought almost doubling from 25 percent at the end of January to 45 percent at the end of February.

Global Highlights

The combined global land and ocean surface temperature was the 16th warmest on record for the December 2007-February 2008 period (0.58°F/0.32°C above the 20th century mean of 53.8°F/12.1°C). The presence of a moderate-to-strong La Niña contributed to an average temperature that was the coolest since the La Niña episode of 2000-2001.

While analyses of the causes of the severe winter storms in southern China continue, NOAA Earth System Research Laboratory scientists are focusing on the presence of unusually strong, persistent high pressure over Eastern Europe, combined with low pressure over Southwest Asia. This pattern directed a series of storms across the region, while northerly low-level flow introduced cold air from Mongolia. Unusually high water temperatures in the China Sea may have supplied additional moisture that enhanced the severity of these storms.

Record Northern Hemisphere snow cover extent in January was followed by above average snow cover for the month of February. Unusually high temperatures across much of the mid- and high-latitude areas of the Northern Hemisphere in February began reducing the snow cover, and by the end of February, snow cover extent was below average in many parts of the hemisphere.

While there has been little trend in snow cover extent during the winter season since records began in the late 1960s, spring snow cover extent has been sharply lower in the past two decades as global temperatures have increased.

February Temperature Highlights

February was the 61st warmest on record in the contiguous U.S. and the 15th warmest globally. For the U.S., the temperature was near average, 0.2°F (0.1°C) above the 20th century average of 34.7°F (1.5°C), and 2.0°F (1.1°C) warmer than February 2007.

Globally, the February average temperature was 0.68°F/0.38°C above the 20th century mean of 53.8°F/12.1°C.
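
A note on the paired Fahrenheit/Celsius values quoted throughout this report (our reading of the unit conventions, not part of the NOAA release): temperature anomalies convert with the 5/9 scale factor alone, while absolute temperatures also subtract the 32-degree offset. Checking the February global figures,

\[
\Delta T_{\mathrm{C}} = \tfrac{5}{9}\,\Delta T_{\mathrm{F}} = \tfrac{5}{9} \times 0.68 \approx 0.38\,^{\circ}\mathrm{C},
\qquad
T_{\mathrm{C}} = \tfrac{5}{9}\,(T_{\mathrm{F}} - 32) = \tfrac{5}{9} \times (53.8 - 32) \approx 12.1\,^{\circ}\mathrm{C},
\]

consistent with the values above.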

Adapted from materials provided by National Oceanic And Atmospheric Administration.

Two-Dimensional High-Temperature Superconductor Discovered

ScienceDaily (Mar. 14, 2008) — Scientists at Brookhaven Lab have discovered a state of two-dimensional (2D) fluctuating superconductivity in a high-temperature superconductor with a particular arrangement of electrical charges known as "stripes."
The finding was uncovered during studies of directional dependence in the material's electron-transport and magnetic properties. In the 2D plane, the material acts as a superconductor - conducts electricity with no resistance - at a significantly higher temperature than in the 3D state.

"The results provide many insights into the interplay between the stripe order and superconductivity, which may shed light on the mechanism underlying high-temperature superconductivity," said Brookhaven physicist Qiang Li.

Understanding the mechanism of high-temperature superconductivity is one of the outstanding scientific issues in condensed matter physics, Li said. Understanding this mechanism could lead to new strategies for increasing the superconducting transition temperature of other superconductors to make them more practical for applications such as electrical transmission lines.

"As electricity demand increases, the challenge to the national electricity grid to provide reliable power will soon grow to crisis levels," Li said. "Superconductors offer powerful opportunities for restoring the reliability of the power grid and increasing its capacity and efficiency by providing reactive power reserves against blackouts, and by generating and transmitting electricity."

This research was presented at the March 2008 American Physical Society meeting in New Orleans, La., March 10-14.

Adapted from materials provided by DOE/Brookhaven National Laboratory.

Meditation Can Lower Blood Pressure, Study Shows

ScienceDaily (Mar. 15, 2008) — Transcendental Meditation is an effective treatment for controlling high blood pressure with the added benefit of bypassing possible side effects and hazards of anti-hypertension drugs, according to a new meta-analysis conducted at the University of Kentucky.
The meta-analysis evaluated nine randomized, controlled trials using Transcendental Meditation as a primary intervention for hypertensive patients. The practice of Transcendental Meditation was associated with reductions of approximately 4.7 mm Hg in systolic blood pressure and 3.2 mm Hg in diastolic blood pressure.

The study's lead author, Dr. James W. Anderson, professor of medicine at the University of Kentucky College of Medicine, said that blood pressure reductions of this magnitude would be expected to be accompanied by significant reductions in risk for atherosclerotic cardiovascular disease—without drug side effects. Anderson's most recent findings reinforce an earlier study that found Transcendental Meditation produces a statistically significant reduction in high blood pressure that was not found with other forms of relaxation, meditation, biofeedback or stress management.

"Adding Transcendental Medication is about equivalent to adding a second antihypertension agent to one's current regimen only safer and less troublesome," Anderson said.

The Centers for Disease Control and Prevention (CDC) estimates that 1 out of 3 American adults have high blood pressure. Having high blood pressure increases one's chances of developing heart disease, stroke, congestive heart failure and kidney disease.

The study appears in the March issue of the American Journal of Hypertension.

Adapted from materials provided by University of Kentucky.