About us:
Our company is KNK DECORATION. We provide sticker services such as sticker cutting, digital printing, screen printing, neon boxes, sign making, and other promotional materials.
We handle cutting-sticker work, indoor/outdoor printing, neon boxes, traffic sign fabrication (DLLAJR standard), and more.

Friday, February 22, 2008

Carbon Capture Strategy Could Lead To Emission-free Cars

ScienceDaily (Feb. 14, 2008) — Researchers at the Georgia Institute of Technology have developed a strategy to capture, store and eventually recycle carbon from vehicles to prevent the pollutant from finding its way from a car tailpipe into the atmosphere. Georgia Tech researchers envision a zero emission car, and a transportation system completely free of fossil fuels.
Technologies to capture carbon dioxide emissions from large-scale sources such as power plants have recently gained some impressive scientific ground, but nearly two-thirds of global carbon emissions are created by much smaller polluters — automobiles, transportation vehicles and distributed industrial power generation applications (e.g., diesel power generators).

The Georgia Tech team’s goal is to create a sustainable transportation system that uses a liquid fuel and traps the carbon emission in the vehicle for later processing at a fueling station. The carbon would then be shuttled back to a processing plant where it could be transformed into liquid fuel. Currently, Georgia Tech researchers are developing a fuel processing device to separate the carbon and store it in the vehicle in liquid form.

“Presently, we have an unsustainable carbon-based economy with several severe limitations, including a limited supply of fossil fuels, high cost and carbon dioxide pollution,” said Andrei Fedorov, associate professor in the Woodruff School of Mechanical Engineering at Georgia Tech and a lead researcher on the project. “We wanted to create a practical and sustainable energy strategy for automobiles that could solve each of those limitations, eventually using renewable energy sources and in an environmentally conscious way.”

Little research has been done to explore carbon capture from vehicles, but the Georgia Tech team outlines an economically feasible strategy for processing fossil or synthetic carbon-containing liquid fuels that allows for the capture and recycling of carbon at the point of emission. In the long term, this strategy would enable the development of a sustainable transportation system with no carbon emissions.

Georgia Tech’s near-future strategy involves capturing carbon emissions from conventional (fossil) liquid hydrocarbon-fueled vehicles with an onboard fuel processor designed to separate the hydrogen in the fuel from the carbon. Hydrogen is then used to power the vehicle, while the carbon is stored on board the vehicle in liquid form until it is disposed of at a refueling station. It is then transported to a centralized site to be sequestered permanently, using one of the options scientists are currently investigating, such as storage in geological formations, under the oceans or in solid carbonate form.

In the long-term strategy, the carbon dioxide would be recycled, forming a closed-loop system involving the synthesis of a high-energy-density liquid fuel suitable for the transportation sector.

Georgia Tech settled on a hydrogen-fueled vehicle for its carbon capture plan because pure hydrogen produces no carbon emissions when it is used as a fuel to power the vehicle. The fuel processor produces the hydrogen on-board the vehicle from the hydrocarbon fuel without introducing air into the process, resulting in an enriched carbon byproduct that can be captured with minimal energetic penalty. Traditional combustion systems, including current gasoline-powered automobiles, have a combustion process that combines fuel and air — leaving the carbon dioxide emissions highly diluted and very difficult to capture.

“We had to look for a system that never dilutes fuel with air because once the CO2 is diluted, it is not practical to capture it on vehicles or other small systems,” said David Damm, PhD candidate in the School of Mechanical Engineering, the lead author on the paper and Fedorov’s collaborator on the project.

The Georgia Tech team compared the proposed system with other systems that are currently being considered, focusing on the logistic and economic challenges of adopting them on a global scale. In particular, electric vehicles could be part of a long-term solution to carbon emissions, but the team raised concerns about the limits of battery technology, including capacity and charging time.

The hydrogen economy presents yet another possible solution to carbon emissions but also yet another roadblock — infrastructure. While liquid-based hydrogen carriers could be conveniently transported and stored using existing fuel infrastructure, the distribution of gaseous hydrogen would require the creation of a new and costly infrastructure of pipelines, tanks and filling stations.

The Georgia Tech team has already created a fuel processor, called the CO2/H2 Active Membrane Piston (CHAMP) reactor, capable of efficiently producing hydrogen and separating and liquefying CO2 from a liquid hydrocarbon or synthetic fuel used by an internal combustion engine or fuel cell. After the carbon dioxide is separated from the hydrogen, it can be stored in a liquefied state on board the vehicle. The liquid state provides a much more stable and dense form of carbon, which is easy to store and transport.

The Georgia Tech paper also details the subsequent long-term strategy to create a truly sustainable system, including moving past carbon sequestration and into a method to recycle the captured carbon back into fuel. Once captured on-board the vehicle, the liquid carbon dioxide is deposited back at the fueling station and piped back to a facility where it is converted into a synthetic liquid fuel to complete the cycle.

Now that the Georgia Tech team has come up with a proposed system and device to produce hydrogen and, at the same time, capture carbon emissions, the greatest remaining challenge to a truly carbon-free transportation system will be developing a method for making a synthetic liquid fuel from just CO2 and water using renewable energy sources, Fedorov said. The team is exploring a few ideas in this area, he added.

The research was published in Energy Conversion and Management and was funded by NASA, the U.S. Department of Defense NDSEG Fellowship Program and Georgia Tech’s CEO (Creating Energy Options) Program.

Adapted from materials provided by Georgia Institute of Technology.

Solar Cell Directly Splits Water To Produce Recoverable Hydrogen

ScienceDaily (Feb. 19, 2008) — Plants, trees and algae do it. Even some bacteria and moss do it. But scientists have had a difficult time developing methods to turn sunlight into useful fuel. Now, Penn State researchers have a proof-of-concept device that can split water and produce recoverable hydrogen.

"This is a proof-of-concept system that is very inefficient. But ultimately, catalytic systems with 10 to 15 percent solar conversion efficiency might be achievable," says Thomas E. Mallouk, the DuPont Professor of Materials Chemistry and Physics. "If this could be realized, water photolysis would provide a clean source of hydrogen fuel from water and sunlight."

Although solar cells can now produce electricity from visible light at efficiencies of greater than 10 percent, solar hydrogen cells -- like those developed by Craig Grimes, professor of electrical engineering at Penn State -- have been limited by the poor spectral response of the semiconductors used. In principle, molecular light absorbers can use more of the visible spectrum in a process that would mimic natural photosynthesis. Photosynthesis uses chlorophyll and other dye molecules to absorb visible light.

So far, experiments with natural and synthetic dye molecules have produced either hydrogen or oxygen by using chemicals that are consumed in the process, but have not yet created an ongoing, continuous cycle. Those processes also generally would cost more than splitting water with electricity. One reason for the difficulty is that once produced, hydrogen and oxygen easily recombine. The catalysts that have been used to study the oxygen and hydrogen half-reactions are also good catalysts for the recombination reaction.

Mallouk and W. Justin Youngblood, postdoctoral fellow in chemistry, together with collaborators at Arizona State University, developed a catalyst system that, combined with a dye, can mimic the electron transfer and water oxidation processes that occur in plants during photosynthesis. They reported the results of their experiments at the annual meeting of the American Association for the Advancement of Science, Feb. 17 in Boston.

The key to their process is a tiny complex of molecules with a center catalyst of iridium oxide molecules surrounded by orange-red dye molecules. These clusters are about 2 nanometers in diameter with the catalyst and dye components approximately the same size. The researchers chose orange-red dye because it absorbs sunlight in the blue range, which has the most energy. The dye used has also been thoroughly studied in previous artificial photosynthesis experiments.

They space the dye molecules around the center core leaving surface area on the catalyst for the reaction. When visible light strikes the dye, the energy excites electrons in the dye, which, with the help of the catalyst, can split the water molecule, creating free oxygen.

"Each surface iridium atom can cycle through the water oxidation reaction about 50 times per second," says Mallouk. "That is about three orders of magnitude faster than the next best synthetic catalysts, and comparable to the turnover rate of Photosystem II in green plant photosynthesis." Photosystem II is the protein complex in plants that oxidizes water and starts the photosynthetic process.

The researchers impregnated a titanium dioxide electrode with the catalyst complex for the anode and used a platinum cathode. They immersed the electrodes in a salt solution, but separated them from each other to avoid the problem of the hydrogen and oxygen recombining. Light need only shine on the dye-sensitized titanium dioxide anode for the system to work. This type of cell is similar to those that produce electricity, but the addition of the catalyst allows the reaction to split the water into its component gases.

The water splitting requires 1.23 volts, and the current experimental configuration cannot quite achieve that level, so the researchers add about 0.3 volts from an outside source. Their current system achieves an efficiency of about 0.3 percent.
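The voltage and efficiency figures can be tied together with the standard applied-bias photon-to-current efficiency formula for photoelectrochemical cells. This is only a back-of-the-envelope sketch: the photocurrent density below is an assumed, illustrative number, not a value reported by the Penn State team.

```python
# Back-of-the-envelope applied-bias photon-to-current efficiency (ABPE)
# for a water-splitting cell. The photocurrent density is an ASSUMED,
# illustrative value, not a figure from the Penn State experiment.

E_WATER_SPLIT = 1.23   # V, thermodynamic potential needed to split water
V_BIAS = 0.3           # V, external bias quoted in the article
P_SUN = 100.0          # mW/cm^2, standard 1-sun illumination

def abpe(j_photo_ma_per_cm2: float) -> float:
    """Solar-to-hydrogen efficiency (%) under an applied external bias."""
    return j_photo_ma_per_cm2 * (E_WATER_SPLIT - V_BIAS) / P_SUN * 100

# A photocurrent near 0.32 mA/cm^2 would reproduce the roughly
# 0.3 percent efficiency quoted for the current system.
print(round(abpe(0.32), 2))  # -> 0.3
```

Under this formula, higher efficiency comes from raising the photocurrent (better dye and catalyst) or lowering the external bias, which is exactly the direction of the improvements the researchers describe.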

"Nature is only 1 to 3 percent efficient with photosynthesis," says Mallouk, "which is why you cannot expect the clippings from your lawn to power your house and your car. We would like not to have to use all the land area that is used for agriculture to get the energy we need from solar cells."

The researchers have a variety of approaches to improve the process. They plan to investigate improving the efficiency of the dye, improving the catalyst and adjusting the overall geometry of the system. For example, rather than spherical dye-catalyst complexes, a different geometry that keeps more of the reacting area exposed to the sun and the reactants might work better.

"At every branch in the process, there is a choice," says Mallouk. "The question is how to get the electrons to stay in the proper path and not, for example, release their energy and go down to ground state without doing any work."

The distance between molecules is important in controlling the rate of electron transfer and getting the electrons where they need to go. By shortening some of the distances and making others longer, more of the electrons would take the proper path and put their energy to work splitting water and producing hydrogen.

The U.S. Department of Energy supported this research.

Adapted from materials provided by Penn State.

'NMR On A Chip' Features NIST Magnetic Mini-sensor

ScienceDaily (Feb. 19, 2008) — A super-sensitive mini-sensor developed at the National Institute of Standards and Technology (NIST) can detect nuclear magnetic resonance (NMR) in tiny samples of fluids flowing through a novel microchip. The prototype chip device, developed in a collaboration between NIST and the University of California, Berkeley, may have wide application as a sensitive chemical analyzer, for example in rapid screening to find new drugs.
The NMR chip detected magnetic signals from atomic nuclei in tap water flowing through a custom silicon chip that juxtaposes a tiny fluid channel and the NIST sensor. The Berkeley group recently co-developed this "remote NMR" technique for tracking small volumes of fluid or gas flowing inside soft materials such as biological tissue or porous rock, for possible applications in industrial processes and oil exploration. The chip could be used in NMR spectroscopy, a widely used technique for determining physical, chemical, electronic and structural information about molecules. NMR signals are equivalent to those detected in MRI (magnetic resonance imaging) systems.

Berkeley scientists selected the NIST sensor, a type of atomic magnetometer, for the chip device because of its small size and high sensitivity, which make it possible to detect weak magnetic resonance signals from a small sample of atoms in the adjacent microchannel. Detection is most efficient when the sensor and sample are about the same size and located close together, lead author Micah Ledbetter says. Thus, when samples are minute, as in economical screening of many chemicals, a small sensor is crucial, Ledbetter says.

Its small size and extreme sensitivity make the NIST sensor ideal for the microchip device, in contrast to SQUIDs (superconducting quantum interference devices) that require bulky equipment for cooling to cryogenic temperatures or conventional copper coils that need much higher magnetic fields (typically generated by large, superconducting magnets) like those in traditional MRI.

The results reported in PNAS* demonstrate another use for the NIST mini-sensor, a spin-off of NIST's miniature atomic clocks. The sensor has already been shown to have biomedical imaging applications.

*Journal reference: M.P. Ledbetter, I.M. Savukov, D. Budker, V. Shah, S. Knappe, J. Kitching, D. Michalak, S. Xu, and A. Pines. Zero-field remote detection of NMR with a microfabricated atomic magnetometer. Proceedings of the National Academy of Sciences, posted online Feb. 6, 2008.

The principal investigator for the study described in PNAS is Alexander Pines, a leading authority on NMR. A joint university/NIST patent application is being filed for the microchip device. The research was supported by the Office of Naval Research, U.S. Department of Energy, a CalSpace Minigrant and the Defense Advanced Research Projects Agency.

Adapted from materials provided by National Institute of Standards and Technology.

Immune System Reactivated In Adults With HIV: Thymus Producing New T-cells

ScienceDaily (Feb. 22, 2008) — Scientists at the Gladstone Institute of Virology and Immunology and the University of California, San Francisco (UCSF) have found that therapy can be used to stimulate the production of vital immune cells, called "T-cells," in adults with HIV infection.
HIV disease destroys T-cells, leading to collapse of the immune system and severe infection. The thymus gland, which produces T-cells, gradually loses function over time (a process called "involution") and becomes mostly inactive during adulthood. Because the thymus gland does not function well in adults, it is difficult for HIV-infected adults to make new T-cells. Thus, therapies that stimulate the thymus to produce new T-cells could help HIV-infected patients to rebuild their embattled immune systems.

Although it has been long assumed that the thymus cannot be reactivated in humans, new research shows that the thymus can be stimulated to produce more T-cells. This study is the first to show that pharmacologic therapies can be used to enhance human thymic function.

"These results represent new proof-of-principle findings that thymic involution can be reversed in humans" said Laura Napolitano, MD, lead author of the study, an Assistant Investigator at Gladstone and Assistant Professor of Medicine at UCSF. "Improved T-cell production may be helpful for some medical conditions such as HIV disease or bone marrow transplantation. These findings contribute new information to our understanding of T-cell production and are also an important step to determine whether immune therapies might someday benefit patients who need more T-cells."

Based on promising animal studies suggesting that growth hormone (GH) enhances thymic function in aged mice, Gladstone and UCSF investigators conducted a prospective randomized research study that yielded an exciting observation: GH increased thymic mass and T-cells in humans.

The investigators studied 22 HIV-infected adults for 2 years. One half of study participants were randomly assigned to continue their usual HIV therapy and to receive GH in the first year ("GH Arm"), and the other half continued their usual HIV therapy without GH treatment ("Control Arm"). In the second year of the study, Control Arm participants received GH, and GH Arm participants were studied off GH. Immune analyses were performed regularly in all study participants. The thymus was assessed by computed tomography (CT) scans, and the numbers and types of immune cells in the blood were determined by an advanced method called multiparameter flow cytometry.

All study participants had been receiving effective HIV therapy for at least one year (average duration of HIV therapy was approximately 3 years) with good suppression of the virus. Despite effective therapy, they still had an unusually low number of "CD4" T-cells, a type of T cell that is essential for normal immune function. At the start of the study, the patients in the two arms did not differ in average duration of effective HIV therapy, amount of HIV in the blood, age, thymic mass or in a large number of important immunologic measurements.

The results were very encouraging. Napolitano's team found that GH treatment markedly increased thymic mass and appeared to double the number of newly made T-cells. On average, GH receipt was associated with a 30% increase in CD4 T-cells (2.4-fold higher than without GH). These gains continued to increase for at least 3 months beyond GH discontinuation and appeared to persist for at least one year afterward.

"The findings of this study are exciting," said senior author Joseph M. McCune, a Professor of Medicine at UCSF, "and dispel the previously-held notion that the thymus cannot be summoned into action later in life. If these findings bear out in larger studies, this news should be of particular interest to those in need of new T-cells, for instance, adults with HIV disease or other forms of T cell depletion."

"However," both Napolitano and McCune cautioned, "GH should not be used as a treatment for immune purposes in HIV disease or in any other individuals at this time, unless this treatment occurs within a research study. More research is needed to learn whether stimulating the production of new T-cells actually provides a health benefit. We have shown an increase in the quantity of T-cells, but must also determine whether a recovered thymus produces good quality T-cells that provide satisfactory immune protection."

"This was a relatively small study of carefully selected adults receiving effective therapy for HIV infection," Napolitano added, "and our findings may not apply to the majority of individuals."

While the sample in this study is relatively small, Napolitano said a larger, multi-center study conducted by the AIDS Clinical Trial Group (ACTG) has yielded similar results in preliminary analyses and is expected to report these results in the future. "The ACTG study will provide additional data that will add to our understanding of GH effects on the immune system," said Napolitano who is also a member of the ACTG Team conducting the multi-center study.

"GH is a protein hormone that acts upon most cells of the body, which can result in several side effects," stated Napolitano. "We are interested in learning the specific way that GH affects the thymus so that therapy can be more narrowly directed to the thymus."

In an accompanying commentary in the Journal of Clinical Investigation, Kiki Tesselaar and Frank Miedema, of University Medical Center Utrecht, The Netherlands, warn that the long-term immunological and clinical benefits of growth hormone administration need to be thoroughly determined before this approach can be used more widely in the clinic.

Other participants in the research included Erin Filbert, Myra Ng, Julie Clor, and Kai Li of the Gladstone Institute of Virology and Immunology; and Diane Schmidt, Michael Gotway, Niloufar Ameli, Lorrie Epling, Elizabeth Sinclair, Paul Baum, Marisela Lua Killian, and Peter Bacchetti of UCSF. Research was conducted in the Clinical Research Center at San Francisco General Hospital. Funding was provided by the National Institutes of Health, the Gladstone Institutes, Serono Inc., and the UCSF AIDS Research Institute.

Journal reference: Growth hormone resurrects adult human thymus during HIV-1 infection, Journal of Clinical Investigation. March 2008. https://www.the-jci.org/article.php?id=32830

Adapted from materials provided by Gladstone Institutes.

Powerful Explosions Suggest Neutron Star Missing Link

ScienceDaily (Feb. 22, 2008) — Observations from NASA's Rossi X-ray Timing Explorer (RXTE) have revealed that the youngest known pulsing neutron star has thrown a temper tantrum. The collapsed star occasionally unleashes powerful bursts of X-rays, which are forcing astronomers to rethink the life cycle of neutron stars.
"We are watching one type of neutron star literally change into another right before our very eyes. This is a long-sought missing link between different types of pulsars," says Fotis Gavriil of NASA's Goddard Space Flight Center in Greenbelt, Md., and the University of Maryland, Baltimore. Gavriil is lead author of a paper in the February 21 issue of Sciencexpress.

Pulsars and magnetars belong to the same class of ultradense, small stellar objects called neutron stars, left behind after massive stars die and explode as supernovae. Pulsars, by far the most common type, spin extremely rapidly and emit powerful bursts of radio waves. These waves are so regular that, when they were first detected in the 1960s, researchers considered the possibility that they were signals from an extraterrestrial civilization.

By contrast, magnetars are slowly rotating neutron stars that derive their energy from incredibly powerful magnetic fields, the strongest known in the universe. These fields can stress the neutron star's solid crust past the breaking point, triggering starquakes that snap magnetic-field lines, producing violent and sporadic X-ray bursts. There are more than 1,800 known pulsars in our galaxy alone, but magnetars are much less common, the researchers said.

"Magnetars are actually very exotic objects," said Victoria Kaspi, who holds McGill's Lorne Trottier Chair in Astrophysics and Cosmology and a Canada Research Chair in Observational Astrophysics. "Their existence has only been established in the last 10 years, and we know of only a handful in the whole galaxy. They have dramatic X-ray and gamma-ray bursts and can emit huge flares, sometimes brighter than all other cosmic X-ray sources in the sky combined."

But what is the evolutionary relationship between pulsars and magnetars? Astronomers would like to know if magnetars represent a rare class of pulsars, or if some or all pulsars go through a magnetar phase during their life cycles.

Gavriil and his colleagues have found an important clue by examining archival RXTE data of a young neutron star, known as PSR J1846-0258 for its sky coordinates in the constellation Aquila. Previously, astronomers had classified PSR J1846 as a normal pulsar because of its fast spin (3.1 times per second) and pulsar-like spectrum. But RXTE caught four magnetar-like X-ray bursts on May 31, 2006, and another on July 27, 2006. Although none of these events lasted longer than 0.14 second, they all packed the wallop of at least 75,000 Suns.

"Never before has a regular pulsar been observed to produce magnetar bursts," says Gavriil.

"Young, fast-spinning pulsars were not thought to have enough magnetic energy to generate such powerful bursts," says coauthor Marjorie Gonzalez, who worked on this paper at McGill University in Montreal, Canada, but who is now based at the University of British Columbia in Vancouver. "Here's a normal pulsar that's acting like a magnetar."

Observations from NASA's Chandra X-ray Observatory also provided key information. Chandra observed the neutron star in October 2000 and again in June 2006, around the time of the bursts. Chandra showed the object had brightened in X-rays, confirming that the bursts were from the pulsar, and that its spectrum had changed to become more magnetar-like.

Astronomers know that PSR J1846 is very young for several reasons. First, it resides inside a supernova remnant known as Kes 75, an indicator that it hasn't had time to wander from its birthplace. Second, based on how rapidly its spin rate is slowing down, astronomers calculate that it can be no older than 884 years - an infant on the cosmic timescale. Magnetars are thought to be about 10,000 years old, whereas most pulsars are thought to be considerably older.
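Age estimates of this kind typically come from the standard characteristic (spin-down) age formula, tau = P / (2 * Pdot). The sketch below uses the article's spin rate of 3.1 rotations per second; the period derivative is an assumed value of the right order of magnitude for a very young pulsar, not a figure taken from the paper.

```python
# Characteristic (spin-down) age of a pulsar: tau = P / (2 * P_dot).
# P comes from the article's 3.1 rotations per second; P_DOT is an
# ASSUMED spin-down rate of plausible magnitude for a very young
# pulsar, not a number taken from the paper.

SECONDS_PER_YEAR = 3.156e7

def characteristic_age_years(period_s: float, period_derivative: float) -> float:
    """Upper-limit age estimate, assuming the pulsar was born spinning much faster."""
    return period_s / (2 * period_derivative) / SECONDS_PER_YEAR

P = 1 / 3.1       # ~0.32 s spin period
P_DOT = 7e-12     # s/s, assumed spin-down rate

# Yields an age of a few hundred years, consistent with the quoted
# upper bound of 884 years.
print(round(characteristic_age_years(P, P_DOT)))
```

The same formula explains the next paragraph's reasoning: for a fixed period, a larger Pdot (i.e., a stronger braking magnetic field) means a younger characteristic age.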

The fact that PSR J1846's spin rate is slowing down relatively fast also means that it has a strong magnetic field that is braking the rotation. The implied magnetic field is trillions of times stronger than Earth's field, but it's 10 to 100 times weaker than typical magnetar field strengths. Coauthor Victoria Kaspi of McGill University notes, "PSR J1846's actual magnetic field could be much stronger than the measured amount, suggesting that many young neutron stars classified as pulsars might actually be magnetars in disguise, and that the true strength of their magnetic field only reveals itself over thousands of years as they ramp up in activity."

Adapted from materials provided by NASA/Goddard Space Flight Center.

Rare Massive Star, Eta Carinae, Produces Vast Winds Of Colliding Electrically-charged Particles

ScienceDaily (Feb. 22, 2008) — ESA’s Integral has made the first unambiguous discovery of high-energy X-rays coming from a rare massive star at our cosmic doorstep, Eta Carinae. It is one of the most violent places in the galaxy, producing vast winds of electrically-charged particles colliding at speeds of thousands of kilometres per second.
The only astronomical object that emits gamma-rays and is observable by the naked eye, Eta Carinae is monstrously large, so large that astronomers call it a hypergiant. It contains between 100 and 150 times the mass of the Sun and glows more brightly than four million Suns put together. Astronomers know that it is not a single star, but a binary, with a second massive star orbiting the first.

It has long been suspected that such massive binary stars should give off high-energy X-rays, but until now, the instruments required for the observations were lacking. Recently, Integral has conclusively shown that Eta Carinae gives off high-energy X-rays, more or less in agreement with theoretical predictions.

“The intensity of the X-rays is a little lower than we expected, but given that this is the first-ever conclusive observation, that’s okay,” says Jean-Christophe Leyder of the Institut d’Astrophysique et de Géophysique, Université de Liège, Belgium.

The high-energy X-rays come from a vast shockwave, set up and maintained between the two massive stars. The shockwave is produced when the two stars’ stellar winds collide, creating a system that astronomers term a colliding-wind binary. Massive stars are constantly shedding particles that are ‘blown’ away into space by the effect of light and other radiation given off by the star.

This starlight is so fierce that the stellar winds can reach speeds of 1500–2000 km/s. With two massive stars in close proximity, as they are in the Eta Carinae system, the winds collide and set up fearsome shockwaves where temperatures reach several thousand million degrees Kelvin. “It’s a very tough environment,” says Leyder.

Electrically-charged particles called electrons get caught in the magnetic environment of the shockwaves, bouncing back and forth and being accelerated to huge energies. When they finally burst out of the shockwave, they collide with low-frequency photons and give them more energy, creating the emission that Integral has seen.

Understanding this emission is important because astronomers believe that it lies at the heart of many diverse phenomena in the universe. Stellar winds have profound implications on the evolution of stars, the chemical evolution of the universe and as a source of energy in the galaxy.

Massive stars are rare, so two in a binary system is even rarer. “In our galaxy, there are probably only 30-50 colliding-wind binaries that display a clear signature of wind-wind collision,” says Leyder. A year ago, ESA’s XMM-Newton saw X-rays from the colliding wind binary, HD 5980, situated in the neighbouring galaxy, the Small Magellanic Cloud.

Integral covers a different, higher energy range in X-rays than that covered by XMM-Newton. This is why it was able to detect the more energetic X-rays emitted by Eta Carinae. Based on observations, scientists have learnt that the Eta Carinae system loses one Earth mass per day, which is roughly 140 times higher than the mass loss rate in HD 5980.
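As a quick sanity check, the quoted rate of one Earth mass per day can be converted into solar masses per year, the unit astronomers usually use for stellar winds. The constants below are standard textbook values.

```python
# Convert the quoted mass-loss rate of Eta Carinae -- one Earth mass
# per day -- into solar masses per year, the unit astronomers
# usually use for stellar winds.

M_EARTH_KG = 5.97e24     # mass of the Earth
M_SUN_KG = 1.99e30       # mass of the Sun
DAYS_PER_YEAR = 365.25

loss_kg_per_year = M_EARTH_KG * DAYS_PER_YEAR
loss_msun_per_year = loss_kg_per_year / M_SUN_KG

print(f"{loss_msun_per_year:.1e}")  # about 1e-3 solar masses per year
```

A loss rate near a thousandth of a solar mass per year is enormous by stellar standards, which is why the colliding winds in this system can power such strong X-ray emission.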

To have a rare, massive binary star such as Eta Carinae virtually at our cosmic doorstep at 8000 light-years, close enough to be observable in detail, is a stroke of luck. Now that they know what to look for, astronomers will continue searching for other examples of colliding wind binaries emitting high-energy X-rays further afield.

‘Hard X-ray Emission from Eta Carinae’ by J-C. Leyder, R. Walter and G. Rauw has been accepted for publication in the journal Astronomy and Astrophysics.

Adapted from materials provided by European Space Agency.

Greenland's Rising Air Temperatures Drive Ice Loss At Surface And Beyond

ScienceDaily (Feb. 21, 2008) — A new NASA study confirms that the surface temperature of Greenland's massive ice sheet has been rising, stoked by warming air temperatures, and fueling loss of the island's ice at the surface and throughout the mass beneath.
Greenland's enormous ice sheet is home to enough ice to raise sea level by about 23 feet if the entire ice sheet were to melt into surrounding waters. Though the loss of the whole ice sheet is unlikely, loss from Greenland's ice mass has already contributed in part to 20th century sea level rise of about two millimeters per year, and future melt has the potential to impact people and economies across the globe. So NASA scientists used state-of-the-art NASA satellite technologies to explore the behavior of the ice sheet, revealing a relationship between changes at the surface and below.

"The relationship between surface temperature and mass loss lends further credence to earlier work showing rapid response of the ice sheet to surface meltwater," said Dorothy Hall, a senior researcher in Cryospheric Sciences at NASA's Goddard Space Flight Center, in Greenbelt, Md., and lead author of the study.

A team led by Hall used temperature data captured each day from 2000 through 2006 from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on NASA's Terra satellite. They measured changes in the surface temperature to within about one degree of accuracy from about 440 miles away in space. They also measured melt area within each of the six major drainage basins of the ice sheet to see whether melt has become more extensive and longer lasting, and to see how the various parts of the ice sheet are reacting to increasing air temperatures.

The team took their research at the ice sheet's surface a step further, becoming the first to pair the surface temperature data with satellite gravity data to investigate what internal ice changes occur as the surface melts. Geophysicist and co-author Scott Luthcke, also of NASA Goddard, developed a mathematical solution using gravity data from NASA's Gravity Recovery and Climate Experiment (GRACE) twin satellite system. "This solution has permitted greatly-improved detail in both time and space, allowing measurement of mass change at the low-elevation coastal regions of the ice sheet where most of the melting is occurring," said Luthcke.

The paired surface temperature and gravity data confirm a strong connection between melting on ice sheet surfaces in areas below 6,500 feet in elevation, and ice loss throughout the ice sheet's giant mass. The result led Hall's team to conclude that the start of surface melting triggers mass loss of ice over large areas of the ice sheet.

The beginning of mass loss is highly sensitive to even minor amounts of surface melt. Hall and her colleagues showed that when less than two percent of the lower reaches of the ice sheet begins to melt at the surface, mass loss of ice can result. For example, in 2004 and 2005, the GRACE satellites recorded the onset of rapid subsurface ice loss less than 15 days after surface melting was captured by the Terra satellite.

"We're seeing a close correspondence between the date that surface melting begins, and the date that mass loss of ice begins beneath the surface," Hall said. "This indicates that the meltwater from the surface must be traveling down to the base of the ice sheet -- through over a mile of ice -- very rapidly, where its presence allows the ice at the base to slide forward, speeding the flow of outlet glaciers that discharge icebergs and water into the surrounding ocean."

Hall underscores the importance of combining results from multiple NASA satellites to improve understanding of the ice sheet's behavior. "We find that when we look at results from different satellite sensors and those results agree, the confidence in the conclusions is very high," said Hall.

Hall and her colleagues believe that air temperature increases are responsible for increasing ice sheet surface temperatures and thus more-extensive surface melt. "If air temperatures continue rising over Greenland, surface melt will continue to play a large role in the overall loss of ice mass." She also noted that the team's detailed study using the high-resolution MODIS data shows that various parts of the ice sheet are reacting differently to air temperature increases, perhaps responding to different climate-driven forces. This is important because much of the southern coastal area of the ice sheet is already near the melting point (0 degrees Celsius) during the summer.

Changes in Greenland's ice sheet surface temperature have been measured by satellites dating back to 1981. "Earlier work has shown increasing surface temperatures from 1981 to the present," said Hall. "However, additional years with more accurate and finer resolution data now available using Terra's imager are providing more information on the surface temperature within individual basins on the ice sheet, and about trends in ice sheet surface temperature. Combining this data with data from GRACE arms us with better tools to establish the relationship between surface melting and loss of ice mass."

The new NASA study appears in the January issue of the quarterly Journal of Glaciology.

Adapted from materials provided by NASA/Goddard Space Flight Center.