About us:
Our company is KNK DECORATION. We provide sticker services: sticker cutting, digital printing, screen printing, neon boxes, sign making, and other promotional materials.
We handle cutting stickers, indoor/outdoor printing, neon boxes, traffic-sign fabrication (DLLAJR standard), and more.

Monday, December 24, 2007

New Color-changing Technology Has Potential Packaging, Military, Aerospace Applications

ScienceDaily (Jul. 25, 2007) — Imagine cleaning out your refrigerator and being able to tell at a glance whether perishable food items have spoiled, because the packaging has changed its color, or being able to tell if your dollar bill is counterfeit simply by stretching it to see if it changes hue. These are just two of the promising commercial applications for a new type of flexible plastic film developed by scientists at the University of Southampton in the United Kingdom and the Deutsches Kunststoff-Institut (DKI) in Darmstadt, Germany. Combining the best of natural and manmade optical effects, their films essentially represent a new way for objects to precisely change their color.
These "polymer opal films" belong to a class of materials known as photonic crystals. Such crystals are built of many tiny repeating units and are usually associated with a large contrast in the components' optical properties, leading to a range of frequencies, called a "photonic bandgap," in which no light can propagate in any direction. These new opal films, by contrast, have only a small contrast in their optical properties.

As with other artificial opal structures, they are also "self-assembling," in that the small constituent particles assemble themselves into a regular structure. But this self-assembly is not perfect: though meant to be periodic, the structures have significant irregularities. In these materials, the interplay between the periodic order, the irregularities, and the scattering from small inclusions strongly affects the way light travels through the films, just as in natural opal gemstones, a distant cousin of these materials. For example, light may be reflected in unexpected directions that depend on the light's wavelength.

Photonic crystals have been of interest for years for various practical applications, most notably in fiber-optic telecommunications but also as a potential replacement for toxic and expensive dyes used for coloring objects, from clothes to buildings. Yet much of their commercial potential has yet to be realized, because the colors in manmade films made from photonic crystals depend strongly on viewing angle. If you hold up a sheet of the opal film, explains Jeremy Baumberg of the University of Southampton, "You'll only see milky white, unless you look at a light reflected in it, in which case certain colors from the light source will be preferentially reflected." In other words, change the angle, and the color changes.

Photonic crystals also appear in the natural world, where their colors are more consistent across viewing angles. Opals, butterfly wings, certain species of beetle, and peacock feathers all feature arrays of tiny holes, neatly arranged into patterns. Even though these natural structures aren't nearly as precisely ordered as the manmade versions, the colors produced are unusually strong and depend less on the viewing angle.

Until now, scientists believed that the same effect was at work in both manmade and natural photonic crystals: the lattice structure caused the light to reflect off the surface in such a way as to produce a color that changes depending upon the angle of reflection. Baumberg, however, suspects that the natural structures selectively scatter rather than reflect the light, a result of the complex interplay between the order and the irregularity in these structures.

Given that hunch, Baumberg's team developed polymer opals to combine the precise structure of manmade photonic crystals with the robust color of natural structures. The polymer opal films are made of arrays of spheres stacked in three dimensions, rather than layers. They also contain tiny carbon nanoparticles wedged between the spheres, so light doesn't just reflect at the interfaces between the plastic spheres and the surrounding materials; it also scatters off the nanoparticles embedded between the spheres. This makes the films intensely colored, even though they are made from only transparent and black components, which are environmentally benign. Additionally, the material can be "tuned" to scatter only certain frequencies of light simply by making the spheres larger or smaller.

In collaboration with scientists at DKI in Darmstadt, Germany, Baumberg and his colleagues have developed a solution for another factor that traditionally has limited the commercial potential of photonic crystals: the ability to mass-produce them. His Darmstadt colleagues have developed a manufacturing process that can be successfully applied to photonic crystals and they now can produce very long rolls of polymer opal films.

The films are "quite stretchy," according to Baumberg, and when they stretch, they change color, since the act of stretching changes the distance between the spheres that make up the lattice structure. This, too, makes them ideal for a wide range of applications, including potential ones in food packaging, counterfeit identification and even defense.
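The link between sphere spacing and reflected color can be sketched with the textbook Bragg-diffraction relation for an opal lattice. This is a simplified model, not the one used in the paper, and the sphere size, effective refractive index, and stretch figures below are assumptions chosen purely for illustration:

```python
# Illustrative sketch (not the paper's model): a simplified Bragg-diffraction
# picture of why sphere size and stretching change the reflected color.
# All numerical values here are assumptions for demonstration only.

import math

def reflected_wavelength_nm(sphere_diameter_nm, n_eff=1.45, stretch=1.0):
    """Peak reflected wavelength at normal incidence for an fcc opal.

    Uses the textbook relation lambda = 2 * d111 * n_eff, where the
    (111) plane spacing is d111 = sqrt(2/3) * sphere diameter.
    A stretch factor > 1 thins the film, shrinking the plane spacing
    (a crude 1/sqrt(stretch) volume-conservation approximation).
    """
    d111 = math.sqrt(2.0 / 3.0) * sphere_diameter_nm / math.sqrt(stretch)
    return 2.0 * d111 * n_eff

# Larger spheres reflect longer (redder) wavelengths; stretching the
# film reduces the spacing and shifts the reflection toward blue.
print(reflected_wavelength_nm(220))               # unstretched
print(reflected_wavelength_nm(220, stretch=1.2))  # stretched: shorter wavelength
```

The same relation shows why making the spheres larger or smaller "tunes" the film: the reflected wavelength scales linearly with sphere diameter in this simple picture.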

The researchers will publish their findings in the July 23 issue of Optics Express, an open-access journal of the Optical Society of America.

Article: Otto L. J. Pursiainen, Jeremy J. Baumberg, Holger Winkler, Benjamin Viel, Peter Spahn, Tilmann Ruhl, "Nanoparticle-tuned Structural Color from Polymer Opals," Optics Express, Vol. 15, Issue 15.

Adapted from materials provided by Optical Society of America.

Explosives On A Chip: Unique Structure Enables New Generation Of Military Micro-detonators

ScienceDaily (Dec. 23, 2007) — Tiny copper structures with pores at both the nanometer and micron size scales could play a key role in the next generation of detonators used to improve the reliability, reduce the size and lower the cost of certain military munitions.
Developed by a team of scientists from the Georgia Tech Research Institute (GTRI) and the Indian Head Division of the Naval Surface Warfare Center, the highly-uniform copper structures will be incorporated into integrated circuits -- then chemically converted to millimeter-diameter explosives. Because they can be integrated into standard microelectronics fabrication processes, the copper materials will enable micro-electromechanical (MEMS) fuzes for military munitions to be mass-produced like computer chips.

"An ability to tailor the porosity and structural integrity of the explosive precursor material is a combination we've never had before," said Jason Nadler, a GTRI research engineer. "We can start with the Navy's requirements for the material and design structures that are able to meet those requirements. We can have an integrated design tool able to develop a whole range of explosive precursors on different size scales."

Nadler uses a variety of templates, including microspheres and woven fabrics, to create regular patterns in copper oxide paste whose viscosity is controlled by the addition of polymers. He then thermochemically removes the template and converts the resulting copper oxide structures to pure metal, retaining the patterns imparted by the template. The size of the pores can be controlled by using different templates and by varying the processing conditions.

So far, he's made copper structures with channel sizes as small as a few microns -- with structural components that have nanoscale pores.

Based on feedback from the Navy scientists, Nadler can tweak the structures to help optimize the overall device -- known as a fuze -- which controls when and where a munition will explode.

"We are now able to link structural characteristics to performance," Nadler noted. "We can produce a technically advanced material that can be tailored to the thermodynamics and kinetics that are needed using modeling techniques."

Beyond the fabrication techniques, Nadler developed characterization and modeling techniques to help understand and control the fabrication process for the unique copper structures, which may also have commercial applications.

The copper precursor developed at GTRI is a significant improvement over the copper foam material that Indian Head had previously been evaluating. Produced with a sintered-powder process, the foam was fragile and non-uniform, meaning Navy scientists couldn't precisely predict its reliability or how much explosive would be created in each micro-detonator.

"GTRI has been able to provide us with material that has well-controlled and well-known characteristics," said Michael Beggans, a scientist in the Energetics Technology Department of the Indian Head Division of the Naval Surface Warfare Center. "Having this material allows us to determine the amount of explosive that can be formed in the MEMS fuze. The size of that charge also determines the size and operation of the other components."

The research will lead to a detonator with enhanced capabilities. "The long-term goal of the MEMS Fuze program is to produce a low-cost, highly-reliable detonator with built-in safe and arm capabilities in an extremely small package that would allow the smallest weapons in the Navy to be as safe and reliable as the largest," Beggans explained.

Reducing the size of the fuze is part of a long-term strategy toward smarter weapons intended to reduce the risk of collateral damage. That will be possible, in part, because hundreds of fuzes, each about a centimeter square, can be fabricated simultaneously using techniques developed by the microelectronics industry.

"Today, everything is becoming smaller, consuming less power and offering more functionality," Beggans added. "When you hear that a weapon is 'smart,' it's really all about the fuze. The fuze is 'smart' in that it knows the exact environment that the weapon needs to be in, and detonates it at the right time. The MEMS fuze would provide 'smart' functionality in medium-caliber and sub-munitions, improving results and reducing collateral damage."

Development and implementation of the new fuze will also have environmental and safety benefits.

"Practical implementation of this technology will enable the military to reduce the quantity of sensitive primary explosives in each weapon by at least two orders of magnitude," said Gerald R. Laib, senior explosives applications scientist at Indian Head and inventor of the MEMS Fuze concept. "This development will also vastly reduce the use of toxic heavy metals and waste products, and increase the safety of weapon production by removing the need for handling bulk quantities of sensitive primary explosives."

The next step will be for Indian Head to integrate all the components of the fuze into the smallest possible package -- and then begin producing the device in large quantities.

A specialist in metallic and ceramic cellular materials, Nadler said the challenge of the project was creating structures porous enough to be chemically converted in a consistent way -- while retaining sufficient mechanical strength to withstand processing and remain stable in finished devices.

"The ability to design things on multiple size scales at the same time is very important," he added. "Designing materials on the nano-scale, micron-scale and even the millimeter-scale simultaneously as a system is very powerful and challenging. When these different length scales are available, a whole new world of capabilities opens up."

Adapted from materials provided by Georgia Institute of Technology.

Sunday, December 23, 2007

New Brain Cells Listen Before They Talk

ScienceDaily (Nov. 1, 2007) — Newly created neurons in adults rely on signals from distant brain regions to regulate their maturation and survival before they can communicate with existing neighboring cells--a finding that has important implications for the use of adult neural stem cells to replace brain cells lost by trauma or neurodegeneration, Yale School of Medicine researchers report in The Journal of Neuroscience.
In fact, certain important synaptic connections--the circuitry that allows brain cells to talk to each other--do not appear until 21 days after the birth of the new cells, according to Charles Greer, professor of neurosurgery and neurobiology and senior author of the study. In the meantime, other areas of the brain provide information to the new cells, preventing them from disturbing ongoing functions until the cells are mature.

It was established in previous studies that several regions of the adult brain continue to generate new neurons, which are then integrated into existing brain circuitry. However, the mechanisms that allow this to happen were not known.

To answer this question, Greer and Mary Whitman, an M.D./Ph.D. candidate at Yale, studied how new neurons are integrated into the olfactory bulb, which helps discriminate between odors, among other functions.

They found that new neurons continue to mature for six to eight weeks after they are first generated and that the new neurons receive input from higher brain regions for up to 10 days before they can make any outputs. The other brain regions then continue to provide information to the new neurons as they integrate into existing networks.

The discovery of this previously unrecognized mechanism is significant, said Greer, because "if we want to use stem cells to replace neurons lost to injury or disease, we must ensure that they do not fire inappropriately, which could cause seizures or cognitive dysfunction."

The Journal of Neuroscience 27: 9951-9961 (October 2007)

Adapted from materials provided by Yale University.

Neuronal Circuits Able To Rewire On The Fly To Sharpen Senses

ScienceDaily (Dec. 22, 2007) — Researchers from the Center for the Neural Basis of Cognition (CNBC), a joint project of Carnegie Mellon University and the University of Pittsburgh, have for the first time described a mechanism called "dynamic connectivity," in which neuronal circuits are rewired "on the fly" allowing stimuli to be more keenly sensed.
This new, biologically inspired algorithm for analyzing the brain at work allows scientists to explain why when we notice a scent, the brain can quickly sort through input and determine exactly what that smell is.

"If you think of the brain like a computer, then the connections between neurons are like the software that the brain is running. Our work shows that this biological software is changed rapidly as a function of the kind of input that the system receives," said Nathan Urban, associate professor of biological sciences at Carnegie Mellon.

When a stimulus such as an odor is encountered, many neurons start to fire. When many neurons fire at the same time, the signals can be difficult for the brain to interpret. During lateral inhibition, the stimulated neurons send "cease-fire" messages to the neighboring neurons, reducing the noise and making it easier to precisely identify a stimulus. This process also facilitates accurate recognition of stimuli in many sensory areas of the brain.

In this project, Urban and colleagues specifically examine the process of lateral inhibition in an area of the brain called the olfactory bulb, which is responsible for processing scents. Until now, scientists thought that the connections made by the neurons in the olfactory bulb were dictated by anatomy and could only change slowly.

However, in this current study, Urban and colleagues found that the connections are, in fact, not set but rather able to change dynamically in response to specific patterns of stimuli. In their experiments, they found that when excitatory neurons in the olfactory bulb fire in a correlated fashion, this determines how they are functionally connected.

The researchers showed that dynamic connectivity allows lateral inhibition to be enhanced when a large number of neurons initially respond to a stimulus, filtering out noise from other neurons. By filtering out the noise, the stimulus can be more clearly recognized and separated from other similar stimuli.

"This mechanism helps to explain why you can walk into a room and recognize a smell that seems to be floral. As you continue to smell the odor, you begin to recognize that the scent is indeed flowers and even more specifically is the scent of roses," Urban said. "By understanding how the brain does this, we can then apply this mechanism to other problems faced by the brain."

Researchers converted this mechanism into an algorithm and used computer modeling to further show that dynamic connectivity makes it easier to identify and discriminate between stimuli by enhancing the contrast, or sharpness, of the stimuli, independent of the spatial patterns of the active neurons. This algorithm allows researchers to show the applicability of the mechanism in other areas of the brain where similar inhibitory connections are widespread. For example, when the researchers applied the algorithm to a blurry picture, the picture appeared refined and in sharper contrast.
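The general principle of lateral inhibition, in which neighbors suppress one another so that strong responses stand out, can be sketched in a few lines of Python. This is a toy illustration of the concept, not the authors' published algorithm:

```python
# Toy sketch of lateral inhibition as contrast enhancement.
# (An illustration of the general principle only, not the
# dynamic-connectivity algorithm described in the paper.)

def lateral_inhibition(activity, strength=0.4):
    """Suppress each neuron in proportion to its neighbors' firing.

    Returns rectified responses: strong responders stand out more,
    while weak "noise" responders are pushed toward zero.
    """
    out = []
    for i, a in enumerate(activity):
        left = activity[max(0, i - 1):i]      # left neighbor (if any)
        right = activity[i + 1:i + 2]         # right neighbor (if any)
        inhibited = a - strength * sum(left + right)
        out.append(max(0.0, inhibited))       # firing rates can't go negative
    return out

# A noisy input with one clear peak: after inhibition the peak remains
# while the flanking noise is suppressed, sharpening the representation.
print(lateral_inhibition([0.2, 0.3, 1.0, 0.3, 0.2]))
```

Applied to a 2-D grid of pixel intensities instead of a 1-D list, the same suppression of weakly responding neighbors is what sharpens edges in a blurry image.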

The process is described in a paper in the January 2008 issue of Nature Neuroscience, and available online at http://dx.doi.org/10.1038/nn2030.

Coauthors of the study include Armen Arevian, a graduate student in the Center for Neuroscience at the University of Pittsburgh, and Vikrant Kapoor, a biological sciences graduate student at Carnegie Mellon. The study was funded through grants from the National Institute on Deafness and Other Communication Disorders, and the National Science Foundation.

Adapted from materials provided by Carnegie Mellon University.

Wednesday, December 12, 2007

Nitrous Oxide From Ocean Microbes Could Be Adding To Global Warming

ScienceDaily (Dec. 12, 2007) — A large amount of the greenhouse gas nitrous oxide is produced by bacteria in the oxygen-poor parts of the ocean using nitrites, according to Dr Mark Trimmer of Queen Mary, University of London.
Dr Trimmer looked at nitrous oxide production in the Arabian Sea, which accounts for up to 18% of global ocean emissions. He found that the gas is primarily produced by bacteria trying to make nitrogen gas.

"A third of the 'denitrification' that happens in the world's oceans occurs in the Arabian Sea (an area equivalent to France and Germany combined)" said Dr Trimmer. "Oxygen levels decrease as you go deeper into the sea. At around 130 metres there is what we call an oxygen minimum zone where oxygen is low or non-existent. Bacteria that produce nitrous oxide do well at this depth."

Gas produced at this depth could escape to the atmosphere. Nitrous oxide is a powerful greenhouse gas, some 300 times more potent than carbon dioxide; it also attacks the ozone layer and contributes to acid rain.

"Recent reports suggest increased export of organic material from the surface layers of the ocean under increased atmospheric carbon dioxide levels. This could cause an expansion of the oxygen minimum zones of the world triggering ever greater emissions of nitrous oxide."

Adapted from materials provided by Society for General Microbiology.

Methane From Microbes: A Fuel For The Future

ScienceDaily (Dec. 12, 2007) — Microbes could provide a clean, renewable energy source and use up carbon dioxide in the process, suggested Dr James Chong at a Science Media Centre press briefing December 10.

"Methanogens are microbes called archaea that are similar to bacteria. They are responsible for the vast majority of methane produced on earth by living things," says Dr Chong of the University of York. "They use carbon dioxide to make methane, the major flammable component of natural gas. So methanogens could be used to make a renewable, carbon-neutral gas substitute."

Methanogens produce about one billion tonnes of methane every year. They thrive in oxygen-free environments like the guts of cows and sheep, humans and even termites. They live in swamps, bogs and lakes. "Increased human activity causes methane emissions to rise because methanogens grow well in rice paddies, sewage processing plants and landfill sites, which are all made by humans."

Methanogens could feed on waste from farms, food and even our homes to make biogas. This is done in Europe, but very little in the UK. The government is now looking at microbes as a source of fuel and as a way to tackle food waste in particular.

Methane is a greenhouse gas that is 23 times more effective at trapping heat than carbon dioxide. "By using methane produced by bacteria as a fuel source, we can reduce the amount released into the atmosphere and use up some carbon dioxide in the process!"
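The article's own figures give a quick sense of scale. A back-of-the-envelope conversion of the yearly methanogen output into CO2-equivalent (illustrative arithmetic only, using the article's numbers):

```python
# Back-of-the-envelope arithmetic using the article's figures:
# methanogens produce about one billion tonnes of methane a year,
# and methane traps ~23x as much heat as carbon dioxide.

methane_tonnes_per_year = 1e9   # article: "about one billion tonnes"
gwp_methane = 23                # article's warming factor vs. CO2

co2_equivalent_tonnes = methane_tonnes_per_year * gwp_methane
print(f"{co2_equivalent_tonnes:.1e} tonnes CO2-equivalent per year")
# -> 2.3e+10 tonnes, i.e. ~23 billion tonnes CO2-equivalent
```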

Adapted from materials provided by Society for General Microbiology.

Greenland Melt Accelerating, According To Climate Scientist

ScienceDaily (Dec. 12, 2007) — The 2007 melt extent on the Greenland ice sheet broke the 2005 summer melt record by 10 percent, making it the largest ever recorded there since satellite measurements began in 1979, according to a University of Colorado at Boulder climate scientist.
The melting increased by about 30 percent for the western part of Greenland from 1979 to 2006, with record melt years in 1987, 1991, 1998, 2002, 2005 and 2007, said CU-Boulder Professor Konrad Steffen, director of the Cooperative Institute for Research in Environmental Sciences. Air temperatures on the Greenland ice sheet have increased by about 7 degrees Fahrenheit since 1991, primarily a result of the build-up of greenhouse gases in Earth's atmosphere, according to scientists.

Steffen gave a presentation on his research at the fall meeting of the American Geophysical Union held in San Francisco from Dec. 10 to Dec. 14. His team used data from the Defense Meteorology Satellite Program's Special Sensor Microwave Imager aboard several military and weather satellites to chart the area of melt, including rapid thinning and acceleration of ice into the ocean at Greenland's margins.

Steffen maintains an extensive climate-monitoring network of 22 stations on the Greenland ice sheet known as the Greenland Climate Network, transmitting hourly data via satellites to CU-Boulder to study ice-sheet processes.

Although Greenland has been thickening at higher elevations due to increases in snowfall, the gain is more than offset by an accelerating mass loss, primarily from rapidly thinning and accelerating outlet glaciers, Steffen said. "The amount of ice lost by Greenland over the last year is the equivalent of two times all the ice in the Alps, or a layer of water more than one-half mile deep covering Washington, D.C."

The Jacobshavn Glacier on the west coast of the ice sheet, a major Greenland outlet glacier draining roughly 8 percent of the ice sheet, has sped up nearly twofold in the last decade, he said. Nearby glaciers showed an increase in flow velocities of up to 50 percent during the summer melt period as a result of melt water draining to the ice-sheet bed, he said.

"The more lubrication there is under the ice, the faster that ice moves to the coast," said Steffen. "Those glaciers with floating ice 'tongues' also will increase in iceberg production."

Greenland is about one-fourth the size of the United States, and about 80 percent of its surface area is covered by the massive ice sheet. Greenland hosts about one-twentieth of the world's ice -- the equivalent of about 21 feet of global sea rise. The current contribution of Greenland ice melt to global sea levels is about 0.5 millimeters annually.
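As a rough sense of scale only, simple arithmetic on the article's figures shows how long the ice would last at today's rate; this deliberately ignores the acceleration that is the article's main point, so it is an upper bound, not a forecast:

```python
# Rough arithmetic from the article's own numbers (illustrative only):
# Greenland's ice is equivalent to ~21 feet of global sea rise, and the
# current contribution is ~0.5 mm per year. Note the article stresses
# that the loss rate is accelerating, so this overstates the timescale.

FEET_TO_MM = 304.8

total_rise_mm = 21 * FEET_TO_MM   # ~6,400 mm of potential sea rise
rate_mm_per_year = 0.5            # current annual contribution

years_at_current_rate = total_rise_mm / rate_mm_per_year
print(round(years_at_current_rate))  # ~12,800 years at today's (non-accelerating) rate
```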

The most sensitive regions for future, rapid change in Greenland's ice volume are dynamic outlet glaciers like Jacobshavn, which has a deep channel reaching far inland, he said. "Inclusion of the dynamic processes of these glaciers in models will likely demonstrate that the 2007 Intergovernmental Panel on Climate Change assessment underestimated sea-level projections for the end of the 21st century," Steffen said.

Helicopter surveys indicate there has been an increase in cylindrical, vertical shafts in Greenland's ice known as moulins, which drain melt water from surface ponds down to bedrock, he said. Moulins, which resemble huge tunnels in the ice and may run vertically for several hundred feet, switch back and forth from vertical to horizontal as they descend toward the bottom of the ice sheet, he said.

"These melt-water drains seem to allow the ice sheet to respond more rapidly than expected to temperature spikes at the beginning of the annual warm season," Steffen said. "In recent years the melting has begun earlier than normal."

Steffen and his team have been using a rotating laser and a sophisticated digital camera and high-definition camera system provided by NASA's Jet Propulsion Laboratory to map the volume and geometry of moulins on the Greenland ice sheet to a depth of more than 1,500 feet. "We know the number of moulins is increasing," said Steffen. "The bigger question is how much water is reaching the bed of the ice sheet, and how quickly it gets there."

Steffen said the ice loss trend in Greenland is somewhat similar to the trend of Arctic sea ice in recent decades. In October, CU-Boulder's National Snow and Ice Data Center reported the 2007 Arctic sea-ice extent had plummeted to the lowest levels since satellite measurements began in 1979 and was 39 percent below the long-term average tracked from 1979 to 2007.

CIRES is a joint institute of CU-Boulder and the National Oceanic and Atmospheric Administration. For more information on Steffen's research, visit the Web site at: http://cires.colorado.edu/science/groups/steffen/.

Adapted from materials provided by University of Colorado at Boulder.

Hunter Humidifier Filter Types and Maintenance

When air is heated in cold weather, it tends to dry out. A Hunter humidifier counters this effect, and it helps to know how Hunter humidifier filters work.

Dry air in a room or enclosure is often the result of heating. Heating addresses room temperature but not air circulation, and when air is merely heated and poorly circulated, it becomes dry, allowing viruses, bacteria, and molds to multiply. This is where a Hunter humidifier comes in: it can be installed in the furnace, or a portable unit can be brought in to treat a room. Hunter humidifier filters help the humidifier do a better job of humidifying and circulating the air.

A Hunter humidifier filter looks like a wire-mesh pad and helps the unit filter the air. As a Hunter humidifier runs, airborne elements such as dust and mold can collect in the unit; without a filter, the unit would blow contaminated air back into the room. Water minerals can also become trapped in the humidifier, degrading its performance. Hunter humidifier filters therefore need to be maintained through regular replacement. They are antibacterial paper filters specially made to prevent micro-organism growth in humidifier units.

When the cold season sets in and heaters are in use, humidifiers keep the air from drying out. Before using a humidifier in the cold season, it is best to replace the Hunter humidifier filter right away. For better performance, Hunter humidifier filters should also be replaced every 2 to 4 weeks, depending on frequency of use and water condition. Hard water (with more mineral content) means more frequent filter replacements. It is also advisable to take out the Hunter humidifier filter each time the unit is cleaned.

The Genuine Hunter Humidifier Filter model 32300 can filter 3 gallons of humidifying water. Model 32400 can filter 4 gallons, while models 32500 and 34500 can filter 5 gallons. The Endurawick filter is made specifically for the Hunter Endurawick and filters 5 gallons of humidifying water.
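For reference, the capacities listed above can be collected into a small lookup. The model numbers and gallon figures are as stated in the article; the helper function itself is purely illustrative:

```python
# Filter capacities as given in the article (gallons of humidifying
# water per filter). The lookup helper is an illustration only.

FILTER_CAPACITY_GALLONS = {
    "32300": 3,
    "32400": 4,
    "32500": 5,
    "34500": 5,
    "Endurawick": 5,
}

def capacity_for(model):
    """Return the humidifying-water capacity (gallons) for a filter model."""
    try:
        return FILTER_CAPACITY_GALLONS[model]
    except KeyError:
        raise ValueError(f"Unknown filter model: {model}")

print(capacity_for("32400"))  # -> 4
```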

Replacing Hunter humidifier filters as often as needed is the best way to maintain good performance of humidifiers.

Author Info:

Robert Thomson is writing humidifier reviews and articles about humidifier filters and general humidifier maintenance.

Transcriptional Factors And Regulators

All the cellular processes in living cells such as growth, development, morphogenesis and cellular differentiation are a product of gene expression programs involving complicated transcriptional regulation of several genes. This process of transcriptional regulation is tightly controlled and coordinated by proteins called transcriptional regulators. These transcriptional regulators and factors are DNA-binding proteins that bind to the promoter or enhancer sequences on the DNA and facilitate either transcriptional repression or activation.

There are three principal types of transcription factors. These include basal transcription factors, upstream transcription factors and inducible transcription factors. The basic structure of every transcriptional factor mainly contains a DNA-binding domain and an activator domain. DNA-binding motifs found in transcription factors include zinc-finger, helix-loop-helix, helix-turn-helix, leucine zipper and high-mobility groups, based on which transcription factors are classified. The activator domain of these transcription factors interacts with components of transcription machinery such as RNA polymerases and associated transcription regulators.

Regulation of transcriptional factors is a complex mechanism that ensures exact spatio-temporal expression of genes. In response to a specific cellular stimulus, these trans-regulatory factors are activated in a sequential manner. Upon activation, these factors recruit transcriptional co-regulators such as histones that function as co-activators or co-repressors and aid in modifying chromatin structure. Altered activation of these regulators is often associated with various pathologies such as chronic disorders and malignancies. Recent studies are concentrating on developing improved disease treatment strategies through identification of different transcription factor-binding patterns and blocking them.

There are several families of trans-regulatory factors that control critical cellular signaling cascades involved in cell proliferation, survival, lineage development and cellular differentiation. These include Rel/NF-kB family, AP-1 family, STAT family of transcription factors, homeodomain proteins, DNA-binding proteins, POU transcription factors, nuclear hormone receptor family, p53 family and E2F family.

Author Info:

IMGENEX India Pvt Ltd. is the only biotech company in Orissa and one of its kind in Eastern India. IMGENEX India started in Oct as an outsourcing branch of IMGENEX Corporation, San Diego, USA. Find out more information about Transcriptional Factors.

Showing Off Your Honda Civic SI



by Kathy Austin
A Honda Civic SI is a vehicle that can be an immense source of pride and joy for vehicle owners everywhere. This vehicle, which is manufactured by the Japanese automobile company Honda, is actually the sports version of its rather famous econocar sister, the Honda Civic. Owning a vehicle like this has made a lot of people use Honda Civic SI keychains on their car keys to show everyone what a wonderful car they have.

The Honda Civic SI boasts a popular following that began with the car's introduction in 1984. The designation "SI" actually means "Sports Injected" and was first used on the 1984 Honda Civic SI, which came in the 3-door hatchback style. The introduction of this souped-up Honda Civic to the market gained popularity among car owners who enjoyed tweaking their cars and among speed enthusiasts.

The 3-door hatchback design of the Honda Civic SI was retained for most of its life, with modifications to the car's design, engine and other specifications made over the years. In 1996, the Honda Civic SI moved from the familiar 3-door hatchback to a new 2-door coupe body design, giving it a sportier appeal to complement the added engine power the name implies.

Engine power for the original Honda Civic SI was pegged at 130 horsepower when it was introduced in the early 1980s. Output then fluctuated between 112 and 158 horsepower across the mid-1980s to early-1990s models, and the 1992 to 1995 version sported an engine that gave the car 125 horsepower. Different regions of the world, however, received different powerplants: some vehicles of the same make and model had a 130 horsepower engine, while others had between 158 and 168 horsepower under their hoods.

The 1996 to 2000 version of this car had 160 horsepower to its name and came in three color choices: Flamenco Black Pearl, Electron Blue Pearl and Milano Red. Only Canadian Honda Civic SI buyers had the option of a fourth color, Vogue Silver Metallic. The 2002 to 2005 edition returned to the car's original hatchback roots; while it still had the 160 horsepower engine under its hood, it did not gain as much acclaim as its predecessor due to its less-than-sporty look.

The current Honda Civic SI comes in two body designs, a 2-door coupe and a 4-door sedan. With a 197 horsepower 2.0-liter engine delivering the power that many car buyers want these days, it is no wonder that this car, available in numerous colors, is one of the most popular cars on the market.

About the Author
Kathy Austin is an internet marketing consultant for http://www.wholesalekeychain.com. Wholesale Keychain sells officially licensed key chains that are made in the USA and come with a lifetime guarantee against flaws.

Tuesday, December 4, 2007

The Samsung G600 - Top Quality Samsung Handset!

by Matt Sharp
The Samsung G600 camera mobile phone comes in a slide-opening casing. With its smooth finish and easy-to-use slide mechanism, the phone has succeeded in grabbing the attention of the mobile phone market. Its large 2.2 inch, high-resolution TFT screen displays up to sixteen million colours. The Samsung G600 is a quad band handset, providing worldwide roaming that switches between GSM networks automatically, without the user noticing. The phone comes with 55 Mbytes of built-in memory, which can be extended via a memory card slot. The handset measures 102 mm x 47.8 mm x 14.9 mm and weighs 105 grams including the battery, making it comfortable to hold in the hand.

It is the 5 megapixel camera that makes this mobile phone stand out. The handset helps the user capture clear, precise shots at any time of the day or night, with a built-in flash and an auto focus feature that ensure a good picture whenever a photo is taken. The phone also has an integrated FM radio function for musical entertainment, and the user can transfer music files onto the Samsung G600 or simply download the latest music from an Internet music store.

The phone's battery provides approximately 3.5 hours of talk time or approximately 300 hours of standby time from a full charge. The Samsung G600 also has additional features including Bluetooth wireless connectivity, which lets the user transfer files between two compatible Bluetooth devices without being tangled in wires. The built-in EDGE technology ensures high-speed data transfers that are approximately three times faster than GPRS.


The Nokia 6300 - Candy Bar Designing!

by Matt Sharp
The Nokia 6300 mobile phone comes with excellent features including a built-in 2 megapixel camera, EDGE technology, Bluetooth technology and a web browsing facility, all of which are genuinely beneficial and helpful for the user.

The new Nokia 6300 mobile phone handset, housed in a sophisticated, stylish and attractive stainless steel casing, is filled with fun and highly usable features. The curvaceous casing, with a glossy black section around the screen, holds a large 2 inch TFT QVGA display that can provide up to 16.7 million colours. The O2 Nokia 6300 handset also has a highly usable and attractive stainless steel keypad.

The handset has 7.8 Mbytes of built-in memory that can be expanded to 128 Mbytes by inserting a MicroSD memory card. The Nokia 6300 weighs 91 grams, making it comfortable to hold and to slip into a pocket, and measures 106.4 mm x 43.6 mm x 11.7 mm, so it fits neatly in the user's hand. The standout feature of the phone is the built-in 2 megapixel camera, which can take, store, send and share photos from the handset. With a fully charged battery, the O2 Nokia 6300 provides up to 264 hours of standby time and up to 3.5 hours of talk time.

EDGE technology ensures easy, high-speed data transfer, while the built-in Bluetooth technology gives the handset a wireless connectivity option. The user can also browse the internet from the phone, which is convenient when away from a PC. A built-in FM radio feature, complete with visual radio, adds to the enjoyment, and the Nokia 6300 comes with simple-to-use messaging services including instant messaging, audio messaging, multimedia messaging, text messaging and email with attachments.


Nokia E90 - Symbian Power!


by Matt Sharp
After doing such a terrific job in the mobile industry, Nokia has shown no sign of slowing down. Having won over the world of communication with some powerful products, Nokia is still on a hunt to bring out the best in communication, and hence we have been showered with tools like the Nokia E90 and the Nokia Prism.

The Nokia E90 has already become very popular among tech-savvy people. Marketed as the Nokia E90, the E90 Communicator is a smart-looking business phone running Symbian OS version 9.2. The fold-open design of the casing makes the device a really cool contrivance. The displays are amazing: a 16 million-colour internal screen (800 x 352 pixels) and a 16 million-colour external screen (240 x 320 pixels). With a built-in 3.2 megapixel camera, you can freeze anything you feel like. Features like email (POP3, IMAP4 and SMTP), push email, an email attachment editor and viewer, a text-to-speech message reader, contacts with images and the Nokia active notepad turn the Nokia E90 into a complete business machine. Additional features include a music player, FM radio, push-to-talk, handsfree speaker, voice commands, voice recording, voice dialling, Internet calls over IP, 3G HSDPA, WLAN, USB 2.0, Bluetooth, Infrared, and 128MB of memory plus MicroSD memory card support.

Nokia has just released two pay as you go mobile phones in its designer Prism category. Named the Nokia 7500 Prism and the Nokia 7900 Prism, these two devices pair great aesthetic sense with supreme features. While the Nokia 7900 is a quad band 3G phone, the Nokia 7500 Prism is a tri band phone. Except for weight and size, almost all the features of the two pay as you go mobile phones are similar. Common features include a 2 megapixel camera with 8x digital zoom, stereo music player, FM radio, web access, Bluetooth, EDGE, embedded Java games and downloadable games.

Be it the pay as you go Nokia Prism or the Nokia E90 Communicator, both are feature rich and user-friendly. By choosing one of these pay as you go mobile phones, you could give a different angle to the way you communicate!


LG ENV VX9900 Review - A Review of the LG ENV VX9900 Cellular Phone

by Jonathan Baker
A multimedia messaging phone, the LG ENV (VX9900) is the go-to device for those who need to be connected at all times. One of the best phones on the market, the ENV combines EvDO high-speed technology, a large internal screen, dual stereo speakers and an external memory port. From camera to multimedia player to cell phone extraordinaire, this phone has everything one would ever need (and then some).

For those who are addicted to text messaging, the ENV (VX9900) offers a flip-open QWERTY keyboard and text messages of up to 1,120 characters. The phone also enables picture and video messaging, as well as web-based email and instant messaging. With Mobile Web 2.0, users can check their MySpace.com profiles, purchase airline tickets from their favorite discount ticket broker or even bid on Ebay.com items. The internal antenna makes this phone difficult to break, and a keyguard function prevents unintentional dialing.

The camera, a 2.0 megapixel unit with autofocus and flash, is the best of any phone LG produces. Offering four different resolution options, the camera even has a protective lens cover to help prevent scratches and dings. The video format is 3G2, and the digital zoom can enlarge the picture up to two and a half times. Camera features include white balance, customizable brightness, color effects, night mode and shutter sound, as well as a self timer that can be set for 3, 5 or 10 seconds. Up to one full hour of video can be stored, and 15 seconds of video at a time can be messaged to another phone or email address.

This phone truly has it all, including a VZ NavigatorSM that provides voice prompted turn by turn directions. All in all, it's a fantastic buy and one that won't be regretted.

About the Author
For more excellent mobile phone information, visit the Wireless Phone Forum at http://www.TheCellularForums.com/ today. We hope that you enjoyed this LG ENV VX9900 review!

LG KG800 Review - A Review of the LG KG800 Cellular Phone


by Jonathan Baker

Upon first glance, it's easy to see why the LG KG800 (also known as the Chocolate) won the 2006 IF Design Award in Germany for its fresh and innovative design. One such example of innovation is the hidden display and touchpad, which is invisible when not in use but glows red when in use. Currently available only in black, the Chocolate is something to see and be seen with. Still, this phone is more than just a pretty package and comes equipped with more features than one would expect to find in a phone, each of notably high quality.

The camera, for instance, isn't the run-of-the-mill one found in most cell phones these days. Instead, it's a powerful 1.3 megapixel camera and video recorder with a powerful flash, ensuring each photo taken is one worth saving. The multi-shot feature lets the user take nine consecutive shots, and the camera can zoom up to four times the original size. Brightness, white balance, timer and effect settings are bonuses that round out this aspect of the Chocolate.

Thanks to the 262,000 color TFT screen, users won't miss a thing. The display of images, video and menu icons is high quality and vibrant. When owners are finished editing photos or watching videos, they can enjoy the built-in MP3 player that supports MP3, AAC, AAC+ and WMA files.

With the Chocolate, boring polyphonic ringtones will be a thing of the past. Now, music files can be set as ringtones and the graphic equalizer with six different sound effects will make sure the phone is heard loud and clear when somebody calls. Bluetooth technology enables hands free talking and because the phone has Tri-band technology, the phone can be used in most countries.

About the Author
For more excellent mobile phone information, visit the Cellular Forum at http://www.TheCellularForums.com/ today. We hope that you enjoyed this LG KG800 review!

Wednesday, November 28, 2007

High Performance Field-effect Transistors With Thin Films Of Carbon 60 Produced

ScienceDaily (Nov. 27, 2007) — Using room-temperature processing, researchers at the Georgia Institute of Technology have fabricated high-performance field effect transistors with thin films of Carbon 60, also known as fullerene. The ability to produce devices with such performance with an organic semiconductor represents another milestone toward practical applications for large area, low-cost electronic circuits on flexible organic substrates.
The new devices -- which have electron-mobility values higher than amorphous silicon, low threshold voltages, large on-off ratios and high operational stability -- could encourage more designers to begin working on such circuitry for displays, active electronic billboards, RFID tags and other applications that use flexible substrates.

"If you open a textbook and look at what a thin-film transistor should do, we are pretty close now," said Bernard Kippelen, a professor in Georgia Tech's School of Electrical and Computer Engineering and the Center for Organic Photonics and Electronics. "Now that we have shown very nice single transistors, we want to demonstrate functional devices that are combinations of multiple components. We have everything ready to do that."

Fabrication of the C60 transistors was reported August 27th in the journal Applied Physics Letters. The research was supported by the U.S. National Science Foundation through the STC program MDITR, and the U.S. Office of Naval Research.

Researchers have been interested in making field-effect transistors and other devices from organic semiconductors that can be processed onto various substrates, including flexible plastic materials. As an organic semiconductor material, C60 is attractive because it can provide high electron mobility -- a measure of how fast current can flow. Previous reports have shown that C60 can yield mobility values as high as six square centimeters per volt-second (6 cm2/V/s). However, that record was achieved using a hot-wall epitaxy process requiring processing temperatures of 250 degrees Celsius -- too hot for most flexible plastic substrates.

Though the transistors produced by Kippelen's research team display slightly lower electron mobility -- 2.7 to 5 cm2/V/s -- they can be produced at room temperature.

"If you want to deposit transistors on a plastic substrate, you really can't have any process at a temperature of more than 150 degrees Celsius," Kippelen said. "With room temperature deposition, you can be compatible with many different substrates. For low-cost, large area electronics, that is an essential component."

Because they are sensitive to contact with oxygen, the C60 transistors must operate under a nitrogen atmosphere. Kippelen expects to address that limitation by using other fullerene molecules -- and properly packaging the devices.

The new transistors were fabricated on silicon for convenience. While Kippelen isn't underestimating the potential difficulty of moving to an organic substrate, he says that challenge can be overcome.

Though their performance is impressive, the C60 transistors won't threaten conventional CMOS chips based on silicon. That's because the applications Kippelen has in mind don't require high performance.

"There are a lot of applications where you don't necessarily need millions of fast transistors," he said. "The performance we need is by far much lower than what you can get in a CMOS chip. But whereas CMOS is extremely powerful and can be relatively low in cost because you can make a lot of circuits on a wafer, for large area applications CMOS is not economical."

A different set of goals drives electronic components for use with low-cost organic displays, active billboards and similar applications.

"If you look at a video display, which has a refresh rate of 60 Hz, that means you have to refresh the screen every 16 milliseconds," he noted. "That is a fairly low speed compared to a Pentium processor in your computer. There is no point in trying to use organic materials for high-speed processing because silicon is already very advanced and has much higher carrier mobility."

Now that they have demonstrated attractive field-effect C60 transistors, Kippelen and collaborators Xiao-Hong Zhang and Benoit Domercq plan to produce other electronic components such as inverters, ring oscillators, logic gates, and drivers for active matrix displays and imaging devices. Assembling these more complex systems will showcase the advantages of the C60 devices.

"The goal is to increase the complexity of the circuits to see how that high mobility can be used to make more complex structures with unprecedented performance," Kippelen said.

The researchers fabricated the transistors by depositing C60 molecules from the vapor phase into a thin film atop a silicon substrate onto which a gate electrode and gate dielectric had already been fabricated. The source and drain electrodes were then deposited on top of the C60 films through a shadow mask.

Kippelen's team has been working with C60 for nearly ten years, and is also using the material in photovoltaic cells. Beyond the technical advance, Kippelen believes this new work demonstrates the growing maturity of organic electronics.

"This progress may trigger interest among more conventional electronic engineers," he said. "Most engineers would like to work with the latest technology platform, but they would like to see a level of performance showing they could actually implement these circuits. If you can demonstrate -- as we have -- that you can get transistors with good reproducibility, good stability, near-zero threshold voltages, large on-off current ratios and performance levels higher than amorphous silicon, that may convince designers to consider this technology."

Adapted from materials provided by Georgia Institute of Technology.

Micro Microwave Does Pinpoint Cooking For Miniaturized Labs

ScienceDaily (Nov. 15, 2007) — Researchers at the National Institute of Standards and Technology (NIST) and George Mason University have demonstrated what is probably the world's smallest microwave oven, a tiny mechanism that can heat a pinhead-sized drop of liquid inside a container slightly shorter than an ant and half as wide as a single hair. The micro microwave is intended for lab-on-a-chip devices that perform rapid, complex chemical analyses on tiny samples.
In a paper in the Journal of Micromechanics and Microengineering*, the research team led by NIST engineer Michael Gaitan describes for the first time how a tiny dielectric microwave heater can be successfully integrated with a microfluidic channel to control selectively and precisely the temperature of fluid volumes ranging from a few microliters (millionths of a liter) to sub-nanoliters (less than a billionth of a liter). Sample heating is an essential step in a wide range of analytic techniques that could be built into microfluidic devices, including the high-efficiency polymerase chain reaction (PCR) process that rapidly amplifies tiny samples of DNA for forensic work, and methods to break cells open to release their contents for study.

The team embedded a thin-film microwave transmission line between a glass substrate and a polymer block to create its micro microwave oven. A trapezoidal-shaped cut in the polymer block only 7 micrometers across at its narrowest--the diameter of a red blood cell--and nearly 4 millimeters long (approximately the length of an ant) serves as the chamber for the fluid to be heated.

Based on classical theory of how microwave energy is absorbed by fluids, the research team developed a model to explain how their miniature oven would work. They predicted that electromagnetic fields localized in the gap would directly heat the fluid in a selected portion of the micro channel while leaving the surrounding area unaffected. Measurements of the microwaves produced by the system and their effect on the fluid temperature in the micro channel validated the model by showing that the increase in temperature of the fluid was predominantly due to the absorbed microwave power.
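To get a sense of scale for why modest microwave power suffices at these volumes, here is a simple energy-balance sketch. All numbers are illustrative assumptions rather than values from the paper: the fluid is treated as water and heat losses to the channel walls are ignored, so this is an upper bound on the temperature rise.

```python
# Rough energy-balance sketch of heating a tiny fluid plug.
# Assumptions (not from the paper): water properties, no heat loss.

RHO_WATER = 1000.0   # density, kg/m^3
CP_WATER = 4186.0    # specific heat capacity, J/(kg*K)

def temperature_rise(power_w: float, seconds: float, volume_l: float) -> float:
    """Temperature rise (K) of a water volume absorbing `power_w` watts
    for `seconds`, ignoring losses to the surroundings."""
    mass_kg = RHO_WATER * volume_l * 1e-3   # 1 L = 1e-3 m^3
    energy_j = power_w * seconds
    return energy_j / (mass_kg * CP_WATER)

# A 1-nanoliter plug absorbing 10 mW for just 10 ms:
dt = temperature_rise(power_w=0.010, seconds=0.010, volume_l=1e-9)
print(f"temperature rise: {dt:.1f} K")  # ~23.9 K
```

The point of the toy calculation is that at sub-nanoliter scales the thermal mass is so small that milliwatt-level absorbed power heats the sample in milliseconds, which is what makes rapid temperature cycling plausible.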

Once the new technology is more refined, the researchers hope to use it to design a microfluidic microwave heater that can cycle temperatures rapidly and efficiently for a host of applications.

The work is supported by the Office of Science and Technology at the Department of Justice's National Institute of Justice.

* J.J. Shah, S.G. Sundaresan, J. Geist, D.R. Reyes, J.C. Booth, M.V. Rao and M. Gaitan. Microwave dielectric heating of fluids in an integrated microfluidic device. Journal of Micromechanics and Microengineering, 17: 2224-2230 (2007)

Adapted from materials provided by National Institute of Standards and Technology.

Friday, November 23, 2007

PCBs May Threaten Killer Whale Populations For 30-60 Years

ScienceDaily (Sep. 10, 2007) — Orcas, or killer whales, may continue to suffer the effects of contamination with polychlorinated biphenyls (PCBs) for the next 30 to 60 years, despite 1970s-era regulations that have reduced overall PCB concentrations in the environment, researchers in Canada report. The study calls for better standards to protect these rare marine mammals.
In the study, Brendan Hickie and Peter S. Ross and colleagues point out that orcas face a daunting array of threats to survival, including ship traffic, reduced abundance of prey and environmental contamination. Orcas, which reach a length exceeding 25 feet and weights of 4-5 tons, already are the most PCB-contaminated creatures on Earth. Scientists are trying to determine how current declines in PCBs in the environment may affect orcas throughout an exceptionally long life expectancy, which ranges up to 90 years for females and 50 years for males.

The new study used mathematical models and measurements of PCBs in salmon (orcas' favorite food) and ocean floor cores to recreate a PCB exposure history to estimate PCB concentrations in killer whales over time. It concluded that the "threatened" northern population of 230 animals will likely face health risks until at least 2030, while the endangered southern population of 85 orcas may face such risks until at least 2063. PCBs make whales more vulnerable to infectious disease, impair reproduction, and impede normal growth and development, the researchers say.
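The shape of that argument can be illustrated with a toy first-order decline model. The half-life and threshold below are invented for illustration, not the study's fitted parameters; the point is that a population starting many-fold above an effect threshold stays above it for decades even when concentrations fall steadily.

```python
# Toy projection: exponential (first-order) decline of a contaminant.
# Illustrative numbers only -- not the study's PCB model.
import math

def years_to_reach(c0: float, threshold: float, half_life_years: float) -> float:
    """Years for a concentration to decay from c0 down to threshold,
    assuming simple exponential decline with the given half-life."""
    k = math.log(2) / half_life_years          # decay rate constant
    return math.log(c0 / threshold) / k

# e.g. tissues starting 8x above an effect threshold, with an assumed
# 20-year effective half-life, take three half-lives to clear it:
print(round(years_to_reach(c0=8.0, threshold=1.0, half_life_years=20.0)))  # 60
```

Under these assumed numbers the threshold is reached only after 60 years, which mirrors the study's conclusion that health risks persist until mid-century for the more contaminated southern population.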

"The findings provide conservationists, regulators, and managers with benchmarks against which the effectiveness of mitigative steps can be measured and tissue residue guidelines can be evaluated," the study reported. "The results of our study on PCBs may paint an ominous picture for risks associated with emerging chemicals, as the concentrations of structurally-related PBDEs are doubling every 4 years in marine mammals," researchers added.

"Killer Whales (Orcinus orca) Face Protracted Health Risks Associated with Lifetime Exposure to PCBs" Environmental Science & Technology, September 15, 2007

Adapted from materials provided by American Chemical Society.

Connecting Wind Farms Can Make A More Reliable And Cheaper Power Source

ScienceDaily (Nov. 21, 2007) — Wind power, long considered to be as fickle as wind itself, can be groomed to become a steady, dependable source of electricity and delivered at a lower cost than at present, according to scientists at Stanford University.
The key is connecting wind farms throughout a given geographic area with transmission lines, thus combining the electric outputs of the farms into one powerful energy source. The findings are published in the November issue of the American Meteorological Society's Journal of Applied Meteorology and Climatology.

Wind is the world's fastest growing electric energy source, according to the study's authors, Cristina Archer and Mark Jacobson. However, because wind is intermittent, it is not used to supply baseload electric power today. Baseload power is the amount of steady and reliable electric power that is constantly being produced, typically by power plants, regardless of the electricity demand. But interconnecting wind farms with a transmission grid reduces the power swings caused by wind variability and makes a significant portion of it just as consistent a power source as a coal power plant.

"This study implies that, if interconnected wind is used on a large scale, a third or more of its energy can be used for reliable electric power, and the remaining intermittent portion can be used for transportation, allowing wind to solve energy, climate and air pollution problems simultaneously," said Archer, the study's lead author and a consulting assistant professor in Stanford's Department of Civil and Environmental Engineering and research associate in the Department of Global Ecology of the Carnegie Institution.

It's a bit like having a bunch of hamsters generating your power, each in a separate cage with a treadmill. At any given time, some hamsters will be sleeping or eating and some will be running on their treadmill. If you have only one hamster, the treadmill is either turning or it isn't, so the power's either on or off. With two hamsters, the odds are better that one will be on a treadmill at any given point in time and your chances of running, say, your blender, go up. Get enough hamsters together and the odds are pretty good that at least a few will always be on the treadmill, cranking out the kilowatts.

The combined output of all the hamsters will vary, depending on how many are on treadmills at any one time, but there will be a certain level of power that is always being generated, even as different hamsters hop on or off their individual treadmills. That's the reliable baseload power.
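The hamster analogy can be put in numbers with a quick Monte Carlo sketch. This is an illustrative toy, not the study's wind model: each "farm" is simply on (producing one unit) or off, independently and with an assumed probability, and we look at the output level that is available nearly all of the time.

```python
# Monte Carlo sketch of the "hamsters on treadmills" argument:
# the reliable floor of combined output rises as more independent
# intermittent sources are interconnected. Illustrative toy model only.
import random

def reliable_floor(n_farms: int, p_on: float = 0.5,
                   trials: int = 20_000, quantile: float = 0.05,
                   seed: int = 0) -> float:
    """Fraction of total capacity available in at least (1 - quantile)
    of trials, with each farm independently 'on' with probability p_on."""
    rng = random.Random(seed)
    outputs = sorted(
        sum(rng.random() < p_on for _ in range(n_farms)) / n_farms
        for _ in range(trials)
    )
    return outputs[int(quantile * trials)]

for n in (1, 2, 10, 19):
    print(n, reliable_floor(n))
```

With one farm the floor is zero (sometimes the treadmill is empty), while with many farms a substantial fraction of capacity is almost always available, which is the intuition behind treating part of interconnected wind as baseload.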

The connected wind farms would operate the same way.

"The idea is that, while wind speed could be calm at a given location, it could be gusty at others. By linking these locations together we can smooth out the differences and substantially improve the overall performance," Archer said.

As one might expect, not all locations make sense for wind farms. Only locations with strong winds are economically competitive. In their study, Archer and Jacobson, a professor of civil and environmental engineering at Stanford, evaluated 19 sites in the Midwestern United States, with annual average wind speeds greater than 6.9 meters per second at a height of 80 meters above ground, the hub height of modern wind turbines. Modern turbines are 80-100 meters high, approximately the height of a 30-story building, and their blades are 70 meters long or more.

The researchers used hourly wind data, collected and quality-controlled by the National Weather Service, for the entire year of 2000 from the 19 sites in the Midwestern United States. They found that an average of 33 percent and a maximum of 47 percent of yearly-averaged wind power from interconnected farms can be used as reliable, baseload electric power. These percentages would hold true for any array of 10 or more wind farms, provided it met the minimum wind speed and turbine height criteria used in the study.

Another benefit of connecting multiple wind farms is reducing the total distance that all the power has to travel from the multiple points of origin to the destination point. Interconnecting multiple wind farms to a common point and then connecting that point to a far-away city reduces the cost of transmission.

It's the same as having lots of streams and creeks join together to form a river that flows out to sea, rather than having each creek flow all the way to the coast by carving out its own little channel.

Another type of cost saving also results when the power combines to flow in a single transmission line. Explains Archer: Suppose a power company wanted to bring power from several independent farms, each with a maximum capacity of, say, 1,500 kilowatts (kW), from the Midwest to California. Each farm would need its own short 1,500 kW transmission line to a common point in the Midwest. A larger transmission line would then be needed between the common point and California, typically with a total capacity of 1,500 kW multiplied by the number of independent farms connected.

However, with geographically dispersed farms, it is unlikely that all of them would simultaneously experience winds strong enough to produce their 1,500 kW maximum output at the same time. Thus, the capacity of the long-distance transmission line could be reduced significantly with only a small loss in overall delivered power.

The more wind farms connected to the common point in the Midwest, the greater the reduction in long-distance transmission capacity that is possible.

"Due to the high cost of long-distance transmission, a 20 percent reduction in transmission capacity with little delivered power loss would notably reduce the cost of wind energy," added Archer, who calculated the decrease in delivered power to be only about 1.6 percent.

With only one farm, a 20 percent reduction in long-distance transmission capacity would decrease delivered power by 9.8 percent -- not a full 20 percent, because the farm is not producing its maximum possible output all the time.
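The transmission-sizing argument can be sketched with a small simulation. The numbers here are assumptions for illustration, not the study's data: each farm's hourly output is drawn uniformly between zero and full capacity (a crude stand-in for real wind statistics), and the shared line is capped at 80 percent of the summed nameplate capacity.

```python
# Illustrative sketch: how much energy is curtailed when the shared
# long-distance line is sized at 80% of combined capacity?
# Assumed uniform per-farm output -- not the study's wind data.
import random

def curtailed_fraction(n_farms: int, cap_fraction: float = 0.8,
                       hours: int = 50_000, seed: int = 1) -> float:
    """Fraction of total energy lost when the line carries at most
    cap_fraction of the combined nameplate capacity."""
    rng = random.Random(seed)
    cap = cap_fraction * n_farms
    produced = delivered = 0.0
    for _ in range(hours):
        total = sum(rng.random() for _ in range(n_farms))  # per-farm output in [0, 1]
        produced += total
        delivered += min(total, cap)                       # clip at line capacity
    return 1.0 - delivered / produced

print(f"1 farm:   {curtailed_fraction(1):.3f}")
print(f"19 farms: {curtailed_fraction(19):.4f}")
```

With one farm, every high-output hour is clipped and a noticeable share of energy is lost; with many independent farms, simultaneous full output is rare, so the smaller line loses almost nothing, which is the study's point about sizing the common line well below total capacity.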

Archer said that if the United States and other countries each started to organize the siting and interconnection of new wind farms based on a master plan, the power supply could be smoothed out and transmission requirements could be reduced, decreasing the cost of wind energy. This could result in the large-scale market penetration of wind energy--already the most inexpensive clean renewable electric power source--which could contribute significantly to an eventual solution to global warming, as well as reducing deaths from urban air pollution.

Adapted from materials provided by American Meteorological Society.

New Technology Illuminates Protein Interactions In Living Cells

ScienceDaily (Nov. 15, 2007) — While fluorescence has long been used to tag biological molecules, a new technology developed at Yale allows researchers to use tiny fluorescent probes to rapidly detect and identify protein interactions within living cells while avoiding the biological disruption of existing methods, according to a report in Nature Chemical Biology.
Proteins are commonly tagged using variants of the "green fluorescent protein" (GFP), but these proteins are very large and are often toxic to live cells. They also tend to aggregate, making them difficult to work with and monitor. This new methodology uses the fluorescence emitted by a small molecule, rather than a large protein. It gives researchers a less disruptive way to capture images of the intricate contacts between folded regions of an individual protein or the partnerships between proteins in a live cell.

"Our approach bypasses many of the problems associated with fluorescent proteins, so that we can image protein interactions in living cells," said senior author Alanna Schepartz, the Milton Harris Professor of Chemistry, and Howard Hughes Medical Institute Professor at Yale. "Using these molecules we can differentiate alternative or misfolded proteins from those that are folded correctly and also detect protein partnerships in live cells."

Each protein is a three-dimensional structure created by "folding" its linear chain of amino acids. Usually only one shape "works" for each protein. The particular shape a protein takes depends on its amino acids and on other processes within the cell.

Schepartz and her team devised their new tagging system using small molecules called "profluorescent" biarsenical dyes. These molecules easily enter cells and become fluorescent when they bind to a specific amino acid tag sequence within a protein. While these compounds have been used for about a decade to bind single proteins, this is the first time they have been used to identify interactions between proteins.

The researchers' strategy was to split the amino acid tag for the dye into two pieces, placing the pieces far apart in the chain of a protein they genetically engineered and expressed in cells. They then monitored cells exposed to the dye. Where the protein folded correctly, the two parts of the tag came together, and the fluorescent compound bound and lit up. There was no signal unless the protein folded normally.

"This method of detection can provide important insights into how proteins choose their partners within the cell -- choices that may be very different from those made in a test tube," said Schepartz. She emphasizes that this technology does not monitor the process of protein folding -- but, rather "sees" the protein conformations that exist at a given time.

"In theory, our technique could be used to target and selectively inactivate specific protein complexes in the cell, as therapy, or to visualize conformations at very high resolution for diagnostic purposes," said Schepartz. She speculates that the technology could be applied to detection strategies that identify protein misfolding in neurodegenerative diseases like Alzheimer's or Parkinson's.

Other authors on the paper are Nathan W. Luedtke, Rachel J. Dexter and Daniel B. Fried from the Schepartz lab at Yale. Funding from the Howard Hughes Medical Institute and the National Institutes of Health supported the research.

Journal citation: Nature Chemical Biology: (early online) 04 November 2007 | doi:10.1038/nchembio.2007.49

Adapted from materials provided by Yale University.

'Wiring Up' Enzymes For Producing Hydrogen In Fuel Cells

ScienceDaily (Nov. 21, 2007) — Researchers in Colorado are reporting the first successful "wiring up" of hydrogenase enzymes. Those much-heralded proteins are envisioned as stars in a future hydrogen economy where they may serve as catalysts for hydrogen production and oxidation in fuel cells.
Their report describes a successful electrical connection between a carbon nanotube and hydrogenase.

In the new study, Michael J. Heben, Paul W. King, and colleagues explain that bacterial enzymes called hydrogenases show promise as powerful catalysts for using hydrogen in fuel cells, which can produce electricity with virtually no pollution for motor vehicles, portable electronics, and other devices.

However, scientists report difficulty incorporating these enzymes into electrical devices because the enzymes do not form good electrical connections with fuel cell components. Currently, precious metals, such as platinum, are typically needed to perform this catalysis.

The researchers combined hydrogenase enzymes with carbon nanotubes, submicroscopic strands of pure carbon that are excellent electrical conductors. In laboratory studies, they used photoluminescence spectroscopy measurements to confirm that a good electrical connection had been established.

These new "biohybrid" conjugates could reduce the cost of fuel cells by reducing or eliminating the need for platinum and other costly metal components, they say.

The journal article, "Wiring-Up Hydrogenase with Single-Walled Carbon Nanotubes," is scheduled for the November issue of ACS' Nano Letters.

Adapted from materials provided by American Chemical Society.

Wednesday, November 21, 2007

Cold Shot: Blasting Frozen Soil Sample With Ultraviolet Laser Reveals Uranium

ScienceDaily (Sep. 20, 2006) — If you want to ferret out uranium's hiding place in contaminated soil, freeze the dirt and zap it with a black light, an environmental scientist reported Tuesday at the American Chemical Society national meeting.
Scientists have long known that uranium salts under ultraviolet light will glow an eerie greenish-yellow in the dark. This phenomenon sent Henri Becquerel down the path that led to his discovery of radioactivity a century ago.

Others since noted a peculiar feature about the UV glow, or fluorescence spectra, of uranium salts: The resolution of the spectral fingerprint becomes sharper as the temperature falls.

Zheming Wang, a staff scientist at the Department of Energy's Pacific Northwest National Laboratory in Richland, Wash., has now dusted the frost off the files, applying a technique called cryogenic fluorescence spectroscopy to uranium in contaminated soil at a former nuclear fuel manufacturing site.

By cooling the sediments to minus 267 degrees Celsius, near the temperature of liquid helium, Wang and colleagues at the PNNL-based W.R. Wiley Environmental Molecular Sciences Laboratory hit a contaminated sample with a UV laser to coax a uranium fluorescence intensity more than five times that at room temperature.
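As a quick sanity check on the temperatures quoted above, a short sketch (hypothetical helper name) converts minus 267 degrees Celsius to kelvin and compares it with the boiling point of liquid helium at atmospheric pressure, about 4.2 K:

```python
# Convert a Celsius temperature to kelvin.
def to_kelvin(celsius):
    return celsius + 273.15

LIQUID_HELIUM_BP_K = 4.2  # boiling point of helium at 1 atm

sample_temp_k = to_kelvin(-267)  # ~6.15 K
# The cooled sediment sits only a couple of kelvin above liquid-helium temperature.
```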

What is more, other spectra that were absent at room temperature popped out when frozen, enabling Wang and colleagues to distinguish different forms of uranium from one another, including uranium-carbonate that moves readily underground and is a threat to water supplies.

Adapted from materials provided by Pacific Northwest National Laboratory.

Chemists Create Novel Uranium Molecule

ScienceDaily (Nov. 19, 2007) — Chemists at the University of Virginia have prepared the first uranium methylidyne molecule ever reported, despite the high reactivity of uranium atoms with other molecules. The new molecule contains a uranium-carbon triple bond.
Their finding, which contributes to chemists’ fundamental understanding of uranium chemistry, is reported in the Proceedings of the National Academy of Sciences.

“This is the first example of a triple bond between uranium and carbon in a hydrocarbon,” said Lester Andrews, the lead scientist and a professor of chemistry at the University of Virginia.

Andrews and members of his U.Va. laboratory have been working on uranium chemistry for 15 years with dozens of different molecules. For this finding they used a focused pulsed laser to evaporate depleted uranium in a vacuum chamber, reacted the vapor with fluoroform molecules, then trapped the new molecule in argon frozen at 8 K, near absolute zero.

“The uranium atom went into a C-F bond and rearranged the other fluorines to make the new molecule HC≡UF3, which is uranium trifluoride methylidyne,” Andrews said. It is this exotic triple bond between uranium and carbon that the researchers have characterized.

“After we did the infrared spectroscopy of the new molecule, we performed calculations to predict the structure and bonding properties of the molecule and compared the predicted vibrational spectrum with the one we observed,” Andrews said. “The agreement was good enough for us to conclude we had in fact made the molecule that we set out to make.”

Uranium occurs naturally in the ground in the form of ores. The material Andrews used in his experiments is depleted uranium: almost entirely the relatively stable, long-lived isotope U-238, with very little U-235, the “hot” isotope used for fuel and weapons.

“I think it’s imperative for people to know more about uranium chemistry, particularly our policy makers,” Andrews said. “People need to realize that you can’t just dig up a shovelful of uranium ore and make a bomb from it. One has to go through a considerable amount of chemical processing to win uranium from its ore. You have to refine the ore into metal and enrich the material in the hot isotope before it has uses as a nuclear material. This is highly complicated chemistry.”

Andrews’s colleagues include Jonathan T. Lyon, a recent Ph.D. graduate from U.Va. who performed experiments and calculations, and Han-Shi Hu and Jun Li, chemists at Tsinghua University in Beijing who performed additional theoretical calculations to describe this new molecule.

Adapted from materials provided by University of Virginia.

New Metal Alloys Boost High-temperature Heat Treatment Of Jet Engine Components

ScienceDaily (Jul. 27, 2007) — Measurement scientists at the National Physical Laboratory have reduced the uncertainty of thermocouple temperature sensors at high temperatures to within a degree. This may allow manufacturers to improve efficiency and reduce wastage in the quest for more efficient jet engines and lower aircraft emissions.
Aircraft engines are more efficient at higher temperatures, but this requires thermal treatment of engine components at very specific high temperatures in excess of 1300 °C. If the heat treatment temperature deviates too much from the optimal temperature, the treatment may be inadequate.

Thermocouples are calibrated using materials with known melting points (fixed points), but the available reference materials in the region of the very high temperatures required to treat jet engine components have a large uncertainty compared with the lower temperature fixed points.
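Fixed-point calibration amounts to pinning the thermocouple's EMF reading at known melting temperatures and interpolating between them. A minimal sketch, with illustrative EMF values and a hypothetical high-temperature alloy point alongside the real ITS-90 silver and copper points:

```python
# Hypothetical sketch of fixed-point thermocouple calibration by
# piecewise-linear interpolation between known melting points.
def calibrate(fixed_points):
    """fixed_points: list of (measured_emf_mV, true_temp_C), sorted by EMF.
    Returns a function mapping a measured EMF to a temperature estimate."""
    def emf_to_temp(emf):
        # Find the bracketing pair of fixed points and interpolate linearly.
        for (e0, t0), (e1, t1) in zip(fixed_points, fixed_points[1:]):
            if e0 <= emf <= e1:
                return t0 + (emf - e0) * (t1 - t0) / (e1 - e0)
        raise ValueError("EMF outside calibrated range")
    return emf_to_temp

# EMF values are illustrative, not real thermocouple data.
emf_to_temp = calibrate([(9.0, 961.78),    # silver freezing point (ITS-90)
                         (13.0, 1084.62),  # copper freezing point (ITS-90)
                         (18.0, 1330.0)])  # hypothetical alloy reference point
```

Adding well-characterized reference points above 1100 °C, as the NPL alloys do, shortens the interpolation span in exactly the region where jet engine heat treatments operate.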

Using a new type of metal alloy, National Physical Laboratory scientists have identified a range of reference points for thermocouples beyond 1100 °C. With this added confidence in thermal sensors, component manufacturers are expected to start improving hotter thermal treatments and reducing wastage during production of parts for engines which can run at higher temperatures.

Adapted from materials provided by National Physical Laboratory.

Wireless Sensors To Monitor Bearings In Jet Engines Developed

ScienceDaily (Nov. 5, 2007) — Researchers at Purdue University, working with the U.S. Air Force, have developed tiny wireless sensors resilient enough to survive the harsh conditions inside jet engines to detect when critical bearings are close to failing and prevent breakdowns.

The devices are an example of an emerging technology known as "micro electromechanical systems," or MEMS, which are machines that combine electronic and mechanical components on a microscopic scale.

"The MEMS technology is critical because it needs to be small enough that it doesn't interfere with the performance of the bearing itself," said Farshid Sadeghi, a professor of mechanical engineering. "And the other issue is that it needs to be able to withstand extreme heat."

The engine bearings must function amid temperatures of about 300 degrees Celsius, or 572 degrees Fahrenheit.
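The Celsius-to-Fahrenheit figures quoted here and below can be checked with a one-line helper (hypothetical name):

```python
# Standard Celsius -> Fahrenheit conversion: F = C * 9/5 + 32.
def celsius_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

assert celsius_to_fahrenheit(300) == 572.0  # jet-engine bearing environment
assert celsius_to_fahrenheit(210) == 410.0  # limit of current sensor technology
```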

The researchers have shown that the new sensors can detect impending temperature-induced bearing failure significantly earlier than conventional sensors.

"This kind of advance warning is critical so that you can shut down the engine before it fails," said Dimitrios Peroulis, an assistant professor of electrical and computer engineering.

Findings will be detailed in a research paper to be presented on Tuesday (Oct. 30) during the IEEE Sensors 2007 conference in Atlanta, sponsored by the Institute of Electrical and Electronics Engineers. The paper was written by electrical and computer engineering graduate student Andrew Kovacs, Peroulis and Sadeghi.

The sensors could be in use in a few years in military aircraft such as fighter jets and helicopters. The technology also has potential applications in commercial products, including aircraft and cars.

"Anything that has an engine could benefit through MEMS sensors by keeping track of vital bearings," Peroulis said. "This is going to be the first time that a MEMS component will be made to work in such a harsh environment. It is high temperature, messy, oil is everywhere, and you have high rotational speeds, which subject hardware to extreme stresses."

The work is an extension of Sadeghi's previous research aimed at developing electronic sensors to measure the temperature inside critical bearings in communications satellites.

"This is a major issue for aerospace applications, including bearings in satellite attitude control wheels to keep the satellites in position," Sadeghi said.

The wheels are supported by two bearings. If mission controllers knew the bearings were going bad on a specific unit, they could turn it off and switch to a backup.

"What happens, however, is that you don't get any indication of a bearing's imminent failure, and all of a sudden the gyro stops, causing the satellite to shoot out of orbit," Sadeghi said. "It can take a lot of effort and fuel to try to bring it back to the proper orbit, and many times these efforts fail."

The Purdue researchers received a grant from the U.S. Air Force in 2006 to extend the work for high-temperature applications in jet engines.

"Current sensor technology can withstand temperatures of up to about 210 degrees Celsius, and the military wants to extend that to about 300 degrees Celsius," Sadeghi said. "At the same time, we will need to further miniaturize the size."

The new MEMS sensors provide early detection of impending failure by directly monitoring the temperature of engine bearings, whereas conventional sensors work indirectly by monitoring the temperature of engine oil, yielding less specific data.

The MEMS devices will not require batteries and will transmit temperature data wirelessly.

"This type of system uses a method we call telemetry because the devices transmit signals without wires, and we power the circuitry remotely, eliminating the need for batteries, which do not perform well in high temperatures," Peroulis said.

Power will be provided using a technique called inductive coupling, which uses coils of wire to generate current.
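Inductive coupling is essentially transformer action: an alternating current in a primary coil induces a voltage in a secondary coil through their mutual inductance. A minimal sketch of the governing relations, using illustrative values that are not taken from the Purdue design:

```python
import math

def mutual_inductance(l_primary, l_secondary, k):
    """Mutual inductance M = k * sqrt(L1 * L2) for coupling coefficient k in [0, 1]."""
    return k * math.sqrt(l_primary * l_secondary)

def induced_emf(m, di_dt):
    """Magnitude of the EMF induced in the secondary coil: |V| = M * dI/dt."""
    return m * di_dt

# Illustrative values only: two 1 uH coils, loosely coupled (k = 0.1),
# with the primary current ramping at 1e5 A/s.
M = mutual_inductance(1e-6, 1e-6, 0.1)  # 1e-7 H
volts = induced_emf(M, 1e5)             # 0.01 V delivered to the secondary
```

The loose coupling coefficient reflects the air gap between an external powering coil and a sensor buried near a bearing; batteryless operation follows because the secondary draws all its energy from this induced voltage.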

"The major innovation will be the miniaturization and design of the MEMS device, allowing us to install it without disturbing the bearing itself," Peroulis said.

Data from the onboard devices will not only indicate whether a bearing is about to fail but also how long it is likely to last before it fails, Peroulis said.

The research is based at the Birck Nanotechnology Center in Purdue's Discovery Park and at Sadeghi's mechanical engineering laboratory.

Adapted from materials provided by Purdue University.

Could Nuclear Power Be The Answer To Fresh Water?

ScienceDaily (Nov. 20, 2007) — Scientists are working on new solutions to the ancient problem of maintaining a fresh water supply. With predictions that more than 3.5 billion people will live in areas facing severe water shortages by the year 2025, the challenge is to find an environmentally benign way to remove salt from seawater.
Global climate change, desertification, and over-population are already taking their toll on fresh water supplies. In coming years, fresh water could become a rare and expensive commodity. Research results presented at the Trombay Symposium on Desalination and Water Reuse offer a new perspective on desalination and describe alternatives to the current expensive and inefficient methods.

Pradip Tewari of the Desalination Division at Bhabha Atomic Research Centre in Mumbai, India, discusses the increasing demand for water in India, driven not only by a growing population but also by rapid agricultural and industrial expansion. He suggests that a holistic approach is needed to cope with freshwater needs: primarily seawater desalination in coastal areas and brackish water desalination elsewhere, along with rainwater harvesting, particularly during the monsoon season. "The contribution of seawater and brackish water desalination would play an important role in augmenting the freshwater needs of the country."

Meenakshi Jain of CDM & Environmental Services and Positive Climate Care Pvt Ltd in Jaipur highlights the energy problem facing regions with little fresh water. "Desalination is an energy-intensive process. Over the long term, desalination with fossil energy sources would not be compatible with sustainable development; fossil fuel reserves are finite and must be conserved for other essential uses, whereas demands for desalted water would continue to increase."

Jain emphasizes that a sustainable, non-polluting solution to water shortages is essential. Renewable energy sources, such as wind, solar, and wave power, may be used in conjunction with desalination to generate electricity and produce fresh water, which could significantly reduce potential increases in greenhouse gas emissions. "Nuclear energy seawater desalination has a tremendous potential for the production of freshwater," Jain adds.

The development of a floating nuclear plant is one of the more surprising solutions to the desalination problem. S.S. Verma of the Department of Physics at SLIET in Punjab points out that small floating nuclear power plants (FNPPs) represent a way to produce electrical energy with minimal environmental pollution and greenhouse gas emissions. Such plants could be sited offshore near any densely populated coastal area, not only providing cheap electricity but also powering a desalination plant with their excess heat. "Companies are already in the process of developing a special desalination platform for attachment to FNPPs helping the reactor to desalinate seawater," Verma points out.

A. Raha and colleagues at the Desalination Division of the Bhabha Atomic Research Centre in Trombay point out that Low-Temperature Evaporation (LTE) desalination technology, which uses low-quality waste heat in the form of hot water (as low as 50 °C) or low-pressure steam from a nuclear power plant, has been developed to produce high-purity water directly from seawater. Safety, reliability, and viable economics have already been demonstrated. BARC itself has recently commissioned a 50-tons-per-day low-temperature desalination plant.

Co-editor of the journal*, B.M. Misra, formerly head of BARC, suggests that solar, wind, and wave power, while seemingly cost-effective approaches to desalination, are not viable for the kind of large-scale fresh water production that a growing, increasingly industrial population needs.

India already has plans for the rapid expansion of its nuclear power industry. Misra suggests that large-scale desalination plants could readily be incorporated into those plans. "The development of advanced reactors providing heat for hydrogen production and large amounts of waste heat will catalyze large-scale seawater desalination for the economic production of fresh water," he says.

*This research is published in the International Journal of Nuclear Desalination.

Adapted from materials provided by Inderscience Publishers.

Nokia N95 - Sophisticated handsets with specialized capabilities


Nokia N Series mobile phones are a series of sophisticated handsets with specialized capabilities in different spheres. Great looks, innovative features and ease of use characterise all the phones in the Nokia N Series. The Nokia N95 is the most recent phone in the series, and with its stunning looks and wide spectrum of user-friendly features, it does not disappoint.

The Nokia N95 is an impressive third generation (3G) smartphone. Its light weight and small profile ensure that users have no trouble carrying the handset from place to place: it weighs only 120 g and measures 99 x 53 x 21 mm.

The multimedia options of the Nokia N95 are equally impressive. An integrated 5-megapixel digital camera captures shots with impressive image quality. An FM radio lets users listen to programs and music from their favorite stations, and a built-in MP3 player is perfect for listening to music on the move; a set of headphones personalises the experience further. Internet access is also covered: EDGE and WAP support ensures that users can browse the web as well as check their e-mails and messages. Moreover, the 160 MB of internal memory can be expanded to 2 GB.

Users interested in acquiring the Nokia N95 can now easily do so. Leading service providers and network operators in different parts of the world offer a number of innovative deals. In the UK, for instance, a person can take advantage of contract mobile phone deals on the Nokia N95 and own this sophisticated handset at a reduced cost.
About the Author

Adam Caitlin is an expert author on the telecommunications industry.
