About us:
Our company is KNK DECORATION. We provide sticker and signage services: sticker cutting, digital printing, screen printing, neon boxes, sign marking and other promotional materials.
We handle sticker cutting, indoor/outdoor printing, neon boxes, traffic sign fabrication (DLLAJR standard), and more.

Monday, August 25, 2008

Schizophrenia: Costly By-product Of Human Brain Evolution?

ScienceDaily (Aug. 5, 2008) — Metabolic changes responsible for the evolution of our unique cognitive abilities indicate that the brain may have been pushed to the limit of its capabilities. Research published today in BioMed Central's open access journal Genome Biology adds weight to the theory that schizophrenia is a costly by-product of human brain evolution.

Philipp Khaitovich, from the Max-Planck-Institute for Evolutionary Anthropology and the Shanghai branch of the Chinese Academy of Sciences, led a collaboration of researchers from Cambridge, Leipzig and Shanghai who investigated brains from healthy and schizophrenic humans and compared them with chimpanzee and rhesus macaque brains. The researchers looked for differences in gene expression and metabolite concentrations and, as Khaitovich explains, "identified molecular mechanisms involved in the evolution of human cognitive abilities by combining biological data from two research directions: evolutionary and medical".

The idea that certain neurological diseases are by-products of increases in metabolic capacity and brain size that occurred during human evolution has been suggested before, but in this new work the authors used new technical approaches to really put the theory to the test.

They identified the molecular changes that took place over the course of human evolution and considered those molecular changes observed in schizophrenia, a psychiatric disorder believed to affect cognitive functions such as the capacities for language and complex social relationships. They found that expression levels of many genes and metabolites that are altered in schizophrenia, especially those related to energy metabolism, also changed rapidly during evolution. According to Khaitovich, "Our new research suggests that schizophrenia is a by-product of the increased metabolic demands brought about during human brain evolution".

The authors conclude that this work paves the way for a much more detailed investigation. "Our brains are unique among all species in their enormous metabolic demand. If we can explain how our brains sustain such a tremendous metabolic flow, we will have a much better chance to understand how the brain works and why it sometimes breaks", said Khaitovich.

Journal reference:

1. Khaitovich et al. Metabolic changes in schizophrenia and human brain evolution. Genome Biology, 2008 (in press) [link]

Adapted from materials provided by BioMed Central, via EurekAlert!, a service of AAAS.

Converting Sunlight To Cheaper Energy

ScienceDaily (Aug. 25, 2008) — Scientists at South Dakota State University are working to convert sunlight into cheap electricity. The researchers are developing new materials that could make the devices used for converting sunlight to electricity cheaper and more efficient.

Assistant professor Qiquan Qiao in SDSU’s Department of Electrical Engineering and Computer Science said so-called organic photovoltaics, or OPVs, are less expensive to produce than traditional devices for harvesting solar energy.

Qiao and his SDSU colleagues also are working on organic light-emitting diodes, or OLEDs.

The new technology is sometimes referred to as “molecular electronics” or “organic electronics” — organic because it relies on carbon-based polymers and molecules as semiconductors rather than inorganic semiconductors such as silicon.

“Right now the challenge for photovoltaics is to make the technology less expensive,” Qiao said.

“Therefore, the objective is to find new materials and novel device structures for cost-effective photovoltaic devices.

“The beauty of organic photovoltaics and organic LEDs is low cost and flexibility,” the researcher continued. “These devices can be fabricated by inexpensive, solution-based processing techniques similar to painting or printing.”

“The ease of production brings costs down, while the mechanical flexibility of the materials opens up a wide range of applications,” Qiao concluded.

Organic photovoltaics and organic LEDs are made up of thin films of semiconducting organic compounds that can absorb photons of solar energy. Typically an organic polymer, or a long, flexible chain of carbon-based material, is used as a substrate on which semiconducting materials are applied as a solution using a technique similar to inkjet printing.

“The research at SDSU is focused on new materials with variable band gaps,” Qiao said.

“The band gap determines how much solar energy the photovoltaic device can absorb and convert into electricity.”

Qiao explained that visible sunlight contains only about 50 percent of the total solar energy. That means the sun is giving off just as much non-visible energy as visible energy.

“We’re working on synthesizing novel polymers with variable band gaps, including high, medium and low-band gap varieties, to absorb the full spectrum of sunlight. By this we can double the light harvesting or absorption,” Qiao said.
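
To make the band-gap idea concrete, here is a rough back-of-the-envelope sketch (not from the SDSU work): treating the sun as a 5778 K blackbody, a semiconductor only absorbs photons with energies above its band gap, so the gap fixes the slice of the spectrum available to each material.

import numpy as np

H, C, KB, EV = 6.626e-34, 3.0e8, 1.381e-23, 1.602e-19   # SI constants
T_SUN = 5778.0                                           # effective solar temperature, K

lam = np.linspace(100e-9, 4000e-9, 20000)                # wavelength grid, 0.1-4 um

def planck(wavelength, temperature):
    # Blackbody spectral radiance; the overall scale cancels in the ratio below.
    return (2 * H * C**2 / wavelength**5) / np.expm1(H * C / (wavelength * KB * temperature))

spectrum = planck(lam, T_SUN)
total_power = spectrum.sum()                             # uniform grid, so a plain sum is fine

for gap_ev in (1.1, 1.6, 1.9):                           # low, medium and high band gaps
    cutoff = H * C / (gap_ev * EV)                       # longest wavelength the gap can absorb
    fraction = spectrum[lam <= cutoff].sum() / total_power
    print(f"photons above a {gap_ev} eV gap carry ~{fraction:.0%} of the solar power")

Lower gaps capture more of the spectrum but waste more energy per photon, which is why stacking layers with different gaps, as described below, helps.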

SDSU’s scientists plan to use the variable band gap polymers to build multi-junction polymer solar cells or photovoltaics.

These devices use multiple layers of polymer/fullerene films that are tuned to absorb different spectral regions of solar energy.

Ideally, photons that are not absorbed by the first film layer pass through to be absorbed by the following layers.

The devices can harvest photons from ultraviolet to visible to infrared in order to efficiently convert the full spectrum of solar energy to electricity.

SDSU scientists also work with organic light-emitting diodes, focusing on developing novel materials and devices for full color displays.

“We are working to develop these new light-emitting and efficient, charge-transporting materials to improve the light-emitting efficiency of full color displays,” Qiao said.

Currently, LED technology is used mainly for signage displays. But in the future, as OLEDs become less expensive and more efficient, they may be used for residential lighting, for example.

The new technology will make it easy to insert lights into walls or ceilings. But instead of light bulbs, the lighting apparatus of the future may look more like a poster, Qiao said.

Qiao and his colleagues are funded in part by SDSU’s electrical engineering Ph.D. program and by the National Science Foundation and South Dakota EPSCoR, the Experimental Program to Stimulate Competitive Research.

In addition Qiao is one of about 40 faculty members from SDSU, the South Dakota School of Mines and Technology and the University of South Dakota who have come together to form Photo Active Nanoscale Systems (PANS).

The primary purpose is developing photovoltaics, or devices that will directly convert light to electricity.
Adapted from materials provided by South Dakota State University, via Newswise.

Thursday, August 21, 2008

New 'Nano-positioners' May Have Atomic-scale Precision

ScienceDaily (Aug. 21, 2008) — Engineers have created a tiny motorized positioning device that has twice the dexterity of similar devices being developed for applications that include biological sensors and more compact, powerful computer hard drives.

The device, called a monolithic comb drive, might be used as a "nanoscale manipulator" that precisely moves or senses movement and forces. The devices also can be used in watery environments for probing biological molecules, said Jason Vaughn Clark, an assistant professor of electrical and computer engineering and mechanical engineering, who created the design.

The monolithic comb drives could make it possible to improve a class of probe-based sensors that detect viruses and biological molecules. The sensors detect objects using two different components: A probe is moved while at the same time the platform holding the specimen is positioned. The new technology would replace both components with a single one - the monolithic comb drive.

The innovation could allow sensors to work faster and at higher resolution and would be small enough to fit on a microchip. The higher resolution might be used to design future computer hard drives capable of high-density data storage and retrieval. Another possible use might be to fabricate or assemble miniature micro and nanoscale machines.

Research findings were detailed in a technical paper presented in July during the University Government Industry Micro/Nano Symposium in Louisville. The work is based at the Birck Nanotechnology Center at Purdue's Discovery Park.

Conventional comb drives have a pair of comblike sections with "interdigitated fingers," meaning they mesh together. These meshing fingers are drawn toward each other when a voltage is applied. The applied voltage causes the fingers on one comb to become positively charged and the fingers on the other comb to become negatively charged, inducing an attraction between the oppositely charged fingers. If the voltage is removed, the spring-loaded comb sections return to their original position.
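
For a sense of scale, the standard textbook estimate of the lateral force on a comb drive is F = n·ε0·t·V²/g for n finger pairs of thickness t separated by gap g. The sketch below uses made-up dimensions, not figures from Clark's device.

EPS0 = 8.854e-12   # permittivity of free space, F/m

def comb_drive_force(n_pairs, thickness_m, gap_m, volts):
    # Lateral (engagement-direction) electrostatic force of an ideal comb drive.
    return n_pairs * EPS0 * thickness_m * volts**2 / gap_m

def deflection(force_n, spring_n_per_m):
    # Static deflection against the suspension's restoring spring (Hooke's law).
    return force_n / spring_n_per_m

# Hypothetical values chosen only to illustrate the orders of magnitude involved.
force = comb_drive_force(n_pairs=100, thickness_m=20e-6, gap_m=3e-6, volts=40)
print(f"force ~{force*1e6:.1f} uN, deflection ~{deflection(force, 0.5)*1e6:.0f} um")

With these notional numbers the deflection comes out in the tens of micrometres, consistent with the travel range Clark mentions later in the article.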

By comparison, the new monolithic device has a single structure with two perpendicular comb drives.

Clark calls the device monolithic because it contains comb drive components that are not mechanically and electrically separate. Conventional comb drives are structurally "decoupled" to keep opposite charges separated.

"Comb drives represent an advantage over other technologies," Clark said. "In contrast to piezoelectric actuators that typically deflect, or move, a fraction of a micrometer, comb drives can deflect tens to hundreds of micrometers. And unlike conventional comb drives, which only move in one direction, our new device can move in two directions - left to right, forward and backward - an advance that could really open up the door for many applications."

Clark also has invented a way to determine the precise deflection and force of such microdevices while reducing heat-induced vibrations that could interfere with measurements.

Current probe-based biological sensors have a resolution of about 20 nanometers.

"Twenty nanometers is about the size of 200 atoms, so if you are scanning for a particular molecule, it may be hard to find," Clark said. "With our design, the higher atomic-scale resolution should make it easier to find."

Properly using such devices requires engineers to know precisely how much force is being applied to comb drive sensors and how far they are moving. The new design is based on a technology created by Clark called electro micro metrology, which enables engineers to determine the precise displacement and force that's being applied to, or by, a comb drive. The Purdue researcher is able to measure this force by comparing changes in electrical properties such as capacitance or voltage.
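
As a hedged illustration of that idea (not Clark's published electro micro metrology equations): in an ideal comb the capacitance grows linearly with finger overlap, so a measured change in capacitance converts directly to displacement, and force then follows from the suspension stiffness.

EPS0 = 8.854e-12   # F/m

def displacement_from_capacitance(delta_c_farads, n_pairs, thickness_m, gap_m):
    # For an ideal comb, dC/dx = n * eps0 * t / g, so dx = dC / (dC/dx).
    dc_dx = n_pairs * EPS0 * thickness_m / gap_m
    return delta_c_farads / dc_dx

def force_from_displacement(dx_m, spring_n_per_m):
    return spring_n_per_m * dx_m   # Hooke's law

# Hypothetical numbers, for illustration only.
dx = displacement_from_capacitance(5e-15, n_pairs=100, thickness_m=20e-6, gap_m=3e-6)
print(f"a 5 fF change ~ {dx*1e9:.0f} nm of travel, ~{force_from_displacement(dx, 0.5)*1e9:.0f} nN of force")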

Clark used computational methods called nodal analysis and finite element analysis to design, model and simulate the monolithic comb drives.

The research paper describes how the monolithic comb drive works when voltage is applied. The results show independent left-right and forward-backward movement as functions of applied voltage in color-coded graphics.

The findings are an extension of research to create an ultra-precise measuring system for devices having features on the size scale of nanometers, or billionths of a meter. Clark has led research to create devices that "self-calibrate," meaning they are able to precisely measure themselves. Such measuring methods and standards are needed to better understand and exploit nanometer-scale devices.

The size of the entire device is less than one millimeter, or a thousandth of a meter. The smallest feature size is about three micrometers, roughly one-thirtieth as wide as a human hair.

"You can make them smaller, though," Clark said. "This is a proof of concept. The technology I'm developing should allow researchers to practically and efficiently extract dozens of geometric and material properties of their microdevices just by electronically probing changes in capacitance or voltage."

In addition to finite element analysis, Clark used a simulation tool that he developed called Sugar.

"Sugar is fast and allows me to easily try out many design ideas," he said. "After I narrow down to a particular design, I then use finite element analysis for fine-tuning. Finite element analysis is slow, but it is able to model subtle physical phenomena that Sugar doesn't do as well."

Clark's research team is installing Sugar on the nanoHub this summer, making the tool available to other researchers. The nanoHub is operated by the Network for Computational Nanotechnology, funded by the National Science Foundation and housed at Purdue's Birck Nanotechnology Center.

The researchers also are in the process of fabricating the devices at the Birck Nanotechnology Center.
Adapted from materials provided by Purdue University.

Energy Storage For Hybrid Vehicles

ScienceDaily (Aug. 18, 2008) — Hybrid technology combines the advantages of combustion engines and electric motors. Scientists are developing high-performance energy storage units, a prerequisite for effective hybrid motors.

The vehicle is powered by petroleum on the freeway and by electricity in town, thus using considerably less energy. A hybrid propulsion system switches over to generator operation when the brakes go on, producing electric current that is temporarily stored in a battery. The electric motor uses this current when starting up. This yields tremendous savings, particularly in urban traffic.
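
As a rough, illustrative calculation (the numbers are assumptions, not Fraunhofer's): the energy available for recovery in each stop is the vehicle's kinetic energy, ½mv², of which only part survives the conversion and storage round trip.

# Order-of-magnitude sketch of regenerative braking (illustrative values only).
def recoverable_energy_wh(mass_kg, speed_kmh, roundtrip_eff=0.6):
    v = speed_kmh / 3.6                         # km/h -> m/s
    kinetic_j = 0.5 * mass_kg * v**2            # energy otherwise lost as heat in one stop
    return roundtrip_eff * kinetic_j / 3600.0   # joules -> watt-hours

# A ~1500 kg car stopping from 50 km/h, assuming ~60% round-trip efficiency.
per_stop = recoverable_energy_wh(1500, 50)
print(f"~{per_stop:.0f} Wh recovered per stop; ~{per_stop*100/1000:.1f} kWh over 100 urban stops")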

But up to now, hybrid technology has always had a storage problem. Scientists from three Fraunhofer Institutes are developing new storage modules in a project called “Electromobility Fleet Test”.

The pilot project was launched by Volkswagen and Germany’s Federal Ministry for the Environment BMU together with seven other partners. The Fraunhofer Institutes for Silicon Technology ISIT in Itzehoe, Integrated Circuits IIS in Nuremberg, and Integrated Systems and Device Technology IISB in Erlangen will be pooling their expertise for the next three years. The researchers are developing an energy storage module based on lithium-polymer accumulator technology that is suitable for use in vehicles.

“This module has to be able to withstand the harsh environmental conditions it will encounter in a hybrid vehicle, and above all it must guarantee high operational reliability and a long service life,” states ISIT scientist Dr. Gerold Neumann, who coordinates the Fraunhofer activities. The researchers hope to reach this goal with new electrode materials that are kinder to the environment.

A specially developed battery management system makes the energy storage device more durable and reliable. The experts are also researching new concepts that will enable large amounts of energy to be stored in a small space. To do this, they integrate mechanical and electrical components in a single module, devising systems for temperature control, performance data registration and high-voltage safety.

The tasks involved are distributed between the three Fraunhofer Institutes according to their skills: The ISIT experts, who have long experience in developing and manufacturing lithium accumulators, are manufacturing the cells. Their colleagues at IIS are responsible for battery management and monitoring. The scientists from IISB are contributing their know-how on power electronics components to configure the accumulator modules. The development and configuration of the new energy storage module is expected to be finished by mid-2010. Volkswagen AG – the industrial partner in this project – will then carry out field trials to test the modules’ suitability for everyday use in the vehicles.
Adapted from materials provided by Fraunhofer-Gesellschaft.

Large Hadron Collider Set To Unveil A New World Of Particle Physics

ScienceDaily (Aug. 21, 2008) — The field of particle physics is poised to enter unknown territory with the startup of a massive new accelerator--the Large Hadron Collider (LHC)--in Europe this summer. On September 10, LHC scientists will attempt to send the first beam of protons speeding around the accelerator.

The LHC will put hotly debated theories to the test as it generates a bonanza of new experimental data in the coming years. Potential breakthroughs include an explanation of what gives mass to fundamental particles and identification of the mysterious dark matter that makes up most of the mass in the universe. More exotic possibilities include evidence for new forces of nature or hidden extra dimensions of space and time.

"The LHC is a discovery machine. We don't know what we'll find," said Abraham Seiden, professor of physics and director of the Santa Cruz Institute for Particle Physics (SCIPP) at the University of California, Santa Cruz.

SCIPP was among the initial group of institutions that spearheaded U.S. participation in the LHC. About half of the entire U.S. experimental particle-physics community has focused its energy on the ATLAS and CMS detectors, the largest of four detectors where experiments will be performed at the LHC. SCIPP researchers have been working on the ATLAS project since 1994. It is one of many international physics and astrophysics projects that have drawn on SCIPP's 20 years of experience developing sophisticated technology for tracking high-energy subatomic particles.

The scale of the LHC is gigantic in every respect--its physical size, the energies attained, the amount of data it will generate, and the size of the international collaboration involved in its planning, construction, and operation. In September, high-energy beams of protons will begin circulating around the LHC's 27-kilometer (16.8-mile) accelerator ring located 100 meters (328 feet) underground at CERN, the European particle physics lab based in Geneva, Switzerland. After a period of testing, the beams will cross paths inside the detectors and the first collisions will take place.

Even before the machine is ramped up to its maximum energy early next year, it will smash protons together with more energy than any previous collider. The debris from those collisions--showers of subatomic particles that the detectors will track and record--will yield results that could radically change our understanding of the physical world.

In a talk at the American Physical Society meeting earlier this year, Seiden gave an overview of the LHC research program, including a rough timeline for reaching certain milestones. One of the most highly anticipated milestones, for example, is detection of the Higgs boson, a hypothetical particle that would fill a major gap in the standard model of particle physics by endowing fundamental particles with mass. Detection of the Higgs boson is most likely to occur in 2010, Seiden said.

But there's no guarantee that the particle actually exists; nature may have found another way to create mass. "I'm actually hoping we find something unexpected that does the job of the Higgs," Seiden said.

Technically, the Higgs boson was postulated to explain a feature of particle interactions known as the breaking of electroweak symmetry, and the LHC is virtually guaranteed to explain that phenomenon, according to theoretical physicist Howard Haber.

"We've been debating this for 30 years, and one way or another, the LHC will definitively tell us how electroweak symmetry breaking occurs. That's a fundamental advance," said Haber, a professor of physics at UCSC.

Haber and other theorists have spent years imagining possible versions of nature, studying their consequences, and describing in detail what the evidence would look like in the experimental data from a particle accelerator such as the LHC. The Higgs boson won't be easy to find, he said. The LHC should produce the particles in abundance (if they exist), but most of them will not result in a very distinctive signal in the detectors.

"It's a tough game. You can only do it by statistical analysis, since there are other known processes that produce events that can mimic a Higgs boson signal," Haber said.

Evidence to support another important theory--supersymmetry--could show up sooner. In many ways, supersymmetry is a more exciting possibility than the Higgs boson, according to theorist Michael Dine, also a professor of physics at UCSC.

"By itself, the Higgs is a very puzzling particle, so there have been a lot of conjectures about some kind of new physics beyond the standard model. Supersymmetry has the easiest time fitting in with what we know," Dine said.

Adding to its appeal, supersymmetry predicts the existence of particles that are good candidates to account for dark matter. Astronomers have detected dark matter through its gravitational effects on stars and galaxies, but they don't yet know what it is. Particles predicted by supersymmetry that could account for dark matter may be identified at the LHC as early as next year, Seiden said.

"Initially, we'll be looking for things that are known standards to make sure that everything is working properly. In 2009, we could start really looking for new things like supersymmetry," he said.

The massive ATLAS detector--45 meters (148 feet) long and 25 meters (82 feet) high--has involved more than 2,000 physicists at 166 institutions. Seiden's team at SCIPP has been responsible for developing the silicon sensors and electronics for the detector's inner tracker, which measures the trajectories of charged particles as they first emerge from the site of the collisions.

Seiden is now leading the U.S. effort to develop a major upgrade of ATLAS. The current detector is designed to last for 10 years, and the upgrade will coincide with a planned increase in the luminosity of the proton beams at the LHC (which will then become the "Super LHC").

"These large projects take such a long time, we have to start early," Seiden said.

Meanwhile, operation and testing of the current ATLAS detector is already under way at CERN, said Alexander Grillo, a SCIPP research physicist who has been working on the project from the start.

"We've been operating it and looking at cosmic ray particles," he said. "Nature gives us these cosmic rays for free, and they're the same kinds of particles we'll see when the machine turns on, so it enables us to check out certain aspects of the detector. But we're very excited to start seeing collisions from the machine."

ATLAS and the other LHC detectors are designed with "trigger" systems that ignore most of the signals and record only those events likely to yield interesting results. Out of the hundreds of millions of collisions happening every second inside the detector, only 100 of the most promising events will be selected and recorded in the LHC's central computer system.
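
Conceptually, a trigger is a cascade of fast, cheap filters that keeps only events passing rough thresholds. The toy below is meant only to convey that flavour and has nothing to do with the real ATLAS trigger menu.

import random

def toy_trigger(event, threshold_gev=200.0):
    # Keep an event only if its largest energy deposit looks interesting.
    return event["max_transverse_energy"] > threshold_gev

random.seed(1)
events = [{"max_transverse_energy": random.expovariate(1 / 20.0)} for _ in range(1_000_000)]
kept = [e for e in events if toy_trigger(e)]
print(f"kept {len(kept)} of {len(events):,} simulated events ({100 * len(kept) / len(events):.4f}%)")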

"We'll be throwing away a lot of data, so we have to make sure the triggers are working correctly," Seiden said.

Grillo noted that the ATLAS project has been a great opportunity for UCSC students. Both graduate students and undergraduates have been involved in the development of the detector, along with postdoctoral researchers, research physicists, and senior faculty.

"The graduate students and postdocs get to go to Geneva, but even the undergraduates get a chance to work in a real physics lab and be part of a major international experiment," Grillo said.

SCIPP's prominent role in the LHC is also a boon for theoretical physicists at UCSC who are not directly involved in the collaboration, such as Dine, Haber, Thomas Banks, and others.

"There is a high level of interaction and camaraderie between theorists and experimentalists at UCSC, which is not the case at other leading institutions," Dine said. "For me, it's valuable just in terms of being aware of what's happening on the experimental side."

According to Haber, the LHC is certain to generate a lot of excitement in the world of physics.

"If nothing were found beyond what we know today, that would be so radical, because it would be in violation of a lot of extremely fundamental principles," he said.

More information about the LHC and U.S. involvement in the project is available at http://www.uslhc.us. Information about SCIPP is available at http://scipp.ucsc.edu.
Adapted from materials provided by University of California - Santa Cruz.

Friday, July 25, 2008

Whales And Dolphins Influence New Wind Turbine Design

ScienceDaily (July 8, 2008) — Sea creatures have evolved over millions of years to maximise efficiency of movement through water; humans have been trying to perfect streamlined designs for barely a century. So shouldn't we be taking more notice of the experts?

Biologists and engineers from across the US have been doing just that. By studying the flippers, fins and tails of whales and dolphins, these scientists have discovered some features of their structure that contradict long-held engineering theories. Dr Frank Fish (West Chester University) will talk about the exciting impact that these discoveries may have on traditional industrial designs on July 8th at the Society for Experimental Biology's Annual Meeting in Marseille.

Some of his observations are already being applied to real life engineering problems, a concept known as biomimetics. The shape of whale flippers with one bumpy edge has inspired the creation of a completely novel design for wind turbine blades. This design has been shown to be more efficient and also quieter, but defies traditional engineering theories.

"Engineers have previously tried to ensure steady flow patterns on rigid and simple lifting surfaces, such as wings. The lesson from biomimicry is that unsteady flow and complex shapes can increase lift, reduce drag and delay 'stall', a dramatic and abrupt loss of lift, beyond what existing engineered systems can accomplish," Dr Fish advises. "There are even possibilities that this technology could be applied to aeronautical designs such as helicopter blades in the future."

The work centres on studies of vortices, tornado-shaped water formations that develop in the wake of the animals. "In the case of the humpback whale, vortices formed from tubercles (bumps) on the front edge of flippers help to generate more lift without the occurrence of stall, as well as enhancing manoeuvrability and agility," explains Dr Fish. "In the case of the tails of dolphins, vortices are formed at the end of the up and down strokes. These vortices are involved in the production of a jet in the wake of the dolphin that produces high thrust. By regulating the production of the vortices, the dolphin can maximize its efficiency while swimming."

This work was funded by the US National Science Foundation and the US Office of Naval Research.

Journal reference:

1. Fish et al. Hydrodynamic flow control in marine mammals. Integrative and Comparative Biology, 2008; DOI: 10.1093/icb/icn029

Adapted from materials provided by Society for Experimental Biology, via EurekAlert!, a service of AAAS.

New 'Window' Opens On Solar Energy: Cost Effective Devices Available Soon

ScienceDaily (July 11, 2008) — Imagine windows that not only provide a clear view and illuminate rooms, but also use sunlight to efficiently help power the building they are part of. MIT engineers report a new approach to harnessing the sun's energy that could allow just that.

The work, reported in the July 11 issue of Science, involves the creation of a novel "solar concentrator." "Light is collected over a large area [like a window] and gathered, or concentrated, at the edges," explains Marc A. Baldo, leader of the work and the Esther and Harold E. Edgerton Career Development Associate Professor of Electrical Engineering.

As a result, rather than covering a roof with expensive solar cells (the semiconductor devices that transform sunlight into electricity), the cells only need to be around the edges of a flat glass panel. In addition, the focused light increases the electrical power obtained from each solar cell "by a factor of over 40," Baldo says.
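
The "factor of over 40" has a simple geometric core: sunlight collected over the whole face of the pane is delivered to cells that cover only its thin edges. Here is a hedged sketch with made-up pane dimensions; real concentrators also lose some light in transport, so this is only the geometric part of the story.

# Rough geometric picture of an edge-collecting concentrator (illustrative sizes).
def geometric_concentration(width_m, height_m, thickness_m):
    face_area = width_m * height_m                      # area collecting sunlight
    edge_area = 2 * (width_m + height_m) * thickness_m  # area covered by solar cells
    return face_area / edge_area

# A hypothetical 1 m x 1 m pane, 5 mm thick, with cells on all four edges.
print(f"concentration ratio ~{geometric_concentration(1.0, 1.0, 0.005):.0f}x")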

Because the system is simple to manufacture, the team believes that it could be implemented within three years--even added onto existing solar-panel systems to increase their efficiency by 50 percent for minimal additional cost. That, in turn, would substantially reduce the cost of solar electricity.

In addition to Baldo, the researchers involved are Michael Currie, Jon Mapel, and Timothy Heidel, all graduate students in the Department of Electrical Engineering and Computer Science, and Shalom Goffri, a postdoctoral associate in MIT's Research Laboratory of Electronics.

"Professor Baldo's project utilizes innovative design to achieve superior solar conversion without optical tracking," says Dr. Aravinda Kini, program manager in the Office of Basic Energy Sciences in the U.S. Department of Energy's Office of Science, a sponsor of the work. "This accomplishment demonstrates the critical importance of innovative basic research in bringing about revolutionary advances in solar energy utilization in a cost-effective manner."

Solar concentrators in use today "track the sun to generate high optical intensities, often by using large mobile mirrors that are expensive to deploy and maintain," Baldo and colleagues write in Science. Further, "solar cells at the focal point of the mirrors must be cooled, and the entire assembly wastes space around the perimeter to avoid shadowing neighboring concentrators."

The MIT solar concentrator involves a mixture of two or more dyes that is essentially painted onto a pane of glass or plastic. The dyes work together to absorb light across a range of wavelengths, which is then re-emitted at a different wavelength and transported across the pane to waiting solar cells at the edges.

In the 1970s, similar solar concentrators were developed by impregnating dyes in plastic. But the idea was abandoned because, among other things, not enough of the collected light could reach the edges of the concentrator. Much of it was lost en route.

The MIT engineers, experts in optical techniques developed for lasers and organic light-emitting diodes, realized that perhaps those same advances could be applied to solar concentrators. The result? A mixture of dyes in specific ratios, applied only to the surface of the glass, that allows some level of control over light absorption and emission. "We made it so the light can travel a much longer distance," Mapel says. "We were able to substantially reduce light transport losses, resulting in a tenfold increase in the amount of power converted by the solar cells."

This work was also supported by the National Science Foundation. Baldo is also affiliated with MIT's Research Laboratory of Electronics, Microsystems Technology Laboratories, and Institute for Soldier Nanotechnologies.

Mapel, Currie and Goffri are starting a company, Covalent Solar, to develop and commercialize the new technology. Earlier this year Covalent Solar won two prizes in the MIT $100K Entrepreneurship Competition. The company placed first in the Energy category ($20,000) and won the Audience Judging Award ($10,000), voted on by all who attended the awards.
Adapted from materials provided by Massachusetts Institute of Technology.

Solar Cooling Becomes A New Air-conditioning System

ScienceDaily (July 20, 2008) — Scientists from the Universidad Carlos III of Madrid (UC3M) and the Consejo Superior de Investigaciones Científicas (CSIC) have developed an environmentally friendly cooling technology that does not harm the ozone layer. It runs on solar energy and therefore reduces greenhouse gas emissions.

A research team has designed and built an absorption chiller capable of using solar and residual heat as an energy source to drive the cooling system. The technology used in this machine, which looks like an ordinary air-conditioning system, minimises its environmental impact by combining the use of a lithium bromide solution, which does not damage the ozone layer or increase the greenhouse effect, with a reduction in the use of water by the machine.

The team, managed by Professor Marcelo Izquierdo from the Department of Thermal Engineering and Fluid Mechanics of the UC3M, who is also a researcher at the Instituto de Ciencias de la Construcción Eduardo Torroja (IETCC) of the CSIC, is building a solar cooling system that, unlike existing machines on the market, uses an improved absorption mechanism capable of producing cold water at temperatures from 7°C to 18°C when the exterior temperature ranges from 33°C to 43°C.

Residential use

Professor Marcelo Izquierdo states that the conclusions of a study with a commercial air-cooled absorption machine show that, given an outside temperature between 28°C and 34°C, the machine can produce cold water at 12°C to 16°C with a source temperature at the generator of 80°C to 95°C. Under these conditions, the cold water produced can be used for climate control in houses by combining it with a water-to-air heat exchanger (fan coil).
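
To see why roughly 90°C heat is enough to chill water when it is about 34°C outside, here is a hedged thermodynamic sketch (not from the UC3M study): the ideal limit for any heat-driven chiller is a Carnot engine running between the generator and ambient temperatures, driving a Carnot refrigerator between the chilled water and ambient.

def ideal_absorption_cop(t_generator_c, t_ambient_c, t_cold_c):
    # Carnot engine (generator -> ambient) driving a Carnot refrigerator (cold -> ambient).
    tg, ta, tc = (t + 273.15 for t in (t_generator_c, t_ambient_c, t_cold_c))
    return (1 - ta / tg) * tc / (ta - tc)

# Representative conditions from the ranges quoted above: 90 C generator heat, 34 C outside, 14 C chilled water.
print(f"ideal COP ~ {ideal_absorption_cop(90, 34, 14):.1f}")

Real single-effect lithium bromide machines typically manage a coefficient of performance of roughly 0.6 to 0.8, far below this ideal bound, which is why the quality of the solar heat source matters.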

“There are few absorption machines at a commercial level that are adapted for residential use,” says Raquel Lizarte, a researcher at the Department of Thermal Engineering and Fluid Mechanics of the UC3M. Since it is very hard to go without climate control, it is important to find a cooling technology that has minimal environmental impact. “The machine that we're studying produces enough cold water to cool a room of 40 m² of floor area and a volume of 120 m³,” she states.

In 2007, 191 countries were party to the Montreal Protocol, an agreement to phase out ozone-depleting substances such as the HCFC refrigerants used in the air-conditioning industry and to cap their consumption so that by 2010 it falls to just 25% of the level allowed in 1996. In addition, by 2020 all HCFC refrigerants used in developed countries will have to be replaced with substitutes. This makes research into this kind of technology extremely important for the near future.

The study has been published in the current edition of the journal Applied Thermal Engineering under the title ‘Air conditioning using an air-cooled single effect lithium bromide absorption chiller: Results of a trial conducted in Madrid in August 2005’. In this investigation, scientists from the Universidad Carlos III of Madrid and the Universidad Nacional de Educación a Distancia collaborated under the coordination of the Instituto de Ciencias de la Construcción Eduardo Torroja-CSIC.
Adapted from materials provided by Universidad Carlos III de Madrid, via AlphaGalileo.

Friday, July 4, 2008

A Healthier July Fourth: Eco-friendly Fireworks And Flares Poised To Light Up The Sky

ScienceDaily (July 3, 2008) — From the rockets' red glare to bombs bursting in air, researchers are developing more environmentally friendly fireworks and flares to light up the night sky while minimizing potential health risks, according to an article scheduled for the June 30 issue of Chemical & Engineering News. Some eco-friendly fireworks may soon appear at a Fourth of July display or rock concert near you.

In the C&EN cover story, Associate Editor Bethany Halford points out that fireworks, flares and other so-called pyrotechnics commonly include potassium perchlorate to speed up the fuel-burning process. But some studies have linked perchlorate, which can accumulate in the soil, air and water, to thyroid damage. Pyrotechnics also contain color-producing heavy metals, such as barium and copper, which have also been linked to toxic effects.

Researchers recently developed new pyrotechnic formulas that replace perchlorate with nitrogen-rich materials or nitrocellulose that burn cleaner and produce less smoke. At the same time, these nitrogen-rich formulas also use fewer color-producing chemicals, dramatically cutting down on the amount of heavy metals used and lowering their potentially toxic effects. Some of these fireworks are already being used at circuses, rock concerts, and other events.

The big challenge in developing these "eco-friendly" pyrotechnics is making them as cost-effective as conventional fireworks while maintaining their dazzle and glow, the article states.

Journal reference:

1. Bethany Halford. Pyrotechnics For The Planet: Chemists seek environmentally friendlier compounds and formulations for fireworks and flares. Chemical & Engineering News, June 30 2008 [link]

Adapted from materials provided by American Chemical Society, via EurekAlert!, a service of AAAS.

Getting Wrapped Up In Solar Textiles

ScienceDaily (June 21, 2008) — Sheila Kennedy, an expert in the integration of solar cell technology in architecture who is now at MIT, creates designs for flexible photovoltaic materials that may change the way buildings receive and distribute energy.

These new materials, known as solar textiles, work like the now-familiar photovoltaic cells in solar panels. Made of semiconductor materials, they absorb sunlight and convert it into electricity.

Kennedy uses 3-D modeling software to design with solar textiles, generating membrane-like surfaces that can become energy-efficient cladding for roofs or walls. Solar textiles may also be draped like curtains.

"Surfaces that define space can also be producers of energy," says Kennedy, a visiting lecturer in architecture. "The boundaries between traditional walls and utilities are shifting."

Principal architect in the Boston firm Kennedy & Violich Architecture, Ltd., and design director of its materials research group, KVA Matx, Kennedy came to MIT this year. She was inspired, she says, by President Susan Hockfield's plan to make MIT the "energy university" and by MIT's interdisciplinary energy curriculum that integrates research and practice.

This spring, Kennedy taught a new MIT architecture course, Soft Space: Sustainable Strategies for Textile Construction. She challenged the students to design architectural proposals for a new fast train station and public market in Porto, Portugal.

For Mary Hale, graduate student in architecture, Kennedy's Soft Space course was an inspiration to pursue photovoltaic technology in her master's thesis.

"I have always been interested in photovoltaics, but before this studio, I am not sure that I would have felt empowered to integrate them into a personal, self-propelled, project," she says.

Kennedy, for her part, will pursue her research in pushing the envelope of energy-efficiency and architecture. A recent project, "Soft House," exhibited at the Vitra Design Museum in Essen, Germany, illustrates what Kennedy means when she says the boundaries between walls and utilities are changing.

For Soft House, Kennedy transformed household curtains into mobile, flexible energy-harvesting surfaces with integrated solid-state lighting. Soft House curtains move to follow the sun and can generate up to 16,000 watt-hours of electricity--more than half the daily power needs of an average American household.
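
A quick sanity check on that claim (the ~30 kWh/day household figure is a commonly cited U.S. average, an assumption rather than a number stated in the article):

curtain_output_kwh = 16.0      # Soft House figure quoted above
household_daily_kwh = 30.0     # assumed average U.S. household consumption
print(f"curtains cover ~{100 * curtain_output_kwh / household_daily_kwh:.0f}% of daily use")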

Although full-scale Soft House prototypes were successfully developed, the project points to a challenge energy innovators and other inventors face, Kennedy says. "Emerging technologies tend to under-perform compared with dominant mainstream technologies."

For example, organic photovoltaics (OPV), an emergent solar nano-technology used by the Soft House design team, are currently less efficient than glass-based solar technologies, Kennedy says.

But that lower efficiency needn't be an insurmountable roadblock to the marketplace, Kennedy says, because Soft House provides an actual application of the unique material advantages of solar nano-technologies without having to compete with the centralized grid.

Which brings her back to the hands-on, prototype-building approach Kennedy hopes to draw from in her teaching and work at MIT.

"Working prototypes are a very important demonstration tool for showing people that there are whole new ways to think about energy," she says.
Adapted from materials provided by Massachusetts Institute of Technology. Original article written by Sarah H. Wright.

Goodbye To Batteries And Power Cords In Factories

ScienceDaily (June 11, 2008) — A broken cable or a soiled connector? If a machine in a factory goes on strike, it could be for any of a thousand reasons. Self-sufficient sensors that provide their own power supply will soon make these machines more robust.

When a factory machine breaks down, it’s hard to know what to do. Production often comes to a standstill until the error has finally been pinpointed – and that can take hours. The causes are legion; in many cases it is all due to a single interrupted contact. Consequently, many manufacturers have long been hoping for a technology that will work without vulnerable power and data cables. The idea is basically feasible, using small devices that harvest energy from their surroundings and provide their own power supply rather like a solar calculator.

Experts speak of energy self-sufficient sensor-actuator systems. These high-tech components normally consist of a sensor, a processor and a radio module. They measure position, force or temperature and transmit the data instantaneously by radio. In this way, vital machine data reach the control center without using cables at all. Is the machine overheating? Is the drive shaft wearing out?

So far, however, there are hardly any off-the-shelf solutions with their own energy supply. Research scientists from the Fraunhofer Technology Development Group TEG in Stuttgart have now joined forces with industrial partners and universities in the EnAS project, sponsored by the Federal Ministry of Economics and Technology, to build a transportable demonstrator. This is a miniature conveyer system driven by compressed air that transports small components in an endless cycle.

The round workpieces are picked up by a vacuum gripper, transported a short way and set down on a small carrier, which conveys the parts back to the starting point. All steps of the process are monitored by sensors as usual. The special feature of the demonstrator is that the sensing elements have no need of an external power supply.

The machine uses photodiodes, for instance, to check whether the carrier has been correctly loaded – if so, the light from the diodes is obscured by the workpieces. Solar cells supply the energy for this workpiece detector. Another example is the pressure sensors that monitor the work of the vacuum gripper. In this case, the power is supplied by piezoelectric flexural transducers. The piezoelectric elements contain ceramics that generate electricity on being deformed. This deformation happens when the vacuum pump is switched on and off. The electricity thus generated is sufficient to send an OK signal to the central control unit. The sensor thus draws its power from pressurized air that is present anyway.

Within the next two years, the various system components are expected to make their way into everyday industrial use.
Adapted from materials provided by Fraunhofer-Gesellschaft.

Rubber 'Snake' Could Help Wave Power Get A Bite Of The Energy Market

ScienceDaily (July 3, 2008) — A device consisting of a giant rubber tube may hold the key to producing affordable electricity from the energy in sea waves. Invented in the UK, the 'Anaconda' is a totally innovative wave energy concept. Its ultra-simple design means it would be cheap to manufacture and maintain, enabling it to produce clean electricity at lower cost than other types of wave energy converter. Cost has been a key barrier to deployment of such converters to date.

Named after the snake of the same name because of its long thin shape, the Anaconda is closed at both ends and filled completely with water. It is designed to be anchored just below the sea's surface, with one end facing the oncoming waves.

A wave hitting the end squeezes it and causes a 'bulge wave'* to form inside the tube. As the bulge wave runs through the tube, the initial sea wave that caused it runs along the outside of the tube at the same speed, squeezing the tube more and more and causing the bulge wave to get bigger and bigger. The bulge wave then turns a turbine fitted at the far end of the device and the power produced is fed to shore via a cable.

Because it is made of rubber, the Anaconda is much lighter than other wave energy devices (which are primarily made of metal) and dispenses with the need for hydraulic rams, hinges and articulated joints. This reduces capital and maintenance costs and scope for breakdowns.

The Anaconda is, however, still at an early stage of development. The concept has only been proven at very small laboratory-scale, so important questions about its potential performance still need to be answered. Funded by the Engineering and Physical Sciences Research Council (EPSRC), and in collaboration with the Anaconda's inventors and with its developer (Checkmate SeaEnergy), engineers at the University of Southampton are now embarking on a programme of larger-scale laboratory experiments and novel mathematical studies designed to do just that.

Using tubes with diameters of 0.25 and 0.5 metres, the experiments will assess the Anaconda's behaviour in regular, irregular and extreme waves. Parameters measured will include internal pressures, changes in tube shape and the forces that mooring cables would be subjected to. As well as providing insights into the device's hydrodynamic behaviour, the data will form the basis of a mathematical model that can estimate exactly how much power a full-scale Anaconda would produce.

When built, each full-scale Anaconda device would be 200 metres long and 7 metres in diameter, and deployed in water depths of between 40 and 100 metres. Initial assessments indicate that the Anaconda would be rated at a power output of 1MW (roughly the electricity consumption of 2000 houses) and might be able to generate power at a cost of 6p per kWh or less. Although around twice as much as the cost of electricity generated from traditional coal-fired power stations, this compares very favourably with generation costs for other leading wave energy concepts.
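
Unpacking those figures as a rough sketch (the capacity factor below is an assumption of mine, not a number from the project):

# What the article's headline numbers imply.
rated_mw = 1.0
houses = 2000
print(f"implied average demand ~{rated_mw * 1e6 / houses:.0f} W per house")

capacity_factor = 0.3   # assumed; the real value depends on the wave climate
annual_mwh = rated_mw * capacity_factor * 8760
print(f"at a {capacity_factor:.0%} capacity factor: ~{annual_mwh:.0f} MWh/yr, "
      f"~GBP {annual_mwh * 1000 * 0.06:,.0f} at 6p/kWh")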

"The Anaconda could make a valuable contribution to environmental protection by encouraging the use of wave power," says Professor John Chaplin, who is leading the EPSRC-funded project. "A one-third scale model of the Anaconda could be built next year for sea testing and we could see the first full-size device deployed off the UK coast in around five years' time."

The Anaconda was invented by Francis Farley (an experimental physicist) and Rod Rainey (of Atkins Oil and Gas). There may be advantages in making part of the tube inelastic, but this is still under assessment.

Wave-generated electricity is carbon-free and so can help the fight against global warming. Together with tidal energy, it is estimated that wave power could supply up to 20% of the UK's current electricity demand.

The two-year project 'The Hydrodynamics of a Distensible Wave Energy Converter' is receiving EPSRC funding of just over £430,000.

*A bulge wave is a wave of pressure produced when a fluid oscillates forwards and backwards inside a tube.
Adapted from materials provided by Engineering and Physical Sciences Research Council, via EurekAlert!, a service of AAAS.

Friday, May 23, 2008

New World Record For Efficiency For Solar Cells; Inexpensive To Manufacture

ScienceDaily (May 17, 2008) — Physicist Bram Hoex and colleagues at Eindhoven University of Technology, together with the Fraunhofer Institute in Germany, have improved the efficiency of an important type of solar cell from 21.9 to 23.2 percent (a relative improvement of 6 per cent). This new world record is being presented on Wednesday May 14 at a major solar energy conference in San Diego.

The efficiency improvement is achieved by the use of an ultra-thin aluminum oxide layer at the front of the cell, and it brings a breakthrough in the use of solar energy a step closer.

An improvement of more than 1 per cent (in absolute terms) may at first glance appear modest, but it can enable solar cell manufacturers to greatly increase the performance of their products. This is because higher efficiency is a very effective way of reducing the cost price of solar energy. The costs of applying the thin layer of aluminum oxide are expected to be relatively low. This will mean a significant reduction in the cost of producing solar electricity.

Ultra-thin

Hoex was able to achieve the increase in efficiency by depositing an ultra-thin layer (approximately 30 nanometers) of aluminum oxide on the front of a crystalline silicon solar cell. This layer has an unprecedentedly high level of built-in negative charge, through which the -- normally significant -- energy losses at the surface are almost entirely eliminated. Of all sunlight falling on these cells, 23.2 per cent is now converted into electrical energy. This was formerly 21.9 per cent, which means a 6 per cent improvement in relative terms.
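
The absolute and relative figures fit together as follows (a trivial check, shown only to make the "relative improvement" wording explicit):

old_eff, new_eff = 21.9, 23.2   # cell efficiencies in per cent, as quoted above
print(f"absolute gain: {new_eff - old_eff:.1f} percentage points, "
      f"relative gain: {100 * (new_eff - old_eff) / old_eff:.1f}%")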

Dutch company OTB Solar

Hoex gained his PhD last week at the Applied Physics department of the TU/e with this research project. He was supported in the Plasma & Materials Processing (PMP) research group by professor Richard van de Sanden and associate professor Erwin Kessels. This group specializes in plasma deposition of extremely thin layers. The Dutch company OTB Solar has been a licensee of one of these processes since 2001, which it is using in its solar cell production lines. Numerous solar cell manufacturers around the world use equipment supplied by OTB Solar.

The ultra-thin aluminum oxide layer developed in the PMP group may lead to a technology innovation in the solar cell world. A number of major solar cell manufacturers have already shown interest.

Promising

Solar cells have for years looked like a highly promising way to partly solve the energy problem. The sun rises day after day, and solar cells can conveniently be installed on surfaces with no other useful purpose. Solar energy also offers opportunities for use in developing countries, many of which have high levels of sunshine. Within ten to fifteen years the price of electricity generated by solar cells is expected to be comparable to that of 'conventional' electricity from fossil fuels.

This technology breakthrough now brings the industrial application of this type of high-efficiency solar cell closer.

Part of Hoex's PhD research project was paid for by three Dutch ministries: Economic Affairs; Education, Culture and Science; and Housing, Spatial Planning and the Environment.
Adapted from materials provided by Eindhoven University of Technology.

Hybrid Vehicle Competition: Mississippi State Wins DOE And GM Challenge X 2008 Advanced Vehicle Competition

ScienceDaily (May 21, 2008) — Mississippi State University in Starkville, Miss. is the first place winner of Challenge X, in which 17 university teams from across the U.S. and Canada competed to reengineer a General Motors (GM) Chevrolet Equinox Crossover SUV with advanced powertrain configurations. The winner of the competition achieved high fuel economy and low emissions, all while maintaining driver comfort and vehicle performance.

Department of Energy (DOE), GM and Natural Resources Canada also kicked off EcoCAR: The NeXt Challenge, a competition set to begin in the fall of 2008 that will challenge 17 university teams to re-engineer a Saturn VUE.

Over the past four years, 17 Challenge X university teams followed a real-world vehicle development process to produce advanced vehicle powertrain technologies that increased energy efficiency and reduced environmental impact. Those technologies were then integrated into GM vehicles and powered by a variety of alternative fuels including B20 biodiesel, E85 ethanol, reformulated gasoline, and hydrogen. GM, DOE, and the Canadian government congratulated students from 17 participating universities at a finish line ceremony this morning in Washington, D.C.

Students competed in 12 events over the eight day final competition, ranging from on-road emissions and drivability assessments to vehicle performance and consumer acceptability evaluations. The Mississippi State team designed a through-the-road parallel hybrid electric vehicle with all-wheel drive using a turbocharged direct-injection diesel engine fueled by B20 biodiesel. The vehicle demonstrated a 38 percent increase in energy efficiency over the production vehicle, a 1.6 second better quarter-mile acceleration performance, and a 44 percent reduction in well-to-wheel greenhouse gas emissions.

The second place vehicle, engineered by students at the University of Wisconsin – Madison, is a through-the-road parallel hybrid electric vehicle with a 1.9L diesel engine fueled by B20 biodiesel. Ohio State University was awarded third place for its power-split hybrid electric vehicle with a diesel engine fueled by B20 biodiesel.

In 2004, the first year of the program, the Challenge focused on vehicle simulation, modeling and subsystem development, and testing. In the second and third years, students integrated their advanced powertrains and subsystems into the Chevrolet Equinox. In the fourth year, students focused on consumer acceptability and over-the-road reliability and durability of their advanced propulsion systems with real-world evaluation outside of an official testing environment.

DOE’s Argonne National Laboratory (Argonne) provided competition management, team evaluation and technical and logistical support. The Greenhouse gases, Regulated Emissions, and Energy use in Transportation model, developed at Argonne, was used to conduct a well-to-wheel analysis of the greenhouse gas impacts of each technology approach the teams selected.

The 17 teams that participated in Challenge X are: Michigan Technological University—Houghton, Mich.; Mississippi State University—Starkville, Miss.; The Ohio State University— Columbus, Ohio; Pennsylvania State University—University Park, Pa.; Rose-Hulman Institute of Technology—Terre Haute, Ind.; San Diego State University—San Diego, Calif.; Texas Tech University—Lubbock, Texas; University of Akron—Akron, Ohio; University of California, Davis — Davis, Calif.; University of Michigan—Ann Arbor, Mich.; University of Tennessee—Knoxville, Tenn.; University of Texas at Austin—Austin, Texas; University of Tulsa—Tulsa, Okla.; University of Waterloo—Waterloo, Ontario Canada; University of Wisconsin-Madison — Madison, Wis.; Virginia Tech—Blacksburg, Va.; and West Virginia University— Morgantown, W.Va.

Students participating in the fall EcoCAR competition will design and build advanced propulsion solutions similar to the vehicle categories utilized by the California Air Resources Board (CARB) zero emissions vehicle (ZEV) regulations. In addition, they will incorporate lightweight materials into the vehicles, improve aerodynamics and utilize clean alternative fuels such as ethanol, biodiesel and hydrogen. Funding for EcoCar in FY 2009 and beyond is subject to annual appropriations.

The following teams have been selected to compete in the EcoCAR competition: Embry-Riddle Aeronautical University — Daytona Beach, Fla.; Georgia Tech — Atlanta, Ga.; Howard University — Washington, D.C.; Michigan Technological University — Houghton, Mich.; Mississippi State University — Starkville, Miss.; Missouri University of Science and Technology — Rolla, Mo.; North Carolina State University — Raleigh, N.C.; Ohio State University — Columbus, Ohio; Ontario Institute of Technology — Oshawa, Ontario, Canada; Pennsylvania State University — University Park, Pa.; Rose-Hulman Institute of Technology — Terre Haute, Ind.; Texas Tech University — Lubbock, Texas; University of Victoria — Victoria, British Columbia, Canada; University of Waterloo — Waterloo, Ontario, Canada; University of Wisconsin-Madison — Madison, Wis.; Virginia Tech — Blacksburg, Va.; and West Virginia University — Morgantown, W.Va.
Adapted from materials provided by US Department of Energy.

Monday, April 28, 2008

Beating The Codebreakers With Quantum Cryptography

ScienceDaily (Apr. 28, 2008) — Quantum cryptography may be essentially solved, but getting the funky physics to work on disciplined computer networks is a whole new headache.

Cryptography is an arms race, but the finish line may be fast approaching. Up to now, each time the codemakers have made a better mousetrap, the codebreakers have bred a better mouse. But quantum cryptography could theoretically outpace the codebreakers and win the race. Forever.

Already, widely deployed classical encryption schemes such as RSA could in principle be cracked given enough raw, brute-force computing power of the kind available to organisations like the US National Security Agency. And the advent of quantum computing would make the job far simpler. The gold standard for secret communication would then be truly dead.

Quantum cryptography solves the problem, and it will overcome the remaining stumbling block, the distribution of the code key to the right person, by using quantum key distribution (QKD).

Modern cryptography relies on the use of digital ‘keys’ to encrypt data before sending it over a network, and to decrypt it at the other end. The receiver must have a version of the key code used by the sender so as to be able to decrypt and access the data.

QKD offers a theoretically uncrackable code, one that is easily distributed and works in a transparent manner. Even better, the nature of quantum mechanics means that if any eavesdropper – called Eve in the argot of cryptographers – tries to snoop on a message the sender and receiver will both know.

That ability is due to the use of the Heisenberg Uncertainty Principle, which sits at the heart of quantum mechanics. The principle rests on the theory that the act of measuring a quantum state changes that state. It is like children with a guilty secret. As soon as you look at them their faces morph plausibly into ‘Who, me?’

The practical upshot for cryptography is that the sender and receiver can verify the security of the transmission. They will know if the state of the quanta has changed, whether the key has been read en route. If so, they can abandon the key they are using and generate a new one.
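The article does not spell out how that verification works, but the best-known QKD scheme, BB84, captures the idea: the sender encodes random bits in randomly chosen measurement bases, the receiver measures in bases of his own choosing, and the two keep only the bits where their bases matched. An eavesdropper who guesses the wrong basis disturbs the photon and shows up as errors in the sifted key. The following Python toy simulation of that classical logic (it models no real quantum optics) shows the effect; with an interceptor present, roughly a quarter of the sifted bits disagree, so the key is discarded.

```python
import random

def bb84_round(n_bits=2000, eavesdrop=False, seed=1):
    """Toy BB84 run: returns (sifted key length, observed error rate)."""
    rng = random.Random(seed)
    sifted = errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)                  # Alice's random bit
        alice_basis = rng.randint(0, 1)          # 0 = rectilinear, 1 = diagonal
        photon_basis, photon_value = alice_basis, bit

        if eavesdrop:                            # Eve intercepts and re-sends
            eve_basis = rng.randint(0, 1)
            if eve_basis != photon_basis:        # wrong basis gives a random result
                photon_value = rng.randint(0, 1)
            photon_basis = eve_basis

        bob_basis = rng.randint(0, 1)
        bob_value = photon_value if bob_basis == photon_basis else rng.randint(0, 1)

        if bob_basis == alice_basis:             # sifting: keep matching-basis bits
            sifted += 1
            errors += bob_value != bit
    return sifted, errors / sifted

print(bb84_round(eavesdrop=False))   # error rate ~0: key can be used
print(bb84_round(eavesdrop=True))    # error rate ~0.25: abandon the key
```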

QKD made its real-world debut in the canton of Geneva for use in the electronic voting system used in the Swiss general election last year. The system guaranteed that the poll was secure. But, more importantly perhaps, it also ensured that no vote was lost in transmission, because the uncertainty principle established that there was no change to the transmitted data.

The end of the beginning

The canton election was a demonstration of the work done by researchers for the SECOQC project, an EU-funded effort to develop an international network for secure communication based on QKD.

The test of the technology demonstrated that QKD worked for point-to-point communications between two parties. But the demonstration was just the first step towards SECOQC’s overall goal.

“We want to establish network-wide quantum encryption, because it will mean it works over much longer distances,” explains Christian Monyk, co-ordinator of the SECOQC project and head of the quantum-technologies unit at the Austrian Research Centres. “Network quantum encryption and QKD mean that many parties can communicate securely, not just two. Finally, it also means quantum encryption could be deployed on a very large scale, for the insurance and banking sectors, for example.”

Moving the system from point-to-point communications to a network is an order of magnitude more difficult.

“The quantum science for cryptography and key distribution is essentially solved, and it is a great result,” Monyk says. “But getting that system to work across a network is much more difficult. You have to deal with different protocols and network architectures, develop new nodes and new interfaces with the quantum devices to get it to a large-scale, long distance, real-world application.”

Working at a distance

Getting the system to work over long distances is also a challenge, because QKD requires high-fidelity data transmission over high-quality physical networks, such as non-zero dispersion-shifted optical fibre.

“It was not one big problem, it was many, many small computing science and engineering problems,” says Monyk. “We had to work with a large number of technologies. And we have to certify it to experts.”

But SECOQC’s researchers believe they have solved the network issue. They are currently putting the final touches to a demonstration of the technology to be held this October in Vienna, Austria. Industry has shown great interest in the technology. Still, it is not quite ready for prime time.

“From a technical point of view, the technology will be ready in one or two years,” says Monyk.

And that means that the race will be won, finally, by the codemakers.

Adapted from materials provided by ICT Results.

Aeronautics Engineers Design Silent, Eco-friendly Plane

ScienceDaily (Nov. 7, 2006) — MIT and Cambridge University researchers unveiled the conceptual design for a silent, environmentally friendly passenger plane at a press conference Monday, Nov. 6, at the Royal Aeronautical Society in London.

"Public concern about noise is a major constraint on expansion of aircraft operations. The 'silent aircraft' can help address this concern and thus aid in meeting the increasing passenger demand for air transport," said Edward M. Greitzer, the H.N. Slater Professor of Aeronautics and Astronautics at MIT.

Greitzer and Professor Ann P. Dowling of Cambridge University are the lead principal investigators on the Silent Aircraft Initiative. This collaboration of 40 researchers from MIT and Cambridge, plus many others from more than 30 companies, was launched three years ago "to develop a conceptual design for an aircraft whose noise was almost imperceptible outside the perimeter of an airfield in an urban environment."

While originally conceived to make a huge reduction in airplane noise, the team's ultimate design also has the potential to be more fuel-efficient. In a typical flight, the proposed plane, which is designed to carry 215 passengers, is predicted to achieve 124 passenger-miles per gallon, almost 25 percent more than current aircraft, according to Greitzer. (For a down-to-earth comparison, the Toyota Prius hybrid car carrying two passengers achieves 120 passenger-miles per gallon.)
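Passenger-miles per gallon is just fuel economy multiplied by the number of occupants, which is what makes the car-versus-aircraft comparison possible. A quick check of the figures quoted above (the 60 mpg Prius rating is simply the article's 120 passenger-mpg divided by its two occupants):

```python
def passenger_mpg(vehicle_mpg, occupants):
    """Passenger-miles per gallon = vehicle fuel economy x number of occupants."""
    return vehicle_mpg * occupants

print(passenger_mpg(60, 2))    # Prius with two aboard: 120 passenger-mpg
print(124 / 215)               # concept aircraft: ~0.58 aircraft-miles per gallon
```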

The project aims to have aircraft based on the design flying by 2030.

The conceptual design addresses both the engines and the structure, or airframe, of a plane. Half of the noise from a landing plane comes from the airframe.

Other key features of the design include:

* An overall shape that integrates body and wings into a "single" flying wing. As a result, both the body and wings provide lift, allowing a slower approach and takeoff, which would reduce noise. The shape also improves fuel efficiency.
* The elimination of the flaps, or hinged rear sections on each wing. These are a major source of airframe noise when a plane is taking off and landing.
* Engines embedded in the aircraft with air intakes on top of the plane rather than underneath each wing. This screens much of the noise from the ground.
* A variable-size jet nozzle that allows slower jet propulsion during takeoff and landing but efficient cruising at higher speeds.

What will it take to turn the design into a plane by 2030?

"One major technical challenge is the integration of the propulsion system with the aircraft," Greitzer said. "The propulsion system, with engines embedded in the fuselage, is different than for traditional civil aircraft, in which the engines are located in nacelles below the wing. This presents a different set of issues to the designer."

Zoltan S. Spakovszky, C. S. Draper Associate Professor in MIT's Department of Aeronautics and Astronautics, also cited the integration of the propulsion system as a key challenge. Spakovszky and James I. Hileman, a research engineer in the department, are the chief engineers, or day-to-day managers, for the project.

He explained that in today's airplanes, with engines hanging below the wings, air flows unimpeded into the engine. In the new design, however, air traveling into the air intakes on top of the plane will behave differently. This is because the air particles flowing close to the plane's body experience friction. As a result, "the particles flow at a lower velocity near the surface of the plane than in the free (air) stream," Spakovszky said. The new engine must be designed to operate in these strongly nonuniform airflows.
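The slow-moving air Spakovszky describes is the boundary layer that builds up along the fuselage. As a rough illustration only (a textbook empirical approximation, not the Silent Aircraft team's analysis), the one-seventh-power law for a turbulent boundary layer shows how quickly velocity falls off near the surface:

```python
# One-seventh-power-law velocity profile for a turbulent boundary layer.
# Illustrative textbook approximation; delta is an assumed placeholder value.

def velocity_ratio(y, delta):
    """u/U_freestream at height y above the surface (boundary-layer thickness delta)."""
    return (y / delta) ** (1.0 / 7.0)

delta = 0.5  # assumed boundary-layer thickness in metres
for y in (0.01, 0.05, 0.1, 0.25, 0.5):
    print(f"y = {y:4.2f} m  ->  u/U = {velocity_ratio(y, delta):.2f}")

# Air just above the skin moves at a fraction of free-stream speed, so an engine
# ingesting this layer sees a strongly nonuniform inflow.
```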

A second important technical challenge involves the craft's unconventional airframe, Spakovszky said. "The structural integrity of a pressure vessel allowing this single wing-like shape needs to be ensured and poses a major challenge."

Greitzer emphasized that the collaboration between MIT, Cambridge University and their industrial partners was key to the end result.

"Collaboration and teaming occurred in essentially all aspects of the project. The Silent Aircraft Initiative has been very much an enterprise in which the whole is greater than the sum of the separate parts," he said.

Spakovszky referred to the overall team effort as the best part of the project. "Technical expectations were taken for granted, but working well across the Atlantic was not a given," he said. "It was a very, very neat experience."

The Silent Aircraft Initiative is funded by the Cambridge-MIT Institute, which has supported a wide range of research and educational collaborations between the two universities. The Knowledge Integration Community (or KIC) that created the conceptual design included academic staff and students from both institutions and participants from a wide range of industrial collaborators, including Boeing and Rolls-Royce.

Adapted from materials provided by Massachusetts Institute Of Technology.

Next-generation Sensors May Help Avert Airline Disasters

ScienceDaily (Mar. 28, 2008) — Everyone on board a Scandinavian Airlines System (SAS) plane died when it collided with a light aircraft and exploded in a baggage hangar in Milan in 2001. The smaller plane had taxied wrongly and ended up on the runway where the SAS aircraft was taking off.

The following year, two planes collided in mid-air over Überlingen in the south of Germany, on the edge of Lake Constance. One was a Russian passenger flight from Moscow to Barcelona, while the other was a cargo plane heading for Belgium from the Persian Gulf. Seventy-one people died.

Safety needs to be improved

As air transport grows, take-offs become more tightly spaced and more and more planes circle airports waiting for permission to land, so the potential for disaster increases.

Last year alone, international air traffic grew by 5.9 percent. Parallel to this increase, the minimum permitted distance between aircraft in European airspace has decreased. The minimum vertical separation has been halved from 600 metres to 300 metres for planes flying above 29,000 feet. The idea has been to increase airspace capacity by 20 percent. Routes are shortened, and airlines expect to save the huge sum of NOK 30 billion a year in fuel costs alone.

But what about safety up there? In the wake of a number of air disasters in 2001 and 2002, the EU took up the problem and resolved that certain aspects of the industry should be studied in detail and evaluated in terms of safety. Several projects were launched under its 6th Framework Programme. One of these was the HASTEC project, whose aim was to develop the next generation of pressure sensors for better aircraft altitude measurement.

Next-generation sensors

“There is a need for aircraft sensors that are more accurate than current models, which are large and reliable, but expensive systems,” says Sigurd Moe of SINTEF ICT. “Among other things, they need to be more stable throughout their life-cycle. The problem with current sensors is that they need to be checked and calibrated regularly, and this is an expensive process since the aircraft needs to be grounded.”

Challenges

The problem is that mechanical stresses may develop where the chip is mounted in the sensor package. The scientists therefore had to produce a silicon-based sensor structure in which such stresses would not propagate into the chip itself. The solution was a spiral silicon element whose pressure-sensitive part is unaffected even if the mounting stretches and drags the element.

SINTEF produces silicon wafers with hundreds of chips on each; several wafers are laid on top of one another and glued together before being sawn into individual chips. Selected chips are then integrated into a sensor package developed by Memscap. The company produces, assembles and tests the sensor package itself.

The first prototype has now been delivered to Memscap by the scientists for further testing and mounting. During the first six months of 2008 these new-technology sensors will be flight tested.

Adapted from materials provided by SINTEF, via AlphaGalileo.

Next Step In Robot Development Is Child's Play

ScienceDaily (Apr. 26, 2008) — Teaching robots to understand enough about the real world to allow them to act independently has proved to be much more difficult than first thought.

The team behind the iCub robot believes it, like children, will learn best from its own experiences.

The technologies developed on the iCub platform – such as grasping, locomotion, interaction, and even language-action association – are of great relevance to further advances in the field of industrial service robotics.

The EU-funded RobotCub project, which designed the iCub, will send one each to six European research labs. Each of the labs proposed winning projects to help train the robots to learn about their surroundings – just as a child would.

The six projects include one from Imperial College London that will explore how ‘mirror neurons’ found in the human brain can be translated into a digital application. ‘Mirror neurons’, discovered in the early 1990s, trigger memories of previous experiences when humans are trying to understand the physical actions of others. A separate team at UPF Barcelona will also work on iCub’s ‘cognitive architecture’.

At the same time, a team headquartered at UPMC in Paris will explore the dynamics needed to achieve full body control for iCub. Meanwhile, researchers at TUM Munich will work on the development of iCub’s manipulation skills. A project team from the University of Lyons will explore internal simulation techniques – something our brains do when planning actions or trying to understand the actions of others.

Over in Turkey, a team based at METU in Ankara will focus almost exclusively on language acquisition and the iCub’s ability to link objects with verbal utterances.

“The six winners had to show they could really use and maintain the robot, and secondly the project had to exploit the capabilities of the robot,” says Giorgio Metta. “Looking at the proposals from the winners, it was clear that if we gave them a robot we would get something in return.”

The iCub robots are about the size of three-year-old children, with highly dexterous hands and fully articulated heads and eyes. They have hearing and touch capabilities and are designed to be able to crawl on all fours and to sit up.

Humans develop their abilities to understand and interact with the world around them through their experiences. As small children, we learn by doing and we understand the actions of others by comparing their actions to our previous experience.

The developers of iCub want to develop their robots’ cognitive capabilities by mimicking that process. Researchers from the EU-funded RobotCub project designed the iCub’s hardware and software using a modular system. The design increases the efficiency of the robot and also allows researchers to update individual components more easily. The modular design also allows large numbers of researchers to work independently on separate aspects of the robot.

iCub’s software code, along with its technical drawings, is freely available to anyone who wishes to download and use it.

“We really like the idea of being open as it is a way to build a community of many people working towards a common objective,” says Giorgio Metta, one of the developers of iCub. “We need a critical mass working on these types of problems. If you get 50 researchers, they can really layer knowledge and build a more complex system. Joining forces really makes economic sense for the European Commission that is funding these projects and it makes scientific sense.”

Built-in learning skills

While the iCub’s hardware and mechanical parts are not expected to change much over the next 18 months, researchers expect to develop the software further. To enable iCub to learn by doing, the RobotCub research team is trying to pre-fit it with certain innate skills.

These include the ability to track objects visually or by sound, with some element of prediction of where the tracked object will move next. iCub should also be able to navigate based on landmarks and a sense of its own position.

But the first and key skill iCub needs for learning by doing is the ability to reach towards a fixed point. By October this year, the developers plan to have the robot able to analyse the information it receives via its vision and touch ‘senses’. The robot will then be able to use this information to perform at least some crude grasping behaviour – reaching outwards and closing its fingers around an object.
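The project materials describe that behaviour only at a high level. Purely as an illustrative sketch (the robot, vision and hand interfaces below are hypothetical, not the real iCub software), a crude sense-then-grasp routine might be structured like this:

```python
# Hypothetical reach-and-grasp sketch; none of these method names correspond
# to the actual iCub software interfaces.

def crude_grasp(robot, max_steps=200, reach_tolerance=0.02):
    """Reach toward a visually located object, then close the fingers
    until the touch sensors report contact."""
    target = robot.vision.locate_object()            # 3-D position from the cameras
    for _ in range(max_steps):
        hand = robot.arm.hand_position()
        error = [t - h for t, h in zip(target, hand)]
        if max(abs(e) for e in error) < reach_tolerance:
            break                                    # close enough to attempt a grasp
        robot.arm.step_toward(target)                # small incremental arm movement
    while not robot.hand.touch_detected():           # tactile 'sense' gives feedback
        robot.hand.close_fingers(increment=0.05)
    return robot.hand.is_holding_object()
```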

“Grasping is the first step in developing cognition as it is required to learn how to use tools and to understand that if you interact with an object it has consequences,” says Giorgio Metta. “From there the robot can develop more complex behaviours as it learns that particular objects are best manipulated in certain ways.”

Once the assembly of the six robots for the research projects is completed, the developers plan to build more iCubs, bringing the number in use around Europe to between 15 and 20.

Adapted from materials provided by ICT Results.

Tuesday, April 22, 2008

Data Transfer In The Brain: Newfound Mechanism Enables Reliable Transmission Of Neuronal Information

ScienceDaily (Apr. 22, 2008) — The receptors of neurotransmitters move very rapidly. This mobility plays an essential, and hitherto unsuspected, role in the passage of nerve impulses from one neuron to another, thus controlling the reliability of data transfer. This has recently been demonstrated by scientists in the "Physiologie cellulaire de la synapse" Laboratory (CNRS/Université Bordeaux 2) coordinated by Daniel Choquet, senior researcher at CNRS.

By enabling a clearer understanding of the mechanisms involved in neuronal transmission, this work opens the way to new therapeutic targets for the neurological and psychiatric disorders that stem from poor neuronal communication (Parkinson's disease, Alzheimer's disease, OCD, etc.). The fruit of a collaboration with physicists at the Centre de physique moléculaire optique et hertzienne (CPMOH, CNRS/Université Bordeaux 1) and with German and American research teams(1), these findings were published on April 11, 2008 in Science.

The processing of information by the brain is mainly based on the coding of data by variations in the frequency of neuronal activity. "Good" communication thus implies the reliable transmission of this "code" by the connections between neurons, or synapses. Under normal circumstances, this junction comprises a pre-synaptic element from which the information arises, and a post-synaptic element which receives it.

It is at this point that neuronal communication occurs. Once the pre-synaptic neuron has been stimulated by an electrical signal with a precise frequency, it releases chemical messengers into the synapse: neurotransmitters. And the response is rapid! These neurotransmitters bind to specific receptors, thus provoking a change to the electrical activity of the post-synaptic neuron and hence the birth of a new signal.

The mobility of receptors controls the reliability of neuronal transmission. Working at the interface between physics and biology, the teams in Bordeaux led by Choquet, CNRS senior researcher in the "Physiologie cellulaire de la synapse"(2) laboratory, in close collaboration with the group led by Brahim Lounis at the Centre de physique moléculaire optique et hertzienne(3), have been studying synaptic transmission and, more particularly, the role of certain receptors of glutamate, a neurotransmitter present in 80% of neurons in the brain.

Focusing on the dynamics of these receptors, the researchers have revealed that a minor modification to their mobility has a major impact on high-frequency synaptic transmission, i.e. at frequencies between 50 and 100 Hz (those involved in memorization, learning and sensory stimulation). More specifically, they have established that this mobility enables desensitized receptors in the synapse to be replaced within a few milliseconds by "naïve" receptors. This phenomenon reduces synaptic depression(4) and allows the neurons to transmit information at a higher frequency. By contrast, if the receptors are immobilized, this depression is markedly enhanced, preventing transmission of the nerve impulse in the synapses above around ten Hertz.
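A deliberately simplified toy model (not the authors' quantitative analysis, and with arbitrary parameter values) captures the logic: each presynaptic spike desensitizes a fraction of the available receptors, desensitized receptors recover slowly, and receptor mobility is modelled as a fast exchange of desensitized receptors for naive ones between spikes. With the exchange switched on, the simulated response holds up at 50-100 Hz; with receptors 'immobilized', it depresses strongly.

```python
import math

# Toy model of frequency-dependent synaptic depression. Parameter values are
# arbitrary and for illustration only; this is not the published model.

def last_response(freq_hz, n_spikes=20, desensitize=0.6,
                  recovery_tau=0.05, exchange_fraction=0.0):
    """Relative amplitude of the final response in a spike train.

    desensitize       -- fraction of available receptors desensitized per spike
    recovery_tau      -- recovery time constant of desensitized receptors (s)
    exchange_fraction -- fraction of desensitized receptors swapped for naive
                         ones between spikes (stands in for receptor mobility)
    """
    dt = 1.0 / freq_hz
    available = 1.0
    for _ in range(n_spikes):
        response = available                          # response tracks available receptors
        available *= (1.0 - desensitize)              # the spike desensitizes receptors
        desensitized = 1.0 - available
        recovered = desensitized * (1.0 - math.exp(-dt / recovery_tau))
        exchanged = desensitized * exchange_fraction  # fast surface exchange
        available = min(1.0, available + recovered + exchanged)
    return response

for freq in (10, 50, 100):
    immobile = last_response(freq, exchange_fraction=0.0)
    mobile = last_response(freq, exchange_fraction=0.8)
    print(f"{freq:3d} Hz   immobilized: {immobile:.2f}   mobile: {mobile:.2f}")
```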

More profoundly, the scientists have demonstrated that prolonged series of high frequency stimulations, which induce an increase in calcium levels in the synapses, cause the immobilization of receptors. They have also proved that these series of stimulations diminish the ability of neurons to transmit an activity at high frequency. Receptor mobility is thus correlated with the frequency of synaptic transmission and consequently, the reliability of this transmission.

A real advance for research

When the brain is functioning under normal conditions, we can suppose that the immobilization of receptors following a series of high-frequency stimulations constitutes a safety mechanism: it prevents subsequent series from overexciting the post-synaptic neuron. A reliable transmission of information between two neurons is obviously crucial to satisfactory functioning of the brain.

These results, of prime importance, suggest that some dysfunctions of neuronal transmission are due to a defect in receptor stabilization. However, high frequency electrical stimulation of certain regions of the brain is used to treat Parkinson's disease or obsessive-compulsive disorders (OCD). Its mechanism of action, still poorly understood, may therefore involve receptor mobility. This work has thus made it possible to identify new therapeutic targets and could augur well for potential drugs to treat neurological and psychiatric disorders which often result from poor communication between neurons.

Notes

1. Teams at the Leibniz Institute, Magdeburg and Johns Hopkins University School of Medicine, Baltimore, USA.
2. CNRS/Université Bordeaux 2.
3. CPMOH, CNRS/Université Bordeaux 1.
4. When a pre-synaptic neuron is stimulated at very frequent intervals (high frequencies of around 50-100 Hertz), the post-synaptic response generally diminishes over time: this is called synaptic depression. The higher the stimulation frequency, the more this depression increases.

Journal reference: Surface Mobility of Post-synaptic AMPARs Tunes Synaptic Transmission. Martin Heine, Laurent Groc, Renato Frischknecht, Jean-Claude Béïque, Brahim Lounis, Gavin Rumbaugh, Richard L. Huganir, Laurent Cognet and Daniel Choquet. Science. 11 April 2008.

Adapted from materials provided by CNRS.

Breastfeeding While Taking Seizure Medicine Does Not Appear To Harm Children, Study Suggests

ScienceDaily (Apr. 22, 2008) — A first-of-its-kind study finds that breastfeeding while taking certain seizure medications does not appear to harm a child's cognitive development.

"Our early findings show breastfeeding during anti-epilepsy drug treatment doesn't appear to have a negative impact on a child's cognitive abilities," said study author Kimford Meador, MD, with the University of Florida at Gainesville, and Fellow of the American Academy of Neurology. "However, more research is needed to confirm our findings and women should use caution due to the limitations of our study."

Researchers tested the cognitive development of 187 two-year-old children whose mothers were taking the epilepsy drugs lamotrigine, carbamazepine, phenytoin, or valproate. Forty-one percent of the children were breastfed.

The study found breastfed children had higher cognitive test scores than children who were not breastfed, and this trend was consistent for each anti-epilepsy drug. The children who were breastfed received an average test score of 98.1, compared to a score of 89.5 for the children not breastfed. However, the difference was not significant after adjusting for the mother's IQ. Thus, the higher scores in breastfed children appear to be due to the fact that their mothers had higher IQs.
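The adjustment described here is the standard way of handling a confounder: regress the child's score on both breastfeeding status and maternal IQ and see whether the breastfeeding coefficient survives. A minimal sketch of that kind of analysis, using synthetic data since the study data are not public, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data mimicking the reported pattern: higher-IQ mothers are more
# likely to breastfeed, and the child's score depends on maternal IQ rather
# than on breastfeeding itself. These numbers are illustrative, not study data.
n = 187
mother_iq = rng.normal(100, 15, n)
breastfed = (mother_iq + rng.normal(0, 10, n) > 100).astype(float)
child_score = 40 + 0.55 * mother_iq + rng.normal(0, 12, n)

# Unadjusted comparison: breastfed children score higher on average.
unadjusted = child_score[breastfed == 1].mean() - child_score[breastfed == 0].mean()
print(f"Unadjusted difference: {unadjusted:.1f} points")

# Adjusted comparison: regress score on breastfeeding AND maternal IQ.
X = np.column_stack([np.ones(n), breastfed, mother_iq])
coef, *_ = np.linalg.lstsq(X, child_score, rcond=None)
print(f"Breastfeeding coefficient after adjusting for IQ: {coef[1]:.1f} points")
```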

Meador says animal studies have shown that some anti-epilepsy drugs, but not all, can cause cells in immature brains to die, though this effect can be blocked by the protective action of beta-estradiol, a maternal sex hormone. "Since the potential protective effects of beta estradiol in utero are absent after birth, concern was raised that breastfeeding by women taking anti-epilepsy drugs may increase the risk of anti-epilepsy drug-induced cell death and result in reduced cognitive outcomes in children."

Meador says additional research on the effects of breastfeeding should be extended to other anti-epilepsy drugs and mothers who use more than one anti-epilepsy medication.

The study is part of an ongoing study of the long-term effects of in utero anti-epilepsy drug exposure on children's cognition. Women with epilepsy who were taking anti-epilepsy drugs were enrolled in the study during pregnancy. Ultimately, the study will examine the effects of in utero anti-epilepsy drug exposure on children at six years old.

This research was presented at the American Academy of Neurology's 60th Anniversary Annual Meeting in Chicago on April 17, 2008.

Adapted from materials provided by American Academy of Neurology.

Chemotherapy's Damage To The Brain Detailed

ScienceDaily (Apr. 22, 2008) — A commonly used chemotherapy drug causes healthy brain cells to die off long after treatment has ended and may be one of the underlying biological causes of the cognitive side effects -- or "chemo brain" -- that many cancer patients experience. That is the conclusion of a study published today in the Journal of Biology.

A team of researchers at the University of Rochester Medical Center (URMC) and Harvard Medical School has linked the widely used chemotherapy drug 5-fluorouracil (5-FU) to a progressive collapse of populations of stem cells and their progeny in the central nervous system.

"This study is the first model of a delayed degeneration syndrome that involves a global disruption of the myelin-forming cells that are essential for normal neuronal function," said Mark Noble, Ph.D., director of the University of Rochester Stem Cell and Regenerative Medicine Institute and senior author of the study. "Because of our growing knowledge of stem cells and their biology, we can now begin to understand and define the molecular mechanisms behind the cognitive difficulties that linger and worsen in a significant number of cancer patients."

Cancer patients have long complained of neurological side effects such as short-term memory loss and, in extreme cases, seizures, vision loss, and even dementia. Until very recently, these cognitive side effects were often dismissed as the byproduct of fatigue, depression, and anxiety related to cancer diagnosis and treatment. Now a growing body of evidence has documented the scope of these conditions, collectively referred to as chemo brain. And while it is increasingly acknowledged by the scientific community that many chemotherapy agents may have a negative impact on brain function in a subset of cancer patients, the precise mechanisms that underlie this dysfunction have not been identified.

Virtually all cancer survivors experience short-term memory loss and difficulty concentrating during and shortly after treatment. A study two years ago by researchers with the James P. Wilmot Cancer Center at the University of Rochester showed that upwards of 82% of breast cancer patients reported that they suffer from some form of cognitive impairment.

While these effects tend to wear off over time, a subset of patients, particularly those who have been administered high doses of chemotherapy, begin to experience these cognitive side effects months or longer after treatment has ceased and the drugs have long since departed their systems. For example, a recent study estimates that somewhere between 15 and 20 percent of the nation's 2.4 million female breast cancer survivors have lingering cognitive problems years after treatment. Another study showed that 50 percent of women had not recovered their previous level of cognitive function one year after treatment.

Two years ago, Noble and his team showed that three common chemotherapy drugs used to treat a wide range of cancers were more toxic to healthy brain cells than the cancer cells they were intended to treat. While these experiments were among the first to establish a biological basis for the acute onset of chemo brain, they did not explain the lingering impact that many patients experience.

The scientists conducted a similar series of experiments in which they exposed both individual cell populations and mice to doses of 5-fluorouracil (5-FU) in amounts comparable to those used in cancer patients. 5-FU is among a class of drugs called antimetabolites that block cell division and has been used in cancer treatment for more than 40 years. The drug, which is often administered in a "cocktail" with other chemotherapy drugs, is currently used to treat breast, ovarian, stomach, colon, pancreatic and other forms of cancer.

The researchers discovered that months after exposure, specific populations of cells in the central nervous system -- oligodendrocytes and the dividing precursor cells from which they are generated -- underwent such extensive damage that, after six months, these cells had all but disappeared in the mice.

Oligodendrocytes play an important role in the central nervous system and are responsible for producing myelin, the fatty substance that, like insulation on electrical wires, coats nerve cells and enables signals between cells to be transmitted rapidly and efficiently. The myelin membranes are constantly being turned over, and without a healthy population of oligodendrocytes, the membranes cannot be renewed and eventually break down, resulting in a disruption of normal impulse transmission between nerve cells.

These findings parallel observations in studies of cancer survivors with cognitive difficulties. MRI scans of these patients' brains revealed a condition similar to leukoencephalopathy. This demyelination -- or the loss of white matter -- can be associated with multiple neurological problems.

"It is clear that, in some patients, chemotherapy appears to trigger a degenerative condition in the central nervous system," said Noble. "Because these treatments will clearly remain the standard of care for many years to come, it is critical that we understand their precise impact on the central nervous system, and then use this knowledge as the basis for discovering means of preventing such side effects."

Noble points out that not all cancer patients experience these cognitive difficulties, and determining why some patients are more vulnerable may be an important step in developing new ways to prevent these side effects. Because of this study, researchers now have a model which, for the first time, allows scientists to begin to examine this condition in a systematic manner.

Other investigators participating in the study include Ruolan Han, Ph.D., Yin M. Yang, M.D., Anne Luebke, Ph.D., Margot Mayer-Proschel, Ph.D., all with URMC, and Joerg Dietrich, M.D., Ph.D., formerly with URMC and now with Harvard Medical School. The study was funded by the National Institute of Neurological Disorders and Stroke, the Komen Foundation for the Cure, and the Wilmot Cancer Center.

Adapted from materials provided by University of Rochester Medical Center.