Sounds such as fish grinding their teeth and shrimp snapping their claws contribute to the din surrounding a coral reef, which can often be heard up to a few kilometers away. To test whether the racket affected young fish, Stephen Simpson of the University of Edinburgh and his colleagues constructed 24 patches of artificial reef. The researchers outfitted half of them with speakers broadcasting recordings of reef noise and kept the other half silent. Both of the main types of fish attracted to the fake reefs, cardinalfish and damselfish, preferred the noisy reefs to the quiet ones. The two groups did differ in the kinds of noise they favored: damselfish were drawn more to higher-frequency sound, whereas cardinalfish exhibited no such preference.
The discovery that fish respond to reef sounds suggests a potentially valuable management tool, the authors say. "This is a significant step forward in our understanding of their behavior, which should help us to better predict how we should conserve or harvest populations of reef fishes in the future," Simpson remarks. "It should also alert policymakers to the damage that human activities like drilling and shipping may have on fish stocks because they drown out the natural clues given by animals." --Sarah Graham
2005-04-14
Sounds Guide Young Fish toward Home
2005-04-12
Making Memories Stick
Some moments become lasting recollections while others just evaporate. The reason may involve the same processes that shape our brains to begin with
By R. Douglas Fields
That disturbing story was inspired by the real case history of a patient known in the medical literature only as "HM." When HM was nine years old, a head injury in a bicycle accident left him with debilitating epilepsy. To relieve seizures that could not be controlled in any other way, surgeons removed parts of HM's hippocampus and adjoining brain regions. The operation succeeded in reducing the seizures but inadvertently severed the mysterious link between short-term and long-term memory. Information destined for what is known as declarative memory--people, places, events--must pass through the hippocampus before being recorded in the cerebral cortex. Thus, memories from long ago that were already stored in HM's brain remained clear, but all his experiences of the present soon faded into nothing. HM saw his doctor on a monthly basis, but at each visit it was as if the two had never met.
This transition from the present mental experience to an enduring memory has long fascinated neuroscientists. The name of a person to whom you have just been introduced is stored in short-term memory and may be gone within a few minutes. But some information, like your best friend's name, is converted into long-term memory and can persist a lifetime. The mechanism by which the brain preserves certain moments and allows others to fade has recently become clearer, but first neuroscientists had to resolve a central paradox. Both long- and short-term memories arise from the connections between neurons, at points of contact called synapses, where one neuron's signal-emitting extension, called an axon, meets any of an adjacent neuron's dozens of signal-receiving fingers, called dendrites. When a short-term memory is created, stimulation of the synapse is enough to temporarily "strengthen," or sensitize, it to subsequent signals. For a long-term memory, the synapse strengthening becomes permanent. Scientists have been aware since the 1960s, however, that this requires genes in the neuron's nucleus to activate, initiating the production of proteins.
Memory researchers have puzzled over how gene activity deep in the cell nucleus could govern activities at faraway synapses. How does a gene "know" when to strengthen a synapse permanently and when to let a fleeting moment fade unrecorded? And how do the proteins encoded by the gene "know" which of thousands of synapses to strengthen? The same questions have implications for understanding fetal brain development, a time when the brain is deciding which synaptic connections to keep and which to discard. In studying that phenomenon, my lab came up with an intriguing solution to one of these mysteries of memory. And just like Dorothy, we realized that the answer was there all the time.
Genetic Memory
For a protein to be produced, a stretch of DNA inside the cell nucleus must be transcribed into a portable form called messenger RNA (mRNA), which then travels out to the cell's cytoplasm, where cellular machinery translates its encoded instructions into a protein. Memory researchers had found decades ago that blocking either the transcription of DNA into mRNA or the translation of mRNA into a protein would impede long-term memory formation but leave short-term memory unaffected.
Hebb proposed that, like an orchestra player who cannot keep up, a synapse on a neuron that fires out of sync with the other inputs to the neuron will stand out as odd and should be eliminated, but synapses that fire together--enough so as to make the neuron fire an action potential--should be strengthened. The brain would thus wire itself up in accordance with the flow of impulses through developing neural circuits, refining the original general outline.
Moving from Hebb's theory to sorting out the actual mechanics of this process, however, one again confronts the fact that the enzymes and proteins that strengthen or weaken synaptic connections during brain wiring must be synthesized from specific genes. So our group set out to find the signals that activate those genes.
Because information in the nervous system is coded in the pattern of neural impulse activity in the brain, I began with an assumption that certain genes in nerve cells must be turned on and off by the pattern of impulse firing. To test this hypothesis, a postdoctoral fellow in my lab, Kouichi Itoh, and I took neurons from fetal mice and grew them in cell culture, where we could stimulate them using electrodes in the culture dish. By stimulating neurons to fire action potentials in different patterns and then measuring the amount of mRNA from genes known to be important in forming neural circuits or in adapting to the environment, we found our prediction to be true. We could turn on or off particular genes simply by dialing up the correct stimulus frequency on our electrophysiological stimulator, just as one tunes into a particular radio station by selecting the correct signal frequency.
Time Code
Once we observed that neuronal genes could be regulated according to the pattern of impulses the cell was emitting, we wanted to investigate a deeper question: How could the pattern of electrical depolarizations at the surface of the cell membrane control genes deep in the nucleus of the neuron? To do so, we needed to peer into the cell cytoplasm and see how information was translated on its way from the surface to the nucleus.
What we found was not a single pathway leading from the neuron's membrane to its nucleus but rather a highly interconnected network of chemical reactions. Like the maze of roads leading to Rome, there were multiple intersecting biochemical pathways crisscrossing as they carried signals from the cell membrane throughout the cell. Somehow electrical signals of varying frequencies on the membrane flowed through this traffic in the cytoplasm to reach their proper destination in the nucleus. We wanted to understand how.
The primary way that information about the neuronal membrane's electrical state enters this system of chemical reactions in the cytoplasm is by regulating the influx of calcium ions through voltage-sensitive channels in the cell membrane. Neurons live in a virtual sea of calcium ions, but inside a neuron the concentration of calcium is kept extremely low--20,000 times lower than the concentration outside. When the voltage across the neuronal membrane reaches a critical level, the cell fires an action potential, causing the calcium channels to open briefly. Admitting a spurt of calcium ions into the neuron with the firing of each neural impulse translates the electrical code into a chemical code that cellular biochemistry inside the neuron can understand.
In domino fashion, as calcium ions enter the cytoplasm, they activate enzymes called protein kinases. Protein kinases turn on other enzymes by a chemical reaction called phosphorylation that adds phosphate tags to proteins. Like runners passing the baton, the phosphate-tagged enzymes become activated from a dormant state and stimulate the activity of transcription factors. The transcription factor CREB (cyclic AMP response element-binding protein), for instance, is activated by calcium-dependent enzymes that phosphorylate it and inactivated by enzymes that remove the phosphate tag. But there are hundreds of different transcription factors and protein kinases in a cell. We wanted to know how a particular frequency of action potential firing could work through calcium fluxes to reach the appropriate protein kinases and ultimately the correct transcription factors to control the right gene.
By filling the neurons with dye that fluoresces green when the calcium concentration in the cytoplasm increases, we were able to track how different action-potential firing patterns translated into dynamic fluctuations in intracellular calcium. One simple possibility was that gene transcription might be regulated by the amount of calcium rise in a neuron, with different genes responding better to different levels of calcium. Yet we observed a more interesting result: the amount of calcium increase in the neuron was much less important in regulating specific genes than the temporal pattern of calcium flashes, which echoed the temporal code of the neural impulses that had generated them.
Another postdoc in my lab, Feleke Eshete, followed these calcium signals to the enzymes they activate and the transcription factors those enzymes regulate, and finally we began to appreciate how different patterns of neural impulses could be transmitted through different intracellular signaling pathways. The important factor was time. We found that one could not represent the pathway from the cell's membrane to its DNA as a simple sequence of chemical reactions. At each step, starting with calcium entering through the membrane, the reactions branched off into a highly interconnected network of signaling pathways, each of which had its own speed limits governing how well it could respond to intermittent signals. This property determined which signaling pathway a particular frequency of action potentials would follow to the nucleus.
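One way to picture these different speed limits is with a toy calculation. The sketch below is a deliberately simplified illustration, not the real signaling network: it treats each pathway as a leaky integrator whose activation jumps with every burst of action potentials and then decays exponentially, and the time constants, burst schedules and helper function are invented for the example, chosen only to show how the same three bursts can engage a slow pathway yet leave a fast one unmoved when the bursts are widely spaced.

```python
import math

# Toy model of temporal filtering in signaling pathways: each pathway is a
# leaky integrator whose activation jumps by 1 with every burst of action
# potentials and then decays exponentially with time constant tau. The time
# constants and burst schedules are illustrative choices, not measured values,
# and real pathways also differ in how quickly they switch on, which this
# simplification ignores.

def activation_after_last_burst(burst_times_min, tau_min):
    """Activation just after the final burst: each earlier burst still
    contributes exp(-elapsed_time / tau)."""
    last = burst_times_min[-1]
    return sum(math.exp(-(last - t) / tau_min) for t in burst_times_min)

massed = [0.0, 0.1, 0.2]     # three bursts a few seconds apart (in minutes)
spaced = [0.0, 30.0, 60.0]   # three bursts separated by 30 minutes

for name, tau in [("fast pathway (tau = 0.5 min)", 0.5),
                  ("slow pathway (tau = 45 min)", 45.0)]:
    for label, bursts in [("massed", massed), ("spaced", spaced)]:
        print(f"{name:30s} {label}: {activation_after_last_burst(bursts, tau):.2f}")

# The fast pathway builds up only when the bursts arrive back to back
# (about 2.5 versus 1.0), whereas the slow, CREB-like pathway still
# accumulates across bursts spaced 30 minutes apart (about 1.8), so widely
# spaced repetition preferentially engages the genes downstream of the
# slower pathway.
```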
Some signaling pathways responded quickly and recovered rapidly; thus, they could react to high-frequency patterns of action potentials but could not sustain activation in response to bursts of action potentials separated by long intervals of inactivity. Other pathways were sluggish and could not respond well to rapid bursts of impulses, but once activated, their slowness to inactivate meant that they could sustain signals between bursts of action potentials that were separated by long intervals of inactivity. The genes activated by such a pathway would therefore respond to stimuli that are delivered repeatedly, but infrequently, like the repetition necessary for committing new information to memory. In other words, we observed that signals of different temporal patterns propagated through distinct pathways that were favorably tuned to those particular patterns and ultimately regulated different transcription factors and different genes. For instance, our measurements showed that CREB was rapidly activated by action potentials but sluggish in inactivating after we stopped stimulating the neuron. Thus, CREB would sustain its activation between repeated bursts of stimuli separated by intervals of 30 minutes or more, similar to the intervals of time between practice sessions required to learn new skills or facts. Given CREB's role in memory, we could not help but wonder whether the signaling pathway we were studying to understand brain development might also be relevant to the mechanism of memory. So we devised a test.
Memory in a Dish
In the standard laboratory model of memory, a slice of hippocampus is kept alive in a bath of salt solution, and a brief, high-frequency burst of stimulation delivered to a synapse increases the voltage the synapse produces in response to later test pulses. This increased strength, termed long-term potentiation (LTP), can be, despite its name, relatively short-lived. When test pulses are applied at a series of intervals after the high-frequency stimulus, the voltage produced by the synapse slowly diminishes back to its original strength within a few hours. Known as early LTP, this temporary synaptic strengthening is a cellular model of short-term memory. Remarkably, if the same high-frequency stimulus is applied repeatedly (three times in our experiments), the synapse becomes strengthened permanently, a state called late LTP. But the stimuli cannot be repeated one after the other. Instead each stimulus burst must be spaced by sufficient intervals of inactivity (10 minutes in our experiments). And adding chemicals that block mRNA or protein synthesis to the salt solution bathing the brain slice will cause the synapse to weaken to its original strength within two to three hours. Just as in whole organisms, the cellular model of short-term memory is not dependent on the nucleus, but the long-term form of memory is.
Indeed, Frey and Morris had used this technique to show that synapse-strengthening proteins would affect any temporarily strengthened synapse. First, they stimulated a synapse briefly to induce early LTP, which would normally last just hours. Then they fired a second synapse on the same neuron in a manner that would induce late LTP in that synapse: three bursts separated by 10 minutes. As a result, both synapses were permanently strengthened. The stronger stimulus sent a signal to the nucleus calling for memory-protein manufacture, and the proteins "found" any synapse that was already primed to use them. Based on our work showing how different patterns of impulses could activate specific genes, and recalling Hebb's theory that the firing of a neuron was critical in determining which of its connections will be strengthened, we asked whether a signaling molecule sent from the synapse to the nucleus was really necessary to trigger long-term memory formation. Instead we proposed that when a synapse fired strongly enough or in synchrony with other synapses so as to make the neuron fire action potentials out its axon, calcium should enter the neuron directly through voltage-sensitive channels in the cell body and activate the pathways we had already studied leading to CREB activation in the nucleus.
To test our theory, postdoc Serena Dudek and I administered a drug known to block synaptic function to the brain slice. We then used an electrode to stimulate the neurons' cell bodies and axons directly, so that the neurons fired action potentials even though their synaptic inputs were blocked. If a synapse-to-nucleus signaling molecule were necessary to trigger late LTP, our cellular model of long-term memory formation, then this procedure should not work, because the synapses were silenced by the drug. On the other hand, if the signals to the nucleus originated from the neurons firing action potentials, as in our developmental studies, silencing the synapses should not prevent activation of the memory-protein genes in the nucleus. We next processed the brain tissue to determine whether the transcription factor CREB had been activated. Indeed, in the small region of the brain slice that had been stimulated to fire action potentials in the complete absence of synaptic activity, all the CREB carried a phosphate tag, indicating that it had been switched to the activated state. We then checked for activity of the gene zif268, which is known to be associated with creation of LTP and memory. We found that it, too, was turned on by the hippocampal neurons' firing, without any synaptic stimulation. But if we performed the same stimulation in the presence of another drug that blocks the voltage-sensitive calcium channels--which we suspected were the actual conduit for the signal from the membrane to the nucleus--then CREB phosphorylation, zif268 and an enzyme associated with late LTP called MAPK were not activated after the neurons fired.
These results clearly showed that there was no need for a messenger from the synapse to the nucleus. Just as in our developmental studies, membrane depolarization by action potentials opened calcium channels in the neuronal membrane, activating signaling pathways to the nucleus and turning on appropriate genes. It seems to make good sense that memory should work this way. Rather than each synapse on the neuron having to send private messages to the nucleus, the transcriptional machinery in the nucleus listens instead to the output of the neuron to decide whether or not to synthesize the memory-fixing proteins.
Molecular Memento
This understanding offers a very appealing cellular analogue of our everyday experience with memory. Like Leonard in Memento or any witness to a crime, one does not always know beforehand which events should be committed permanently to memory. The moment-to-moment memories necessary for operating in the present are handled well by transient adjustments in the strength of individual synapses. But when an event is important enough or is repeated enough, the synapses fire strongly enough to make the neuron in turn fire neural impulses repeatedly and robustly, declaring "this is an event that should be recorded." The relevant genes turn on, and when the synapse-strengthening proteins find the synapses holding the short-term memory, those connections become, in effect, tattooed.
R. DOUGLAS FIELDS is chief of the Nervous System Development and Plasticity Section of the National Institute of Child Health and Human Development and adjunct professor in the Neurosciences and Cognitive Science Program at the University of Maryland. His last article in Scientific American, "The Other Half of the Brain" (April 2004), described the importance of glial cells to thinking and learning.
MORE TO EXPLORE:
Regulated Expression of the Neural Cell Adhesion Molecule L1 by Specific Patterns of Neural Impulses. Kouichi Itoh, B. Stevens, M. Schachner and R. D. Fields in Science, Vol. 270, pages 1369–1372; November 24, 1995.
Synaptic Tagging and Long-Term Potentiation. Uwe Frey and Richard G. M. Morris in Nature, Vol. 385, pages 533–536; February 6, 1997.
Somatic Action Potentials Are Sufficient for Late-Phase LTP-Related Cell Signaling. Serena M. Dudek and R. Douglas Fields in Proceedings of the National Academy of Sciences USA, Vol. 99, No. 6, pages 3962–3967; March 19, 2002.
Memory Systems of the Brain: A Brief History and Current Perspective. Larry R. Squire in Neurobiology of Learning and Memory, Vol. 82, pages 171–177; November 2004.
Researchers Use X-Rays to 'See' Fingerprints
In the standard approach to lifting fingerprints from a crime scene, known as contrast enhancement, a sample is treated with a substance--either vapor, liquid or powder--that adds color to a fingerprint and allows it to stand out from its background. Prints left on surfaces such as leather, plastic or fibrous textiles can sometimes be difficult to detect, however. The technique developed by Chris Worley of the Los Alamos National Laboratory and his colleagues is a noninvasive one that relies on a process known as micro-x-ray fluorescence (MXRF). When a surface is exposed to a thin beam of x-rays, the MXRF instrument detects elements such as sodium, potassium and chlorine, which are present as salts in human sweat. Because the salts are deposited along the ridges of a fingerprint, the fluorescence can be used to assemble a digital image of the print. "This process represents a valuable new tool for forensic investigators that could allow them to nondestructively detect prints on surfaces that might otherwise be undetectable by conventional methods," Worley says. "It won't replace traditional fingerprinting, but could provide a valuable complement to it."
MXRF cannot detect all the prints that conventional techniques do, because some prints won't contain enough of the necessary elements. But it might find some prints that would otherwise be missed: the researchers' tests showed that MXRF successfully identified prints from subjects whose hands had been exposed to sunscreen, lotion or saliva, substances that could interfere with contrast enhancement. Currently the method can test only samples that can physically be transported to a laboratory with an MXRF machine. If further testing and refinement of the technique are successful, the team predicts it could be used commercially in two to five years, perhaps as a portable device.
Monkeys Pay for Prurient Pictures
Robert Deaner of Duke University Medical Center and his colleagues studied male rhesus macaques that received juice rewards while looking at a variety of images of other macaques on a computer screen. The pictures included a neutral target, male monkeys that differed in social standing and the hindquarters of a female monkey, which reveal her sexual receptiveness. By systematically varying the amount of juice offered to the monkeys while changing the pictures they were seeing, the scientists determined how much the animals were willing to give up, or pay, in order to glimpse specific images. The team discovered that monkeys would give up a significant reward if it meant viewing high-ranking individuals or female behinds. But when given the chance to glance at images of low-ranking males, the subjects held out for additional juice.
The findings may help scientists understand the neural wiring that underlies social cognition. "At the moment, it's only a tantalizing possibility, but we believe that similar processes are at work in these monkeys and in people," says study co-author Michael Platt, also at Duke. "After all, the same kinds of social conditions have been important in primate evolution for both nonhuman primates and humans. So, in further experiments, we also want to try to establish in the same way how people attribute value to acquiring visual information about other individuals." The findings will appear in the March issue of Current Biology. --Sarah Graham
2005-04-11
Secret of the Venus Fly Trap Revealed
Lakshminarayanan Mahadevan of Harvard University and his colleagues used high-speed video to catch the Venus fly trap in action. The researchers first painted the plant's leaves with ultraviolet fluorescent dots and then filmed them shutting under ultraviolet light. By analyzing the images and modeling the movement using a mathematical formula, the team reconstructed the geometry of the leaves as they closed. When trigger hairs on the leaves are disturbed, the plant moves moisture in the leaf in response. This, in turn, affects the leaf's curvature. "In essence, a leaf stretches until reaching a point of instability where it can no longer maintain the strain," explains Mahadevan. "Like releasing a reversed plastic lid or part of a cut tennis ball, each leaf folds back in on itself, and in the process of returning to its original shape, ensnares the victim in the middle."
A better understanding of the Venus fly trap's impressive system could help researchers learn to emulate it. The team speculates that similar muscle-free movement could be applied to valves or switches in microfluidic devices or sensors. --Sarah Graham
DNA Helps Nanoparticles Pull Themselves Together
Scientists at the University of Michigan have been working with branched polymers just a few nanometers across, called dendrimers, which can carry many different types of molecules attached to their ends. Armed with contrast agents and drugs, a dendrimer can locate and signal the presence of diseased tissue. But building a multifaceted dendrimer complex is labor-intensive and requires separate, lengthy reaction steps for each additional molecule. In the current issue of the journal Chemistry and Biology, Youngseon Choi and his colleagues describe a different technique, which exploits DNA's natural tendency to pair with its complement to speed up the process. The team first made separate batches of dendrimers, each carrying a single type of molecule as well as a small swatch of noncoding DNA. When solutions of these dendrimers were combined, the strands of DNA formed complementary pairs, knitting pairs of dendrimer complexes together.
Using this approach, assembling a therapeutic dendrimer that could deliver five drugs to five different types of cells would require 10 steps. The traditional approach would require 25, each taking between two and three months. "With this approach, you can target a wide variety of molecules, drugs [and] contrast agents to almost any cell," comments study co-author James Baker of the University of Michigan. The results have proved the concept is feasible, the authors note, and could usher in a new age of self-assembling disease-fighters. --Sarah Graham
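The arithmetic behind that comparison seems to be combinatorial (the grouping below is an interpretation of the modular scheme described above, not a detail given by the authors): because DNA-linked modules can be mixed and matched, each drug-bearing and each cell-targeting dendrimer is synthesized only once, whereas the conventional route needs a separate construct for every drug and cell-type pairing.

\[
\underbrace{5}_{\text{drug modules}} \;+\; \underbrace{5}_{\text{targeting modules}} \;=\; 10 \ \text{syntheses}
\qquad \text{versus} \qquad
5 \times 5 \;=\; 25 \ \text{pairwise constructs}.
\]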
Chimps' Sense of Justice Found Similar to Humans'
In the fall of 2003 Sarah Brosnan and Frans de Waal of the Yerkes National Primate Research Center in Atlanta determined that capuchin monkeys don't like being subjected to treatment they deem unjust. In the new work, the researchers tested the reactions of pairs of chimpanzees to exchanges of food that varied in quality. The animals received either a grape, which they coveted, or a less appealing cucumber, and they could see what their partner obtained. In pairs of chimps that had lived together since birth, the individual given the cucumber was less likely to react negatively than was the short-changed member of a pair whose members did not know each other as well. Indeed, chimps in the short-term social groups refused to work after their partner received a better reward for the same job. "Human decisions tend to be emotional and vary depending on the other people involved," Brosnan says. "Our findings in chimpanzees imply this variability in response is adaptive and emphasize there is not one best response for any given situation, but rather it depends on the social environment at the time."
Further experiments to investigate reactions to unfair situations are ongoing at the center in the hopes of understanding why we humans make the decisions we do. "Identifying a sense of fairness in two closely related nonhuman primate species implies it could have a long evolutionary history," Brosnan remarks. The findings will be published in the February 7 edition of the Proceedings of the Royal Society B: Biological Sciences. --Sarah Graham
You, Robot
He says humans will download their minds into computers one day. With a new robotics firm, Hans Moravec begins the journey from warehouse drones to robo sapiens
By Chip Walter
When word got around that Hans Moravec had founded an honest-to-goodness robotics firm, more than a few eyebrows were raised. Wasn't this the same Carnegie Mellon University scientist who had predicted that we would someday routinely download our minds into robots? And that exponential advances in computing power would cause the human race to invent itself out of a job as robots supplanted us as the planet's most adept and adaptive species? Somehow, creating a company seemed ... uncharacteristically pragmatic. But Moravec doesn't see it that way. He says he didn't start Seegrid Corporation because he was backing off his predictions. He founded the company because he was planning to help fulfill them. "It was time," he says, slowly rubbing his hand across his bristle-short hair. "The computing power is here."
The 56-year-old Moravec should know. Born in Kautzen, Austria, and raised in Montreal, he has been pushing the envelope on robotics theory and experimentation for the past 35 years, first as the graduate student at Stanford University who created the "Stanford Cart," the first mobile robot capable of seeing and autonomously navigating the world around it (albeit very slowly), and later as a central force in Carnegie Mellon's vaunted Robotics Institute. His iconoclastic theories and inventive work in machine vision have both shocked his colleagues and jump-started research; Seegrid is just the next logical step. Moravec pulls an image up onto one of the two massive monitors that sit side by side on his desk, like great unblinking eyes. It's six o'clock in the evening, but, an inveterate night owl, he's just starting his "day." "I have been drawing these graphs for years about what will be possible," he comments. His mouse roams along dots and images that plot and compare the processing power of old top-of-the-line computers with their biological equivalents. There is the ENIAC, for example, which in 1946 possessed the processing capacity of a bacterium, and a 1990-model IBM PS/2 90 that harnessed the digital horsepower of a worm. Only recently have desktop computers arrived that can deliver the raw processing muscle of a spider or a guppy (about one billion instructions per second). "At guppy-level intelligence," he explains, "I thought we could manage 3-D mapping and create a robot that could get around pretty well without any special preparation of its environment."
But no one was creating that robot, so in the late 1990s Moravec says he began to grow "very antsy" about getting one built. In 1998 he wrote an ambitious grant proposal that outlined software for a robotic vision system. The Defense Advanced Research Projects Agency quickly funded the proposal, and three and a half years and $970,000 later, with PCs just reaching guppy smarts, a working demonstration was complete. "It proved the principle," Moravec says. "We really could map with stereo vision, if we did things just right." But doing things just right required more than prototype software. Robotic evolution, he adds, "has to be driven forward by a lot of trial and error, and the only way to get enough is if you have an industry where one company is trying to outdo another."
To help things along, he and Pittsburgh physician and entrepreneur Scott Friedman founded Seegrid in 2003. Their focus: the unglamorous but potentially huge "product handling" market. Industrial robots already flourish in tightly constrained environments such as assembly lines. Where they fail is in locations loaded with unpredictability. So Seegrid concentrated on creating vision systems that enable simple machines to move supplies around warehouses without any human direction.
Not exactly the stuff of science fiction, Moravec agrees, and a long way from superintelligent robots, but he says you have to start somewhere. Nearly everything sold has to be warehoused at some point, and at some point it also has to be rerouted and shipped. Right now human workers move millions of tons of supplies and products using dollies, pallet jacks and forklifts. Seegrid's first prototype devices automate that work, turning wheeled carts into seeing-eye machines that can be loaded and then walked through various routes to teach them how to navigate on their own. The technology is built on Moravec's bedrock belief that if robots are going to succeed, the world cannot be adapted to them; they have to adapt to the world, just like the rest of us.
Scientists Unravel How Geckos Keep Their Sticky Feet Clean
Previous research had hinted at a built-in cleaning process for gecko feet, but just how the creatures kept their toes tidy remained a mystery, because they neither groom their footpads nor secrete fluids. Kellar Autumn and Wendy R. Hansen of Lewis and Clark College measured the amount of force between the setae--the microscopic adhesive hairs on a gecko's toes--and different surfaces, both when the hairs were dirt-free and in the presence of particulate contamination. They found that it takes only a few steps for setae to shed the tiny silica spheres used as test dirt. "Self-cleaning in gecko setae may occur because it is energetically favorable for particles to be deposited on the surface rather than remain adhered to the spatulae," they write in the current issue of the Proceedings of the National Academy of Sciences.
The findings indicate that gecko foot cleaning occurs even under extreme exposure to clogging particles. To imitate this property in synthetic adhesives, the authors suggest, an array of adhesive nanostructures should be made of a relatively hard material with a small surface area and low surface energy. --Sarah Graham
A Glimpse of Supersolid
Solid helium can behave like a superfluid
By Graham P. Collins
Solids and liquids could hardly seem more different, one maintaining a rigid shape and the other flowing to fit the contours of whatever contains it. And of all the things that slosh and pour, superfluids seem to capture the quintessence of the liquid state--running through tiny channels with no resistance and even dribbling uphill to escape from a bowl. A superfluid solid sounds like an oxymoron, but it is precisely what researchers at Pennsylvania State University have recently witnessed. Physicists Moses Chan and Eun-Seong Kim saw the behavior in helium 4 that was compressed into solidity and chilled to near absolute zero. Although the supersolid behavior had been suggested as a theoretical possibility as long ago as 1969, its demonstration poses deep mysteries.
Chan and Kim observed the telltale signature of superflow, a decrease in rotational inertia, in a ring of solid helium. They applied about 26 atmospheres of pressure to liquid helium, forcing the atoms to lock in place and thereby form a fixed lattice. They then observed the oscillations of the helium as it twisted back and forth on the end of a metal rod. The period of these torsional oscillations depended on the rotational inertia of the helium; the oscillations occurred more rapidly when the inertia went down, just as if the mass of the helium had decreased. Amazingly, they found that about 1 percent of the helium ring remained motionless while the other 99 percent continued rotating as normal. One solid could somehow move effortlessly through another.
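The size of that effect follows from the textbook relation for a torsional oscillator, sketched here only to convey the scale; the symbol kappa stands for the torsion constant (stiffness) of the supporting rod, and the 1 percent figure is the decoupled fraction just described:

\[
T = 2\pi\sqrt{\frac{I}{\kappa}}
\qquad\Longrightarrow\qquad
\frac{\Delta T}{T} \;\approx\; \frac{1}{2}\,\frac{\Delta I}{I} \;\approx\; \frac{0.01}{2} = 0.5\%,
\]

so a 1 percent loss of rotational inertia shows up as an oscillation period shortened by roughly half a percent, a small but measurable shift.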
So how can a solid behave like a superfluid? All bulk liquid superfluids are caused by Bose-Einstein condensation, which is the quantum process whereby a large number of particles all enter the same quantum state. Chan and Kim's result therefore suggests that 1 percent of the atoms in the solid helium somehow form a Bose-Einstein condensate even while they remain at fixed lattice positions. That seems like a contradiction in terms, but the exchange of atoms between lattice sites might allow it. A characteristic of helium would tend to promote such an exchange--namely, its large zero-point motion, which is the inherent jiggling of atoms that represents a minimum amount of movement required by quantum uncertainty. (It is the reason helium ordinarily only occurs as a gas or a liquid: the extremely lightweight atoms jiggle about too much to form a solid.)
Supporting the idea of condensation, the two researchers did not see superfluidity in solid helium 3, an isotope of helium that as a liquid undergoes a kind of condensation and becomes superfluid only at temperatures far below that needed by liquid helium 4. Another possibility is that the crystal of helium contains numerous defects and lattice vacancies (yet another effect of the zero-point motion). These defects and vacancies could be what, in effect, undergo Bose-Einstein condensation. But all those theories seem to imply that the superfluidity would vary with the pressure, yet Chan and Kim see roughly the same effect all the way from 26 to 66 atmospheres. Douglas D. Osheroff of Stanford University, the co-discoverer of superfluidity in helium 3, calls the lack of pressure dependence "more than a bit bewildering." He says that Chan and Kim have done "all the obvious experiments to search for some artifact." If they are correct, Osheroff adds, then "I don't understand how supersolids become super. I hope the theorists are thinking about it seriously."
Early Mammal Dined on Dinosaurs
Researchers from the American Museum of Natural History (AMNH) in New York City and the Chinese Academy of Sciences in Beijing recovered the 130-million-year-old remains of an opossum-size mammal from China's fossil-rich Liaoning province. While cleaning the fossil of Repenomamus robustus, the team discovered a small patch of bones within the rib cage, where the stomach of similarly sized living mammals would be. The stomach contents included the limbs, fingers and teeth of a juvenile herbivorous dinosaur known as a psittacosaur. Although adult psittacosaurs grew to around six feet in length, the baby prey was just five inches long, about a third the size of Repenomamus robustus. From wear marks on the dinosaur's teeth, the researchers inferred that it was not an embryo. (This supports the notion that the mammal hunted the dinosaur rather than snatching an unhatched egg out of a nest.) In addition, they surmise that the prey was swallowed in chunks, because some of its bones were still connected.
The scientists also found remains of a larger mammal, about the size of a small dog, that was a close relative of the dinosaur eater. This fairly complete skeleton, Repenomamus giganticus, represents the largest mammal known so far from the Mesozoic era (spanning from about 250 million to 65 million years ago). Both creatures belong to a lineage that has no extant descendants. Remarks study co-author Jin Meng of AMNH, "This new evidence of larger size and predatory, carnivorous behavior in early mammals is giving us a drastically new picture of many of the animals that lived in the age of dinosaurs." --Sarah Graham
Cricket Courting Can Be a Deadly Deed
John Hunt of the University of New South Wales in Australia and his colleagues observed two groups of field crickets. One group ate a restrictive, low-protein diet, whereas the other dined on protein-rich foods. The researchers monitored the creatures' size, mating behavior and life span. Among the female crickets, those fed the richer diet lived longer than their protein-starved counterparts did. This pattern did not hold for the males, however. Instead, the well-fed males used their extra energy to woo female partners by calling more extensively during early adulthood, and they experienced shortened life spans as a result. "They literally knocked themselves out trying to impress female crickets," says study co-author Luc F. Bussiere, also at the University of New South Wales.
The findings demonstrate that the best reproductive strategy in the animal kingdom does not always coincide with living a long life. What is more, long-lived males are not necessarily those in the best condition, which indicates that longevity is not always a reliable measure of male quality. "One thing that consistently prolongs life span in a range of species is a restricted diet," remarks co-author Rob Brooks of the University of New South Wales. "Now we know a bit more about how this occurs in male crickets--by suppressing sexual advertisement." --Sarah Graham