Featured

The world's largest instrument resides in a cave

PBS News Hour / YouTube

The world's largest instrument exists deep inside the Luray Caverns in the Appalachian Mountains, and it's 400 million years in the making, Scientific American reports. It's called the Great Stalacpipe Organ, but it's unlike a typical organ, which forces air through pipes to create music — this instrument rhythmically strikes the cave's stalactites to create beautiful sounds.

How it works: Large rubber mallets sit next to the 37 stalactites. When the organist strikes a key on the instrument, the corresponding mallet strikes its stalactite, all coordinated via a hidden mechanical device that receives electrical signals from the organ. The organ's inventor sanded down 35 of the 37 stalactites to perfect their tone (two didn't need it), and the 3.5-acre cave lends natural acoustics to the songs.

Because the acoustics are not uniform throughout the cavern, the organ can be difficult to play by ear. So, much like a player piano, an automated system plays songs from a perforated plastic sheet that rotates around a metal drum. Wherever a hole in the sheet exposes the drum, a metal contact touches it and the corresponding stalactite sounds.
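The key-to-mallet control flow described above can be sketched in a few lines of Python. Everything here (the note names, the Mallet class, the five-note roll) is hypothetical, purely to illustrate the player-piano-style logic, not Luray's actual wiring:

```python
# Hypothetical sketch of the organ's control flow: each key is wired to one
# solenoid-driven mallet, which strikes one tuned stalactite. The note names
# and Mallet class are illustrative, not the real instrument's design.

class Mallet:
    def __init__(self, stalactite_id):
        self.stalactite_id = stalactite_id
        self.strikes = 0

    def strike(self):
        # In the real organ, an electrical signal fires a rubber-tipped mallet.
        self.strikes += 1
        return f"struck stalactite {self.stalactite_id}"

# One mallet per tuned stalactite (the real organ has 37; five shown here).
mallets = {note: Mallet(note) for note in ["C4", "D4", "E4", "F4", "G4"]}

def press_key(note):
    """A key press sends a signal to the corresponding mallet."""
    return mallets[note].strike()

def play_roll(roll):
    """The automated system reads a perforated sheet: each hole
    triggers one key, like a player piano."""
    return [press_key(note) for note in roll]

print(play_roll(["C4", "E4", "G4"]))
```

The point of the sketch is the one-to-one mapping: a hole in the rotating sheet plays exactly the role of a key press.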

Go deeper: A combination of Mother Nature's work, time and a skilled mathematician helped create the Great Stalacpipe Organ. The large, echoey chambers of Luray Caverns have formed naturally over the past 400 million years. Calcium-rich water droplets eventually formed the large stalactites that hang from the cave's ceiling and now act as an integral part of this instrument. Back in 1954, mathematician and electronics engineer Leland Sprinkle visited the caverns. As was customary during these tours, the guide struck the stalactites to show visitors how each one gave off a unique sound. That moment sparked an idea in Sprinkle's head — create the Great Stalacpipe Organ, which he did in three years.


Featured

Ancient engravings depict dogs on leashes

A dog carving found on a cliff. Photo: M. Guagnin. Credit: Journal of Anthropological Archaeology, 2017

Ancient sandstone engravings found in the Arabian desert depict dogs wearing leashes, according to a paper published Thursday in the Journal of Anthropological Archaeology and reported by David Grimm at Science.

Why it matters: It's difficult to date these carvings, but "based on the sequence of carving, the weathering of the rock, and the timing of switch to pastoralism" the pictures are likely 8,000-9,000 years old, writes Grimm, who notes that this would make them the oldest known depictions of dogs.

Modern resemblance: The dogs depicted have curly tails, like the Canaan dogs that live in the Middle East today.

Yes, but: Before the age of these engravings can be confirmed, they'll need to be tied to a well-dated archaeological site. Melinda Zeder, an archaeologist at the Smithsonian's National Museum of Natural History, tells Grimm doing so could be difficult because "the archaeological record in this region is really spotty." Even if these aren't the oldest dog depictions ever found, they're certainly the oldest known depictions of leashes.

Featured

How evolution shaped passenger pigeons' DNA — and their fate

Martha, the last known passenger pigeon, who died in 1914, is on display at the Smithsonian's National Museum of Natural History. Photo: Susan Walsh / AP

A new study suggests passenger pigeons, which once covered North America with massive flocks before their extinction in the early 20th century, may have maintained stable populations for thousands of years until human hunters came along, per The Washington Post. That counters previous research that found the species had already taken a downturn by that time.

The double whammy: Besides the sudden influx of human predators, the birds' genome had been tuned to the size of their population. It had surprisingly low genetic variation in some parts of the genome, which "provided few avenues for the bird to respond to human pressures, which ultimately drove it to extinction," according to the study, published Thursday in Science.

Role of genetics: One way genomes evolve is via random mutation (also called neutral evolution). Those mutations don't necessarily have an immediate benefit but sometimes can in the long run. Another process is selection, in which one version of a gene is preferred — or not — over another because it influences survival. Researchers found the passenger pigeon's genome was diverse overall compared to other birds, but that diversity wasn't uniform across its chromosomes. The researchers think this suggests the birds' large population allowed them to adapt quickly to their environment (via selection), but at the cost of little neutral evolution, which left them with little genetic variation.

"Large population size appears to have allowed for faster adaptive evolution and removal of harmful mutations, driving a huge loss in their neutral genetic diversity," the researchers wrote. "These results demonstrate the effect that selection can have on a vertebrate genome and contradict results that suggested that population instability contributed to this species's surprisingly rapid extinction."

The bottom line: The study says having a huge population was initially a key survival mechanism for the passenger pigeon. But the birds' surprisingly low genetic variation left them unable to recover from humans' overhunting. As one of the study authors told WashPost, "It's impossible to adapt to gunfire."

Editor's note: This story has been updated to provide further information.

Featured

Pluto's hazy atmosphere keeps its surface icy cold

Image released by NASA in Oct. 2015 shows a haze surrounding Pluto. Photo: NASA/JHUAPL/SwRI

Pluto's thick, hazy atmosphere may be responsible for its temperature of -203°C, according to a study published this week in Nature.

How it works: Hydrocarbon particles created in chemical reactions in Pluto's upper atmosphere group together as they fall toward the surface and are "transformed into thick layers of haze," Alexandra Witze writes in Nature. Haze doesn't block light from the sun, but it can both cool and heat the atmosphere.

Why it matters: Leslie Young, a planetary scientist at the Southwest Research Institute in Colorado, told Nature it's important to understand Pluto's atmosphere in order to work out what might be happening on other icy planets.

  • Another view: There are other ideas about why Pluto's atmosphere is so cold, including a combination of hydrogen cyanide, acetylene, and ethane gas. The haze model could be tested with observations from NASA's James Webb Space Telescope, which is now scheduled to launch in 2019.

Featured

How bees decode each other's dances

The three neurons involved in deciphering the waggle dance. Image: Hidetoshi Ikeno / University of Hyogo

Scientists have mapped some of the neurons that let bees talk by dancing.

Why it matters: Bees, which accomplish impressive things despite their tiny stature, have become models for understanding cognition. Scientists study how they navigate and recognize faces — and now, how they share information. "We're starting to understand how a fairly simple neural system, like a bee's, can solve a complex task like communication," says Thomas Wachtler, a researcher at the Ludwig Maximilian University of Munich and an author on the study.

Bees tell each other how to find pollen-laden flowers using the 'waggle dance.' It's incredibly precise, and can pinpoint a flower miles away. A bee stomps, vibrates her wings and waggles her abdomen while walking in a straight line, then circles back to the start and does it again. The angle of her line tells the others which way to go; the amount of time she waggles tells them how far. Other bees follow the waggle map.
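The encoding above (angle for direction, duration for distance) can be sketched numerically. The calibration constant below, roughly a kilometer of flight per second of waggling, is an illustrative assumption; real conversion rates vary by bee species:

```python
# Sketch of the waggle-dance code: the angle of the waggle run relative to
# vertical maps to the flower's bearing relative to the sun, and the waggle
# duration maps to distance. The 1 km/s calibration is an assumed figure.

METERS_PER_SECOND_OF_WAGGLE = 1000.0  # illustrative calibration, not measured

def decode_waggle(angle_from_vertical_deg, waggle_seconds, sun_azimuth_deg):
    """Convert a dance into a compass bearing and a distance in meters."""
    bearing = (sun_azimuth_deg + angle_from_vertical_deg) % 360
    distance_m = waggle_seconds * METERS_PER_SECOND_OF_WAGGLE
    return bearing, distance_m

# A run angled 30 degrees right of vertical, lasting 2 seconds,
# danced while the sun sits at azimuth 120 degrees:
bearing, distance = decode_waggle(30, 2.0, 120)
print(bearing, distance)  # prints: 150 2000.0
```

A follower bee performing this conversion would head out on a bearing of 150 degrees and fly about 2 km.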

The catch: Hives are pitch-black. The observing bees don't see the dance — they hear and feel it.

Researchers already knew which neurons the bees used to feel vibrations, and they knew about the dance. But no one had looked at how the two interacted.

How they did it: Wachtler, along with Hiroyuki Ai and his colleagues at Fukuoka University and the University of Hyogo, drummed the beat of an artificial waggle dance to a bee and measured signals from its neurons. At the center of the brain's response were three neurons: the first starts or stops the second in response to sound, which measures the duration of the waggle. The purpose of the third isn't clear yet, but since it receives signals from both of the bee's antennae, Wachtler thinks it helps the observers track where the dancing bee is in space, so they can determine the angle of the waggle.
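The start/stop role of the first two neurons amounts to a gated timer: one event opens the gate, a second closes it, and the elapsed time encodes the waggle duration. A minimal sketch, with the event stream and its timestamps entirely invented for illustration:

```python
# Minimal gated-timer sketch of how one neuron's start/stop signals could
# let a second neuron measure waggle duration. Events are invented.

def measure_waggle_durations(events):
    """events: list of (timestamp_seconds, kind), kind in {'start', 'stop'}.
    Returns the duration of each completed start/stop interval."""
    durations = []
    started_at = None
    for t, kind in events:
        if kind == "start":
            started_at = t          # gate opens: timing begins
        elif kind == "stop" and started_at is not None:
            durations.append(t - started_at)  # gate closes: duration read out
            started_at = None
    return durations

# Two waggle runs: one from 0.0 to 1.5 s, another from 3.0 to 4.2 s.
print(measure_waggle_durations(
    [(0.0, "start"), (1.5, "stop"), (3.0, "start"), (4.2, "stop")]
))
```

Each readout corresponds to one waggle run, i.e. one distance estimate in the dance's code.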

Featured

New stem cell research offers promise — and raises questions

Illustration: Rebecca Zisser / Axios

New advances in stem cell research have the potential to save lives – but not necessarily for the reasons people think. In the late '90s and early 2000s, scientists and the press heralded the promise of these cells, which appeared able to become whatever type of cell was needed to replace or fix damaged tissues. But major advances were slow to come, and the hype faded.

What's happening now: Instead of flashy, morphing cells, today's stem cell therapies are subtler, work in unexpected ways, and often succeed for reasons that aren't clear. Still, these advances are promising, so much so that the FDA today released a newly restructured framework for regenerative medicine, including stem cells, to help expedite applications for new therapies.

Background: Stem cells are cellular blank slates. They take cues from their environment and permanently become a more specialized kind of cell. Some used in medicine do come from fetuses, but many types persist into adulthood and are also used in therapies:

  • Bone marrow stem cells, for example, can become blood, cartilage, or bone cells.
  • And skin or blood cells can be reprogrammed back into stem cells, called induced pluripotent stem cells.

What's new: Research has steadily chugged along away from the limelight, "which is honestly how we prefer it," says neuroscientist Evan Snyder of UC San Diego, who admits some responsibility for the hype of the early-aughts. Small advances have accumulated, and there are currently several active human clinical trials using various types of stem cells to treat diseases, including:

  • ALS patients: With this neuromuscular disease, also called Lou Gehrig's disease, the brain cells called glia degrade. Stem cells injected into rats seem to protect these glia. Cedars-Sinai Medical Center has begun recruiting human patients for a phase 1 clinical trial.
  • Stroke patients: Bone marrow stem cells injected into the blood helped reduce movement difficulties in a trial of 31 recent stroke patients conducted by the University of Grenoble in France and the University of Maryland in Baltimore. The findings were presented in a poster at the Society for Neuroscience's annual meeting on Monday, and plans for a 400-patient study are underway.
  • Patients with spinal injuries: In a clinical trial of six patients with recent spinal injuries, all regained some motor function after receiving oligodendrocyte progenitor cells, a type of stem cell.
  • Buyer, beware: There are predatory clinics offering cure-all stem cell treatments, and the new guidelines issued today crack down on what the FDA calls "unscrupulous actors" operating under the guise of cutting-edge science. Outside of clinical trials, the FDA has approved only a specific group of stem cells (from cord blood) for a specific set of blood-related illnesses.

What's next:

  • In injuries like gunshot wounds, the brain's own immune system turns on itself and attacks neurons, demolishing large regions of the brain. Research conducted by Shyam Gajavelli, a neurologist at the University of Miami, shows that human neuronal stem cells can prevent this process in rats, potentially by giving the immune system something other than brain cells to attack.
  • In a poster presented Monday at the Society for Neuroscience meeting, Gajavelli also reported stem cells protected rats from injury-related coordination problems. He says that more research is needed before they're ready for human trials, however.

A black box: Researchers agree that stem cells work to treat many diseases. However, "there's a sort of black box around the mechanism with stroke," says Thomas Zeffiro from the University of Maryland, who is involved with human trials on stroke and stem cells.
  • Part of the mystery is that in past stroke studies, it appears stem cells injected into the blood stream never reach the brain, but are instead processed in the spleen, according to Snyder. Despite the mechanistic mystery, the benefits of stem cells for stroke in lab animals are well-documented.
  • It's not just stroke. Although numerous trials have shown that stem cells can be effective treatments, in many cases the exact ways they work aren't yet clear. Snyder suspects that the therapeutic strength of stem cells might not lie in their abilities to heal, but in their abilities to protect:

"It could be anti-inflammation, it could be protective against scar formation, it could be building an extracellular matrix. It might not even be just one mechanism," Snyder says. '"I'd go so far as to say that almost any positive outcome seen in humans or animals is due to neuroprotection."

Why this is important: As several researchers noted, the FDA is understandably reluctant to approve new treatments when it isn't clear why they work. Until scientists better understand why different stem cell treatments seem to help with different diseases, the development of more effective, precise treatments could be limited.

One more thing: Although all these advances are significant and important, Snyder thinks there's an area of stem cell research that is even more promising – stem cells as tools that:

  • measure the progression of a disease
  • act as 'reporter cells' that alter as they move through the body in ways scientists can track
  • can be used to study drug toxicity
  • help researchers understand more about how development happens, from egg to embryo to full-blown life.

Featured

AI searches for new inspiration

Illustration: Lazaro Gamio / Axios

Deep learning — the AI technique that allowed a computer to beat a world-champion Go player — has become very good at recognizing patterns in images and games. But it's loosely based on ideas we've had about the human brain for decades. Researchers now have more insights from neuroscience and better technologies, both of which they are trying to use to make more intelligent machines.

What's new: On Tuesday, DeepMind co-founder Demis Hassabis presented new work from the company that indicates a move into different territory. Researchers gave an AI system pictures of a 3D scene, along with the coordinates of the camera angles, and it was able to output a new scene from an angle it had never seen. Being able to build models of the world like this — and then use them to react and respond to new situations never encountered before — is considered key to intelligence.

The unpublished work was presented at the Society for Neuroscience's annual meeting in Washington, D.C. It's one example of different kinds of learning that researchers would like to develop in AI — and one based on aspects of human intelligence that computers haven't mastered yet.

The approach is among a few being tried but one that some researchers are excited about because, as Hassabis recently wrote, "[The human brain is] the only existing proof that such an intelligence is even possible."

"A lot of the machine learning people now are turning back to neuroscience and asking what have we learned about the brain over the last few decades, and how we can translate principles of neuroscience in the brain to make better algorithms," says Saket Navlakha, a computer scientist at the Salk Institute for Biological Sciences.

Last week, he and his colleagues published a paper suggesting that a strategy fruit flies use to decide whether to avoid an odor they haven't encountered before can improve a computer's search for similar images.

Other goals:

  • One-shot learning: Children can learn a new word, task or concept from a few examples. The first deep learning algorithms required massive amounts of data to do the same. Progress has been made in reducing the amount of data needed, but it is still far more than what a two-year-old needs.
  • Attention: In a crowded place, most of us are able to pay attention to what we need to know and filter out the rest. "Trying to include this idea in neural networks and machine learning is something people are paying more attention to," says Navlakha.
  • External memory: Brains have multiple systems for memory that operate at different time scales. Researchers want to see if they can give algorithms the equivalent of working memory or scratch pads. DeepMind combined external memory with deep learning to create an algorithm that can efficiently navigate the London Underground.
  • Intuitive physics: We recognize when something is physically off — an airplane balancing on its wing on a highway is clearly not right to us. But when a computer puts a caption to just that image, it reads "an airplane is parked on a tarmac at the airport." NYU's Brenden Lake says, "We don't know how the brain has those abilities."
  • Lifelong learning: Humans constantly integrate new and sometimes conflicting information, resolve it, and at times revise our entire understanding of something. "This constant change over time is something machine learning and AI has been struggling with," says Navlakha.

The big question for all AI approaches: What problem is a particular algorithm best suited to solve, and will it be better than other AI techniques? For neuroscience-inspired AI, there has been early progress but "the jury is still out," says Oren Etzioni, who heads the Allen Institute for Artificial Intelligence.

The big picture: It isn't about replicating the brain in a computer, but building a mathematical theory of learning, says Terrence Sejnowski, who is also at the Salk Institute. "Eventually we will get to a point where theory in the machine learning world will illuminate neuroscience in a way unlike we've seen so far."

The back story: Deep learning algorithms only started to work in recent years as more data became available to train them and more processing power could be dedicated to them. In that sense, Sejnowski and others say what we've seen so far is really an "engineering achievement."

The field's pioneer, Geoffrey Hinton, recently said it needs new ideas.

The recent advances have reignited a bit of a debate among AI researchers about how best to actually do this. One way is to find principles of how the brain works and translate them into machine learning and other applications.

There's the "build it like the brain" approach — and to that end, efforts to map how neurons communicate with one another. And then there is the strategy of hard-wiring rules gleaned from models of how humans learn. MIT's Joshua Tenenbaum, Lake and their colleagues suggest the latter is needed to get beyond the accomplishments of pattern recognition. It's very likely advances will come from combining both.

"A more productive way to think about it is that there are some core things that infants, children, and adults use to learn new concepts and perform new tasks," says Lake. He suggests these principles of development and cognition should be seen as milestones and targets for machine learning algorithms to capture, however they get there.

Expert Voices Featured

Capsule networks advance AI image recognition

Photo illustration: Axios Visuals

Geoffrey Hinton, a Google researcher and professor at the University of Toronto, helped pioneer artificial neural networks, the technology behind most of the major advances in machine learning. And now he's come up with a new idea that he thinks is even more powerful.

Hinton calls his latest creation "capsule networks." Each capsule is a group of artificial neurons trained to track a specific feature of an image. Combining them allows an AI system to understand the spatial relations between different features of an image, so it can identify different views of the same object. Hinton has shown that this technique performs much better than existing systems in a challenge to recognize objects from different angles.

Why it matters: To existing neural networks, two images of the same object from different angles look like totally different objects. This means neural networks asked to recognize objects in images need to train on images from many different angles, which requires vast amounts of data. For example, the ImageNet data set, used in the image recognition competition that's been the benchmark for these systems for the last seven years, contains more than 13 million images. The hope is that capsule networks could achieve the same results working from much smaller data sets.

Featured

New Earth-like planet may be the right temperature for life

Illustration from the European Southern Observatory shows the planet Ross 128 b, foreground, which orbits a red dwarf star, 11 light-years from Earth. Photo: M. Kornmesser / ESO via AP

Scientists announced today the discovery of Ross 128 b, an Earth-sized planet 11 light-years away that may be a new target in the search for extraterrestrial life.

Why it matters: The planet is believed to be in the "habitable zone" because its temperature falls between -76 and 68 degrees Fahrenheit, which could allow water to exist on its surface, according to The Planetary Society. Ross 128 b orbits a so-called "quiet" star that "spews out comparatively less radiation that could harm life as we know it." Another Earth-like planet — Proxima b — is closer to Earth but bombarded with radiation.

The bottom line: Scientists aren't yet certain about its habitability, but the discovery could lead to future searches for extraterrestrial intelligence on the planet.

Featured

How India's colonial past shapes its science today

Illustration: Lazaro Gamio / Axios

Today, India hosts some of the world's top doctors and scientists. But the focus on global science sometimes neglects local needs and expertise, says Indian science journalist Padma Tata Venkata, who goes by Padma TV.

Axios spoke with her about the impact of the English on research in her country as part of a series of interviews about a movement to decolonize science. Highlights from the interview conducted via e-mail — and edited for length and clarity — are below.

What does decolonizing science mean to you?

Decolonizing science means acknowledging the scientific accomplishments of non-western ancient civilizations. Of course, all modern scientific theories and methods of observation, enquiry and validation hold and should continue to hold. But one should also be open to the existence of scientific knowledge in civilizations that preceded the present one in which the West dominates, rather than propagating the oft-repeated narrative that true scientific knowledge evolved only in the West.

India had advanced technology before colonization. Did that change?

In colonial India, the then rulers dismissed traditional knowledge systems and their contributions to science. Colonization also created arbitrary divisions of western scientific rationale, logic and technology versus eastern superstition and magic, which further devalued any genuine scientific knowledge present before the colonizers arrived.

An example from the Indian sub-continent is the ancient systems of medicine that existed before the introduction of the British system of medicine, or allopathy. These include Ayurveda, the Siddha system, and Unani, which came from West Asia. These systems were slowly marginalized and de-recognized during colonial rule, when only allopathy was recognized and only its practitioners were considered eligible for registration as doctors.

Additionally, with the imposition of English, other local languages started to be marginalized, which meant the loss of valuable information available in those languages.

Does English colonization have an impact on science in India today?

By the time India became a free country in 1947, it was economically poor and hence had to struggle to balance its investments across sectors such as agriculture, health, education and science. India and other newly free countries also ended up with low self-esteem, from the systematic undermining of their older scientific legacies and the fact that they had missed out on the West's Industrial Revolution. They felt the only way forward was to 'catch up' and obtain parity with the West in science, technology and overall development, and so they needed to imitate the West.

Hence, independent India adopted the Western model of speedy development, which meant some of its technological choices were also a result of global structures and global technology politics. A Western- and techno-centric model of development also meant aligning its scientific priorities with prevailing Western scientific trends and disregarding local priorities and needs. The heavy reliance on Western interpretations of technology and culture often disregards local knowledge systems, especially knowledge of local ecological and socio-economic conditions, which are not seen as 'technologically advanced.' This has also led to the adoption of faulty technologies that caused severe ecological and environmental crises.

How has this focus on Western scientific trends influenced the research that's done?

Grassroots innovations are simple, frugal, niche-specific innovations that address a local need, often made by people who have not studied science formally and are not PhDs or postdocs. These innovations are tailored to solve local problems but lack the institutional support of elite scientific institutes, whose research agendas are dictated, or at any rate influenced, by current research trends and priorities in advanced countries. The innovations are not mentioned in peer-reviewed scientific literature.

Over 100,000 ideas, innovations and traditional knowledge practices from India and abroad have been documented by the Honey Bee Network, founded by Anil Gupta, professor emeritus at the Indian Institute of Management, Ahmedabad. To cite a few examples: a power-generating pumping machine, a tractor-mounted maize sheller, an 'amphibious' bicycle that lets you pedal through water, a compost aerator, and a natural convection drier for agricultural products.

Featured

Opioid addiction treatments are equally effective, study finds

Vivitrol packaging. Photo: Carla K. Johnson / AP

The first study in the U.S. to directly compare two medications to treat opioid addiction — one injected monthly, one given daily as a film placed under the tongue — found the treatments are equally effective, according to research published in the Lancet on Tuesday.

Yes but: Naltrexone (or Vivitrol) injections can't begin until someone has detoxed, typically after a few days. In the study, 28 percent of the participants who were to receive naltrexone didn't detox and couldn't begin the treatment.

What it means: Increasing access to medications for treating opioid addiction was a top recommendation from President Trump's opioid crisis commission. One potential advantage of naltrexone for some patients is that it is a monthly shot versus a daily treatment. Until now, there was limited data about the medication's effectiveness. "We need as many evidence-based options as possible because opioid abuse disorder is more deadly and affecting more people," says Alex Walley, a physician and researcher at Boston Medical Center's Grayken Center for Addiction, who was not involved in the study. He says detox programs should be offering both treatments and allowing patients and their physicians to choose.

What they did: 570 people who had used illegal opioids, mostly heroin, were recruited from inpatient treatment clinics in eight different locations across the U.S. Roughly half of the people in the study received monthly shots of Vivitrol, and the other half took buprenorphine and naloxone (Suboxone) at home each day.

The researchers then tracked relapses, overdoses and deaths in both groups over the following 24 weeks. Of the participants who were able to detox and receive naltrexone, 52 percent relapsed during the next six months, versus 56 percent of the group that received Suboxone. "We need to think of staying on medication as an important outcome," says Walley.

The big question: What happens long-term? Nora Volkow, director of the National Institute on Drug Abuse, which funded the trial, says they want to understand the characteristics of patients that determine if they will respond better to one treatment versus another. They also want to know how protocols used by centers to initiate treatments might be standardized and improved.

Editor's note: This story has been corrected. In the study, Suboxone was administered daily as a film placed under the tongue, not a pill.