Quantum tunneling takes time, new study shows

Quantum particles can burrow through barriers that should be impenetrable — but they don’t do it instantaneously, a new experiment suggests.

The process, known as quantum tunneling, takes place extremely quickly, making it difficult to confirm whether it takes any time at all. Now, in a study of electrons escaping from their atoms, scientists have pinpointed how long the particles take to tunnel out: around 100 attoseconds, or 100 billionths of a billionth of a second, researchers report July 14 in Physical Review Letters.
In quantum tunneling, a particle passes through a barrier despite not having enough energy to cross it. It’s as if someone rolled a ball up a hill but didn’t give it a hard enough push to reach the top, and yet somehow the ball tunneled through to the other side.

Although scientists knew that particles could tunnel, until now, “it was not really clear how that happens, or what, precisely, the particle does,” says physicist Christoph Keitel of the Max Planck Institute for Nuclear Physics in Heidelberg, Germany. Theoretical physicists have long debated between two possible options. In one model, the particle appears immediately on the other side of the barrier, with no initial momentum. In the other, the particle takes time to pass through, and it exits the tunnel with some momentum already built up.

Keitel and colleagues tested quantum tunneling by blasting argon and krypton gas with laser pulses. Normally, the pull of an atom’s positively charged nucleus keeps electrons tightly bound, creating an electromagnetic barrier to their escape. But, given a jolt from a laser, electrons can break free. That jolt weakens the electromagnetic barrier just enough that electrons can leave, but only by tunneling.

Although the scientists weren’t able to measure the tunneling time directly, they set up their experiment so that the angle at which the electrons flew away from the atom would reveal which of the two theories was correct. The laser’s light was circularly polarized — its electromagnetic waves rotated in time, changing the direction of the waves’ wiggles. If the electron escaped immediately, the laser would push it in one particular direction. But if tunneling took time, the laser’s direction would have rotated by the time the electron escaped, so the particle would be pushed in a different direction.
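For a rough sense of how a delay translates into an angle, consider a back-of-the-envelope sketch. The article does not give the laser's wavelength, so the 800-nanometer value below is an assumption chosen purely for illustration: the field of a circularly polarized pulse sweeps through a full turn once per optical period, so a tunneling delay shifts the electron's escape direction by a proportional slice of 360 degrees.

```python
# Illustrative back-of-the-envelope only: the article does not specify the
# laser, so the 800-nm wavelength is an assumed, typical value.
C = 3.0e8            # speed of light, m/s
WAVELENGTH = 800e-9  # assumed drive-laser wavelength, m

period_s = WAVELENGTH / C        # one full rotation of the circularly polarized field
period_as = period_s / 1e-18     # same period in attoseconds (~2,700 as)

tunneling_delay_as = 100.0       # delay scale reported in the study, attoseconds
angle_offset_deg = 360.0 * tunneling_delay_as / period_as

print(f"Field rotation period: {period_as:.0f} attoseconds")
print(f"A {tunneling_delay_as:.0f}-as delay rotates the escape direction by ~{angle_offset_deg:.0f} degrees")
```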

Comparing argon and krypton let the scientists cancel out experimental errors, leading to a more sensitive measurement that was able to distinguish between the two theories. The data matched predictions based on the theory that tunneling takes time.
The conclusion jibes with some physicists’ expectations. “I’m pretty sure that the tunneling time cannot be instantaneous, because at the end, in physics, nothing can be instantaneous,” says physicist Ursula Keller of ETH Zurich. The result, she says, agrees with an earlier experiment carried out by her team.

Other scientists still think instantaneous tunneling is possible. Physicist Olga Smirnova of the Max Born Institute in Berlin notes that Keitel and colleagues’ conclusions contradict previous research. In theoretical calculations of tunneling in very simple systems, Smirnova and colleagues found no evidence of tunneling time. The complexity of the atoms studied in the new experiment may have led to the discrepancy, Smirnova says. Still, the experiment is “very accurate and done with great care.”

Although quantum tunneling may seem an esoteric concept, scientists have harnessed it for practical purposes. Scanning tunneling microscopes, for instance, use tunneling electrons to image individual atoms. For such an important fundamental process, Keller says, physicists really have to be certain they understand it. “I don’t think we can close the chapter on the discussion yet,” she says.

Ancient people arrived in Sumatra’s rainforests more than 60,000 years ago

Humans inhabited rainforests on the Indonesian island of Sumatra between 73,000 and 63,000 years ago — shortly after a massive eruption of the island’s Mount Toba volcano covered South Asia in ash, researchers say.

Two teeth previously unearthed in Sumatra’s Lida Ajer cave and assigned to the human genus, Homo, display features typical of Homo sapiens, report geoscientist Kira Westaway of Macquarie University in Sydney and her colleagues. By dating Lida Ajer sediment and formations, the scientists came up with age estimates for the human teeth and associated fossils of various rainforest animals excavated in the late 1800s, including orangutans.

Ancient DNA studies had already suggested that humans from Africa reached Southeast Asian islands before 60,000 years ago.

Humans migrating out of Africa 100,000 years ago or more may have followed coastlines to Southeast Asia and eaten plentiful seafood along the way (SN: 5/19/12, p. 14). But the Sumatran evidence shows that some of the earliest people to depart from Africa figured out how to survive in rainforests, where detailed planning and appropriate tools are needed to gather seasonal plants and hunt scarce, fat-rich prey animals, Westaway and colleagues report online August 9 in Nature.

Where does the solar wind come from? The eclipse may offer answers

The sun can’t keep its hands to itself. A constant flow of charged particles streams away from the sun at hundreds of kilometers per second, battering vulnerable planets in its path.

This barrage is called the solar wind, and it has had a direct role in shaping life in the solar system. It’s thought to have stripped away much of Mars’ atmosphere (SN: 4/29/17, p. 20). Earth is protected from a similar fate only by its strong magnetic field, which guides the solar wind around the planet.
But scientists don’t understand some key details of how the wind works. It originates in an area where the sun’s surface meets its atmosphere. Like winds on Earth, the solar wind is gusty — it travels at different speeds in different areas. It’s fastest in regions where the sun’s atmosphere, the corona, is dark. Winds whip past these coronal holes at 800 kilometers per second. But the wind whooshes at only around 300 kilometers per second over extended, pointy wisps called coronal streamers, which give the corona its crownlike appearance. No one knows why the wind is fickle.
The Aug. 21 solar eclipse gives astronomers an ideal opportunity to catch the solar wind in action in the inner corona. One group, led by Nat Gopalswamy of NASA’s Goddard Space Flight Center in Greenbelt, Md., will test a new version of an instrument called a polarimeter, built to measure the temperature and speed of electrons leaving the sun. Measurements will start close to the sun’s surface and extend out to around 5.6 million kilometers, or eight times the radius of the sun.

“We should be able to detect the baby solar wind,” Gopalswamy says.

Set up at a high school in Madras, Ore., the polarimeter will separate out light that has been polarized, or had its electric field organized in one direction, from light whose electric field oscillates in all sorts of directions. Because sunlight that scatters off the corona’s free electrons comes away polarized, isolating that polarized light will give the scientists a bead on what the electrons are doing, and by extension, what the solar wind is doing — how fast it flows, how hot it is and even where it comes from.
Gopalswamy and colleagues will also take images in four different wavelengths of light, as another measurement of speed and temperature. Mapping the fast and slow solar winds close to the surface of the sun can give clues to how they are accelerated.
The team tried out an earlier version of this instrument during an eclipse in 1999 in Turkey. But that instrument required the researchers to flip through three different polarization filters to capture all the information that they wanted. Cycling through the filters using a hand-turned wheel was slow and clunky — a problem when totality, the period when the moon completely blocks the sun, only lasts about two minutes.
The team’s upgraded polarimeter is designed so it can simultaneously gather data through all three filters and in four wavelengths of light. “The main requirement is that we have to take these images as close in time as possible, so the corona doesn’t change from one period to the next,” Gopalswamy says. One exposure will take 2 to 4 seconds, plus a 6-second wait between filters. That will give the team about 36 images total.
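One way to read those numbers, purely as a sanity check, is that the instrument records all three polarizations at once but still steps through the four wavelength filters, so each pass yields 12 images and three passes fit inside a roughly two-minute totality. The arithmetic below works through that reading; the 3-second exposure, the filter-stepping pattern and the 120 seconds of totality are assumptions for illustration, not details confirmed by the team.

```python
# Rough exposure budget for one possible reading of the numbers in the article.
# Assumptions (not from the article): 3-s exposures, 4 wavelength filters
# stepped one after another, all 3 polarizations recorded simultaneously,
# and about 120 s of totality.
EXPOSURE_S = 3          # middle of the quoted 2-to-4-second range
WAIT_S = 6              # quoted wait between filters
WAVELENGTHS = 4
POLARIZATIONS = 3       # captured at the same time, so they add no extra time

time_per_pass = WAVELENGTHS * (EXPOSURE_S + WAIT_S)   # ~36 s per full pass
passes = 120 // time_per_pass                          # ~3 passes during totality
images = passes * WAVELENGTHS * POLARIZATIONS          # ~36 images in all

print(f"{time_per_pass} s per pass, {passes} passes, about {images} images")
```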

Gopalswamy and his team first tested this instrument in Indonesia for the March 2016 solar eclipse. “That experiment failed because of noncooperation from nature,” Gopalswamy says. “Ten minutes before the eclipse, the rain started pouring down.”

This year, they chose Madras because, historically, it’s the least cloud-covered place on the eclipse path. But they’re still crossing their fingers for clear skies.

If you’re 35 or younger, your genes can predict whether the flu vaccine will work

A genetic “crystal ball” can predict whether certain people will respond effectively to the flu vaccine.

Nine genes are associated with a strong immune response to the flu vaccine in those aged 35 and under, a new study finds. If these genes were highly active before vaccination, an individual would generate a high level of antibodies after vaccination, no matter the flu strain in the vaccine, researchers report online August 25 in Science Immunology. This response can help a person avoid getting the flu.

The research team also tried to find a predictive set of genes in people aged 60 and above — a group that includes those more likely to develop serious flu-related complications, such as pneumonia — but failed. Even so, the study is “a step in the right direction,” says Elias Haddad, an immunologist at Drexel University College of Medicine in Philadelphia, who did not participate in the research. “It could have implications in terms of identifying responders versus nonresponders by doing a simple test before a vaccination.”

The U.S. Centers for Disease Control and Prevention estimates that vaccination prevented 5.1 million flu illnesses in the 2015‒2016 season. Getting a flu shot is the best way to stay healthy, but “the problem is, we don’t know what makes a successful vaccination,” says Purvesh Khatri, a computational immunologist at Stanford University School of Medicine. “The immune system is very personal.”
Khatri and colleagues wondered if there was a certain immune state one needed to be in to respond effectively to the flu vaccine. So the researchers looked for a common genetic signal in blood samples from 175 people with different genetic backgrounds, from different locations in the United States, and who received the flu vaccine in different seasons. After identifying the set of predictive genes, the team used another collection of 82 samples to confirm that the crystal ball accurately predicted a strong flu response. Using such a variety of samples makes it more likely that the crystal ball will work for many different people in the real world, Khatri says.

The nine genes make proteins that have various jobs, including directing the movement of other proteins and providing structure to cells. Previous research on these genes has tied some of them to the immune system, but not others. Khatri expects the study will spur investigations into how the genes promote a successful vaccine response. And figuring out how to boost the genes may help those who don’t respond strongly to flu vaccine, he says.

As for finding a genetic crystal ball for older adults, “there’s still hope that we’ll be able to,” says team member Raphael Gottardo, a computational biologist at the Fred Hutchinson Cancer Research Center in Seattle. Older people are even more diverse in how they respond to the flu vaccine than younger people, he says, so it may take a larger group of samples to find a common genetic thread.

More research is also needed to learn whether the identified genes will predict an effective response for all vaccines, or just the flu, Haddad says. “There is a long way to go here.”

Machines are getting schooled on fairness

You’ve probably encountered at least one machine-learning algorithm today. These clever computer codes sort search engine results, weed spam e-mails from inboxes and optimize navigation routes in real time. People entrust these programs with increasingly complex — and sometimes life-changing — decisions, such as diagnosing diseases and predicting criminal activity.

Machine-learning algorithms can make these sophisticated calls because they don’t simply follow a series of programmed instructions the way traditional algorithms do. Instead, these souped-up programs study past examples of how to complete a task, discern patterns from the examples and use that information to make decisions on a case-by-case basis.
Unfortunately, letting machines with this artificial intelligence, or AI, figure things out for themselves doesn’t just make them good critical “thinkers,” it also gives them a chance to pick up biases.

Investigations in recent years have uncovered several ways algorithms exhibit discrimination. In 2015, researchers reported that Google’s ad service preferentially displayed postings related to high-paying jobs to men. A 2016 ProPublica investigation found that COMPAS, a tool used by many courtrooms to predict whether a criminal will break the law again, wrongly flagged black defendants as future reoffenders nearly twice as often as it wrongly flagged white defendants. The Human Rights Data Analysis Group also showed that the crime prediction tool PredPol could lead police to unfairly target low-income, minority neighborhoods (SN Online: 3/8/17). Clearly, algorithms’ seemingly humanlike intelligence can come with humanlike prejudices.

“This is a very common issue with machine learning,” says computer scientist Moritz Hardt of the University of California, Berkeley. Even if a programmer designs an algorithm without prejudicial intent, “you’re very likely to end up in a situation that will have fairness issues,” Hardt says. “This is more the default than the exception.”
Developers may not even realize a program has taught itself certain prejudices. This problem gets down to what is known as a black box issue: How exactly is an algorithm reaching its conclusions? Since no one tells a machine-learning algorithm exactly how to do its job, it’s often unclear — even to the algorithm’s creator — how or why it ends up using data the way it does to make decisions.
Several socially conscious computer and data scientists have recently started wrestling with the problem of machine bias. Some have come up with ways to add fairness requirements into machine-learning systems. Others have found ways to illuminate the sources of algorithms’ biased behavior. But the very nature of machine-learning algorithms as self-taught systems means there’s no easy fix to make them play fair.

Learning by example
In most cases, machine learning is a game of algorithm see, algorithm do. The programmer assigns an algorithm a goal — say, predicting whether people will default on loans. But the machine gets no explicit instructions on how to achieve that goal. Instead, the programmer gives the algorithm a dataset to learn from, such as a cache of past loan applications labeled with whether the applicant defaulted.

The algorithm then tests various ways to combine loan application attributes to predict who will default. The program works through all of the applications in the dataset, fine-tuning its decision-making procedure along the way. Once fully trained, the algorithm should ideally be able to take any new loan application and accurately determine whether that person will default.
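For readers who want to see that “algorithm see, algorithm do” loop in miniature, here is a toy sketch. It is not the lending systems described in this article; the classifier, the three made-up attributes and every number are stand-ins invented purely for illustration.

```python
# Toy sketch of learning by example: fit a classifier to labeled past cases,
# then judge a new case. Data and feature names are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [income in $1,000s, loan amount in $1,000s, years employed]
past_applications = [
    [30, 20, 1], [85, 15, 10], [42, 30, 3], [95, 40, 12],
    [28, 25, 0], [60, 10, 7],  [33, 35, 2], [75, 20, 9],
]
defaulted = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = applicant defaulted, 0 = repaid

model = LogisticRegression()
model.fit(past_applications, defaulted)   # "training": find patterns in past cases

new_application = [[50, 25, 4]]
print("Predicted to default:", bool(model.predict(new_application)[0]))
```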

The trouble arises when training data are riddled with biases that an algorithm may incorporate into its decisions. For instance, if a human resources department’s hiring algorithm is trained on historical employment data from a time when men were favored over women, it may recommend hiring men more often than women. Or, if there were fewer female applicants in the past, then the algorithm has fewer examples of those applications to learn from, and it may not be as accurate at judging women’s applications.
At first glance, the answer seems obvious: Remove any sensitive features, such as race or sex, from the training data. The problem is, there are many ostensibly nonsensitive aspects of a dataset that could play proxy for some sensitive feature. Zip code may be strongly related to race, college major to sex, health to socioeconomic status.

And it may be impossible to tell how different pieces of data — sensitive or otherwise — factor into an algorithm’s verdicts. Many machine-learning algorithms develop deliberative processes that involve so many thousands of complex steps that they’re impossible for people to review.

Creators of machine-learning systems “used to be able to look at the source code of our programs and understand how they work, but that era is long gone,” says Simon DeDeo, a cognitive scientist at Carnegie Mellon University in Pittsburgh. In many cases, neither an algorithm’s authors nor its users care how it works, as long as it works, he adds. “It’s like, ‘I don’t care how you made the food; it tastes good.’ ”

But in other cases, the inner workings of an algorithm could make the difference between someone getting parole, an executive position, a mortgage or even a scholarship. So computer and data scientists are coming up with creative ways to work around the black box status of machine-learning algorithms.

Setting algorithms straight
Some researchers have suggested that training data could be edited before being given to machine-learning programs, so that the data are less likely to imbue algorithms with bias. In 2015, one group proposed testing data for potential bias by building a computer program that uses people’s nonsensitive features to predict their sensitive ones, like race or sex. If the program could do this with reasonable accuracy, the dataset’s sensitive and nonsensitive attributes were tightly connected, the researchers concluded. That tight connection was liable to train discriminatory machine-learning algorithms.

To fix bias-prone datasets, the scientists proposed altering the values of whatever nonsensitive elements their computer program had used to predict sensitive features. For instance, if their program had relied heavily on zip code to predict race, the researchers could assign fake values to more and more digits of people’s zip codes until they were no longer a useful predictor for race. The data could be used to train an algorithm clear of that bias — though there might be a tradeoff with accuracy.
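Here is a minimal sketch of that proxy test, under the assumption that the data sit in a simple table: train a probe model to predict the sensitive attribute from the supposedly neutral columns, and treat accuracy well above chance as a warning sign. The columns and values are invented for illustration.

```python
# Sketch of the proxy check described above: can "nonsensitive" columns predict
# a sensitive one? If yes, they can smuggle that information into a trained
# model. All data and column names here are invented for illustration.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Nonsensitive columns: [zip-code prefix, income in $1,000s]
nonsensitive = [
    [941, 80], [941, 75], [945, 90], [945, 85],
    [606, 35], [606, 30], [601, 40], [601, 38],
]
sensitive_group = [0, 0, 0, 0, 1, 1, 1, 1]   # e.g., a protected-class label

probe = LogisticRegression()
accuracy = cross_val_score(probe, nonsensitive, sensitive_group, cv=4).mean()

# Accuracy far above chance means the "neutral" columns act as a proxy; one
# remedy discussed above is to coarsen them, for example by blanking zip digits.
print(f"Sensitive attribute predictable with ~{accuracy:.0%} accuracy")
```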

On the flip side, other research groups have proposed de-biasing the outputs of already-trained machine-learning algorithms. In 2016 at the Conference on Neural Information Processing Systems in Barcelona, Hardt and colleagues recommended comparing a machine-learning algorithm’s past predictions with real-world outcomes to see if the algorithm was making mistakes equally for different demographics. This was meant to prevent situations like the one created by COMPAS, which made wrong predictions about black and white defendants at different rates. Among defendants who didn’t go on to commit more crimes, blacks were flagged by COMPAS as future criminals more often than whites. Among those who did break the law again, whites were more often mislabeled as low-risk for future criminal activity.

For a machine-learning algorithm that exhibits this kind of discrimination, Hardt’s team suggested switching some of the program’s past decisions until each demographic gets erroneous outputs at the same rate. Then, that amount of output muddling, a sort of correction, could be applied to future verdicts to ensure continued even-handedness. One limitation, Hardt points out, is that it may take a while to collect a sufficient stockpile of actual outcomes to compare with the algorithm’s predictions.
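A bare-bones sketch of the auditing step behind that idea (not Hardt’s full correction procedure) looks like this: line up an algorithm’s past predictions with real outcomes, split them by group, and compare false positive and false negative rates. All of the values below are invented for illustration.

```python
# Sketch of the per-group error-rate comparison behind "equal error rates".
# predictions: 1 = flagged as high risk; outcomes: 1 = actually reoffended.
# Group labels and all values are invented for illustration.
def error_rates(predictions, outcomes):
    fp = sum(p == 1 and o == 0 for p, o in zip(predictions, outcomes))
    fn = sum(p == 0 and o == 1 for p, o in zip(predictions, outcomes))
    negatives = sum(o == 0 for o in outcomes)
    positives = sum(o == 1 for o in outcomes)
    return fp / negatives, fn / positives   # false positive rate, false negative rate

group_a = ([1, 1, 0, 1, 0, 1, 0, 0], [0, 1, 0, 1, 0, 0, 0, 1])
group_b = ([0, 1, 0, 0, 0, 1, 0, 0], [0, 1, 0, 1, 0, 0, 0, 1])

for name, (preds, actual) in {"group A": group_a, "group B": group_b}.items():
    fpr, fnr = error_rates(preds, actual)
    print(f"{name}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```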
A third camp of researchers has written fairness guidelines into the machine-learning algorithms themselves. The idea is that when people let an algorithm loose on a training dataset, they don’t just give the software the goal of making accurate decisions. The programmers also tell the algorithm that its outputs must meet some certain standard of fairness, so it should design its decision-making procedure accordingly.

In April, computer scientist Bilal Zafar of the Max Planck Institute for Software Systems in Kaiserslautern, Germany, and colleagues proposed that developers add instructions to machine-learning algorithms to ensure they dole out errors to different demographics at equal rates — the same type of requirement Hardt’s team set. This technique, presented in Perth, Australia, at the International World Wide Web Conference, requires that the training data have information about whether the examples in the dataset were actually good or bad decisions. For something like stop-and-frisk data, where it’s known whether a frisked person actually had a weapon, the approach works. Developers could add code to their program that tells it to account for past wrongful stops.

Zafar and colleagues tested their technique by designing a crime-predicting machine-learning algorithm with specific nondiscrimination instructions. The researchers trained their algorithm on a dataset containing criminal profiles and whether those people actually reoffended. By forcing their algorithm to be a more equal opportunity error-maker, the researchers were able to reduce the difference between how often blacks and whites who didn’t recommit were wrongly classified as being likely to do so: The fraction of people that COMPAS mislabeled as future criminals was about 45 percent for blacks and 23 percent for whites. In the researchers’ new algorithm, misclassification of blacks dropped to 26 percent and held at 23 percent for whites.
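The sketch below is a simplified stand-in for that strategy rather than Zafar and colleagues’ actual formulation: it trains a basic logistic-regression classifier while penalizing the gap between the two groups’ scores among people who did not reoffend, a smooth proxy for unequal false positive rates. The data are synthetic and the penalty term is an assumption made purely for illustration.

```python
# Simplified stand-in for fairness-constrained training, NOT the exact method
# from Zafar and colleagues: logistic-regression loss plus a penalty on the
# gap between the two groups' average scores among people who did not
# reoffend (a smooth proxy for unequal false positive rates).
# All data are synthetic and invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)                         # sensitive attribute, 0 or 1
x = np.column_stack([rng.normal(group, 1.0, n),       # feature correlated with group
                     rng.normal(0.0, 1.0, n)])
y = (0.7 * x[:, 0] + x[:, 1] + 0.5 * rng.normal(size=n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
lam = 5.0                                             # fairness-penalty strength
neg = y == 0                                          # people who did not reoffend
mask0, mask1 = neg & (group == 0), neg & (group == 1)

for _ in range(3000):
    p = sigmoid(x @ w)
    grad_loss = x.T @ (p - y) / n                     # gradient of the logistic loss
    gap = p[mask0].mean() - p[mask1].mean()
    def mean_grad(mask):                              # d(group's mean score)/dw
        return (p[mask] * (1 - p[mask])) @ x[mask] / mask.sum()
    grad_pen = 2 * gap * (mean_grad(mask0) - mean_grad(mask1))
    w -= 0.5 * (grad_loss + lam * grad_pen)

labels = sigmoid(x @ w) > 0.5
for g in (0, 1):
    fpr = labels[(y == 0) & (group == g)].mean()
    print(f"group {g}: false positive rate {fpr:.0%}")
```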

These are just a few recent additions to a small, but expanding, toolbox of techniques for forcing fairness on machine-learning systems. But how these algorithmic fix-its stack up against one another is an open question since many of them use different standards of fairness. Some require algorithms to give members of different populations certain results at about the same rate. Others tell an algorithm to accurately classify or misclassify different groups at the same rate. Still others work with definitions of individual fairness that require algorithms to treat similarly any two people who differ only in a sensitive feature. To complicate matters, recent research has shown that, in some cases, meeting more than one fairness criterion at once can be impossible.

“We have to think about forms of unfairness that we may want to eliminate, rather than hoping for a system that is absolutely fair in every possible dimension,” says Anupam Datta, a computer scientist at Carnegie Mellon.

Still, those who don’t want to commit to one standard of fairness can perform de-biasing procedures after the fact to see whether outputs change, Hardt says, which could be a warning sign of algorithmic bias.

Show your work
But even if someone discovered that an algorithm fell short of some fairness standard, that wouldn’t necessarily mean the program needed to be changed, Datta says. He imagines a scenario in which a credit-classifying algorithm might give favorable results to some races more than others. If the algorithm based its decisions on race or some race-related variable like zip code that shouldn’t affect credit scoring, that would be a problem. But what if the algorithm’s scores relied heavily on debt-to-income ratio, which may also be associated with race? “We may want to allow that,” Datta says, since debt-to-income ratio is a feature directly relevant to credit.

Of course, users can’t easily judge an algorithm’s fairness on these finer points when its reasoning is a total black box. So computer scientists have to find indirect ways to discern what machine-learning systems are up to.

One technique for interrogating algorithms, proposed by Datta and colleagues in 2016 in San Jose, Calif., at the IEEE Symposium on Security and Privacy, involves altering the inputs of an algorithm and observing how that affects the outputs. “Let’s say I’m interested in understanding the influence of my age on this decision, or my gender on this decision,” Datta says. “Then I might be interested in asking, ‘What if I had a clone that was identical to me, but the gender was flipped? Would the outcome be different or not?’ ” In this way, the researchers could determine how much individual features or groups of features affect an algorithm’s judgments. Users performing this kind of auditing could decide for themselves whether the algorithm’s use of data was cause for concern. Of course, if the code’s behavior is deemed unacceptable, there’s still the question of what to do about it. There’s no “So your algorithm is biased, now what?” instruction manual.
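A bare-bones version of that kind of probe, assuming the auditor can query the model freely, is sketched below: build a clone of an input that differs only in one sensitive column, here a made-up “gender” field, and check whether the decision flips. The model and data are invented stand-ins, not any deployed system.

```python
# Sketch of a flip-one-feature audit: query the model on an input and on a
# "clone" identical except for a single sensitive feature, then compare.
# The trained model and all feature names are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Columns: [gender (0/1), income in $1,000s, years of credit history]
train_x = [[0, 30, 2], [0, 60, 8], [0, 45, 5], [0, 80, 12],
           [1, 30, 2], [1, 60, 8], [1, 45, 5], [1, 80, 12]]
train_y = [0, 1, 0, 1, 0, 1, 1, 1]   # 1 = approved
model = DecisionTreeClassifier(random_state=0).fit(train_x, train_y)

applicant = [1, 45, 5]
clone = applicant.copy()
clone[0] = 1 - clone[0]              # flip only the gender column

original = model.predict([applicant])[0]
flipped = model.predict([clone])[0]
print("Decision changed when gender flipped:", original != flipped)
```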
The effort to curb machine bias is still in its nascent stages. “I’m not aware of any system either identifying or resolving discrimination that’s actively deployed in any application,” says Nathan Srebro, a computer scientist at the University of Chicago. “Right now, it’s mostly trying to figure things out.”

Computer scientist Suresh Venkatasubramanian agrees. “Every research area has to go through this exploration phase,” he says, “where we may have only very preliminary and half-baked answers, but the questions are interesting.”

Still, Venkatasubramanian, of the University of Utah in Salt Lake City, is optimistic about the future of this important corner of computer and data science. “For a couple of years now … the cadence of the debate has gone something like this: ‘Algorithms are awesome, we should use them everywhere. Oh no, algorithms are not awesome, here are their problems,’ ” he says. But now, at least, people have started proposing solutions, and weighing the various benefits and limitations of those ideas. So, he says, “we’re not freaking out as much.”

If the past is a guide, Hubble’s new trouble won’t doom the space telescope

Hubble’s in trouble again.

The 28-year-old space telescope, in orbit around the Earth, put itself to sleep on October 5 because of an undiagnosed problem with one of its steering wheels. But once more, astronomers are optimistic about Hubble’s chances of recovery. After all, it’s just the latest nail-biting moment in the history of a telescope that has defied all life-expectancy predictions.

There is one major difference this time. Hubble was designed to be repaired by astronauts on the space shuttle. Each time the telescope broke previously, a shuttle mission fixed it. “That we can’t do anymore, because there ain’t no shuttle,” says astronomer Helmut Jenkner of the Space Telescope Science Institute in Baltimore, who is Hubble’s deputy mission head.
The most recent problem started when one of the three gyroscopes that control where the telescope points failed. That wasn’t surprising, says Hubble senior project scientist Jennifer Wiseman of NASA’s Goddard Space Flight Center in Greenbelt, Md. That particular gyroscope had been glitching for about a year. But when the team turned on a backup gyroscope, it didn’t function properly either.

Astronomers are working to figure out what went wrong and how to fix it from the ground. The mood is upbeat, Wiseman says. But even if the gyroscope doesn’t come back online, there are ways to point Hubble and continue observing with as few as one gyroscope.

“This is not a catastrophic failure, but it is a sign of mortality,” says astronomer Robert Kirshner of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. Like cataracts, he says, it’s “a sign of aging, but there’s a very good remedy.”
While we wait for news of how Hubble is faring, here’s a look back at some of its previous hiccups and repair missions.

1990: The blurry mirror
On June 27, 1990, three months after the space telescope launched, astronomers discovered an aberration in Hubble’s primary mirror. Its curvature was off by two micrometers, making the images slightly blurry.

The telescope soldiered on, despite being the butt of jokes on late-night TV. It observed a supernova that exploded in 1987 (SN: 2/18/17, p. 20), measured the distance to a satellite galaxy of the Milky Way and took its first look at Jupiter before the space shuttle Endeavour arrived to fix the mirror in December 1993.
1999: The first gyroscope crisis
On November 13, 1999, Hubble was put into safe mode after the fourth of its six gyroscopes failed, leaving it without the three working gyros necessary to point precisely.
An already planned preventative maintenance shuttle mission suddenly became more urgent. NASA split the mission into two parts to get to the telescope more quickly. The first part became a rescue mission: Astronauts flew the space shuttle Discovery to Hubble that December to install all new gyroscopes and a new computer.

2004: Final shuttle mission canceled
After the space shuttle Columbia disintegrated while re-entering Earth’s atmosphere in 2003, NASA canceled the planned fifth and final Hubble servicing mission. “That could really have been the beginning of the end,” Jenkner says.

The team has known for more than a decade that someday Hubble will have to work with fewer than three gyroscopes. To prepare, Hubble’s operations team deliberately shut down one of the telescope’s gyroscopes in 2005, to observe with only two.

“We’ve been thinking about this possibility for many years,” Wiseman says. “This time will come at some point in Hubble’s mission, either now or later.”

Shutting down the third gyroscope was expected to extend Hubble’s life by only eight months, until mid-2008. In the meantime, two of the telescope’s scientific instruments — the Space Telescope Imaging Spectrograph and the Advanced Camera for Surveys — stopped working due to power supply failures.
2009: New lease on life
Fortunately, NASA restored the final servicing mission, and the space shuttle Atlantis visited Hubble in May 2009 (SN Online: 5/11/09). That mission restored Hubble’s cameras, installed new ones and, crucially, left the space telescope with six new gyroscopes, three for immediate use and three backups. The three gyroscopes still in operation (including the backup that is currently malfunctioning) are of a newer type, and are expected to live five times as long as the older ones, which last four to six years.

The team expects Hubble to continue doing science well into the 2020s and to have years of overlap with its successor, the James Webb Space Telescope, due to launch in 2021. “We are always worried,” says Jenkner, who has been working on Hubble since 1983. “At the same time, we are confident that we will be running for quite some time more.”

People who have a good sense of smell are also good navigators

We may truly be led by our noses. A sense of smell and a sense of navigation are linked in our brains, scientists propose.

Neuroscientist Louisa Dahmani and colleagues asked 57 young people to navigate through a virtual town on a computer screen before being tested on how well they could get from one spot to another. The same young people’s smelling abilities were also scrutinized. After a sniff of one of 40 odor-infused felt-tip pens, participants were shown four words on a screen and asked to choose the one that matched the smell. On these two seemingly different tasks, the superior smellers and the superior navigators turned out to be one and the same, the team found.

Scientists linked both skills to certain spots in the brain: The left orbitofrontal cortex and the right hippocampus were both bigger in the better smellers and better navigators. While the orbitofrontal cortex has been tied to smelling, the hippocampus is known to be involved in both smelling and navigation. A separate group of nine people who had damaged orbitofrontal cortices had more trouble with navigation and smell identification, the researchers report October 16 in Nature Communications. Dahmani, who’s now at Harvard University, did the work while she was at McGill University in Montreal.

A sense of smell may have evolved to help people find their way around, an idea called the olfactory spatial hypothesis. More specific aspects of smell, such as how good people are at detecting faint whiffs, could also be tied to navigation, the researchers suggest.

Beavers are engineering a new Alaskan tundra

In a broad swath of northwestern Alaska, small groups of recent immigrants are hard at work. Like many residents of this remote area, they’re living off the land. But these industrious foreigners are neither prospecting for gold nor trapping animals for their pelts. In fact, their own luxurious fur was once a hot commodity. Say hello to Castor canadensis, the American beaver.

Much like humans, beavers can have an oversized effect on the landscape (SN: 8/4/18, p. 28). People who live near beaver habitat complain of downed trees and flooded land. But in areas populated mostly by critters, the effects can be positive. Beaver dams broaden and deepen small streams, forming new ponds and warming up local waters. Those beaver-built enhancements create or expand habitats hospitable to many other species — one of the main reasons that researchers refer to beavers as ecosystem engineers.
Beavers’ tireless toils — to erect lodges that provide a measure of security against land-based predators and to build a larder of limbs, bark and other vegetation to tide them over until spring thaw — benefit the wildlife community.

A couple of decades ago, the dam-building rodents were hard to find in northwestern Alaska. “There’s a lot of beaver around here now, a lot of lodges and dams,” says Robert Kirk, a long-time resident of Noatak, Alaska — ground zero for much of the recent beaver expansion. His village of less than 600 people is the only human population center in the Noatak River watershed.
Beavers may be infiltrating the region for the first time in recent history as climate change makes conditions more hospitable, says Ken Tape, an ecologist at the University of Alaska Fairbanks. Or maybe the expansion is a rebound after trapping reduced beaver numbers to imperceptible levels in the early 1900s, he says. Nobody knows for sure.
And the full range of changes the rodents are generating in their new Arctic ecosystems hasn’t been studied in detail. But from what Tape and a few other researchers can tell so far, the effects could be profound, and most of them will probably be beneficial for other species.

In the areas newly colonized by beavers, “some really interesting processes are unfolding,” says John Benson, a wildlife ecologist at the University of Nebraska–Lincoln who studies wolves and coyotes, among other beaver predators. “I’d expect some pretty dramatic changes to the areas they take over.”

Beavers’ biggest effects on Arctic ecosystems may come from the added biodiversity within the ponds they create, says James Roth, an ecologist at the University of Manitoba in Winnipeg, Canada. These “oases on the tundra” will not only provide permanent habitat for fish and amphibians, they’ll serve as seasonal stopover spots for migratory waterfowl. Physical changes to the environment could be just as dramatic, thawing permafrost decades faster than climate change alone would.

The Arctic tundra isn’t the first place beavers have made their mark. Changes seen in beaver-rich areas at lower latitudes may offer some clues to the future of the Alaskan tundra, home to moose, caribou and snowshoe hares.

North through Alaska
As Earth’s climate has warmed in recent years, some plants and animals — such as the mountain-dwelling pika, a small mammal related to rabbits — have fled the heat by moving to higher altitudes (SN: 6/30/12, p. 16). Others, from moose and snowshoe hares to bull sharks and bottlenosed dolphins, have moved toward the poles to take advantage of newly hospitable ecosystems (SN: 5/26/18, p. 9).

Arctic environments have changed more than most, Tape says. Polar regions are warming much faster than other parts of the world, he says. Studies estimate that average temperatures in the Arctic have risen about 1.8 degrees Celsius since 1900, about 60 percent faster than the Northern Hemisphere as a whole.

This warming is bringing great change to the Alaskan tundra, Tape says. Winter snow cover doesn’t persist as long as it used to. Streams freeze later in the fall and melt earlier in the spring. Permafrost, the perennially frozen ground, is thawing, allowing shrubs to take hold. New species are moving in, few more noticeable than the beaver. The dams they build and the ponds they create are hard to miss; these newly formed bodies of water even show up on satellite images.
Beavers have infiltrated three watersheds in northwestern Alaska in the last couple of decades. Together these drainages cover more than 18,000 square kilometers — an area larger than Connecticut.

On images of the region collected by Landsat satellites in summer months from 1999 through 2014, Tape and colleagues looked for new areas of wetness that covered at least half a hectare (1.24 acres), or about four times the area covered by an Olympic swimming pool.

The researchers then used newer, high-resolution satellite images to verify the presence of beaver ponds. Available aerial photographs taken before 1999 didn’t pick up any signs of beaver activity in the area, Tape says. Kirk notes that beavers were present in the Lower Noatak River watershed before 1999, but in vastly smaller numbers than they are today.

Based on the images at hand, the researchers found 56 new complexes of beaver ponds in the area over the 16-year study period. On average, beavers expanded their range about 8 kilometers per year, Tape and colleagues reported in the October Global Change Biology.

“This is remarkable, but it shouldn’t come as a surprise,” Tape says. “Beavers are engineers that work every day, all summer long.”

The animals have also made their way into western Alaska’s Seward Peninsula and the northern foothills of the Brooks Range, mountains that stretch east to west across northern Alaska, the researchers found. If the animals’ recent rate of expansion continues, beavers could spread throughout Alaska’s North Slope in the next 20 to 40 years, the researchers say.
The Lower Noatak River watershed, one of the areas that Tape and colleagues studied, is mostly tundra. By definition that means treeless plain. But the area also is about 3.5 percent forest, mainly concentrated along the river and its tributaries. The watersheds just to the north are completely tundra. So how do the beavers there build dams without trees? In those areas, Tape says, the animals construct smaller dams than they might at lower latitudes, using the branches, twigs and foliage of willows and other shrubs.

“I never expected to see beavers on the tundra,” Roth says, intrigued by Tape’s team’s findings.

Happy place
The beavers are not only persisting on the tundra, they’re thriving. The moderately sized streams and flat terrain provide ideal habitat. And once they gain a foothold, these industrious creatures set about making improvements that are probably an overall plus for myriad other species, Tape says.

For instance, frigid conditions in the region cause shallow streams to freeze solid in winter. But when a beaver builds a dam, the water that gathers upstream of the structure becomes deep enough to remain liquid below a sheet of ice that provides insulation from the chilly winter air.

That persistent liquid lets the beavers move about under the ice even in the depths of winter. The water gives them a place to stockpile food, too, Tape notes. That constant supply of liquid water also provides year-round habitat for fish, amphibians and even some insects in their larval stages. None of these species are part of the beaver’s diet, but they could serve as food for other creatures. “All that diversity would add whole new layers to food webs,” Roth says.
Ecological changes could extend well beyond the beaver pond. The water impounded by beaver dams sometimes finds its way past the dam, Tape says. The satellite photos that he and his colleagues analyzed revealed that some stretches of river just downstream of beaver dams now remain unfrozen even in winter. That flowing water probably spills over the dam or around its edges, but some may seep through or under the structure.

That liquid water also helps thaw the underlying permafrost. Previous studies have shown that even a shallow pond less than a meter deep can boost sediment temperature by as much as 10 degrees C above the locale’s average air temperature. That kind of warming causes permafrost to thaw decades earlier than it would without the pond. Although scientists are concerned that permafrost thawing will release stored carbon into the atmosphere, no one yet knows how that thawing will affect the balance of carbon emissions to the atmosphere (SN: 1/21/17, p. 15).

Field studies at lower latitudes hint that beavers will probably bring about other ecological changes, too, Tape says, which might shift over time. For example, moose and snowshoe hares eat the same willow shrubs that beavers consume and build their dams with. And ptarmigan, a crow-sized bird in the grouse family, rely on those shrubs for cover, especially during winter. So immediately after beavers move into an area and start clearing that brush, populations of those species may decline.

But the long-term benefits will probably outweigh the short-term impacts on those species, says Matthew Mumma, an ecologist at the University of Northern British Columbia in Prince George, Canada. Permafrost that thaws along the fringe of a beaver pond will probably boost numbers of the shrubs that these species depend on, Tape and colleagues suggest. So in the long run, the overall numbers of moose, hares and ptarmigan may rise.
Likewise, Mumma notes, beavers could provide big benefits for salmon and other migratory fish. Beaver dams were once thought to impede the travel of such fish upstream or to reduce the number of places where fish could spawn. But studies in the western United States, among other places, have suggested that the presence of beavers actually helps boost populations of salmon. For instance, the aquatic grasses in beaver ponds offer hiding places for young fish. Also, the languid ponds provide a resting spot for adult fish migrating upstream to spawning sites.

Better-fed wolves
Boosting herbivore populations on the tundra would be a boon for local predators, of course. Larger numbers of snowshoe hares, for example, could feed the populations of the arctic foxes that prey upon them, Mumma says. And more moose could mean better-fed wolves.

Beavers themselves make a meal for bears, wolverines and wolves. In areas where wolves and beavers coexist, the rodents make up as much as 30 percent of the wolf diet, Roth says. The presence of a more reliable and more diversified food supply could lead wolves to settle down in smaller territories rather than migrating widely.

Benson and his team have already seen the impact of beaver populations on wolves, coyotes and wolf-coyote hybrids, which they tracked in Ontario’s Algonquin Provincial Park from August 2002 until April 2011.

In that time, 37 of the 105 pups that had been tagged with radio transmitters died, Benson says. The second-highest cause of death was starvation. Every one of those starvation-related deaths took place in the western portion of the park, which has relatively rugged terrain and few beavers. In the eastern portion of the park, where beavers are plentiful, none of the pups starved, Benson and his team reported in 2013 in Biological Conservation.
In a separate study, Mumma and colleagues analyzed aerial surveys of beaver populations within seven broad regions in northeastern British Columbia in 2011 and 2012. Proximity to human activity, such as roadbuilding or oil and gas exploration, didn’t seem to affect beavers’ decisions to build at a particular locale. Nor did the presence of wolves in the area, the researchers reported in February in the Canadian Journal of Zoology.

Although having wolves nearby seemed to affect the number of beavers present (quite possibly via consumption), the predators didn’t seem to scare the rodents away entirely, Mumma notes.

More beavers, fewer sick moose
Whether the presence of beavers on the Alaskan tundra ends up boosting the numbers of moose and other ungulates, the dam builders could have a big, though indirect, impact on the hoofed browsers’ health.

Roth and parasitologist Olwyn Friesen, now at the University of Otago in Dunedin, New Zealand, recently studied how a wolf’s diet affects the parasites it carries — which can then be passed on to other creatures in the environment. The researchers analyzed 32 wolf carcasses collected by provincial conservation officers in southeastern Manitoba in 2011 and 2012. Those remains came from hunters, trappers and roadkill.

In particular, the team tallied the parasites in the wolves’ lungs, liver, heart and intestines. The group also measured the ratio of carbon-12 and carbon-13 isotopes in the wolf tissues, which provided insight into what sorts of prey each individual wolf had eaten near the time those tissues formed.

Typical prey for wolves in this area are, from most consumed to least: white-tailed deer, snowshoe hare, moose, beaver and caribou, Roth says. Each of these creatures has a distinct ratio of the two carbon isotopes in its tissues. That ratio gets passed along to the predators that eat them.

The wolves with diets heavier in beaver had, on average, fewer intestinal parasites called cestodes. (Tapeworms are the best-known members of that group.)

The implications are clear, Roth and Friesen reported in 2016 in the Journal of Animal Ecology. Beaver-eating wolves are much less likely to excrete parasites into the environment where they could be picked up by ungulates, such as moose and caribou. Wolves don’t seem to be detrimentally affected by such parasites. But ungulates that become infected — especially older animals — may have reduced lung capacity, making escape from predators more difficult.
A new resource
Although beavers may speed changes in the Arctic, those effects may still take a long time to manifest.

Despite the proliferation of beavers in the Lower Noatak River watershed in the last couple of decades, “things around here grow so slowly, they’re not really having a long-term impact yet,” says local resident Kirk. Shrubs haven’t yet noticeably spread into any areas of permafrost that have been thawed by waters impounded by recent dam-building.

Nor have the beavers made much of a mark on the local economy, he says. “There’s a lot of people harvesting them now, since there’s so many of them around,” he adds. However, the pelts from those rodents are so far used by the trappers themselves, not sold to others.

The beavers haven’t become a big draw on the local food scene, either. Even connoisseurs say the meat has a gamey, greasy taste. As Kirk puts it, “we haven’t adjusted our taste buds to them yet.”

To assemble a Top 10 list, Science News starts in June

When most people were thinking about summer vacation, we were contemplating the biggest science stories of 2018.

Yep, it takes more than six months of effort to put together Science News’ annual issue on the Top 10 science stories of the year. 2018 was no different, though we were hit with some exciting twists that had us revisiting our decisions just a week or so before closing the issue.

The early discussions tend to be more about themes — climate emerged as a big one, even before the recent reports linking increased severity of hurricanes, floods and wildfires to climate change. Reporters lobby to get the stories that intrigued them the most or the discoveries that mark critical turning points onto the short list.
By August, our editors have identified contenders for the top of the list and are assigning stories so writers can get to work. We try to keep the choices under wraps; it’s part of the fun. All of the stories are assigned by October 1. By then, we’re also planning illustrations, graphics and bonus items, like our much-loved list of favorite science books of the year. By Thanksgiving, we’ve nailed down the “map” for the magazine, including story order and page designs.

And then news happens. This year was particularly rich in breaking news that had us reshuffling the deck. That included the discovery of an impact crater hidden under Greenland’s ice, which some scientists argue contributed to the die-off of the mammoths. That story broke on November 14 (SN: 12/8/18, p. 6).

Then there was the U.S. report on domestic climate change impacts, which was released the Friday after Thanksgiving. A few days later came an even bigger surprise: A Chinese scientist claimed that he had created the world’s first gene-edited babies. The announcement unleashed a torrent of criticism from scientists around the world.
So what would you pick as the No. 1 science story of the year? After much discussion, our editorial team decided to stick with our original choice of climate change, considering the extraordinary amount of new data released this year and the import of those findings. The Chinese babies elbowed their way into the No. 2 slot. Even though the scientist’s claim may prove false, the technology has clearly advanced to the point where scientists and governments must act to set ethical standards for human gene editing.

Note to our readers: The magazine will be taking a break over the holidays. The next issue you receive will be dated January 19. But we’ll still be hard at work reporting on developments in science, medicine and technology; visit us at www.sciencenews.org for the latest. In 2019, we’ll publish four double issues, in May, July, October and December. These special issues include more features and in-depth coverage of topics like last summer’s “Water woes,” which included reporting from Mumbai, India. We love having the opportunity to dig deep on pressing issues and hope you enjoy the results. Thank you for being part of the Science News community. We wish you joyous holidays and an evidence-based new year.

Pregnancy depression is on the rise, a survey suggests

Today’s young women are more likely to experience depression and anxiety during pregnancy than their mothers were, a generation-spanning survey finds.

From 1990 to 1992, about 17 percent of young pregnant women in southwest England who participated in the study had signs of depressed mood. But the generation that followed, including these women’s daughters and sons’ partners, fared worse. Twenty-five percent of these young women, pregnant in 2012 to 2016, showed signs of depression, researchers report July 13 in JAMA Network Open.
“We are talking about a lot of women,” says study coauthor Rebecca Pearson, a psychiatric epidemiologist at Bristol University in England.

Earlier studies also had suggested that depression during and after pregnancy is relatively common (SN: 3/17/18, p. 16). But those studies are dated, Pearson says. “We know very little about the levels of depression and anxiety in new mums today,” she says.

To measure symptoms of depression and anxiety, researchers used the Edinburgh Postnatal Depression Scale — 10 questions, each with a score of 0 to 3, written to reveal risk of depression during and after pregnancy. A combined score of 13 or above signals high levels of symptoms.
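As a simplified illustration of how such a screen is tallied (the real Edinburgh scale has fixed question wording and some reverse-scored items, which are omitted here), ten answers worth 0 to 3 points each are summed and compared with the study’s cutoff of 13.

```python
# Simplified tally for a 10-item screen scored 0-3 per item, using the study's
# cutoff of 13. Real EPDS wording and reverse-scoring are omitted; the example
# answers below are invented for illustration.
CUTOFF = 13

def flag_high_symptoms(answers):
    if len(answers) != 10 or not all(0 <= a <= 3 for a in answers):
        raise ValueError("expected ten answers, each scored 0-3")
    return sum(answers) >= CUTOFF

example = [2, 1, 2, 1, 1, 2, 1, 2, 1, 1]   # sums to 14, above the cutoff
print(flag_high_symptoms(example))          # True
```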

From 1990 to 1992, 2,390 women between the ages of 19 and 24 took the survey while pregnant. Of these women, 408 — or 17 percent — scored 13 or higher, indicating worrisome levels of depression or anxiety.
When researchers surveyed the second-generation women, including both daughters of the original participants and sons’ partners ages 19 to 24, the numbers were higher. Of 180 women pregnant in 2012 to 2016, 45 of them — or 25 percent — scored 13 or more. It’s not clear whether the findings would be similar for pregnant women who are older than 24 or younger than 19.
That generational increase in young women scoring high for symptoms of depression comes in large part from higher scores on questions that indicate anxiety and stress, Pearson says. Today’s pregnant women reported frequent feelings of “unnecessary panic or fear” and “things getting too much,” for instance.

Those findings fit with observations by psychiatrist Anna Glezer of the University of California, San Francisco. “A very significant portion of my patients present with their primary problem as anxiety, as opposed to a low mood,” says Glezer, who has a practice in Burlingame, Calif.

The study’s cutoff score for indicating high depression risk was 13, but Pearson points out that a lower score can signal mild depression. Women who score an 8 or 9 “still aren’t feeling great,” she says. It’s likely that even more pregnant women might have less severe, but still unpleasant symptoms, she says.

The researchers also found that depression moves through families. Daughters of women who were depressed during pregnancy were about three times as likely to be depressed during their own pregnancy as women whose mothers weren’t depressed. That elevated risk “was news to me,” says obstetrician and gynecologist John Keats, who chaired a group of the American College of Obstetricians and Gynecologists that studied maternal mental health. Asking whether a patient’s mother experienced depression or anxiety while pregnant might help identify women at risk, he says.

Negative effects of stress can be transmitted during pregnancy in ways that scientists are just beginning to understand, and stopping this cycle is important (SN Online: 7/9/18). “You’re not only talking about the effects on a patient and her family, but potential effects on her growing fetus and newborn,” says Keats, of the David Geffen School of Medicine at UCLA.

Although researchers don’t yet know what’s behind the increase, they have some guesses. More mothers work today than in the 1990s, and tougher financial straits push women to work inflexible jobs. More stress, less sleep and more time sitting may contribute to the difference.

Time on social media may also increase feelings of isolation and anxiety, Glezer says. Social media can help new moms get information, but that often comes with “a whole lot of comparisons, judgments and expectations.”