Ceres harbors homegrown organic compounds

NASA’s Dawn spacecraft has detected organic compounds on Ceres — the first clear-cut detection of organics on an object in the asteroid belt between Mars and Jupiter.

This material probably originated on the dwarf planet itself, the researchers report in the Feb. 17 Science. The discovery of organic compounds adds to the growing body of evidence that Ceres may have once had a habitable environment.

“We’ve come to recognize that Ceres has a lot of characteristics that are intriguing for those looking at how life starts,” says Andy Rivkin, a planetary astronomer at the Johns Hopkins University Applied Physics Laboratory in Laurel, Md., who was not involved in the study.
The Dawn probe has previously detected salts, ammonia-rich clays and water ice on Ceres, which together indicate hydrothermal activity, says study coauthor Carol Raymond, a planetary scientist at NASA’s Jet Propulsion Laboratory in Pasadena, Calif.

For life to begin, you need elements like carbon, hydrogen, nitrogen and oxygen, as well as a source of energy. Both the hydrothermal activity and the presence of organics point toward Ceres having once had a habitable environment, Raymond says.

“If you have an abundance of those elements and you have an energy source,” she says, “then you’ve created sort of the soup from which life could have formed.” But study coauthor Lucy McFadden, a planetary scientist at NASA’s Goddard Space Flight Center in Greenbelt, Md., stresses that the team has not actually found any signs of life on Ceres.

Evidence of Ceres’ organic material comes from areas near Ernutet crater, where Dawn picked up a spectral “fingerprint” consistent with organics. The pattern of wavelengths of light absorbed and reflected from these areas is similar to the pattern seen in hydrocarbons on Earth such as kerite and asphaltite. But without a sample from the surface, the team can’t say definitively what organic material is present or how it formed, says study coauthor Harry McSween, a geologist at the University of Tennessee.
The team suspects that the organics formed within Ceres’ interior and were brought to the surface by hydrothermal activity. An alternative idea — that a space rock that crashed into Ceres brought the material — is unlikely, the researchers say, because the concentration of organics is so high. An impact would have mixed organic compounds across the surface, diluting the concentration.

Detecting organics on Ceres also has implications for how life arose on Earth, McSween says. Some researchers think that life was jump-started by asteroids and other space rocks that delivered organic compounds to the planet. Finding such organic matter on Ceres “adds some credence to that idea,” he says.

Data-driven crime prediction fails to erase human bias

Big data is everywhere these days and police departments are no exception. As law enforcement agencies are tasked with doing more with less, many are using predictive policing tools. These tools feed various data into algorithms to flag people likely to be involved with future crimes or to predict where crimes will occur.

In the years since Time magazine named predictive policing one of the 50 best inventions of 2011, its popularity has grown. Twenty U.S. cities, including Chicago, Atlanta, Los Angeles and Seattle, are using a predictive policing system, and several more are considering it. But with the uptick in use has come a growing chorus of caution. Community activists, civil rights groups and even some skeptical police chiefs have raised concerns that predictive data approaches may unfairly target some groups of people more than others.

New research by statistician Kristian Lum provides a telling case study. Lum, who leads the policing project at the San Francisco-based Human Rights Data Analysis Group, looked at how the crime-mapping program PredPol would perform if put to use in Oakland, Calif. PredPol, which purports to “eliminate profiling concerns,” takes data on crime type, location and time and feeds it into a machine-learning algorithm. The algorithm, originally developed to predict aftershocks following an earthquake, trains itself with the police crime data and then predicts where future crimes will occur.

Lum was interested in bias in the crime data — not political or racial bias, just the ordinary statistical kind. While this bias knows no color or socioeconomic class, Lum and her HRDAG colleague William Isaac demonstrate that it can lead to policing that unfairly targets minorities and those living in poorer neighborhoods.

By applying the algorithm to 2010 data on drug crime reports for Oakland, the researchers generated a predicted rate of drug crime on a map of the city for every day of 2011. The researchers then compared the data used by the algorithm — drug use documented by the police — with a record of overall drug use, whether recorded by police or not. To get this ground truth, the team combined public health data from the 2011 National Survey on Drug Use and Health with demographic data from the city of Oakland to estimate drug use among all city residents.
In this public health-based map, drug use is widely distributed across the city. In the predicted drug crime map, it is not. Instead, drug use deemed worthy of police attention is concentrated in neighborhoods in West Oakland and along International Boulevard, two predominately low-income and nonwhite areas.
Predictive policing approaches are often touted as eliminating concerns about police profiling. But rather than correcting bias, the predictive model exacerbated it, Lum said during a panel on data and crime at the American Association for the Advancement of Science annual meeting in Boston in February. While estimates of drug use are pretty even across race, the algorithm would direct Oakland police to locations that would target black people at roughly twice the rate of whites. A similar disparity emerges when analyzing by income group: Poorer neighborhoods get targeted.
And a troubling feedback loop emerges when police are sent to targeted locations. If police find slightly more crime in an area because that’s where they’re concentrating patrols, these crimes become part of the dataset that directs where further patrolling should occur. Bias becomes amplified, hot spots hotter.
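
To see how such a loop can feed on itself, consider a toy simulation — not PredPol’s actual algorithm, just a sketch under simple assumptions: two neighborhoods with identical true rates of drug activity, patrols sent wherever the records show the most crime, and crimes entering the dataset only where officers are present to see them.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME underlying rate of drug activity.
TRUE_RATE = 0.3                   # chance one patrol observes a crime on a given day
recorded = {"A": 12, "B": 10}     # slight historical imbalance in the records

for day in range(365):
    # The "prediction": flag the neighborhood with the most recorded crime
    # and concentrate patrols there (3 patrols vs. 1).
    hot = max(recorded, key=recorded.get)
    patrols = {n: (3 if n == hot else 1) for n in recorded}
    # Crimes only enter the dataset where police are present to observe them.
    for n, count in patrols.items():
        recorded[n] += sum(random.random() < TRUE_RATE for _ in range(count))

print(recorded)  # the initially "hotter" area ends up with several times more
                 # recorded crime, even though the true rates were identical
```

After a simulated year, the neighborhood that started with a couple of extra entries dominates the records, even though nothing about the underlying behavior differed.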

There’s nothing wrong with PredPol’s algorithm, Lum notes. Machine learning algorithms learn patterns and structure in data. “The algorithm did exactly what we asked; it learned patterns in the data,” she says. The danger is in thinking that predictive policing will tell you about patterns in the occurrence of crime. It’s really telling you about patterns in police records.

Police aren’t tasked with collecting random samples, nor should they be, says Lum. And that’s all the more reason why departments should be transparent and vigilant about how they use their data. In some ways, PredPol-guided policing isn’t so different from old-fashioned pins on a map.

For her part, Lum would prefer that police stick to these timeworn approaches. With pins on a map, the what, why and where of the data are very clear. The black box of an algorithm, on the other hand, lends undue legitimacy to the police targeting certain locations while simultaneously removing accountability. “There’s a move toward thinking machine learning is our savior,” says Lum. “You hear people say, ‘A computer can’t be racist.’”

The use of predictive policing may be costly, both literally and figuratively. The software programs can run from $20,000 up to $100,000 per year for larger cities. It’s harder to put numbers on the human cost of over-policing, but the toll is real. Increased police scrutiny can lead to poor mental health outcomes for residents and undermine relationships between police and the communities they serve. Big data doesn’t help when it’s bad data.

For kids, daily juice probably won’t pack on the pounds

I’ve been to the playground enough times to know a juicy parenting controversy when I see (or overhear) one. Bed-sharing, breastfeeding and screen time are always hot-button issues. But I’m not talking about any of those. No, I’m talking about actual juice.

Some parents see juice as a delicious way to get vitamins into little kids. Others see juice as a gateway drug to a sugar-crusted, sedentary lifestyle, wrapped up in a kid-friendly box. No matter where you fall on the juice spectrum, you can be sure there are parents to either side of you. (Disclosure: My kids don’t drink much juice, simply because the people who buy their groceries aren’t all that into it. And juice is heavy.)

Scientific studies on the effects of juice have been somewhat sparse, allowing deeply held juice opinions to run free. One of the chief charges against juice is that it’s packed with sugar. An 8-ounce serving of grape juice, even with no sugar added, contains 36 grams of sugar. That tops Coca-Cola, which delivers 26 grams of sugar in 8 ounces. And all of those extra sweet calories can lead to extra weight.

A recent review of eight studies on juice and children’s body weight, published online March 23 in Pediatrics, takes a look at this weight concern. It attempts to clarify whether kids who drink 100 percent fruit juice every day are at greater risk of gaining weight. After sifting through the studies’ data, researchers arrived at an answer that will please pro-juicers: Not really.

“Our study did not find evidence that consuming one serving per day of 100 percent fruit juice influenced BMI to a clinically important degree,” says study coauthor Brandon Auerbach of the University of Washington in Seattle.

The analysis found that for children ages 1 to 6, one daily serving of juice (6 to 8 ounces) was associated with a sliver of an increase in body mass index, or BMI. Consider a 5-year-old girl who started out right on the 50th percentile for weight and BMI. After a year of daily juice, this girl’s BMI may have moved from the 50th to the 52nd or 54th percentile, corresponding to a weight increase of 0.18 to 0.33 pounds over the year. That amount “isn’t trivial, but it’s not enough on its own to lead to poor health,” Auerbach says.

The results, of course, aren’t the final word. The analysis was reviewing data from other studies, and those studies came with their own limitations. For one thing, the studies didn’t assign children to receive or not receive juice. Instead, researchers measured the children’s juice-drinking behavior that was already under way and tried to relate that to their weight. That approach means that it’s possible that differences other than juice consumption could influence the results.
It’s important to note the distinction here between the 100 percent fruit juice in the studies and fruit cocktails, which are fruit-flavored drinks that often come with lots of added sugar. The data on those drinks is more damning in terms of weight gain and the risk of cavities, Auerbach says.

Also worth noting: The American Academy of Pediatrics recommends that kids between ages 1 and 6 get only 4 to 6 ounces of juice a day. That’s a smaller amount than many of the kids in the studies received. And the AAP recommends babies younger than 6 months get no juice at all.

In general, whole fruits, such as apples and oranges, are better than juice because they provide fiber and other nutrients absent from juice. (Bonus for toddlers: Oranges are fun to peel. Bummer for parents: Doing so makes a sticky mess.)

Still, the new analysis may ease some guilt around letting the juice flow. And it can enable parents to save their worries for more harmful things, of which there are plenty.

Male cockatoos have the beat

Like 1980s hair bands, male cockatoos woo females with flamboyant tresses and killer drum solos.

Male palm cockatoos (Probosciger aterrimus) in northern Australia refashion sticks and seedpods into tools that the animals use to bang against trees as part of an elaborate visual and auditory display designed to seduce females. These beats aren’t random, but truly rhythmic, researchers report online June 28 in Science Advances. Aside from humans, the birds are the only known animals to craft drumsticks and rock out.
“Palm cockatoos seem to have their own internalized notion of a regular beat, and that has become an important part of the display from males to females,” says Robert Heinsohn, an evolutionary biologist at the Australian National University in Canberra. In addition to drumming, mating displays entail fluffed up head crests, blushing red cheek feathers and vocalizations. A female mates only every two years, so the male engages in such grand gestures to convince her to put her eggs in his hollow tree nest.

Heinsohn and colleagues recorded more than 131 tree-tapping performances from 18 male palm cockatoos in rainforests on the Cape York Peninsula in northern Australia. Each had his own drumming signature. Some tapped faster or slower and added their own flourishes. But the beats were evenly spaced — meaning they constituted a rhythm rather than random noise.
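
What “evenly spaced” means in practice: measure the gaps between successive taps, and a true rhythm has gaps that cluster tightly around a common value, while random tapping does not. Here is a minimal sketch of that idea — illustrative only, not the team’s actual analysis, which used more formal statistical tests.

```python
import statistics

def beat_regularity(tap_times):
    """Coefficient of variation of the gaps between taps.
    Values near 0 mean evenly spaced (rhythmic); large values mean erratic."""
    gaps = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

# A steady drummer vs. randomly spaced taps (times in seconds, made up)
steady = [0.0, 0.52, 1.01, 1.55, 2.04, 2.57]
random_taps = [0.0, 0.21, 1.3, 1.45, 2.9, 3.0]

print(beat_regularity(steady))       # small value: a real rhythm
print(beat_regularity(random_taps))  # much larger: just noise
```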

From bonobos to sea lions, other species have shown a propensity for learning and recognizing beats. And chimps drum with their hands and feet, sometimes incorporating trees and stones, but they lack a regular beat.

The closest analogs to cockatoo drummers are human ones, Heinsohn says, though humans typically generate beats as part of a group rather than as soloists. Still, the similarity hints at the universal appeal of a solid beat that may underlie music’s origins.

Quantum tunneling takes time, new study shows

Quantum particles can burrow through barriers that should be impenetrable — but they don’t do it instantaneously, a new experiment suggests.

The process, known as quantum tunneling, takes place extremely quickly, making it difficult to confirm whether it takes any time at all. Now, in a study of electrons escaping from their atoms, scientists have pinpointed how long the particles take to tunnel out: around 100 attoseconds, or 100 billionths of a billionth of a second, researchers report July 14 in Physical Review Letters.
In quantum tunneling, a particle passes through a barrier despite not having enough energy to cross it. It’s as if someone rolled a ball up a hill but didn’t give it a hard enough push to reach the top, and yet somehow the ball tunneled through to the other side.

Although scientists knew that particles could tunnel, until now, “it was not really clear how that happens, or what, precisely, the particle does,” says physicist Christoph Keitel of the Max Planck Institute for Nuclear Physics in Heidelberg, Germany. Theoretical physicists have long debated between two possible options. In one model, the particle appears immediately on the other side of the barrier, with no initial momentum. In the other, the particle takes time to pass through, and it exits the tunnel with some momentum already built up.

Keitel and colleagues tested quantum tunneling by blasting argon and krypton gas with laser pulses. Normally, the pull of an atom’s positively charged nucleus keeps electrons tightly bound, creating an electromagnetic barrier to their escape. But, given a jolt from a laser, electrons can break free. That jolt weakens the electromagnetic barrier just enough that electrons can leave, but only by tunneling.

Although the scientists weren’t able to measure the tunneling time directly, they set up their experiment so that the angle at which the electrons flew away from the atom would reveal which of the two theories was correct. The laser’s light was circularly polarized — its electromagnetic waves rotated in time, changing the direction of the waves’ wiggles. If the electron escaped immediately, the laser would push it in one particular direction. But if tunneling took time, the laser’s direction would have rotated by the time the electron escaped, so the particle would be pushed in a different direction.
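
The logic of this measurement can be put in rough numbers: the circularly polarized field sweeps through 360 degrees once per optical cycle, so any delay in escaping shows up as a rotation of the emission angle. The back-of-the-envelope conversion below uses assumed example values — an 800-nanometer laser and a hypothetical 13.5-degree offset — not the experiment’s actual parameters.

```python
import math

# Assumed example numbers, not the experiment's actual values.
wavelength_m = 800e-9                 # a common near-infrared laser wavelength
period_s = wavelength_m / 3.0e8       # one full rotation of the circular polarization
angle_offset_deg = 13.5               # hypothetical measured emission-angle offset

# The field sweeps 360 degrees in one optical period,
# so an angular offset maps linearly onto a time delay.
delay_s = (angle_offset_deg / 360.0) * period_s
print(f"{delay_s * 1e18:.0f} attoseconds")   # ~100 attoseconds for these assumed numbers
```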

Comparing argon and krypton let the scientists cancel out experimental errors, leading to a more sensitive measurement that was able to distinguish between the two theories. The data matched predictions based on the theory that tunneling takes time.
The conclusion jibes with some physicists’ expectations. “I’m pretty sure that the tunneling time cannot be instantaneous, because at the end, in physics, nothing can be instantaneous,” says physicist Ursula Keller of ETH Zurich. The result, she says, agrees with an earlier experiment carried out by her team.

Other scientists still think instantaneous tunneling is possible. Physicist Olga Smirnova of the Max Born Institute in Berlin notes that Keitel and colleagues’ conclusions contradict previous research. In theoretical calculations of tunneling in very simple systems, Smirnova and colleagues found no evidence of tunneling time. The complexity of the atoms studied in the new experiment may have led to the discrepancy, Smirnova says. Still, the experiment is “very accurate and done with great care.”

Although quantum tunneling may seem an esoteric concept, scientists have harnessed it for practical purposes. Scanning tunneling microscopes, for instance, use tunneling electrons to image individual atoms. For such an important fundamental process, Keller says, physicists really have to be certain they understand it. “I don’t think we can close the chapter on the discussion yet,” she says.

Telling children they’re smart could tempt them to cheat

It’s hard not to compliment kids on certain things. When my little girls fancy themselves up in tutus, which is every single time we leave the house, people tell them how pretty they are. I know these folks’ intentions are good, but an abundance of compliments on clothes and looks sends messages I’d rather my girls didn’t absorb at ages 2 and 4. Or ever, for that matter.

Our words, often spoken casually and without much thought, can have a big influence on little kids’ views of themselves and their behaviors. That’s very clear from two new studies on children who were praised for being smart.

The studies, conducted in China on children ages 3 and 5, suggest that directly telling kids they’re smart, or that other people think they’re intelligent, makes them more likely to cheat to win a game.

In the first study, published September 12 in Psychological Science, 150 3-year-olds and 150 5-year-olds played a card guessing game. An experimenter hid a card behind a barrier and the children had to guess whether the card’s number was greater or less than six. In some early rounds of the game, a researcher told some of the children, “You are so smart.” Others were told, “You did very well this time.” Still others weren’t praised at all.

Just before the kids guessed the final card in the game, the experimenter left the room, but not before reminding the children not to peek. A video camera monitored the kids as they sat alone.

The children who had been praised for being smart were more likely to peek, either by walking around or leaning over the barrier, than the children in the other two groups, the researchers found. Among 3-year-olds who had been praised for their performance (“You did very well this time”) or not praised at all, about 40 percent cheated. But the share of cheaters jumped to about 60 percent among the 3-year-olds who had been praised as smart. Similar, but slightly lower, numbers were seen for the 5-year-olds.

In another paper, published July 12 in Developmental Science, the same group of researchers tested whether having a reputation for smarts would have an effect on cheating. At the beginning of a similar card game played with 3- and 5-year-old Chinese children, researchers told some of the kids that they had a reputation for being smart. Other kids were told they had a reputation for cleanliness, while a third group was told nothing about their reputation. The same phenomenon emerged: Kids told they had a reputation for smarts were more likely than the other children to peek at the cards.
The kids who cheated probably felt more pressure to live up to their smart reputation, and that pressure may promote winning at any cost, says study coauthor Gail Heyman. She’s a psychologist at the University of California, San Diego and a visiting professor at Zhejiang Normal University in Jinhua, China. Other issues might be at play, too, she says, “such as giving children a feeling of superiority that gives them a sense that they are above the rules.”

Previous research has suggested that praising kids for their smarts can backfire in a different way: It might sap their motivation and performance.

Heyman was surprised to see that children as young as 3 shifted their behavior based on the researchers’ comments. “I didn’t think it was worth testing children this age, who have such a vague understanding of what it means to be smart,” she says. But even in these young children, words seemed to have a powerful effect.

The results, and other similar work, suggest that parents might want to curb the impulse to tell their children how smart they are. Instead, Heyman suggests, keep praise specific: “You did a nice job on the project,” or “I like the solution you came up with.” Likewise, comments that focus on the process are good choices: “How did you figure that out?” and “Isn’t it fun to struggle with a hard problem like that?”

It’s unrealistic to expect parents — and everyone else who comes into contact with children — to always come up with the “right” compliment. But I do think it’s worth paying attention to the way we talk with our kids, and what we want them to learn about themselves. These studies have been a good reminder for me that comments made to my kids — by anyone — matter, perhaps more than I know.

Actress Hedy Lamarr laid the groundwork for some of today’s wireless tech

Once billed as “the most beautiful woman in the world,” actress Hedy Lamarr is often remembered for Golden Age Hollywood hits like Samson and Delilah. But Lamarr was gifted with more than just a face for film; she had a mind for science.

A new documentary, Bombshell: The Hedy Lamarr Story, spotlights Lamarr’s lesser-known legacy as an inventor. The film explores how the pretty veneer that Lamarr shrewdly used to advance her acting career ultimately trapped her in a life she found emotionally isolating and intellectually unfulfilling.
Lamarr, born in Vienna in 1914, first earned notoriety for a nude scene in a 1933 Czech-Austrian film. Determined to rise above that cinematic scarlet letter, Lamarr fled her unhappy first marriage and sailed to New York in 1937. En route, she charmed film mogul Louis B. Mayer into signing her. Stateside, she became a Hollywood icon by day and an inventor by night.
Lamarr’s interest in gadgetry began in childhood, though she never pursued an engineering education. Her most influential brainchild was a method of covert radio communication called frequency hopping, which involves sending a message over many different frequencies, jumping between channels in an order known only to the sender and receiver. So an adversary listening in on any one channel would catch only a fleeting piece of the message, and jamming a single channel would block just as little.
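
In modern terms, frequency hopping works something like the sketch below: sender and receiver derive the same channel sequence from a shared secret, so they stay in lockstep while anyone camped on a single channel catches only a fragment. This is an illustrative toy — a pseudorandom seed standing in for the punched piano-roll sequence in Lamarr and Antheil’s patent — not their actual design.

```python
import random

CHANNELS = 88          # the Lamarr-Antheil patent used 88 frequencies, like piano keys
SHARED_SEED = 2814     # hypothetical secret known only to sender and receiver

def hop_sequence(n_hops, seed):
    """Pseudorandom channel order derived from the shared secret."""
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(n_hops)]

message = "STEER LEFT"
tx_channels = hop_sequence(len(message), SHARED_SEED)

# Transmit one symbol per hop, each on its own channel.
airwaves = {}   # channel -> symbols broadcast on that channel
for ch, symbol in zip(tx_channels, message):
    airwaves.setdefault(ch, []).append(symbol)

# An eavesdropper parked on a single channel hears at most a sliver.
print(airwaves.get(tx_channels[0], []))   # a character or two, out of context

# The receiver, knowing the seed, follows the same hops and recovers everything.
received = "".join(airwaves[ch].pop(0) for ch in hop_sequence(len(message), SHARED_SEED))
print(received)                           # "STEER LEFT"
```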

During World War II, Lamarr partnered with composer George Antheil to design a frequency-hopping device for steering antisubmarine torpedoes. The pair got a patent, but the U.S. Navy didn’t take the invention seriously. “The Navy basically told her, ‘You know, you’d be helping the war a lot more, little lady, if you got out and sold war bonds rather than sat around trying to invent,’ ” biographer Richard Rhodes says in the film. Ultimately, the film suggests, Lamarr’s bombshell image and the sexism of the day stifled her inventing ambitions. Yet, frequency hopping paved the way for some of today’s wireless technologies.

Throughout Bombshell, animated sketches illustrate Lamarr’s inventions, but the film doesn’t dig deep into the science. The primary focus is the tension between Lamarr’s love of invention and her Hollywood image. With commentary from family and historians, as well as old interviews with Lamarr, Bombshell paints a sympathetic portrait of a woman troubled by her superficial reputation and yearning for recognition of her scientific intellect.

Some of TRAPPIST-1’s planets could have life-friendly atmospheres

It’s still too early to pack your bags for TRAPPIST-1. But two new studies probe the likely compositions of the seven Earth-sized worlds orbiting the cool, dim star, and some are looking better and better as places to live (SN: 3/18/17, p. 6).

New mass measurements suggest that the septet probably have rocky surfaces and possibly thin atmospheres, researchers report February 5 in Astronomy & Astrophysics. For at least three of the planets, those atmospheres don’t appear to be too hot for life, many of these same researchers conclude February 5 in Nature Astronomy.
TRAPPIST-1 is about 40 light-years from Earth, and four of its planets lie within or near the habitable zone, the range where temperatures can sustain liquid water. That makes these worlds tempting targets in the search for extraterrestrial life (SN: 12/23/17, p. 25).

One clue to potential habitability is a planet’s mass — something not precisely nailed down in previous measurements of the TRAPPIST-1 worlds. Mass helps determine a planet’s density, which in turn provides clues to its makeup. High density could indicate that a planet doesn’t have an atmosphere. Low density could indicate that a planet is shrouded in a puffy, hydrogen-rich atmosphere that would cause a runaway greenhouse effect.
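
Mass and radius are all you need for a bulk density, which is why the tighter mass estimates matter. A quick illustration with made-up numbers (not values from the paper):

```python
import math

EARTH_MASS_KG = 5.97e24
EARTH_RADIUS_M = 6.371e6

def density(mass_kg, radius_m):
    """Bulk density in kg/m^3, assuming a spherical planet."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return mass_kg / volume

earth = density(EARTH_MASS_KG, EARTH_RADIUS_M)            # about 5,500 kg/m^3

# Hypothetical planet: 0.3 Earth masses packed into 0.78 Earth radii.
planet = density(0.3 * EARTH_MASS_KG, 0.78 * EARTH_RADIUS_M)

print(f"Earth:  {earth:.0f} kg/m^3")
print(f"Planet: {planet:.0f} kg/m^3")   # lower density hints at ice, water or a puffy atmosphere
```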

Using a new computer technique that accounts for the planets’ gravitational tugs on each other, astronomer Simon Grimm of the University of Bern in Switzerland and his colleagues calculated the seven planets’ masses with five to eight times better precision than before. Those measurements suggest that the innermost planet probably has a thick, viscous atmosphere like that of Venus, Grimm says. The other six, which may be covered in ice or oceans, may have more life-friendly atmospheres. The fourth planet from the star has the same density as Earth and receives about as much radiation from its star as Earth gets from the sun, Grimm’s team reports in Astronomy & Astrophysics.

“This is really the cool thing: We have one planet which is very, very similar to the Earth,” Grimm says. “That’s really nice.”
Having an atmosphere could suggest habitability, but not if it’s too hot. So using the Hubble Space Telescope, MIT astronomer Julien de Wit and his colleagues, including some members of Grimm’s team, observed the four middle planets as they passed in front of the star. The team was looking for a signature in near-infrared wavelengths of light filtering through the planets’ atmospheres. That signature would have indicated atmospheres full of heat-trapping hydrogen.

In four different observations, Hubble saw no sign of hydrogen-rich atmospheres around three of the worlds, de Wit and colleagues report in Nature Astronomy. “We ruled out one of the scenarios in which it would have been uninhabitable,” de Wit says.

The new observations don’t necessarily mean the planets have atmospheres, much less ones that are good for life, says planetary scientist Stephen Kane of the University of California, Riverside. It’s still possible that the star’s radiation blew the planets’ atmospheres away earlier in their histories. “That’s something which is still on the table,” he says. “This is a really important piece of that puzzle, but there are many, many pieces.”

Finishing the puzzle may have to wait for the James Webb Space Telescope, scheduled to launch in 2019, which will be powerful enough to figure out all the components of the planets’ atmospheres — if they exist.

What bees did during the Great American Eclipse

When the 2017 Great American Eclipse hit totality and the sky went dark, bees noticed.

Microphones in flower patches at 11 sites in the path of the eclipse picked up the buzzing sounds of bees flying among blooms before and after totality. But those sounds were noticeably absent during the full solar blackout, a new study finds.

Dimming light and some summer cooling during the onset of the eclipse didn’t appear to make a difference to the bees. But the deeper darkness of totality did, researchers report October 10 in the Annals of the Entomological Society of America. At the time of totality, the change in buzzing was abrupt, says study coauthor and ecologist Candace Galen of the University of Missouri in Columbia.
The recordings come from citizen scientists, mostly school classes, setting out small microphones at two spots in Oregon, one in Idaho and eight in Missouri. Often when bees went silent at the peak of the eclipse, Galen says, “you can hear the people in the background going ‘ooo,’ ‘ahh’ or clapping.”
There’s no entirely reliable way (yet) of telling what kinds of bees were doing the buzzing, based only on their sounds, Galen says. She estimates that the Missouri sites had a lot of bumblebees, while the western sites had more of the tinier, temperature-fussy Megachile bees.
More samples from the western sites, with the fussier bees, might have revealed whether the temperature drop of at least 10 degrees Celsius during the eclipse affected the insects. The temperature plunge in the Missouri summer just “made things feel a little more comfortable,” Galen says.

This study of buzz recordings gives the first formal data published on bees during a solar eclipse, as far as Galen knows. “Insects are remarkably neglected,” she says. “Everybody wants to know what their dog and cat are doing during the eclipse, but they don’t think about the flea.”

Here’s what’s unusual about Hurricane Michael

Call it an October surprise: Hurricane Michael strengthened unusually quickly before slamming into the Florida panhandle on October 10 and remained abnormally strong as it swept into Georgia. The storm made landfall with sustained winds of about 250 kilometers per hour, just shy of a category 5 storm, making it the strongest storm ever to hit the region, according to the National Oceanic and Atmospheric Administration’s National Hurricane Center, or NHC.

Warm ocean waters are known to fuel hurricanes’ fury by adding heat and moisture; the drier air over land masses, by contrast, can help strip storms of strength. So hurricanes nearing the Florida panhandle, a curving landmass surrounding the northeastern Gulf of Mexico, tend to weaken as they pull in drier air from land. But waters in the Gulf that were about 1 degree to 2 degrees Celsius warmer than average for this time of year, as well as abundant moisture in the air over the eastern United States, helped to supercharge Michael. Despite some wind conditions that scientists expected to weaken the storm, it strengthened steadily until it made landfall, which the NHC noted “defies traditional logic.” The fast-moving storm weakened only slightly, to a category 3, before hurtling into Georgia.
Although it is not possible to attribute the generation of any one storm to climate change, scientists have long predicted that warming ocean waters would lead to more intense tropical cyclones in the future. More recent attribution studies have borne out that prediction, suggesting that very warm waters in the tropical Atlantic helped to fuel 2017’s powerful storm season, which spawned hurricanes Irma and Maria.

Hurricane Harvey, fueled by unusually warm waters in the Gulf of Mexico in August 2017, also underwent a rapid intensification, strengthening from a tropical storm to a category 4 hurricane within about 30 hours. And this year, scientists reported that Hurricane Florence, which slammed into the Carolinas in September, was probably warmer and wetter due to warmer than average sea surface temperatures in the Atlantic Ocean.