Tuesday, December 24, 2013

Book review: An Appetite for Wonder: The Making of a Scientist

Capping off our semester at Oxford, I thought it appropriate to read Richard Dawkins’ autobiography.

Dawkins is a product, in part, of the University of Oxford, but also of Oxfordshire. A bit of a local icon these days, he spent much of his youth on the family farm just outside of Oxford, went to Oxford for his studies, and soon after became part of the faculty.

An autobiography, I suppose, can serve many purposes for the writer. For Dawkins, it was an exploration of how he became who he is.

Be forewarned, that means a lot of stories from his childhood. And apparently his dissertation.

Still, I can respect the exploration. Even if I have to read a little more quickly through another story about nursery rhymes he once knew or another computer program he wrote.

Even with (despite?) that, the book travels quickly.

A few interesting points stand out.

He credits an amazing amount of his scientific development to the tutorial system at Oxford. Briefly, for some classes at Oxford, students are paired with a fellow, who tutors a small number of students each term and meets with them individually every week. Each week is devoted to a new topic within a general theme. A student is expected to become an expert on the topic and write a report on it by the end of the week. Dawkins holds an education based on lectures and memorization in some disdain.

Another interesting point is his respect for his advisor and the intellectual environment of his graduate work. Of note, for seminars, his advisor “set the tone by interrupting almost before the speaker could complete his first sentence: ‘Ja, ja, but what do you mean by…?’ This wasn’t as irritating as it sounds, because [his advisor's] interventions always aimed at clarification and it was usually necessary.” I sat through a number of talks in the Department of Zoology at Oxford and, unfortunately, never heard such an interruption.

Lastly, life as a fellow of an Oxford college is well described: “The life of a tutorial fellow of an Oxford college is in many ways a charmed one. I got a room in a glowing oolitic limestone medieval building surrounded by famously beautiful gardens; a book allowance, a housing allowance, a research allowance; and free meals (though not free wine, contrary to envious rumours) in the stimulating and entertaining company of leading scholars of every subject except my own.”

How The Selfish Gene came about is also well recorded for posterity; that is where the autobiography ends.

To be honest, most people’s lives likely would not warrant a biography, auto- or allo-. This one could have used an editor to draw more stories (and opinions) out of him.


Still, empirically, a good companion for a train across France.

Tuesday, December 17, 2013

Greenhouse gas emissions from livestock

A follow-up to FAO’s Livestock’s Long Shadow (LLS), a new document from the FAO revisits the contribution of the livestock sector to global GHG emissions. LLS had stated that 18% of GHG emissions could be attributed to livestock. Here they examine patterns of GHG emissions more carefully.

A couple of interesting points in the document.

First, there is a good summary of how the drivers of our livestock systems have changed:

“Traditionally, livestock was supply driven, converting waste material and other resources of limited alternative use into edible products and other goods and services. Its size was relatively limited and so were the environmental impacts. However, since the livestock sector has become increasingly demand-driven, growth has been faster and the sector now competes for natural resources with other sectors.”

In short, cattle used to graze marginal lands. Pigs were fed scraps. Now they compete with people for food.

They also have good summaries of emission intensities at the global scale. Industrial agriculture is often thought of as intensive, but its efficiencies can be high.

The authors state that “High intensity of emissions are caused by low feed digestibility, poorer animal husbandry, and higher ages at slaughter. When feed digestibility is high and animals are brought to market quicker, intensity of emissions can be lower. Hence, industrial production of livestock tends to be associated with low intensity of emissions per unit protein produced.”

In short, when using marginal resources, efficiencies are lower. Graze animals on low-protein grass, and they gain weight more slowly and release more methane.

Still, the authors do not pull apart the relative contributions of different components of the supply chain. For example, what is the relative efficiency of grazing cattle in North America vs. feeding them grain? Almost half of the emissions from cattle production come from feed production and processing.


The document also provides recommendations for reducing GHG emissions. Mostly, they say to use “first-world” practices everywhere. First-world systems are left to manage their manure better.

As far as I read, there is no mention of producing or demanding less meat, or relying less on grain, as a mitigation strategy.

Gerber, P. J., H. Steinfeld, B. Henderson, A. Mottet, C. Opio, J. Dijkman, A. Falcucci, and G. Tempio. 2013. Tackling climate change through livestock – A global assessment of emissions and mitigation opportunities. Food and Agriculture Organization of the United Nations (FAO), Rome.

Monday, December 16, 2013

Trees in colder environments are less tolerant of shade


Nice paper from Chris Lusk et al. They grew 17 NZ tree species in the glasshouse. Species from places with colder winters had narrower vessels and lower water conductance.

The mechanism that makes cold-tolerant species less shade-tolerant was unclear, though.

Whether there is a direct tradeoff between cold-tolerance and shade-tolerance, or an indirect set of relationships is still to be worked out.

Lusk, C. H., T. Kaneko, E. Grierson, M. Clearwater, and F. Gilliam. 2013. Correlates of tree species sorting along a temperature gradient in New Zealand rain forests: seedling functional traits, growth and shade tolerance. Journal of Ecology 101:1531-1541.

Sunday, December 15, 2013

Model species sets

About three years ago I started to think hard about how to advance plant trait research. It was suffering from a lot of shortcomings.

One of these was that we couldn't compare many different traits for the same species. Researchers were not measuring many key traits. And when they were, it wasn't on the same species, so results couldn't be compared. We needed more overlap.

The answer to these problems was model species sets. Just like model organisms generate synergy by encouraging multiple researchers to ask different questions on the same plant, model species sets can do the same for functional ecology.

I just put a letter together with a few others for New Phytologist. In it, we argue that model species sets are as important to functional trait research as model organisms were to comparative molecular biology.

We work through a few issues in the manuscript.

How do you pick what broader pool of species the model species sets should represent?

How do you assemble a model species set?

How do different researchers use the same species sets in synergistic manners?

As we discussed these issues, what was exciting was how quickly some fundamental questions about the evolution and ecology of plant species could be answered once model species sets were in place.

With just 100 species, the leaf economic spectrum would have been apparent. It would not take long to test for the wood or root economic spectra. How long would it take to find other sets of correlations in defense, life-history traits, or low-resource tolerance?

Once these traits are measured, comparisons among traits become simple. Knowledge stacks up on knowledge.

Another exciting part is truly separating genotype from environment. Current trait relationships at the global scale mix genotype and environment. Different species are growing in different environments. Low-fertility species are growing in low-fertility areas. High-fertility species are growing in high-fertility areas.

By growing species under the same conditions, differences in traits arise from the genotype. But growing the sets across a range of environments can show how trait relationships respond to changes in environment.

That will be exciting.

One constraint on the approach is comparing across model species sets. Comparing leaf traits of the species in a grass model species set with those in a tree model species set could suffer from confounding factors.

I guess once we get to the point where we understand the patterns within a model species set, new experiments would be necessary that grew them under comparable conditions. But after repeatedly growing 100 species, growing 200 or 300 doesn't sound too daunting.

Friday, December 13, 2013

Seasonal cycle of submissions to Nature Geoscience


Somehow, I had missed this last January in Nature Geoscience.

Funniest quote: "A few wrinkles in the record illustrate how our submission rates reflect the world of geoscientists (and sometimes, the world at large). We attribute the sharp peak around the end of July 2012 to the deadline for submission of manuscripts to be considered for the fifth assessment report of Working Group I of the Intergovernmental Panel on Climate Change. And the number of incoming papers dropped during fall meetings of the American Geophysical Union — and during football world cups."


Saturday, December 7, 2013

Reproducibility: standardized vs. heterogenized replicates

Here's a paper that caught Hendrik Poorter's attention a while ago.

In ecology, we almost never run the same experiment twice. We reason that resources are just too thin to try to reproduce someone's results.

Even if we did try the same experiment twice, there are enough differences among ecosystems to explain away any disagreement in the results.

But, if we did try the same experiment twice, how do we set up our experiment to maximize reproducibility?

Reproducibility is simply the ability to generate the same results twice. If we want our results to generalize, our experiments should be reproducible.

The Richter et al. paper from 2009 addressed this question for the biomedical world. There they noted that experiments with animals are costly, so there should be an impetus to make results as reproducible as possible.

The natural approach to this is to make experiments as controlled and uniform as possible.

This reduces variance among subjects of a given experiment and maximizes the likelihood of a significant result, but actually works to reduce reproducibility.

In short, if you want reproducibility, experiments need to incorporate all the variation that one is likely to encounter when trying to reproduce experiments.

Differences in users.
Differences in noise.
Differences in cage types.

Instead of controlling for all of these, they argue that the experiments should allow these to vary. In doing so, any positive result is more likely to be generalizable.

Instead of using "standardized" replicates, replicates should be "heterogenized".
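
To make the contrast concrete, here is a minimal simulation sketch in Python. Everything in it is made up for illustration (effect sizes, lab counts, sample sizes); this is not Richter et al.'s analysis. Each simulated experiment estimates a treatment effect that truly varies among labs: the standardized design houses all animals under one randomly drawn lab condition, while the heterogenized design spreads the same animals across several.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: the true treatment effect differs among labs/environments
# (all numbers are invented for illustration).
lab_effects = rng.normal(loc=0.5, scale=0.5, size=1000)

def run_experiment(heterogenized, n_animals=24, n_labs=6):
    """Estimate the treatment effect from one experiment."""
    if heterogenized:
        # Spread the same number of animals across several labs
        labs = rng.choice(lab_effects.size, size=n_labs, replace=False)
        per_lab = n_animals // n_labs
        obs = np.concatenate(
            [rng.normal(lab_effects[l], 1.0, per_lab) for l in labs])
    else:
        # Standardized: all animals under one randomly drawn lab condition
        lab = rng.integers(lab_effects.size)
        obs = rng.normal(lab_effects[lab], 1.0, n_animals)
    return obs.mean()

est_std = [run_experiment(False) for _ in range(2000)]
est_het = [run_experiment(True) for _ in range(2000)]

# Smaller spread across repeat experiments = more reproducible
print("SD of estimates, standardized :", round(float(np.std(est_std)), 2))
print("SD of estimates, heterogenized:", round(float(np.std(est_het)), 2))
```

The heterogenized estimates vary less from one experiment to the next, which is exactly the sense in which they are more reproducible.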


What is interesting is that the same holds for many ecological experiments. If we want to make sure we do not have a false positive, we should incorporate variation into our experiments.

It's risky for the investigator, because the approach reduces the likelihood of a positive result.

But for the discipline, it makes it more likely that any one result is not only reproducible, but also generalizable. 

Friday, December 6, 2013

And But Therefore


The triad of thesis-antithesis-synthesis is usually credited to Hegel, but the dialectic behind it goes back to Socrates.

The modern-day version is now apparently "and-but-therefore".

I've spent a lot of time working with people to sharpen their presentations.

If anything resonates, it is that many presentations have the structure of AND AND AND.

A simple pivot to AND BUT THEREFORE makes for a more compelling story. And a more interesting, impactful talk.

It's the same tension as Socrates, but modernized.

There are always pressures to show as much data as possible, but a scientific story is not a series of "ands".

Like any story, it should start with the "and" but quickly pivot to the "but", which is the competing hypothesis or the antithesis.

The synthesis is the "therefore".

Great short piece in Science and accompanying TEDMED talk.

If you watch the TEDMED talk, you'll see why Cartman gets a cameo.

http://www.sciencemag.org/content/342/6163/1168.1.short#ref-1
http://www.youtube.com/watch?v=ERB7ITvabA4

Wednesday, December 4, 2013

How many species for a model species set?

Angela Moles and I were discussing how many species would be necessary for a model species set.

Think of the model species set as the Arabidopsis of plant functional traits. Just like with a model organism, if we restrict the species that we work with, we can build synergy faster for different types of measurements. If I measure drought tolerance on my species and you measure tannin concentrations on your species, we don't learn much. But if we measure the same species, we learn more than we did before.

So, if we select just 100 species to represent the global diversity of grasses, herbaceous eudicots, or trees, would that be enough species to capture the global diversity of the functional group?

There are roughly 100,000 tree species in the world. Can we possibly capture enough of that global diversity with just 100 randomly selected species?

There is no way to really test the idea, but we talked about whether 100 species would have been enough to delineate the leaf economic spectrum. That paper had 2,548 species. If we had measured leaf traits like leaf longevity and N concentration on just 100 of them, would we have discovered the leaf economic spectrum?

To explore this, I took the data from the paper and downsampled to 100 species for some of the major relationships.
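
The downsampling itself is only a few lines. Here is a minimal sketch in Python of the approach, not the exact code I used; the file and column names (glopnet.csv, log_Amass, log_Nmass) are placeholders, with traits on the log10 scale as in the original paper.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical file and column names for the Wright et al. 2004 dataset;
# traits in that paper are analyzed on a log10 scale.
les = pd.read_csv("glopnet.csv").dropna(subset=["log_Amass", "log_Nmass"])

rng = np.random.default_rng(42)
r_full = stats.pearsonr(les["log_Amass"], les["log_Nmass"])[0]

# Correlation in 1,000 random 100-species subsets
r_sub = []
for _ in range(1000):
    sub = les.sample(n=100, random_state=rng)
    r_sub.append(stats.pearsonr(sub["log_Amass"], sub["log_Nmass"])[0])

print(f"all {len(les)} species: r = {r_full:.2f}")
print(f"100-species subsets: median r = {np.median(r_sub):.2f}, "
      f"5th-95th percentile = {np.percentile(r_sub, [5, 95]).round(2)}")
```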

The LES looked like this with all the species:

What if the Amass-Nmass relationship had just 100 species? If I randomly select just 100 species from that pool... the pattern is just as strong.


What about one of the weaker relationships, like LMA vs. Leaf Longevity?


Still there.

Some caveats here... the LES is not built on a random subset of the world's flora; I'm subsampling a stratified sampling of the global flora. Also, I'm working across a number of functional groups. Relationships might be weaker if we sampled just 100 species of a single functional group.

There are philosophical points to work through about the nature of plant strategies, between vs. within functional or phylogenetic groups for example. 

But, the key is that growing and measuring 100 species is not too hard. If we randomly select them so that there is a broad diversity of functional traits, we should be able to represent the global patterns of functional trait relationships.

The leaf economic spectrum has already been described (though there is still more to learn), but the other spectra have not. 

It might just take growing the right 100 species to quantify the root economic spectrum, the wood economic spectrum, or some other strategy we just aren't even aware of. 

Should we worry about randomly selecting 100 species that all end up in the same genus or all come from Madagascar? We can test the representativity of the randomly selected species set to make sure that chance didn't screw things up too much. The odds are against it, but it is possible. No one would quibble too much if we had to randomly select a second time (or 20 times, maybe).
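
Here is one way such a screening could look, as a hedged sketch: the data frame, its genus column, and the thresholds are all placeholders of mine, and the Kolmogorov-Smirnov check is just one plausible choice.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)

def draw_representative(pool, trait_cols, n=100, max_per_genus=3,
                        alpha=0.05, max_tries=20):
    """Draw n species at random, redrawing until the subset passes
    two simple representativity checks (thresholds are placeholders)."""
    for _ in range(max_tries):
        subset = pool.sample(n=n, random_state=rng)
        # 1. No single genus should dominate the draw
        if subset["genus"].value_counts().max() > max_per_genus:
            continue
        # 2. Each trait's distribution should resemble the full pool
        #    (two-sample Kolmogorov-Smirnov test, not rejected)
        if all(stats.ks_2samp(subset[t].dropna(), pool[t].dropna()).pvalue > alpha
               for t in trait_cols):
            return subset
    raise RuntimeError(f"no representative draw in {max_tries} tries")
```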

In all, it's encouraging.

Monday, December 2, 2013

Reading Krugman struggle to describe whether someone was wrong

I remember going through this exact set of thought processes when it came to plant resource competition.

Interesting to read Krugman try to do the same thing.

Wednesday, November 27, 2013

Grassland managers not responding to phenology shifts

Figure: Time series of the national average first hay cut for 1951–2011 (black circles), 1991–2011 (grey triangles) and the beginning of flowering of meadow foxtail (black squares).

In Germany, grasses in hayfields have been maturing faster due to warming.

Yet, hay cutting hasn't kept pace.

Amazingly, over 50 years of data.

Has forage quality suffered?


Sunday, November 24, 2013

New Forest: example of grazed parkland?

On Saturday, I had a chance to walk through the New Forest with Mary Edwards, Tony Brown, and Shinya Sugita.

The New Forest has a long, fascinating history stretching back thousands of years. Today, it is a national park where ponies graze and pigs pannage in the fall for acorns and beechnuts.


There are several vegetation types in the New Forest from lightly grazed heathlands to more productive forest parklands.

The parklands are interesting. 

Here, we have an open woodland. Large trees towering above a closely cropped lawn. 



For some, a scene like this represents a prehuman landscape. When aurochs and other megafauna roamed the landscape, this may have been typical.

The pollen records have been ambiguous on this point. Shinya's work on landscape openness shows that most pollen records have been selected to represent regional vegetation. Understanding local patterns requires sampling in a different manner. So, pollen records cannot say one way or the other whether this vegetation type existed.

Tony talked about beetle assemblages being used to reconstruct past vegetation. They would not support the existence of a system like this.

Frans Vera has put together other evidence, interpreting historical records and the modern ecology of species to argue that these systems did exist and that we should work to recreate them.

Today, there is little regeneration of the overstory trees. Browsing pressure is too high. Pannaging also might be preventing regeneration. Based on this, one would say this landscape is a cultural artifact.

But is it just a small tweak to the system to maintain this? 

That is something worthy of study.

Friday, November 22, 2013

Parsimony

One of the hidden struggles in the culture of science is how to approach hypotheses.

Many ecological papers don't even use them anymore. They might write something like, "To better understand the relationships between x and y, we..."

Sometimes, they are there, but perfunctory. Like, "We hypothesize the technique will increase our explanatory power".

This is not good.

When you look at those papers that follow the time-tested approach of including hypotheses, very, very few actually state null hypotheses.

This is actually not bad.

Because in most sciences, we rarely ever prove things.

And null hypotheses give the illusion of proof.

At least with ecological data, there are always multiple competing hypotheses to explain data.

We do science ultimately to test theories. Hypotheses translate theories into mechanisms that underlie patterns that are observed.

When we state just one hypothesis, we aren't acknowledging the complexity of the world.

And with just one hypothesis, we are more likely to accept a hypothesis that is wrong.

Thinking in terms of parsimony, where we evaluate multiple hypotheses, can be hard to do. Partly, that is because thresholds for favoring one hypothesis over another do not exist. You have to ascribe certainty to different hypotheses.

How in practice do we do this? How do we evaluate which is the current best hypothesis when we cannot rule out any hypothesis completely?

I should develop this more later in scientific terms, but here's a quick story to illustrate...

Earlier this fall, we arrived at Merton College in Oxford, where we'd be staying. We had hauled our suitcases up to the 2nd floor of the building and entered our apartment. A quick look around confirmed that it was as small as we thought it would be. Kendra and I then went to see what would be our bedroom while the kids checked out theirs.

When we got to our bedroom, I saw our bed. I was tired and curious to know how hard the mattress I'd be sleeping on for the next few months was. So, I flopped down on the bed. As soon as I hit the mattress...

...an alarm went off.

A loud one.

And it wasn't stopping.

The first thing I thought was did I really hear that? I did. Kendra did, too.

Was I dreaming this? It didn't seem so. I've never had a dream with such a loud alarm in it. The odds of that seemed really low.

So, if I really was hearing an alarm and I wasn't dreaming...

What then?

Now it's time to assess competing hypotheses.

My first thought was that there was a motion-sensitive alarm somewhere in the room. I looked around. I didn't see anything that looked like a sensor. What are the odds that Oxford puts alarms in its bedrooms? Did the kids trip something? Maybe 1 in 100?

I'd weight the burglar alarm hypothesis by 0.01.

It is not easy to calculate odds with loud alarms.

Maybe it is a fire alarm. What are the odds that there is a fire? Fires happen rarely. But there are false alarms. Maybe once a year? What are the odds of that happening in the very second I hit the bed? Seems incredibly low.

Quick math: 365 d * 24 h * 60 min * 60 sec = ...can't do math with an alarm going off. Let's say 1 in a million. Weight that hypothesis by a lower number: 0.000001.

But what if they are just testing the alarms? How often would that happen? Once a week? That would still be a lower number. 0.00005.
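
To make the bookkeeping explicit, here is the same weighting as a toy sketch in Python; the numbers are just the rough guesses from the story. Normalizing them only gives a relative ranking among the hypotheses on the table, not a calibrated probability that any of them is true.

```python
# Rough weights from the story (guesses, not calibrated probabilities).
# For scale: a year is 365 * 24 * 60 * 60 = 31,536,000 seconds.
weights = {
    "burglar/motion alarm tripped": 0.01,
    "fire or false fire alarm": 0.000001,
    "routine alarm test": 0.00005,
}

total = sum(weights.values())
for hypothesis, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    # Relative weight among the hypotheses considered -- a ranking,
    # not an absolute probability that any one of them is true.
    print(f"{hypothesis}: {w / total:.4f}")
```

The burglar alarm dominates the ranking (about 0.995 of the relative weight), which is why it felt most parsimonious, even though it turned out to be wrong.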

None of these hypotheses seem very certain. All fell below a 95% threshold. Still, we had to act.

The most parsimonious hypothesis was a "burglar" alarm had gone off somehow. And was still going off.

The conversation wasn't that calm, but we decided to go check with the porter at the college across the street...

Hypotheses need testing.

As we went to the door to head downstairs, the alarm stopped. After probably three minutes of loud, it was finally quiet.

When we walked downstairs, there was a man. He was standing next to the alarm box for the building.

"Sorry, if that was loud. Just testing the alarms."

That was it.

The odds seemed extremely low initially and it didn't seem the most parsimonious hypothesis, but it just so happened that the exact second I flopped down on the bed in our new apartment, someone had decided to test the fire alarm. Which apparently they do every week on Monday at about 10:30 a.m.

***
Parsimony alone didn't lead us to the right answer initially, but it was the approach of trying to formulate multiple hypotheses that gave us a path forward in investigation.

Every scientific project needs to follow the same approach. We didn't set out to disprove a null hypothesis, which would have been something like "there isn't a burglar alarm in our apartment that was just set off". We had multiple competing hypotheses that we were trying to simultaneously evaluate.

There are always more complicated explanations for phenomena than what seems most likely at the time.

If you can approach a scientific problem with multiple working hypotheses, it helps you maintain the mental flexibility to move forward when things seem uncertain.

At each stage, you also need to move forward not with the hypothesis that has a 95% probability of being true, but with the one that is most likely to be true.

That's the approach of competing hypotheses.

The hard part in all of this is assigning certainty to each hypothesis (or conclusion).**

**The IPCC struggled with how to do this for years.

I don't necessarily have the answers for how to translate this approach to each scientific question, but delineating potential hypotheses is one of the most crucial steps in the scientific method.

Writing a paper that acknowledges potential competing hypotheses is a good sign that the scientific method is being carried out well.

Thursday, November 21, 2013

Link: Twenty tips for interpreting scientific claims


A quick piece in Nature today.

The goal of the Comment is to provide advice to scientific advisers or policy makers about how to interpret scientific work. But, it has broad applicability.

The authors list out 20 tips to think about when interpreting results from scientific studies.

It's a good list for any scientist (young or old) to remember.

Among those:

Correlation does not imply causation.
Extrapolating beyond the data is risky.
Randomization avoids bias.

It's early in the morning, but I'm having a hard time coming up with points they missed...

If anything, I would add:

Models can be overfit.
Science tests parsimony.

Still, a good one for the bulletin board.

Sunday, November 17, 2013

Update on teaching on-line

This semester I started an on-line course for ecosystem ecology.

We're about 80% of the way through the semester, and a few things stand out.

First, we have a test every 3 weeks. The students tend to binge-watch the lectures.

Here's the pattern of minutes viewed over time. You can tell where the exams are.
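
(A plot like this is straightforward to rebuild from the platform's logs. Below is a minimal sketch; the file name, column names, and exam dates are all hypothetical placeholders.)

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of lecture-viewing logs: one row per viewing
# session, with a timestamp and minutes watched (names are assumptions).
logs = pd.read_csv("viewing_logs.csv", parse_dates=["timestamp"])
daily = logs.set_index("timestamp")["minutes"].resample("D").sum()

ax = daily.plot(figsize=(8, 3))
ax.set_ylabel("minutes viewed per day")
for exam in pd.to_datetime(["2013-09-20", "2013-10-11", "2013-11-01"]):
    ax.axvline(exam, linestyle="--")  # hypothetical exam dates
plt.tight_layout()
plt.show()
```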


Second, it might not seem to be the best way to teach, but empirically, the students are learning. On multiple-choice questions, they do well. And when I give them an open-ended question like, "What is something interesting you learned about spatial patterns of ecosystems?", they pick up on interesting points that go beyond the bullet points you study to.

In all, you can never truly compare among approaches, and I'd always want on-line ecology lectures to be coupled with field experiences, but this is an approach that should scale. The upside of providing content in this manner far outweighs any negatives, and even the positives of classroom lectures.

Official evaluations come later, but so far this has been a promising experience.