Wednesday, November 27, 2013

Grassland managers not responding to phenology shifts

[Figure: Time series of the national average first hay cut for 1951–2011 (black circles), 1991–2011 (grey triangles), and the beginning of flowering of meadow foxtail (black squares).]

In Germany, grasses in hayfields have been maturing faster due to warming.

Yet, hay cutting hasn't kept pace.

Amazingly, that's over 50 years of data.

Has forage quality suffered?


Sunday, November 24, 2013

New Forest: example of grazed parkland?

On Saturday, I had a chance to walk through the New Forest with Mary Edwards, Tony Brown, and Shinya Sugita.

The New Forest has a long, fascinating history stretching back thousands of years. Today, it is a national park where ponies graze and pigs pannage in the fall for acorns and beechnuts.


There are several vegetation types in the New Forest, from lightly grazed heathlands to more productive forest parklands.

The parklands are interesting. 

Here, we have an open woodland: large trees towering above a closely cropped lawn.

For some, a scene like this represents a prehuman landscape. When aurochs and other megafauna roamed, this may have been typical.

The pollen records have been ambiguous on this point. Shinya's work on landscape openness shows that most pollen records have been selected to represent regional vegetation; understanding local patterns requires sampling in a different manner. So, pollen records cannot say one way or the other whether this kind of landscape existed.

Tony talked about beetle assemblages being used to reconstruct past vegetation. They would not support the existence of a system like this.

Franz Vera has put together other evidence. Interpreting historical records and the modern ecology of species, he argues that these systems did exist and that we should work to recreate them.

Today, there is little regeneration of the overstory trees. Browsing pressure is too high. Pannaging also might be preventing regeneration. Based on this, one would say this landscape is a cultural artifact.

But is it just a small tweak to the system to maintain this? 

That is something worthy of study.

Friday, November 22, 2013

Parsimony

One of the hidden struggles in the culture of science is how to approach hypotheses.

Many ecological papers don't even use them anymore. They might write something like, "To better understand the relationships between x and y, we..."

Sometimes, they are there, but perfunctory. Like, "We hypothesize the technique will increase our explanatory power".

This is not good.

When you look at those papers that follow the time-tested approach of including hypotheses, very few actually state null hypotheses.

This, actually, is not bad.

Because in most sciences, we rarely ever prove things.

And null hypotheses give the illusion of proof.

At least with ecological data, there are always multiple competing hypotheses to explain data.

We do science ultimately to test theories. Hypotheses translate theories into mechanisms that underlie observed patterns.

When we state just one hypothesis, we aren't acknowledging the complexity of the world.

And with just one hypothesis, we are more likely to accept a hypothesis that is wrong.

Thinking in terms of parsimony, where we evaluate multiple hypotheses, can be hard to do, partly because thresholds for favoring one hypothesis over another do not exist. You have to ascribe certainty to each hypothesis.

How in practice do we do this? How do we evaluate which is the current best hypothesis when you cannot rule out any hypothesis completely?
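One formal tool ecologists reach for here is information-theoretic model comparison: each hypothesis becomes a model, and Akaike weights turn the models' relative AIC scores into the certainty ascribed to each. A minimal sketch in Python, with the hypothesis names and AIC values invented for illustration:

```python
# Akaike weights: convert AIC scores from competing models into
# relative support for each. The AIC values here are made up.
import math

aic = {
    "H1: climate drives the pattern": 210.4,
    "H2: grazing drives the pattern": 212.1,
    "H3: climate and grazing interact": 209.8,
}

best = min(aic.values())
rel_likelihood = {h: math.exp(-0.5 * (a - best)) for h, a in aic.items()}
total = sum(rel_likelihood.values())
weights = {h: r / total for h, r in rel_likelihood.items()}

for h, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{h}: weight = {w:.2f}")
# No hypothesis is proven; each carries a share of the evidence,
# and we favor the one with the most weight while keeping the rest.
```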

I should develop this more later in scientific terms, but here's a quick story to illustrate...

Earlier this fall, we arrived at Merton College in Oxford, where we'd be staying. We had hauled our suitcases up to the 2nd floor of the building and entered our apartment. A quick look around confirmed that it was as small as we thought it would be. Kendra and I then went to see what would be our bedroom while the kids checked out theirs.

When we got to our bedroom, I saw our bed. I was tired, and curious how hard the mattress I'd be sleeping on for the next few months would be. So, I flopped down on the bed. As soon as I hit the mattress...

...an alarm went off.

A loud one.

And it wasn't stopping.

The first thing I thought was: did I really hear that? I did. Kendra did, too.

Was I dreaming this? It didn't seem so. I've never had a dream with such a loud alarm in it. The odds of that seemed really low.

So, if I really was hearing an alarm and I wasn't dreaming...

What then?

Now it's time to assess competing hypotheses.

My first thought was that there was a motion-sensitive alarm somewhere in the room. I looked around. I didn't see anything that looked like a sensor. What are the odds that Oxford puts alarms in their bedrooms? Did the kids trip something? Maybe 1 in 100?

I'd weight the burglar alarm hypothesis by 0.01.

It is not easy to calculate odds with loud alarms.

Maybe it is a fire alarm. What are the odds that there is a fire? Fires happen rarely. But there are false alarms. Maybe once a year? What are the odds of that happening in the very second I hit the bed? Seems incredibly low.

Quick math: 365 d * 24 h * 60 min * 60 sec = ...can't do math with an alarm going off. Let's say 1 in a million. Weight that hypothesis by a lower number: 0.000001.
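For the record, here's that arithmetic done after the fact (the once-a-year false-alarm rate is my guess, not a measured value):

```python
# The arithmetic I couldn't do with the alarm going off.
seconds_per_year = 365 * 24 * 60 * 60
print(seconds_per_year)  # 31,536,000 seconds in a year

# If a false fire alarm happens roughly once a year, the chance it
# starts in the exact second I hit the bed is about:
p_fire = 1 / seconds_per_year
print(p_fire)  # ~3.2e-08, even lower than my round guess of 1e-06
```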

But what if they are just testing the alarms? How often would that happen? Once a week? That would still be a low number, though higher than the fire hypothesis: 0.00005.

None of these hypotheses seem very certain. All fell below a 95% threshold. Still, we had to act.
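To make that concrete (using my rough guesses above as the inputs), the question is only which hypothesis carries the most relative weight, not whether any clears a threshold:

```python
# Normalize my rough weights for the competing alarm hypotheses.
weights = {
    "burglar alarm tripped somehow": 0.01,
    "weekly alarm test": 0.00005,
    "fire (or false fire alarm)": 0.000001,
}

total = sum(weights.values())
for hypothesis, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{hypothesis}: {w / total:.4f} of the total weight")
# The burglar-alarm hypothesis takes ~99.5% of the relative weight,
# even though its absolute odds were only about 1 in 100.
```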

The most parsimonious hypothesis was that a "burglar" alarm had gone off somehow. And was still going off.

The conversation wasn't that calm, but we decided to go check with the porter at the college across the street...

Hypotheses need testing.

As we went to the door to head downstairs, the alarm stopped. After probably three minutes of loud ringing, it was finally quiet.

When we walked downstairs, there was a man. He was standing next to the alarm box for the building.

"Sorry, if that was loud. Just testing the alarms."

That was it.

The odds seemed extremely low initially, and it didn't seem the most parsimonious hypothesis, but it just so happened that at the exact second I flopped down on the bed in our new apartment, someone had decided to test the fire alarm. Which apparently they do every week on Monday at about 10:30 a.m.

***
Parsimony alone didn't lead us to the right answer initially, but it was the approach of trying to formulate multiple hypotheses that gave us a path forward in the investigation.

Every scientific project needs to follow the same approach. We didn't set out to disprove a null hypothesis, which would have been something like "there isn't a burglar alarm in our apartment that was just set off". We had multiple competing hypotheses that we were trying to simultaneously evaluate.

There are always more complicated explanations for phenomena than what seems most likely at the time.

If you can approach a scientific problem with multiple working hypotheses, it helps you maintain the mental flexibility to move forward when things seem uncertain.

At each stage, you also need to move forward not with the hypothesis that has a 95% probability of being true, but with the one that has the highest likelihood of being true.

That's the approach of competing hypotheses.

The hard part in all of this is assigning certainty to each hypothesis (or conclusion).**

**The IPCC struggled with how to do this for years.

I don't necessarily have the answers for how to translate this approach to each scientific question, but delineating potential hypotheses is one of the most crucial steps in the scientific method.

Writing a paper that acknowledges potential competing hypotheses is a good sign that the scientific method is being carried out well.

Thursday, November 21, 2013

Link: Twenty tips for interpreting scientific claims


A quick piece in Nature today.

The goal of the Comment is to provide advice to scientific advisers or policy makers about how to interpret scientific work. But it has broad applicability.

The authors list out 20 tips to think about when interpreting results from scientific studies.

It's a good list for any scientist (young or old) to remember.

Among those:

Correlation does not imply causation.
Extrapolating beyond the data is risky.
Randomization avoids bias.

It's early in the morning, but I'm having a hard time coming up with points they missed...

If anything, I would add:

Models can be overfit (see the sketch below).
Science tests parsimony.
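On the first of those, here's a minimal sketch (with invented data) of what overfitting looks like:

```python
# Toy overfitting demo: a degree-7 polynomial fits 8 noisy points
# from a truly linear process exactly, then extrapolates badly.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 7, 8)
y = 2.0 * x + rng.normal(0.0, 1.0, size=x.size)  # linear trend + noise

overfit = np.polyfit(x, y, deg=7)  # as many parameters as data points
simple = np.polyfit(x, y, deg=1)   # the parsimonious model

x_new = 8.0  # just beyond the observed range
print("degree-7 prediction:", np.polyval(overfit, x_new))
print("degree-1 prediction:", np.polyval(simple, x_new))
# The degree-7 fit has essentially zero training error, yet its
# prediction outside the data can be wildly off; the simple line
# stays near the true value of 16.
```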

Still, a good one for the bulletin board.

Sunday, November 17, 2013

Update on teaching on-line

This semester I started an on-line course in ecosystem ecology.

We're about 80% of the way through the semester, and a few things stand out.

First, we have a test every 3 weeks. The students tend to binge-watch the lectures.

Here's the pattern of minutes viewed over time. You can tell where the exams are.


Second, it might not seem to be the best way to teach, but empirically, the students are learning. On multiple-choice questions, they do well. When I give them an open-ended question like, "What is something interesting you learned about spatial patterns of ecosystems?", they pick up on interesting points that go beyond the bullet points you'd study to.

In all, you can never truly compare among approaches, and I'd always want on-line ecology lectures to be coupled with field experiences, but this is an approach that should scale. The upside of providing content in this manner far outweighs any negatives, or the positives of classroom lectures.

Official evaluations come later, but so far this has been a promising experience.