Sunday, December 27, 2009

How to reach the Giga mark with plants




The 96-well microbiology plate and its plant analog?

New Phytologist recently had a commentary, "From Galactic archeology to soil metagenomics – surfing on massive data streams" (New Phytologist (2010) 185: 343–348). It's an interesting, if not typical, update on Progress [with a capital "P"]. The commentary discusses the large numbers of microbes in soils, the diversity of Operational Taxonomic Units, and the large number of sequences that can currently be read in a standard batch. Having worked tangentially with the new technology, I can say the amount of detail and sheer volume of data that can be generated is truly breathtaking.

I used to think that the tradeoff between quantity and quality was a fundamental constraint in the world, and that it was especially acute in science. I'm not sure I think that anymore. In some senses, quantity is quality--at least when it comes to scientific emphasis.

The amount of money that gets spent on new technologies in science is immense. Part of what drives where that money gets spent is perceived rates of Progress, but also just sheer numbers. It also helps to be able to collect data in the tens of thousands, if not the gigas or teras. Sophisticated data streams and analyses help.

For understanding plants, and definitely ecosystems, we have suffered from not being able to rapidly produce enough data. Pure and simple. Remote sensing data is weak, but we can generate a lot of it automatically. Same for microbial data. And genomics. Those things that we'd like to learn about, but can't generate a lot of data on, suffer at the macroscale when it comes to scientific investment.

One of the things that has been holding back plant work has been the scale at which we can generate data. It's too slow. We need to generate a lot of data fast.

96-well plates in microbiology are standard and provide a template for a number of processes. One thing that would help us begin generating a lot of data would be the equivalent of the 96-well plate for plants. The similarity with 98-cell Conetainer trays is striking and makes me wonder whether they could become that standard. For example, there are roughly 2000 species of grass in the US. To grow one plant of each species would take about 20 trays, which is about 40 ft2, or the size of a standard growth chamber. A standard greenhouse might be 5000 ft2, which could house roughly 250,000 cells--enough for over 100 replicates of every species of grass in the US, about 25 replicates of every species of grass in the world, or one of every species of plant in the world (assuming you could cram it into a tiny Conetainer). Collect one data point per cell and the numbers get big fast. A few automated measurements per plant per day, and we close in on the Giga mark within a year.
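
For the curious, here is the back-of-the-envelope arithmetic as a quick Python sketch. The 2 ft2 tray footprint follows from the 20-trays-in-40-ft2 figure above; the world totals of roughly 10,000 grass species and 250,000 plant species are my round-number assumptions, not measurements:

# Back-of-the-envelope: how many plants fit in a standard greenhouse?
# Assumptions (mine): ~2 ft^2 per 98-cell Conetainer tray,
# ~10,000 grass species and ~250,000 plant species worldwide.

CELLS_PER_TRAY = 98
TRAY_FOOTPRINT_FT2 = 2.0   # one 98-cell tray, roughly 1 ft x 2 ft
GREENHOUSE_FT2 = 5000.0

US_GRASS_SPECIES = 2_000
WORLD_GRASS_SPECIES = 10_000
WORLD_PLANT_SPECIES = 250_000

trays = GREENHOUSE_FT2 / TRAY_FOOTPRINT_FT2
cells = trays * CELLS_PER_TRAY

print(f"trays: {trays:,.0f}  cells: {cells:,.0f}")
print(f"replicates per US grass species:    {cells / US_GRASS_SPECIES:,.0f}")
print(f"replicates per world grass species: {cells / WORLD_GRASS_SPECIES:,.0f}")
print(f"replicates per world plant species: {cells / WORLD_PLANT_SPECIES:,.1f}")

Run it and you get about 2,500 trays and 245,000 cells, which is where the replicate numbers above come from.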

A standardized medium in each cell would provide comparable data, but one could also imagine a standard configuration of different soils to provide a spectrum of data, similar to the old Biolog plates. Soils could differ in nutrient availability or texture or salinity or origin. 98 cells gives you a lot of flexibility.
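
To make the idea concrete, here is a hypothetical sketch of how the 98 cells of one tray (7 rows by 14 columns) could be assigned to soil treatments. The treatment names are invented for illustration, not a proposed standard:

# Hypothetical Biolog-style layout for one 98-cell Conetainer tray
# (7 rows x 14 columns): each column gets a different soil treatment,
# and the rows serve as replicates. Treatment names are invented examples.

ROWS, COLS = 7, 14

treatments = [
    "control", "low-N", "high-N", "low-P", "high-P",
    "sand", "silt", "clay", "low-salt", "high-salt",
    "prairie soil", "forest soil", "ag soil", "sterile",
]
assert len(treatments) == COLS

# Map each cell (row, col) to its soil treatment.
tray = {(r, c): treatments[c] for r in range(ROWS) for c in range(COLS)}

# Seven replicate cells per treatment in a single tray.
for t in treatments:
    n = sum(1 for soil in tray.values() if soil == t)
    print(f"{t:>12}: {n} cells")

One column per soil leaves seven replicate cells of each treatment in every tray, and swapping out the treatment list swaps the whole design.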

I wonder if, in the plant world, we just haven't been thinking big enough. There are certainly logistical problems to overcome, but the Giga mark is within reach. I just wonder why we don't do it.


Thursday, December 17, 2009

Nitrogen isotopes in different types of C4 Australian grasses



Sites sampled for grass nitrogen isotopes.

I'll admit I have a soft spot in my heart for expedition science. You start out with no real hypotheses. Instead, you have a plan to measure something interesting along an interesting gradient, not knowing what the ultimate patterns might be. You get a mess of data and then start trying to tell a story. It's an adventure to collect and an adventure to write.

In one recent study, over 400 grass samples were taken from across what amounts to the entire continent of Australia. It's not a perfect study--the analyses could have been more complete, and the authors could have spelled Ben Houlton's name right. That said, an interesting story comes out of it. As water availability increased, δ15N decreased, as we've seen before.



Yet, comparing C4 subtypes, PCK C4s were enriched in 15N relative to the other subtypes. PCKs have always been a mystery ecologically. NADP-ME species are the tallgrass C4s, NAD-ME species are the shortgrass C4s, and PCK species are the tropical C4s. So why the 15N enrichment? Are they less reliant on mycorrhizal fungi? Do they occupy sites with higher N availability than the other subtypes?

Hard to know, but with some broad surveys done, at least we know the patterns--an important first step toward understanding why the patterns are there.

Murphy, B. P. and D. Bowman. 2009. The carbon and nitrogen isotope composition of Australian grasses in relation to climate. Functional Ecology 23:1040-1049.



Monday, December 14, 2009

10 ways papers are rejected

As an author, I've encountered a myriad of ways that reviewers justify rejecting papers. As a reviewer, I know it can be a struggle to define why a paper is unfit for publication.

My goal here is to codify ways papers are rejected. For authors, it should help to improve a paper, if not rebut criticisms, by understanding the categories by which reviewers and editors reject papers. For reviewers, it should help sharpen the key points to make to authors so that they can improve their work.

The examples I give are all from papers of mine that were rejected but subsequently accepted elsewhere. Reading through them, I sometimes wonder how I ever got anything published.

1) Poor fit for a journal. If these were relationship break-up lines, this would be the equivalent of “It’s not you, it’s me.” There rarely is an objective analysis of “fit”, so it’s an easy catch-all rejection. Higher-profile journals are more likely to use this reason at the editorial stage. Here are two examples:
a. Science: “Although your analysis is interesting, we feel that the scope and focus of your paper make it more appropriate for a more specialized journal.”
b. Nature: “We do not doubt the technical quality of your work or its interest to others working in this and related areas of research. However, we are not persuaded that your findings represent a sufficiently outstanding scientific advance to justify publication in Nature.”

2) Poorly referenced. No paper can include every study, but often there is a set of studies that the reviewer has in mind that they did not find in the paper. Usually, but not always, this means that the authors forgot to cite the reviewer.
a. Example: “the authors of this manuscript have done an extremely bad job with respect to consideration of relevant literature for their review. It is specifically the duty of a research review to consider the whole range of literature in a balanced manner”. [this comment was followed by a list of 8 papers that all had one author in common].
b. “By completing a more thorough literature review and bringing concepts and information from those reports into this one, the authors could greatly strengthen this manuscript.”

3) Assumptions. When reviewers feel that the authors make incorrect assumptions, the results often do not matter.
a. Example: “THE FUNDAMENT [sic] PREMISE OF THIS MANUSCRIPT IS SERIOUSLY FLAWED.” Original CAPS.
b. “Their analysis is based on the supposition that changes in these drivers at any one location will have the same effects on these response variables as that which is currently seen across space in their data set. This may or not be true.”

4) Hypotheses. The complaint here is that hypotheses are weak or absent. Sometimes a paper will be dismissed as anecdotal. Many papers have no formal hypotheses, but when a reviewer feels a paper is too unstructured, this point will often be made. I haven’t found any examples of these in my reviews, but I’ll dig some more.

5) Methodological flaws in acquisition or analysis of data. For example, experiments are often deemed too artificial, gradient analyses too unconstrained.
a. The authors “used a highly controlled, if not overly-artificial experimental system to address several key theoretical questions in plant ecology”
b. “Unfortunately, this ms suffers in my opinion from too many methodological flaws to really increase our understanding”
c. “the authors seem to pick and choose certain variables and ignore others that have been demonstrated to have a major influence on plant isotope composition”
d. “The approach that they followed seems to be a sort of wild west expedition where they sampled as much as they could seemingly randomly”

6) Poor demonstration of stated results. Sometimes a reviewer doesn’t believe the authors showed what they said they showed.
a. “I was also very concerned about the conspicuous lack of critical data: Why are so many method details and results not presented?”
b. “Although the manuscript has the potential to show some interesting trends, it does not currently deliver on its objectives.”
c. “the introduction states that the aim is to determine how landscapes interact with herbivory to determine N availability, yet this does not appear to be addressed in the rest of the manuscript.”
d. “it is not entirely clear to me what they want to show with these data.”
e. “The manuscript does not live up to our expectations”

7) Results are not novel, or are merely confirmatory. This is the most common killing comment. Although the scientific method holds that results should be repeatable, apparently there is no reason that independent confirmation should be published.
a. “The results are in complete accord with a book chapter I wrote back in 1986.” [23 years before the paper. No citation given.]
b. “In this sense, the data are confirmatory.”
c. “The questions…were certainly worth exploring, but the results seem pretty clear, pretty simple, and not too surprising.”
d. “While I do appreciate the scale of your study, this doesn’t seem like a particularly novel finding”
e. “While this was a detailed fertilization experiment with many collected data, it is not clear what it contributes to our understanding of relationships between nutrient limitation and N:P ratios for a number of reasons”

8) Excessively speculative discussion. This one often doesn’t kill a paper, but in conjunction with other comments is enough for rejection.
a. “I find the discussion unnecessarily speculative in places.”

9) Length-to-content ratio. Again, it’s hard to kill a paper with this alone, but it’s certainly not a positive.
a. “I don't think the analysis as currently executed is interesting enough to warrant a treatment of this length”
b. “I was taken aback by the number of co-authors (23). The reported study did not exactly crack the human genome, so the laundry-list approach towards authorship may be inappropriate for this manuscript.” [I guess length to content also applies to authorship.]

10) Poor writing. One missed verb tense opens the door for this one. It's a subtle way to question the authors' scientific ability.
a. “occasionally one encounters run-on or circular sentences, which could use rewording.”
b. “In general, the writing is wordy, causing the reader to slog through unnecessary text, and in many places, the wording obfuscates the authors intended meaning.”

Sunday, December 13, 2009

Natural history of bison dispersing seeds

Bison heads carry more than horns.


Bison do a lot in grasslands. They eat, poop, pee, rub, trample, and wallow, all of which can fundamentally restructure how a grassland functions. If you spend enough time watching bison, you'll see them eat some unusual things. For example, early in the season I've seen a cow systematically nip off sumac buds. Not something we typically associate with bison, at least not overly curious bison. Another thing bison do is disperse seeds. And a close look at those seeds makes us rethink what they eat.

Researchers at Oklahoma State recently published a paper in which they analyzed the seeds attached to bison forehead fur in the fall and in fecal material over the year. In all, they found the seeds of 76 species on the fur of bison. It turns out males and females had different seeds stuck to them, which related to where they spent their time.

More interesting was what was found in the fecal material. There really is only one way for seeds to get into bison pies--the bison have to eat them. Half the seeds were grasses, which means half weren't. That is surprising, because plains bison are thought to eat predominantly grass. Yet in the spring there were seeds of Viola. In July, Solanum and Lepidium. In October, Lepidium again.

Most of the generalizations about the grass dominance of bison diets come either from microhistological studies (identifying leftover plant parts) of bison fecal material or from changes in plant species composition with grazing. Yet microhistological studies might underestimate forbs if their cell walls are easily degradable. And changes in species composition with grazing show increases in forbs, but cannot pinpoint which forbs bison actually eat.

Figuring out what they eat has never been easy. Here, some simple natural history might just reset one of the fundamental assumptions about bison.

Rosas, C. A., D. M. Engle, J. H. Shaw, and M. W. Palmer. 2008. Seed dispersal by Bison bison in a tallgrass prairie. Journal of Vegetation Science 19:769-778.