Biobeef Blog

Thoughts of public sector animal geneticist - all views are my own


In what seems like a scene from the movie Groundhog Day, another rat study has come out of the laboratory of Dr. Gilles-Éric Séralini, only in this case it is Roundup and not GMOs that are under fire. When I read the title of the paper, “Multiomics reveal non-alcoholic fatty liver disease in rats following chronic exposure to an ultra-low dose of Roundup herbicide”, I assumed a new study had been performed by the laboratory showing what this specific title appears to conclude, i.e., that rats exposed to low levels of Roundup developed non-alcoholic fatty liver disease. However, when I read further I found that this was a study on tissues from a subset of the same lumpy rats that were involved in the famously retracted (and subsequently republished) paper from 2012 – the rats with horrific tumors (not fatty livers) due to GMOs (not glyphosate) that was breathlessly reported on the Doctor Oz show I participated in, and by media throughout the world.

I think if my work had been roundly criticized by scientific peers for poor experimental design and pathology data inadequacies, and critiqued by a multitude of separate national biosafety committees from Belgium, Brazil, the European Union, Canada, France, Germany, Australia/New Zealand, and The High Council on Biotechnology, I would not double down and continue to analyze 5-year-old samples from that same experiment. What is weird is that although I vividly remember the images of grotesque tumors on the white Sprague Dawley female rats (one does not forget those images with a “GMO” label contrasted against the shocking tumors), I did not recall any mention of non-alcoholic fatty liver disease. So I went back to the original paper and searched for the term “fatty liver disease”. Nada.

In fact, the only data on livers in that retracted/republished 2012 paper was presented for the male rats. According to the 2012 paper the males that received the low levels of Roundup (50 ng/L glyphosate equivalent dilution) displayed liver “congestions” and “macroscopic and microscopic necrotic foci”, not fatty liver disease. I asked a Laboratory Animal pathologist at UC Davis who specializes in rodent health to review the data in the paper to determine if it suggested the rats had fatty liver disease. There was no histopathologic evidence of hepatic lipidosis presented in males, and no data on female livers were presented at all. Many of the “anatomical pathologies” observed are common aging-related findings, and this was not taken into account or discussed. They suggested the term “anatomopathological analysis” was a very irregular term for a veterinary pathologist to use, and that the use of hepatodigestive tract and liver as separate categories of pathology incidence was redundant. They kept doggedly going back to the fact that no fatty liver phenotype data were ever presented on female livers, so they could make no determination as to whether or which rats were suffering from fatty liver disease.

If you want a really interesting read from a group of veterinary pathologists who reviewed the pathology data in the 2012 Séralini study, their review contains the following understated scientific barbs (bold emphasis mine).

The sentence ‘The largest palpable growths (…) were found to be in 95% of cases non-regressive tumors, and were not infectious nodules.’ is very confusing. We hope that differentiating inflammatory from neoplastic lesions was not a challenge for the authors. Another clear example illustrating the lack of accuracy of the results is found in Fig. 3 where microscopic necrotic foci in the liver are grouped with clear-cell focus and basophilic focus with atypia. The first finding refers to a degenerative process whereas the remaining two refer to a proliferative one (Thoolen et al., 2010). Such basic error would be considered as a disqualifying mistake at an examination for pathologists.


They then go on to ask why there was no mention of which pathologist did the analyses, and why the rats were not euthanized earlier:

as most members of the ESTP [European Society of Toxicologic Pathology] are veterinarians, we were shocked by the photographs of whole body animals bearing very large tumors. When looking at the lesions, we believe those animals should have been euthanized much earlier as imposed by the European legislation on laboratory animal protection

and then conclude their diatribe with the following

The ESTP comes to the conclusion that the pathology data presented in this paper are questionable and not correctly interpreted and displayed because they don’t concur with the established protocols for interpreting rodent carcinogenicity studies and their relevance for human risk assessment. The pathology description and conclusion of this study are unprofessional. There are misinterpretations of tumors and related biological processes, misuse of diagnostic terminology; pictures are not informative and presented changes do not correspond to the narrative.

For those who are not immersed in science – these are damning criticisms.

So back to the 2017 study, which cites a 2015 “transcriptomics” study by the same group for the observations on the female livers. In that study, livers from 10 control females and the 10 females from the R (A) group from the 2012 study (for those of you paying attention) were analyzed using “transcriptomics”. So I went to read the 2015 paper to see if the Roundup-ingesting females perhaps had some liver data, and again there was no discussion of a fatty liver disease phenotype. There was, however, an interesting discussion of why tissues from the females were used for the analysis in both the 2015 “transcriptomics” and 2017 “multiomics” papers.

In the 2012 study that started it all, apparently

“Most male rats were discovered after death had occurred. This resulted in organ necrosis making them unsuitable for further analysis. We therefore focused our investigation on female animals where freshly dissected tissues from cohorts of 9-10 euthanized and untreated rats were available. Female control and Roundup-treated animals were respectively euthanized at 701 ± 62 and 635 ± 131 days. Anatomopathological analysis of organs from these animals revealed that the liver and kidneys were the most affected organs.”

Well, the fact that the males got to a stage of necrosis because no one discovered they were dead seems strange in a study where rats are presumably checked every day, as required by every animal care protocol I am familiar with. However, such protocols would also have required the rats to be sacrificed long before the tumors were able to grow to the sizes that were clearly evident in the photos associated with this study. And the fact that the liver and kidneys were the most affected organs might well have been true for the male rats (and these apparently necrotic tissues were analyzed and reported for these males), but for the female rats, according to the 2012 paper, it was all about the tumors!

Image from Séralini et al. 2012

That was the whole basis of the sensational 2012 paper that actually resulted in entire African countries rejecting all GMO imports. Reread that previous sentence because it shows the power of this one, poorly-designed study with 120 rats.

So livers were being harvested from these 20 females – several of which were compromised and euthanized “early” (2 from the control group, and 5 from the “treatment” group) at different ages due to the tumor load. Is it not obvious that these additional factors of tumor load and different ages would confound any data collected from their livers?

The 2015 paper goes on to show electron microscope analysis of liver sections from females. But it turns out the photograph of the control female was actually the same photo, at a different magnification, as that shown for the control male hepatocyte image in the 2012 paper. The authors have since stated that was an honest mistake and have submitted a corrigendum, but go on to suggest that there are differences in the hepatocytes from Roundup-treated rats, specifically showing “a disruption of glycogen dispersion”, a disruption of nucleolar function, and an overall decreased level of transcription. How transcription can be determined based on an electron micrograph is unclear. No mention is made of the “fatty liver disease” promised in the 2017 paper’s title.

So let me sum this up for those of you who may be lost. The original, highly-controversial 2012 study was done on 120 rats. The most recent study was performed on the livers of a subset of 10 female control rats and 10 female rats from that same 2012 study that were in “Roundup group (A)” which received 50 ng/L glyphosate equivalent dilution in their water. We do not know their water intake so have no idea of actual dosage of “Roundup”; we have little histological data on female liver samples – let alone a diagnosis of fatty liver disease; we know that the control and “treated” rats were euthanized at a variety of differing ages, and that the majority of these female rats had huge tumors that required several of the rats in both the control and Roundup groups to be euthanized before two years of age. And the livers from these 20 rats were the basis of the most recent “omics” paper. There is a saying in science (and perhaps other disciplines): “garbage in – garbage out”.

So let’s plough on – and read the 2017 paper, which concludes that the metabolome and proteome analyses of the livers from the “Roundup-drinking” rats versus the controls “showed a substantial overlap with biomarkers of non-alcoholic fatty liver disease and its progression to steatohepatosis”. Hooray – now THERE is a testable hypothesis – so what ARE the biomarkers of non-alcoholic fatty liver disease? In other words, what proteins and metabolites might you expect to see upregulated (or downregulated) if in fact animals had non-alcoholic fatty liver disease? I have read the paper several times now and seen no reference to a paper that answers that question. So in the absence of knowledge of fatty liver biomarkers, and given the fact that no pathology diagnosed “fatty liver disease”, to conclude that “Multiomics reveal non-alcoholic fatty liver disease in rats following chronic exposure to an ultra-low dose of Roundup herbicide” is – to put it kindly – overstating the results of the research and drawing conclusions beyond those supported by the data.

Interestingly, the bioinformatics analysis in this 2017 paper appears to be an improvement on previous works by this group, in that the p-values were adjusted to account for the high number of molecules measured (1,906 proteins and 673 metabolites), and the consequent need to do corrections for multiple comparisons to try to minimize the number of false positives. The authors even include a discussion of the need for corrections for multiple comparisons on page 9, and correctly state that there is a need to do this when measuring hundreds or thousands of observations to reduce the chance of making a type I error (false positive). However, they then lament the fact that there was a lack of statistical significance following the multiple comparison correction for all but three metabolites, due to the small sample size. That is the point! That is why these studies need to have sample size determinations based on the hypothesis being tested.
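To make the multiple comparison point concrete, here is a minimal sketch (entirely my own illustration, with made-up p-values, not the paper’s data) of the Benjamini-Hochberg false discovery rate adjustment implemented from its definition. Notice how marginal raw p-values in the 0.03–0.045 range stop being significant once the number of tests is accounted for:

```python
# Illustrative sketch with made-up p-values (not the paper's data):
# the Benjamini-Hochberg procedure, implemented from its definition.

def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values (q-values) in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted

# Hypothetical results: a few small p-values among 100 tests overall.
pvals = [0.0001, 0.0004, 0.03, 0.04, 0.045] + [0.2 + 0.001 * k for k in range(95)]
qvals = benjamini_hochberg(pvals)

print(sum(p < 0.05 for p in pvals))  # 5 "significant" before correction
print(sum(q < 0.05 for q in qvals))  # 2 survive the correction
```

The two strong signals survive; the three marginal ones are recognized as the kind of result that 100 tests will throw up by chance.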

This study, which was based on the experimental design of a 90 day subchronic toxicity study (OECD, 1998) such that 10 animals were assigned to each group, was critiqued by the German Federal Institute for Risk Assessment (BfR) for small sample size for that very reason.

subchronic studies show a substantially lower variation of age-related pathological changes between animals within a group while those changes are inevitable in long-term studies. As the published study has confirmed, the two-year duration of the study is of the order of the expected life span in rats including the Sprague Dawley strain that was used in the study. This strain, provided by the breeder Harlan, is known to develop spontaneous tumors, particularly mammary and pituitary tumors, at relatively high rates compared to other strains (Brix et al., 2005; Dinse et al., 2010). Therefore, it can be expected that a significant number of animals develop age-related illnesses or die for diverse reasons already during conduct of the study. The distribution of the cases of death between groups can be random, and a number of 10 animals per sex and group is too low to confirm a trend or an effect. Furthermore, no statements on statistically significant dose-response-relationships can be made. Larger sample sizes, as recommended for carcinogenicity studies in OECD Test Guidelines No. 451 or No. 453, would be required in order to allow precise statements with respect to the findings.

In other words, you need to have bigger sample sizes to perform long-term studies because many changes are associated with old age – especially when working with a rat strain that is known to develop spontaneous tumors, particularly female mammary and pituitary tumors!
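For a sense of the numbers, here is a back-of-the-envelope sample size sketch (my own illustration using the standard normal-approximation formula, not a calculation from any of the papers) for a two-sided comparison of two groups at alpha = 0.05 and 80% power:

```python
import math

# Normal-approximation sample size for a two-sample, two-sided comparison:
# n per group ~ 2 * ((z_alpha/2 + z_beta) / d)^2, where d is the effect
# size in standard deviation units. The z values below fix alpha = 0.05
# (two-sided) and power = 0.80.

def n_per_group(effect_size_sd, z_alpha=1.96, z_beta=0.84):
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size_sd) ** 2)

for d in (1.0, 0.5, 0.25):
    print(d, n_per_group(d))  # 16, 63, and 251 animals per group
```

On this approximation, ten animals per group only delivers 80% power for effects larger than roughly 1.25 standard deviations – a huge effect by the standards of chronic toxicity endpoints, which is exactly why the BfR called the group size too low.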

Frustratingly, when the multiple comparison correction in the 2017 paper left all but three of the 673 metabolites statistically non-significant, the authors just went ahead and included the 55 that had a significant uncorrected p-value(!), because “the non-adjusted statistically significant levels” fit a narrative, and so were revived from the statistical trash can on the basis that “they were found to be non-random and thus biologically meaningful”. This is the very definition of confirmation bias, which is what multiple comparison correction and correct experimental design are trying to weed out, because scientists are people too, and they are not without their own preconceived notions of how the world works.
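A quick bit of arithmetic (mine, using only the paper’s stated count of 673 metabolites) shows why those 55 uncorrected “hits” are unimpressive: even if Roundup affected nothing at all, testing 673 metabolites at an uncorrected p < 0.05 would be expected to flag about 34 of them by chance alone:

```python
import random

# With 673 metabolites and no correction, how many "significant" results
# does chance alone produce when nothing is actually different?
n_metabolites = 673
alpha = 0.05

expected = n_metabolites * alpha
print(expected)  # about 33.6 false positives expected under the null

# Sanity-check by simulation: under the null, p-values are uniform(0, 1).
random.seed(1)
trials = 2000
hits = [sum(random.random() < alpha for _ in range(n_metabolites))
        for _ in range(trials)]
print(sum(hits) / trials)  # hovers around 33.6
```

So the 55 reported metabolites are not far from what a null experiment would produce, which is precisely what the multiple comparison correction was telling the authors.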

More concerning, this 2017 paper is yet another in a string of papers from this group that was accepted in a peer-reviewed journal, in this case Scientific Reports, an online journal from the publishers of Nature. The problems in experimental design, lack of supporting pathology data on the test subjects, and wildly subjective overinterpretation of the results should have been grounds for soundly rejecting this manuscript. We live in an age of the willful neglect of scientific evidence, and the emergence of “alternative facts” and realities. As a scientist it worries me that papers like this are published in apparently respected journals. I remember once hearing a member of the activist industry say that “peer-reviewed journals are the tool of the enemy”, suggesting they were the gold standard communication tool for scientists to report inconvenient facts. At the time I did not appreciate the importance of that statement, and concerningly it appears that this is no longer the case. If we can’t trust the peer-review process to ensure the integrity of papers published in scientific journals, what can we trust? This is a problem that should worry the entire scientific community, not only those concerned with the topic of this particular paper.

FDA seeks public comments on regulation of genetically altered animals

The recently released FDA guidance for producers and developers of genetically improved animals and their products, which defines all intentional DNA alterations in animals as drugs irrespective of their end-product consequences, is nonsensical.

FDA “Guidance for Industry #187” updates the never-finalized 2009 document “Regulation of Genetically Engineered Animals Containing Heritable rDNA Constructs” to the much more expansive “Regulation of Intentionally Altered Genomic DNA in Animals” to expand the scope of the guidance to address animals intentionally altered through use of genome editing techniques. No longer is it the presence of an rDNA construct (which conceivably COULD have encoded a novel allergen or toxic protein) that triggers FDA regulatory oversight of genetically engineered animals, but rather it is the presence of ANY “intentionally altered genomic DNA” in an animal that triggers oversight. Intention does not equate to risk. This trigger seems to be aimed squarely at breeder intention and human intervention in the DNA alteration.

DNA is generally regarded as safe. We eat it in every meal, and along with each bite we consume billions of DNA base pairs. Each individual differs from another by millions of base pair mutations – we are always consuming DNA alterations – the mutations that provided the variation that enabled plant and animal breeders to select corn from Teosinte and Angus cattle from Aurochs.  DNA does alter the form and function of animals – and all living creatures – it is called the genetic code, the central dogma,  and evolution. If DNA is a drug then all life on Earth is high.

The guidance states that “intentionally altered genomic DNA may result from random or targeted DNA sequence changes including nucleotide insertions, substitutions, or deletions”; however, it clarifies that selective breeding, including random mutagenesis followed by phenotypic selection, is not included as a trigger. So the random DNA alterations that result from de novo or chemical-induced mutagenesis will not be a trigger, but intentional, precise, and known alterations – and any off-target random changes that might be associated with the intended edit – will trigger regulation, irrespective of the attributes of the end product. This is beyond process-based regulation; it is regulation triggered by human intent. That is, if a breeder was involved, then it is regulated. If random mutations happened in nature or due to uncontrolled mutagenesis – not regulated.

This sounds a lot like what Greenpeace is arguing for when they state that a GMO is when “the genetic modification is enacted by heritable material (or material causing a heritable change) that has, for at least part of the procedure, been handled outside the organism by people.” The problem is that risk is associated with the attributes of the product, not the fact that it is handled by people or carries the taint of human intention.

This approach is the polar opposite of the 2016 National Academies report, which concluded that the distinction between conventional breeding and genetic engineering is becoming less obvious. They reasoned that conventionally bred varieties are associated with the same benefits and risks as genetically engineered varieties. They further concluded that a process-based regulatory approach is becoming less and less technically defensible as the old approaches to genetic engineering become less novel and as emerging processes — such as gene editing — fail to fit current regulatory categories of genetic engineering. They recommended a tiered regulatory approach focused on intended and unintended novel characteristics of the end product resulting from the breeding methods that may present potential hazards, rather than focusing regulation on the process or breeding method by which that genetic change was achieved.

The new FDA Guidance, released two days before Trump’s inauguration, then goes on to state “a specific DNA alteration is an article that meets the definition of a new animal drug at each site in the genome where the alteration (insertion, substitution or deletion) occurs.  The specific alteration sequence and the site at which the alteration is located can affect both the health of the animals in the lineage and the level and control of expression of the altered sequence, which influences its effectiveness in that lineage. Therefore, in general, each specific genomic alteration is considered to be a separate new animal drug subject to new animal drug approval requirements.” So every SNP is potentially a new drug, if associated with an intended alteration.

To put this in perspective, in one recent analysis of whole-genome sequence data from 234 taurine cattle representing 3 breeds, >28 million variants were observed, comprising insertions, deletions and single nucleotide variants. A small fraction of these mutations have been selected owing to their beneficial effects on phenotypes of agronomic importance. None of them is known to produce ill effects on the consumers of milk and beef products, and few impact the well-being of the animals themselves.

What is not clear is how developers are meant to determine which alterations are due to their “intentions”, and which result from spontaneous de novo mutations that occur in every generation. Certainly breeders can sequence to confirm the intended alteration, especially if they are inserting a novel DNA sequence, but how can they determine which of the random nucleotide insertions, substitutions, or deletions are part of the regulatory evaluation, and which are exempt as random mutagenesis? And if there is risk involved with the latter, why are only the random mutations associated with intentional modifications subject to regulatory evaluation? And what if the intended modification is a single base pair deletion – will the regulatory trigger be the absence of that base pair – something that is not there?

Many proposed gene editing applications will result in animals carrying desirable alleles or sequences that originated in other breeds or individuals from within that species (e.g. hornless Holsteins were edited to carry the Celtic polled allele found in breeds like Angus). As such, there will be no novel combination of genetic material or phenotype (other than hornless). The genetic material will also not be altered in a way that could not be achieved by mating or techniques used in traditional breeding and selection. It will just be done with improved precision and minus the linkage drag of conventional introgression.

Does it make sense to regulate hornless dairy calves differently to hornless beef calves carrying the exact same allele at the polled locus? Does it make sense to base regulations on human intent rather than product risk? Regulatory processes should be proportional to risk and consistent across products that have equivalent levels of risk.

There is a need to ensure that the extent of regulatory oversight is proportional to the unique risks, if any, associated with the novel phenotypes, and weighed against the resultant benefits. This question is of course important from the point of view of technology development, innovation and international trade – and, quite frankly, the ability of the animal breeding community to use genome editing at all.

Given there is currently not a single “genetically engineered animals containing heritable rDNA construct” being sold for food anywhere in the world (see my BLOG on AquAdvantage salmon), animal breeders are perhaps the group most aware of the chilling impact that regulatory gridlock can have on the deployment of potentially valuable breeding techniques. While regulation to ensure the safety of new technologies is necessary, in a world facing burgeoning animal protein demands, overregulation is an indulgence that global food security can ill afford.

I urge the scientific community – including those not directly impacted by this proposed guidance because animal breeders are a small community – to submit comments to the FDA on this draft revised guidance #187 during the 90-day comment period which closes June 19, 2017. There are several questions posted there asking for scientific evidence demonstrating that there are categories of intentional alterations of genomic DNA in animals that pose low to no significant risk. Centuries of animal breeding and evolution itself would suggest there are many.

There is also a request for nomenclature for the regulatory trigger as outlined in the draft revised guidance. The FDA used the phrase “animals whose genomes have been altered intentionally” to expand their regulatory reach beyond genetically engineered animals containing heritable rDNA constructs (aka drugs), but suggested that other terms that could be used include “genome edited animals,” “intentionally altered animals,” or expanding the term “genetically engineered” to include the deliberate modification of the characteristics of an organism by manipulating its genetic material. They encourage the suggestion of other phrases that are accurate and inclusive. I can think of a couple!

Who does fund university research?

This is a follow-up to my BLOG last week about “Who should fund university research”? I thought it might be illustrative to examine actual data from my university. Not surprisingly for a large enterprise, UC Davis tracks sources of all monies coming into the university, and oversees the expenditure of such funds.

There are two basic ways research funding can come into the university – as a formal contract or grant, or as a donation. In the former case, there is some type of grant application or description of work to be carried out (but not what the results of the research will be!!!) for which the funding is provided; in the latter case, it is what is called an “unrestricted” donation. This is money that is directed towards an individual professor, program or department with no further specification as to what the money is to be used for. Of course such funding is still managed by the university, and can’t be used for a vacation to Hawaii. Often it is used as seed funding to undertake a professor’s favorite research idea, perhaps one that is a bit too “out there” and risky to secure traditional grant funding in the absence of supporting preliminary data. In that sense it is like a donation to your favorite charity: you donate the money because you like the type of work that charity does. However, you cannot directly specify exactly what the charity is to do with the money you donated.

Grants and contracts

These are the monies that really run research programs. The total of awards by calendar year at UC Davis is in the ballpark of $750 million (i.e. three quarters of a billion). That is a lot of money, but UC Davis is a big university with a medical school which includes a hospital, a veterinary school, and all of the colleges that make up the campus. If we pessimistically (realistically) assume a 10% funding rate of public research funding, that means the UC Davis faculty are on average writing $7.5 billion worth of grants each year, and are successfully bringing in one tenth of that. And to reiterate, these funds are used to support graduate students, buy research supplies, perform experiments and advance knowledge. UC Davis is a powerful economic engine for California, generating $8.1 billion in statewide economic activity and supporting 72,000 jobs.

The approximate breakdown for the $786 million received in fiscal year 2014-15 was $427 million (54%) in awards from the federal government (with a big chunk of research funding likely also coming from the state government), $66.1 million (8.4%) in awards from foundations, and $59.4 million (7.6%) in awards from industry sponsors. I think that is an interesting point: UC Davis receives more sponsored research funding from foundations than it does from industry sponsors. The School of Medicine received the largest share of research grants at UC Davis with $264 million (34%), followed by the College of Agricultural and Environmental Sciences at $155 million (20%), and the School of Veterinary Medicine at $114 million (14.5%).


Donations

This pool of monies is more modest than that brought in by grants and contracts. I could only get this data for the fiscal year, rather than the calendar year, but it is in the vicinity of $200 million. Now the question that perhaps has been asked most frequently is how much funding is coming from specific companies – specifically those associated with the so-called “Agrochemical academic complex”? That all depends upon how you define such industries, but let’s go with the so-called “Big 6”; that is Monsanto, Syngenta, Bayer, BASF, DuPont/DuPont Pioneer, and Dow.

The following table has the breakdown of total grants and contracts, donations, and those two figures totaled, and then the breakout of how much of that funding (and the percentage of total) came cumulatively from the “Big 6” in recent years. (The numbers differ slightly from those above due to fiscal versus calendar year accounting.)

Year                  2012                2013                2014                2015
Grants/Contracts      699,728,437         718,934,464         751,864,525         793,797,558
   From “Big 6”       1,407,821 (0.20%)   477,178 (0.07%)     881,856 (0.12%)     746,160 (0.09%)
Donations             132,451,535         149,134,036         165,704,178         184,180,960
   From “Big 6”       768,172 (0.58%)     1,386,079 (0.93%)   858,912 (0.52%)     n/a
TOTAL                 832,179,972         868,068,500         917,568,703         977,978,518
   From “Big 6”       2,175,993 (0.26%)   1,863,257 (0.21%)   1,741,768 (0.19%)   n/a

So in summary, at what is arguably the number one ranked agricultural research university in the world, the proportion of funding coming from the “Big 6 Agrochemical academic complex” funders is approximately $2 million per year, well under one half of one percent of total research funding received by the campus. To put that in perspective, the College of Agricultural and Environmental Sciences alone has 330 faculty members and 1,000 graduate students. Two million dollars is approximately what it takes to fully fund ~35 graduate students for a year.

So what is the money being used for?

Not surprisingly, most of the funding from the “Big 6” was associated with research in the plant sciences and entomology. Some went to the medical school because the search for “Bayer” also captured research funding sponsored by “Bayer Healthcare”. A number of the donations were to Cooperative Extension county-based advisors performing field research with various crops. And just for transparency, none of it was directed to my research program (which is not surprising as I work on animals, not plants!). Some was earmarked for work in specific crops like figs, pistachios, strawberries, rice, onions, woody crops and viticulture. And that is not surprising because California grows hundreds of specialty crops. Noticeably, none of these crops have commercialized genetically engineered varieties, and their breeding programs are mostly run by public sector scientists. The one thing California does not grow much of is large-acreage corn and soybeans. We do not have the right climate and conditions for these crops, and there are high-value alternative crops that CA farmers choose to grow. As a result, UC Davis does not do much research in these field crops, and the university therefore does not get much industry research funding for work in these crops.

I would wager that the University of Kentucky, home of the Kentucky Derby, probably has industry funding supporting its equine science program, ’cause they have a huge equine industry in that state. In general, when a university has an important industry in its state, that industry helps to support research at that state-located public university. And in the case of California there is an amazing number of agricultural commodities grown – the fruit and vegetable industry raises a cornucopia of varieties in the state, and UC Davis has renowned brewing and wine making programs. As an example, the brewing science program at UC Davis has received several sizable donations from industry, including the recent $2 million donation from the owners of the local Sierra Nevada Brewing Company. Cheers to science-based beer brewing and wine making!

How does this breakdown compare to other land grant universities?

My colleague Kevin Folta at the University of Florida posted this useful graphic for the Gators.

Funding to University of Florida FY 2015-2016 broken down by funding source

In the case of the University of Florida, the faculty brought in $140 million in sponsored funding in FY 2015-16, and of that 70% was from federal agencies, 15.5% was from foundations, and 3.5% was from corporations and industry. Kevin makes the observation in his blog regarding agricultural industry funders:

“They are frequently the beneficiaries of increased knowledge in agriculture, as well as the training and education we provide to the next generation of scientists”. I look forward to his next blog piece, where he promises to write about whether industry support of science matters.

So there you have it – or at least a snapshot from two large agricultural universities as to which entities fund universities. By far the biggest source of funding is federal research grants – as might be expected at a public university.

Now I must go and focus my efforts on writing my next federal grant application – which unfortunately has a ~90% probability of not being funded and will likely only ever be read by 2 grant reviewers. Compare that to this blog, which has a 100% chance of not securing funding for my research program, but which hopefully will be of interest to more than 2 readers.

I would appreciate your comment on a recently published study

The email was simple enough. It was a request from a member of the press asking “I would appreciate your reaction/comments to the recently published study on GMO corn for an article I am putting together on it. Deadline: Wednesday 4 January.”

Just when I thought I was going to get a day off to myself to write up my own research results, in comes the dreaded time-sensitive press request for comments on a recently published paper. Dreaded because to respond properly means I need to sit down and read the whole paper and ensure I have understood the materials and methods, results, and discussion. For me that is a commitment of a couple of hours. And to top things off – it was a paper by Mesnage from France’s infamous Séralini group whose previous works have had numerous flaws. But I made a New Year’s Resolution to be more active in critiquing agricultural science and can’t in good faith renege on that resolution on January 2nd.

The paper’s title “An integrated multi-omics analysis of the NK603 Roundup-tolerant GM maize reveals metabolism disturbances caused by the transformation process” suggested the researchers had uncovered some altered metabolic processes caused by the transformation process used to create the NK603 Roundup-tolerant genetically modified (GM) maize line. This event was achieved by direct DNA transformation by microparticle bombardment of plant cells with DNA-coated gold particles and regeneration of plants by tissue culture on selective medium. This transformation process presumably happened last century, as the feed/food approval for this line in the United States occurred in 2000. However, upon reading the abstract, it became clear the paper had nothing to do with disturbances caused by the transformation process; rather, it was about whether the product of this transformation event was “substantially equivalent” based on proteomics and metabolomics evaluation. Strangely, the “conclusiony”-sounding title of the paper therefore had nothing to do with the experimental design or findings discussed in the paper.

According to the results section, the actual “objective of this investigation was to obtain a deeper understanding of the biology of the NK603 GM maize by molecular profiling (proteomics and metabolomics) in order to obtain insights into its substantial equivalence classification.” In plain English – the intent of the paper was to examine both proteins and metabolites found in NK603 Roundup-tolerant GM maize (both treated and untreated with Roundup), and non-GM isogenic lines to determine if the three groups were substantially equivalent using sensitive “-omics” assays.

To perform such an evaluation requires a common agreement as to what substantial equivalence means, and what constitutes an appropriate comparator(s). Unfortunately, no such common understanding exists. According to an OECD publication in 1993, substantial equivalence is a concept which stresses that an assessment of a novel food, in particular one that is genetically modified, should demonstrate that the food is as safe as its traditional counterpart. This has been interpreted to mean that the levels and variation for characteristics in the genetically modified organism must be within the natural range of variation for those characteristics considered in the comparator.

And this brings up the issue of an appropriate comparator. Typically this involves the comparison of key compositional data collected from both the recombinant-DNA crop plant and the isogenic non-GM counterpart, grown under near identical conditions. Ideally, conventional non-GM corn hybrids are also included in analyses to determine the level of natural variation for compositional data in conventional varieties that are considered to be safe for consumption based on a history of safe use.

According to the original studies of the NK603 GM maize variety, compositional analyses were conducted on the key corn tissues, grain and forage, produced in multiple locations (Kansas, Iowa, Illinois, Indiana, and Ohio in 1998, and in trials in Italy and France in 1999). Grain and forage samples were taken from plants of the corn event NK603 and the non-modified control in both years. In the E.U. field trials, reference grain and forage samples also included 19 conventional, commercial hybrids. The NK603 plants were treated with Roundup Ultra herbicide. Fifty-one different compositional components were evaluated.

Not surprisingly, there are protocols on how best to carry out experiments on GM crops that are accepted by regulatory agencies world-wide (OECD 2006; Codex 2009). According to EFSA, for compositional analysis risk assessment, field trials will include: the GM plant under assessment, its conventional counterpart (isogenic non-GM counterpart), and non-GM reference varieties, representative of those that would normally be grown in the areas where the field trials are performed. The latter puts some figures and context to the natural biological variation in the different plant varieties we commonly consume.

So what did the Mesnage paper in question do? The researchers planted a single replicate of the GM plant under assessment (DKC 2678 Roundup-tolerant NK603) and its conventional counterpart (DKC 2575, although the exact genetic makeup of this line, and whether it is a true isogenic counterpart, is not well elaborated in the paper) at a single location in two different years. Half of the GM plants each year were treated with Roundup. Then the corn kernels were harvested, the proteins and metabolites from the three groups were assayed using proteome and metabolome profiling, and the data from the two years were merged and analyzed. The three groups (the isogenic non-GM counterpart, the GM plant without Roundup treatment, and the GM plant with Roundup treatment) separated into three distinct clusters based on a principal component analysis (PCA).

Integration of metabolome and proteome profiles of the NK603 maize and its near-isogenic counterpart into a multiple co-inertia analysis projection plot.

I draw your attention to a very similar graph (below) in a paper I recently published, which shows a PCA of the transcriptome (genes expressed) from cattle that have been exposed to different viruses and bacteria. Basically, PCA can pull apart patterns of gene expression in different groups of cattle in response to the specific environmental challenges they are facing. The controls can clearly be seen clustering down in the bottom right corner, and the bacterial infections tend to cluster to the right, differently from those infected with viruses, which cluster to the left.

Multidimensional scaling plot of samples based on all genes

That is – if you expose plants or animals to different environmental or disease challenge conditions – they express different genes in response. That is typically why researchers do “–omics” studies: to try to identify which genes/proteins/metabolites respond to different environmental conditions. What they do not show is whether any beef that might be derived from these animals would be unsafe to eat – every animal and plant ever eaten is likely unique in terms of its exact protein and metabolite profile, depending upon its unique environmental conditions and stressors.
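To make the idea concrete, here is a minimal sketch (not from any of the papers discussed; the group sizes, gene counts, and effect size are all invented for illustration) of how PCA separates simulated expression profiles from two hypothetical treatment groups:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: expression of 50 genes in 10 "control" and 10 "treated"
# samples. The treatment shifts the mean of the first 10 genes; the rest is noise.
control = rng.normal(0.0, 1.0, size=(10, 50))
treated = rng.normal(0.0, 1.0, size=(10, 50))
treated[:, :10] += 3.0  # the simulated environmental/treatment effect

X = np.vstack([control, treated])

# PCA via singular value decomposition of the mean-centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # project each sample onto the first two PCs

# The two groups land far apart along PC1, just as treatment groups
# separate into distinct clusters in the PCA plots described above.
separation = abs(scores[:10, 0].mean() - scores[10:, 0].mean())
print(f"group separation on PC1: {separation:.1f}")
```

The point of the sketch: a consistent environmental difference between groups shows up as separation in a PCA plot, but that separation alone says nothing about whether either group is unsafe.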

Unfortunately there are a number of experimental design problems with the Mesnage et al. (2016) paper that complicate the interpretation of the results, and, just as concerning, there appear to be confounders that further complicate the analyses.

These include:

  • Only a single replicate of each treatment (n=1) at a single location (over two years) was analyzed, with no biological replication or randomization of locations to remove site variability.
  • The data from the two cultivations in different years were inexplicably merged prior to analysis, which made it impossible to determine whether results or trends were consistent or reproducible between years.
  • No inclusion of non-GM reference varieties (conventional commercial hybrids) representative of those that would normally be grown in the areas where the field trials are performed, to put some figures and context to the natural biological variation in the composition of non-GM corn comparators.
  • No discussion of correction for multiple comparisons (by chance, one in every 20 comparisons would be expected to be significant at the p<0.05 level). When making multiple comparisons it is necessary to apply a multiple-comparison correction.
  • There appears to be evidence of different levels of fungal (Gibberella moniliformis Maize ear and stalk rot fungus) protein contamination between the three groups. See Supplemental Dataset 5, where Tubulin alpha chain OS=Gibberella moniliformis (strain M3125 / FGSC 7600) appears as the protein that had the biggest fold change between control and GM lines. If there were differing levels of fungal infestation among the groups this would also confound the data.

Others have commented on some of their concerns with this paper including a comprehensive analysis from a number of scientists with expertise in this area. There were also comments from European experts from the science media center. And another discussed the definition and importance of true isogenic lines.

Based on significant differences between proteins and metabolites, including the rather alarmingly named putrescine and cadaverine, which were markedly increased in the GM NK603 corn (N-acetyl-cadaverine (2.9-fold), N-acetylputrescine (1.8-fold), putrescine (2.7-fold) and cadaverine (28-fold)), Mesnage et al. (2016) concluded that NK603 and its isogenic control line are not substantially equivalent, meaning that there were statistical differences between the proteins and metabolites found in the three groups. However, what is not clear is whether the levels and variation for characteristics in the genetically modified organism or the control were within the natural range of variation for those characteristics in corn, or what the biological significance of the statistical differences is in terms of posing a food safety concern. Differences between the GM variety in the presence and absence of Roundup would presumably be similar to the differences that occur every time a crop is treated with an herbicide, be the plant GM or not.

I could not resist looking up these two metabolites putrescine and cadaverine which seem like they should more appropriately be associated with a decaying animal corpse.  According to Wikipedia, “Putrescine, or tetramethylenediamine, is a foul-smelling organic chemical compound that is related to cadaverine; both are produced by the breakdown of amino acids in living and dead organisms and both are toxic in large doses. The two compounds are largely responsible for the foul odor of putrefying flesh, but also contribute to the odor of such processes as bad breath and bacterial vaginosis. More specifically, cadaverine is a foul-smelling diamine compound produced by the putrefaction of animal tissue.”

So what are these two horrifying compounds doing in corn samples? Enquiring minds needed to know. So being a good scientist I googled “Cadaverine in corn”, and lo and behold, there was a peer-reviewed study. Check out Table 1. Mean levels of free bioactive amines in fresh, canned and dried sweet corn (Zea mays).

According to this study on “Bioactive amines in fresh, canned and dried sweet corn, embryo and endosperm and germinated corn”, “Different levels of amines in corn products were reported in the literature. Okamoto et al. (1997) found higher concentrations of putrescine and spermidine in fresh corn. Zoumas-Morse et al. (2007) reported lower spermidine and putrescine levels in fresh and canned corn. The differences observed on the profile and levels of amines may be related to several factors such as cultivars, cultivation practices, water stress, harvest time, grain maturity, types of processing and storage time.” In other words, there is a lot of natural biological variation in the different plant varieties we commonly consume with regard to the amount of amines in corn products, and yet we commonly and safely consume fresh, canned and dried sweet corn. If you really want to get nerdy, there are databases of polyamines in food.

As the multi-omics analysis of the NK603 Roundup-tolerant GM maize paper by Mesnage correctly states, “the vagueness of the term substantial equivalence generates conflict among stakeholders to determine which compositional differences are sufficient to declare a GMO as non-substantially equivalent.” In the absence of knowledge of the natural variation in proteins and metabolites in the common foods we eat, the level of different proteins and metabolites that trigger a safe/unsafe determination, and a testable hypothesis at the outset of an experiment, undisciplined “-omics” studies risk becoming statistical fishing trips.

As someone who works in genomics and knows the tens or even hundreds of thousands of statistical comparisons that are part of genomic analyses, there is a real need to understand the statistical methods required for multiple comparisons. If 10,000 comparisons are made at the p<0.05 rate, 500 would be expected to be statistically significant by chance alone. The biological relevance of statistical differences is also not always clear as discussed here. According to the European Food Safety Authority (EFSA) Scientific Committee,  good experimental design requires that “the nature and size of biological changes or differences seen in studies that would be considered relevant should be defined before studies are initiated. The size of such changes should be used to design studies with sufficient statistical power to be able to detect effects of such size if they truly occurred.”
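The arithmetic behind this concern is easy to simulate. In this purely illustrative sketch (not drawn from any paper discussed here), every one of 10,000 comparisons is a true null, so a naive p<0.05 threshold still flags roughly 500 of them, while a Bonferroni-corrected threshold flags essentially none:

```python
import numpy as np

rng = np.random.default_rng(42)

# 10,000 comparisons where the null hypothesis is true for every one:
# the p-values are then uniformly distributed on [0, 1].
n = 10_000
pvals = rng.uniform(0.0, 1.0, size=n)

naive_hits = int((pvals < 0.05).sum())           # roughly n * 0.05 = 500 by chance
bonferroni_hits = int((pvals < 0.05 / n).sum())  # corrected threshold; usually 0

print(naive_hits, bonferroni_hits)
```

Every one of those ~500 naive “hits” is a false positive, which is exactly why multiple-comparison corrections (Bonferroni, Benjamini-Hochberg, and the like) are standard practice in “-omics” analyses.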

In the first line of the discussion Mesnage et al. state “In this report we present the first multi-omics analysis of GM NK603 maize compared to a near isogenic non-GM counterpart”. There are actually two relevant papers on the NK603 line here and here that were published in 2016 but which were inexplicably not even cited in the Mesnage publication. The latter paper is entitled “Evaluation of metabolomics profiles of grain from maize hybrids derived from near-isogenic GM positive and negative segregant inbreds demonstrates that observed differences cannot be attributed unequivocally to the GM trait”, which compared differences in grain from corn hybrids derived from a series of GM (NK603, herbicide tolerance) inbreds and corresponding negative segregants. The authors concluded “Results demonstrated that the largest effects on metabolomic variation were associated with different growing locations and the female tester. They further demonstrated that differences observed between GM and non-GM comparators, even in stringent tests utilizing near-isogenic positive and negative segregants, can simply reflect minor genomic differences associated with conventional back-crossing practices.”

Moreover, a 2013 meta-analysis by Ricroch examined data from 60 high-throughput ‘-omics’ comparisons between GE and non-GE crop lines. There are several papers on compositional data in GE versus non-GM corn varieties (here, here, here, here, here, here, here, here). The overwhelming conclusion common to these papers is that natural variation due to varying genetic backgrounds and environmental conditions explained most of the variability among the samples. And yet this nuance is missing in the 2016 Mesnage paper – consideration of factors other than the genetic modification and the Roundup treatment that could have influenced the results, given the poor experimental design, is simply ignored. This tends to be a common feature of this research group – ignoring standard experimental design protocols such as randomization and biological replication, and cherry-picking cited literature while ignoring contradictory or preceding studies with dissimilar results, rather than discussing their results in the context of what is known based on the entire weight of evidence in the scientific literature.

Ricroch in her meta-analysis summarized that “The ‘-omics’ comparisons revealed that the genetic modification has less impact on plant gene expression and composition than that of conventional plant breeding. Moreover, environmental factors (such as field location, sampling time, or agricultural practices) have a greater impact than transgenesis. None of these ‘-omics’ profiling studies has raised new safety concerns about GE varieties.”

Interestingly, one study showed that transcriptome alteration was greater in mutagenized plants than in transgenic plants. Of course, the random mutations associated with mutation breeding undergo no regulatory evaluation or substantial equivalence assessment prior to commercialization. Variation is the driver of breeding programs, and the reason that varieties like Red Delicious and Golden Delicious apples differ from each other in the first place.

Finally Mesnage et al. acknowledge funding from “The Sustainable Food Alliance” for their paper. There is no link as to which groups or interests provide funding for this Alliance. This is not reassuring and runs counter to the absolute transparency of all funding sources that is being demanded of public sector researchers working in this field.

At the end of the day if I have concerns about a paper by a group that has a track record of publishing highly controversial studies, I like to go back to the Nature graphic shown above to see how many red flags are raised. In this case there were a few, most particularly around experimental design and omitting references and discussion of the finding of other “-omics” studies which have consistently shown the high levels of natural variation that is seen in the composition of food due to the differing environments experienced by the plants (and animals) we consume.

I know that this is more of a response than any journalist could ever use, but as with most everything in agriculture, there are no simple sound-bite answers. Having said that, I appreciate the press reaching out to seek comment from scientists and hope that this is increasingly common in 2017. Although taking the time to respond kinda took the rest of my day, so I may have to rethink my New Year’s Resolutions if I plan to get any of my own research done this year. I will worry about that tomorrow when I return to work for the year.

Who should fund University research?

A recent article by Danny Hakim on the so-called “agrochemical academic complex” includes a quote, “If you are funded by industry, people are suspicious of your research. If you are not funded by industry, you’re accused of being a tree-hugging greenie activist. There is no scientist who comes out of this unscathed.”

I beg to differ. My research is not funded by industry, and yet to my knowledge I have never been called a “tree-hugging greenie activist.” Quite the opposite – I have been demonized by groups opposed to genetically modified organisms (GMOs) because my publicly-funded research on animal biotechnology and genomics has occasionally published results in line with the weight-of-evidence around the safety of genetically engineered organisms.

After reading the article, I was left with the conclusion that industry (and by this it appears to be any industry associated with the “agrochemical academic complex”, not the activist or NGO industries – the influence of whose funding is noticeably absent from the article) dictates the outcome of the research and public academics are just hapless puppets to be played to produce favorable outcomes for industry.

That is not how my laboratory works. Nor is it how my department operates. Nor my university. Nor in my experience are the public scientists I know willing to trade their hard-won scientific integrity for research funding. Such a move would be career suicide. Publishing incorrect or false data in science sets off a ticking time bomb for retraction when others are unable to repeat these results.

More generally, the article does not seem to understand how academic funding works. And I have found this misunderstanding to be true in interactions with my friends and neighbors too. My university has little control over what I research – it is called “academic freedom”.

As a researcher at UC Davis, I am provided with my salary and a laboratory. The rest is up to me. What I mean by that is if I want to do original research, I need to obtain research funding to conduct that research. Typically this involves writing research grants.

My university has absolutely no say over the topics I choose to research or where I apply for funding. They do, however, have direct control over how I spend the funding I receive, in terms of ensuring that I abide by all university policies and that the grant monies are spent appropriately.

By FAR the biggest expense in my research program is graduate students. At the current time the annual cost of a UC Davis graduate student is around $26,734 for a 50%-time stipend for 12 months, plus $17,581 for fees and health insurance, for a total of $44,315 annually. So for a 2-year Masters student that is $88,630, and for a 5-year Ph.D. student $221,575 – let’s just call that an even quarter of a million.

And that does not include the University’s “indirect rate”, which currently adds an additional 57% on top of the stipend, so a research grant needs to include an additional $76,192 for that PhD stipend, for a grand total of around $297,767 for a 5-year student, close to an even $300,000 assuming no tuition hikes. And that only gets you 50% of that student’s time; the other 50% of the time they are doing their course work!
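As a quick sanity check, the dollar figures above are internally consistent; this sketch simply reproduces the arithmetic (assuming, as the text implies, that the 57% indirect rate applies to the stipend portion only):

```python
# Back-of-the-envelope check of the graduate student cost figures in the text.
stipend = 26_734          # 50%-time stipend, 12 months
fees = 17_581             # fees + health insurance
annual = stipend + fees   # $44,315 per year

masters_2yr = 2 * annual  # $88,630 for a 2-year Masters student
phd_5yr = 5 * annual      # $221,575 for a 5-year Ph.D. student

# The 57% indirect rate, applied to the stipend portion over 5 years
indirect_5yr = 0.57 * (5 * stipend)  # ~$76,192
phd_total = phd_5yr + indirect_5yr   # ~$297,767 all-in

print(annual, masters_2yr, phd_5yr, round(indirect_5yr), round(phd_total))
```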

Each laboratory is effectively a small business within a much larger entity. Its business is to obtain funding to pay student salaries, conduct research, and publish peer-reviewed articles. The University provides the location and facilities to perform that research. And for that the University taxes a 57% “overhead” rate off the top of the grant award.

If I did not write successful grants to fund graduate students, pay for research supplies, travel to field plots and perform my extension activities, then I would not have a research program or anything to publish in the peer-reviewed literature. And that is how I am evaluated – by my peer-reviewed publications and research productivity, not by the amount or source of the research funding I am able to secure. It is the researcher, the so-called “principal investigator”, who drives this enterprise and makes the decisions on what to research and where to obtain grant funding, not the university.

If the university as an entity happens to obtain money from a donor to build a building, for example the UC Davis Mondavi Center for the Performing Arts built with a gift from the wine industry, that in no way affects my research funding or what I choose to research, or means that researchers have to perform research that is favorable to the wine industry.

If a researcher chooses to seek industry funding, as many do especially in the information technology, transportation and energy sectors, there are strict guidelines around managing potential conflicts of interest and ability to publish the results of the research. According to UC Davis university policy: “Freedom to publish is a major criterion when determining the appropriateness of a research sponsorship; a contract or grant is normally not acceptable if it limits this freedom. The campus cannot accept grants or contracts that give the sponsoring agency the right to prevent for an unreasonable or unlimited time, the release of the results of work performed.”

It is perhaps not well publicized that an inordinate amount of a University professor’s time is spent writing grants to public agencies, which often have a funding success rate of one grant in 10. As such, 9 out of every 10 grants are only ever read by one or two reviewers, and then never see the light of day again. This is not an efficient system, but it is the system we have. If a researcher has public funding, their research has already survived pre-project peer review by the grant panel. Ironically, some have even suggested that public funding is also tainted. If funding from both private and public sources is suspect, where does that leave academic laboratories, and who will pay to train the next generation of scientists?

Writing grants is perhaps (actually for sure) the least favorite part of my job, and it is time-consuming. But it is a necessary evil if I am to perform research and fund my graduate students. I have been fortunate to be able to secure public funding for my research mostly from the United States Department of Agriculture (USDA) (thanks NIFA!), but many other researchers quite appropriately work with industry sponsorship in the development and evaluation of new technologies. Industries of all types seek such partnerships with public universities to obtain an impartial evaluation of their technology.

Demonizing industry funding as unilaterally suspect in the absence of wrongdoing fails to take into consideration the checks and balances that are in place at public universities and the importance of public:private partnerships in the development of new technologies, and the unfortunate reality that there is a paucity of public research funding.

Fishy Business

I was updating a chapter I am writing about the regulations around genetically engineered (GE) animals, and it struck me that both supporters and detractors of GE animals have criticisms of the US regulatory approach.

Criticisms from the biotech industry include:

• the approach is process-triggered rather than being based on the actual, relative risks (if any) associated with the novel phenotype or product as compared to those associated with conventional comparators.
• the process is prohibitively time consuming and expensive to complete all of the mandatory testing. AquaBounty has expended over $60 million in attempting to bring the AquAdvantage® salmon through the regulatory approval process thus far (David Frank, AquaBounty, pers. comm.), and still does not have authorization to market in the US.
• some tests are not based on unique risks (e.g. endogenous allergens), and the absence of a known tolerance level or natural levels of variability means regulatory studies are problematic to design and the results are ambiguous to interpret.
• unpredictable timeline.
• focus is only on risks, no consideration is given to potential benefits of the novel phenotype or product, nor are the risks associated with existing alternative approaches to genetic modification and breeding, or regulatory inaction, considered.

The following table outlines the key events in the timeline of the regulatory process for the AquAdvantage® salmon since its genesis over a quarter of a century ago in 1989. No other GE food animal has been approved anywhere else in the world.

Year: Event
1989: The founder animal from which the AquAdvantage® line was derived was created by microinjection of the transgene into fertilized eggs of wild Atlantic salmon in Canada.
1992: The AquAdvantage® line was created from the F1-progeny of the EO-1α line.
1995: AquaBounty Technologies established an Investigational New Animal Drug (INAD) file with the Center for Veterinary Medicine (CVM) of the U.S. FDA to pursue the development of AquAdvantage® salmon.
2001: First regulatory study submitted by Aqua Bounty Technologies to U.S. FDA for a New Animal Drug Application (NADA).
2008: U.S. FDA approves Aqua Bounty Canada as a manufacturing site for production of AquAdvantage® salmon eggs.
2009: FDA guidance on how GE animals will be regulated; FDA approval of first GE animal pharmaceutical; final AquAdvantage® regulatory study submitted to U.S. FDA; U.S. FDA inspected and approved Aqua Bounty Panamá as an authorized site for the commercial production of AquAdvantage® salmon for purposes of export to the U.S.
2010: September: FDA VMAC meeting on AquAdvantage® salmon.
2011: Political efforts to prevent FDA from regulating GE salmon, ban GE salmon, and delay regulatory approval.
2012: FDA released a finding of no significant impact (“FONSI”) environmental assessment.
2015: November: US FDA AquAdvantage® approval. AquaBounty has expended over $60 million to bring the AquAdvantage® salmon through the regulatory approval process.
2016: January: US FDA issued a ban on the import and sale of GE salmon until FDA “publishes final labeling guidelines for informing consumers of such content”. The ban was the result of language Alaska Sen. Lisa Murkowski introduced into the 2016 fiscal budget, or omnibus, bill. It also authorizes “an independent scientific review” of the effects of GE salmon on wild salmon stocks and for human consumption. Fact check investigated some of her claims about the salmon.
March: a coalition of environmental, consumer, commercial and recreational fishing organizations sues the US FDA over its approval of GE salmon.
May: Canadian approval of AquAdvantage® for sale in Canada.
December: FDA bills AquaBounty for a $113,000 “Animal Drug” user fee for their “approved” animal drug product, despite the continued FDA ban on the import and commercial sale of AquAdvantage® fillets.


Criticisms from the activist industry include:

  • lack of public participation and transparency – although the entire AquAdvantage data package was made publicly available.
  • no consideration of socio-economic issues including ethics and moral concerns.
  • concern about impartiality of the data because the regulatory review studies are being conducted by the company seeking approval of the GE animal.
  • environmental risk is not given appropriate consideration as the National Environmental Policy Act is procedural in nature and does not give the FDA the authority to deny an application on environmental grounds.
  • lack of mandatory process-based labels on food derived from GE animals, although the passage of the 2016 National Bioengineering Food Disclosure Law may pre-empt that concern.

The AquAdvantage® Atlantic salmon carries an “all fish” growth hormone chimeric gene. The rDNA construct consists of an antifreeze protein gene promoter from ocean pout linked to a chinook salmon growth hormone cDNA. It was envisioned that this construct would raise less public concern because of the “all-fish” nature of the rDNA construct and the fact that it does not include any viral or selection marker sequences. Given that the founder of this transgenic line was generated in 1989 and that the commercial product was first approved in December 2015, it is hard to argue that the mandatory process-based regulatory process encountered by the GE AquAdvantage® salmon created “an environment of certainty and confidence for researchers, industry and consumers.” It may even have convinced consumers that, since mandatory regulation is required, it must mean that this fast-growing fish is somehow more intrinsically risky than conventionally bred fast-growing salmon.

One thing is for certain – no small company or university can afford to try to get a GE animal application through the regulatory process given the unpredictability of the timeline and costs. The protracted regulatory evaluation of this fast-growing Atlantic salmon, and the uncertainty associated with the process and the timeline, has had a chilling effect on investment and development of GE animals for food applications in the US.


Teenager raped to death and doughnuts in the surgery room

If I wanted to create a fake news story – I would lead with a sensational headline. Something that would incense and shock the readers and be extra “clickbaity”. Perhaps a hook about a teenager getting raped to death. That should get some serious traffic.

That seemed to be the approach taken by New York Times journalist Michael Moss back in January 2015 when he wrote “Research Lab Lets Livestock Suffer in Quest for Profit”, skewering the US Meat Animal Research Center (US MARC).

Moss began investigating the USDA research facility after being contacted by Dr. James Keen, a disgruntled ex-employee who is prohibited from setting foot in the center. In the article Keen is quoted as saying that in 1989, “There was a young cow, a teenager, with as many as six bulls,”…”the bulls were being studied for their sexual libido, and normally you would do that by putting a single bull in with a cow for 15 minutes. But these bulls has been in there for hours mounting her”…”Her back legs were broken. Her body was torn up.” According to the article, the (teenage) cow died a few hours later.

That is a gut-wrenching image! I have never heard a heifer referred to as a teenager, but that one compelling anecdote was enough for me. If true, this was egregious and unacceptable animal cruelty. However, the scenario did not ring true. I know the protocol for libido testing bulls, and it never involves multiple animals. There would be no reason to EVER put six bulls in with a single heifer for 15 minutes, let alone for hours.

There were numerous other horrific claims in the article. Moss quoted another source, Robert A. Downey, the executive director of the Capital Humane Society in Lincoln, NE, as saying “Experimental surgery is being performed in some (not all) cases by untrained, unskilled and unsupervised staff. This has resulted in the suffering of animals and in some cases the subsequent death of all animals.” During a visit, he said, he saw animals headed to surgery that fell from carts or were pushed to the floor by their handlers, while two other workers in the operating room ate doughnuts.

Again that infuriated me – as an animal scientist I know the very strict animal care protocols we have to comply with to undertake any animal research at UC Davis, even routine herd management is strictly monitored. However, again the scenario did not ring true given what I know of the character of the researchers at US MARC, not to mention the senselessness and research futility of doing a surgery if all the animals subsequently died. That does not pass the common sense smell test. And surgeries do not mix with doughnuts. For obvious reasons.

As I wrote in 2015 “As an animal geneticist, I have worked with the researchers in the Genetics, Breeding and Animal Health research unit at MARC, and personally visited the center on several occasions over the past decade. The story published by The New York Times does not reflect my knowledge of the current research that is being conducted at the center. Nor does it in any way align with my observations as it relates to the handling and treatment of animals at the center.”

Even back then I challenged Michael Moss’ statement that “the center has one overarching mission: helping producers of beef, pork and lamb turn a higher profit.” It is unclear where that demonstrably incorrect statement originated. Moss’ personal opinion perhaps. MARC’s publicly-available mission statement is to develop “scientific information and new technology to solve high priority problems for the U.S. beef, sheep, and swine industries.” Even the article itself then goes on to state that since MARC was founded 50 years ago, it “has fought the spread of disease, fostered food safety, and helped American ranchers compete in a global marketplace.”

However, it was the allegations of the death raping of the teenage cow, and of experimental surgery being undertaken by untrained, unskilled and unsupervised staff, that really stuck in my mind. I was therefore glad when the USDA Office of the Inspector General (OIG) announced it was going to investigate and audit the allegations made in the New York Times article. Specifically, they stated they were going to investigate “33 statements from the article to evaluate and attempt to determine their veracity”.

I watched with interest as the interim report came out in 2015 to see which 33 statements were going to be investigated, and was satisfied to see that the two most egregious statements – about the teenage cow (Statement 14), and untrained staff performing experimental surgery (Statement 15) – were going to be thoroughly investigated. At the time the interim report was released in September 2015, it stated with specific reference to these two statements, “We have no observations on this statement at the current time.”

So imagine my surprise when the OIG final report was released on Friday, December 16, 2016, and I was none the wiser. The OIG report stated that of the 33 statements made by the New York Times, “we determined that only 7 were materially accurate — 26 were inaccurate, lacked sufficient context or were uncorroborated”. New York Times, that is a 21% material accuracy rate – also known as an F in my classes. The OIG report further clarified that “Overall, we did not note evidence indicating a systemic problem with animal welfare at US MARC.”

Wait – death raping of a teenage cow and experimental surgery being undertaken by unskilled staff – is that not the definition of an animal welfare problem? So I looked to see what the report specifically said about Statement 14 and 15. It said nothing. Because the entire text was redacted. Both the statements and the findings – blacked out. Even the statement text that was in the interim report was inexplicably redacted.

So of the 26 statements in the New York Times article that were determined to be “inaccurate, lacked sufficient context or were uncorroborated”, which was which? How many were inaccurate? There is a HUGE difference between inaccurate (i.e. fake news) and lacking sufficient context. And hopefully any journalist worth their salt would corroborate statements from a disgruntled employee and an executive director of a Humane Society – in an article about animal welfare at an animal research facility – with at least one independent source. Isn’t that how journalism works? Only two of the statements were listed as inaccurate in the OIG report, but the OIG conclusions on several others were bizarrely redacted. So your guess is as good as mine.

In its 12/20/2016 article about the report, “U.S. Animal Research Center Needs More Oversight, Audit Says”, the New York Times explained that “The Times did not answer questions from the auditors, telling the inspector general’s office that the article spoke for itself.” Actually, it didn’t; that is why there was an OIG investigation and audit, which found only about one in five statements were correct. Nary a mention of that failing grade in material accuracy.

And in perhaps the ultimate piece of irony, the OIG report concluded that US MARC “could make its research more transparent to the public”. I might say the same to the OIG in its report! What are the privacy concerns that required the redaction of a simple conclusion of either materially accurate, inaccurate, lacked sufficient context or uncorroborated? Those sensational allegations and emotive images are out there now – right or wrong – unchallenged.

At the end of the day we will never know if the bull libido test heifer incident ever happened, or if untrained staff performed experimental surgeries. All we have is the OIG report’s conclusion that they “did not note evidence indicating a systemic problem with animal welfare at US MARC.” The OIG investigators said the Pulitzer Prize-winning reporter Michael Moss did not agree to be interviewed for their audit of the veracity of his article’s statements.

Rather than being shamed by his failing grade, in a recent email Moss doubled down on his assertion that US MARC pushes the biology of livestock for profit, a statement that was graded as plainly “incorrect” in the OIG report. Repeating something does not make it true. Once again, the facts are that US MARC develops “scientific information and new technology to solve high priority problems for the U.S. beef, sheep, and swine industries.”

But as has been evidenced repeatedly in recent months perception soon becomes reality and the truth gets left in the dust. And policy based on perception ensues which seems to have been the article’s original objective. Facts be damned.

This blog post was also posted in Science 2.0

Four Legs, Two Legs, No Legs: What Does Science Tell Us About the Best Sources of Sustainable Animal Protein?

The following is an excerpt of a presentation I gave to the California Academy of Nutrition and Dietetics Annual Conference and Exhibition 2016 in Riverside on April 21, 2016. The full paper which goes into MUCH greater detail and includes more scientific references is available on their conference website. Seemed an appropriate topic for my blog given Earth Day is tomorrow.

What is the “best” source of sustainable animal protein? There is no easy answer to that question. It all depends upon what sustainable means to you, and which metric(s) of sustainability you want to guide your decisions. Definitions of sustainability generally have to do with living within the limits of, and understanding the interconnections among, the three pillars of sustainability: economic, environmental and social. People put varying emphasis on these different pillars. Consider this graphic below – which is the sustainable system (1)? There is no one correct answer since it will depend upon the weighting you put on the various competing pillars of sustainability. There are pros and cons to each of the various scenarios.

Which system is sustainable (1)?


These tradeoffs occur more generally in all of our food choices. For example, some people swear by grass-fed beef – but from a carbon footprint per unit of protein perspective, it is much less efficient and therefore has a bigger carbon footprint per kg of beef than intensively raised beef (2). Others advocate obtaining protein from nuts, but from a water footprint per unit of protein perspective they are more water intensive than all animal products (3). Others swear by wild-caught fish, but from a carbon footprint perspective, animal products from this source are very energy intensive if they involve bottom trawling or longline fishing (4). And let’s not even get into the issue of air miles, which can make even the innocuous asparagus appear to be public enemy number one (5) based on CO2-equivalents per unit weight of food product, as illustrated in the graphic below.

Carbon emissions (CO2-equivalent/kg) for fruits and vegetables on a weight basis (5).


The sustainability question becomes even more complicated when considering the requirements for a nutritionally-balanced diet. Although it may seem like switching to a diet with less red meat and more fruits, fish and milk should be desirable from an environmental perspective, it may actually exacerbate climate change due to the relatively high energy and water use per calorie of these food products. A recent research paper (4) compared 3 different scenarios: 1) a reduced calorie diet (-300 calories/day) with the same mix of food as the average US diet; 2) the USDA-recommended food mix without reducing the total calories of an average diet; 3) reducing calories AND shifting to the USDA-recommended food mix. The first option resulted in a desirable 10% reduction in energy use, water use and emissions. The second scenario increased energy use (43%), water use (16%), and emissions (11%). Even when reducing calories on the USDA-recommended diet, the scenario 3 diet resulted in a significant increase in energy (38%), water use (10%), and emissions (6%) compared with the current status quo.

Why are the “costs” of these scenarios so different? Because fruit, fish, and dairy – as emphasized in the USDA guidelines – are foods that on a per calorie basis require the most energy and water to grow.

Input/emissions per calorie for some common US dietary items (6).

In contrast, added sugars, fats, oils, and grains require fewer resources and create fewer emissions per calorie. So although these might be the most environmentally-friendly sources of calories, they are not likely to be the ones recommended for consumption in large quantities as part of a healthy diet. If you totally ignored health and consumed a diet with the least impact on the environment, you would eat a lot more fats and sugars. Additionally, grains are an excellent source of calories despite the fact that they tend to be vilified in US dietary culture.

The bottom line is that food is more than calories and protein, and the dietary mix of foods and their availability will determine the best balance for a healthy diet. Adding in sustainability metrics complicates the discussion, and often conflicting results will be generated depending upon which metric is being optimized. Sometimes the most environmentally friendly diet might be the least healthy option. As with all discussions around sustainability, and agricultural production systems in general (organic, conventional, genetically engineered etc.), it is complicated and there are tradeoffs. Beware of anyone who touts a seemingly magic solution. There will never be black and white answers to the question of which foods are the most sustainable, other than perhaps just eating less of whatever you are currently eating – although even that is a privileged first-world perspective, as evidenced by the approximately 25,000 people who die of malnutrition or starvation daily. As the saying goes, “A well-fed man (perhaps we could substitute society here) has many problems (and food choices!); a starving man has but one.”


There is no one sustainable source of protein, and depending upon the question being asked (e.g. carbon emissions/water use/land use/energy use per calorie/unit weight/unit protein), different food products will look like the “most sustainable” choice. There are also ethical and religious concerns around animal welfare and/or consuming meat and/or animal products (e.g. eggs, milk). Often there are direct conflicts between different perceptions of what makes a production system the most sustainable: is it the one that best protects animal health/welfare, the one with the lowest environmental footprint per unit of product, or the most efficient? As with all dietary decisions there are tradeoffs among the various pillars of sustainability, and consumers will need to make the choices they consider best for their particular family values, budget, and circumstances.


  1. Stern S, Sonesson U, Gunnarsson S, et al. 2005. Sustainable development of food production: a case study on scenarios for pig production. Ambio 34:402-407.
  2. Nijdam D, Rood T, Westhoek H. 2012. The price of protein: Review of land use and carbon footprints from life cycle assessments of animal food products and their substitutes. Food Policy 37:760-770.
  3. Mekonnen MM, Hoekstra AY. 2012. A Global Assessment of the Water Footprint of Farm Animal Products. Ecosystems 15:401-415
  4. Tom MS, Fischbeck PS, Hendrickson CT. 2015. Energy use, blue water footprint, and greenhouse gas emissions for current food consumption patterns and dietary recommendations in the US. Environment Systems and Decisions 2015:1-12.
  5. Heller, M. C. and Keoleian, G. A. 2015. Greenhouse Gas Emission Estimates of U.S. Dietary Choices and Food Loss. Journal of Industrial Ecology, 19: 391–401. doi: 10.1111/jiec.12174

Maserati or a Graduate student?

A couple of years ago a science communication paper came out suggesting public trust differs depending upon the perceived warmth and competence of different professional groups.

As can be seen in the graphic below from that paper by Susan T. Fiske and Cydney Dupree (PNAS 2014;111:13593-13597) that was good news for nurses and teachers, and not such great news for lawyers who are seen as competent but not trustworthy. But lawyers are not typically trying to communicate science and so perhaps this lack of trust is not a deal breaker for them.

Susan T. Fiske, and Cydney Dupree PNAS 2014;111:13593-13597

Warmth–competence ratings of commonly mentioned jobs.

Scientists fared less well than professors, although I am not 100% sure what the difference is between a scientist and a professor. I assume that professors are seen as those teaching at universities, whereas scientists might include those presumably nefarious industry scientists.

But for scientists interested in scientific communication the take home message of the paper was to attempt to be “warmer”, and show concern for humanity and the environment. As someone who entered agricultural science because of my interest in how genetics can be used to simultaneously increase food production and minimize the environmental footprint of agriculture, those concerns for humanity and the environment are actually the cornerstone of my profession.

However it was another finding of the study that has been rattling around in the back of my brain.

And that finding was that in particular, “Americans seem wary of researchers seeking grant funding”. Think about that. As a public sector scientist who mentors several graduate students in my laboratory, I would perhaps be more wary of scientists who were not seeking grant funding because how else can you pay for research supplies and expenses? That made me think there is a fundamental disconnect between how science is actually funded, and the public perception of how science is funded.

In my own case, I receive a salary from the University of California, an office on the Davis campus, and the holy grail of space – a wet laboratory. The rest is up to me. If I want to do research, I have to “seek grant funding”. And as a molecular geneticist working with large animals that need feed and care, this is expensive research.

Perhaps one of the biggest expenses, and undoubtedly the most satisfying part of research, is funding those pesky graduate students who actually do the work. And they do not come cheap. Looking at a recent USDA grant I was awarded, the approximate annual cost of a graduate student is in the ballpark of $40K a year in direct costs for fees and living stipend. Apparently students need both an apartment to live in and food to eat.

And with University overhead, which for this grant was 30% of direct costs, the annual cost rises to around $52,000 a year – so for a 2-year Masters student I have to seek grant funding of over $100,000. That is approximately the cost of a new Maserati. And funding for a PhD, which typically takes 5 years, is a whopping quarter of a million dollars! And that does not actually pay for any experimental reagents or animal care fees – just brains and boots on the ground.
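For anyone who wants to check my arithmetic, here is a back-of-envelope sketch using the approximate figures above. These are round numbers from a single grant; actual stipends and indirect cost rates vary by institution, year, and funding agency.

```python
# Back-of-envelope cost of funding a graduate student on a grant.
# Figures are the approximate ones quoted in this post; real stipends
# and overhead rates vary by institution and grant.

direct_per_year = 40_000      # fees + living stipend (direct costs)
overhead_rate = 0.30          # university indirect costs on direct costs

annual_total = direct_per_year * (1 + overhead_rate)

ms_total = annual_total * 2   # ~2-year Masters
phd_total = annual_total * 5  # ~5-year PhD

print(f"Annual cost: ${annual_total:,.0f}")   # $52,000
print(f"2-year MS:   ${ms_total:,.0f}")       # $104,000
print(f"5-year PhD:  ${phd_total:,.0f}")      # $260,000
```

Note these totals still exclude reagents and animal care fees, which must be budgeted on top.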

Therefore if we want graduate students to be trained at public universities their professors need to seek grant funding. I have been fortunate to obtain public funding for my research program, all of which can be freely accessed at my university webpage.

Public research funding programs are highly competitive. In 2014, the last year for which I could find data, at the USDA National Institute of Food and Agriculture’s flagship competitive grants program, the Agriculture and Food Research Initiative (AFRI), “the success rate in FY 2014, calculated in terms of number of proposals funded (excluding conferences, supplements, continuing increments of the same grant, and NIFA Fellowships) divided by the number of proposals submitted for review, was 11 percent”. That roughly means that for every 100 proposals submitted, 11 were funded.

Imagine spending 90% of your time putting words to paper that no one else will read, with the exception of a single grant review panel. Or writing a paper that has only a 1 in 10 chance of being accepted by a peer-reviewed journal. And yet scientists seeking public research funding spend a lot of their time writing grants that are never funded.

Things are a little better for medical research. In 2014 the National Institutes of Health (NIH) received 51,073 research project grant (RPG) applications, of which they funded 9,241, resulting in a success rate of 18.1 percent.
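The “success rate” in both cases is simply proposals funded divided by proposals submitted, which is easy to verify from the NIH figures above:

```python
# Grant success rate = proposals funded / proposals submitted.
# Numbers are the FY2014 figures quoted in this post.

def success_rate(funded: int, submitted: int) -> float:
    return funded / submitted

nih_2014 = success_rate(9_241, 51_073)           # NIH research project grants
print(f"NIH RPG success rate: {nih_2014:.1%}")   # 18.1%

# By the same definition, USDA AFRI reported roughly 11% in FY2014,
# i.e. about 11 of every 100 proposals submitted were funded.
```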

So make whatever judgment you may about the trustworthiness and perceived warmth of scientists and professors. But please understand that a researcher seeking grant funding, especially public grant funding, is not a reason to be suspicious. It is actually a sign of an active scientist who is likely trying to fund the graduate students, reagents and experimental supplies that enable them to undertake research in their chosen field of interest.

If the public is wary and untrusting of scientists who seek research funding, we have a real scientific communication problem. Those are most likely public sector scientists trying to secure funding to enable them to support, mentor and train the next generation of scientists.

Got pests?


Cattle grazing in Northern California

Over the past two weeks I have been speaking to livestock producers at locations from Bakersfield in Southern California to Montague on the Oregon border. I was a speaker at a number of “Winter Animal Health” meetings which are organized annually by the University of California Cooperative Extension.

As a geneticist, I was speaking about genetics and how to make better bull selection decisions. Not surprisingly (to all but a geneticist) genetics was only one of several topics covered, and many of the subjects were unrelated to my area of expertise.

Being located on the Davis campus, I relish the opportunity that extension meetings afford me to get out among actual producers and listen to their experiences, concerns and problems. And what really struck me about this series of meetings is that everything is trying to eat our lunch!

What I mean by that is that pests from all kingdoms – microbial, plant and animal – are competing for the resources used by the plants and animals that we eat, or are beating us to the punch by actually eating those same things for themselves!

The traditional definition of a pest is an epidemic disease associated with high mortality; specifically the plague or “pestilence”. It is also defined as something resembling a pest in destructiveness; especially a plant or animal detrimental to humans or human concerns (as in agriculture or livestock production).

The pests that were covered by this series of meetings included the traditional viruses and bacteria that result in animal diseases, weeds that compete with pasture and crops for scarce water and nutrients, a wide-ranging list of insects like ticks and mosquitos that carry disease, ground squirrels that eat vegetation and construct leg-breaking holes and destructive mounds in the middle of fields, wild deer that graze crops, pastures and even watermelons, and wild horses that compete for precious resources on the drought-stricken range.

Listening to the plethora of animal and plant pests discussed at these meetings, it is impressive that producers are able to get any of their product to market for human consumers.

Control measures to address pests were as varied as the pests, and indeed the producers, themselves. Depending upon the production system requirements of their target market, producers are using a variety of chemical, physical, and temporal pest management approaches.

One thing is constant – all production systems face pests. Some pests are common to all – like the threat of animal disease and weeds – whereas others are crop- and/or species-specific.

Google image search for “ideal farm”

Often images of agriculture show a utopian red barn in a green field, and rarely is there even a single pest present in the image. The animals are all healthy, the fields are all weed-free, and there is not a single ground squirrel, tick or other scourge in sight.

Skillful management can help approach that ideal through the judicious use of preventative measures like vaccines to forestall disease, and ploughing or tilling to manage undesired weeds. But in the open outdoor environment of agriculture it is hard to avoid airborne weed seeds, disease-causing microbes and insects, and grazing competition from wild herbivores. And then there are the predators that prey on the livestock themselves.

Failure to correctly or effectively manage pests can result in complete crop failure (i.e. no food), or catastrophic health outcomes for livestock. It has been estimated that without pesticides 70% of the world food crop would be lost and even with pesticide use, 42% is destroyed by insects and fungal damage. Dispensing with pesticides would require at least 90% more cropland to maintain present yields.

The producers present at these meetings spanned the range from backyard hobby farmers, to part-time ranchers with  day jobs in the city, to fulltime commercial family farmers and ranchers. They represented a range of farming systems, including some that prohibited the use of certain pest management methods such as certain herbicides, antibiotics, or selected insecticides.

What was refreshing to me was that the different control methods and integrated pest management approaches were presented objectively, and the producers listened with respectful interest, rather than judgment, as to how their neighbors were managing their pest problems.

There were no black and white choices or magic silver bullets provided by the various speakers. Different approaches were presented along with their nuanced pros and cons; benefits and tradeoffs were discussed fairly and without prejudice in comparison among the different choices that are available.

In my 20 year experience working with farmers and ranchers, this set of meetings was not atypical. Extension has long presented objective information on solutions to agricultural problems through informal educational meetings.

I just wish more urban people could hear these discussions and see the level of technical competence, professionalism and thought that producers put into their production and pest management decisions, and appreciate the fact that at the end of the day some food remains for their dinner and dessert plates. Including this impressive spread of homemade pies on offer at the conclusion of the Montague cattle health meeting, aka “Pie Night” for obvious reasons. One of the lesser known and most enjoyable perks of an extension meeting in Northern California! With thanks to the Siskiyou County Cattlemen and Cattlewomen!


Home made pies at the 2016 Montague Cattle Health Meeting



© 2018 Biobeef Blog
