Thoughts of a public sector animal geneticist – all views are my own

Month: January 2017

In what seems like a scene from the movie Groundhog Day, another rat study has come out of the laboratory of Dr. Gilles-Eric Séralini, only in this case it is Roundup and not GMOs that are under fire. When I read the title of the paper, “Multiomics reveal non-alcoholic fatty liver disease in rats following chronic exposure to an ultra-low dose of Roundup herbicide”, I assumed a new study had been performed by the laboratory showing what this specific title appears to conclude, i.e. that rats exposed to low levels of Roundup developed non-alcoholic fatty liver disease. However, when I read further I found that this was a study on tissues from a subset of the same lumpy rats that were involved in the famously retracted (and subsequently republished) paper from 2012 – the rats with horrific tumors (not fatty livers) attributed to GMOs (not glyphosate) – that was breathlessly reported on the Doctor Oz show I participated in, and by media throughout the world.

I think if my work had been roundly criticized by scientific peers for poor experimental design and pathology data inadequacies, and critiqued by a multitude of separate national biosafety committees from Belgium, Brazil, the European Union, Canada, France, Germany, Australia/New Zealand, and The High Council on Biotechnology, I would not double down and continue to analyze 5-year-old samples from that same experiment. What is weird is that although I vividly remember the images of grotesque tumors on the white Sprague Dawley female rats (one does not forget those images with a “GMO” label contrasted against the shocking tumors), I did not recall any mention of non-alcoholic fatty liver disease. So I went back to the original paper and searched for the term “fatty liver disease”. Nada.

In fact, the only data on livers in that retracted/republished 2012 paper were presented for the male rats. According to the 2012 paper, the males that received the low levels of Roundup (50 ng/L glyphosate equivalent dilution) displayed liver “congestions” and “macroscopic and microscopic necrotic foci”, not fatty liver disease. I asked a laboratory animal pathologist at UC Davis who specializes in rodent health to review the data in the paper to determine if it suggested the rats had fatty liver disease. There was no histopathologic evidence of hepatic lipidosis presented in males, and no data on female livers were presented at all. Many of the “anatomical pathologies” observed are common aging-related findings, and this was not taken into account or discussed. They suggested the term “anatomopathological analysis” was a very irregular term for a veterinary pathologist to use, and that the use of hepatodigestive tract and liver as separate categories of pathology incidence was redundant. They kept doggedly going back to the fact that no fatty liver phenotype data were ever presented on female livers, so they could make no determination as to whether or which rats were suffering from fatty liver disease.

If you want a really interesting read, a group of veterinary pathologists reviewed the pathology data in the 2012 Séralini study, and their review contains the following understated scientific barbs (bold emphasis mine).

The sentence ‘The largest palpable growths (…) were found to be in 95% of cases non-regressive tumors, and were not infectious nodules.’ is very confusing. We hope that differentiating inflammatory from neoplastic lesions was not a challenge for the authors. Another clear example illustrating the lack of accuracy of the results is found in Fig. 3 where microscopic necrotic foci in the liver are grouped with clear-cell focus and basophilic focus with atypia. The first finding refers to a degenerative process whereas the remaining two refer to a proliferative one (Thoolen et al., 2010). Such basic error would be considered as a disqualifying mistake at an examination for pathologists.

Ouch.

They then go on to ask why there was no mention of which pathologist did the analyses, and why the rats were not euthanized earlier

as most members of the ESTP [European Society of Toxicologic Pathology] are veterinarians, we were shocked by the photographs of whole body animals bearing very large tumors. When looking at the lesions, we believe those animals should have been euthanized much earlier as imposed by the European legislation on laboratory animal protection

and then conclude their diatribe with the following

The ESTP comes to the conclusion that the pathology data presented in this paper are questionable and not correctly interpreted and displayed because they don’t concur with the established protocols for interpreting rodent carcinogenicity studies and their relevance for human risk assessment. The pathology description and conclusion of this study are unprofessional. There are misinterpretations of tumors and related biological processes, misuse of diagnostic terminology; pictures are not informative and presented changes do not correspond to the narrative.

For those who are not immersed in science – these are damning criticisms.

So back to the 2017 study, which cites a 2015 “transcriptomics” study by the same group for the observations on the female livers. In that study, livers from 10 control females and the 10 females from the R (A) group of the 2012 study (for those of you paying attention) were analyzed using “transcriptomics”. So I went to read the 2015 paper to see if the Roundup-ingesting females perhaps had some liver data, and again there was no discussion of a fatty liver disease phenotype. There was, however, an interesting discussion of why tissues from the females were used for the analysis in both the 2015 “transcriptomics” and 2017 “multiomics” papers.

In the 2012 study that started it all, apparently

“Most male rats were discovered after death had occurred. This resulted in organ necrosis making them unsuitable for further analysis. We therefore focused our investigation on female animals where freshly dissected tissues from cohorts of 9-10 euthanized and untreated rats were available. Female control and Roundup-treated animals were respectively euthanized at 701 ± 62 and 635 ± 131 days. Anatomopathological analysis of organs from these animals revealed that the liver and kidneys were the most affected organs.”

Well, the fact that the males got to a stage of necrosis because no one discovered they were dead seems strange in a study where rats are presumably checked every day, as required by every animal care protocol I am familiar with. However, such protocols would also have required the rats to be sacrificed long before the tumors were able to grow to the sizes that were clearly evident in the photos associated with this study. And the fact that the liver and kidneys were the most affected organs might well have been true for the male rats (and these apparently necrotic tissues were analyzed and reported for these males), but for the female rats, according to the 2012 paper, it was all about the tumors!

Image from Séralini et al. 2012

That was the whole basis of the sensational 2012 paper that actually resulted in entire African countries rejecting all GMO imports. Reread that previous sentence because it shows the power of this one, poorly-designed study with 120 rats.

So livers were being harvested from these 20 females – several of which were compromised and euthanized “early” (2 from the control group, and 5 from the “treatment” group) at different ages due to the tumor load. Is it not obvious that these additional factors of tumor load and different ages would confound any data collected from their livers?

The 2015 paper goes on to show electron microscope analysis of liver sections from females. But it turns out the photograph of the control female was actually the same photo, at a different magnification, as that shown for the control male hepatocyte image in the 2012 paper. The authors have since stated that was an honest mistake and have submitted a corrigendum, but go on to suggest that there are differences in the hepatocytes from Roundup-treated rats, specifically showing “a disruption of glycogen dispersion”, a disruption of nucleolar function, and an overall decreased level of transcription. How transcription can be determined based on an electron micrograph is unclear. No mention is made of the “fatty liver disease” promised in the 2017 paper’s title.

So let me sum this up for those of you who may be lost. The original, highly-controversial 2012 study was done on 120 rats. The most recent study was performed on the livers of a subset of 10 female control rats and 10 female rats from that same 2012 study that were in “Roundup group (A)” which received 50 ng/L glyphosate equivalent dilution in their water. We do not know their water intake so have no idea of actual dosage of “Roundup”; we have little histological data on female liver samples – let alone a diagnosis of fatty liver disease; we know that the control and “treated” rats were euthanized at a variety of differing ages, and that the majority of these female rats had huge tumors that required several of the rats in both the control and Roundup groups to be euthanized before two years of age. And the livers from these 20 rats were the basis of the most recent “omics” paper. There is a saying in science (and perhaps other disciplines): “garbage in – garbage out”.

So let’s plough on – and read the 2017 paper, which concludes that the metabolome and proteome analyses of the livers from the “Roundup-drinking” rats versus the controls “showed a substantial overlap with biomarkers of non-alcoholic fatty liver disease and its progression to steatohepatosis”. Hooray – now THERE is a testable hypothesis – so what ARE the biomarkers of non-alcoholic fatty liver disease? In other words, what proteins and metabolites might you expect to see upregulated (or downregulated) if in fact animals had non-alcoholic fatty liver disease? I have read the paper several times now and seen no reference to a paper that answers that question. So in the absence of knowledge of fatty liver biomarkers, and given the fact that no pathology diagnosed “fatty liver disease”, to conclude that “Multiomics reveal non-alcoholic fatty liver disease in rats following chronic exposure to an ultra-low dose of Roundup herbicide” is – to put it kindly – overstating the results of the research and making conclusions beyond those supported by the data.

Interestingly, the bioinformatics analysis in this 2017 paper appears to be an improvement on previous works by this group, in that the p-values were adjusted to account for the high number of analytes measured (1906 proteins and 673 metabolites) and the consequent need to correct for multiple comparisons to minimize the number of false positives. The authors even include a discussion of the need for multiple comparison corrections on page 9, and correctly state that such corrections are needed when measuring hundreds or thousands of observations, to reduce the chance of making a type I error (false positive). However, they then lament the fact that there was a lack of statistical significance following the multiple comparison correction for all but three metabolites, due to the small sample size. That is the point! That is why these studies need to have sample size determinations based on the hypothesis being tested.

This study, which was based on the experimental design of a 90-day subchronic toxicity study (OECD, 1998) such that 10 animals were assigned to each group, was critiqued by the German Federal Institute for Risk Assessment (BfR) for that very reason: the sample size was too small.

subchronic studies show a substantially lower variation of age-related pathological changes between animals within a group while those changes are inevitable in long-term studies. As the published study has confirmed, the two-year duration of the study is of the order of the expected life span in rats including the Sprague Dawley strain that was used in the study. This strain, provided by the breeder Harlan, is known to develop spontaneous tumors, particularly mammary and pituitary tumors, at relatively high rates compared to other strains (Brix et al., 2005; Dinse et al., 2010). Therefore, it can be expected that a significant number of animals develop age-related illnesses or die for diverse reasons already during conduct of the study. The distribution of the cases of death between groups can be random, and a number of 10 animals per sex and group is too low to confirm a trend or an effect. Furthermore, no statements on statistically significant dose-response-relationships can be made. Larger sample sizes, as recommended for carcinogenicity studies in OECD Test Guidelines No. 451 or No. 453, would be required in order to allow precise statements with respect to the findings.

In other words you need to have bigger sample sizes to perform long term studies because many changes are associated with old age – especially when working with a rat strain that is known to develop spontaneous tumors, particularly female mammary and pituitary tumors!
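To put some rough numbers on the sample size point, here is a minimal power-calculation sketch; the two-sample t-test and the effect sizes are my illustrative assumptions, not anything taken from the BfR assessment:

```python
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()

# With only 10 animals per group, what standardized effect size (Cohen's d)
# can a two-sample t-test detect with 80% power at alpha = 0.05?
d = power.solve_power(nobs1=10, alpha=0.05, power=0.8)
print(f"Detectable effect with n=10 per group: d = {d:.2f}")  # ~1.3, i.e. only very large effects

# Conversely, how many animals per group would be needed to reliably detect
# a moderate effect (d = 0.5) against a background of age-related changes?
n = power.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Animals per group needed for d = 0.5: {n:.0f}")  # ~64 per group
```

In other words, with 10 animals per group only enormous effects are reliably detectable, and anything smaller is indistinguishable from the background of spontaneous age-related disease.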

Frustratingly, when multiple comparison correction in the 2017 paper removed all but three of the 673 metabolites from statistical significance, the authors just went ahead and included the 55 that had a significant uncorrected p-value(!), because “the non-adjusted statistically significant levels” fit a narrative, and so were revived from the statistical trash can on the basis that “they were found to be non-random and thus biologically meaningful”. This is the very definition of confirmation bias, which is what multiple comparison correction and correct experimental design are trying to weed out, because scientists are people too, and they are not without their own preconceived notions of how the world works.
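To make the correction itself concrete, here is a minimal sketch of the Benjamini-Hochberg false discovery rate procedure, one standard multiple-comparison correction, run on simulated p-values of my own invention (not the paper’s data):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of discoveries under the Benjamini-Hochberg step-up FDR procedure."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    # Compare the i-th smallest p-value against (i/m) * alpha.
    passed = p[order] <= alpha * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True  # reject the k smallest p-values
    return mask

# Simulated example: 673 metabolites, only 3 with a genuine treatment effect.
rng = np.random.default_rng(0)
p = rng.uniform(size=673)             # 670 true nulls: p-values uniform on [0, 1]
p[:3] = rng.uniform(0, 1e-5, size=3)  # 3 real effects with very small p-values
print((p < 0.05).sum())               # uncorrected: dozens of "significant" hits
print(benjamini_hochberg(p).sum())    # corrected: essentially just the 3 real ones
```

Uncorrected, dozens of pure-noise metabolites clear the p < 0.05 bar; corrected, essentially only the genuine effects survive. Keeping the uncorrected 55 simply restores the noise.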

More concerning, this 2017 paper is yet another in a string of papers from this group that was accepted in a peer-reviewed journal, in this case Scientific Reports, an online journal from the publishers of Nature. The problems in experimental design, lack of supporting pathology data on the test subjects, and wildly subjective overinterpretation of the results should have been grounds for soundly rejecting this manuscript. We live in an age of the willful neglect of scientific evidence, and the emergence of “alternative facts” and realities. As a scientist it worries me that papers like this are published in apparently respected journals. I remember once hearing a member of the activist industry say that “peer-reviewed journals are the tool of the enemy”, suggesting they were the gold standard communication tool for scientists to report inconvenient facts. At the time I did not appreciate the importance of that statement, and concerningly it appears that this is no longer the case. If we can’t trust the peer-review process to ensure the integrity of papers published in scientific journals, what can we trust? This is a problem that should worry the entire scientific community, not only those concerned with the topic of this particular paper.

FDA seeks public comments on regulation of genetically altered animals

The recently released FDA guidance for producers and developers of genetically improved animals and their products, which defines all intentional DNA alterations in animals as drugs irrespective of their end-product consequences, is nonsensical.

FDA “Guidance for Industry #187” updates the never-finalized 2009 document “Regulation of Genetically Engineered Animals Containing Heritable rDNA Constructs” to the much more expansive “Regulation of Intentionally Altered Genomic DNA in Animals”, expanding the scope of the guidance to address animals intentionally altered through use of genome editing techniques. No longer is it the presence of an rDNA construct (which conceivably COULD have encoded a novel allergen or toxic protein) that triggers FDA regulatory oversight of genetically engineered animals, but rather it is the presence of ANY “intentionally altered genomic DNA” in an animal that triggers oversight. Intention does not equate to risk. This trigger seems to be aimed squarely at breeder intention and human intervention in the DNA alteration.

DNA is generally regarded as safe. We eat it in every meal, and along with each bite we consume billions of DNA base pairs. Each individual differs from another by millions of base pair mutations – we are always consuming DNA alterations – the mutations that provided the variation that enabled plant and animal breeders to select corn from teosinte and Angus cattle from aurochs. DNA does alter the form and function of animals – and all living creatures – it is called the genetic code, the central dogma, and evolution. If DNA is a drug then all life on Earth is high.

The guidance states that “intentionally altered genomic DNA may result from random or targeted DNA sequence changes including nucleotide insertions, substitutions, or deletions”; however, it clarifies that selective breeding, including random mutagenesis followed by phenotypic selection, is not included as a trigger. So the random DNA alterations that result from de novo or chemical-induced mutagenesis will not be a trigger, but intentional, precise, and known alterations, and any off-target random changes that might be associated with the intended edit, will trigger regulation, irrespective of the attributes of the end product. This is beyond process-based regulation; it is regulation triggered by human intent. That is, if a breeder was involved, then it is regulated. If random mutations happened in nature or due to uncontrolled mutagenesis – not regulated.

This sounds a lot like what Greenpeace is arguing for when they state that a GMO is when “the genetic modification is enacted by heritable material (or material causing a heritable change) that has, for at least part of the procedure, been handled outside the organism by people.” The problem is that risk is associated with the attributes of the product, not the fact that it is handled by people or carries the taint of human intention.

This approach is the polar opposite of the 2016 National Academies report, which concluded that the distinction between conventional breeding and genetic engineering is becoming less obvious. They reasoned that conventionally bred varieties are associated with the same benefits and risks as genetically engineered varieties. They further concluded that a process-based regulatory approach is becoming less and less technically defensible as the old approaches to genetic engineering become less novel and as emerging processes – such as gene editing – fail to fit current regulatory categories of genetic engineering. They recommended a tiered regulatory approach focused on intended and unintended novel characteristics of the end product resulting from the breeding methods that may present potential hazards, rather than focusing regulation on the process or breeding method by which that genetic change was achieved.

The new FDA Guidance, released two days before Trump’s inauguration, then goes on to state “a specific DNA alteration is an article that meets the definition of a new animal drug at each site in the genome where the alteration (insertion, substitution or deletion) occurs.  The specific alteration sequence and the site at which the alteration is located can affect both the health of the animals in the lineage and the level and control of expression of the altered sequence, which influences its effectiveness in that lineage. Therefore, in general, each specific genomic alteration is considered to be a separate new animal drug subject to new animal drug approval requirements.” So every SNP is potentially a new drug, if associated with an intended alteration.

To put this in perspective, in one recent analysis of whole-genome sequence data from 234 taurine cattle representing 3 breeds, >28 million variants were observed, comprising insertions, deletions and single nucleotide variants. A small fraction of these mutations have been selected owing to their beneficial effects on phenotypes of agronomic importance. None of them is known to produce ill effects on the consumers of milk and beef products, and few impact the well-being of the animals themselves.

What is not clear is how developers are meant to determine which alterations are due to their “intentions”, and which result from spontaneous de novo mutations that occur in every generation. Certainly breeders can sequence to confirm the intended alteration, especially if they are inserting a novel DNA sequence, but how can they determine which of the random nucleotide insertions, substitutions, or deletions are part of the regulatory evaluation, and which are exempt as random mutagenesis? And if there is risk involved with the latter, why are only the random mutations associated with intentional modifications subject to regulatory evaluation? And what if the intended modification is a single base pair deletion – will the regulatory trigger be the absence of that base pair – something that is not there?

Many proposed gene editing applications will result in animals carrying desirable alleles or sequences that originated in other breeds or individuals from within that species (e.g. hornless Holsteins were edited to carry the Celtic polled allele found in breeds like Angus). As such, there will be no novel combination of genetic material or phenotype (other than hornless). The genetic material will also not be altered in a way that could not be achieved by mating or techniques used in traditional breeding and selection. It will just be done with improved precision and minus the linkage drag of conventional introgression.

Does it make sense to regulate hornless dairy calves differently to hornless beef calves carrying the exact same allele at the polled locus? Does it make sense to base regulations on human intent rather than product risk? Regulatory processes should be proportional to risk and consistent across products that have equivalent levels of risk.

There is a need to ensure that the extent of regulatory oversight is proportional to the unique risks, if any, associated with the novel phenotypes, and weighed against the resultant benefits. This question is of course important from the point of view of technology development, innovation, and international trade – and, quite frankly, for the ability of the animal breeding community to use genome editing at all.

Given there is currently not a single “genetically engineered animal containing a heritable rDNA construct” being sold for food anywhere in the world (see my BLOG on AquAdvantage salmon), animal breeders are perhaps the group most aware of the chilling impact that regulatory gridlock can have on the deployment of potentially valuable breeding techniques. While regulation to ensure the safety of new technologies is necessary, in a world facing burgeoning animal protein demands, overregulation is an indulgence that global food security can ill afford.

I urge the scientific community – including those not directly impacted by this proposed guidance because animal breeders are a small community – to submit comments to the FDA on this draft revised guidance #187 during the 90-day comment period which closes June 19, 2017. There are several questions posted there asking for scientific evidence demonstrating that there are categories of intentional alterations of genomic DNA in animals that pose low to no significant risk. Centuries of animal breeding and evolution itself would suggest there are many.

There is also a request for nomenclature for the regulatory trigger as outlined in the draft revised guidance. The FDA used the phrase “animals whose genomes have been altered intentionally” to expand their regulatory reach beyond genetically engineered animals containing heritable rDNA constructs (aka drugs), but suggested that other terms that could be used include “genome edited animals,” “intentionally altered animals,” or expanding the term “genetically engineered” to include the deliberate modification of the characteristics of an organism by manipulating its genetic material. They encourage the suggestion of other phrases that are accurate and inclusive. I can think of a couple!

Who does fund university research?

This is a follow-up to my BLOG last week asking “Who should fund university research?” I thought it might be illustrative to examine actual data from my university. Not surprisingly for a large enterprise, UC Davis tracks the sources of all monies coming into the university, and oversees the expenditure of such funds.

There are two basic ways research funding can come into the university – as a formal contract or grant, or as a donation. In the former case, there is some type of grant application or description of the work to be carried out (but not what the results of the research will be!!!) for which the funding is provided. In the latter case, it is what is called an “unrestricted” donation. This is money that is directed towards an individual professor, program or department with no further specification as to what the money is to be used for. Of course such funding is still managed by the university, and can’t be used for a vacation to Hawaii. Often it is used as seed funding to undertake a professor’s favorite research idea, perhaps one that is a bit too “out there” and risky to secure traditional grant funding in the absence of supporting preliminary data. In that sense it is like a donation to your favorite charity: you donate the money because you like the type of work that charity does. However, you cannot directly specify exactly what the charity is to do with the money you donated.

Grants and contracts

These are the monies that really run research programs. The total awards by calendar year at UC Davis are in the ballpark of $750 million (i.e. three quarters of a billion). That is a lot of money, but UC Davis is a big university with a medical school which includes a hospital, a veterinary school, and all of the colleges that make up the campus. If we pessimistically (realistically) assume a 10% funding rate for public research funding, that means the UC Davis faculty are on average writing $7.5 billion worth of grants each year, and are successfully bringing in one tenth of that. And to reiterate, these funds are used to support graduate students, buy research supplies, perform experiments and advance knowledge. UC Davis is a powerful economic engine for California, generating $8.1 billion in statewide economic activity and supporting 72,000 jobs.

The approximate breakdown for the $786 million received in fiscal year 2014-15 was $427 million (54%) in awards from the federal government, with a big chunk of research funding likely also coming from the state government. $66.1 million (8.4%) was awards from foundations, and $59.4 million (6.7%) awards from industry sponsors. I think that is an interesting point: UC Davis receives more sponsored research funding from foundations than it does from industry sponsors. The School of Medicine received the largest share of research grants at UC Davis with $264 million (34%), followed by the College of Agricultural and Environmental Sciences at $155 million (20%), and the School of Veterinary Medicine at $114 million (14.5%).

Donations

This pool of monies is more modest than that brought in by grants and contracts. I could only get this data for the fiscal year, rather than the calendar year, but it is in the vicinity of $200 million. Now, the question that perhaps has been asked most frequently is how much funding is coming from specific companies – specifically those associated with the so-called “Agrochemical academic complex”? That all depends upon how you define such industries, but let’s go with the so-called “Big 6”; that is, Monsanto, Syngenta, Bayer, BASF, DuPont/DuPont Pioneer, and Dow.

The following table has the breakdown of total grants and contracts, donations, and those two figures totaled, and then the breakout of how much of that funding (and the percentage of total) came cumulatively from the “Big 6” in recent years. (The numbers differ slightly from those above due to fiscal versus calendar year accounting.)

Year               2012                2013                2014                2015
Grants/Contracts   699,728,437         718,934,464         751,864,525         793,797,558
  From “Big 6”     1,407,821 (0.20%)   477,178 (0.07%)     881,856 (0.12%)     746,160 (0.09%)
Donations          132,451,535         149,134,036         165,704,178         184,180,960
  From “Big 6”     768,172 (0.58%)     1,386,079 (0.93%)   858,912 (0.52%)     n/a
TOTAL              832,179,972         868,068,500         917,568,703         977,978,518
  From “Big 6”     2,175,993 (0.26%)   1,863,257 (0.21%)   1,741,768 (0.19%)   n/a

So in summary, at what is arguably the number one ranked agricultural research university in the world, the proportion of funding coming from the “Big 6 Agrochemical academic complex” funders is approximately $2 million per year, well under one half of one percent of total research funding received by the campus. To put that in perspective, the College of Agricultural and Environmental Sciences alone has 330 faculty members and 1,000 graduate students. Two million dollars is approximately what it takes to fully fund ~35 graduate students for a year.

So what is the money being used for?

Not surprisingly, most of the funding from the “Big 6” was associated with research in plant sciences and entomology. Some went to the medical school because the search for “Bayer” also captured research funding sponsored by “Bayer Healthcare”. A number of the donations were to Cooperative Extension county-based advisors performing field research with various crops. And just for transparency, none of it was directed to my research program (which is not surprising as I work on animals, not plants!). Some was earmarked for work in specific crops like figs, pistachios, strawberries, rice, onions, woody crops and viticulture. And that is not surprising because California grows hundreds of specialty crops. Notably, none of these crops have commercialized genetically engineered varieties, and their breeding programs are mostly run by public sector scientists. The one thing California does not grow much of is large-acreage corn and soybeans. We do not have the right climate and conditions for these crops, and there are high-value alternative crops that CA farmers choose to grow. As a result, UC Davis does not do much research in these field crops, and the university therefore does not get much industry research funding for work in these crops.

I would wager that the University of Kentucky, home of the Kentucky Derby, probably has industry funding supporting its equine science program, because they have a huge equine industry in that state. In general, when a university has an important industry in its state, that industry helps to support research at that state-located public university. And in the case of California there is an amazing number of agricultural commodities grown – the fruit and vegetable industry raises a cornucopia of varieties in the state, and UC Davis has renowned brewing and wine making programs. As an example, the brewing science program at UC Davis has received several sizable donations from industry, including the recent $2 million donation from the owners of the local Sierra Nevada Brewing Company. Cheers to science-based beer brewing and wine making!

How does this breakdown compare to other land grant universities?

My colleague Kevin Folta at the University of Florida posted this useful graphic for the Gators (University of Florida).

Funding to University of Florida FY 2015-2016 broken down by funding source

In the case of the University of Florida, the faculty brought in $140 million in sponsored funding in FY 2015-16, and of that 70% was from federal agencies, 15.5% was from foundations, and 3.5% was from corporations and industry. Kevin makes the observation in his blog regarding agricultural industry funders:

“They are frequently the beneficiaries of increased knowledge in agriculture, as well as the training and education we provide to the next generation of scientists”. I look forward to his next BLOG piece where he promises to write about whether industry support of science matters.

So there you have it – or at least a snapshot from two large agricultural universities as to which entities fund universities. By far the biggest source of funding is federal research grants – as might be expected at a public university.

Now I must go and focus my efforts on writing my next federal grant application – which unfortunately has a ~90% probability of not being funded and will likely only ever be read by 2 grant reviewers. As compared to this BLOG which has 100% chance of not securing funding for my research program, but hopefully will be of interest to more than 2 readers.

I would appreciate your comment on a recently published study

The email was simple enough. It was a request from a member of the press asking “I would appreciate your reaction/comments to the recently published study on GMO corn for an article I am putting together on it. Deadline: Wednesday 4 January.”

Just when I thought I was going to get a day off to myself to write up my own research results, in comes the dreaded time-sensitive press request for comments on a recently published paper. Dreaded because to respond properly means I need to sit down and read the whole paper and ensure I have understood the materials and methods, results, and discussion. For me that is a commitment of a couple of hours. And to top things off – it was a paper by Mesnage from France’s infamous Séralini group whose previous works have had numerous flaws. But I made a New Year’s Resolution to be more active in critiquing agricultural science and can’t in good faith renege on that resolution on January 2nd.

The paper’s title, “An integrated multi-omics analysis of the NK603 Roundup-tolerant GM maize reveals metabolism disturbances caused by the transformation process”, suggested the researchers had uncovered some altered metabolic processes caused by the transformation process used to create the NK603 Roundup-tolerant genetically modified (GM) maize line. This event was achieved by direct DNA transformation, by microparticle bombardment of plant cells with DNA-coated gold particles and regeneration of plants by tissue culture on selective medium. This transformation process presumably happened last century, as the feed/food approval for this line in the United States occurred in 2000. However, upon reading the abstract it was clear the paper had nothing to do with disturbances caused by the transformation process; rather, it was about whether the product of this transformation event was “substantially equivalent” based on proteomics and metabolomics evaluation. Strangely, the “conclusiony”-sounding title of the paper therefore had nothing to do with the experimental design or findings discussed in the paper.

According to the results section, the actual “objective of this investigation was to obtain a deeper understanding of the biology of the NK603 GM maize by molecular profiling (proteomics and metabolomics) in order to obtain insights into its substantial equivalence classification.” In plain English – the intent of the paper was to examine both proteins and metabolites found in NK603 Roundup-tolerant GM maize (both treated and untreated with Roundup), and non-GM isogenic lines to determine if the three groups were substantially equivalent using sensitive “-omics” assays.

To perform such an evaluation requires a common agreement as to what substantial equivalence means, and what constitutes an appropriate comparator(s). Unfortunately, no such common understanding exists. According to an OECD publication in 1993, substantial equivalence is a concept which stresses that an assessment of a novel food, in particular one that is genetically modified, should demonstrate that the food is as safe as its traditional counterpart. This has been interpreted to mean that the levels and variation for characteristics in the genetically modified organism must be within the natural range of variation for those characteristics considered in the comparator.

And this brings up the issue of an appropriate comparator. Typically this involves the comparison of key compositional data collected from both the recombinant-DNA crop plant and the isogenic non-GM counterpart, grown under near identical conditions. Ideally, conventional non-GM corn hybrids are also included in analyses to determine the level of natural variation for compositional data in conventional varieties that are considered to be safe for consumption based on a history of safe use.

According to the original studies of the NK603 GM maize variety, compositional analyses were conducted on the key corn tissues, grain and forage, produced in multiple locations (Kansas, Iowa, Illinois, Indiana, and Ohio in 1998, and in trials in Italy and France in 1999). Grain and forage samples were taken from plants of the corn event NK603 and the non-modified control in both years. In the E.U. field trials, reference grain and forage samples also included 19 conventional, commercial hybrids. The NK603 plants were treated with Roundup Ultra herbicide. Fifty-one different compositional components were evaluated.

Not surprisingly, there are protocols on how best to carry out experiments on GM crops that are accepted by regulatory agencies world-wide (OECD 2006; Codex 2009). According to EFSA, for compositional analysis risk assessment, field trials will include: the GM plant under assessment, its conventional counterpart (isogenic non-GM counterpart), and non-GM reference varieties, representative of those that would normally be grown in the areas where the field trials are performed. The latter put some figures and context to the natural biological variation in the different plant varieties we commonly consume.

So what did the Mesnage paper in question do? The researchers planted a single replicate of the GM plant under assessment (DKC 2678 Roundup-tolerant NK603) and its conventional counterpart (DKC 2575 – although the exact genetic makeup of this line, and whether it is a true isogenic counterpart, is not well elaborated in the paper) at a single location in two different years. Half of the GM plants each year were treated with Roundup. The corn kernels were then harvested, the proteins and metabolites from the three groups were assayed using proteome and metabolome profiling, and the data from the two years were merged and analyzed. The three groups (the isogenic non-GM counterpart, the GM plant without Roundup treatment, and the GM plant with Roundup treatment) separated into three distinct clusters based on a principal component analysis (PCA).

Integration of metabolome and proteome profiles of the NK603 maize and its near-isogenic counterpart into a multiple co-inertia analysis projection plot.

I draw your attention to a very similar graph (below) in a paper I recently published, which shows a PCA of the transcriptome (genes expressed) from cattle that have been exposed to different viruses and bacteria. Basically, PCA can pull apart patterns of gene expression in different groups of cattle in response to the specific environmental challenges they are facing. The controls can clearly be seen to be clustering down in the bottom right corner, and the bacterial infections tend to cluster to the right, differently from those infected with viruses, which cluster to the left.

Multidimensional scaling plot of samples based on all genes

That is – if you expose plants or animals to different environmental or disease challenge conditions – they express different genes in response. That is typically why researchers do “-omics” studies: to try to identify which genes/proteins/metabolites respond to different environmental conditions. What they do not show is whether any beef that might be derived from these animals would be unsafe to eat – every animal and plant ever eaten is likely unique in terms of its exact protein and metabolite profile, depending upon its unique environmental conditions and stressors.
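For readers unfamiliar with the technique, here is a minimal sketch of how a PCA like the ones above pulls groups apart; the expression matrix is simulated for illustration, not data from either paper:

```python
import numpy as np
from sklearn.decomposition import PCA

# Simulated expression matrix: 30 samples x 500 genes, three groups of 10.
rng = np.random.default_rng(1)
expr = rng.normal(size=(30, 500))
expr[10:20, :50] += 2.0     # group 2: coordinated up-shift in 50 genes (a "response")
expr[20:30, 50:100] -= 2.0  # group 3: a different 50 genes shifted down

# Project the samples onto the first two principal components.
coords = PCA(n_components=2).fit_transform(expr)

# Each group's samples land together in PC space because they share a
# coordinated expression response - which says nothing about safety.
for name, rows in [("control", slice(0, 10)), ("group 2", slice(10, 20)), ("group 3", slice(20, 30))]:
    print(name, coords[rows].mean(axis=0).round(2))
```

The separation simply reflects coordinated differences in expression between groups; by itself it carries no information about whether any of those differences matter for food safety.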

Unfortunately, there are a number of experimental design problems with the Mesnage et al. (2016) paper that complicate the interpretation of the results, and, just as concerning, there appear to be confounders that further complicate the analyses.

These include:

  • Only a single replicate of each treatment (n=1) at a single location (over two years) was analyzed, with no biological replication or randomization of locations to remove site variability.
  • The data from the two cultivations in different years were inexplicably merged prior to analysis, which made it impossible to determine if results or trends were consistent or reproducible between years.
  • There was no inclusion of non-GM reference varieties (conventional commercial hybrids) representative of those that would normally be grown in the areas where the field trials are performed, to put some figures and context to the natural biological variation in the composition of non-GM corn comparators.
  • There was no discussion of correction for multiple comparisons (by chance, one in every 20 comparisons would be expected to be significant at the p<0.05 level). If doing multiple comparisons it is necessary to apply a multiple-comparison correction.
  • There appears to be evidence of different levels of fungal (Gibberella moniliformis, maize ear and stalk rot fungus) protein contamination between the three groups. See Supplemental Dataset 5, where Tubulin alpha chain OS=Gibberella moniliformis (strain M3125 / FGSC 7600) appears as the protein that had the biggest fold change between control and GM lines. If there were differing levels of fungal infestation among the groups, this would also confound the data.

Others have commented on some of their concerns with this paper including a comprehensive analysis from a number of scientists with expertise in this area. There were also comments from European experts from the science media center. And another discussed the definition and importance of true isogenic lines.

Based on significant differences between proteins and metabolites, including the rather alarmingly named putrescine and cadaverine, which were markedly increased in the GM NK603 corn (N-acetyl-cadaverine (2.9-fold), N-acetylputrescine (1.8-fold), putrescine (2.7-fold) and cadaverine (28-fold)), Mesnage et al. (2016) concluded that NK603 and its isogenic control line are not substantially equivalent, meaning that there were statistical differences between the proteins and metabolites found in the three groups. However, what is not clear is whether the levels and variation for characteristics in the genetically modified organism or the control were within the natural range of variation for those characteristics in corn, and the biological significance of the statistical differences in terms of posing a food safety concern. Differences between the GM variety in the presence and absence of Roundup would presumably be similar to the differences that occur every time a crop is treated with an herbicide, be the plant GM or not.

I could not resist looking up these two metabolites, putrescine and cadaverine, which seem like they should more appropriately be associated with a decaying animal corpse. According to Wikipedia, “Putrescine, or tetramethylenediamine, is a foul-smelling organic chemical compound that is related to cadaverine; both are produced by the breakdown of amino acids in living and dead organisms and both are toxic in large doses. The two compounds are largely responsible for the foul odor of putrefying flesh, but also contribute to the odor of such processes as bad breath and bacterial vaginosis. More specifically, cadaverine is a foul-smelling diamine compound produced by the putrefaction of animal tissue.”

So what are these two horrifying compounds doing in corn samples? Enquiring minds needed to know. So being a good scientist I googled “Cadaverine in corn”, and lo and behold, up came a peer-reviewed study. Check out its Table 1: Mean levels of free bioactive amines in fresh, canned and dried sweet corn (Zea mays).

According to this study on “Bioactive amines in fresh, canned and dried sweet corn, embryo and endosperm and germinated corn”, “Different levels of amines in corn products were reported in the literature. Okamoto et al. (1997) found higher concentrations of putrescine and spermidine in fresh corn. Zoumas-Morse et al. (2007) reported lower spermidine and putrescine levels in fresh and canned corn. The differences observed on the profile and levels of amines may be related to several factors such as cultivars, cultivation practices, water stress, harvest time, grain maturity, types of processing and storage time.” In other words, there is a lot of natural biological variation in the different plant varieties we commonly consume with regard to the amount of amines in corn products, and yet we commonly and safely consume fresh, canned and dried sweet corn. If you really want to get nerdy, there are databases of polyamines in food.

As the multi-omics analysis of the NK603 Roundup-tolerant GM maize paper by Mesnage correctly states, “the vagueness of the term substantial equivalence generates conflict among stakeholders to determine which compositional differences are sufficient to declare a GMO as non-substantially equivalent.” In the absence of knowledge of the natural variation in proteins and metabolites in the common foods we eat, the level of different proteins and metabolites that triggers a safe/unsafe determination, and a testable hypothesis at the outset of an experiment, undisciplined “-omics” studies risk becoming statistical fishing trips.

As someone who works in genomics and knows the tens or even hundreds of thousands of statistical comparisons that are part of genomic analyses, I can attest there is a real need to understand the statistical methods required for multiple comparisons. If 10,000 comparisons are made at the p<0.05 rate, 500 would be expected to be statistically significant by chance alone. The biological relevance of statistical differences is also not always clear, as discussed here. According to the European Food Safety Authority (EFSA) Scientific Committee, good experimental design requires that “the nature and size of biological changes or differences seen in studies that would be considered relevant should be defined before studies are initiated. The size of such changes should be used to design studies with sufficient statistical power to be able to detect effects of such size if they truly occurred.”
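That 500-by-chance figure is easy to verify for yourself with a tiny simulation (illustrative, not drawn from any of the papers discussed):

```python
import numpy as np

# When no comparison reflects a real effect, p-values are uniform on [0, 1],
# so a p < 0.05 cutoff flags roughly 5% of them as "significant" anyway.
rng = np.random.default_rng(42)
p_values = rng.uniform(size=10_000)  # 10,000 comparisons, all true nulls
print((p_values < 0.05).sum())       # ~500 false positives, as expected
```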

In the first line of the discussion, Mesnage et al. state “In this report we present the first multi-omics analysis of GM NK603 maize compared to a near isogenic non-GM counterpart”. There are actually two relevant papers on the NK603 line, here and here, that were published in 2016 but which were inexplicably not even cited in the Mesnage publication. The latter paper is entitled “Evaluation of metabolomics profiles of grain from maize hybrids derived from near-isogenic GM positive and negative segregant inbreds demonstrates that observed differences cannot be attributed unequivocally to the GM trait”, and it compared differences in grain from corn hybrids derived from a series of GM (NK603, herbicide tolerance) inbreds and corresponding negative segregants. The authors concluded, “Results demonstrated that the largest effects on metabolomic variation were associated with different growing locations and the female tester. They further demonstrated that differences observed between GM and non-GM comparators, even in stringent tests utilizing near-isogenic positive and negative segregants, can simply reflect minor genomic differences associated with conventional back-crossing practices.”

Moreover, a 2013 meta-analysis by Ricroch examined data from 60 high-throughput ‘-omics’ comparisons between GE and non-GE crop lines. There are several papers on compositional data in GE versus non-GM corn varieties (here, here, here, here, here, here, here, here). The overwhelming conclusion common to these papers is that natural variation, due to varying genetic backgrounds and environmental conditions, explained most of the variability among the samples. And yet this nuance is missing from the 2016 Mesnage paper – the possibility that, given the poor experimental design, factors other than the genetic modification and treatment with Roundup could have influenced the results is ignored. This tends to be a common feature of this research group – to ignore standard experimental design protocols such as randomization and biological replication, cherry-pick cited literature, and ignore contradictory or preceding studies with dissimilar results, rather than discussing their results in the context of what is known based on the entire weight-of-evidence in the scientific literature.

Ricroch in her meta-analysis summarized that “The ‘-omics’ comparisons revealed that the genetic modification has less impact on plant gene expression and composition than that of conventional plant breeding. Moreover, environmental factors (such as field location, sampling time, or agricultural practices) have a greater impact than transgenesis. None of these ‘-omics’ profiling studies has raised new safety concerns about GE varieties.”

Interestingly, one study showed that transcriptome alteration was greater in mutagenized plants than in transgenic plants. Of course, the random mutations associated with mutation breeding undergo no regulatory evaluation or substantial equivalence assessment prior to commercialization. Variation is the driver of breeding programs, and the reason that varieties like Red Delicious and Golden Delicious apples differ from each other in the first place.

Finally, Mesnage et al. acknowledge funding from “The Sustainable Food Alliance” for their paper. There is no indication as to which groups or interests provide funding for this Alliance. This is not reassuring, and runs counter to the absolute transparency of all funding sources that is being demanded of public sector researchers working in this field.

At the end of the day if I have concerns about a paper by a group that has a track record of publishing highly controversial studies, I like to go back to the Nature graphic shown above to see how many red flags are raised. In this case there were a few, most particularly around experimental design and omitting references and discussion of the finding of other “-omics” studies which have consistently shown the high levels of natural variation that is seen in the composition of food due to the differing environments experienced by the plants (and animals) we consume.

I know that this is more of a response than any journalist could ever use, but as with most everything in agriculture, there are no simple sound-bite answers. Having said that, I appreciate the press reaching out to seek comment from scientists and hope that is increasingly common in 2017. Although taking the time to respond kinda took the rest of my day. I may have to rethink my New Year’s Resolutions if I plan to get any of my own research done this year; I will worry about that tomorrow when I return to work for the year.

Who should fund University research?

A recent article by Danny Hakim on the so-called “agrochemical academic complex” includes a quote, “If you are funded by industry, people are suspicious of your research. If you are not funded by industry, you’re accused of being a tree-hugging greenie activist. There is no scientist who comes out of this unscathed.”

I beg to differ. My research is not funded by industry, and yet to my knowledge I have never been called a “tree-hugging greenie activist.” Quite the opposite – I have been demonized by groups opposed to genetically modified organisms (GMOs) because my publicly-funded research on animal biotechnology and genomics has occasionally published results in line with the weight-of-evidence around the safety of genetically engineered organisms.

After reading the article, I was left with the conclusion that industry (and by this it appears to be any industry associated with the “agrochemical academic complex”, not the activist or NGO industries – the influence of whose funding is noticeably absent from the article) dictates the outcome of the research and public academics are just hapless puppets to be played to produce favorable outcomes for industry.

That is not how my laboratory works. Nor is it how my department operates. Nor my university. Nor in my experience are the public scientists I know willing to trade their hard-won scientific integrity for research funding. Such a move would be career suicide. Publishing incorrect or false data in science sets off a ticking time bomb for retraction when others are unable to repeat these results.

More generally, the article does not seem to understand how academic funding works. And I have found this misunderstanding to be true in interactions with my friends and neighbors too. My university has little control over what I research – it is called “academic freedom”.

As a researcher at UC Davis, I am provided with my salary and a laboratory. The rest is up to me. What I mean by that is if I want to do original research, I need to obtain research funding to conduct that research. Typically this involves writing research grants.

My university has absolutely no say over the topics I choose to research or where I apply for funding. They do, however, have direct control over how I spend the funding I receive, in terms of ensuring I abide by all university policies and that the grant monies are spent appropriately.

By FAR the biggest expense I have in my research program is graduate students. At the current time the annual cost of a UC Davis graduate student is around $26,734 for a 50%-time stipend for 12 months, and $17,581 for fees and health insurance, for a total of $44,315 annually. So for a 2-year Masters student that is $88,630, and for a 5-year Ph.D. student $221,575 – let’s just call that an even quarter of a million.

And that does not include the University’s “indirect rates”, which currently add an additional 57% on top of the stipend, so the amount that needs to be in a research grant for that Ph.D. stipend is an additional $76,192, for a grand total of around $297,767 for a 5-year student – close to an even $300,000 assuming no tuition hikes – and that only gets you 50% of that student’s time; the other 50% of the time they are doing their course work!
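For those who like to see the arithmetic laid out, here is a back-of-the-envelope sketch using the figures above (stipends and rates change year to year, so treat the numbers as illustrative):

```python
# Back-of-the-envelope grant budget for one UC Davis Ph.D. student,
# using the figures quoted above.
stipend = 26_734            # 50%-time stipend, 12 months
fees = 17_581               # fees + health insurance
indirect_rate = 0.57        # indirect cost rate, applied to the stipend as described above

annual_direct = stipend + fees               # $44,315 per year
phd_direct = 5 * annual_direct               # $221,575 over 5 years
phd_indirect = 5 * stipend * indirect_rate   # $76,192 in overhead
print(f"5-year Ph.D. total: ${phd_direct + phd_indirect:,.0f}")  # ~$297,767
```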

Each laboratory is effectively a small business within a much larger entity. Their business is to obtain funding to pay student salaries, conduct research, and publish peer-reviewed articles. The University provides the location and facilities to perform that research. And for that the University taxes a 57% “overhead” rate off the top of the grant award.

If I did not write successful grants to fund graduate students, pay for research supplies, travel to field plots and perform my extension activities then I would not have a research program or anything to publish in the peer-reviewed literature. And that is how I am evaluated – by my peer-reviewed publications and research productivity, not by the amount or source of the research funding I am able to secure. It is the researcher, the so-called “principal investigator”, that drives this enterprise and makes the decisions on what to research and where to obtain grant funding, not the university.

If the university as an entity happens to obtain money from a donor to build a building, for example the UC Davis Mondavi Center for the Performing Arts built with a gift from the wine industry, that in no way affects my research funding or what I choose to research, nor does it mean that researchers have to perform research that is favorable to the wine industry.

If a researcher chooses to seek industry funding, as many do especially in the information technology, transportation and energy sectors, there are strict guidelines around managing potential conflicts of interest and ability to publish the results of the research. According to UC Davis university policy: “Freedom to publish is a major criterion when determining the appropriateness of a research sponsorship; a contract or grant is normally not acceptable if it limits this freedom. The campus cannot accept grants or contracts that give the sponsoring agency the right to prevent for an unreasonable or unlimited time, the release of the results of work performed.”

It is perhaps not well publicized that an inordinate amount of a University professor’s time is spent writing grants to public agencies, which often have a funding success rate of one grant in 10. As such, 9 out of every 10 grants are only ever read by one or two reviewers, and then never see the light of day again. This is not an efficient system, but it is the system we have. If a researcher has public funding, their research has already survived pre-project peer-review by the grant panel. Ironically, some have even suggested that public funding is also tainted. If funding from both private and public sources is suspect, where does that leave academic laboratories, and who will pay to train the next generation of scientists?

Writing grants is perhaps (actually for sure) the least favorite part of my job, and it is time-consuming. But it is a necessary evil if I am to perform research and fund my graduate students. I have been fortunate to be able to secure public funding for my research mostly from the United States Department of Agriculture (USDA) (thanks NIFA!), but many other researchers quite appropriately work with industry sponsorship in the development and evaluation of new technologies. Industries of all types seek such partnerships with public universities to obtain an impartial evaluation of their technology.

Demonizing industry funding as unilaterally suspect in the absence of wrongdoing fails to take into consideration the checks and balances that are in place at public universities, the importance of public-private partnerships in the development of new technologies, and the unfortunate reality that there is a paucity of public research funding.

© 2024 Biobeef Blog
