Thoughts of public sector animal geneticist - all views are my own


Summer caught up with me and I have not blogged in a while, but recently I spent 9 consecutive days watching a docuseries called “GMOs Revealed”. I finished watching the series feeling dismayed and confused. Dismayed as a scientist at the fearmongering and fallacies that were being promoted with disciplined repetition based on anecdotes and gut feelings mixed with a little woo and magic – and in apparently cheerful contradiction to the published opinions of the entire world’s scientific societies and risk assessment agencies – which were summarily dismissed by a host of conspiracy theories.

I would also imagine most viewers left confused as consumers too, because the mixed messages seemed to be that basically everything from food to water to the air we breathe was contaminated, poisoned, and terrifying. The sobering take home message seemed to be one of despair and the fact-devoid prediction that we are all going to get sick and die soon due to various toxins and GMOs and maladies and just  “AAAAGGGGGGGGGHHHHHHH”.

As a parent I don’t know what to do with that message. There are enough actual risks to protect kids from, and hazards to steer clear of in this world – without worrying about ones that don’t exist.

Sometimes the message was to eat organic; sometimes it was to eat low on the food chain; other times it was that you can’t trust the organic label because of big companies, and that instead you should eat non-GMO verified products. Then there was concern that those too are sprayed with glyphosate (a herbicide which an inordinately high proportion of the interviewees could not correctly pronounce for some inexplicable reason – maybe, like Voldemort, it is the herbicide that cannot be named). And several guests were peddling detox products to fix what ails you – some that even reportedly and magically cured your body of the many evils of glyphosate.

And according to the show, the evils of glyphosate were surprisingly numerous. The first claim was actually true: glyphosate blocks the shikimate pathway (shikimic acid pathway), a seven-step metabolic route used by bacteria, fungi, algae, some protozoan parasites, and plants for the biosynthesis of folates and aromatic amino acids (phenylalanine, tyrosine, and tryptophan). Blocking this pathway is why Roundup kills weeds. But then it was suggested that the Roundup Ready crops that end up as our food therefore lacked these essential amino acids.

Except Roundup Ready crops carry a glyphosate-insensitive transgene, CP4 EPSP synthase, that confers crop resistance to glyphosate. This allows the shikimate pathway to proceed unfettered – that is the whole point! Roundup Ready crops continue to produce aromatic amino acids even after treatment with glyphosate, and so the levels of aromatic amino acids in GMO crops are unchanged from their conventional counterparts, i.e., they are substantially equivalent in amino acid content and nutritional value.

Then the claims about glyphosate started getting more wide-ranging and bizarre, including: decreasing dopamine and serotonin, leading to tiredness and anger; shutting down the cytochrome P450 detox pathway, leading to the accumulation of “toxins”; glyphosate crossing the blood-brain barrier for some reason – especially in conjunction with lead and mercury; that glyphosate is an endocrine-disrupting chemical which is more dangerous at lower doses than at higher doses; that it kills beneficial bacteria but not the bad nasty bacteria, and this leads to a long list of disorders including poor sex drive and infertility; that it stops mitochondria from making energy, and this creates brain fog and is linked to birth defects, destruction of endocrine systems, and also non-alcoholic fatty liver disease. And that the extracellular matrix, which communicates information body-wide, gets “crushed” by glyphosate, and finally, perhaps my favorite – that glyphosate is a highly toxic and long-lasting organophosphate.

That last one caught my attention, because organophosphate pesticides – specifically the organophosphate insecticides that act as cholinesterase inhibitors – are a highly toxic and long-lasting class of neurotoxic pesticides. But glyphosate is not in that category. While it can be described as an organophosphorus compound because of its carbon and phosphorus atoms, glyphosate is not an organophosphate ester but a phosphonoglycine, and it does not inhibit cholinesterase activity. Glyphosate is an herbicide with very low acute and chronic toxicity, with an acute oral LD50 (the dose at which 50% of rats die following oral ingestion) of 5,600 mg/kg.

Why is this relevant? Organophosphates (OPs) are a class of insecticides, several of which are highly toxic. Until the 21st century, they were among the most widely used insecticides available. Organophosphates poison insects and other animals, including birds, amphibians, and mammals, primarily by phosphorylation of the acetylcholinesterase enzyme (AChE) at nerve endings. However, in the past decade, several notable OPs have been discontinued, including parathion, the oral LD50 of which in rats is between 3 and 8 mg/kg – meaning it is quite toxic, in fact around 700 times (5,600/8) more toxic than glyphosate.
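For readers who want to check the arithmetic, here is a minimal sketch (purely illustrative) comparing acute toxicity from the LD50 figures quoted above; remember that a lower LD50 means higher acute toxicity:

```python
# Acute oral LD50 values in rats (mg/kg), as quoted in the text above.
# Lower LD50 = more acutely toxic, so dividing glyphosate's LD50 by
# parathion's gives how many times more toxic parathion is.
LD50 = {
    "glyphosate": 5600,  # herbicide, low acute toxicity
    "parathion": 8,      # discontinued OP insecticide (3-8 mg/kg; upper bound used)
}

relative_toxicity = LD50["glyphosate"] / LD50["parathion"]
print(f"Parathion is ~{relative_toxicity:.0f}x more acutely toxic than glyphosate")
# → Parathion is ~700x more acutely toxic than glyphosate
```

Using the lower bound of parathion’s LD50 range (3 mg/kg) would make the ratio even larger, so 700x is the conservative end of the comparison.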

Ironically, it is the banning of these actually toxic organophosphate insecticides, along with the adoption of Bt crops (which decreased insecticide spraying) and Roundup Ready crops (which allowed herbicide substitution to glyphosate), that has led to the documented decrease in the toxicity of the pesticides applied to the four major US crops. The pesticides being used back in “the good old days” of agriculture in the 1970s were considerably more toxic and persistent than those in use today – a GOOD NEWS story that gets remarkably little airtime.

Although the docuseries repeatedly tried to imply glyphosate was absorbed by the body, in fact it is poorly absorbed from the digestive tract and is largely excreted unchanged by mammals. Cows, chickens, and pigs fed small amounts had undetectable levels (less than 0.05 ppm) in muscle tissue and fat. Levels in milk and eggs were also undetectable (less than 0.025 ppm). Glyphosate has no significant potential to accumulate in animal tissue.

So who cares if shows like GMO OMG and GMOs Revealed and the like demonize GMOs and glyphosate in the absence of any objective evidence? I do! As both a mother and an agricultural scientist. Because fearmongering around other safe technologies – think vaccines, food irradiation, and even pink slime – has real-world consequences. And every time a safe technology gets taken off the shelf for no good reason and without a serious and honest discussion of the resulting tradeoffs or opportunity costs, agriculture becomes a little less sustainable, and that deleteriously impacts the future of all our kids.

Recent case studies conducted by researchers in Germany and the UK predict that losing glyphosate would have a considerable effect on crop production costs and would also have an impact on the international trade in several European winter crops and sugar.  “However, the biggest changes in the event of a glyphosate ban are likely to relate to running costs, since many farmers will probably revert to ploughing for weed control. It is estimated that more ploughing and higher costs for machinery and labor would increase production costs for several crops by EUR 8 to EUR 30 per hectare in Germany. This means that even if yields remained stable, the farmers’ profit margin would drop by 7%.”

A recent paper by Oxford Economics examined the likely impact a ban on glyphosate would have on UK farming. It suggested a decrease in yields of 12-14% in wheat and oilseed rape due to more weeds, and a decrease of 15-37% in the acreage of cereals, wheat, and oilseed rape. And that is in a relatively small country that currently grows no Roundup Ready GMO crops (although it does import a lot of GMO animal feed, ironically in part due to the essential aromatic amino acids that are prevalent in soy-based feed).

Impact of a Glyphosate Ban on Farming in the UK

What these scary documentaries seem to be advocating for is a ban on both glyphosate and GMO crops. What might that look like? Well, probably the opposite of the documented impacts of GMO crops on pesticide use and carbon emissions from 1996-2015 – facts and potential tradeoffs that are truly scary and are unfortunately never discussed in these documentaries.

“The adoption of GE insect resistant and herbicide tolerant technology has reduced GLOBAL pesticide spraying by 618.7 million kg (~8.1%) and, as a result, decreased the environmental impact associated with (less toxic) herbicide and insecticide use on these crops by 18.6%. The technology has also facilitated important cuts in fuel use and tillage changes, resulting in a significant reduction in the release of greenhouse gas emissions from the GM cropping area. In 2015, this was equivalent to removing 11.9 million cars from the roads.”

Even critics of glyphosate warn that banning it will lead to the use of chemical alternatives that are orders of magnitude more harmful, both in terms of environmental and human health risks.

So while the “worried wealthy” seem to be increasingly obsessed with avoiding undocumented risks in their food, ensuring the clean composition of their well-fed dog’s bowl, and acting to ultimately preclude farmers’ access to safe technology, what worries me is the impact that these fearmongering “documentaries” are ultimately going to have on what appear to be our shared values: decreasing the global environmental footprint of food production while cutting down on the use of harmful pesticides and carbon emissions for the future well-being of agriculture and the planet.

False and Misleading

The standard for voluntary food labeling in the US is that it must be “truthful and not misleading”. I wish that was true for all speech. In this era of alternative facts and disdain for expertise, there are many politicized topics where objective facts and inconvenient truths are ignored if they don’t match up with preexisting beliefs.

Although many on the left like to point fingers at the right as science denialists when it comes to climate change, there are also some topics such as vaccines and GMOs that are sacred cows, facts be damned for some left of center folks.

I am a faculty member at UC Davis, and I happen to work in animal agriculture. Our sector, in particular, has been the target of many misinformation campaigns. Think of the “pink slime” lawsuit that was just settled between a producer of lean finely textured beef and ABC News. Meanwhile, people routinely reach for milk labelled free of antibiotics, despite the fact that all milk is free of antibiotics. This flows from the oft-repeated myth that dairy cows are “pumped full” of antibiotics. They are not, despite what this misleading labeling might have you believe, and every single tanker of milk in the state is tested prior to sale to ensure it contains no antibiotic residues.

Perhaps nowhere is food fear-mongering more prevalent than in the toxic debate around genetic engineering and “GMOs”. The 51% gap in perception between the public’s feelings on the safety of GMOs and the understanding of the scientific community (37% of the public think GE products are safe versus 88% of scientists) is greater than the gap for any other topic, including anthropogenic climate change.

For 20 years, thousands of studies, eleven National Academies reports, and indeed every major scientific society in the world have attempted to interject objective evidence of GMO safety into the debate without making much progress. The fear-mongering, however, has been relentless – and often disingenuous, as evidenced by the “non-GMO” labeled rock salt that has popped up in the grocery store (spoiler alert: salt doesn’t contain DNA, so salt cannot be genetically engineered – all salt is “non-GMO” salt). But it is much easier to sell fear than science.

As frustrating as it is to logic-driven scientists, people often don’t make decisions on facts alone. Rather they base them on a mixture of gut instinct, world view, and trust. And in this age of widespread suspicion and distrust, it seems many marketers stand willing and ready to monetize distrust by providing “natural” food and “absence” labels for attributes that were never present in that product in the first place. “Gluten free” water comes to mind.

The GMO safety narrative is seemingly chock full of villains (corporations), victims (public health), and heroes (activists) – the necessities of a great story. And although this narrative has been accurate in the past – think tobacco or PCBs – and may be again for some new product, in this case the data do not square with the frightening health claims that have been associated with genetic engineering. Betting against the overwhelming weight of scientific evidence on any topic, be it GMOs, vaccines, or climate change, on the basis of a single study or a conspiracy theory is a very high-stakes wager.

Scientific societies are encouraging scientists to stop shying away from engagement with the public, even on the most polarizing science. And they are advocating for effective science communication. That is easier said than done, but I am engaging, as I am passionate about science, the scientific method, and the need for science-based policy as a key basis for an informed democracy. That is the future I want to leave to my children.

To address this call for increased public engagement, I recently agreed to participate in a feature documentary movie, Food Evolution. Narrated by the esteemed science-communicator Neil deGrasse Tyson, this film uses the GMO debate as an illustrative proxy for broader questions around how we make decisions, and which sources of information we put our trust in.

It has been an interesting experience to interact with audiences at the various screening venues I’ve attended, ranging from New York to Cleveland to Berkeley. Some in the audience have been open to considering new information about how GMO breeding methods might be used in certain situations to produce disease-resistant crops. Some have changed their mind. Others have resorted to motivated reasoning in order to summarily disregard facts that did not agree with their world view. Post-screening questions from those individuals tended to be monologues cataloging points of disagreement, rather than a conversation about possible areas of agreement or solutions to problems.

And then there has been a loud outcry from some members of academia following a June 16 evening screening of Food Evolution at UC Berkeley that I helped organize. A letter signed by 45 individuals entitled “Response to UC Berkeley Early Screening of “Food Evolution”” was posted on the foodfirst.org website on June 16. I happened to see a draft of that letter from a UC Berkeley listserve dated June 15, one day before the screening, that read

“This particular film — Food Evolution — deserves to be called out for what it is: a piece of propaganda. Full disclosure: most of us have yet to see the film in full, but many of us have seen clips, and a few of us were interviewed during film production.”

That clause was removed from the version posted on the web the next day, but the rest of the wording remained unchanged. In other words, most of the people who willingly signed onto the letter criticizing the movie had not even seen it. That is the actual definition of confirmation bias. As academics, shouldn’t you see/analyze something before you offer a detailed critique? I have not yet seen your movie, but I offer the following criticisms…

This group wrote that the movie “manufactures scientific consensus where no such agreement exists”, and cites one paper entitled, “No scientific consensus on GMO safety” that was also a document signed by like-minded individuals including several of the 45 signatories of the Food Evolution critique. This “No Consensus” paper was coordinated by the European Network of Scientists for Social and Environmental Responsibility (ENSSER). It should be noted that this group, formed in summer 2008 to challenge the consensus on GMO safety, holds a position on GMO safety that runs counter to that of every other major scientific society in the entire world including the National Academies of Sciences, Engineering, and Medicine which was formed by Abraham Lincoln in 1863. Food Evolution argues for evaluation of the entirety of the scientific literature, and to avoid cherry picking single studies just because they agree with your perspective. The movie did not manufacture scientific consensus on GMO safety, it reported the conclusions of the world’s scientific societies, sans one outlier.

As expected, activists have also gone after the movie. Zen Honeycutt, from Moms Across America, first went for a full sexism attack, with an article originally headlined, “Why Have the ‘Food Evolution’ Filmmakers Mistreated Women?” The salvo opened by calling the film “misogynistic and patronizing.” A misogynist is a person who dislikes, despises, or is strongly prejudiced against women. Her evidence of that was pretty slim, especially given the prominent role played by a number of female scientists, farmers, and journalists in the movie. She has since toned it down to now just ask the question, “Have the Food Evolution filmmakers mistreated Moms?” (although the URL still is called “why-have-food-evolution-filmmakers-mistreated-women?”).  So I guess the accusation now is that the film makers only mistreated those women who have given birth, not the entire female population.

The original posting has a heading that stated Food Evolution had mistreated women

As a mother, and a woman, and someone who has dealt with my fair share of sexism in my career, I take the charge of misogyny pretty seriously. I feel I have a pretty good sense of spotting misogynists, as I have seen the damaging impact of their behavior in numerous professional situations. Calling someone a misogynist is a damning allegation. And when it is thrown around baselessly with the malicious intent of slandering the filmmakers of Food Evolution, director Scott Hamilton Kennedy and producer Trace Sheehan, two men whom I respect and admire and who are absolutely NOT misogynists, I call out of bounds. If anything, the scaremongering around GMOs mistreats moms and their families by creating fear and mistrust of the conventional food supply in the absence of any scientific evidence. This can scare mothers on tight budgets to pay money they can’t afford for expensively labeled foods and to avoid fresh produce due to a misplaced fear of pesticides. Preying on a mother’s fears for the safety of her children is the most disingenuous use of marketing that I can imagine. May there be a special place in hell reserved for people who profit from exploiting moms’ protective instincts.

Food Evolution weaves science into a narrative story.  It advances the discourse around GMOs from a stale false dichotomy to a more nuanced discussion about how replicable science might be used to develop “Yes/And” solutions to problems. It does not address every issue associated with food production. No movie could. But it clearly puts a stake in the ground around the safety of GMOs, to try to dispel the pervasive myths that are blocking the deployment of this breeding method to address real problems. Some are trying to label Food Evolution as propaganda. That label is again false and misleading.

At the end of the day, Food Evolution is really a movie about how people make decisions in the face of uncertainty. It’s also about the importance, and difficulty, of changing your mind based on new evidence and objective truths. At this juncture in history, it is an opportune time to consider one of the key questions posed in the movie – when considering a matter of substance: When was the last time you changed your mind, or perhaps as importantly when is the last time you opened your mind?

What defines organic milk?

On May 1 the Washington Post came out with an incredibly misleading article entitled “Why your ‘organic’ milk may not be organic”. What it did not provide was incontrovertible evidence to support this assertion – that would require showing a clear violation of the “Organic Standards”, for they alone define organic in this country. Rather, the article seemed to be concerned that there are large organic dairies.

The article stated that “organic dairies are required to allow the cows to graze daily throughout the growing season — that is, the cows are supposed to be grass-fed, not confined to barns and feedlots. This method is considered more natural and alters the constituents of the cows’ milk in ways consumers deem beneficial.”

A simple check of the “Organic Standards”, i.e., the USDA National Organic Program regulations, would have revealed that statement to be misleading. In fact the rules are that “Organic ruminant livestock must have free access to certified organic pasture for the entire grazing season. This period is specific to the farm’s geographic location, but must be at least 120 days. Additionally organic ruminants’ diets must contain at least 30 percent dry matter (on average) from certified organic pasture. The rest of its diet must also be certified organic, including hay, grain (although see another recent WaPo article on organic grain imports), and other agricultural products. Outside the grazing season, ruminants must have free access to the outdoors year-round except under specified conditions (e.g. inclement weather).”

So the objective fact is that organic cows are required to get 30% of their dry matter intake, on average, from certified organic pasture during the growing season. They are not “supposed to be grass-fed”, although they are required to have access to the outdoors year-round except under specified conditions. Colorado’s growing season is defined as March or April for cool-season grasses, and May for warm-season grasses, through the first hard freeze, generally late August to early September. That is about 120-150 days. If you go there for three days in August, three days in September and two days in October – you have pretty much missed most of the growing season. And so just because you do not see cows grazing does not mean they are suddenly not organic! The rest of the diet is still required to be certified organic feed. These feedstuffs are considerably more expensive than conventionally-raised feedstuffs, and so organic farmers incur higher feed costs, which is part of the reason organic milk is more expensive. Feed is the major cost in animal production systems.

Organic agriculture is delineated by its standards as defined by the Organic Foods Production Act of 1990. In other words, although the Washington Post article asserts that organic cows are “supposed to be grass-fed”, that is NOT what the standards require. This is important. If you are going to claim that operations are violating the organic standards, you had better be aware of exactly what the standards require!  Articles like this cast doubt on the organic milk market supplied by some of the farmers I work with here in California, also a dry Mediterranean climate with an equally short growing season. Just because cows are not on pasture, does not mean they are out of compliance with the organic standards, and so the headline “Why your ‘organic’ milk may not be organic” is not supported by the information in the article.

Figure 1. Regional variation in fatty acid content of retail whole milk, g/100 g (12-month average ± SE). A: Linoleic acid (LA, ω-6). B: α-linolenic acid (ALA, ω-3). C: Conjugated linoleic acid. Abbreviations: NW  =  Northwest, CA  =  California, RM  =  Rocky Mountain, TX  =  Texas, MW  =  Midwest, NE  =  Northeast, M-A  =  mid-Atlantic. Numbers of samples apply to panels B and C; for panel A conventional NE is 34 and All is 107. For LA and ALA, all differences between organic and conventional contents are statistically significant by Mann-Whitney test (P<0.005) except for the CA region (P≥0.10). For CLA no such differences are statistically significant (P>0.08) except for the NE region and All regions (P<0.001)

Further, the article goes on to suggest that organic is associated with some milk quality attributes in terms of milk fatty acid composition. Organic does not guarantee a certain fatty acid composition of the product. In fact, grass-fed “conventional” cows have the same milk fatty acid profile as organic grass-fed cows (as I discussed in a previous blog post).

If you happen to live in a place that favors a year-round growing season for grass, like New Zealand or the northern coastal counties of California (e.g., Humboldt, as shown in this graph), then both organic and conventionally-farmed milk will have marginally higher levels of omega-3 fatty acids as compared to milk from cattle fed a diet higher in omega-6 fatty acid feed sources. This can even be seen in the California data point in Figure 1, from the article cited by the Washington Post, where in fact the conventional milk had higher levels of the desirable omega-3 α-linolenic acid, and marginally higher levels of CLA (conjugated linoleic acid).

In other words, it is what the cow eats that impacts the fatty acid composition of her milk, not whether she is in a conventional or organic production system. While the organic standards require organic cows to get 30% of their dry matter intake, on average, from certified organic pasture during the growing season, it is not against the standards for cows to be fed other certified organic feedstuffs, nor for 100% of their diet to come from such feedstuffs outside the growing season.

As stated by former Secretary of Agriculture Dan Glickman regarding organic products

“Let me be clear about one thing. The organic label is a marketing tool. It is not a statement about food safety. Nor is ‘organic’ a value judgement about nutrition or quality,” SECRETARY OF AGRICULTURE DAN GLICKMAN, DECEMBER 2000

The only specific feature that distinguishes organic from all other forms of farming is the requirement to abide by the production methods outlined in its standards. These include the rejection of antibiotics to treat sick animals, prohibiting the use of genetically engineered seed and feed and of soluble minerals as fertilizer, and avoiding the use of most synthetic pesticides (except dairy cattle dewormers) in favor of natural ones.

The Washington Post article states that grazing “alters the constituents of the cows’ milk in ways consumers deem beneficial.” I love that it says in a way that “consumers” deem beneficial, because that is not what the scientific literature says. As I mentioned in my previous blog post, if you are looking to get omega-3s, eat a food source that has high levels of omega-3 fatty acids, like salmon! The marginal differences in omega-3s between pasture-fed and concentrate-fed dairy cows are unlikely to be biologically meaningful, as milk is not considered a good source of these fatty acids to begin with. Milk is a good source of other nutrients like vitamin D, calcium, and potassium.

According to the USDA standard reference database, an eight fluid ounce cup (244 g) of 3.25% fat milk has 0.183 grams of omega-3s, most of it 18:3 (α-linolenic acid). A half fillet serving (178 g) of salmon has 4.023 grams of omega-3s, most of it long-chain fatty acids (EPA and DHA). In other words, I get more than 20 times the omega-3 fatty acids from a serving of salmon than I get from a glass of milk, and they are the long-chain varieties. And if the milk is non-fat or skim, the amount goes down to 0.0049 grams of omega-3s, because – well, they removed the fat!
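The salmon-versus-milk comparison can be checked with a few lines; the per-serving gram figures below are the USDA values quoted above:

```python
# Omega-3 content per serving (grams), from the USDA figures quoted in the text.
omega3_whole_milk = 0.183   # 8 fl oz cup (244 g) of 3.25% fat milk
omega3_salmon = 4.023       # half fillet serving (178 g) of salmon
omega3_skim_milk = 0.0049   # same cup of non-fat milk

ratio = omega3_salmon / omega3_whole_milk
print(f"Salmon delivers ~{ratio:.0f}x the omega-3s of a glass of whole milk")
# → Salmon delivers ~22x the omega-3s of a glass of whole milk

# Versus skim milk the gap is even starker:
print(f"...and ~{omega3_salmon / omega3_skim_milk:.0f}x that of skim milk")
```

The ~22x figure is consistent with the “more than 20 times” statement in the text (and this is before accounting for the fact that the salmon omega-3s are largely the long-chain EPA and DHA varieties).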

So why do I care if the Washington Post gets an article on organic livestock standards wrong? Because accuracy around reporting  in agriculture is as important as accuracy around all other subjects, and yet often the nuances of farming are omitted to make for a better story. And for years there have been negative stories especially about “big” agriculture. The organic standards for livestock are what they are; and they are clearly delineated. If you comply with them then you are allowed to label the products coming from that production system as organic. Organic certification does not guarantee food safety, or improved nutrition, or 100% grass fed, or a specific size of farm, or a specific fatty acid profile in the milk derived from cows raised on organic dairies.

I am certainly no fan of ANY production system that arbitrarily prohibits the use of safe technologies that could reduce the environmental footprint of our food production. It goes against my understanding of the need to allow farmers to have flexibility when addressing the unique problems on their farm, and  of my interest in researching new ways to try to improve the efficiency of agricultural production systems. And I certainly share the concern of Anthony Trewavas  that as demand increases “The consequence of less-efficient agriculture will be the elimination of wilderness that by any measure of biodiversity far exceeds that of any kind of farming system”. However I think this article did not fairly explain the organic livestock standards, nor in fact prove that there was a clear violation of those standards which is the basic premise of its misleading headline.

Are slow-growing chickens better?

Many agricultural scientists research ways to make agriculture more sustainable. As a geneticist, I see genetics as a solution to many of the problems that farmers face, be that disease resistant plants and animals, or species that are optimally suited to their place in agricultural production systems. Plant and animal breeders have perhaps the most compelling sustainability story of all time. Genetic improvements in our food species have dramatically increased the yield per plant, animal, or acre – and unlike other inputs – genetic improvements are cumulative and permanent. The following graphic illustrates the additional land and/or animals we would need to deliver 2014 levels of production using 1950s genetics and farming methods.

Since I am an animal scientist I am going to focus on that last row containing the broilers. If not for the genetic and management improvements in broiler production since the 1950s, we would need to grow an additional 8 billion animals annually to equal the production achieved in 2014.  Think about that number.  8 billion more.  Every year.

It’s obvious that staggering advances have been made in plant and animal production since the 1950s. How did breeding companies achieve such improvements? They did it largely through conventional selection which includes sophisticated techniques such as genomic selection, large pedigrees, and very comprehensive performance recording for a number of traits. For example, Cobb (Cobb-Vantress Inc., Siloam Springs, AR) records 56 individual observations on each pedigree selection candidate in their broiler breeding program. More than 50% of these 56 individual traits are some measure of health and fitness of an individual. This underscores the importance of combined selection for many traits, including robustness, specific and general disease resistance, absence of feet and leg problems and metabolic defects in the breeding objectives.

Current breeding programs are improving the efficiency of meat production in the broiler industry by 2–3% per year. In the United States, the days needed to grow a broiler to 5 lbs continue to decrease by 0.74 days per year, breast meat yield continues to improve by 0.5% per year, and the feed-conversion ratio (FCR, lb of feed required to obtain one lb of growth) is decreasing by 0.025 per year. At the same time, the livability (survival expectancy) of broilers is improving 0.22% per year, and condemnation rates have decreased 0.7% per year.
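To see what “cumulative and permanent” means in practice, here is a minimal sketch of how the quoted 0.025-per-year FCR trend compounds into feed savings. The starting FCR of 1.9 and the 5 lb market bird are illustrative assumptions of mine, not figures from the sources above:

```python
# Minimal sketch: linear FCR decline at the quoted rate of 0.025 per year.
# Assumptions (illustrative, not from the cited sources): starting FCR of 1.9
# and a 5 lb market bird.
start_fcr, annual_drop, bird_lb = 1.9, 0.025, 5.0

for years in (1, 5, 10):
    fcr = start_fcr - annual_drop * years
    saved = annual_drop * years * bird_lb  # lb of feed saved per bird vs year 0
    print(f"after {years:2d} yr: FCR {fcr:.3f}, feed saved per bird {saved:.2f} lb")
```

Over a decade that is 1.25 lb of feed saved per 5 lb bird, and because the improvement is genetic it never has to be “re-purchased” the way other inputs do.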

So by using balanced selection objectives that consider not only efficiency but also the health and fitness of birds, breeders have been able to improve the feed conversion ratio, decrease condemnation rates and increase the survival expectancy of broilers. This would seem to align with most people’s values of decreasing the environmental footprint of food production by improving efficiency, and also improving the livability (decreasing mortality) of the birds. Is this a rare example of a win-win situation?

Entering the “alternative fact” zone

Not according to Whole Foods, who have committed “to replace fast-growing chicken breeds with slower-growing breeds.” Although this change is not expected to be completed until 2024, Whole Foods is the first major food company to make this change. And why? Well according to Theo Weening, the global meat buyer for Whole Foods Market, the slow-growing bird “is a much better, healthier chicken, and at the same time it’s a much [more] flavorful chicken as well”. Unfortunately, he does not present any data to back up those wishful claims. Why would slow growth equate to a more flavorful chicken if none of the other production parameters changed? And what is the basis for suggesting they are healthier, which seems to contradict the evidence-based literature suggesting that the livability (survival expectancy) of broilers is improving 0.22% per year due to selection?

The Global Animal Partnership (GAP), an organization that Whole Foods set up to create welfare standards for its suppliers, seems to have arbitrarily decided that “slower growing” means a gain of 50 grams or less per chicken per day, averaged over the growth cycle, compared to the current industry average for all birds of approximately 61 grams per day. This means that in order to reach the same market weight, the birds would need to stay on the farm significantly longer: 58 days rather than 44 days.

It does not take a rocket scientist to figure out that slower growing birds require more feed per pound of gain (the feed conversion ratio (FCR) is 2.2 for the slow growing birds, versus 1.9 for the industry average). In all, the impact of adopting slow growing birds is a 34% increase in feed per lb prime meat, a 40% increase in gallons of water and a 53% increase in the manure per bird marketed, and a 49% increase in costs per bird marketed. So in one fell swoop this decision dramatically increased the environmental footprint of broiler production by intentionally switching to a “Hummer” type of chicken rather than a “Prius”.
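A back-of-envelope sketch of that arithmetic, using the daily gains and FCRs quoted above. The 5 lb live-weight target is an illustrative assumption of mine; the article’s 34% figure is per lb of prime meat, which also reflects differences in meat yield, so the live-weight increase computed here is smaller:

```python
G_PER_LB = 453.6
TARGET_LB = 5.0  # illustrative live-weight target (an assumption, see above)

def days_and_feed(adg_g_per_day, fcr):
    """Days on farm to reach the target weight, and lb of feed consumed."""
    days = TARGET_LB * G_PER_LB / adg_g_per_day
    feed_lb = TARGET_LB * fcr
    return days, feed_lb

fast_days, fast_feed = days_and_feed(61, 1.9)  # current industry average
slow_days, slow_feed = days_and_feed(50, 2.2)  # GAP "slower growing" standard

print(f"fast: {fast_days:.0f} days, {fast_feed:.1f} lb feed")
print(f"slow: {slow_days:.0f} days, {slow_feed:.1f} lb feed")
print(f"extra feed per lb live weight: {slow_feed / fast_feed - 1:.0%}")
```

Even on this deliberately conservative basis, the slow-growing bird eats about 16% more feed for every pound of live weight, before accounting for lower meat yield, extra water, manure and barn-days.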

And to what end is this big step backwards in terms of sustainability being undertaken? Theoretically for animal welfare. But what is absent in this discussion is why slower growing equals better welfare. Why is growing at less than 50 grams of weight gained per chicken per day for 58 days better for welfare than growing at 61 grams per day for 44 days? Where is the objective evidence base to support this assertion? Nothing else about how the chickens are being raised is changing; they are just around for 14 more days before slaughter.

Upon receiving an award “recognizing the commitment that Whole Foods Market and GAP have made to offering only slower-growing chicken breeds by 2024”,  Anne Malleau, executive director for GAP stated “By addressing fast growth in chickens, we will be getting to the root of the welfare problem facing chickens today.” That may be her opinion, but I would like to see the data supporting this contention – where is it shown that growing at less than 50 grams of weight gained per chicken per day is associated with improved welfare? What metrics were used? And does that mean even better welfare is associated with growing even slower? The evidence base for this determination is important given this decision has real negative impacts on the environmental and economic components of sustainability. There are almost always goal conflicts and tradeoffs between the environmental, social, and economic goals of sustainability, and as a result of these goal conflicts we have all sorts of marketers profiting off this to suggest THEIRS is the ONLY truly sustainable system!

At the current time the evaluation and ranking of sustainability goals is subjective and open to interpretation by marketing groups. While marketers are free to make decisions that appeal to their target customer, it is important to consider the actual implications of these decisions. In this case the unproven claim that chickens have to gain less than 50 grams of weight per day to have “good welfare” must be balanced against the very real increase in the environmental footprint and cost of broiler production associated with the adoption of “slow growing” genetics.

And perhaps as concerning to me, these arbitrary marketing decisions made in the absence of any data are working in direct opposition to the efforts of agricultural scientists to improve efficiency and decrease the environmental footprint of food production, a goal that I believe is also an important component of sustainability.

In what seems like a scene from the movie Groundhog Day, another rat study has come out of the laboratory of Dr. Gilles-Éric Séralini, only in this case it is Roundup and not GMOs that are under fire. When I read the title of the paper, “Multiomics reveal non-alcoholic fatty liver disease in rats following chronic exposure to an ultra-low dose of Roundup herbicide”, I assumed a new study had been performed by the laboratory showing what this specific title appears to conclude, i.e. that rats exposed to low levels of Roundup developed non-alcoholic fatty liver disease. However, when I read further I found that this was a study on tissues from a subset of the same lumpy rats that were involved in the famously retracted (and subsequently republished) paper from 2012 – the rats with horrific tumors (not fatty livers) due to GMOs (not glyphosate) that was breathlessly reported on the Doctor Oz show I participated in, and by media throughout the world.

I think if my work had been roundly criticized by scientific peers for poor experimental design and pathology data inadequacies, and critiqued by a multitude of separate national biosafety committees from Belgium, Brazil, the European Union, Canada, France, Germany, and Australia/New Zealand, as well as The High Council on Biotechnology, I would not double down and continue to analyze 5-year-old samples from that same experiment. What is weird is that although I vividly remember the images of grotesque tumors on the white Sprague Dawley female rats (one does not forget those images, with a “GMO” label contrasted against the shocking tumors), I did not recall any mention of non-alcoholic fatty liver disease. So I went back to the original paper and searched for the term “fatty liver disease”. Nada.

In fact, the only liver data in that retracted/republished 2012 paper were presented for the male rats. According to the 2012 paper, the males that received the low levels of Roundup (50 ng/L glyphosate equivalent dilution) displayed liver “congestions” and “macroscopic and microscopic necrotic foci”, not fatty liver disease. I asked a Laboratory Animal pathologist at UC Davis who specializes in rodent health to review the data in the paper to determine if it suggested the rats had fatty liver disease. There was no histopathologic evidence of hepatic lipidosis presented in males, and no data on female livers were presented at all. Many of the “anatomical pathologies” observed are common aging-related findings, and this was not taken into account or discussed. They suggested the term “anatomopathological analysis” was a very irregular term for a veterinary pathologist to use, and that the use of hepatodigestive tract and liver as separate categories of pathology incidence was redundant. They kept doggedly going back to the fact that no fatty liver phenotype data were ever presented on female livers, so they could make no determination as to whether or which rats were suffering from fatty liver disease.

If you want a really interesting read from a group of veterinary pathologists who reviewed the pathology data in the 2012 Séralini study, their review contains the following understated scientific barbs (bold emphasis mine).

The sentence ‘The largest palpable growths (…) were found to be in 95% of cases non-regressive tumors, and were not infectious nodules.’ is very confusing. We hope that differentiating inflammatory from neoplastic lesions was not a challenge for the authors. Another clear example illustrating the lack of accuracy of the results is found in Fig. 3 where microscopic necrotic foci in the liver are grouped with clear-cell focus and basophilic focus with atypia. The first finding refers to a degenerative process whereas the remaining two refer to a proliferative one (Thoolen et al., 2010). Such basic error would be considered as a disqualifying mistake at an examination for pathologists.

Ouch.

They then go on to ask why there was no mention of which pathologist did the analyses, and why the rats were not euthanized earlier:

as most members of the ESTP [European Society of Toxicologic Pathology] are veterinarians, we were shocked by the photographs of whole body animals bearing very large tumors. When looking at the lesions, we believe those animals should have been euthanized much earlier as imposed by the European legislation on laboratory animal protection

and then conclude their diatribe with the following:

The ESTP comes to the conclusion that the pathology data presented in this paper are questionable and not correctly interpreted and displayed because they don’t concur with the established protocols for interpreting rodent carcinogenicity studies and their relevance for human risk assessment. The pathology description and conclusion of this study are unprofessional. There are misinterpretations of tumors and related biological processes, misuse of diagnostic terminology; pictures are not informative and presented changes do not correspond to the narrative.

For those who are not immersed in science – these are damning criticisms.

So back to the 2017 study, which cites a 2015 “transcriptomics” study by the same group for the observations on the female livers. In that study, livers from 10 control females and the 10 females from the R (A) group of the 2012 study (for those of you paying attention) were analyzed using “transcriptomics”. So I went to read the 2015 paper to see if the Roundup-ingesting females perhaps had some liver data, and again there was no discussion of a fatty liver disease phenotype. There was, however, an interesting discussion of why tissues from the females were used for the analysis in both the 2015 “transcriptomics” and 2017 “multiomics” papers.

In the 2012 study that started it all, apparently

“Most male rats were discovered after death had occurred. This resulted in organ necrosis making them unsuitable for further analysis. We therefore focused our investigation on female animals where freshly dissected tissues from cohorts of 9-10 euthanized and untreated rats were available. Female control and Roundup-treated animals were respectively euthanized at 701 ± 62 and 635 ± 131 days. Anatomopathological analysis of organs from these animals revealed that the liver and kidneys were the most affected organs.”

Well, the fact that the males got to a stage of necrosis because no one discovered they were dead seems strange in a study where rats are presumably checked every day, as required by every animal care protocol I am familiar with. However, such protocols would also have required the rats to be sacrificed long before the tumors were able to grow to the sizes clearly evident in the photos associated with this study. And the fact that the liver and kidneys were the most affected organs might well have been true for the male rats (and these apparently necrotic tissues were analyzed and reported for these males), but for the female rats, according to the 2012 paper, it was all about the tumors!

Image from Séralini et al. 2012

That was the whole basis of the sensational 2012 paper that actually resulted in entire African countries rejecting all GMO imports. Reread that previous sentence because it shows the power of this one, poorly-designed study with 120 rats.

So livers were being harvested from these 20 females – several of which were compromised and euthanized “early” (2 from the control group, and 5 from the “treatment” group) at different ages due to the tumor load. Is it not obvious that these additional factors of tumor load and different ages would confound any data collected from their livers?

The 2015 paper goes on to show electron microscope analysis of liver sections from females. But it turns out the photograph of the control female was actually the same photo, at a different magnification, as that shown for the control male hepatocyte image in the 2012 paper. The authors have since stated that was an honest mistake and have submitted a corrigendum, but go on to suggest that there are differences in the hepatocytes from Roundup-treated rats, specifically showing “a disruption of glycogen dispersion”, a disruption of nucleolar function and an overall decreased level of transcription. How transcription can be determined from an electron micrograph is unclear. No mention is made of the “fatty liver disease” promised in the 2017 paper’s title.

So let me sum this up for those of you who may be lost. The original, highly-controversial 2012 study was done on 120 rats. The most recent study was performed on the livers of a subset of 10 female control rats and 10 female rats from that same 2012 study that were in “Roundup group (A)” which received 50 ng/L glyphosate equivalent dilution in their water. We do not know their water intake so have no idea of actual dosage of “Roundup”; we have little histological data on female liver samples – let alone a diagnosis of fatty liver disease; we know that the control and “treated” rats were euthanized at a variety of differing ages, and that the majority of these female rats had huge tumors that required several of the rats in both the control and Roundup groups to be euthanized before two years of age. And the livers from these 20 rats were the basis of the most recent “omics” paper. There is a saying in science (and perhaps other disciplines): “garbage in – garbage out”.

So let’s plough on – and read the 2017 paper which concludes that the metabolome and proteome analyses of the livers from the “Roundup-drinking” rats versus the controls “showed a substantial overlap with biomarkers of non-alcoholic fatty liver disease and its progression to steatohepatosis”. Hooray – now THERE is a testable hypothesis – so what ARE the biomarkers of non-alcoholic fatty liver disease? In other words, what proteins and metabolites might you expect to see upregulated (or downregulated) if in fact animals had non-alcoholic fatty liver disease? I have read the paper several times now and seen no reference to a paper that answers that question. So in the absence of knowledge of fatty liver biomarkers, and given the fact no pathology diagnosed “fatty liver disease”, to conclude that “Multiomics reveal non-alcoholic fatty liver disease in rats following chronic exposure to an ultra-low dose of Roundup herbicide” is – to put it kindly – overstating the results of the research and drawing conclusions beyond those supported by the data.

Interestingly, the bioinformatics analysis in this 2017 paper appears to be an improvement on previous work by this group, in that the p-values were adjusted to account for the high number of analytes measured (1906 proteins and 673 metabolites) and the consequent need to correct for multiple comparisons to minimize the number of false positives. The authors even include a discussion of the need for multiple comparison corrections on page 9, and correctly state that such corrections are needed when measuring hundreds or thousands of observations to reduce the chance of making a type I error (false positive). However, they then lament the lack of statistical significance, following the multiple comparison correction, for all but three metabolites, due to the small sample size. That is the point! That is why these studies need to have sample size determinations based on the hypothesis being tested.

This study, which was based on the experimental design of a 90-day subchronic toxicity study (OECD, 1998) with 10 animals assigned to each group, was critiqued by the German Federal Institute for Risk Assessment (BfR) for that very reason: its sample size was too small.

subchronic studies show a substantially lower variation of age-related pathological changes between animals within a group while those changes are inevitable in long-term studies. As the published study has confirmed, the two-year duration of the study is of the order of the expected life span in rats including the Sprague Dawley strain that was used in the study. This strain, provided by the breeder Harlan, is known to develop spontaneous tumors, particularly mammary and pituitary tumors, at relatively high rates compared to other strains (Brix et al., 2005; Dinse et al., 2010). Therefore, it can be expected that a significant number of animals develop age-related illnesses or die for diverse reasons already during conduct of the study. The distribution of the cases of death between groups can be random, and a number of 10 animals per sex and group is too low to confirm a trend or an effect. Furthermore, no statements on statistically significant dose-response-relationships can be made. Larger sample sizes, as recommended for carcinogenicity studies in OECD Test Guidelines No. 451 or No. 453, would be required in order to allow precise statements with respect to the findings.

In other words you need to have bigger sample sizes to perform long term studies because many changes are associated with old age – especially when working with a rat strain that is known to develop spontaneous tumors, particularly female mammary and pituitary tumors!

Frustratingly, when the multiple comparison correction removed all but three of the 673 metabolites as statistically significant in the 2017 paper, the authors just went ahead and included the 55 that had significant uncorrected p-values(!), because “the non-adjusted statistically significant levels” fit a narrative, and so were revived from the statistical trash can on the basis that “they were found to be non-random and thus biologically meaningful”. This is the very definition of confirmation bias, which is exactly what multiple comparison correction and sound experimental design are meant to guard against; scientists are people too, and they are not without their own preconceived notions of how the world works.
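The reason the correction matters is easy to demonstrate with a generic simulation (an illustration of the statistics only, not a reanalysis of the paper’s data): under the null hypothesis of no treatment effect, p-values are uniformly distributed, so screening 673 metabolites at an uncorrected p < 0.05 will “discover” around 34 of them by chance alone.

```python
import random

random.seed(42)
n_tests = 673  # number of metabolites measured

# Under the null hypothesis (no real treatment effect on any metabolite),
# p-values are uniformly distributed between 0 and 1.
p_values = [random.random() for _ in range(n_tests)]

raw_hits = sum(p < 0.05 for p in p_values)             # no correction
bonf_hits = sum(p < 0.05 / n_tests for p in p_values)  # Bonferroni correction

# Roughly 0.05 * 673 = 33.65 "discoveries" are expected by chance alone.
print(f"'significant' at uncorrected p < 0.05: {raw_hits}")
print(f"significant after Bonferroni correction: {bonf_hits}")
```

A false-discovery-rate correction such as Benjamini–Hochberg, the kind more commonly used in omics work, behaves the same way on null data: almost everything that looked “significant” without correction disappears, because it was noise.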

More concerning, this 2017 paper is yet another in a string of papers from this group that was accepted in a peer-reviewed journal, in this case Scientific Reports, an online journal from the publishers of Nature. The problems in experimental design, lack of supporting pathology data on the test subjects, and wildly subjective overinterpretation of the results should have been grounds for soundly rejecting this manuscript. We live in an age of the willful neglect of scientific evidence, and the emergence of “alternative facts” and realities. As a scientist it worries me that papers like this are published in apparently respected journals. I remember once hearing a member of the activist industry say that “peer-reviewed journals are the tool of the enemy”, suggesting they were the gold-standard communication tool for scientists to report inconvenient facts. At the time I did not appreciate the importance of that statement, and concerningly it appears that this is no longer the case. If we can’t trust the peer-review process to ensure the integrity of papers published in scientific journals, what can we trust? This is a problem that should worry the entire scientific community, not only those concerned with the topic of this particular paper.

FDA seeks public comments on regulation of genetically altered animals

The recently released FDA guidance for producers and developers of genetically improved animals and their products defining all intentional DNA alterations in animals as drugs, irrespective of their end product consequence, is nonsensical.

FDA “Guidance for Industry #187” updates the never-finalized 2009 document “Regulation of Genetically Engineered Animals Containing Heritable rDNA Constructs” to the much more expansive “Regulation of Intentionally Altered Genomic DNA in Animals”, expanding the scope of the guidance to address animals intentionally altered through the use of genome editing techniques. No longer is it the presence of an rDNA construct (which conceivably COULD have encoded a novel allergen or toxic protein) that triggers FDA regulatory oversight of genetically engineered animals, but rather it is the presence of ANY “intentionally altered genomic DNA” in an animal that triggers oversight. Intention does not equate to risk. This trigger seems to be aimed squarely at breeder intention and human intervention in the DNA alteration.

DNA is generally regarded as safe. We eat it in every meal, and along with each bite we consume billions of DNA base pairs. Each individual differs from another by millions of base pair mutations – we are always consuming DNA alterations – the mutations that provided the variation that enabled plant and animal breeders to select corn from Teosinte and Angus cattle from Aurochs.  DNA does alter the form and function of animals – and all living creatures – it is called the genetic code, the central dogma,  and evolution. If DNA is a drug then all life on Earth is high.

The guidance states that “intentionally altered genomic DNA may result from random or targeted DNA sequence changes including nucleotide insertions, substitutions, or deletions”; however, it clarifies that selective breeding, including random mutagenesis followed by phenotypic selection, is not included as a trigger. So the random DNA alterations that result from de novo or chemically induced mutagenesis will not be a trigger, but intentional, precise, and known alterations – and any off-target random changes that might be associated with the intended edit – will trigger regulation, irrespective of the attributes of the end product. This is beyond process-based regulation; it is regulation triggered by human intent. That is, if a breeder was involved, it is regulated. If the same random mutations happened in nature or due to uncontrolled mutagenesis – not regulated.

This sounds a lot like what Greenpeace is arguing for when they state that a GMO is when “the genetic modification is enacted by heritable material (or material causing a heritable change) that has, for at least part of the procedure, been handled outside the organism by people.” The problem is that risk is associated with the attributes of the product, not the fact that it is handled by people or carries the taint of human intention.

This approach is the polar opposite of the 2016 National Academies report, which concluded that the distinction between conventional breeding and genetic engineering is becoming less obvious. They reasoned that conventionally bred varieties are associated with the same benefits and risks as genetically engineered varieties. They further concluded that a process-based regulatory approach is becoming less and less technically defensible as the old approaches to genetic engineering become less novel and as emerging processes — such as gene editing — fail to fit current regulatory categories of genetic engineering. They recommended a tiered regulatory approach focused on intended and unintended novel characteristics of the end product resulting from the breeding methods that may present potential hazards, rather than focusing regulation on the process or breeding method by which that genetic change was achieved.

The new FDA Guidance, released two days before Trump’s inauguration, then goes on to state “a specific DNA alteration is an article that meets the definition of a new animal drug at each site in the genome where the alteration (insertion, substitution or deletion) occurs.  The specific alteration sequence and the site at which the alteration is located can affect both the health of the animals in the lineage and the level and control of expression of the altered sequence, which influences its effectiveness in that lineage. Therefore, in general, each specific genomic alteration is considered to be a separate new animal drug subject to new animal drug approval requirements.” So every SNP is potentially a new drug, if associated with an intended alteration.

To put this in perspective, in one recent analysis of whole-genome sequence data from 234 taurine cattle representing 3 breeds, >28 million variants were observed, comprising insertions, deletions and single nucleotide variants. A small fraction of these mutations have been selected owing to their beneficial effects on phenotypes of agronomic importance. None of them is known to produce ill effects on the consumers of milk and beef products, and few impact the well-being of the animals themselves.

What is not clear is how developers are meant to determine which alterations are due to their “intentions”, and which result from the spontaneous de novo mutations that occur in every generation. Certainly breeders can sequence to confirm the intended alteration, especially if they are inserting a novel DNA sequence, but how can they determine which of the random nucleotide insertions, substitutions, or deletions are part of the regulatory evaluation, and which are exempt as random mutagenesis? And if there is risk involved with the latter, why are only the random mutations associated with intentional modifications subject to regulatory evaluation? And what if the intended modification is a single base pair deletion – will the regulatory trigger be the absence of that base pair – something that is not there?

Many proposed gene editing applications will result in animals carrying desirable alleles or sequences that originated in other breeds or individuals from within that species (e.g. hornless Holsteins were edited to carry the Celtic polled allele found in breeds like Angus). As such, there will be no novel combination of genetic material or phenotype (other than hornless). The genetic material will also not be altered in a way that could not be achieved by mating or techniques used in traditional breeding and selection. It will just be done with improved precision and minus the linkage drag of conventional introgression.

Does it make sense to regulate hornless dairy calves differently to hornless beef calves carrying the exact same allele at the polled locus? Does it make sense to base regulations on human intent rather than product risk? Regulatory processes should be proportional to risk and consistent across products that have equivalent levels of risk.

There is a need to ensure that the extent of regulatory oversight is proportional to the unique risks, if any, associated with the novel phenotypes, and weighed against the resultant benefits. This question is of course important from the point of view of technology development, innovation and international trade – and, quite frankly, the ability of the animal breeding community to use genome editing at all.

Given there is currently not a single genetically engineered animal containing a heritable rDNA construct being sold for food anywhere in the world (see my BLOG on AquAdvantage salmon), animal breeders are perhaps the group most aware of the chilling impact that regulatory gridlock can have on the deployment of potentially valuable breeding techniques. While regulation to ensure the safety of new technologies is necessary, in a world facing burgeoning animal protein demands, overregulation is an indulgence that global food security can ill afford.

I urge the scientific community – including those not directly impacted by this proposed guidance because animal breeders are a small community – to submit comments to the FDA on this draft revised guidance #187 during the 90-day comment period which closes June 19, 2017. There are several questions posted there asking for scientific evidence demonstrating that there are categories of intentional alterations of genomic DNA in animals that pose low to no significant risk. Centuries of animal breeding and evolution itself would suggest there are many.

There is also a request for nomenclature for the regulatory trigger as outlined in the draft revised guidance. The FDA used the phrase “animals whose genomes have been altered intentionally” to expand their regulatory reach beyond genetically engineered animals containing heritable rDNA constructs (aka drugs), but suggested that other terms that could be used include “genome edited animals,” “intentionally altered animals,” or expanding the term “genetically engineered” to include the deliberate modification of the characteristics of an organism by manipulating its genetic material. They encourage the suggestion of other phrases that are accurate and inclusive. I can think of a couple!

Who does fund university research?

This is a follow-up to my BLOG last week asking “Who should fund university research?” I thought it might be illustrative to examine actual data from my university. Not surprisingly for a large enterprise, UC Davis tracks the sources of all monies coming into the university, and oversees the expenditure of such funds.

There are two basic ways research funding can come into the university – as a formal contract or grant, or as a donation. In the former case, there is some type of grant application or description of the work to be carried out (but not of what the results of the research will be!!!) for which the funding is provided; in the latter case, it is what is called an “unrestricted” donation. This is money that is directed towards an individual professor, program or department with no further specification as to what the money is to be used for. Of course such funding is still managed by the university, and can’t be used for a vacation to Hawaii. Often it is used as seed funding to undertake a professor’s favorite research idea, perhaps one that is a bit too “out there” and risky to secure traditional grant funding in the absence of supporting preliminary data. In that sense it is like a donation to your favorite charity: you donate the money because you like the type of work that charity does, but you cannot directly specify exactly what the charity is to do with the money you donated.

Grants and contracts

These are the monies that really run research programs. Total awards by calendar year at UC Davis are in the ballpark of $750 million (i.e. three-quarters of a billion dollars). That is a lot of money, but UC Davis is a big university with a medical school (which includes a hospital), a veterinary school, and all of the colleges that make up the campus. If we pessimistically (realistically) assume a 10% success rate for public research funding, that means the UC Davis faculty are on average writing $7.5 billion worth of grants each year, and are successfully bringing in one tenth of that. And to reiterate, these funds are used to support graduate students, buy research supplies, perform experiments and advance knowledge. UC Davis is a powerful economic engine for California, generating $8.1 billion in statewide economic activity and supporting 72,000 jobs.

The approximate breakdown for the $786 million received in fiscal year 2014-15 was $427 million (54%) in awards from the federal government (and likely a big chunk of research funding also comes from the state government), $66.1 million (8.4%) in awards from foundations, and $59.4 million (7.6%) in awards from industry sponsors. I think that is an interesting point: UC Davis receives more sponsored research funding from foundations than it does from industry sponsors. The School of Medicine received the largest share of research grants at UC Davis with $264 million (34%), followed by the College of Agricultural and Environmental Sciences at $155 million (20%), and the School of Veterinary Medicine at $114 million (14.5%).

Donations

This pool of monies is more modest than that brought in by grants and contracts. I could only get these data by fiscal year, rather than calendar year, but the total is in the vicinity of $200 million. Now the question that perhaps has been asked most frequently is how much funding is coming from specific companies – specifically those associated with the so-called “agrochemical academic complex”? That all depends upon how you define such industries, but let’s go with the so-called “Big 6”: Monsanto, Syngenta, Bayer, BASF, DuPont/DuPont Pioneer, and Dow.

The following table breaks down total grants and contracts, donations, and the two figures combined, along with how much of that funding (and the percentage of the total) came cumulatively from the “Big 6” in recent years. (The numbers differ slightly from those above due to fiscal versus calendar year accounting.)

Year                  2012                2013                2014                2015
Grants/Contracts      699,728,437         718,934,464         751,864,525         793,797,558
    From “Big 6”      1,407,821 (0.20%)   477,178 (0.07%)     881,856 (0.12%)     746,160 (0.09%)
Donations             132,451,535         149,134,036         165,704,178         184,180,960
    From “Big 6”      768,172 (0.58%)     1,386,079 (0.93%)   858,912 (0.52%)     n/a
TOTAL                 832,179,972         868,068,500         917,568,703         977,978,518
    From “Big 6”      2,175,993 (0.26%)   1,863,257 (0.21%)   1,741,768 (0.19%)   n/a

So in summary, at what is arguably the number one ranked agricultural research university in the world, the proportion of funding coming from the “Big 6” agrochemical companies is approximately $2 million per year, well under one half of one percent of total research funding received by the campus. To put that in perspective, the College of Agricultural and Environmental Sciences alone has 330 faculty members and 1,000 graduate students. Two million dollars is approximately what it takes to fully fund ~35 graduate students for a year.

So what is the money being used for?

Not surprisingly, most of the funding from the “Big 6” was associated with research in plant sciences and entomology. Some went to the medical school because the search for “Bayer” also captured research funding sponsored by “Bayer Healthcare”. A number of the donations were to Cooperative Extension county-based advisors performing field research with various crops. And just for transparency, none of it was directed to my research program (which is not surprising, as I work on animals, not plants!). Some was earmarked for work in specific crops like figs, pistachios, strawberries, rice, onions, woody crops and viticulture. And that is not surprising, because California grows hundreds of specialty crops. Noticeably, none of these crops have commercialized genetically engineered varieties, and their breeding programs are mostly run by public sector scientists. The one thing California does not grow much of is large-acreage corn and soybeans. We do not have the right climate and conditions for these crops, and there are high-value alternative crops that CA farmers choose to grow. As a result, UC Davis does not do much research in these field crops, and the university therefore does not get much industry research funding for work in them.

I would wager that the University of Kentucky, home of the Kentucky Derby, probably has industry funding supporting its equine science program, ’cause they have a huge equine industry in that state. In general, when a university has an important industry in its state, that industry helps to support research at that state’s public university. And in the case of California there is an amazing number of agricultural commodities grown – the fruit and vegetable industry raises a cornucopia of varieties in the state, and UC Davis has renowned brewing and wine making programs. As an example, the brewing science program at UC Davis has received several sizable donations from industry, including the recent $2 million donation from the owners of the local Sierra Nevada Brewing Company. Cheers to science-based beer brewing and wine making!

How does this breakdown compare to other land grant universities?

My colleague Kevin Folta at the University of Florida posted this useful graphic for the Gators.

Funding to University of Florida FY 2015-2016 broken down by funding source

In the case of the University of Florida, the faculty brought in $140 million in sponsored funding in FY 2015-16, and of that 70% was from federal agencies,  15.5% was from foundations, and 3.5% was from corporations and industry.  Kevin makes the observation in his blog regarding agricultural industry funders:

“They are frequently the beneficiaries of increased knowledge in agriculture, as well as the training and education we provide to the next generation of scientists.” I look forward to his next blog piece, where he promises to write about whether industry support of science matters.

So there you have it – or at least a snapshot from two large agricultural universities as to which entities fund universities. By far the biggest source of funding is federal research grants – as might be expected at a public university.

Now I must go and focus my efforts on writing my next federal grant application – which unfortunately has a ~90% probability of not being funded and will likely only ever be read by two grant reviewers. Compare that to this blog, which has a 100% chance of not securing funding for my research program, but will hopefully be of interest to more than two readers.

I would appreciate your comment on a recently published study

The email was simple enough. It was a request from a member of the press asking “I would appreciate your reaction/comments to the recently published study on GMO corn for an article I am putting together on it. Deadline: Wednesday 4 January.”

Just when I thought I was going to get a day off to myself to write up my own research results, in comes the dreaded time-sensitive press request for comments on a recently published paper. Dreaded because to respond properly means I need to sit down and read the whole paper and ensure I have understood the materials and methods, results, and discussion. For me that is a commitment of a couple of hours. And to top things off – it was a paper by Mesnage from France’s infamous Séralini group whose previous works have had numerous flaws. But I made a New Year’s Resolution to be more active in critiquing agricultural science and can’t in good faith renege on that resolution on January 2nd.

The paper’s title, “An integrated multi-omics analysis of the NK603 Roundup-tolerant GM maize reveals metabolism disturbances caused by the transformation process”, suggested the researchers had uncovered some altered metabolic processes caused by the transformation process used to create the NK603 Roundup-tolerant genetically modified (GM) maize line. This event was achieved by direct DNA transformation: microparticle bombardment of plant cells with DNA-coated gold particles, followed by regeneration of plants by tissue culture on selective medium. This transformation process presumably happened last century, as the feed/food approval for this line in the United States occurred in 2000. However, upon reading the abstract, the paper had nothing to do with disturbances caused by the transformation process; rather, it was about whether the product of this transformation event was “substantially equivalent”, based on proteomics and metabolomics evaluation. Strangely, the “conclusiony”-sounding title of the paper therefore had nothing to do with the experimental design or findings discussed in the paper.

According to the results section, the actual “objective of this investigation was to obtain a deeper understanding of the biology of the NK603 GM maize by molecular profiling (proteomics and metabolomics) in order to obtain insights into its substantial equivalence classification.” In plain English – the intent of the paper was to examine both proteins and metabolites found in NK603 Roundup-tolerant GM maize (both treated and untreated with Roundup), and non-GM isogenic lines to determine if the three groups were substantially equivalent using sensitive “-omics” assays.

To perform such an evaluation requires a common agreement as to what substantial equivalence means, and what constitutes an appropriate comparator(s). Unfortunately, no such common understanding exists. According to an OECD publication in 1993, substantial equivalence is a concept which stresses that an assessment of a novel food, in particular one that is genetically modified, should demonstrate that the food is as safe as its traditional counterpart. This has been interpreted to mean that the levels and variation for characteristics in the genetically modified organism must be within the natural range of variation for those characteristics considered in the comparator.

And this brings up the issue of an appropriate comparator. Typically this involves the comparison of key compositional data collected from both the recombinant-DNA crop plant and the isogenic non-GM counterpart, grown under near identical conditions. Ideally, conventional non-GM corn hybrids are also included in analyses to determine the level of natural variation for compositional data in conventional varieties that are considered to be safe for consumption based on a history of safe use.

According to the original studies of the NK603 GM maize variety, compositional analyses were conducted on the key corn tissues, grain and forage, produced in multiple locations (Kansas, Iowa, Illinois, Indiana, and Ohio in 1998, and in trials in Italy and France in 1999). Grain and forage samples were taken from plants of the corn event NK603 and the non-modified control in both years. In the E.U. field trials, reference grain and forage samples also included 19 conventional, commercial hybrids. The NK603 plants were treated with Roundup Ultra herbicide. Fifty-one different compositional components were evaluated.

Not surprisingly, there are protocols for how best to carry out experiments on GM crops that are accepted by regulatory agencies worldwide (OECD 2006; Codex 2009). According to EFSA, for compositional analysis risk assessment, field trials will include: the GM plant under assessment, its conventional counterpart (isogenic non-GM counterpart), and non-GM reference varieties representative of those that would normally be grown in the areas where the field trials are performed. The latter put some figures and context to the natural biological variation in the different plant varieties we commonly consume.

So what did the Mesnage paper in question do? The researchers planted a single replicate of the GM plant under assessment (DKC 2678 Roundup-tolerant NK603) and its conventional counterpart (DKC 2575 – although the exact genetic makeup of this line, and whether it is a true isogenic counterpart, is not well elaborated in the paper) at a single location in two different years. Half of the GM plants each year were treated with Roundup. The corn kernels were then harvested, the proteins and metabolites from the three groups were assayed using proteome and metabolome profiling, and the data from the two years were merged and analyzed. The three groups (the isogenic non-GM counterpart, the GM plant without Roundup treatment, and the GM plant with Roundup treatment) separated into three distinct clusters based on a principal component analysis (PCA).

Integration of metabolome and proteome profiles of the NK603 maize and its near-isogenic counterpart into a multiple co-inertia analysis projection plot.

I draw your attention to a very similar graph (below) in a paper I recently published, which shows a PCA of the transcriptome (genes expressed) from cattle that have been exposed to different viruses and bacteria. Basically, PCA can pull apart patterns of gene expression in different groups of cattle in response to the specific environmental challenges they are facing. The controls can clearly be seen clustering down in the bottom right corner, while the bacterial infections tend to cluster to the right, separately from those infected with viruses, which cluster to the left.

Multidimensional scaling plot of samples based on all genes

That is, if you expose plants or animals to different environmental or disease challenge conditions, they express different genes in response. That is typically why researchers do “-omics” studies: to try to identify which genes/proteins/metabolites respond to different environmental conditions. What such studies do not show is whether any beef that might be derived from these animals would be unsafe to eat – every animal and plant ever eaten is likely unique in terms of its exact protein and metabolite profile, depending upon its unique environmental conditions and stressors.
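To make that concrete, here is a minimal PCA sketch in Python. All the numbers are made up for illustration: three hypothetical groups of samples whose measured profiles differ by a small systematic shift in a handful of features, standing in for different treatments or environments. The point is simply that PCA will separate such groups cleanly, and that this separation by itself says nothing about safety.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_features = 6, 50

# Three hypothetical groups with a systematic shift in only the
# first 5 of 50 features (e.g. different environments/treatments).
groups = []
for shift in (0.0, 3.0, -3.0):
    block = rng.normal(0.0, 1.0, (n_per_group, n_features))
    block[:, :5] += shift
    groups.append(block)
X = np.vstack(groups)                      # 18 samples x 50 features

# PCA via thin SVD on the centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                     # project onto first two PCs

# Group centroids in PC space: they separate clearly, even though
# the shift touches only a few features -- separation alone says
# nothing about whether the samples are safe to eat.
centroids = [scores[i * n_per_group:(i + 1) * n_per_group].mean(axis=0)
             for i in range(3)]
```

Shrink the shift toward zero and the clusters blur together; with enough sensitive features, even trivial differences become visible. That is why separation in a PCA plot is a starting point for interpretation, not an endpoint.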

Unfortunately, there are a number of experimental design problems with the Mesnage et al. (2016) paper that complicate the interpretation of the results, and, just as concerning, there appear to be confounders that further complicate the analyses.

These include:

  • Only a single replicate of each treatment (n=1) at a single location (over two years) is analyzed with no biological replication or randomization of locations to remove site variability.
  • The data from the two cultivations in different years were inexplicably merged prior to analysis, which made it impossible to determine whether results or trends were consistent or reproducible between years.
  • No inclusion of non-GM reference varieties (conventional commercial hybrids) representative of those that would normally be grown in the areas where the field trials are performed, to put some figures and context to the natural biological variation in the composition of non-GM corn comparators.
  • No discussion of correction for multiple comparisons (by chance, roughly one in every 20 comparisons would be expected to be significant at the p<0.05 level). If doing multiple comparisons, it is necessary to apply a multiple-comparison correction.
  • There appears to be evidence of different levels of fungal (Gibberella moniliformis, maize ear and stalk rot fungus) protein contamination between the three groups. See Supplemental Dataset 5, where Tubulin alpha chain OS=Gibberella moniliformis (strain M3125 / FGSC 7600) appears as the protein with the biggest fold change between the control and GM lines. If there were differing levels of fungal infestation among the groups, this would also confound the data.

Others have commented on some of their concerns with this paper including a comprehensive analysis from a number of scientists with expertise in this area. There were also comments from European experts from the science media center. And another discussed the definition and importance of true isogenic lines.

Based on significant differences between proteins and metabolites, including the rather alarmingly named putrescine and cadaverine, which were markedly increased in the GM NK603 corn (N-acetyl-cadaverine (2.9-fold), N-acetylputrescine (1.8-fold), putrescine (2.7-fold) and cadaverine (28-fold)), Mesnage et al. (2016) concluded that NK603 and its isogenic control line are not substantially equivalent, meaning that there were statistical differences between the proteins and metabolites found in the three groups. However, what is not clear is whether the levels and variation for characteristics in the genetically modified organism or the control were within the natural range of variation for those characteristics in corn, nor what the biological significance of the statistical differences is in terms of posing a food safety concern. Differences between the GM variety in the presence and absence of Roundup would presumably be similar to the differences that occur every time a crop is treated with a herbicide, be the plant GM or not.

I could not resist looking up these two metabolites putrescine and cadaverine which seem like they should more appropriately be associated with a decaying animal corpse.  According to Wikipedia, “Putrescine, or tetramethylenediamine, is a foul-smelling organic chemical compound that is related to cadaverine; both are produced by the breakdown of amino acids in living and dead organisms and both are toxic in large doses. The two compounds are largely responsible for the foul odor of putrefying flesh, but also contribute to the odor of such processes as bad breath and bacterial vaginosis. More specifically, cadaverine is a foul-smelling diamine compound produced by the putrefaction of animal tissue.”

So what are these two horrifying compounds doing in corn samples? Enquiring minds needed to know. So, being a good scientist, I googled “Cadaverine in corn”, and lo and behold, up came a peer-reviewed study. Check out its Table 1: mean levels of free bioactive amines in fresh, canned and dried sweet corn (Zea mays).

According to this study on “Bioactive amines in fresh, canned and dried sweet corn, embryo and endosperm and germinated corn”, “Different levels of amines in corn products were reported in the literature. Okamoto et al. (1997) found higher concentrations of putrescine and spermidine in fresh corn. Zoumas-Morse et al. (2007) reported lower spermidine and putrescine levels in fresh and canned corn. The differences observed on the profile and levels of amines may be related to several factors such as cultivars, cultivation practices, water stress, harvest time, grain maturity, types of processing and storage time.” In other words, there is a lot of natural biological variation in the different plant varieties we commonly consume with regard to the amount of amines in corn products, and yet we commonly and safely consume fresh, canned and dried sweet corn. If you really want to get nerdy, there are databases of polyamines in food.

As the multi-omics analysis of the NK603 Roundup-tolerant GM maize paper by Mesnage correctly states, “the vagueness of the term substantial equivalence generates conflict among stakeholders to determine which compositional differences are sufficient to declare a GMO as non-substantially equivalent.” In the absence of knowledge of the natural variation in proteins and metabolites in the common foods we eat, of the level of different proteins and metabolites that would trigger a safe/unsafe determination, and of a testable hypothesis at the outset of an experiment, undisciplined “-omics” studies risk becoming statistical fishing trips.

As someone who works in genomics and knows the tens or even hundreds of thousands of statistical comparisons that are part of genomic analyses, I can attest that there is a real need to understand the statistical methods required for multiple comparisons. If 10,000 comparisons are made at the p<0.05 level, 500 would be expected to be statistically significant by chance alone. The biological relevance of statistical differences is also not always clear, as discussed here. According to the European Food Safety Authority (EFSA) Scientific Committee, good experimental design requires that “the nature and size of biological changes or differences seen in studies that would be considered relevant should be defined before studies are initiated. The size of such changes should be used to design studies with sufficient statistical power to be able to detect effects of such size if they truly occurred.”
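That expectation is easy to simulate. The Python sketch below uses the figures from the text (10,000 tests at p<0.05) and applies the Benjamini-Hochberg procedure, one standard multiple-comparison correction chosen here purely for illustration. It tests pure noise, where there are no real effects by construction, and counts the “significant” results before and after correction:

```python
import numpy as np

rng = np.random.default_rng(42)
n_tests, alpha = 10_000, 0.05

# Under the null hypothesis, p-values are uniform on [0, 1]:
# pure noise, with no real effects anywhere by construction.
pvals = rng.uniform(0.0, 1.0, n_tests)

# Naive counting: ~500 "significant" results expected by chance.
naive_hits = int((pvals < alpha).sum())

# Benjamini-Hochberg step-up procedure: find the largest rank k
# with p_(k) <= alpha * k / n, and reject the k smallest p-values.
order = np.argsort(pvals)
ranked = pvals[order]
thresholds = alpha * np.arange(1, n_tests + 1) / n_tests
passing = np.nonzero(ranked <= thresholds)[0]
bh_hits = 0 if passing.size == 0 else int(passing[-1] + 1)
```

On pure noise the naive count lands near the expected 500, while the corrected count collapses to at or near zero, which is exactly why uncorrected “-omics” comparisons can manufacture hundreds of spurious findings.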

In the first line of the discussion, Mesnage et al. state, “In this report we present the first multi-omics analysis of GM NK603 maize compared to a near isogenic non-GM counterpart”. There are actually two relevant papers on the NK603 line (here and here) that were published in 2016 but which were inexplicably not even cited in the Mesnage publication. The latter paper, entitled “Evaluation of metabolomics profiles of grain from maize hybrids derived from near-isogenic GM positive and negative segregant inbreds demonstrates that observed differences cannot be attributed unequivocally to the GM trait”, compared differences in grain from corn hybrids derived from a series of GM (NK603, herbicide tolerance) inbreds and corresponding negative segregants. The authors concluded: “Results demonstrated that the largest effects on metabolomic variation were associated with different growing locations and the female tester. They further demonstrated that differences observed between GM and non-GM comparators, even in stringent tests utilizing near-isogenic positive and negative segregants, can simply reflect minor genomic differences associated with conventional back-crossing practices.”

Moreover, a 2013 meta-analysis by Ricroch examined data from 60 high-throughput ‘-omics’ comparisons between GE and non-GE crop lines. There are several papers on compositional data in GE versus non-GM corn varieties (here, here, here, here, here, here, here, here). The overwhelming conclusion common to these papers is that natural variation due to varying genetic backgrounds and environmental conditions explained most of the variability among the samples. And yet this nuance is missing from the 2016 Mesnage paper – given the poor experimental design, any factors other than the genetic modification and the treatment with Roundup that could have influenced the results are ignored. This tends to be a common feature of this research group: ignoring standard experimental design protocols such as randomization and biological replication, and cherry-picking the cited literature while ignoring contradictory or preceding studies with dissimilar results, rather than discussing their results in the context of what is known based on the entire weight of evidence in the scientific literature.

Ricroch, in her meta-analysis, summarized that “The ‘-omics’ comparisons revealed that the genetic modification has less impact on plant gene expression and composition than that of conventional plant breeding. Moreover, environmental factors (such as field location, sampling time, or agricultural practices) have a greater impact than transgenesis. None of these ‘-omics’ profiling studies has raised new safety concerns about GE varieties.”

Interestingly, one study showed that transcriptome alteration was greater in mutagenized plants than in transgenic plants. Of course, the random mutations associated with mutation breeding undergo no regulatory evaluation or substantial equivalence assessment prior to commercialization. Variation is the driver of breeding programs, and the reason that varieties like Red Delicious and Golden Delicious apples differ from each other in the first place.

Finally, Mesnage et al. acknowledge funding from “The Sustainable Food Alliance” for their paper. There is no indication as to which groups or interests provide funding for this Alliance. That is not reassuring, and it runs counter to the absolute transparency of all funding sources that is being demanded of public sector researchers working in this field.

At the end of the day, if I have concerns about a paper by a group that has a track record of publishing highly controversial studies, I like to go back to the Nature graphic shown above to see how many red flags are raised. In this case there were a few, most particularly around the experimental design and the omission of references to, and discussion of, the findings of other “-omics” studies, which have consistently shown the high levels of natural variation in the composition of food due to the differing environments experienced by the plants (and animals) we consume.

I know that this is more of a response than any journalist could ever use, but as with most everything in agriculture, there are no simple sound-bite answers. Having said that, I appreciate the press reaching out to seek comment from scientists, and hope that becomes increasingly common in 2017. Although taking the time to respond kinda took the rest of my day. I may have to rethink my New Year’s resolutions if I plan to get any of my own research done this year – I will worry about that tomorrow when I return to work for the year.

Who should fund University research?

A recent article by Danny Hakim on the so-called “agrochemical academic complex” includes a quote, “If you are funded by industry, people are suspicious of your research. If you are not funded by industry, you’re accused of being a tree-hugging greenie activist. There is no scientist who comes out of this unscathed.”

I beg to differ. My research is not funded by industry, and yet to my knowledge I have never been called a “tree-hugging greenie activist.” Quite the opposite – I have been demonized by groups opposed to genetically modified organisms (GMOs) because my publicly-funded research on animal biotechnology and genomics has occasionally published results in line with the weight-of-evidence around the safety of genetically engineered organisms.

After reading the article, I was left with the conclusion that industry (and by this it appears to be any industry associated with the “agrochemical academic complex”, not the activist or NGO industries – the influence of whose funding is noticeably absent from the article) dictates the outcome of the research and public academics are just hapless puppets to be played to produce favorable outcomes for industry.

That is not how my laboratory works. Nor is it how my department operates. Nor my university. Nor in my experience are the public scientists I know willing to trade their hard-won scientific integrity for research funding. Such a move would be career suicide. Publishing incorrect or false data in science sets off a ticking time bomb for retraction when others are unable to repeat these results.

More generally, the article does not seem to understand how academic funding works. And I have found this misunderstanding to be true in interactions with my friends and neighbors too. My university has little control over what I research – it is called “academic freedom”.

As a researcher at UC Davis, I am provided with my salary and a laboratory. The rest is up to me. What I mean by that is if I want to do original research, I need to obtain research funding to conduct that research. Typically this involves writing research grants.

My university has absolutely no say over the topics I choose to research or where I apply for funding. It does, however, have direct control over how I spend the funding I receive, in terms of ensuring that I abide by all university policies and that the grant monies are spent appropriately.

By FAR the biggest expense in my research program is graduate students. At the current time, the annual cost of a UC Davis graduate student is around $26,734 for a 50%-time stipend for 12 months, plus $17,581 for fees and health insurance, for a total of $44,315 annually. So for a 2-year Masters student that is $88,630, and for a 5-year Ph.D. student $221,575 – let’s just call that an even quarter of a million.

And that does not include the University’s “indirect rate”, which currently adds an additional 57% on top of the stipend, so a research grant funding that Ph.D. stipend needs an additional $76,192, for a grand total of around $297,767 for a 5-year student – close to an even $300,000, assuming no tuition hikes. And that only gets you 50% of that student’s time; the other 50% of their time they are doing their course work!
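For anyone who wants to check the arithmetic, here it is in a few lines of Python (all figures are the ones quoted above, and the 57% indirect rate is applied to the stipend portion only, as described):

```python
# Annual cost figures for a UC Davis graduate student, as quoted
# in the post (50%-time stipend, plus fees and health insurance).
stipend = 26_734
fees_insurance = 17_581

annual = stipend + fees_insurance           # 44,315 per year
masters_2yr = 2 * annual                    # 88,630 for a Masters
phd_5yr = 5 * annual                        # 221,575 for a Ph.D.

# Indirect costs: 57% applied to the stipend portion over 5 years.
indirect_rate = 0.57
phd_indirect = indirect_rate * stipend * 5  # ~76,192
phd_total = phd_5yr + phd_indirect          # ~297,767

print(annual, masters_2yr, phd_5yr, round(phd_total))
```

Which is how a "quarter-million-dollar" student becomes a $300,000 line item on a grant budget.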

Each laboratory is effectively a small business within a much larger entity. Its business is to obtain funding to pay student salaries, conduct research, and publish peer-reviewed articles. The University provides the location and facilities to perform that research. And for that, the University taxes a 57% “overhead” rate off the top of the grant award.

If I did not write successful grants to fund graduate students, pay for research supplies, travel to field plots and perform my extension activities then I would not have a research program or anything to publish in the peer-reviewed literature. And that is how I am evaluated – by my peer-reviewed publications and research productivity, not by the amount or source of the research funding I am able to secure. It is the researcher, the so-called “principal investigator”, that drives this enterprise and makes the decisions on what to research and where to obtain grant funding, not the university.

If the university as an entity happens to obtain money from a donor to build a building – for example, the UC Davis Mondavi Center for the Performing Arts, built with a gift from the wine industry – that in no way affects my research funding or what I choose to research, or means that researchers have to perform research that is favorable to the wine industry.

If a researcher chooses to seek industry funding, as many do especially in the information technology, transportation and energy sectors, there are strict guidelines around managing potential conflicts of interest and ability to publish the results of the research. According to UC Davis university policy: “Freedom to publish is a major criterion when determining the appropriateness of a research sponsorship; a contract or grant is normally not acceptable if it limits this freedom. The campus cannot accept grants or contracts that give the sponsoring agency the right to prevent for an unreasonable or unlimited time, the release of the results of work performed.”

It is perhaps not well publicized that an inordinate amount of a university professor’s time is spent writing grants to public agencies, which often have a funding success rate of one grant in ten. As such, nine out of every ten grants are only ever read by one or two reviewers, and then never see the light of day again. This is not an efficient system, but it is the system we have. If a researcher has public funding, their research has already survived pre-project peer review by the grant panel. Ironically, some have even suggested that public funding is also tainted. If funding from both private and public sources is suspect, where does that leave academic laboratories, and who will pay to train the next generation of scientists?

Writing grants is perhaps (actually for sure) the least favorite part of my job, and it is time-consuming. But it is a necessary evil if I am to perform research and fund my graduate students. I have been fortunate to be able to secure public funding for my research mostly from the United States Department of Agriculture (USDA) (thanks NIFA!), but many other researchers quite appropriately work with industry sponsorship in the development and evaluation of new technologies. Industries of all types seek such partnerships with public universities to obtain an impartial evaluation of their technology.

Demonizing industry funding as unilaterally suspect in the absence of wrongdoing fails to take into consideration the checks and balances that are in place at public universities and the importance of public:private partnerships in the development of new technologies, and the unfortunate reality that there is a paucity of public research funding.

Fishy Business

I was updating a chapter I am writing about the regulations around genetically engineered (GE) animals, and both supporters and detractors of GE animals have criticisms of the US regulatory approach.

Criticisms from the biotech industry include:

• the approach is process-triggered rather than being based on the actual, relative risks (if any) associated with the novel phenotype or product as compared to those associated with conventional comparators.
• completing all of the mandatory testing is prohibitively time-consuming and expensive. AquaBounty has expended over $60 million attempting to bring the AquAdvantage® salmon through the regulatory approval process thus far (David Frank, AquaBounty, pers. comm.), and still does not have authorization to market in the US.
• some tests are not based on unique risks (e.g. endogenous allergens), and the absence of a known tolerance level or of data on natural levels of variability means regulatory studies are problematic to design and the results are ambiguous to interpret.
• the timeline is unpredictable.
• the focus is only on risks; no consideration is given to the potential benefits of the novel phenotype or product, nor to the risks associated with existing alternative approaches to genetic modification and breeding, or with regulatory inaction.

The following table outlines the key events in the timeline of the regulatory process for the AquAdvantage® salmon since its genesis over a quarter of a century ago in 1989. No other GE food animal has been approved anywhere in the world.

Year: Event
1989: The founder animal from which the AquAdvantage® line was derived was created by microinjection of the transgene into fertilized eggs of wild Atlantic salmon in Canada.
1992: The AquAdvantage® line was created from the F1 progeny of the EO-1α line.
1995: AquaBounty Technologies established an Investigational New Animal Drug (INAD) file with the Center for Veterinary Medicine (CVM) of the U.S. FDA to pursue the development of AquAdvantage® salmon.
2001: First regulatory study submitted by AquaBounty Technologies to the U.S. FDA for a New Animal Drug Application (NADA).
2008: U.S. FDA approves AquaBounty Canada as a manufacturing site for production of AquAdvantage® salmon eggs.
2009: FDA issues guidance on how GE animals will be regulated. FDA approves the first GE animal pharmaceutical. Final AquAdvantage® regulatory study submitted to the U.S. FDA. U.S. FDA inspects and approves AquaBounty Panamá as an authorized site for the commercial production of AquAdvantage® salmon for purposes of export to the U.S.
2010: September: FDA VMAC meeting on AquAdvantage® salmon.
2011: Political efforts to prevent the FDA from regulating GE salmon, to ban GE salmon, and to delay regulatory approval.
2012: FDA releases “finding of no significant impact” (FONSI) environmental assessment.
2015: November: U.S. FDA approves AquAdvantage® salmon. By this point AquaBounty had expended over $60 million to bring the AquAdvantage® salmon through the regulatory approval process.
2016: January: U.S. FDA issues a ban on the import and sale of GE salmon until the FDA “publishes final labeling guidelines for informing consumers of such content”. The ban was the result of language Alaska Sen. Lisa Murkowski introduced into the 2016 fiscal budget (omnibus) bill, which also authorizes “an independent scientific review” of the effects of GE salmon on wild salmon stocks and on human consumption. Fact check investigated some of her claims about the salmon. March: a coalition of environmental, consumer, commercial and recreational fishing organizations sues the U.S. FDA over its approval of GE salmon. May: Canadian approval of AquAdvantage® salmon for sale in Canada. December: FDA bills AquaBounty $113,000 in “Animal Drug” user fees for their “approved” animal drug product, despite the continued FDA ban on the import and commercial sale of AquAdvantage® fillets.


Criticisms from the activist industry include:

• lack of public participation and transparency – although the entire AquAdvantage® data package was made publicly available.
• no consideration of socio-economic issues, including ethical and moral concerns.
• concern about the impartiality of the data, because the regulatory review studies are conducted by the company seeking approval of the GE animal.
• environmental risk is not given appropriate consideration, as the National Environmental Policy Act is procedural in nature and does not give the FDA the authority to deny an application on environmental grounds.
• lack of mandatory process-based labels on food derived from GE animals, although the passage of the 2016 National Bioengineering Food Disclosure Law may pre-empt that concern.

The AquAdvantage® Atlantic salmon carries an “all fish” growth hormone chimeric gene. The rDNA construct consists of an antifreeze protein gene promoter from ocean pout linked to a chinook salmon growth hormone cDNA. It was envisioned that this construct would raise less public concern because of the “all-fish” nature of the rDNA construct and the fact that it does not include any viral or selection marker sequences. Given that the founder of this transgenic line was generated in 1989 and that the commercial product was first approved in December 2015, it is hard to argue that the mandatory process-based regulation encountered by the GE AquAdvantage® salmon created “an environment of certainty and confidence for researchers, industry and consumers”. It may even have convinced consumers that, since mandatory regulation is required, this fast-growing fish must somehow be more intrinsically risky than conventionally bred fast-growing salmon.

One thing is for certain – no small company or university can afford to try to get a GE animal application through the regulatory process, given the unpredictability of the timeline and costs. The protracted regulatory evaluation of this fast-growing Atlantic salmon, and the uncertainty associated with the process and its timeline, has had a chilling effect on investment in, and development of, GE animals for food applications in the US.

 
