Steve Wing, an epidemiologist at the University of North Carolina, was asked by a third party to review a manuscript by Mangano and Sherman, “Elevated airborne beta levels in Pacific/West Coast US States and trends in hypothyroidism among newborns after the Fukushima nuclear meltdown,” after it had been accepted for publication on January 29, 2013. The version Wing reviewed was not the version ultimately published in the March 2013 issue of the Open Journal of Pediatrics, available at the following link: http://www.scirp.org/journal/PaperInformation.aspx?PaperID=28599.
Wing’s critique, dated February 27, 2013, was sent to the authors. There appears to have been no direct response from them, although the manuscript seems to have been published with some corrections, as some of the issues Wing raised could not be identified in the final published version.
The Open Journal of Pediatrics, which published this study, appears to be a for-profit ‘predatory journal’ without a serious scientific peer-review process. This information may be of interest to some readers.
Wing's critique is published below with his permission. He requested that the details surrounding his critique be mentioned as above.
A related post, "A Letter to the Editor Regarding the Congenital Hypothyroidism Study by Mangano and Sherman" by Alfred Körblein, can be found in the following link.
Comments and questions on “Elevated airborne beta levels in Pacific/West Coast U.S. states and trends in hypothyroidism among newborns after the Fukushima nuclear meltdown” by Joseph J. Mangano MPH MBA, and Janette D. Sherman MD
This article compares the ratios of congenital hypothyroidism (CH) cases between time periods in 2010 and 2011 for five western US states and 36 other US states.
The authors propose that the five western states were more exposed to I-131 fallout from Fukushima than other states, and that, if this impacted CH, the ratios of 2011/2010 CH cases would be elevated for several months after the deposition of fallout in these states compared to the others. The principle of this comparison appears to be logical, as it makes use of both spatial and temporal variation to evaluate the effect of an environmental exposure; however, the data collection and analyses are unclear and internally contradictory.
The introduction includes results of a comparison of CH cases in four counties around the Indian Point reactor to US rates. Two time periods are compared; however, there is no information about whether there was a change in exposure or another reason for choosing these periods. It is stated that Indian Point, from 1970-1993, had the fifth-highest airborne I-131 releases out of 72 US reactors. It is not clear why the authors present emissions data from a period that ended 4 to 13 years before the time of the CH analysis; clearly CH cases from 1997-2007 could not have been exposed to I-131 from 1970-1993. If the authors' interest is in whether nuclear reactor releases cause CH, they should explain why they did not use release estimates from a time period close to the CH data and analyze records around the nuclear facilities with the highest releases rather than the fifth-highest.
The first of two tables labeled “Table 2” presents 77 measurements of I-131 in U.S. precipitation following the Fukushima meltdowns. The URL for the source given in the bibliography did not work, but an I-131 precipitation dataset at another EPA URL provides 157 measurements. Were the omitted values non-detects? If average levels by state are of interest, non-detects should be included in mean values under some assumption, for example, half the detection limit. The authors note that some of the highest measurements came from Florida, which is classified as a “control” state (better described as “lower fallout”) in the later CH analysis, and from Massachusetts, which is omitted from the CH analysis. No rationale is provided for these decisions.
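The substitution approach mentioned above, including non-detects at half the detection limit when computing a mean, can be sketched as follows. All values and the detection limit here are hypothetical illustrations, not actual EPA measurements:

```python
# Substitution method for non-detects: replace each value below the
# detection limit with half the limit (LOD/2) before averaging.
# All numbers below are hypothetical, not actual EPA data.

detection_limit = 2.0  # pCi/L (assumed limit of detection)

# I-131 in precipitation; None marks a non-detect
measurements = [12.0, 5.4, None, 31.0, None, 8.8]

# Mean with non-detects substituted at LOD/2
substituted = [v if v is not None else detection_limit / 2 for v in measurements]
mean_all = sum(substituted) / len(substituted)

# Dropping non-detects instead biases the mean upward
detects_only = [v for v in measurements if v is not None]
mean_detects = sum(detects_only) / len(detects_only)

print(f"mean with non-detects at LOD/2: {mean_all:.2f}")
print(f"mean of detects only:           {mean_detects:.2f}")
```

The point of the sketch is simply that omitting non-detects inflates a state mean, which matters if states are then ranked or grouped by average fallout level.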
I-131 in precipitation is relevant to the milk pathway for iodine uptake, which dominates thyroid dose estimates for U.S. populations. Why are the exposure groups for the CH analysis based on gross beta in air, which would be influenced by beta-emitting gases such as xenon and krypton that were not only present in Fukushima emissions but are also routinely present in emissions from U.S. reactors? In the second Table 2, how are non-detects treated? What was the limit of detection?
The authors state that they requested “monthly numbers of CH cases” in a telephone survey. If birth dates for individual cases were not obtained, how could they be classified according to day of birth in subsequent analyses? In the Results the authors state that data from small states were not available due to confidentiality concerns; how were such concerns handled if individual birth dates were acquired for March cases? Furthermore, several of the omitted states, including New York, are not small. This part of the data collection is unclear, and it is important for anyone interpreting the results to know clearly how the more and less exposed groups were formed and whether missing data relate to exposure.
“State programs were also asked to confirm that there was no change in CH definitions between 2010 and 2011 that would bias any temporal comparison.” Did any respond that they had changed their definition, and if so, which ones? In the next sentence, “intra-state” should be “inter-state.” Counts in many surveillance systems are provisional until some closing date for investigation. Were the figures reported in the phone interviews all final? If not, the counts may differ from those that will be reported in official documents, which could make it impossible to replicate the current analysis with final data.
One strength of the study design is the use of time-windows; therefore, the choice of dates for CH incidence is important. Is it plausible that CH cases on March 17, 2011, could be caused by Fukushima I-131 that arrived on that same day, essentially with no lag? The milk pathway certainly could not be involved in exposures for some time period. One question is whether CH from Fukushima fallout would be prompted by an initial dose or by cumulative doses over days or weeks. In any case, because March 17 (or March 15, as in one row of Table 3) is the earliest possible beginning of the exposed period, the choice of that time for counting exposed cases deserves discussion and justification.
Table 3 presents the main results; however, the labels and counts are confusing. Why use March 15 in the first row rather than March 17? Why are there more cases from March 15 – April 30 than for March 17 – June 30 or March 17 – December 31? The counts in the five western states in the two latter periods sum to the first row, suggesting labeling errors; however, the values for the other states do not sum up in the same way. The p-values in Table 3 do not match those in the Discussion, and it is not sufficiently clear what they refer to or how they were derived.
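One standard way such a p-value could be derived, if the counts are treated as Poisson with comparable births in both years, is an exact binomial test: conditioning on the two-year total, the 2011 count is Binomial(n, 0.5) under the null of no change. A minimal sketch, using entirely hypothetical counts rather than the paper's Table 3 values:

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """Upper tail P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical CH counts, NOT the paper's data
west_2010, west_2011 = 80, 102      # five western ("exposed") states
other_2010, other_2011 = 500, 510   # comparison states

# Under the null of no change between years (and equal births,
# an assumption), the 2011 count given the total is Binomial(n, 0.5)
p_west = binom_sf(west_2011, west_2010 + west_2011)
p_other = binom_sf(other_2011, other_2010 + other_2011)

print(f"one-sided p, western states: {p_west:.3f}")
print(f"one-sided p, other states:   {p_other:.3f}")
```

Whether this, or some other procedure, is what the authors actually did cannot be determined from the manuscript, which is precisely the problem: stating the test and its assumptions would let readers check the reported p-values.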
The authors make a number of good points in this paragraph, which could be used as a basis for improving the manuscript: “There are technical improvements that may be made to the data in this report. One of these is to obtain more precise temporal and geographic data on environmental levels of specific radionuclides in the U.S. after Fukushima, including I-131. Moreover, estimating specific exposures to humans as a consequence of the fallout would also be helpful in any future analyses of health risk. In addition, there are technical changes that may be made to data in this report, such as using a period greater than just 2010 as a baseline; including data on CH cases after 2011; and conversion of trends in cases to rates when official numbers of 2010-2011 live births by state and month become available.”