Base de conhecimento IEC (IEC's Knowledge base)

1 Answer


There is very relevant information on this subject at http://biostat.mc.vanderbilt.edu/wiki/Main/ManuscriptChecklist.

Below is a summary of the material, with topics and some highlighted excerpts. A printable version of the full text is available at this link.

Statistical Problems to Document and to Avoid

Design and Sample Size Problems

Use of an improper effect size

Relying on standardized effect sizes

General Statistical Problems

Inefficient use of continuous variables

Relying on assessment of normality of the data

Inappropriate use of parametric tests

Inappropriate descriptive statistics

Failure to include Confidence intervals

Inappropriate choice of measure of change

Use of change scores in parallel-group designs

Inappropriate analysis of serial data (repeated measures)

Drawing conclusions from large P-values | Absence of Evidence is not Evidence of Absence


Filtering

There are many ways that authors have been seduced into taking results out of context, particularly when reporting the one favorable result out of dozens of attempted analyses. Filtering out (failing to report) the other analyses is scientifically suspect. At the very least, an investigator should disclose that the reported analyses involved filtering of some kind, and she should provide details. The context should be reported (e.g., "Although this study is part of a planned one-year follow-up of gastric safety for Cox-2 inhibitors, here we only report the more favorable short term effects of the drug on gastric side effects."). To preserve type I error, filtering should be formally accounted for, which places the burden on the investigator of undertaking often complex Monte Carlo simulations.

Here is a checklist of various ways of filtering results, all of which should be documented, and in many cases, re-thought:

  • Subsets of enrolled subjects
  • Selection of endpoint
  • Subset of follow-up interval
  • Selection of treatments
  • Selection of predictors
  • Selection of cutpoints for continuous variables
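The type I error inflation that filtering causes can be illustrated with a quick simulation. The sketch below (Python, standard library only; the setup of 10 candidate analyses per study is an assumption for illustration, not from the original text) simulates studies with no true effect, reports only the most favorable of several tests, and counts how often that "best" result is significant:

```python
import math
import random

def z_test_p(n=50):
    """P-value of a two-sample z-test under the null (both groups N(0, 1))."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = math.sqrt(2 / n)  # known unit variance in each group
    z = (sum(a) / n - sum(b) / n) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
trials = 2000
k = 10  # candidate analyses per simulated study (illustrative assumption)
hits = sum(min(z_test_p() for _ in range(k)) < 0.05 for _ in range(trials))
rate = hits / trials
print(f"Type I error when reporting the best of {k} analyses: {rate:.2f}")
# With 10 independent null tests, theory gives 1 - 0.95**10 ≈ 0.40, not 0.05
```

This is the kind of Monte Carlo accounting the paragraph above refers to: the nominal 5% error rate only holds if all attempted analyses are reported or formally adjusted for.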


Missing Data

Multiple Comparison Problems

Multivariable Modeling Problems

Inappropriate linearity assumptions

Inappropriate model specification

Use of stepwise variable selection

Lack of insignificant variables in the final model

Overfitting and lack of model validation

Failure to validate predictive accuracy with full resolution

Use of inappropriate measures of predictive accuracy


Use of Imprecise Language | Glossary

It is important to distinguish rates from probabilities, odds ratios from risk ratios, and various other terms. The word risk usually means the same thing as probability. Here are some common mistakes seen in manuscripts:

  • risk ratio or RR used in place of odds ratio when an odds ratio was computed
  • reduction in risk used in place of reduction in odds; for example an odds ratio of 0.8 could be referred to as a 20% reduction in the odds of an event, but not as a 20% reduction in risk
  • risk ratio used in place of hazard ratio when a Cox proportional hazards model is used; the proper term hazard ratio should be used to describe ratios arising from the Cox model. These are ratios of instantaneous event rates (hazard rates) and not ratios of probabilities.
  • multivariate model used in place of multivariable model; when there is a single response (dependent) variable, the model is univariate. Multivariate is reserved to refer to a model that simultaneously deals with multiple response variables.
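The odds ratio / risk ratio distinction in the first two bullets is easy to check numerically. A minimal sketch, using a hypothetical 2×2 table (the counts are invented for illustration):

```python
# Hypothetical trial: events / total in treated vs. control groups
treated_events, treated_total = 30, 100
control_events, control_total = 50, 100

risk_t = treated_events / treated_total   # 0.30
risk_c = control_events / control_total   # 0.50
risk_ratio = risk_t / risk_c              # 0.60 -> a 40% reduction in risk

odds_t = risk_t / (1 - risk_t)            # 0.30 / 0.70
odds_c = risk_c / (1 - risk_c)            # 0.50 / 0.50
odds_ratio = odds_t / odds_c              # ~0.43 -> a 57% reduction in odds

print(f"risk ratio = {risk_ratio:.2f}, odds ratio = {odds_ratio:.2f}")
```

When the event is common, as here, the two ratios diverge sharply, so calling an odds ratio a "risk ratio" (or describing it as a reduction in risk) materially misstates the result.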

Graphics | Handouts | Advice from the PGF manual (chapter 6)

  • Pie charts are visual disasters
  • Bar charts with error bars are often used to hide the raw data and are therefore often unscientific; for continuous response variables that are skewed, or that have fewer than about 15 observations per category, the raw data should almost always be shown in a research paper.
  • Dot charts are far better than bar charts, because they allow more categories, category names are instantly readable, and error bars can be two-sided without causing an optical illusion that distorts the perception of the length of a bar
  • Directly label categories and lines when possible, to allow the reader to avoid having to read a symbol legend
  • Multi-panel charts (dot charts, line graphs, scatterplots, box plots, CDFs, histograms, etc.) have been shown to be easier to interpret than having multiple symbols, colors, hatching, etc., within one panel
  • Displays that keep continuous variables continuous are preferred

Tables | Examples (see section 4.2)

As stated in Northridge et al (see below), "The text explains the data, while tables display the data. That is, text pertaining to the table reports the main results and points out patterns and anomalies, but avoids replicating the detail of the display." In many cases, it is best to replace tables with graphics.


Ways Medical Journals Could Improve Statistical Reporting

  1. Require that the Methods section includes a detailed and reproducible description of the statistical methods.
  2. Require that the Methods section includes a description of the statistical software used for the analysis and sample size calculations.
  3. Require authors to submit a diskette with their data files as a spreadsheet or statistical software file when submitting manuscripts for publication.
  4. Pay an experienced biostatistician to review every manuscript.
  5. Require exact P-values, reported consistently to 3 decimal places, rather than NS or P<0.05, unless P<0.001 or space does not permit exact P-values, as in a complex table or figure.
  6. Require that the Methods section contains enough detail about how the sample size was calculated so that another statistician could read the report and reproduce the calculations.
  7. Do not allow ambiguous reporting of percentages, such as "The recurrence rate in the control group was 50% and we calculated that the sample size required to detect a 20% reduction would be 93 in each group." Some authors mean 30% (50%-20%=30%) and some mean 40% (20% of 50% is 10%, 50%-10%=40%). Require that the authors clarify this.
  8. Print the Methods section in a font the same size as the rest of the paper.
  9. Require 95% confidence interval for all important results, especially those supporting the conclusions. Require authors to justify the logic of using standard errors.
  10. Identify every statistical test used for every P value. In tables, this can be accomplished with footnotes and in figures the legend can describe the test used.
  11. Enforce some consistency of statistical reporting. Do not allow authors to invent names for statistical methods.
  12. Require that the authors describe who performed the statistical analysis. This is especially important if the analyses were performed by the biostatistics section of a pharmaceutical company.

Useful Articles and Web Sites with Statistical Guidance for Authors

Topic revision: 12 May 2018, FrankHarrell
