Reproducible Research Week 1

Replication

The ultimate standard for strengthening scientific evidence is replication of findings by conducting studies with independent investigators, data, analytic methods, laboratories, and instruments

Replication is particularly important in studies that can impact broad policy or regulatory decisions

What's wrong with replication?

Some studies cannot be replicated

Reproducible Research: Make analytic data and code available so that others may reproduce findings

Reproducibility bridges the gap between full replication (the gold standard) and doing nothing.

Why do we need reproducible research?

New technologies increasing data collection throughput; data are more complex and extremely high dimensional

Existing databases can be merged into new "megadatabases"

Computing power is greatly increased, allowing more sophisticated analyses

For every field "X" there is a field "Computational X"

Research Pipeline

Measured Data -> Analytic Data -> Computational Results -> Figures/Tables/Numeric Summaries -> Article text

Data/metadata used to develop the test should be made publicly available

The computer code and fully specified computational procedures used for development of the candidate omics-based test should be made sustainably available

"Ideally, the computer code that is released will encompass all of the steps of computational analysis, including all data preprocessing steps. All aspects of the analysis needs to be transparently reported" -- IOM Report

What do we need for reproducible research?

  1. Analytic data are available
  2. Analytic code is available
  3. Documentation of the code and data
  4. Standard means of distribution

Who is the audience for reproducible research?

Authors: want to make their research reproducible, and want tools that make this easier (or at least not much harder)

Readers: want to reproduce (and perhaps expand upon) interesting findings, and want tools that make this easier

Challenges for reproducible research

Authors must put in considerable effort to make data and code available; readers must download everything individually and piece together which data go with which code; and there are few tools to help either side.

What happens in reality

Authors: just put stuff on the web, or submit supplementary materials to journals

Readers: just download the data and (try to) figure it out, piecing the software together and running it themselves

Literate (Statistical) Programming

An article is a stream of text and code

Analysis code is divided into text and code "chunks"

Each code chunk loads data and computes results

Presentation code formats results (tables, figures, etc.)

Article text explains what is going on

Literate programs can be woven to produce human-readable documents and tangled to produce machine-readable code

Literate programming is a general concept that requires

  1. A documentation language (human readable)
  2. A programming language (machine readable)

knitr is an R package that supports a variety of documentation languages, such as LaTeX, Markdown, and HTML
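
As a minimal sketch, an R Markdown source file interleaves explanatory text with code chunks like this (the chunk label and variable are made up for illustration):

    This article text explains what is going on.

    ```{r simulate-data}
    x <- rnorm(100)   # code chunk: create (or load) the data and compute results
    mean(x)
    ```

Weaving ("knitting") this source produces a human-readable report with the results filled in; tangling the same source extracts just the R code.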

Quick summary so far

Reproducible research is important as a minimum standard, particularly for studies that are difficult to replicate

Infrastructure is needed for creating and distributing reproducible documents, beyond what is currently available

There is a growing number of tools for creating reproducible documents

Golden Rule of Reproducibility: Script Everything
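
For example, even a small cleaning step belongs in a script so the whole analysis can be rerun from the top; a minimal sketch (the file and column names are hypothetical):

    # clean_data.R -- scripted so the step can be rerun, never done by hand
    raw <- read.csv("data/raw.csv")              # hypothetical raw data file
    raw$income[raw$income < 0] <- NA             # recode impossible values as missing
    write.csv(raw, "data/processed.csv", row.names = FALSE)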

Steps in a Data Analysis

  1. Define the question
  2. Define the ideal data set
  3. Determine what data you can access
  4. Obtain the data
  5. Clean the data
  6. Exploratory data analysis
  7. Statistical prediction/modeling
  8. Interpret results
  9. Challenge results
  10. Synthesize/write up results
  11. Create reproducible code

"Ask yourselves, what problem have you solved, ever, that was worth solving, where you knew all of the given information in advance? Where you didn't have a surplus of information and have to filter it out, or you had insufficient information and have to go find some?" -- Dan Myer

Defining a question is the most powerful dimension reduction tool you can ever employ.

An Example for #1

Start with a general question

Can I automatically detect which emails are SPAM and which are not?

Make it concrete

Can I use quantitative characteristics of emails to classify them as SPAM?

Define the ideal data set

The data set may depend on your goal

Determine what data you can access

Sometimes you can find data free on the web

Other times you may need to buy the data

Be sure to respect the terms of use

If the data don't exist, you may need to generate them yourself.

Obtain the data

Try to obtain the raw data

Be sure to reference the source

Polite emails go a long way

If you load the data from an Internet source, record the URL and time accessed
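
A minimal sketch of doing this in R (the URL is a placeholder):

    fileUrl <- "https://example.com/data.csv"          # placeholder -- record the real URL
    download.file(fileUrl, destfile = "data/raw.csv")
    dateDownloaded <- date()                           # record when the data were accessed
    dateDownloaded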

Clean the data

Raw data often needs to be processed

If it is pre-processed, make sure you understand how

Understand the source of the data (census, sample, convenience sample, etc)

May need reformatting or subsampling -- record these steps (see the sketch at the end of this section)

Determine if the data are good enough -- If not, quit or change data
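
As a sketch of a recorded subsampling step, here is the reproducible training/test split used in the lecture's spam example (the spam data come from the kernlab package):

    library(kernlab)
    data(spam)
    set.seed(3435)                                        # record the seed so the split is reproducible
    trainIndicator <- rbinom(4601, size = 1, prob = 0.5)  # 4601 emails in the spam data
    trainSpam <- spam[trainIndicator == 1, ]
    testSpam <- spam[trainIndicator == 0, ]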

Exploratory Data Analysis

Look at summaries of the data

Check for missing data

-> Why is there missing data?

Look for outliers

Create exploratory plots

Perform exploratory analyses such as clustering

If the points in a plot are hard to see because they are all bunched up, consider taking the log base 10 of an axis:

plot(log10(trainSpam$capitalAve + 1) ~ trainSpam$type)
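
Continuing the spam example, the other exploratory steps above might look like this (a sketch, assuming the trainSpam split from earlier):

    summary(trainSpam)                                          # numeric summaries of each variable
    sum(is.na(trainSpam))                                       # check for missing data
    hCluster <- hclust(dist(t(log10(trainSpam[, 1:55] + 1))))   # cluster the predictor variables on a log scale
    plot(hCluster)                                              # exploratory dendrogram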

Statistical prediction/modeling

Should be informed by the results of your exploratory analysis

Exact methods depend on the question of interest

Transformations/processing should be accounted for when necessary

Measures of uncertainty should be reported.
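
A minimal sketch of the kind of model the lecture example ends with -- a logistic regression on the dollar-sign frequency -- assuming the trainSpam/testSpam split from earlier:

    glmFit <- glm(type ~ charDollar, family = "binomial", data = trainSpam)
    predProb <- predict(glmFit, newdata = testSpam, type = "response")  # predicted probability of SPAM
    predType <- ifelse(predProb > 0.5, "spam", "nonspam")
    mean(predType == testSpam$type)                                     # test set accuracy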

Interpret Results

Use the appropriate language

Give an explanation

Interpret Coefficients

Interpret measures of uncertainty

Challenge Results

Challenge all steps: the question, the data source, the processing, the analysis, and the conclusions

Challenge measures of uncertainty

Challenge choices of terms to include in models

Think of potential alternative analyses

Synthesize/Write-up Results

Lead with the question

Summarize the analyses into the story

Don't include every analysis; include an analysis only if it is needed for the story or to address a challenge

In the lecture example...

Lead with the question

  Can I use quantitative characteristics of the emails to classify them as SPAM?

Describe the approach

  Collected data from UCI -> created training/test sets

  Explored relationships

  Chose a logistic regression model on the training set by cross-validation

  Applied the model to the test set: 78% test set accuracy

Interpret results

  The number of dollar signs seems reasonable, e.g. "Make more money with Viagra $ $ $ $"

Challenge Results

  78% isn't that great

  Could use more variables

  Why use logistic regression?

Data Analysis Files

Data

Figures

R Code

Text

Raw Data

Should be stored in the analysis folder

If accessed from the web, include URL, description, and date accessed in README

Processed Data

Processed data should be named so it is easy to see which script generated the data

The mapping from processing script to processed data should be recorded in the README

Processed data should be tidy

Exploratory Figures

Figures made during the course of your analysis, not necessarily part of your final report

They do not need to be "pretty"

Final Figures

Usually a small subset of the original figures

Axes/Colors set to make the figure clear

Possibly multiple panels

Raw Scripts

May be less commented (but comments help you!)

May be multiple versions

May include analyses that are later discarded

Final Scripts

Clearly commented

Include processing details

Only analyses that appear in the final write-up

R Markdown Files

R Markdown files can be used to generate reproducible reports

Text and R code are integrated

Very easy to create in RStudio
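
For example, a report can be rendered from the R console (the file name is hypothetical):

    library(rmarkdown)
    render("analysis.Rmd")   # runs the R chunks and produces the report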

Readme Files

Not necessary if you use R Markdown

Should contain step-by-step instructions for analysis

Text of the document

It should contain a title, introduction (motivation), methods (statistics you used), results (including measures of uncertainty), and conclusions (including potential problems)

It should tell a story

It should not include every analysis you performed

References should be included for statistical methods