This is an informal and hopefully intuitive introduction to the nature of correlations; please consult textbooks and review articles for detailed study. The project can be downloaded from GitHub, e.g. by cloning it directly with git:
git clone https://github.com/bioinformatics-leibniz-hki/correlation-workshop
It is highly recommended to run the analysis in an isolated and reproducible environment. This provides a fixed set of software versions that can be run on any computer or server without re-installation. We will use Docker for this: make sure docker is available. We can create and start a Docker container providing RStudio Server with all required tools using GNU make:
cd correlation-workshop
make
The prompt will display the URL at which the environment can be accessed. The username is rstudio with password password.
Correlation is a way to quantify the relationship between two variables. This relationship is monotonic and often even linear. We start by looking at a dataset from the U.S. Census Bureau and the National Science Foundation relating the number of computer science doctorates to the revenue of arcade games in billion USD (“Spurious Correlations,” n.d.).
First, we load the data using functions from the R tidyverse packages:
library(tidyverse)
arcades <- read_csv("data/arcades.csv")
arcades %>% summary()
year doctorates revenue
Min. :2000 Min. : 809.0 Min. :1.176
1st Qu.:2002 1st Qu.: 862.5 1st Qu.:1.247
Median :2004 Median :1038.5 Median :1.371
Mean :2004 Mean :1195.1 Mean :1.442
3rd Qu.:2007 3rd Qu.:1571.5 3rd Qu.:1.641
Max. :2009 Max. :1787.0 Max. :1.803
Let’s plot one variable against the other and do a linear regression:
arcades %>%
ggplot(aes(doctorates, revenue)) +
stat_smooth(method = "lm") +
geom_point() +
labs(
x = "Computer science doctorates",
y = "Arcade revenue (billion USD)"
)
Covariance is a simple statistical measure quantifying how much one variable co-varies with another. So why not just use covariance?
cov(arcades$doctorates, arcades$revenue)
[1] 91.11483
The result is \(91.11\ \text{doctorates} \times \text{bn USD}\). This value is hard to interpret: Is it high or low? What does the unit mean? Correlation is just normalized covariance: it has no unit and always lies in the range \([-1,1]\).
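We can check this normalization directly: dividing the covariance by the product of both standard deviations, \(r = \frac{\mathrm{cov}(x, y)}{\sigma_x \sigma_y}\), reproduces the Pearson coefficient reported by cor.test below:

# correlation is covariance normalized by both standard deviations
cov(arcades$doctorates, arcades$revenue) /
  (sd(arcades$doctorates) * sd(arcades$revenue))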
The higher the number of computer science doctorates, the higher the revenue in the arcade gaming market! Correlation values always range between -1 and 1. The value is negative if one variable increases while the other decreases, and positive if both variables increase or decrease together. The value is zero if there is no monotonic relationship.
Let’s calculate the linear Pearson’s product-moment correlation:
cor.test(~ doctorates + revenue, data = arcades, method = "pearson")
Pearson's product-moment correlation
data: doctorates and revenue
t = 16.182, df = 8, p-value = 2.138e-07
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.935915 0.996586
sample estimates:
cor
0.9850653
The correlation coefficient is estimated to be \(0.985\), indicating a strong positive relationship, which fits the plot. Most basic statistical tests, including those for correlations, are actually nothing other than linear models (Lindeløv, n.d.): the estimated slope of the fitted line is the strength of the correlation. We have to normalize our data using z-scaling (\(z(x)=\frac{x-\mu}{\sigma}\)) to ensure that the estimated slope is independent of the unit and lies between -1 and 1. This gives us values with a mean of 0 and a variance of 1.
arcades_scaled <-
arcades %>%
mutate(
doctorates = doctorates %>% scale(),
revenue = revenue %>% scale()
)
arcades_scaled %>%
pivot_longer(c(doctorates, revenue)) %>%
group_by(name) %>%
summarise(mean = mean(value), variance = var(value))
# A tibble: 2 x 3
name mean variance[,1]
<chr> <dbl> <dbl>
1 doctorates 2.41e-16 1.
2 revenue -1.85e-16 1
Now we can do a linear regression of one variable on the other to get the same correlation coefficient. We ignore the intercept here because it tells us nothing about the relationship between the two variables and should be almost 0 due to the scaling:
lm(formula = doctorates ~ 1 + revenue, data = arcades_scaled) %>% summary()
Call:
lm(formula = doctorates ~ 1 + revenue, data = arcades_scaled)
Residuals:
Min 1Q Median 3Q Max
-0.27327 -0.12554 -0.00275 0.12641 0.29887
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.565e-16 5.775e-02 0.00 1
revenue 9.851e-01 6.088e-02 16.18 2.14e-07 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.1826 on 8 degrees of freedom
Multiple R-squared: 0.9704, Adjusted R-squared: 0.9666
F-statistic: 261.8 on 1 and 8 DF, p-value: 2.138e-07
Often, we do not require the relationship to be linear and proportional. This is the case, for instance, if we want to correlate the abundance of a species with time in an exponentially growing setting. We no longer care about the magnitude of the slope, only about its direction. We can remove this effect by ranking the data and fitting the linear model again:
scale_ranks <- . %>% rank() %>% scale()
arcades_ranked <-
arcades %>%
mutate(
doctorates = doctorates %>% scale_ranks(),
revenue = revenue %>% scale_ranks()
)
arcades_ranked %>%
ggplot(aes(doctorates, revenue)) +
stat_smooth(method = "lm") +
geom_point() +
coord_fixed() +
labs(
x = "Computer science doctorates (rank)",
y = "Arcade revenue (rank)"
)
lm(formula = doctorates ~ 1 + revenue, data = arcades_ranked) %>% summary()
Call:
lm(formula = doctorates ~ 1 + revenue, data = arcades_ranked)
Residuals:
Min 1Q Median 3Q Max
-1.03290 -0.00701 0.08407 0.22520 0.40035
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.815e-16 1.352e-01 0.000 1.000000
revenue 9.152e-01 1.425e-01 6.421 0.000204 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 0.4276 on 8 degrees of freedom
Multiple R-squared: 0.8375, Adjusted R-squared: 0.8172
F-statistic: 41.23 on 1 and 8 DF, p-value: 0.0002045
The test is called Spearman’s rank correlation:
cor.test(~ doctorates + revenue, data = arcades, method = "spearman")
Spearman's rank correlation rho
data: doctorates and revenue
S = 14, p-value = 0.0004667
alternative hypothesis: true rho is not equal to 0
sample estimates:
rho
0.9151515
Note that the linear correlation coefficient is higher, indicating that this relationship is indeed linear: \(r_{pearson} = 0.985 > r_{spearman} = 0.915\)
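A quick sanity check of this distinction (exp is used here only as an arbitrary strictly monotone transformation): Spearman's rank correlation is invariant under monotone transformations, while Pearson's is not:

# ranks are unchanged by exp(), so Spearman's rho stays the same
cor(arcades$doctorates, exp(arcades$revenue), method = "spearman")
# the linear relationship is distorted, so Pearson's r changes
cor(arcades$doctorates, exp(arcades$revenue), method = "pearson")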
One might ask at this point why there is a relationship between the number of computer science doctorates and the arcade revenue at all. This question is entirely valid, and the example dataset was chosen on purpose: to illustrate the ubiquitous phenomenon called spurious correlation. Pearson himself already noted this potential for false-positive discoveries (Pearson 1897). It arises especially when a third, confounding variable influences both variables in the same way. Here, the relationship is obviously just a coincidence, and the confounding variable is time itself.
To mitigate this issue, we control for this third variable by regressing both variables against the year. The residuals of these linear models are the part that cannot be explained by time, so we run the correlation test on the residuals:
doctorates_residuals <- lm(doctorates ~ year, data = arcades_scaled) %>% residuals()
revenue_residuals <- lm(revenue ~ year, data = arcades_scaled) %>% residuals()
cor.test(~ doctorates_residuals + revenue_residuals, method = "pearson")
Pearson's product-moment correlation
data: doctorates_residuals and revenue_residuals
t = 6.2546, df = 8, p-value = 0.0002445
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
0.6604151 0.9790925
sample estimates:
cor
0.9111654
The linear Pearson correlation is now indeed lower, indicating that time is a confounding variable and the relationship is just a coincidence. This process of running the test only on a fraction of the total variance (namely the residuals) is called partial correlation. The function ppcor::pcor.test does this directly:
library(ppcor)
pcor.test(
x = arcades_scaled$doctorates,
y = arcades_scaled$revenue,
z = arcades_scaled$year,
method = "pearson"
)
estimate p.value statistic n gp Method
1 0.9111654 0.0006301684 5.850676 10 1 pearson
Sometimes, samples are not independent of each other but grouped. Examples are repeated measurements at multiple time points from the same subject, or subjects grouped into different cohorts in a meta-study. This violates a fundamental assumption of many statistical tests and can even lead to opposite conclusions, a phenomenon known as Simpson's paradox. Here we use the R package correlation from the easystats suite to simulate a relationship between the variables V1 and V2 that has a positive correlation within every group but a negative one across all pooled samples:
library(correlation)
data <-
simulate_simpson(n = 100, r = 0.8, groups = 5) %>%
as_tibble()
data %>%
ggplot(aes(x = V1, y = V2)) +
geom_point(aes(colour = Group)) +
geom_smooth(aes(colour = Group), method = "lm", se = FALSE) +
geom_smooth(colour = "black", method = "lm", se = FALSE)
The group has a strong contradicting effect on the correlation between V1 and V2. Thus, we have to control for the grouping variable (subject, cohort, etc.). This can be done using multilevel correlation, which is analogous to a linear mixed model (LMM) with the grouping variable treated as a random effect. Unlike in partial correlation, the grouping factor here is treated as a random and not as a fixed effect. We treat a variable as a random effect if it is a factor whose levels are not exhaustively known: for example, there are many other subjects the model has never seen, so we have no estimate of their slopes as a fixed-effect prediction would require. This is a typical use case for random effects, in which the model parameters are assumed to be drawn from a distribution instead. Continuously scaled covariates like time or concentration, however, have to be modeled as fixed effects using partial correlation.
Let's use the function correlation from the correlation package to get the Pearson correlation with and without the multilevel structure, in which the samples are nested inside the groups treated as a random effect:
correlation(data)
Parameter1 | Parameter2 | r | 95% CI | t | df | p | Method | n_Obs
-----------------------------------------------------------------------------------------
V1 | V2 | -0.40 | [-0.47, -0.33] | -9.86 | 498 | < .001 | Pearson | 500
correlation(data, multilevel = TRUE)
Parameter1 | Parameter2 | r | CI | t | df | p | Method | n_Obs
--------------------------------------------------------------------------------------
V1 | V2 | 0.80 | [0.77, 0.83] | 29.72 | 498 | < .001 | Pearson | 500
The effect size of \(r=0.8\) was successfully recovered in the multilevel model.
Here is the analogous linear mixed model, with a similar result:
library(lme4)
lmer(V1 ~ V2 * (1|Group), data = data)
Linear mixed model fit by REML ['lmerMod']
Formula: V1 ~ V2 * (1 | Group)
Data: data
REML criterion at convergence: 948.7707
Random effects:
Groups Name Std.Dev.
Group (Intercept) 2.8432
Residual 0.6006
Number of obs: 500, groups: Group, 5
Fixed Effects:
(Intercept) V2
5.3951 0.7984
It is also possible to have even more levels of sample nesting: if we have longitudinal data from the same subjects in different hospitals, we can also write V1 ~ V2 * (hospital_id|subject_id) to get the correlation between V1 and V2 controlled for effects related to subjects and hospitals.
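Here is a minimal sketch of such a nested model with simulated data; the columns hospital_id and subject_id are hypothetical, and lme4's conventional nesting shorthand (1 | hospital_id/subject_id) is used for the random intercepts:

library(lme4)

set.seed(1)
# hypothetical longitudinal data: 4 hospitals, 5 subjects per hospital,
# 10 repeated measurements per subject
longitudinal <-
  expand.grid(
    hospital_id = factor(1:4),
    subject_id = factor(1:5),
    measurement = 1:10
  ) %>%
  as_tibble() %>%
  mutate(
    V2 = rnorm(n()),
    # V1 follows V2 plus hospital- and subject-specific baseline shifts
    V1 = 0.8 * V2 +
      as.numeric(hospital_id) * 2 +
      as.numeric(interaction(hospital_id, subject_id)) * 0.5 +
      rnorm(n(), sd = 0.3)
  )

# (1 | hospital_id / subject_id) expands to a random intercept per
# hospital plus one per subject nested within hospital
lmer(V1 ~ V2 + (1 | hospital_id / subject_id), data = longitudinal)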
An application of partial, multilevel, and SparCC correlation is given in (Zhou et al. 2019). There, age and BMI are modeled as fixed effects for partial correlation, and the subject id as a random effect for multilevel correlation between microbial abundances. Furthermore, the data was normalized using centered log-ratios (clr) to further address the compositionality of relative count data obtained by NGS machines.
There are relationships between two variables that are statistically dependent but cannot be discovered using correlation tests: correlation implies statistical dependency, but not vice versa. This is especially the case if the two variables are linked not monotonically but cyclically. Let's simulate the abundances of a predator and a prey species to estimate the relationship of hunting:
hunting <-
tibble(time = seq(from = 0, to = 3, length.out = 50)) %>%
mutate(
predator = sin(time),
prey = 1 + cos(time - 0.5)
)
hunting %>%
pivot_longer(
cols = c(predator, prey),
names_to = "animal",
values_to = "abundance"
) %>%
ggplot(aes(time, abundance, color = animal)) +
geom_point()
The Pearson correlation is almost zero here:
cor.test(~ predator + prey, data = hunting, method = "pearson")
Pearson's product-moment correlation
data: predator and prey
t = 0.68411, df = 48, p-value = 0.4972
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.1851467 0.3665860
sample estimates:
cor
0.09826513
This is because the relationship between the predator and the prey is not linear:
hunting %>%
ggplot(aes(predator, prey)) +
geom_point()
But we can still do a correlation test if we just look at a small time window in which the relationship between the predator and the prey can be assumed to be linear:
hunting_local <- hunting %>% filter(time > 1 & time < 2)
hunting_local %>%
pivot_longer(
cols = c(predator, prey),
names_to = "animal",
values_to = "abundance"
) %>%
ggplot(aes(time, abundance, color = animal)) +
geom_point() +
stat_smooth(method = "lm")
cor.test(~ predator + prey, data = hunting_local, method = "pearson")
Pearson's product-moment correlation
data: predator and prey
t = -1.7183, df = 14, p-value = 0.1078
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
-0.75653210 0.09881308
sample estimates:
cor
-0.4173342
Now we can observe the negative local correlation between the predator and the prey. An elaborated version of this idea is called local similarity analysis (LSA) (Ruan et al. 2006).
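A naive sketch of this idea: slide a fixed window (here 16 samples, roughly the width used above) over the time series and compute Pearson's r within each window:

# Pearson's r in overlapping windows of 16 consecutive time points
window_size <- 16
local_correlations <- map_dbl(
  seq_len(nrow(hunting) - window_size + 1),
  ~ cor(
    hunting$predator[.x:(.x + window_size - 1)],
    hunting$prey[.x:(.x + window_size - 1)]
  )
)
summary(local_correlations)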
There are plenty of other ways to define a correlation metric between two continuously scaled variables. For example, Kendall's rank correlation is a more robust and conservative alternative to Spearman's rank correlation; overviews of further correlation types are available online.
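Kendall's rank correlation is available through the same cor.test interface:

cor.test(~ doctorates + revenue, data = arcades, method = "kendall")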
Let's continue with a more realistic dataset: healthy subjects (Nielsen et al. 2014) and patients suffering from colorectal cancer (CRC) (Zeller et al. 2014). It contains whole-metagenome sequencing data processed with HUMAnN2 to obtain abundance profiles of bacterial species and pathways. The data was retrieved using the R package curatedMetagenomicData (Pasolli et al. 2017).
features <- read_csv("data/features.csv")
samples <- read_csv("data/samples.csv")
lineages <- read_csv("data/lineages.csv")
samples %>% mutate_if(is.character, as.factor) %>% summary()
sample_id sample_alias project body_site
S01 : 1 CCIS12656533ST-4-0 : 1 NielsenHB_2014:16 stool:40
S02 : 1 CCIS16383318ST-4-0 : 1 ZellerG_2014 :24
S03 : 1 CCIS29688262ST-20-0: 1
S04 : 1 CCIS41288781ST-4-0 : 1
S05 : 1 CCIS48725289ST-4-0 : 1
S06 : 1 CCIS53355328ST-4-0 : 1
(Other):34 (Other) :34
disease age age_category gender country
CRC :20 Min. :29.00 adult :29 female:10 DEU:11
healthy:20 1st Qu.:50.75 senior:11 male :14 DNK:11
Median :59.00 NA's :16 ESP: 5
Mean :58.98 FRA:13
3rd Qu.:67.25
Max. :90.00
The R package ComplexHeatmap can be used to create a heatmap of the abundance profiles. Samples and species with a high Spearman correlation are displayed in neighboring rows and columns, respectively.
library(ComplexHeatmap)
features_mat <-
features %>%
pivot_longer(-sample_id, names_to = "feature", values_to = "abundance") %>%
left_join(lineages) %>%
group_by(feature) %>%
filter(kingdom == "Bacteria" & var(abundance) > 0) %>%
group_by(sample_id) %>%
mutate(abundance = abundance / sum(abundance)) %>%
# long table to wide matrix
ungroup() %>%
select(sample_id, feature, abundance) %>%
pivot_wider(names_from = feature, values_from = abundance) %>%
select(-sample_id) %>%
as.matrix()
Heatmap(
matrix = features_mat,
name = "Abundance (TSS)",
column_title = "Species",
clustering_distance_columns = "spearman",
show_column_names = FALSE,
row_title = "Sample",
clustering_distance_rows = "spearman",
right_annotation = HeatmapAnnotation(
which = "row",
Disease = samples$disease,
Age = samples$age,
Country = samples$country,
Gender = samples$gender
)
)
Count data in ecology often has many zeros: the majority of taxa are absent from an average sample and present in only a few samples. This zero-inflation violates the assumptions of the linear model, whose standard inference relies on normally distributed residuals.
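We can illustrate this with the abundance matrix from above by computing the fraction of zero entries per taxon:

# fraction of zero entries per taxon (column) of the abundance matrix
features_mat %>%
  apply(2, function(x) mean(x == 0)) %>%
  summary()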
Methods like SparCC take this into account (Friedman and Alm 2012). We use the function correlate_spiec_easi_sparcc from the source directory of this repository, which is based on the implementation in SpiecEasi (Layeghifard, Hwang, and Guttman 2018). It returns an object of class tidygraph with one table each for edges and nodes.
# load libraries and functions
source("src/setup.R")
features_sparcc <-
lineages %>%
filter(kingdom == "Bacteria") %>%
pull(feature) %>%
head(20)
mat_sparcc <-
features %>%
pivot_longer(-sample_id, names_to = "feature", values_to = "abundance") %>%
left_join(lineages) %>%
filter(feature %in% features_sparcc) %>%
# long table to wide matrix
select(sample_id, feature, abundance) %>%
pivot_wider(names_from = feature, values_from = abundance, values_fill = list(abundance = 0)) %>%
select(-sample_id) %>%
as.matrix()
res_sparcc <- correlate_spiec_easi_sparcc(
data = mat_sparcc,
nodes = lineages,
iterations = 10,
bootstraps = 20
)
res_sparcc
# A tbl_graph: 15 nodes and 15 edges
#
# An undirected simple graph with 2 components
#
# Edge Data: 15 x 31 (active)
from to estimate p.value q.value from_feature from_feature_ty…
<int> <int> <dbl> <dbl> <dbl> <chr> <chr>
1 6 11 0.277 0 0 Alistipes p… taxon
2 11 14 -0.114 0 0 Bacteroides… taxon
3 11 13 0.390 0 0 Bacteroides… taxon
4 5 10 -0.190 0 0 Bacteroides… taxon
5 8 10 -0.121 0 0 Bifidobacte… taxon
6 4 7 -0.106 0 0 Eubacterium… taxon
# … with 9 more rows, and 24 more variables: from_kingdom <chr>,
# from_phylum <chr>, from_order <chr>, from_class <chr>, from_family <chr>,
# from_genus <chr>, from_species <chr>, from_degree <dbl>,
# from_component <int>, from_closeness <dbl>, from_betweeness <dbl>,
# to_feature <chr>, to_feature_type <chr>, to_kingdom <chr>, to_phylum <chr>,
# to_order <chr>, to_class <chr>, to_family <chr>, to_genus <chr>,
# to_species <chr>, to_degree <dbl>, to_component <int>, to_closeness <dbl>,
# to_betweeness <dbl>
#
# Node Data: 15 x 13
feature feature_type kingdom phylum order class family genus species degree
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <dbl>
1 Prevot… taxon Bacter… Bacte… Bact… Bact… Prevo… Prev… Prevot… 2
2 Subdol… taxon Bacter… Firmi… Clos… Clos… Rumin… Subd… Subdol… 1
3 Faecal… taxon Bacter… Firmi… Clos… Clos… Rumin… Faec… Faecal… 1
# … with 12 more rows, and 3 more variables: component <int>, closeness <dbl>,
# betweeness <dbl>
Let's plot the network using functions from the package ggraph and correlate_plot from this repository:
library(ggraph)
res_sparcc %>%
correlate_plot() +
geom_node_point(aes(color = phylum), size = 10) +
geom_node_text(aes(label = feature))
Sequencing data is compositional: instead of counting species in an absolute way, one only obtains the proportions of reads assigned to each taxon. This can have severe implications (Gloor et al. 2017). Imagine one growing species and many constant ones in a longitudinal dataset: because the library size (the total number of reads sequenced) is fixed, the relative abundances of all other taxa decrease. Since this happens to all other taxa in the same way, it introduces many false-positive correlations, as the simulation below illustrates.
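A minimal simulation of this artifact (the taxa grower, constant_a, and constant_b are hypothetical): two unrelated constant taxa are uncorrelated in absolute counts but appear strongly positively correlated after total-sum scaling.

set.seed(42)
absolute <- tibble(
  time = 1:20,
  grower = exp(seq(0, 3, length.out = 20)), # exponentially growing taxon
  constant_a = rnorm(20, mean = 5, sd = 0.1), # two independent, constant taxa
  constant_b = rnorm(20, mean = 5, sd = 0.1)
)
relative <-
  absolute %>%
  mutate(total = grower + constant_a + constant_b) %>%
  mutate(
    grower = grower / total,
    constant_a = constant_a / total,
    constant_b = constant_b / total
  )
cor(absolute$constant_a, absolute$constant_b) # expected to be near zero
cor(relative$constant_a, relative$constant_b) # spuriously strong and positive

BAnOCC is a tool which takes this compositionality into account (Schwager et al. 2017); it requires the abundances of each sample to sum to 1. Let's use its example dataset to get the correlations: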
library(banocc)
data(compositions_null)
correlate_banocc(compositions_null)
# A tbl_graph: 0 nodes and 0 edges
#
# An empty graph
#
# Edge Data: 0 x 12 (active)
# … with 12 variables: from <int>, to <int>, estimate <dbl>, conf_alpha <dbl>,
# from_degree <dbl>, from_component <int>, from_closeness <dbl>,
# from_betweeness <dbl>, to_degree <dbl>, to_component <int>,
# to_closeness <dbl>, to_betweeness <dbl>
#
# Node Data: 0 x 4
# … with 4 variables: degree <dbl>, component <int>, closeness <dbl>,
# betweeness <dbl>
A comprehensive tutorial is also available.
Another way to address the compositionality is to normalize the abundances using centered log-ratios (clr).
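Here is a minimal clr sketch, applied per sample; the pseudocount used to handle the zeros is an assumption of this example, not a recommendation:

# clr(x) = log(x) - mean(log(x)), computed for each sample (row)
clr <- function(x, pseudocount = 1e-6) {
  x <- x + pseudocount
  log(x) - mean(log(x))
}

features_clr <- features_mat %>% apply(1, clr) %>% t()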
The dinosaur dataset shows that one can create almost any pattern while keeping the same summary statistics, including the correlation (Tran 2020; Autodesk 2017).
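These datasets are shipped with the datasauRus package (assumed to be installed; it is not part of this repository's environment), so we can verify that the summary statistics, including Pearson's r, are nearly identical across all shapes:

library(datasauRus)

datasaurus_dozen %>%
  group_by(dataset) %>%
  summarise(
    mean_x = mean(x),
    mean_y = mean(y),
    sd_x = sd(x),
    sd_y = sd(y),
    cor = cor(x, y)
  )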
Correlation detection strategies in microbial data sets vary widely in sensitivity and precision. An overview is given in (Weiss et al. 2016).
Correlation is a symmetric measure to quantify interactions. However, one might want to know who is the predator and who is the prey. We need absolute counts, e.g. to do causal inference using Bayesian network analysis (Carr et al. 2019). This can be achieved, for instance, by spiking the samples with marker cells of known abundance (Rao et al. 2021).
Analyses usually consist of multiple steps that depend on each other. We can express this workflow as a dependency graph using the workflow-management R package drake. This has several advantages: targets are cached, only outdated steps are re-executed, and independent steps can run in parallel:
correlation_plan <- drake_plan(
features = file_in("data/features.csv") %>% read_csv(),
lineages = file_in("data/lineages.csv") %>% read_csv,
feature_types = c("taxon", "pathway"),
cor_methods = c("spearman", "pearson"),
matricies = target({
matrix <-
features %>%
pivot_longer(-sample_id, names_to = "feature", values_to = "abundance") %>%
left_join(lineages) %>%
# get current feature type
filter(feature_type == feature_types) %>%
select(sample_id, feature, abundance) %>%
# subsetting
sample_frac(0.05) %>%
# long table to wide matrix
pivot_wider(
names_from = feature,
values_from = abundance,
values_fill = list(abundance = 0)
) %>%
select(-sample_id) %>%
as.matrix()
tibble(feature_type = feature_types, matrix = list(matrix))
},
dynamic = map(feature_types)
),
correlations = target({
correlation <- correlate(data = matricies$matrix[[1]], method = cor_methods)
tibble(
feature_type = feature_types,
cor_method = cor_methods,
correlation = list(correlation)
)
},
dynamic = cross(feature_types, cor_methods)
)
)
correlation_plan %>% make()
Results are stored on disk. They can be loaded into the current R session:
loadd(correlations)
correlations
# A tibble: 4 x 3
feature_type cor_method correlation
<chr> <chr> <list>
1 taxon spearman <tbl_grph>
2 taxon pearson <tbl_grph>
3 pathway spearman <tbl_grph>
4 pathway pearson <tbl_grph>
A more elaborate workflow is part of this repository (see the report); it also includes retrieving the data. Drake plans are tibbles of steps and can be plotted:
plan %>% plot()
The targets can be made by running the script analyze.R. It is recommended to run this as an RStudio background job so it executes in an isolated environment. Sometimes cache directories are not created in the first run; if the job fails, simply run it again.
rstudioapi::jobRunScript("analyze.R")
Often, an edge is only kept if it is significant according to multiple correlation methods. These ensemble approaches tend to be more robust, yielding fewer false-positive interactions:
loadd(coabundances)
graphs <-
coabundances %>%
filter(feature_type == "pathway" & disease == "CRC") %>%
select(method, coabundance) %>%
deframe()
ensemble_coabundance(
x = graphs$spearman,
name_x = "spearman",
y = graphs$banocc,
name_y = "banocc",
method = "intersection"
)
# A tbl_graph: 17 nodes and 20 edges
#
# A directed acyclic simple graph with 2 components
#
# Node Data: 17 x 5 (active)
feature degree component closeness betweeness
<chr> <dbl> <int> <dbl> <dbl>
1 (5Z)-dodec-5-enoate biosynthesis 2 1 0.00417 0
2 6-hydroxymethyl-dihydropterin diphospha… 0 1 0.00368 0
3 8-amino-7-oxononanoate biosynthesis I 1 1 0.00391 0
4 4-deoxy-L-threo-hex-4-enopyranuronate d… 5 1 0.00521 0
5 adenosylcobalamin salvage from cobinami… 0 1 0.00368 0
6 anaerobic energy metabolism (invertebra… 0 1 0.00368 0
# … with 11 more rows
#
# Edge Data: 20 x 18
from to estimate_spearm… estimate_banocc p.value_spearman p.value_banocc
<int> <int> <dbl> <dbl> <dbl> <dbl>
1 1 2 0.753 0.577 1.26e- 4 7.67e- 3
2 1 3 0.985 0.983 3.55e-15 8.44e-15
3 4 1 0.8 0.720 2.29e- 5 3.41e- 4
# … with 17 more rows, and 12 more variables: estimate <dbl>, p.value <dbl>,
# from_feature <chr>, from_degree <dbl>, from_component <int>,
# from_closeness <dbl>, from_betweeness <dbl>, to_feature <chr>,
# to_degree <dbl>, to_component <int>, to_closeness <dbl>,
# to_betweeness <dbl>
Correlation assumes that each pair of variables is conditionally independent of all others. But often, an interaction only exists in the presence or absence of a third variable. SpiecEasi tries to account for this (Kurtz et al. 2015). One application is finding relationships between features of different types. Let's load both species and pathways of the colorectal cancer (CRC) cohort from the main plan in this repository (20 samples and 20 features each).
loadd(inter_feature_type_mats)
matricies_crc <-
inter_feature_type_mats %>%
filter(disease == "CRC") %>%
pull(mat) %>%
first()
matricies_crc %>% map(dim)
[[1]]
[1] 20 20
[[2]]
[1] 20 20
Let’s estimate covariance relationships:
crc_spiec_easi <- matricies_crc %>% correlate_spiec_easi(method = "mb", nodes = lineages)
crc_spiec_easi
# A tbl_graph: 27 nodes and 19 edges
#
# An unrooted forest with 8 trees
#
# Edge Data: 19 x 29 (active)
from to estimate from_feature from_feature_ty… from_kingdom from_phylum
<int> <int> <dbl> <chr> <chr> <chr> <chr>
1 18 19 0.300 (5Z)-dodec-… pathway <NA> <NA>
2 25 26 0.0877 acetyl-CoA … pathway <NA> <NA>
3 10 26 -0.131 adenine and… pathway <NA> <NA>
4 26 27 0.0116 4-aminobuta… pathway <NA> <NA>
5 14 23 -0.0553 4-deoxy-L-t… pathway <NA> <NA>
6 1 14 0.0347 Faecalibact… taxon Bacteria Firmicutes
# … with 13 more rows, and 22 more variables: from_order <chr>,
# from_class <chr>, from_family <chr>, from_genus <chr>, from_species <chr>,
# from_degree <dbl>, from_component <int>, from_closeness <dbl>,
# from_betweeness <dbl>, to_feature <chr>, to_feature_type <chr>,
# to_kingdom <chr>, to_phylum <chr>, to_order <chr>, to_class <chr>,
# to_family <chr>, to_genus <chr>, to_species <chr>, to_degree <dbl>,
# to_component <int>, to_closeness <dbl>, to_betweeness <dbl>
#
# Node Data: 27 x 13
feature feature_type kingdom phylum order class family genus species degree
<chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <dbl>
1 Faecal… taxon Bacter… Firmi… Clos… Clos… Rumin… Faec… Faecal… 2
2 Paraba… taxon Bacter… Bacte… Bact… Bact… Porph… Para… Paraba… 1
3 Alisti… taxon Bacter… Bacte… Bact… Bact… Riken… Alis… Alisti… 1
# … with 24 more rows, and 3 more variables: component <int>, closeness <dbl>,
# betweeness <dbl>
crc_spiec_easi %>% correlate_plot()
These are the relationships found between a species and a pathway:
crc_spiec_easi %>%
activate(edges) %>%
as_tibble() %>%
filter(from_feature_type != to_feature_type) %>%
select(from_feature, to_feature, estimate)
# A tibble: 3 x 3
from_feature to_feature estimate
<chr> <chr> <dbl>
1 Faecalibacterium prausnit… 4-deoxy-L-threo-hex-4-enopyranuronate deg… 0.0347
2 Ruminococcus obeum acetyl-CoA fermentation to butanoate II 0.0477
3 Alistipes shahii adenosine nucleotides degradation II -0.0947
These correlation networks were created by the main drake workflow:
loadd(coabundances)
coabundances
# A tibble: 14 x 4
disease method feature_type coabundance
<chr> <chr> <chr> <list>
1 CRC mb all <tbl_grph>
2 healthy mb all <tbl_grph>
3 healthy spearman taxon <tbl_grph>
4 healthy sparcc taxon <tbl_grph>
5 healthy banocc taxon <tbl_grph>
6 CRC spearman taxon <tbl_grph>
7 CRC sparcc taxon <tbl_grph>
8 CRC banocc taxon <tbl_grph>
9 healthy spearman pathway <tbl_grph>
10 healthy sparcc pathway <tbl_grph>
11 healthy banocc pathway <tbl_grph>
12 CRC spearman pathway <tbl_grph>
13 CRC sparcc pathway <tbl_grph>
14 CRC banocc pathway <tbl_grph>
The tidygraph package provides functions to calculate node and edge attributes based on the topological structure:
coabundances$coabundance[[1]] %>%
activate(nodes) %>%
mutate(
# number of edges connected to a node
degree = centrality_degree(),
# Kleinberg's hub centrality scores
hub = centrality_hub()
) %>%
as_tibble() %>%
select(feature, degree, hub)
# A tibble: 27 x 3
feature degree hub
<chr> <dbl> <dbl>
1 Faecalibacterium prausnitzii 2 2.31e-16
2 Parabacteroides distasonis 1 0.
3 Alistipes shahii 1 1.15e-16
4 Dorea formicigenerans 1 1.15e-16
5 Ruminococcus obeum 1 3.66e- 1
6 Clostridium bolteae 1 0.
7 Clostridium leptum 1 0.
8 Clostridium hathewayi 1 0.
9 adenosine ribonucleotides de novo biosynthesis 1 1.15e-16
10 adenine and adenosine salvage III 2 3.88e- 1
# … with 17 more rows
Objects created by correlate are shipped with basic topology metrics. We can extract them to make plots for comparison:
coabundances %>%
filter(feature_type == "all") %>%
select(-feature_type) %>%
mutate(nodes = coabundance %>% map(~ .x %>% activate(nodes) %>% as_tibble)) %>%
select(-coabundance) %>%
unnest(nodes) %>%
pivot_longer(cols = c(degree, closeness, betweeness), names_to = "metric") %>%
ggplot(aes(feature_type, value)) +
geom_boxplot() +
facet_wrap(~metric, scales = "free_y") +
labs(x = "Feature type", y = "Topology score")
Or compare taxonomic correlations between healthy and diseased cohorts:
coabundances %>%
filter(feature_type == "taxon" & method == "sparcc") %>%
select(-feature_type) %>%
mutate(nodes = coabundance %>% map(~ .x %>% activate(nodes) %>% as_tibble)) %>%
select(-coabundance) %>%
unnest(nodes) %>%
pivot_longer(cols = c(degree, closeness, betweeness), names_to = "metric") %>%
ggplot(aes(disease, value)) +
geom_boxplot() +
ggpubr::stat_compare_means() +
facet_wrap(~metric, scales = "free_y")
The graph with all edge and node attributes can be saved to a file, e.g. to visualize it in other tools like Cytoscape:
library(igraph)
coabundances %>%
filter(
disease == "healthy" &
feature_type == "pathway" &
method == "spearman"
) %>%
pull(coabundance) %>%
first() %>%
as.igraph() %>%
write.graph("healthy_pathway_spearman.graphml", format = "graphml")
Autodesk. 2017. “Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics Through Simulated Annealing.” https://www.autodesk.com/research/publications/same-stats-different-graphs.
Carr, Alex, Christian Diener, Nitin S Baliga, and Sean M Gibbons. 2019. “Use and Abuse of Correlation Analyses in Microbial Ecology.” The ISME Journal 13 (11): 2647–55.
Friedman, Jonathan, and Eric J Alm. 2012. “Inferring Correlation Networks from Genomic Survey Data.” PLoS Comput Biol 8 (9): e1002687.
Gloor, Gregory B, Jean M Macklaim, Vera Pawlowsky-Glahn, and Juan J Egozcue. 2017. “Microbiome Datasets Are Compositional: And This Is Not Optional.” Frontiers in Microbiology 8: 2224.
Kurtz, Zachary D, Christian L Müller, Emily R Miraldi, Dan R Littman, Martin J Blaser, and Richard A Bonneau. 2015. “Sparse and Compositionally Robust Inference of Microbial Ecological Networks.” PLoS Comput Biol 11 (5): e1004226.
Layeghifard, Mehdi, David M Hwang, and David S Guttman. 2018. “Constructing and Analyzing Microbiome Networks in R.” In Microbiome Analysis, 243–66. Springer.
Lindeløv, Jonas Kristoffer. n.d. “Common Statistical Tests Are Linear Models (or: How to Teach Stats).” https://lindeloev.github.io/tests-as-linear/.
Nielsen, H Bjørn, Mathieu Almeida, Agnieszka Sierakowska Juncker, Simon Rasmussen, Junhua Li, Shinichi Sunagawa, Damian R Plichta, et al. 2014. “Identification and Assembly of Genomes and Genetic Elements in Complex Metagenomic Samples Without Using Reference Genomes.” Nature Biotechnology 32 (8): 822–28.
Pasolli, E., L. Schiffer, P. Manghi, A. Renson, V. Obenchain, D. T. Truong, F. Beghini, et al. 2017. “Accessible, curated metagenomic data through ExperimentHub.” Nat Methods 14 (11): 1023–4.
Pearson, Karl. 1897. “Mathematical Contributions to the Theory of Evolution.—on a Form of Spurious Correlation Which May Arise When Indices Are Used in the Measurement of Organs.” Proceedings of the Royal Society of London 60 (359-367): 489–98.
Rao, Chitong, Katharine Z Coyte, Wayne Bainter, Raif S Geha, Camilia R Martin, and Seth Rakoff-Nahoum. 2021. “Multi-Kingdom Ecological Drivers of Microbiota Assembly in Preterm Infants.” Nature, 1–6.
Ruan, Quansong, Debojyoti Dutta, Michael S Schwalbach, Joshua A Steele, Jed A Fuhrman, and Fengzhu Sun. 2006. “Local Similarity Analysis Reveals Unique Associations Among Marine Bacterioplankton Species and Environmental Factors.” Bioinformatics 22 (20): 2532–8.
Schwager, Emma, Himel Mallick, Steffen Ventz, and Curtis Huttenhower. 2017. “A Bayesian Method for Detecting Pairwise Associations in Compositional Data.” PLoS Computational Biology 13 (11): e1005852.
“Spurious Correlations.” n.d. http://www.tylervigen.com/spurious-correlations.
Tran, Khuyen. 2020. “Can Datasets of a Dinosaur and a Circle Have Identical Statistics?” https://towardsdatascience.com/how-to-turn-a-dinosaur-dataset-into-a-circle-dataset-with-the-same-statistics-64136c2e2ca0.
Weiss, Sophie, Will Van Treuren, Catherine Lozupone, Karoline Faust, Jonathan Friedman, Ye Deng, Li Charlie Xia, et al. 2016. “Correlation Detection Strategies in Microbial Data Sets Vary Widely in Sensitivity and Precision.” The ISME Journal 10 (7): 1669–81.
Zeller, Georg, Julien Tap, Anita Y Voigt, Shinichi Sunagawa, Jens Roat Kultima, Paul I Costea, Aurélien Amiot, et al. 2014. “Potential of Fecal Microbiota for Early-Stage Detection of Colorectal Cancer.” Molecular Systems Biology 10 (11): 766.
Zhou, W., M. R. Sailani, K. Contrepois, Y. Zhou, S. Ahadi, S. R. Leopold, M. J. Zhang, et al. 2019. “Longitudinal multi-omics of host-microbe dynamics in prediabetes.” Nature 569 (7758): 663–71.