---
title: Further strategies for analyzing single-cell RNA-seq data
author:
- name: Aaron T. L. Lun
  affiliation: &CRUK Cancer Research UK Cambridge Institute, Li Ka Shing Centre, Robinson Way, Cambridge CB2 0RE, United Kingdom
- name: Davis J. McCarthy
  affiliation:
  - &EMBL EMBL European Bioinformatics Institute, Wellcome Genome Campus, Hinxton, Cambridge CB10 1SD, United Kingdom
  - St Vincent's Institute of Medical Research, 41 Victoria Parade, Fitzroy, Victoria 3065, Australia
- name: John C. Marioni
  affiliation:
  - *CRUK
  - *EMBL
  - Wellcome Trust Sanger Institute, Wellcome Genome Campus, Hinxton, Cambridge CB10 1SA, United Kingdom
date: "`r Sys.Date()`"
vignette: >
  %\VignetteIndexEntry{Further strategies for analyzing scRNA-seq data}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
output: BiocStyle::html_document
bibliography: ref.bib
---

```{r style, echo=FALSE, results='hide', message=FALSE}
library(BiocStyle)
library(knitr)
opts_chunk$set(error=FALSE, message=FALSE, warning=FALSE)
opts_chunk$set(fig.asp=1)
```

```{r, echo=FALSE, results='hide'}
all.urls <- c("https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE61533&format=file&file=GSE61533%5FHTSEQ%5Fcount%5Fresults%2Exls%2Egz",
    "https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE29087&format=file&file=GSE29087%5FL139%5Fexpression%5Ftab%2Etxt%2Egz",
    "http://www.ebi.ac.uk/teichmann-srv/espresso/static/counttable_es.csv",
    "http://www.nature.com/nbt/journal/v33/n2/extref/nbt.3102-S7.xlsx")
all.basenames <- basename(all.urls)
all.basenames[1] <- "GSE61533_HTSEQ_count_results.xls.gz"
all.basenames[2] <- "GSE29087_L139_expression_tab.txt.gz"
all.modes <- rep("w", length(all.urls))
all.modes[!grepl("(txt|csv)$", all.basenames)] <- "wb"
for (x in seq_along(all.urls)) {
    if (!file.exists(all.basenames[x])) {
        download.file(all.urls[x], all.basenames[x], mode=all.modes[x], cacheOK=FALSE)
    }
}
```

# Overview

The previous workflows focused on analyzing single-cell RNA-seq data with "standard" procedures.
However, a number of alternative parameter settings and strategies can be used at some steps of the workflow.
This workflow describes a few of these alternative settings as well as the rationale behind choosing them instead of the defaults.

# Quality control on cells

## Assumptions of outlier identification

An outlier-based definition for low-quality cells assumes that most cells are of high quality.
This is usually reasonable and can be experimentally supported in some situations by visually checking that the cells are intact, e.g., on the microwell plate.
Another assumption is that the QC metrics are independent of the biological state of each cell.
This ensures that any outlier values for these metrics are driven by technical factors rather than biological processes.
Thus, removing cells based on the metrics will not misrepresent the biology in downstream analyses.

The second assumption is most likely to be violated in highly heterogeneous cell populations.
For example, some cell types may naturally have less RNA or express fewer genes than other cell types.
Such cell types are more likely to be considered outliers and removed, even if they are of high quality.
The use of the MAD mitigates this problem by accounting for biological variability in the QC metrics.
A heterogeneous population should have higher variability in the metrics among high-quality cells, increasing the MAD and reducing the chance of incorrectly removing particular cell types (at the cost of reducing power to remove low-quality cells).
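To make the outlier definition concrete, the sketch below mimics what `isOutlier()` computes for a lower-tailed filter on the log-scale.
The `libsizes` vector here is a hypothetical stand-in for any QC metric; outlier calls are invariant to the choice of log base, so the manual calculation should agree with the `isOutlier()` call.

```{r}
# Sketch of MAD-based outlier detection on the log-scale, using a
# hypothetical vector of library sizes.
libsizes <- c(250000, 300000, 280000, 310000, 5000, 290000)
log.sizes <- log10(libsizes)
lower.limit <- median(log.sizes) - 3 * mad(log.sizes)
log.sizes < lower.limit

# The equivalent call from the scater package:
library(scater)
isOutlier(libsizes, nmads=3, type="lower", log=TRUE)
```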
Nonetheless, filtering based on outliers may not be appropriate in extreme cases where one cell type is very different from the others.
Systematic differences in the QC metrics can be handled to some extent using the `batch=` argument in the `isOutlier()` function.
For example, setting `batch` to the plate of origin will identify outliers within each level of `batch`, using plate-specific median and MAD estimates.
This is obviously useful for accommodating known differences in experimental processing, e.g., sequencing at different depths or different amounts of added spike-in RNA.
We can also include biological factors in `batch`, if those factors could result in systematically fewer expressed genes or lower RNA content.
However, this is not applicable in experiments where the factors are not known in advance.

## Checking for discarded cell types

We can diagnose the loss of distinct cell types during QC by looking for differences in gene expression between the discarded and retained cells (Figure \@ref(fig:discardplot416b)).
If the discarded pool is enriched for a certain cell type, we should observe increased expression of the corresponding marker genes.
No systematic upregulation of genes is apparent in the discarded pool in Figure \@ref(fig:discardplot416b), indicating that the QC step did not inadvertently filter out a cell type in the 416B dataset.

```{r discardplot416b, fig.cap="Average counts across all discarded and retained cells in the 416B dataset. Each point represents a gene, with spike-in and mitochondrial transcripts in red and blue respectively."}
library(SingleCellExperiment)
sce.full.416b <- readRDS("416B_preQC.rds")

library(scater)
suppressWarnings({
    lost <- calcAverage(counts(sce.full.416b)[,!sce.full.416b$PassQC])
    kept <- calcAverage(counts(sce.full.416b)[,sce.full.416b$PassQC])
})
logfc <- log2((lost+1)/(kept+1))
head(sort(logfc, decreasing=TRUE), 20)

plot(lost, kept, xlab="Average count (discarded)",
    ylab="Average count (retained)", log="xy", pch=16)
is.spike <- isSpike(sce.full.416b)
points(lost[is.spike], kept[is.spike], col="red", pch=16)
is.mito <- rowData(sce.full.416b)$is_feature_control_Mt
points(lost[is.mito], kept[is.mito], col="dodgerblue", pch=16)
```

By comparison, a more stringent filter in the PBMC dataset would remove the previously identified platelet population (see the [previous workflow](http://bioconductor.org/packages/devel/workflows/vignettes/simpleSingleCell/inst/doc/work-3-tenx.html#marker-gene-detection)).
This manifests in Figure \@ref(fig:discardplotpbmc) as a shift to the bottom-right for a number of genes, including _PF4_ and _PPBP_.

```{r discardplotpbmc, fig.cap="Average counts across all discarded and retained cells in the PBMC dataset. Each point represents a gene, with platelet-related genes highlighted in orange."}
sce.pbmc <- readRDS("pbmc_data.rds")
wrong.keep <- sce.pbmc$total_counts >= 1000

suppressWarnings({
    lost <- calcAverage(counts(sce.pbmc)[,!wrong.keep])
    kept <- calcAverage(counts(sce.pbmc)[,wrong.keep])
})
logfc <- log2((lost+1)/(kept+1))
head(sort(logfc, decreasing=TRUE), 20)

plot(lost, kept, xlab="Average count (discarded)",
    ylab="Average count (retained)", log="xy", pch=16)
platelet <- c("PF4", "PPBP", "SDPR")
points(lost[platelet], kept[platelet], col="orange", pch=16)
```

If cell types are being incorrectly discarded, the solution is to relax the QC filters.
This can be achieved by either increasing `nmads=` in `isOutlier()` or by dropping the filter altogether, as sketched below.
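A minimal sketch of such a relaxation is shown below for the library size filter on the PBMC dataset; the choice of `nmads=5` is an arbitrary illustration rather than a recommendation.

```{r}
# Relaxing the library size filter with a larger nmads (illustrative choice of 5).
relaxed.keep <- !isOutlier(sce.pbmc$total_counts, nmads=5, type="lower", log=TRUE)
summary(relaxed.keep)
```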
However, keep in mind that low-quality cells will often increase the apparent heterogeneity, usually because of increased sampling noise at low sequencing coverage.
This can interfere with variance modelling and PCA, e.g., where the first few PCs separate cells according to quality rather than any biology.
At worst, low-quality cells can form their own cluster, which requires additional care during interpretation of the results.
These considerations motivate the use of a stricter filter (at least on the first pass) in our workflows.

## Alternative approaches to quality control

### Using fixed thresholds

One alternative strategy is to set pre-defined thresholds on each QC metric.
For example, we might remove all cells with library sizes below 100000 and numbers of expressed genes below 4000.
This generally requires considerable experience to determine appropriate thresholds for each experimental protocol and biological system.
For example, thresholds for read count-based data are simply not applicable for UMI-based data, and vice versa.
Indeed, even with the same protocol and system, the appropriate threshold can vary from run to run due to the vagaries of RNA capture and sequencing.
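For illustration, a fixed-threshold filter with the numbers above might look like the following sketch on the pre-QC 416B dataset.
This assumes that the `total_counts` and `total_features` fields from `calculateQCMetrics()` are present in the object, and the cutoffs themselves are arbitrary choices for read count-based data.

```{r}
# Sketch of a fixed-threshold filter; the cutoffs are illustrative only and
# assume that calculateQCMetrics() fields are available in the object.
fixed.keep <- sce.full.416b$total_counts >= 1e5 &
    sce.full.416b$total_features >= 4000
summary(fixed.keep)
```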
### Using PCA-based outliers

Another strategy is to perform a principal components analysis (PCA) based on the quality metrics for each cell, e.g., the total number of reads, the total number of features and the proportion of mitochondrial or spike-in reads.
Outliers on a PCA plot may be indicative of low-quality cells that have aberrant technical properties compared to the (presumed) majority of high-quality cells.
This is demonstrated below on a brain cell dataset from @tasic2016adult, using functions from the `r Biocpkg("scater")` package [@mccarthy2017scater].

```{r}
# Obtaining the dataset.
library(scRNAseq)
data(allen)

# Setting up the data.
sce.allen <- as(allen, "SingleCellExperiment")
assayNames(sce.allen) <- "counts"
isSpike(sce.allen, "ERCC") <- grep("ERCC", rownames(sce.allen))

# Computing the QC metrics and running PCA.
library(scater)
sce.allen <- calculateQCMetrics(sce.allen)
sce.allen <- runPCA(sce.allen, use_coldata=TRUE, detect_outliers=TRUE)
table(sce.allen$outlier)
```

Methods like PCA-based outlier detection and support vector machines can provide more power to distinguish low-quality cells from their high-quality counterparts [@ilicic2016classification].
This is because they are able to detect subtle patterns across many quality metrics simultaneously.
However, this comes at some cost to interpretability, as the reason for removing a given cell may not always be obvious.
Users interested in these more sophisticated approaches are referred to the `r Biocpkg("scater")` and `r Biocpkg("cellity")` packages.

For completeness, we note that outliers can also be identified from PCA on the gene expression profiles, rather than on the QC metrics.
We consider this to be a risky strategy as it can remove high-quality cells in rare populations.

# Normalizing based on spike-in coverage

## Motivation

Scaling normalization strategies for scRNA-seq data can be broadly divided into two classes.
The first class assumes that there exists a subset of genes that are not DE between samples, as previously described.
The second class uses the fact that the same amount of spike-in RNA was added to each cell [@lun2017assessing].
Differences in the coverage of the spike-in transcripts can only be due to cell-specific biases, e.g., in capture efficiency or sequencing depth.
Scaling normalization is then applied to equalize spike-in coverage across cells.

The choice between these two normalization strategies depends on the biology of the cells and the features of interest.
If the majority of genes are expected to be DE and there is no reliable house-keeping set, spike-in normalization may be the only option for removing cell-specific biases.
Spike-in normalization should also be used if differences in the total RNA content of individual cells are of interest.
In any particular cell, an increase in the amount of endogenous RNA will not increase the spike-in coverage (with or without library quantification).
Thus, total RNA content will not be represented in the spike-in size factors, which means that its effects on expression will not be removed upon scaling.
With non-DE normalization, an increase in RNA content will systematically increase the expression of all genes in the non-DE subset, such that it will be treated as bias and removed.

## Setting up the data

We demonstrate the use of spike-in normalization on a dataset involving different cell types -- namely, mouse embryonic stem cells (mESCs) and mouse embryonic fibroblasts (MEFs) [@islam2011characterization].
The count table was obtained from the NCBI Gene Expression Omnibus (GEO) as a supplementary file using the accession number [GSE29087](http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE29087).
We load the counts into R, using `colClasses` to speed up `read.table` by pre-defining the type of each column.
We also specify the rows corresponding to spike-in transcripts.

```{r}
library(SingleCellExperiment)
counts <- read.table("GSE29087_L139_expression_tab.txt.gz",
    colClasses=c(list("character", NULL, NULL, NULL, NULL, NULL, NULL),
    rep("integer", 96)), skip=6, sep='\t', row.names=1)

is.spike <- grep("SPIKE", rownames(counts))
sce.islam <- SingleCellExperiment(list(counts=as.matrix(counts)))
isSpike(sce.islam, "spike") <- is.spike
dim(sce.islam)
```

We perform some quality control to remove low-quality cells, using the `calculateQCMetrics` function to compute the relevant metrics.
Outliers are identified within each cell type to avoid issues with systematic differences in the metrics between cell types.
The negative control wells do not contain any cells and are useful for quality control (as they _should_ manifest as outliers for the various metrics), but they need to be removed prior to downstream analysis.

```{r}
library(scater)
sce.islam <- calculateQCMetrics(sce.islam)
sce.islam$grouping <- rep(c("mESC", "MEF", "Neg"), c(48, 44, 4))

libsize.drop <- isOutlier(sce.islam$total_counts, nmads=3, type="lower",
    log=TRUE, batch=sce.islam$grouping)
feature.drop <- isOutlier(sce.islam$total_features, nmads=3, type="lower",
    log=TRUE, batch=sce.islam$grouping)
spike.drop <- isOutlier(sce.islam$pct_counts_spike, nmads=3, type="higher",
    batch=sce.islam$grouping)

sce.islam <- sce.islam[,!(libsize.drop | feature.drop |
    spike.drop | sce.islam$grouping=="Neg")]
data.frame(ByLibSize=sum(libsize.drop), ByFeature=sum(feature.drop),
    BySpike=sum(spike.drop), Remaining=ncol(sce.islam))
```

## Calculating spike-in size factors

We apply the `computeSpikeFactors` method to estimate size factors for all cells.
This method computes the total count over all spike-in transcripts in each cell, and calculates size factors to equalize the total spike-in count across cells.
Here, we set `general.use=TRUE` as we intend to apply the spike-in factors to all counts.
```{r}
library(scran)
sce.islam <- computeSpikeFactors(sce.islam, general.use=TRUE)
```

Running `normalize` will use the spike-in-based size factors to compute normalized log-expression values.
Unlike in the previous analyses, we do not have to define separate size factors for the spike-in transcripts.
This is because the relevant factors are already being used for all genes and spike-in transcripts when `general.use=TRUE`.
(The exception is if the experiment uses multiple spike-in sets that behave differently and need to be normalized separately.)

```{r}
sce.islam <- normalize(sce.islam)
```

For comparison, we also compute the deconvolution size factors [@lun2016pooling] and plot them against the spike-in factors.
We observe a negative correlation between the two sets of values (Figure \@ref(fig:normplotspikemef)).
This is because MEFs contain more endogenous RNA, which reduces the relative spike-in coverage in each library (thereby decreasing the spike-in size factors) but increases the coverage of endogenous genes (thus increasing the deconvolution size factors).
If the spike-in size factors were applied to the counts, the expression values in MEFs would be scaled up while expression in mESCs would be scaled down.
However, the opposite would occur if the deconvolution size factors were used.

```{r normplotspikemef, fig.cap="Size factors from spike-in normalization, plotted against the size factors from deconvolution for all cells in the mESC/MEF dataset. Axes are shown on a log-scale, and cells are coloured according to their identity. Deconvolution size factors were computed with small pool sizes owing to the low number of cells of each type."}
colours <- c(mESC="red", MEF="grey")
deconv.sf <- computeSumFactors(sce.islam, sf.out=TRUE,
    cluster=sce.islam$grouping, min.mean=1)
plot(sizeFactors(sce.islam), deconv.sf, col=colours[sce.islam$grouping], pch=16,
    log="xy", xlab="Size factor (spike-in)", ylab="Size factor (deconvolution)")
legend("bottomleft", col=colours, legend=names(colours), pch=16)
```

Whether or not total RNA content is relevant -- and thus, the choice of normalization strategy -- depends on the biological hypothesis.
In the HSC and brain analyses, variability in total RNA across the population was treated as noise and removed by non-DE normalization.
This may not always be appropriate if total RNA is associated with a biological difference of interest.
For example, @islam2011characterization observe a 5-fold difference in total RNA between mESCs and MEFs.
Similarly, the total RNA in a cell changes across phases of the cell cycle [@buettner2015computational].
Spike-in normalization will preserve these differences in total RNA content such that the corresponding biological groups can be easily resolved in downstream analyses.

__Comments from Aaron:__

- We only use genes with average counts greater than 1 (as specified in `min.mean`) to compute the deconvolution size factors.
This avoids problems with discreteness as mentioned in our previous uses of `computeSumFactors`.
- Setting `sf.out=TRUE` will directly return the size factors, rather than a `SingleCellExperiment` object containing those factors.
This is more convenient when only the size factors are required for further analysis.

# Detecting highly variable genes

## Setting up the data

Highly variable genes (HVGs) are defined as genes with biological components that are significantly greater than zero.
These genes are interesting as they drive differences in the expression profiles between cells, and should be prioritized for further investigation.
Formal detection of HVGs allows us to avoid genes that are highly variable due to technical factors such as sampling noise during RNA capture and library preparation.
This adds another level of statistical rigour to our previous analyses, in which we only modelled the technical component.

To demonstrate, we use data from haematopoietic stem cells (HSCs) [@wilson2015combined], generated using the Smart-seq2 protocol [@picelli2014fulllength] with ERCC spike-ins.
Counts were obtained from NCBI GEO as a supplementary file using the accession number [GSE61533](http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE61533).
Our first task is to load the count matrix into memory.
In this case, some work is required to retrieve the data from the Gzip-compressed Excel format.

```{r}
library(R.utils)
gunzip("GSE61533_HTSEQ_count_results.xls.gz", remove=FALSE, overwrite=TRUE)
library(gdata)
all.counts <- read.xls('GSE61533_HTSEQ_count_results.xls', sheet=1, header=TRUE)
rownames(all.counts) <- all.counts$ID
all.counts <- as.matrix(all.counts[,-1])
```

We store the results in a `SingleCellExperiment` object and identify the rows corresponding to the spike-ins based on the row names.

```{r}
sce.hsc <- SingleCellExperiment(list(counts=all.counts))
dim(sce.hsc)
is.spike <- grepl("^ERCC", rownames(sce.hsc))
isSpike(sce.hsc, "ERCC") <- is.spike
summary(is.spike)
```

For each cell, we calculate quality control metrics using the `calculateQCMetrics` function as previously described.
We filter out HSCs that are outliers for any metric, under the assumption that these represent low-quality libraries.

```{r}
sce.hsc <- calculateQCMetrics(sce.hsc)
libsize.drop <- isOutlier(sce.hsc$total_counts, nmads=3, type="lower", log=TRUE)
feature.drop <- isOutlier(sce.hsc$total_features, nmads=3, type="lower", log=TRUE)
spike.drop <- isOutlier(sce.hsc$pct_counts_ERCC, nmads=3, type="higher")
sce.hsc <- sce.hsc[,!(libsize.drop | feature.drop | spike.drop)]
data.frame(ByLibSize=sum(libsize.drop), ByFeature=sum(feature.drop),
    BySpike=sum(spike.drop), Remaining=ncol(sce.hsc))
```

We remove genes that are not expressed in any cell to reduce computational work in downstream steps.

```{r}
to.keep <- nexprs(sce.hsc, byrow=TRUE) > 0
sce.hsc <- sce.hsc[to.keep,]
summary(to.keep)
```

We apply the deconvolution method to compute size factors for the endogenous genes [@lun2016pooling].
Separate size factors for the spike-in transcripts are also calculated, as previously discussed.
We then calculate log-transformed normalized expression values for further use.

```{r, warning=FALSE}
sce.hsc <- computeSumFactors(sce.hsc)
summary(sizeFactors(sce.hsc))
sce.hsc <- computeSpikeFactors(sce.hsc, type="ERCC", general.use=FALSE)
summary(sizeFactors(sce.hsc, "ERCC"))
sce.hsc <- normalize(sce.hsc)
```

## Testing for significantly positive biological components

We fit a mean-variance trend to the spike-in transcripts to quantify the technical component of the variance, as previously described.
The biological component for each gene is defined as the difference between its total variance and the fitted value of the trend (Figure \@ref(fig:hvgplothsc)).
```{r hvgplothsc, fig.cap="Variance of normalized log-expression values for each gene in the HSC dataset, plotted against the mean log-expression. The blue line represents the mean-dependent trend fitted to the variances of the spike-in transcripts (red)."}
var.fit <- trendVar(sce.hsc, parametric=TRUE, loess.args=list(span=0.3))
var.out <- decomposeVar(sce.hsc, var.fit)
plot(var.out$mean, var.out$total, pch=16, cex=0.6, xlab="Mean log-expression",
    ylab="Variance of log-expression")
curve(var.fit$trend(x), col="dodgerblue", lwd=2, add=TRUE)
cur.spike <- isSpike(sce.hsc)
points(var.out$mean[cur.spike], var.out$total[cur.spike], col="red", pch=16)
```

We define HVGs as those genes that have a biological component that is significantly greater than zero.
We use a false discovery rate (FDR) of 5% after correcting for multiple testing with the Benjamini-Hochberg method.

```{r}
hvg.out <- var.out[which(var.out$FDR <= 0.05),]
nrow(hvg.out)
```

We rank the results to focus on genes with larger biological components.
This highlights an interesting aspect of the underlying hypothesis test, which is based on the ratio of the total variance to the expected technical variance.
Ranking based on _p_-value tends to prioritize HVGs that are more likely to be true positives but, at the same time, less likely to be interesting.
This is because the ratio can be very large for HVGs that have very low total variance and do not contribute much to the cell-cell heterogeneity.

```{r}
hvg.out <- hvg.out[order(hvg.out$bio, decreasing=TRUE),]
write.table(file="hsc_hvg.tsv", hvg.out, sep="\t", quote=FALSE, col.names=NA)
head(hvg.out)
```

We check the distribution of expression values for the genes with the largest biological components.
This ensures that the variance estimate is not driven by one or two outlier cells (Figure \@ref(fig:hvgvioplothsc)).

```{r hvgvioplothsc, fig.cap="Violin plots of normalized log-expression values for the top 10 genes with the largest biological components in the HSC dataset. Each point represents the log-expression value in a single cell."}
fontsize <- theme(axis.text=element_text(size=12), axis.title=element_text(size=16))
plotExpression(sce.hsc, features=rownames(hvg.out)[1:10]) + fontsize
```

There are many other strategies for defining HVGs, based on a variety of metrics:

- the coefficient of variation [@brennecke2013accounting;@kolod2015singlecell;@kim2015characterizing]
- the dispersion parameter in the negative binomial distribution [@mccarthy2012differential]
- a proportion of total variability [@vallejos2015basics]

Some of these methods are available in `r Biocpkg("scran")` -- for example, see `DM` or `technicalCV2` for calculations based on the coefficient of variation (see also the sketch below).
Here, we use the variance of the log-expression values because the log-transformation protects against genes with strong expression in only one or two cells.
This ensures that the set of top HVGs is not dominated by genes with (mostly uninteresting) outlier expression patterns.
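As a brief illustration of the CV^2^-based route, the sketch below ranks genes by `scran`'s distance-to-median (`DM`) statistic, computed from per-gene CV^2^ values of the counts.
Treat this as a rough sketch with default settings rather than a tuned HVG analysis.

```{r}
# Sketch of a CV2-based ranking using the DM statistic from scran.
# CV2 is computed directly from the counts for each gene.
cnt <- counts(sce.hsc)
gene.mean <- rowMeans(cnt)
gene.cv2 <- apply(cnt, 1, var) / gene.mean^2
dm.stat <- DM(gene.mean, gene.cv2)
head(sort(dm.stat, decreasing=TRUE), 10)
```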
# Advanced modelling of the technical noise

## Trend fitting when spike-ins are unavailable

If spike-in RNA has not been added in appropriate quantities (or at all), an alternative approach is to fit the trend to the variance estimates of the endogenous genes.
This is done using the `use.spikes=FALSE` setting in `trendVar`, as shown below for the HSC dataset.

```{r}
var.fit.nospike <- trendVar(sce.hsc, parametric=TRUE,
    use.spikes=FALSE, loess.args=list(span=0.2))
var.out.nospike <- decomposeVar(sce.hsc, var.fit.nospike)
```

The simplest interpretation of the results assumes that the majority of genes are not variably expressed.
This means that the technical component dominates the total variance for most genes, such that the fitted trend can be treated as an estimate of the technical component.
In Figure \@ref(fig:hvgplothsc2), the trend passes through or close to most of the spike-in variances, indicating that our assumption is valid.

```{r hvgplothsc2, fig.cap="Variance of normalized log-expression values for each gene in the HSC dataset, plotted against the mean log-expression. The blue line represents the mean-dependent trend fitted to the variances of the endogenous genes (black), with spike-in transcripts shown in red."}
plot(var.out.nospike$mean, var.out.nospike$total, pch=16, cex=0.6,
    xlab="Mean log-expression", ylab="Variance of log-expression")
curve(var.fit.nospike$trend(x), col="dodgerblue", lwd=2, add=TRUE)
points(var.out.nospike$mean[cur.spike], var.out.nospike$total[cur.spike], col="red", pch=16)
```

If our assumption does not hold, the output of `decomposeVar` is more difficult to interpret.
The fitted value of the trend can no longer be generally interpreted as the technical component, as it contains some biological variation as well.
Instead, recall that the biological component reported by `decomposeVar` represents the residual for each gene over the majority of genes with the same abundance.
One could assume that the variabilities of most genes are driven by constitutive "house-keeping" processes, which are biological in origin but generally uninteresting.
Any gene with an increase in its variance is _relatively_ highly variable and can be prioritized for further study.

## Blocking on uninteresting factors of variation

### Using the `block=` argument

Our previous analysis of the 416B dataset specified `block=` in `trendVar()` to ensure that systematic differences between plates do not inflate the variance.
This involves estimating the mean and variance of the log-expression _separately_ in each plate, followed by fitting a single trend to the plate-specific means and variances of all spike-in transcripts.
In doing so, we implicitly assume that the trend is the same between plates, which is fairly reasonable for this dataset (Figure \@ref(fig:trendplotblock-416b)).

```{r trendplotblock-416b, fig.cap="Plate-specific variance estimates for all spike-in transcripts in the 416B dataset, plotted against the plate-specific means. Each point represents a spike-in transcript, numbered by the plate from which the values were estimated."}
# Loading the saved object.
sce.416B <- readRDS("416B_data.rds")

# Repeating the trendVar() call.
var.fit <- trendVar(sce.416B, parametric=TRUE, block=sce.416B$Plate,
    loess.args=list(span=0.3))

matplot(var.fit$means, var.fit$vars, col=c("darkorange", "forestgreen"))
```

The use of `block=` also assumes that the endogenous genes have comparable abundances between the two plates.
This is most easily examined by comparing the distribution of size factors for the endogenous genes across the plates.
Similar size factor distributions in Figure \@ref(fig:sizefacplot-416b) indicate that the coverage of most genes is not systematically different between plates.

```{r sizefacplot-416b, fig.cap="Plate-specific distribution of the size factors for endogenous genes."}
tmp.416B <- sce.416B
tmp.416B$log_size_factor <- log(sizeFactors(sce.416B))
plotColData(tmp.416B, x="Plate", y="log_size_factor")
```

However, these assumptions may not hold for other datasets.
For example, if more spike-in RNA is added in a particular batch, the technical noise (and thus the trend) will decrease due to increased coverage.
The mean log-expression of the spike-ins would also shift relative to the endogenous genes in that batch.
The use of a single trend would subsequently be inappropriate, resulting in inaccurate estimates of the technical component for each gene.

### Fitting batch-specific trends

For datasets containing multiple batches, an alternative strategy is to perform trend fitting and variance decomposition separately for each batch.
This accommodates differences in the mean-variance trends between batches, especially if a different amount of spike-in RNA was added to the cells in each batch.
We demonstrate this approach by treating each plate in the 416B dataset as a different batch, using the `multiBlockVar()` function.
This yields plate-specific estimates of the biological and technical components for each gene.

```{r}
sce.416B.2 <- normalize(sce.416B, size_factor_grouping=sce.416B$Plate)
comb.out <- multiBlockVar(sce.416B.2, block=sce.416B.2$Plate,
    trend.args=list(parametric=TRUE, loess.args=list(span=0.4)))
```

Statistics are combined across multiple batches using the `combineVar()` function within `multiBlockVar()`.
This function computes a weighted average across batches for the means and variances, and applies Fisher's method for combining the _p_-values.
These results can be used in downstream functions such as `denoisePCA`, or for detecting highly variable genes (see below).
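For intuition, Fisher's method can be sketched in a couple of lines; this is a conceptual illustration with made-up per-batch _p_-values for a single gene, not a call into `combineVar()` itself.

```{r}
# Sketch of Fisher's method: -2 * sum of log p-values follows a chi-squared
# distribution with 2k degrees of freedom under the null, for k batches.
p.batches <- c(0.01, 0.2) # hypothetical per-batch p-values for one gene
fisher.stat <- -2 * sum(log(p.batches))
pchisq(fisher.stat, df=2*length(p.batches), lower.tail=FALSE)
```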
```{r}
head(comb.out[,1:6])
```

We visualize the quality of the batch-specific trend fits by extracting the relevant statistics from `comb.out` (Figure \@ref(fig:hvgplotbatch416b)).

```{r hvgplotbatch416b, fig.width=10, fig.asp=0.5, fig.cap="Variance of normalized log-expression values for each gene in each plate of the 416B dataset, plotted against the mean log-expression. The blue line represents the mean-dependent trend fitted to the variances of the spike-in transcripts (red)."}
par(mfrow=c(1,2))
is.spike <- isSpike(sce.416B.2)
for (plate in levels(sce.416B.2$Plate)) {
    cur.out <- comb.out$per.block[[plate]]
    plot(cur.out$mean, cur.out$total, pch=16, cex=0.6, xlab="Mean log-expression",
        ylab="Variance of log-expression", main=plate)
    curve(metadata(cur.out)$trend(x), col="dodgerblue", lwd=2, add=TRUE)
    points(cur.out$mean[is.spike], cur.out$total[is.spike], col="red", pch=16)
}
```

By fitting separate trends, we avoid the need to assume that a single trend is present across batches.
However, this also reduces the precision of each trend fit, as less information is available within each batch.
We recommend using `block=` as the default unless there is clear evidence for differences in the trends between batches.

```{r, echo=FALSE, results="hide"}
gc()
```

**Comments from Aaron:**

- We run `normalize()` with `size_factor_grouping=` to centre the size factors within each level of the blocking factor.
This adjusts the size factors across cells in each batch so that the mean is equal to 1, for both the spike-in and gene-based sets of size factors.
Log-normalized expression values are then recalculated using these centred size factors.
This procedure ensures that the average abundances of the spike-in transcripts are comparable to the endogenous genes, avoiding problems due to differences in the quantity of spike-in RNA between batches.
Otherwise, if the globally-centred size factors were used, there would be a systematic difference in the scaling of spike-in transcripts compared to endogenous genes.
The fitted trend would then be shifted along the x-axis and would fail to accurately capture the technical component for each gene.

### Using the `design=` argument

For completeness, it is worth mentioning the `design=` argument in `trendVar()`.
This will estimate the residual variance from a linear model fitted to the log-normalized expression values for each gene.
The linear model can include blocking factors for known unwanted factors of variation, ensuring that they do not inflate the variance estimate.
The technical component for each gene is obtained at the average abundance across all cells.

```{r}
lfit <- trendVar(sce.416B, design=model.matrix(~sce.416B$Plate))
```

We do not recommend using this approach for categorical blocking factors in one-way layouts.
This is because it does not consider the mean of each blocking level, resulting in an inaccurate estimate of the technical component in the presence of a strong blocking effect.
However, it is the only choice for dealing with real covariates or multiple blocking factors in an additive model.

# Identifying correlated gene pairs with Spearman's rho

Another use for scRNA-seq data is to identify correlations between the expression profiles of different genes.
This is quantified by computing Spearman's rho, which accommodates non-linear relationships in the expression values.
Non-zero correlations between pairs of genes provide evidence for their co-regulation.
However, the noise in the data requires some statistical analysis to determine whether a correlation is significantly non-zero.

To demonstrate, we use the `correlatePairs` function to identify significant correlations between the various histocompatibility antigens in the HSC dataset.
The significance of each correlation is determined using a permutation test.
For each pair of genes, the null hypothesis is that the expression profiles of two genes are independent.
Shuffling the profiles and recalculating the correlation yields a null distribution that is used to obtain a _p_-value for each observed correlation value [@phipson2010permutation].

```{r}
set.seed(100)
var.cor <- correlatePairs(sce.hsc, subset.row=grep("^H2-", rownames(sce.hsc)))
head(var.cor)
```

Correction for multiple testing across many gene pairs is performed by controlling the FDR at 5%.

```{r}
sig.cor <- var.cor$FDR <= 0.05
summary(sig.cor)
```

We can also compute correlations between specific pairs of genes, or between all pairs between two distinct sets of genes.
The example below computes the correlation between _Fos_ and _Jun_, which dimerize to form the AP-1 transcription factor [@angel1991role].

```{r}
correlatePairs(sce.hsc, subset.row=cbind("Fos", "Jun"))
```

Examination of the expression profiles in Figure \@ref(fig:fosjuncorplot) confirms the presence of a modest correlation between these two genes.

```{r fosjuncorplot, fig.cap="Expression of _Fos_ plotted against the expression of _Jun_ for all cells in the HSC dataset."}
plotExpression(sce.hsc, features="Fos", x="Jun")
```

The use of `correlatePairs` is primarily intended to identify correlated gene pairs for validation studies.
Obviously, non-zero correlations do not provide evidence for a direct regulatory interaction, let alone specify causality.
To construct regulatory networks involving many genes, we suggest using dedicated packages such as `r CRANpkg("WGCNA")`.
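For intuition about the permutation test described above, the sketch below recomputes Spearman's rho for the _Fos_/_Jun_ pair on shuffled profiles.
This is a conceptual illustration with an arbitrary number of permutations, not the exact implementation in `correlatePairs()`.

```{r}
# Conceptual sketch of the permutation-based null distribution for one pair.
x <- logcounts(sce.hsc)["Fos",]
y <- logcounts(sce.hsc)["Jun",]
obs.rho <- cor(x, y, method="spearman")
null.rho <- replicate(1000, cor(sample(x), y, method="spearman"))

# Adding 1 to the numerator and denominator avoids p-values of
# zero from a finite number of permutations [@phipson2010permutation].
(sum(abs(null.rho) >= abs(obs.rho)) + 1) / (1000 + 1)
```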
__Comments from Aaron:__

- We suggest only computing correlations between a subset of genes of interest, known either _a priori_ or empirically defined, e.g., as HVGs.
Computing correlations across all genes will take too long; unnecessarily increase the severity of the multiple testing correction; and may prioritize strong but uninteresting correlations, e.g., between tightly co-regulated house-keeping genes.
- The `correlatePairs` function can also return gene-centric output by setting `per.gene=TRUE`.
This calculates a combined _p_-value [@simes1986improved] for each gene that indicates whether it is significantly correlated to any other gene.
From a statistical perspective, this is a more natural approach to correcting for multiple testing when genes, rather than pairs of genes, are of interest.
- The `Limited` field indicates whether the _p_-value was lower-bounded by the number of permutations.
If this is `TRUE` for any non-significant gene at the chosen FDR threshold, consider increasing the number of permutations to improve power.

# Using parallel analysis to choose the number of PCs

An alternative strategy for choosing the number of PCs is to use the `parallelPCA()` function.
This performs Horn's parallel analysis [@horn1965rationale], which involves permuting the matrix and repeating the PCA to determine the variance explained by each PC under a random null model.
Repeated permutations can be used to obtain "p-values" for each PC [@buja1992remarks].
PCs are discarded if they do not explain significantly more variance than expected under the null.
This is demonstrated below using the batch-corrected expression values from the 416B dataset.
To focus on relevant features, we only use genes with positive biological components from the variability analysis above.
This procedure relies on random permutations, so setting the random seed is necessary for reproducible results.

```{r}
set.seed(1000)
npcs <- parallelPCA(sce.416B, assay.type="corrected",
    subset.row=comb.out$bio > 0, value="n")
as.integer(npcs)
```

Parallel analysis tends to yield more accurate estimates of the true rank of the matrix than `denoisePCA()`.
It is also applicable to expression values that have been transformed such that the gene-wise variances are distorted (and thus `denoisePCA()` cannot be used).
This means that parallel analysis can be used to choose the number of PCs after certain non-linear operations like batch correction [@haghverdi2018batch].
However, it is obviously much slower than `denoisePCA()` due to the need for multiple permutations.

# Blocking on the cell cycle phase

Cell cycle phase is usually uninteresting in studies focusing on other aspects of biology.
However, the effects of cell cycle on the expression profile can mask other effects and interfere with the interpretation of the results.
This cannot be avoided by simply removing cell cycle marker genes, as the cell cycle can affect a substantial number of other transcripts [@buettner2015computational].
Rather, more sophisticated strategies are required, one of which is demonstrated below using data from a study of T Helper 2 (T~H~2) cells [@mahata2014singlecell].
@buettner2015computational have already applied quality control and normalized the data, so we can use them directly as log-expression values (accessible as Supplementary Data 1 of https://dx.doi.org/10.1038/nbt.3102).
```{r}
library(readxl)
incoming <- as.data.frame(read_excel("nbt.3102-S7.xlsx", sheet=1))
rownames(incoming) <- incoming[,1]
incoming <- incoming[,-1]
incoming <- incoming[,!duplicated(colnames(incoming))] # Remove duplicated genes.
sce.th2 <- SingleCellExperiment(list(logcounts=t(incoming)))
```

We empirically identify the cell cycle phase using the pair-based classifier in `cyclone`.
The majority of cells in Figure \@ref(fig:phaseplotth2) seem to lie in G1 phase, with small numbers of cells in the other phases.

```{r phaseplotth2, message=FALSE, fig.cap="Cell cycle phase scores from applying the pair-based classifier on the T~H~2 dataset, where each point represents a cell."}
library(org.Mm.eg.db)
ensembl <- mapIds(org.Mm.eg.db, keys=rownames(sce.th2), keytype="SYMBOL", column="ENSEMBL")

set.seed(100)
mm.pairs <- readRDS(system.file("exdata", "mouse_cycle_markers.rds", package="scran"))
assignments <- cyclone(sce.th2, mm.pairs, gene.names=ensembl, assay.type="logcounts")
plot(assignments$score$G1, assignments$score$G2M, xlab="G1 score", ylab="G2/M score", pch=16)
```

We can block directly on the phase scores in downstream analyses.
This is more graduated than using a strict assignment of each cell to a specific phase, as the magnitude of the score considers the uncertainty of the assignment.
The phase covariates in the design matrix will absorb any phase-related effects on expression such that they will not affect estimation of the effects of other experimental factors.
Users should also ensure that the phase score is not confounded with other factors of interest.
For example, model fitting is not possible if all cells in one experimental condition are in one phase, and all cells in another condition are in a different phase.

```{r}
design <- model.matrix(~ G1 + G2M, assignments$score)
fit.block <- trendVar(sce.th2, design=design, parametric=TRUE, use.spikes=NA)
dec.block <- decomposeVar(sce.th2, fit.block)

library(limma)
sce.th2.block <- sce.th2
assay(sce.th2.block, "corrected") <- removeBatchEffect(
    logcounts(sce.th2), covariates=design[,-1])

sce.th2.block <- denoisePCA(sce.th2.block, technical=dec.block,
    assay.type="corrected")
dim(reducedDim(sce.th2.block, "PCA"))
```

The result of blocking on `design` is visualized with some PCA plots in Figure \@ref(fig:pcaplotth2).
Before removal, the distribution of cells along the first two principal components is strongly associated with their G1 and G2/M scores.
This is no longer the case after removal, which suggests that the cell cycle effect has been mitigated.

```{r pcaplotth2, fig.width=12, fig.asp=0.5, fig.cap="PCA plots before (left) and after (right) removal of the cell cycle effect in the T~H~2 dataset. Each cell is represented by a point with colour and size determined by the G1 and G2/M scores, respectively."}
sce.th2$G1score <- sce.th2.block$G1score <- assignments$score$G1
sce.th2$G2Mscore <- sce.th2.block$G2Mscore <- assignments$score$G2M

# Without blocking on phase score.
fit <- trendVar(sce.th2, parametric=TRUE, use.spikes=NA)
sce.th2 <- denoisePCA(sce.th2, technical=fit$trend)
out <- plotReducedDim(sce.th2, use_dimred="PCA", ncomponents=2, colour_by="G1score",
    size_by="G2Mscore") + fontsize + ggtitle("Before removal")

# After blocking on the phase score.
out2 <- plotReducedDim(sce.th2.block, use_dimred="PCA", ncomponents=2,
    colour_by="G1score", size_by="G2Mscore") + fontsize +
    ggtitle("After removal")
multiplot(out, out2, cols=2)
```

As an aside, this dataset contains cells at various stages of differentiation [@mahata2014singlecell].
This is an ideal use case for diffusion maps, which perform dimensionality reduction along a continuous process.
In Figure \@ref(fig:diffusionth2), cells are arranged along a trajectory in the low-dimensional space.
The first diffusion component is likely to correspond to T~H~2 differentiation, given that the expression of a key regulator _Gata3_ [@zhu2006gata3] changes from left to right.

```{r diffusionth2, fig.cap="A diffusion map for the T~H~2 dataset, where each cell is coloured by its expression of _Gata3_. A larger `sigma` is used compared to the default value to obtain a smoother plot."}
plotDiffusionMap(sce.th2.block, colour_by="Gata3",
    run_args=list(use_dimred="PCA", sigma=25)) + fontsize
```

# Concluding remarks

All software packages used in this workflow are publicly available from the Comprehensive R Archive Network (https://cran.r-project.org) or the Bioconductor project (http://bioconductor.org).
The specific version numbers of the packages used are shown below, along with the version of the R installation.

```{r}
sessionInfo()
```

# References