fix typo
rcorces committed May 23, 2020
1 parent 48b05ba commit 0180acf
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion bookdown/04_ReducedDims.Rmd
@@ -50,7 +50,7 @@ set.seed(1)

Dimensionality reduction with scATAC-seq is challenging due to the _sparsity_ of the data. In scATAC-seq, a particular site can be accessible on one allele, both alleles, or neither allele. Even in higher-quality scATAC-seq data, the majority of accessible regions are not transposed, and this leads to many loci having 0 accessible alleles. Moreover, when we see (for example) three Tn5 insertions within a single peak region in a single cell, the sparsity of the data prevents us from confidently determining that this site in this cell is actually three times more accessible than the same site in another cell that has only one insertion. For this reason, many analytical strategies work on a binarized scATAC-seq data matrix. This binarized matrix still ends up being mostly 0s because transposition is rare. However, it is important to note that a 0 in scATAC-seq could mean "non-accessible" or "not sampled", and these two inferences are very different from a biological standpoint. Because of this, the 1s have information and the 0s do not. This low information content is what makes our scATAC-seq data _sparse_.
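
To make the binarization step concrete, here is a minimal sketch in R; the simulated `counts` matrix is a hypothetical stand-in for real data, not ArchR's internal code.

```r
library(Matrix)

# hypothetical simulated peaks-by-cells insertion counts matrix
# (5,000 peaks x 200 cells, mostly zeros) standing in for real data
set.seed(1)
counts <- rsparsematrix(nrow = 5000, ncol = 200, density = 0.02,
                        rand.x = function(n) sample(1:4, n, replace = TRUE))

# binarize: any site with >= 1 insertion is marked accessible (1); all else stays 0
binarized <- counts
binarized@x <- rep(1, length(binarized@x))
```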

If you were to perform a standard dimensionality reduction, like Principal Component Analysis, on this sparse insertion counts matrix and plot the top two principal components, you would not obtain the desired result because the sparsity causes high inter-cell similarity at all of the 0 positions. To get around this issue, we use a layered dimensionality reduction approach. First, we use Latent Semantic Indexing (LSI), an approach from natural language processing that was originally designed to assess document similarity based on word counts. This solution was created for natural language processing because that data is also sparse and noisy (many different words and many low-frequency words). LSI was first introduced for scATAC-seq by [Cusanovich et al. (Science 2015)](https://www.ncbi.nlm.nih.gov/pubmed/25953818). In the case of scATAC-seq, different samples are the _documents_ and different regions/peaks are the _words_. We first calculate the term frequency by depth normalization per single cell. These values are then normalized by the inverse document frequency, which weights features by how often they occur in order to identify features that are more "specific" rather than commonly accessible. The resultant term frequency-inverse document frequency (TF-IDF) matrix reflects how important a _word_ (aka region/peak) is to a _document_ (aka sample). Then, through a technique called singular value decomposition (SVD), the most _valuable_ information across samples is identified and represented in a lower-dimensional space. LSI allows you to reduce the dimensionality of the sparse insertion counts matrix from many thousands of dimensions to tens or hundreds. Then, a more conventional dimensionality reduction technique, such as Uniform Manifold Approximation and Projection (UMAP) or t-distributed stochastic neighbor embedding (t-SNE), can be used to visualize the data. In ArchR, these visualization methods are referred to as _embeddings_.
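
As a rough illustration of the TF-IDF and SVD steps described above, the sketch below shows the general LSI idea rather than ArchR's exact implementation; it continues from the hypothetical `binarized` matrix above and assumes the `irlba` package is available for the truncated SVD.

```r
library(Matrix)
library(irlba)  # truncated SVD for large sparse matrices (assumed installed)

# start from the hypothetical binarized peaks-by-cells matrix sketched above,
# dropping peaks that are never accessible so the IDF term is well defined
mat <- binarized[Matrix::rowSums(binarized) > 0, ]

# term frequency: depth-normalize each cell (column) by its total accessible sites
tf <- mat %*% Diagonal(x = 1 / Matrix::colSums(mat))

# inverse document frequency: up-weight peaks that are accessible in fewer cells
idf <- log(1 + ncol(mat) / Matrix::rowSums(mat))
tfidf <- Diagonal(x = idf) %*% tf

# truncated SVD keeps only the top components, reducing thousands of peak
# dimensions to a handful (here 30) per cell
svd_res <- irlba(tfidf, nv = 30)
lsi <- svd_res$v %*% diag(svd_res$d)  # cells x 30 reduced representation

# a UMAP or t-SNE embedding for visualization would then be computed on `lsi`
```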

## ArchR's LSI Implementation
