Latent semantic analysis (LSA) is a technique in natural language processing, in particular in vectorial semantics, for analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms.

LSA was patented in 1988 (US Patent 4,839,853) by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called latent semantic indexing (LSI).

## Occurrence matrix

LSA can use a term-document matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms (typically stemmed words that appear in the documents) and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the element of the matrix is proportional to the number of times the term appears in each document, with rare terms upweighted to reflect their relative importance.
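As an illustration (a minimal sketch assuming scikit-learn is available; the toy corpus below is made up), such a tf-idf weighted term-document matrix can be built as follows:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus (illustrative only).
corpus = [
    "the car is driven on the road",
    "the truck is driven on the highway",
    "a flower grows in the garden",
]

vectorizer = TfidfVectorizer()
# fit_transform yields a sparse document-term matrix (documents as rows);
# transposing gives the term-document matrix with terms as rows and
# documents as columns, the orientation used throughout this article.
X = vectorizer.fit_transform(corpus).T

print(vectorizer.get_feature_names_out())  # the terms (rows of X)
print(X.toarray())                          # tf-idf weights
```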

This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.

LSA transforms the occurrence matrix into a relation between the terms and some concepts, and a relation between those concepts and the documents. Thus the terms and documents are now indirectly related through the concepts.

## Applications

The new concept space typically can be used to:

• Compare the documents in the concept space (data clustering, document classification).
• Find similar documents across languages, after analyzing a base set of translated documents (cross language retrieval).
• Find relations between terms (synonymy and polysemy).
• Given a query of terms, translate it into the concept space, and find matching documents (information retrieval).

Synonymy and polysemy are fundamental problems in natural language processing:

• Synonymy is the phenomenon where different words describe the same idea. Thus, a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query. For example, a search for "doctors" may not return a document containing the word "physicians", even though the words have the same meaning.
• Polysemy is the phenomenon where the same word has multiple meanings. So a search may retrieve irrelevant documents containing the desired words in the wrong meaning. For example, a botanist and a computer scientist looking for the word "tree" probably desire different sets of documents.

## Rank lowering

After the construction of the occurrence matrix, LSA finds a low-rank approximation to the term-document matrix. There could be various reasons for these approximations:

• The original term-document matrix is presumed too large for the computing resources; in this case, the approximated low-rank matrix is interpreted as an approximation (a "least and necessary evil").
• The original term-document matrix is presumed noisy: for example, anecdotal instances of terms are to be eliminated. From this point of view, the approximated matrix is interpreted as a de-noisified matrix (a better matrix than the original).
• The original term-document matrix is presumed overly sparse relative to the "true" term-document matrix. That is, the original matrix lists only the words actually in each document, whereas we might be interested in all words related to each document, generally a much larger set due to synonymy.

The consequence of the rank lowering is that some dimensions are combined and depend on more than one term:

{(car), (truck), (flower)} → {(1.3452 * car + 0.2828 * truck), (flower)}

This mitigates synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also mitigates polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense.
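A tiny numerical sketch (assuming NumPy; the toy counts are invented) of how the rank lowering combines the correlated "car" and "truck" dimensions while leaving "flower" largely separate:

```python
import numpy as np

# Toy counts: rows are the terms (car, truck, flower), columns are documents.
X = np.array([
    [2.0, 1.0, 0.0, 0.0],   # car
    [1.0, 2.0, 0.0, 0.0],   # truck
    [0.0, 0.0, 1.0, 2.0],   # flower
])

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only the k = 2 largest singular values: the best rank-2 approximation.
k = 2
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The first column of U mixes "car" and "truck" (their dimensions have been
# combined), while "flower" dominates a separate dimension.
print(np.round(U[:, :k], 3))
```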

## Derivation

Let X be a matrix where element (i,j) describes the occurrence of term i in document j (this can be, for example, the frequency). X will look like this:

$\begin{matrix} & \textbf{d}_j \\ & \downarrow \\\textbf{t}_i^T \rightarrow &\begin{bmatrix} x_{1,1} & \dots & x_{1,n} \\\vdots & \ddots & \vdots \\x_{m,1} & \dots & x_{m,n} \\\end{bmatrix}\end{matrix}$

Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:

$\textbf{t}_i^T = \begin{bmatrix} x_{i,1} & \dots & x_{i,n} \end{bmatrix}$

Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:

$\textbf{d}_j = \begin{bmatrix} x_{1,j} \\ \vdots \\ x_{m,j} \end{bmatrix}$

Now the dot product $\textbf{t}_i^T \textbf{t}_p$ between two term vectors gives the correlation between the terms over the documents. The matrix product $X X^T$ contains all these dot products. Element $(i,p)$ (which is equal to element $(p,i)$) contains the dot product $\textbf{t}_i^T \textbf{t}_p$ ($= \textbf{t}_p^T \textbf{t}_i$). Likewise, the matrix $X^T X$ contains the dot products between all the document vectors, giving their correlation over the terms: $\textbf{d}_j^T \textbf{d}_q = \textbf{d}_q^T \textbf{d}_j$.
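These two Gram matrices can be computed directly; a minimal sketch, assuming NumPy and an illustrative X:

```python
import numpy as np

# Illustrative term-document matrix X (rows: terms, columns: documents).
X = np.array([
    [2.0, 1.0, 0.0],
    [1.0, 2.0, 0.0],
    [0.0, 0.0, 3.0],
])

term_correlations = X @ X.T   # element (i, p) is t_i^T t_p
doc_correlations = X.T @ X    # element (j, q) is d_j^T d_q
```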

Now assume that there exists a decomposition of X such that U and V are orthonormal matrices and Σ is a diagonal matrix. This is called a singular value decomposition (SVD):

$X = U \Sigma V^T$

The matrix products giving us the term and document correlations then become

$\begin{matrix}X X^T &=& (U \Sigma V^T) (U \Sigma V^T)^T = (U \Sigma V^T) (V^{T^T} \Sigma^T U^T) = U \Sigma V^T V \Sigma^T U^T = U \Sigma \Sigma^T U^T \\X^T X &=& (U \Sigma V^T)^T (U \Sigma V^T) = (V^{T^T} \Sigma^T U^T) (U \Sigma V^T) = V \Sigma U^T U \Sigma V^T = V \Sigma^T \Sigma V^T\end{matrix}$

Since $\Sigma \Sigma^T$ and $\Sigma^T \Sigma$ are diagonal, we see that U must contain the eigenvectors of $X X^T$, while V must contain the eigenvectors of $X^T X$. Both products have the same non-zero eigenvalues, given by the non-zero entries of $\Sigma \Sigma^T$, or equally, by the non-zero entries of $\Sigma^T \Sigma$. Now the decomposition looks like this:

$\begin{matrix} & X & & & U & & \Sigma & & V^T \\ & (\textbf{d}_j) & & & & & & & (\hat \textbf{d}_j) \\ & \downarrow & & & & & & & \downarrow \\(\textbf{t}_i^T) \rightarrow &\begin{bmatrix} x_{1,1} & \dots & x_{1,n} \\\\\vdots & \ddots & \vdots \\\\x_{m,1} & \dots & x_{m,n} \\\end{bmatrix}&=&(\hat \textbf{t}_i^T) \rightarrow&\begin{bmatrix} \begin{bmatrix} \, \\ \, \\ \textbf{u}_1 \\ \, \\ \,\end{bmatrix} \dots\begin{bmatrix} \, \\ \, \\ \textbf{u}_l \\ \, \\ \, \end{bmatrix}\end{bmatrix}&\cdot&\begin{bmatrix} \sigma_1 & \dots & 0 \\\vdots & \ddots & \vdots \\0 & \dots & \sigma_l \\\end{bmatrix}&\cdot&\begin{bmatrix} \begin{bmatrix} & & \textbf{v}_1 & & \end{bmatrix} \\\vdots \\\begin{bmatrix} & & \textbf{v}_l & & \end{bmatrix}\end{bmatrix}\end{matrix}$

The values $\sigma_1, \dots, \sigma_l$ are called the singular values, and $u_1, \dots, u_l$ and $v_1, \dots, v_l$ the left and right singular vectors. Notice how the only part of U that contributes to $\textbf{t}_i$ is the i-th row. Let this row vector be called $\hat \textbf{t}_i$. Likewise, the only part of $V^T$ that contributes to $\textbf{d}_j$ is the j-th column, $\hat \textbf{d}_j$. These are not the eigenvectors, but depend on all the eigenvectors.

It turns out that when you select the k largest singular values, and their corresponding singular vectors from U and V, you get the rank k approximation to X with the smallest error (Frobenius norm). The amazing thing about this approximation is that not only does it have a minimal error, but it translates the term and document vectors into a concept space. The vector $\hat \textbf{t}_i$ then has k entries, each giving the occurrence of term i in one of the k concepts. Likewise, the vector $\hat \textbf{d}_j$ gives the relation between document j and each concept. We write this approximation as

$X_k = U_k \Sigma_k V_k^T$
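A sketch of computing this truncated decomposition with NumPy (the matrix shape and the choice of k are illustrative):

```python
import numpy as np

def truncated_svd(X, k):
    """Best rank-k factors of X: U_k (m x k), Sigma_k (k x k), V_k^T (k x n)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # singular values in descending order
    return U[:, :k], np.diag(s[:k]), Vt[:k, :]

# Illustrative term-document matrix: 100 terms, 30 documents, random values.
X = np.random.rand(100, 30)
U_k, Sigma_k, Vt_k = truncated_svd(X, k=5)

# Best rank-k approximation of X in the Frobenius norm.
X_k = U_k @ Sigma_k @ Vt_k

# Concept-space vectors as defined above:
#   row i of U_k is t_hat_i (term i over the k concepts),
#   column j of Vt_k is d_hat_j (document j over the k concepts).
term_concept = U_k      # shape (100, 5)
doc_concept = Vt_k.T    # shape (30, 5); row j is d_hat_j
```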

You can now do the following:

• See how related documents j and q are in the concept space by comparing the vectors $\hat \textbf{d}_j$ and $\hat \textbf{d}_q$ (typically by cosine similarity). This gives you a clustering of the documents.
• Compare terms i and p by comparing the vectors $\hat \textbf{t}_i$ and $\hat \textbf{t}_p$, giving you a clustering of the terms in the concept space.
• Given a query, view this as a mini document, and compare it to your documents in the concept space.

To do the latter, you must first translate your query into the concept space. It is then intuitive that you must use the same transformation that you use on your documents:

$\textbf{d}_j = U_k \Sigma_k \hat \textbf{d}_j$
$\hat \textbf{d}_j = \Sigma_k^{-1} U_k^T \textbf{d}_j$

This means that if you have a query vector q, you must do the translation $\hat \textbf{q} = \Sigma_k^{-1} U_k^T \textbf{q}$ before you compare it with the document vectors in the concept space. You can do the same for pseudo term vectors:

$\textbf{t}_i^T = \hat \textbf{t}_i^T \Sigma_k V_k^T$
$\hat \textbf{t}_i^T = \textbf{t}_i^T V_k \Sigma_k^{-1}$
$\hat \textbf{t}_i = \Sigma_k^{-1} V_k^T \textbf{t}_i$
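A sketch of the query folding and comparison step (assuming NumPy and the factors $U_k$, $\Sigma_k$, $V_k^T$ from a truncated SVD such as the one sketched above; the helper names are made up):

```python
import numpy as np

def fold_in_query(q, U_k, Sigma_k):
    """Translate a raw query term vector q into the concept space:
    q_hat = Sigma_k^{-1} U_k^T q."""
    return np.linalg.inv(Sigma_k) @ U_k.T @ q

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def rank_documents(q, U_k, Sigma_k, Vt_k):
    """Cosine-compare the folded-in query against every document's
    concept vector d_hat_j (the columns of V_k^T)."""
    q_hat = fold_in_query(q, U_k, Sigma_k)
    return [cosine(q_hat, Vt_k[:, j]) for j in range(Vt_k.shape[1])]
```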

## Implementation

The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory (Gorrell and Webb, 2005).

A fast, incremental, low-memory, large-matrix SVD algorithm has recently been developed (Brand, 2006). Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's (2006) algorithm provides an exact solution.
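For large sparse term-document matrices the truncated SVD is commonly obtained with an iterative Lanczos-style solver; a sketch using SciPy's ARPACK-based svds (a tooling assumption for illustration, not the specific algorithms of the papers cited above):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Illustrative sparse term-document matrix (10,000 terms x 2,000 documents).
X = sparse_random(10000, 2000, density=0.001, format="csr", random_state=0)

# svds uses an iterative (Lanczos/ARPACK-style) method, so the full dense
# matrix never has to be formed; k is the number of retained concepts.
U_k, s_k, Vt_k = svds(X, k=100)

# svds returns singular values in ascending order; reorder them descending.
order = np.argsort(s_k)[::-1]
U_k, s_k, Vt_k = U_k[:, order], s_k[order], Vt_k[order, :]
```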

## Limitations

LSA has two drawbacks:

• The resulting dimensions might be difficult to interpret. For instance, in
{(car), (truck), (flower)} → {(1.3452 * car + 0.2828 * truck), (flower)}
the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to
{(car), (bottle), (flower)} → {(1.3452 * car + 0.2828 * bottle), (flower)}
will occur. This leads to results which can be justified on the mathematical level, but have no interpretable meaning in natural language.
• The probabilistic model of LSA does not match observed data: LSA assumes that words and documents form a joint Gaussian model (ergodic hypothesis), while a Poisson distribution has been observed. Thus, a newer alternative is probabilistic latent semantic analysis, based on a multinomial model, which is reported to give better results than standard LSA.