In computational complexity theory, P is one of the most fundamental complexity classes. It contains all decision problems which can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.
P is often taken to be the class of computational problems which are "efficiently solvable" or "tractable", although there are potentially larger classes that are also considered tractable, such as RP and BPP. Also, there exist problems in P which are intractable in practical terms; for example, some require at least n^1000000 operations. See the article on complexity classes for a discussion of even harder problems.
P is known to contain many natural problems, including the decision versions of linear programming, calculating the greatest common divisor, and finding a maximum matching. In 2002, it was shown that the problem of determining if a number is prime is in P. The related class of function problems is FP.
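As an illustration of a problem in P, the greatest common divisor can be computed by Euclid's algorithm, whose running time is polynomial in the bit-length of the input (a minimal sketch; the function name is our own):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm. Every two iterations at least halve b, so the
    loop runs O(log min(a, b)) times -- polynomial in the number of bits
    of the input, placing the problem in P."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```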
Several natural problems are complete for P, including st-connectivity (or reachability) on alternating graphs. The article on P-complete problems lists further relevant problems in P.
A generalization of P is NP, which is the class of languages decidable in polynomial time on a non-deterministic Turing machine. We then trivially have that P is a subset of NP. Though unproven, most experts believe this is a strict subset; the relationship between the complexity classes P and NP is a famous unsolved question in theoretical computer science.
P is also known to be at least as large as L, the class of problems decidable in a logarithmic amount of memory space. A decider using O(log n) space cannot use more than 2^O(log n) = n^O(1) time, because this is the total number of possible configurations; thus, L is a subset of P. Another important open problem is whether L = P. We do know that P = AL, the set of problems solvable in logarithmic memory by alternating Turing machines. P is also known to be no larger than PSPACE, the class of problems decidable in polynomial space. Again, whether P = PSPACE is an open problem. To summarize:

L ⊆ AL = P ⊆ NP ⊆ PSPACE ⊆ EXPTIME
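The configuration-counting argument behind L ⊆ P can be spelled out. A deterministic decider with state set Q, a read-only input of length n, and a work tape of at most c log n cells over tape alphabet Γ (the names Q, Γ, and the constant c are generic, introduced here for the sketch) has at most

```latex
|Q| \cdot (n+1) \cdot (c \log n) \cdot |\Gamma|^{c \log n}
  \;=\; O\!\left(n \log n \cdot n^{\,c \log_2 |\Gamma|}\right)
  \;=\; n^{O(1)}
```

distinct configurations (states × input-head positions × work-head positions × work-tape contents). A deterministic machine that halts can never repeat a configuration, so its running time is bounded by this polynomial.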
Here, EXPTIME is the class of problems solvable in exponential time. Of all the classes shown above, only two strict containments are known: P is strictly contained in EXPTIME (by the time hierarchy theorem), and L is strictly contained in PSPACE (by the space hierarchy theorem).
The most difficult problems in P are P-complete problems.
Another generalization of P is P/poly, or Nonuniform Polynomial-Time. If a problem is in P/poly, then it can be solved in deterministic polynomial time provided that an advice string is given that depends only on the length of the input. Unlike for NP, however, the polynomial-time machine doesn't need to detect fraudulent advice strings; it is not a verifier. P/poly is a large class containing nearly all practical algorithms, including all of BPP. If it contains NP, then the polynomial hierarchy collapses to the second level. On the other hand, it also contains some impractical problems, including some undecidable problems such as the unary version of any undecidable problem.
In 1999, Jin-Yi Cai and D. Sivakumar, building on work by Mitsunori Ogihara, showed that if there exists a sparse language which is P-complete, then L = P.
Polynomial-time algorithms are closed under composition. Intuitively, this says that if one writes a function which is polynomial-time assuming that function calls are constant-time, and if those called functions themselves require polynomial time, then the entire algorithm takes polynomial time. One consequence of this is that P is low for itself. This is also one of the main reasons that P is considered to be a machine-independent class; any machine "feature", such as random access, which can be simulated in polynomial time can simply be composed with the main polynomial-time algorithm to reduce it to a polynomial-time algorithm on a more basic machine.
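Closure under composition can be sketched concretely (the helper names below are our own illustration): a routine that makes polynomially many calls to a polynomial-time subroutine is itself polynomial-time.

```python
def is_coprime(a: int, b: int) -> bool:
    # Polynomial-time subroutine: Euclid's algorithm.
    while b:
        a, b = b, a % b
    return a == 1

def count_coprime_pairs(nums: list[int]) -> int:
    """Makes O(len(nums)^2) calls to is_coprime, each polynomial-time,
    so by closure under composition the whole routine runs in
    polynomial time -- even though is_coprime is not constant-time."""
    return sum(
        1
        for i in range(len(nums))
        for j in range(i + 1, len(nums))
        if is_coprime(nums[i], nums[j])
    )

print(count_coprime_pairs([2, 3, 4]))  # 2: the pairs (2,3) and (3,4)
```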
Some problems are known to be solvable in polynomial time, but no concrete algorithm is known for solving them. For example, the Robertson–Seymour theorem guarantees that there is a finite list of forbidden minors that characterizes (for example) the set of graphs that can be embedded on a torus; moreover, Robertson and Seymour showed that there is an O(n^3) algorithm for determining whether a graph has a given graph as a minor. This yields a nonconstructive proof that there is a polynomial-time algorithm for determining if a given graph can be embedded on a torus, despite the fact that no concrete algorithm is known for this problem.
In descriptive complexity, P can be described as the class of problems expressible in FO(LFP), first-order logic with a least fixed point operator added to it. In his 1999 textbook on descriptive complexity, Immerman ascribes this result to Vardi (1982) and to Immerman (1982).
Kozen states that Cobham and Edmonds are "generally credited with the invention of the notion of polynomial time".