In computational complexity theory, P is one of the most fundamental complexity classes. It contains all decision problems which can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

P is often taken to be the class of computational problems which are "efficiently solvable" or "tractable", although there are potentially larger classes that are also considered tractable, such as RP and BPP. Also, there exist problems in P which are intractable in practical terms; for example, some require at least $n^{1000000}$ operations.

## Notable problems in P

P is known to contain many natural problems, including the decision versions of linear programming, calculating the greatest common divisor, and finding a maximum matching. In 2002, it was shown that the problem of determining if a number is prime is in P.[1] The related class of function problems is FP.
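As an illustration, the greatest common divisor mentioned above can be computed by the Euclidean algorithm in time polynomial in the number of digits of the inputs. A minimal sketch in Python (illustrative only, not part of the original text):

```python
def gcd(a, b):
    """Euclidean algorithm: the smaller argument at least halves
    every two iterations, so the loop runs O(log min(a, b)) times,
    i.e. polynomially in the number of digits of the input."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```

Each arithmetic step on d-digit numbers also takes time polynomial in d, so the whole computation is polynomial in the input length.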

Several natural problems are complete for P, including st-connectivity (or reachability) on alternating graphs.[2] The article on P-complete problems lists further relevant problems in P.
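For intuition, ordinary st-connectivity on a directed graph (without the alternation that makes the problem above P-complete) is easily decided in polynomial time by breadth-first search; a sketch, with an assumed adjacency-list representation:

```python
from collections import deque

def reachable(graph, s, t):
    """Decide st-connectivity in a directed graph given as a
    dict mapping each vertex to its list of successors.
    Breadth-first search visits each vertex and edge at most
    once, so this runs in O(V + E) time, well within P."""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

g = {0: [1], 1: [2], 2: [], 3: [0]}
print(reachable(g, 0, 2))  # True
print(reachable(g, 0, 3))  # False
```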

## Relationships to other classes

A generalization of P is NP, which is the class of languages decidable in polynomial time on a non-deterministic Turing machine. We then trivially have that P is a subset of NP. Though unproven, most experts believe this is a strict subset; the relationship between P and NP is a famous unsolved question in theoretical computer science.[3]
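NP can equivalently be characterized via polynomial-time verifiers: a language is in NP if membership can be checked in polynomial time given a suitable certificate. A sketch using subset sum, which is in NP but not known to be in P (the function name and representation are illustrative assumptions):

```python
def verify_subset_sum(nums, target, certificate):
    """Polynomial-time verifier for subset sum: the certificate
    is a claimed list of distinct indices into nums. Checking it
    takes linear time, even though no polynomial-time algorithm
    is known for *finding* such a certificate."""
    return (len(set(certificate)) == len(certificate)      # indices distinct
            and all(0 <= i < len(nums) for i in certificate)
            and sum(nums[i] for i in certificate) == target)

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # True: 4 + 5 = 9
```

The gap between checking a solution (easy) and finding one (not known to be easy) is exactly the gap the P versus NP question asks about.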

P is also known to be at least as large as L, the class of problems decidable in a logarithmic amount of memory space. A decider using $O(\log n)$ space cannot use more than $2^{O(\log n)} = n^{O(1)}$ time, because this is the total number of possible configurations; thus, L is a subset of P. Another important open problem is whether L = P. We do know that P = AL, the set of problems solvable in logarithmic memory by alternating Turing machines. P is also known to be no larger than PSPACE, the class of problems decidable in polynomial space. Again, whether P = PSPACE is an open problem. To summarize:

$\mbox{L} \subseteq \mbox{AL} = \mbox{P} \subseteq \mbox{NP} \subseteq \mbox{PSPACE} \subseteq \mbox{EXPTIME}$

Here, EXPTIME is the class of problems solvable in exponential time. Of all the classes shown above, only two strict containments are known:

• P is strictly contained in EXPTIME. Consequently, all EXPTIME-hard problems lie outside P, and at least one of the containments to the right of P above is strict (in fact, it is widely believed that all three are strict).
• L is strictly contained in PSPACE.

The most difficult problems in P are P-complete problems.

Another generalization of P is P/poly, or Nonuniform Polynomial-Time. If a problem is in P/poly, then it can be solved in deterministic polynomial time provided that an advice string is given that depends only on the length of the input. Unlike for NP, however, the polynomial-time machine doesn't need to detect fraudulent advice strings; it is not a verifier. P/poly is a large class containing nearly all practical algorithms, including all of BPP. If it contains NP, then the polynomial hierarchy collapses to the second level. On the other hand, it also contains some impractical algorithms, including some undecidable problems such as the unary version of any undecidable problem.
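The unary-language example can be made concrete: for a unary language, the advice for inputs of length n can simply be the single membership bit, so the polynomial-time machine only has to output its advice. A toy sketch of this model (the names and the example language are illustrative; for a genuinely undecidable language the advice table would be uncomputable, but the machine using it is still trivially polynomial-time):

```python
def ppoly_decide(x, advice_for_length):
    """Sketch of the P/poly model for a unary language:
    advice_for_length maps each input length to one membership
    bit, fixed in advance. Given the advice, deciding is O(1);
    the nonuniformity lives entirely in the table."""
    return advice_for_length[len(x)]

# Toy advice table for the (decidable) unary language {1^n : n odd}
advice = {n: n % 2 == 1 for n in range(10)}
print(ppoly_decide("111", advice))   # True
print(ppoly_decide("1111", advice))  # False
```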

In 1999, Jin-Yi Cai and D. Sivakumar, building on work by Mitsunori Ogihara, showed that if there exists a sparse language which is P-complete, then L = P.[4]

## Properties

Polynomial-time algorithms are closed under composition. Intuitively, this says that if one writes a function which is polynomial-time assuming that function calls are constant-time, and if those called functions themselves require polynomial time, then the entire algorithm takes polynomial time. One consequence of this is that P is low for itself. This is also one of the main reasons that P is considered to be a machine-independent class; any machine "feature", such as random access, which can be simulated in polynomial time can simply be composed with the main polynomial-time algorithm to reduce it to a polynomial-time algorithm on a more basic machine.
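The closure under composition can be made quantitative. As a sketch, under the simplifying assumption of a single subroutine: suppose algorithm $A$ runs in time $O(n^a)$ on inputs of length $n$ when each call to subroutine $B$ counts as one step, and $B$ runs in time $O(m^b)$ on inputs of length $m$. Since $A$ takes at most $O(n^a)$ steps, every input it can construct and pass to $B$ has length $O(n^a)$. The total running time is therefore at most

$O(n^a) + O(n^a) \cdot O\!\left((n^a)^b\right) = O\!\left(n^{a(b+1)}\right),$

which is again polynomial in $n$.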

## Pure existence proofs of polynomial-time algorithms

Some problems are known to be solvable in polynomial time, but no concrete algorithm is known for solving them. For example, the Robertson–Seymour theorem guarantees that there is a finite list of forbidden minors that characterizes (for example) the set of graphs that can be embedded on a torus; moreover, Robertson and Seymour showed that there is an $O(n^3)$ algorithm for determining whether a graph has a given graph as a minor. This yields a nonconstructive proof that there is a polynomial-time algorithm for determining if a given graph can be embedded on a torus, despite the fact that no concrete algorithm is known for this problem.

## Alternative characterizations

In descriptive complexity, P can be described as the problems expressible in FO(LFP), the class of first-order logic with a least fixed point operator added to it. In Immerman's 1999 textbook on descriptive complexity,[5] Immerman ascribes this result to Vardi 1982[6] and to Immerman 1982[7].
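A standard example of the extra power of the least fixed point operator (notation varies between textbooks; this is one common form): graph reachability is not expressible in plain first-order logic, but with LFP it can be written as

$\mathrm{Reach}(x,y) \equiv \left[\mathrm{LFP}_{R,u,v}\,\big(u = v \vee \exists z\,(E(u,z) \wedge R(z,v))\big)\right](x,y),$

the least relation $R$ closed under "equal vertices are connected" and "an edge followed by a connection is a connection".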

## History

Kozen[8] states that Cobham and Edmonds are "generally credited with the invention of the notion of polynomial time".

## References

1. ^ Manindra Agrawal, Neeraj Kayal, Nitin Saxena, "PRIMES is in P", Annals of Mathematics 160 (2004), no. 2, pp. 781–793.
2. ^ Immerman, Neil (1999). Descriptive Complexity. New York: Springer-Verlag. ISBN 0-387-98600-6.
3. ^ Johnsonbaugh, Richard; Schaefer, Marcus. Algorithms. Pearson Education, 2004, page 458. ISBN 0-02-360692-4.
4. ^ Jin-Yi Cai and D. Sivakumar. "Sparse hard sets for P: resolution of a conjecture of Hartmanis". Journal of Computer and System Sciences, volume 58, issue 2, pp. 280–296. 1999. ISSN 0022-0000.
5. ^ Immerman, Neil (1999). Descriptive Complexity. New York: Springer-Verlag, p. 66. ISBN 0-387-98600-6.
6. ^ Vardi, M. (1982). "Complexity of Relational Query Languages". 14th Symposium on Theory of Computation: 137–146.
7. ^ Immerman, Neil (1982). "Relational Queries Computable in Polynomial Time". 14th ACM STOC Symposium: 147–152. Revised version in Information and Control, 68 (1986), 86–104.
8. ^ Kozen, Dexter C. (2006). Theory of Computation. Springer, p. 4. ISBN 1-84628-297-7.
• C. Papadimitriou. Computational Complexity. Addison-Wesley, 1994. ISBN 0-201-53082-1.
• Complexity Zoo: P, P/poly
• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 34.1: Polynomial time, pp. 971–979.
• Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Section 7.2: The Class P, pp. 234–241.

© 2009 citizendia.org; parts available under the terms of GNU Free Documentation License, from http://en.wikipedia.org