In statistics and applications of statistics, normalization can have a range of meanings. In the simplest cases, normalization of ratings means adjusting values measured on different scales to a notionally common scale, often prior to averaging. In more complicated cases, normalization may refer to more sophisticated adjustments where the intention is to bring the entire probability distributions of adjusted values into alignment. In the case of normalization of scores in educational assessment, there may be an intention to align distributions to a normal distribution. A different approach to normalization of probability distributions is quantile normalization, where the quantiles of the different measures are brought into alignment. In another usage in statistics, normalization refers to the creation of shifted and scaled versions of statistics, where the intention is that these normalized values allow the comparison of corresponding normalized values for different datasets in a way that eliminates the effects of certain gross influences, as in an anomaly time series. Some types of normalization involve only a rescaling, to arrive at values relative to some size variable. In terms of levels of measurement, such ratios only make sense for ratio measurements (where ratios of measurements are meaningful), not interval measurements (where only distances are meaningful, but not ratios). In theoretical statistics, parametric normalization can often lead to pivotal quantities – functions whose sampling distribution does not depend on the parameters – and to ancillary statistics – pivotal quantities that can be computed from observations, without knowing parameters. ## History ### Standard score (Z-score) The concept of normalization emerged alongside the study of the normal distribution by Abraham De Moivre, Pierre-Simon Laplace, and Carl Friedrich Gauss from the 18th to the 19th century. As the name “standard” refers to the particular normal distribution with expectation zero and standard deviation one, that is, the standard normal distribution, normalization, in this case, “standardization”, was then used to refer to the rescaling of any distribution or data set to have mean zero and standard deviation one. While the study of normal distribution structured the process of standardization, the result of this process, also known as the Z-score, given by the difference between sample value and population mean divided by population standard deviation and measuring the number of standard deviations of a value from its population mean, was not formalized and popularized until Ronald Fisher and Karl Pearson elaborated the concept as part of the broader framework of statistical inference and hypothesis testing in the early 20th century. ### Student’s t-Statistic William Sealy Gosset initiated the adjustment of normal distribution and standard score on small sample size. Educated in Chemistry and Mathematics at Winchester and Oxford, Gosset was employed by Guinness Brewery, the biggest brewer in Ireland back then, and was tasked with precise quality control. It was through small-sample experiments that Gosset discovered that the distribution of the means using small-scaled samples slightly deviated from the distribution of the means using large-scaled samples – the normal distribution – and appeared “taller and narrower” in comparison. 
This finding was later published in a Guinness internal report titled The application of the "Law of Error" to the work of the brewery and was sent to Karl Pearson for further discussion, which later yielded a formal publication titled The probable error of a mean in 1908. Under Guinness Brewery's privacy restrictions, Gosset published the paper under the pseudonym "Student". Gosset's work was later refined and extended by Ronald Fisher into the form used today, and was popularized – alongside the names "Student's t-distribution", referring to the adjusted normal distribution Gosset proposed, and "Student's t-statistic", referring to the test statistic that measures the departure of the estimated value of a parameter from its hypothesized value divided by its standard error – through Fisher's publication Applications of "Student's" distribution.
### Feature Scaling
The rise of computers and multivariate statistics in the mid-20th century necessitated normalization to process data with different units, giving rise to feature scaling – a method used to rescale data to a fixed range – such as min-max scaling and robust scaling. This modern normalization process, especially targeting large-scale data, became more formalized in fields including machine learning, pattern recognition, and neural networks in the late 20th century.
### Batch Normalization
Batch normalization was proposed by Sergey Ioffe and Christian Szegedy in 2015 to enhance the efficiency of training in neural networks.
## Examples
There are various types of normalization in statistics – nondimensional ratios of errors, residuals, means and standard deviations, which are hence scale invariant – some of which may be summarized as follows. Note that in terms of levels of measurement, these ratios only make sense for ratio measurements (where ratios of measurements are meaningful), not interval measurements (where only distances are meaningful, but not ratios). See also Category:Statistical ratios.

| Name | Use |
| --- | --- |
| Standard score | Normalizing errors when population parameters are known. Works well for populations that are normally distributed. |
| Student's t-statistic | The departure of the estimated value of a parameter from its hypothesized value, normalized by its standard error. |
| Studentized residual | Normalizing residuals when parameters are estimated, particularly across different data points in regression analysis. |
| Standardized moment | Normalizing moments, using the standard deviation as a measure of scale. |
| Coefficient of variation | Normalizing dispersion, using the mean as a measure of scale, particularly for positive distributions such as the exponential distribution and Poisson distribution. |
| Min-max feature scaling | Feature scaling used to bring all values into the range [0, 1]; also called unity-based normalization. This can be generalized to restrict the range of values in the dataset to any arbitrary interval [a, b]. |

Note that some other ratios, such as the variance-to-mean ratio $$ \left(\frac{\sigma^2}{\mu}\right) $$ , are also used for normalization, but are not nondimensional: the units do not cancel, and thus the ratio has units and is not scale-invariant.
## Other types
Other non-dimensional normalizations that can be used with no assumptions on the distribution include:
- Assignment of percentiles. This is common on standardized tests. See also quantile normalization.
- Normalization by adding and/or multiplying by constants so values fall between 0 and 1.
This is used for probability density functions, with applications in fields such as quantum mechanics.
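As a concrete illustration of the simpler rescalings described above, the following minimal NumPy sketch (with made-up sample values; the variable names are illustrative) computes the standard score, min-max feature scaling, and the coefficient of variation for a small dataset:

```python
import numpy as np

# Hypothetical sample data, for illustration only.
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Standard score (z-score): (value - mean) / population standard deviation.
z = (x - x.mean()) / x.std()

# Min-max feature scaling: bring all values into the range [0, 1].
x_minmax = (x - x.min()) / (x.max() - x.min())

# Coefficient of variation: standard deviation divided by the mean.
cv = x.std() / x.mean()

print(z)         # mean approximately 0, standard deviation approximately 1
print(x_minmax)  # values lie in [0, 1]
print(cv)
```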
https://en.wikipedia.org/wiki/Normalization_%28statistics%29
In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in $$ \hat{\mathbf{v}} $$ (pronounced "v-hat"). The term normalized vector is sometimes used as a synonym for unit vector. The normalized vector û of a non-zero vector u is the unit vector in the direction of u, i.e., $$ \mathbf{\hat{u}} = \frac{\mathbf{u}}{\|\mathbf{u}\|}=\left(\frac{u_1}{\|\mathbf{u}\|}, \frac{u_2}{\|\mathbf{u}\|}, \dots , \frac{u_n}{\|\mathbf{u}\|}\right) $$ where ‖u‖ is the norm (or length) of $$ \mathbf{u} = (u_1, u_2, \dots, u_n) $$ , that is, $$ \|\mathbf{u}\| = \sqrt{u_1^2 + u_2^2 + \cdots + u_n^2} $$ . The proof is the following: $$ \|\mathbf{\hat{u}}\|=\sqrt{\left(\frac{u_1}{\sqrt{u_1^2+\cdots+u_n^2}}\right)^2+\cdots+\left(\frac{u_n}{\sqrt{u_1^2+\cdots+u_n^2}}\right)^2}=\sqrt{\frac{u_1^2+\cdots+u_n^2}{u_1^2+\cdots+u_n^2}}=\sqrt{1}=1 $$ A unit vector is often used to represent directions, such as normal directions. Unit vectors are often chosen to form the basis of a vector space, and every vector in the space may be written as a linear combination of unit vectors.
## Orthogonal coordinates
### Cartesian coordinates
Unit vectors may be used to represent the axes of a Cartesian coordinate system. For instance, the standard unit vectors in the direction of the x, y, and z axes of a three-dimensional Cartesian coordinate system are $$ \mathbf{\hat{x}} = \begin{bmatrix}1\\0\\0\end{bmatrix}, \,\, \mathbf{\hat{y}} = \begin{bmatrix}0\\1\\0\end{bmatrix}, \,\, \mathbf{\hat{z}} = \begin{bmatrix}0\\0\\1\end{bmatrix} $$ They form a set of mutually orthogonal unit vectors, typically referred to as a standard basis in linear algebra. They are often denoted using common vector notation (e.g., x or $$ \vec{x} $$ ) rather than standard unit vector notation (e.g., x̂). In most contexts it can be assumed that x, y, and z (or $$ \vec{x}, $$ $$ \vec{y}, $$ and $$ \vec{z} $$ ) are versors of a 3-D Cartesian coordinate system. The notations (î, ĵ, k̂), (x̂1, x̂2, x̂3), (êx, êy, êz), or (ê1, ê2, ê3), with or without hat, are also used, particularly in contexts where i, j, k might lead to confusion with another quantity (for instance with index symbols such as i, j, k, which are used to identify an element of a set or array or sequence of variables). When a unit vector in space is expressed in Cartesian notation as a linear combination of x, y, z, its three scalar components can be referred to as direction cosines. The value of each component is equal to the cosine of the angle formed by the unit vector with the respective basis vector. This is one of the methods used to describe the orientation (angular position) of a straight line, segment of straight line, oriented axis, or segment of oriented axis (vector).
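The normalization formula û = u/‖u‖ translates directly into code. The following minimal NumPy sketch (the function name and the zero-vector guard are illustrative choices) normalizes a vector and confirms that the result has length 1, as in the proof above:

```python
import numpy as np

def normalize(u: np.ndarray) -> np.ndarray:
    """Return the unit vector u / ||u|| in the direction of u."""
    norm = np.linalg.norm(u)
    if norm == 0:
        raise ValueError("the zero vector has no direction")
    return u / norm

u = np.array([3.0, 4.0, 0.0])
u_hat = normalize(u)
print(u_hat)                  # [0.6, 0.8, 0.0]
print(np.linalg.norm(u_hat))  # 1.0
```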
### Cylindrical coordinates The three orthogonal unit vectors appropriate to cylindrical symmetry are: - $$ \boldsymbol{\hat{\rho}} $$ (also designated $$ \mathbf{\hat{e}} $$ or $$ \boldsymbol{\hat s} $$ ), representing the direction along which the distance of the point from the axis of symmetry is measured; - $$ \boldsymbol{\hat \varphi} $$ , representing the direction of the motion that would be observed if the point were rotating counterclockwise about the symmetry axis; - $$ \mathbf{\hat{z}} $$ , representing the direction of the symmetry axis; They are related to the Cartesian basis $$ \hat{x} $$ , $$ \hat{y} $$ , $$ \hat{z} $$ by: $$ \boldsymbol{\hat{\rho}} = \cos(\varphi)\mathbf{\hat{x}} + \sin(\varphi)\mathbf{\hat{y}} $$ $$ \boldsymbol{\hat \varphi} = -\sin(\varphi) \mathbf{\hat{x}} + \cos(\varphi) \mathbf{\hat{y}} $$ $$ \mathbf{\hat{z}} = \mathbf{\hat{z}}. $$ The vectors $$ \boldsymbol{\hat{\rho}} $$ and $$ \boldsymbol{\hat \varphi} $$ are functions of $$ \varphi, $$ and are not constant in direction. When differentiating or integrating in cylindrical coordinates, these unit vectors themselves must also be operated on. The derivatives with respect to $$ \varphi $$ are: $$ \frac{\partial \boldsymbol{\hat{\rho}}} {\partial \varphi} = -\sin \varphi\mathbf{\hat{x}} + \cos \varphi\mathbf{\hat{y}} = \boldsymbol{\hat \varphi} $$ $$ \frac{\partial \boldsymbol{\hat \varphi}} {\partial \varphi} = -\cos \varphi\mathbf{\hat{x}} - \sin \varphi\mathbf{\hat{y}} = -\boldsymbol{\hat{\rho}} $$ $$ \frac{\partial \mathbf{\hat{z}}} {\partial \varphi} = \mathbf{0}. $$ ### Spherical coordinates The unit vectors appropriate to spherical symmetry are: $$ \mathbf{\hat{r}} $$ , the direction in which the radial distance from the origin increases; $$ \boldsymbol{\hat{\varphi}} $$ , the direction in which the angle in the x-y plane counterclockwise from the positive x-axis is increasing; and $$ \boldsymbol{\hat \theta} $$ , the direction in which the angle from the positive z axis is increasing. To minimize redundancy of representations, the polar angle $$ \theta $$ is usually taken to lie between zero and 180 degrees. It is especially important to note the context of any ordered triplet written in spherical coordinates, as the roles of $$ \boldsymbol{\hat \varphi} $$ and $$ \boldsymbol{\hat \theta} $$ are often reversed. Here, the American "physics" convention is used. This leaves the azimuthal angle $$ \varphi $$ defined the same as in cylindrical coordinates. The Cartesian relations are: $$ \mathbf{\hat{r}} = \sin \theta \cos \varphi\mathbf{\hat{x}} + \sin \theta \sin \varphi\mathbf{\hat{y}} + \cos \theta\mathbf{\hat{z}} $$ $$ \boldsymbol{\hat \theta} = \cos \theta \cos \varphi\mathbf{\hat{x}} + \cos \theta \sin \varphi\mathbf{\hat{y}} - \sin \theta\mathbf{\hat{z}} $$ $$ \boldsymbol{\hat \varphi} = - \sin \varphi\mathbf{\hat{x}} + \cos \varphi\mathbf{\hat{y}} $$ The spherical unit vectors depend on both $$ \varphi $$ and $$ \theta $$ , and hence there are 5 possible non-zero derivatives. For a more complete description, see Jacobian matrix and determinant. 
The non-zero derivatives are: $$ \frac{\partial \mathbf{\hat{r}}} {\partial \varphi} = -\sin \theta \sin \varphi\mathbf{\hat{x}} + \sin \theta \cos \varphi\mathbf{\hat{y}} = \sin \theta\boldsymbol{\hat \varphi} $$ $$ \frac{\partial \mathbf{\hat{r}}} {\partial \theta} =\cos \theta \cos \varphi\mathbf{\hat{x}} + \cos \theta \sin \varphi\mathbf{\hat{y}} - \sin \theta\mathbf{\hat{z}}= \boldsymbol{\hat \theta} $$ $$ \frac{\partial \boldsymbol{\hat{\theta}}} {\partial \varphi} =-\cos \theta \sin \varphi\mathbf{\hat{x}} + \cos \theta \cos \varphi\mathbf{\hat{y}} = \cos \theta\boldsymbol{\hat \varphi} $$ $$ \frac{\partial \boldsymbol{\hat{\theta}}} {\partial \theta} = -\sin \theta \cos \varphi\mathbf{\hat{x}} - \sin \theta \sin \varphi\mathbf{\hat{y}} - \cos \theta\mathbf{\hat{z}} = -\mathbf{\hat{r}} $$ $$ \frac{\partial \boldsymbol{\hat{\varphi}}} {\partial \varphi} = -\cos \varphi\mathbf{\hat{x}} - \sin \varphi\mathbf{\hat{y}} = -\sin \theta\mathbf{\hat{r}} -\cos \theta\boldsymbol{\hat{\theta}} $$
### General unit vectors
Common themes of unit vectors occur throughout physics and geometry, including:
- Tangent vector to a curve/flux line. (A normal vector to the plane containing and defined by the radial position vector and the angular tangential direction of rotation is necessary so that the vector equations of angular motion hold.)
- Normal to a surface tangent plane/plane containing the radial position component and angular tangential component (in terms of polar coordinates).
- Binormal vector to the tangent and normal.
- Parallel to some axis/line: one unit vector aligned parallel to a principal direction, with a perpendicular unit vector in any radial direction relative to the principal line.
- Perpendicular to some axis/line in some radial direction.
- Possible angular deviation relative to some axis/line: a unit vector at acute deviation angle φ (including 0 or π/2 rad) relative to a principal direction.
## Curvilinear coordinates
In general, a coordinate system may be uniquely specified using a number of linearly independent unit vectors $$ \mathbf{\hat{e}}_n $$ (the actual number being equal to the degrees of freedom of the space). For ordinary 3-space, these vectors may be denoted $$ \mathbf{\hat{e}}_1, \mathbf{\hat{e}}_2, \mathbf{\hat{e}}_3 $$ . It is nearly always convenient to define the system to be orthonormal and right-handed: $$ \mathbf{\hat{e}}_i \cdot \mathbf{\hat{e}}_j = \delta_{ij} $$ $$ \mathbf{\hat{e}}_i \cdot (\mathbf{\hat{e}}_j \times \mathbf{\hat{e}}_k) = \varepsilon_{ijk} $$ where $$ \delta_{ij} $$ is the Kronecker delta (which is 1 for i = j, and 0 otherwise) and $$ \varepsilon_{ijk} $$ is the Levi-Civita symbol (which is 1 for permutations ordered as ijk, and −1 for permutations ordered as kji).
## Right versor
A unit vector in $$ \mathbb{R}^3 $$ was called a right versor by W. R. Hamilton, as he developed his quaternions $$ \mathbb{H} \subset \mathbb{R}^4 $$ . In fact, he was the originator of the term vector, as every quaternion $$ q = s + v $$ has a scalar part s and a vector part v. If v is a unit vector in $$ \mathbb{R}^3 $$ , then the square of v in quaternions is −1. Thus by Euler's formula, $$ \exp (\theta v) = \cos \theta + v \sin \theta $$ is a versor in the 3-sphere. When θ is a right angle, the versor is a right versor: its scalar part is zero and its vector part v is a unit vector in $$ \mathbb{R}^3 $$ .
Thus the right versors extend the notion of imaginary units found in the complex plane, where the right versors now range over the 2-sphere $$ \mathbb{S}^2 \subset \mathbb{R}^3 \subset \mathbb{H} $$ rather than the pair {i, −i} in the complex plane. By extension, a right quaternion is a real multiple of a right versor.
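The Cartesian expressions for the spherical unit vectors above can be verified numerically. The following sketch (NumPy; the angle values are arbitrary choices for illustration) checks that r̂, θ̂, φ̂ form an orthonormal triad and that ∂r̂/∂θ = θ̂, one of the derivative identities listed above:

```python
import numpy as np

def spherical_basis(theta, phi):
    """Return (r_hat, theta_hat, phi_hat) in Cartesian components."""
    r_hat = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
    theta_hat = np.array([np.cos(theta) * np.cos(phi),
                          np.cos(theta) * np.sin(phi),
                          -np.sin(theta)])
    phi_hat = np.array([-np.sin(phi), np.cos(phi), 0.0])
    return r_hat, theta_hat, phi_hat

theta, phi = 0.7, 1.2                      # arbitrary angles
r_hat, theta_hat, phi_hat = spherical_basis(theta, phi)

# Orthonormality: the matrix of basis vectors should satisfy B^T B = I.
B = np.column_stack([r_hat, theta_hat, phi_hat])
print(np.allclose(B.T @ B, np.eye(3)))     # True

# Finite-difference check of d(r_hat)/d(theta) = theta_hat.
h = 1e-6
dr_dtheta = (spherical_basis(theta + h, phi)[0]
             - spherical_basis(theta - h, phi)[0]) / (2 * h)
print(np.allclose(dr_dtheta, theta_hat))   # True
```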
https://en.wikipedia.org/wiki/Unit_vector
In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics, is a collection of closely related formulations of classical mechanics. Analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. Analytical mechanics was developed by many scientists and mathematicians during the 18th century and onward, after Newtonian mechanics. Newtonian mechanics considers vector quantities of motion, particularly the accelerations, momenta, and forces of the constituents of the system; it can also be called vectorial mechanics. A scalar is specified by a magnitude alone, whereas a vector is specified by both a magnitude and a direction. The results of these two different approaches are equivalent, but the analytical mechanics approach has many advantages for complex problems. Analytical mechanics takes advantage of a system's constraints to solve problems. The constraints limit the degrees of freedom the system can have, and can be used to reduce the number of coordinates needed to solve for the motion. The formalism is well suited to arbitrary choices of coordinates, known in the context as generalized coordinates. The kinetic and potential energies of the system are expressed using these generalized coordinates or momenta, and the equations of motion can be readily set up; thus analytical mechanics allows numerous mechanical problems to be solved with greater efficiency than fully vectorial methods. It does not always work for non-conservative forces or dissipative forces like friction, in which case one may revert to Newtonian mechanics. Two dominant branches of analytical mechanics are Lagrangian mechanics (using generalized coordinates and corresponding generalized velocities in configuration space) and Hamiltonian mechanics (using coordinates and corresponding momenta in phase space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries. Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics. Rather it is a collection of equivalent formalisms which have broad application. In fact the same principles and formalisms can be used in relativistic mechanics and general relativity, and with some modifications, quantum mechanics and quantum field theory. Analytical mechanics is used widely, from fundamental physics to applied mathematics, particularly chaos theory. The methods of analytical mechanics apply to discrete particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinite degrees of freedom. The definitions and equations have a close analogy with those of mechanics.
## Motivation
The goal of mechanical theory is to solve mechanical problems, such as those that arise in physics and engineering.
Starting from a physical system—such as a mechanism or a star system—a mathematical model is developed in the form of a differential equation. The model can be solved numerically or analytically to determine the motion of the system. Newton's vectorial approach to mechanics describes motion with the help of vector quantities such as force, velocity, acceleration. These quantities characterise the motion of a body idealised as a "mass point" or a "particle" understood as a single point to which a mass is attached. Newton's method has been successfully applied to a wide range of physical problems, including the motion of a particle in Earth's gravitational field and the motion of planets around the Sun. In this approach, Newton's laws describe the motion by a differential equation and then the problem is reduced to the solving of that equation. When a mechanical system contains many particles, however (such as a complex mechanism or a fluid), Newton's approach is difficult to apply. Using a Newtonian approach is possible, under proper precautions, namely isolating each single particle from the others, and determining all the forces acting on it. Such analysis is cumbersome even in relatively simple systems. Newton thought that his third law "action equals reaction" would take care of all complications. This is false even for such simple system as rotations of a solid body. In more complicated systems, the vectorial approach cannot give an adequate description. The analytical approach simplifies problems by treating mechanical systems as ensembles of particles that interact with each other, rather considering each particle as an isolated unit. In the vectorial approach, forces must be determined individually for each particle, whereas in the analytical approach it is enough to know one single function which contains implicitly all the forces acting on and in the system. Such simplification is often done using certain kinematic conditions which are stated a priori. However, the analytical treatment does not require the knowledge of these forces and takes these kinematic conditions for granted. Still, deriving the equations of motion of a complicated mechanical system requires a unifying basis from which they follow. This is provided by various variational principles: behind each set of equations there is a principle that expresses the meaning of the entire set. Given a fundamental and universal quantity called action, the principle that this action be stationary under small variation of some other mechanical quantity generates the required set of differential equations. The statement of the principle does not require any special coordinate system, and all results are expressed in generalized coordinates. This means that the analytical equations of motion do not change upon a coordinate transformation, an invariance property that is lacking in the vectorial equations of motion. It is not altogether clear what is meant by 'solving' a set of differential equations. A problem is regarded as solved when the particles coordinates at time t are expressed as simple functions of t and of parameters defining the initial positions and velocities. However, 'simple function' is not a well-defined concept: nowadays, a function f(t) is not regarded as a formal expression in t (elementary function) as in the time of Newton but most generally as a quantity determined by t, and it is not possible to draw a sharp line between 'simple' and 'not simple' functions. 
If one speaks merely of 'functions', then every mechanical problem is solved as soon as it has been well stated in differential equations, because the initial conditions and t determine the coordinates at t. This is especially true at present, with modern methods of computer modelling that provide arithmetical solutions to mechanical problems to any desired degree of accuracy, the differential equations being replaced by difference equations. Still, though lacking precise definitions, it is obvious that the two-body problem has a simple solution, whereas the three-body problem has not. The two-body problem is solved by formulas involving parameters; their values can be changed to study the class of all solutions, that is, the mathematical structure of the problem. Moreover, an accurate mental or drawn picture can be made for the motion of two bodies, and it can be as real and accurate as the real bodies moving and interacting. In the three-body problem, parameters can also be assigned specific values; however, the solution at these assigned values or a collection of such solutions does not reveal the mathematical structure of the problem. As in many other problems, the mathematical structure can be elucidated only by examining the differential equations themselves. Analytical mechanics aims at even more: not at understanding the mathematical structure of a single mechanical problem, but that of a class of problems so wide that they encompass most of mechanics. It concentrates on systems to which Lagrangian or Hamiltonian equations of motion are applicable and that include a very wide range of problems indeed. Development of analytical mechanics has two objectives: (i) to increase the range of solvable problems by developing standard techniques with a wide range of applicability, and (ii) to understand the mathematical structure of mechanics. In the long run, however, (ii) can help (i) more than a concentration on specific problems for which methods have already been designed.
## Intrinsic motion
### Generalized coordinates and constraints
In Newtonian mechanics, one customarily uses all three Cartesian coordinates, or another 3D coordinate system, to refer to a body's position during its motion. In physical systems, however, some structure or other system usually constrains the body's motion from taking certain directions and pathways. So a full set of Cartesian coordinates is often unneeded, as the constraints determine the evolving relations among the coordinates, which relations can be modeled by equations corresponding to the constraints. In the Lagrangian and Hamiltonian formalisms, the constraints are incorporated into the motion's geometry, reducing the number of coordinates to the minimum needed to model the motion. These are known as generalized coordinates, denoted qi (i = 1, 2, 3...).
### Difference between curvilinear and generalized coordinates
Generalized coordinates incorporate constraints on the system. There is one generalized coordinate qi for each degree of freedom (for convenience labelled by an index i = 1, 2...N), i.e. each way the system can change its configuration, such as curvilinear lengths or angles of rotation. Generalized coordinates are not the same as curvilinear coordinates.
The number of curvilinear coordinates equals the dimension of the position space in question (usually 3 for 3d space), while the number of generalized coordinates is not necessarily equal to this dimension; constraints can reduce the number of degrees of freedom (hence the number of generalized coordinates required to define the configuration of the system), following the general rule that the number of generalized coordinates equals the number of coordinates defining the configuration minus the number of constraint equations. For a system with N degrees of freedom, the generalized coordinates can be collected into an N-tuple: $$ \mathbf{q} = (q_1, q_2, \dots, q_N) $$ and the time derivative (here denoted by an overdot) of this tuple gives the generalized velocities: $$ \frac{d\mathbf{q}}{dt} = \left(\frac{dq_1}{dt}, \frac{dq_2}{dt}, \dots, \frac{dq_N}{dt}\right) \equiv \mathbf{\dot{q}} = (\dot{q}_1, \dot{q}_2, \dots, \dot{q}_N) . $$
### D'Alembert's principle of virtual work
D'Alembert's principle states that infinitesimal virtual work done by a force across reversible displacements is zero, which is the work done by a force consistent with ideal constraints of the system. The idea of a constraint is useful – since this limits what the system can do, and can provide steps for solving for the motion of the system. The equation for D'Alembert's principle is: $$ \delta W = \boldsymbol{\mathcal{Q}} \cdot \delta\mathbf{q} = 0 \,, $$ where $$ \boldsymbol{\mathcal{Q}} = (\mathcal{Q}_1, \mathcal{Q}_2, \dots, \mathcal{Q}_N) $$ are the generalized forces (script Q instead of ordinary Q is used here to prevent conflict with canonical transformations below) and $$ \mathbf{q} $$ are the generalized coordinates. This leads to the generalized form of Newton's laws in the language of analytical mechanics: $$ \boldsymbol{\mathcal{Q}} = \frac{d}{dt} \left ( \frac {\partial T}{\partial \mathbf{\dot{q}}} \right ) - \frac {\partial T}{\partial \mathbf{q}}\,, $$ where T is the total kinetic energy of the system, and the notation $$ \frac {\partial}{\partial \mathbf{q}} = \left(\frac{\partial }{\partial q_1}, \frac{\partial }{\partial q_2}, \dots, \frac{\partial }{\partial q_N}\right) $$ is a useful shorthand (see matrix calculus for this notation).
### Constraints
If the curvilinear coordinate system is defined by the standard position vector $$ \mathbf{r} $$ , and if the position vector can be written in terms of the generalized coordinates and time in the form: $$ \mathbf{r} = \mathbf{r}(\mathbf{q}(t),t) $$ and this relation holds for all times t, then the constraints are called holonomic. The vector $$ \mathbf{r} $$ is explicitly dependent on t in cases when the constraints vary with time, not just because of $$ \mathbf{q}(t) $$ . For time-independent situations, the constraints are also called scleronomic; for time-dependent cases they are called rheonomic.
## Lagrangian mechanics
The introduction of generalized coordinates and the fundamental Lagrangian function: $$ L(\mathbf{q},\mathbf{\dot{q}},t) = T(\mathbf{q},\mathbf{\dot{q}},t) - V(\mathbf{q},\mathbf{\dot{q}},t) $$ where T is the total kinetic energy and V is the total potential energy of the entire system, then either following the calculus of variations or using the above formula, leads to the Euler–Lagrange equations: $$ \frac{d}{dt}\left(\frac{\partial L}{\partial \mathbf{\dot{q}}}\right) = \frac{\partial L}{\partial \mathbf{q}} \,, $$ which are a set of N second-order ordinary differential equations, one for each qi(t). This formulation identifies the actual path followed by the motion as a selection of the path over which the time integral of kinetic energy is least, assuming the total energy to be fixed, and imposing no conditions on the time of transit.
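To see the Euler–Lagrange equations in action, the following minimal SymPy sketch derives the equation of motion of a one-dimensional harmonic oscillator from L = T − V with a single generalized coordinate q. The oscillator is a standard textbook choice made here for brevity, not an example taken from this article, and the symbol names are illustrative:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)          # generalized coordinate q(t)
qdot = sp.diff(q, t)             # generalized velocity

T = sp.Rational(1, 2) * m * qdot**2   # kinetic energy
V = sp.Rational(1, 2) * k * q**2      # potential energy
L = T - V                             # Lagrangian

# Euler–Lagrange equation: d/dt(dL/dq̇) - dL/dq = 0
eom = sp.diff(sp.diff(L, qdot), t) - sp.diff(L, q)
print(sp.simplify(eom))          # k*q(t) + m*Derivative(q(t), (t, 2))
```

Setting the printed expression to zero recovers the familiar m q̈ + k q = 0.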
The Lagrangian formulation uses the configuration space of the system, the set of all possible generalized coordinates: $$ \mathcal{C} = \{ \mathbf{q} \in \mathbb{R}^N \}\,, $$ where $$ \mathbb{R}^N $$ is N-dimensional real space (see also set-builder notation). The particular solution to the Euler–Lagrange equations is called a (configuration) path or trajectory, i.e. one particular q(t) subject to the required initial conditions. The general solutions form a set of possible configurations as functions of time: $$ \{ \mathbf{q}(t) \in \mathbb{R}^N \,:\,t\ge 0,t\in \mathbb{R}\}\subseteq\mathcal{C}\,, $$ The configuration space can be defined more generally, and indeed more deeply, in terms of topological manifolds and the tangent bundle. Hamiltonian mechanics The Legendre transformation of the Lagrangian replaces the generalized coordinates and velocities (q, q̇) with (q, p); the generalized coordinates and the generalized momenta conjugate to the generalized coordinates: $$ \mathbf{p} = \frac{\partial L}{\partial \mathbf{\dot{q}}} = \left(\frac{\partial L}{\partial \dot{q}_1},\frac{\partial L}{\partial \dot{q}_2},\cdots \frac{\partial L}{\partial \dot{q}_N}\right) = (p_1, p_2\cdots p_N)\,, $$ and introduces the Hamiltonian (which is in terms of generalized coordinates and momenta): $$ H(\mathbf{q},\mathbf{p},t) = \mathbf{p}\cdot\mathbf{\dot{q}} - L(\mathbf{q},\mathbf{\dot{q}},t) $$ where $$ \cdot $$ denotes the dot product, also leading to Hamilton's equations: $$ \mathbf{\dot{p}} = - \frac{\partial H}{\partial \mathbf{q}}\,,\quad \mathbf{\dot{q}} = + \frac{\partial H}{\partial \mathbf{p}} \,, $$ which are now a set of 2N first-order ordinary differential equations, one for each qi(t) and pi(t). Another result from the Legendre transformation relates the time derivatives of the Lagrangian and Hamiltonian: $$ \frac{dH}{dt}=-\frac{\partial L}{\partial t}\,, $$ which is often considered one of Hamilton's equations of motion additionally to the others. The generalized momenta can be written in terms of the generalized forces in the same way as Newton's second law: $$ \mathbf{\dot{p}} = \boldsymbol{\mathcal{Q}}\,. $$ Analogous to the configuration space, the set of all momenta is the generalized momentum space: $$ \mathcal{M} = \{ \mathbf{p}\in\mathbb{R}^N \}\,. $$ ("Momentum space" also refers to "k-space"; the set of all wave vectors (given by De Broglie relations) as used in quantum mechanics and theory of waves) The set of all positions and momenta form the phase space: $$ \mathcal{P} = \mathcal{C}\times\mathcal{M} = \{ (\mathbf{q},\mathbf{p})\in\mathbb{R}^{2N} \} \,, $$ that is, the Cartesian product of the configuration space and generalized momentum space. A particular solution to Hamilton's equations is called a phase path, a particular curve (q(t),p(t)) subject to the required initial conditions. The set of all phase paths, the general solution to the differential equations, is the phase portrait: $$ \{ (\mathbf{q}(t),\mathbf{p}(t))\in\mathbb{R}^{2N}\,:\,t\ge0, t\in\mathbb{R} \} \subseteq \mathcal{P}\,, $$ ### The Poisson bracket All dynamical variables can be derived from position q, momentum p, and time t, and written as a function of these: A = A(q, p, t). 
If A(q, p, t) and B(q, p, t) are two scalar-valued dynamical variables, the Poisson bracket is defined by the generalized coordinates and momenta: $$ \begin{align} \{A,B\} \equiv \{A,B\}_{\mathbf{q},\mathbf{p}} & = \frac{\partial A}{\partial \mathbf{q}}\cdot\frac{\partial B}{\partial \mathbf{p}} - \frac{\partial A}{\partial \mathbf{p}}\cdot\frac{\partial B}{\partial \mathbf{q}}\\ & \equiv \sum_k \frac{\partial A}{\partial q_k}\frac{\partial B}{\partial p_k} - \frac{\partial A}{\partial p_k}\frac{\partial B}{\partial q_k}\,, \end{align} $$ Calculating the total derivative of one of these, say A, and substituting Hamilton's equations into the result leads to the time evolution of A: $$ \frac{dA}{dt} = \{A,H\} + \frac{\partial A}{\partial t}\,. $$ This equation in A is closely related to the equation of motion in the Heisenberg picture of quantum mechanics, in which classical dynamical variables become quantum operators (indicated by hats (^)), and the Poisson bracket is replaced by the commutator of operators via Dirac's canonical quantization: $$ \{A,B\} \rightarrow \frac{1}{i\hbar}[\hat{A},\hat{B}]\,. $$
## Properties of the Lagrangian and the Hamiltonian
Following are overlapping properties between the Lagrangian and Hamiltonian functions (Classical Mechanics, T.W.B. Kibble, European Physics Series, McGraw-Hill (UK), 1973).
- All the individual generalized coordinates qi(t), velocities q̇i(t) and momenta pi(t) for every degree of freedom are mutually independent. Explicit time-dependence of a function means the function actually includes time t as a variable in addition to the q(t), p(t), not simply as a parameter through q(t) and p(t), which would mean explicit time-independence.
- The Lagrangian is invariant under addition of the total time derivative of any function of q and t, that is: $$ L' = L +\frac{d}{dt}F(\mathbf{q},t) \,, $$ so the Lagrangians L and L′ describe exactly the same motion. In other words, the Lagrangian of a system is not unique.
- Analogously, the Hamiltonian is invariant under addition of the partial time derivative of any function of q, p and t; the transformed Hamiltonian is conventionally denoted K. This property is used in canonical transformations (see below).
- If the Lagrangian is independent of some generalized coordinates, then the generalized momenta conjugate to those coordinates are constants of the motion, i.e. are conserved; this follows immediately from Lagrange's equations. Such coordinates are "cyclic" or "ignorable". It can be shown that the Hamiltonian is also cyclic in exactly the same generalized coordinates.
- If the Lagrangian is time-independent the Hamiltonian is also time-independent (i.e. both are constant in time).
- If the kinetic energy is a homogeneous function of degree 2 of the generalized velocities, so that $$ T(\mathbf{q},\lambda\mathbf{\dot{q}},t) = \lambda^2 T(\mathbf{q},\mathbf{\dot{q}},t) $$ where λ is a constant, and the Lagrangian is explicitly time-independent, then the Hamiltonian will be the total conserved energy, equal to the total kinetic and potential energies of the system: $$ H = T + V = E\,. $$ This is the basis for the Schrödinger equation; inserting quantum operators directly obtains it.
## Principle of least action
Action is another quantity in analytical mechanics, defined as a functional of the Lagrangian: $$ \mathcal{S} = \int_{t_1}^{t_2} L(\mathbf{q},\mathbf{\dot{q}},t)\,dt\,. $$ A general way to find the equations of motion from the action is the principle of least action (Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC Publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3): the action is stationary, $$ \delta\mathcal{S} = 0\,, $$ where the departure t1 and arrival t2 times are fixed.
The term "path" or "trajectory" refers to the time evolution of the system as a path through configuration space $$ \mathcal{C} $$ , in other words q(t) tracing out a path in $$ \mathcal{C} $$ . The path for which action is least is the path taken by the system. From this principle, all equations of motion in classical mechanics can be derived. This approach can be extended to fields rather than a system of particles (see below), underlies the path integral formulation of quantum mechanics (Quantum Mechanics, E. Abers, Pearson Ed., Addison Wesley, Prentice Hall Inc, 2004; Quantum Field Theory, D. McMahon, McGraw Hill (US), 2008), and is used for calculating geodesic motion in general relativity (Relativity, Gravitation, and Cosmology, R.J.A. Lambourne, Open University, Cambridge University Press, 2010).
## Hamilton–Jacobi mechanics
### Canonical transformations
The invariance of the Hamiltonian (under addition of the partial time derivative of an arbitrary function of p, q, and t) allows the Hamiltonian in one set of coordinates q and momenta p to be transformed into a new set Q = Q(q, p, t) and P = P(q, p, t), in four possible ways, each generated by a function Gn. With the restriction on P and Q such that the transformed system is still Hamiltonian, the above transformations are called canonical transformations; each function Gn is called a generating function of the "nth kind" or "type-n". The transformation of coordinates and momenta can allow simplification for solving Hamilton's equations for a given problem. The choice of Q and P is completely arbitrary, but not every choice leads to a canonical transformation. One simple criterion for a transformation q → Q and p → P to be canonical is that the Poisson bracket $$ \{Q_i, P_i\}_{\mathbf{q},\mathbf{p}} $$ be unity for all i = 1, 2,...N. If this does not hold then the transformation is not canonical.
### The Hamilton–Jacobi equation
By setting the canonically transformed Hamiltonian K = 0 and the type-2 generating function equal to Hamilton's principal function (also the action $$ \mathcal{S} $$ ) plus an arbitrary constant C, the generalized momenta become $$ \mathbf{p} = \frac{\partial \mathcal{S}}{\partial \mathbf{q}} $$ and P is constant; the Hamilton–Jacobi equation (HJE) can then be derived from the type-2 canonical transformation: $$ H\left(\mathbf{q},\frac{\partial \mathcal{S}}{\partial \mathbf{q}},t\right) + \frac{\partial \mathcal{S}}{\partial t} = 0\,, $$ where H is the Hamiltonian as before. Another related function is Hamilton's characteristic function, used to solve the HJE by additive separation of variables for a time-independent Hamiltonian H. The study of the solutions of the Hamilton–Jacobi equations leads naturally to the study of symplectic manifolds and symplectic topology. In this formulation, the solutions of the Hamilton–Jacobi equations are the integral curves of Hamiltonian vector fields.
## Routhian mechanics
Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, not often used but especially useful for removing cyclic coordinates. If the Lagrangian of a system has s cyclic coordinates q = q1, q2, ... qs with conjugate momenta p = p1, p2, ... ps, with the rest of the coordinates non-cyclic and denoted ζ = ζ1, ζ2, ..., ζN − s, they can be removed by introducing the Routhian, which leads to a set of 2s Hamiltonian equations for the cyclic coordinates q and N − s Lagrangian equations in the non-cyclic coordinates ζ. Set up in this way, although the Routhian has the form of the Hamiltonian, it can be thought of as a Lagrangian with N − s degrees of freedom. The coordinates q do not have to be cyclic; the partition between which coordinates enter the Hamiltonian equations and those which enter the Lagrangian equations is arbitrary.
It is simply convenient to let the Hamiltonian equations remove the cyclic coordinates, leaving the non-cyclic coordinates to the Lagrangian equations of motion.
## Appellian mechanics
Appell's equations of motion involve generalized accelerations, the second time derivatives of the generalized coordinates, as well as the generalized forces mentioned above in D'Alembert's principle. The equations are $$ \mathcal{Q}_r = \frac{\partial S}{\partial \alpha_r}\,, \quad S = \frac{1}{2}\sum_{k=1}^{N} m_k \mathbf{a}_k^2\,, $$ where $$ \mathbf{a}_k $$ is the acceleration of the k-th particle, the second time derivative of its position vector. Each acceleration ak is expressed in terms of the generalized accelerations αr; likewise each rk is expressed in terms of the generalized coordinates qr.
## Classical field theory
### Lagrangian field theory
Generalized coordinates apply to discrete particles. For N scalar fields φi(r, t) where i = 1, 2, ... N, the Lagrangian density is a function of these fields and their space and time derivatives, and possibly the space and time coordinates themselves, and the Euler–Lagrange equations have an analogue for fields: $$ \partial_\mu \left( \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi_i)} \right) = \frac{\partial \mathcal{L}}{\partial \phi_i}\,, $$ where ∂μ denotes the 4-gradient and the summation convention has been used. For N scalar fields, these Lagrangian field equations are a set of N second order partial differential equations in the fields, which in general will be coupled and nonlinear. This scalar field formulation can be extended to vector fields, tensor fields, and spinor fields. The Lagrangian is the volume integral of the Lagrangian density (Gravitation, J.A. Wheeler, C. Misner, K.S. Thorne, W.H. Freeman & Co, 1973). Originally developed for classical fields, the above formulation is applicable to all physical fields in classical, quantum, and relativistic situations: such as Newtonian gravity, classical electromagnetism, general relativity, and quantum field theory. It is a question of determining the correct Lagrangian density to generate the correct field equation.
### Hamiltonian field theory
The corresponding "momentum" field densities conjugate to the N scalar fields φi(r, t) are $$ \pi_i(\mathbf{r},t) = \frac{\partial \mathcal{L}}{\partial \dot{\phi}_i}\,, $$ where in this context the overdot denotes a partial time derivative, not a total time derivative. The Hamiltonian density is defined by analogy with mechanics: $$ \mathcal{H} = \sum_i \pi_i \dot{\phi}_i - \mathcal{L}\,. $$ The equations of motion are $$ \dot{\phi}_i = +\frac{\delta \mathcal{H}}{\delta \pi_i}\,, \quad \dot{\pi}_i = -\frac{\delta \mathcal{H}}{\delta \phi_i}\,, $$ where the variational derivative must be used instead of merely partial derivatives. For N fields, these Hamiltonian field equations are a set of 2N first order partial differential equations, which in general will be coupled and nonlinear. Again, the volume integral of the Hamiltonian density is the Hamiltonian.
## Symmetry, conservation, and Noether's theorem
### Symmetry transformations in classical space and time
Each transformation can be described by an operator (i.e. a function acting on the position r or momentum p variables to change them). The following are the cases when the operator does not change r or p, i.e. symmetries; the transformations considered are translational symmetry, time translation, rotational invariance, Galilean transformations, parity, and T-symmetry, where R(n̂, θ) is the rotation matrix about an axis defined by the unit vector n̂ and angle θ.
### Noether's theorem
Noether's theorem states that a continuous symmetry transformation of the action corresponds to a conservation law, i.e. the action (and hence the Lagrangian) does not change under a transformation parameterized by a parameter s: the Lagrangian describes the same motion independent of s, which can be length, angle of rotation, or time. The momenta corresponding to q will be conserved.
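As a minimal numerical illustration of Hamilton's equations (for a one-dimensional harmonic oscillator with H = p²/2m + kq²/2, a standard example chosen here for brevity rather than taken from this article), the sketch below integrates q̇ = ∂H/∂p and ṗ = −∂H/∂q with a symplectic Euler step and checks that the energy remains approximately conserved:

```python
import numpy as np

m, k = 1.0, 1.0
H = lambda q, p: p**2 / (2 * m) + k * q**2 / 2   # Hamiltonian

def step(q, p, dt):
    """One symplectic-Euler step of Hamilton's equations."""
    p = p - dt * k * q       # dp/dt = -dH/dq
    q = q + dt * p / m       # dq/dt = +dH/dp
    return q, p

q, p, dt = 1.0, 0.0, 1e-3
E0 = H(q, p)
for _ in range(10_000):      # integrate over 10 time units
    q, p = step(q, p, dt)
print(abs(H(q, p) - E0))     # small: energy is approximately conserved
```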
https://en.wikipedia.org/wiki/Analytical_mechanics
In database theory, the PACELC design principle is an extension to the CAP theorem. It states that in case of network partitioning (P) in a distributed computer system, one has to choose between availability (A) and consistency (C) (as per the CAP theorem), but else (E), even when the system is running normally in the absence of partitions, one has to choose between latency (L) and loss of consistency (C). ## Overview The CAP theorem can be phrased as "PAC", the impossibility theorem that no distributed data store can be both consistent and available in executions that contains partitions. This can be proved by examining latency: if a system ensures consistency, then operation latencies grow with message delays, and hence operations cannot terminate eventually if the network is partitioned, i.e. the system cannot ensure availability. In the absence of partitions, both consistency and availability can be satisfied. PACELC therefore goes further and examines how the system replicates data. Specifically, in the absence of partitions, an additional trade-off (ELC) exists between latency and consistency. If the store is atomically consistent, then the sum of the read and write delay is at least the message delay. In practice, most systems rely on explicit acknowledgments rather than timed delays to ensure delivery, requiring a full network round trip and therefore message delay on both reads and writes to ensure consistency. In low latency systems, in contrast, consistency is relaxed in order to reduce latency. There are four configurations or tradeoffs in the PACELC space: - PA/EL - prioritize availability and latency over consistency - PA/EC - when there is a partition, choose availability; else, choose consistency - PC/EL - when there is a partition, choose consistency; else, choose latency - PC/EC - choose consistency at all times PC/EC and PA/EL provide natural cognitive models for an application developer. A PC/EC system provides a firm guarantee of atomic consistency, as in ACID, while PA/EL provides high availability and low latency with a more complex consistency model. In contrast, PA/EC and PC/EL systems only make conditional guarantees of consistency. The developer still has to write code to handle the cases where the guarantee is not upheld. PA/EC systems are rare outside of the in-memory data grid industry, where systems are localized to geographic regions and the latency vs. consistency tradeoff is not significant. PC/EL is even more tricky to understand. PC does not indicate that the system is fully consistent; rather it indicates that the system does not reduce consistency beyond the baseline consistency level when a network partition occurs—instead, it reduces availability. Some experts like Marc Brooker argue that the CAP theorem is particularly relevant in intermittently connected environments, such as those related to the Internet of Things (IoT) and mobile applications. In these contexts, devices may become partitioned due to challenging physical conditions, such as power outages or when entering confined spaces like elevators. For distributed systems, such as cloud applications, it is more appropriate to use the PACELC theorem, which is more comprehensive and considers trade-offs such as latency and consistency even in the absence of network partitions. ## History The PACELC theorem was first described by Daniel Abadi from Yale University in 2010 in a blog post, which he later clarified in a paper in 2012. 
The purpose of PACELC is to address his thesis that "Ignoring the consistency/latency trade-off of replicated systems is a major oversight [in CAP], as it is present at all times during system operation, whereas CAP is only relevant in the arguably rare case of a network partition." The PACELC theorem was proved formally in 2018 in a SIGACT News article.
## Database PACELC ratings
The original database PACELC ratings are from Abadi's paper; subsequent updates have been contributed by the Wikipedia community.
- The default versions of Amazon's early (internal) Dynamo, Cassandra, Riak, and Cosmos DB are PA/EL systems: if a partition occurs, they give up consistency for availability, and under normal operation they give up consistency for lower latency.
- Fully ACID systems such as VoltDB/H-Store, Megastore, MySQL Cluster, and PostgreSQL are PC/EC: they refuse to give up consistency, and will pay the availability and latency costs to achieve it. Bigtable and related systems such as HBase are also PC/EC.
- Amazon DynamoDB (launched January 2012) is quite different from the early (Amazon internal) Dynamo which was considered for the PACELC paper. DynamoDB follows a strong leader model, where every write is strictly serialized (and conditional writes carry no penalty), and supports read-after-write consistency. This guarantee does not apply to "Global Tables" across regions. The DynamoDB SDKs use eventually consistent reads by default (improved availability and throughput), but when a consistent read is requested the service will return either a current view of the item or an error (see the client-side sketch at the end of this section).
- Couchbase provides a range of consistency and availability options during a partition, and equally a range of latency and consistency options with no partition. Unlike most other databases, Couchbase doesn't have a single API set, nor does it scale/replicate all data services homogeneously. For writes, Couchbase favors consistency over availability, making it formally CP, but on read there is more user-controlled variability depending on index replication, desired consistency level, and type of access (single document lookup vs range scan vs full-text search, etc.). On top of that, there is further variability depending on cross-datacenter replication (XDCR), which takes multiple CP clusters and connects them with asynchronous replication, and Couchbase Lite, which is an embedded database and creates a fully multi-master (with revision tracking) distributed topology.
- Cosmos DB supports five tunable consistency levels that allow for tradeoffs between C/A during P, and L/C during E. Cosmos DB never violates the specified consistency level, so it's formally CP.
- MongoDB can be classified as a PA/EC system. In the baseline case, the system guarantees reads and writes to be consistent.
- PNUTS is a PC/EL system.
- Hazelcast IMDG and indeed most in-memory data grids are an implementation of a PA/EC system; Hazelcast can be configured to be EL rather than EC. Concurrency primitives (Lock, AtomicReference, CountDownLatch, etc.) can be either PC/EC or PA/EC.
- FaunaDB implements Calvin, a transaction protocol created by Dr. Daniel Abadi, the author of the PACELC theorem, and offers users adjustable controls for the LC tradeoff. It is PC/EC for strictly serializable transactions, and EL for serializable reads.

[Table: per-database PACELC ratings (columns P+A, P+C, E+L, E+C) for Aerospike, Bigtable/HBase, Cassandra, Cosmos DB, Couchbase, Dynamo, DynamoDB, FaunaDB, Hazelcast IMDG, Megastore, MongoDB, MySQL Cluster, PNUTS, PostgreSQL, Riak, SpiceDB, and VoltDB/H-Store.]
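The "else" (ELC) trade-off is often exposed directly in client APIs. For instance, assuming a hypothetical DynamoDB table named "example-table" with partition key "pk", the Python boto3 sketch below contrasts the default eventually consistent read with an explicitly requested strongly consistent read; it is an illustration of the latency-versus-consistency choice, not official reference code:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-table")   # hypothetical table name

# Default: eventually consistent read (lower latency, may return stale data).
fast = table.get_item(Key={"pk": "user#42"})

# Opt-in: strongly consistent read (read-after-write consistency, higher latency).
consistent = table.get_item(Key={"pk": "user#42"}, ConsistentRead=True)

print(fast.get("Item"))
print(consistent.get("Item"))
```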
https://en.wikipedia.org/wiki/PACELC_design_principle
In computer science, a binary search tree (BST), also called an ordered or sorted binary tree, is a rooted binary tree data structure with the key of each internal node being greater than all the keys in the respective node's left subtree and less than the ones in its right subtree. The time complexity of operations on the binary search tree is linear with respect to the height of the tree. Binary search trees allow binary search for fast lookup, addition, and removal of data items. Since the nodes in a BST are laid out so that each comparison skips about half of the remaining tree, the lookup performance is proportional to the binary logarithm of the number of nodes. BSTs were devised in the 1960s for the problem of efficient storage of labeled data and are attributed to Conway Berners-Lee and David Wheeler. The performance of a binary search tree is dependent on the order of insertion of the nodes into the tree, since arbitrary insertions may lead to degeneracy; several variations of the binary search tree can be built with guaranteed worst-case performance. The basic operations include: search, traversal, insert and delete. BSTs with guaranteed worst-case complexities perform better than an unsorted array, which would require linear search time. The complexity analysis of BSTs shows that, on average, insert, delete and search take $$ O(\log n) $$ time for $$ n $$ nodes. In the worst case, they degrade to that of a singly linked list: $$ O(n) $$ . To address the boundless increase of the tree height with arbitrary insertions and deletions, self-balancing variants of BSTs were introduced to bound the worst-case lookup complexity to that of the binary logarithm. AVL trees were the first self-balancing binary search trees, invented in 1962 by Georgy Adelson-Velsky and Evgenii Landis. Binary search trees can be used to implement abstract data types such as dynamic sets, lookup tables and priority queues, and are used in sorting algorithms such as tree sort.
## History
The binary search tree algorithm was discovered independently by several researchers, including P.F. Windley, Andrew Donald Booth, Andrew Colin, and Thomas N. Hibbard. The algorithm is attributed to Conway Berners-Lee and David Wheeler, who used it for storing labeled data in magnetic tapes in 1960. One of the earliest and most popular binary search tree algorithms is that of Hibbard. The time complexity of a binary search tree increases boundlessly with the tree height if the nodes are inserted in an arbitrary order, therefore self-balancing binary search trees were introduced to bound the height of the tree to $$ O(\log n) $$ . Various height-balanced binary search trees were introduced to confine the tree height, such as AVL trees, treaps, and red–black trees. The AVL tree was invented by Georgy Adelson-Velsky and Evgenii Landis in 1962 for the efficient organization of information (English translation by Myron J. Ricci in Soviet Mathematics – Doklady, 3:1259–1263, 1962). It was the first self-balancing binary search tree to be invented.
## Overview
A binary search tree is a rooted binary tree in which the nodes are arranged in strict total order: the nodes with keys greater than any particular node A are stored in the right subtree of that node A, and the nodes with keys equal to or less than A are stored in the left subtree of A, satisfying the binary search property. Binary search trees are also effective in sorting and search algorithms.
However, the search complexity of a BST depends upon the order in which the nodes are inserted and deleted, since in the worst case successive operations may lead to degeneracy and form a structure resembling a singly linked list (an "unbalanced tree"), which has the same worst-case complexity as a linked list. Binary search trees are also a fundamental data structure used in the construction of abstract data structures such as sets, multisets, and associative arrays.
## Operations
### Searching
Searching in a binary search tree for a specific key can be programmed recursively or iteratively. Searching begins by examining the root node. If the tree is $$ \text{nil} $$ , the key being searched for does not exist in the tree. Otherwise, if the key equals that of the root, the search is successful and the node is returned. If the key is less than that of the root, the search proceeds by examining the left subtree. Similarly, if the key is greater than that of the root, the search proceeds by examining the right subtree. This process is repeated until the key is found or the remaining subtree is $$ \text{nil} $$ . If the searched key is not found after a $$ \text{nil} $$ subtree is reached, then the key is not present in the tree.
#### Recursive search
The following pseudocode implements the BST search procedure through recursion.

    Recursive-Tree-Search(x, key)
        if x = NIL or key = x.key then
            return x
        if key < x.key then
            return Recursive-Tree-Search(x.left, key)
        else
            return Recursive-Tree-Search(x.right, key)
        end if

The recursive procedure continues until a $$ \text{nil} $$ or the $$ \text{key} $$ being searched for is encountered.
#### Iterative search
The recursive version of the search can be "unrolled" into a while loop. On most machines, the iterative version is found to be more efficient.

    Iterative-Tree-Search(x, key)
        while x ≠ NIL and key ≠ x.key do
            if key < x.key then
                x := x.left
            else
                x := x.right
            end if
        repeat
        return x

Since the search may proceed down to some leaf node, the running time complexity of BST search is $$ O(h) $$ where $$ h $$ is the height of the tree. However, the worst case for BST search is $$ O(n) $$ where $$ n $$ is the total number of nodes in the BST, because an unbalanced BST may degenerate to a linked list. However, if the BST is height-balanced the height is $$ O(\log n) $$ .
#### Successor and predecessor
For certain operations, given a node $$ \text{x} $$ , finding the successor or predecessor of $$ \text{x} $$ is crucial. Assuming all the keys of a BST are distinct, the successor of a node $$ \text{x} $$ in a BST is the node with the smallest key greater than $$ \text{x} $$ 's key. On the other hand, the predecessor of a node $$ \text{x} $$ in a BST is the node with the largest key smaller than $$ \text{x} $$ 's key. The following pseudocode finds the successor and predecessor of a node $$ \text{x} $$ in a BST.

    BST-Successor(x)
        if x.right ≠ NIL then
            return BST-Minimum(x.right)
        end if
        y := x.parent
        while y ≠ NIL and x = y.right do
            x := y
            y := y.parent
        repeat
        return y

    BST-Predecessor(x)
        if x.left ≠ NIL then
            return BST-Maximum(x.left)
        end if
        y := x.parent
        while y ≠ NIL and x = y.left do
            x := y
            y := y.parent
        repeat
        return y

Operations such as finding a node in a BST whose key is the maximum or minimum are critical in certain operations, such as determining the successor and predecessor of nodes. Following is the pseudocode for the operations.
BST-Maximum(x)
  while x.right ≠ NIL do
    x := x.right
  repeat
  return x

BST-Minimum(x)
  while x.left ≠ NIL do
    x := x.left
  repeat
  return x

### Insertion

Operations such as insertion and deletion cause the BST representation to change dynamically. The data structure must be modified in such a way that the properties of the BST continue to hold. New nodes are inserted as leaf nodes in the BST. Following is an iterative implementation of the insertion operation.

1   BST-Insert(T, z)
2     y := NIL
3     x := T.root
4     while x ≠ NIL do
5       y := x
6       if z.key < x.key then
7         x := x.left
8       else
9         x := x.right
10      end if
11    repeat
12    z.parent := y
13    if y = NIL then
14      T.root := z
15    else if z.key < y.key then
16      y.left := z
17    else
18      y.right := z
19    end if

The procedure maintains a "trailing pointer" $$ \text{y} $$ as the parent of $$ \text{x} $$ . After initialization on line 2, the while loop on lines 4-11 updates the pointers. If $$ \text{y} $$ is $$ \text{nil} $$ , the BST is empty, so $$ \text{z} $$ is inserted as the root node of the binary search tree $$ \text{T} $$ . If it is not $$ \text{nil} $$ , insertion proceeds by comparing $$ \text{z} $$ 's key to that of $$ \text{y} $$ on lines 15-19, and the node is inserted accordingly.

### Deletion

The deletion of a node, say $$ \text{Z} $$ , from the binary search tree $$ \text{BST} $$ has three cases:
1. If $$ \text{Z} $$ is a leaf node, it is replaced by $$ \text{NIL} $$ , as shown in (a).
1. If $$ \text{Z} $$ has only one child, the child node of $$ \text{Z} $$ gets elevated by modifying the parent node of $$ \text{Z} $$ to point to the child node, consequently taking $$ \text{Z} $$ 's position in the tree, as shown in (b) and (c).
1. If $$ \text{Z} $$ has both left and right children, the in-order successor of $$ \text{Z} $$ , say $$ \text{Y} $$ , displaces $$ \text{Z} $$ according to the following two cases:
  1. If $$ \text{Y} $$ is $$ \text{Z} $$ 's right child, as shown in (d), $$ \text{Y} $$ displaces $$ \text{Z} $$ and $$ \text{Y} $$ 's right child remains unchanged.
  1. If $$ \text{Y} $$ lies within $$ \text{Z} $$ 's right subtree but is not $$ \text{Z} $$ 's right child, as shown in (e), $$ \text{Y} $$ first gets replaced by its own right child, and then it displaces $$ \text{Z} $$ in the tree.
Alternatively, the in-order predecessor can also be used.

The following pseudocode implements the deletion operation in a binary search tree.

1   BST-Delete(BST, z)
2     if z.left = NIL then
3       Shift-Nodes(BST, z, z.right)
4     else if z.right = NIL then
5       Shift-Nodes(BST, z, z.left)
6     else
7       y := BST-Successor(z)
8       if y.parent ≠ z then
9         Shift-Nodes(BST, y, y.right)
10        y.right := z.right
11        y.right.parent := y
12      end if
13      Shift-Nodes(BST, z, y)
14      y.left := z.left
15      y.left.parent := y
16    end if

1   Shift-Nodes(BST, u, v)
2     if u.parent = NIL then
3       BST.root := v
4     else if u = u.parent.left then
5       u.parent.left := v
6     else
7       u.parent.right := v
8     end if
9     if v ≠ NIL then
10      v.parent := u.parent
11    end if

The $$ \text{BST-Delete} $$ procedure deals with the three cases mentioned above. Lines 2-3 deal with case 1, lines 4-5 deal with case 2, and lines 6-16 deal with case 3. The helper function $$ \text{Shift-Nodes} $$ is used within the deletion algorithm to replace the node $$ \text{u} $$ with $$ \text{v} $$ in the binary search tree $$ \text{BST} $$ ; it handles the removal (and substitution) of $$ \text{u} $$ from $$ \text{BST} $$ .

## Traversal

A BST can be traversed through three basic algorithms: inorder, preorder, and postorder tree walks.
- Inorder tree walk: Nodes from the left subtree get visited first, followed by the root node and the right subtree. Such a traversal visits all the nodes in non-decreasing key order.
- Preorder tree walk: The root node gets visited first, followed by the left and right subtrees.
- Postorder tree walk: Nodes from the left subtree get visited first, followed by the right subtree, and finally, the root.

Following is a recursive implementation of the tree walks.

Inorder-Tree-Walk(x)
  if x ≠ NIL then
    Inorder-Tree-Walk(x.left)
    visit node
    Inorder-Tree-Walk(x.right)
  end if

Preorder-Tree-Walk(x)
  if x ≠ NIL then
    visit node
    Preorder-Tree-Walk(x.left)
    Preorder-Tree-Walk(x.right)
  end if

Postorder-Tree-Walk(x)
  if x ≠ NIL then
    Postorder-Tree-Walk(x.left)
    Postorder-Tree-Walk(x.right)
    visit node
  end if

## Balanced binary search trees

Without rebalancing, insertions or deletions in a binary search tree may lead to degeneration, resulting in a tree of height $$ n $$ (where $$ n $$ is the number of items in the tree), so that lookup performance deteriorates to that of a linear search. Keeping the search tree balanced, with height bounded by $$ O(\log n) $$ , is key to the usefulness of the binary search tree. This can be achieved by "self-balancing" mechanisms applied during update operations, designed to keep the tree height within binary logarithmic complexity.

### Height-balanced trees

A tree is height-balanced if the heights of the left subtree and right subtree are guaranteed to be related by a constant factor. This property was introduced by the AVL tree and continued by the red–black tree. The heights of all the nodes on the path from the root to the modified leaf node have to be observed and possibly corrected on every insert and delete operation on the tree.

### Weight-balanced trees

In a weight-balanced tree, the balance criterion is the number of leaves of the subtrees. Ideally the weights of the left and right subtrees would differ by at most $$ 1 $$ , but since such a strong balance condition cannot be maintained with $$ O(\log n) $$ rebalancing work during insert and delete operations, the difference is instead bounded by a ratio $$ \alpha $$ of the weights. The $$ \alpha $$ -weight-balanced trees give an entire family of balance conditions, in which each of the left and right subtrees has at least a fraction $$ \alpha $$ of the total weight of the subtree.

### Types

There are several self-balanced binary search trees, including T-tree, treap, red-black tree, B-tree, 2–3 tree, and Splay tree.

## Examples of applications

### Sort

Binary search trees are used in sorting algorithms such as tree sort, where all the elements are inserted at once and the tree is traversed in an in-order fashion (a minimal sketch appears at the end of this section). BSTs are also used in quicksort.

### Priority queue operations

Binary search trees are used in implementing priority queues, using the nodes' keys as priorities. Adding new elements to the queue follows the regular BST insertion operation, but the removal operation depends on the type of priority queue:
- If it is an ascending order priority queue, removal of the element with the lowest priority is done through leftward traversal of the BST.
- If it is a descending order priority queue, removal of the element with the highest priority is done through rightward traversal of the BST.
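As a rough illustration of the tree sort idea described above, the following Python sketch inserts all elements into an unbalanced BST and reads them back with an inorder walk. The class and function names are invented for this example, and the sketch assumes equal keys are simply sent to the right subtree.

```python
from typing import Iterable, List, Optional


class _Node:
    __slots__ = ("key", "left", "right")

    def __init__(self, key):
        self.key = key
        self.left: Optional["_Node"] = None
        self.right: Optional["_Node"] = None


def _insert(root: Optional[_Node], key) -> _Node:
    # New keys always become leaves, as in the BST-Insert pseudocode above.
    if root is None:
        return _Node(key)
    if key < root.key:
        root.left = _insert(root.left, key)
    else:
        root.right = _insert(root.right, key)
    return root


def tree_sort(items: Iterable) -> List:
    root: Optional[_Node] = None
    for x in items:
        root = _insert(root, x)

    out: List = []

    def inorder(node: Optional[_Node]) -> None:
        # Inorder walk: left subtree, node, right subtree -> non-decreasing keys.
        if node is not None:
            inorder(node.left)
            out.append(node.key)
            inorder(node.right)

    inorder(root)
    return out


print(tree_sort([5, 2, 8, 1, 9, 2]))  # [1, 2, 2, 5, 8, 9]
```

Consistent with the degeneracy discussion above, this runs in O(n log n) on average but degrades to O(n^2) when the input is already sorted, unless a self-balancing tree is used instead.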
https://en.wikipedia.org/wiki/Binary_search_tree
The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved. Here, "quickly" means an algorithm exists that solves the task and runs in polynomial time (as opposed to, say, exponential time), meaning the task completion time is bounded above by a polynomial function on the size of the input to the algorithm. The general class of questions that some algorithm can answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if provided with an answer, it can be verified quickly. The class of questions where an answer can be verified in polynomial time is "NP", standing for "nondeterministic polynomial time". An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time. The problem has been called the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution. ## #### Example Consider the following yes/no problem: given an incomplete Sudoku grid of size $$ n^2 \times n^2 $$ , is there at least one legal solution where every row, column, and $$ n \times n $$ square contains the integers 1 through $$ n^2 $$ ? It is straightforward to verify "yes" instances of this generalized Sudoku problem given a candidate solution. However, it is not known whether there is a polynomial-time algorithm that can correctly answer "yes" or "no" to all instances of this problem. Therefore, generalized Sudoku is in NP (quickly verifiable), but may or may not be in P (quickly solvable). (It is necessary to consider a generalized version of Sudoku, as any fixed size Sudoku has only a finite number of possible grids. In this case the problem is in P, as the answer can be found by table lookup.) ## History The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973). Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, speculating that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. 
Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated.

## Context

The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem). In such an analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other). In this theory, the class P consists of all decision problems (defined below) solvable on a deterministic sequential machine in a duration polynomial in the size of the input; the class NP consists of all decision problems whose positive solutions are verifiable in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes: Is P equal to NP? Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, 99% of the 2019 respondents believed P ≠ NP. These polls do not imply anything about whether P = NP is true; Gasarch himself stated: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era."

### NP-completeness

To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are problems that any other NP problem is reducible to in polynomial time and whose solution is still verifiable in polynomial time. That is, any NP problem can be transformed into any NP-complete problem. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP. NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time. For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so any instance of any problem in NP can be transformed mechanically into a Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems are NP-complete, and no fast algorithm for any of them is known. From the definition alone it is unintuitive that NP-complete problems exist; however, a trivial NP-complete problem can be formulated as follows: given a Turing machine M guaranteed to halt in polynomial time, does a polynomial-size input that M will accept exist?
It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists. The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial time solution of satisfiability, which in turn can be used to solve any other NP-problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem". ## Harder problems Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2p(n)) time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N × N board and similar problems for other board games. The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least $$ 2^{2^{cn}} $$ for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all. It is also possible to consider questions other than decision problems. 
One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since the count of solutions immediately tells whether at least one solution exists (namely, whether the count is greater than zero). Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems.

## Problems in NP not known to be in P or NP-complete

In 1975, Richard E. Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem, and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete. The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time. The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time $$ O\left (\exp \left ( \left (\tfrac{64n}{9} \log(2) \right )^{\frac{1}{3}} \left ( \log(n\log(2)) \right )^{\frac{2}{3}} \right) \right ) $$ to factor an n-bit integer. The best known quantum algorithm for this problem, Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes.

## Does P mean "easy"?

All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as Cobham's thesis. It is a common assumption in complexity theory, but there are caveats. First, it can be false in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, rendering it impractical.
For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of $$ O(n^2) $$ , where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on H. The constant is greater than $$ 2 \uparrow \uparrow (2 \uparrow \uparrow (2 \uparrow \uparrow (h/2) ) ) $$ (using Knuth's up-arrow notation), where h is the number of vertices in H. On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms. Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms.

## Reasons to believe P ≠ NP or P = NP

Cook provides a restatement of the problem in The P Versus NP Problem as "Does P = NP?" According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3,000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH. It is also intuitively argued that the existence of problems that are hard to solve but whose solutions are easy to verify matches real-world experience. On the other hand, some researchers believe that it is overconfident to believe P ≠ NP and that researchers should also explore proofs of P = NP. For example, in 2002 researchers made such statements.

### DLIN vs NLIN

When one substitutes "linear time on a multitape Turing machine" for "polynomial time" in the definitions of P and NP, one obtains the classes DLIN and NLIN. It is known that DLIN ≠ NLIN.

## Consequences of solution

One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well.

P = NP

A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields. It is also very possible that a proof would not lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known.
A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them. A solution showing P = NP could upend the field of cryptography, which relies on certain problems being difficult. A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems including: - Existing implementations of public-key cryptography, a foundation for many modern security applications such as secure financial transactions over the Internet. - Symmetric ciphers such as AES or 3DES, used for the encryption of communications data. - Cryptographic hashing, which underlies blockchain cryptocurrencies such as Bitcoin, and is used to authenticate software updates. For these applications, finding a pre-image that hashes to a given value must be difficult, ideally taking exponential time. If P = NP, then this can take polynomial time, through reduction to SAT. These would need modification or replacement with information-theoretically secure solutions that do not assume P ≠ NP. There are also enormous benefits that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; making these problems efficiently solvable could considerably advance life sciences and biotechnology. These changes could be insignificant compared to the revolution that efficiently solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics: Similarly, Stephen Cook (assuming not only a proof, but a practically efficient algorithm) says: Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method guaranteed to find a proof if a "reasonable" size proof exists, would essentially end this struggle. Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof: P ≠ NP A proof of P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would represent a great advance in computational complexity theory and guide future research. It would demonstrate that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place. P ≠ NP still leaves open the average-case complexity of hard problems in NP. 
For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds.

## Results about difficulty of proof

Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are insufficient for answering the question, suggesting novel technical approaches are required. As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, all insufficient to prove P ≠ NP:

- Relativizing proofs: Imagine a world where every algorithm is allowed to make queries to some fixed subroutine called an oracle (which can answer a fixed set of questions in constant time, such as an oracle that solves any traveling salesman problem in 1 step), and the running time of the oracle is not counted against the running time of the algorithm. Most proofs (especially classical ones) apply uniformly in a world with oracles regardless of what the oracle does. These proofs are called relativizing. In 1975, Baker, Gill, and Solovay showed that P = NP with respect to some oracles, while P ≠ NP for other oracles. As relativizing proofs can only prove statements that are true for all possible oracles, these techniques cannot resolve P = NP.
- Natural proofs: In 1993, Alexander Razborov and Steven Rudich defined a general class of proof techniques for circuit complexity lower bounds, called natural proofs. At the time, all previously known circuit lower bounds were natural, and circuit complexity was considered a very promising approach for resolving P = NP. However, Razborov and Rudich showed that if one-way functions exist, P and NP are indistinguishable to natural proof methods. Although the existence of one-way functions is unproven, most mathematicians believe that they do, and a proof of their existence would be a much stronger statement than P ≠ NP. Thus it is unlikely that natural proofs alone can resolve P = NP.
- Algebrizing proofs: After the Baker–Gill–Solovay result, new non-relativizing proof techniques were successfully used to prove that IP = PSPACE. However, in 2008, Scott Aaronson and Avi Wigderson showed that the main technical tool used in the IP = PSPACE proof, known as arithmetization, was also insufficient to resolve P = NP. Arithmetization converts the operations of an algorithm to algebraic and basic arithmetic symbols and then uses those to analyze the workings. In the IP = PSPACE proof, they convert the black box and the Boolean circuits to an algebraic problem.
As mentioned previously, it has been proven that this method is not viable for resolving P = NP and other time complexity problems. These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results. These barriers lead some computer scientists to suggest the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). An independence result could imply that either P ≠ NP and this is unprovable in (e.g.) ZFC, or that P = NP but it is unprovable in ZFC that any polynomial-time algorithms are correct. However, if the problem is undecidable even with much weaker assumptions extending the Peano axioms for integer arithmetic, then nearly polynomial-time algorithms exist for all NP problems. Therefore, assuming (as most complexity theorists do) that some NP problems don't have efficient algorithms, proofs of independence with those techniques are impossible. This also implies that proving independence from PA or ZFC with current techniques is no easier than proving that all NP problems have efficient algorithms.

## Logical characterizations

The P = NP problem can be restated as certain classes of logical statements, as a result of work in descriptive complexity. Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P are expressible in first-order logic with the addition of a suitable least fixed-point combinator. Recursive functions can be defined with this and the order relation. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P. Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP?" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH).

## Polynomial-time algorithms

No known algorithm for an NP-complete problem runs in polynomial time. However, algorithms are known for NP-complete problems that, if P = NP, run in polynomial time on accepting instances (although with enormous constants, making them impractical). However, these algorithms do not qualify as polynomial time because their running time on rejecting instances is not polynomial. The following algorithm, due to Levin (without any citation), is such an example. It correctly accepts the NP-complete language SUBSET-SUM. It runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:

// Algorithm that accepts the NP-complete language SUBSET-SUM.
//
// this is a polynomial-time algorithm if and only if P = NP.
//
// "Polynomial-time" means it returns "yes" in polynomial time when
// the answer should be "yes", and runs forever when it is "no".
//
// Input: S = a finite set of integers
// Output: "yes" if any subset of S adds up to 0.
// Runs forever with no output otherwise.
// Note: "Program number M" is the program obtained by
// writing the integer M in binary, then
// considering that string of bits to be a
// program. Every possible program can be
// generated this way, though most do nothing
// because of syntax errors.
FOR K = 1...∞
  FOR M = 1...K
    Run program number M for K steps with input S
    IF the program outputs a list of distinct integers
      AND the integers are all in S
      AND the integers sum to 0
    THEN
      OUTPUT "yes" and HALT

This is a polynomial-time algorithm accepting an NP-complete language only if P = NP. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a semi-algorithm). This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, the above algorithm will try at least $$ 2^b - 1 $$ other programs first.

## Formal definitions

### P and NP

A decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that produces the correct answer for any input string of length n in at most $$ cn^k $$ steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is the set of languages that can be decided by a deterministic polynomial-time Turing machine. That is, $$ \mathsf{P} = \{ L : L=L(M) \text{ for some deterministic polynomial-time Turing machine } M \} $$ where $$ L(M) = \{ w\in\Sigma^{*}: M \text{ accepts } w \} $$ and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies two conditions:
1. M halts on all inputs w; and
1. there exists $$ k \in \mathbb{N} $$ such that $$ T_M(n)\in O(n^k) $$ , where O refers to the big O notation, $$ T_M(n) = \max\{ t_M(w) : w\in\Sigma^{*}, |w| = n \} $$ and $$ t_M(w) = \text{ number of steps }M\text{ takes to halt on input }w. $$

NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach uses the concept of certificate and verifier. Formally, NP is the set of languages over a finite alphabet that have a verifier running in polynomial time. The following defines a "verifier": Let L be a language over a finite alphabet, Σ. L ∈ NP if, and only if, there exists a binary relation $$ R\subset\Sigma^{*}\times\Sigma^{*} $$ and a positive integer k such that the following two conditions are satisfied:
1. for all $$ x \in \Sigma^{*} $$ , $$ x\in L \Leftrightarrow \exists y\in\Sigma^{*} $$ such that $$ (x,y)\in R $$ and $$ |y|\in O(|x|^{k}) $$ ; and
1. the language $$ L_{R} = \{ x\# y : (x,y)\in R \} $$ is decidable by a deterministic Turing machine in polynomial time.

A Turing machine that decides $$ L_R $$ is called a verifier for L, and a y such that (x, y) ∈ R is called a certificate of membership of x in L. Not all verifiers must be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time.

#### Example

Let $$ \mathrm{COMPOSITE} = \left \{x\in\mathbb{N} \mid x=pq \text{ for integers } p, q > 1 \right \} $$ and $$ R = \left \{(x,y)\in\mathbb{N} \times\mathbb{N} \mid 1<y \leq \sqrt x \text{ and } y \text{ divides } x \right \}. $$ Whether a value of x is composite is equivalent to whether x is a member of COMPOSITE.
It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations). COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test.

### NP-completeness

There are many equivalent ways of describing NP-completeness. Let L be a language over a finite alphabet Σ. L is NP-complete if, and only if, the following two conditions are satisfied:
1. L ∈ NP; and
1. any L′ in NP is polynomial-time-reducible to L (written as $$ L' \leq_{p} L $$ ), where $$ L' \leq_{p} L $$ if, and only if, the following two conditions are satisfied:
  1. There exists f : Σ* → Σ* such that for all w in Σ* we have: $$ (w\in L' \Leftrightarrow f(w)\in L) $$ ; and
  1. there exists a polynomial-time Turing machine that halts with f(w) on its tape on any input w.

Alternatively, if L ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to L, then L is NP-complete. This is a common way of proving that some new problem is NP-complete.

## Claimed solutions

While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger compiled a list of 116 purported proofs from 1986 to 2016, of which 61 were proofs of P = NP, 49 were proofs of P ≠ NP, and 6 proved other results, e.g. that the problem is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have since been refuted.

## Popular culture

The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem. In the sixth episode of The Simpsons' seventh season, "Treehouse of Horror VI", the equation P = NP is seen shortly after Homer accidentally stumbles into the "third dimension". In the second episode of season 2 of Elementary, "Solve for X", Sherlock and Watson investigate the murders of mathematicians who were attempting to solve P versus NP.

## Similar problems

- The R vs. RE problem, where R is the analog of the class P, and RE is the analog of the class NP. These classes are not equal, because undecidable but verifiable problems do exist, for example Hilbert's tenth problem, which is RE-complete.
- A similar problem exists in the theory of algebraic complexity: the VP vs. VNP problem. Like P vs. NP, the answer is currently unknown.
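To tie the certificate-and-verifier definition given in the formal definitions above to something executable, here is a minimal Python sketch of the relation R from the COMPOSITE example; the function name verify_composite is invented for this illustration, and the sketch is not a substitute for the formal definition.

```python
def verify_composite(x: int, y: int) -> bool:
    """Check the relation R: 1 < y <= sqrt(x) and y divides x.

    The check is a comparison and a modulo operation, so it runs in
    time polynomial in the number of bits of x; it is therefore a
    polynomial-time verifier in the sense of the definition above.
    """
    return 1 < y and y * y <= x and x % y == 0


# A certificate makes membership easy to check...
print(verify_composite(91, 7))   # True: 91 = 7 * 13, so 91 is in COMPOSITE
# ...but the verifier says nothing about how hard it is to *find* a certificate.
print(verify_composite(97, 7))   # False: 7 is not a witness for 97
```

This is exactly the asymmetry the P versus NP question is about: checking a proposed divisor is easy, while the difficulty of finding one (for general problems in NP) is the open question.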
https://en.wikipedia.org/wiki/P_versus_NP_problem
Distributed computing is a field of computer science that studies distributed systems, defined as computer systems whose inter-communicating components are located on different networked computers. The components of a distributed system communicate and coordinate their actions by passing messages to one another in order to achieve a common goal. Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components. When a component of one system fails, the entire system does not fail. ## Examples of distributed systems vary from SOA-based systems to microservices to massively multiplayer online games to peer-to-peer applications. Distributed systems cost significantly more than monolithic architectures, primarily due to increased needs for additional hardware, servers, gateways, firewalls, new subnets, proxies, and so on. Also, distributed systems are prone to fallacies of distributed computing. On the other hand, a well designed distributed system is more scalable, more durable, more changeable and more fine-tuned than a monolithic application deployed on a single machine. According to Marc Brooker: "a system is scalable in the range where marginal cost of additional workload is nearly constant." Serverless technologies fit this definition but the total cost of ownership, and not just the infra cost must be considered. A computer program that runs within a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many different types of implementations for the message passing mechanism, including pure HTTP, RPC-like connectors and message queues. Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other via message passing. ## Introduction The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing. While there is no single definition of a distributed system, the following defining properties are commonly used as: - There are several autonomous computational entities (computers or nodes), each of which has its own local memory. - The entities communicate with each other by message passing. A distributed system may have a common goal, such as solving a large computational problem; the user then perceives the collection of autonomous processors as a unit. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users. Other typical properties of distributed systems include the following: - The system has to tolerate failures in individual computers. - The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program. 
- Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input. ## Patterns Here are common architectural patterns used for distributed computing: - Saga interaction pattern - Microservices - Event driven architecture ## Events vs. Messages In distributed systems, events represent a fact or state change (e.g., OrderPlaced) and are typically broadcast asynchronously to multiple consumers, promoting loose coupling and scalability. While events generally don’t expect an immediate response, acknowledgment mechanisms are often implemented at the infrastructure level (e.g., Kafka commit offsets, SNS delivery statuses) rather than being an inherent part of the event pattern itself. In contrast, messages serve a broader role, encompassing commands (e.g., ProcessPayment), events (e.g., PaymentProcessed), and documents (e.g., DataPayload). Both events and messages can support various delivery guarantees, including at-least-once, at-most-once, and exactly-once, depending on the technology stack and implementation. However, exactly-once delivery is often achieved through idempotency mechanisms rather than true, infrastructure-level exactly-once semantics. Delivery patterns for both events and messages include publish/subscribe (one-to-many) and point-to-point (one-to-one). While request/reply is technically possible, it is more commonly associated with messaging patterns rather than pure event-driven systems. Events excel at state propagation and decoupled notifications, while messages are better suited for command execution, workflow orchestration, and explicit coordination. Modern architectures commonly combine both approaches, leveraging events for distributed state change notifications and messages for targeted command execution and structured workflows based on specific timing, ordering, and delivery requirements. ## Parallel and distributed computing Distributed systems are groups of networked computers which share a common goal for their work. The terms "concurrent computing", "parallel computing", and "distributed computing" have much overlap, and no clear distinction exists between them. The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a particularly tightly coupled form of distributed computing, and distributed computing may be seen as a loosely coupled form of parallel computing. Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria: - In parallel computing, all processors may have access to a shared memory to exchange information between processors. - In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors. The figure on the right illustrates the difference between distributed and parallel systems. Figure (a) is a schematic view of a typical distributed system; the system is represented as a network topology in which each node is a computer and each line connecting the nodes is a communication link. Figure (b) shows the same distributed system in more detail: each computer has its own local memory, and information can be exchanged only by passing messages from one node to another by using the available communication links. Figure (c) shows a parallel system in which each processor has a direct access to a shared memory. 
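To make the shared-memory versus message-passing contrast concrete, the following small Python sketch uses the standard multiprocessing module. It is only an in-process illustration (a real distributed system would exchange messages over a network), and the worker names are invented for this example.

```python
import multiprocessing as mp


def shared_memory_worker(counter):
    # "Parallel" style: workers communicate through one shared variable.
    with counter.get_lock():
        counter.value += 1


def message_passing_worker(inbox: mp.Queue, outbox: mp.Queue):
    # "Distributed" style: the worker keeps private state and only exchanges messages.
    local_total = 0
    for _ in range(3):
        local_total += inbox.get()
    outbox.put(local_total)


if __name__ == "__main__":
    # Shared memory: one value visible to all workers.
    counter = mp.Value("i", 0)
    procs = [mp.Process(target=shared_memory_worker, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("shared counter:", counter.value)  # 4

    # Message passing: information moves only via explicit messages.
    inbox, outbox = mp.Queue(), mp.Queue()
    worker = mp.Process(target=message_passing_worker, args=(inbox, outbox))
    worker.start()
    for x in (1, 2, 3):
        inbox.put(x)
    print("sum received by message passing:", outbox.get())  # 6
    worker.join()
```

The first half corresponds to figure (c), where processors read and write a common memory; the second half corresponds to figures (a) and (b), where each node has private memory and information travels only over communication links.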
The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not quite match the above definitions of parallel and distributed systems (see below for more detailed discussion). Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms. ## History The use of concurrent processes which communicate through message-passing has its roots in operating system architectures studied in the 1960s. The first widespread distributed systems were local-area networks such as Ethernet, which was invented in the 1970s. ARPANET, one of the predecessors of the Internet, was introduced in the late 1960s, and ARPANET e-mail was invented in the early 1970s. E-mail became the most successful application of ARPANET, and it is probably the earliest example of a large-scale distributed application. In addition to ARPANET (and its successor, the global Internet), other early worldwide computer networks included Usenet and FidoNet from the 1980s, both of which were used to support distributed discussion systems. The study of distributed computing became its own branch of computer science in the late 1970s and early 1980s. The first conference in the field, Symposium on Principles of Distributed Computing (PODC), dates back to 1982, and its counterpart International Symposium on Distributed Computing (DISC) was first held in Ottawa in 1985 as the International Workshop on Distributed Algorithms on Graphs. ## Architectures Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system. Whether these CPUs share resources or not determines a first distinction between three types of architecture: - Shared memory - Shared disk - Shared nothing. Distributed programming typically falls into one of several basic architectures: client–server, three-tier, n-tier, or peer-to-peer; or categories: loose coupling, or tight coupling. - Client–server: architectures where smart clients contact the server for data then format and display it to the users. Input at the client is committed back to the server when it represents a permanent change. - Three-tier: architectures that move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are three-tier. - n-tier: architectures that refer typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers. - Peer-to-peer: architectures where there are no special machines that provide a service or manage the network resources. Instead all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and as servers. Examples of this architecture include BitTorrent and the bitcoin network. Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. 
Through various message passing protocols, processes may communicate directly with one another, typically in a main/sub relationship. Alternatively, a "database-centric" architecture can enable distributed computing to be done without any form of direct inter-process communication, by utilizing a shared database. Database-centric architecture in particular provides relational processing analytics in a schematic architecture allowing for live environment relay. This enables distributed computing functions both within and beyond the parameters of a networked database. ### Cell-Based Architecture Cell-based architecture is a distributed computing approach in which computational resources are organized into self-contained units called cells. Each cell operates independently, processing requests while maintaining scalability, fault isolation, and availability. A cell typically consists of multiple services or application components and functions as an autonomous unit. Some implementations replicate entire sets of services across multiple cells, while others partition workloads between cells. In replicated models, requests may be rerouted to an operational cell if another experiences a failure. This design is intended to enhance system resilience by reducing the impact of localized failures. Some implementations employ circuit breakers within and between cells. Within a cell, circuit breakers may be used to prevent cascading failures among services, while inter-cell circuit breakers can isolate failing cells and redirect traffic to those that remain operational. Cell-based architecture has been adopted in some large-scale distributed systems, particularly in cloud-native and high-availability environments, where fault isolation and redundancy are key design considerations. Its implementation varies depending on system requirements, infrastructure constraints, and operational objectives. ## Applications Reasons for using distributed systems and distributed computing may include: - The very nature of an application may require the use of a communication network that connects several computers: for example, data produced in one physical location and required in another location. - There are many cases in which the use of a single computer would be possible in principle, but the use of a distributed system is beneficial for practical reasons. For example: - It can allow for much larger storage and memory, faster compute, and higher bandwidth than a single machine. - It can provide more reliability than a non-distributed system, as there is no single point of failure. Moreover, a distributed system may be easier to expand and manage than a monolithic uniprocessor system. - It may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer. 
Examples of distributed systems and applications of distributed computing include the following:
- telecommunications networks:
  - telephone networks and cellular networks,
  - computer networks such as the Internet,
  - wireless sensor networks,
  - routing algorithms;
- network applications:
  - World Wide Web and peer-to-peer networks,
  - massively multiplayer online games and virtual reality communities,
  - distributed databases and distributed database management systems,
  - network file systems,
  - distributed caches such as burst buffers,
  - distributed information processing systems such as banking systems and airline reservation systems;
- real-time process control:
  - aircraft control systems,
  - industrial control systems;
- parallel computation:
  - scientific computing, including cluster computing, grid computing, cloud computing, and various volunteer computing projects,
  - distributed rendering in computer graphics.
- peer-to-peer

## Reactive distributed systems

According to the Reactive Manifesto, reactive distributed systems are responsive, resilient, elastic and message-driven; as a result, reactive systems are more flexible, loosely coupled and scalable. To make a system reactive, designers are advised to implement the Reactive Principles, a set of principles and patterns that help make cloud-native as well as edge-native applications more reactive.

## Theoretical foundations

### Models

Many tasks that we would like to automate by using a computer are of question–answer type: we would like to ask a question and the computer should produce an answer. In theoretical computer science, such tasks are called computational problems. Formally, a computational problem consists of instances together with a solution for each instance. Instances are questions that we can ask, and solutions are desired answers to these questions. Theoretical computer science seeks to understand which computational problems can be solved by using a computer (computability theory) and how efficiently (computational complexity theory). Traditionally, it is said that a problem can be solved by using a computer if we can design an algorithm that produces a correct solution for any given instance. Such an algorithm can be implemented as a computer program that runs on a general-purpose computer: the program reads a problem instance from input, performs some computation, and produces the solution as output. Formalisms such as random-access machines or universal Turing machines can be used as abstract models of a sequential general-purpose computer executing such an algorithm. The field of concurrent and distributed computing studies similar questions in the case of either multiple computers, or a computer that executes a network of interacting processes: which computational problems can be solved in such a network and how efficiently? However, it is not at all obvious what is meant by "solving a problem" in the case of a concurrent or distributed system: for example, what is the task of the algorithm designer, and what is the concurrent or distributed equivalent of a sequential general-purpose computer? The discussion below focuses on the case of multiple computers, although many of the issues are the same for concurrent processes running on a single computer. Three viewpoints are commonly used:

Parallel algorithms in shared-memory model
- All processors have access to a shared memory. The algorithm designer chooses the program executed by each processor.
- One theoretical model is the parallel random-access machines (PRAM) that are used. However, the classical PRAM model assumes synchronous access to the shared memory. - Shared-memory programs can be extended to distributed systems if the underlying operating system encapsulates the communication between nodes and virtually unifies the memory across all individual systems. - A model that is closer to the behavior of real-world multiprocessor machines and takes into account the use of machine instructions, such as Compare-and-swap (CAS), is that of asynchronous shared memory. There is a wide body of work on this model, a summary of which can be found in the literature. Parallel algorithms in message-passing model - The algorithm designer chooses the structure of the network, as well as the program executed by each computer. - Models such as Boolean circuits and sorting networks are used. A Boolean circuit can be seen as a computer network: each gate is a computer that runs an extremely simple computer program. Similarly, a sorting network can be seen as a computer network: each comparator is a computer. Distributed algorithms in message-passing model - The algorithm designer only chooses the computer program. All computers run the same program. The system must work correctly regardless of the structure of the network. - A commonly used model is a graph with one finite-state machine per node. In the case of distributed algorithms, computational problems are typically related to graphs. Often the graph that describes the structure of the computer network is the problem instance. This is illustrated in the following example. ### An example Consider the computational problem of finding a coloring of a given graph G. Different fields might take the following approaches: Centralized algorithms - The graph G is encoded as a string, and the string is given as input to a computer. The computer program finds a coloring of the graph, encodes the coloring as a string, and outputs the result. Parallel algorithms - Again, the graph G is encoded as a string. However, multiple computers can access the same string in parallel. Each computer might focus on one part of the graph and produce a coloring for that part. - The main focus is on high-performance computation that exploits the processing power of multiple computers in parallel. Distributed algorithms - The graph G is the structure of the computer network. There is one computer for each node of G and one communication link for each edge of G. Initially, each computer only knows about its immediate neighbors in the graph G; the computers must exchange messages with each other to discover more about the structure of G. Each computer must produce its own color as output. - The main focus is on coordinating the operation of an arbitrary distributed system. While the field of parallel algorithms has a different focus than the field of distributed algorithms, there is much interaction between the two fields. For example, the Cole–Vishkin algorithm for graph coloring was originally presented as a parallel algorithm, but the same technique can also be used directly as a distributed algorithm. Moreover, a parallel algorithm can be implemented either in a parallel system (using shared memory) or in a distributed system (using message passing). The traditional boundary between parallel and distributed algorithms (choose a suitable network vs. 
run in any given network) does not lie in the same place as the boundary between parallel and distributed systems (shared memory vs. message passing). ### Complexity measures In parallel algorithms, yet another resource in addition to time and space is the number of computers. Indeed, often there is a trade-off between the running time and the number of computers: the problem can be solved faster if there are more computers running in parallel (see speedup). If a decision problem can be solved in polylogarithmic time by using a polynomial number of processors, then the problem is said to be in the class NC. The class NC can be defined equally well by using the PRAM formalism or Boolean circuits—PRAM machines can simulate Boolean circuits efficiently and vice versa. In the analysis of distributed algorithms, more attention is usually paid on communication operations than computational steps. Perhaps the simplest model of distributed computing is a synchronous system where all nodes operate in a lockstep fashion. This model is commonly known as the LOCAL model. During each communication round, all nodes in parallel (1) receive the latest messages from their neighbours, (2) perform arbitrary local computation, and (3) send new messages to their neighbors. In such systems, a central complexity measure is the number of synchronous communication rounds required to complete the task. This complexity measure is closely related to the diameter of the network. Let D be the diameter of the network. On the one hand, any computable problem can be solved trivially in a synchronous distributed system in approximately 2D communication rounds: simply gather all information in one location (D rounds), solve the problem, and inform each node about the solution (D rounds). On the other hand, if the running time of the algorithm is much smaller than D communication rounds, then the nodes in the network must produce their output without having the possibility to obtain information about distant parts of the network. In other words, the nodes must make globally consistent decisions based on information that is available in their local D-neighbourhood. Many distributed algorithms are known with the running time much smaller than D rounds, and understanding which problems can be solved by such algorithms is one of the central research questions of the field. Typically an algorithm which solves a problem in polylogarithmic time in the network size is considered efficient in this model. Another commonly used measure is the total number of bits transmitted in the network (cf. communication complexity). The features of this concept are typically captured with the CONGEST(B) model, which is similarly defined as the LOCAL model, but where single messages can only contain B bits. ### Other problems Traditional computational problems take the perspective that the user asks a question, a computer (or a distributed system) processes the question, then produces an answer and stops. However, there are also problems where the system is required not to stop, including the dining philosophers problem and other similar mutual exclusion problems. In these problems, the distributed system is supposed to continuously coordinate the use of shared resources so that no conflicts or deadlocks occur. There are also fundamental challenges that are unique to distributed computing, for example those related to fault-tolerance. Examples of related problems include consensus problems, Byzantine fault tolerance, and self-stabilisation. 
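Returning to the LOCAL model described under complexity measures above, the sketch below simulates synchronous rounds in which every node (1) reads its neighbours' state, (2) performs local computation, and (3) publishes its decision. The colouring rule used here (a node colours itself once it has the largest identifier among its still-uncoloured neighbours) is only a toy that can take a number of rounds linear in the network size, far from the polylogarithmic-round algorithms mentioned above; the function name and graph encoding are assumptions made for illustration.

```python
def simulate_local_colouring(adj):
    """Round-by-round simulation of a toy distributed greedy colouring.

    adj maps each node id to the set of its neighbours (undirected graph).
    Returns (colour assignment, number of synchronous rounds used).
    Adjacent nodes never decide in the same round, so the colouring is proper.
    """
    colour = {}                      # decisions made in earlier rounds
    uncoloured = set(adj)
    rounds = 0
    while uncoloured:
        rounds += 1
        decisions = {}
        for v in uncoloured:
            # local rule: act only if every neighbour is coloured or has a smaller id
            if all(u in colour or u < v for u in adj[v]):
                used = {colour[u] for u in adj[v] if u in colour}
                c = 0
                while c in used:     # smallest colour unused by coloured neighbours
                    c += 1
                decisions[v] = c
        colour.update(decisions)     # decisions become visible in the next round
        uncoloured -= decisions.keys()
    return colour, rounds

# A 5-cycle needs three colours; the toy rule finds a proper colouring in 5 rounds.
cycle = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(simulate_local_colouring(cycle))
```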
Much research is also focused on understanding the asynchronous nature of distributed systems: - Synchronizers can be used to run synchronous algorithms in asynchronous systems. - Logical clocks provide a causal happened-before ordering of events. - Clock synchronization algorithms provide globally consistent physical time stamps. Note that in distributed systems, latency should be measured through "99th percentile" because "median" and "average" can be misleading. ### Election Coordinator election (or leader election) is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task is begun, all network nodes are either unaware which node will serve as the "coordinator" (or leader) of the task, or unable to communicate with the current coordinator. After a coordinator election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task coordinator. The network nodes communicate among themselves in order to decide which of them will get into the "coordinator" state. For that, they need some method in order to break the symmetry among them. For example, if each node has unique and comparable identities, then the nodes can compare their identities, and decide that the node with the highest identity is the coordinator. The definition of this problem is often attributed to LeLann, who formalized it as a method to create a new token in a token ring network in which the token has been lost. Coordinator election algorithms are designed to be economical in terms of total bytes transmitted, and time. The algorithm suggested by Gallager, Humblet, and Spira for general undirected graphs has had a strong impact on the design of distributed algorithms in general, and won the Dijkstra Prize for an influential paper in distributed computing. Many other algorithms were suggested for different kinds of network graphs, such as undirected rings, unidirectional rings, complete graphs, grids, directed Euler graphs, and others. A general method that decouples the issue of the graph family from the design of the coordinator election algorithm was suggested by Korach, Kutten, and Moran. In order to perform coordination, distributed systems employ the concept of coordinators. The coordinator election problem is to choose a process from among a group of processes on different processors in a distributed system to act as the central coordinator. Several central coordinator election algorithms exist. ### Properties of distributed systems So far the focus has been on designing a distributed system that solves a given problem. A complementary research problem is studying the properties of a given distributed system. The halting problem is an analogous example from the field of centralised computation: we are given a computer program and the task is to decide whether it halts or runs forever. The halting problem is undecidable in the general case, and naturally understanding the behaviour of a computer network is at least as hard as understanding the behaviour of one computer. However, there are many interesting special cases that are decidable. In particular, it is possible to reason about the behaviour of a network of finite-state machines. One example is telling whether a given network of interacting (asynchronous and non-deterministic) finite-state machines can reach a deadlock. 
This problem is PSPACE-complete, i.e., it is decidable, but not likely that there is an efficient (centralised, parallel or distributed) algorithm that solves the problem in the case of large networks.
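Returning to the coordinator election problem discussed above, the sketch below simulates a simple highest-identifier election on a synchronous unidirectional ring: every node repeatedly forwards the largest identifier it has seen to its clockwise neighbour, and after enough rounds all nodes agree on the maximum. This is a flooding-style toy, not the Gallager–Humblet–Spira algorithm or any other published protocol named above, and the function and variable names are assumptions for illustration.

```python
def ring_leader_election(ids):
    """Elect the node with the highest identifier on a unidirectional ring.

    ids: unique, comparable identifiers listed in ring order.
    Simulates synchronous rounds in which each node keeps the maximum of its
    own candidate and the candidate received from its counter-clockwise
    neighbour; after len(ids) - 1 rounds every node knows the global maximum.
    """
    n = len(ids)
    best_seen = list(ids)                       # each node starts with its own id
    for _ in range(n - 1):
        incoming = [best_seen[(i - 1) % n] for i in range(n)]   # message passing
        best_seen = [max(own, msg) for own, msg in zip(best_seen, incoming)]
    leader = max(ids)
    assert all(b == leader for b in best_seen)  # every node recognises the coordinator
    return leader

print(ring_leader_election([12, 5, 93, 7, 41]))   # -> 93
```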
https://en.wikipedia.org/wiki/Distributed_computing
Feedforward refers to the recognition-inference architecture of neural networks. Artificial neural network architectures are based on inputs multiplied by weights to obtain outputs (inputs-to-output): feedforward. Recurrent neural networks, or neural networks with loops, allow information from later processing stages to feed back to earlier stages for sequence processing. However, at every stage of inference a feedforward multiplication remains the core, essential for backpropagation or backpropagation through time. Thus neural networks cannot contain feedback like negative feedback or positive feedback where the outputs feed back to the very same inputs and modify them, because this forms an infinite loop which is not possible to rewind in time to generate an error signal through backpropagation. This issue and nomenclature appear to be a point of confusion between some computer scientists and scientists in other fields studying brain networks. ## Mathematical foundations ### Activation function The two historically common activation functions are both sigmoids, and are described by $$ y(v_i) = \tanh(v_i) ~~ \textrm{and} ~~ y(v_i) = (1+e^{-v_i})^{-1} $$ . The first is a hyperbolic tangent that ranges from -1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here $$ y_i $$ is the output of the $$ i $$ th node (neuron) and $$ v_i $$ is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models). In recent developments of deep learning the rectified linear unit (ReLU) is more frequently used as one of the possible ways to overcome the numerical problems related to the sigmoids. ### Learning Learning occurs by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation. We can represent the degree of error in an output node $$ j $$ in the $$ n $$ th data point (training example) by $$ e_j(n)=d_j(n)-y_j(n) $$ , where $$ d_j(n) $$ is the desired target value for the $$ n $$ th data point at node $$ j $$ , and $$ y_j(n) $$ is the value produced at node $$ j $$ when the $$ n $$ th data point is given as an input. The node weights can then be adjusted based on corrections that minimize the error in the entire output for the $$ n $$ th data point, given by $$ \mathcal{E}(n)=\frac{1}{2}\sum_{\text{output node }j} e_j^2(n) $$ . Using gradient descent, the change in each weight $$ w_{ji} $$ is $$ \Delta w_{ji} (n) = -\eta\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} y_i(n) $$ where $$ y_i(n) $$ is the output of the previous neuron $$ i $$ , and $$ \eta $$ is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. In the previous expression, $$ \frac{\partial\mathcal{E}(n)}{\partial v_j(n)} $$ denotes the partial derivative of the error $$ \mathcal{E}(n) $$ with respect to the weighted sum $$ v_j(n) $$ of the input connections of neuron $$ j $$ . The derivative to be calculated depends on the induced local field $$ v_j $$ , which itself varies.
It is easy to prove that for an output node this derivative can be simplified to $$ -\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} = e_j(n)\phi^\prime (v_j(n)) $$ where $$ \phi^\prime $$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is $$ -\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} = \phi^\prime (v_j(n))\sum_k -\frac{\partial\mathcal{E}(n)}{\partial v_k(n)} w_{kj}(n) $$ . This depends on the change in weights of the $$ k $$ th nodes, which represent the output layer. So to change the hidden layer weights, the output layer weights change according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function. ## History ### Timeline - Circa 1800, Legendre (1805) and Gauss (1795) created the simplest feedforward network which consists of a single weight layer with linear activation functions. It was trained by the least squares method for minimising mean squared error, also known as linear regression. Legendre and Gauss used it for the prediction of planetary movement from training data. - In 1943, Warren McCulloch and Walter Pitts proposed the binary artificial neuron as a logical model of biological neural networks. - In 1958, Frank Rosenblatt proposed the multilayered perceptron model, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learnable connections. R. D. Joseph (1960) mentions an even earlier perceptron-like device: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject." - In 1960, Joseph also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962) cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning. - In 1965, Alexey Grigorevich Ivakhnenko and Valentin Lapa published Group Method of Data Handling, the first working deep learning algorithm, a method to train arbitrarily deep neural networks. It is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates." It was used to train an eight-layer neural net in 1971. - In 1967, Shun'ichi Amari reported the first multilayered neural network trained by stochastic gradient descent, which was able to classify non-linearily separable pattern classes. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers. - In 1970, Seppo Linnainmaa published the modern form of backpropagation in his master thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work. - In 2003, interest in backpropagation networks returned due to the successes of deep learning being applied to language modelling by Yoshua Bengio with co-authors. 
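The backpropagation update rules derived earlier in this section translate into a short gradient-descent sketch. The code below assumes a network with one hidden layer of logistic units and follows the stated expressions: the output delta is $$ e_k\,\phi'(v_k) $$ , the hidden delta is $$ \phi'(v_j)\sum_k \delta_k w_{kj} $$ , and each weight changes by $$ \eta\,\delta_j\,y_i $$ . It is a minimal sketch with illustrative names, not a reference implementation.

```python
import math

def logistic(v):
    return 1.0 / (1.0 + math.exp(-v))

def train_step(x, d, W1, W2, eta=0.5):
    """One backpropagation step for a single-hidden-layer network of logistic units.

    x  : input values; a constant bias input of 1.0 is appended internally.
    d  : desired output values.
    W1 : hidden weights; W1[j] has len(x) + 1 entries (last entry is the bias weight).
    W2 : output weights; W2[k] has len(W1) + 1 entries (last entry is the bias weight).
    Weights are updated in place; the network output before the update is returned.
    """
    xb = x + [1.0]
    v_hidden = [sum(w * xi for w, xi in zip(row, xb)) for row in W1]
    y_hidden = [logistic(v) for v in v_hidden]
    yb = y_hidden + [1.0]
    v_out = [sum(w * yi for w, yi in zip(row, yb)) for row in W2]
    y_out = [logistic(v) for v in v_out]

    # For the logistic function, phi'(v) = y * (1 - y).
    delta_out = [(dk - yk) * yk * (1.0 - yk) for dk, yk in zip(d, y_out)]
    delta_hidden = [
        y_hidden[j] * (1.0 - y_hidden[j])
        * sum(delta_out[k] * W2[k][j] for k in range(len(W2)))
        for j in range(len(W1))
    ]

    # Delta w_ji = eta * delta_j * y_i (gradient descent on E(n)).
    for k, row in enumerate(W2):
        for j in range(len(row)):
            row[j] += eta * delta_out[k] * yb[j]
    for j, row in enumerate(W1):
        for i in range(len(row)):
            row[i] += eta * delta_hidden[j] * xb[i]
    return y_out
```

Repeatedly calling train_step over a data set performs the per-example (stochastic) gradient descent discussed in the timeline above.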
### Linear regression ### Perceptron If using a threshold, i.e. a linear activation function, the resulting linear threshold unit is called a perceptron. (Often the term is used to denote just one of these units.) Multiple parallel non-linear units are able to approximate any continuous function from a compact interval of the real numbers into the interval [−1,1], despite the limited computational power of a single unit with a linear threshold function. Perceptrons can be trained by a simple learning algorithm that is usually called the delta rule. It calculates the error between the calculated output and the sample output data, and uses this to create an adjustment to the weights, thus implementing a form of gradient descent. ### Multilayer perceptron A multilayer perceptron (MLP) is a misnomer for a modern feedforward artificial neural network, consisting of fully connected neurons (hence the sometimes-used synonym fully connected network (FCN)), often with a nonlinear activation function, organized in at least three layers, and notable for being able to distinguish data that is not linearly separable. ## Other feedforward networks Examples of other feedforward networks include convolutional neural networks and radial basis function networks, which use a different activation function.
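As a sketch of the delta-rule training described in the perceptron section above, the following trains a single linear threshold unit. The data set, learning rate, and function name are illustrative assumptions; convergence is only guaranteed when the samples are linearly separable.

```python
def train_perceptron(samples, lr=0.1, epochs=100):
    """Train a single linear threshold unit with the perceptron (delta-style) rule.

    samples: list of (inputs, target) pairs with targets 0 or 1.
    Returns (weights, bias) after the weights stop changing or epochs run out.
    """
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        updated = False
        for x, target in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = target - y                    # difference between sample and output
            if error != 0:
                updated = True
                for i in range(n):
                    w[i] += lr * error * x[i]     # adjust weights toward the target
                b += lr * error
        if not updated:                           # every sample classified correctly
            break
    return w, b

# Logical AND is linearly separable, so a single threshold unit can learn it.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(and_data)
```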
https://en.wikipedia.org/wiki/Feedforward_neural_network
In mathematics, computable numbers are the real numbers that can be computed to within any desired precision by a finite, terminating algorithm. They are also known as the recursive numbers, effective numbers, computable reals, or recursive reals. The concept of a computable real number was introduced by Émile Borel in 1912, using the intuitive notion of computability available at the time. Equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus as the formal representation of algorithms. The computable numbers form a real closed field and can be used in the place of real numbers for many, but not all, mathematical purposes. ## Informal definition In the following, Marvin Minsky defines the numbers to be computed in a manner similar to those defined by Alan Turing in 1936; i.e., as "sequences of digits interpreted as decimal fractions" between 0 and 1: The key notions in the definition are (1) that some n is specified at the start, (2) for any n the computation only takes a finite number of steps, after which the machine produces the desired output and terminates. An alternate form of (2) – the machine successively prints all n of the digits on its tape, halting after printing the nth – emphasizes Minsky's observation: (3) That by use of a Turing machine, a finite definition – in the form of the machine's state table – is being used to define what is a potentially infinite string of decimal digits. This is however not the modern definition which only requires the result be accurate to within any given accuracy. The informal definition above is subject to a rounding problem called the table-maker's dilemma whereas the modern definition is not. ## Formal definition A real number a is computable if it can be approximated by some computable function $$ f:\mathbb{N}\to\mathbb{Z} $$ in the following manner: given any positive integer n, the function produces an integer f(n) such that: $$ {f(n)-1\over n} \leq a \leq {f(n)+1\over n}. $$ A complex number is called computable if its real and imaginary parts are computable. ### Equivalent definitions There are two similar definitions that are equivalent: - There exists a computable function which, given any positive rational error bound $$ \varepsilon $$ , produces a rational number r such that $$ |r - a| \leq \varepsilon. $$ - There is a computable sequence of rational numbers $$ q_i $$ converging to $$ a $$ such that $$ |q_i - q_{i+1}| < 2^{-i}\, $$ for each i. There is another equivalent definition of computable numbers via computable Dedekind cuts. A computable Dedekind cut is a computable function $$ D\; $$ which when provided with a rational number $$ r $$ as input returns $$ D(r)=\mathrm{true}\; $$ or $$ D(r)=\mathrm{false}\; $$ , satisfying the following conditions: $$ \exists r D(r)=\mathrm{true}\; $$ $$ \exists r D(r)=\mathrm{false}\; $$ $$ (D(r)=\mathrm{true}) \wedge (D(s)=\mathrm{false}) \Rightarrow r<s\; $$ $$ D(r)=\mathrm{true} \Rightarrow \exists s>r, D(s)=\mathrm{true}.\; $$ An example is given by a program D that defines the cube root of 3. Assuming $$ q>0\; $$ this is defined by: $$ p^3<3 q^3 \Rightarrow D(p/q)=\mathrm{true}\; $$ $$ p^3>3 q^3 \Rightarrow D(p/q)=\mathrm{false}.\; $$ A real number is computable if and only if there is a computable Dedekind cut D corresponding to it. The function D is unique for each computable number (although of course two different programs may provide the same function).
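The cube-root-of-3 cut defined above translates directly into a short program. The sketch below implements $$ D(p/q) $$ exactly as stated, and adds a hypothetical bisection helper, not part of the definition itself, to show how a computable cut yields rational approximations of the number it determines.

```python
from fractions import Fraction

def cube_root_of_three_cut(r):
    """Computable Dedekind cut D for the cube root of 3, following the text.

    Returns True when the rational r lies below the cube root of 3 and False
    otherwise; equality never occurs because 3 is not a rational cube.
    """
    r = Fraction(r)
    p, q = r.numerator, r.denominator        # Fraction guarantees q > 0
    return p ** 3 < 3 * q ** 3

def approximate_from_cut(cut, low, high, eps):
    """Hypothetical helper: bisect a cut to within eps, given cut(low) and not cut(high)."""
    low, high, eps = Fraction(low), Fraction(high), Fraction(eps)
    while high - low > eps:
        mid = (low + high) / 2
        if cut(mid):
            low = mid                         # mid is still below the number
        else:
            high = mid                        # mid is above the number
    return (low + high) / 2

# The cube root of 3 is about 1.442; bisection between 1 and 2 narrows this down.
print(float(approximate_from_cut(cube_root_of_three_cut, 1, 2, Fraction(1, 10**6))))
```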
## Properties ### Not computably enumerable Assigning a Gödel number to each Turing machine definition produces a subset $$ S $$ of the natural numbers corresponding to the computable numbers and identifies a surjection from $$ S $$ to the computable numbers. There are only countably many Turing machines, showing that the computable numbers are subcountable. The set $$ S $$ of these Gödel numbers, however, is not computably enumerable (and consequently, neither are subsets of $$ S $$ that are defined in terms of it). This is because there is no algorithm to determine which Gödel numbers correspond to Turing machines that produce computable reals. In order to produce a computable real, a Turing machine must compute a total function, but the corresponding decision problem is in Turing degree 0′′. Consequently, there is no surjective computable function from the natural numbers to the set $$ S $$ of machines representing computable reals, and Cantor's diagonal argument cannot be used constructively to demonstrate uncountably many of them. While the set of real numbers is uncountable, the set of computable numbers is classically countable and thus almost all real numbers are not computable. Here, for any given computable number $$ x, $$ the well ordering principle provides that there is a minimal element in $$ S $$ which corresponds to $$ x $$ , and therefore there exists a subset consisting of the minimal elements, on which the map is a bijection. The inverse of this bijection is an injection into the natural numbers of the computable numbers, proving that they are countable. But, again, this subset is not computable, even though the computable reals are themselves ordered. ### Properties as a field The arithmetical operations on computable numbers are themselves computable in the sense that whenever real numbers a and b are computable then the following real numbers are also computable: a + b, a - b, ab, and a/b if b is nonzero. These operations are actually uniformly computable; for example, there is a Turing machine which on input (A,B, $$ \epsilon $$ ) produces output r, where A is the description of a Turing machine approximating a, B is the description of a Turing machine approximating b, and r is an $$ \epsilon $$ approximation of a + b. The fact that computable real numbers form a field was first proved by Henry Gordon Rice in 1954. Computable reals however do not form a computable field, because the definition of a computable field requires effective equality. ### Non-computability of the ordering The order relation on the computable numbers is not computable. Let A be the description of a Turing machine approximating the number $$ a $$ . Then there is no Turing machine which on input A outputs "YES" if $$ a > 0 $$ and "NO" if $$ a \le 0. $$ To see why, suppose the machine described by A keeps outputting 0 as $$ \epsilon $$ approximations. It is not clear how long to wait before deciding that the machine will never output an approximation which forces a to be positive. Thus the machine will eventually have to guess that the number will equal 0, in order to produce an output; the sequence may later become different from 0. This idea can be used to show that the machine is incorrect on some sequences if it computes a total function. A similar problem occurs when the computable reals are represented as Dedekind cuts. The same holds for the equality relation: the equality test is not computable. 
While the full order relation is not computable, the restriction of it to pairs of unequal numbers is computable. That is, there is a program that takes as input two Turing machines A and B approximating numbers $$ a $$ and $$ b $$ , where $$ a \ne b $$ , and outputs whether $$ a < b $$ or $$ a > b. $$ It is sufficient to use $$ \epsilon $$ -approximations where $$ \epsilon < |b-a|/2, $$ so by taking increasingly small $$ \epsilon $$ (approaching 0), one eventually can decide whether $$ a < b $$ or $$ a > b. $$ ### Other properties The computable real numbers do not share all the properties of the real numbers used in analysis. For example, the least upper bound of a bounded increasing computable sequence of computable real numbers need not be a computable real number. A sequence with this property is known as a Specker sequence, as the first construction is due to Ernst Specker in 1949. Despite the existence of counterexamples such as these, parts of calculus and real analysis can be developed in the field of computable numbers, leading to the study of computable analysis. Every computable number is arithmetically definable, but not vice versa. There are many arithmetically definable, noncomputable real numbers, including: - any number that encodes the solution of the halting problem (or any other undecidable problem) according to a chosen encoding scheme. - Chaitin's constant, $$ \Omega $$ , which is a type of real number that is Turing equivalent to the halting problem. Both of these examples in fact define an infinite set of definable, uncomputable numbers, one for each universal Turing machine. A real number is computable if and only if the set of natural numbers it represents (when written in binary and viewed as a characteristic function) is computable. The set of computable real numbers (as well as every countable, densely ordered subset of computable reals without ends) is order-isomorphic to the set of rational numbers. ## Digit strings and the Cantor and Baire spaces Turing's original paper defined computable numbers as follows: (The decimal expansion of a only refers to the digits following the decimal point.) Turing was aware that this definition is equivalent to the $$ \epsilon $$ -approximation definition given above. The argument proceeds as follows: if a number is computable in the Turing sense, then it is also computable in the $$ \epsilon $$ sense: if $$ n > \log_{10} (1/\epsilon) $$ , then the first n digits of the decimal expansion for a provide an $$ \epsilon $$ approximation of a. For the converse, we pick an $$ \epsilon $$ computable real number a and generate increasingly precise approximations until the nth digit after the decimal point is certain. This always generates a decimal expansion equal to a but it may improperly end in an infinite sequence of 9's in which case it must have a finite (and thus computable) proper decimal expansion. Unless certain topological properties of the real numbers are relevant, it is often more convenient to deal with elements of $$ 2^{\omega} $$ (total 0,1 valued functions) instead of reals numbers in $$ [0,1] $$ . The members of $$ 2^{\omega} $$ can be identified with binary decimal expansions, but since the decimal expansions $$ .d_1d_2\ldots d_n0111\ldots $$ and $$ .d_1d_2\ldots d_n10 $$ denote the same real number, the interval $$ [0,1] $$ can only be bijectively (and homeomorphically under the subset topology) identified with the subset of $$ 2^{\omega} $$ not ending in all 1's. 
Note that this property of decimal expansions means that it is impossible to effectively identify the computable real numbers defined in terms of a decimal expansion and those defined in the $$ \epsilon $$ approximation sense. Hirst has shown that there is no algorithm which takes as input the description of a Turing machine which produces $$ \epsilon $$ approximations for the computable number a, and produces as output a Turing machine which enumerates the digits of a in the sense of Turing's definition. Similarly, it means that the arithmetic operations on the computable reals are not effective on their decimal representations as when adding decimal numbers. In order to produce one digit, it may be necessary to look arbitrarily far to the right to determine if there is a carry to the current location. This lack of uniformity is one reason why the contemporary definition of computable numbers uses $$ \epsilon $$ approximations rather than decimal expansions. However, from a computability theoretic or measure theoretic perspective, the two structures $$ 2^{\omega} $$ and $$ [0,1] $$ are essentially identical. Thus, computability theorists often refer to members of $$ 2^{\omega} $$ as reals. While $$ 2^{\omega} $$ is totally disconnected, for questions about $$ \Pi^0_1 $$ classes or randomness it is easier to work in $$ 2^{\omega} $$ . Elements of $$ \omega^{\omega} $$ are sometimes called reals as well and though containing a homeomorphic image of $$ \mathbb{R} $$ , $$ \omega^{\omega} $$ isn't even locally compact (in addition to being totally disconnected). This leads to genuine differences in the computational properties. For instance the $$ x \in \mathbb{R} $$ satisfying $$ \forall(n \in \omega)\phi(x,n) $$ , with $$ \phi(x,n) $$ quantifier free, must be computable while the unique $$ x \in \omega^{\omega} $$ satisfying a universal formula may have an arbitrarily high position in the hyperarithmetic hierarchy. ## Use in place of the reals The computable numbers include the specific real numbers which appear in practice, including all real algebraic numbers, as well as e, π, and many other transcendental numbers. Though the computable reals exhaust those reals we can calculate or approximate, the assumption that all reals are computable leads to substantially different conclusions about the real numbers. The question naturally arises of whether it is possible to dispose of the full set of reals and use computable numbers for all of mathematics. This idea is appealing from a constructivist point of view, and has been pursued by the Russian school of constructive mathematics. To actually develop analysis over computable numbers, some care must be taken. For example, if one uses the classical definition of a sequence, the set of computable numbers is not closed under the basic operation of taking the supremum of a bounded sequence (for example, consider a Specker sequence, see the section above). This difficulty is addressed by considering only sequences which have a computable modulus of convergence. The resulting mathematical theory is called computable analysis. ## Implementations of exact arithmetic Computer packages representing real numbers as programs computing approximations have been proposed as early as 1985, under the name "exact arithmetic". Modern examples include the CoRN library (Coq), and the RealLib package (C++). A related line of work is based on taking a real RAM program and running it with rational or floating-point numbers of sufficient precision, such as the package.
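To illustrate the ε-approximation viewpoint used in this article, the sketch below models a computable real as a function that maps a positive rational error bound to a rational approximation (one of the equivalent definitions quoted earlier). It implements the uniformly computable addition mentioned in the field-properties section and decides the order of two unequal numbers by shrinking ε, as described above; all names are illustrative, and compare_unequal never halts when its arguments happen to be equal, reflecting the non-computability of the equality test.

```python
from fractions import Fraction

# A computable real is modelled as: eps (positive rational) -> rational within eps.

def const(q):
    q = Fraction(q)
    return lambda eps: q                          # a rational number is trivially computable

def sqrt2(eps):
    """Rational approximation of the square root of 2 to within eps, by bisection."""
    eps = Fraction(eps)
    low, high = Fraction(1), Fraction(2)
    while high - low > eps:
        mid = (low + high) / 2
        if mid * mid < 2:
            low = mid
        else:
            high = mid
    return low                                    # the true value lies in [low, high]

def add(a, b):
    """Uniformly computable addition: query each summand to half the requested accuracy."""
    return lambda eps: a(Fraction(eps) / 2) + b(Fraction(eps) / 2)

def compare_unequal(a, b):
    """Decide whether a < b or a > b, assuming a != b; loops forever on equal inputs."""
    eps = Fraction(1, 2)
    while True:
        ra, rb = a(eps), b(eps)
        if abs(ra - rb) > 2 * eps:                # the two error intervals are disjoint
            return "a < b" if ra < rb else "a > b"
        eps /= 2                                  # take increasingly small epsilon

three_halves = const(Fraction(3, 2))
print(compare_unequal(sqrt2, three_halves))               # sqrt(2) ≈ 1.414 is below 3/2
print(float(add(sqrt2, const(1))(Fraction(1, 10**6))))    # ≈ 2.414214
```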
https://en.wikipedia.org/wiki/Computable_number
"In computer science, a red–black tree is a self-balancing binary search tree data structure noted(...TRUNCATED)
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree