Calculus, originally called infinitesimal calculus or 'the calculus of infinitesimals', is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations.
It has two major branches, differential calculus and integral calculus. Differential calculus concerns instantaneous rates of change and the slopes of curves. Integral calculus concerns accumulation of quantities and the areas under and between curves. These two branches are related to each other by the fundamental theorem of calculus. Both branches make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit.[1]
Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz.[2][3] Today, calculus has widespread uses in science, engineering, and economics.[4]
In mathematics education, calculus denotes courses of elementary mathematical analysis, which are mainly devoted to the study of functions and limits. The word calculus (plural calculi) is a Latin word, meaning originally 'small pebble' (this meaning is kept in medicine). Because such pebbles were used for calculation, the meaning of the word has evolved and today usually means a method of computation. It is therefore used for naming specific methods of calculation and related theories, such as propositional calculus, Ricci calculus, calculus of variations, lambda calculus, and process calculus.
History
Modern calculus was developed in 17th-century Europe by Isaac Newton and Gottfried Wilhelm Leibniz (independently of each other, first publishing around the same time) but elements of it appeared in ancient Greece, then in China and the Middle East, and still later again in medieval Europe and in India.
Ancient
Archimedes used the method of exhaustion to calculate the area under a parabola.
The ancient period introduced some of the ideas that led to integral calculus, but does not seem to have developed these ideas in a rigorous and systematic way. Calculations of volume and area, one goal of integral calculus, can be found in the Egyptian Moscow papyrus (13th dynasty, c. 1820 BC), but the formulas are simple instructions, with no indication as to method, and some of them lack major components.[5]
From the age of Greek mathematics, Eudoxus (c. 408–355 BC) used the method of exhaustion, which foreshadows the concept of the limit, to calculate areas and volumes, while Archimedes (c. 287–212 BC) developed this idea further, inventing heuristics which resemble the methods of integral calculus.[6]
The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD in order to find the area of a circle.[7] In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method[8][9] that would later be called Cavalieri's principle to find the volume of a sphere.
Medieval
Alhazen, 11th century Arab mathematician and physicist
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 CE), derived a formula for the sum of fourth powers. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.[10]
In the 14th century, Indian mathematicians gave a non-rigorous method, resembling differentiation, applicable to some trigonometric functions. Madhava of Sangamagrama and the Kerala School of Astronomy and Mathematics thereby stated components of calculus. A complete theory encompassing these components is now well known in the Western world as the Taylor series or infinite series approximations.[11] However, they were not able to 'combine many differing ideas under the two unifying themes of the derivative and the integral, show the connection between the two, and turn calculus into the great problem-solving tool we have today'.[10]
Modern
The calculus was the first achievement of modern mathematics and it is difficult to overestimate its importance. I think it defines more unequivocally than anything else the inception of modern mathematics, and the system of mathematical analysis, which is its logical development, still constitutes the greatest technical advance in exact thinking.
—John von Neumann[12]
In Europe, the foundational work was a treatise written by Bonaventura Cavalieri, who argued that volumes and areas should be computed as the sums of the volumes and areas of infinitesimally thin cross-sections. The ideas were similar to Archimedes' in The Method, but this treatise is believed to have been lost in the 13th century, and was only rediscovered in the early 20th century, and so would have been unknown to Cavalieri. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term.[13] The combination was achieved by John Wallis, Isaac Barrow, and James Gregory, the latter two proving the second fundamental theorem of calculus around 1670.
Isaac Newton developed the use of calculus in his laws of motion and gravitation.
The product rule and chain rule,[14] the notions of higher derivatives and Taylor series,[15] and of analytic functions[citation needed] were used by Isaac Newton in an idiosyncratic notation which he applied to solve problems of mathematical physics. In his works, Newton rephrased his ideas to suit the mathematical idiom of the time, replacing calculations with infinitesimals by equivalent geometrical arguments which were considered beyond reproach. He used the methods of calculus to solve the problem of planetary motion, the shape of the surface of a rotating fluid, the oblateness of the earth, the motion of a weight sliding on a cycloid, and many other problems discussed in his Principia Mathematica (1687). In other work, he developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable.
Gottfried Wilhelm Leibniz was the first to state clearly the rules of calculus.
These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was originally accused of plagiarism by Newton.[16] He is now regarded as an independent inventor of and contributor to calculus. His contribution was to provide a clear set of rules for working with infinitesimal quantities, allowing the computation of second and higher derivatives, and providing the product rule and chain rule, in their differential and integral forms. Unlike Newton, Leibniz paid a lot of attention to the formalism, often spending days determining appropriate symbols for concepts.
Today, Leibniz and Newton are usually both given credit for independently inventing and developing calculus. Newton was the first to apply calculus to general physics and Leibniz developed much of the notation used in calculus today. The basic insights that both Newton and Leibniz provided were the laws of differentiation and integration, second and higher derivatives, and the notion of an approximating polynomial series. By Newton's time, the fundamental theorem of calculus was known.
When Newton and Leibniz first published their results, there was great controversy over which mathematician (and therefore which country) deserved credit. Newton derived his results first (later to be published in his Method of Fluxions), but Leibniz published his 'Nova Methodus pro Maximis et Minimis' first. Newton claimed Leibniz stole ideas from his unpublished notes, which Newton had shared with a few members of the Royal Society. This controversy divided English-speaking mathematicians from continental European mathematicians for many years, to the detriment of English mathematics.[citation needed] A careful examination of the papers of Leibniz and Newton shows that they arrived at their results independently, with Leibniz starting first with integration and Newton with differentiation. It is Leibniz, however, who gave the new discipline its name. Newton called his calculus 'the science of fluxions'.
Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi.[17][18]
Foundations
In calculus, foundations refers to the rigorous development of the subject from axioms and definitions. In early calculus the use of infinitesimal quantities was thought unrigorous, and was fiercely criticized by a number of authors, most notably Michel Rolle and Bishop Berkeley. Berkeley famously described infinitesimals as the ghosts of departed quantities in his book The Analyst in 1734. Working out a rigorous foundation for calculus occupied mathematicians for much of the century following Newton and Leibniz, and is still to some extent an active area of research today.
Several mathematicians, including Maclaurin, tried to prove the soundness of using infinitesimals, but it would not be until 150 years later when, due to the work of Cauchy and Weierstrass, a way was finally found to avoid mere 'notions' of infinitely small quantities.[19] The foundations of differential and integral calculus had been laid. In Cauchy's Cours d'Analyse, we find a broad range of foundational approaches, including a definition of continuity in terms of infinitesimals, and a (somewhat imprecise) prototype of an (ε, δ)-definition of limit in the definition of differentiation.[20] In his work Weierstrass formalized the concept of limit and eliminated infinitesimals (although his definition can actually validate nilsquare infinitesimals). Following the work of Weierstrass, it eventually became common to base calculus on limits instead of infinitesimal quantities, though the subject is still occasionally called 'infinitesimal calculus'. Bernhard Riemann used these ideas to give a precise definition of the integral. It was also during this period that the ideas of calculus were generalized to Euclidean space and the complex plane.
In modern mathematics, the foundations of calculus are included in the field of real analysis, which contains full definitions and proofs of the theorems of calculus. The reach of calculus has also been greatly extended. Henri Lebesgue invented measure theory and used it to define integrals of all but the most pathological functions. Laurent Schwartz introduced distributions, which can be used to take the derivative of any function whatsoever.
Limits are not the only rigorous approach to the foundation of calculus. Another way is to use Abraham Robinson's non-standard analysis. Robinson's approach, developed in the 1960s, uses technical machinery from mathematical logic to augment the real number system with infinitesimal and infinite numbers, as in the original Newton-Leibniz conception. The resulting numbers are called hyperreal numbers, and they can be used to give a Leibniz-like development of the usual rules of calculus. There is also smooth infinitesimal analysis, which differs from non-standard analysis in that it mandates neglecting higher power infinitesimals during derivations.
Significance
While many of the ideas of calculus had been developed earlier in Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Isaac Newton and Gottfried Wilhelm Leibniz built on the work of earlier mathematicians to introduce its basic principles. The development of calculus was built on earlier concepts of instantaneous motion and area underneath curves.
Applications of differential calculus include computations involving velocity and acceleration, the slope of a curve, and optimization. Applications of integral calculus include computations involving area, volume, arc length, center of mass, work, and pressure. More advanced applications include power series and Fourier series.
Calculus is also used to gain a more precise understanding of the nature of space, time, and motion. For centuries, mathematicians and philosophers wrestled with paradoxes involving division by zero or sums of infinitely many numbers. These questions arise in the study of motion and area. The ancient Greek philosopher Zeno of Elea gave several famous examples of such paradoxes. Calculus provides tools, especially the limit and the infinite series, that resolve the paradoxes.
Principles
Limits and infinitesimals
Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, 'infinitely small'. For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, …, and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. The symbols dx and dy were taken to be infinitesimal, and the derivative dy/dx was simply their ratio.
The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. However, the concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals.
In the late 19th century, infinitesimals were replaced within academia by the epsilon, delta approach to limits. Limits describe the value of a function at a certain input in terms of its values at nearby inputs. They capture small-scale behavior in the context of the real number system. In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by very small numbers, and the infinitely small behavior of the function is found by taking the limiting behavior for smaller and smaller numbers. Limits were thought to provide a more rigorous foundation for calculus, and for this reason they became the standard approach during the twentieth century.
Differential calculus
Tangent line at (x, f(x)). The derivative f′(x) of a curve at a point is the slope (rise over run) of the line tangent to that curve at that point.
Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the derivative function or just the derivative of the original function. In formal terms, the derivative is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by deriving the squaring function turns out to be the doubling function.
In more explicit terms, the 'doubling function' may be denoted by g(x) = 2x and the 'squaring function' by f(x) = x². The 'derivative' now takes the function f(x), defined by the expression 'x²', as an input, that is, all the information—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to output another function: the function g(x) = 2x, as it turns out.
The most common symbol for a derivative is an apostrophe-like mark called prime. Thus, the derivative of a function called f is denoted by f′, pronounced 'f prime'. For instance, if f(x) = x² is the squaring function, then f′(x) = 2x is its derivative (the doubling function g from above). This notation is known as Lagrange's notation.
If the input of the function represents time, then the derivative represents change with respect to time. For example, if f is a function that takes a time as input and gives the position of a ball at that time as output, then the derivative of f is how the position is changing in time, that is, it is the velocity of the ball.
If a function is linear (that is, if the graph of the function is a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and:

m = Δy / Δx = (change in y) / (change in x).
This gives an exact value for the slope of a straight line. If the graph of the function is not a straight line, however, then the change in y divided by the change in x varies. Derivatives give an exact meaning to the notion of change in output with respect to change in input. To be concrete, let f be a function, and fix a point a in the domain of f. (a, f(a)) is a point on the graph of the function. If h is a number close to zero, then a + h is a number close to a. Therefore, (a + h, f(a + h)) is close to (a, f(a)). The slope between these two points is

m = (f(a + h) − f(a)) / ((a + h) − a) = (f(a + h) − f(a)) / h.
This expression is called a difference quotient. A line through two points on a curve is called a secant line, so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). The secant line is only an approximation to the behavior of the function at the point a because it does not account for what happens between a and a + h. It is not possible to discover the behavior at a by setting h to zero because this would require dividing by zero, which is undefined. The derivative is defined by taking the limit as h tends to zero, meaning that it considers the behavior of f for all small values of h and extracts a consistent value for the case when h equals zero:

f′(a) = lim(h → 0) (f(a + h) − f(a)) / h.
Geometrically, the derivative is the slope of the tangent line to the graph of f at a. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function f.
Here is a particular example, the derivative of the squaring function at the input 3. Let f(x) = x² be the squaring function. Then

f′(3) = lim(h → 0) ((3 + h)² − 3²) / h = lim(h → 0) (6h + h²) / h = lim(h → 0) (6 + h) = 6.
The derivative f′(x) of a curve at a point is the slope of the line tangent to that curve at that point. This slope is determined by considering the limiting value of the slopes of secant lines. Here the function involved (drawn in red) is f(x) = x³ − x. The tangent line (in green) which passes through the point (−3/2, −15/8) has a slope of 23/4. Note that the vertical and horizontal scales in this image are different.
The slope of the tangent line to the squaring function at the point (3, 9) is 6, that is to say, it is going up six times as fast as it is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the derivative function of the squaring function, or just the derivative of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function.
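The limit process described above can be checked numerically. This is a minimal sketch (the function and point match the squaring-function example in the text):

```python
def difference_quotient(f, a, h):
    """Slope of the secant line through (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

f = lambda x: x ** 2  # the squaring function

# As h shrinks, the secant slopes approach the tangent slope at a = 3.
for h in (0.1, 0.01, 0.001):
    print(h, difference_quotient(f, 3, h))
# The slopes tend to 6, the derivative of the squaring function at 3.
```

For f(x) = x² the quotient simplifies to 6 + h exactly, so the convergence to 6 is visible even at coarse step sizes.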
Leibniz notation
A common notation, introduced by Leibniz, for the derivative in the example above is

dy/dx = 2x.
In an approach based on limits, the symbol dy/dx is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above. Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, dy being the infinitesimally small change in y caused by an infinitesimally small change dx applied to x. We can also think of d/dx as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example:

d/dx (x²) = 2x.
In this usage, the dx in the denominator is read as 'with respect to x'. Another example of correct notation could be:

d/dx (x³ + 2x) = 3x² + 2.
Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like dx and dy as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative.
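The operator view of d/dx, a function in and its derivative out, can be sketched with a numerical differentiation operator (the central-difference step h is an arbitrary choice, not part of the notation itself):

```python
def d_dx(f, h=1e-6):
    """A sketch of the d/dx operator: takes a function as input and
    returns another function (a central-difference approximation of
    its derivative) as output."""
    def derivative(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return derivative

square = lambda x: x ** 2
double = d_dx(square)   # approximately the doubling function
print(double(3.0))      # close to 6.0
```

Note that `d_dx` does not return a number; it returns a new function, mirroring the text's description of the derivative as an operator on functions.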
Integral calculus
Integral calculus is the study of the definitions, properties, and applications of two related concepts, the indefinite integral and the definite integral. The process of finding the value of an integral is called integration. In technical language, integral calculus studies two related linear operators.
The indefinite integral, also known as the antiderivative, is the inverse operation to the derivative. F is an indefinite integral of f when f is a derivative of F. (This use of lower- and upper-case letters for a function and its indefinite integral is common in calculus.)
The definite integral inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis. The technical definition of the definite integral involves the limit of a sum of areas of rectangles, called a Riemann sum.
A motivating example is the distance traveled in a given time.
If the speed is constant, only multiplication is needed, but if the speed changes, a more powerful method of finding the distance is necessary. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled.
Constant velocity
Integration can be thought of as measuring the area under a curve, defined by f(x), between two points (here a and b).
When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, travelling a steady 50 mph for 3 hours results in a total distance of 150 miles. In the diagram on the left, when constant velocity and time are graphed, these two values form a rectangle with height equal to the velocity and width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and distance traveled can be extended to any irregularly shaped region exhibiting a fluctuating velocity over a given time period. If f(x) in the diagram on the right represents speed as it varies over time, the distance traveled (between the times represented by a and b) is the area of the shaded region s.
To approximate that area, an intuitive method would be to divide up the distance between a and b into a number of equal segments, the length of each segment represented by the symbol Δx. For each small segment, we can choose one value of the function f(x). Call that value h. Then the area of the rectangle with base Δx and height h gives the distance (time Δx multiplied by speed h) traveled in that segment. Associated with each segment is the average value of the function above it, f(x) = h. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for Δx will give more rectangles and in most cases a better approximation, but for an exact answer we need to take a limit as Δx approaches zero.
The symbol of integration is ∫, an elongated S (the S stands for 'sum'). The definite integral is written as:

∫_a^b f(x) dx
and is read 'the integral from a to b of f-of-x with respect to x.' The Leibniz notation dx is intended to suggest dividing the area under the curve into an infinite number of rectangles, so that their width Δx becomes the infinitesimally small dx. In a formulation of the calculus based on limits, the notation

∫_a^b … dx
is to be understood as an operator that takes a function as an input and gives a number, the area, as an output. The terminating differential, dx, is not a number, and is not being multiplied by f(x), although, serving as a reminder of the Δx limit definition, it can be treated as such in symbolic manipulations of the integral. Formally, the differential indicates the variable over which the function is integrated and serves as a closing bracket for the integration operator.
The indefinite integral, or antiderivative, is written:

∫ f(x) dx.
Functions differing by only a constant have the same derivative, and it can be shown that the antiderivative of a given function is actually a family of functions differing only by a constant. Since the derivative of the function y = x² + C, where C is any constant, is y′ = 2x, the antiderivative of the latter is given by:

∫ 2x dx = x² + C.
The unspecified constant C present in the indefinite integral or antiderivative is known as the constant of integration.
Fundamental theorem
The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the fundamental theorem of calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.
The fundamental theorem of calculus states: If a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then

∫_a^b f(x) dx = F(b) − F(a).
Furthermore, for every x in the interval (a, b),

d/dx ∫_a^x f(t) dt = f(x).
This realization, made by both Newton and Leibniz, who based their results on earlier work by Isaac Barrow, was key to the proliferation of analytic results after their work became known. The fundamental theorem provides an algebraic method of computing many definite integrals—without performing limit processes—by finding formulas for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives, and are ubiquitous in the sciences.
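The theorem can be sketched numerically: a sum of thin rectangles for f(x) = 2x over [0, 3] agrees with F(3) − F(0) for the antiderivative F(x) = x² (the functions here are illustrative choices):

```python
def midpoint_sum(f, a, b, n=100000):
    """Approximate the definite integral of f on [a, b] with n
    midpoint rectangles (the limit-of-sums idea)."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

f = lambda x: 2 * x   # integrand
F = lambda x: x ** 2  # an antiderivative of f

lhs = midpoint_sum(f, 0.0, 3.0)  # definite integral, computed directly
rhs = F(3.0) - F(0.0)            # fundamental theorem: F(b) - F(a)
print(lhs, rhs)                  # both are (approximately) 9
```

No limit process is needed on the right-hand side; evaluating the antiderivative at the endpoints replaces the sum entirely, which is the practical content of the theorem.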
Applications
The logarithmic spiral of the Nautilus shell is a classical image used to depict the growth and change related to calculus.
Calculus is used in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other.
Physics makes particular use of calculus; all concepts in classical mechanics and electromagnetism are related through calculus. The mass of an object of known density, the moment of inertia of objects, and the total energy of an object within a conservative field can all be found by the use of calculus. An example of the use of calculus in mechanics is Newton's second law of motion: as historically stated, it expressly uses the term 'change of motion', which implies the derivative, saying: The change of momentum of a body is equal to the resultant force acting on the body and is in the same direction. Commonly expressed today as force = mass × acceleration, it invokes differential calculus because acceleration is the time derivative of velocity, or the second time derivative of position. Starting from knowing how an object is accelerating, we use calculus to derive its path.
Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus. Chemistry also uses calculus in determining reaction rates and radioactive decay. In biology, population dynamics starts with reproduction and death rates to model population changes.
Calculus can be used in conjunction with other mathematical disciplines. For example, it can be used with linear algebra to find the 'best fit' linear approximation for a set of points in a domain. Or it can be used in probability theory to determine the probability of a continuous random variable from an assumed density function. In analytic geometry, the study of graphs of functions, calculus is used to find high points and low points (maxima and minima), slope, concavity and inflection points.
Green's Theorem, which gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C, is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property.
Discrete Green's Theorem, which gives the relationship between a double integral of a function around a simple closed rectangular curve C and a linear combination of the antiderivative's values at corner points along the edge of the curve, allows fast calculation of sums of values in rectangular domains. For example, it can be used to efficiently calculate sums of rectangular domains in images, in order to rapidly extract features and detect objects; another algorithm that could be used is the summed area table.
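The summed-area table mentioned here can be sketched in a few lines of pure Python (assuming a list-of-lists image); once the table is built, any rectangular sum takes only four lookups:

```python
def summed_area_table(img):
    """Build S where S[i][j] holds the sum of img[0..i-1][0..j-1]."""
    h, w = len(img), len(img[0])
    S = [[0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        for j in range(w):
            # Each entry extends the running sums above and to the left,
            # subtracting the doubly counted overlap.
            S[i + 1][j + 1] = img[i][j] + S[i][j + 1] + S[i + 1][j] - S[i][j]
    return S

def rect_sum(S, r0, c0, r1, c1):
    """Sum of img[r0..r1-1][c0..c1-1] in O(1) via four table lookups."""
    return S[r1][c1] - S[r0][c1] - S[r1][c0] + S[r0][c0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
S = summed_area_table(img)
print(rect_sum(S, 0, 0, 2, 2))  # 1 + 2 + 4 + 5 = 12
```

This is the same inclusion-exclusion pattern that relates corner values of an antiderivative to the sum over the enclosed rectangle.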
In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel so as to maximize flow. From the decay laws for a particular drug's elimination from the body, it is used to derive dosing laws. In nuclear medicine, it is used to build models of radiation transport in targeted tumor therapies.
In economics, calculus allows for the determination of maximal profit by providing a way to easily calculate both marginal cost and marginal revenue.
Calculus is also used to find approximate solutions to equations; in practice, it is the standard way to solve differential equations and find roots in most applications. Examples are methods such as Newton's method, fixed-point iteration, and linear approximation. For instance, spacecraft use a variation of the Euler method to approximate curved courses within zero-gravity environments.
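As an illustrative sketch of the root-finding methods just mentioned, the following Python code implements Newton's method (the helper name `newton` and the tolerance settings are assumptions for illustration, not from the source). Each step replaces the curve by its tangent line, a linear approximation, and solves that linear equation instead.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Approximate a root of f via Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)  # tangent-line correction at the current iterate
        x -= step
        if abs(step) < tol:  # stop once successive iterates agree to tolerance
            break
    return x

# Approximate sqrt(2) as the positive root of f(x) = x^2 - 2, f'(x) = 2x.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # ≈ 1.4142135623…
```

A handful of iterations suffices here because Newton's method converges quadratically near a simple root.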
Varieties
Over the years, many reformulations of calculus have been investigated for different purposes.
Non-standard calculus
Imprecise calculations with infinitesimals were widely replaced with the rigorous (ε, δ)-definition of limit starting in the 1870s. Meanwhile, calculations with infinitesimals persisted and often led to correct results. This led Abraham Robinson to investigate whether it was possible to develop a number system with infinitesimal quantities over which the theorems of calculus were still valid. In 1960, building upon the work of Edwin Hewitt and Jerzy Łoś, he succeeded in developing non-standard analysis. The theory of non-standard analysis is rich enough to be applied in many branches of mathematics. As such, books and articles dedicated solely to the traditional theorems of calculus often go by the title non-standard calculus.
Smooth infinitesimal analysis
This is another reformulation of the calculus in terms of infinitesimals. Based on the ideas of F. W. Lawvere and employing the methods of category theory, it views all functions as being continuous and incapable of being expressed in terms of discrete entities. One aspect of this formulation is that the law of excluded middle does not hold.
Constructive analysis
Constructive mathematics is a branch of mathematics that insists that proofs of the existence of a number, function, or other mathematical object should give a construction of the object. As such constructive mathematics also rejects the law of excluded middle. Reformulations of calculus in a constructive framework are generally part of the subject of constructive analysis.
See also
Lists
Other related topics
- Precalculus (mathematical education)
References
- ^ DeBaggis, Henry F.; Miller, Kenneth S. (1966). Foundations of the Calculus. Philadelphia: Saunders. OCLC 527896.
- ^ Boyer, Carl B. (1959). The History of the Calculus and its Conceptual Development. New York: Dover. OCLC 643872.
- ^ Bardi, Jason Socrates (2006). The Calculus Wars: Newton, Leibniz, and the Greatest Mathematical Clash of All Time. New York: Thunder's Mouth Press. ISBN 1-56025-706-7.
- ^ Hoffmann, Laurence D.; Bradley, Gerald L. (2004). Calculus for Business, Economics, and the Social and Life Sciences (8th ed.). Boston: McGraw Hill. ISBN 0-07-242432-X.
- ^ Morris Kline, Mathematical Thought from Ancient to Modern Times, Vol. I.
- ^ Archimedes, Method, in The Works of Archimedes. ISBN 978-0-521-66160-7.
- ^ Dun, Liu; Fan, Dainian; Cohen, Robert Sonné (1966). A comparison of Archimedes' and Liu Hui's studies of circles. Chinese Studies in the History and Philosophy of Science and Technology. 130. Springer. p. 279. ISBN 978-0-7923-3463-7, pp. 279ff.
- ^ Katz, Victor J. (2008). A History of Mathematics (3rd ed.). Boston, MA: Addison-Wesley. p. 203. ISBN 978-0-321-38700-4.
- ^ Zill, Dennis G.; Wright, Scott; Wright, Warren S. (2009). Calculus: Early Transcendentals (3rd ed.). Jones & Bartlett Learning. p. xxvii. ISBN 978-0-7637-5995-7. Extract of page 27.
- ^ Katz, V.J. 1995. 'Ideas of Calculus in Islam and India.' Mathematics Magazine (Mathematical Association of America), 68(3):163–174.
- ^ 'Indian mathematics'.
- ^ von Neumann, J., 'The Mathematician', in Heywood, R.B., ed., The Works of the Mind, University of Chicago Press, 1947, pp. 180–196. Reprinted in Bródy, F., Vámos, T., eds., The Neumann Compendium, World Scientific Publishing Co. Pte. Ltd., 1995, ISBN 981-02-2201-7, pp. 618–626.
- ^ André Weil: Number Theory: An Approach through History from Hammurapi to Legendre. Boston: Birkhauser Boston, 1984, ISBN 0-8176-4565-9, p. 28.
- ^ Blank, Brian E.; Krantz, Steven George (2006). Calculus: Single Variable, Volume 1 (Illustrated ed.). Springer Science & Business Media. p. 248. ISBN 978-1-931914-59-8.
- ^ Ferraro, Giovanni (2007). The Rise and Development of the Theory of Series up to the Early 1820s (Illustrated ed.). Springer Science & Business Media. p. 87. ISBN 978-0-387-73468-2.
- ^ Leibniz, Gottfried Wilhelm. The Early Mathematical Manuscripts of Leibniz. Cosimo, Inc., 2008. p. 228.
- ^ Allaire, Patricia R. (2007). Foreword. A Biography of Maria Gaetana Agnesi, an Eighteenth-century Woman Mathematician. By Cupillari, Antonella (illustrated ed.). Edwin Mellen Press. p. iii. ISBN 978-0-7734-5226-8.
- ^Unlu, Elif (April 1995). 'Maria Gaetana Agnesi'. Agnes Scott College.
- ^Russell, Bertrand (1946). History of Western Philosophy. London: George Allen & Unwin Ltd. p. 857.
The great mathematicians of the seventeenth century were optimistic and anxious for quick results; consequently they left the foundations of analytical geometry and the infinitesimal calculus insecure. Leibniz believed in actual infinitesimals, but although this belief suited his metaphysics it had no sound basis in mathematics. Weierstrass, soon after the middle of the nineteenth century, showed how to establish the calculus without infinitesimals, and thus at last made it logically secure. Next came Georg Cantor, who developed the theory of continuity and infinite number. 'Continuity' had been, until he defined it, a vague word, convenient for philosophers like Hegel, who wished to introduce metaphysical muddles into mathematics. Cantor gave a precise significance to the word, and showed that continuity, as he defined it, was the concept needed by mathematicians and physicists. By this means a great deal of mysticism, such as that of Bergson, was rendered antiquated.
- ^ Grabiner, Judith V. (1981). The Origins of Cauchy's Rigorous Calculus. Cambridge: MIT Press. ISBN 978-0-387-90527-3.
Further reading
Books
- Boyer, Carl Benjamin (1949). The History of the Calculus and its Conceptual Development. Hafner. Dover edition 1959, ISBN 0-486-60509-4.
- Courant, Richard. Introduction to Calculus and Analysis 1. ISBN 978-3-540-65058-4.
- Edmund Landau. Differential and Integral Calculus, American Mathematical Society. ISBN 0-8218-2830-4.
- Robert A. Adams. (1999). Calculus: A Complete Course. ISBN 978-0-201-39607-2.
- Albers, Donald J.; Richard D. Anderson and Don O. Loftsgaarden, ed. (1986) Undergraduate Programs in the Mathematics and Computer Sciences: The 1985–1986 Survey, Mathematical Association of America No. 7.
- John Lane Bell: A Primer of Infinitesimal Analysis, Cambridge University Press, 1998. ISBN 978-0-521-62401-5. Uses synthetic differential geometry and nilpotent infinitesimals.
- Florian Cajori, 'The History of Notations of the Calculus.' Annals of Mathematics, 2nd Ser., Vol. 25, No. 1 (Sep. 1923), pp. 1–46.
- Leonid P. Lebedev and Michael J. Cloud: Approximating Perfection: A Mathematician's Journey into the World of Mechanics, Ch. 1: 'The Tools of Calculus', Princeton Univ. Press, 2004.
- Cliff Pickover. (2003). Calculus and Pizza: A Math Cookbook for the Hungry Mind. ISBN 978-0-471-26987-8.
- Michael Spivak. (September 1994). Calculus. Publish or Perish publishing. ISBN 978-0-914098-89-8.
- Tom M. Apostol. (1967). Calculus, Volume 1, One-Variable Calculus with an Introduction to Linear Algebra. Wiley. ISBN 978-0-471-00005-1.
- Tom M. Apostol. (1969). Calculus, Volume 2, Multi-Variable Calculus and Linear Algebra with Applications. Wiley. ISBN 978-0-471-00007-5.
- Silvanus P. Thompson and Martin Gardner. (1998). Calculus Made Easy. ISBN 978-0-312-18548-0.
- Mathematical Association of America. (1988). Calculus for a New Century; A Pump, Not a Filter, The Association, Stony Brook, NY. ED 300 252.
- Thomas/Finney. (1996). Calculus and Analytic Geometry, 9th ed., Addison Wesley. ISBN 978-0-201-53174-9.
- Weisstein, Eric W. 'Second Fundamental Theorem of Calculus.' From MathWorld—A Wolfram Web Resource.
- Howard Anton, Irl Bivens, Stephen Davis: Calculus, John Wiley and Sons Pte. Ltd., 2002. ISBN 978-81-265-1259-1.
- Larson, Ron, Bruce H. Edwards (2010). Calculus, 9th ed., Brooks Cole Cengage Learning. ISBN 978-0-547-16702-2.
- McQuarrie, Donald A. (2003). Mathematical Methods for Scientists and Engineers, University Science Books. ISBN 978-1-891389-24-5.
- Salas, Saturnino L.; Hille, Einar; Etgen, Garret J. (2007). Calculus: One and Several Variables (10th ed.). Wiley. ISBN 978-0-471-69804-3.
- Stewart, James (2012). Calculus: Early Transcendentals, 7th ed., Brooks Cole Cengage Learning. ISBN 978-0-538-49790-9.
- Thomas, George B., Maurice D. Weir, Joel Hass, Frank R. Giordano (2008), Calculus, 11th ed., Addison-Wesley. ISBN 0-321-48987-X.
Online books
- Boelkins, M. (2012). Active Calculus: a free, open text (PDF). Archived from the original on 30 May 2013. Retrieved 1 February 2013.
- Crowell, B. (2003). 'Calculus'. Light and Matter, Fullerton. Retrieved 6 May 2007 from http://www.lightandmatter.com/calc/calc.pdf
- Garrett, P. (2006). 'Notes on first year calculus'. University of Minnesota. Retrieved 6 May 2007 from http://www.math.umn.edu/~garrett/calculus/first_year/notes.pdf
- Faraz, H. (2006). 'Understanding Calculus'. Retrieved 6 May 2007 from UnderstandingCalculus.com, URL http://www.understandingcalculus.com (HTML only)
- Keisler, H.J. (2000). 'Elementary Calculus: An Approach Using Infinitesimals'. Retrieved 29 August 2010 from http://www.math.wisc.edu/~keisler/calc.html
- Mauch, S. (2004). 'Sean's Applied Math Book' (pdf). California Institute of Technology. Retrieved 6 May 2007 from https://web.archive.org/web/20070614183657/http://www.cacr.caltech.edu/~sean/applied_math.pdf
- Sloughter, Dan (2000). 'Difference Equations to Differential Equations: An introduction to calculus'. Retrieved 17 March 2009 from http://synechism.org/drupal/de2de/
- Stroyan, K.D. (2004). 'A brief introduction to infinitesimal calculus'. University of Iowa. Retrieved 6 May 2007 from https://web.archive.org/web/20050911104158/http://www.math.uiowa.edu/~stroyan/InfsmlCalculus/InfsmlCalc.htm (HTML only)
- Strang, G. (1991). 'Calculus' Massachusetts Institute of Technology. Retrieved 6 May 2007 from http://ocw.mit.edu/ans7870/resources/Strang/strangtext.htm
- Smith, William V. (2001). 'The Calculus'. Retrieved 4 July 2008 [1] (HTML only).
External links
- Hazewinkel, Michiel, ed. (2001) [1994], 'Calculus', Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
- Weisstein, Eric W. 'Calculus'. MathWorld.
- Topics on Calculus at PlanetMath.org.
- Calculus Made Easy (1914) by Silvanus P. Thompson Full text in PDF
- Calculus on In Our Time at the BBC
- Calculus.org: The Calculus page at University of California, Davis – contains resources and links to other sites
- COW: Calculus on the Web at Temple University – contains resources ranging from pre-calculus and associated algebra
- Online Integrator (WebMathematica) from Wolfram Research
- The Role of Calculus in College Mathematics from ERICDigests.org
- OpenCourseWare Calculus from the Massachusetts Institute of Technology
- Infinitesimal Calculus – an article on its historical development, in Encyclopedia of Mathematics, ed. Michiel Hazewinkel.
- Daniel Kleitman, MIT. 'Calculus for Beginners and Artists'.
- Calculus Problems and Solutions by D.A. Kouba
- The Excursion of Calculus, 1772 (in English and Arabic)
A definite integral of a function can be represented as the signed area of the region bounded by its graph.
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse operation, differentiation, being the other. Given a function f of a real variable x and an interval [a, b] of the real line, the definite integral

∫_a^b f(x) dx
is defined informally as the signed area of the region in the xy-plane that is bounded by the graph of f, the x-axis and the vertical lines x = a and x = b. The area above the x-axis adds to the total and that below the x-axis subtracts from the total.
The operation of integration, up to an additive constant, is the inverse of the operation of differentiation. For this reason, the term integral may also refer to the related notion of the antiderivative, a function F whose derivative is the given function f. In this case, it is called an indefinite integral and is written:

F(x) = ∫ f(x) dx.
The integrals discussed in this article are those termed definite integrals. It is the fundamental theorem of calculus that connects differentiation with the definite integral: if f is a continuous real-valued function defined on a closed interval [a, b], then, once an antiderivative F of f is known, the definite integral of f over that interval is given by

∫_a^b f(x) dx = F(b) − F(a).
The principles of integration were formulated independently by Isaac Newton and Gottfried Wilhelm Leibniz in the late 17th century, who thought of the integral as an infinite sum of rectangles of infinitesimal width. Bernhard Riemann gave a rigorous mathematical definition of integrals. It is based on a limiting procedure that approximates the area of a curvilinear region by breaking the region into thin vertical slabs. Beginning in the 19th century, more sophisticated notions of integrals began to appear, where the type of the function as well as the domain over which the integration is performed has been generalised. A line integral is defined for functions of two or more variables, and the interval of integration [a, b] is replaced by a curve connecting the two endpoints. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space.
History
Pre-calculus integration
The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus (ca. 370 BC), which sought to find areas and volumes by breaking them up into an infinite number of divisions for which the area or volume was known. This method was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle.
A similar method was independently developed in China around the 3rd century AD by Liu Hui, who used it to find the area of the circle. This method was later used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere (Shea 2007; Katz 2004, pp. 125–126).
In the Middle East, Hasan Ibn al-Haytham, Latinized as Alhazen (c. 965 – c. 1040 CE), derived a formula for the sum of fourth powers. He used the results to carry out what would now be called an integration of this function, where the formulae for the sums of integral squares and fourth powers allowed him to calculate the volume of a paraboloid.[1]
The next significant advances in integral calculus did not begin to appear until the 17th century. At this time, the work of Cavalieri with his method of Indivisibles, and work by Fermat, began to lay the foundations of modern calculus, with Cavalieri computing the integrals of xn up to degree n = 9 in Cavalieri's quadrature formula. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers.
Newton and Leibniz
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Leibniz and Newton. Leibniz published his work on calculus before Newton. The theorem demonstrates a connection between integration and differentiation. This connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Leibniz and Newton developed. Given the name infinitesimal calculus, it allowed for precise analysis of functions within continuous domains. This framework eventually became modern calculus, whose notation for integrals is drawn directly from the work of Leibniz.
Formalization
While Newton and Leibniz provided a systematic approach to integration, their work lacked a degree of rigour. Bishop Berkeley memorably attacked the vanishing increments used by Newton, calling them 'ghosts of departed quantities'. Calculus acquired a firmer footing with the development of limits. Integration was first rigorously formalized, using limits, by Riemann. Although all bounded piecewise continuous functions are Riemann-integrable on a bounded interval, subsequently more general functions were considered—particularly in the context of Fourier analysis—to which Riemann's definition does not apply, and Lebesgue formulated a different definition of integral, founded in measure theory (a subfield of real analysis). Other definitions of integral, extending Riemann's and Lebesgue's approaches, were proposed. These approaches based on the real number system are the ones most common today, but alternative approaches exist, such as a definition of integral as the standard part of an infinite Riemann sum, based on the hyperreal number system.
Historical notation
The notation for the indefinite integral was introduced by Gottfried Wilhelm Leibniz in 1675 (Burton 1988, p. 359; Leibniz 1899, p. 154). He adapted the integral symbol, ∫, from the letter ſ (long s), standing for summa (written as ſumma; Latin for 'sum' or 'total'). The modern notation for the definite integral, with limits above and below the integral sign, was first used by Joseph Fourier in Mémoires of the French Academy around 1819–20, reprinted in his book of 1822 (Cajori 1929, pp. 249–250; Fourier 1822, §231).
Isaac Newton used a small vertical bar above a variable to indicate integration, or placed the variable inside a box. The vertical bar was easily confused with ẋ or x′, which are used to indicate differentiation, and the box notation was difficult for printers to reproduce, so these notations were not widely adopted.
Applications
Integrals are used extensively in many areas of mathematics as well as in many other areas that rely on mathematics.
For example, in probability theory, integrals are used to determine the probability of some random variable falling within a certain range. Moreover, the integral of a probability density function over its entire domain must equal 1, which provides a test of whether a function with no negative values could be a density function or not.
Integrals can be used for computing the area of a two-dimensional region that has a curved boundary, as well as computing the volume of a three-dimensional object that has a curved boundary. The area of a two-dimensional region can be calculated using the aforementioned definite integral.
The volume of a three-dimensional object such as a disc or washer, as outlined in disc integration, can be computed using the equation for the volume of a cylinder, V = πr²h, where r is the radius, which in this case would be the distance from the curve of a function to the line about which it is being rotated. For a simple disc, the radius is the value of the function minus the given x-value or y-value of the line of rotation. For instance, the radius of a disc created by rotating the graph of a function f(x) around a horizontal line y = k is given by f(x) − k. To find the volume of the resulting solid, one integrates between bounds a and b chosen as the intersections of the line and the curve:

V = ∫_a^b π (f(x) − k)² dx.
The components of the above integral represent the variables in the equation for the volume of a cylinder, V = πr²h. The constant π is factored out, while the radius, r, is squared within the integral. The height, represented in the volume formula by h, is given in this integral by the infinitesimally small (in order to approximate the volume with the greatest possible accuracy) term dx.

Integrals are also used in physics, in areas like kinematics, to find quantities like displacement, time, and velocity. For example, in rectilinear motion, the displacement of an object over the time interval [t1, t2] is given by:

s = ∫_(t1)^(t2) v(t) dt,
where v(t) is the velocity expressed as a function of time. The work done by a force F(x) (given as a function of position) from an initial position x1 to a final position x2 is:

W = ∫_(x1)^(x2) F(x) dx.
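When the velocity or force has no convenient antiderivative, these displacement and work integrals can be approximated numerically. The following Python sketch uses the composite trapezoidal rule; the helper name `trapezoid` and the spring force F(x) = kx with k = 3 are illustrative assumptions, not from the source.

```python
def trapezoid(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] with the composite trapezoidal rule."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))  # endpoints carry half weight
    for i in range(1, n):
        total += f(a + i * h)    # interior points carry full weight
    return total * h

# Work stretching an ideal spring with hypothetical stiffness k = 3 (F = kx)
# from x = 0 to x = 2; the exact value is k * x**2 / 2 = 6.
work = trapezoid(lambda x: 3 * x, 0.0, 2.0)
print(work)  # ≈ 6.0
```

Because the trapezoidal rule is exact for linear integrands, this simple case reproduces the closed-form answer; for general F(x) the result improves as n grows.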
Integrals are also used in thermodynamics, where thermodynamic integration is used to calculate the difference in free energy between two given states.
Terminology and notation
Standard
The integral with respect to x of a real-valued function f of a real variable x on the interval [a, b] is written as

∫_a^b f(x) dx.
The integral sign ∫ represents integration. The symbol dx, called the differential of the variable x, indicates that the variable of integration is x. The function f(x) to be integrated is called the integrand. The symbol dx is separated from the integrand by a space (as shown). If a function has an integral, it is said to be integrable. The points a and b are called the limits of the integral. An integral where the limits are specified is called a definite integral. The integral is said to be over the interval [a, b].
If the integral goes from a finite value a to the upper limit infinity, the integral expresses the limit of the integral from a to a value b as b goes to infinity. If the value of the integral gets closer and closer to a finite value, the integral is said to converge to that value. If not, the integral is said to diverge.
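The convergence just described can be observed numerically. In this Python sketch (the helper name `trapezoid` is illustrative), the improper integral ∫_1^∞ x⁻² dx, whose exact value is 1, is truncated at increasingly large upper limits b:

```python
def trapezoid(f, a, b, n=100000):
    """Composite trapezoidal approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# ∫_1^b x^-2 dx = 1 - 1/b, so the improper integral converges to 1 as b → ∞.
values = [trapezoid(lambda x: x ** -2, 1.0, b) for b in (10.0, 100.0, 1000.0)]
print(values)  # approaches 1: ≈ [0.9, 0.99, 0.999]
```

Each truncation 1 − 1/b gets closer to the limit 1; an integrand such as 1/x, by contrast, would grow without bound and the integral would diverge.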
When the limits are omitted, as in

∫ f(x) dx,
the integral is called an indefinite integral, which represents a class of functions (the antiderivative) whose derivative is the integrand. The fundamental theorem of calculus relates the evaluation of definite integrals to indefinite integrals. Occasionally, limits of integration are omitted for definite integrals when the same limits occur repeatedly in a particular context. Usually, the author will make this convention clear at the beginning of the relevant text.
There are several extensions of the notation for integrals to encompass integration on unbounded domains and/or in multiple dimensions (see later sections of this article).
Meaning of the symbol dx
Historically, the symbol dx was taken to represent an infinitesimally 'small piece' of the independent variable x to be multiplied by the integrand and summed up in an infinite sense. While this notion is still heuristically useful, later mathematicians have deemed infinitesimal quantities to be untenable from the standpoint of the real number system.[2] In introductory calculus, the expression dx is therefore not assigned an independent meaning; instead, it is viewed as part of the symbol for integration and serves as its delimiter on the right side of the expression being integrated.
In more sophisticated contexts, dx can have its own significance, its meaning depending on the particular area of mathematics being discussed. When used in one of these ways, the original Leibniz notation is co-opted to apply to a generalization of the original definition of the integral. Some common interpretations of dx include: an integrator function in Riemann–Stieltjes integration (indicated by dα(x) in general), a measure in Lebesgue theory (indicated by dμ in general), or a differential form in exterior calculus. In the last case, even the letter d has an independent meaning, as the exterior derivative operator on differential forms.
Conversely, in advanced settings, it is not uncommon to leave out dx when only the simple Riemann integral is being used, or the exact type of integral is immaterial. For instance, one might write ∫(f + g) = ∫f + ∫g to express the linearity of the integral, a property shared by the Riemann integral and all generalizations thereof.
Variants
In modern Arabic mathematical notation, a reflected integral symbol is used instead of the symbol ∫, since the Arabic script and mathematical expressions go right to left.[3]
Some authors, particularly of European origin, use an upright 'd' to indicate the variable of integration (i.e., dx instead of dx), since properly speaking, 'd' is not a variable.
The symbol dx is not always placed after f(x), as for instance in

∫ 3 dx/(x² + 1)  or  ∫ dx 3/(x² + 1).
In the first expression, the differential is treated as an infinitesimal 'multiplicative' factor, formally following a 'commutative property' when 'multiplied' by the expression 3/(x2+1). In the second expression, showing the differentials first highlights and clarifies the variables that are being integrated with respect to, a practice particularly popular with physicists.
Interpretations of the integral
Integrals appear in many practical situations. If a swimming pool is rectangular with a flat bottom, then from its length, width, and depth we can easily determine the volume of water it can contain (to fill it), the area of its surface (to cover it), and the length of its edge (to rope it). But if it is oval with a rounded bottom, all of these quantities call for integrals. Practical approximations may suffice for such trivial examples, but precision engineering (of any discipline) requires exact and rigorous values for these elements.
Approximations to the integral of √x from 0 to 1, with 5 right-endpoint partitions (yellow) and 12 left-endpoint partitions (green)
To start off, consider the curve y = f(x) between x = 0 and x = 1 with f(x) = √x (see figure). We ask:
- What is the area under the function f, in the interval from 0 to 1?
Call this (yet unknown) area the (definite) integral of f. The notation for this integral will be

∫_0^1 √x dx.
As a first approximation, look at the unit square given by the sides x = 0 to x = 1 and y = f(0) = 0 and y = f(1) = 1. Its area is exactly 1. As it is, the true value of the integral must be somewhat less than 1. Decreasing the width of the approximation rectangles and increasing the number of rectangles gives a better result; so cross the interval in five steps, using the approximation points 0, 1/5, 2/5, and so on to 1. Fit a box for each step using the right end height of each curve piece, thus √(1/5), √(2/5), and so on to √1 = 1. Summing the areas of these rectangles, we get a better approximation for the sought integral, namely

(1/5)(√(1/5) + √(2/5) + √(3/5) + √(4/5) + √(5/5)) ≈ 0.7497.
We are taking a sum of finitely many function values of f, multiplied by the differences of two subsequent approximation points. We can easily see that the approximation is still too large. Using more steps produces a closer approximation, but will always be too high and will never be exact. Alternatively, replacing these subintervals by ones with the left end height of each piece, we will get an approximation that is too low: for example, with twelve such subintervals the approximate value for the area is 0.6203.
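The right- and left-endpoint sums described above are easy to reproduce. This Python sketch (the helper names `right_sum` and `left_sum` are illustrative) recovers the two approximations quoted in the text, 0.7497 and 0.6203, and shows the sums closing in on the true area as the number of steps grows:

```python
from math import sqrt

def right_sum(f, a, b, n):
    """Right-endpoint Riemann sum of f over [a, b] with n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(1, n + 1)) * h

def left_sum(f, a, b, n):
    """Left-endpoint Riemann sum of f over [a, b] with n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

print(round(right_sum(sqrt, 0, 1, 5), 4))   # 0.7497 — too high (√x is increasing)
print(round(left_sum(sqrt, 0, 1, 12), 4))   # 0.6203 — too low
print(right_sum(sqrt, 0, 1, 10 ** 6))       # approaches 2/3 as n grows
```

For an increasing function like √x, every right-endpoint sum overestimates and every left-endpoint sum underestimates; both converge to 2/3.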
The key idea is the transition from adding finitely many differences of approximation points multiplied by their respective function values to using infinitely many fine, or infinitesimal steps. When this transition is completed in the above example, it turns out that the area under the curve within the stated bounds is 2/3.
The notation

∫ f(x) dx
conceives the integral as a weighted sum, denoted by the elongated s, of function values, f(x), multiplied by infinitesimal step widths, the so-called differentials, denoted by dx.
Historically, after the failure of early efforts to rigorously interpret infinitesimals, Riemann formally defined integrals as a limit of weighted sums, so that the dx suggested the limit of a difference (namely, the interval width). Shortcomings of Riemann's dependence on intervals and continuity motivated newer definitions, especially the Lebesgue integral, which is founded on an ability to extend the idea of 'measure' in much more flexible ways. Thus the notation

∫_A f dμ
refers to a weighted sum in which the function values are partitioned, with μ measuring the weight to be assigned to each value. Here A denotes the region of integration.
Formal definitions
There are many ways of formally defining an integral, not all of which are equivalent. The differences exist mostly to deal with differing special cases which may not be integrable under other definitions, but also occasionally for pedagogical reasons. The most commonly used definitions of integral are Riemann integrals and Lebesgue integrals.
Riemann integral
The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval.[4] Let [a, b] be a closed interval of the real line; then a tagged partition of [a, b] is a finite sequence

a = x_0 ≤ t_1 ≤ x_1 ≤ t_2 ≤ x_2 ≤ … ≤ x_(n−1) ≤ t_n ≤ x_n = b.

This partitions the interval [a, b] into n sub-intervals [x_(i−1), x_i] indexed by i, each of which is 'tagged' with a distinguished point t_i ∈ [x_(i−1), x_i]. A Riemann sum of a function f with respect to such a tagged partition is defined as

Σ_(i=1..n) f(t_i)(x_i − x_(i−1));

thus each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the sub-interval width. Let Δ_i = x_i − x_(i−1) be the width of sub-interval i; then the mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, max_(i=1..n) Δ_i. The Riemann integral of a function f over the interval [a, b] is equal to S if:

- For all ε > 0 there exists δ > 0 such that, for any tagged partition of [a, b] with mesh less than δ, we have

|S − Σ_(i=1..n) f(t_i) Δ_i| < ε.
When the chosen tags give the maximum (respectively, minimum) value of each interval, the Riemann sum becomes an upper (respectively, lower) Darboux sum, suggesting the close connection between the Riemann integral and the Darboux integral.
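The upper and lower Darboux sums can be approximated by sampling each sub-interval for its largest and smallest function value. This Python sketch is illustrative (the name `darboux_sums` is mine, and sampling only approximates the true supremum and infimum for general functions):

```python
def darboux_sums(f, a, b, n, samples=1000):
    """Approximate the upper and lower Darboux sums of f over [a, b] with n
    equal subintervals, estimating sup/inf on each by dense sampling."""
    h = (b - a) / n
    upper = lower = 0.0
    for i in range(n):
        xs = [a + i * h + j * h / samples for j in range(samples + 1)]
        ys = [f(x) for x in xs]
        upper += max(ys) * h  # tallest rectangle covering the sub-interval
        lower += min(ys) * h  # shortest rectangle inside it
    return upper, lower

upper, lower = darboux_sums(lambda x: x * x, 0.0, 1.0, n=100)
print(lower, upper)  # both approach 1/3 as n grows, with lower ≤ 1/3 ≤ upper
```

For x² on [0, 1] the function is increasing, so the upper and lower sums coincide with the right- and left-endpoint Riemann sums, and their gap shrinks to zero as the mesh is refined, which is exactly the Darboux criterion for integrability.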
Lebesgue integral
Riemann–Darboux's integration (top) and Lebesgue integration (bottom)
It is often of interest, both in theory and applications, to be able to pass to the limit under the integral. For instance, a sequence of functions can frequently be constructed that approximate, in a suitable sense, the solution to a problem. Then the integral of the solution function should be the limit of the integrals of the approximations. However, many functions that can be obtained as limits are not Riemann-integrable, and so such limit theorems do not hold with the Riemann integral. Therefore, it is of great importance to have a definition of the integral that allows a wider class of functions to be integrated (Rudin 1987).
Such an integral is the Lebesgue integral, which exploits the following fact to enlarge the class of integrable functions: if the values of a function are rearranged over the domain, the integral of the function should remain the same. Thus Henri Lebesgue introduced the integral bearing his name, explaining it thus in a letter to Paul Montel:
I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral.
— Siegmund-Schultze (2008)
As Folland (1984, p. 56) puts it, 'To compute the Riemann integral of f, one partitions the domain [a, b] into subintervals', while in the Lebesgue integral, 'one is in effect partitioning the range of f '. The definition of the Lebesgue integral thus begins with a measure, μ. In the simplest case, the Lebesgue measure μ(A) of an interval A = [a, b] is its width, b − a, so that the Lebesgue integral agrees with the (proper) Riemann integral when both exist. In more complicated cases, the sets being measured can be highly fragmented, with no continuity and no resemblance to intervals.
Using the 'partitioning the range of f ' philosophy, the integral of a non-negative function f : R → R should be the sum over t of the areas of thin horizontal strips between y = t and y = t + dt. This area is just μ{ x : f(x) > t} dt. Let f∗(t) = μ{ x : f(x) > t}. The Lebesgue integral of f is then defined by (Lieb & Loss 2001)
where the integral on the right is an ordinary improper Riemann integral (f∗ is a strictly decreasing positive function, and therefore has a well-defined improper Riemann integral). For a suitable class of functions (the measurable functions) this defines the Lebesgue integral.
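The 'partition the range' idea can be checked numerically. In this hedged sketch (illustrative names, not from the source) we take f(x) = x on [0, 1], whose distribution function f∗(t) = μ{x : f(x) > t} equals 1 − t on [0, 1), and verify that integrating f∗ over t recovers the ordinary integral of f, namely 1/2.

```python
def f_star(t, xs, f, dx):
    """Approximate the Lebesgue measure of {x : f(x) > t} on a grid."""
    return sum(dx for x in xs if f(x) > t)

n = 500
dx = dt = 1.0 / n
xs = [(i + 0.5) * dx for i in range(n)]   # midpoints covering the domain [0, 1]
ts = [(j + 0.5) * dt for j in range(n)]   # the range of f is also [0, 1]
layer_cake = sum(f_star(t, xs, lambda x: x, dx) * dt for t in ts)
# layer_cake approximates the integral of x over [0, 1], i.e. 0.5
```

The horizontal-strip sum agrees with the vertical-strip (Riemann) sum, as the Lebesgue construction requires.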
A general measurable function f is Lebesgue-integrable if the sum of the absolute values of the areas of the regions between the graph of f and the x-axis is finite:
In that case, the integral is, as in the Riemannian case, the difference between the area above the x-axis and the area below the x-axis:
where
Other integrals[edit]
Although the Riemann and Lebesgue integrals are the most widely used definitions of the integral, a number of others exist, including:
- The Darboux integral, which is constructed using Darboux sums and is equivalent to a Riemann integral, meaning that a function is Darboux-integrable if and only if it is Riemann-integrable. Darboux integrals have the advantage of being simpler to define than Riemann integrals.
- The Riemann–Stieltjes integral, an extension of the Riemann integral.
- The Lebesgue–Stieltjes integral, further developed by Johann Radon, which generalizes the Riemann–Stieltjes and Lebesgue integrals.
- The Daniell integral, which subsumes the Lebesgue integral and Lebesgue–Stieltjes integral without the dependence on measures.
- The Haar integral, used for integration on locally compact topological groups, introduced by Alfréd Haar in 1933.
- The Henstock–Kurzweil integral, variously defined by Arnaud Denjoy, Oskar Perron, and (most elegantly, as the gauge integral) Jaroslav Kurzweil, and developed by Ralph Henstock.
- The Itô integral and Stratonovich integral, which define integration with respect to semimartingales such as Brownian motion.
- The Young integral, which is a kind of Riemann–Stieltjes integral with respect to certain functions of unbounded variation.
- The rough path integral, which is defined for functions equipped with some additional 'rough path' structure and generalizes stochastic integration against both semimartingales and processes such as the fractional Brownian motion.
Properties[edit]
Linearity[edit]
The collection of Riemann-integrable functions on a closed interval [a, b] forms a vector space under the operations of pointwise addition and multiplication by a scalar, and the operation of integration
is a linear functional on this vector space. Thus, firstly, the collection of integrable functions is closed under taking linear combinations; and, secondly, the integral of a linear combination is the linear combination of the integrals,
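Linearity is easy to exhibit numerically. The following hedged Python sketch (the quadrature helper and names are illustrative) checks that a midpoint approximation of the integral of αf + βg agrees with the same linear combination of the separate integrals.

```python
def integrate(f, a, b, n=10_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x * x
g = lambda x: 3.0 * x + 1.0
alpha, beta = 2.0, -0.5

lhs = integrate(lambda x: alpha * f(x) + beta * g(x), 0.0, 1.0)
rhs = alpha * integrate(f, 0.0, 1.0) + beta * integrate(g, 0.0, 1.0)
# lhs and rhs agree to floating-point accuracy: the approximating sums
# are themselves linear in f, just like the integral they approximate
```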
Similarly, the set of real-valued Lebesgue-integrable functions on a given measure space E with measure μ is closed under taking linear combinations and hence forms a vector space, and the Lebesgue integral
is a linear functional on this vector space, so that
More generally, consider the vector space of all measurable functions on a measure space (E, μ), taking values in a locally compact complete topological vector space V over a locally compact topological field K, f : E → V. Then one may define an abstract integration map assigning to each function f an element of V or the symbol ∞,
that is compatible with linear combinations. In this situation, the linearity holds for the subspace of functions whose integral is an element of V (i.e. 'finite'). The most important special cases arise when K is R, C, or a finite extension of the field Qp of p-adic numbers, and V is a finite-dimensional vector space over K, and when K = C and V is a complex Hilbert space.
Linearity, together with some natural continuity properties and normalisation for a certain class of 'simple' functions, may be used to give an alternative definition of the integral. This is the approach of Daniell for the case of real-valued functions on a set X, generalized by Nicolas Bourbaki to functions with values in a locally compact topological vector space. See (Hildebrandt 1953) for an axiomatic characterisation of the integral.
Inequalities[edit]
A number of general inequalities hold for Riemann-integrable functions defined on a closed and bounded interval [a, b] and can be generalized to other notions of integral (Lebesgue and Daniell).
- Upper and lower bounds. An integrable function f on [a, b] is necessarily bounded on that interval. Thus there are real numbers m and M so that m ≤ f (x) ≤ M for all x in [a, b]. Since the lower and upper sums of f over [a, b] are therefore bounded by, respectively, m(b − a) and M(b − a), it follows that
- Inequalities between functions. If f(x) ≤ g(x) for each x in [a, b] then each of the upper and lower sums of f is bounded above by the upper and lower sums, respectively, of g. Thus
- This is a generalization of the above inequalities, as M(b − a) is the integral of the constant function with value M over [a, b].
- In addition, if the inequality between functions is strict, then the inequality between integrals is also strict. That is, if f(x) < g(x) for each x in [a, b], then
- Subintervals. If [c, d] is a subinterval of [a, b] and f(x) is non-negative for all x, then
- Products and absolute values of functions. If f and g are two functions, then we may consider their pointwise products and powers, and absolute values:
- If f is Riemann-integrable on [a, b] then the same is true for |f|, and
- Moreover, if f and g are both Riemann-integrable then fg is also Riemann-integrable, and
- This inequality, known as the Cauchy–Schwarz inequality, plays a prominent role in Hilbert space theory, where the left hand side is interpreted as the inner product of two square-integrable functions f and g on the interval [a, b].
- Hölder's inequality. Suppose that p and q are two real numbers, 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1, and f and g are two Riemann-integrable functions. Then the functions |f|p and |g|q are also integrable and the following Hölder's inequality holds:
- For p = q = 2, Hölder's inequality becomes the Cauchy–Schwarz inequality.
- Minkowski inequality. Suppose that p ≥ 1 is a real number and f and g are Riemann-integrable functions. Then |f|p, |g|p and |f + g|p are also Riemann-integrable and the following Minkowski inequality holds:
- An analogue of this inequality for Lebesgue integral is used in construction of Lp spaces.
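The Cauchy–Schwarz inequality (Hölder with p = q = 2) can be checked on concrete functions. This hedged sketch uses a midpoint-rule helper with illustrative names; the discrete sums obey the same inequality as the integrals they approximate.

```python
import math

def integrate(f, a, b, n=10_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: math.sin(x)
g = lambda x: math.exp(-x)

# (integral of |f g|)^2  <=  (integral of f^2) * (integral of g^2)
lhs = integrate(lambda x: abs(f(x) * g(x)), 0.0, 2.0) ** 2
rhs = integrate(lambda x: f(x) ** 2, 0.0, 2.0) * integrate(lambda x: g(x) ** 2, 0.0, 2.0)
# lhs <= rhs, with equality only when f and g are proportional
```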
Conventions[edit]
In this section, f is a real-valued Riemann-integrable function. The integral
over an interval [a, b] is defined if a < b. This means that the upper and lower sums of the function f are evaluated on a partition a = x0 ≤ x1 ≤ ... ≤ xn = b whose values xi are increasing. Geometrically, this signifies that integration takes place 'left to right', evaluating f within intervals [xi, xi+1] where an interval with a higher index lies to the right of one with a lower index. The values a and b, the end-points of the interval, are called the limits of integration of f. Integrals can also be defined if a > b:
- Reversing limits of integration. If a > b then define
This, with a = b, implies:
- Integrals over intervals of length zero. If a is a real number then
The first convention is necessary in consideration of taking integrals over subintervals of [a, b]; the second says that an integral taken over a degenerate interval, or a point, should be zero. One reason for the first convention is that the integrability of f on an interval [a, b] implies that f is integrable on any subinterval [c, d], but in particular integrals have the property that:
- Additivity of integration on intervals. If c is any element of [a, b], then
With the first convention, the resulting relation
is then well-defined for any cyclic permutation of a, b, and c.
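Both conventions can be exercised in code. In this hedged sketch (illustrative names), a midpoint-rule integrator honors the orientation convention, and the additivity and cyclic relations follow: splitting [a, b] at an interior point c reproduces the whole integral, and the signed sum around the cycle a → b → c → a vanishes.

```python
def integrate(f, a, b, n=10_000):
    """Midpoint rule, with the orientation convention: if a > b the
    result is the negative of the integral from b to a."""
    if a > b:
        return -integrate(f, b, a, n)
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x ** 3 - x
a, b, c = 0.0, 2.0, 1.3

total = integrate(f, a, b)
split = integrate(f, a, c) + integrate(f, c, b)      # additivity at c
cyclic = integrate(f, a, b) + integrate(f, b, c) + integrate(f, c, a)
# total == split up to quadrature rounding, and cyclic is (numerically) 0
```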
Fundamental theorem of calculus[edit]
The fundamental theorem of calculus is the statement that differentiation and integration are inverse operations: if a continuous function is first integrated and then differentiated, the original function is retrieved. An important consequence, sometimes called the second fundamental theorem of calculus, allows one to compute integrals by using an antiderivative of the function to be integrated.
Statements of theorems[edit]
Fundamental theorem of calculus[edit]
Let f be a continuous real-valued function defined on a closed interval [a, b]. Let F be the function defined, for all x in [a, b], by
Then, F is continuous on [a, b], differentiable on the open interval (a, b), and
for all x in (a, b).
Second fundamental theorem of calculus[edit]
Let f be a real-valued function defined on a closed interval [a, b] that admits an antiderivative F on [a, b]. That is, f and F are functions such that for all x in [a, b],
If f is integrable on [a, b] then
Calculating integrals[edit]
The second fundamental theorem allows many integrals to be calculated explicitly. For example, to calculate the integral
of the square root function f(x) = x^(1/2) between 0 and 1, it is sufficient to find an antiderivative, that is, a function F(x) whose derivative equals f(x):
One such function is F(x) = (2/3)x^(3/2). Then the value of the integral in question is
This is a case of a general rule: for f(x) = x^q, with q ≠ −1, an antiderivative is F(x) = x^(q + 1)/(q + 1). Tables of this and similar antiderivatives can be used to calculate integrals explicitly, in much the same way that tables of derivatives can be used.
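The square-root example can be carried out in code. This hedged sketch evaluates F(1) − F(0) with the antiderivative F(x) = (2/3)x^(3/2), and cross-checks against a direct midpoint approximation of the integral.

```python
import math

def F(x):
    """Antiderivative of sqrt(x)."""
    return (2.0 / 3.0) * x ** 1.5

exact = F(1.0) - F(0.0)     # = 2/3, by the second fundamental theorem

# crude midpoint-rule check of the same integral
n = 100_000
h = 1.0 / n
approx = sum(math.sqrt((i + 0.5) * h) for i in range(n)) * h
# approx agrees with exact (2/3) to several decimal places
```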
Extensions[edit]
Improper integrals[edit]
The improper integral
has unbounded intervals for both domain and range.
A 'proper' Riemann integral assumes the integrand is defined and finite on a closed and bounded interval, bracketed by the limits of integration. An improper integral occurs when one or more of these conditions is not satisfied. In some cases such integrals may be defined by considering the limit of a sequence of proper Riemann integrals on progressively larger intervals.
If the interval is unbounded, for instance at its upper end, then the improper integral is the limit as that endpoint goes to infinity.
If the integrand is only defined or finite on a half-open interval, for instance (a, b], then again a limit may provide a finite result.
That is, the improper integral is the limit of proper integrals as one endpoint of the interval of integration approaches either a specified real number, or ∞, or −∞. In more complicated cases, limits are required at both endpoints, or at interior points.
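A standard instance: the improper integral of 1/x² from 1 to infinity equals 1, realized as the limit of proper integrals over [1, b] as b grows. This hedged sketch evaluates each proper integral via the antiderivative −1/x (the specific integrand is an illustration, not taken from the source).

```python
def proper_integral(b):
    """Integral of 1/x^2 over [1, b], via the antiderivative -1/x."""
    return (-1.0 / b) - (-1.0 / 1.0)

values = [proper_integral(b) for b in (10.0, 100.0, 1000.0, 1e6)]
# values increase toward the limit 1: 0.9, 0.99, 0.999, ...
```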
Multiple integration[edit]
Double integral as volume under a surface
Just as the definite integral of a positive function of one variable represents the area of the region between the graph of the function and the x-axis, the double integral of a positive function of two variables represents the volume of the region between the surface defined by the function and the plane that contains its domain. For example, a function in two dimensions depends on two real variables, x and y, and the integral of a function f over the rectangle R given as the Cartesian product of two intervals can be written
where the differential dA indicates that integration is taken with respect to area. This double integral can be defined using Riemann sums, and represents the (signed) volume under the graph of z = f(x,y) over the domain R. Under suitable conditions (e.g., if f is continuous), Fubini's theorem guarantees that this integral can be expressed as an equivalent iterated integral
This reduces the problem of computing a double integral to computing one-dimensional integrals. Because of this, another notation for the integral over R uses a double integral sign:
Integration over more general domains is possible. The integral of a function f, with respect to volume, over a subset D of ℝ^n is denoted by notation such as
or similar. See volume integral.
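Fubini's reduction to one-dimensional integrals can be shown in miniature. This hedged sketch (illustrative names) computes the double integral of f(x, y) = xy over the rectangle [0, 1] × [0, 2] as an iterated integral in either order; both agree with the exact value 1.

```python
def integrate(f, a, b, n=1_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x, y: x * y

# inner integral in y first, then x ...
dy_then_dx = integrate(lambda x: integrate(lambda y: f(x, y), 0.0, 2.0), 0.0, 1.0)
# ... or inner integral in x first, then y
dx_then_dy = integrate(lambda y: integrate(lambda x: f(x, y), 0.0, 1.0), 0.0, 2.0)
# both orders give (approximately) 1, the exact double integral
```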
Line integrals[edit]
A line integral sums together elements along a curve.
The concept of an integral can be extended to more general domains of integration, such as curved lines and surfaces. Such integrals are known as line integrals and surface integrals respectively. These have important applications in physics, as when dealing with vector fields.
A line integral (sometimes called a path integral) is an integral where the function to be integrated is evaluated along a curve. Various different line integrals are in use. In the case of a closed curve it is also called a contour integral.
The function to be integrated may be a scalar field or a vector field. The value of the line integral is the sum of values of the field at all points on the curve, weighted by some scalar function on the curve (commonly arc length or, for a vector field, the scalar product of the vector field with a differential vector in the curve). This weighting distinguishes the line integral from simpler integrals defined on intervals. Many simple formulas in physics have natural continuous analogs in terms of line integrals; for example, the fact that work is equal to force, F, multiplied by displacement, s, may be expressed (in terms of vector quantities) as:
For an object moving along a path C in a vector field F such as an electric field or gravitational field, the total work done by the field on the object is obtained by summing up the differential work done in moving from s to s + ds. This gives the line integral
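A line integral of a vector field can be approximated by summing F(r(t)) · r'(t) Δt along a parametrized curve. The field and path in this hedged Python sketch are illustrative, not taken from the article; they are chosen so the work reduces to force times displacement.

```python
def work(field, path, dpath, t0, t1, n=10_000):
    """Midpoint approximation of the line integral of `field` along `path`,
    where dpath(t) returns the derivative r'(t) of the parametrization."""
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        fx, fy = field(*path(t))
        dx, dy = dpath(t)
        total += (fx * dx + fy * dy) * h   # F(r(t)) . r'(t) dt
    return total

# Constant force F = (2, 0) along the straight path r(t) = (t, 0), 0 <= t <= 3:
# the work is force times displacement, 2 * 3 = 6.
w = work(lambda x, y: (2.0, 0.0),
         lambda t: (t, 0.0),
         lambda t: (1.0, 0.0),
         0.0, 3.0)
# w is 6.0, up to floating point
```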
Surface integrals[edit]
The definition of surface integral relies on splitting the surface into small surface elements.
A surface integral generalizes double integrals to integration over a surface (which may be a curved set in space); it can be thought of as the double integral analog of the line integral. The function to be integrated may be a scalar field or a vector field. The value of the surface integral is the sum of the field at all points on the surface. This can be achieved by splitting the surface into surface elements, which provide the partitioning for Riemann sums.
For an example of applications of surface integrals, consider a vector field v on a surface S; that is, for each point x in S, v(x) is a vector. Imagine that we have a fluid flowing through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S per unit time. To find the flux, we need to take the dot product of v with the unit surface normal to S at each point, which will give us a scalar field, which we integrate over the surface:
The fluid flux in this example may be from a physical fluid such as water or air, or from electrical or magnetic flux. Thus surface integrals have applications in physics, particularly with the classical theory of electromagnetism.
Contour integrals[edit]
In complex analysis, the integrand is a complex-valued function of a complex variable z instead of a real function of a real variable x. When a complex function is integrated along a curve in the complex plane, the integral is denoted as follows
This is known as a contour integral.
Integrals of differential forms[edit]
A differential form is a mathematical concept in the fields of multivariable calculus, differential topology, and tensors. Differential forms are organized by degree. For example, a one-form is a weighted sum of the differentials of the coordinates, such as:
where E, F, G are functions in three dimensions. A differential one-form can be integrated over an oriented path, and the resulting integral is just another way of writing a line integral. Here the basic differentials dx, dy, dz measure infinitesimal oriented lengths parallel to the three coordinate axes.
A differential two-form is a sum of the form
Here the basic two-forms measure oriented areas parallel to the coordinate two-planes. The symbol ∧ denotes the wedge product, which is similar to the cross product in the sense that the wedge product of two forms representing oriented lengths represents an oriented area. A two-form can be integrated over an oriented surface, and the resulting integral is equivalent to the surface integral giving the flux.
Unlike the cross product, and the three-dimensional vector calculus, the wedge product and the calculus of differential forms makes sense in arbitrary dimension and on more general manifolds (curves, surfaces, and their higher-dimensional analogs). The exterior derivative plays the role of the gradient and curl of vector calculus, and Stokes' theorem simultaneously generalizes the three theorems of vector calculus: the divergence theorem, Green's theorem, and the Kelvin-Stokes theorem.
Summations[edit]
The discrete equivalent of integration is summation. Summations and integrals can be put on the same foundations using the theory of Lebesgue integrals or time scale calculus.
Computation[edit]
Analytical[edit]
The most basic technique for computing definite integrals of one real variable is based on the fundamental theorem of calculus. Let f(x) be the function of x to be integrated over a given interval [a, b]. Then, find an antiderivative of f; that is, a function F such that F′ = f on the interval. Provided the integrand and integral have no singularities on the path of integration, by the fundamental theorem of calculus,
The integral is not actually the antiderivative, but the fundamental theorem provides a way to use antiderivatives to evaluate definite integrals.
The most difficult step is usually to find the antiderivative of f. It is rarely possible to glance at a function and write down its antiderivative. More often, it is necessary to use one of the many techniques that have been developed to evaluate integrals. Most of these techniques rewrite one integral as a different one which is hopefully more tractable. Techniques include integration by substitution, integration by parts, trigonometric substitution, and integration by partial fractions.
Alternative methods exist to compute more complex integrals. Many nonelementary integrals can be expanded in a Taylor series and integrated term by term. Occasionally, the resulting infinite series can be summed analytically. The method of convolution using Meijer G-functions can also be used, assuming that the integrand can be written as a product of Meijer G-functions. There are also many less common ways of calculating definite integrals; for instance, Parseval's identity can be used to transform an integral over a rectangular region into an infinite sum. Occasionally, an integral can be evaluated by a trick; for an example of this, see Gaussian integral.
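Term-by-term integration of a Taylor series can be made concrete with the sine integral Si(1), the integral of sin(x)/x over [0, 1], which has no elementary antiderivative. Integrating the series sin(x)/x = Σ (−1)^k x^(2k)/(2k+1)! term by term over [0, 1] gives Σ (−1)^k / ((2k+1)·(2k+1)!). This hedged sketch sums a few terms of that series:

```python
import math

def si_one(terms=10):
    """Partial sum of the series for the integral of sin(x)/x over [0, 1]."""
    return sum((-1) ** k / ((2 * k + 1) * math.factorial(2 * k + 1))
               for k in range(terms))

value = si_one()
# value is about 0.9460830704, the sine integral Si(1); the alternating
# series converges rapidly, so ten terms already exceed double precision
```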
Computations of volumes of solids of revolution can usually be done with disk integration or shell integration.
Specific results which have been worked out by various techniques are collected in the list of integrals.
Symbolic[edit]
Many problems in mathematics, physics, and engineering involve integration where an explicit formula for the integral is desired. Extensive tables of integrals have been compiled and published over the years for this purpose. With the spread of computers, many professionals, educators, and students have turned to computer algebra systems that are specifically designed to perform difficult or tedious tasks, including integration. Symbolic integration has been one of the motivations for the development of the first such systems, like Macsyma.
A major mathematical difficulty in symbolic integration is that in many cases, a closed formula for the antiderivative of a rather simple-looking function does not exist. For instance, it is known that the antiderivatives of the functions exp(x2), xx and (sin x)/x cannot be expressed in the closed form involving only rational and exponential functions, logarithm, trigonometric functions and inverse trigonometric functions, and the operations of multiplication and composition; in other words, none of the three given functions is integrable in elementary functions, which are the functions which may be built from rational functions, roots of a polynomial, logarithm, and exponential functions. The Risch algorithm provides a general criterion to determine whether the antiderivative of an elementary function is elementary, and, if it is, to compute it. Unfortunately, it turns out that functions with closed expressions of antiderivatives are the exception rather than the rule. Consequently, computerized algebra systems have no hope of being able to find an antiderivative for a randomly constructed elementary function. On the positive side, if the 'building blocks' for antiderivatives are fixed in advance, it may still be possible to decide whether the antiderivative of a given function can be expressed using these blocks and operations of multiplication and composition, and to find the symbolic answer whenever it exists. The Risch algorithm, implemented in Mathematica and other computer algebra systems, does just that for functions and antiderivatives built from rational functions, radicals, logarithm, and exponential functions.
Some special integrands occur often enough to warrant special study. In particular, it may be useful to have, in the set of antiderivatives, the special functions (like the Legendre functions, the hypergeometric function, the gamma function, the incomplete gamma function and so on — see Symbolic integration for more details). Extending the Risch algorithm to include such functions is possible but challenging and has been an active research subject.
More recently a new approach has emerged, using D-finite functions, which are the solutions of linear differential equations with polynomial coefficients. Most of the elementary and special functions are D-finite, and the integral of a D-finite function is also a D-finite function. This provides an algorithm to express the antiderivative of a D-finite function as the solution of a differential equation.
This theory also allows one to compute the definite integral of a D-finite function as the sum of a series given by the first coefficients, and provides an algorithm to compute any coefficient.[5]
Numerical[edit]
Some integrals found in real applications can be computed by closed-form antiderivatives. Others are not so accommodating. Some antiderivatives do not have closed forms, some closed forms require special functions that themselves are a challenge to compute, and others are so complex that finding the exact answer is too slow. This motivates the study and application of numerical approximations of integrals. This subject, called numerical integration or numerical quadrature, arose early in the study of integration for the purpose of making hand calculations. The development of general-purpose computers made numerical integration more practical and drove a desire for improvements. The goals of numerical integration are accuracy, reliability, efficiency, and generality, and sophisticated modern methods can vastly outperform a naive method by all four measures (Dahlquist & Björck 2008; Kahaner, Moler & Nash 1989; Stoer & Bulirsch 2002).
Consider, for example, the integral
which has the exact answer 94/25 = 3.76. (In ordinary practice, the answer is not known in advance, so an important task — not explored here — is to decide when an approximation is good enough.) A “calculus book” approach divides the integration range into, say, 16 equal pieces, and computes function values.
| x | −2.00 | −1.50 | −1.00 | −0.50 | 0.00 | 0.50 | 1.00 | 1.50 | 2.00 |
|---|---|---|---|---|---|---|---|---|---|
| f(x) | 2.22800 | 2.45663 | 2.67200 | 2.32475 | 0.64400 | −0.92575 | −0.94000 | −0.16963 | 0.83600 |

| x | −1.75 | −1.25 | −0.75 | −0.25 | 0.25 | 0.75 | 1.25 | 1.75 |
|---|---|---|---|---|---|---|---|---|
| f(x) | 2.33041 | 2.58562 | 2.62934 | 1.64019 | −0.32444 | −1.09159 | −0.60387 | 0.31734 |
Numerical quadrature methods: ■ Rectangle, ■ Trapezoid, ■ Romberg, ■ Gauss
Using the left end of each piece, the rectangle method sums 16 function values and multiplies by the step width, h, here 0.25, to get an approximate value of 3.94325 for the integral. The accuracy is not impressive, but calculus formally uses pieces of infinitesimal width, so initially this may seem little cause for concern. Indeed, repeatedly doubling the number of steps eventually produces an approximation of 3.76001. However, 2^18 pieces are required, a great computational expense for such little accuracy; and a reach for greater accuracy can force steps so small that arithmetic precision becomes an obstacle.
A better approach replaces the rectangles used in a Riemann sum with trapezoids. The trapezoid rule is almost as easy to calculate; it sums all 17 function values, but weights the first and last by one half, and again multiplies by the step width. This immediately improves the approximation to 3.76925, which is noticeably more accurate. Furthermore, only 2^10 pieces are needed to achieve 3.76000, substantially less computation than the rectangle method for comparable accuracy. The idea behind the trapezoid rule, that more accurate approximations to the function yield better approximations to the integral, can be carried further. Simpson's rule approximates the integrand by a piecewise quadratic function. Riemann sums, the trapezoid rule, and Simpson's rule are examples of a family of quadrature rules called Newton–Cotes formulas. The degree n Newton–Cotes quadrature rule approximates the integrand on each subinterval by a degree n polynomial. This polynomial is chosen to interpolate the values of the function on the interval. Higher degree Newton–Cotes approximations can be more accurate, but they require more function evaluations (already Simpson's rule requires twice the function evaluations of the trapezoid rule), and they can suffer from numerical inaccuracy due to Runge's phenomenon. One solution to this problem is Clenshaw–Curtis quadrature, in which the integrand is approximated by expanding it in terms of Chebyshev polynomials. This produces an approximation whose values never deviate far from those of the original function.
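The rectangle-versus-trapezoid comparison is easy to reproduce in miniature. The article's example function is not shown above, so this hedged sketch substitutes f(x) = x² + 1 on [0, 2], whose exact integral is 14/3; the same 16-piece budget gives a much smaller error for the trapezoid rule.

```python
def left_rectangle(f, a, b, n):
    """Left-endpoint rectangle rule with n equal pieces."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

def trapezoid(f, a, b, n):
    """Trapezoid rule: interior values summed, end values weighted by half."""
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))
    return (f(a) / 2 + inner + f(b) / 2) * h

f = lambda x: x * x + 1.0
exact = 14.0 / 3.0                        # integral of x^2 + 1 over [0, 2]

r16 = left_rectangle(f, 0.0, 2.0, 16)     # about 4.4219, error about 0.245
t16 = trapezoid(f, 0.0, 2.0, 16)          # about 4.6719, error about 0.005
```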
Romberg's method builds on the trapezoid method to great effect. First, the step lengths are halved incrementally, giving trapezoid approximations denoted by T(h0), T(h1), and so on, where hk+1 is half of hk. For each new step size, only half the new function values need to be computed; the others carry over from the previous size (as shown in the table above). But the really powerful idea is to interpolate a polynomial through the approximations, and extrapolate to T(0). With this method a numerically exact answer here requires only four pieces (five function values). The Lagrange polynomial interpolating the points (hk, T(hk)) for k = 0..2, namely (4.00, 6.128), (2.00, 4.352), and (1.00, 3.908), is 3.76 + 0.148h^2, producing the extrapolated value 3.76 at h = 0.
Gaussian quadrature often requires noticeably less work for superior accuracy. In this example, it can compute the function values at just two x positions, ±2/√3, then double each value and sum to get the numerically exact answer. The explanation for this dramatic success lies in the choice of points. Unlike Newton–Cotes rules, which interpolate the integrand at evenly spaced points, Gaussian quadrature evaluates the function at the roots of a set of orthogonal polynomials. An n-point Gaussian method is exact for polynomials of degree up to 2n − 1. The function in this example is a degree 3 polynomial, plus a term that cancels because the chosen endpoints are symmetric around zero. (Cancellation also benefits the Romberg method.)
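A two-point Gauss–Legendre rule is small enough to write out. This hedged sketch (illustrative names) places the nodes at ±(b − a)/2 · 1/√3 about the midpoint, which on [−2, 2] gives exactly the ±2/√3 positions mentioned above, and checks exactness on a degree-3 polynomial (2n − 1 = 3 for n = 2 points).

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre rule on [a, b]: nodes at the scaled roots
    of the degree-2 Legendre polynomial, each with equal weight."""
    mid, half = (a + b) / 2.0, (b - a) / 2.0
    node = half / math.sqrt(3.0)
    return half * (f(mid - node) + f(mid + node))

# Degree-3 polynomial: the two-point rule is exact.
f = lambda x: x ** 3 + x ** 2 - x + 1.0
exact = 16.0 / 3.0 + 4.0   # odd terms cancel on [-2, 2]; x^2 gives 16/3, 1 gives 4
approx = gauss2(f, -2.0, 2.0)
# approx equals exact (28/3) up to floating point
```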
In practice, each method must use extra evaluations to ensure an error bound on an unknown function; this tends to offset some of the advantage of the pure Gaussian method, and motivates the popular Gauss–Kronrod quadrature formulae. More broadly, adaptive quadrature partitions a range into pieces based on function properties, so that data points are concentrated where they are needed most.
The computation of higher-dimensional integrals (for example, volume calculations) makes important use of such alternatives as Monte Carlo integration.
A calculus text is no substitute for numerical analysis, but the reverse is also true. Even the best adaptive numerical code sometimes requires a user to help with the more demanding integrals. For example, improper integrals may require a change of variable or methods that can avoid infinite function values, and known properties like symmetry and periodicity may provide critical leverage. For instance, the integral is difficult to evaluate numerically because the integrand is infinite at x = 0. However, the substitution u = √x transforms it into an integral with no singularities at all.
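The singularity-removing substitution can be illustrated on a stand-in integrand, since the article's own example is not reproduced above. In this hedged sketch we take cos(x)/√x on (0, 1], which blows up at x = 0; with u = √x and dx = 2u du the integral becomes 2∫ cos(u²) du over [0, 1], a bounded smooth integrand that a simple quadrature handles easily.

```python
import math

def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Original form: integrand is unbounded near x = 0, so convergence is slow.
original = midpoint(lambda x: math.cos(x) / math.sqrt(x), 0.0, 1.0)

# Substituted form: 2 * integral of cos(u^2) over [0, 1], perfectly smooth.
substituted = 2.0 * midpoint(lambda u: math.cos(u * u), 0.0, 1.0)
# both approximate the same value (about 1.809), but the substituted form
# is accurate to near machine precision while the original lags behind
```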
Mechanical[edit]
The area of an arbitrary two-dimensional shape can be determined using a measuring instrument called a planimeter. The volume of irregular objects can be measured with precision by the fluid displaced as the object is submerged.
Geometrical[edit]
Area can sometimes be found via geometrical compass-and-straightedge constructions of an equivalent square.
See also[edit]
References[edit]
- ^Katz, V.J. 1995. 'Ideas of Calculus in Islam and India.' Mathematics Magazine (Mathematical Association of America), 68(3):163–174.
- ^In the 20th century, nonstandard analysis was developed as a new approach to calculus that incorporates a rigorous concept of infinitesimals by using an expanded number system called the hyperreal numbers. Though placed on a sound axiomatic footing and of interest in its own right as a new area of investigation, nonstandard analysis remains somewhat controversial from a pedagogical standpoint, with proponents pointing out the intuitive nature of infinitesimals for beginning students of calculus and opponents criticizing the logical complexity of the system as a whole.
- ^(W3C 2006).
- ^Weisstein, Eric W., 'Riemann Sum'. MathWorld.
- ^Frédéric Chyzak's Mgfun Project: Introduction to the Package Mgfun and Related Packages
Bibliography[edit]
- Apostol, Tom M. (1967), Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra (2nd ed.), Wiley, ISBN978-0-471-00005-1
- Bourbaki, Nicolas (2004), Integration I, Springer Verlag, ISBN3-540-41129-1. In particular chapters III and IV.
- Burton, David M. (2005), The History of Mathematics: An Introduction (6th ed.), McGraw-Hill, p. 359, ISBN978-0-07-305189-5
- Cajori, Florian (1929), A History Of Mathematical Notations Volume II, Open Court Publishing, pp. 247–252, ISBN978-0-486-67766-8
- Dahlquist, Germund; Björck, Åke (2008), 'Chapter 5: Numerical Integration', Numerical Methods in Scientific Computing, Volume I, Philadelphia: SIAM, archived from the original on 2007-06-15
- Folland, Gerald B. (1984), Real Analysis: Modern Techniques and Their Applications (1st ed.), John Wiley & Sons, ISBN978-0-471-80958-6
- Fourier, Jean Baptiste Joseph (1822), Théorie analytique de la chaleur, Chez Firmin Didot, père et fils, p. §231
Available in translation as Fourier, Joseph (1878), The analytical theory of heat, Freeman, Alexander (trans.), Cambridge University Press, pp. 200–201 - Heath, T. L., ed. (2002), The Works of Archimedes, Dover, ISBN978-0-486-42084-4
(Originally published by Cambridge University Press, 1897, based on J. L. Heiberg's Greek version.) - Hildebrandt, T. H. (1953), 'Integration in abstract spaces', Bulletin of the American Mathematical Society, 59 (2): 111–139, doi:10.1090/S0002-9904-1953-09694-X, ISSN0273-0979
- Kahaner, David; Moler, Cleve; Nash, Stephen (1989), 'Chapter 5: Numerical Quadrature', Numerical Methods and Software, Prentice Hall, ISBN 978-0-13-627258-8
- Kallio, Bruce Victor (1966), A History of the Definite Integral (PDF) (M.A. thesis), University of British Columbia
- Katz, Victor J. (2004), A History of Mathematics, Brief Version, Addison-Wesley, ISBN 978-0-321-16193-2
- Leibniz, Gottfried Wilhelm (1899), Gerhardt, Karl Immanuel (ed.), Der Briefwechsel von Gottfried Wilhelm Leibniz mit Mathematikern. Erster Band, Berlin: Mayer & Müller
- Lieb, Elliott; Loss, Michael (2001), Analysis, Graduate Studies in Mathematics, 14 (2nd ed.), American Mathematical Society, ISBN 978-0-8218-2783-3
- Miller, Jeff, Earliest Uses of Symbols of Calculus, retrieved 2009-11-22
- O’Connor, J. J.; Robertson, E. F. (1996), A history of the calculus, retrieved 2007-07-09
- Rudin, Walter (1987), 'Chapter 1: Abstract Integration', Real and Complex Analysis (International ed.), McGraw-Hill, ISBN 978-0-07-100276-9
- Saks, Stanisław (1964), Theory of the integral (English translation by L. C. Young. With two additional notes by Stefan Banach. Second revised ed.), New York: Dover
- Shea, Marilyn (May 2007), Biography of Zu Chongzhi, University of Maine, retrieved 9 January 2009
- Siegmund-Schultze, Reinhard (2008), 'Henri Lebesgue', in Timothy Gowers; June Barrow-Green; Imre Leader (eds.), Princeton Companion to Mathematics, Princeton University Press.
- Stoer, Josef; Bulirsch, Roland (2002), 'Topics in Integration', Introduction to Numerical Analysis (3rd ed.), Springer, ISBN 978-0-387-95452-3
- W3C (2006), Arabic mathematical notation
External links
Wikibooks has a book on the topic of: Calculus
- Hazewinkel, Michiel, ed. (2001) [1994], 'Integral', Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
- Online Integral Calculator, Wolfram Alpha.
- Online Integral Calculator, by MathsTools.
Online books
- Keisler, H. Jerome, Elementary Calculus: An Approach Using Infinitesimals, University of Wisconsin
- Stroyan, K. D., A Brief Introduction to Infinitesimal Calculus, University of Iowa
- Mauch, Sean, Sean's Applied Math Book, CIT, an online textbook that includes a complete introduction to calculus
- Crowell, Benjamin, Calculus, Fullerton College, an online textbook
- Garrett, Paul, Notes on First-Year Calculus
- Hussain, Faraz, Understanding Calculus, an online textbook
- Johnson, William Woolsey (1909), Elementary Treatise on Integral Calculus, link from HathiTrust
- Kowalk, W. P., Integration Theory, University of Oldenburg, an online textbook presenting a new approach to an old problem
- Sloughter, Dan, Difference Equations to Differential Equations, an introduction to calculus
- Numerical Methods of Integration at Holistic Numerical Methods Institute
- P. S. Wang, Evaluation of Definite Integrals by Symbolic Manipulation (1972) — a cookbook of definite integral techniques