Friday, January 24, 2020

Edgar Allan Poe :: essays research papers

Edgar Allan Poe was born in Boston, Massachusetts, on January 19, 1809. His parents were touring actors, and both died before Poe was three years old. After their death, Poe was taken in by a wealthy merchant named John Allan in Richmond, Virginia, and there he was baptised Edgar Allan Poe. From 1815 to 1820, Poe studied in England. Later, in 1826, he went to the University of Virginia, where he stayed for a year. Poe ran up a large gambling debt, but Allan refused to pay it and consequently prevented Poe's return to the university. Allan also broke off Poe's engagement to Sarah Elmira Royster. After leaving the university, Poe enlisted in the army as a means of support. In 1827, Poe had his first book, Tamerlane and Other Poems, published at his own expense. Although he refused to provide financial support, Allan arranged Poe's release from the army and had him appointed to West Point. Poe was dismissed after only six months for disobeying orders, but his fellow cadets gave him the money for his second publication. Poems by Edgar A. Poe --- Second Edition was published in 1831, although another edition of Tamerlane and minor poems had already appeared in 1829, actually making it a third edition. This book contained the poems To Helen and Israfel, which later became famous. These two poems show Poe's musical use of language, which makes his poetry stand out from all others. Poe moved in with his aunt and cousin, Maria and Virginia Clemm, in Baltimore. Turning to fiction as a means of support, he had five stories published in the Philadelphia Saturday Courier in 1832. In 1833 he won a fifty-dollar prize from the Baltimore Saturday Visiter with his short story MS. Found in a Bottle. In 1835, Poe, his aunt, and Virginia moved to Richmond, where he married Virginia; she was not yet fourteen. Poe became editor of the Southern Literary Messenger and published many criticisms and reviews.
He also published his short story Berenice, which is known as his most horrific work. He earned great respect as a critic and wrote reviews of many of his contemporaries. Although he was extremely critical of most, he praised a few authors, such as Charles Dickens. Poe's work made the publication very popular, but the magazine's owner found his work offensive. Poe also had a drinking problem, which earned him disfavor with his employer.

Thursday, January 16, 2020

Computational Efficiency of Polar

Lecture Notes on Monte Carlo Methods, Fall Semester 2005, Courant Institute of Mathematical Sciences, NYU. Jonathan Goodman ([email protected] nyu.edu). Chapter 2: Simple Sampling of Gaussians. Created August 26, 2005.

Generating univariate or multivariate Gaussian random variables is simple and fast. There should be no reason ever to use approximate methods based, for example, on the central limit theorem.

1 Box Muller

It would be nice to get a standard normal from a standard uniform by inverting the distribution function, but there is no closed form formula for the distribution function

    N(x) = P(X < x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} e^{-x'^2/2} \, dx' .

The Box Muller method is a brilliant trick to overcome this by producing two independent standard normals from two independent uniforms. It is based on the familiar trick for calculating

    I = \int_{-\infty}^{\infty} e^{-x^2/2} \, dx .

This cannot be calculated by "integration": the indefinite integral does not have an algebraic expression in terms of elementary functions (exponentials, logs, trig functions). However,

    I^2 = \int_{-\infty}^{\infty} e^{-x^2/2} \, dx \int_{-\infty}^{\infty} e^{-y^2/2} \, dy = \iint e^{-(x^2+y^2)/2} \, dx \, dy .

The last integral can be calculated using polar coordinates x = r\cos(\theta), y = r\sin(\theta) with area element dx\,dy = r\,dr\,d\theta, so that

    I^2 = \int_{\theta=0}^{2\pi} \int_{r=0}^{\infty} e^{-r^2/2} \, r\,dr\,d\theta = 2\pi \int_{r=0}^{\infty} e^{-r^2/2} \, r\,dr .

Unlike the original x integral, this r integral is elementary. The substitution s = r^2/2 gives ds = r\,dr and

    I^2 = 2\pi \int_{s=0}^{\infty} e^{-s} \, ds = 2\pi .

The Box Muller algorithm is a probabilistic interpretation of this trick. If (X, Y) is a pair of independent standard normals, then the probability density is a product:

    f(x, y) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \cdot \frac{1}{\sqrt{2\pi}} e^{-y^2/2} = \frac{1}{2\pi} e^{-(x^2+y^2)/2} .

Since this density is radially symmetric, it is natural to consider the polar coordinate random variables (R, \Theta) defined by 0 \le \Theta < 2\pi and X = R\cos(\Theta), Y = R\sin(\Theta). Clearly \Theta is uniformly distributed in the interval [0, 2\pi] and may be sampled using \Theta = 2\pi U_1.
Unlike the original distribution function N(x), there is a simple expression for the R distribution function:

    G(r) = P(R \le r) = \int_{\theta=0}^{2\pi} \int_{r'=0}^{r} \frac{1}{2\pi} e^{-r'^2/2} \, r'\,dr'\,d\theta = \int_{r'=0}^{r} e^{-r'^2/2} \, r'\,dr' .

The same change of variable r'^2/2 = s, r'\,dr' = ds (so that s = r^2/2 when r' = r) allows us to calculate

    G(r) = \int_{s=0}^{r^2/2} e^{-s} \, ds = 1 - e^{-r^2/2} .

Therefore, we may sample R by solving the distribution function equation

    G(R) = 1 - e^{-R^2/2} = 1 - U_2

(recall that 1 - U_2 is a standard uniform if U_2 is), whose solution is R = \sqrt{-2\ln(U_2)}. Altogether, the Box Muller method takes independent standard uniform random variables U_1 and U_2 and produces independent standard normals X and Y using the formulas

    \Theta = 2\pi U_1 , \quad R = \sqrt{-2\ln(U_2)} , \quad X = R\cos(\Theta) , \quad Y = R\sin(\Theta) . (1)

It may seem odd that X and Y in (1) are independent given that they use the same R and \Theta. Not only does our algebra show that this is true, but we can test the independence computationally, and it will be confirmed.

Part of this method was generating a point "at random" on the unit circle. We suggested doing this by choosing \Theta uniformly in the interval [0, 2\pi], then taking the point on the circle to be (\cos(\Theta), \sin(\Theta)). This has the possible drawback that the computer must evaluate the sine and cosine functions. Another way to do this (suggested, for example, in the dubious book Numerical Recipes) is to choose a point uniformly in the 2 x 2 square -1 \le x \le 1, -1 \le y \le 1, then reject it if it falls outside the unit circle. The first accepted point will be uniformly distributed in the unit disk x^2 + y^2 \le 1, so its angle will be random and uniformly distributed. The final step is to get a point on the unit circle x^2 + y^2 = 1 by dividing by the length. The two methods have equal accuracy (both are exact in exact arithmetic). What distinguishes them is computer performance (a topic discussed more in a later lecture, hopefully). The rejection method, with an acceptance probability \pi/4 \approx 78\%, seems efficient, but rejection can break the instruction pipeline and slow a computation by a factor of ten.
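As a concrete aside (not part of the original notes), formulas (1) take only a few lines; this Python/NumPy sketch also lets us check the independence claim empirically through the sample correlation of X and Y:

```python
import numpy as np

def box_muller(u1, u2):
    """Formulas (1): map two independent standard uniforms to two
    independent standard normals via the polar coordinate trick."""
    theta = 2.0 * np.pi * u1          # Theta = 2*pi*U1, the uniform angle
    r = np.sqrt(-2.0 * np.log(u2))    # R solves G(R) = 1 - exp(-R^2/2) = 1 - U2
    return r * np.cos(theta), r * np.sin(theta)

rng = np.random.default_rng(0)
x, y = box_muller(rng.random(200_000), rng.random(200_000))

# X and Y should each look standard normal, and be uncorrelated.
print(x.mean(), x.var(), np.corrcoef(x, y)[0, 1])
```

With 200,000 samples, the sample mean, variance, and X-Y correlation come out within a few hundredths of 0, 1, and 0, as the algebra predicts.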
Also, the square root needed to compute the length may not be faster to evaluate than sine and cosine. Moreover, the rejection method uses two uniforms while the \Theta = 2\pi U_1 method uses just one.

The method can be reversed to solve another sampling problem: generating a random point on the "unit sphere" in R^n. If we generate n independent standard normals, then the vector X = (X_1, \ldots, X_n) has all angles equally likely, because the probability density

    f(x) = \frac{1}{(2\pi)^{n/2}} \exp\!\left( -(x_1^2 + \cdots + x_n^2)/2 \right)

is radially symmetric. Therefore X / \|X\| is uniformly distributed on the unit sphere, as desired.

1.1 Other methods for univariate normals

The Box Muller method is elegant and reasonably fast and is fine for casual computations, but it may not be the best method for hard core users. Many software packages have native standard normal random number generators, which (if they are any good) use expertly optimized methods. There is very fast and accurate software on the web for directly inverting the normal distribution function N(x). This is particularly important for quasi Monte Carlo, which substitutes equidistributed sequences for random sequences (see a later lecture).

2 Multivariate normals

An n component multivariate normal, X, is characterized by its mean \mu = E[X] and its covariance matrix C = E[(X - \mu)(X - \mu)^t]. We discuss the problem of generating such an X with mean zero, since we achieve mean \mu by adding \mu to a mean zero multivariate normal. The key to generating such an X is the fact that if Y is an m component mean zero multivariate normal with covariance D and X = AY, then X is a mean zero multivariate normal with covariance

    C = E[XX^t] = E[AY(AY)^t] = A\,E[YY^t]\,A^t = A D A^t .

We know how to sample the n component multivariate normal with D = I: just take the components of Y to be independent univariate standard normals.
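The normalize-a-Gaussian-vector recipe for the unit sphere described above is equally short. A Python/NumPy sketch (illustrative, not from the notes; function names are ours):

```python
import numpy as np

def sample_unit_sphere(n_dim, n_samples, rng):
    """Uniform points on the unit sphere in R^n: the standard normal
    density exp(-|x|^2/2) is radially symmetric, so the direction of a
    Gaussian vector is uniform; dividing by the Euclidean length
    projects it onto the sphere."""
    x = rng.standard_normal((n_samples, n_dim))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(1)
pts = sample_unit_sphere(3, 100_000, rng)

# Every row has unit length; by symmetry each coordinate has mean near 0.
print(np.linalg.norm(pts, axis=1).max(), pts.mean(axis=0))
```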
The formula X = AY will produce the desired covariance matrix if we find A with AA^t = C. A simple way to do this in practice is to use the Choleski decomposition from numerical linear algebra. This is a simple algorithm that produces a lower triangular matrix, L, so that LL^t = C. It works for any positive definite C.

In physical applications it is common that one has not C but its inverse, H. This would happen, for example, if X had the Gibbs-Boltzmann distribution with kT = 1 (it's easy to change this) and energy \frac{1}{2} X^t H X, and probability density \frac{1}{Z} \exp(-\frac{1}{2} X^t H X). In large scale physical problems it may be impractical to calculate and store the covariance matrix C = H^{-1}, though the Choleski factorization H = LL^t is available. Note that H^{-1} = L^{-t} L^{-1} (it is traditional to write L^{-t} for the transpose of L^{-1}, which also is the inverse of L^t), so the choice A = L^{-t} works. Computing X = L^{-t} Y is the same as solving for X in the equation Y = L^t X, which is the process of back substitution in numerical linear algebra.

In some applications one knows the eigenvectors of C (which also are the eigenvectors of H), and the corresponding eigenvalues. These (either the eigenvectors or the eigenvectors and eigenvalues) sometimes are called principal components. Let q_j be the eigenvectors, normalized to be orthonormal, and \sigma_j^2 the corresponding eigenvalues of C, so that

    C q_j = \sigma_j^2 q_j , \qquad q_j^t q_k = \delta_{jk} .

Denote the q_j component of X by Z_j = q_j^t X. This is a linear function of X and therefore Gaussian with mean zero. Its variance (note: Z_j = Z_j^t = X^t q_j) is

    E[Z_j^2] = E[Z_j \cdot Z_j] = q_j^t E[XX^t] q_j = q_j^t C q_j = \sigma_j^2 .

A similar calculation shows that Z_j and Z_k are uncorrelated and hence (as components of a multivariate normal) independent. Therefore, we can generate the Y_j as independent standard normals and sample the Z_j using

    Z_j = \sigma_j Y_j . (2)

After that, we can get an X using

    X = \sum_{j=1}^{n} Z_j q_j . (3)

We restate this in matrix terms.
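Both routes to a factor A with AA^t = C, the Choleski factor and the principal components, are easy to build and cross-check numerically. A Python/NumPy sketch (illustrative, not part of the notes; the covariance matrix is made up):

```python
import numpy as np

rng = np.random.default_rng(2)

# Any symmetric positive definite target covariance will do.
C = np.array([[2.0, 0.6, 0.3],
              [0.6, 1.0, 0.2],
              [0.3, 0.2, 0.5]])

# Route 1: Choleski factor, lower triangular L with L @ L.T == C.
L = np.linalg.cholesky(C)

# Route 2: principal components, A = Q Sigma with C = Q Sigma^2 Q^t.
evals, Q = np.linalg.eigh(C)     # eigenvalues sigma_j^2, eigenvectors q_j
A = Q * np.sqrt(evals)           # scales column j of Q by sigma_j

# Either factor works: X = L @ Y (or A @ Y) has covariance C when the
# entries of Y are independent standard normals.
Y = rng.standard_normal((3, 400_000))
X = L @ Y
print(np.cov(X))                 # sample covariance, close to C
```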
Let Q be the orthogonal matrix whose columns are the orthonormal eigenvectors of C, and let \Sigma^2 be the diagonal matrix with \sigma_j^2 in the (j, j) diagonal position. The eigenvalue/eigenvector relations are

    C Q = Q \Sigma^2 , \qquad Q^t Q = I = Q Q^t . (4)

The multivariate normal vector Z = Q^t X then has covariance matrix E[ZZ^t] = E[Q^t X X^t Q] = Q^t C Q = \Sigma^2. This says that the Z_j, the components of Z, are independent univariate normals with variances \sigma_j^2. Therefore, we may sample Z by choosing its components by (2) and then reconstruct X by X = QZ, which is the same as (3). Alternatively, we can calculate, using (4), that

    C = Q \Sigma^2 Q^t = (Q\Sigma)(Q\Sigma)^t .

Therefore A = Q\Sigma satisfies AA^t = C, and X = AY = Q\Sigma Y = QZ has covariance C if the components of Y are independent standard univariate normals, or the components of Z are independent univariate normals with variance \sigma_j^2.

3 Brownian motion examples

We illustrate these ideas for various kinds of Brownian motion. Let X(t) be a Brownian motion path. Choose a final time T and a time step \Delta t = T/n. The observation times will be t_j = j\,\Delta t and the observations (or observation values) will be X_j = X(t_j). These observations may be assembled into a vector X = (X_1, \ldots, X_n)^t. We seek to generate sample observation vectors (or observation paths). How we do this depends on the boundary conditions.

The simplest case is standard Brownian motion. Specifying X(0) = 0 is a Dirichlet boundary condition at t = 0. Saying nothing about X(T) is a free (or Neumann) condition at t = T. The joint probability density for the observation vector, f(x) = f(x_1, \ldots, x_n), is found by multiplying the conditional densities. Given X_k = X(t_k), the next observation X_{k+1} = X(t_k + \Delta t) is Gaussian with mean X_k and variance \Delta t, so its conditional density is

    \frac{1}{\sqrt{2\pi \Delta t}} \, e^{-(x_{k+1} - x_k)^2 / 2\Delta t} .

Multiply these together and use X_0 = 0 and you find (with the convention x_0 = 0)

    f(x_1, \ldots, x_n) = \left( \frac{1}{2\pi \Delta t} \right)^{n/2} \exp\!\left( -\frac{1}{2\Delta t} \sum_{k=0}^{n-1} (x_{k+1} - x_k)^2 \right) . (5)

3.1 The random walk method

The simplest and possibly best way to generate a sample observation path, X, comes from the derivation of (5). First generate X_1 = X(\Delta t) as a univariate normal with mean zero and variance \Delta t, i.e. X_1 = \sqrt{\Delta t}\, Y_1. Given X_1, X_2 is a univariate normal with mean X_1 and variance \Delta t, so we may take X_2 = X_1 + \sqrt{\Delta t}\, Y_2, and so on. This is the random walk method. If you just want to make standard Brownian motion paths, stop here. We push on for pedagogical purposes and to develop strategies that apply to other types of Brownian motion.

We describe the random walk method in terms of the matrices above, starting by identifying the matrices C and H. Examining (5) leads to

    H = \frac{1}{\Delta t}
    \begin{pmatrix}
     2 & -1 &        &        &    \\
    -1 &  2 & -1     &        &    \\
       & \ddots & \ddots & \ddots & \\
       &        & -1     &  2 & -1 \\
       &        &        & -1 &  1
    \end{pmatrix} .

This is a tridiagonal matrix with pattern -1, 2, -1, except at the bottom right corner. One can calculate the covariances C_{jk} from the random walk representation

    X_k = \sqrt{\Delta t}\, (Y_1 + \cdots + Y_k) .

Since the Y_j are independent, we have C_{kk} = \mathrm{var}(X_k) = \Delta t \cdot k \cdot \mathrm{var}(Y_j) = t_k, and, supposing j < k,

    C_{jk} = E[X_j X_k] = \Delta t\, E\!\left[ \left( (Y_1 + \cdots + Y_j) + (Y_{j+1} + \cdots + Y_k) \right) \cdot (Y_1 + \cdots + Y_j) \right] = \Delta t\, E\!\left[ (Y_1 + \cdots + Y_j)^2 \right] = t_j .

These combine into the familiar formula

    C_{jk} = \mathrm{cov}(X(t_j), X(t_k)) = \min(t_j, t_k) .

This is the same as saying that the matrix C is

    C = \Delta t
    \begin{pmatrix}
    1 & 1 & 1 & \cdots & 1 \\
    1 & 2 & 2 & \cdots & 2 \\
    1 & 2 & 3 & \cdots & 3 \\
    \vdots & \vdots & \vdots & \ddots & \vdots \\
    1 & 2 & 3 & \cdots & n
    \end{pmatrix} . (6)

The random walk method for generating X may be expressed as

    \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix}
    = \sqrt{\Delta t}
    \begin{pmatrix}
    1 &   &        &   \\
    1 & 1 &        &   \\
    \vdots &   & \ddots &   \\
    1 & 1 & \cdots & 1
    \end{pmatrix}
    \begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{pmatrix} .

Thus, X = AY with

    A = \sqrt{\Delta t}
    \begin{pmatrix}
    1 &   &        &   \\
    1 & 1 &        &   \\
    \vdots &   & \ddots &   \\
    1 & 1 & \cdots & 1
    \end{pmatrix} . (7)

The reader should do the matrix multiplication to check that indeed C = AA^t for (6) and (7). Notice that H is a sparse matrix, indicating short range interactions, while C is full, indicating long range correlations. This is true in a great number of physical applications, though it is rare to have an explicit formula for C.

We also can calculate the Choleski factorization of H. The reader can convince herself or himself that the Choleski factor, L, is bidiagonal, with nonzeros only on or immediately below the diagonal. However, the formulas are simpler if we reverse the order of the coordinates. Therefore we define the coordinate reversed observation vector \tilde{X} = (X_n, X_{n-1}, \ldots, X_1)^t, whose covariance matrix is

    \tilde{C} =
    \begin{pmatrix}
    t_n & t_{n-1} & \cdots & t_1 \\
    t_{n-1} & t_{n-1} & \cdots & t_1 \\
    \vdots &   & \ddots & \vdots \\
    t_1 & t_1 & \cdots & t_1
    \end{pmatrix} ,

and energy matrix

    \tilde{H} = \frac{1}{\Delta t}
    \begin{pmatrix}
     1 & -1 &        &        &    \\
    -1 &  2 & -1     &        &    \\
       & \ddots & \ddots & \ddots & \\
       &        & -1     &  2 & -1 \\
       &        &        & -1 &  2
    \end{pmatrix} .

We seek the Choleski factorization \tilde{H} = LL^t with bidiagonal

    L = \frac{1}{\sqrt{\Delta t}}
    \begin{pmatrix}
    l_1 &     &        &        \\
    m_2 & l_2 &        &        \\
        & m_3 & \ddots &        \\
        &     & \ddots & \ddots
    \end{pmatrix} .

Multiplying out \tilde{H} = LL^t leads to equations that successively determine the l_k and m_k:

    l_1^2 = 1 \implies l_1 = 1 ,
    l_1 m_2 = -1 \implies m_2 = -1 ,
    m_2^2 + l_2^2 = 2 \implies l_2 = 1 ,
    l_2 m_3 = -1 \implies m_3 = -1 , \ \text{etc.}

The result is \tilde{H} = LL^t with L simply

    L = \frac{1}{\sqrt{\Delta t}}
    \begin{pmatrix}
     1 &    &        &   \\
    -1 &  1 &        &   \\
       & -1 & \ddots &   \\
       &    & \ddots & 1
    \end{pmatrix} .

The sampling algorithm using this information is to find \tilde{X} from Y by solving Y = L^t \tilde{X}:

    \begin{pmatrix} Y_n \\ Y_{n-1} \\ \vdots \\ Y_1 \end{pmatrix}
    = \frac{1}{\sqrt{\Delta t}}
    \begin{pmatrix}
    1 & -1 &        &    \\
      &  1 & \ddots &    \\
      &    & \ddots & -1 \\
      &    &        &  1
    \end{pmatrix}
    \begin{pmatrix} X_n \\ X_{n-1} \\ \vdots \\ X_1 \end{pmatrix} .

Solving from the bottom up (back substitution), we have

    Y_1 = \frac{1}{\sqrt{\Delta t}} X_1 \implies X_1 = \sqrt{\Delta t}\, Y_1 ,
    Y_2 = \frac{1}{\sqrt{\Delta t}} (X_2 - X_1) \implies X_2 = X_1 + \sqrt{\Delta t}\, Y_2 , \ \text{etc.}

This whole process turns out to give the same random walk sampling method.
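The matrix multiplication the notes ask the reader to do, checking C = AA^t for (6) and (7), together with the closed form C_{jk} = \min(t_j, t_k), is easy to confirm numerically. A Python/NumPy sketch (illustrative, not from the notes):

```python
import numpy as np

T, n = 1.0, 64
dt = T / n
t = dt * np.arange(1, n + 1)           # observation times t_j = j*dt

# A from (7): sqrt(dt) times the lower triangular matrix of ones,
# so X = A Y is the running sum (random walk) of the sqrt(dt)*Y_k.
A = np.sqrt(dt) * np.tril(np.ones((n, n)))

# C from (6), via the closed form C_jk = min(t_j, t_k).
C = np.minimum.outer(t, t)
print(np.allclose(A @ A.T, C))         # the check requested in the notes

# Sampling one path with the random walk method:
rng = np.random.default_rng(3)
Y = rng.standard_normal(n)
X = np.cumsum(np.sqrt(dt) * Y)         # identical to A @ Y
```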
Had we not gone to the time reversed (\tilde{X}, etc.) variables, we could have calculated the bidiagonal Choleski factor L numerically. This works for any problem with a tridiagonal energy matrix H, and has a name in the control theory/estimation literature that escapes me. In particular, it will allow us to find sample Brownian motion paths with other boundary conditions.

3.2 The Brownian bridge construction

The Brownian bridge construction is useful in the mathematical theory of Brownian motion. It also is the basis for the success of quasi Monte Carlo methods in finance. Suppose n is a power of 2: n = 2^L. We will construct the observation path X through a sequence of L refinements. First, notice that X_n is a univariate normal with mean zero and variance T, so we may take (with the Y_{k,l} being independent standard normals)

    X_n = \sqrt{T}\, Y_{1,1} .

Given the value of X_n, the midpoint observation, X_{n/2}, is a univariate normal with mean \frac{1}{2} X_n and variance T/4 (we assign this and related claims below as exercises for the student), so we may take

    X_{n/2} = \frac{1}{2} X_n + \frac{\sqrt{T}}{2}\, Y_{2,1} .

At the first level, we chose the endpoint value for X. We could draw a first level path by connecting X_n to zero with a straight line. At the second level, or first refinement, we created a midpoint value. The second level path could be piecewise linear, connecting 0 to X_{n/2} to X_n.

The second refinement level creates values for the "quarter points". Given X_{n/2}, X_{n/4} is a normal with mean \frac{1}{2} X_{n/2} and variance \frac{1}{4} \cdot \frac{T}{2}. Similarly, X_{3n/4} is a normal with mean \frac{1}{2}(X_{n/2} + X_n) and variance \frac{1}{4} \cdot \frac{T}{2}. Therefore, we may take

    X_{n/4} = \frac{1}{2} X_{n/2} + \frac{1}{2}\sqrt{\frac{T}{2}}\, Y_{3,1}

and

    X_{3n/4} = \frac{1}{2}\left( X_{n/2} + X_n \right) + \frac{1}{2}\sqrt{\frac{T}{2}}\, Y_{3,2} .

The level three path would be piecewise linear with breakpoints at \frac{1}{4}, \frac{1}{2}, and \frac{3}{4}. Note that in each case we add a mean zero normal of the appropriate variance to the linear interpolation value.

In the general step, we go from the level k-1 path to the level k path by creating values for the midpoints of the level k-1 intervals. The level k observations are X_{jn/2^{k-1}}. The values with even j are known from the previous level, so we need values for odd j. That is, we want to interpolate between the j = 2m value and the j = 2m+2 value and add a mean zero normal of the appropriate variance, which at level k is T/2^k (matching T/4 at level two and T/8 at level three):

    X_{(2m+1)n/2^{k-1}} = \frac{1}{2}\left( X_{2mn/2^{k-1}} + X_{(2m+2)n/2^{k-1}} \right) + \sqrt{\frac{T}{2^k}}\, Y_{k,m+1} .

The reader should check that the vector of standard normals Y = (Y_{1,1}, Y_{2,1}, Y_{3,1}, Y_{3,2}, \ldots)^t indeed has n = 2^L components. The value of this method for quasi Monte Carlo comes from the fact that the most important values, those that determine the large scale structure of X, are the first components of Y. As we will see, the components of the Y vectors of quasi Monte Carlo have uneven quality, with the first components being the best.

3.3 Principal components

The principal component eigenvalues and eigenvectors for many types of Brownian motion are known in closed form. In many of these cases, the Fast Fourier Transform (FFT) algorithm leads to a reasonably fast sampling method. These FFT based methods are slower than random walk or Brownian bridge sampling for standard random walk, but they sometimes are the most efficient for fractional Brownian motion. They may be better than Brownian bridge sampling with quasi Monte Carlo (I'm not sure about this).

The eigenvectors of H are known (see e.g. Numerical Analysis by Eugene Isaacson and Herbert Keller) to have components

    q_{j,k} = \mathrm{const} \cdot \sin(\theta_j t_k) , (8)

where q_{j,k} is the k-th component of eigenvector q_j. The n eigenvectors and eigenvalues then are determined by the allowed values of \theta_j, which, in turn, are determined through the boundary conditions. We can find the eigenvalue \lambda_j = 1/\sigma_j^2 of H in terms of \theta_j using the eigenvalue equation H q_j = \lambda_j q_j evaluated at any of the interior components 1 < k < n:

    \frac{1}{\Delta t} \left[ -\sin(\theta_j (t_k - \Delta t)) + 2\sin(\theta_j t_k) - \sin(\theta_j (t_k + \Delta t)) \right] = \lambda_j \sin(\theta_j t_k) .

Doing the math shows that the eigenvalue equation is satisfied and that

    \lambda_j = \frac{2\left( 1 - \cos(\theta_j \Delta t) \right)}{\Delta t} . (9)

The eigenvalue equation also is satisfied at k = 1 because the form (8) automatically satisfies the boundary condition q_{j,0} = 0. This is why we used the sine and not the cosine. Only special values \theta_j give q_{j,k} that satisfy the eigenvalue equation at the right boundary point k = n.
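The bridge refinement of section 3.2 can be sketched in code. This is an illustrative Python/NumPy version (not from the notes); it uses the conditional variance T/2^k for a level-k midpoint, which matches the T/4 and T/8 of the first two refinements:

```python
import numpy as np

def brownian_bridge_path(T, levels, rng):
    """Brownian bridge construction: fix the endpoint X_n first
    (variance T), then repeatedly fill midpoints, adding to the linear
    interpolation of the two known neighbors a mean zero normal with
    variance T / 2**level."""
    n = 2 ** levels
    X = np.zeros(n + 1)                        # X[0] = X(0) = 0
    X[n] = np.sqrt(T) * rng.standard_normal()  # level 1: the endpoint
    step = n
    for level in range(2, levels + 2):         # refinement sweeps
        half = step // 2
        std = np.sqrt(T / 2 ** level)          # conditional std of a midpoint
        for j in range(half, n, step):
            X[j] = 0.5 * (X[j - half] + X[j + half]) + std * rng.standard_normal()
        step = half
    return X[1:]                               # observations X_1, ..., X_n

rng = np.random.default_rng(4)
path = brownian_bridge_path(1.0, 6, rng)       # n = 64 observations on [0, 1]
print(path.shape)
```

Note how the coarse endpoint and midpoint draws come first: exactly the property that makes this ordering attractive for quasi Monte Carlo, where the first components of Y are the best.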

Tuesday, January 7, 2020

Indian Independence Movement and Gandhi - 979 Words

Gandhi was an influential figure in our society. He taught many people about equal rights, honouring thy neighbour, and peace and tranquillity. Although his actions were at times deemed improbable, even insane, they were nevertheless effective. This essay covers the life of Mahatma Gandhi, the goals he accomplished in his struggle for rights in South Africa, and how he finally helped India obtain freedom. Gandhi, also known as Mahatma Gandhi, was born in the present state of Gujarat on October 2, 1869. He was educated in law at University College, London. In 1891, after Gandhi was admitted to the British bar, he returned to India and attempted to create a law practice in Bombay, which failed. Two years after his failure, an Indian firm with… …Once more Gandhi was arrested but was released in 1931, suspending his campaign after the British government agreed to some of his demands. In 1932, Gandhi began a new civil disobedience campaign against Britain. Gandhi was arrested twice and fasted for long periods of time. These fasts were effective against the British because, if Gandhi died, all of India would have revolted against Britain. In 1934 Gandhi resigned from politics completely and was replaced by a leader of the Congress party named Jawaharlal Nehru. Gandhi then travelled across India teaching passive resistance. In 1939, Gandhi returned to political life because of the federation of Indian principalities with the rest of India. He decided he would force the ruler of the state to modify his autocratic rule, and fasted until his demands were met. When World War II broke out, Congress and Gandhi demanded a declaration of war aims and their application to India. Due to the unsatisfactory response from the British, the party decided not to support Britain in the war unless the country was granted independence. The British again refused, offering only compromises, which the party rejected.
Gandhi was sent to prison in 1942 for refusing to help Britain in the war even after Japan entered, but was released two years later suffering from malaria. By 1944 Britain had almost completely…