The trust region problem, the minimization of a real-valued function f by means of its quadratic model subject to a Euclidean norm trust region constraint, occurs in many trust region algorithms. In most cases it is expensive, and even unnecessary, to find an exact solution of a trust region problem unless the number of variables is relatively small. In order to alleviate this difficulty, in this thesis we focused on a trust region method that takes the Hessian of the model to be the true Hessian of a twice continuously differentiable real-valued objective function f on R^n at the current iterate x_k, using the so-called two-dimensional subspace minimization strategy. As the name indicates, the two-dimensional subspace minimization method restricts the approximate minimizer s to lie in a subspace S_k of R^n spanned by two suitably chosen directions. We constructed such a subspace using the Lanczos iterative method. This subspace method first reduces the n-dimensional trust region problem to a two-dimensional constrained problem, and then to finding the roots of a fourth-degree polynomial with the help of the Lagrangian approach for constrained optimization problems. We compared the performance of the two-dimensional subspace minimization method with that of an alternative trust region method, namely the dogleg method, and with other common methods such as steepest descent and Newton's method using MATLAB, and we found that the two-dimensional subspace minimization method is more efficient.
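
For reference, the subproblem described above can be written in standard trust region notation; the symbols g_k, B_k, Delta_k and the spanning directions p_1, p_2 are conventional choices introduced here for illustration, not notation fixed in this abstract:

\[
\min_{s \in \mathbb{R}^n} \; m_k(s) = f(x_k) + g_k^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s
\quad \text{subject to} \quad \|s\|_2 \le \Delta_k,
\qquad g_k = \nabla f(x_k), \; B_k = \nabla^2 f(x_k),
\]

and the two-dimensional subspace variant adds the restriction

\[
s \in S_k = \operatorname{span}\{p_1, p_2\}, \qquad s = \alpha p_1 + \beta p_2,
\]

so that only the two scalars \alpha and \beta remain to be determined.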
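A minimal MATLAB sketch of this reduction step, written only to illustrate the idea and not taken from the thesis code, is given below. It assumes the two spanning directions are supplied as an orthonormal n-by-2 matrix P (so the trust region constraint carries over unchanged to the reduced variable), and it ignores the so-called hard case of the trust region subproblem; the function name twoDimSubspaceStep and all variable names are hypothetical.

function s = twoDimSubspaceStep(g, B, P, Delta)
% Illustrative sketch: minimize g'*s + 0.5*s'*B*s subject to norm(s) <= Delta
% with s restricted to the column space of P (n-by-2, assumed orthonormal).
% The hard case of the trust region subproblem is not handled here.

gt = P' * g;          % reduced gradient  (2-by-1)
Bt = P' * B * P;      % reduced Hessian   (2-by-2, symmetric)
lmin = min(eig(Bt));

% Try the unconstrained minimizer of the reduced model first.
if lmin > 0
    y = -Bt \ gt;
    if norm(y) <= Delta
        s = P * y;
        return
    end
end

% Otherwise the minimizer lies on the boundary: solve
% norm((Bt + lambda*I)^(-1) * gt) = Delta for lambda >= 0,
% which reduces to a fourth-degree polynomial in lambda.
b11 = Bt(1,1); b12 = Bt(1,2); b22 = Bt(2,2);
g1  = gt(1);   g2  = gt(2);

d  = [1, b11 + b22, b11*b22 - b12^2];        % det(Bt + lambda*I), degree 2
c1 = b22*g1 - b12*g2;                        % adj(Bt + lambda*I)*gt, first component
c2 = b11*g2 - b12*g1;                        % adj(Bt + lambda*I)*gt, second component
n  = [g1^2 + g2^2, 2*(g1*c1 + g2*c2), c1^2 + c2^2];   % ||adj*gt||^2, degree 2

% Quartic: Delta^2 * det(Bt + lambda*I)^2 - ||adj(Bt + lambda*I)*gt||^2 = 0
quartic = Delta^2 * conv(d, d) - [0, 0, n];
lams = roots(quartic);
lams = real(lams(abs(imag(lams)) < 1e-10));           % keep (numerically) real roots
lams = lams(lams >= max(0, -lmin) - 1e-12);            % keep feasible multipliers

% Pick the multiplier giving the lowest model value on the boundary.
best = inf; y = [0; 0];
for lam = lams(:)'
    yl = -(Bt + lam*eye(2)) \ gt;
    ml = gt'*yl + 0.5*yl'*Bt*yl;
    if ml < best
        best = ml; y = yl;
    end
end

s = P * y;
end

The quartic arises because the boundary condition norm((Bt + lambda*I)^(-1)*gt) = Delta, once cleared of denominators, is a fourth-degree polynomial equation in the Lagrange multiplier lambda, which is the polynomial root-finding step mentioned above.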