Penalty and augmented Lagrangian methods are procedures for approximating a constrained optimization problem by unconstrained optimization problems in order to find an approximate solution of the given constrained problem. The idea of replacing a constrained optimization problem by a sequence of unconstrained problems parametrized by a scalar parameter has played a fundamental role in the formulation of algorithms. Although penalty methods are very natural and general, they unfortunately suffer from a serious drawback: to approximate the solution of the constrained problem well, we have to work with large penalty parameters, which inevitably makes the unconstrained minimization of the penalized objective very ill-conditioned. Their slow rates of convergence, caused by the ill-conditioning of the associated Hessian, led researchers to pursue another approach: augmented Lagrangian methods. The main advantage of augmented Lagrangian methods is that they approximate the solution of a constrained problem well by solutions of unconstrained (and penalized) auxiliary problems without pushing the penalty parameter to infinity; as a result, the auxiliary problems remain reasonably conditioned even when high-accuracy solutions are sought. In this paper we show how all this works.
Keywords: unconstrained optimization, constrained optimization, penalty and barrier methods, augmented Lagrangian methods.
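The contrast described above can be illustrated on a toy problem. The following is a minimal sketch, not taken from the paper: for minimizing x1^2 + x2^2 subject to x1 + x2 = 1 (exact solution x* = (0.5, 0.5), multiplier -1), the quadratic-penalty subproblem has a closed-form minimizer that is accurate only as the penalty parameter grows without bound, whereas the augmented Lagrangian method converges with a fixed, moderate penalty parameter by updating the multiplier estimate.

```python
# Hypothetical toy problem (illustration only, not from the paper):
#   minimize x1^2 + x2^2  subject to  c(x) = x1 + x2 - 1 = 0.
# Exact solution: x* = (0.5, 0.5), multiplier lambda* = -1.

def solve_penalty(rho):
    # Quadratic-penalty subproblem min f(x) + (rho/2) c(x)^2.
    # By symmetry its minimizer is x1 = x2 = rho / (2 + 2*rho),
    # so the answer approaches 0.5 only as rho -> infinity.
    x = rho / (2 + 2 * rho)
    return (x, x)

def solve_augmented_lagrangian(rho=10.0, iters=20):
    # Augmented Lagrangian L(x, lam) = f(x) + lam*c(x) + (rho/2)*c(x)^2.
    # Its symmetric minimizer is x1 = x2 = (rho - lam) / (2 + 2*rho).
    # After each inner solve, apply the first-order multiplier update
    # lam <- lam + rho * c(x); rho stays fixed and moderate.
    lam = 0.0
    for _ in range(iters):
        x = (rho - lam) / (2 + 2 * rho)
        c = 2 * x - 1          # constraint residual x1 + x2 - 1
        lam += rho * c
    return (x, x), lam

# The penalty answer is biased for any finite rho; the augmented
# Lagrangian iterates converge to x* with rho fixed at 10.
xp = solve_penalty(rho=10.0)
xa, lam = solve_augmented_lagrangian(rho=10.0)
print(xp, xa, lam)
```

With rho = 10 the penalty minimizer is (10/22, 10/22), off by about 0.045 in each coordinate, while the multiplier update contracts the error in lam by a factor 1/(1 + rho) per iteration, so twenty iterations drive xa to (0.5, 0.5) and lam to -1 to machine accuracy.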