Artificial neural networks have recently gained much popularity in the design of "intelligent" machines and in programs used as pattern classifiers for automatic pattern recognition. In contrast to symbolic methods in artificial intelligence (AI), artificial neural networks are computing systems that use mathematical algorithms to "imitate" the way the brain, the biological neural network, functions. They are made up of a number of simple, highly connected nonlinear processing elements and process information by their dynamic state response to external inputs. They are characterized by the ability to learn and generalize; by massive parallelism, which gives rise to greater speed on computers with parallel processors or on a dedicated analogue VLSI circuit chip; by tolerance to significantly erroneous data or network faults; and, in some models, by self-organization in the learning phase, yielding an optimum network architecture. Their greatest asset compared with other recognition methods, however, is their ability to learn and generalize.

In this paper a feed-forward Back Propagation Network (BPN) architecture, one of several network architectures available, is implemented to recognize printed multifont alphanumeric (English and Amharic, or Geez) characters, and its performance is investigated. The network model has three layers and is trained in a supervised mode. In the research, two independent sets of pattern classes of characters were formed, each pattern class having four training and two testing sample character patterns. The first set consists of randomly selected pattern classes, while the second set consists of very similar pattern classes. In both sets of training and testing schemes, the relative recognition performance of the network is evaluated.
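The three-layer supervised scheme described above can be sketched in a few lines of NumPy. The paper's actual layer sizes, learning rate, and character patterns are not reproduced here; the dimensions, toy 4-pixel "characters", and training parameters below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPN:
    """Minimal three-layer feed-forward network trained with back-propagation."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        # Small random initial weights for input->hidden and hidden->output layers
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)       # hidden-layer activations
        self.y = sigmoid(self.h @ self.W2)  # output-layer activations
        return self.y

    def train_step(self, x, target):
        y = self.forward(x)
        # Output-layer error term (delta rule with sigmoid derivative)
        d_out = (target - y) * y * (1.0 - y)
        # Hidden-layer error, back-propagated through W2
        d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
        self.W2 += self.lr * np.outer(self.h, d_out)
        self.W1 += self.lr * np.outer(x, d_hid)

# Toy data: two 4-pixel "character" patterns with one-hot class targets.
patterns = np.array([[1, 0, 1, 0], [0, 1, 0, 1]], dtype=float)
targets = np.eye(2)

net = BPN(n_in=4, n_hidden=6, n_out=2)
for _ in range(2000):
    for x, t in zip(patterns, targets):
        net.train_step(x, t)
```

After training, each pattern is classified by taking the output unit with the highest activation, the usual winner-take-all readout for one-hot targets.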
The network recognition rate, computed only over the test patterns, is about 85% for the first set and about 67% for the second. The overall recognition rate, accounting for tests with both the training and test patterns, is 93% for the first set and 87% for the second.

When the test patterns from these two sets were corrupted with noise, the recognition performance of both sets degraded steadily.

Tests were also made with tilted test patterns on the first set, and performance was unaffected up to a tilt angle of 4.4 degrees from the vertical.

A steady improvement in performance was observed as the dimension of the input pattern vectors, the number of training patterns per pattern class, and the network size were increased.
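The paper does not specify the noise model used to corrupt the test patterns. For binary character patterns, one common illustrative choice is to flip a random fraction of pixels, sketched below; the `flip_fraction` parameter and pattern here are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def corrupt(pattern, flip_fraction):
    """Flip a random fraction of pixels in a binary pattern (illustrative noise model)."""
    p = pattern.copy()
    n_flip = int(round(flip_fraction * p.size))
    # Choose distinct pixel positions to invert
    idx = rng.choice(p.size, size=n_flip, replace=False)
    p[idx] = 1 - p[idx]
    return p

pattern = np.array([1, 1, 0, 0, 1, 0, 1, 0])
noisy = corrupt(pattern, 0.25)  # flips 2 of the 8 pixels
```

Sweeping `flip_fraction` upward and measuring the recognition rate at each level is one way to reproduce the kind of steady degradation curve the experiment reports.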