According to the Ethiopian population census, the Oromo language is estimated to be spoken by 36.4% of the local population. Beyond Ethiopia, the language is also spoken abroad, for instance in a small portion of Kenya. Taking this into account, the language is estimated to be spoken by around fifty million people. In addition to the spoken form, a considerable portion of the language's speakers can understand its written form, known as Qubee. The introduction of Qubee in the mid-nineties opened doors for its utilization in modern-day communication systems. However, from the standpoint of information theory and communication channels, both symbol utilization schemes are inefficient, because the fixed-length encodings used to represent Latin or Amharic symbols, ASCII-8 and UTF-16, poorly model written natural language.

With the expected increase in demand for the language in telecom services in mind, this thesis mainly aims at estimating the entropy of the Oromo language. The estimate sets the optimum number of bits per symbol needed to efficiently transmit written Oromo in communication systems. To achieve this objective, we modeled the source, i.e., written Oromo, as an Nth-order Markov chain random process. Based on this modeling scheme, we studied the distribution of symbols in ten works of literature written in Oromo. The study reveals that the language can be transmitted using 4.31 bits/symbol when modeled as a first-order Markov chain source, whereas the zero-crossing entropy of the source was estimated on average at N = 19.5, which gave an entropy estimate of 0.85 bits/symbol with a redundancy of 89.36%. Additionally, we applied two entropy-based compression algorithms, namely Huffman and arithmetic coding, to test the validity of our estimate.
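The Nth-order estimation described above can be sketched as follows: the conditional entropy of a symbol given its N−1 predecessors is computed from empirical n-gram frequencies, in the spirit of Shannon's F_N approximations. This is a minimal illustration, not the thesis's actual pipeline; the toy sample string stands in for the ten Oromo corpora, and the exact counting conventions used in the thesis may differ.

```python
from collections import Counter
from math import log2

def conditional_entropy(text, n):
    """Estimate the nth-order entropy approximation F_n:
    the entropy of a symbol conditioned on the n-1 preceding symbols."""
    if n == 1:
        counts = Counter(text)
        total = sum(counts.values())
        return -sum(c / total * log2(c / total) for c in counts.values())
    # Count n-grams, then derive context (n-1)-gram counts from them
    # so that conditional probabilities are consistent by construction.
    ngrams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    contexts = Counter()
    for g, c in ngrams.items():
        contexts[g[:-1]] += c
    total = sum(ngrams.values())
    h = 0.0
    for g, c in ngrams.items():
        p_joint = c / total              # P(context, symbol)
        p_cond = c / contexts[g[:-1]]    # P(symbol | context)
        h -= p_joint * log2(p_cond)
    return h

# Toy stand-in text; the thesis instead uses ten Oromo literary works.
sample = "kitaaba afaan oromoo " * 50
for n in (1, 2, 3):
    print(n, round(conditional_entropy(sample, n), 3))
```

As N grows, the conditional entropy drops toward the source's true entropy rate, which is the behavior the zero-crossing estimate at N = 19.5 exploits.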
The Huffman algorithm compressed our sample corpora on average by 42.17% to 64.88% for N = 1 to 5. These compression results confirm our Nth-order estimates of the language's entropy by approaching their theoretical limits.
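The validity check rests on the source-coding theorem: a Huffman code's average length lies within one bit of the (first-order) entropy. A minimal sketch of that comparison, again on a hypothetical stand-in string rather than the thesis corpora:

```python
import heapq
from collections import Counter
from math import log2

def huffman_lengths(freqs):
    """Return the Huffman code length (in bits) of each symbol.

    Each heap entry is (frequency, tiebreaker, {symbol: depth}); merging
    two subtrees increments the depth of every symbol they contain.
    """
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, nxt, merged))
        nxt += 1
    return heap[0][2]

text = "kitaaba afaan oromoo " * 20   # toy stand-in corpus
freqs = Counter(text)
total = len(text)
lengths = huffman_lengths(freqs)
avg_bits = sum(freqs[s] * lengths[s] for s in freqs) / total
entropy = -sum(f / total * log2(f / total) for f in freqs.values())
# Source-coding bound: entropy <= avg_bits < entropy + 1
print(round(entropy, 3), round(avg_bits, 3))
```

The same comparison extended to Nth-order (context-conditioned) models is what lets the measured compression ratios above approach the estimated entropy limits.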