Introduction

There are around 7,000 languages spoken today, of which about 2,500 are considered endangered. Only 5% of the 7,000 languages spoken today are estimated to survive [1]. The primary reason for this is the digital ascent: the languages that can make the digital ascent will survive, while the others will not. The digital world is not as diverse as the real world. Less than 5% of languages have a substantial online presence, and this is a very hard digital divide to bridge.

While English speakers represent only 9% of the world's population, 54% of digital content is in English. The digital gap between English and every other language is vast, primarily because most language technologies have been developed for English. Since research in language technology and computational linguistics began in the mid-20th century, the dominant language has been English. With resources and funding, high-quality English tools were developed for tasks such as speech recognition and synthesis, spelling correction, and semantic parsing. Even today, most machine learning tools and applications are available primarily in English. A scan of leading conferences and scientific journals for the period 2008-2010 reveals 971 publications on language technology for English, compared to 228 for Chinese and 80 for Spanish.

To address this digital divide, more focus should be placed on creating language corpora and solving Natural Language Processing problems in languages other than English. A technique used for one language cannot work directly for another because languages differ morphologically: while English is a fusional language, Tamil is an agglutinative language. In recent times there has been a strong focus on language tools for Chinese, thanks to Baidu and other Chinese companies investing in R&D.
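The fusional/agglutinative contrast can be made concrete with a toy example. The sketch below is a minimal, hypothetical suffix stripper (not a real Tamil morphological analyzer), using a transliterated form and a tiny hand-made suffix list chosen only for illustration:

```python
# Toy illustration of agglutinative morphology (hypothetical sketch, not a
# real Tamil analyzer). In an agglutinative language, grammatical markers
# stack as separable suffixes on a stem: the transliterated Tamil word
# "veedukalil" ("in the houses") is roughly
#   veedu (house) + kal (plural) + il (locative).

SUFFIXES = ["il", "kal"]  # tiny hand-made list: locative, plural markers

def segment(word):
    """Greedily strip known suffixes from the right end of the word."""
    morphemes = []
    changed = True
    while changed:
        changed = False
        for suf in SUFFIXES:
            if word.endswith(suf) and len(word) > len(suf):
                morphemes.insert(0, suf)      # record suffix, outermost last
                word = word[: -len(suf)]      # peel it off the stem
                changed = True
                break
    return [word] + morphemes

print(segment("veedukalil"))  # -> ['veedu', 'kal', 'il']
```

In a fusional language like English, by contrast, a single affix often fuses several grammatical features (e.g. "-s" in "walks" marks person, number, and tense at once), so simple suffix peeling of this kind recovers far less structure.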
A similar effort from every language community can decrease the digital divide.

Natural Language Processing is a range of theory-motivated computational techniques for the automatic analysis and representation of human language. In 1950, Alan Turing proposed the Turing Test as a criterion of intelligence for machines in his article titled "Computing Machinery and Intelligence". According to the Turing Test, a human evaluator has a conversation with a machine through a computer keyboard or screen, and if the evaluator cannot distinguish the machine from a human, the machine is said to possess intelligence. We are still a long way from achieving this. There are various problems in the field of NLP, such as POS tagging, morphological analysis, parsing, semantic analysis, and word sense disambiguation. For Tamil, some of these problems, such as morphological analysis and POS tagging, have been solved with decent accuracy. Almost all of the models were developed using some statistical machine learning technique, and this is true of the field of NLP as a whole.

Historically, most NLP problems have been approached in three ways. In the early stages of NLP and AI, logic was the primary focus of researchers. Logic is pure and works in a very algorithmic manner, but solving problems in NLP and AI required more than logic, since these problems involve many uncertainties. Probabilistic methods emerged from the need to handle uncertainty. Models such as Frequentist and Bayesian ones are data-driven and