The paper demonstrates the qualitative evaluation of English-to-Urdu machine translation systems, namely the PBSMT and NMT modules hosted on Google's Translate (Rosetta), which was formerly governed by the phrase-based approach and is presently governed by the neural module. In this study, a model corpus of 100 English sentences, drawn out of 1k cross-domain data and covering various types of verbs, has been applied as input text to evaluate the systems. In order to evaluate the output text in a qualitative manner, the Inter-translator Agreement (IA) of three human translators has been considered, with their scores on a five-point scale. The scores are calculated by the Fleiss' Kappa statistical measure with regard to comprehensibility and grammaticality, on the basis of which error analysis and suggestions have been provided. Furthermore, the system has also been quantitatively evaluated on the basis of word error rate (21.11%) and sentence error rate (72.39%). The Kappa scores of PBSMT for comprehensibility and grammaticality are 0.24 and 0.22 respectively, which indicates that on both counts the scores are not up to the mark. On the contrary, the NMT module has Kappa scores of 0.61 and 1 on comprehensibility and grammaticality respectively. So far as WER and SER are concerned, NMT scores 12.58% and 28% respectively.
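For readers unfamiliar with the metrics named above, the following is a minimal, self-contained sketch of how Fleiss' Kappa, word error rate (WER), and sentence error rate (SER) are conventionally computed. It is not taken from the paper: the function names and all data in it are hypothetical, and the paper's own rating protocol (three translators, five-point scale) may differ in detail.

```python
def fleiss_kappa(ratings):
    """Fleiss' Kappa over a list of items, each a dict
    mapping category -> number of raters who chose it."""
    n_items = len(ratings)
    n_raters = sum(ratings[0].values())
    categories = {c for item in ratings for c in item}
    # Per-item observed agreement P_i
    P = [(sum(v * v for v in item.values()) - n_raters)
         / (n_raters * (n_raters - 1)) for item in ratings]
    P_bar = sum(P) / n_items
    # Chance agreement P_e from marginal category proportions
    p_j = {c: sum(item.get(c, 0) for item in ratings) / (n_items * n_raters)
           for c in categories}
    P_e = sum(p * p for p in p_j.values())
    return (P_bar - P_e) / (1 - P_e)

def wer(reference, hypothesis):
    """Word error rate: token-level Levenshtein distance
    divided by the reference length."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

def ser(pairs):
    """Sentence error rate: fraction of (reference, hypothesis)
    pairs that differ anywhere."""
    return sum(ref != hyp for ref, hyp in pairs) / len(pairs)
```

For example, three raters who all agree on every item yield `fleiss_kappa([{'good': 3}, {'bad': 3}]) == 1.0`, matching the perfect grammaticality score reported for NMT, while one substituted word in a four-word sentence gives a WER of 0.25.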