Abstract
This experiment focuses on enhancing the translation capability of large language models (LLMs) for low-resource languages, specifically Thai. We compared English-to-Thai translation of general text across several LLMs (GPT-3.5 Turbo, Claude 3.5 Sonnet, SeaLLMs, and Typhoon) and found that LLMs pre-trained on the target language, such as Typhoon, translate more accurately than multilingual pre-trained LLMs such as GPT-3.5 Turbo. We then implemented agentic machine translation to enhance translation quality: a process that uses two LLMs, the first assigned as a translator and the second as a reflector. The experiment covers two settings: using the same LLM for both translation and reflection, and using a different LLM for each role. Translation quality before and after reflection was assessed with BERTScore, COMET, METEOR, and BLEU. With different LLMs, translation quality increased when Typhoon acted as the translator and Claude as the reflector. The results indicate that the effectiveness of agentic translation depends on the generated reflection prompt and the models' pre-training languages. Although this experiment does not confirm that agentic machine translation reliably improves LLM-based translation accuracy, it highlights the challenge of leveraging LLM-based translation, rather than conventional machine translation, to improve translation quality in low-resource languages.
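The two-LLM agentic flow described in the abstract (one model drafting a translation, a second reflecting on the draft, then a revision step using that critique) can be sketched as follows. The model calls are abstracted behind plain callables, and the function names and three-step decomposition are illustrative assumptions, not the paper's exact implementation.

```python
from typing import Callable

def agentic_translate(
    translator: Callable[[str], str],
    reflector: Callable[[str, str], str],
    refiner: Callable[[str, str, str], str],
    source: str,
) -> str:
    """Translate `source`, have a second model critique the draft,
    then produce a refined translation informed by the critique."""
    draft = translator(source)               # step 1: initial translation
    critique = reflector(source, draft)      # step 2: reflection on the draft
    final = refiner(source, draft, critique) # step 3: revision using the critique
    return final
```

In the paper's best-performing setting, `translator` would wrap a call to Typhoon and `reflector` a call to Claude 3.5 Sonnet; any chat-completion client can fill these roles, including the same model for both, as in the paper's first setting.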
| Original language | English |
|---|---|
| Title of host publication | Lecture Notes in Computer Science |
| Publisher | Springer |
| ISBN (Print) | 9789819606917 |
| DOIs | |
| Publication status | Published (VoR) - 20 Feb 2025 |
Keywords
- Large Language Models
- Translation Quality
- Agentic Machine Translation
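Of the four metrics used in the evaluation, BLEU is simple enough to sketch in pure Python. The following is a minimal sentence-level implementation of the standard BLEU definition (geometric mean of modified n-gram precisions with a brevity penalty), shown only to illustrate the metric; it is not the scorer used in the paper, and real evaluations typically use a library implementation with smoothing.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference: str, candidate: str, max_n: int = 4) -> float:
    """Sentence-level BLEU: geometric mean of modified n-gram
    precisions for n = 1..max_n, times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped matches: each candidate n-gram counts at most as
        # often as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if overlap == 0:
            return 0.0  # any zero precision drives unsmoothed BLEU to 0
        log_prec_sum += math.log(overlap / total) / max_n
    # Brevity penalty punishes candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec_sum)
```

An exact match scores 1.0, while a candidate sharing no unigrams with the reference scores 0.0; intermediate overlaps fall in between.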