2024, arXiv: Enhancing Grammatical Error Detection using BERT with Cleaned Lang-8 Dataset
An improved model for English GED, compared against LLM-based models (GPT-4 and Llama-3-70B-instruct).
1. Model
- Backbones: bert-base-uncased, bert-large-uncased, roberta-base, and roberta-large.
- Data Collection and Cleaning (see the pipeline sketch after this list)
  - Removing Similar Sentences (rows where column '0' and column '1' are identical, i.e., sentences that are already grammatically correct, are dropped)
  - Text Normalization (Unicode normalized to ASCII)
  - Space Removal (extra whitespace stripped)
  - Lower-casing
  - Handling Contractions, e.g., "can't" becomes "cannot"
  - Punctuation Removal
  - Sentence-length and Levenshtein-distance filtering (pairs with a Levenshtein distance between 7 and 42 and fewer than 101 characters were kept)
  - Normalized Levenshtein Distance Filtering
- Training set: the first 90K grammatically incorrect sentences and the last 90K grammatically correct ones; the middle 20K are held out for validation.
- Model Selection and Training (a fine-tuning sketch follows below)
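A minimal sketch of the cleaning pipeline above, assuming the raw pairs sit in a pandas DataFrame with columns '0' and '1'. The contraction table and the normalized-distance cutoff are our assumptions; the 7-42 distance window and the 101-character cap come from the note.

```python
import re
import unicodedata

import Levenshtein  # pip install python-Levenshtein
import pandas as pd

# Hypothetical contraction table; the note only gives "can't" -> "cannot".
CONTRACTIONS = {"can't": "cannot", "won't": "will not", "n't": " not"}

def normalize(text: str) -> str:
    """Unicode -> ASCII, lower-case, expand contractions, drop punctuation and extra spaces."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
    text = text.lower()
    for short, full in CONTRACTIONS.items():
        text = text.replace(short, full)
    text = re.sub(r"[^\w\s]", "", text)       # punctuation removal
    return re.sub(r"\s+", " ", text).strip()  # space removal

def clean_lang8(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the listed cleaning steps to columns '0' (incorrect) and '1' (correct)."""
    df = df[df["0"] != df["1"]].copy()        # drop pairs that are already identical
    df["0"] = df["0"].map(normalize)
    df["1"] = df["1"].map(normalize)
    dist = df.apply(lambda r: Levenshtein.distance(r["0"], r["1"]), axis=1)
    keep = dist.between(7, 42) & (df["0"].str.len() < 101)
    # Normalized Levenshtein filtering; the 0.5 threshold is an assumption.
    norm = dist / df.apply(lambda r: max(len(r["0"]), len(r["1"])), axis=1).clip(lower=1)
    return df[keep & (norm < 0.5)]
```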
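And a minimal fine-tuning sketch for the binary detector, assuming the Hugging Face transformers and datasets libraries; the hyperparameters and toy data are illustrative, not the paper's.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Any of the listed backbones drops in here (bert-large-uncased, roberta-base, ...).
MODEL_NAME = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy stand-ins for the 90K + 90K split: label 1 = ungrammatical, 0 = grammatical.
train_ds = Dataset.from_dict({
    "text": ["he go to school yesterday", "he went to school yesterday"],
    "label": [1, 0],
}).map(lambda b: tokenizer(b["text"], truncation=True, max_length=128), batched=True)

args = TrainingArguments(output_dir="ged-model", num_train_epochs=3,
                         per_device_train_batch_size=32, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds, tokenizer=tokenizer).train()
```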
2. Background
- The literature review traces the evolution of GED methods from early rule-based and statistical approaches to more advanced neural ones. The paper builds on these foundations with Transformer-based models, rigorous dataset cleaning, and broad evaluation, aiming to push GED toward more accurate, context-aware error detection, to provide higher-quality datasets, and to encourage further research into advanced deep-learning techniques.
3. Dataset
Rigorously cleaned Lang-8 dataset (the Lang-8 data was re-cleaned).
- 2,350,982 rows with two columns, '0' and '1': column '0' contains the incorrect sentences and column '1' holds the corresponding corrected versions.
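A small sketch of turning these pairs into binary GED examples (each incorrect sentence labelled 1, each correction labelled 0); the file name is a placeholder, not the repo's actual path.

```python
import pandas as pd

# Placeholder path; the cleaned file lives in the GitHub repo cited below.
df = pd.read_csv("lang8_cleaned.csv", dtype=str)

# One labelled example per side of each pair.
examples = pd.concat([
    pd.DataFrame({"text": df["0"], "label": 1}),  # incorrect -> 1
    pd.DataFrame({"text": df["1"], "label": 0}),  # correct -> 0
], ignore_index=True)
print(examples.sample(3))
```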
4. Baselines
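The note names GPT-4 and Llama-3-70B-instruct at the top; assuming they serve as zero-shot LLM baselines, here is a hedged sketch of a GED query via the OpenAI Python SDK. The prompt wording is ours, not the paper's.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_ungrammatical(sentence: str) -> bool:
    """Zero-shot GED query; the prompt is illustrative, not the paper's."""
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system", "content": "Answer with exactly one word: yes or no."},
            {"role": "user",
             "content": f"Does this sentence contain a grammatical error?\n{sentence}"},
        ],
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

print(is_ungrammatical("he go to school yesterday"))  # expected: True
```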
5. Experimental Results
References
- Cleaned Lang-8 GED, 2024. github.com/atmabodha/O…
- WeightWatcher tool (Martin et al., 2021) (used to check for overfitted layers; see the sketch below)
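A minimal example of the WeightWatcher check mentioned above, assuming the weightwatcher package from PyPI; reading per-layer alpha as an over-/under-training signal follows the tool's documentation, not this paper.

```python
import weightwatcher as ww
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

watcher = ww.WeightWatcher(model=model)
details = watcher.analyze()  # pandas DataFrame of per-layer power-law fits
# Layers with alpha far outside roughly [2, 6] are commonly flagged by the tool.
print(details[["layer_id", "alpha"]])
```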
Potential Limitations and Outlook
- Table 2: a small anomaly appeared when 8,000 cleaned sentences and 12,000 discarded sentences were used for training; it needs further investigation.