Improving the Performance of Semantic Text Similarity Tasks on Short Text Pairs
dc.contributor.author | ElKafrawy, Passent | |
dc.contributor.author | Gamal, Mohamed Taher | |
dc.date.accessioned | 2023-03-16T05:04:35Z | |
dc.date.available | 2023-03-16T05:04:35Z | |
dc.date.issued | 2023-01 | |
dc.identifier.citation | M. T. Gamal and P. M. El-Kafrawy, "Improving The Performance of Semantic Text Similarity Tasks on Short Text Pairs," 2022 20th International Conference on Language Engineering (ESOLEC), Cairo, Egypt, 2022, pp. 50-52. | en_US |
dc.identifier.doi | 10.1109/ESOLEC54569.2022 | en_US |
dc.identifier.uri | http://hdl.handle.net/20.500.14131/688 | |
dc.description.abstract | Training a semantic similarity model to detect duplicate text pairs is a challenging task because almost all datasets are imbalanced: by the nature of the data, positive samples are fewer than negative samples, and this issue can easily lead to model bias. Using traditional pairwise loss functions such as pairwise binary cross-entropy or contrastive loss on imbalanced data may lead to model bias; triplet loss, however, showed improved performance compared to the other loss functions. In triplet loss-based models, data is fed to the model as follows: an anchor sentence, a positive sentence, and a negative sentence. The original data is permuted to follow this input structure. The default structure of the training data is 363,861 training samples (90% of the data), distributed as 134,336 positive samples and 229,524 negative samples. The triplet-structured data helped to generate a much larger number of balanced training samples: 456,219. The test results showed higher accuracy and F1 scores in testing. We fine-tuned a pre-trained RoBERTa model using the triplet loss approach, and testing showed better results. The best model scored an 89.51 F1 score and 91.45 accuracy, compared to an 86.74 F1 score and 87.45 accuracy for the second-best contrastive loss-based BERT model. | en_US |
dc.publisher | IEEE | en_US |
dc.subject | Training, Semantics, Bit error rate, Distributed databases, Data models, Entropy, Task analysis | en_US |
dc.title | Improving the Performance of Semantic Text Similarity Tasks on Short Text Pairs | en_US |
dc.contributor.researcher | External Collaboration | en_US |
dc.contributor.lab | Artificial Intelligence & Cyber Security Lab | en_US |
dc.subject.KSA | ICT | en_US |
dc.source.index | Scopus | en_US |
dc.contributor.department | Computer Science | en_US |
dc.contributor.firstauthor | Gamal, Mohamed Taher | |
dc.conference.location | Egypt | en_US |
dc.conference.name | 2022 20th International Conference on Language Engineering (ESOLEC) | en_US |
dc.conference.date | 2022-10-12 |
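
The triplet input structure described in the abstract (anchor, positive, negative) can be sketched with the standard triplet margin loss. This is a minimal illustrative sketch, not the paper's implementation: the paper fine-tunes RoBERTa sentence embeddings, whereas the vectors, helper names, and margin value here are toy assumptions.

```python
def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss:
    L = max(d(anchor, positive) - d(anchor, negative) + margin, 0).
    Pushes the anchor closer to the positive than to the negative
    by at least `margin`."""
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# Toy 2-D "embeddings" (the paper would use RoBERTa sentence vectors).
# Anchor is near the positive and far from the negative, so the
# margin constraint is satisfied and the loss is zero.
print(triplet_loss([1.0, 0.0], [1.1, 0.0], [5.0, 0.0]))  # 0.0

# Anchor is nearer the negative than the positive: loss is positive.
print(triplet_loss([0.0, 0.0], [3.0, 0.0], [1.0, 0.0]))  # 3.0
```

During training, minimizing this loss over many permuted (anchor, positive, negative) triples is what lets the balanced triplet-structured data avoid the bias that pairwise losses suffer on imbalanced pairs.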