    Improving the Performance of Semantic Text Similarity Tasks on Short Text Pairs

    Author
ElKafrawy, Passent
    Gamal, Mohamed Taher
    Subject
Training, Semantics, Bit error rate, Distributed databases, Data models, Entropy, Task analysis
    Date
    2023-01
    
Abstract
Training a semantic similarity model to detect duplicate text pairs is a challenging task because almost all such datasets are imbalanced: by the nature of the data, positive samples are far fewer than negative samples, which can easily bias the model. Traditional pairwise loss functions such as pairwise binary cross-entropy or contrastive loss may suffer from this bias on imbalanced data, whereas triplet loss showed improved performance compared to the other loss functions. In triplet-loss-based models, each training example is fed to the model as an anchor sentence, a positive sentence, and a negative sentence, and the original pairwise data is permuted to follow this input structure. The default training set contains 363,861 samples (90% of the data), distributed as 134,336 positive and 229,524 negative samples; restructuring it into triplets yielded a much larger balanced set of 456,219 training samples. We fine-tuned a pre-trained RoBERTa model using the triplet loss approach, and testing showed higher accuracy and F1 scores. The best model scored 89.51 F1 and 91.45 accuracy, compared with 86.74 F1 and 87.45 accuracy for the second-best model, a contrastive-loss-based BERT.
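The triplet restructuring and fine-tuning procedure summarized in the abstract can be sketched as follows. This is a minimal illustration using the sentence-transformers library with toy data; the checkpoint name, dataset fields, and hyperparameters are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch: permute pairwise duplicate-detection data into (anchor, positive,
# negative) triplets, then fine-tune a pre-trained RoBERTa encoder with
# triplet loss. Assumes the sentence-transformers library.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Hypothetical pairwise-labelled data: (text_a, text_b, label), 1 = duplicate.
pairs = [
    ("How do I reset my password?", "How can I change my password?", 1),
    ("How do I reset my password?", "What is the capital of France?", 0),
]

def build_triplets(pairs):
    """Permute pairwise data into triplets: for each anchor, pair every
    known duplicate (positive) with every known non-duplicate (negative)."""
    positives, negatives = {}, {}
    for a, b, label in pairs:
        (positives if label == 1 else negatives).setdefault(a, []).append(b)
    triplets = []
    for anchor, pos_list in positives.items():
        for pos in pos_list:
            for neg in negatives.get(anchor, []):
                triplets.append(InputExample(texts=[anchor, pos, neg]))
    return triplets

train_examples = build_triplets(pairs)
train_loader = DataLoader(train_examples, shuffle=True, batch_size=2)

# Wrap a pre-trained RoBERTa checkpoint; sentence-transformers adds a mean
# pooling layer on top of the raw Hugging Face model.
model = SentenceTransformer("roberta-base")  # assumed checkpoint
loss = losses.TripletLoss(model=model)
model.fit(train_objectives=[(train_loader, loss)], epochs=1, warmup_steps=100)
```

The triplet loss pulls the anchor embedding toward the positive and pushes it away from the negative by at least a margin, which is how the permuted, balanced triplet set sidesteps the class imbalance of the raw pairwise data.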
    Department
    Computer Science
    Publisher
    IEEE
    DOI
    10.1109/ESOLEC54569.2022
    Collections
    Conference Papers
