Conference Papers
Recent Submissions
-
Sudden Fall Detection and Prediction Using AI Techniques
Fall prediction is a critical process in ensuring the safety and well-being of individuals, particularly the elderly population. This paper focuses on the development of a fall detection and prediction system using wearable sensors and machine learning algorithms. The system issues an alarm upon predicting the occurrence of a fall and sends alerts to a monitoring centre for timely assistance. Wearable sensor devices, including Inertial Measurement Units (IMUs) equipped with accelerometers, gyroscopes, and magnetometers, are utilized for data collection. UP-Fall, a comprehensive, freely available online dataset, has been utilized for training and verifying the proposed system. Several machine learning algorithms, such as K-Nearest Neighbours (KNN), Random Forest, Support Vector Machine (SVM), and Gradient Boosting, are implemented and evaluated. Among these algorithms, KNN demonstrates the highest effectiveness for fall detection, achieving an accuracy of 93.5%. Furthermore, deep learning models, including Gated Recurrent Units (GRU), Long Short-Term Memory (LSTM), and Convolutional Neural Networks (CNN), are employed. The GRU model exhibits superior performance among the deep learning approaches, with the lowest training and test losses of 0.219 and 0.267, respectively. An early fall prediction function is incorporated by establishing a threshold selection process based on logical analysis. The maximum voting concept is employed to determine the optimal threshold.
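Below is a minimal sketch of the classifier-comparison stage described above, assuming windowed IMU features have already been extracted; the synthetic arrays stand in for the UP-Fall recordings, and the feature count is an arbitrary placeholder.

```python
# Hedged sketch: comparing the classifiers named in the abstract on
# windowed IMU features. Synthetic data stands in for UP-Fall; the
# feature extraction and window sizes here are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 18))          # e.g. mean/std of 3-axis acc/gyro/mag windows
y = rng.integers(0, 2, size=1000)        # 1 = fall, 0 = activity of daily living

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RandomForest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy = {acc:.3f}")
```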
-
Machine Unlearning: An Overview of the Paradigm Shift in the Evolution of AI
The rapid advancements in artificial intelligence (AI) have primarily focused on the process of learning from data to acquire knowledge for smart systems. However, the concept of machine unlearning has emerged as a transformative paradigm shift in the field of AI, driven in part by the amount of false information that has been learned in the past. Machine unlearning refers to the ability of AI systems to reverse or discard previously acquired knowledge or patterns, enabling them to adapt and refine their understanding in response to changing circumstances or new insights. This paper explores the concept of machine unlearning, its implications, methods, challenges, and potential applications. The paper begins by providing an overview of traditional learning-based approaches in AI and the limitations they impose on system adaptability and agility. It then delves into the concept of machine unlearning, discussing various techniques and algorithms employed to remove or modify learned knowledge in AI models or datasets.
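As a concrete reference point for the surveyed techniques, the sketch below shows the naive exact-unlearning baseline: retraining from scratch without the records to be forgotten. The model and data are placeholders, not any method from the paper; approximate unlearning methods aim to match this retrained model's behaviour at lower cost.

```python
# Hedged sketch: the simplest "exact unlearning" baseline -- drop the
# forget set and retrain. Model choice and data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
y = (X[:, 0] > 0).astype(int)

original = LogisticRegression().fit(X, y)        # model trained on all data

forget_idx = np.arange(50)                       # records to be unlearned
keep = np.setdiff1d(np.arange(len(X)), forget_idx)
unlearned = LogisticRegression().fit(X[keep], y[keep])  # retrain from scratch

# Approximate unlearning techniques are typically judged by how closely
# they reproduce this retrained model without full retraining cost.
print(original.coef_.round(2), unlearned.coef_.round(2), sep="\n")
```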
-
Statistical Analysis for Evaluation and Improvement of Computer Science Education
Developing well-prepared and competent graduates is one of the main goals of all university programs globally. Recently, Computer Science (CS) has achieved tremendous success among career fields, driven by strong competition in the market and the rapid changes in technology. Our goal is to develop an automated framework that provides efficient management, evaluation, and improvement of CS students' education, as well as a sound basis for establishing a successful study tree for CS university programs. Such a challenging goal comprises major factors that should be considered inclusively. High school (HS) students are expected to join CS university programs with different educational backgrounds and learning capabilities. The strength of association among several performance-related factors, including students' academic performance in HS, is evaluated to gain insights and infer indicators in CS programs. An automatic correlational analysis of the prerequisites for each course is also conducted to assess the program structure and the dependencies among CS courses. In this comprehensive study, all these factors are analyzed in order to investigate the valid causes of low and high performance of both CS university students and programs. Experimental results yield several major findings with validated associations that confirm and prioritize the importance of evaluating and improving CS education.
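A minimal sketch of the kind of association analysis described, assuming per-student records with high-school and course grades; the column names and the synthetic relationships are hypothetical placeholders.

```python
# Hedged sketch of the correlational analysis: associating HS performance
# and a prerequisite grade with a CS course grade. Columns are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 200
hs = rng.uniform(60, 100, n)                      # high-school grade
prereq = 0.6 * hs + rng.normal(0, 8, n)           # prerequisite course grade
course = 0.5 * prereq + 0.2 * hs + rng.normal(0, 6, n)

df = pd.DataFrame({"hs_grade": hs, "prereq_grade": prereq, "course_grade": course})
print(df.corr(method="pearson").round(2))         # strength of association
```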
-
From Analysis to Implementation: A Comprehensive Review for Advancing Arabic-English Machine Translation
In an increasingly interconnected world, the demand for accurate Arabic-English translation has surged, highlighting the complexities in handling Arabic's intricate morphology and diverse linguistic structures. This research delves into various translation models, including Convolutional Neural Networks (CNNs), LSTM, Neural Machine Translation (NMT), BERT, and innovative fusion architectures like the Transformer-CNN. Each model's strengths and limitations are scrutinized through comprehensive evaluations and comparisons, unveiling their potential to address translation challenges. The research then builds two models, the first based on LSTM and the second on BERT, and tests their performance in translating English to Arabic. The paper then conducts an in-depth analysis of the results. The comparative analysis provides insights into the landscape of Arabic-English translation models, guiding future research toward refining models, leveraging diverse datasets, and establishing standardized evaluation benchmarks to bridge existing gaps.
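For illustration, the snippet below runs a pretrained English-to-Arabic checkpoint of the kind surveyed; the Helsinki-NLP/opus-mt-en-ar model name is an assumption, and the paper's own LSTM and BERT models are not reproduced here.

```python
# Hedged sketch: translating with an off-the-shelf NMT checkpoint via the
# transformers pipeline. The model name is an assumption, not the paper's.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ar")
result = translator("Machine translation bridges languages.")
print(result[0]["translation_text"])
```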
-
Dynamic Pricing Mechanisms for Load Management in Smart Grids
In this research study, we introduce an innovative approach to managing load distribution in smart grid infrastructures. The primary objective is to regulate power demand during peak hours using dynamic pricing mechanisms. Unlike traditional centralized systems, our proposed solution operates on a distributed paradigm, deriving its foundation from the rational actions of grid users. To effectively analyze the prevailing scenario, we employ a game-theoretic model to understand the behaviors of users, who, in our model, act primarily out of self-interest. An initial observation indicates that if left to their own devices, these selfish users may make decisions that are arbitrarily detrimental to the system's efficient functioning. In response to this challenge, we introduce a pricing mechanism designed to enhance the quality of the allocation shaped by these self-interested users. We begin by establishing the validity of our approach through a proof that the game, even with selfish users, reaches a Nash equilibrium. This equilibrium ensures that no player can benefit by unilaterally deviating from their chosen strategy given the other players' choices. Following this, we demonstrate that by incorporating our dynamic pricing strategy, the peak demand of the resulting allocation is effectively managed. Specifically, the peak of this allocation, even in the worst case, will not exceed double the value of the optimal peak. This result underscores the efficiency and efficacy of our proposed mechanism in maintaining a balance between user behavior and systemic demand, ensuring a more stable and sustainable smart grid infrastructure.
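The toy simulation below illustrates the game-theoretic setting under stated assumptions: each user places one unit of load in a time slot and pays a price that grows with that slot's total load, and repeated best responses settle into a Nash equilibrium. It is an illustration of the model class, not the paper's proof or pricing mechanism.

```python
# Hedged sketch: best-response dynamics in a singleton congestion game as
# a stand-in for the selfish scheduling setting. All parameters are toys.
import numpy as np

rng = np.random.default_rng(3)
n_users, n_slots = 40, 8
slot = rng.integers(0, n_slots, n_users)          # arbitrary initial choices

def loads(slot):
    """Total load scheduled in each time slot."""
    return np.bincount(slot, minlength=n_slots)

changed = True
while changed:                                    # best-response dynamics
    changed = False
    for u in range(n_users):
        l = loads(slot)
        others = l.copy()
        others[slot[u]] -= 1                      # loads excluding user u
        best = int(np.argmin(others))
        if others[best] + 1 < l[slot[u]]:         # strictly cheaper slot exists
            slot[u] = best
            changed = True

print("equilibrium loads:", loads(slot))
print("peak:", loads(slot).max(), "vs. optimal peak:", -(-n_users // n_slots))
```

Because each improving move strictly decreases a potential function, the loop terminates, mirroring the equilibrium-existence argument sketched in the abstract.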
-
Mapping the Intersection of AI and Science Fiction: A Data-Driven Analysis Using R-Package
This study investigates the intersection between Artificial Intelligence (AI) and science fiction (Sci-Fi), using the bibliometrix R-package and VOSviewer network visualizations to analyze data from the Web of Science (WoS) and Scopus databases. The search yielded 462 articles from WoS and a significantly larger corpus of 1029 articles from Scopus. Our findings indicate that Computer Science is the predominant field in AI and Sci-Fi research, accounting for 39% and 40% of the documents in WoS and Scopus, respectively. This underscores the central role of Computer Science in shaping AI within Sci-Fi narratives. Notably, in Scopus, Engineering and Mathematics also emerged as significant categories, highlighting the research's strong technical and quantitative focus. The study also reveals a keen interest in understanding AI's societal impacts, as evidenced by numerous Humanities and Social Sciences articles. Additionally, recent AI and Sci-Fi research trends, such as 'chatbots', 'ChatGPT', 'metaverse', and 'privacy', were identified. These trends align with advancements in natural language processing (NLP), virtual realities (VR), data privacy, and deep learning technologies, indicating a shift in academic focus towards these cutting-edge areas. This comprehensive study offers valuable insights into the evolving landscape of AI and Sci-Fi research, guiding future scholarly exploration in these dynamically interconnected fields.
-
LL-XSS: End-to-End Generative Model-based XSS Payload Creation
In the realm of web security, there is a growing shift towards harnessing machine learning techniques for Cross-Site Scripting (XSS) vulnerability detection. This shift recognizes the potential of automation to streamline identification processes and reduce reliance on manual human analysis. An alternative approach involves security professionals actively executing XSS attacks to precisely pinpoint vulnerable areas within web applications, facilitating targeted remediation. Furthermore, there has been a growing interest in machine learning-based methods for creating XSS payloads in academic and research domains. In this research, we introduce a new model for generating XSS payloads, utilizing a combination of auto-regressive and generative AI models to craft malicious scripts intended to exploit potential vulnerabilities. Our approach to XSS vulnerability detection encompasses both frontend and backend code, providing organizations with a comprehensive means to enhance web application security.
-
Vyond as a Gateway: Enhancing SDLC Education through Animated-Based Techniques
This study centers on the development of a narrative-driven, science fiction-themed animated video using Vyond's Go AI-powered platform. The objective is to enhance conceptual understanding and cognitive engagement in the Software Development Life Cycle (SDLC), covering requirements gathering, design, implementation, testing, and maintenance, for computer science undergraduates. The developed animation and storyline present each of the SDLC stages in an engaging and educational manner, ensuring the content is accessible to those with varying levels of familiarity with computer science. Following the development, the paper assesses the animated video's impact on participants' comprehension and engagement through questionnaires targeting two groups: those with and without a software background. This approach allows for a detailed understanding of the animation's effectiveness across different knowledge levels. Findings reveal that AI-enhanced narrative animations significantly improve educational engagement and understanding, contributing to advancements in educational technology and aligning with Education for Sustainable Development goals. The study highlights AI's potential to revolutionize educational approaches and simplify complex topics.
-
Programmer Performance Prediction with Cognitive Tests: A Granular Approach
Universities and students face the challenge of finding the career that best suits each student. Programming is currently one of the most sought-after careers and among the highest paying jobs; however, not all students succeed in this career. In fact, failure rates in the programming field are relatively high globally. In this research, we administered a spatial rotation test and a non-verbal reasoning test, alongside gender, at the beginning of the semester to observe the relationship between those tests and students' grades at the end of the semester for two groups. Group 1 consists of Computer Science students taking a programming course, and group 2 consists of non-Computer Science students taking a programming course. For each group, we created a predictive model using two methods. Method 1 uses the conventional aggregate score of the cognitive tests, and method 2 uses answers to selected questions in each test. In method 2, we applied a data-driven methodology that utilizes individual answers to questions as input to the model. To the best of our knowledge, we are the first in this field to apply this method. The modeling results show the usefulness of these tests in predicting programming and non-programming course grades using both methods; however, method 2 substantially outperforms the aggregate-based method 1.
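A minimal sketch contrasting the two methods on synthetic data: method 1 feeds the aggregate test score to the model, method 2 feeds the individual question answers. The outcome construction is hypothetical, chosen so that specific questions carry the signal.

```python
# Hedged sketch of the two modelling routes: aggregate score (method 1)
# vs. per-question answers (method 2). Data and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n, n_questions = 300, 20
answers = rng.integers(0, 2, size=(n, n_questions))      # per-question 0/1
passed = (answers[:, :5].sum(axis=1) >= 3).astype(int)   # grade outcome

X_aggregate = answers.sum(axis=1, keepdims=True)         # method 1 input
X_granular = answers                                     # method 2 input

for name, X in [("aggregate", X_aggregate), ("granular", X_granular)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, passed, cv=5)
    print(f"method ({name}): mean accuracy = {acc.mean():.3f}")
```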
-
A Smart and Privacy-Preserving Logistics System Based on IoT and Blockchain Technologies
The smart city concept serves the well-being of the urban population in order to improve quality of life across different aspects of life, including logistics services. However, the continuous growth of logistics volume, combined with information opaqueness and process complexity, has led to challenging issues in managing logistics services and information. Hence, efficient management of logistics activities with traceability and condition-monitoring capabilities is required to ensure quality and safe delivery. It is also necessary to ensure the accuracy and dependability of distribution data. In this context, this paper proposes a smart and privacy-preserving logistics system for the distribution of high-price goods. An intelligent parcel (iParcel) containing piezoresistive sensors is developed to pack delivered goods during the shipping process for violation detection, such as a severe fall or theft. Moreover, blockchain-based smart contracts are developed for automatic approval and payment, with shipping information distributed among legitimate logistics parties only. A zero-knowledge proof is used to conceal the blockchain address and prove authentication. iParcels are automatically tracked and traced; upon the occurrence of a violation, the contract is cancelled and the payment is refunded. The transaction fee per party is reasonable for high-price products as the cost of guaranteeing successful shipment.
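The escrow logic that the smart contract automates can be sketched as a plain state machine, as below; this is an off-chain Python illustration under assumed rules, not the paper's on-chain contract, and the zero-knowledge address proof is omitted.

```python
# Hedged sketch: the cancel-and-refund / deliver-and-pay escrow flow the
# abstract describes, as a plain state machine. Rules are assumptions.
class ShipmentContract:
    def __init__(self, price):
        self.price = price
        self.state = "IN_TRANSIT"
        self.escrow = price                       # buyer's payment locked

    def report_violation(self):                   # fed by iParcel sensors
        if self.state == "IN_TRANSIT":
            self.state = "CANCELLED"
            refund, self.escrow = self.escrow, 0
            return refund                         # payment refunded to buyer
        return 0

    def confirm_delivery(self):
        if self.state == "IN_TRANSIT":
            self.state = "COMPLETED"
            payout, self.escrow = self.escrow, 0
            return payout                         # payment released to seller
        return 0

contract = ShipmentContract(price=1000)
print(contract.report_violation(), contract.state)   # 1000 CANCELLED
```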
-
Sentiment Analysis: Amazon Electronics Reviews Using BERT and Textblob
The market needs a deeper and more comprehensive grasp of its own insights, which is where the analytics world and methodologies such as sentiment analysis come in. These methods can assist people, especially business owners, in gaining live insights into their businesses and determining whether customers are satisfied. This paper aims to provide such indicators by gathering real-world Amazon reviews from Egyptian customers and applying both Bidirectional Encoder Representations from Transformers (BERT) and TextBlob sentiment analysis methods. The process determines the overall satisfaction of Egyptian customers in the electronics department, in order to focus on a specific domain. The two methods are compared for both the Arabic and English languages. The results show that customers on Amazon.eg are mostly satisfied, at a rate of 47%. In terms of performance, BERT outperformed TextBlob, indicating that the word-embedding-based BERT model is superior to the rule-based TextBlob model, with a difference of 15% to 25%.
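A minimal sketch of the two scoring routes being compared: TextBlob's rule-based polarity versus a transformer classifier. The default pipeline checkpoint is English-only; handling the Arabic reviews would require an Arabic-capable BERT variant, which is not shown here.

```python
# Hedged sketch: rule-based vs. transformer-based sentiment scoring on a
# single review. The pipeline's default English checkpoint is assumed.
from textblob import TextBlob
from transformers import pipeline

review = "The headphones arrived quickly and sound amazing."

polarity = TextBlob(review).sentiment.polarity    # rule-based, in [-1, 1]
print("TextBlob polarity:", polarity)

classifier = pipeline("sentiment-analysis")       # BERT-family classifier
print("Transformer:", classifier(review)[0])
```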
-
Improving the Performance of Semantic Text Similarity Tasks on Short Text Pairs
Training a semantic similarity model to detect duplicate text pairs is a challenging task, as almost all such datasets are imbalanced: by the nature of the data, positive samples are fewer than negative samples, and this issue can easily lead to model bias. Using traditional pairwise loss functions, such as pairwise binary cross-entropy or contrastive loss, on imbalanced data may lead to model bias; however, triplet loss showed improved performance compared to the other loss functions. In triplet-loss-based models, data is fed to the model as an anchor sentence, a positive sentence, and a negative sentence, and the original data is permutated to follow this input structure. The default structure of the training data comprises 363,861 training samples (90% of the data), distributed as 134,336 positive samples and 229,524 negative samples. The triplet-structured data helped generate a much larger set of 456,219 balanced training samples, and the test results showed higher accuracy and F1 scores. We fine-tuned the pre-trained RoBERTa model using the triplet loss approach, and testing showed better results. The best model scored an 89.51 F1 score and 91.45 accuracy, compared to an 86.74 F1 score and 87.45 accuracy for the second-best contrastive-loss-based BERT model.
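A short sketch of triplet-loss fine-tuning in the spirit described, using the sentence-transformers wrapper around a RoBERTa checkpoint; the single toy triple stands in for the permutated duplicate-question data, and all hyperparameters are placeholders.

```python
# Hedged sketch: triplet-loss fine-tuning of a RoBERTa encoder with
# sentence-transformers. One toy (anchor, positive, negative) triple
# stands in for the full permutated dataset.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, models

word_model = models.Transformer("roberta-base", max_seq_length=64)
pooling = models.Pooling(word_model.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_model, pooling])

train = [
    InputExample(texts=["How do I learn Python?",           # anchor
                        "What is the best way to learn Python?",  # positive
                        "How do I bake sourdough bread?"]),  # negative
]
loader = DataLoader(train, shuffle=True, batch_size=1)
loss = losses.TripletLoss(model=model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=0)
```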
-
A Core Ontology to Support Agricultural Data Interoperability
The amount and variety of raw data generated in the agriculture sector from numerous sources, including soil sensors and local weather stations, are proliferating. However, these raw data in themselves are meaningless and isolated and, therefore, may offer little value to the farmer. Data usefulness is determined by its context and meaning and by how it is interoperable with data from other sources. Semantic web technology can provide context and meaning to data and its aggregation by providing standard data interchange formats and description languages. In this paper, we introduce the design and overall description of a core ontology that facilitates the process of data interoperability in the agricultural domain.
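As a flavor of how such a core ontology's terms could be declared and used, the rdflib sketch below defines two classes and a linking property; the namespace and term names are illustrative, not the paper's actual vocabulary.

```python
# Hedged sketch: declaring hypothetical core-ontology terms and one
# observation with rdflib. The agri-core namespace is invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

AGRI = Namespace("http://example.org/agri-core#")
g = Graph()
g.bind("agri", AGRI)

# Core classes and a property linking observations to their sensor.
g.add((AGRI.Sensor, RDF.type, RDFS.Class))
g.add((AGRI.Observation, RDF.type, RDFS.Class))
g.add((AGRI.observedBy, RDFS.domain, AGRI.Observation))
g.add((AGRI.observedBy, RDFS.range, AGRI.Sensor))

# Instance data from one soil sensor, interoperable via the shared terms.
g.add((AGRI.obs1, RDF.type, AGRI.Observation))
g.add((AGRI.obs1, AGRI.observedBy, AGRI.soilSensor42))
g.add((AGRI.obs1, AGRI.hasValue, Literal(31.5)))

print(g.serialize(format="turtle"))
```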
-
Dynamic Modeling and Identification of the COVID-19 Stochastic Dispersion
In this work, the stochastic dispersion of the novel coronavirus disease 2019 (COVID-19) at the borders between France and Italy is considered using a multi-input multi-output stochastic model. The physical effects of wind, temperature, and altitude are investigated, as these factors and their physical relationships are stochastic in nature. Stochastic terms are also included to account for the turbulence effect and the random nature of the physical parameters considered. A method is then proposed to identify the order and parameters of the developed model. Actual data have been used as a reference in the identification and prediction process. These data are divided into two parts: the first part is used to calculate the stochastic parameters of the model, which are used to predict the COVID-19 level, while the second part is used as check data. The predicted results are in good agreement with the check data.
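A minimal single-input sketch of the identification-then-validation procedure: fit an ARX-style model by least squares on the first part of the series, then score its one-step predictions on the check data. The true system and noise level are synthetic stand-ins for the paper's multi-input formulation.

```python
# Hedged sketch: least-squares identification of y[k] = a1*y[k-1] + b1*u[k-1],
# validated on held-out "check data". All signals are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n = 300
u = rng.normal(size=n)                            # exogenous input (e.g. wind)
y = np.zeros(n)
for k in range(1, n):                             # simulated true system + noise
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.1 * rng.normal()

split = 200                                       # first part: identification
Phi = np.column_stack([y[:split - 1], u[:split - 1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:split], rcond=None)
print("identified [a1, b1]:", theta.round(3))

y_hat = theta[0] * y[split - 1:-1] + theta[1] * u[split - 1:-1]
rmse = np.sqrt(np.mean((y[split:] - y_hat) ** 2))
print("check-data RMSE:", rmse.round(3))          # second part: validation
```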
-
Using Knowledge Graph Embeddings in Embedding Based Recommender Systems
This paper proposes using entity2rec [1], which utilizes knowledge graph-based embeddings (node2vec), instead of traditional embedding layers in embedding-based recommender systems. This opens the door to increasing the accuracy of some of the most widely deployed recommender systems running in production by simply replacing the traditional embedding layer with node2vec graph embeddings, without the risk of completely migrating to newer state-of-the-art systems and facing unexpected performance issues. Graph embeddings can also incorporate user and item features, which can help solve the well-known cold-start problem in recommender systems. Both embedding methods are compared on the MovieLens 100K dataset in an item-item collaborative filtering recommender, and we show that the suggested replacement improves the representation learning of the embedding layer by adding a semantic layer that can increase the overall performance of standard embedding-based recommenders. First, standard recommender systems are introduced, along with a brief explanation of both traditional and graph-based embeddings. Then, the proposed approach is presented along with related work. Finally, results are presented along with future work.
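A small sketch of the embedding swap on a toy interaction graph: learn node2vec vectors and rank items by cosine similarity, as an embedding-layer replacement would. The graph, hyperparameters, and the third-party node2vec package are assumptions, not the paper's entity2rec/MovieLens setup.

```python
# Hedged sketch: node2vec embeddings on a tiny user-item graph, then
# item-item similarity from the learned vectors. All choices are toys.
import networkx as nx
from node2vec import Node2Vec

g = nx.Graph()                                    # bipartite user-item graph
g.add_edges_from([("u1", "i1"), ("u1", "i2"), ("u2", "i2"),
                  ("u2", "i3"), ("u3", "i1"), ("u3", "i3")])

n2v = Node2Vec(g, dimensions=16, walk_length=10, num_walks=50, workers=1)
model = n2v.fit(window=5, min_count=1)            # gensim Word2Vec under the hood

# Nearest items to i1 by cosine similarity of the graph embeddings.
print(model.wv.most_similar("i1"))
```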
-
Recommender Diagnosis System with Fuzzy Logic in Cloud Environment
Recommendation systems are now used widely in many fields. In the medical field, recommendation systems are of great value to both doctors and patients because of their accurate predictions, and they can reduce the time and effort spent by both. The present work introduces a simple and effective methodology for a medical recommendation system based on fuzzy logic. Fuzzy logic is an appropriate method here because the input data are fuzzy: the input data for each patient are not the same, and recommendations can differ accordingly. This work aims to develop techniques for handling patient data to produce accurate lifestyle recommendations for the patient. Fuzzy logic is utilized to form different recommendations for the patient, such as lifestyle, medicine, and sports recommendations, based on patient factors such as age, gender, and diseases. After evaluation, the system's efficiency reached 94%. This experiment is the final module in a four-module recommendation system: the first module is responsible for diagnosing chest diseases using ECG signals, the second makes diagnoses using X-ray images, and the third ensures the security of the whole system through encryption when sending user data over the cloud.
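A minimal sketch of the fuzzy step, assuming triangular memberships over age and a hypothetical rule base mapping age groups to daily exercise minutes; the paper's actual rules, patient factors, and cloud components are not reproduced.

```python
# Hedged sketch: triangular fuzzy memberships over age feed three toy
# rules whose firing strengths weight the recommended exercise minutes.
def tri(x, a, b, c):
    """Triangular membership with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def recommend_exercise(age):
    young = tri(age, 0, 25, 50)
    middle = tri(age, 30, 50, 70)
    senior = tri(age, 55, 80, 120)
    weights = [young, middle, senior]
    minutes = [60, 40, 20]                        # hypothetical rule consequents
    # Weighted (centroid-style) blend of the per-rule recommendations.
    return sum(w * m for w, m in zip(weights, minutes)) / (sum(weights) or 1.0)

print(f"Recommended daily exercise at age 62: {recommend_exercise(62):.0f} min")
```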
-
Early Fall Prediction Using Hybrid Recurrent Neural Network and Long Short-Term Memory
Falls are unintentional events that may occur in all age groups, particularly among the elderly, with negative impacts including severe injuries and deaths. Although numerous machine learning models have been proposed for fall detection, their formulations are limited when it comes to preventing the occurrence of falls. Recently, the emerging research area of early fall prediction has been receiving increasing attention. The major challenges of fall prediction are the long period of unseen future data and the inherent uncertainty in the time of occurrence of fall events. To extend the predictability (from 0.5 to 5 s) of the early fall prediction model, we propose a particle swarm optimization-based recurrent neural network and long short-term memory (RNN-LSTM) model. Results and analysis show that the algorithm yields accuracies of 89.8–98.2%, 88.4–97.1%, and 89.3–97.6% on three benchmark datasets: the UP-Fall dataset, the MobiFall dataset, and the UR Fall dataset, respectively.
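A bare-bones sketch of the sequence-model stage: an LSTM classifier over short IMU windows predicting whether a fall follows within the horizon. The particle swarm optimization search and the hybrid RNN-LSTM structure are not reproduced; shapes and data are synthetic placeholders.

```python
# Hedged sketch: an LSTM over IMU windows as a stand-in for the paper's
# PSO-tuned hybrid RNN-LSTM. Data, shapes, and horizon are synthetic.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(6)
X = rng.normal(size=(512, 50, 6)).astype("float32")  # 50 steps x 6 IMU channels
y = rng.integers(0, 2, size=512)                     # 1 = fall within horizon

model = keras.Sequential([
    keras.layers.Input(shape=(50, 6)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))               # [loss, accuracy]
```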
-
Artificial Intelligence in Brain Computer Interface
A brain-computer interface (BCI) is a communication pathway between the brain and an external device. Motor imagery (MI) is proven to be a useful cognitive technique for enhancing motor skills as well as for movement disorder rehabilitation therapy. It is known that the efficiency of MI training can be enhanced by the BCI approach, which provides real-time feedback on the subject's mental attempts. Artificial intelligence (AI) methods play a key role in detecting changes in brain signals and converting them into appropriate control signals. In this paper, we focus on brain signals obtained from the scalp to control assistive devices. In addition, the signal denoising, feature extraction, dimension reduction, and AI techniques utilized for EEG-based BCI are evaluated. Moreover, Bagging and AdaBoost ensembles are utilized to classify MI tasks for BCI using EEG signals. Different classifiers are used to enhance the performance of detecting brain signals in real time while controlling latency. MI-related brain activities can be categorized efficiently via AI techniques. This paper utilizes a wavelet packet decomposition feature extraction approach to improve MI recognition accuracy, and the proposed approach classifies MI-related brain signals using ensemble techniques. The results show that the proposed framework surpasses traditional machine learning approaches. Furthermore, the proposed AdaBoost with k-NN ensemble approach yields strong performance for MI classification, with 94.57% classification accuracy in the subject-independent case.
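A hedged sketch of the named pipeline: wavelet-packet energies as EEG features feeding an ensemble of k-NN classifiers. Bagging is shown here because scikit-learn's AdaBoost requires sample-weight support that plain k-NN lacks; the signals are synthetic, and the wavelet choices are placeholders.

```python
# Hedged sketch: wavelet-packet energy features per trial, classified by
# a bagged k-NN ensemble. Synthetic signals stand in for real EEG.
import numpy as np
import pywt
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)

def wp_energies(signal, level=3):
    """Energy of each wavelet-packet leaf node as the feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet="db4", maxlevel=level)
    return [float(np.sum(node.data ** 2)) for node in wp.get_level(level)]

# 120 single-channel trials, 256 samples each; labels = imagined movement class.
trials = rng.normal(size=(120, 256))
labels = rng.integers(0, 2, size=120)
X = np.array([wp_energies(t) for t in trials])

ensemble = BaggingClassifier(KNeighborsClassifier(n_neighbors=5), n_estimators=25)
print("CV accuracy:", cross_val_score(ensemble, X, labels, cv=5).mean().round(3))
```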