Recent Submissions

  • A Comprehensive Review of the Li-Ion Batteries Fast-Charging Protocols

    Mouais, Talal; Mian Qaisar, Saeed; Department Collaboration; Energy Lab; Electrical and Computer Engineering (Wiley, 2023-09-15)
    One of the significant drawbacks of renewable energy sources, such as solar and wind, is their intermittent pattern of functioning. One promising method to overcome this limitation is to use a battery pack so that the generated renewable energy can be stored until required. Batteries are known for their high commercial potential, fast response time, modularity, and flexible installation. Therefore, they are a very attractive option for renewable energy storage, peak shaving during intensive grid loads, and as a backup system for controlling voltage drops in the energy grid. The lithium-ion (Li-Ion) battery is considered one of the most promising battery technologies. It has a high energy density, a fair performance-to-cost ratio, and a long life compared to its counterparts. With the evolved deployment of Li-Ion batteries, the latest trend is to investigate fast Li-Ion battery charging protocols. The aim is to attain around 70-80% State of Charge (SoC) within a few minutes. However, fast charging is a challenging approach. The cathode particle monitoring and electrolyte transportation limitations are the major bottlenecks in this regard. Additionally, sophisticated process-control mechanisms are necessary to avoid overcharging, which can cause a rapid diminishing of battery capacity and life. This chapter mainly focuses on an important aspect of realizing effective fast-charging protocols for Li-Ion batteries. It presents a comprehensive survey of the advancement of fast-charging battery materials and protocols. Additionally, state-of-the-art approaches for optimizing the configurations of concurrent fast-charging protocols to maximize the Li-Ion battery life cycle are also presented.
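Many of the fast-charging protocols surveyed in such reviews are built from multi-stage constant-current (MCC) profiles. As a rough illustration of how an MCC profile reaches a target SoC, here is a minimal Coulomb-integration sketch; the cell capacity, stage currents, and cutoffs are illustrative assumptions, not values from the chapter.

```python
# Sketch of a multi-stage constant-current (MCC) fast-charging profile.
# Each stage is (current_A, soc_cutoff); charge is integrated over time
# (Coulomb counting) until the target SoC is reached.

def simulate_mcc_charge(capacity_ah=3.0, stages=((4.0, 0.5), (2.0, 0.8)),
                        dt_s=1.0, target_soc=0.8):
    """Return (final SoC fraction, elapsed minutes) for an MCC charge."""
    soc = 0.0
    t = 0.0
    for current_a, cutoff in stages:
        while soc < min(cutoff, target_soc):
            soc += current_a * dt_s / 3600.0 / capacity_ah  # dQ / Q_rated
            t += dt_s
        if soc >= target_soc:
            break
    return soc, t / 60.0

soc, minutes = simulate_mcc_charge()
```

With these toy numbers the profile front-loads a high current, then tapers to a gentler stage, which is the basic trade-off between charging speed and cell stress that the surveyed protocols optimize.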
  • Comparison Between Different MPPT Methods Applied to a Three-Port Converter

    Amani S Alzahrani; Hussein, Aziza; Marwa M Ahmed; Mohamed A Enany; External Collaboration; Electrical and Computer Engineering (Springer Nature Switzerland, 2023-08-02)
    Recently, renewable energy has gained interest since these sources are promising for generating electricity. Solar energy tops the list of renewable energy sources. Solar photovoltaic (PV) panels are used to capture the solar energy radiated from the sun. Since solar energy is unavailable throughout the day, a battery is added. In a PV/battery system, a three-port converter is needed to interface the PV and battery with the load. This paper applies Maximum Power Point Tracking (MPPT) methods to a system with a three-port converter. These methods are Perturb and Observe (P&O) and Incremental Conductance (IC). MATLAB/SIMULINK software is used to perform the simulation. The temperature and irradiance are varied to simulate environmental changes in a real-world environment. Based on the results, the IC method performs slightly better than P&O. This indicates that a three-port converter is more stable with respect to environmental changes than regular two-port converters. The usage of a three-port converter has gained recent interest. The significance of this paper is that it compares different MPPT methods applied to a three-port converter in order to determine the suitable MPPT for a specific application.
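As a quick illustration of one of the two compared methods, the Perturb and Observe (P&O) logic can be sketched as follows. The PV curve below is a toy quadratic stand-in, not the paper's MATLAB/SIMULINK model, and the voltage step size is an arbitrary assumption.

```python
# Minimal Perturb-and-Observe (P&O) MPPT iteration on a toy PV curve
# with a single maximum of 200 W at 30 V.

def pv_power(v):
    return max(0.0, -0.5 * (v - 30.0) ** 2 + 200.0)

def perturb_and_observe(v, p_prev, direction, step=0.5):
    """One P&O step: keep perturbing in the same direction while power
    rises; reverse the perturbation direction when power falls."""
    p = pv_power(v)
    if p < p_prev:
        direction = -direction
    return v + direction * step, p, direction

v, p_prev, d = 20.0, 0.0, +1
for _ in range(200):
    v, p_prev, d = perturb_and_observe(v, p_prev, d)
```

After enough iterations the operating voltage oscillates around the maximum power point; this steady-state oscillation is the classic weakness of P&O that IC mitigates by also using the slope of the P-V curve.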
  • Application of Wavelet Decomposition and Machine Learning for the sEMG Signal Based Gesture Recognition

    Rabih Fatayerji, Hala; Saeed, Majed; Mian Qaisar, Saeed; Alqurashi, Asmaa; Al Talib, Rabab; Department Collaboration; Electrical and Computer Engineering (Springer, 2023-02)
    Amputees throughout the world have limited access to high-quality intelligent prostheses. The correct recognition of gestures is one of the most difficult tasks in the context of surface electromyography (sEMG) based prostheses development. This chapter presents a comparative examination of several machine learning-based algorithms for hand gesture identification. The first step in the process is data extraction from the sEMG device, followed by feature extraction. Then, two robust machine learning algorithms are applied to the extracted feature set to compare their prediction accuracy. The medium Gaussian Support Vector Machine (SVM) performs better under all conditions compared to the K-nearest neighbor. Different parameters are used for the performance comparison, including the F1 score, accuracy, precision, and Kappa index. The proposed method of hand gesture recognition, based on sEMG, is thoroughly investigated and the results have shown a promising performance. In any case, miscalculation during feature extraction can reduce the recognition precision. Deep learning techniques can be used to achieve a higher precision. Therefore, the proposed design takes into account all aspects while processing the sEMG signal. The system secures the highest classification accuracy of 92.2% for the case of the Gaussian SVM algorithm.
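The feature-extraction stage this chapter describes can be illustrated with a simpler stand-in: the chapter uses wavelet-derived features, but common time-domain sEMG features (mean absolute value, RMS, zero-crossing count) give a feel for what a per-window feature vector handed to an SVM or k-NN looks like. The window below is synthetic noise, purely for illustration.

```python
# Hypothetical time-domain feature extraction for one sEMG window.
import numpy as np

def semg_features(x):
    """Return a small feature vector [MAV, RMS, ZC] for one window."""
    mav = np.mean(np.abs(x))                    # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))              # root mean square
    zc = int(np.sum(np.diff(np.sign(x)) != 0))  # zero-crossing count
    return np.array([mav, rms, zc])

rng = np.random.default_rng(0)
window = rng.standard_normal(256)  # stand-in for one 256-sample window
feats = semg_features(window)
```

Each recorded gesture window is reduced to such a vector, and the classifier is trained on the labeled vectors rather than the raw waveform.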
  • Brain-Computer Interface (BCI) Based on the EEG Signal Decomposition Butterfly Optimization and Machine Learning

    Alghamdi, Mawadda; Mian Qaisar, Saeed; Bawazeer, Shahad; Saifuddin, Faya; Saeed, Majed; Department Collaboration; Electrical and Computer Engineering (Springer, 2023-02)
    The Brain-Computer Interface (BCI) is a technology that helps disabled people operate assistive devices by bypassing neuromuscular channels. This study aims to process Electroencephalography (EEG) signals and then translate these signals into commands by analyzing and categorizing them with machine learning algorithms. The findings can onward be used to control an assistive device. The significance of this project lies in assisting those with severe motor impairment, paralysis, or those who lost their limbs to be independent and confident by controlling their environment and offering them alternative ways of communication. The acquired EEG signals are digitally low-pass filtered and decimated. Onward, wavelet decomposition is used for signal analysis. The features are mined from the obtained sub-bands. The dimension of the extracted feature set is reduced by using the Butterfly Optimization algorithm. The selected feature set is then processed by the classifiers. The performance of the k-Nearest Neighbor, Support Vector Machine, and Artificial Neural Network is compared for the categorization of motor imagery tasks by processing the selected feature set. The suggested method secures the highest accuracy score of 83.7% for the case of the k-Nearest Neighbor classifier.
  • Adaptive rate EEG processing and machine learning-based efficient recognition of epilepsy

    Mian Qaisar, Saeed; No Collaboration; Electrical and Computer Engineering (Academic Press, 2023-01-01)
    Biomedical sensors and cloud-based smart applications are the main components of contemporary automated healthcare systems. Multichannel EEG signal collection, processing, transmission, and interpretation are needed in the framework of mobile epileptic seizure care. Conventional automatic epilepsy detection systems are time-invariant and have the drawback of collecting more information than required. This results in wasted computation energy, storage, and transmission activity. In this framework, techniques of real-time data compression can play a vital role. In this sense, this work describes an original tactic for cloud-based efficient and automated epilepsy detection. The method is developed by smartly using the idea of adaptive rate processing. The aim is to gain a real-time compression advantage in order to ensure efficient EEG signal processing, retrieval, and interpretation as part of the development of a cloud-based healthcare system. Specific brain activities can cause EEG signals to differ and can influence the performance of automated classification. Therefore, the signal is conditioned in this analysis by the application of novel adaptive-order filtering. Onward, the adaptive rate discrete wavelet transform (DWT) derives subbands by decomposing the enhanced signal. In the next step, rigorous classifiers are used to recognize various anticipated EEG signal categories. The efficiency of the method is evaluated by using a publicly available Hauz Khas health center epilepsy dataset. Results show a significant gain in compression and processing effectiveness compared to fixed-rate counterparts. The accuracy rate, specificity, F-measure, and Kappa statistics are all used to test the suggested method. Results confirm that the suggested method secures a high classification accuracy. It assures the benefit of embedding the proposed solution in existing automated epilepsy detectors to realize efficient cloud-based healthcare solutions.
  • A Review of Charging Schemes and Machine Learning Techniques for Intelligent Management of Electric Vehicles in Smart Grid

    Alyamani, Nehal; Mian Qaisar, Saeed
    Keywords: Dynamic charging; Data-driven techniques; Load prediction; Signal processing; Machine learning
  • An Effective Li-Ion Battery State of Health Estimation Based on Event-Driven Processing

    Maram, Alguthami; Mian Qaisar, Saeed; Electrical and Computer Engineering (Wiley, 2020)
    Summary: The most common type of rechargeable battery is the Li-ion battery. It is important to ensure that the batteries are always in good health and thus achieve a longer lifespan. The Battery Management System (BMS) is utilized to achieve this aim. Given that a single rechargeable battery can have many cells, a BMS is becoming more complicated. The main disadvantage of having a complete BMS is that it can lead to higher power overhead consumption. Therefore, there is a need to develop a BMS that does not compromise its ability to accurately monitor power systems, but does so at a low overhead consumption. In this paper, the aim is to develop and enhance the conventional Coulomb Counting based State of Health (SOH) method to create a reliable, effective, and real-time technique for estimating the SOH of cells. The paper also compares the developed method with its traditional counterpart, and the results of the experiment show that the new model performs better in terms of computational efficiency, compression gain, and SOH estimation accuracy.
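The conventional Coulomb-counting baseline this paper enhances can be sketched in a few lines: SOH is estimated as the ratio of the capacity actually delivered over a full discharge to the rated capacity. The current trace and rated capacity below are illustrative assumptions, not data from the paper.

```python
# Conventional Coulomb-counting SOH sketch: integrate discharge current
# over a full cycle and compare against the rated capacity.
import numpy as np

def soh_coulomb_counting(current_a, dt_s, rated_capacity_ah):
    """SOH = measured discharge capacity (Ah) / rated capacity (Ah)."""
    measured_ah = np.sum(current_a) * dt_s / 3600.0
    return measured_ah / rated_capacity_ah

# Toy full discharge: constant 2 A for 80 minutes from a 3 Ah rated cell,
# sampled once per second.
current = np.full(80 * 60, 2.0)
soh = soh_coulomb_counting(current, 1.0, 3.0)
```

In this toy case the cell delivers about 2.67 Ah, giving an SOH near 89%; the paper's contribution is doing this kind of estimation with far fewer samples via event-driven acquisition.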
  • A Systematic Review on Machine Learning and Deep Learning Models for Electronic Information Security in Mobile Networks

    Gupta, Chaitanya; Johri, Ishita; Srinivasan, Kathiravan; Hu, Yuh-Chung; Mian Qaisar, Saeed; Electrical and Computer Engineering (04-03-2022)
    Today's advancements in wireless communication technologies have resulted in a tremendous volume of data being generated. Most of our information is part of a widespread network that connects various devices across the globe. The capabilities of electronic devices are also increasing day by day, which leads to more generation and sharing of information. Similarly, as mobile network topologies become more diverse and complicated, the incidence of security breaches has increased. It has hampered the uptake of smart mobile apps and services, which has been accentuated by the large variety of platforms that provide data, storage, computation, and application services to end-users. It becomes necessary in such scenarios to protect data and check its use and misuse. According to the research, an artificial intelligence-based security model should assure the secrecy, integrity, and authenticity of the system, its equipment, and the protocols that control the network, independent of its generation, in order to deal with such a complicated network. The open difficulties that mobile networks still face, such as unauthorised network scanning, fraud links, and so on, have been thoroughly examined. Numerous ML and DL techniques that can be utilised to create a secure environment, as well as various cyber security threats, are discussed. We address the necessity of developing new approaches to provide high security of electronic data in mobile networks, because the possibilities for increasing mobile network security are inexhaustible.
  • Prediction of the Li-Ion Battery Capacity by Using Event-Driven Acquisition and Machine Learning

    Mian Qaisar, Saeed; AbdelGawad, Amal; Electrical and Computer Engineering
    The battery is a crucial element of modern power systems and is utilized habitually in different vital applications such as electric vehicles, drones, avionics, and mobile phones. Among various battery technologies, Li-Ion batteries are widely used, mainly because of their compactness, longer life, and high power capacity. On the other hand, because Li-ion batteries are expensive, their use is monitored using battery management systems (BMSs) to optimize their performance and ensure they last longer. The extensive processing resources that modern BMSs need can result in higher overhead power consumption. This study focuses on upgrading present Li-ion BMSs by redesigning their associated data acquisition and processing chains. It aims at enhancing the data acquisition and estimation mechanisms for the Li-ion batteries' capacities. It utilizes a novel event-driven mechanism for extracting the intended Li-Ion cell parameters. The event-driven approach brings a notable compression gain compared to fixed-rate conventional counterparts. The mined attributes are onward conveyed to robust machine learning algorithms for prediction. The 5-fold cross-validation approach is used for prediction performance evaluation. The achieved correlation coefficient, minimum Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) are 0.9996, 0.0038, and 0.0054, respectively. This shows the feasibility of incorporating the proposed approach in contemporary BMSs.
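The three prediction metrics reported above (correlation coefficient, MAE, RMSE) can be computed as shown below; the capacity arrays are illustrative stand-ins, not the paper's battery data.

```python
# Regression metrics of the kind used to evaluate capacity prediction.
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (correlation coefficient, MAE, RMSE)."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))            # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))     # root mean square error
    r = np.corrcoef(y_true, y_pred)[0, 1] # Pearson correlation
    return r, mae, rmse

# Toy fading-capacity sequence (Ah) and near-perfect predictions
y_true = np.array([2.9, 2.8, 2.7, 2.6, 2.5])
y_pred = np.array([2.91, 2.79, 2.71, 2.59, 2.51])
r, mae, rmse = regression_metrics(y_true, y_pred)
```

Note that RMSE penalizes large individual errors more than MAE, which is why papers usually report both alongside the correlation coefficient.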
  • Crowdsourcing Research for Social Insights into Smart Cities Applications and Services

    Alhalabi, Wadee; Lytras, Miltiadis; Aljohani, Nada; Electrical and Computer Engineering (2021)
    The evolution in knowledge management and crowdsourcing research provides new data-processing capabilities. The availability of both structured and unstructured open data formats offers unforeseen opportunities for analytics processing and advanced decision-making. However, social sciences research is facing advanced, complicated social challenges and problems. The focus of this study is to analyze the contribution of crowdsourcing techniques to the promotion of advanced social sciences research, exploiting open data available from the global positioning system (GPS) to analyze human behavior. In our study, we present the conceptual design of a device that, with the help of a global positioning system-data collection device (GPS-DCD), associates behavioral aspects of human life with place. The main contribution of this study is to integrate research in computer science and information systems with that in social science. The prototype system summarized in this work proves the capacity of crowdsourcing and big data research to facilitate aggregation of microcontent related to human behavior toward improved quality of life and well-being in modern smart cities. Various ethical issues are also discussed to promote the scientific debate on this matter. Our study shows the capacity of emerging technologies to deal with social challenges. This kind of research will gain increased momentum in the future due to the availability of big data and new business models for social platforms.
  • Epileptic seizure classification using level-crossing EEG sampling and ensemble of sub-problems classifier

    Mian Qaisar, Saeed; Hussain, Syed; Electrical and Computer Engineering (2022)
    Epilepsy is a disorder of the brain characterized by seizures and requires constant monitoring, particularly in serious patients. Electroencephalogram (EEG) signals are frequently used in epilepsy diagnosis and monitoring. A new paradigm of battery-packed wearable gadgets has recently gained popularity, which constantly monitor a patient’s signals. These gadgets acquire the data and transmit it to the cloud for further processing. Power consumption due to data transmission is a major issue in these devices. Moreover, in a constant monitoring environment, the number of classes to be identified is usually higher and the classes overlap. Existing techniques either require the entire data to be transmitted, such as in deep learning, or suffer from reduced accuracy. In this context, we propose a new framework for EEG based epilepsy detection which requires low data transmission while maintaining high accuracy for multiclass classification. At the device end, we use a preprocessing mechanism that uses adaptive rate sampling, modified activity selection, filtering, and wavelet decomposition to extract only a handful of highly discriminatory features to be transmitted instead of the entire EEG waveform. For multiclass classification, we propose a novel ensemble of sub-problems-based classification paradigm to achieve high accuracy using the reduced data. Our proposed solution shows a many-fold increase in computational gains and an accuracy of 100% and 99.38% on the 2-class problem when tested on the popular University of Bonn and CHB-MIT datasets, respectively. An accuracy of 99.6% on 3-class, 96% on 4-class, and 92% on 5-class problems is obtained for the University of Bonn dataset.
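The level-crossing sampling in the title is the key device-end compression idea: a sample is recorded only when the signal crosses one of a set of amplitude levels, so slowly varying EEG segments generate few samples. A minimal sketch follows; the level spacing and test signal are illustrative assumptions, not the paper's configuration.

```python
# Level-crossing sampling sketch: emit a sample only when the signal
# moves into a different amplitude bin of width `delta`.
import numpy as np

def level_crossing_sample(x, t, delta=0.25):
    """Return (times, values) of samples taken at level crossings."""
    times, values = [], []
    last_level = np.floor(x[0] / delta)
    for i in range(1, len(x)):
        level = np.floor(x[i] / delta)
        if level != last_level:  # signal crossed a quantization level
            times.append(t[i])
            values.append(x[i])
            last_level = level
    return np.array(times), np.array(values)

# Toy signal: 2 Hz sine over 1 s, uniformly sampled at 1 kHz
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 2 * t)
ts, vs = level_crossing_sample(x, t)
```

The 1000 uniform samples collapse to a few dozen event-driven ones, and the sample times themselves carry information, which is the compression advantage the framework exploits before feature extraction.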
  • Appliance Identification Based on Smart Meter Data and Event-Driven Processing in the 5G Framework

    Mian Qaisar, Saeed; Alsharif, Futoon; Subasi, Abdulhamit; bensenouci, ahmed; Electrical and Computer Engineering (2021)
    The digitization and IoT advancement are evolving the energy sector. 5G is playing an important role in connecting various smart grid modules and stakeholders. In this framework, the idea of utilizing smart meters is gaining traction. Fine-grained metering data acquisition and processing is crucial to help the smart grid stakeholders. The classical data sampling approach is of a time-invariant nature. Thus, it includes a large amount of redundant data in the acquisition, transmission, and processing stages. This deficit can be eliminated by employing event-driven sampling, which provides real-time data compression. Therefore, a novel event-driven adaptive-rate sampling approach is utilized for data acquisition and feature extraction. The relevant features related to the appliances' power consumption patterns are subsequently utilized for their identification by employing the support vector machine. Results confirm a 3.7-fold compression and processing gain of the suggested approach while achieving 96% classification accuracy. Thanks to the 5G network, findings are effectively logged on the cloud for further analysis and decision support.
  • Effective Brain–Computer Interface Based on the Adaptive-Rate Processing and Classification of Motor Imagery Tasks

    Mian Qaisar, Saeed; Oudah, Reem Fuad; Nisar, Humaira; Electrical and Computer Engineering (CRC Press, 2021)
    In the context of cloud-based mobile healthcare systems, continuous multichannel electroencephalogram (EEG) signal acquisition, processing, transmission, and analysis is required. Conventional Brain-Computer Interface (BCI) systems are time-invariant and can have the disadvantage of capturing redundant information. This leads to a waste of system resources and power consumption. In this context, this chapter presents an original approach to realizing an adaptive rate signal acquisition and processing chain for the cloud-based BCI. The objective is to achieve a real-time compression gain in order to attain effective EEG signal processing, transmission, and analysis in the context of realizing a proficient cloud-based BCI framework. The signal is enhanced in this study by the use of the method of adaptive rate filtering. Attributes are derived from the conditioned signal using a hybrid method. Afterward, robust classifiers are used to classify different intended EEG signal classes. The performance of the system is studied by using a standard dataset of motor imagery tasks. The devised approach is able to achieve an appreciable compression gain and computational improvement when compared with traditional methods. The system also produces a good classification accuracy.
  • EEG based Alcoholism Detection by Oscillatory Modes Decomposition Second Order Difference Plots and Machine Learning

    Mian Qaisar, Saeed; Salankar, Nilima; Electrical and Computer Engineering (2022)
    The excessive drinking of alcohol can disrupt the neural system. This can be observed by properly analysing Electroencephalogram (EEG) signals. However, the EEG is a signal of complex nature. Therefore, an accurate categorization between alcoholic (A) and non-alcoholic (NA) subjects, while using a short-time EEG recording, is a challenging task. In this paper, a novel hybridization of oscillatory modes decomposition, features mining based on the Second Order Difference Plots (SODPs) of oscillatory modes, and machine learning algorithms is devised for an effective identification of alcoholism. The Empirical Mode Decomposition (EMD) and Variational Mode Decomposition (VMD) are used to decompose the considered EEG signals into Intrinsic Mode Functions (IMFs) and Modes, respectively. Onward, the SODPs derived from the first six IMFs and Modes are considered. Features of the SODPs are mined. To reduce the dimension of the feature set and the computational complexity of the classification model, the pertinent feature selection is made on the basis of the Wilcoxon statistical test. Three features with p-values (p) < 0.05 are selected from each intended SODP: the Central Tendency Measure (CTM), area, and mean. These features are used for the discrimination between the A and NA classes. In order to determine a suitable EEG signal segment length for the intended application, experiments are performed by considering features extracted from three different-length time windows. The classification is carried out by using the Least Square Support Vector Machine (LS-SVM), Multilayer Perceptron Neural Network (MLPNN), K-Nearest Neighbour (KNN), and Random Forest (RF) algorithms. The applicability is tested by using the UCI-KDD EEG dataset. The results are noteworthy for the MLPNN, with 99.89% and 99.45% accuracies for EMD and VMD, respectively, for the 8-second window.
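The SODP and CTM features named above have a compact definition: the SODP plots successive differences x[n+1]-x[n] against x[n+2]-x[n+1], and the CTM is the fraction of those points that fall inside a circle of radius r. A minimal sketch follows; the signal and radius are illustrative, not the UCI-KDD EEG data or the paper's parameters.

```python
# Second-Order Difference Plot (SODP) and Central Tendency Measure (CTM).
import numpy as np

def sodp_ctm(x, r):
    """CTM: fraction of SODP points (d1[n], d2[n]) within radius r,
    where d1[n] = x[n+1]-x[n] and d2[n] = x[n+2]-x[n+1]."""
    d1 = x[1:-1] - x[:-2]  # first differences
    d2 = x[2:] - x[1:-1]   # shifted first differences
    inside = np.hypot(d1, d2) < r
    return float(np.mean(inside))

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)  # stand-in for one EEG mode/IMF segment
ctm = sodp_ctm(x, r=1.0)
```

A small CTM indicates widely scattered differences (high signal variability), so CTM computed per IMF or Mode serves as a cheap variability descriptor for the classifiers.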
  • Social mining for terroristic behavior detection through Arabic tweets characterization

    Alhalabi, Wadee; Jussila, Jari; Jambi, Kamal; Visvizi, Anna; Electrical and Computer Engineering (2021)
    In recent years, the use of social media has increased dramatically. Content, as well as media, is shared in Big Data volumes, and this poses a critical requirement for behavior supervision and fraud protection. The detection of terrorist behavior in social media is essential to every country, but has complexities in both the supervision of shared content and the understanding of behavior. Therefore, in this project, an artificial intelligence-enabled terrorist behavior detection system (ALT-TERROS) was developed as a key priority. The key requirements for a terrorist behavior detection system operating in the Kingdom are: (i) data integration, (ii) advanced smart analysis capacity, and (iii) decision-making capability. The unique value proposition is based on a sophisticated integrated approach to the management of distributed data available on social media, which uses advanced social mining methods for the detection of patterns of terrorist behavior, its visualization, and its use for decision making. In addition, several critical issues were highlighted, related to the availability of APIs to handle Arabic text as well as the need to provide an end-to-end workflow from the extraction of textual and visual data over social media to the delivery of advanced analytics and visualizations for rating mechanisms. The key contribution of our approach is a testbed for the application of novel scientific approaches and algorithms for the rating of harm associated with social media content. The complexity of the problem does not allow hyper-optimistic solutions, but the combination of heuristic rules and advanced decision-making capabilities is a step in the right direction. We contribute to the body of theory of Sentiment Analysis for Arabic content, and we also summarize a heuristic algorithm developed for the future. In future research directions, we emphasize the need to develop a trusted Arabic thesaurus and corpus for use in sentiment analysis.
  • A comprehensive review on the application of machine learning techniques for analyzing the smart meter data

    Alsharif, Futoon; Bashawyah, Doaa; Subasi, Abdulhamit; Mian Qaisar, Saeed; Electrical and Computer Engineering (De Gruyter, 2021)
    The deployment of smart meters has grown through technical advances. A fine-grained analysis and interpretation of metering data is important to deliver benefits to multiple stakeholders of the smart grid. The deregulation of the power industry, particularly on the distribution side, has been continuously moving forward worldwide. How to use broad smart meter data to improve and enhance the stability and efficiency of the power grid is a critical matter. So far, extensive work has been done on smart meter data processing. This chapter provides a thorough overview of the current research outcomes for the study of smart meter data using machine learning techniques. An application-oriented analysis is presented. The main applications, such as load profiling, load forecasting, and load scheduling, are taken into account. A summary of the state-of-the-art machine learning-based methodologies, customized for each intended application, is provided.