Browsing by Author "Siriluck Lorpunmanee"
Now showing 1 - 5 of 5
Item A memorization approach for test case generation in concurrent UML activity diagram (Association for Computing Machinery, 2019)
Siriluck Lorpunmanee; Suwatchai Kamonsantiroj; Luepol Pipanmaekaporn
Test case generation is the most important part of software testing, and researchers currently use the UML activity diagram for test case generation. Testing a concurrent system is a difficult task because the concurrent interaction among threads results in test case explosion. In this paper, we propose a novel approach that generates test cases for concurrent systems using a dynamic programming technique with a tester specification to avoid path explosion. The tester can configure concurrency specifications that follow the business-flow constraints. To evaluate the quality of the test cases, activity coverage and causal-ordering coverage were measured. Experimental results show that the proposed approach is superior to DFS and BFS algorithms. Finally, the proposed approach avoids generating all possible concurrent activity paths, which minimizes test case explosion. © 2019 Association for Computing Machinery.

Item Efficient mining recurring patterns of inter-transaction in time series (Fuji Technology Press, 2019)
Siriluck Lorpunmanee; Suwatchai Kamonsantiroj
Recurring patterns, a type of partial periodic pattern, exhibit cyclic repetitions only for particular time periods within a series. A key property of these patterns is that an event can start, stop, and restart at any time within the series, which makes extracting meaningful knowledge from them challenging because the information can vary across patterns. Mining recurring patterns plays an important role in discovering knowledge about seasonal or temporal associations between events. Most existing research focuses on discovering recurring patterns within a single transaction and cannot discover recurring events across multiple transactions (inter-transaction), which often appear in real-world applications such as stock exchange markets and social networks. In this study, the proposed algorithm, CP-growth, efficiently discovers recurring patterns across transactions; an efficient pruning technique is also developed within CP-growth to reduce the computational cost of discovering recurring patterns. Experimental results show that recurring patterns can be useful across multiple transactions and that CP-growth is efficient. © 2019 Fuji Technology Press. All rights reserved.

Item EXTENDING NETWORK INTRUSION DETECTION WITH ENHANCED PARTICLE SWARM OPTIMIZATION TECHNIQUES (Academy and Industry Research Collaboration Center (AIRCC), 2024)
Surasit Songma; Watcharakorn Netharn; Siriluck Lorpunmanee
The present research investigates how to improve Network Intrusion Detection Systems (NIDS) by combining Machine Learning (ML) and Deep Learning (DL) techniques, addressing the growing challenge of cybersecurity threats. A thorough data preparation process, comprising cleaning, normalization, and segmentation into training and testing sets, lays the groundwork for model training and evaluation.
The study uses the CSE-CIC-IDS 2018 and LITNET-2020 datasets to compare ML methods (Decision Trees, Random Forest, XGBoost) and DL models (CNNs, RNNs, DNNs, MLP) against key performance metrics (Accuracy, Precision, Recall, and F1-Score). The Decision Tree model performed best across all measures after being fine-tuned with Enhanced Particle Swarm Optimization (EPSO), demonstrating its ability to detect network breaches effectively. The findings highlight EPSO's importance in improving ML classifiers for cybersecurity and propose a strong framework for NIDS with high precision and dependability. This extensive analysis contributes to the cybersecurity arena by providing a road map to robust intrusion detection solutions, and it also proposes future approaches for improving ML models to combat the changing landscape of network threats. © 2024 Academy and Industry Research Collaboration Center (AIRCC). All Rights Reserved.

Item Meta-scheduler in Grid environment with multiple objectives by using genetic algorithm (2006)
Siriluck Lorpunmanee; Mohd Noor Md Sap; Abdul Hanan Abdullah; Surat Srinoy
Correspondence: S. Lorpunmanee, Faculty of Science and Technology, Suan Dusit Rajabhat University, 295 Rajasrima Rd., Dusit, Bangkok, Thailand; email: siriluck_lor@dusit.ac.th
Grid computing is the principle of utilizing and sharing the large-scale resources of heterogeneous computing systems to solve complex scientific problems. Such flexible resource requests offer the opportunity to optimize several parameters, such as coordinated resource sharing among dynamic collections of individuals, institutions, and resources. The major opportunity, however, lies in optimal job scheduling, in which Grid nodes must allocate resources for each job. This paper proposes and evaluates a new method for job scheduling in heterogeneous computing systems whose objectives are to minimize the average waiting time and the makespan. Because the job scheduling problem is NP-hard, the minimization uses a multiple-objective genetic algorithm (GA). Our model presents strategies for allocating jobs to different nodes, and in preliminary tests we show how the solutions found can minimize the average waiting time and the makespan in a Grid environment. The benefit of the multiple-objective genetic algorithm in improving scheduling performance is discussed. The simulation used historical information to study job scheduling in a Grid environment, and the experimental results show that a scheduling system using the multiple-objective genetic algorithm can allocate jobs efficiently and effectively.

Item Optimalisation of a job scheduler in the grid environment by using fuzzy C-mean (2007)
Siriluck Lorpunmanee; Mohd Noor Md Sap; Abdul Hanan Abdullah
Correspondence: S. Lorpunmanee, Faculty of Science and Technology, Suan Dusit Rajabhat University, Dusit, Bangkok, Thailand; email: siriluck_lor@dusit.ac.th
Grid computing is the principle of utilizing and sharing large-scale resources to solve complex scientific problems. Under this principle, the Grid environment faces problems of flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources. The major problems include optimal job scheduling and deciding which Grid nodes allocate the resources for each job. This paper proposes a model for optimizing job scheduling in the Grid environment.
The model presents the results of simulating the allocation of jobs to different nodes in the Grid environment. Jobs are grouped into three classes according to their run time on the machines, obtained through the optimization of job scheduling. The results validate the model, which uses the fuzzy c-means clustering technique to predict job characteristics and optimize job scheduling in the Grid environment. This prediction and optimization engine schedules jobs based upon historical information; the paper presents the need for such an engine and discusses the approach to history-based prediction and optimization. Simulation runs demonstrate that our algorithm leads to better results than the traditional scheduling policies used in the Grid environment.
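
No code from any of the five papers appears in this listing, so the sketches below are purely illustrative. First, a minimal Python sketch of the idea behind the memorization (memoization) approach in the first item: a memoized recursion enumerates the interleavings of a concurrent fork/join region of an activity diagram, and only paths satisfying a tester-specified business-flow constraint are kept as test cases. The activity names, the constraint, and the filter-after-enumeration structure are assumptions for illustration; the paper builds the constraints into the dynamic program itself so that invalid paths are never generated.

    from functools import lru_cache

    # Hypothetical activity sequences of two concurrent threads in a
    # fork/join region of a UML activity diagram.
    THREAD_A = ("a1", "a2")
    THREAD_B = ("b1", "b2")

    def allowed(path):
        # Illustrative tester-specified business-flow constraint:
        # activity "b2" must not execute before "a1".
        return "b2" not in path or "a1" in path[:path.index("b2")]

    @lru_cache(maxsize=None)
    def interleavings(i, j):
        """Memoized enumeration of all interleavings of the suffixes
        THREAD_A[i:] and THREAD_B[j:] (the dynamic-programming step)."""
        if i == len(THREAD_A) and j == len(THREAD_B):
            return ((),)
        paths = []
        if i < len(THREAD_A):
            paths += [(THREAD_A[i],) + rest for rest in interleavings(i + 1, j)]
        if j < len(THREAD_B):
            paths += [(THREAD_B[j],) + rest for rest in interleavings(i, j + 1)]
        return tuple(paths)

    all_paths = interleavings(0, 0)
    test_paths = [p for p in all_paths if allowed(p)]
    print(len(test_paths), "constrained test paths out of", len(all_paths))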
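
The second item's CP-growth algorithm is not reproduced here; the toy function below, a sketch over invented data and thresholds, only illustrates what a recurring pattern is: an event that repeats at short gaps, stops, and later restarts within a timestamped transaction series.

    from collections import defaultdict

    def recurring_runs(transactions, max_gap=2, min_rep=3):
        """Toy recurrence detector (not CP-growth): split each item's
        occurrence times into maximal runs whose consecutive gaps are at
        most max_gap, and keep runs with at least min_rep occurrences.
        This mimics the start/stop/restart behaviour of recurring patterns."""
        times = defaultdict(list)
        for t, items in transactions:
            for item in items:
                times[item].append(t)
        patterns = {}
        for item, ts in times.items():
            runs, run = [], [ts[0]]
            for prev, cur in zip(ts, ts[1:]):
                if cur - prev <= max_gap:
                    run.append(cur)
                else:
                    runs.append(run)
                    run = [cur]
            runs.append(run)
            kept = [r for r in runs if len(r) >= min_rep]
            if kept:
                patterns[item] = kept
        return patterns

    # Timestamped transactions; item "a" recurs, stops, then restarts.
    db = [(1, {"a", "b"}), (2, {"a"}), (3, {"a"}),
          (9, {"b"}), (10, {"a"}), (11, {"a"}), (12, {"a"})]
    print(recurring_runs(db))   # {'a': [[1, 2, 3], [10, 11, 12]]}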
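
For the third item, the sketch below shows how particle swarm optimization can fine-tune a Decision Tree classifier. It is plain PSO over two hyperparameters, with synthetic data standing in for the CSE-CIC-IDS 2018 dataset, and it optimizes F1-Score as the fitness; whatever enhancements distinguish the paper's EPSO are not reproduced.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for a preprocessed intrusion-detection dataset.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    def fitness(particle):
        """F1-Score of a tree built from a particle's hyperparameters."""
        depth, min_split = int(round(particle[0])), int(round(particle[1]))
        clf = DecisionTreeClassifier(max_depth=max(1, depth),
                                     min_samples_split=max(2, min_split),
                                     random_state=0).fit(X_tr, y_tr)
        return f1_score(y_te, clf.predict(X_te))

    # Plain PSO over (max_depth, min_samples_split).
    rng = np.random.default_rng(0)
    n, iters, w, c1, c2 = 10, 20, 0.7, 1.5, 1.5
    lo, hi = np.array([1.0, 2.0]), np.array([30.0, 40.0])
    pos = rng.uniform(lo, hi, (n, 2))
    vel = np.zeros((n, 2))
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, 2)), rng.random((n, 2))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    print("best (max_depth, min_samples_split):", gbest, "F1:", pbest_f.max())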
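
For the fourth item, a compact genetic algorithm for allocating Grid jobs to nodes. The job lengths and node speeds are invented, and the two objectives, makespan and average waiting time, are folded into a single scalar fitness for brevity; the paper's multiple-objective GA presumably handles the trade-off more carefully.

    import random

    random.seed(1)
    JOBS = [4, 2, 8, 3, 6, 5, 7, 1]    # job lengths (hypothetical units)
    NODE_SPEED = [1.0, 2.0, 1.5]       # relative speed of each Grid node

    def objectives(assign):
        """(makespan, average waiting time) for a job-to-node assignment,
        assuming each node runs its queued jobs sequentially."""
        finish, waits = [0.0] * len(NODE_SPEED), []
        for j, node in enumerate(assign):
            waits.append(finish[node])            # time spent waiting in queue
            finish[node] += JOBS[j] / NODE_SPEED[node]
        return max(finish), sum(waits) / len(waits)

    def fitness(assign):
        makespan, wait = objectives(assign)
        return -(makespan + wait)                 # scalarized multi-objective

    def evolve(pop_size=30, gens=100, p_mut=0.1):
        pop = [[random.randrange(len(NODE_SPEED)) for _ in JOBS]
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            nxt = pop[:2]                          # elitism
            while len(nxt) < pop_size:
                a, b = random.sample(pop[:10], 2)  # truncation selection
                cut = random.randrange(1, len(JOBS))
                child = a[:cut] + b[cut:]          # one-point crossover
                if random.random() < p_mut:        # point mutation
                    child[random.randrange(len(JOBS))] = random.randrange(len(NODE_SPEED))
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    best = evolve()
    print("assignment:", best, "(makespan, avg wait):", objectives(best))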
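
Finally, for the fifth item, a minimal fuzzy c-means implementation that groups historical job run times into three classes (say, short, medium, and long), the kind of job characterization the abstract says drives the scheduling predictions. The run times and cluster count are assumptions for the example.

    import numpy as np

    def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
        """Minimal fuzzy c-means: returns (centers, membership matrix U)."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per job
        for _ in range(iters):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
            U = 1.0 / dist ** (2.0 / (m - 1.0))    # standard FCM membership update
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    # Hypothetical historical job run times (seconds), one feature per job.
    runtimes = np.array([[3.0], [5.0], [4.0], [60.0], [55.0], [58.0], [600.0], [620.0]])
    centers, U = fuzzy_cmeans(runtimes)
    print("class centers:", centers.ravel())
    print("class of each job:", U.argmax(axis=1))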