Browsing by Author "Uckan, Taner"
Now showing 1 - 8 of 8
Conference Object (IEEE, 2019)
Applications and Comparisons of Optimization Algorithms Used in Convolutional Neural Networks
Seyyarer, Ebubekir; Uckan, Taner; Hark, Cengiz; Ayata, Faruk; Inan, Mevlut; Karci, Ali
Nowadays, it is clear that the old mathematical models are inadequate for today's large image data sets. For this reason, the Deep Learning models introduced in the field of image processing meet this need in the software field. In this study, the Convolutional Neural Network (CNN) model from the Deep Learning algorithms, together with the optimization algorithms used in Deep Learning, was applied to international image data sets. The optimization algorithms were applied to both data sets in turn, and the results were analyzed and compared. The success rate was approximately 96.21% on the Caltech 101 data set, while it was approximately 10% on the CIFAR-100 data set.

Article (Galenos Publ House, 2021)
Evaluation of HBsAg, Anti-HCV, Anti-HIV Seroprevalence and Perinatal Outcomes in Pregnant Women
Uckan, Kazim; Celegen, Izzet; Uckan, Taner
Objectives: Vertical transmission of hepatitis B virus (HBV), hepatitis C virus (HCV) and human immunodeficiency virus (HIV) infections is an important public health problem. The aim of this study was to determine the rates of HBsAg, anti-HCV and anti-HIV seropositivity in pregnant women in a city and to evaluate the infections in terms of perinatal outcomes. Materials and Methods: In this retrospective study, 8,464 patients who gave birth in an obstetrics and gynecology clinic were recorded. Seropositivity rates of the pregnant women were investigated according to the results of hepatitis B surface antigen (HBsAg), HCV antibody and anti-HIV antibody tests. The rates were determined by year and by perinatal outcome, and statistical comparisons were made. Results: HBsAg seropositivity in the pregnant women included in the study was 2.8% (n=55) in 2015, 2.2% (n=52) in 2016, 2.3% (n=47) in 2017 and 2.2% (n=49) in 2018.
The 4-year average was 2.3% (n=203), with no significant difference between the years (p>0.05). Among all patients, the 4-year mean anti-HCV seropositivity was 0.57% (n=49), with no difference between years (p>0.05). Anti-HIV seropositivity was 0.09% on average, with no statistically significant difference across the years (p>0.05). Conclusion: Since hepatitis B, a preventable viral disease, carries a risk of transmission during delivery and, if transmitted to the fetus, may lead to fatal complications in later life, it is necessary to screen all pregnant women for HBsAg seropositivity and to include them in an antepartum planning program so that newborns can be protected from infection and treated. Although the transmission rate of HCV in society is low, considering its clinical course, screening for HCV antibody positivity together with HIV in risk groups and pregnant women is considered important for the health of society and of newborns.

Article (Cairo Univ, Fac Computers & Information, 2020)
Extractive Multi-Document Text Summarization Based on Graph Independent Sets
Uckan, Taner; Karci, Ali
We propose a novel methodology for extractive, generic summarization of text documents. The Maximum Independent Set, which had not previously been used in any summarization study, is utilized in this work. In addition, a text processing tool that we named KUSH is suggested in order to preserve semantic cohesion between sentences in the representation stage of the input texts. Our expectation was that the set of sentences corresponding to the nodes in the independent set should be excluded from the summary. Based on this expectation, the nodes forming the independent set on the graphs are identified and removed from the graph. Thus, prior to quantifying the effect of the nodes on the global graph, a limitation is applied to the documents to be summarized.
This limitation prevents word groups from being repeated in the summary. Performance of the proposed approach on the Document Understanding Conference (DUC-2002 and DUC-2004) datasets was measured using the ROUGE evaluation metrics. The developed model achieved ROUGE scores of 0.38072 for 100-word summaries, 0.51954 for 200-word summaries, and 0.59208 for 400-word summaries. The values reported throughout the experiments reveal the contribution of this innovative method. (C) 2019 Production and hosting by Elsevier B.V. on behalf of Faculty of Computers and Artificial Intelligence, Cairo University.

Conference Object (IEEE, 2019)
Extractive Text Summarization via Graph Entropy
Hark, Cengiz; Uckan, Taner; Seyyarer, Ebubekir; Karci, Ali
There is growing interest in automatic summarization systems. This study focuses on an extractive, generic and unsupervised summarization system. The texts to be summarized are represented as graphs, and graph entropy is then used to interpret the structural stability and structural information content of the graphs representing the text files. The performance of the proposed approach was measured on the Document Understanding Conference (DUC-2002) dataset, which includes open-access texts and their summaries, using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics. The experiments were repeated for 200- and 400-word summaries. The experimental results reveal that the proposed summarization system performs competitively against rival methods on different ROUGE metrics.

Article (Elsevier, 2025)
A Hybrid Model for Extractive Summarization: Leveraging Graph Entropy to Improve Large Language Model Performance
Uckan, Taner
Extractive text summarization models focus on condensing large texts by selecting key sentences rather than generating new ones.
Recently, studies have utilized large language models (LLMs) for effective summarization. However, limitations such as the cost and time of using LLMs make achieving high performance challenging. This study introduces a hybrid model that combines graph entropy with LLMs to improve summarization accuracy and time efficiency. Initially, the text is represented as a graph, with each sentence as a node. Using Karci Entropy (KE) to measure the information carried by each sentence, the model selects the most valuable sentences, which are then processed by LLMs such as BERT, RoBERTa, and XLNet to create summaries of 400 words, 200 words, and 3 sentences. Testing on the DUC-2002 and CNN Daily datasets shows significant gains in both accuracy and processing speed, highlighting the proposed model's effectiveness.

Article (Elsevier, 2024)
Integrating PCA with Deep Learning Models for Stock Market Forecasting: An Analysis of Turkish Stock Markets
Uckan, Taner
Financial data such as stock prices are rich time series that contain valuable information for investors and financial professionals. Analysis of such data is critical to understanding market behaviour and predicting future price movements. However, stock price prediction is complex and difficult due to the intense noise, non-linear structures, and high volatility in these data. While this increases the difficulty of making accurate predictions, it also creates an important area for investors and analysts to identify opportunities in the market. One of the effective methods used in predicting stock prices is technical analysis, in which multiple indicators are used. These indicators formulate past stock price movements in different ways and produce signals such as buy, sell, and hold. In this study, the ten most frequently used indicators were analyzed with Principal Component Analysis (PCA).
This study aims to integrate PCA and deep learning models for the Turkish stock market using indicator values and to assess the effect of this integration on market prediction performance. The most effective indicators for market prediction were selected with the PCA method, and four models were then built using different deep learning architectures (LSTM, CNN, BiLSTM, GRU). The performance of the proposed models was evaluated with the MSE, MAE, MAPE and R2 metrics. The results show that using the indicators selected by PCA together with deep learning models improves market prediction performance. In particular, one of the proposed models, the PCA-LSTM-CNN model, produced very successful results.

Article (Sage Publications Ltd, 2024)
A New Multi-Document Summarisation Approach Using Saplings Growing-Up Optimisation Algorithms: Simultaneously Optimised Coverage and Diversity
Hark, Cengiz; Uckan, Taner; Karci, Ali
Automatic text summarisation is the task of obtaining a subset that accurately represents the main text. A quality summary should contain the maximum amount of information while avoiding redundant information. Redundancy is a severe deficiency that causes unnecessary repetition of information within sentences and should not occur in summarisation studies. Although many optimisation-based text summarisation methods have been proposed in recent years, there is a lack of research on the simultaneous optimisation of coverage and redundancy. In this context, this study presents an approach in which maximum coverage and minimum redundancy, the two key features of a rich summary, are modelled as optimisation targets. In optimisation-based text summarisation studies, conflicting objectives are generally weighted or formulated and transformed into single-objective problems. However, this transformation can directly affect the quality of the solution.
In this study, the optimisation goals are met simultaneously, without transformation or formulation. In addition, the multi-objective saplings growing-up algorithm (MO-SGuA) is implemented and modified for text summarisation. The presented approach achieves a Pareto-optimal solution through simultaneous optimisation. The MO-SGuA method was tested on the open-access Document Understanding Conference (DUC) datasets. Its performance was measured using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics and compared with competing approaches from the literature. Testing achieved a 26.6% summarisation result for the ROUGE-2 metric and 65.96% for ROUGE-L, improvements of 11.17% and 20.54%, respectively. The experimental results showed that good-quality summaries were achieved with the proposed approach.

Article (Gazi Univ, 2021)
SSC: Clustering of Turkish Texts by Spectral Graph Partitioning
Uckan, Taner; Hark, Cengiz; Karci, Ali
There is growing interest in studies on text classification as a result of the exponential increase in the amount of available data. Many studies have addressed text clustering using different approaches. This study introduces Spectral Sentence Clustering (SSC), an unsupervised graph-partitioning method, for text clustering problems. The study explains how the proposed model can be used in natural language applications to successfully cluster texts. A spectral graph theory method is used to partition the graph into non-intersecting sub-graphs, and an unsupervised and efficient solution to the text clustering problem is offered by providing a physical representation of the texts. Finally, tests demonstrate that SSC can be successfully used for text categorization.
A clustering success rate of 97.08% was achieved in tests conducted on the TTC-3600 dataset, which contains open-access, unstructured Turkish texts classified into categories. The proposed SSC model performed better than the popular k-means clustering algorithm.
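The spectral partitioning idea at the core of SSC can be illustrated with a minimal sketch (not the authors' implementation; the similarity matrix below is a hypothetical toy example): sentences are nodes of a weighted similarity graph, and the sign pattern of the Fiedler vector, i.e. the eigenvector of the graph Laplacian with the second-smallest eigenvalue, splits the graph into two non-intersecting sub-graphs.

```python
import numpy as np

def fiedler_partition(W):
    """Bipartition a weighted similarity graph by the sign of the
    Fiedler vector of its unnormalised Laplacian L = D - W."""
    d = W.sum(axis=1)
    L = np.diag(d) - W
    # eigh returns eigenvalues in ascending order; column 1 is the
    # eigenvector for the second-smallest eigenvalue (Fiedler vector)
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    return (fiedler >= 0).astype(int)

# Toy symmetric similarity matrix: sentences 0-2 are mutually similar,
# sentences 3-4 form a second topical group (all values hypothetical).
W = np.array([
    [0.0, 0.9, 0.8, 0.1, 0.0],
    [0.9, 0.0, 0.7, 0.0, 0.1],
    [0.8, 0.7, 0.0, 0.1, 0.0],
    [0.1, 0.0, 0.1, 0.0, 0.9],
    [0.0, 0.1, 0.0, 0.9, 0.0],
])
labels = fiedler_partition(W)  # sentences 0-2 and 3-4 end up in different clusters
```

A full pipeline in the spirit of SSC would build W from sentence similarities over a real corpus and partition recursively into more than two clusters; this sketch only shows the single spectral cut.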