Representation and Control of the Cold Rolling Process through Artificial Neural Networks via Sensitivity Factors

 

The mathematical modeling of the rolling process involves several parameters that may lead to non-linear equations that are difficult to solve analytically. Such is the case of Alexander’s model (Alexander 1972), considered one of the most complete in rolling theory. This model requires significant computational time, which prevents its application in on-line control and supervision systems. For this reason, new and efficient ways to represent this kind of process are still necessary. The only requirement is that the new representations incorporate the qualitative behavior of the process and that they can be used in control system design.

In this paper, the representation of the cold rolling process through neural networks, trained with data obtained from Alexander’s model, is presented. Two neural networks are trained to represent the rolling process and its operation. For both, the quantitative and qualitative aspects of their behavior are verified through simulation and via sensitivity equations. These equations are based on sensitivity factors obtained by differentiating the previously trained neural networks; for different operation points, different equations can be obtained with low computational time.

On the other hand, one of the central issues in controller design for rolling systems is the difficulty of measuring the final thickness without time delays. The time delay is a consequence of the location of the output thickness sensor, which is always placed at a certain distance from the roll-gap region. The representation based on sensitivity factors has predictive characteristics that are used by the control strategy. This predictive model makes it possible to overcome the time delay that exists in such processes and can eliminate the thickness sensor, usually based on X-rays. The model works as a virtual sensor implemented in software. In addition, this paper presents a method to determine the appropriate adjustment for thickness control considering three possible control parameters: roll gap, front tension and back tension. The method selects, as the best control action, the one that demands the smallest adjustment. Simulation results show the viability of the proposed techniques, and an application example for a single-stand rolling mill is discussed.
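
As an illustration of the selection rule, a minimal sketch (not the authors' code) is given below: each candidate parameter's first-order adjustment is computed from a hypothetical sensitivity factor at the operating point and normalized by an assumed actuator span so the three adjustments are comparable; the cheapest one wins. All numbers are placeholders.

    def choose_control_action(thickness_error, sensitivities, spans):
        """Pick the parameter whose first-order correction is cheapest."""
        best, best_cost, best_adj = None, float("inf"), 0.0
        for name, s in sensitivities.items():
            if s == 0:
                continue                          # cannot correct the error
            adjustment = thickness_error / s      # first-order correction
            cost = abs(adjustment) / spans[name]  # normalize by actuator span
            if cost < best_cost:
                best, best_cost, best_adj = name, cost, adjustment
        return best, best_adj

    # hypothetical operating-point sensitivities: d(exit thickness)/d(parameter)
    sens = {"roll_gap": 0.55, "front_tension": -0.002, "back_tension": -0.004}
    span = {"roll_gap": 1.0, "front_tension": 50.0, "back_tension": 50.0}
    print(choose_control_action(-0.01, sens, span))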

 

Sensitivity Analysis of Cause-Effect Relationship Using Neural Networks

Many industrial, chemical, economic and financial processes have non-linear characteristics and intrinsic complexities that hinder the development of mathematical models.

Through a mathematical model it is possible to gain knowledge of the cause-effect relationships among the variables. The relationships between process parameters can be expressed by sensitivity factors. If these sensitivity factors are calculated by differentiating complex mathematical models, the result may be equations that are difficult to solve analytically.

In this work, a technique to obtain the sensitivity factors is proposed. In this technique, the sensitivity factors are obtained by differentiating a previously trained neural network. The expressions for calculating the sensitivity factors are generic for multi-layer networks with N inputs, M outputs and L neurons in the hidden layer.
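
For reference, in the common case of a single hidden layer of L neurons with activation function f and linear output (the generic N-input, M-output expressions follow from the same chain rule), the sensitivity factor of output y_j with respect to input x_i is

    S_{ji} = \frac{\partial y_j}{\partial x_i}
           = \sum_{l=1}^{L} w^{(2)}_{jl} \, f'\!\Big( \sum_{k=1}^{N} w^{(1)}_{lk} x_k + b^{(1)}_l \Big) \, w^{(1)}_{li},

where w^{(1)} and w^{(2)} are the hidden- and output-layer weights. For f = tanh, f'(u) = 1 - tanh^2(u), so the factors can be evaluated at any operation point directly from the trained weights, at negligible cost.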

Each sensitivity factor relating a parameter A (cause) to a parameter B (effect) allows identifying which inputs (causes) have the greatest influence on the outputs (effects) of the process. In this way, the technique identifies the parameters that should be observed and controlled during production. Finally, an application to the strip rolling process is presented.

 

Hybrid Structure Based on Previous Knowledge and GA to Search for the Ideal Number of Neurons in the Hidden Layer of an MLP – Application to the Cold Rolling Process

 

Description: The neural representation of a physical process has the objective of explaining the cause-effect relationships among the parameters involved in the process. The representation is normally evaluated through the error reached during the training and validation processes. As the neural representation is not based on physical principles, its mathematical representation can be correct in the quantitative aspect but not in the qualitative one. In this work, it is shown that a neural representation can fail when its qualitative aspect is evaluated. The search for the ideal number of neurons in the hidden layer of the MLP neural network, by means of genetic algorithms and sensitivity factors calculated directly from the neural networks during the training process, is presented. The new optimization structure aims to find a neural network structure capable of representing the process both quantitatively and qualitatively. The sensitivity factors, when compared with the knowledge of the human expert, represented through symbolic rules, can evaluate not only the quantitative but also the qualitative aspect of the process being represented by a specific neural structure. The results obtained, and the time (epochs) necessary to reach the neural network target, show that this combination is promising. As a case study, the new structure is applied to the cold rolling process.
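
A minimal sketch of the idea follows, with stated assumptions: scikit-learn's MLPRegressor stands in for the network actually used, finite-difference sensitivities replace the analytic factors, the expert rule is the toy statement "the output grows with input 0", and the GA is reduced to truncation selection plus integer mutation.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (400, 2))
    y = 2 * X[:, 0] - np.sin(X[:, 1])             # toy process to represent

    def fitness(n_hidden):
        net = MLPRegressor(hidden_layer_sizes=(int(n_hidden),), max_iter=500,
                           random_state=0).fit(X, y)
        mse = np.mean((net.predict(X) - y) ** 2)          # quantitative aspect
        eps, pts = 1e-3, X[:20]                           # qualitative aspect:
        s0 = (net.predict(pts + [eps, 0.0])               # sensitivity sign at
              - net.predict(pts - [eps, 0.0])) / (2 * eps)  # operating points
        penalty = np.mean(s0 <= 0)                 # expert rule: dy/dx0 > 0
        return mse + penalty

    pop = list(rng.integers(2, 30, size=6))        # genomes: hidden-layer sizes
    for _ in range(5):
        parents = sorted(pop, key=fitness)[:3]     # truncation selection
        children = [max(2, p + int(rng.integers(-4, 5))) for p in parents]
        pop = parents + children                   # mutation-only reproduction
    print("best hidden-layer size:", sorted(pop, key=fitness)[0])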

 

Climatic Data Neural Representation for Large Territorial Extensions: Case Study for the State of Minas Gerais


For large areas it is common that the number of meteorological stations is small or that they are improperly distributed. In environments or systems whose production is directly or indirectly impacted by climatic variables, it is necessary to know, or at least be able to estimate, climate data in order to improve the production processes. To meet this demand, this paper proposes a representation of weather data for large areas through artificial neural networks (ANN). All the procedures adopted are detailed, so that they can be used to represent other regions. The main input variables of the neural model are latitude, longitude and altitude.

 

Neural Representation to estimate the Relative Air Humidity

 

To obtain the neural model, historical climate data on a daily basis was considered for the 12 years from 1995 to 2006, from 255 weather stations located in Brazilian territory. Based on the data observed over this period, a neural representation for each month of the year was proposed, relating the variables latitude (lat), longitude (long), elevation (elev), maximum temperature (Tmax) and minimum temperature (Tmin) to the relative air humidity (UR).
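
A minimal sketch of such a monthly model, assuming scikit-learn in place of whatever toolbox was actually used, and with placeholder data instead of the station records:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def monthly_humidity_model(X, ur):
        """X columns: [lat, long, elev, Tmax, Tmin]; ur: relative humidity (%)."""
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(15,),
                                           max_iter=3000, random_state=0))
        return model.fit(X, ur)

    rng = np.random.default_rng(1)                     # hypothetical records
    X = np.column_stack([rng.uniform(-34, 5, 200),     # latitude (deg)
                         rng.uniform(-74, -35, 200),   # longitude (deg)
                         rng.uniform(0, 1500, 200),    # elevation (m)
                         rng.uniform(20, 38, 200),     # Tmax (C)
                         rng.uniform(8, 24, 200)])     # Tmin (C)
    ur = rng.uniform(40, 95, 200)
    print(monthly_humidity_model(X, ur).predict(X[:3]))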

 

FCANN: A new approach for extraction and representation of knowledge from trained ANN via Formal Concept Analysis

 

Nowadays, Artificial Neural Networks (ANN) are widely used in the representation of different systems and physical processes. Once trained, the networks are capable of dealing with operational conditions not seen during the training process, keeping tolerable errors in their responses. However, humans cannot assimilate the knowledge kept by those networks, since such implicit knowledge is difficult to extract. In this work, Formal Concept Analysis (FCA) is used to extract and represent knowledge from previously trained ANN. The new FCANN approach permits obtaining a complete, non-redundant canonical base with a minimum set of implications, which qualitatively describes the process being studied. The proposed approach follows a sequence of steps, such as the generation of a synthetic dataset; the number of data points per parameter and the number of discretization intervals are adjustment factors used to obtain more representative rules without retraining the network. The FCANN method is not a classifier itself, as other rule extraction methods are; instead, it can be used to describe and understand the relationships among process parameters through implication rules.

Comparisons of FCANN with the C4.5 and TREPAN algorithms are made to show its features and efficacy. Applications of the FCANN method to real-world problems are presented as case studies.
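
The data-preparation steps the approach relies on (querying the trained network over a synthetic dataset, then discretizing each parameter into intervals to form a binary formal context) can be sketched as follows; a plain function stands in for the trained ANN, and the canonical base would afterwards be computed on `context` by an FCA tool.

    import numpy as np

    def trained_net(x):                  # stand-in for a previously trained ANN
        return 0.8 * x[:, 0] + 0.2 * x[:, 1] ** 2

    n_points, n_intervals = 50, 3        # the method's two adjustment factors
    X = np.random.default_rng(2).uniform(0, 1, (n_points, 2))
    table = np.column_stack([X, trained_net(X)])  # synthetic dataset [x1, x2, y]

    cols, labels = [], []
    for j, name in enumerate(["x1", "x2", "y"]):
        col = table[:, j]
        edges = np.linspace(col.min(), col.max(), n_intervals + 1)
        idx = np.digitize(col, edges[1:-1])       # interval index per object
        for k in range(n_intervals):
            labels.append(f"{name}_interval{k}")
            cols.append(idx == k)
    context = np.column_stack(cols)      # binary formal context: objects x attrs
    print(context.shape, labels)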

 

Qualitative Behavior Rules for the Cold Rolling Process Extracted from Trained ANN via FCANN Method

 

Nowadays, Artificial Neural Networks (ANN) are widely used in the representation of different systems and physical processes. In this paper, a neural representation of the cold rolling process is considered. In general, once trained, the networks are capable of dealing with operational conditions not seen during the training process, keeping acceptable errors in their responses. However, humans cannot assimilate the knowledge kept by those networks, since such knowledge is implicit and difficult to extract. For this reason, neural networks are considered a “black box”.

In this work, the FCANN method, based on Formal Concept Analysis (FCA), is used to extract and represent knowledge from previously trained ANN. The new FCANN approach permits obtaining a non-redundant canonical base with a minimum set of implications, which qualitatively describes the process. The approach can be used to understand the relationships among the process parameters through implication rules under different operational conditions on the load curve of the cold rolling process. Metrics for evaluating the rule extraction process are also proposed, permitting a better analysis of the results obtained.

 

Handling Large Formal Context using BDD – Perspectives and Limitations

 

This paper presents Binary Decision Diagrams (BDDs) applied to Formal Concept Analysis (FCA). The aim is to increase the capability of FCA to handle large formal contexts. The main idea is to represent the formal context using BDDs for later extraction of the set of all formal concepts from this implicit representation. BDDs were evaluated on several types of randomly generated synthetic contexts with varying densities, in two distinct respects: (1) the computational resources required to build the formal context as a BDD; and (2) those required to extract all concepts from it. Although BDDs showed fewer advantages in terms of memory consumption for representing formal contexts, they show true potential when all concepts are extracted. In this work, it is shown that BDDs can be used to deal with large formal contexts, especially those with few attributes and many objects. To overcome the limitation to contexts with few attributes, one could consider vertical partitions of the context to be used with distributed FCA algorithms based on BDDs.
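
A minimal sketch of the encoding idea, assuming the Python `dd` package (not the paper's implementation): each object, i.e. each row of the cross table, becomes a conjunction over attribute variables, and the whole context is the disjunction of its rows.

    from dd.autoref import BDD

    context = [(1, 0, 1),               # hypothetical cross table:
               (1, 1, 0),               # objects x attributes
               (0, 1, 1)]
    n_attrs = 3

    bdd = BDD()
    bdd.declare(*[f"a{i}" for i in range(n_attrs)])

    rows = bdd.false
    for row in context:
        cube = bdd.true
        for i, bit in enumerate(row):
            v = bdd.var(f"a{i}")
            cube &= v if bit else ~v    # one conjunction per object
        rows |= cube                    # implicit representation of the context
    print("BDD nodes:", len(rows))      # compactness depends on context density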

 

Data Mining Applied to the Discovery of Symptom Patterns in Databases of Patients with Nephrolithiasis

 

Nephrolithiasis is a disease for which no clinical treatment that guarantees a cure is yet known. In the adult population, the estimated incidence is around 5 to 12%, being slightly lower in the pediatric group. Renal colic, caused by nephrolithiasis, is the main symptom of the disease in adults and is observed in 14% of pediatric patients. The symptoms in pediatric patients do not follow a pattern, which makes diagnosis difficult. The objective of this work is to discover patterns in the disease symptoms through the application of the KDD methodology, determining discriminant rules for the symptom patterns. The results and conclusions of the work are presented at the end of the article.

 

Data Mining in the Reduction of the Number of Experiment Locations for Plant Cultivars

 

When a new plant cultivar is developed, it needs to be evaluated under different environmental conditions before being commercially released. A set of experiments is systematically conducted at various locations with the purpose of evaluating the performance of the plants under different soil, climate and elevation conditions. In these experiments, the development of the plant culture is monitored throughout the entire planting cycle. These experiments are very expensive, and the aim of this work is to verify the possibility of reducing the number of experiment locations without loss of representativeness in the evaluation of the planted cultivars. To achieve this goal, the interaction among soil, climate and plant was modeled, and the major components in each of these systems were considered. Neural computational models were also developed in order to estimate data regarding the relative air humidity. Using the partitioned clustering technique (k-medians), clusters of planting sites were built that, according to the proposed methodology, were similar in terms of soil, climate and plant behavior. The consistency of the formed clusters was evaluated by two criteria, and a recommendation for reducing the cultivation sites was presented. By analyzing the results, it was possible to verify that the clustering process alone is not sufficient for deciding which planting locations should be omitted; complementary evaluation criteria associated with production and cultivar ranking are necessary. In this work, the national competition experiments of maize cultivars in Brazil are considered as a case study.
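
A minimal sketch of the k-medians partitioning step (plain NumPy; the per-location soil/climate/plant features are placeholders):

    import numpy as np

    def k_medians(X, k, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            # assign each location to the nearest center (Manhattan distance)
            d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
            labels = d.argmin(axis=1)
            # move each center to the coordinate-wise median of its cluster
            new = np.array([np.median(X[labels == j], axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return labels, centers

    X = np.random.default_rng(3).normal(size=(30, 4))  # 30 sites, 4 features
    labels, _ = k_medians(X, k=5)
    print(np.bincount(labels, minlength=5))            # sites per cluster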

 

Missing Value Recovery in Imbalanced Databases: Application to a Marketing Database with Massive Missing Data

 

Missing data in databases is considered one of the biggest problems faced in Data Mining applications. This problem is aggravated when massive missing data occurs in imbalanced databases. Several techniques, such as imputation, classifiers and pattern approximation, have been proposed and compared, but these comparisons do not consider the adverse conditions found in real databases. In this work, a comparison is presented of techniques used to classify records from a real imbalanced database with massive missing data, where the main objective is database pre-processing to recover and select completely filled records for further application of the techniques. Algorithms such as clustering, decision trees, artificial neural networks and Bayesian classifiers were compared. The results show that problem characterization and database understanding are essential steps for a correct comparison of techniques on a real problem. It was observed that artificial neural networks are an interesting alternative for this kind of problem, since they are capable of obtaining satisfactory results even when dealing with real-world problems.
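
A minimal sketch of such a comparison, with scikit-learn models standing in for the algorithms named above and a simulated imbalanced table with massive missing values:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.impute import SimpleImputer
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=0)
    X[np.random.default_rng(0).random(X.shape) < 0.4] = np.nan  # massive missing

    for name, clf in [("tree", DecisionTreeClassifier(random_state=0)),
                      ("bayes", GaussianNB()),
                      ("ann", MLPClassifier(max_iter=1000, random_state=0))]:
        pipe = make_pipeline(SimpleImputer(strategy="median"), clf)
        score = cross_val_score(pipe, X, y, scoring="balanced_accuracy").mean()
        print(name, round(score, 3))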

 

RAIMA: A new approach for identifying the missing value mechanism

 

Missing data is one of the most significant problems encountered in real-world data mining and knowledge discovery projects. Many approaches have been proposed to deal with the missing values problem. These approaches are related to the missing-data mechanism. However, it can be difficult to detect these mechanisms, since in some situations they are not clear. Precise identification of the missing-data mechanism is crucial to the selection of appropriate methods for dealing with missing data, and thus to obtaining better results in data mining and knowledge discovery processes.
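
One common heuristic for this detection problem (a stand-in illustration, not the RAIMA method itself) is to test whether the missingness indicator of a column can be predicted from the observed columns: predictability argues against MCAR and in favor of MAR.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    age = rng.uniform(20, 70, 1000)                      # always observed
    income = 1000 + 30 * age + rng.normal(0, 200, 1000)
    # income goes missing with probability depending on age (a MAR mechanism)
    missing = rng.random(1000) < 1 / (1 + np.exp(-(age - 45) / 5))
    income[missing] = np.nan

    auc = cross_val_score(LogisticRegression(), age.reshape(-1, 1), missing,
                          scoring="roc_auc").mean()
    print("AUC of missingness vs observed data:", round(auc, 3))
    # AUC near 0.5 is consistent with MCAR; clearly above 0.5 suggests MAR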

 

SPHERE-M: an Ontology Capture Method

 

This project proposes a method to help the ontology capture task, and also the development of a computational tool that guides its use, aiming to give better technical and formal support to this initial task in the ontology building process. The presented method proposes rules, restrictions, steps and metrics to be used in the capture task, supporting this activity in an objective way and adding more formalism and philosophical grounding to ontology building methodologies.

 

Prospects and limitations, in the context of knowledge discovery in databases, of the manipulation of domains through ontologies to support data warehouse modeling – Case study in social databases

 

Organizations use data warehouse and knowledge discovery in databases techniques to generate useful information to support their strategic decisions. These techniques, if properly combined, can offer broad knowledge of the business domain being addressed. This paper proposes the use of an ontology to organize and categorize the relevant data from the field of social vulnerability. By exploring this area, we evaluate the prospects and limitations of using an ontology as the basis for a data warehouse model for the broader domain treated. Data on social vulnerability will be extracted from the social databases CADUNICO and SIAB, which are made available by the Brazilian federal government; after the completion of this work, it will be possible to demonstrate new knowledge about the interrelationship between these systems, making it easier to see the impacts that one government sector can have on other sectors.

 

PICTOREA: A method for Knowledge Discovery in Databases

 

Nowadays, knowledge discovery in databases (KDD) is widely used in academia, yet still little used in industry. Organizations that apply the KDD process usually do so by acquiring software and following the methodologies its vendors define. Moreover, most KDD applications rely on a proprietary methodology for dealing with each case, and these methodologies generally do not follow a standard. Using an interpretive scientific method, concepts of Domain-Driven Data Mining (D3M), and the support of process modeling approaches (Business Process Management – BPM) and SPEM (Software and Systems Process Engineering Meta-Model), this paper proposes a pedagogical canonical method, called PICTOREA, for the development, monitoring and documentation of the steps and activities of a KDD project.

 

Characterization of Time Series for Analyzing the Evolution of Time Series Clusters


This work proposes a new approach for the characterization of time series in temporal databases (TDB) for the temporal analysis of clusters. For the characterization of the time series, the level and trend components calculated through the Holt-Winters smoothing model were used. For the temporal analysis of the clusters, the AGNES (Agglomerative Nesting, hierarchical clustering) and PAM (Partitioning Around Medoids) techniques were used in combination. For the application of this methodology, an R-based script for generating synthetic TDBs was developed. Our proposal allows the evaluation of the clusters both with respect to object movement and to the appearance or disappearance of clusters. The model chosen to characterize the time series is adequate because it can be applied to short periods of time in situations where changes must be promptly detected for quick decision making.
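
A minimal sketch of the characterization and the hierarchical step, in Python rather than the R script the work describes: each series is reduced to its final Holt level and trend, and the 2-D characterizations are clustered with SciPy's agglomerative linkage (the AGNES-like step; the PAM step is omitted here).

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def holt(series, alpha=0.5, beta=0.3):
        """Holt's linear smoothing; returns the final (level, trend)."""
        level, trend = series[0], series[1] - series[0]
        for x in series[1:]:
            prev = level
            level = alpha * x + (1 - alpha) * (level + trend)
            trend = beta * (level - prev) + (1 - beta) * trend
        return level, trend

    rng = np.random.default_rng(5)                     # synthetic TDB
    tdb = [i * 0.1 * np.arange(30.0) + rng.normal(0, 1, 30) for i in range(12)]
    feats = np.array([holt(s) for s in tdb])           # (level, trend) per series

    Z = linkage(feats, method="average")               # agglomerative clustering
    print(fcluster(Z, t=3, criterion="maxclust"))      # 3 clusters of series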

 

Improvement in the prediction of the translation initiation site through balancing methods, inclusion of acquired knowledge and addition of features to sequences of mRNA

 


The accurate prediction of the translation initiation site in mRNA sequences is an important activity for genome annotation. However, obtaining an accurate prediction is not always a simple task, and it can be modeled as a problem of classification between positive sequences (protein coding) and negative sequences (non-coding). The problem is highly imbalanced, because each mRNA molecule has a single translation initiation site and several other candidate sites that are not initiators. Therefore, this study approaches the problem from the perspective of class balancing, and we present an undersampling balancing method, M-clus, which is based on clustering. The method also adds features to the sequences and improves classifier performance through the inclusion of knowledge obtained by the model, called InAKnow.
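
A minimal sketch of a clustering-based undersampling step in the spirit of M-clus (the published method's details may differ): the majority class is clustered and only the sequence closest to each centroid is kept, so the negatives shrink to the size of the positives.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_undersample(X_major, n_keep, seed=0):
        km = KMeans(n_clusters=n_keep, n_init=10, random_state=seed).fit(X_major)
        # keep the majority example nearest to each cluster centroid
        keep = [int(np.argmin(np.linalg.norm(X_major - c, axis=1)))
                for c in km.cluster_centers_]
        return X_major[keep]

    rng = np.random.default_rng(6)
    X_neg = rng.normal(size=(500, 8))   # non-initiator (majority) feature vectors
    X_pos = rng.normal(size=(40, 8))    # true TIS (minority) feature vectors
    print(cluster_undersample(X_neg, n_keep=len(X_pos)).shape)   # (40, 8)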

 

Applying ANN to Model the Stock Market for Prediction of Stock Price and Improvement of the Direction Prediction Index – Case Study of PETR4 – Petrobras, Brazil

 

Predicting the direction of stock price changes is important, as it contributes to the development of effective strategies for stock market transactions, and there is much interest in incorporating historical series of variables into mathematical models or computer algorithms in order to produce predictions about expected price changes. The purpose of this study is to build a neural model of the financial market that allows short-term predictions of the future behavior of the closing prices of stocks traded on BM&FBOVESPA, using economic and financial theory and combining technical analysis, fundamental analysis and time series analysis, so as to predict price behavior, with emphasis on the percentage of correct predictions of the direction of the price series (POCID).

This work aimed to understand the information available in the financial market and to indicate the variables that drive stock prices. The methodology presented may be adapted to other companies and their stocks. The stock PETR4 of Petrobras, traded on BM&FBOVESPA, was used as a case study. As part of this effort, configurations with different window sizes were designed, and the best performance was achieved with a window size of 3, with a POCID index of correct direction predictions of 93.62%.
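
A minimal sketch of the windowing and the POCID metric (synthetic prices stand in for the PETR4 series; the study's full set of technical and fundamental variables is not reproduced):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(7)
    prices = np.cumsum(rng.normal(0, 1, 300)) + 100   # placeholder closing prices

    w = 3                                             # window size
    X = np.array([prices[i:i + w] for i in range(len(prices) - w)])
    y = prices[w:]
    split = 250
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                       random_state=0).fit(X[:split], y[:split])
    pred = net.predict(X[split:])

    # POCID: percentage of predictions whose direction matches the real one
    real_dir = np.sign(y[split:] - X[split:, -1])
    pred_dir = np.sign(pred - X[split:, -1])
    print(f"POCID = {100 * np.mean(real_dir == pred_dir):.1f}%")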

 

Transductive SVM-based prediction of protein translation initiation site

 

The correct identification of protein-coding regions is an important problem. The problem becomes more complex due to unfamiliarity with the biological context and the presence of conserved characteristics in the messenger RNA (mRNA). Therefore, it is fundamental to research computational methods that help discover patterns within Translation Initiation Sites (TIS). The literature has widely applied machine learning methods based on inductive inference, primarily in the field of Bioinformatics. On the other hand, not as much attention has been given to machine learning methods based on transductive inference, such as the Transductive Support Vector Machine (TSVM). Transductive inference performs well on text categorization problems, in which the number of categorized elements is considerably smaller than the number of non-categorized ones. Similarly, the problem of predicting TIS may take advantage of transductive methods, since the number of new sequences grows rapidly with the progress of genome projects and the study of new organisms. Consequently, this work investigates transductive learning for predicting TIS and compares its performance with inductive inference in the following tools: TISHunter, TisMiner, and NetStart.
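
As a rough illustration of learning from mostly unlabeled sequences, the sketch below uses scikit-learn's self-training wrapper around an SVM as a simple stand-in for a true TSVM, on simulated feature vectors; unlabeled examples are marked with -1.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.semi_supervised import SelfTrainingClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=1000, random_state=0)
    y_partial = y.copy()
    mask = np.random.default_rng(8).random(1000) < 0.9
    y_partial[mask] = -1                    # 90% of the sequences are unlabeled

    model = SelfTrainingClassifier(SVC(probability=True, random_state=0))
    model.fit(X, y_partial)
    print("accuracy on the unlabeled pool:",
          round(model.score(X[mask], y[mask]), 3))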