METHOD AND SYSTEM FOR DATA-BASED OPTIMIZATION OF PERFORMANCE INDICATORS IN MANUFACTURING AND PROCESS INDUSTRIES
Abstract:
Disclosed are a system and method for performing data-based optimization of performance indicators of manufacturing and process plants. The system consists of modules for collecting and merging data from industrial processing units and pre-processing the data to remove outliers and missing values. Additionally, the system generates customized outputs from the data and identifies the important variables that affect a given process performance indicator. The system also builds predictive models for key performance indicators using the identified key features, and determines operating points that optimize the key performance indicators with minimal user intervention. In particular, the system receives input from users on the key performance indicators to be optimized, and notifies users of the outputs of several steps in the analysis, which helps users manage the analysis effectively and make appropriate operational decisions.
Publication number: BR102018009859A2
Application number: R102018009859-4
Filing date: 2018-05-15
Publication date: 2019-03-12
Inventors: Venkataramana Runkana; Rohan PANDYA; Rajan Kumar; Aniruddha PANDA; Mahesh Mynam; Sri Harsha Nistala; Pradeep Rathore; Jayasree Biswas
Applicant: Tata Consultancy Services Limited
IPC main class:
Description:
Invention Patent Descriptive Report for: METHOD AND SYSTEM FOR DATA-BASED OPTIMIZATION OF PERFORMANCE INDICATORS IN MANUFACTURING AND PROCESS INDUSTRIES

Priority Claim
[001] This patent application claims priority to India Application No. 201721009012, filed on May 15, 2017. The entire content of the aforementioned application is incorporated herein by reference.

Technical Field
[002] The embodiments herein relate, in general, to the field of data analysis and, specifically, to a system and a method for optimizing key performance indicators of manufacturing and process industries.

Background
[003] Indicators such as productivity, product quality, energy consumption, percentage uptime and emission levels are used to monitor the performance of manufacturing and process plants. Today, industries face the challenge of reaching ambitious production targets while minimizing their energy consumption, meeting emission standards and customizing their products, all while dealing with wide variations in raw material quality and other influencing parameters, such as ambient temperature and humidity. Industries strive to continually improve their performance indicators by modulating certain parameters that are known to influence or affect them. This is easy when a process involves a limited number of variables. However, most industrial processes consist of many units in series and/or in parallel and involve thousands of variables or parameters. Identifying the variables that influence key performance indicators (KPIs), and their optimum levels, in such situations is not simple, and doing so requires considerable time and domain knowledge. Data analysis methods, such as statistical techniques, machine learning and data mining, have the potential to solve these complex optimization problems and can be used to analyze industrial data and discover new operating regimes.

[004] The identification of the relevant variables that affect KPIs is a challenge associated with the analysis of process data. This is due to the large number of variables in industrial processes and the complex non-linear interactions between them. There are several variable (or feature) selection techniques, but no single variable selection technique is able to identify all relevant variables, particularly in complex industrial processes. Therefore, there is a need for a better variable selection technique that is able to select the most important variables.

[005] In addition, in existing methods that describe the application of data analysis to manufacturing and process industries, the focus is limited to the visualization of KPIs, other variables of interest and results of predictive models, and/or to providing process recommendations to the end user. Various other outputs, such as the ranges of variables that correspond to the desired and undesired ranges of KPIs and the ranges of KPIs at different flow levels, are of great help to end users in decision making and do not appear in any of the existing methods.

Summary
[006] The following is a simplified summary of some embodiments of the disclosure in order to provide a basic understanding of the embodiments. This summary is not an extensive overview of the embodiments. It is not intended to identify key or critical elements of the embodiments or to delineate the scope of the embodiments.
Its sole purpose is to present some embodiments in a simplified form as a prelude to the more detailed description that is presented below.

[007] In view of the above, an embodiment herein provides a system and method for analyzing a plurality of data from one or more industrial processing units to optimize key performance indicators of the industry.

[008] In one aspect, a system is provided for analyzing a plurality of data from one or more industrial processing units to optimize key performance indicators of the industry. The system comprises a memory with instructions, at least one processor communicatively coupled with the memory, a plurality of interfaces and a plurality of modules. A receiving module is configured to receive the plurality of data from one or more industrial processing units, wherein the plurality of data comprises characteristics of raw materials, characteristics of intermediate products, by-products and final products, process parameters and condition of process equipment. A unit-level fusion module is configured to merge the plurality of received data to obtain a per-unit data set from each of the one or more industrial processing units, wherein the per-unit data set of each processing unit has a desired sampling frequency. A verification module is configured to verify the merged per-unit data set of the one or more industrial processing units, wherein the presence of unwanted values, percentage of availability, standard deviation and interquartile range of all variables of the processing unit are calculated. A data pre-processing module is configured to pre-process the verified plurality of data to obtain a pre-processed data set from each of the one or more industrial processing units, wherein pre-processing is an iterative process comprising the steps of outlier removal, imputation of missing values and clustering. An enterprise-level fusion module is configured to integrate the pre-processed data of each of the one or more industrial processing units with one or more values of simulated variables from one or more physics-based models and one or more domain inputs from the user to obtain an enterprise-level data set, wherein the per-unit data sets are merged and synchronized taking into account the time lags due to residence times in the several units, transport times between the one or more industrial processing units and response times of one or more sensors of the processing units. A regime identification module is configured to identify one or more operating regimes using one or more clustering techniques on the enterprise-level data set, wherein the one or more clustering techniques comprise distance-based clustering, density-based clustering and hierarchical clustering. A baseline statistics module is configured to determine ranges of one or more variables that correspond to the KPIs of the enterprise-level data set. The range determination is based on predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of the one or more variables are used to generate one or more KPI plots for the time period over which the analysis is performed.
A feature selection module is configured to select one or more features from the enterprise-level data set to obtain the superset of one or more features selected from the enterprise-level data set, wherein feature selection is performed on all per-regime data sets as well as on the enterprise-level data set. A model building module is configured to develop one or more predictive models for each KPI, wherein the one or more predictive models are developed using the enterprise-level data set and the superset of one or more features selected from the enterprise-level data set. An optimization module is configured to optimize at least one KPI based on the one or more predictive models and constraints on one or more KPIs using one or more optimization techniques, wherein the one or more optimization techniques comprise gradient search, linear programming, goal programming, simulated annealing and evolutionary algorithms.

[009] In another aspect, a method is provided for analyzing a plurality of data from one or more industrial processing units to optimize key performance indicators of the industry. The method comprises the steps of: receiving the plurality of data from one or more industrial processing units, wherein the plurality of data comprises characteristics of raw materials, characteristics of intermediate products, by-products and final products, process parameters and condition of process equipment; merging the plurality of received data to obtain a per-unit data set from each of the one or more industrial processing units; verifying the merged per-unit data set of the one or more industrial processing units, wherein the presence of unwanted values, percentage of availability, standard deviation and interquartile range of all variables of the processing unit are calculated; pre-processing the verified plurality of data to obtain a pre-processed data set from each of the one or more industrial processing units, wherein pre-processing is an iterative process comprising the steps of outlier removal, imputation of missing values and clustering; integrating the pre-processed data set of each of the one or more industrial processing units with one or more values from one or more physics-based models and one or more domain inputs from the user to obtain an enterprise-level data set, wherein the per-unit data sets are merged and synchronized taking into account the time lags due to residence times in the several units, material transport times between the one or more industrial processing units and response times of one or more sensors of the processing units; identifying one or more operating regimes using one or more clustering techniques on the enterprise-level data set, wherein the one or more clustering techniques comprise distance-based clustering, density-based clustering and hierarchical clustering; determining ranges of one or more variables that correspond to the KPIs of the enterprise-level data set based on predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of the one or more variables are used to generate one or more KPI plots for the time period over which the analysis is performed;
selecting one or more features from the enterprise-level data set to obtain a superset of one or more features selected from the enterprise-level data set, wherein feature selection is performed on all per-regime data sets as well as on the enterprise-level data set; developing one or more predictive models for each KPI, wherein the one or more predictive models use the enterprise-level data set and the superset of one or more features selected from the enterprise-level data set; and optimizing at least one KPI based on the one or more predictive models and constraints on one or more KPIs using one or more optimization techniques, wherein the one or more optimization techniques comprise gradient search, linear programming, goal programming, simulated annealing and evolutionary algorithms.

[0010] In yet another aspect, an embodiment herein provides one or more non-transitory machine-readable information storage media comprising one or more instructions which, when executed by one or more hardware processors, perform actions comprising: receiving a plurality of data from one or more industrial processing units, wherein the plurality of data comprises characteristics of raw materials, characteristics of intermediate products, by-products and final products, process parameters and condition of process equipment; merging the plurality of received data to obtain a per-unit data set from each of the one or more industrial processing units; verifying the merged per-unit data set of the one or more industrial processing units, wherein the presence of unwanted values, percentage of availability, standard deviation and interquartile range of all variables of the processing unit are calculated; pre-processing the verified plurality of data to obtain a pre-processed data set from each of the one or more industrial processing units, wherein pre-processing is an iterative process comprising the steps of outlier removal, imputation of missing values and clustering; integrating the pre-processed data set of each of the one or more industrial processing units with one or more values from one or more physics-based models and one or more domain inputs from the user to obtain an enterprise-level data set, wherein the per-unit data sets are merged and synchronized taking into account the time lags due to residence times in the different units, transport of materials between the one or more industrial processing units and response times of one or more sensors of the processing units; identifying one or more operating regimes
using one or more clustering techniques on the enterprise-level data set, wherein the one or more clustering techniques comprise distance-based clustering, density-based clustering and hierarchical clustering; determining ranges of one or more variables that correspond to the KPIs of the enterprise-level data set based on predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of the one or more variables are used to generate one or more KPI plots for the time period over which the analysis is performed; selecting one or more features from the enterprise-level data set to obtain a superset of one or more features selected from the enterprise-level data set, wherein feature selection is performed on all per-regime data sets as well as on the enterprise-level data set; developing one or more predictive models for each KPI, wherein the one or more predictive models use the enterprise-level data set and the superset of one or more features selected from the enterprise-level data set; and optimizing at least one KPI based on the one or more predictive models and constraints on one or more KPIs using one or more optimization techniques, wherein the one or more optimization techniques comprise gradient search, linear programming, goal programming, simulated annealing and evolutionary algorithms.

[0011] It should be noted by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flowcharts, flow diagrams, state transition diagrams, pseudocode and the like represent various processes which may be substantially represented in a computer-readable medium and so executed by a computing device or processor, whether or not such computing device or processor is explicitly shown.

Brief Description of Drawings
[0012] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0013] Figure 1 illustrates a system for analyzing a plurality of data from one or more industrial processing units to optimize key performance indicators of the industry, according to an embodiment of the present disclosure;
[0014] Figure 2 is a schematic of a process or manufacturing plant, according to an embodiment of the present disclosure;
[0015] Figure 3 is a schematic showing the steps in the method for optimizing KPIs, according to an embodiment of the present disclosure;
[0016] Figures 4(a) and 4(b) are a flowchart representing data pre-processing using outlier removal and imputation techniques, according to an embodiment of the present disclosure;
[0017] Figure 5 is a schematic of the inputs and outputs of the data pre-processing step, according to an embodiment of the present disclosure;
[0018] Figure 6 is a schematic of the inputs and outputs of the enterprise-level integration, according to an embodiment of the present disclosure;
[0019] Figure 7 is a schematic of the inputs and outputs of the baseline statistics and regime identification steps, according to an embodiment of the present disclosure;
[0020] Figure 8 is a flowchart of the feature selection, according to an embodiment of the present disclosure;
[0021] Figures 9(a) and 9(b) are a flowchart of the model building and discrimination, according to an embodiment of the present disclosure;
[0022] Figure 10 is a schematic of the inputs and outputs of the model building and discrimination step, according to an embodiment of the present disclosure;
[0023] Figure 11 is a schematic of the inputs and outputs of the optimization, according to an embodiment of the present disclosure; and
[0024] Figures 12(a) and 12(b) illustrate a method for analyzing a plurality of data from one or more industrial processing units to optimize key performance indicators of the industry, according to an embodiment of the present disclosure.

Detailed Description of the Embodiments
[0025] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. The examples used herein are intended merely to facilitate an understanding of the ways in which the embodiments herein can be practiced and, further, to enable those skilled in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.

[0026] Referring to Figure 1, a system 100 for analyzing a plurality of data from one or more industrial processing units to optimize key performance indicators of the industry is presented. System 100 comprises a processor 102, a memory 104 communicatively coupled to the processor 102, a plurality of interfaces 106, a receiving module 108, a unit-level fusion module 110, a verification module 112, a data pre-processing module 114, an enterprise-level fusion module 116, a regime identification module 118, a baseline statistics module 120, a feature selection module 122, a model building module 124, an optimization module 126 and a data management server 128.

[0027] In the preferred embodiment, the memory 104 contains instructions that are readable by the processor 102. The plurality of interfaces 106 comprises a graphical user interface, a server interface, a physics-based model interface and a solver interface. The graphical user interface is used to receive inputs from the user, such as the KPIs of interest and the analysis time period, and to forward them to the plurality of modules.
The server interface forwards the data requests received from one of the plurality of modules to the data management server 128, and the data received from the data management server 128 to the plurality of modules. The physics-based model interface sends the integrated data set received from one of the plurality of modules after the enterprise-level fusion to the physics-based models, if any are available for the industrial process, receives the values of simulated variables from the physics-based models and forwards them to one of the plurality of modules.

[0028] In the preferred embodiment, a receiving module 108 is configured to receive the plurality of data from one or more industrial processing units, wherein the plurality of data comprises characteristics of raw materials, characteristics of intermediate products, by-products and final products, process parameters and condition of process equipment.

[0029] Referring to Figures 2 and 3, as examples, a schematic of a hypothetical industrial enterprise is shown; most process and manufacturing enterprises consist of several units in series or in parallel. The enterprise consists of 8 process units that produce two products, namely A and B. To produce product A, the material flow occurs through the following sequence of operations: (Unit No. 1, Unit No. 2, Unit No. 3) -> Unit No. 4 -> Unit No. 5 -> Unit No. 6. Similarly, to produce product B, the material flow occurs through the following sequence of operations: (Unit No. 1, Unit No. 2, Unit No. 3) -> Unit No. 4 -> Unit No. 7 -> Unit No. 8. In order to optimize the KPIs related to the production of product A, say the quality of product A or the energy consumed per unit mass of product A produced, data from all units involved in that operational sequence must be considered. Similarly, in order to optimize the KPIs related to the production of product B, data from all units involved in that operational sequence must be considered. Analysis of enterprise-level data instead of unit-level data can yield better insights into enterprise operations. Figure 3 shows that, for each of the N process units, data are collected from various sources, such as Enterprise Resource Planning (ERP), Distributed Control System (DCS) and Laboratory Information Management System (LIMS) systems.

[0030] In the preferred embodiment, the unit-level fusion module is configured to merge the plurality of received data to obtain a per-unit data set from each of the one or more industrial processing units, wherein the per-unit data set of each processing unit has a desired sampling frequency. In the fusion process, one or more variables from all files or data sets are merged against a specific observation ID corresponding to the sampling frequency, for example, the date in the case of daily data, the hour in the case of hourly data, and so on. If the sampling frequency is inconsistent across the multiple files or data sets, variable values are averaged whenever possible. If averaging is not possible, the same value is reused; for example, when hourly analysis is to be performed and only daily data is available, the daily value is used for all hours of that specific day. At the end of the process, the per-unit data set, with rows corresponding to the observation ID and columns corresponding to all variables of the process unit, is obtained.
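By way of illustration only, the unit-level fusion described in paragraph [0030] can be sketched with pandas as below; the source DataFrames, the daily target frequency in the usage comment and the forward-fill rule for slower laboratory data are assumptions of this sketch and not part of the disclosure.

```python
# Illustrative sketch of the unit-level fusion step; column layout and
# frequency choices are hypothetical, not taken from the disclosure.
import pandas as pd

def fuse_unit_data(sources, target_freq="D"):
    """Merge per-unit data from several sources (e.g. DCS, LIMS, ERP exports)
    onto a common observation ID at the desired sampling frequency."""
    aligned = []
    for df in sources:
        df = df.copy()
        df.index = pd.to_datetime(df.index)
        # Downsampling averages faster signals; upsampling (e.g. daily lab
        # assays against an hourly target) forward-fills the daily value
        # across every hour of that day, as described in the text.
        aligned.append(df.resample(target_freq).mean(numeric_only=True).ffill())
    # Outer join on the observation ID keeps rows seen in any source; the
    # columns are all variables of the processing unit.
    return pd.concat(aligned, axis=1, join="outer")

# Hypothetical usage: dcs, lims and erp are timestamp-indexed DataFrames.
# unit1_dataset = fuse_unit_data([dcs, lims, erp], target_freq="D")
```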
[0031] In the preferred embodiment, the verification module is configured to verify the merged per-unit data set of the one or more industrial processing units, wherein the presence of unwanted values, percentage of availability, standard deviation and interquartile range of all variables of the processing unit are calculated. This data quality verification is performed on the per-unit data set obtained for each of the process units. Missing-data maps showing the percentage and pattern of availability of the variables are also created for each process unit. The data quality metrics and missing-data maps are sent to the user via the user interface. Depending on the availability of the data, the user can decide whether to proceed with the rest of the analysis. The user can also suggest deleting variables with very low availability before performing the rest of the steps.

[0032] With reference to Figures 4(a), 4(b) and 5, the data pre-processing module 114 is configured to pre-process the verified plurality of data to obtain a pre-processed data set from each of the one or more industrial processing units, wherein pre-processing is an iterative process comprising the steps of outlier removal, imputation of missing values and clustering. Variables with a percentage availability of at least seventy percent are considered for pre-processing, although this condition is relaxed for material variables, such as raw material, intermediate product and final product characteristics, since missing values in these types of variables can occur due to the lower number of samples, laboratory analysis generally being performed only at periodic intervals.

[0033] Material variables with availability lower than the desired availability and that do not follow any specific pattern of missingness are discarded from the data set. An initial outlier analysis is performed in order to detect and remove outliers from the data set, including values that are inconsistent due to instrument failure or malfunction. In case the production of a unit is zero, all variables of the unit for that period of time are ignored. The variables are then categorized into various subsets based on the percentage availability of each variable. While multivariate imputation is used for process parameters and non-seasonal material characteristic variables, time series imputation is used for seasonal quality variables. After imputing all variables with the appropriate imputation technique, clustering is performed on the per-unit data set to identify clusters, if any are present in the data. These clusters are representative of different operating regimes. Each per-unit data set is then divided into different data sets based on the identified clusters. The divided data sets are again passed through the outlier removal and imputation steps as shown in Figures 4(a) and 4(b).

[0034] In the preferred embodiment, the iterative process of outlier removal, imputation and clustering is stopped when the number of clusters and the number of data points in each cluster no longer change. The pre-processed per-unit data sets are obtained at the end of this step. For each variable, the number and percentage of outliers removed, the technique used for imputation, and the mean, median and standard deviation before and after pre-processing are presented to the user as outputs. The list of discarded variables is also presented to the user. The user is also provided with an option to view trends of the pre-processed and original variables.
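A minimal sketch of the iterative pre-processing loop of paragraphs [0032] to [0034] is given below, assuming the per-unit data set is a numeric pandas DataFrame. The IQR rule, mean imputation and a fixed number of clusters are simplifying assumptions; the disclosure also allows multivariate or time-series imputation and other outlier detection and clustering techniques.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

def remove_outliers_iqr(df, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] and replace them with NaN."""
    q1, q3 = df.quantile(0.25), df.quantile(0.75)
    iqr = q3 - q1
    mask = (df < q1 - k * iqr) | (df > q3 + k * iqr)
    return df.mask(mask)

def preprocess_unit(df, n_regimes=3, max_iter=10, random_state=0):
    """Iterate outlier removal -> imputation -> clustering until the cluster
    sizes stop changing, as described in paragraph [0034]."""
    prev_sizes = None
    for _ in range(max_iter):
        cleaned = remove_outliers_iqr(df)
        imputed = cleaned.fillna(cleaned.mean())          # placeholder imputation
        labels = KMeans(n_clusters=n_regimes, n_init=10,
                        random_state=random_state).fit_predict(imputed)
        sizes = np.sort(np.bincount(labels))
        if prev_sizes is not None and np.array_equal(sizes, prev_sizes):
            break
        prev_sizes, df = sizes, imputed
    return imputed, labels
```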
[0035] In the preferred embodiment, with reference to Figure 6, the enterprise-level fusion module 116 is configured to integrate the pre-processed data of each of the one or more industrial processing units with one or more values of simulated variables from one or more physics-based models and one or more domain inputs from the user to obtain an enterprise-level data set, wherein the per-unit data sets are merged and synchronized taking into account the time lags due to residence times in the several units, transport times between the one or more industrial processing units and response times of one or more sensors of the processing units. If the transport time between two process units is greater than the sampling frequency of the data, then the observation IDs of one of the process units are shifted by the appropriate number of time units before integration. For example, if the sampling frequency is daily and it takes 2 days for material to travel from process unit A to process unit B, then all observation IDs in the data set of process unit A are shifted by 2 days before merging the data sets of both units.

[0036] In the preferred embodiment, any specific process unit can be considered as the baseline for merging the data of all process units. Typically, the process unit for which the KPIs of interest are calculated is considered to be the baseline unit for data integration. In cases where the same intermediate product comes out of two or more different process units, the operating variables of all such process units are considered for analysis. However, instead of using the material characteristics (size analysis, chemical analysis, etc.) from all process units where the intermediate product is generated, weighted average characteristics are used. The weights could be the quantities of the intermediate product generated by each of the process units or the quantities of the intermediate product consumed in the subsequent process unit.

[0037] Once the enterprise-level data set is prepared, it is forwarded, through the physics-based model interface, to the physics-based models, if any are available for the industrial process, for calculation of simulated variables. These are parameters that can have an impact on the KPIs but cannot be measured directly in the process. Examples of simulated variables are the temperature in a high-temperature zone (> 1500 °C) of a furnace, the concentration of an intermediate product in a reactor, etc. The simulated parameters are sent back to the enterprise-level fusion module and added to the enterprise-level data set to obtain the integrated data set for further analysis. The outputs of the enterprise-level integration comprise the range, mean, standard deviation and median of all variables, and the list of estimated and simulated parameters.

[0038] In the preferred embodiment, the regime identification module 118 is configured to identify one or more operating regimes using one or more clustering techniques on the enterprise-level data set, wherein the one or more clustering techniques comprise distance-based clustering, density-based clustering and hierarchical clustering.
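For illustration, the lag-aware merging of paragraph [0035] and the regime identification of paragraph [0038] can be sketched as follows; the unit names, the 2-day transport lag and the choice of k-means with three regimes are assumptions of the sketch rather than of the disclosure.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def fuse_enterprise_level(unit_a, unit_b, transport_lag="2D"):
    """Shift unit A's observation IDs forward by the material transport time
    so that its rows line up with the corresponding rows of unit B."""
    shifted_a = unit_a.copy()
    shifted_a.index = shifted_a.index + pd.Timedelta(transport_lag)
    return shifted_a.join(unit_b, how="inner", lsuffix="_A", rsuffix="_B")

def identify_regimes(enterprise_df, n_regimes=3, random_state=0):
    """Label each observation with an operating regime; distance-based
    clustering is shown, density-based or hierarchical clustering also fit."""
    clean = enterprise_df.dropna()
    x = StandardScaler().fit_transform(clean)
    labels = KMeans(n_clusters=n_regimes, n_init=10,
                    random_state=random_state).fit_predict(x)
    return pd.Series(labels, index=clean.index, name="regime")
```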
[0039] In the preferred embodiment, the baseline statistics module 120 is configured to determine ranges of one or more variables that correspond to the KPIs of the enterprise-level data set, based on the predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of the one or more variables are used to generate one or more KPI plots for the time period over which the analysis is performed. Baseline statistics, such as the percentage of time the KPIs are in the desired and undesired ranges, the ranges of variables that correspond to the desired and undesired ranges of the KPIs, the ranges of the KPIs at different flow levels, and the correlation coefficients between the KPIs and the other variables in the integrated data set, are calculated and reported to the user. The user is provided with the option to generate trend plots and box plots of the KPIs and all variables in the integrated data set over the time period for which the analysis is being performed. The user can also generate scatter plots between the KPIs and the variables of interest. All variables in the integrated data set are binned into several intervals between their minimum and maximum values. The KPI values that correspond to each bin of each variable are separated and averaged. The average KPI values that correspond to the bins or ranges of all variables are represented in the form of a heat map and notified to the user.

[0040] In the preferred embodiment, the feature selection module 122 is configured to select one or more features from the enterprise-level data set to obtain the superset of one or more features selected from the enterprise-level data set, wherein feature selection is performed on all per-regime data sets as well as on the enterprise-level data set. The integrated data set is divided into two or more data sets depending on the number of regimes identified during the regime identification step.

[0041] It is envisaged that a two-stage feature selection approach, as shown in Figure 8, is used to select important features. In the first stage, important features are obtained from various feature selection methods. This stage involves tuning the parameters available in the feature selection algorithms and k-fold cross-validation to obtain important features. The feature selection methods could be model-based methods, such as random forest, multivariate adaptive regression splines, supervised principal component analysis, stepwise regression and support vector regression, and non-model-based methods, such as association rule mining and time series clustering. In the second stage, the lists of important features obtained from the individual feature selection techniques are combined to obtain a single 'superset' of important features. This is achieved by scoring the top features identified by all techniques using the geometric mean scoring method. The score for feature 'i' is calculated as the geometric mean of its ranks, score(i) = (R(i,1) x R(i,2) x ... x R(i,n_i))^(1/n_i), where:
[0042] n_i is the frequency, or number of methods, that selected the i-th feature; and
[0043] R(i,k) is the rank of feature i in the k-th method.
[0044] The superset of important features, together with their importance scores in relation to the KPIs for the per-regime data sets and the integrated data set, is notified to the user. The user is provided with the option of adding other features to, or deleting existing features from, the supersets. For each data set, parallel coordinate plots are also displayed to the user.
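A small sketch of the second-stage geometric mean scoring is shown below; the input format (each method contributing an ordered, best-first list of selected features) and the example feature names in the usage comment are assumptions made for illustration.

```python
# Sketch of the second-stage geometric mean scoring that combines feature
# rankings from several first-stage selection methods.
from collections import defaultdict

def geometric_mean_scores(rankings):
    """rankings: dict mapping each method name to its selected features,
    ordered best first. Returns features sorted by ascending score
    (a lower score means higher relevance to the KPI)."""
    ranks = defaultdict(list)
    for method, ordered_features in rankings.items():
        for position, feature in enumerate(ordered_features, start=1):
            ranks[feature].append(position)          # R(i, k): rank in method k
    scores = {}
    for feature, feature_ranks in ranks.items():
        n_i = len(feature_ranks)                     # number of methods that kept it
        product = 1.0
        for rank in feature_ranks:
            product *= rank
        scores[feature] = product ** (1.0 / n_i)     # geometric mean of the ranks
    return dict(sorted(scores.items(), key=lambda kv: kv[1]))

# Hypothetical usage with made-up method and feature names:
# rankings = {"random_forest": ["kiln_temp", "feed_rate", "moisture"],
#             "mars":          ["feed_rate", "kiln_temp"],
#             "stepwise":      ["moisture", "kiln_temp", "pressure"]}
# superset_scores = geometric_mean_scores(rankings)
```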
[0045] With reference to Figures 9(a) and 9(b), the model building module 124 of the system 100 is configured to develop one or more predictive models for each KPI on the training data set, wherein the one or more predictive models use the enterprise-level data set and the superset of one or more features selected from the enterprise-level data set. It is contemplated that a three-step model building approach is used. The first step involves building predictive models using base model building algorithms. The one or more predictive models comprise stepwise regression, principal component regression, multivariate adaptive regression splines, independent component regression, lasso regression, kriging, random forest, partial least squares, gradient boosted trees, generalized linear models, linear and non-linear support vector machines and artificial neural networks. The second step involves tuning the model building parameters in order to optimize the forecasting performance of the models. The prediction performance of the models is assessed using the test data set and is expressed in terms of the root mean square error (RMSE) of forecast, mean absolute error (MAE) of forecast, Akaike information criterion (AIC), corrected Akaike information criterion (AICc), Bayesian information criterion (BIC) and hit rate (% of points predicted with a certain accuracy), as shown in Figure 10. It is contemplated that if none of the predictive models achieves the desired RMSE and/or MAE, the user is provided with the option to return to feature selection, where additional variables or transformed variables can be added to the superset of important variables, and to repeat the model building step.

[0046] The third step involves model discrimination and selection, in which, for the integrated data set and the per-regime data sets, the three best predictive models whose root mean square error and mean absolute error values are less than the values specified by the user are retained. A robustness score (RS) is assessed for the three best models and used for model discrimination. At least ten thousand data points containing values for all variables included in the models are generated randomly and used to predict the KPI. The robustness score of each model is then determined using RS = (number of data points for which the predicted KPI is within the desired range) / (total number of data points for which the KPI is predicted).

[0047] Predictive models whose robustness score is greater than 95% are selected for sensitivity analysis and optimization. Variance-based sensitivity analysis is performed to assess the sensitivity of the KPI to unit changes in the variables in the model. Sensitivity scores for each of the variables in the models are obtained, with a higher score indicating a greater change in the KPI value for a unit change in the value of the variable. It is contemplated that if the robustness score of all three predictive models is less than 95%, the user can modify the superset of important features and repeat the model building step.
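As an illustration of the model building, evaluation and robustness scoring described above, the sketch below trains two of the listed model families and reports RMSE, MAE and the robustness score on randomly generated points; the KPI band, the 70/30 split and the omission of hyper-parameter tuning, AIC, AICc, BIC and hit rate are simplifications assumed here.

```python
# Sketch of model building, test-set evaluation and robustness scoring.
# X and y are assumed to be NumPy arrays of selected features and the KPI.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

def build_and_screen(X, y, kpi_band=(70.0, 90.0), n_random=10_000, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=seed)
    candidates = {
        "random_forest": RandomForestRegressor(random_state=seed),
        "pls": PLSRegression(n_components=min(5, X.shape[1])),
    }
    # Random points spanning the observed variable ranges, used to compute the
    # robustness score: the fraction of predictions inside the desired KPI band.
    rng = np.random.default_rng(seed)
    X_rand = rng.uniform(X.min(axis=0), X.max(axis=0),
                         size=(n_random, X.shape[1]))
    report = {}
    for name, model in candidates.items():
        model.fit(X_tr, y_tr)
        pred = np.ravel(model.predict(X_te))
        rand_pred = np.ravel(model.predict(X_rand))
        report[name] = {
            "rmse": mean_squared_error(y_te, pred) ** 0.5,
            "mae": mean_absolute_error(y_te, pred),
            "robustness": float(np.mean((rand_pred >= kpi_band[0]) &
                                        (rand_pred <= kpi_band[1]))),
        }
    return report
```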
[0048] It is contemplated that the predictive performance of the models is likely to decrease over time as new or future data are obtained for prediction, and a self-learning option is provided to the user to improve the accuracy of the predictive models. For self-learning, the original data used to develop the models and the data for the new time period are combined, and the model building step is repeated on the combined data set. Self-learning can be triggered automatically on a periodic basis (for example, every week or every month) or by the user based on statistical measurements related to the models or to the new data set. Statistical measurements related to the models could be performance metrics, such as root mean square error, mean absolute error, Akaike information criterion, corrected Akaike information criterion, Bayesian information criterion or hit rate, while statistical measurements related to the new data set could be a percentage deviation of the new data from the original data or a multivariate distance between the original data set and the new data set.

[0049] In the preferred embodiment, the optimization module is configured to optimize at least one KPI based on the one or more predictive models and constraints on one or more KPIs using one or more optimization techniques, wherein the one or more optimization techniques comprise gradient search, linear programming, simulated annealing and evolutionary algorithms.

[0050] Referring to Figure 11, a schematic diagram of the optimization is shown, in which the KPIs to be optimized, with restrictions on the variables used in the predictive models, are considered as user inputs, and the values of the variables that yield optimum levels of the KPIs are determined. When any single KPI needs to be optimized, the problem is to minimize or maximize that KPI, and the solution consists of the variable values that lead to the minimum or maximum KPI. When two or more KPIs need to be optimized simultaneously, the problem is to minimize a cost function (for example, cost function = 0.6 KPI1 + 0.4 KPI2 - 1.2 KPI3), and the solution consists of a set of Pareto-optimal operating points for the process. The cost function for optimizing multiple KPIs is built using the weights assigned to each of the KPIs by the user. Various optimization techniques, such as gradient search, linear programming, goal programming, simulated annealing and evolutionary algorithms such as genetic algorithms, are used. The optimization problem is sent, through the solver interface, to the optimization solvers for single-objective or multi-objective optimization algorithms, such as rule-based, fuzzy-logic-based and gradient-based solvers. The solutions received from the solvers are processed and notified to the user. The outputs of the optimization step for the user comprise the values of the variables that yield optimized KPIs and the optimized values of the KPIs, the set of Pareto-optimal operating points and the values of the KPIs at those points, and plots of the Pareto-optimal operating points.
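A hedged sketch of the optimization step is given below: a weighted cost over the predicted KPIs is minimized within variable bounds using differential evolution, one of the evolutionary algorithms mentioned above. The fitted models, weights and bounds shown in the usage comment are placeholders rather than values from the disclosure.

```python
# Sketch of multi-KPI optimization: minimize a user-weighted cost built from
# the predictive models, within bounds on the decision variables.
import numpy as np
from scipy.optimize import differential_evolution

def optimize_kpis(models, weights, bounds, seed=0):
    """models: fitted predictors, one per KPI, each accepting a 2-D array of
    decision variables; weights: e.g. [0.6, 0.4, -1.2] as in the cost function
    quoted above; bounds: list of (low, high) pairs per decision variable."""
    def cost(x):
        x = np.asarray(x).reshape(1, -1)
        kpis = [float(np.ravel(m.predict(x))[0]) for m in models]
        return float(np.dot(weights, kpis))
    result = differential_evolution(cost, bounds=bounds, seed=seed)
    return result.x, result.fun   # recommended operating point and its cost

# Hypothetical usage:
# best_x, best_cost = optimize_kpis([kpi1_model, kpi2_model, kpi3_model],
#                                   weights=[0.6, 0.4, -1.2],
#                                   bounds=[(300, 400), (0.5, 2.0), (10, 50)])
```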
[0051] Referring to Figures 12(a) and 12(b), a method 400 for analyzing a plurality of data from one or more industrial processing units to optimize the key performance indicators (KPIs) of the industry is illustrated.

[0052] In step 402, the receiving module receives the plurality of data from one or more industrial processing units, wherein the plurality of data comprises characteristics of raw materials, characteristics of intermediate products, by-products and final products, process parameters, environmental parameters, market demand, availability of raw materials and condition of process equipment.

[0053] In step 404, the unit-level fusion module merges the plurality of received data to obtain a per-unit data set from each of the one or more industrial processing units, wherein the per-unit data set of each processing unit has a desired sampling frequency.

[0054] In step 406, the verification module verifies the merged per-unit data set of the one or more industrial processing units, wherein the presence of unwanted values, percentage of availability, standard deviation and interquartile range of all variables of the processing unit are calculated.

[0055] In step 408, the data pre-processing module pre-processes the verified plurality of data to obtain a pre-processed data set from each of the one or more industrial processing units, wherein pre-processing is an iterative process comprising the steps of outlier removal, imputation of missing values and clustering. The outputs of the data pre-processing module for the user include a list of discarded variables, the number and percentage of outliers removed for each variable, the technique used for imputing missing values in each variable, the mean, standard deviation and median of each variable before and after pre-processing, and trend plots of all variables before and after pre-processing.

[0056] In step 410, the enterprise-level fusion module integrates the pre-processed data of each of the one or more industrial processing units with one or more values of simulated variables from one or more physics-based models and one or more domain inputs from the user to obtain an enterprise-level data set, wherein the per-unit data sets are merged and synchronized taking into account the time lags due to residence times in the several units, transport times between the one or more industrial processing units and response times of one or more sensors of the processing units. The outputs of the enterprise-level fusion module for the user comprise a list of simulated parameters, and the range, mean, standard deviation and median of all variables in the integrated data set.

[0057] In step 412, the regime identification module identifies one or more operating regimes using one or more clustering techniques on the enterprise-level data set, wherein the one or more clustering techniques comprise distance-based clustering, density-based clustering and hierarchical clustering.

[0058] In step 414, the baseline statistics module determines ranges of one or more variables corresponding to the KPIs of the enterprise-level data set, based on predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of the one or more variables are used to generate one or more KPI plots for the time period over which the analysis is performed.
The outputs of the baseline statistics module for the user comprise the percentages of the time period in which the KPIs are in the desired and undesired ranges, the ranges of the variables that correspond to the desired and undesired ranges of the KPIs, the ranges of the KPIs at different productivity levels, correlation coefficients between the KPIs and other variables, trend plots and box plots of the KPIs and other variables, scatter plots between the KPIs and the variables of interest, and heat maps of average KPI values.

[0059] In step 416, the feature selection module selects one or more features from the enterprise-level data set to obtain a superset of one or more features selected from the enterprise-level data set, wherein feature selection is performed on all per-regime data sets as well as on the enterprise-level data set. The outputs of the feature selection module for the user comprise the superset of features and their importance scores for the integrated and per-regime data sets, and parallel coordinate plots of the features.

[0060] In step 418, the model building module develops one or more predictive models for each KPI, wherein the one or more predictive models use the enterprise-level data set and the superset of one or more features selected from the enterprise-level data set. The outputs of the model building and discrimination module for the user comprise performance metrics for all predictive models, the three best predictive models ranked on the basis of RMSE and MAE, robustness scores for the three best models, and sensitivity scores for all variables in the robust models. In addition, the outputs of the model building and discrimination module for the user also comprise trend plots of actual and predicted KPI values, scatter plots of actual versus predicted KPI values, and residual plots of absolute error versus all variables in the robust models.

[0061] In the final step 420, the optimization module optimizes at least one KPI based on the one or more predictive models and constraints on one or more KPIs using one or more optimization techniques, wherein the one or more optimization techniques comprise gradient search, linear programming, simulated annealing and evolutionary algorithms. The outputs of the optimization module for the user comprise the values of the variables that yield optimized KPIs (Pareto-optimal operating points), the optimized KPI values and plots of the Pareto-optimal operating points.

[0062] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they comprise equivalent elements with insubstantial differences from the literal language of the claims.

[0063] Disclosed are a system and method for performing data-based optimization of performance indicators of manufacturing and process plants. The system consists of modules for collecting and merging data from industrial processing units and pre-processing the data to remove outliers and missing values. In addition, the system generates customized data outputs and identifies the important variables that affect a given process performance indicator.
The system also builds predictive models for the key performance indicators that comprise the important features, and determines operating points to optimize the key performance indicators with minimal user intervention. In particular, the system receives inputs from users on the key performance indicators to be optimized and notifies users of the outputs of the various steps in the analysis, which helps users to effectively manage the analysis and make appropriate operational decisions.

[0064] The embodiments of the present disclosure are directed to the unsolved problem of optimizing the performance indicators used to monitor the performance of manufacturing and process plant industries, in addition to the pre-processing of industrial data received from a variety of sources that have different record formats and frequencies.

[0065] However, it must be understood that the scope of protection extends to such a program and, in addition, to a computer-readable medium having a message therein; such computer-readable storage media contain program code means for implementing one or more steps of the method, when the program is executed on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device that can be programmed, including, for example, any kind of computer, such as a personal computer or a server, or the like, or any combination thereof. The device may also comprise means that could be, for example, hardware means, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, for example, an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device can also comprise software means. Alternatively, the embodiments can be implemented on different hardware devices, for example, using a plurality of central processing units (CPUs).

[0066] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include, but are not limited to, firmware, resident software, microcode, etc. The functions performed by the various modules described herein can be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can comprise, store, communicate, propagate or transport the program for use by or in connection with the instruction execution system, apparatus or device.

[0067] The medium can be an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disc read-only memory (CD-ROM), compact disc read/write (CD-R/W) and digital video disc (DVD).
[0068] A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories that provide temporary storage of at least some program code in order to reduce the number of times the code must be retrieved from bulk storage during execution.

[0069] Input/output (I/O) devices (including, but not limited to, keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters can also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems and Ethernet cards are just a few of the currently available types of network adapters.

[0070] A representative hardware environment for practicing the embodiments may comprise a hardware configuration of a computing/information handling system in accordance with the embodiments herein. The system herein comprises at least one processor or central processing unit (CPU). The CPUs are interconnected via the system bus to various devices, such as a random access memory (RAM), a read-only memory (ROM) and an input/output (I/O) adapter. The I/O adapter can connect to peripheral devices, such as disk units and tape drives, or other program storage devices that are readable by the system. The system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments herein.

[0071] The system additionally comprises a user interface adapter that connects a keyboard, mouse, speaker, microphone and/or other user interface devices, such as a touchscreen device (not shown), to the bus to gather user input. In addition, a communication adapter connects the bus to a data processing network, and a display adapter connects the bus to a display device, which can be embodied as an output device, such as a monitor, printer or transmitter, for example.

[0072] The foregoing description has been presented with reference to various embodiments. Persons skilled in the art and technology to which this application pertains will appreciate that alterations and changes in the described structures and methods of operation can be practiced without meaningfully departing from the principle, spirit and scope.
Claims
1. A computer-implemented method for analyzing a plurality of data from one or more industrial processing units to optimize the key performance indicators (KPIs) of one or more units of a process plant, the method being characterized by the fact that it comprises the steps of:
receiving, in a receiving module (108), a plurality of data from one or more industrial processing units, wherein the plurality of data comprises characteristics of raw materials, characteristics of intermediate products, by-products and final products, process parameters, environmental parameters, market demand, availability of raw materials and condition of process equipment;
merging, in a unit-level fusion module (110), the plurality of received data to obtain a per-unit data set from each of the one or more industrial processing units, wherein the per-unit data set of each processing unit has a desired sampling frequency;
verifying, in a verification module (112), the merged per-unit data set of the one or more industrial processing units, wherein the presence of unwanted values, percentage of availability, standard deviation and interquartile range of all variables of the processing unit are calculated;
pre-processing, in a data pre-processing module (114), the verified plurality of data to obtain a pre-processed data set from each of the one or more industrial processing units, wherein pre-processing is an iterative process comprising the steps of outlier removal, imputation of missing values and clustering;
integrating, in an enterprise-level fusion module (116), the pre-processed data of each of the one or more industrial processing units with one or more values of simulated variables from one or more physics-based models and one or more domain inputs from the user to obtain an enterprise-level data set, wherein the per-unit data sets are merged and synchronized taking into account the time lags due to residence times in the several units, material transport times between the one or more industrial processing units and response times of one or more sensors of the processing units;
identifying, in a regime identification module (118), one or more operating regimes using one or more clustering techniques on the enterprise-level data set, wherein the one or more clustering techniques
comprise distance-based clustering, density-based clustering and hierarchical clustering;
determining, in a baseline statistics module (120), ranges of one or more variables that correspond to the KPIs of the enterprise-level data set, based on predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of the one or more variables are used to generate one or more KPI plots for the period over which the analysis is performed;
selecting, in a feature selection module (122), one or more features or key variables from the enterprise-level data set to obtain a superset of one or more features selected from the enterprise-level data set, wherein feature selection is performed on all per-regime data sets as well as on the enterprise-level data set;
developing, in a model building module (124), one or more predictive models for each KPI, wherein the one or more predictive models use the enterprise-level data set and the superset of one or more features selected from the enterprise-level data set; and
optimizing, in an optimization module (126), at least one KPI based on the one or more predictive models and restrictions on one or more KPIs using one or more optimization techniques, wherein the one or more optimization techniques include gradient search, linear programming, goal programming, simulated annealing and evolutionary algorithms.
2. Method, according to claim 1, characterized by the fact that pre-processing is performed on variables that have a predefined percentage of availability and a predefined pattern of missingness.
3. Method, according to claim 1, characterized by the fact that the integration of the plurality of pre-processed data from the one or more industrial units is based on a predefined baseline process unit.
4. Method, according to claim 1, characterized by the fact that the one or more plots include KPI trend plots, KPI box plots, scatter plots and heat maps.
5. Method, according to claim 1, characterized by the fact that the feature selection is carried out in two stages, wherein: in the first stage, important features are obtained from one or more feature selection techniques, and in the second stage, the features obtained from the first stage are ranked using the geometric mean scoring method and combined to obtain a single superset of one or more features.
6. Method, according to claim 5, characterized by the fact that the feature with the lowest score among the one or more features selected in the first stage is of greater relevance in relation to the KPI.
7. Method, according to claim 5, characterized by the fact that the first stage of the one or more feature selection techniques comprises model-based and non-model-based methods.
8. Method, according to claim 1, characterized by the fact that the feature selection is carried out on all per-regime data sets, as well as on the enterprise-level data set.
9. Method, according to claim 1, characterized by the fact that the outputs for the user from the data pre-processing module (114) include a list of discarded variables, the number and percentage of outliers removed for each variable, the technique used for imputing missing values in each variable, the mean, median and standard deviation of each variable before and after pre-processing, and trend plots of all variables before and after pre-processing.
10. Method, according to claim 1, characterized by the fact that the outputs to the user from the enterprise-level fusion module (116) include a list of simulated parameters, and the range, mean, median and standard deviation of all variables in the integrated data set.
11. Method, according to claim 1, characterized by the fact that the outputs to the user from the baseline statistics module (120) include the ranges of variables that correspond to the desired and undesired ranges of the KPIs, the KPI ranges at different productivity levels, and the correlation coefficients between the KPIs and other variables.
12. Method, according to claim 1, characterized by the fact that the outputs to the user from the baseline statistics module (120) include trend plots and box plots of KPIs and other variables, scatter plots between KPIs and variables of interest, and heat maps of average KPI values.
13. Method, according to claim 1, characterized by the fact that the outputs to the user from the feature selection module (122) include the superset of features and their importance scores for the integrated and per-regime data sets, and parallel coordinate plots of the features.
14. Method, according to claim 1, characterized by the fact that the outputs to the user from the model building module (124) include performance metrics for all predictive models, the three main predictive models developed based on the mean square error and mean absolute error, robustness scores for the three main models, and sensitivity scores for all variables in the robust models.
15. Method, according to claim 1, characterized by the fact that the outputs to the user from the model building module (124) also include trend plots of the actual and predicted KPI values, scatter plots of actual versus predicted KPI values, and plots of residual absolute error versus all variables in the robust models.
16. Method, according to claim 1, characterized by the fact that the outputs to the user from the optimization module (126) include the values of variables that yield optimal KPIs (Pareto-optimal operating points), the optimal values of the KPIs, and plots of the Pareto-optimal operating points.
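As an illustration of the optimization step of claim 1 and of the Pareto-optimal operating points mentioned in claim 16, the sketch below optimizes a single KPI over a data-driven predictive model with an evolutionary algorithm, one of the technique families named in the claims. It is a minimal single-objective example under assumed conditions (synthetic data, scikit-learn and SciPy, bounds taken from the observed operating window); multi-objective Pareto-front generation is not shown.

```python
# Illustrative sketch of optimizing a KPI over a fitted predictive model
# with an evolutionary algorithm. Single-objective only; generation of a
# Pareto front for several KPIs is outside the scope of this example.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Train a stand-in predictive model for the KPI on synthetic plant data.
X, y = make_regression(n_samples=400, n_features=4, noise=5.0, random_state=1)
model = GradientBoostingRegressor(random_state=1).fit(X, y)

# Operating windows for the manipulated variables (assumed, for illustration).
bounds = [(X[:, j].min(), X[:, j].max()) for j in range(X.shape[1])]

def objective(x: np.ndarray) -> float:
    """Negative predicted KPI, so that minimization maximizes the KPI."""
    return -float(model.predict(x.reshape(1, -1))[0])

# Evolutionary search for an operating point that maximizes the predicted KPI.
result = differential_evolution(objective, bounds, seed=1, maxiter=50)
print("suggested operating point:", np.round(result.x, 3))
print("predicted KPI at that point:", -result.fun)
```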
17. System (100) for analyzing a plurality of data from one or more industrial processing units to optimize key industry performance indicators, the system being characterized by the fact that it comprises:
a memory (104) with instructions;
at least one processor (102) communicatively coupled with the memory;
a plurality of interfaces (106), wherein the plurality of interfaces comprises a graphical user interface, a server interface, a physics-based model interface and a solver interface;
a receiving module (108) configured to receive a plurality of data from one or more industrial processing units, wherein the plurality of data comprises characteristics of raw materials, characteristics of by-products, final and intermediate products, process parameters and condition of process equipment;
a unit-level fusion module (110) configured to merge the plurality of data received to obtain a data set per unit from each of the one or more industrial processing units, wherein the data set per unit of each processing unit comprises a desired sampling frequency;
a verification module (112) configured to verify the merged data set per unit of the one or more industrial processing units, wherein the presence of absurd values, the percentage of availability, the standard deviation and the interquartile range of all processing unit variables are calculated;
a data preprocessing module (114) configured to preprocess the plurality of verified data to obtain a preprocessed data set from each of the one or more industrial processing units, wherein preprocessing is an iterative process that comprises the steps of outlier removal, imputation of missing values and clustering;
an enterprise-level fusion module (116) configured to integrate the preprocessed data set of each of the one or more industrial processing units with one or more values of simulated variables from one or more physics-based models and one or more domain inputs from the user to obtain an enterprise-level data set, wherein the data sets per unit are merged and synchronized taking into account the time lags due to dwell times in different units, transport times between the one or more industrial processing units and the response times of one or more sensors of the processing units;
a regime identification module (118) configured to identify one or more operating regimes using one or more clustering techniques on the enterprise-level data set, wherein the one or more clustering techniques comprise distance-based clustering, density-based clustering and hierarchical clustering;
a baseline statistics module (120) configured to determine ranges of one or more variables that correspond to the KPIs of the enterprise-level data set, based on predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of the one or more variables are used to generate one or more KPI plots for the period over which the analysis is performed;
a feature selection module (122) configured to select one or more features from the enterprise-level data set to obtain a superset of one or more features selected from the enterprise-level data set, wherein the feature selection is performed on all data sets per regime as well as on the enterprise-level data set;
a model building module (124) configured to develop one or more predictive models for each KPI, wherein the one or more predictive models use the enterprise-level data set and the superset of one or more features selected from the enterprise-level data set; and
an optimization module (126) configured to optimize at least one KPI based on the one or more predictive models and constraints on one or more KPIs using one or more optimization techniques, wherein the one or more optimization techniques include gradient search, linear programming, simulated annealing and evolutionary algorithms.
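One way to picture the regime identification module (118) of claim 17 is the sketch below, which assigns the records of a synthetic enterprise-level data set to operating regimes using distance-based clustering (k-means). The number of regimes, the scaling step and the use of scikit-learn are assumptions made only for this illustration.

```python
# Illustrative sketch of regime identification by distance-based clustering,
# one of the clustering families recited for module (118). The choice of
# three regimes and of standard scaling is an assumption for this example.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an enterprise-level data set with three regimes.
X, _ = make_blobs(n_samples=600, centers=3, n_features=5, random_state=2)
data = pd.DataFrame(X, columns=[f"tag_{i}" for i in range(5)])

# Scale the variables and assign each record to an operating regime.
scaled = StandardScaler().fit_transform(data)
labels = KMeans(n_clusters=3, n_init=10, random_state=2).fit_predict(scaled)
data["regime"] = labels

# Per-regime data sets, e.g. for regime-wise feature selection and modelling.
print(data.groupby("regime").size())
```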
18. System, according to claim 17, characterized by the fact that the one or more physics-based models are used to calculate the one or more simulated variables.
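A hypothetical example of claim 18 is sketched below: a simulated variable is computed from a simple steady-state energy balance and appended to the measured plant data before enterprise-level fusion. The energy-balance relation, the constant heat capacity and the tag names are assumptions chosen for illustration, not the claimed physics-based models themselves.

```python
# Illustrative sketch of deriving a simulated variable from a simple
# physics-based relation and appending it to the plant data set. The heater
# energy balance and the constant heat capacity are assumptions made only
# for this example.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 200
plant = pd.DataFrame({
    "feed_rate_kg_s": rng.uniform(8.0, 12.0, n),   # measured mass flow
    "inlet_temp_C": rng.uniform(25.0, 35.0, n),    # measured inlet temperature
    "outlet_temp_C": rng.uniform(80.0, 95.0, n),   # measured outlet temperature
})

CP_KJ_PER_KG_K = 4.18  # assumed constant heat capacity of the process stream

# Simulated variable from a steady-state energy balance: Q = m * cp * (Tout - Tin).
plant["heat_duty_kW"] = (
    plant["feed_rate_kg_s"] * CP_KJ_PER_KG_K
    * (plant["outlet_temp_C"] - plant["inlet_temp_C"])
)

# The simulated column can now be fused with the measured variables at the
# enterprise level before regime identification and model building.
print(plant.head())
```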
19. One or more non-transitory machine-readable information storage media characterized by the fact that they comprise one or more instructions that, when executed by one or more hardware processors, perform actions that comprise:
receiving, in a receiving module (108), a plurality of data from one or more industrial processing units, wherein the plurality of data comprises characteristics of raw materials, characteristics of intermediate products, by-products and final products, process parameters, environmental parameters, market demand, availability of raw materials and condition of process equipment;
merging, in a unit-level fusion module (110), the plurality of data received to obtain a data set per unit from each of the one or more industrial processing units, wherein the data set per unit of each processing unit comprises a desired sampling frequency;
verifying, in a verification module (112), the merged data set per unit of the one or more industrial processing units, wherein the presence of absurd values, the percentage of availability, the standard deviation and the interquartile range of all processing unit variables are calculated;
preprocessing, in a data preprocessing module (114), the plurality of verified data to obtain a preprocessed data set for each of the one or more industrial processing units, wherein preprocessing is an iterative process that comprises the steps of outlier removal, imputation of missing values and clustering;
integrating, in an enterprise-level fusion module (116), the preprocessed data set of each of the one or more industrial processing units with one or more values of simulated variables from one or more physics-based models and one or more domain inputs from the user to obtain an enterprise-level data set, wherein the data sets per unit are merged and synchronized taking into account the time lags due to dwell times in different units, material transport times between the one or more industrial processing units and the response times of one or more sensors of the processing units;
identifying, in a regime identification module (118), one or more operating regimes using one or more clustering techniques on the enterprise-level data set, wherein the one or more clustering techniques comprise distance-based clustering, density-based clustering and hierarchical clustering;
determining, in a baseline statistics module (120), ranges of one or more variables that correspond to the KPIs of the enterprise-level data set, based on predefined baseline statistics and the one or more operating regimes, wherein the determined ranges of the one or more variables are used to generate one or more KPI plots for the period over which the analysis is performed;
selecting, in a feature selection module (122), one or more features or key variables from the enterprise-level data set to obtain a superset of one or more features selected from the enterprise-level data set, wherein the feature selection is performed on all data sets per regime as well as on the enterprise-level data set;
developing, in a model building module (124), one or more predictive models for each KPI, wherein the one or more predictive models use the enterprise-level data set and the superset of one or more features selected from the enterprise-level data set; and
optimizing, in an optimization module (126), at least one KPI based on the one or more predictive models and constraints on one or more KPIs using one or more optimization techniques, wherein the one or more optimization techniques include gradient search, linear programming, goal programming, simulated annealing and evolutionary algorithms.
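To illustrate the unit-level fusion and time synchronization recited in claims 1, 17 and 19, the sketch below resamples two units to a common sampling frequency and shifts the downstream unit by an assumed dwell time before merging. The tag names, sampling frequencies and the 30-minute residence time are illustrative assumptions, and pandas is assumed as the data-handling library.

```python
# Illustrative sketch of unit-level fusion with time-lag synchronization:
# two units are brought to a common sampling frequency and the downstream
# unit is shifted by an assumed residence time before the merge.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
idx_fast = pd.date_range("2018-05-15 00:00", periods=360, freq="min")
idx_slow = pd.date_range("2018-05-15 00:00", periods=72, freq="5min")

unit_a = pd.DataFrame({"mill_power_kW": rng.normal(950, 20, len(idx_fast))}, index=idx_fast)
unit_b = pd.DataFrame({"kiln_temp_C": rng.normal(1450, 15, len(idx_slow))}, index=idx_slow)

# Bring both units to the desired sampling frequency (hourly averages here).
a_hourly = unit_a.resample("1h").mean()
b_hourly = unit_b.resample("1h").mean()

# Account for material transport / dwell time: unit B sees material about
# 30 minutes after unit A, so its index is shifted back before alignment.
b_aligned = b_hourly.shift(freq=pd.Timedelta(minutes=-30)).resample("1h").mean()

# Fused data set on a common time base, ready for enterprise-level analysis.
fused = a_hourly.join(b_aligned, how="inner")
print(fused.head())
```

The fused frame would then pass through verification, preprocessing and the remaining steps of the claimed workflow.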
Patent family:
Publication number | Publication date
JP2018195308A | 2018-12-06
US10636007B2 | 2020-04-28
US20180330300A1 | 2018-11-15
AU2018203375A1 | 2018-11-29
EP3404593A1 | 2018-11-21
CN108875784A | 2018-11-23
Legal status:
2019-03-12 | B03A | Publication of a patent application or of a certificate of addition of invention [chapter 3.1 patent gazette]
Priority:
Application number | Filing date
IN201721009012 | 2017-05-15