
A Generic Recommender System for Continuous Optimization based on Deep Neural Network

1 Raju Anitha, 2 Dr. K. Krishnaiah, 3 G. Anusha

1 Department of Computer Science & Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, 522 502, India

2 Department of Computer Science & Engineering, St. Mary's Group of Institutions, Hyderabad

3 Department of Information Technology, Sri Indu College of Engineering, Hyderabad

Abstract

As revealed by the no free lunch theorem, no single algorithm can outperform all others on all classes of optimization problems. To tackle this problem, methods have been developed to recommend an existing algorithm for a given problem. However, existing recommendation methods for continuous optimization have poor practicality and transferability, mainly because of the difficulty of extracting features that effectively describe the structure of a problem and the lack of training data. This paper proposes a generic system to deal with the two challenges mentioned above.

First, a new method is proposed that represents the analytical objective function of a continuous optimization problem directly as a tree. Second, on the basis of the proposed tree representation, a large number of benchmark problems are generated randomly, providing abundant training data with different levels of difficulty. A recommendation model is then trained by a deep recurrent neural network, which recommends a metaheuristic algorithm for a white-box or black-box optimization problem; this is a significant step towards fully automated algorithm recommendation for continuous optimization. In short, the proposed system can automatically select a metaheuristic algorithm suited to a particular problem without trial and error, by developing a tree-like data structure to represent the optimization problem and training a deep recurrent neural network to learn the best metaheuristic.

Keywords: Metaheuristic Algorithm, Genetic Algorithm, Optimization, Deep Recurrent Network

I. INTRODUCTION

Many metaheuristics inspired by biological mechanisms and swarm behavior in nature have seen extensive development and use over recent decades, for example genetic algorithms [1], particle swarm optimization [2], differential evolution [3], ant colony optimization [4], and many others. These are high-level methods that do not rely on particular problem characteristics, and they show attractive competitiveness in solving many complex optimization problems. However, as the no-free-lunch theorem reveals [8], there is no single algorithm that can outperform all others on all kinds of optimization problems. Therefore, much effort has been devoted to designing algorithms adapted to specific classes of problems. The following figure shows the generic framework for algorithm recommendation.


Fig 1: A Generic Framework for Algorithm Recommendation

Since the development of algorithms based on domain-specific knowledge can require special expertise, it is difficult for engineers to customize new algorithms for different applications. This has led to a new field of research aimed at analysing metaheuristic performance. Offline methods are designed to choose a single best-performing algorithm before resolving the problem, a task known as algorithm recommendation [2]. As shown in Fig. 1, algorithm recommendation can be considered a classification task in which each sample is made up of the features that represent the structure of the optimization problem, and the label for that problem is the index of the best optimization algorithm. Methods for algorithm recommendation for several combinatorial optimization problems have been developed over the past decade. By contrast, algorithm recommendation research is still in its infancy for continuous optimization problems, with only a small number of methods developed, based on a few algorithms and problem classes. Algorithm recommendation for continuous optimization problems is challenging in the two following aspects: feature extraction and sample generation.

This work presents a new recommender system that represents a continuous optimization problem as a tree structure and feeds it into a deep recurrent neural network, improving the accuracy, interpretability, and transferability of algorithm recommendation for continuous optimization problems. In summary, the proposed recommender system differs from existing work in the following three aspects: i) the feature extraction strategy, ii) the training sample collection strategy, and iii) the recommendation model.

II. FEATURE EXTRACTION FROM OPTIMIZATION PROBLEMS

The standard form of a continuous optimization problem is

Minimize f(x)
subject to gi(x) ≤ 0, i = 1, …, m
           hj(x) = 0, j = 1, …, p        (1)

where f: ℝⁿ → ℝ is the objective function to be minimized over the n-dimensional variable vector x, gi(x) ≤ 0 are the inequality constraints, hj(x) = 0 are the equality constraints, and m ≥ 0 and p ≥ 0.
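For illustration, the following is a tiny concrete instance of form (1); this example is ours, not from the paper:

```python
# Minimize f(x) = x1^2 + x2^2 subject to g1(x) = 1 - x1 <= 0 (i.e., x1 >= 1).
f = lambda x: x[0] ** 2 + x[1] ** 2
g1 = lambda x: 1.0 - x[0]

x_star = (1.0, 0.0)                        # optimum: smallest f with x1 >= 1
assert g1(x_star) <= 0 and f(x_star) == 1.0  # feasible, objective value 1
```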

The continuous optimization problems considered in this work can be defined mathematically as follows:

Minimize f(x)
subject to x ∈ Ω ⊆ ℝᵈ        (2)

where f denotes the objective function, x = (x1, …, xd) denotes the decision vector, Ω denotes the decision space, and d is the number of decision variables. The proposed tree structure contains various types of operands, including the decision vector, together with vector-oriented operators, and it adopts symbolic regression to estimate the operands and operators contained in a given black-box problem, based on existing genetic programming methods [45]. The symbolic regression procedure is presented in Algorithm 1.

A set of solutions X is sampled uniformly in the decision space by Latin hypercube sampling, and the black-box objective values f(x) are calculated. The objective values g(x) on every randomly generated benchmark problem are also computed, and the similarity between f and g is measured as the approximation error, using the following loss function:

sim(f, g) = Σ_{x∈X} (f(x) − g(x))²        (3)
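As a sketch of how Eq. (3) can be computed, assuming NumPy and SciPy's quasi-Monte Carlo module (scipy.stats.qmc) are available; the function names here are ours, not the paper's:

```python
import numpy as np
from scipy.stats import qmc

def sim(f, g, lower, upper, n_samples=100, seed=0):
    """Approximation error of Eq. (3): sum of squared differences between
    the black-box f and a candidate g over a Latin hypercube sample."""
    sampler = qmc.LatinHypercube(d=len(lower), seed=seed)
    X = qmc.scale(sampler.random(n_samples), lower, upper)  # (n_samples, d)
    return float(sum((f(x) - g(x)) ** 2 for x in X))

# Example: compare the sphere function with an absolute-value surrogate.
f = lambda x: np.sum(x ** 2)
g = lambda x: np.sum(np.abs(x))
print(sim(f, g, lower=[-5.0, -5.0], upper=[5.0, 5.0]))
```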

The tree structure defines seven operands and twenty operators, with operands stored in the leaf nodes and operators stored in the non-leaf nodes. The operands and operators fall into five groups: numbers, decision variables, binary operators, unary operators, and vector-oriented operators.

Numbers: In the functions of continuous optimization problems, real numbers are the most common operands. A real constant is represented by the notation a in the proposed method.

Decision variables: The decision vector x, represented by the notation x in the tree, is required in the functions. Since x alone can only represent fully separable functions, additional notations are used to represent partially separable and non-separable functions, similar to a number of well-known benchmark problems: the first variable x1 serves as a link between all of the variables and a single one, the translated decision vector xt shifts the variables, the rotated decision vector xr provides complex linkages between all of the variables, and the index vector index provides a variety of optimal values for the variables.

Binary operators: Four basic binary operators are considered in the tree, namely addition, subtraction, multiplication, and division.

Unary operators: Eleven unary operators are considered in the tree. The unary operators neg, rec, and multen change the ranges of real constants and decision variables; square, sqrt, abs, log, and exp can provide unimodal landscapes; sin and cos can provide multimodal landscapes; and the remaining unary operator can create flat landscapes.

Vector-oriented operators: Since the functions of continuous optimization problems are calculated from a decision vector rather than a single decision variable, a vector-oriented operator must map the multidimensional decision vector into a single objective value. In this respect, five vector-oriented operators are considered in the tree, which calculate the sum, average, cumulative sum, product, and maximum value of all the variables in the decision vector.
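To make the representation concrete, the following is a minimal Python sketch of such an expression tree, assuming NumPy; the operator vocabulary is a subset of the groups listed above, and the helper names are illustrative rather than from the paper:

```python
import numpy as np

# Operator vocabulary following the groups above (a subset, for brevity).
UNARY = {'neg': np.negative, 'square': np.square, 'abs': np.abs,
         'sin': np.sin, 'cos': np.cos, 'exp': np.exp}
BINARY = {'add': np.add, 'sub': np.subtract, 'mul': np.multiply, 'div': np.divide}
VECTOR = {'sum': np.sum, 'mean': np.mean, 'prod': np.prod, 'max': np.max}

class Node:
    """A tree node: operands sit in leaves, operators in non-leaf nodes."""
    def __init__(self, value, children=()):
        self.value, self.children = value, list(children)

def evaluate(node, x):
    """Recursively evaluate the subtree rooted at node on decision vector x."""
    v = node.value
    if isinstance(v, float):                 # number operand (leaf)
        return v
    if v == 'x':                             # decision-vector operand (leaf)
        return x
    args = [evaluate(c, x) for c in node.children]
    return (UNARY.get(v) or BINARY.get(v) or VECTOR[v])(*args)

# Example: the sphere function f(x) = sum(x_i^2) as a tree.
sphere = Node('sum', [Node('square', [Node('x')])])
print(evaluate(sphere, np.array([1.0, 2.0, 3.0])))   # -> 14.0
```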

III. A DEEP LEARNING BASED CLASSIFIER FOR AUTOMATED ALGORITHM RECOMMENDATION

As input attributes to the classifier, the proposed method uses the components of the tree representation of a continuous optimization problem, transforming the tree into a reverse Polish expression. The reverse Polish expression contains no parentheses and can be properly transformed back into a function. Note that a general expression cannot always be converted back unambiguously, whereas the reverse Polish expression can, because the type (i.e., operand or operator) of every node in the tree is known: leaf nodes are always operands and non-leaf nodes are always operators.
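Since leaf nodes are always operands and non-leaf nodes always operators, a post-order traversal of the tree yields the reverse Polish expression unambiguously. A minimal sketch, reusing the Node class and sphere tree from the previous snippet:

```python
def to_reverse_polish(node):
    """Post-order traversal: children first, then the node itself."""
    tokens = []
    for child in node.children:
        tokens.extend(to_reverse_polish(child))
    tokens.append(str(node.value))
    return tokens

print(to_reverse_polish(sphere))   # -> ['x', 'square', 'sum']
```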

Fig 2: DNN method for Algorithm Recommendation

Algorithm 1:

Step 1: Input the parent population P
Step 2: S <- ∅
Step 3: while P ≠ ∅ do
            {t1, t2} <- pick two trees from P;
            P <- P \ {t1, t2};
            T1 <- randomly select a subtree from t1;
            T2 <- randomly select a subtree from t2;
            swap(T1, T2);
            S <- S ∪ {t1, t2};
Step 4: for each tree in S do
            if rand < mutation probability then
                r <- rand;
                if r < 1/3 then
                    t <- randomly select a leaf node from tree;
                    extend t by a randomly selected operator;
                else if r < 2/3 then
                    t <- randomly select a subtree from tree;
                    t1 <- randomly select a subtree from t;
                    replace t with t1 in tree;
                else
                    t <- randomly select a node from tree;
                    t1 <- randomly generate a node of the same type;
                    replace t with t1 in tree;
Step 5: return S
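The following is a minimal runnable sketch of Algorithm 1 in Python, reusing the Node class and operator tables from the earlier snippet. The subtree selection and node-type handling are simplifications for illustration, not the paper's exact implementation:

```python
import copy
import random

def all_nodes(tree):
    """Every node in the tree; each node roots a subtree."""
    nodes = [tree]
    for child in tree.children:
        nodes.extend(all_nodes(child))
    return nodes

def crossover_and_mutate(P, mutation_prob=0.3):
    P, S = [copy.deepcopy(t) for t in P], []
    random.shuffle(P)
    # Step 3: pairwise subtree crossover.
    while len(P) >= 2:
        t1, t2 = P.pop(), P.pop()
        T1, T2 = random.choice(all_nodes(t1)), random.choice(all_nodes(t2))
        # Swap the two subtrees by exchanging their contents in place.
        T1.value, T2.value = T2.value, T1.value
        T1.children, T2.children = T2.children, T1.children
        S.extend([t1, t2])
    S.extend(P)  # a leftover tree (odd population size) passes through
    # Step 4: one of three mutations, each chosen with probability 1/3.
    for tree in S:
        if random.random() < mutation_prob:
            r = random.random()
            if r < 1 / 3:
                # Extend a random leaf by a randomly selected unary operator.
                leaf = random.choice([n for n in all_nodes(tree) if not n.children])
                leaf.children = [Node(leaf.value)]
                leaf.value = random.choice(list(UNARY))
            elif r < 2 / 3:
                # Replace a random subtree by one of its own subtrees.
                t = random.choice(all_nodes(tree))
                t1 = random.choice(all_nodes(t))
                t.value, t.children = t1.value, t1.children
            else:
                # Replace a random node by a node of the same type
                # (simplified: only operator nodes are regenerated here).
                t = random.choice(all_nodes(tree))
                if len(t.children) == 1:
                    t.value = random.choice(list(UNARY))
                elif len(t.children) == 2:
                    t.value = random.choice(list(BINARY))
    return S

# Example: produce offspring from two copies of the sphere tree.
offspring = crossover_and_mutate([sphere, copy.deepcopy(sphere)])
```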

Because the reverse Polish expression is a variable-length sentence over a limited vocabulary, the classifier is a deep recurrent neural network; that is, the algorithm recommendation task is treated as a natural language processing task. The deep recurrent neural network used in the proposed method is depicted in Fig. 2. The reverse Polish expression is first encoded in a one-hot representation, i.e., each notation becomes a binary vector with a single dimension set to 1 and the rest set to 0. Each one-hot representation is then fed into a word embedding layer to obtain a word vector. Subsequently, the sequence of word vectors is fed into an LSTM layer, which is followed by a fully connected layer and a softmax layer to predict the index of the best algorithm.
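A minimal sketch of this classifier in PyTorch, assuming a 27-token vocabulary (7 operands + 20 operators) and 3 candidate algorithms; the layer sizes are illustrative, not values reported in the paper. Note that nn.Embedding consumes integer token indices directly, which is equivalent to the one-hot encoding followed by the embedding layer described above:

```python
import torch
import torch.nn as nn

class AlgorithmRecommender(nn.Module):
    def __init__(self, vocab_size, n_algorithms, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, n_algorithms)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer-encoded reverse Polish expression.
        emb = self.embed(tokens)
        _, (h, _) = self.lstm(emb)   # final hidden state summarizes the sequence
        return self.fc(h[-1])        # logits; softmax is applied in the loss

# Example: classify two expressions into one of 3 algorithms (e.g., GA, PSO, CMA-ES).
model = AlgorithmRecommender(vocab_size=27, n_algorithms=3)
logits = model(torch.randint(0, 27, (2, 10)))
probs = torch.softmax(logits, dim=-1)
```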

IV. RESULT ANALYSIS

The proposed method is applied to GA, CMA-ES, and PSO, which obtain the best results and are most frequently used in real-world problems. As shown in Fig. 4, the r-values of the operands and operators are quite different for CMA-ES and GA. On the one hand, the r-values of GA for sin and cos are obviously greater than 1, meaning that GA is good at handling multimodal landscapes; on the other hand, the r-value of GA for xr is obviously less than 1, which shows that GA cannot deal well with complex linkages between variables. By contrast, CMA-ES is good at handling noise but poor at handling multimodal landscapes and complex linkages. These findings are in line with previous research, which shows that GA is effective at solving multimodal problems due to its mutation operator.


Fig 4: r-values of all the operands and operators on CMA-ES and GA

Fig 5: r-values of all the operands and operators on CE and CSO

According to the above analysis, the performance of each metaheuristic is highly related to the operands and operators in the problem, and it is therefore reasonable to use the operands and operators for algorithm recommendation. Two recently proposed algorithm recommendation methods are adopted as baselines to study the performance of the proposed method. The experiment involves two variants of the proposed method, one (termed AR-WB) designed for white-box problems and the other (termed AR-BB) for black-box problems.


Fig 6: Test accuracy of Baseline1, Baseline2

Time Series Forecast:
Method      Test Accuracy   True Ranking
Baseline1   4.53%           3.58
Baseline2   52.68%          2.78
AR-BB       61.84%          2.72

Portfolio Optimization:
Method      Test Accuracy   True Ranking
Baseline1   54.89%          3.21
Baseline2   63.04%          2.54
AR-BB       68.86%          2.29

Table 1: Test accuracy and average true ranking on the time series forecast and portfolio optimization problems

The average true ranking of the algorithms chosen by the compared methods is shown in Table 1. The results are consistent across the two problems, with AR-WB performing best and AR-BB following closely behind. AR-WB and AR-BB have average true rankings of 1.251 and 2.057, respectively, indicating that they can pick the best or second-best algorithm for all problems.

V. CONCLUSION

A recommender system for selecting metaheuristic algorithms for solving continuous optimization problems is presented in this paper. In contrast to existing methods that extract landscape-related features from a small number of benchmark problems, the proposed method uses the operands and operators of the problem as features and randomly generates a large number of benchmark problems as training samples, resulting in a substantial improvement in the performance of the recommendation model.

REFERENCES:

[1] T. Rodemann, K. Narukawa, M. Fischer, and M. Awada, "Many-objective optimization of a hybrid car controller," in Applications of Evolutionary Computation. Springer International Publishing, 2015, pp. 593–603.

[2] A. LaTorre, S. Muelas, and J. M. Peña, "Multiple offspring sampling in large scale global optimization," in Proc. IEEE Congr. Evol. Comput., 2012, Paper 2012.6256611.

[3] L. C. Bezerra, M. López-Ibáñez, and T. Stützle, "Automatic component-wise design of multiobjective evolutionary algorithms," IEEE Trans. Evol. Comput., vol. 20, no. 3, pp. 403–417, Jun. 2016.

[4] M. Zhao, H. Ge, Y. Lian, and C. L. P. Chen, "Online decisioning meta-heuristic framework for large scale black-box optimization," 2018, arXiv:1812.06585v1.

[5] P. Kerschke, H. H. Hoos, F. Neumann, and H. Trautmann, "Automated algorithm selection: Survey and perspectives," Evol. Comput., vol. 27, no. 1, pp. 3–45, 2018.

[6] E. Nudelman, K. Leyton-Brown, H. H. Hoos, A. Devkar, and Y. Shoham, "Understanding random SAT: Beyond the clauses-to-variables ratio," in Proc. 10th Int. Conf. Princ. Pract. Constraint Program., 2004, pp. 438–452.

[7] T. P. Velavan and C. G. Meyer, "The COVID-19 epidemic," Trop. Med. Int. Health, vol. 25, pp. 278–280, 2020.

[8] R. Li, S. Pei, B. Chen, et al., "Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV-2)," Nature, vol. 368, pp. 489–493, 2020.

[9] D. Tolić, K. Kleineberg, and N. Antulov-Fantulin, "Simulating SIR processes on networks using weighted shortest paths," Sci. Rep., vol. 8, 6562, 2018.

[10] I. Boussaïd, J. Lepagnot, and P. Siarry, "A survey on optimization metaheuristics," Inf. Sci., vol. 237, pp. 82–117, 2013.
