Allocating Resources in Load Balancing Using Elastic Routing Table

A. Vijayaraj1, M. Vijay Anand2, N. Mageshkumar2, S. Deepan3, SP. Karuppiah3

1 Associate Professor, Information Technology, Vignan's Foundation for Science, Technology & Research (Deemed to be University), Vadlamudi, Guntur, Andhra Pradesh 522213. e-mail: [[email protected]]

2 Professor, Department of CSE, Saveetha Engineering College, Chennai, Tamilnadu, India. e-mail: [[email protected]]

2 Asst. Professor, Department of CSE, Vignan's Lara Institute of Technology and Science, Guntur, Andhra Pradesh 522213. e-mail: [[email protected]]

3 Assistant Professor, Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram Campus, Chennai. e-mail: [[email protected]]

3 Assistant Professor, Department of MBA, St. Joseph's College of Engineering, Old Mamallapuram Road, Chennai-119. e-mail: [[email protected]]

ABSTRACT

The main objective of this project is to handle the work efficiently. When a user sends a file to another user, the file is first sent to the main server, which identifies the type of the file from its extension and forwards it to the corresponding response node. The response nodes are systems that act as servers, each with many sub-nodes, and each response node is responsible for handling a different file type. Once a response node receives the file, it identifies the best individual node that can handle the work without delay. This process is carried out using an Elastic Routing Table (ERT), a query balancing approach based on the observation that high-degree nodes receive higher traffic demand. The ERT is computed from the number of links to the cluster nodes. It copes well with the large-scale and dynamic characteristics of resources and keeps overhead low. When multiple users send files to their destinations simultaneously, load overhead may occur at the main server; assigning tasks to different nodes according to file type therefore reduces congestion at the main server, and also minimizes time and effort.

Keywords:

Elastic, balancing, high-degree, destination, congestion, cluster, overhead.

I. Introduction

The purpose of this project is to introduce the ERT technique in the response node, which is used to allot tasks to nodes for handling requests from users. We use networks to implement this whole execution. A network is a type of telecommunications network that connects a group of computers to facilitate communication and data transfer between systems, software applications, and users. It is a group of devices connected to each other. When a process in one device can exchange information with a process in another device, the devices are said to be networked. A server is a physical computer (a computer hardware system) that is devoted to running one or more services (as a host) for the benefit of other computers on a network. It may provide a number of services, depending on its computing function. In the existing system, the main disadvantage is the bottleneck problem. A network bottleneck is a circumstance in which data flow is slowed due to a lack of computer or network resources; the amount of data that flows is limited by the bandwidth of the various system resources. A network bottleneck occurs when a system on a network delivers a larger volume of data than the network's current capacity can sustain. This can be avoided by using the ERT technique in the response nodes. The Elastic Routing Table technique is used to further improve efficiency. Based on the observation that high-degree nodes receive increased traffic load, this research proposes an Elastic Routing Table (ERT) technique for query load balancing. The ERT is computed based on the number of links to the cluster node. The ERT-based congestion control protocol outperforms existing "virtual-server" based load balancing algorithms and other routing table control techniques in terms of query lookup efficiency.

The response node plays a vital role in every individual cluster network by handling requests very effectively, dynamically allocating each request to the best group node, i.e., the one with the maximum number of connections. If the connectivity of nodes is equal, the work is allocated to the node with higher efficiency. The main objective of this project is to handle the work very effectively. Users send a file request of the corresponding type to the main server; after receiving the request, the main server finds the response node that handles that file type. Once the response node receives the file, it identifies the most suitable node that can take over the work without delay using the Elastic Routing Table.

II. Literature Survey

File keys are assigned to nodes based on their IDs in a distributed hash table (DHT). The DHT preserves topological links between nodes and offers a routing mechanism for identifying the node with a required key. DHT networks have received a lot of attention in recent years because of their lookup efficiency and the robustness of data location. CAN [26], Chord [33], Tapestry [34], Pastry [27], and Cycloid [32] are examples of such systems. Load balancing is a challenge in DHT networks, because consistent hashing results in an O(log n) imbalance of keys between nodes. As nodal heterogeneity increases, the problem becomes even worse. Furthermore, the popularity of files kept in the system varies, and access patterns to the same file may change over time. Designing a DHT protocol with congestion management capabilities is difficult. Congestion control's main goal is to eliminate bottlenecks at any node. A blockage can happen due to query overload, which occurs when a node receives too many requests at once, or data overflow, which occurs when a node must download and forward too much data. Because of DHT routing techniques, the query load intended for an object may concentrate on a limited number of nodes near the destination, resulting in bottlenecks. Furthermore, even though files are typically sent directly from source to destination, data forwarding via intermediary nodes in the query routing path is regularly used to provide anonymity for file sharing, as in Mantis [5], MUTE [1], and Hordes [16].

Node capacity and the query load are heavily skewed in P2P file sharing systems, according to recent studies [13], [28]. In such an environment, nodes can easily become bottlenecks. Many load balancing methods have been suggested in the past to deal with network heterogeneity; for example, see [33], [12], and [4]. A long key ID space interval is considered to have a higher likelihood of being contacted than a short one. The majority of known techniques are based on the notion of a "virtual server," in which a physical node emulates a number of virtual overlay servers, with each node receiving ID space intervals of varied length depending on its capacity.

Despite the simplicity of these approaches, maintaining the correspondence between a node's responsible interval and its capacity comes at a high cost. Furthermore, since the approaches are solely focused on key ID assignment, they provide no protection against congestion caused by non-uniform and time-varying file popularity. Other methods, based on "item movement," take into account the impact of file popularity on query load [3], [31]. Heavily loaded nodes probe light ones in these methods, reassigning excess load between peers by modifying the IDs of the files concerned. It is worth noting that current load balancing methods presume that every virtual node has the same, fixed DHT degree.

Each node retains the same number of neighbor relationships regardless of its capacity. According to the power-law network theorem, higher-degree nodes face more query load [2]. In a different context, A. Vijayaraj and P. DineshKumar [30] reported that census administration with PDAs makes meticulously enumerating and presenting census data simple and painless.

As a result, in this study, we show how to cope with node heterogeneity, skewed queries, and churn in DHT networks using an elastic routing table (ERT) technique. Unlike conventional structured P2P routing tables with a fixed number of outlinks, each ERT has a varying number of inlinks/outlinks, and the indegree/outdegree of each node may be adjusted dynamically. Castro et al. [7] exploited heterogeneity in congestion management by using a fixed mapping between node indegree and bandwidth and biasing high-capacity nodes as overlay neighbors. Static mapping is incapable of dealing with non-uniform and time-varying file popularity and churn, and with a simple capacity bias, high-capacity nodes can themselves become bottlenecks. The congestion management protocol based on ERT goes beyond the development of capacity-aware DHTs. To deal with node heterogeneity and network churn, many load balancing techniques have been devised [33], [12], [4]. Each real node runs O(log n) virtual servers, and the keys are mapped onto virtual servers such that each real node is accountable for key ID space of a length matching its capacity. Although the principle is basic, the virtual server abstraction has a high maintenance cost and reduces lookup performance. Godfrey and Stoica [12] addressed the problem by having each real server take responsibility for consecutive virtual ID space.

Bienkowski et al. proposed a distributed randomized method in which the currently long ID space intervals are divided among a linear number of nodes holding short intervals, resulting, with high probability, in an optimal balance. To accomplish load balancing, Byers et al. [6] proposed utilising the "power of two choices" approach, in which an item is placed in the less loaded of two (or more) random candidates. Because a node with a longer interval has a higher probability of being contacted, load balancing algorithms based on ID space interval assignment control traffic congestion caused by node heterogeneity in capacity. Initial key ID partitioning is insufficient to achieve load balance, especially under churn, so dynamic load reassignment is frequently used in conjunction with it. To avoid bottlenecks, Godfrey et al. and Rao et al. proposed load-balancing strategies based on the capabilities of heavy and light nodes [11], [25]. They assumed that the query burden is distributed uniformly over the ID space. Bharambe et al. [3] proposed a load balancing technique to alleviate the congestion caused by biased lookups. The load of a node was measured by the number of messages routed or matched per unit time. The technique operates as follows: heavily loaded nodes probe a number of sample nodes, while lightly loaded nodes are instructed to leave their current places and rejoin at the heavily loaded nodes' locations. Each node samples nodes within a given distance on a regular basis and retains approximate histograms to acquire load distribution statistics. This necessitates a high level of communication and maintenance, particularly in the case of churn. Furthermore, node ID changes caused by nodes leaving and rejoining have a high overhead. Hu et al. [14] and Li et al. [17] have recently pursued the concept of using irregular routing tables sized in relation to node capacity. Their key focus was on the cost of maintenance versus the quality of lookups. To take advantage of node diversity and increase lookup performance, Hu et al. suggested deploying large routing tables in high-capacity nodes. Li et al. used Chord to create an Accordion mechanism that allows them to change the table size at different network scales and churn rates without sacrificing lookup efficiency. For a vast country like India, Sanjay Ram and A. Vijayaraj [10] advised implementing a trusted computing platform into the system and paying special attention to the security requirements of the cloud computing environment in order to create a trusted computing framework for cloud computing systems.

Chun et al. [9], [21] investigated the effect of neighbour selection on structured P2P efficiency and resilience. Different forms of heterogeneity, such as node capacity and network proximity, are taken into account when selecting neighbours. When node capacity is taken into account, a node selects the neighbours with the shortest processing delay. This scheme, however, is likely to turn high-capacity neighbours into bottlenecks. In reality, always routing queries to high-capacity nodes exacerbates the problem if these nodes lack the capacity to manage a sudden influx of queries due to churn. The query load should be distributed proportionally to the capacities of high-capacity and low-capacity nodes. For generating routing tables based on node capabilities, Castro et al. [7] suggested a neighbour selection approach. Its underlying notion of employing node indegrees to leverage node heterogeneity is similar to our initial indegree assignment methodology.

Since it does not select low-capacity nodes as neighbours until the indegree bounds of high-capacity nodes are reached, their algorithm guides the majority of traffic to high-capacity nodes. The ERT mechanism, on the other hand, disperses traffic between neighbours proportionally to their capacities, allowing both high- and low-capacity nodes to be fully utilised. Furthermore, the ERT system deals with more than just node heterogeneity: by dynamically changing table indegree and outdegree and query forwarding, it manages biased lookups and churn. Finally, we should mention that congestion control is not an issue exclusive to structured P2P networks; it has also been a critical performance problem in unstructured P2P networks. Flow regulation in unstructured networks has been the subject of numerous studies; see, for example, [24], [18], [2], [19], [8]. Congestion control in DHT networks builds on the power-law network theory, which states that high-degree nodes play a critical role in communication. Osokine [24] proposed that senders estimate the chance that a neighbour will drop packets based on the answers they get from that neighbour, while receivers drop packets when they become overloaded. When requests are flooded across the network, this process is appropriate, because even if a node drops a query, other replicas of the query will spread across the network. It is not, however, ideal for searching by random walks [18]. R. Srinivasan and A. Vijayaraj [20] proposed novel ways of developing optimised FSO network topologies based on mixed integer programming formulations to maximise connection availability and data rate while also accounting for limited bandwidth allocation.

Their findings point in the right direction for constructing a highly efficient, reconfigurable FSO network.

High-degree nodes in power-law networks such as Gnutella commonly encounter high query load, according to Lv et al. [18]. They recommended that P2P systems employ graph-building methods that limit the likelihood of nodes having extremely high degrees. On the other hand, Adamic et al. [2] took advantage of the fact that in power-law networks, high-degree nodes play a crucial role in communication. To speed up the search for specific files, they proposed a message-passing technique that delivers requests to high-degree nodes. It has been observed that high-degree nodes are not always high-capacity nodes, and that if they handle a substantial amount of query traffic, they are prone to becoming bottlenecks. To address this issue and account for node heterogeneity, Lv et al. [19] proposed query flow management and topology adaptation methods, allowing higher-capacity nodes to maintain a higher degree and routing requests to these nodes. Based on the status of each neighbour connection, the algorithms gradually adjust the overlay topology so that queries flow to nodes with sufficient capacity to handle them. However, the preference for high-capacity nodes can result in them becoming hotspots. Chawathe et al. [8], [22], [23] present an active flow control strategy that embraces heterogeneity and adjusts to it by issuing flow control tokens to nodes depending on available capacity. A node with k flow-control tokens may receive k queries, and a sender can only route a query to a neighbour if it holds that neighbour's flow-control token.

III. Existing System

In the existing system, if a user wants to send an object to a destination, the request is handled by the main server. The main server is responsible for handling all requests from multiple users, which leads to overhead at the main server. The main server is also incapable of routing these tasks effectively, resulting in load overhead. There is no proper scheduling scheme available in the existing system. Because of this overhead, the server can cause network congestion. A server also needs some kind of management: someone who understands how to set it up, create and modify users and groups, apply security, and so on. If the server goes down, the entire network goes down with it.


Additionally, if you are downloading a file from a server and the transfer is abandoned due to an issue, the download is terminated; peers, on the other hand, could have supplied the missing pieces of the file.

Disadvantages

1. Load overhead
2. No proper routing of requests
3. Response errors
4. Magnified physical failures
5. Degraded performance
6. Complex root cause analysis
7. Network congestion

IV. Proposed System

In the proposed system, effective task scheduling is applied based on a Process Executor. When a request to send an object arrives, the main server redirects it to the Process Executor, which handles it effectively by determining, for the requested object, the node best suited to handle the request. Each type of file is handled by its own supporting executor, which makes it easy for the server to balance the load and to share resources among users. To avoid network congestion, we use the Elastic Routing Table (ERT) for proper routing of requests.

According to the connectivity and efficiency of the nodes, the request is allocated within the particular Process Executor. Servers can play different roles for different users.

Advantages

In this system, many files can be sent simultaneously by different users without any congestion. Load overhead is avoided by assigning each task to a separate node, which also saves time. Data are retrieved at high speed. Each system acts as both server and client, so there is no need to spend more money on a large number of dedicated machines. Simple changes can be performed by just updating the server, and new resources and systems can be introduced by making the appropriate server adjustments.

System Design

When a request comes from a user such as User 1 to the main server, the main server determines the type of object and forwards the request to the respective Process Executor. The Process Executor then determines the effective node based on the number of links to the response node, and that node processes the request.
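As an illustration of this flow, the following Java sketch shows a main server that determines the file type from its extension and forwards the request to the Process Executor registered for that type. The class and method names (MainServer, ProcessExecutor, dispatch) are assumptions made for this sketch, not the authors' actual code.

import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch of the main server dispatching a request by file type. */
class MainServer {

    /** Hypothetical Process Executor that serves one category of file. */
    interface ProcessExecutor {
        void handle(String fileName);   // forwards the file to the best node in its cluster
    }

    // Map of file extension -> the Process Executor responsible for that file type.
    private final Map<String, ProcessExecutor> executors = new HashMap<>();

    void registerExecutor(String extension, ProcessExecutor executor) {
        executors.put(extension.toLowerCase(), executor);
    }

    /** Determine the file type from its extension and forward the request. */
    void dispatch(String fileName) {
        int dot = fileName.lastIndexOf('.');
        String extension = (dot >= 0) ? fileName.substring(dot + 1).toLowerCase() : "";
        ProcessExecutor executor = executors.get(extension);
        if (executor == null) {
            System.out.println("No executor registered for ." + extension);
            return;
        }
        executor.handle(fileName);      // the executor then picks a node via the ERT
    }

    public static void main(String[] args) {
        MainServer server = new MainServer();
        server.registerExecutor("txt", f -> System.out.println("Text executor handles " + f));
        server.registerExecutor("jpg", f -> System.out.println("Image executor handles " + f));
        server.dispatch("report.txt");  // -> Text executor handles report.txt
        server.dispatch("photo.jpg");   // -> Image executor handles photo.jpg
    }
}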

Modules

1. Network Construction
2. Response Node Construction
3. Server
4. ERT Implementation


Module Explanation

By enabling simultaneous development of different parts of the system, a modular design minimises complexity, facilitates modification (a fundamental feature of software maintainability), and results in easier implementation. Because functions can be separated and interfaces kept small, software with effective modularity is easier to design. Modularity is embodied in software architecture, in which software is separated into independently named and addressable components called modules, which are then combined to meet the problem requirements. Modularity is the single feature of software that enables a programme to be managed rationally. Modular decomposability, modular composability, modular understandability, modular continuity, and modular protection are the five fundamental characteristics that enable us to assess a design process in terms of its ability to construct an effective modular design. The following are the project modules that are anticipated to help finish the project in relation to the proposed system, while also overcoming the current system's limitations and providing support for future enhancements, as shown in Figure 1.

Figure 1. System Architecture

a. Network Construction

The network has a number of nodes and maintains their details, including the connection details. Nodes are interconnected and exchange data directly with each other. The network server stores data such as each node's IP address, port details, and the status of the user. A node sends a request to the server and gets the needed response from the server. To construct the network, we first have to provide the number of nodes connected to the server. Once initialized, the nodes are logged into the system.
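A minimal sketch of the connection details such a network server might store is given below; the class NodeRegistry and its fields are assumed for illustration only, not taken from the paper's implementation.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative registry of nodes held by the network server (names are assumed). */
class NodeRegistry {

    /** Connection details stored for each node. */
    static class NodeInfo {
        final String ipAddress;
        final int port;
        String status;

        NodeInfo(String ipAddress, int port, String status) {
            this.ipAddress = ipAddress;
            this.port = port;
            this.status = status;
        }

        @Override public String toString() {
            return ipAddress + ":" + port + " (" + status + ")";
        }
    }

    private final Map<String, NodeInfo> nodes = new ConcurrentHashMap<>();

    /** A node logs into the system once it is initialized. */
    void register(String nodeId, String ipAddress, int port) {
        nodes.put(nodeId, new NodeInfo(ipAddress, port, "ONLINE"));
    }

    /** Update the status of a node, e.g. when it disconnects. */
    void setStatus(String nodeId, String status) {
        NodeInfo info = nodes.get(nodeId);
        if (info != null) {
            info.status = status;
        }
    }

    NodeInfo lookup(String nodeId) {
        return nodes.get(nodeId);
    }

    public static void main(String[] args) {
        NodeRegistry registry = new NodeRegistry();
        registry.register("node-1", "192.168.1.10", 5000);
        registry.register("node-2", "192.168.1.11", 5001);
        registry.setStatus("node-2", "OFFLINE");
        System.out.println(registry.lookup("node-1")); // 192.168.1.10:5000 (ONLINE)
    }
}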

b. Response Node Construction

The response node plays a vital role in every individual cluster network, handling requests very effectively by dynamically allocating each request to the best group node, i.e., the one with the maximum number of connections. Once a request reaches the main server, the main server identifies the request type and then passes it to the corresponding response node for further processing. The response node then allocates the request to the best individual node within its cluster, selected by its number of connections, using the Elastic Routing Table (ERT).


c. Server

A server is a computer program that runs in the background to fulfill the demands of other programs, known as "clients." The server thus takes on computing work on behalf of the clients. Clients can either operate on the same machine or connect over a network. In this case, the server serves as the clients' primary resource. The server is in charge of keeping track of all client data, prevents unwanted users from joining the network, and double-checks each user's access permissions. Users must be aware of their limitations.

Figure 2. Proposed system implementation: sender node, using Java

Figure 3. Proposed system implementation: receiver node, using Java

Figure 4. System implementation: server node, using Java

Figure 5. Server maintaining clients' information

d. ERT Implementation

The Elastic Routing Table technique is used to further improve efficiency. The ERT is computed based on the number of links to the cluster nodes. Consider a scenario with N1 = 5, N2 = 6 and N3 = 8 links. When the first request arrives, N3 is selected, and the remaining counts become N1 = 5, N2 = 6, N3 = 7. When the next request arrives, N3 is selected again, leaving N1 = 5, N2 = 6, N3 = 6. When the following request arrives, multiple response nodes have equal remaining values, so the ratio of remaining to total links is compared: N1 = 5/5 = 1, N2 = 6/6 = 1, and N3 = 6/8 = 0.75, so N2 is selected. Finally, when the next request arrives, N3 is selected because it has the highest number of remaining links. Figures 2, 3, 4 and 5 show the implementation.
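The selection rule in this scenario can be captured in a few lines of Java. The sketch below is an illustration of the rule as described above, not the paper's actual implementation; the class and method names (ErtSelector, allocate) are assumed. It picks the cluster node with the most remaining links and, on a tie, prefers the higher ratio of remaining to total links.

import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of the ERT selection rule described above (names are illustrative). */
class ErtSelector {

    /** A cluster node tracked by the response node's Elastic Routing Table. */
    static class ClusterNode {
        final String id;
        final int totalLinks;   // total links to this cluster node
        int freeLinks;          // links still available

        ClusterNode(String id, int totalLinks) {
            this.id = id;
            this.totalLinks = totalLinks;
            this.freeLinks = totalLinks;
        }
    }

    private final List<ClusterNode> nodes = new ArrayList<>();

    void register(ClusterNode n) { nodes.add(n); }

    /** Pick the node with the most free links; on a tie, prefer the higher free/total ratio. */
    ClusterNode allocate() {
        ClusterNode best = null;
        for (ClusterNode n : nodes) {
            if (n.freeLinks == 0) continue;                      // node is saturated
            if (best == null
                    || n.freeLinks > best.freeLinks
                    || (n.freeLinks == best.freeLinks && ratio(n) > ratio(best))) {
                best = n;
            }
        }
        if (best != null) best.freeLinks--;                      // one link is now in use
        return best;
    }

    private static double ratio(ClusterNode n) {
        return (double) n.freeLinks / n.totalLinks;
    }

    public static void main(String[] args) {
        ErtSelector ert = new ErtSelector();
        ert.register(new ClusterNode("N1", 5));
        ert.register(new ClusterNode("N2", 6));
        ert.register(new ClusterNode("N3", 8));
        for (int i = 0; i < 4; i++) {
            System.out.println("Request " + (i + 1) + " -> " + ert.allocate().id);
        }
        // Expected order, matching the scenario above: N3, N3, N2, N3
    }
}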

NS2 output when testing a packet sent from S to D: the time to process the packet is shortened based on the fuzzy logic methodology, as shown in Figure 6 below.

Figure 6. Packet Delivery Vs Time

The X-axis denotes time and the Y-axis denotes delay. The delay lies between 60 and 80 milliseconds, reaching up to 360 × 10^4. The packet is also secured while being transferred from the source to the destination, with a ratio of 40 × 10^4. This can be taken as confirmation that the packet was delivered on time.

Figure 7. Throughput Vs Time

This graph shows time on the X-axis and throughput on the Y-axis. In Figure 7, the maximum throughput is achieved at a time of 200 ms and amounts to 30 × 10^4 packets. The interval from 110 to 140 ms shows lower throughput.


Figure 8. Packet loss Vs Time

In the graph of Figure 8 above, the X-axis is time and the Y-axis is packet loss. The maximum loss occurs within the period of 50 to 70 ms. The thickness of the vertical lines represents the loss ratio during packet transmission.

V. Conclusion and Future Enhancement

Previously, all data transmitted by a user to a destination was processed entirely by the primary server, resulting in network congestion owing to inefficiencies in handling the load overhead, and transfers took a long time. Effective task scheduling is used in this project, based on a Process Executor. The Elastic Routing Table (ERT) is used for query balancing, and the data is transferred to the effective node, which processes the file and delivers it to the destination, based on the observation that high-degree nodes get greater traffic load. This allows quick file transfers from one user to another. It has a great capacity for dealing with large-scale and dynamic resource features, and it determines which particular node is best suited to manage the workload. A larger number of users can transfer data at the same time without causing a load imbalance, and it saves the user time and effort by allocating the work to an efficient node. Assigning tasks to different nodes according to file type reduces congestion at the main server. There is undoubtedly room for improvement in this programme, just as there is in other applications. New modules are being developed to improve the project's compatibility; once these enhancements are complete, the bulk of the characteristics that distinguish an outstanding programme will be present, and usage will become more widespread and extensive. There are several directions for making the project more effective and efficient in the future. The user could be allowed to choose many files at once. Data being transferred can currently be read by a third party, so secure transmission of data can be implemented using high-standard cryptographic algorithms. In addition, we are investigating how to avoid data loss due to a crash of the main server.

References

[1] Mute, http://mute-net.sourceforge.net/, 2009.

[2] L.A. Adamic, B.A. Huberman, R.M. Lukose, and A.R. Puniyani, (2001). Search in Power Law Networks, Physical Rev. E, vol. 64, pp. 46135-46143.

[3] A.R. Bharambe, M. Agrawal, and S. Seshan, (2004). Mercury: Supporting Scalable Multi-Attribute Range Queries, Proc. ACM SIGCOMM.

[4] M. Bienkowski, M. Korzeniowski, and F.M. auf der Heide, (2005). Dynamic Load Balancing in Distributed Hash Tables, Proc. Int'l Workshop Peer-to-Peer Systems (IPTPS).

[5] S. Bono et al., (2004). Mantis: A Lightweight, Server-Anonymity Preserving, Searchable P2P Network, technical report, Johns Hopkins Univ.

[6] J. Byers, J. Considine, and M. Mitzenmacher, (2003). Simple Load Balancing for Distributed Hash Tables, Proc. Second Int'l Workshop Peer-to-Peer Systems (IPTPS).

[7] M. Castro, M. Costa, and A. Rowstron, (2005). Debunking Some Myths About Structured and Unstructured Overlays, Proc. Second Conf. Symp. Networked Systems Design & Implementation (NSDI).

[8] Y. Chawathe, S. Ratnasamy, L. Breslau, N. Lanham, and S. Shenker, (2003). Making Gnutella-Like P2P Systems Scalable, Proc. ACM SIGCOMM.

[9] B.G. Chun, B.Y. Zhao, and J.D. Kubiatowicz, (2005). Impact of Neighbor Selection on Performance and Resilience of Structured P2P Networks, Proc. Fourth Int'l Workshop Peer-to-Peer Systems (IPTPS).

[10] M. Sanjay Ram and A. Vijayaraj, (2011). Analysis of the Characteristics and Trusted Security of Cloud Computing, International Journal on Cloud Computing, vol. 1, pp. 61-69.

[11] B. Godfrey, K. Lakshminarayanan, S. Surana, R. Karp, and I. Stoica, (2006). Load Balancing in Dynamic Structured P2P Systems, Performance Evaluation, vol. 63, no. 3, pp. 217-240.

[12] B. Godfrey and I. Stoica, (2005). Heterogeneity and Load Balance in Distributed Hash Tables, Proc. IEEE INFOCOM.

[13] P. Gummadi, R. Dunn, S. Saroiu, S. Gribble, H. Levy, and J. Zahorjan, (2003). Measurement, Modeling, and Analysis of a Peer-to-Peer File-Sharing Workload, Proc. 19th ACM Symp. Operating Systems Principles (SOSP).

[14] J. Hu, M. Li, W. Zheng, D. Wang, N. Ning, and H. Dong, (2004). SmartBoa: Constructing P2P Overlay Network in the Heterogeneous Internet Using Irregular Routing Tables, Proc. Third Int'l Workshop Peer-to-Peer Systems (IPTPS).

[15] D. Karger et al., (1997). Consistent Hashing and Random Trees: Distributed Caching Protocols for Relieving Hot Spots on the World Wide Web, Proc. 29th Ann. ACM Symp. Theory of Computing (STOC).

[16] B. Levine and C. Shields, (2002). Hordes: A Multicast-Based Protocol for Anonymity, J. Computer Security, vol. 10, no. 3, pp. 213-240.

[17] J. Li et al., (2005). Bandwidth-Efficient Management of DHT Routing Tables, Proc. Second Symp. Networked System Design and Implementation (NSDI '05).

[18] Q. Lv, P. Cao, E. Cohen, K. Li, and S. Shenker, (2001). Search and Replication in Unstructured Peer-to-Peer Networks, Proc. Ann. ACM Int'l Conf. Supercomputing (ICS).

[19] Q. Lv et al., (2002). Can Heterogeneity Make Gnutella Scalable?, Proc. Int'l Workshop Peer-to-Peer Systems (IPTPS).

[20] R. Srinivasan and A. Vijayaraj, (2011). Mobile Communication Implementation Techniques to Improve Last Mile High Speed FSO Communication, Trends in Network and Communications, Springer, Berlin, Heidelberg, pp. 55-63.

[21] M. Mitzenmacher, (1997). On the Analysis of Randomized Load Balancing Schemes, Proc. Ann. ACM Symp. Parallel Algorithms and Architectures (SPAA).

[22] M. Mitzenmacher et al., (2002). Load Balancing with Memory, Proc. 43rd IEEE Symp. Foundations of Computer Science (FOCS).

[23] S. Nath, P.B. Gibbons, S. Seshan, and Z.R. Anderson, (2004). Synopsis Diffusion for Robust Aggregation in Sensor Networks, Proc. Second ACM Conf. Embedded Networked Sensor Systems (SenSys).

[24] S. Osokine, (2001). The Flow Control Algorithm for the Distributed 'Broadcast-Route' Networks with Reliable Transport Links, technical report.

[25] A. Rao et al., (2003). Load Balancing in Structured P2P Systems, Proc. Int'l Workshop Peer-to-Peer Systems (IPTPS).

[26] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker, (2001). A Scalable Content-Addressable Network, Proc. ACM SIGCOMM, pp. 329-350.

[27] A. Rowstron and P. Druschel, (2001). Pastry: Scalable, Decentralized Object Location and Routing for Large-Scale Peer-to-Peer Systems, Proc. Middleware Conf.

[28] S. Saroiu, P. Gummadi, and S. Gribble, (2002). A Measurement Study of Peer-to-Peer File Sharing Systems, Proc. Multimedia Computing and Networking (MMCN).

[29] H. Shen and C. Xu, (2006). Elastic Routing Table with Provable Performance for Congestion Control in DHT Networks, Proc. IEEE Int'l Conf. Distributed Computing Systems (ICDCS).

[30] A. Vijayaraj and P. DineshKumar, (2010). Design and Implementation of Census Data Collection System Using PDA, International Journal of Computer Applications, vol. 9, no. 9, pp. 28-32.

[31] H. Shen and C. Xu, (2007). Locality-Aware and Churn-Resilient Load Balancing Algorithms in Structured Peer-to-Peer Networks, IEEE Trans. Parallel and Distributed Systems, vol. 18, no. 6, pp. 849-862.