
“ALEXANDRU IOAN CUZA” UNIVERSITY OF IAŞI

FACULTY OF COMPUTER SCIENCE

MASTER THESIS

Single sign-on solutions for organizations

proposed by

Secrieru Radu

Session: February 2017

Scientific coordinator:

Asist. Dr. Vasile Alaiba



Declaration regarding originality and observance of copyright

I hereby declare that the dissertation entitled “Single sign-on solutions for organizations” was written by me and has never before been submitted to another faculty or institution of higher education in Romania or abroad. I also declare that all the sources used, including those taken from the Internet, are indicated in the paper, in observance of the rules for avoiding plagiarism:

- all fragments of text reproduced exactly, even in my own translation from another language, are placed in quotation marks and carry a precise reference to their source;

- the rephrasing in my own words of texts written by other authors carries a precise reference;

- source code, images, etc. taken from open-source projects or other sources are used in observance of copyright and carry precise references;

- the summarizing of other authors' ideas indicates the precise reference to the original text.

Graduate Secrieru Radu, _________________________

(original signature)


Declaration of consent

I hereby agree that the dissertation entitled “Single sign-on solutions for organizations”, the source code of the programs, and the other contents (graphics, multimedia, test data, etc.) that accompany this paper may be used within the Faculty of Computer Science. I also agree that the Faculty of Computer Science of the Alexandru Ioan Cuza University of Iași may use, modify, reproduce, and distribute, for non-commercial purposes, the computer programs, in executable and source form, produced by me as part of this dissertation.

Graduate Secrieru Radu, _________________________

(original signature)


Table of Contents

Motivation
Context
Functional requirements
Risks
General software security
Authentication and authorization
Basic authentication
Digest authentication
One-time passwords
Biometric authentication
Password security
Password storage
Two-factor authentication
Host-based authentication systems
Single sign-on
Differentiating between single sign on solutions
Application dependent authorization
Options for establishing a SAR
Policy-based SAR
Network-based authentication mechanisms
Hesiod
NIS
NIS+
RADIUS
One-Time Passwords
Public key infrastructure
Kerberos
SSO alternatives
Entry-point authentication
The key box
Hybrid solutions
SnareWorks
Conclusions on SSO solutions
State of the art
References


Motivation

This paper explores the possibilities of a single sign-on authentication system in an enterprise environment. This is important because people do not want to remember a lot of passwords, but at the same time want to be secure. An array of different SSO and SAR solutions is presented, with the end goal of reducing the number of authenticators users must manage and improving the overall experience.

Context

All the big companies, like Google, Facebook and others, have already implemented such authentication/authorization techniques, and each application that they come out with is in line with this direction. For example, when you log into your Google account, you can access Inbox, Drive and Docs without needing to re-login when accessing each one.

This paper delves into developing an SSO system for large organizations and the diverse advantages and disadvantages that the different approaches bring to the table.

Whether the solution is just a SAR, which reduces the number of authenticators (e.g. username/password pairs) that users have, or a solution closer to full single sign-on functionality, in which users log in once during a work session and gain access to the different applications and systems, a good solution must be found that is appropriate for the issues the organization is trying to resolve.

This approach allows the enterprise system to authenticate and authorize employees and other external users so they can connect to the system with enterprise (organization's) devices, as well as personal devices. A clear distinction must, of course, be made between each of these, so that each has access only to the part of the system they have clearance for.

The system could be deployed and maintained on custom-built physical servers inside the organization. This means that, in most cases, it costs more, but control and security are also at a premium. Another option is to use a cloud-based solution, which requires less maintenance and is cheaper, but security could suffer and security loopholes may appear.

Large organizations are increasingly shifting critical computing operations from traditional host-based application platforms to network-distributed, client-server platforms. The resulting proliferation of disparate systems poses problems for end-users, who must frequently track multiple electronic identities across different systems, as well as for system administrators, who must manage security and access for those systems.

Functional requirements

First and foremost, the authentication and authorization system is the most important part of implementing such an approach. This must be a system that is secure enough to keep confidential information safe, provide access to the correct devices (or people, as it may be) and also authorize them with the appropriate rights, so they have the access that they need and not more or less.

Secondly, this system must provide single sign-on across multiple applications, and as such it is a single point of failure. A solution needs to be found that is stable enough to accommodate quite a large number of concurrent users, both those logging in to the system and those already logged in who access an application under the single sign-on system.


Risks

The risks involved in the research needed for this paper revolve around the fact that the single sign-on mechanism is usually deployed in large enterprises, with people hired specifically to set it up and keep it working correctly. “Bring your own device” is also mostly used in the same kind of environments, and it usually implies a separate network, to account for the different kind of security needed to access documents and applications. I have some limited experience in setting up environments, but only in the simple scenarios needed for running and deploying enterprise applications, including continuous integration. I do not have any experience in setting up high-security environments, or even in what that implies. Also, because it is usually set up in enterprises, a high degree of security and secrecy surrounds this kind of setup, so papers related to this subject only touch on the matter lightly, and I feel not deeply enough to extract strong conclusions. Another related risk is that, although there are some papers on both single sign-on and bring your own device, there are none that actually research or try to implement a system incorporating both.


General software security

Authentication and authorization

Authorization and authentication, while very similar in name and often used in combination, are really two different concepts.

Authentication is about identifying who someone is. This can be done in a multitude of different ways, depending on the context. In a “face-to-face” interaction you would typically authenticate by showing your photo ID, or perhaps writing your signature, which can be verified.

In technology-based scenarios, most commonly on the web, the most common way is without a doubt providing a username and password. However, there are many other options to achieve this, such as OAuth and various kinds of two-factor authentication.

An additional step, after authentication, can be authorization. Authorization is the process of deciding whether a specific user has the right to access a specific resource. Therefore, you can be authenticated, but still not be authorized to access a resource.

Historically, the use of plain username and password combinations has been the most common method of authentication. Today, it is still the most prevalent authentication method in existence, and its demise is nowhere in sight. It is an outdated, insecure method, but easy to implement with minimal requirements for the terminals of the users with regards to equipment and software.


Basic authentication

Basic authentication means a plaintext username and password pair, entered into a dialogue box or a form when the user is prompted for this information. The textual information is then checked against a database of correct answers, which either stores the information in plaintext or in some hashed form in a text file, or uses an actual database within the authentication system.

Figure 1.1. Simplest flow of basic authentication

“Figure 1.1. Simplest flow of basic authentication” depicts the simplest way of authenticating a user ID and a password. In this case, the server first asks the user to identify himself; after the user has supplied the information necessary for his identification, that is, his username and password, the server checks whether the user is known to the system and whether the password provided matches the one stored in the database. If indeed there is a match, the user is granted access to the server.

In a UNIX system setting, this would be the /etc/passwd or /etc/shadow file, which contains the pairs: username, in plaintext, and password, in hashed form, hash(password, salt). The salt is a padding of bytes appended to the real password before hashing it, to increase the number of possible hashes for one single password, so that dictionary-based password guessing attacks are more time-consuming.

In the web environment, the HTTP protocol specifies a challenge-response framework for a web server to request authentication credentials from a client by sending a “401 Unauthorized” status code. The user-agent may respond to this challenge with an Authorization HTTP header that specifies the required credentials.

The information is then sent unencrypted over the network to the service for authentication. If the given credentials are correct, the client is given access, which means he is authenticated. Moreover, the server now knows which resources he is authorized to access.
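The challenge-response exchange above is easy to sketch. The following snippet (an illustrative sketch; the username and password are made up) builds the Authorization header a user-agent would send back after a “401 Unauthorized” challenge; note that the credentials are merely base64-encoded, not encrypted:

```python
# Build the HTTP Basic Authorization header a client sends in response
# to a "401 Unauthorized" challenge. The credential pair is only
# base64-encoded, NOT encrypted, which is why basic authentication is
# insecure over plain HTTP.
import base64

def basic_authorization_header(username: str, password: str) -> str:
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Authorization: Basic {token}"

print(basic_authorization_header("radu", "secret"))
# -> Authorization: Basic cmFkdTpzZWNyZXQ=
```

Anyone sniffing the network can reverse the encoding with a single base64 decode, which is exactly the weakness digest authentication addresses.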

Digest authentication

Closely related to basic authentication, digest access authentication verifies that both communicating parties share a secret - a password. But unlike basic authentication, this verification can be done without sending the password unscrambled, which is the biggest drawback of basic authentication.


Figure 1.2. Simplified digest authentication

Digest authentication operates in much the same way as basic authentication, as illustrated in the figure above. It does have one slight modification, though: only hashes of the passwords are transported over the Internet, as opposed to a plaintext password.

The digest scheme issues challenges using a nonce value. A valid response contains a checksum of the username, the password, the given nonce value, the HTTP method, and the requested URI. This way, the password is never transmitted unscrambled over the Internet.
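To make the checksum concrete, here is a minimal sketch of the response computation in the simplest digest form (following RFC 2617 without the optional qop extensions; all input values are made up):

```python
# Digest response in the simplest RFC 2617 form (no qop): the client
# hashes the password together with the server's nonce, the method and
# the URI, so the password itself never crosses the network.
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, realm, password, nonce, method, uri):
    ha1 = md5_hex(f"{username}:{realm}:{password}")  # the secret part
    ha2 = md5_hex(f"{method}:{uri}")                 # the request part
    return md5_hex(f"{ha1}:{nonce}:{ha2}")           # value sent to the server

# The server, knowing the same password, recomputes the value and
# compares; a match proves the client knows the password.
client = digest_response("radu", "intranet", "s3cret", "abc123", "GET", "/mail")
server = digest_response("radu", "intranet", "s3cret", "abc123", "GET", "/mail")
print(client == server)  # True
```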

The basic authentication scheme is not considered a secure method of user authentication, since the username and password are passed over the network as plaintext. While digest authentication is better than basic authentication, it is not infallible, and much stronger methods are available in systems like Kerberos and public-key based methods.


One-time passwords

This is a special case of basic authentication where the password changes every time a user authenticates to a service, and none of the passwords are reusable. When doing OTP authentication[1][2][8], the server retains the correct passwords in a secure index, so that when a user authenticates himself, only the next unused password is valid. This protects the authentication process from replay attacks, in which an eavesdropper who has recorded previous network traffic and discovered the username/password pair attempts to log in to the protected service using the stolen credentials.

There are two entities in the operation of the one-time password system. The generator must produce the appropriate one-time password from the user's secret pass-phrase and from information provided in the challenge from the server. The server must send a challenge, which includes the appropriate generation parameters, to the generator. The server must then verify the one-time password, store it, and check that it corresponds to the expected sequence number.

This requires that the server not contain any compromising secret information: the seed, sequence number, and last used key are all public data, and they cannot compromise the system, given that the secure hash function used to generate the password sequence cannot be inverted (i.e. an attacker cannot calculate the original value from the hash value).

The OTP system generator passes the user's secret string, along with the seed received from the server as part of the challenge, through multiple iterations of a secure hash function, which produces a one-time password. After each successful authentication, the number of secure hash function iterations is reduced by one, which generates a sequence of unique passwords. The server verifies the one-time password received from the generator by computing the secure hash function once and comparing the result with the previously accepted one-time password.

The generator, on the other hand, must be reliable and secure, as it contains the secret generation key, with which it computes the required number of hashes to generate the correct password.
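The hash-chain mechanism described above can be sketched in a few lines (an illustrative sketch in the spirit of the S/KEY scheme; the secret, seed, and iteration counts are made up):

```python
# Hash-chain one-time passwords: the generator hashes secret+seed N
# times; each successful login decrements N. The server verifies a
# candidate by hashing it once more and comparing it with the last
# accepted password.
import hashlib

def otp(secret: str, seed: str, iterations: int) -> bytes:
    value = (secret + seed).encode()
    for _ in range(iterations):
        value = hashlib.sha1(value).digest()
    return value

secret, seed = "horse battery", "srv-seed-42"   # illustrative values
last_accepted = otp(secret, seed, 100)          # stored on the server
candidate = otp(secret, seed, 99)               # next one-time password
print(hashlib.sha1(candidate).digest() == last_accepted)  # True: accepted
```

Because the hash cannot be inverted, an eavesdropper who captures the candidate password still cannot compute the password for the next, lower iteration count.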

Biometric authentication

Biometric authentication uses biometric data to form a biometric template of the biological feature measured by the particular biometric method[3][4]. This could be the positions of curves and junctions of a fingerprint, the pigment structure of the iris, or the voiceprint of a certain sentence.

If the person can provide a good enough sample of the measured biometric, one that closely matches a previously recorded biometric template, the identity of the person being authenticated is considered verified.

The best known biometric identification method is the finger scan, while less common ones include iris scans, voiceprint, facial images and hand geometry scans.

Biometrics have a very interesting future, but they are not widely used today, for various reasons. A possible application of biometrics could be the replacement of conventional PIN codes on smart cards with finger scans. This could be achieved by integrating the card reader device and the finger scanner into a single system designed for smart card and biometric operations.


Password security

The purpose of a password is to be a secret combination of characters known only to one or a few people authorized to access a resource. Thus, it should not be a sequence easily guessed by others, such as a name, birth date or other information with some connection to real world entities. Instead a more or less randomly chosen sequence, using multiple different types of characters, is typically used.

Excluding attacks such as tricking someone to reveal a password, eavesdropping, or other types of social techniques, the typical way of cracking a password resolves to some form of brute force search. That is, an attacker repeatedly tries possible combinations until the right one is found.

In order to make it as difficult as possible for the attacker to figure the password out, we want to force the attacker to search through a large set of possible combinations of passwords.

Increasing the password space can be done in two ways: using a longer password, and including characters from a larger set of possible characters. For example, a password with 5 characters is easier to crack than one with 10, and a password with only letters is easier to break than one with both letters and numbers[5].

Table 1 displays a few examples of how long it would take to search through the full set of possible passwords, given the password length and the number of possible characters. We assume that a computer can try 1.5 million passwords per second.


                                      4 characters    8 characters    16 characters
Numbers only (10)                     <1 second       1 minute        210 years
Letters only (50)                     4 seconds       10 months       3.2*10^13 years
Letters and numbers (60)              9 seconds       3.5 years       6.0*10^14 years
Letters, numbers and
special characters (75)               21 seconds      21 years        2.1*10^16 years

Table 1. Time needed to break passwords
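The figures in Table 1 follow directly from dividing the size of the password space by the assumed guessing rate; the short sketch below reproduces a few of the cells:

```python
# Exhaustive-search time for a password space of charset_size**length
# candidates, at the 1.5 million guesses per second assumed in the text.
RATE = 1_500_000  # passwords tried per second (assumption from the text)
SECONDS_PER_YEAR = 3600 * 24 * 365

def crack_time_seconds(charset_size: int, length: int) -> float:
    """Seconds needed to try every password of the given length."""
    return charset_size ** length / RATE

print(crack_time_seconds(10, 8))                      # ~67 s, i.e. about 1 minute
print(crack_time_seconds(50, 16) / SECONDS_PER_YEAR)  # ~3.2e13 years
```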

Password storage

Excluding attacks where the attacker gets hold of the password unencrypted, a typical brute-force attack is performed against the password storage. Obviously, an attacker shouldn't be allowed to access it, but it can still happen. In fact, early UNIX implementations allowed anyone to read the encrypted password store, assuming it would be safe. This assumption was made based on the vast computing power required to crack a password. However, the assumption no longer holds given modern computers.

Because of the vulnerability of short passwords, various techniques are used to improve their security when stored. Using a salt is the most typical solution: a randomly generated string of characters is appended to the password before encryption. This also helps in that two identical passwords are hashed into two different values, because while the passwords are the same, their salts are not.
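The salting scheme can be sketched as follows (an illustrative sketch; real systems should prefer a dedicated, deliberately slow password hash such as bcrypt or scrypt over plain SHA-256):

```python
# Salted password storage: generate a random salt per user, hash
# password+salt, and store only the digest and the salt. Identical
# passwords then hash to different values.
import hashlib, os

def store_password(password: str):
    salt = os.urandom(16)  # random per-user salt
    digest = hashlib.sha256(password.encode() + salt).digest()
    return digest, salt    # store both; never the plaintext password

def verify_password(password: str, digest: bytes, salt: bytes) -> bool:
    return hashlib.sha256(password.encode() + salt).digest() == digest

d1, s1 = store_password("hunter2")
d2, s2 = store_password("hunter2")
print(d1 != d2)                            # True: same password, different salts
print(verify_password("hunter2", d1, s1))  # True
```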

Two-factor authentication

The primary way of authenticating on the Internet today is using a username and a password. For most uses, this is a solution good enough to provide adequate security, especially given a cryptographically strong password.

However, no matter how secure the password is, if an attacker gets ahold of it, he can access the resource it was protecting. For high security applications, this is not acceptable.

Therefore, there is a need for a higher level of security. One way to achieve this is through so-called two-factor authentication, where the user needs to provide not just one security credential (e.g. a password), but two. This raises the bar, as the attacker needs to get hold of both security credentials.

For two-factor authentication[6] to be truly effective, two different kinds of credentials should be used. A password is something you know. Thus, the second credential should be something else, for example, something you have or something you are.

Also, using something you have for authentication makes it much harder for an attacker to compromise the security of the system. Not only does the attacker need to get hold of the user's password, but they also need to physically get hold of something the user is in possession of.

There are many kinds of credentials which are based on something you have. The most common is perhaps a smart card, a pocket-sized card with embedded integrated circuits which, in combination with a card reader, can store and process a digital certificate used to authenticate the user carrying it. Another common type of something you have is a mobile phone. Much like a smart card, it can carry a digital certificate, or it can be used simply to receive an SMS[7]. A third common type is the one-time password token, which generates a pseudo-random number that changes at predetermined intervals; part of the generated code identifies the device used.
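The interval-based token codes mentioned above are standardised today as TOTP; a minimal sketch in the spirit of RFC 6238 (the shared key below is made up) looks like this:

```python
# Time-based one-time code (TOTP, RFC 6238 style): token and server share
# a secret key and derive a short code from the current 30-second time
# step, so the displayed number changes at predetermined intervals.
import hashlib, hmac, struct

def totp(key: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", timestamp // step)           # time-step index
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both token and server compute the same code for the same time step:
print(totp(b"shared-secret", 1_700_000_000))
```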

What provides an arguably even higher level of security is requiring authentication based on something you are. This could be a fingerprint scan, a retina scan, or any kind of biometric.

In order to break this kind of authentication, the attacker needs to either get hold of an actual part of the user's body (as creepy as that sounds), force the user to authenticate, or somehow be able to fool the biometric scanner. While the first and second alternatives can obviously be done, they come with a much greater risk for the attacker. Therefore, the most likely type of attack is fooling the sensor. How hard this is depends on which biometric is used and the quality of the sensor. In particular, some fingerprint scanners have been proven to be quite easy to fool, while other scanners have been proven to be very reliable.


Host-based authentication systems

In the traditional, host-based application environment, such as Unix distributions and Windows, authentication is simple: users are authenticated by the host operating system[9], which checks its database of correct authenticators as they are presented for entry. Users are granted access to applications, operations, and data (e.g. locations on the system) based on their identities, as identified by the system. Multiple applications residing on a single large host system can share authentication information, such as login/password, through the host's operating system, meaning that access to the host system can be managed cheaply and centrally (in a non-distributed architecture).

Audit trails can be maintained centrally and can easily be correlated between applications, because all the applications reside on the same operating system. Each user has a single identity across anything relating to the host, including applications. Because of this, audit records that record an individual's actions within the applications housed on the same host can easily be correlated with audits that reflect other actions taken by the same user in other applications. In a similar fashion, security and access management is straightforward, in most cases meaning maintaining a single user database across applications and various authorization tables specific to the different applications.

With the rise of network-distributed applications[10] and the increase in the number of disparate systems inside an enterprise, authentication and related issues have become more complex. Each different application or service may require separate authentication for its users.

Users that have access to and need to use multiple applications or systems may need to remember a huge list of different electronic identities and authenticators (passwords). End users more than likely are required to authenticate themselves multiple times during a single day, and must also change their passwords when they eventually expire.

For the system administrator, the situation is much the same. As systems migrate to a more distributed infrastructure, administrators must maintain authentication information across multiple platforms. Maintaining user authentication information in multiple different contexts takes a lot of their time, between creating, deleting, and issuing authenticators across the organization. There are some tools for auto-management that can reduce the effort involved in managing user identities and passwords across multiple systems, but they are often just as difficult to manage, and maintaining said tools is as difficult as, if not more difficult than, maintaining the systems they support.

More than that, system administrators whose responsibilities include managing electronic security for their organizations have to do much more to ensure that audits from disparate systems can be reconciled, whether or not the systems have anything in place to correlate these audits. One user can perform actions across multiple different domains and systems, which may be difficult to correlate if the systems involved do not agree on the user's identity or have no way to identify the same user across all the systems (as the username cannot be relied on).

Additionally, the abundance of electronic identities[11] can be devastating to overall system security. Once users have ten or more different authenticators for the different applications and systems, many end-users will resort to using insecure but easily remembered passwords, which is bad because such passwords can be easily broken by an attacker. Alternatively, they will resort to keeping their authentication information in an insecure fashion, for example in a plain-text file on their computer, or on lists stuck to their monitors. Enforcing any reasonable security policy in such an environment can be very difficult, maybe even impossible.

As the leaders of the organization become aware of the serious security problems that come with the increase in electronic identities, and end-users start complaining about the immense amount of work they have to do to ensure that they can actually do what they're supposed to, the pressure on system administrators increases. They have to investigate and provide technical solutions to these problems, usually resulting in the demand for a single sign-on solution.


Single sign-on

Single sign-on is a paradigm in which, by utilising authentication, authorization and auditing functions, as well as protocols for the dissemination of access control information, the client is provided with universal identification after a single authentication event.

A single sign-on system builds on the notion that one special server holds the responsibility of authenticating users to a number of different sites or services. From a user's perspective, he or she authenticates only once, to one single server, and then gains access to multiple sites.

The fundamental problem a single sign-on solution solves is the problem of forwarding the authentication credential from one service to another in a secure manner.

In the terminology of SAML[16] (Security Assertion Markup Language), a service provider is a site which provides some functionality or service to a user. This could be a webmail client, an online newspaper, an online banking system, or just about any kind of site there is on the web. The special server responsible for authenticating the user to these service providers is called an identity provider.

To provide a typical scenario of how single sign-on systems usually work, we look at a user who wants to access a certain web page, through a service provider. In this case, the service provider requires the user to authenticate in order to function properly. The service provider asks the identity provider to authenticate the user in question through a request which is completely transparent to the user.

This is delegated to the identity provider, thus making it responsible for performing the actual authentication. If the identity provider does not currently know who the user is, meaning that no prior session was established between the user and the identity provider, it forces the user to provide some suitable security credentials, typically a username and password. If the user enters correct security credentials, a session can be established between the two participants, and the identity provider can return the identity of the user to the service provider.

The service provider can therefore continue serving the user with the previously requested protected resource.

However, if another service provider has recently asked about the identity of the user, the identity provider has already established a session for that user. In this case, there is no reason to ask the user for security credentials again, and the identity provider can simply return the identity of the user to the service provider without any further ‘questions’ for the user. This is the main benefit of single sign-on from a usability perspective.
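The session-reuse behaviour described above can be sketched with a toy identity provider (purely illustrative; the class, names, and in-memory user database are made up for this sketch):

```python
# Toy identity provider (IdP): service providers delegate authentication
# to it; once a session exists for a browser, the identity is returned
# without prompting the user again -- the essence of single sign-on.
class IdentityProvider:
    def __init__(self, user_db):
        self.user_db = user_db   # username -> password (demo only)
        self.sessions = {}       # session token -> authenticated username

    def authenticate(self, token, credentials=None):
        if token in self.sessions:          # prior session: no prompt needed
            return self.sessions[token]
        if credentials is not None:
            username, password = credentials
            if self.user_db.get(username) == password:
                self.sessions[token] = username
                return username
        return None  # service provider must redirect the user to a login form

idp = IdentityProvider({"radu": "s3cret"})
print(idp.authenticate("browser-1"))                      # None: login required
print(idp.authenticate("browser-1", ("radu", "s3cret")))  # radu: session created
print(idp.authenticate("browser-1"))                      # radu: no new prompt
```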

There are various ways of accomplishing single sign-on functionality on a network of computers. The most prominent way of obtaining SSO capabilities is by using a client-agent-server architecture.

The client is a distrusted component, capable of completing the authentication protocols and storing the access control ticket issued to it by the server if the authentication was successful. A typical client would be a web browser acting on behalf of the user as a user-agent.

The agent is situated near the protected service, acting as a gatekeeper, consulting the server for authentication and authorization decisions, as well as supplying it with audit data. The agent is a small piece of code that effectively can allow or deny access to resource requests based on the authorization information provided by the server and forward the acceptable ones to the service and the responses back to the client.

The most important part of the system is the server, which provides the back-end processing capabilities, with support for different authentication methods, user databases, policy evaluation capabilities, audit logging, and processing functions.

Differentiating between single sign on solutions

Single sign-on is something large organizations need in order to survive these trying times.

Opinions differ, but to most, “single sign-on” means that each user in the organization has only one user id and associated password. Others think that “single sign-on” means that whenever a user logs in, a generic interface tailored to them is presented, which allows them access throughout the applications. Yet another interpretation is that “single sign-on” means presenting users with only one authentication screen that requires them to input authentication information during a single work day.

Before starting to design a single sign-on solution, the system administrators must first determine what “single sign-on” means for their organization. Is single sign-on a solution for enhanced security and auditability, or is it more for the sake of the end-users, who no longer want to maintain an immense number of electronic identities? How far has this problem grown in the existing systems? How many of those systems, which already host multiple applications and, most likely, multiple authenticators for individual users, are willing to participate in a single sign-on solution? Are the users mobile, or do they work from fixed locations? Do users need access to multiple different applications and systems at the same time? If yes, to what extent?

When organizations investigate single sign-on solutions, it is critical to decide whether they need a method by which to reduce the number of authenticators each end-user must remember and maintain, or a method by which to reduce the number of authentication operations a user must perform in a single work day[12].


For the first case, the ultimate goal is the creation of a “Single Authentication Realm”[12], or SAR, where each user has only one authenticator (user id and password). For the second case, the ultimate goal is the creation of an actual Single Sign-on, or SSO, mechanism, where each user authenticates only once during each work session. As far as a SAR is concerned, individual users may be required to authenticate multiple times during a given work day, typically once for each different application or system accessed. With SSO, each user performs at most one authentication operation during each work day, although multiple authentication operations may be performed on behalf of the user by the SSO software during the course of the user's work day.

Building a SAR that can be used throughout the organization is not necessarily a prerequisite for an SSO solution, and in many cases a SAR may be sufficient to meet an organization's needs even without implementing a full SSO solution. A SAR can eliminate many of the security risks, mostly those related to the growth in electronic identities. With only one set of authentication information to remember and maintain, users are more likely to be mindful of their now single username and password and more likely to protect it. A SAR can also eliminate many of the system administration problems posed by the same growth in user identities. As each user has only one identity used for authentication, reconciling and putting together audit trails across the systems participating in the SAR is easy, because they can be correlated through the same identity, making it almost as easy as reconciling audit trails on a single host. User identity management (creating and removing authentication information for individual users) is also reduced to managing the data associated with the SAR. A SAR can also reduce the overall costs that come with the management of critical systems by reducing the number of times individual users must be initially certified. Once a SAR is in place, each individual need only be identified by the system staff once, therefore making strong certification policies easily enforceable with less time investment by both end-users and system administrators.

However, a SAR may not be sufficient for a large organization’s user population and their demands. While each user has only one authenticator (e.g. a login/password pair) to remember and maintain, users may still complain about the fact that they need to authenticate in each application and system that they use, even though they already authenticated earlier in that work session.

A SAR may also increase the possibility of leaking secure authentication information on possibly insecure networks, the end result being an overall decrease in system security.

Although each user needs to remember only one combination to access anything inside the organization, the likelihood that this combination can be compromised by an attacker observing either the user’s activities or the network is increased.

If these issues are important to the organization, they can be solved by deploying a full SSO solution. Such a solution can take various forms: a well thought-out SAR that is able to issue reusable credentials recognized by all participating systems and services; a redesign of the systems and applications so that they can use a third-party authentication system to identify users; an SSO application acting as a proxy for authentication operations, so that previously authenticated users gain access to the systems; or some mixture of all three. An SSO solution, as opposed to a SAR, can have a positive impact on user satisfaction and can improve the overall security rating of the systems involved.

Deploying and maintaining an SSO solution can be very difficult for the system administrators, and expensive, depending on the approach chosen. A big problem is making legacy applications or application environments work with an SSO solution, which may be difficult or impossible. Cost also needs to be a major consideration, as it may limit the extent to which an SSO mechanism can be deployed across the whole organization.


Application-dependent authorization

Both SAR and SSO solutions are concerned with authentication, rather than authorization. Authentication uniquely identifies an individual user and represents them in electronic form. Authorization happens after authentication and determines whether a user is allowed to access a certain system or application.

Authentication can be an application-independent process because a user’s identity is unique to them, regardless of what role that specific individual has or what operation they are trying to perform. Authorization depends on the application, because different applications need to control access differently from one another. Authorization comes with its own challenges for both application programmers and system administrators, which are not addressed by SAR or SSO solutions, although those solutions can be adapted to support it.

Authentication and authorization are closely related to one another. For adequate authorization decisions, applications must have access to reliable authentication information. An application which cannot uniquely identify its user cannot really make appropriate authorization decisions. In the ideal case, a SAR can provide the basis for building an authorization infrastructure that can be used within an organization.
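The split described above can be illustrated with a minimal sketch: one application-independent authentication check shared by all systems, and a separate, per-application authorization check. All names here (authenticate, authorize_payroll, the sample tables) are hypothetical illustrations, not part of any real SAR or SSO product.

```python
# Sketch: application-independent authentication vs. per-application
# authorization. The tables and function names are hypothetical.

USERS = {"njc": "s3cret"}            # the single authentication realm
PAYROLL_ROLES = {"njc": {"viewer"}}  # authorization data owned by one app

def authenticate(user, password):
    """Application-independent: establishes *who* the user is."""
    return USERS.get(user) == password

def authorize_payroll(user, action):
    """Application-dependent: decides what this app lets the user do."""
    roles = PAYROLL_ROLES.get(user, set())
    return action == "view" and "viewer" in roles
```

Note that authorize_payroll can only produce a meaningful decision once authenticate has reliably established the user’s identity, which is exactly the dependency described above.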

Options for establishing a SAR

We are going to explore some authentication options, each having its own strengths and weaknesses. Any one or more of these options may need to be investigated or used as precursor steps in the development of an enterprise-wide SAR, depending on what the organization needs.


Policy-based SAR

Dictating by organizational policy that the users will have to use only one user id and password for all the systems and applications they access is probably the simplest, but certainly not the most robust or effective, option. Basically, the organization achieves a SAR by establishing an institutional policy that forbids having multiple authenticators.

“From experience, such restrictive regulations are virtually impossible to enforce in any but the smallest and most tightly managed organizations. As the number of users increases, and particularly as there are more system administrators who must cooperate with such a policy in order to make it enforceable, the labor required to enforce the policy increases beyond manageable limits.”[12]

More than that, it is borderline impossible to enforce such a restrictive policy across different systems that have their own security rules without sacrificing password security. Enforcement would require either allowing administrators to have access to users’ individual credentials, or having the systems involved somehow compare credentials on a regular basis so that violations of the policy are caught.

Another big problem with a successful implementation of such a restrictive policy is that it makes the systems involved much more vulnerable, exposing the user credentials from multiple points in the organization. Since all the participating systems must replicate user authentication information, the security of every system is brought down to that of the least secure system in the group. In addition, there is no single location in which compromised authenticators belonging to a user can be modified or revoked: it takes a lot of time and money for system administrators to fix security breaches, and users can forget to update one or more compromised authenticators, leaving themselves open to continued attacks. Despite its weaknesses, this approach can be made to work under certain circumstances, although it is strongly discouraged in favor of other options.

Network-based authentication mechanisms

A network-based authentication service can be a good basis for creating a SAR. The system administrator has access to a number of different services, ranging in complexity from simple network-distributed databases to complex third-party authentication protocols based on various encryption technologies. Each of these approaches has its own strengths and weaknesses, and any of them can reasonably be viewed as a possibility for a partial or complete SAR solution, depending on the requirements of the organization.

Network-distributed databases are widely used as a network-based solution. By providing access across the network to a single authentication database, cooperating systems share user credentials, making a SAR possible. Network extensions of the well-known Unix passwd table mechanism are some of the most common implementations of this approach.

Hesiod

One such approach, Hesiod[17], is a mechanism for distributing passwd table information, or for that matter any textual information, across a network, extending the traditional domain name service (DNS). In the Hesiod approach, authentication credentials initially stored in a local Unix passwd table (user ids, encrypted passwords, etc.) are stored in extended DNS records and made available by extending the normal domain name resolution protocol. DNS servers with Hesiod support carry, in addition to the traditional DNS records of class “IN”, records of class “HS”. Class “HS” records may include records of type “TXT”, containing text strings indexed by other text strings.

Digital Equipment Corporation’s Ultrix operating system provided native support for Hesiod for distributing credentials within the Unix passwd tables. In this implementation, Hesiod adds its own pseudo-domains, containing class “HS”, type “TXT” records that can include user information in the standard Unix passwd and group table formats.

“The primary DNS server for a given domain, “team.foo.org” for example, publishes not only the authoritative “team.foo.org” DNS information, but also authoritative Hesiod information for the pseudo-domains “passwd.team.foo.org” and “group.team.foo.org”. These pseudo-domain tables contain passwd and group table information. For example, the “passwd.team.foo.org” domain table might include records of the form shown in Table 2 below, that support access to user passwd table entries indexed on both login id and uid number.”[12]

njc HS TXT “njc:PASSWORDCRYPT:105:4:Joule:/home/njc:/bin/bash”

105 HS TXT “njc:PASSWORDCRYPT:105:4:Joule:/home/njc:/bin/bash”

Table 2. Domain table records

This information, along with the standard DNS information, is made available across the network via the standard name resolution protocol, with some added functionality.

Hesiod implements an extension of the standard DNS API, with replacements for the standard resolver routines, thus allowing applications to look up authentication information in the form of passwd table entries.
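The payload of a Hesiod “HS TXT” record is simply a standard Unix passwd-format line, as shown in Table 2. The following sketch parses such a record into its fields; the record string is taken from the table, and the parsing function is a hypothetical illustration (a real client would first retrieve the TXT record through the Hesiod resolver routines).

```python
# Sketch: parsing the passwd-format payload of a Hesiod "HS TXT" record.
# Field names follow the standard Unix passwd layout.

def parse_hesiod_passwd(txt):
    fields = txt.split(":")
    keys = ("login", "crypt", "uid", "gid", "gecos", "home", "shell")
    entry = dict(zip(keys, fields))
    entry["uid"] = int(entry["uid"])  # numeric ids, as in /etc/passwd
    entry["gid"] = int(entry["gid"])
    return entry

# The record from Table 2, indexed in the pseudo-domain by both "njc" and "105".
record = "njc:PASSWORDCRYPT:105:4:Joule:/home/njc:/bin/bash"
entry = parse_hesiod_passwd(record)
```

The same parsing applies whether the record was looked up by login id or by uid number, since both index keys point at the same TXT payload.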

This ability to index credentials into passwd table entries within a Hesiod server’s primary Hesiod tables was used to provide native support within the operating system shipped with Hesiod, where it served as a primary authentication mechanism.

The Hesiod project has some strengths, as well as weaknesses. Besides the implementations of Hesiod being open-source, it has been implemented as an extension to the DNS, which is a well-standardized protocol. As such, Hesiod meets standards that few similar systems achieve. Being structured similarly to regular DNS tables, Hesiod tables are easy to work with, and can be distributed to multiple Hesiod servers using the same mechanism used to synchronize DNS tables. Hesiod information can be published by multiple servers, as is the case with DNS, and some of those may act as caching servers to enhance Hesiod performance when looking up entries.

Hesiod is not a perfect solution to the SAR problem, as it has some big issues. There is only one vendor providing native operating system support for Hesiod: Digital Equipment Corporation. Though it is possible to modify the open Hesiod API to work with any operating system or application, thus creating a SAR based on Hesiod, it can be very difficult to introduce Hesiod support into existing applications, especially legacy or non-open-source applications.

More than that, Hesiod does not provide any mechanism for securing the information stored in Hesiod tables. Hesiod tables, like DNS tables, are stored in plain text on Hesiod servers, and Hesiod information is sent over the network in plain text. Hesiod servers do not enforce limits on which clients can access Hesiod information. Taking these security downsides into consideration, Hesiod can be unsuitable for secure credential information, and therefore unsuitable as the basis for a SAR, depending on what the organization decides.


NIS

A different network-based authentication database that can serve as a means of developing a SAR is the Network Information Service (NIS)[18]. NIS provides functionality closely related to that of Hesiod, but uses a different network infrastructure. NIS was originally developed as a Sun Microsystems initiative; source code access to Sun’s ONC RPC interface has led to different vendors offering support for this same mechanism.

In the NIS environment, cooperating machines are bundled into domains. Although they frequently coincide with DNS domains, a NIS domain is completely different from a DNS domain. Machines in multiple different DNS domains can be members of the same NIS domain, and vice-versa.

NIS provides a mechanism for distributing database tables from a set of database servers to clients across a network. The database tables distributed by NIS can include authentication credentials, usually stored in the standard Unix passwd and group table format.

That being said, NIS uses a completely different mechanism to distribute information.

NIS relies on Sun’s ONC RPC mechanism, instead of the pre-existing standard DNS mechanism. Client machines within a given NIS domain use NIS-specific RPC calls to perform search, retrieval, and update operations on NIS data tables stored on NIS servers. Client binding is used to direct NIS clients to their respective domains’ NIS servers. NIS client machines are aware of the NIS servers in their environment, and can make appropriate RPC calls to their respective servers.

NIS takes the place of the traditional Unix passwd and group table mechanism, if it is to be the basis for implementing a SAR. NIS client systems are typically equipped with NIS-aware versions of the standard Unix routines, allowing the use of NIS for authentication in a manner transparent to applications written to use native Unix authentication mechanisms.

Within a given NIS domain, there may be more than one NIS server configured. If that is the case, one NIS server acts as the master, while the others act as slaves. Both NIS master servers and NIS slave servers can respond to RPC requests and can be used for load balancing.

NIS suffers from drawbacks when viewed as a solution for a SAR implementation. While NIS is supported natively by a wider variety of operating systems and application platforms than Hesiod, it suffers from similar security problems. NIS implementations still expose secure authentication credentials on a most likely insecure network, and can distribute passwd table information to systems outside a given NIS domain. This is usually a security breach, since user authentication information, especially encrypted passwords, can assist attackers in attacking the authentication system, through brute-force or other cryptographic attacks.

NIS+

Sun Microsystems developed NIS+, a follow-on to NIS (version 2)[19], to address concerns regarding the NIS and Hesiod security models. NIS+ uses a different network distribution mechanism, representing a very different approach to distributing passwd table information over the network, while providing some of the same features as Hesiod and NIS.

NIS+ relies on a modified RPC mechanism called “Secure RPC”, used for the distribution of data on the network, and it is designed to solve the problem of taking the traditional Unix passwd table mechanism into a networked environment. In NIS, clients can call any available NIS server and execute RPCs to get data available on the server, thus opening the possibility of unauthorized access to secure data. In NIS+, clients and servers share a secret encryption key, so that NIS+ clients and NIS+ servers can authenticate themselves to each other before accessing information in any NIS+ tables. This allows Secure RPC based applications to use encryption on network conversations, limiting or even eliminating the possibility of secure information being sent in plain text across an insecure network.

This being said, NIS+ suffers from some serious drawbacks as a SAR solution for an organization in a network environment. As opposed to NIS, the NIS+ protocols are not open source, so they cannot be implemented across an array of applications. Therefore, NIS+ clients are only available natively on Sun platforms, with the mention that some reverse engineering has been performed for some Unix distributions. NIS+ servers can be deployed in “NIS compatibility mode” to enable support for other clients, but running the servers as such basically disables any security improvements offered by NIS+, making NIS+ in compatibility mode no more secure than a traditional NIS server. “Further, NIS+ has been demonstrated to suffer from serious performance degradation and stability problems in very large environments. NIS+ can be used effectively as a replacement for NIS or YP in small, homogeneous environments, but does not scale well to large applications and is not appropriate for use in heterogeneous computing environments.”[12]

RADIUS

Hesiod, NIS and NIS+ can be used as a basis for a SAR, relying on the distribution of a single database of authentication information to multiple systems. While this approach has the advantage of interoperability, since many existing applications are designed around the traditional Unix security model, other approaches are available.

The RADIUS (Remote Authentication Dial In User Service) authentication mechanism[20] is one such approach, which performs authentication by proxy. RADIUS authentication is achieved by clients communicating with a central RADIUS server and passing users’ authenticators through to it; the server looks them up, determines their validity, and accepts or rejects the clients’ connections depending on the results of the look-up. Client systems, which fall under the RADIUS authentication scheme, do not need to have access to the secure authentication tables, because the authentication is actually performed by proxy, by the RADIUS server. Instead of distributing the authentication database directly to client machines, RADIUS passes user credentials to the RADIUS server for validation.
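The proxy flow described above can be sketched as follows. The access server holds no credential table at all; it merely forwards the authenticator to a central server and acts on the verdict. The class and method names are hypothetical, and the real RADIUS protocol uses UDP packets with attribute encoding rather than in-process calls.

```python
# Sketch of authentication by proxy in the RADIUS style: the access
# server (the "RADIUS client") never sees the credential table.
import hashlib

class RadiusServer:
    """Holds the only copy of the authentication database."""
    def __init__(self, table):
        self._table = table  # user -> sha256 hex digest of password

    def check(self, user, password):
        digest = hashlib.sha256(password.encode()).hexdigest()
        return self._table.get(user) == digest

class AccessServer:
    """Proxy: no local credentials, just a pointer to the central server."""
    def __init__(self, radius):
        self._radius = radius

    def login(self, user, password):
        # Forward the authenticator and act on the accept/reject verdict.
        ok = self._radius.check(user, password)
        return "Access-Accept" if ok else "Access-Reject"

server = RadiusServer({"njc": hashlib.sha256(b"s3cret").hexdigest()})
nas = AccessServer(server)
```

Note how the security-sensitive table lives only inside RadiusServer; compromising the access server alone reveals no stored credentials, which is the main attraction of the proxy design.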

“RADIUS has been implemented by a number of vendors of remote-access hardware (terminal servers, routers and other network devices) as a cost-effective mechanism for providing user authentication from within embedded applications.”​[12]

RADIUS suffers from some security flaws, if it were to be considered as the base for an organization’s SAR. Although the communication between a RADIUS client and a RADIUS server is hardened, the connection between a RADIUS client and the user that tries to authenticate is most of the time unencrypted. This being the case, uses of RADIUS have been reduced to applications for which the connection between the authenticating user and the RADIUS client is expected to be invulnerable to attacks.

One-Time Passwords

A different set of network authentication mechanisms which should be taken into account is the group of so-called OTP or One Time Password systems, mostly because they address the problems that arise in developing a SAR solution. From software-based approaches (like S/Key[1]) to hardware-based solutions (so-called “smart cards”[2]), OTP solutions counter the problem of multiple identities using another type of solution.


Instead of trying to reduce the number of authenticators across systems and applications by providing a single authenticator to be used for all systems, OTP solutions secure authenticators by changing them each time they are used. The usual OTP scheme involves a user having as many passwords as they have authenticated sessions. Each time a user’s credential is used, it is immediately invalidated, and a new authenticator is issued for that specific user.

Provided that the mechanism by which new authenticators are issued to end-users is unpredictable enough that observers cannot easily break it, OTP schemes eliminate the security problems involved in sending plaintext credentials over an insecure network: if an attacker learns a user’s identity and credentials, by the time he tries to use them to access the system, they no longer work, as they have been invalidated and changed.

Since most OTP systems have a central authority that single-handedly creates, invalidates and issues authenticators that can be used a single time, they can be considered as providers of SAR-like functionality. If such OTP systems maintain consistency of user identities, they can be used to develop a security system similar to a SAR.

Software-based OTP mechanisms, like S/Key, typically pre-compute and assign a sequence of authenticators to each user, which is basically that user’s list of upcoming passwords. Each user must periodically query the S/Key service for a new list of future passwords, and must use the list when trying to authenticate on the systems with the mechanism in place. Just like the users, the systems must make themselves accessible to the S/Key service, so that used authenticators may be invalidated and replaced with the next valid value.
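The core idea behind S/Key-style password lists is a hash chain: the server stores the n-th iterate of a hash over a seed, the user presents the (n−1)-th, and the server accepts only if hashing the presented value once yields the stored value, after which the presented value becomes the new stored value, invalidating the used password. The sketch below simplifies the real S/Key protocol (which uses MD4 and a six-word encoding); hash choice and class names here are illustrative assumptions.

```python
# Sketch of the S/Key one-time-password idea as a hash chain.
import hashlib

def h(value: bytes) -> bytes:
    return hashlib.sha256(value).digest()

def make_chain(seed: bytes, n: int):
    """Return [H^1(seed), ..., H^n(seed)]: the user keeps the earlier
    entries as future passwords; the server stores only the last one."""
    chain, v = [], seed
    for _ in range(n):
        v = h(v)
        chain.append(v)
    return chain

class SkeyServer:
    def __init__(self, last_hash: bytes):
        self._current = last_hash

    def verify(self, otp: bytes) -> bool:
        if h(otp) == self._current:
            self._current = otp   # the used password is now invalid
            return True
        return False

chain = make_chain(b"seed", 4)
server = SkeyServer(chain[-1])
```

Because the chain is consumed backwards, an eavesdropper who captures a used password cannot derive the next one without inverting the hash, which is exactly the property the prose above relies on.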

The security of such list-based OTP solutions rests on the security of the mechanisms that are used to send the authenticators to the systems and users. A user who requests a new list over an unsecured network channel, prints that list on an unsecured printer (or leaves it in plain sight on the desk), or stores the list of future passwords in some other insecure fashion invalidates the security of the entire list. However, if secure channels are set up to manipulate authenticator lists, list-based OTP can improve the security of an organization.

Hardware-based OTP solutions eliminate the need for future-authenticator lists, and as such the need to send and store them securely, and instead generate the needed credential as it is requested by the user. Each user is issued an electronic device able to calculate a cipher which is used as the owner’s authenticator. Usually, the cipher is based either on the current time, or on a function of some challenge presented to the user when he tries to start an authentication operation, or on a function of both.

Provided that the ciphers the smart cards generate are secure enough not to be easily reproducible, these systems can provide the same kind of security advantages that software OTP systems provide. These systems also offer the advantage of requiring little change in the behaviour of end-users, as they do not need to operate other systems in order to authenticate themselves; they just use the new device to produce their credentials instead of a password.
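The time-based variant of such a device can be sketched in the style of RFC 6238 TOTP: the token and the verifier share a secret key, and both compute an HMAC over the current 30-second time step, truncated to a short decimal code. This is a simplified illustration of the mechanism, not a certified implementation.

```python
# Sketch of a time-based hardware-token code (RFC 6238 TOTP style).
import hmac, hashlib, struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", unix_time // step)   # 8-byte time step
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, code: str, unix_time: int) -> bool:
    """The verifier accepts a code iff it matches the current time step."""
    return hmac.compare_digest(code, totp(secret, unix_time))
```

Since both sides compute the same function of the shared key and the clock, a code intercepted by an observer becomes useless as soon as the 30-second step advances, mirroring the invalidation property of list-based OTP without any list distribution.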

That being said, the costs of hardware-based OTP are typically much higher than the costs of software-based OTP. Especially for large organizations, the cost of acquiring and issuing thousands of smart cards may be far too big, and in addition, the cost of maintaining and replacing malfunctioning or stolen authentication devices can add up to a huge sum over time. Although software-based OTP can be deployed at less cost, even within large organizations, users need to be re-trained and taught the new security rules, which leads to a whole new set of costs and may impact productivity greatly.

The difference between SAR solutions based on distributed databases or proxy solutions such as RADIUS, and OTP solutions, is that the latter rely on “something the user has”, rather than “something the user knows”, for establishing strong authentication. These systems can be better or worse in terms of overall security, based on how well the users take care of their password lists or smart cards. Rather than being a replacement, OTP mechanisms are typically deployed alongside classic authentication systems.

Public key infrastructure

Public key infrastructure[21], or PKI, is arguably a much more secure approach to authentication, which uses digital certificates (most commonly following the X.509 standard).

Authentication is done through the use of a digital certificate, which is sent to the system or application the user is trying to connect to. The digital certificate is no more than data identifying the certificate’s owner, and it typically includes a digital signature by the certificate’s issuer, a Certificate Authority (CA), over the owner’s public key. Presented with this digital certificate, a cooperating system can verify that the certificate belongs to whom it claims, and the CA can be queried to verify that a particular certificate is valid. CAs must each have their own public key/private key pairs and must act as key distribution agents, providing a central repository for the retrieval of public key information. Participating systems and applications then “trust” particular certificate authorities, accepting valid certificates issued by those CAs, usually on the basis of the CAs being known to use trusted methods to identify individuals before issuing them certificates.
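The CA signature mechanism can be illustrated with textbook RSA on deliberately tiny parameters: the CA signs a digest of the certificate body with its private exponent, and anyone holding the CA’s public exponent can verify the signature. The parameters (p=61, q=53) are the classic toy example and offer no security; real X.509 uses full-size keys and padded signature schemes.

```python
# Toy illustration of a CA signing a certificate body. Textbook RSA with
# tiny primes p=61, q=53: n=3233, e=17, d=2753. Illustration only.
import hashlib

CA_N, CA_E, CA_D = 3233, 17, 2753

def digest(data: bytes) -> int:
    # Reduce a SHA-256 digest into the toy RSA message space [0, n).
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % CA_N

def ca_sign(cert_body: bytes) -> int:
    return pow(digest(cert_body), CA_D, CA_N)            # CA's private key

def verify(cert_body: bytes, signature: int) -> bool:
    return pow(signature, CA_E, CA_N) == digest(cert_body)  # CA's public key

cert = b"owner=njc;pubkey=..."   # hypothetical certificate body
sig = ca_sign(cert)
```

Verification needs only the CA’s public values (n, e), which is why a system can accept certificates from users it has never seen before, as long as it trusts the issuing CA.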

Public key certificates offer a number of advantages as an authentication mechanism, providing high security through the use of public key cryptography. It is practically impossible to forge a digital certificate, because the attacker would need to have both the user’s private key and the CA’s private key. Certificates offer some of the advantages of hardware-based OTP, meaning that the authentication is done based on a user’s certificate, rather than his password.

Digital certificates also provide a natural means of establishing secure communications over an insecure network. Once a user’s public key certificate has been exchanged and validated as proof of authentication, a shared encryption mechanism is available to both the authenticating user and the system to which he or she is authenticating. The user may encrypt data with his or her private key to ensure the authenticity of transmissions, since only the user’s public key can decrypt such a message. The systems that take part in this kind of communication may encrypt data with the user’s public key, ensuring that only that specific user can decrypt the messages they send.

That being said, public key certificate systems are not the perfect solution for developing a SAR for an organization. “In order to deploy a SAR based on public key certificates, an organization must first develop and deploy a rather complicated set of support services, a public key management infrastructure, which may include, in addition to one or more local Certificate Authorities, a key escrow system (for the secure retrieval of private key information) and mechanisms for updating, invalidating and re-publishing public keys. Security of the various portions of the public key infrastructure become critical, since compromise or impersonation of a part of the public key infrastructure can directly undermine the security of any authentication mechanisms designed around it.”[12]

There is also a problem in using PKI certificates in environments where users are highly mobile. Certificates cannot be memorized or re-entered by their owners, and so must be stored electronically. In non-mobile user environments, certificates can reasonably be stored on users’ client machines, where they can be reliably accessed at all times. In more mobile environments, where a single user may work from any of a number of locations, access to a user’s PKI certificate becomes more complicated. One solution would be to store the certificates on a card, which would make them easy to transport, but we again arrive at the immense cost of issuing this kind of solution to thousands of users across an organization. Software-based solutions involve making certificates network accessible, but as such they require an alternative authentication mechanism in order to ensure secure access, and are reduced to the same user/password problem.

Kerberos

Kerberos[22] was developed at MIT under the Athena project, which also spawned the Hesiod system, and has been used for many years in large organizations. The Kerberos technology, represented by MIT Kerberos versions 4 and 5 and DCE, is widely understood, accepted and reasonably resistant to attacks.

The development behind the Kerberos authentication model is driven by the issue of how users can authenticate across insecure networks without exposing their authenticators. Based on protocols developed by Needham and Schroeder, the Kerberos model was designed for authenticating clients without sending authentication information (passwords, etc.) over an insecure network.

Kerberos loosely groups clients and servers together into realms, with each realm containing at least one shared security server. This security server, called a KDC or Key Distribution Center, shares a secret encryption key with each of the users and service providers in its realm. The keys are used as a means to authenticate the users and to issue reusable authentication credentials called tickets.

Kerberos achieves authentication through the use of shared secret keys. The model is based on the fact that if two parties can authenticate themselves to each other, they can communicate securely by encrypting information using a shared secret key. The party initiating the authentication process can send a message encrypted in the shared secret key, and if the responding party is able to properly decrypt the message, both parties can be reassured of one another’s identities. Provided that the key is actually a secret shared solely between the two parties, the ability to decrypt one another’s messages is sufficient to prove authentication. If part of the encrypted information passed in the original exchange is further encrypted in a secret key known only to the originating party, the originating party can subsequently verify the origin of the initial message.
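The shared-secret principle can be sketched with a challenge-response exchange: each side proves it holds the key by answering the other’s challenge, without the key itself ever crossing the network. Real Kerberos encrypts timestamped tickets rather than exchanging bare MACs, so this is a simplification; the function names are hypothetical.

```python
# Sketch of mutual authentication from a shared secret, in the spirit of
# the Kerberos model: proof of key possession via challenge-response.
import hmac, hashlib, os

def respond(key: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of `key` for a given challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def mutual_auth(key_a: bytes, key_b: bytes) -> bool:
    """Both sides succeed only if they hold the same secret key."""
    chal_a, chal_b = os.urandom(16), os.urandom(16)   # fresh nonces
    a_ok = hmac.compare_digest(respond(key_a, chal_b), respond(key_b, chal_b))
    b_ok = hmac.compare_digest(respond(key_b, chal_a), respond(key_a, chal_a))
    return a_ok and b_ok

shared = b"per-user secret shared with the KDC"
```

Because each run uses fresh random challenges, a recorded response cannot be replayed later, which parallels the role of timestamps in actual Kerberos exchanges.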

The KDC provides two different but related services: an authentication service, or AS, used for the initial authentication of users, which can also issue tickets for the ticket granting service; and a ticket granting service, or TGS, which is used to issue tickets for other services that use the same authentication mechanism. In most existing implementations of the Kerberos model, the AS and TGS are deployed on the same host, and in most cases the two are implemented in the same application.

Initial authentication involves obtaining a ticket granting ticket, or TGT, from the KDC’s AS. The client sends an authentication request to the AS, identifying the user who requests a ticket granting ticket and providing information useful for validating the identity and the request. The AS responds with a TGT which is encrypted in the user’s secret key. If the client is able to decrypt this TGT, it means that it holds the secret key, and the decrypted TGT serves as the client’s proof of authentication.

Along with the meaningful authentication information, the ticket granting ticket carries more information useful to the user. Of these, five items are particularly important: a user credential, a timestamp, a validity time, a checksum for verifying that the message was not changed, and a session key. If the client is able to decrypt the ticket granting ticket correctly and the timestamp on the ticket matches the current local time on the client, the decryption is accepted; otherwise the result is marked as invalid. The integrity and confidentiality of the TGT are thus ensured through the encryption and checksum verification.
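The five fields named above can be modeled in a small sketch. The “encryption” of the ticket is elided here; only the integrity checksum and the timestamp/validity checks are illustrated, and all field and function names are hypothetical rather than the actual Kerberos wire format.

```python
# Sketch of the five TGT fields: user credential, timestamp, validity
# period, integrity checksum, and session key.
import hmac, hashlib, json, os

KDC_KEY = b"secret shared between KDC and user"  # hypothetical key

def issue_tgt(user: str, now: float, lifetime: int = 8 * 3600) -> dict:
    body = {"user": user, "timestamp": now, "valid_for": lifetime,
            "session_key": os.urandom(16).hex()}
    raw = json.dumps(body, sort_keys=True).encode()
    body["checksum"] = hmac.new(KDC_KEY, raw, hashlib.sha256).hexdigest()
    return body

def tgt_valid(tgt: dict, now: float) -> bool:
    body = {k: v for k, v in tgt.items() if k != "checksum"}
    raw = json.dumps(body, sort_keys=True).encode()
    intact = hmac.compare_digest(
        tgt["checksum"], hmac.new(KDC_KEY, raw, hashlib.sha256).hexdigest())
    fresh = tgt["timestamp"] <= now < tgt["timestamp"] + tgt["valid_for"]
    return intact and fresh

tgt = issue_tgt("njc", now=1000.0)
```

Any modification of a field breaks the checksum, and a ticket presented outside its validity window is rejected, mirroring the integrity and freshness checks described above.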

The user’s TGT can be used in further ticket exchanges with the KDC, to acquire so-called service tickets as proof of authentication for use with services other than the TGS. These exchanges proceed similarly to the first, but requests are made to the TGS rather than the AS, and specify a target service in addition to a target user. Authentication for service ticket requests is performed in the same way as for initial ticket requests, except that a previously acquired TGT is used as proof of authentication. The TGS responds to a service ticket request with a service ticket containing information similar to that in the TGT. The response is encrypted in the session key shared between the KDC and the authenticated user, and contains within it information encrypted in the secret key shared between the KDC and the service. This service ticket can then be used as part of an authentication transaction with the target service, which can verify that the ticket presented to it is valid because it is encrypted in the target server’s secret key.
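The two layers of encryption in the TGS reply can be sketched as below: an inner service ticket sealed with the service’s long-term key, wrapped in a reply sealed with the user’s session key. The XOR cipher, service name, and key values are illustrative stand-ins, not real Kerberos cryptography.

```python
import json

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher standing in for the real one; self-inverse.
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

# Long-term key the KDC shares with each service (illustrative entry).
SERVICE_KEYS = {"imap/mail.example.org": b"service-long-term-key"}

def tgs_exchange(user: str, service: str, user_session_key: bytes) -> bytes:
    """TGS reply sketch: an inner service ticket the user cannot read,
    wrapped in an outer reply only the authenticated user can read."""
    inner = xor_encrypt(SERVICE_KEYS[service],
                        json.dumps({"user": user, "service": service,
                                    "svc_session_key": "k2"}).encode())
    reply = {"service": service, "svc_session_key": "k2",
             "ticket": inner.hex()}
    return xor_encrypt(user_session_key, json.dumps(reply).encode())

def service_verify(service: str, ticket_hex: str) -> dict:
    # The target service proves the ticket is genuine simply by being
    # able to decrypt it with its own secret key.
    return json.loads(xor_encrypt(SERVICE_KEYS[service],
                                  bytes.fromhex(ticket_hex)))
```

The client forwards only the opaque inner ticket to the service; it never learns the service’s secret key, and the service never needs to contact the KDC to validate the ticket.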

Three different implementations of the Kerberos model are currently in common use: the original Kerberos version 4, Kerberos version 5 [23], and the Open Group’s DCE security server [24]. Kerberos version 4 is perhaps the most widely deployed implementation of the model, but as it came to be deployed on larger scales, it became apparent that the implementation suffered from some serious security flaws, including a particular vulnerability to dictionary-based attacks. Kerberos V5 was developed to address these concerns and to provide additional features required by new applications of the model. The addition of support for pre-authentication of ticket granting requests and changes in the underlying ticket exchange protocol make Kerberos V5 less vulnerable to certain common dictionary-based cryptographic attacks. The DCE security service was developed as part of the Open Group’s Distributed Computing Environment, a larger project providing an infrastructure for secure, cross-platform distributed computing. Loosely based on an intermediate version of Kerberos V5 from MIT, the DCE security service is conceptually similar to Kerberos V5, but relies on the DCE secure RPC mechanism for ticket exchanges.

The Kerberos model provides the advantages of strong authentication based on strong encryption without the complications exhibited by current PKI implementations. In principle, only one dedicated network server must be installed, secured and maintained to support a Kerberos infrastructure. Kerberos provides a natural mechanism for ensuring the privacy and integrity of application-level data exchanges over an insecure network and provides a mechanism for two-way authentication.

Kerberos does suffer from some well-known deficiencies [25]. Kerberos is a password-based system and, as many have pointed out, it is subject to a variety of password-guessing attacks. Further, the model is subject to certain types of replay attacks: a determined attacker may, under certain circumstances, be able to circumvent the replay protections built into the Kerberos protocol and, for a short time, communicate with servers and services posing as an authenticated user by replaying part of a previous ticket exchange captured on an insecure network. Since Kerberos session keys may be used to encrypt multiple messages between a single client and server during the lifetime of a given Kerberos ticket, session keys may be vulnerable to cryptographic attacks.
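The replay protections mentioned above rest on services keeping a short-lived cache of authenticators already seen. A minimal sketch, with an illustrative 300-second skew window and a simplified (client, timestamp) key:

```python
import time

class ReplayCache:
    """Minimal sketch of the replay cache a Kerberized service keeps:
    an authenticator is accepted only if its timestamp is within the
    clock-skew window and it has not been presented before."""

    def __init__(self, skew: float = 300.0):
        self.skew = skew
        self.seen = {}  # (client, ctime) -> expiry time

    def accept(self, client: str, ctime: float, now=None) -> bool:
        now = time.time() if now is None else now
        # Drop entries that have aged out of the skew window; anything
        # older would be rejected by the timestamp check anyway.
        self.seen = {k: v for k, v in self.seen.items() if v > now}
        if abs(now - ctime) > self.skew:
            return False  # authenticator too old or too far ahead
        key = (client, ctime)
        if key in self.seen:
            return False  # replayed authenticator
        self.seen[key] = ctime + self.skew
        return True
```

The window illustrates the limitation noted in the text: a replay that arrives within the skew window at a service that keeps no cache, or at a different service instance, can still succeed.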

The Kerberos approach to building a SAR suitable for an organization is not without development costs, as applications must be designed to support Kerberos in the first place, or modified to support it, in order to be able to participate in a Kerberos-based SAR.

Kerberos V5 is significantly stronger as an authentication mechanism than Kerberos V4, and the DCE implementation of Kerberos offers even greater protection against certain forms of attack by modifying the ticket exchange protocol in some significant ways. Kerberos implementations continue to evolve, with MIT and others investigating extensions to the Kerberos protocol to support more cryptographically secure authentication mechanisms and to integrate support for newer authentication approaches.

“A Kerberos-based SAR can provide more than adequate security with greater confidence and less administrative overhead than other SAR mechanisms. While PKI-based authentication mechanisms may offer advantages in the realm of electronic commerce, where users may not be known by the organizations they interact with until the need for authentication arises, they do not offer the proven reliability nor the existing installed base of standard implementations Kerberos boasts.” [12]

SSO alternatives

The SAR solutions discussed above can be viewed as providing a full SSO solution, or merely as a base for one. In the Kerberos model, the client obtains reusable credentials after a single authentication to the SAR; these are enough to authenticate to any Kerberized application without re-entering any further credentials. Likewise, possession of a personal certificate can allow a user to authenticate to the supported applications without re-entering passwords or other authentication credentials.

Common SSO approaches fall into three main categories: those which rely on an already implemented SAR to authenticate to multiple services, those which rely on some centralized repository that holds authentication credentials, and hybrid solutions incorporating

References
