Introduction to deep learning
(1)

INTRODUCTION TO DEEP LEARNING

(2)

CONTENTS

(3)


Contents

1. Examples
2. Machine learning
3. Neural networks
4. Deep learning
5. Convolutional neural networks
6. Conclusion
7. Additional resources

(4)

LET’S START WITH SOME EXAMPLES

(5)

Introduction


Object detection

https://www.youtube.com/watch?v=VOC3huqHrss

(6)

Introduction

Image segmentation

https://www.youtube.com/watch?v=1HJSMR6LW2g

(7)

Introduction


Image colorization

https://www.youtube.com/watch?v=ys5nMO4Q0iY

(8)

Introduction

Mario

https://www.youtube.com/watch?v=L4KBBAwF_bE

(9)

MACHINE LEARNING

(10)

Machine learning

What is machine learning?

To learn = algorithmically find the choice of parameters that best explains the data.

[Examples pictured: object detection; linear regression]
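To make the idea concrete, here is a minimal sketch (not from the slides; the data and variable names are invented for illustration) of "learning" a linear regression with NumPy: the parameters that best explain the data are found by least squares.

```python
import numpy as np

# Toy data: noisy samples of an assumed "true" line y = 2x + 1.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.shape)

# "Learning" here = solving for the slope a and intercept b
# that minimize the squared error over the data (least squares).
A = np.stack([x, np.ones_like(x)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"learned parameters: a={a:.2f}, b={b:.2f}")  # close to a=2, b=1
```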

(11)

Machine learning


Uses of machine learning in industry

• Object detection

• Image segmentation

• Image classification

• Speech recognition

• Language understanding and translation

• Spam filters and fraud detection

• Automatic email labelling and sorting

• Personalized search results and recommendations

• Automatic image captioning

• Online advertising

• Medical diagnosis

(12)

Machine learning

Deep learning – what is it?

• A particular subset of ML algorithms, a.k.a. “enhanced neural networks”

• The closest we have to an ideal learning agent

(13)

NEURAL NETWORKS

(14)

Introduction to neural networks


Biological motivation and connections

Intuition: as humans, we use our brains to learn the characteristics of different objects and phenomena.

Brain neurons receive input signals through their dendrites and produce output signals along the axon.

Image taken from http://cs231n.github.io/neural-networks-1

(15)

Introduction to neural networks


Biological motivation and connections

Computational model: signals interact multiplicatively with the dendrites of the next neuron (a linear combination of the input signals).

 The weights (synaptic strengths) and bias (threshold) are learnable. They control the influence of one neuron over another.

The weights $w_i$ and bias $b$ are parameters learned through training.

Image taken from http://cs231n.github.io/neural-networks-1

(16)

Introduction to neural networks

Perceptron

• Element (neuron) that takes decisions based on evidence

• Takes several binary inputs and produces a single binary output

[Diagram: binary inputs $x_1, x_2, x_3 \in \{0, 1\}$ with weights $w_1, w_2, w_3$ feed a unit with threshold $t$ that produces the output]

(17)

Introduction to neural networks


Perceptron: example

You are trying to decide whether to go to a concert or not.

You might make your decision by weighing up three factors:

1. Is the weather good?

2. Do you have enough time to attend the concert?

3. Does your boyfriend or girlfriend want to accompany you?

(18)

Perceptron: example

Introduction to neural networks

“I only go if the weather is good.”

[Diagram: weather ($x_1$), time ($x_2$), company ($x_3$) feed the perceptron with weights $w_1 = 6$, $w_2 = 2$, $w_3 = 2$ and threshold $t = 5$]

$\text{output} = \begin{cases} 1, & \sum_{i=1}^{3} w_i x_i > 5 \\ 0, & \sum_{i=1}^{3} w_i x_i \le 5 \end{cases}$

• The output is 1 only if the weather input is 1

• With $t = 5$: $w_1 > 5$ and $w_2 + w_3 \le 5$
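As a sketch of the rule above, using the weights and threshold from this slide:

```python
def perceptron(x, w, t):
    """Binary perceptron: fires iff the weighted sum of the inputs exceeds t."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if z > t else 0

# Weights from the slide: weather=6, time=2, company=2, threshold t=5.
w, t = (6, 2, 2), 5
print(perceptron((1, 0, 0), w, t))  # 1: good weather alone is enough (6 > 5)
print(perceptron((0, 1, 1), w, t))  # 0: time + company is not (2 + 2 <= 5)
```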

(19)

Perceptron: example

Introduction to neural networks

“I go if the weather is good and I have enough time!”

[Diagram: weather ($x_1$), time ($x_2$), company ($x_3$) feed the perceptron with weights $w_1, w_2, w_3$ and threshold $t = 3$]

$\text{output} = \begin{cases} 1, & \sum_{i=1}^{3} w_i x_i > 3 \\ 0, & \sum_{i=1}^{3} w_i x_i \le 3 \end{cases}$

• The output is 1 only if the weather and time inputs are both 1

• With $t = 3$: $w_1 + w_2 > 3$, $w_2 + w_3 \le 3$, $w_1 + w_3 \le 3$

(20)

Perceptron vs Artificial neuron

Introduction to neural networks

[Diagrams: a perceptron with inputs $x_1, x_2, x_3$, weights $w_1, w_2, w_3$ and threshold $t$, next to an artificial neuron with the same inputs and weights plus a bias $b$ and an activation function $f$]

activation function: $z = \sum_{i=1}^{n} w_i x_i + b$, $\text{output} = f(z)$
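A minimal sketch of the difference (function names are illustrative): the perceptron applies a hard threshold, while the artificial neuron adds a bias and passes $z$ through a differentiable activation $f$.

```python
import numpy as np

def perceptron(x, w, t):
    # Hard threshold: binary output.
    return 1 if np.dot(w, x) > t else 0

def neuron(x, w, b, f=lambda z: 1.0 / (1.0 + np.exp(-z))):
    # z = sum_i w_i * x_i + b, then output = f(z); sigmoid by default.
    z = np.dot(w, x) + b
    return f(z)

x = np.array([1.0, 0.0, 1.0])
w = np.array([0.5, -0.2, 0.8])
print(perceptron(x, w, t=1.0))  # 1 (weighted sum 1.3 > 1.0)
print(neuron(x, w, b=-1.0))     # sigmoid(0.3) ≈ 0.57, a smooth output
```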

(21)

ACTIVATION FUNCTIONS AND ARCHITECTURES

(22)

Architectures and activation functions

Activation functions

• Properties:

 Nonlinear function: makes it possible to solve more complex problems -> artificial neural networks are universal function approximators [Cybenko 1989, Hornik 1991]

 Differentiable function: necessary for learning the parameters

• The activation function is applied after computing the linear value z of the neuron:

$z = \sum_{i=1}^{n} w_i x_i + b$, $\text{output} = f(z)$

[Diagram: inputs $x_1, x_2, x_3$ with weights $w_1, w_2, w_3$ feed the neuron, which applies $f(z)$ to produce the output]

(23)

Activation functions

Architectures and activation functions

Sigmoid function: $\sigma(z) = \frac{1}{1 + e^{-z}}$

Tanh function: $\tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}}$

Images taken from http://neuralnetworksanddeeplearning.com

(24)

Activation functions

Architectures and activation functions

ReLU function: $\text{relu}(z) = \max(0, z)$

Leaky ReLU function: $\text{lrelu}(z) = \max(0.01 \cdot z, z)$

Images taken from https://datascience.stackexchange.com/questions/5706/what-is-the-dying-relu-problem-in-neural-networks
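All four activations are one-liners in NumPy; a quick sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # squashes to (0, 1)

def tanh(z):
    return np.tanh(z)                 # squashes to (-1, 1)

def relu(z):
    return np.maximum(0.0, z)         # zero for negative inputs

def leaky_relu(z, alpha=0.01):
    return np.maximum(alpha * z, z)   # small slope for negative inputs

z = np.array([-2.0, 0.0, 2.0])
for f in (sigmoid, tanh, relu, leaky_relu):
    print(f.__name__, f(z))
```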

(25)

Architectures and activation functions


Architectures

[Diagrams: example feed-forward network architectures; every neuron also receives a bias input $+b$]

Images taken from http://cs231n.github.io/neural-networks-1

(26)

APPLICATIONS

(27)

Regression vs Classification

 Regression: fit some real-valued function

 $y_i = f(x_i, W)$, $y_i \in \mathbb{R}$

 Classification: assign a label to an input vector

 $y_i = f(x_i, W)$, $y_i \in \{cat, bike, dog, house, car\}$

Neural networks


[Bar chart: predicted class probabilities for Cat, Bike, Dog, House, Car, on a 0 to 0.8 axis]

https://cdn.comsol.com/wordpress/2015/03/Experimental-data-and-fitted-function.png

(28)

Applications

Applications of shallow neural networks

Handwritten digit/character recognition

https://knowm.org/wp-content/uploads/Screen-Shot-2015-08-14-at-2.44.57-PM.png

Stock market (time series) prediction

http://milenia-finance.com/wp-content/uploads/6359633929809316592035809433_stock-market.jpg

Image compression

https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/1e50094bcaf81dac5ea44cea87fd84b25ceb9090/2-Figure3-1.png

(29)

TRAINING A NEURAL NETWORK - BACKPROPAGATION

(30)

Training a neural network - Backpropagation

General aspects

• Common method for training a neural network

• Goal: optimize the weights so that the neural network can learn how to correctly map arbitrary inputs to outputs (learn to generalize a problem)

• Steps:

 Forward step

 Compute the error

 Backward step

 Update parameters

[Diagram: a small feed-forward network annotated with the target values and the cost function]

Image taken from https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example

*example valid for supervised learning

(31)

Training a neural network - Backpropagation


Cost function

Measures “how good” a neural network did with respect to its m training samples and the expected outputs.

• The set of weights and biases has done a great job if C(w, b) ≈ 0

• Our aim is to minimize it, such that $y(x_i, w, b)$ becomes identical to $y(x_i)$

How? The Squared Error:

$C(w, b) = \frac{1}{2} \sum_{i=1}^{m} \left( y(x_i) - y(x_i, w, b) \right)^2$

• $x_i$: network input set

• $y(x_i)$: labeled output set (expected, true outputs)

• $y(x_i, w, b)$: network outputs
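A minimal NumPy sketch of this cost (the array names are assumptions for illustration):

```python
import numpy as np

def squared_error_cost(y_true, y_pred):
    """C(w, b) = 1/2 * sum_i (y(x_i) - y(x_i, w, b))^2."""
    return 0.5 * np.sum((y_true - y_pred) ** 2)

y_true = np.array([1.0, 0.0, 1.0])   # labeled outputs y(x_i)
y_pred = np.array([0.9, 0.2, 0.8])   # network outputs y(x_i, w, b)
print(squared_error_cost(y_true, y_pred))  # 0.5 * (0.01 + 0.04 + 0.04) = 0.045
```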

(32)

Training a neural network - Backpropagation

Gradient descent

• Optimization algorithm used for finding the minimum of a cost function

• The cost function depends on the weights and biases

• The gradient measures how much each weight or bias influences the cost function’s value

• Update the weights and biases to minimize the cost function

Learning rate:

 used for the weight and bias updates

 a small, positive parameter

 fixed or dynamic

Images taken from http://neuralnetworksanddeeplearning.com

(33)

Gradient descent

Training a neural network - Backpropagation

Weight and bias update after a training sample, with learning rate $\eta$:

$w_k \leftarrow w_k - \eta \frac{\partial C}{\partial w_k}$

$b_l \leftarrow b_l - \eta \frac{\partial C}{\partial b_l}$

Images taken from http://neuralnetworksanddeeplearning.com
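A minimal sketch of the full training loop for a single linear neuron, with the gradients written out by hand (data, learning rate, and iteration count are invented for illustration); the comments follow the four backpropagation steps listed earlier.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))             # 100 samples, 3 inputs
y = X @ np.array([1.0, -2.0, 0.5]) + 0.3  # targets from an assumed "true" model

w, b, eta = np.zeros(3), 0.0, 0.1         # parameters and learning rate
for _ in range(200):
    y_hat = X @ w + b                     # forward step
    err = y_hat - y                       # compute the error
    grad_w = X.T @ err / len(y)           # backward step: dC/dw
    grad_b = err.mean()                   #                dC/db
    w -= eta * grad_w                     # update parameters
    b -= eta * grad_b

print(w.round(2), round(b, 2))            # ≈ [ 1. -2.  0.5] 0.3
```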

(34)

DEEP LEARNING

(35)

NAÏVE DEEP LEARNING

(36)

Naïve deep learning

Applications

 Exceptionally effective at learning patterns

 Solves complex problems

 Applications:

 Speech recognition
 Computer vision
 Natural language processing

(37)

Naïve deep learning


From shallow to deep

[Diagrams: a shallow neural network (one hidden layer) vs a deep neural network (multiple hidden layers)]

Source: http://neuralnetworksanddeeplearning.com

(38)

Naïve deep learning

Challenges of deep neural networks

 Inputs are vectors => spatial relationships are not preserved (input scrambling)

 The number of parameters increases exponentially with the number of layers

 Huge number of parameters would quickly lead to overfitting

 Networks with many layers have an unstable gradient problem

(39)

DEEP LEARNING

(40)

Deep learning

Neural networks as computational graphs
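A sketch of that view (shapes and values invented): the forward pass is just a chain of matrix-multiply, add, and activation nodes in a graph.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A two-layer network as a graph of matmul -> add -> sigmoid nodes.
x = np.array([0.5, -1.0])
W1, b1 = np.full((3, 2), 0.1), np.zeros(3)
W2, b2 = np.full((1, 3), 0.2), np.zeros(1)

h = sigmoid(W1 @ x + b1)  # node 1: hidden layer
y = sigmoid(W2 @ h + b2)  # node 2: output layer
print(y)
```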

(41)

CONVOLUTIONAL NEURAL NETWORK

(42)

Convolutional Neural Network

Layers of a Convolutional Neural Network

1. Input layer
2. Convolutional layer
3. Subsampling layer
4. Fully connected layer

Source: https://en.wikipedia.org/wiki/Convolutional_neural_network#/media/File:Typical_cnn.png

(43)

Convolutional Neural Network


Input layer: What is an image?

Binary image:

 A matrix of pixel values – each pixel is either 0 or 1

Grayscale image:

 A matrix of pixel values – each pixel is a natural number between 0 and 255

 Pixel value – intensity of light

RGB image:

 3 matrices of pixel values – each pixel is a natural number between 0 and 255

 Pixel value – intensity of the color (red, green or blue)

[Figure: the same photo shown as a binary, grayscale, and RGB image]

Source: https://medium.com/@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721

(44)

Convolutional Neural Network

Convolutional layer

The purpose of convolution is to extract features from the input:

1. Gets a 3D matrix as input (e.g. an RGB image with depth 3)
2. “Convolves” multiple kernels over the input 3D matrix
3. Creates the output 3D matrix: the feature maps (also called activation maps)

Source: http://cs.nyu.edu/~fergus/tutorials/deep_learning_cvpr12/

(45)

Convolutional Neural Network


Convolution on a single matrix

Input:

 $W_1 \times H_1$ matrix (e.g. a binary image)

 Kernel (filter): $W_2 \times H_2$ matrix

Convolution operation:

 Slide the kernel over the $W_1 \times H_1$ matrix

 At each position, calculate the element-wise multiplication

 Calculate the sum of the multiplications

Output:

 $W_3 \times H_3$ matrix -> the feature map (activation map)

[Figure: input matrix, kernel, and input image]

Source: http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution
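A direct sketch of this slide-multiply-sum operation in NumPy (stride 1, no padding; the example matrices follow the cited Stanford page):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the W2 x H2 kernel over the image; at each position,
    multiply element-wise and sum -> one value of the feature map."""
    h1, w1 = image.shape
    h2, w2 = kernel.shape
    h3, w3 = h1 - h2 + 1, w1 - w2 + 1   # output (feature map) size
    out = np.zeros((h3, w3))
    for i in range(h3):
        for j in range(w3):
            out[i, j] = np.sum(image[i:i + h2, j:j + w2] * kernel)
    return out

image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]], dtype=float)
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]], dtype=float)
print(conv2d(image, kernel))  # 3x3 feature map: [[4 3 4] [2 4 3] [2 3 4]]
```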

(46)

Convolutional Neural Network

 Another view of the convolution operation

Convolution on a single matrix

[Animation: a $W_2 \times H_2$ kernel sliding over a $W_1 \times H_1$ matrix to produce the $W_3 \times H_3$ feature map]

http://intellabs.github.io/RiverTrail/tutorial/

(47)

Convolutional Neural Network


What is a kernel?

It is a learnable filter:

 The values (weights) of the kernel are learned during the training process

 The trained filter activates (during the forward pass) when it sees some type of visual feature in the image (e.g. edges)

The size of the kernel is a hyperparameter of the convolutional layer:

 Typical kernel sizes: 3x3, 5x5

[Figure: a random kernel vs a trained kernel]

(48)

Convolutional Neural Network

Convolution on a 3D matrix

 The input is a $W_1 \times H_1 \times D_1$ 3D matrix

 ‒ $W_1 \times H_1$ is the width and height of the 3D matrix
 ‒ $D_1$ is the depth of the 3D matrix
 ‒ E.g. an RGB image (with 3 channels)

 To convolve a kernel on the 3D matrix, the depth of the kernel must be the same: $W_2 \times H_2 \times D_1$

 The convolution operation is still the same:

 ‒ Slide (convolve) the kernel across the width and height of the input 3D matrix
 ‒ At each position, calculate the element-wise multiplication
 ‒ Calculate the sum of the multiplications
 ‒ (It produces one value at each position)

 The output of the convolution is a $W_3 \times H_3 \times 1$ feature map

Source: https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/convolutional_neural_networks.html

(49)

Convolutional Neural Network


Multiple kernels

 A convolution layer usually contains multiple kernels

 Each kernel produces a separate two-dimensional feature map

 Stacking these feature maps gives the output volume of the convolution layer

 The number of kernels (the depth of the output volume) is defined by the depth hyperparameter

[Figure: convolution with 6 different kernels maps an input 3D matrix to an output 3D matrix of 6 different feature maps]

Source: https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/convolutional_neural_networks.html
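A sketch extending the earlier 2D convolution to a full layer (shapes invented): each kernel matches the input depth, and stacking the per-kernel feature maps produces the output volume.

```python
import numpy as np

def conv_layer(volume, kernels):
    """volume: H1 x W1 x D1; kernels: K x H2 x W2 x D1.
    Each kernel yields one 2D feature map; stacking the K maps
    gives the H3 x W3 x K output volume."""
    h1, w1, d1 = volume.shape
    k, h2, w2, _ = kernels.shape
    h3, w3 = h1 - h2 + 1, w1 - w2 + 1
    out = np.zeros((h3, w3, k))
    for n in range(k):                    # one feature map per kernel
        for i in range(h3):
            for j in range(w3):
                patch = volume[i:i + h2, j:j + w2, :]
                out[i, j, n] = np.sum(patch * kernels[n])
    return out

volume = np.random.rand(8, 8, 3)          # e.g. a tiny RGB image
kernels = np.random.rand(6, 3, 3, 3)      # 6 kernels of size 3x3x3
print(conv_layer(volume, kernels).shape)  # (6, 6, 6): depth = number of kernels
```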

(50)

Convolutional Neural Network

What are the filters learning?

 1st layer: edges

 2nd layer: corners, local textures

 3rd layer: simple shapes

 …

 nth layer: complex shapes, objects

Source: Zeiler & Fergus, 2014

(51)

Convolutional Neural Network


Convolutional Layer - Summary

Convolution:

 Input: $W_1 \times H_1 \times D_1$ 3D matrix

 Output: $W_3 \times H_3 \times D_3$ 3D matrix

 Hyperparameters of the convolutional layer:

 Size of the kernel: $W_2 \times H_2$ (the depth of the kernel is equal to the input depth)
 Number of kernels (depth of the output): $D_3$
 Stride: S
 Padding: P

(52)

Convolutional Neural Network

Subsampling (Pooling) Layer

 Performs a downsampling operation along the spatial dimensions (width, height)

 It reduces the spatial size of the representation

 It operates independently on every depth slice of the representation

 Most common subsampling operation: max pooling

Source: http://cs231n.github.io/convolutional-networks/

(53)

Convolutional Neural Network


Example: max pooling

Hyperparameters:

 Size of stride

 Size of the filter

Source: http://cs231n.github.io/convolutional-networks/
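A sketch of max pooling with a 2x2 filter and stride 2 (the example values follow the cited cs231n page):

```python
import numpy as np

def max_pool(fmap, size=2, stride=2):
    """Downsample one depth slice: keep the max of each size x size window."""
    h, w = fmap.shape
    out_h, out_w = (h - size) // stride + 1, (w - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = fmap[i * stride:i * stride + size,
                             j * stride:j * stride + size].max()
    return out

fmap = np.array([[1, 1, 2, 4],
                 [5, 6, 7, 8],
                 [3, 2, 1, 0],
                 [1, 2, 3, 4]], dtype=float)
print(max_pool(fmap))  # [[6. 8.] [3. 4.]]
```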

(54)

Fully Connected Layer

Convolutional Neural Network

 It is a regular neural network (similar to the one in the previous course)

 The input of the layer is the set of features extracted by the convolution and subsampling layers

 The output of the layer is the output of the full CNN (for example, class probabilities)

Source: http://cs231n.github.io/convolutional-networks/
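A minimal sketch of this final stage (all shapes and names invented): flatten the extracted features, apply one fully connected layer, and convert the scores to class probabilities with a softmax.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

features = np.random.rand(4, 4, 6)   # output of the conv/pooling layers
x = features.reshape(-1)             # flatten to a vector (96 values)

n_classes = 5                        # e.g. cat, bike, dog, house, car
W = np.random.randn(n_classes, x.size) * 0.01
b = np.zeros(n_classes)

probs = softmax(W @ x + b)           # class probabilities, sum to 1
print(probs, probs.sum())
```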

(55)

Convolutional Neural Network


Information flow in a CNN

Source: http://cs231n.github.io/convolutional-networks/

(56)

Convolutional Neural Network

History and state of the art

 AlexNet: initial architecture that had good performance (2012); 7 layers

 VGGNet: better performance using a deeper network with fewer parameters (2014); 16 layers

 GoogLeNet: better performance using processing done in parallel on the same input (2014); over 100 layers

 ResNet (Microsoft): better performance using residual information (2015); 152 layers

Models can be trained to perform many different tasks, with few modifications.

Sources: http://cv-tricks.com/cnn/understand-resnet-alexnet-vgg-inception/

https://www.saagie.com/blog/object-detection-part1

http://file.scirp.org/Html/4-7800353_65406.htm

(57)

Convolutional Neural Network


Case study: AlexNet

 ImageNet Large Scale Visual Recognition Challenge winner in 2012

 Task: image classification (each image is associated with a class)

 Dataset:

 ImageNet 2012
 Training set: 1.2 million images covering 1000 categories (classes)
 Testing set: 200,000 images

 Training details:

 90 full training cycles over the training set
 The training took 6 days on two GeForce GTX 580 GPUs

[Figures: the AlexNet architecture; sample training images; results on test images]

Sources: http://cv-tricks.com/cnn/understand-resnet-alexnet-vgg-inception/

https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

(58)

MOTIVATION OF CONVOLUTIONAL NEURAL NETWORK

(59)

Motivation of Convolutional Neural Network

Example (fully connected):

 224x224x3 input image -> a 150528-long input vector
 => 150528 weights per neuron
 => 150528 * 3 = 451584 parameters (for 3 neurons)

Parameter sharing

Example (convolutional):

 224x224x3 input image
 3 filters of size 11x11x3
 => 11 * 11 * 3 * 3 = 1089 parameters (for 3 filters)

Source: https://noppa.oulu.fi/noppa/kurssi/521010j/luennot/521010J_convolutional_neural_network.pdf
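The two parameter counts from this slide, spelled out as arithmetic:

```python
# Fully connected: every neuron sees every input value.
inputs = 224 * 224 * 3        # 150528 values
neurons = 3
print(inputs * neurons)       # 451584 parameters

# Convolutional with parameter sharing: each filter reuses
# the same 11x11x3 weights at every image position.
filters = 3
print(11 * 11 * 3 * filters)  # 1089 parameters
```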

(60)

Motivation of Convolutional Neural Network

Local connectivity and spatial invariance

Share the same parameters across different locations

Apply and learn multiple filters

Source: https://noppa.oulu.fi/noppa/kurssi/521010j/luennot/521010J_convolutional_neural_network.pdf

(61)

Motivation of Convolutional Neural Network


Benefits

Local connectivity + parameter sharing:

 The same small set of weights (parameters) is applied across an entire image

 Better training, better generalization

 Preserves spatial relationships in the receptive field

 Spatial invariance

(62)

CONCLUSION

(63)

Conclusion


You should consider deep learning if:

 you have access to quite large amounts of data

 the problem is reasonably complex, in a high-dimensional space

 there are no hard constraints or hard logic (the problem is smooth and differentiable)

Strengths of deep learning:

 It’s a general framework

 Efficient graph structure

 Well performing given the right circumstances

(64)

RESOURCES

(65)

Libraries and networks

• For C++ fans, easy prototyping:

• OpenCV – both neural networks and AdaBoost

• FANN – Fast Artificial Neural Network Library

• OpenNN – Open Neural Network Library

• Nnabla – neural network libraries by Sony

• Python:

• TensorFlow

• Keras

• Theano

• Caffe (also for C++)

• Torch7

• PyTorch

• DeepLearning4J

• MXNet

• Deepy

• Lasagne

• Nolearn

• NeuPy

Resources


(66)

Learning resources

Resources

 Stanford CS231n course (Karpathy et al.)

 Deep Learning book (Goodfellow)

 Various introductory YouTube videos

 For beginners: www.reddit.com/r/LearnMachineLearning

 After you grasp the concepts: www.reddit.com/r/MachineLearning

(67)

THANK YOU

QUESTIONS?
