
Developing Neural Networks From Scratch: Part 2


Bring this project to life

Photograph by JJ Ying / Unsplash

Neural networks are a core aspect of artificial intelligence. Nearly every concept in deep learning involves dealing with neural networks. The fundamental functionality of neural networks can be considered a black box: much of why they work, or why they yield the results they do, is still unknown. Nonetheless, theories, hypotheses, and numerous studies help us understand how and why these networks work. A deep dive into the fundamental concepts of neural networks will help us understand their functionality better.

I covered some of the fundamental concepts of how to construct neural networks from scratch in part 1 of this blog series, which readers can check out from the following link. There, we discussed some of the essential basic functions and proceeded to build a neural network for solving different types of "Gate" patterns. We also compared and discussed how to perform similar tasks with deep learning frameworks. In this article, our sole focus is to cover most of the essential topics, such as the implementation of layers, activation functions, and other related concepts. The Paperspace Gradient platform is a fantastic option for executing the code snippets discussed in this article.

Introduction:

Constructing neural networks from scratch is one of the few methods by which an individual can truly grasp the concepts of deep learning. Knowledge of the mathematics behind the core components of neural networks is essential. For most of the topics that we cover in this article, we will touch on the baseline math required to understand them. In future articles, we will cover more of the mathematical requirements, such as differentiation, which is necessary to completely understand backpropagation.

In this article, we will cover a major portion of the fundamental concepts required to implement these neural networks from scratch. We will begin by constructing and implementing layers for our overall neural network architecture. Once we have the architectural layout with the layers, we will understand the core concepts of activation functions. We will then touch on the loss functions used to readjust the weights and train the model appropriately to achieve the desired results. Finally, we will combine all the elements we studied into a final neural network design to solve some straightforward tasks.


Getting started with Layers:

Image Source

The above image shows a typical representation of a simple neural network. We have a couple of input nodes in the input layer, and the data is passed through the hidden layer with four hidden nodes to finally give us a resulting output. If readers have thoroughly understood the first part of this article series, it is easy to correlate the AND, XOR, or other Gate problems to this neural network architecture. The input is passed through hidden layers where there is a forward propagation to the output layer.

Once the forward propagation is completed, we can compute the loss at the output layer. If the resulting value is far from the ideal solution, the resulting loss is higher. Through backpropagation, the weights of the network are readjusted to obtain a solution closer to the actual values. After training for a number of epochs, we can successfully teach our model to learn the relevant patterns. Note that in some kinds of problems, it is not necessary to have an input layer; the inputs can rather be passed directly through the hidden layers for further computation.
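To make this cycle concrete before we build the real components, below is a minimal, self-contained sketch (our own toy example, not part of the original series) that fits a single weight to the relation y = 2x using the forward pass, loss computation, and weight readjustment described above. The derivative-based update shown here is exactly the kind of backpropagation detail we will formalize in a later part.

# Minimal runnable illustration (not from the original walkthrough) of the
# forward pass -> loss -> weight readjustment cycle, using a single weight
# fitted to the toy relation y = 2 * x with a mean squared error loss.
import numpy as np

X = np.array([1., 2., 3., 4.])
y = np.array([2., 4., 6., 8.])

w = 0.0      # single trainable weight
lr = 0.05    # learning rate

for epoch in range(50):
    y_pred = w * X                         # forward propagation
    loss = np.mean((y_pred - y) ** 2)      # how far we are from the ideal solution
    grad = np.mean(2 * (y_pred - y) * X)   # derivative of the loss w.r.t. w
    w -= lr * grad                         # backpropagation-style weight update

print(w)   # approaches 2.0 after training over a bunch of epochs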

In this section, let us focus on implementing a custom "Dense" layer through which the hidden layer computations of our neural network can take place to perform a specific task. The hidden layers in a deep learning framework like TensorFlow or PyTorch would be their fully connected layers: the Dense class is used in the case of TensorFlow, while a Linear class is used in the case of the PyTorch library. Let us construct our own hidden Dense layer from scratch and understand how these layers work.

To build our custom layer, let us first import the numpy library, through which we will carry out the majority of our mathematical computations. We will define a random input of shape (3, 4) and understand how a Dense layer of a neural network works. Below is the code snippet for defining the array and printing its shape.

# Importing the numpy library and defining our input
import numpy as np

X = np.array([[0., 1., 2., 3.], 
              [3., 4., 5., 6.],
              [5., 8., 9., 7.]])

X.shape

(3, 4)

In the next step, we will define our custom class for creating the hidden (Dense) layer functionality. The first parameter initialized in the class is the number of features in each sample. This value is always equal to the number of columns in your dataset; in our case, the first parameter for the number of features is 4. The second parameter is the number of neurons, which is defined by the user. The user can define however many neurons they deem necessary for the particular task.

The random seed is set to produce the same values on every execution so that readers can follow along. We define random weights with a shape given by the input features and the number of neurons provided by the user. We multiply the weights by 0.1, or another small decimal value, to ensure that the generated random numbers in the numpy array are less than one. The bias has a shape matching the number of neurons. The forward pass function follows the familiar linear equation.

$$ Y = X * W + B $$

In the above formula, Y is the output, X is the input, and W and B are the weights and biases, respectively. Below is the code snippet for defining the Hidden class accordingly.

# Setting a random seed to allow better tracking
np.random.seed(42)

class Hidden:
    def __init__(self, input_features, neurons):
        # Define the random weights and biases for our problem statement
        self.weights = 0.1 * np.random.rand(input_features, neurons)
        self.bias = np.zeros((1, neurons))
        
    def forward(self, X):
        # Create the forward propagation following the formula: Y = X * W + B
        self.output = np.dot(X, self.weights) + self.bias

We can now proceed to create the hidden layer and perform a forward pass to receive the desired output. Below is the code snippet for performing this action. Note that the first parameter must be 4, which is the number of input features passed through the neural network. The second parameter, the number of neurons, can be anything the user desires; its functionality is similar to the number of units passed to a Dense or Linear layer.

hidden_layer1 = Hidden(4, 10)
hidden_layer1.forward(X)
hidden_layer1.output

array([[0.30669248, 0.17604699, 0.16118867, 0.37917194, 0.3990861 ,
        0.41789485, 0.16174311, 0.18462417, 0.36694732, 0.17045874],
       [0.79104919, 0.84523964, 0.73767852, 1.01704543, 0.92694979,
        0.99778655, 0.42172713, 0.7854759 , 1.05985961, 0.61623013],
       [1.17968666, 1.4961964 , 1.34041738, 1.46314607, 1.3098747 ,
        1.49725738, 0.66537164, 1.38407469, 1.65824975, 0.93693172]])

We can also stack one hidden layer on top of the other, as shown in the code snippet below.

hidden_layer2 = Hidden(10, 3)
hidden_layer2.forward(hidden_layer1.output)
hidden_layer2.output

array([[0.13752772, 0.14604601, 0.11783941],
       [0.40980827, 0.43072277, 0.37302534],
       [0.64268134, 0.67249739, 0.58861158]])

Now that we have understood some of the fundamental concepts behind implementing hidden layers in our neural networks, we can proceed to another important topic, activation functions, in the upcoming section of this article.


Activation Functions for Neural Networks:

Image Source

When you have a set of values and you are trying to fit a line to them, more often than not a single straight line cannot fit a complex dataset the way you want. The fitting process requires some kind of external intervention to adjust the model so that it fits the dataset. One of the common methods to achieve this fitting is by utilizing activation functions. As the name suggests, an activation function activates the output nodes to vary the result in an optimal manner.

More technically, activation functions are non-linear transformations that activate the previous inputs before sending them on to the next layer. In the late twentieth century, some of the more popular options for activation functions were tanh and sigmoid. These activation functions are still utilized in LSTM cells, and they are worth learning about; we also used the sigmoid activation function in part 1 of this series. The working mechanisms of these activation functions are relatively simple, but in recent years their usage has diminished.
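For reference, below is a minimal NumPy sketch of the sigmoid and tanh activations mentioned above; the helper functions are our own and are not taken from part 1's code.

# Quick NumPy sketch of the older activation functions mentioned above.
import numpy as np

def sigmoid(x):
    # Squashes any real input into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Squashes any real input into the (-1, 1) range
    return np.tanh(x)

x = np.array([-2., 0., 2.])
print(sigmoid(x))   # [0.1192..., 0.5, 0.8807...]
print(tanh(x))      # [-0.9640..., 0.0, 0.9640...]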

The primary issue with these activation functions is that their derivatives result in small values, which often leads to the problem of vanishing gradients. Note that the derivatives are computed during the backpropagation phase to adjust the weights during training. When these derivatives reach minimal values for complex tasks or larger, more sophisticated networks, it becomes futile to update the weights, as there will be no significant improvements.
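To see why these small derivatives cause trouble, consider a brief illustrative calculation (our own example, with a hypothetical sigmoid_derivative helper): the derivative of the sigmoid never exceeds 0.25, so chaining even ten such factors during backpropagation shrinks the gradient to nearly nothing.

# Illustrative only: the sigmoid derivative sigmoid(x) * (1 - sigmoid(x))
# peaks at 0.25, so multiplying many such factors together (as happens
# across many layers during backpropagation) drives the gradient toward zero.
import numpy as np

def sigmoid_derivative(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

print(sigmoid_derivative(0.0))   # 0.25, the maximum possible value
print(0.25 ** 10)                # ~9.5e-07 after only ten chained layers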

One of the fixes to this problem is to use the now very popular Rectified Linear Unit (ReLU) activation function as a substitute for these activation functions. Our primary focus for this part of the article is on the ReLU and SoftMax activation functions. Let us start by briefly analyzing and understanding each of these two essential activation functions.

ReLU Activation Function:

Image Source

The ReLU activation function is the most popular choice among data scientists in recent times. It helps solve most of the issues that previously existed with other activation functions, such as sigmoid and tanh. As noticeable from the above graph, the ReLU function has an output of 0 when the input x values are less than or equal to zero. As the value of x increases above zero, the output is activated linearly. Below is the mathematical representation of the same.

$$ f(x) = \begin{cases} 0, & \text{if } x \leq 0 \\ x, & \text{if } x > 0 \end{cases} $$

The ReLU function is not smooth because it is not differentiable at the '0' point in the graph. This non-differentiability causes a slight issue that we will discuss later. For now, it is important to understand that the ReLU activation function returns the input value when it is greater than zero and returns zero otherwise. Let us implement the ReLU activation function in our code and understand how it works in a practical example scenario. Below is the formula representation and the code snippet for coding a ReLU activation function.

$$ (X)^+ = \max(0, X) $$

ReLU_activation = np.maximum(0, X)

The resulting derivative values will be zero when X is less than zero and one when X is greater than zero. However, the derivative at zero is undefined. This can result in issues such as dying gradients (the "dying ReLU" problem), where the resulting outputs show no improvement because the affected weights effectively stop being updated during backpropagation. We will either need to make slight adjustments in our code to compensate for this situation or use a variation of ReLU. Several modifications of the ReLU activation function handle this issue accordingly, including Leaky ReLU, ELU, PReLU (Parametric ReLU), and other similar variations, one of which is sketched below.
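Below is a minimal sketch of Leaky ReLU (the 0.01 slope is a commonly used default, not a value prescribed by this article); negative inputs keep a small non-zero slope so their weights can still receive updates.

# Minimal sketch of Leaky ReLU: negative inputs are scaled by a small
# slope (0.01 here, a commonly used default) instead of being set to zero,
# which helps avoid the dead-neuron problem described above.
import numpy as np

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

x = np.array([-3., -1., 0., 2., 5.])
print(leaky_relu(x))   # [-0.03, -0.01, 0., 2., 5.]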

SoftMax Activation Function:

In the case of multi-class classification, activation functions like ReLU do not really work in the last layer, because we need the model to produce a probability distribution. In such cases, we need an activation function that is good at handling probabilities. One of the best options for multi-class classification problems is the SoftMax activation function. The SoftMax function is a combination of the exponentiation of the inputs followed by their normalization. The formula of the SoftMax activation function can be written as follows.

$$ \text{Softmax}(x_i) = \frac{\exp(x_i)}{\sum_j \exp(x_j)} $$

Now that we have an understanding of the formula representation of the SoftMax function, we can proceed to implement the code accordingly. We will define a random output array and compute the respective SoftMax values for it. We compute the exponential of each element in the output array and then divide each exponential by the sum of these values, resulting in a normalized output. This normalized output is the final result of the SoftMax function, which is basically a set of probabilities. Note that these probabilities add up to one, and the larger input receives the higher probability.

# Understanding exponentiation and SoftMax fundamentals
import numpy as np

outputs = np.array([2., 3., 4.])

# Using the above mathematical formula to compute SoftMax
exp_values = np.exp(outputs)
norm_base = np.sum(exp_values)
norm_values = exp_values / norm_base
norm_values

array([0.09003057, 0.24472847, 0.66524096])

Note that when we perform a similar computation with a multi-dimensional array, or over batches in the case of deep learning tasks, we will need to modify our code slightly. Let us reuse the multi-dimensional array from the earlier section for further analysis. Below is the code snippet for defining the inputs.

# Importing the numpy library and defining our input
import numpy as np

X = np.array([[0., 1., 2., 3.], 
              [3., 4., 5., 6.],
              [5., 8., 9., 7.]])

The computation of the exponential values remains the same as in the last step. However, for computing the sum of the array, we need to modify our code slightly. We are no longer aiming for a single value, as we need to obtain a sum for each row/batch of input features. Hence, we specify the axis as one to sum the elements along each row (zero would sum along the columns). However, the generated output will still not match the desired array shape, so we can either reshape the array or set the keepdims attribute to True. Below is the code snippet and the generated output.

# Computing SoftMax for an array

exp_values = np.exp(X)
norm_base = np.sum(exp_values, axis = 1, keepdims = True)
norm_values = exp_values / norm_base
norm_values

array([[0.0320586 , 0.08714432, 0.23688282, 0.64391426],
       [0.0320586 , 0.08714432, 0.23688282, 0.64391426],
       [0.01203764, 0.24178252, 0.65723302, 0.08894682]])

With that, we have covered most of the intricate details required for the implementation of the SoftMax activation function. The only other issue that we might run into is the explosion of values due to the exponentiation of large inputs, which can overflow the available numeric range. A common fix, which we use in the final implementation later in this article, is to subtract the maximum value of each row from the inputs before exponentiating; since SoftMax is unaffected by such a shift, the resulting probabilities stay the same.
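As a quick illustration (using our own toy values), naively exponentiating large inputs overflows to infinity, while subtracting each row's maximum first keeps the computation stable and leaves the resulting probabilities unchanged.

# Illustration of the overflow issue and the standard subtract-the-max fix.
import numpy as np

logits = np.array([[1000., 1001., 1002.]])

# Naive exponentiation overflows to inf (NumPy emits an overflow warning),
# and normalizing inf / inf would give nan probabilities.
naive = np.exp(logits)
print(naive)                                     # [[inf inf inf]]

# Subtracting the row-wise maximum keeps the inputs <= 0, so exp() stays
# in a safe range, and the final probabilities are mathematically identical.
shifted = np.exp(logits - np.max(logits, axis=1, keepdims=True))
print(shifted / np.sum(shifted, axis=1, keepdims=True))
# [[0.09003057 0.24472847 0.66524096]]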


Implementing Losses:

The next essential step that we will cover is the implementation of losses for neural networks. While there are several metrics, such as accuracy, F1-score, recall, and precision, the loss function is the most important requirement for effectively constructing neural networks. The loss function signifies the error computed during training; it determines how close our predicted values are to the desired output.

There are several different types of loss functions utilized in neural networks, each suited to particular tasks, such as classification, segmentation, and so on. Note that even though there are several pre-defined loss functions in deep learning frameworks, you may need to define your own custom loss functions for specific tasks. We will primarily discuss two loss functions, namely mean squared error and categorical cross-entropy, for the purposes of this article.

Mean Squared Error:

The first loss function that we will briefly analyze is the mean squared error. As the name suggests, we take the mean of the squared differences between the predicted and true values. The mean squared error is one of the more straightforward approaches to calculating the loss of a neural network. At the output nodes of the neural network, we compute the loss by comparing the outcome with the expected output: in the mean squared error, we take the average of the squares of the differences between the predicted and expected values. Below is the mathematical formula and the Python code implementation of the mean squared error loss.

$$ \frac{1}{n}\sum_{i=1}^{n}(y_{true} - y_{pred})^2 $$

import numpy as np

y = np.array([0., 1., 2.])
y_pred = np.array([0., 1., 1.])

mean_squared_loss = np.mean((y_pred - y) ** 2)
mean_squared_loss

0.3333333333333333

While working on projects, especially a topic like constructing neural networks from scratch, it is often a good idea to cross-verify that our implementations compute the expected results. One of the best ways to perform such a check is to use a scientific library toolkit like scikit-learn, from which you can import the required functions for evaluation. We can import the mean squared error metric available in the scikit-learn library and compute the result accordingly. Below is the code snippet for verifying your code.

# Importing the scikit-learn library for verification
from sklearn.metrics import mean_squared_error

mean_squared_error(y, y_pred)

0.3333333333333333

From the above results, it is noticeable that both the custom implementation and the verification through the scikit-learn library yield the same result. Hence, we can conclude that this implementation is correct. Readers are free to carry out more such checks for other losses and specific metrics if required. With a brief understanding of the mean squared error, we can now proceed to analyze the categorical cross-entropy loss function.

Categorical Cross-Entropy:

The primary loss function that we will consider for this article is categorical cross-entropy. This loss is very popular because of the success it has garnered, especially in the case of multi-class classification tasks. Typically, a categorical cross-entropy loss follows a final layer ending with a SoftMax activation function. For this loss, we compute the logarithms of the predictions linked to their respective class labels. Since the natural logarithm of a value between zero and one is negative, we make use of the negative (minus) sign to convert our loss function into a positive value. Below is the formula representation and the respective code snippet with the categorical cross-entropy loss output.

$$ Loss = -\sum_{i} y_i \times \log(\hat{y_i}) $$

import numpy as np

one_hot_labels = [1, 0, 0]
preds = [0.6, 0.3, 0.1]

cat_loss = - np.log(one_hot_labels[0] * preds[0] +
                    one_hot_labels[1] * preds[1] +
                    one_hot_labels[2] * preds[2])

cat_loss

0.5108256237659907

We will cover the more intricate implementation details of the categorical cross-entropy loss in the upcoming section, where we put all the knowledge gained in this article into a single neural network. The two variations, categorical (integer) labels as well as one-hot encoded labels, will both be handled while constructing the class for computing the categorical cross-entropy loss; a small preview of the two formats is shown below. Let us then proceed to the next section to complete our architectural build.
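Here is that preview, using our own toy values: integer (categorical) labels index the predicted probabilities directly, while one-hot labels pick them out through an element-wise product and a sum along each row, and both formats yield identical losses.

# Toy preview of the two label formats handled by the loss class below.
import numpy as np

preds = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])

# Sparse (categorical) labels: the index of the correct class per sample
y_sparse = np.array([0, 1])
loss_sparse = -np.log(preds[range(len(preds)), y_sparse])

# One-hot encoded labels: a 1 marking the correct class per sample
y_onehot = np.array([[1, 0, 0],
                     [0, 1, 0]])
loss_onehot = -np.log(np.sum(preds * y_onehot, axis=1))

print(loss_sparse)   # [0.35667494 0.22314355]
print(loss_onehot)   # identical values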


Bring this project to life

Putting it all together to construct a neural network:

Let us combine all the knowledge we attained in this article to construct our own neural network. First, we can start by importing the numpy library to perform the mathematical computations and the matplotlib library to visualize the dataset that we will work on.

# Importing the numpy library for mathematical computations and matplotlib for visualizations
import numpy as np
import matplotlib.pyplot as plt

The dataset we will utilize to test our neural network is the neural networks case study data referenced from the following website. The spiral dataset provides users with unique data to test their deep learning models. While data points scattered in clusters, or with distinct distances between classes, are easy to fit even for machine learning models, data elements arranged in spirals or similar structures are typically harder for models to learn and adapt to. Hence, if we are able to construct neural networks that can achieve such complex tasks, we can conclude that our networks are well constructed and trainable for the desired task.

# Reference for the spiral dataset - https://cs231n.github.io/neural-networks-case-study/

N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype="uint8") # class labels
for j in range(K):
    ix = range(N*j,N*(j+1))
    r = np.linspace(0.0,1,N) # radius
    t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
    X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
    y[ix] = j
    
# Visualizing the data
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.show()

Now that we have imported the libraries and have access to the dataset, we can proceed to construct our neural network. We will first implement the hidden layer that we discussed in one of the earlier sections of this article. Once we define our hidden (or dense) layers, we will also define the required activation functions for this task. ReLU is the standard activation that we will use for most of the layers, while the SoftMax activation is used in the last layer for the probability distribution. Since there are three classes, we can construct our neural network accordingly with SoftMax. Note that there is a slight modification in the SoftMax code to avoid the explosion of values due to exponentiation.

The next step is to define our loss function. We will make use of categorical cross-entropy, which is one of the more common loss functions utilized for solving multi-class classification tasks. The predicted values are clipped on both sides: the lower side is clipped to avoid taking the log of zero, while the upper side is clipped so that the mean is not dragged towards any particular value. There are two specific cases handled in the code snippet below: the first is when the labels are categorical (integer class indices), while the second is when the labels are one-hot encoded. Finally, we compute the mean of the sample losses for the neural network.

# Setting a random seed to allow better tracking
np.random.seed(42)

# Creating a hidden layer
class Hidden:
    def __init__(self, input_features, neurons):
        # Define the random weights and biases for our problem statement
        self.weights = 0.1 * np.random.rand(input_features, neurons)
        self.bias = np.zeros((1, neurons))
        
    def forward(self, X):
        # Create the forward propagation following the formula: Y = X * W + B
        self.output = np.dot(X, self.weights) + self.bias
        
# Defining the ReLU activation function
class ReLU:
    def forward(self, X):
        # Compute the ReLU activation
        self.output = np.maximum(0, X)
        
# Defining the Softmax activation function
class Softmax:
    # Forward pass
    def forward(self, X):
        # Get unnormalized probabilities (subtracting the row maximum avoids overflow)
        exp_values = np.exp(X - np.max(X, axis = 1, keepdims = True))
        # Normalize them for each sample
        probabilities = exp_values / np.sum(exp_values, axis = 1, keepdims = True)
        self.output = probabilities
        
# Cross-entropy loss
class Loss_CategoricalCrossentropy():
    # Forward pass
    def forward(self, y_pred, y_true):
        # Number of samples in a batch
        samples = len(y_pred)
        # Clip data on both sides
        y_pred_clipped = np.clip(y_pred, 1e-7, 1 - 1e-7)
        
        # Probabilities if categorical labels
        if len(y_true.shape) == 1:
            correct_confidences = y_pred_clipped[range(samples), y_true]
        
        # Mask values - only for one-hot encoded labels
        elif len(y_true.shape) == 2:
            correct_confidences = np.sum(y_pred_clipped * y_true, axis = 1)
        
        # Losses
        negative_log_likelihoods = - np.log(correct_confidences)
        return negative_log_likelihoods
    
    def calculate(self, output, y):
        # Calculate sample losses
        sample_losses = self.forward(output, y)
        # Calculate mean loss
        data_loss = np.mean(sample_losses)
        # Return loss
        return data_loss

Now that we have implemented all the layers, activation functions, and losses required for our neural network, we can proceed to call the neural network components and create the neural network architecture. We will make use of a hidden layer followed by the ReLU activation, a second hidden layer followed by the SoftMax activation, and the final categorical cross-entropy loss. Below is the code snippet and the respective result achieved.

# Defining the neural network components
dense1 = Hidden(2, 3)
activation1 = ReLU()
dense2 = Hidden(3, 3)
activation2 = Softmax()
loss_function = Loss_CategoricalCrossentropy()

# Creating our neural network
dense1.forward(X)
activation1.forward(dense1.output)
dense2.forward(activation1.output)
activation2.forward(dense2.output)
print(activation2.output[:5])
loss = loss_function.calculate(activation2.output, y)
print('loss:', loss)

[[0.33333333 0.33333333 0.33333333]
 [0.33332712 0.33333685 0.33333603]
 [0.33332059 0.33334097 0.33333844]
 [0.33332255 0.33332964 0.33334781]
 [0.33332454 0.33331907 0.33335639]]
loss: 1.0987351566171433
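Although the walkthrough above stops at the loss, one optional sanity check we can append (our own addition, reusing activation2.output and y from the snippet above) is the accuracy of this single untrained pass, which should sit near chance level for three classes.

# Optional sanity check (not part of the original walkthrough): accuracy of
# the single untrained forward pass, comparing the most probable class per
# sample against the true labels. With near-uniform probabilities, this
# hovers around chance level (~0.33 for three classes).
predictions = np.argmax(activation2.output, axis=1)
accuracy = np.mean(predictions == y)
print('accuracy:', accuracy)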

Using the knowledge we have gained throughout this article, we have successfully been able to construct our neural network from scratch. The constructed neural network model performs a single pass over the data, and the loss is computed accordingly. Even though we have touched on numerous major aspects of neural networks in this article, we are far from covering all the essential topics. We still need to incorporate the training process, make use of the relevant optimizers, become more familiar with differentiation, and construct more neural networks to understand their working procedure completely. My primary reference for most of the aspects covered in this article is the Sentdex YouTube channel (and their book), and I would highly recommend checking out a video guide for these concepts. We will cover the remaining information on neural networks from scratch in the next part of this series!


Conclusion:

Photo by DeepMind / Unsplash

The use of deep learning frameworks such as TensorFlow and PyTorch can often trivialize the methodology behind the construction of deep learning models. It is essential to gain a core understanding of how neural networks work to develop a more conceptual and intuitive knowledge of the subtleties behind deep learning. I would once again recommend checking out the aforementioned channel for video guides to some of the sections covered in this article. Learning how to construct neural networks from scratch will allow developers to have more control over, and understanding of, the complex deep learning tasks and projects that they must solve.

In this article, we took a look at some of the fundamental concepts required for constructing neural networks from scratch. After a brief introduction to a few of the elementary topics, we proceeded to focus on the foundations of neural networks. First, we understood the implementation of layers in neural networks, primarily focusing on the hidden (or dense) layers. We then touched on the significance of activation functions, specifically the ReLU and SoftMax activation functions, which are extremely useful for activating the nodes to fit the model effectively. We then discussed the loss functions, namely mean squared error and categorical cross-entropy, which are useful for adjusting the weights during the training of the model. Finally, we put all the concepts together to create a neural network from scratch.

We have only covered some of the basic aspects of neural networks in the first two parts of this series. In the upcoming third part, we will look at other essential concepts, such as optimizers and convolutional layers, and construct more neural networks from scratch. Until then, keep programming and exploring!
