What is a biological neural network?

When it fires, information is passed from the input layer to the output layer, similar to what happens in a biological neural network. When the activation function fires, information is passed from one layer to the other. Okay, so like I told you earlier, the weights can be used to amplify or de-amplify the original input signal. Meaning to say, they are used to determine how each variable will be used and what importance is given to each of the variables. Okay, let's take a simple example.


This example has got nothing to do with neural networks, actually. Consider that you have, say, two input variables, x1 and x2. So these are two input variables in a data set, one of them having a value of 0.6 and the other having a value of 1. The weights applied to these two input variables are 0.5 and 0.8. You are also given a threshold of 1.0.

What happens in simple terms in the neural network is that the first step of the transfer function, which we saw in the previous slide, nothing but the sum of products, will happen here. So 0.6 × 0.5 plus 1 × 0.8, and that gives you 1.1. This is similar to the summation function that we had seen earlier. Next we compare this value that we've got with the threshold. The value that we've got is 1.1, but the threshold is 1.0. So we see that the output is more than the threshold that was allowed.
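In R, the language this course uses, the computation from this example is just a weighted sum followed by a threshold comparison:

```r
# Values taken straight from the example above
x <- c(0.6, 1.0)        # the two input variables
w <- c(0.5, 0.8)        # their weights
threshold <- 1.0

weighted_sum <- sum(x * w)   # 0.6*0.5 + 1*0.8 = 1.1
weighted_sum > threshold     # TRUE, so the neuron fires
```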

Since the output is greater than the threshold, the neuron is activated and it fires. Okay, so this is a very simple example. This is not what actually happens in a neural network, because there the derivative of the error function and all that is calculated. The error function is nothing but the difference between the actual output and the desired output. So the derivative of that is calculated and passed back to the neural network.
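As a hedged sketch of that error term, assuming a squared-error function, which the lecture does not name explicitly, the calculation looks like this:

```r
# Made-up numbers; 'desired' is the target, 'actual' is what the network produced
desired <- 1.0
actual  <- 0.8
error   <- desired - actual        # the difference the lecture describes
E       <- 0.5 * error^2           # one common error function (squared error, assumed)
dE      <- -(desired - actual)     # its derivative w.r.t. the actual output
```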

I put this example only to show you how the threshold works. So whenever the output exceeds the threshold, the neural network will know that it needs to fire the signal. What is meant by firing the signal here is nothing but backpropagation. Meaning to say that whenever the output exceeds the threshold, backpropagation will occur, a re-iteration within the neural network will occur, and the algorithm will try to improve the output. Okay.

Now we'll see how we implement the artificial neural network. So this is a simple diagram to show how a neural network works. There are, for example, three input variables and two output variables here. There can be any number of hidden layers present in the neural network, and whatever computation needs to be applied is done by the hidden layers only. There are different ways to decide how many hidden layers are to be present, and we'll see that when we work with R. So what happens here is that the input layer will take the input from the input variables.

You have three variables here, so each of the neurons that you see in the input layer will have data from one variable each. The input layer will take input from the input variables. The hidden layer will help the input data move to the output layer: the input layer will pass the information to the hidden layer with the weights applied, and then the hidden layer will move that data to the output layer. But the hidden layers are essentially a big black box; as I told you earlier, you will never be able to understand exactly what is happening inside the hidden layer. The output layer will show you the final output. So here is a more detailed version of the diagram.
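Since we will be working with R, here is a hedged sketch of how such a network could be defined there. It uses the neuralnet package, one common choice; the data frame, column names, and hidden-layer size are all made up for illustration:

```r
library(neuralnet)

# Toy data: three input variables and one binary output, all invented for this sketch
set.seed(1)
df <- data.frame(x1 = runif(50), x2 = runif(50), x3 = runif(50))
df$y <- as.numeric(df$x1 + df$x2 + df$x3 > 1.5)

# 'hidden = c(4)' asks for one hidden layer with 4 neurons;
# hidden = c(4, 3) would ask for two hidden layers, and so on
nn <- neuralnet(y ~ x1 + x2 + x3, data = df,
                hidden = c(4), linear.output = FALSE)
```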

What I'm trying to show here is that any number of nodes can be present at every layer. Then there are weights that are applied to the inputs, which take them to the hidden layer. If you see in the diagram, there are three inputs, 1, 2, 3, and then there are some weights applied to each of the inputs when it is being taken to the hidden layer. So each input variable, with some weight applied, will go to each neuron in the hidden layer. So if there are three input variables and there are three neurons in the hidden layer, each of the inputs, say input 1, will go to each of the neurons in the hidden layer, and a different weight will be applied.
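A minimal sketch of that fan-out, with made-up values: every input reaches every hidden neuron, each connection carries its own weight, so the weights naturally form a matrix:

```r
# W[i, j] is the weight on the connection from input i to hidden neuron j
x <- c(0.2, 0.7, 0.5)                      # three made-up input values
W <- matrix(runif(9), nrow = 3, ncol = 3)  # 3 inputs x 3 hidden neurons

hidden_in <- as.vector(x %*% W)            # weighted sum arriving at each hidden neuron
```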

When it goes through each of the neurons in the hidden layer, there is another category of weights involved. So one category is the weights that are applied to the input variables; the other category of weights is what we call the bias values. The bias values are also used to train the neural network. This can be compared to regression, if you remember. There also we used to have a similar thing, where some weights are assigned to each of the variables, and those were the coefficients in regression. In regression you used to get an equation something like y = a + b1·x1 + b2·x2 + b3·x3 and so on. So what the coefficients are in regression, the weights are in the neural network. But there was also an intercept, which captured the unexplained variance or unexplained output.
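Here is that analogy written out with made-up numbers: the regression coefficients play the role of the weights, and the intercept plays the role the bias is about to play:

```r
# Regression equation: y = a + b1*x1 + b2*x2 + b3*x3
x1 <- 0.2; x2 <- 0.7; x3 <- 0.5      # input variables (made up)
a  <- 0.1                            # intercept -> becomes the bias
b1 <- 0.4; b2 <- -0.3; b3 <- 0.6     # coefficients -> become the weights

y <- a + b1*x1 + b2*x2 + b3*x3       # same shape as a neuron's weighted sum plus bias
```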

Something similar happens here with the hidden bias. There is some information that the input variables along with the weights cannot capture, and that is captured by the hidden bias. So for each of the layers that you have apart from the input layer, there will be a bias added. If you see in the diagram here, with the second layer, that is the hidden layer present in between, there is a hidden bias that is added. The same goes for each hidden layer that you have; here we have just one hidden layer, but for each hidden layer that is present, there will be one hidden bias for that layer.

Finally, there will also be an output bias, which is present for the output layer: nothing but the intercept that we used to have in the regression equation. Okay, that was the basics of what an artificial neural network does. We'll go to backpropagation now, to see how the neural network learns. A neural network is a powerful learning mechanism, although, it being a black box, you will never be able to understand what that learning mechanism is. But that being said, the neural network has a powerful learning mechanism, and it can learn any function given that it has enough hidden units. So each of the hidden units that are present enables it to learn the input data set. And how it learns is by the mechanism that we call backpropagation.

So like I told you earlier, when an input is fed to the neural network, that is, when the input variables are fed into the neural network, some weights will be applied to the inputs that are coming in, and some bias will be applied in addition. Finally, all that data will go to the hidden layer. The same process will be repeated, and the data will move to the output layer.
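Putting those steps together, here is a hedged sketch of one full forward pass, with made-up sizes (three inputs, three hidden neurons, one output) and a sigmoid activation assumed purely for illustration:

```r
sigmoid <- function(z) 1 / (1 + exp(-z))   # assumed activation function

x        <- c(0.2, 0.7, 0.5)               # input variables (made up)
W_hidden <- matrix(runif(9), 3, 3)         # input-to-hidden weights
b_hidden <- runif(3)                       # hidden bias, one per hidden neuron
W_out    <- runif(3)                       # hidden-to-output weights
b_out    <- runif(1)                       # output bias (the "intercept")

hidden <- sigmoid(as.vector(x %*% W_hidden) + b_hidden)  # weights plus bias, then...
output <- sigmoid(sum(hidden * W_out) + b_out)           # ...the same process repeated
```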

When you get the data at the output layer, the difference between the actual or desired output and the output that you are getting is taken, and some error is calculated based on that difference. Whatever the error is will be fed back to the first layer, and the whole process repeats. So these errors are fed back to the neural network, and then, coming to the question that was asked earlier, the weights are changed to try to reduce the errors and give the correct output. In the first iteration random weights will be assigned, but with each iteration that happens, the weights are changed using the errors that we got in the previous iteration. This is the diagram that explains backpropagation. So there are some input variables, then there is a hidden layer. Say there are three input variables here.
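To make that loop concrete, here is a hedged, hand-rolled sketch of the whole cycle on a toy problem: a single sigmoid neuron learning logical OR. Everything here, from the learning rate to the 1000 iterations, is an illustrative assumption, not the lecture's exact algorithm:

```r
sigmoid <- function(z) 1 / (1 + exp(-z))

set.seed(42)
x <- matrix(c(0, 0,  0, 1,  1, 0,  1, 1), ncol = 2, byrow = TRUE)  # toy inputs
y <- c(0, 1, 1, 1)                        # desired outputs (logical OR)
w <- runif(2); b <- runif(1)              # first iteration: random weights and bias
lr <- 0.5                                 # learning rate (assumed)

for (iter in 1:1000) {
  out  <- sigmoid(x %*% w + b)            # forward pass
  err  <- y - out                         # desired output minus actual output
  grad <- as.vector(err * out * (1 - out))  # error signal fed back through the neuron
  w <- w + lr * as.vector(t(x) %*% grad)  # change the weights to reduce the error
  b <- b + lr * sum(grad)                 # ...and the bias
}

round(sigmoid(x %*% w + b))               # should now match y: 0 1 1 1
```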
