Question

What is the feature that doesn’t belong to pattern classification in feedforward neural networks?

a. recall is direct

b. delta rule learning

c. non-linear processing units

d. two layers

Posted under Neural Networks

Answer: (b) delta rule learning

Explanation: Pattern classification in a feedforward network is characterized by two layers of non-linear processing units and direct recall, with weights determined by perceptron learning. Delta rule learning, which requires a differentiable output function, is instead a feature of pattern mapping.
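For contrast, here is a minimal sketch of a single delta rule weight update. The sigmoid output unit, function names, and learning rate are illustrative assumptions, not part of the question:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def delta_rule_update(w, a, b, eta=0.1):
    """One delta rule step for a single sigmoid output unit:
    dw = eta * (b - s) * f'(net) * a, where net = w . a and s = f(net).
    The derivative term f'(net) is why the rule needs a differentiable
    output function, unlike the hard-limiting units used for classification."""
    net = np.dot(w, a)
    s = sigmoid(net)
    return w + eta * (b - s) * s * (1.0 - s) * a  # sigmoid'(net) = s * (1 - s)
```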

Similar Questions

Q. What is the feature that doesn’t belong to pattern mapping in feedforward neural networks?

Q. In determination of weights by learning, for orthogonal input vectors what kind of learning should be employed?

Q. In determination of weights by learning, for linear input vectors what kind of learning should be employed?

Q. In determination of weights by learning, for noisy input vectors what kind of learning should be employed?

Q. What are the features that can be accomplished using affine transformations?

Q. What is the feature that could not be accomplished earlier without affine transformations?

Q. What are affine transformations?

Q. Can an artificial neural network capture association if the number of input patterns is greater than the dimensionality of the input vectors?

Q. By using only linear processing units in the output layer, can an artificial neural network capture association if the number of input patterns is greater than the dimensionality of the input vectors?

Q. Number of output cases depends on what factor?

Q. For noisy input vectors, can the Hebb methodology of learning be employed?

Q. What is the objective of perceptron learning?

Q. On what factor does the number of outputs depend?

Q. In perceptron learning, what happens when an input vector is correctly classified?

Q. When two classes can be separated by a straight line, they are known as?

Q. If two classes are linearly inseparable, can the perceptron convergence theorem be applied?

Q. Two classes are said to be inseparable when?

Q. Is it necessary to set the initial weights in the perceptron convergence theorem to zero?

Q. The perceptron convergence theorem is applicable for what kind of data?

Q. w(m + 1) = w(m) + η(b(m) − s(m)) a(m), where b(m) is the desired output, s(m) is the actual output, a(m) is the input vector, η is the learning rate, and w denotes the weight vector: can this model be used for perceptron learning? (See the sketch below.)
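The update rule in the last question is exactly the perceptron learning law. Below is a minimal runnable sketch of it, assuming a bipolar hard-limiting output unit; the function names, learning rate, and the AND-function example are illustrative choices, not part of the question set:

```python
import numpy as np

def perceptron_update(w, a, b, eta=0.1):
    """One step of w(m+1) = w(m) + eta * (b - s) * a,
    where s = sign(w . a) is the actual output and b the desired output."""
    s = 1.0 if np.dot(w, a) >= 0 else -1.0  # hard-limiting (bipolar) output unit
    return w + eta * (b - s) * a            # weights change only on misclassification

# Illustrative run on the linearly separable AND function with bipolar targets.
patterns = [(np.array([1.0, -1, -1]), -1),  # leading 1.0 acts as a bias input
            (np.array([1.0, -1,  1]), -1),
            (np.array([1.0,  1, -1]), -1),
            (np.array([1.0,  1,  1]),  1)]

w = np.zeros(3)
for _ in range(20):                         # a few epochs suffice here
    for a, b in patterns:
        w = perceptron_update(w, a, b)
print(w)  # a separating weight vector for AND
```

Because the actual output s(m) equals the desired output b(m) whenever a pattern is correctly classified, the weights change only on misclassifications; for linearly separable data this process terminates, which is the content of the perceptron convergence theorem referenced above.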