Question
Q. What is the feature that doesn't belong to pattern classification in feedforward neural networks?
a.
recall is direct
b.
delta rule learning
c.
non-linear processing units
d.
two layers
Posted under Neural Networks
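For context on option (a): "recall is direct" usually describes associative networks, where a stored output is retrieved with a single matrix-vector product. A minimal sketch, assuming Hebbian outer-product storage with orthonormal input vectors (the specific vectors below are illustrative assumptions, not part of the question):

```python
import numpy as np

# Orthonormal input vectors and their associated target outputs (illustrative)
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
b1, b2 = np.array([1.0, -1.0]), np.array([-1.0, 1.0])

# Hebbian outer-product rule: W = sum over pairs of b a^T
W = np.outer(b1, a1) + np.outer(b2, a2)

# Recall is direct: one matrix-vector product retrieves the stored output
recalled = W @ a1  # equals b1 exactly because a1, a2 are orthonormal
```

Because the inputs are orthonormal, the cross-terms vanish and recall is exact; with merely linearly independent inputs the recall would only be approximate.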
Similar Questions
Q. What is the feature that doesn't belong to pattern mapping in feedforward neural networks?
Q. In determination of weights by learning, for orthogonal input vectors what kind of learning should be employed?
Q. In determination of weights by learning, for linear input vectors what kind of learning should be employed?
Q. In determination of weights by learning, for noisy input vectors what kind of learning should be employed?
Q. What are the features that can be accomplished using affine transformations?
Q. What is the feature that could not be accomplished earlier, without affine transformations?
Q. What are affine transformations?
Q. Can an artificial neural network capture association if the number of input patterns is greater than the dimensionality of the input vectors?
Q. By using only linear processing units in the output layer, can an artificial neural network capture association if the number of input patterns is greater than the dimensionality of the input vectors?
Q. The number of output cases depends on what factor?
Q. For noisy input vectors, can the Hebb methodology of learning be employed?
Q. What is the objective of perceptron learning?
Q. On what factor does the number of outputs depend?
Q. In perceptron learning, what happens when the input vector is correctly classified?
Q. When two classes can be separated by a straight line, they are known as?
Q. If two classes are linearly inseparable, can the perceptron convergence theorem be applied?
Q. Two classes are said to be inseparable when?
Q. Is it necessary to set the initial weights in the perceptron convergence theorem to zero?
Q. The perceptron convergence theorem is applicable for what kind of data?
Q. w(m + 1) = w(m) + η(b(m) − s(m)) a(m), where b(m) is the desired output, s(m) is the actual output, a(m) is the input vector, and w denotes the weight vector: can this model be used for perceptron learning?
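The update equation in the last question above can be sketched directly in code. This is a minimal illustration, assuming a step-function output unit and the AND problem as sample linearly separable data; the learning rate, epoch count, and training data are assumptions, not part of the question:

```python
import numpy as np

def perceptron_train(X, b, eta=0.1, epochs=50):
    """Apply w(m+1) = w(m) + eta * (b(m) - s(m)) * a(m),
    where s(m) is the step-function output for input a(m)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for a, target in zip(X, b):
            s = 1.0 if w @ a >= 0 else 0.0
            # When the input is correctly classified, target - s = 0
            # and the weights are left unchanged.
            w = w + eta * (target - s) * a
    return w

# AND function with a leading bias input of 1 (illustrative data)
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
b = np.array([0, 0, 0, 1], dtype=float)

w = perceptron_train(X, b)
preds = [1.0 if w @ a >= 0 else 0.0 for a in X]
```

Because the AND data are linearly separable, the perceptron convergence theorem guarantees this loop reaches a weight vector that classifies every training input correctly.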
Suggested Topics
Are you eager to expand your knowledge beyond Neural Networks? We've curated a selection of related categories that you might find intriguing.
Click on the categories below to discover a wealth of MCQs and enrich your understanding of Computer Science. Happy exploring!