
Question

In perceptron learning, what happens when an input vector is correctly classified?

a.

small adjustments in the weights are made

b.

large adjustments in the weights are made

c.

no adjustments in the weights are made

d.

the weight adjustment doesn't depend on the classification of the input vector


Answer: (c). No adjustments in the weights are made.
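A minimal sketch of why this is the answer, assuming a step activation and the update rule quoted in the related questions below, w(m + 1) = w(m) + n(b(m) – s(m)) a(m); the function name and the learning-rate value here are illustrative, not from the source:

```python
import numpy as np

def perceptron_step(w, a, b, eta=0.1):
    """One perceptron update: w(m+1) = w(m) + eta * (b(m) - s(m)) * a(m)."""
    s = 1 if np.dot(w, a) >= 0 else 0   # actual output from a step activation
    # If the input is correctly classified, b - s = 0 and the correction term
    # vanishes, so the weights are left unchanged (option c).
    return w + eta * (b - s) * a

# Example: a correctly classified input produces no weight change.
w = np.array([0.5, -0.2])
a = np.array([1.0, 1.0])
print(perceptron_step(w, a, b=1))   # dot(w, a) = 0.3 >= 0, so s = 1 = b and w is unchanged
```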


Similar Questions

Q. When two classes can be separated by a straight line, they are known as?

Q. If two classes are linearly inseparable, can the perceptron convergence theorem be applied?

Q. Two classes are said to be inseparable when?

Q. Is it necessary to set the initial weights in the perceptron convergence theorem to zero?

Q. The perceptron convergence theorem is applicable for what kind of data?

Q. Can the model w(m + 1) = w(m) + n(b(m) – s(m)) a(m), where b(m) is the desired output, s(m) is the actual output, a(m) is the input vector and w denotes the weights, be used for perceptron learning?

Q. If e(m) denotes the error used to correct the weights, what is the formula for the error in the perceptron learning model w(m + 1) = w(m) + n(b(m) – s(m)) a(m), where b(m) is the desired output, s(m) is the actual output, a(m) is the input vector and w denotes the weights?

Q. Convergence in perceptron learning takes place if and only if:

Q. When the line joining any two points in a set lies entirely within the region enclosed by the set in M-dimensional space, the set is known as?

Q. Is it true that the percentage of linearly separable functions increases rapidly as the dimension of the input pattern space is increased?

Q. If pattern classes are linearly separable, do the hypersurfaces reduce to straight lines?

Q. As the dimensionality of the input vector increases, what happens to linear separability?

Q. In a three-layer network, the shape of the dividing surface is determined by?

Q. In a three-layer network, the number of classes is determined by?

Q. Is it true that the intersection of linear hyperplanes in a three-layer network can only produce convex surfaces?

Q. Is it true that the intersection of convex regions in a three-layer network can only produce convex surfaces?

Q. If the output requires nonconvex regions, how many layers does the neural network need at minimum?

Q. Can all hard problems be handled by a multilayer feedforward neural network with nonlinear units?

Q. What is a mapping problem?

Q. Can the mapping problem be a more general case of the pattern classification problem?