



Chapter 19. Multilayer Perceptrons

Key Topics

Multilayer perceptrons (MLPs) are another kind of artificial neural network, with layers of weighted connections between the inputs and the outputs. The structure of an MLP essentially resembles a set of cascaded perceptrons. Each processing unit has a relatively complex, nonlinear output function, which increases the capabilities of the network beyond those of a single layer.
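
To make the cascaded structure concrete, the following is a minimal sketch of an MLP forward pass, assuming fully connected layers and a sigmoid output function for each unit. The Layer structure and forward() function are illustrative names only, not interfaces defined in this chapter.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Each layer holds one weight vector per processing unit.
    // weights[j][i] connects input i of the layer to unit j;
    // biases[j] is the threshold term for unit j.
    struct Layer {
        std::vector<std::vector<float>> weights;
        std::vector<float> biases;
    };

    // Nonlinear output function applied by every processing unit.
    static float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

    // Propagate an input vector through the cascaded layers:
    // each layer's outputs become the next layer's inputs.
    std::vector<float> forward(const std::vector<Layer>& layers,
                               std::vector<float> activations)
    {
        for (const Layer& layer : layers) {
            std::vector<float> next(layer.biases.size());
            for (std::size_t j = 0; j < next.size(); ++j) {
                float sum = layer.biases[j];
                for (std::size_t i = 0; i < activations.size(); ++i)
                    sum += layer.weights[j][i] * activations[i];
                next[j] = sigmoid(sum);   // nonlinear activation
            }
            activations = next;           // feed the next layer
        }
        return activations;
    }

With a single layer this reduces to an ordinary perceptron; adding hidden layers between the inputs and outputs is what gives the network its extra representational power, as discussed in the rest of this chapter.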

This chapter builds upon the information in Chapter 17, "Perceptrons," which covered single-layer perceptrons. This chapter covers the following topics:

  • The history behind perceptrons, notably why multilayer models are necessary

  • The representation of MLPs, introducing the concepts of topology and nonlinear activation

  • The simulation of multiple layers of processing units, and how it differs from the single-layer variant

  • The parallels between perceptrons and their biological counterparts: neural networks

  • Methods for training MLPs based on the concept of back-propagation of error

  • Practical issues behind the training process, and the problems that occur with multiple layers

  • The major advantages and disadvantages of perceptrons for game development

Perceptrons with multiple layers can recognize patterns in gameplay, predict the outcome of a fight, and control nonplayer character (NPC) movement. These tasks can be learned from a set of examples, with potentially better performance than single-layer perceptrons.
