tion of security games to this domain specifically,
reviewing how adversaries’ behaviors are modeled
and how to optimize patrolling strategies through
allocating limited security resources. Finally, we presented our PAWS software, which builds on security game models to address wildlife protection problems, describing its data inputs (for example, animal density and poaching data) and its outputs (for example, models of poachers' behaviors and suggested patrol routes for rangers). We described how similar approaches had previously been applied successfully in the wildlife domain, and how they could be used on Sumatra with PAWS.
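To make the idea of optimizing patrols with limited resources concrete, the following is a minimal sketch of a zero-sum Stackelberg security game, the class of model underlying software like PAWS. The target names, payoff numbers, and the brute-force grid search are all hypothetical simplifications for illustration; they are not PAWS's actual data or solution algorithm.

```python
import itertools

# Toy payoffs per patrol area: (poacher reward if unprotected,
# poacher penalty if caught). Area names and values are made up.
targets = {
    "grid_A": (8, -4),
    "grid_B": (5, -2),
    "grid_C": (3, -1),
}

def poacher_best_response(coverage):
    """Return the target maximizing the poacher's expected utility,
    given the probability that each target is patrolled."""
    def eu(t):
        reward, penalty = targets[t]
        c = coverage[t]
        return c * penalty + (1 - c) * reward
    return max(targets, key=eu)

def defender_utility(coverage):
    """Defender utility is the negative of the poacher's expected gain
    at the poacher's best-response target (zero-sum simplification)."""
    t = poacher_best_response(coverage)
    reward, penalty = targets[t]
    c = coverage[t]
    return -(c * penalty + (1 - c) * reward)

def best_coverage(budget=1.0, step=0.1):
    """Grid-search coverage vectors whose total patrol effort stays
    within budget, keeping the one with the best defender utility."""
    levels = [round(i * step, 2) for i in range(int(1 / step) + 1)]
    best, best_u = None, float("-inf")
    for combo in itertools.product(levels, repeat=len(targets)):
        if sum(combo) > budget + 1e-9:
            continue
        coverage = dict(zip(targets, combo))
        u = defender_utility(coverage)
        if u > best_u:
            best, best_u = coverage, u
    return best, best_u

coverage, utility = best_coverage()
```

The key structural feature is that the defender commits to a randomized coverage strategy first and the poacher best-responds to it; the optimal coverage therefore concentrates patrol effort on the most attractive targets until no single target stands out. Real systems solve this with mathematical programming rather than enumeration.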
Participants engaged in several discussion sessions on challenges in wildlife protection, including resources
(that is, factors that motivate people to enter protected areas), illegal activities (that is, types of illegal
activities in conservation areas), and wildlife protection (that is, improving security approaches). In
small groups, they exchanged knowledge about these topics and generated potential solutions; each group then presented its conclusions to the other groups. We encouraged groups to develop solutions
and provide feedback that could be conceptualized in
a game-theoretic manner and potentially incorporated into AI software.
Participants played the board game as poachers and rangers. For this activity, they were divided into two groups. Each group took turns playing as rangers (who created patrol strategies) and as poachers (who decided where to poach in games generated by the other team); each defender strategy was played only once.
Given the greater amount of time available for the workshop relative to the classroom-based units, every participant also played five rounds of the computer-based games as poachers, in addition to the board games (figure 5).
After each round, the poacher behavior models were
updated based on participants’ responses, and each
subsequent game used a defender strategy created
using these updated models. On the final day, we presented the game results, that is, the defender utilities
based on poachers’ decisions in the online games.
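The round-by-round loop described above can be sketched as follows. This is an illustrative simplification under assumed details: the frequency-with-smoothing poacher model and the proportional coverage rule stand in for the behavioral models (for example, SUQR-style models) and optimization actually used in PAWS, and the target names and simulated participant choices are invented.

```python
import random

# Hypothetical patrol areas; counts of observed poacher choices so far.
targets = ["grid_A", "grid_B", "grid_C"]
attack_counts = {t: 0 for t in targets}

def fit_poacher_model(counts, smoothing=1.0):
    """Refit the poacher model from observed choices: estimate attack
    probabilities with Laplace smoothing (a stand-in for a real
    behavioral model)."""
    total = sum(counts.values()) + smoothing * len(counts)
    return {t: (counts[t] + smoothing) / total for t in counts}

def defender_strategy(model, budget=1.0):
    """Derive the next round's strategy from the updated model:
    allocate patrol coverage in proportion to predicted attack risk."""
    return {t: budget * p for t, p in model.items()}

random.seed(0)
for round_number in range(5):  # five rounds, as in the workshop
    model = fit_poacher_model(attack_counts)
    coverage = defender_strategy(model)
    # Simulated participant response: favor a less-covered target,
    # with a little noise standing in for individual variation.
    choice = min(targets, key=lambda t: coverage[t] + random.uniform(0, 0.1))
    attack_counts[choice] += 1
```

The point of the loop is the feedback structure: each round's observed poacher decisions update the model, and each subsequent defender strategy is computed from the updated model, which is how the software's decisions can adapt as more data are collected.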
By playing these games in a repeated fashion, the
participants developed a better understanding of how
poachers may react to rangers’ strategies over time,
and of the weaknesses of various defender strategies.
They also learned how AI software such as PAWS can
make optimal decisions based on models of players’
behaviors, and how such decisions can adapt and
improve over time as more data are collected.
Results of Games
Figure 5. Participants Played Board Games (top) and Computer Games (bottom).

Each defender strategy in the board games was played only once, so results based on single data points may not be reliable. In light of this, and because only the security experts played the computer game (the high school and university students did not), we highlight the results of the computer games here. Figure 6 shows the defender
utilities obtained by deploying AI-based defender
strategies (that is, PAWS) over several rounds against
security experts playing as poachers. In the figure,
lower values on the y-axis indicate better participant
performance, and worse performance by PAWS. We
observe that PAWS's performance begins low, initially increases, and then declines. This decline suggests some improvement and learning over time among the security experts, providing modest support for our unit objective of improving probabilistic reasoning