Competitive Evaluation in a Video Game Development Course

Manuel Palomo-Duarte
Computer Languages and Systems Department, University of Cádiz
Cádiz, Spain
[email protected]

Juan Manuel Dodero
Computer Languages and Systems Department, University of Cádiz
Cádiz, Spain
[email protected]

José Tomás Tocino
Computer Languages and Systems Department, University of Cádiz
Cádiz, Spain
[email protected]

Antonio García-Domínguez
Computer Languages and Systems Department, University of Cádiz
Cádiz, Spain
[email protected]

Antonio Balderas
Computer Languages and Systems Department, University of Cádiz
Cádiz, Spain
[email protected]

ABSTRACT

The importance of the video game and interactive entertainment industries has continuously increased in recent years. Academia has introduced computer game development in its curricula. In the case of the University of Cádiz, a non-compulsory course about “Video Game Design” is offered to Computer Science students. In the artificial intelligence lesson, students play a competition to earn part of their grade. Each student individually implements a strategy to play a predefined board-game. The different strategies compete in a software environment that implements the game. Grading this part of the course depends on the results obtained in the competition. Additionally, before running the competition, students have to read the source code written by their classmates and bet a part of their grade on other teams’ results. This way, they elaborate a critical analysis of the source code written by other programmers: a very valuable skill in the professional world. Students that are not so proficient in coding are still rewarded if they demonstrate they can do a good analysis of the source code written by others.

Categories and Subject Descriptors

I.2.1 [Artificial Intelligence]: Applications and Expert Systems; K.3.2 [Computers and Education]: Computer and Information Science Education

General Terms

Experimentation


Keywords

Competition, video game development, artificial intelligence, expert system programming

1. INTRODUCTION

The importance of the video game and interactive entertainment industries has continuously increased in recent years [16]. Academia has introduced computer game development in its curricula. In the case of the University of Cádiz, a non-compulsory course about “Video Game Design” is offered to students of its Computer Science degrees in their last (third) year. The recommended prerequisite is passing the Object Oriented Programming, Data Structures, and Analysis and Design of Algorithms courses of the second year. There is also an elective Introduction to Artificial Intelligence course offered in the second year, but it is not a prerequisite, as our experience provides all the necessary background on the topic and that course does not include lessons on expert systems.

The course mainly follows a project-based learning approach [13]. However, in the artificial intelligence lesson, students play a competition (implementing a serious game [3]) to earn part of their grade. Each student individually programs a strategy to play a predefined board-game. The different strategies compete in a software environment that implements the game. Grading this part of the course depends on the results obtained in the competition, which serves as a motivating factor [2].

To implement the competition, a specially designed free software tool provides a common environment where different strategies, coded as rule-based expert systems, compete against each other. The system models a square board in which players have partial knowledge of the world (i.e., there is no “perfect strategy” to play the board-game). Using this application, the students can program and test their expert system rule sets against other rule sets or human players. After competing in a league, students can improve their systems before participating in a play-off tournament. This way, the students can develop the skill of critically analyzing the strategy they have implemented.


Table 1: Pieces in each army

Value   Number of pieces
1       1
2       8
3       2
4       2
5       2
6       1

Additionally, besides programming a strategy, students have to read source code written by their classmates and bet a part of their grades on other teams. This way, they review source code written by other programmers, which is a very valuable skill in the professional world. At the same time, students that are not so proficient in coding are still rewarded if they can do a good analysis of the source code written by others, identifying the future winner of the competition.

The rest of the paper is organized as follows: first, we describe the methodology followed in the classroom. Then, we discuss the results obtained last year. The fourth section discusses similar experiences. Finally, we present some conclusions and outline future work.

2. METHODOLOGY

The main aim of the experience is that students of the video game design course learn artificial intelligence programming using expert systems. All work for this lesson must be done within the allotted 8 hours in the classroom, so the game has to be very easy to play and code for.

2.1 The game

The game implemented for the competition is a simplified version of the Stratego board game [19]. It pits two armies of sixteen pieces each against each other on an 8x8 grid. Both armies have the same pieces, listed in Table 1. Every piece has an associated value that is initially only visible to its owner. Players can initially place their pieces on the board in any order: one army is placed on the two lowest rows of the board and the other on the two highest rows. Figure 1 shows a possible initial setup: brackets indicate that the value of the piece is hidden from the opponent.

Figure 1: A possible initial setup.

Players take turns moving one piece to an adjacent square (upwards, downwards, rightwards or leftwards). When a piece moves into a cell occupied by an enemy piece, their values are revealed and the lower-valued piece is removed from the board (dies). If they have the same value, both pieces die. The objective of the game is to capture the opponent’s lowest-valued piece. After a certain number of turns have passed, the match is tied.

A free software tool named Gades Siege [8] has been developed to support the competition. It improves upon a previous system called “Resistencia en Cádiz”, and provides a common environment where the expert systems developed by the students and those saved from previous years can play against each other. Gades Siege allows for playing leagues, play-offs and single matches step by step, or for collecting statistics from batches of 100 unattended matches against an opponent (using different random seeds). A human can also take control of a team to test the behavior of a strategy under certain situations. This helps students test and improve their strategies.

Figure 2 shows a screenshot of a match between two teams. Most of the window is dedicated to the game board. The right edge of the window shows the names of both teams and their dead pieces, along with the number of turns played and the remaining turns before the match is tied. Below the game board, four buttons with arrows allow navigating the saved game states of the match: from left to right, they replay the match from the beginning, go back one step, go forward one step, and fast-forward to the last turn. Finally, three more icons in the bottom-right corner activate automatic simulation (advancing one turn per second), continue to the next match (or exit if not inside a tournament), and exit the tournament, respectively.

On the one hand, the game implements a partial-knowledge world for players. Thus, there is no “perfect strategy” to play the game, which avoids a loss of interest due to unbeatable teams. On the other hand, players have perfect knowledge of their own armies. Since there is only one agent playing, the game is kept simple: there is no need to implement the fuzzy intelligence and communication protocols usually required by a multi-agent system. As a result, the problem is suitable to be solved using expert systems.

In particular, as the board is discrete, the artificial intelligence can be easily programmed using a rule-based expert system [5]. This way, the strategy is implemented as a set of prioritized if-then rules that are executed according to information from the current game state. Programming is as simple as adding, modifying, removing or re-prioritizing rules. This offers a wide variety of behaviors that can be implemented by making slight changes to a rule set, including non-deterministic behavior if several rules of the same priority can be activated at a certain moment. Moreover, while general rules seem good for the beginning of a match, more specific ones play better in the last steps or in certain situations, so a good balance between both kinds of rules is required to create a winning team.
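
As an illustration, a fragment of a strategy written in CLIPS could look like the following sketch. The fact templates (own-piece, enemy-piece, move) and their slot names are hypothetical and only meant to convey the idea; the actual facts and interface exposed by Gades Siege may differ.

    ;; Hypothetical fact templates -- the real Gades Siege interface may differ.
    (deftemplate own-piece   (slot id) (slot value) (slot row) (slot col))
    (deftemplate enemy-piece (slot value) (slot revealed) (slot row) (slot col))
    (deftemplate move        (slot piece) (slot to-row) (slot to-col))

    ;; High-priority rule: capture an orthogonally adjacent enemy piece whose
    ;; revealed value is lower than ours.
    (defrule attack-revealed-weaker-piece
       (declare (salience 100))
       (own-piece (id ?p) (value ?v) (row ?r) (col ?c))
       (enemy-piece (revealed TRUE) (value ?w&:(< ?w ?v)) (row ?r2) (col ?c2))
       (test (= 1 (+ (abs (- ?r ?r2)) (abs (- ?c ?c2)))))
       =>
       (assert (move (piece ?p) (to-row ?r2) (to-col ?c2))))

    ;; Low-priority fallback: if no better rule fires, propose moving a piece
    ;; forward (assumes this army advances towards higher row numbers).
    (defrule default-advance
       (declare (salience -100))
       (own-piece (id ?p) (row ?r) (col ?c))
       =>
       (assert (move (piece ?p) (to-row (+ ?r 1)) (to-col ?c))))

In a scheme like this, re-prioritizing a rule is just a matter of changing its salience, which is why small edits to a rule set can produce very different behaviors.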

The system is programmed in Python using PyGame (a wrapper of the SDL multimedia library), and Glade for the GUI. As for the core, it is developed on CLIPS [15]. All these systems and our own code are multi-platform and free. Gades Siege is freely available for download at [8].

Figure 2: Screenshot of a match in Gades Siege. The value of the 6-point orange piece is revealed.

2.2 Schedule

The experience was carried out over 2 weeks, using the weekly 4-hour blocks available for the course (2 hours of lectures followed by 2 hours of laboratory):

• A 2-hour lecture dedicates the first hour to providing the required conceptual foundations behind rule-based expert systems, emphasizing their practical applications in science and engineering. The second hour is used to present the game. Sample rules are shown, from simple cases to a reasonably complex system, so that students can get familiar with the environment.

• After the lecture, students move to the computer classroom. Since the system is easy to program for, they can code basic rule sets in 2 hours, learning expert system programming with the support of a teacher.

• Students have 5 days to code their strategies at home. Two of the features previously described proved to be very useful for learning: automatically playing a very large batch of matches against sample strategies to collect statistics on general performance, and playing against a human to test how the strategy behaves under certain circumstances and fine-tune it.

• One day before the competition starts, students have to upload their strategies to the Moodle virtual learning platform used in the course. When all the students have uploaded their strategies, the teachers make them publicly available to the enrolled students, so they can read their colleagues’ code. Note that only the strategies, and not the initial formations, are available for download. If both could be downloaded, the students could actually play the competition themselves and (except for slight differences caused by random seeds) know the winner in advance.

• Lecture time in the second week starts with students declaring which strategy or strategies they will bet part of their marks on. They can bet on one, two or three different strategies, and the sum of the grade bet must be between 10% and 50%. Then, a two-legged round-robin league is played in which all students are confronted with each other. Some matches are watched by the entire class on a large wall screen, so students can explain the behavior they implemented.

• Usually, after playing the league, students identify minor issues and weak points in their systems that can easily be fixed. Note that changing the priority of a rule, or adding or deleting a single rule, is very easy to implement and can lead to a very different behavior. This improvement could include adding rules considered suitable from the strategies of classmates. Students have 1 hour in the laboratory to modify their rule sets, and then the final strategies take part in a play-off tournament.

2.3 Grading

In order to pass, a strategy only needs to defeat a naive (sparring) team included in the system. This way, students can be confident they will pass this part of the course, even if their rule set ends up last in the competition. We do not consider it fair that students who are able to program a good expert system in 2 weeks fail because their classmates are better programmers. Regardless of the quality of the strategies, one student must necessarily win the tournament, another must be second, and so on until the last place.

An additional point is given if the student explains her strategy clearly. On one hand, this prevents cheating (or at least ensures that students know what their strategies do); on the other hand, classmates can easily reuse some rules to improve their strategies for the play-off. In order to further improve their marks, students need to win matches against their classmates.


Table 2: Grades distribution

Mark (0-10)    | Performance of the team                                                   | Range of teams
Fail (below 5) | No match won                                                              | 0 or 1
5              | Only defeats the sparring (DS)                                            | 0 or 1
6              | DS and explains the strategy clearly                                      | 0 to All
7              | DS and wins another match                                                 | 0 to 2
8              | DS and stays in upper half of the league or passes one play-off round    | Half to all
9              | DS, stays in upper half of the league and passes one play-off round; also DS and wins a tournament | Half to half+2
10             | DS, stays in upper half of the league, passes one play-off and wins a tournament | 0 to 2

Strategies get extra points (up to a total score of 10) for each of the following (a worked example follows the list):

• Winning a match (not against the sparring opponent).

• Being in the top half of the league.

• Passing a play-off round.

• Winning a tournament.
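
For example (an illustrative combination of the criteria above, not an additional rule): a strategy that defeats the sparring team (5 points), is explained clearly (+1), wins one match against a classmate (+1) and finishes in the top half of the league (+1), but loses its first play-off match and does not win any tournament, would obtain 8 points.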

The experience up to this point has been carried out since the first year of the course, in 2007–08. It has a clear problem: it implies certain restrictions on the number of teams that can get certain grades. For example, there can be only one winner in each competition, and only half of the teams will pass the first play-off round. This limits the distribution of grades according to the teams’ performance, as shown in Table 2.

This part of the course represents 15% of the course grade. As we have seen, getting 7 points in this part is quite affordable: students only need to program a strategy, explain it to their classmates and win a match against a real opponent. A student getting 7 points could still get up to 9.55 points in the final course grade (0.15 ∗ 7 + 0.85 ∗ 10 = 9.55, over a maximum of 10, allowing an A++ mark). Considering the relative weight of the experience in the overall course grade and the motivation provided by the competition, we think that limiting the range of teams that can obtain certain grades is a good tradeoff. Nevertheless, a more flexible distribution of marks and grades according to students’ skills was desired.

For this reason, in the 2010–11 academic year, besides programming an expert system, students had to read source code written by their classmates and bet a part of their final marks on other teams. This way, they elaborated a critical analysis of the source code written by other programmers, a very valuable professional skill. At the same time, students that were not so proficient in coding were still rewarded if they showed they could do a good analysis of the source code written by others, identifying the future winner of the competition.

It is important to note that the bet only concerned the performance of the strategies in the league. The play-offs were played using the modified strategies that students improved in the lab, so they were different from those read by their colleagues before the competition. It would have been interesting to extend the betting system to the play-off, but it was not possible due to the strict time constraints (two blocks of 4 hours): the first block is dedicated to learning, and the second to playing and making quick changes to the strategies, leaving no time for analyzing and improving upon them.

This way, the relation between the performance of the strategy and the grade obtained by its author is further relaxed. If the final mark of a team in the league is X (out of a maximum of 9), then the student who coded it could get a minimum grade of X/2 (by betting 50% on a team unable to beat the sparring opponent), and a maximum of X/2 + 4.5 (by betting 50% on the best team, which obtains 9 points). But, if we suppose that every team gets at least 7 points, the minimum would be X/2 + 3.5, for betting half of the grade on a team that only got 7 points.

In any case, if a student preferred to keep the maximum of his own team’s mark, he could get a minimum of X ∗ 0.9 (supposing that he bet on a team that won no matches), or X ∗ 0.9 + 0.7 if we suppose 7 to be the minimum actual mark. The maximum would be X ∗ 0.9 + 0.9 (betting on a team that wins a competition).

As remarked before, these formulas concern only the league, covering up to 9 points. Two additional points could be obtained in the play-off of the improved strategies, for up to 10 total points.
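
In other words, the figures above are consistent with the league grade being a weighted average: if a student bets a fraction b of the mark (with 0.1 ≤ b ≤ 0.5) on a team that obtains Y points (or on several teams with weighted mean Y), the resulting grade is (1 - b) ∗ X + b ∗ Y. As an illustrative case, a student whose own strategy obtained X = 7 and who bet 50% on a league winner with Y = 9 would get 0.5 ∗ 7 + 0.5 ∗ 9 = 8 points.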

3. RESULTS OF THE EXPERIENCE

Due to the nature of the game (providing only partial knowledge of the world) and the limited knowledge of artificial intelligence techniques among the undergraduate students, there has been no “absolute best” team in any year. Curiously, quite often some of the teams at the top of the league had difficulties with, or even could not beat, some of the teams in the lowest part. This demonstrates that even the best teams had some weak points.

In informal conversations, students reported that the competition between peers is a motivating factor for learning expert system programming, reinforcing the conclusions of similar experiences [18]. They also commented that watching their army evolve against others without their intervention was a nice way to understand the application of artificial intelligence to game development.

This year, 15 students joined the course, but only 14 took part in the experience: one student uploaded a strategy but did not attend the sessions and got 0 points in the bet (as he did not provide a satisfactory reason for missing them). The 14 students bet an average of 24.29% of their marks: most bet below 30%, except for one who bet 40% and another who bet 50%. As for the lowest bets, three students bet the minimum 10% of their grade.

Only three students obtained lower grades due to their bets, and seven improved theirs. The maximum gain was obtained by a student whose strategy had 7 points and who bet half of his mark on the winning one (which obtained 8, the maximum in that league). The highest losses were −0.85 and −0.65 points. The rest of the students varied their strategy grades by 0.2 points or less. Finally, four kept the same grade because they bet different rates on several teams whose weighted mean was the same grade their own strategy got.

Interestingly, some students acknowledged that the main reason to bet on a certain team was not only their impression when they reviewed the code, but also the overall marks of its developer in previous courses. Although we do not have access to that information, in general it seems that students who perform well in programming subjects also do well in this experience.

Students were, on average, much more interested in this course than in others, judging from the anonymous survey conducted near the end of the term. It covered different aspects of the course, all of them rated from 1 (minimum) to 5 (maximum). Results showed very high scores:

• Evaluation method in the course: 4.69

• Competition of expert systems: 4.92

• Betting system included: 4.15

• General course value: 4.61

4. RELATED WORK

There are similar experiences of competition between students in the literature. Concerning simulated robotic soccer, there are different experiences using the RoboCup Soccer Server, an evolution of JavaSoccer [10]. This system uses a client-server architecture, so the artificial intelligence modules can be programmed not only using rule-based expert systems [12, 4], but also other techniques [11, 17]. In [7], groups of students had to implement the artificial intelligence for a RoboCup team during a course. At the end of it, a competition was run among the teams coded by the students and some others from RoboCup World Cup participants. The basis for the students’ examination and grade was a written report on the team design and its performance in the competition.

Another example is Robocode [6], a game that implements a virtual battlefield where two robots fight each other. Students use Java to implement different artificial intelligence techniques. Then, they observe how their agent performs compared to others by watching the agents compete.

There is an interesting experience using software based on the Ataxx game [14]. Groups of two or three students had to use Prolog to implement an agent which had to complete every game using a limited amount of CPU time. The assignment grade was determined by the ranking obtained in a free-for-all competition among these student programs.

Finally, even hardware competitions using robots [9, 1] are documented. They are run in the laboratory sessions of an Artificial Intelligence course. This way, students can put the concepts learned into practice and are highly motivated.

Nevertheless, all of the experiences discussed use much more complex games than our proposal, so they would have needed more time than was available; as we commented previously, time restrictions are very tight in this experience. As for the betting system, we have found no experiences using it, which shows its originality.

5. DISCUSSION AND CONCLUSION

The importance of the video game and interactive entertainment industries has caused academia to introduce computer game development in its curricula. In an elective “Video Game Design” course offered in the Computer Science degrees of the University of Cádiz, the grade for the artificial intelligence lesson is obtained competitively. Students code strategies to play a serious game. They compete in a software environment that implements a board-game, and the results obtained in the competition are part of their grade.

Firstly, each student has to code a strategy as a rule-based expert system. Those strategies compete in a league. Then, students can improve them before participating in a play-off tournament. Another part of the grade depends on a bet on the strategies of their classmates. This way, they practice critical analysis of source code written by other programmers, a very valuable skill in the professional world. At the same time, students that are not so proficient in coding are still rewarded if they demonstrate they can do a good analysis of the source code written by others, identifying the future winner of the competition.

The experience has been rewarding both for the students and the teacher. Developing the environment was a time-consuming task in the first year, but the results showed it was a very interesting way to motivate students. As the system is open source, it has evolved year after year, mainly thanks to students who detected deficiencies or interesting features that could be added and coded them. Thanks to its automation, the experience is very easy to run. Enrollment figures were 20 students in 2011, 16 in 2010, 27 in 2009 and 13 in 2008.

The skills developed include not only those specific to the lesson (programming rule-based expert systems, using them for video game artificial intelligence, etc.) but also general ones: improving a strategy by analyzing its performance, and critically analyzing source code written by other programmers.

In general, due to the generous grading system used for the strategies, all the students get good grades in this lesson. This way, students that are able to program a good expert system in 2 weeks cannot fail simply because their classmates are better. This is necessary because, regardless of the quality of the strategies, the range of teams in each place of the tournament is limited: one student must win the tournament, another must be second, and so on until the last place.

The use of a betting system has an interesting impact on the experience, as it relaxes the relation between the performance of a strategy and the grade obtained by its author. In fact, most of the 15 students in the course increased their grades thanks to it (seven in particular), and only three decreased them. Students could bet from 10% to 50% of their grade, the average being almost 25%. Interestingly, some students acknowledged that the main reason to bet on a certain team was not only their impression when they reviewed the code, but also the overall marks of its developer in previous courses. Although we do not have access to that information, our general impression is that students who perform well in programming courses also do well in this experience.

Two lines of future work lie ahead. The first is making it easier to change the rules of the board-game for each competition or match without having to code them, so the teams could play in different environments and avoid over-specialization. For example, there might be some unreachable squares on the board that require extra intelligence to play around, or some pieces could be uncovered from the beginning of the match. Our second line of work is applying log analysis techniques to detect weak rules in a strategy.

6. ACKNOWLEDGMENTS

This work has been funded by the Proyecto de Innovación Educativa Universitaria del Personal Docente e Investigador “La competitividad y el análisis crítico en la evaluación” (CIE44) of the Convocatoria 2010 de Proyectos de Innovación Educativa Universitaria para el PDI de la Universidad de Cádiz, funded by the Consejería de Innovación, Ciencia y Empresa of the Junta de Andalucía and the University of Cádiz (Spain), and by the Computer Languages and Systems Department of the same University.

7. REFERENCES

[1] M. Arevalillo-Herraez, S. Moreno-Picot, and V. Cavero-Millan. Arquitectura para utilizar robots AIBO en la enseñanza de la inteligencia artificial. IEEE RITA (Revista Iberoamericana de Tecnologías del Aprendizaje), 6(1):57–64, 2011.

[2] J. C. Burguillo. Using game theory and competition-based learning to stimulate student motivation and performance. Comput. Educ., 55:566–575, September 2010.

[3] M. J. Dondlinger. Educational video game design: A review of the literature. Journal of Applied Educational Technology, 4(1):21–31, 2007.

[4] R. Fathzadeh, V. Mokhtari, M. Mousakhani, and A. Shahri. Coaching with expert system towards RoboCup soccer coach simulation. In A. Bredenfeld, A. Jacoff, I. Noda, and Y. Takahashi, editors, RoboCup 2005: Robot Soccer World Cup IX, volume 4020 of Lecture Notes in Computer Science, pages 488–495. Springer Berlin / Heidelberg, 2006. doi:10.1007/11780519_45.

[5] J. C. Giarratano and G. Riley. Expert Systems. PWS Publishing Co., Boston, MA, USA, 3rd edition, 1998.

[6] K. Hartness. Robocode: Using games to teach artificial intelligence. Journal of Computing Sciences in Colleges, 2004.

[7] F. H. Johan, J. Kummeneje, and P. Scerri. Using simulated RoboCup to teach AI in undergraduate education. In Proceedings of the 7th Scandinavian Conference on AI, 2001.

[8] José Tomás Tocino et al. Gades Siege. http://code.google.com/p/gsiege, 2011.

[9] F. Klassner. A case study of LEGO Mindstorms' suitability for artificial intelligence and robotics courses at the college level. SIGCSE Bull., 34:8–12, February 2002.

[10] B. López, M. Montaner, and J. L. de la Rosa. Utilización de un simulador de fútbol para enseñar inteligencia artificial a ingenieros. In Actas de las Jornadas de Enseñanza Universitaria de la Informática, 2001.

[11] I. Noda, H. Matsubara, K. Hiraki, and I. Frank. Soccer server: A tool for research on multiagent systems. Applied Artificial Intelligence, 12(2-3):233–250, 1998.

[12] M. Palomo, F.-J. Martín-Mateos, and J.-A. Alonso. Rete algorithm applied to robotic soccer. Lecture Notes in Computer Science, 3643:571–576, 2005.

[13] P. Recio-Quijano, N. Sales-Montes, A. García-Domínguez, and M. Palomo-Duarte. Collaboration and competitiveness in project-based learning. In Proceedings of the Methods and Cases in Computing Education Workshop (MCCE), pages 8–14, 2010.

[14] P. M. P. Ribeiro, H. Simões, and M. Ferreira. Teaching artificial intelligence and logic programming in a competitive environment. Informatics in Education, 8(1):85–100, 2009.

[15] G. Riley. CLIPS home site. http://clipsrules.sourceforge.net, 2011.

[16] S. E. Siwek. Video Games in the 21st Century: The 2010 Report, 2010.

[17] P. Stone and G. K. December. Publications related to the RoboCup soccer simulation league. World, 29(3):866–871, 2007.

[18] S. A. Wallace and J. Margolis. Exploring the use of competitive programming: observations from the classroom. J. Comput. Sci. Coll., 23:33–39, December 2007.

[19] Wikipedia. Stratego. http://en.wikipedia.org/wiki/Stratego, 2012.
