Fighter Combat-Tactical Awareness Capability (FC-TAC) for use in Live, Virtual, and Constructive Training

Eric Watz, Lumir Research Institute, Inc.
2620 Q. Street, Bldg. 852
Wright Patterson AFB, OH 45433
937-938-4068
[email protected]

Margery J. Doyle, L-3 Communications Link Simulation and Training
2620 Q. Street, Bldg. 852
Wright Patterson AFB, OH 45433
937-938-4013
[email protected]

Keywords: Environment Abstraction, Observation Agent, Tactical Awareness, Pilot Behavior Modeling, Systems Integration, Training and Simulation, Situation Modeling.

ABSTRACT: In order to properly simulate behavior-based representations of real red and/or blue force operators through the use of models/agents acting as computer generated forces, the behavioral model must receive information about the world, including who exists, where they are, and relevant events. This includes not only what the actual hardware platform would receive or be capable of sensing, but also the information a human pilot could capitalize upon; information often obtained through communication channels or via the pilot's own sensory, perceptual, knowledge, and/or decision-making processes. For example, a pilot receives an air picture from the Air Battle Manager (ABM), which is derived from the ABM's assessment of tactical information. In live and virtual exercises, any additional information a pilot might receive is limited to what the systems on his/her aircraft can sense; such limitations are not imposed on a constructive model connected to the training network. That is, a behavior model of an aircraft with a maximum actual radar detection range of 15 miles could, without constraints representative of actual system capabilities, detect entities outside of its normal range of capability, greatly diminishing external validity, realism, and believability. As such, we are developing a proof-of-concept for a Fighter Combat-Tactical Awareness Capability. This approach involves a model-to-Distributed Interactive Simulation-Environment Abstraction-Tactical Observer Agent pipeline to accommodate realistic information sharing with model/system function components. The EA–TOA translates low-level Distributed Interactive Simulation information into objects and then uses filters, based on actual maneuver recognition and on hardware and human-in-the-loop oriented capability and limitation parameters, to create a Tactical Awareness Capability, limiting the information available to a model to the semantic/knowledge level of analysis. Herein we discuss the genesis of FC-TAC, realizable through a multi-step data and EA process; syntactic and semantic validation; the application's generalizability; and an actual proof-of-concept currently in use.

1. The Genesis of FC-TAC

The Fighter Combat-Tactical Awareness Capability (FC-TAC) was borne out of lessons learned from a multi-year effort involving collaboration between AFRL 711th HPW/RHAS and industry on a project called the "Not So Grand Challenge". The Not So Grand Challenge (NSGC) is an effort to establish a baseline of capabilities and gaps among existing best-of-breed computational, cognitive, behavioral, game AI, discrete event, and agent-based modeling approaches for use in the development of agile Warfighter training research tools and environments. The Phase I objective was to develop, integrate, and demonstrate agile modular approaches for realistic entity modeling based on rapidly changing Warfighter training environments. The goal of the project is to advance and enhance training and research methodologies and technologies within the Live Virtual Constructive (LVC) and Distributed Mission Operations (DMO) domain to address economic, resource, and training research challenges for the future [1].

1.1 NSGC Overview

While it is recognized that the most formidable adversary is a human Red Force Subject Matter Expert (SME), because SMEs readily provide proper and on occasion novel training situations and can rapidly adapt to novel situations themselves, whether offensive or defensive in nature, relying only on SME Red Force pilots to serve as adversaries has proven expensive and logistically difficult. Therefore, modeling Red Force Agent (RFA) adversaries within constructive forces generation systems has become common practice. Still, this approach has also proven to be expensive, and the development of these systems requires highly skilled resources. In addition, the pre-defined rule-based nature of such models often leads to inflexible instantiations of the threat profile when in use. To make matters worse, static rule-based methods generally contain weaknesses that can be "gamed" by operators in training who have learned how to exploit and defeat these weaknesses to win, potentially even leading to negative training effects. These problems exist even in state-of-the-art game AI technologies, hampering the quality of training human operators can receive [1, 2, 3].

Therefore, through collaboration with several industry teams including Charles River Analytics, CHI Systems, ALION, APTIMA, SoarTech, and Stottler-Henke, AFRL 711 HPW/RHAS initiated the NSGC project to integrate and assess the viability of using commercial off-the-shelf (COTS) and government off-the-shelf (GOTS) models to support LVC and DMO training and rehearsal with rapidly developed, accurate, and believable models. These models, also referred to as agents, are designed to mimic human behavior within an LVC or DMO training scenario. In addition, the models may potentially serve as new functional capabilities (i.e., agent, component, module, metric mechanism, process) for furthering the development of, or providing services within, an LVC DMO Learning Management System (LMS) [1, 2, 3].

1.2 NSGC Project Goals

Complex agent-agent and human-agent interactions in a simulated combat environment can produce unexpected outcomes. Therefore, NSGC needed to determine whether such an approach could also provide sufficient tractability and traceability to retain the capability to effectively evaluate training outcomes in an objective manner in an advanced research and development environment. Also, to preserve some anonymity of each team's proprietary architectures, software tools, and/or processes, the NSGC project leveraged a modular, distributed architecture [1, 2, 3].

As such, one of the first NSGC design decisions was to separate the "brain" from the "body," as illustrated in Figure 1. Doing so allowed behavioral modelers to concentrate on what they do best and the constructive forces system to model the physics of flight and the environment. By default, this approach created a fairly realistic bounded decision-action space.

Figure 1. NSGC Decision-Action Modularization Concept

In addition, because this was a first-round process and we needed to assess each type of relevant modeling capability at various levels of detail, in NSGC Phase I we chose to bound the problem space as much as possible and specifically focus on one small area of the system that provides adversarial red force stimuli (i.e., computer generated forces, or CGFs) for training. For purposes of initial integration and functional application, a key performance measure indicating success at this juncture was how well a simulation scenario unfolded, given the use of various black box architecture approaches to provide realistic stimulus-response adaptive and reactive behaviors [1, 2, 3].

The larger NSGC multiyear effort and approach serves to develop and assess the viability of implementing rapid adaptive models for use in LVC DMO environments. Our goals are to (a) identify/develop the methodologies and technologies needed for proper development, evaluation, and use of rapid behavior modeling and agent design, (b) define data parameter specifications, or minimum data thresholds, necessary for the modeling architecture to be informed, and then, through (c) validation and verification (V&V), evaluation, and/or experimentation, determine the viability and utility of these models in adaptive constructive training environments. Phases II and III will serve to develop, integrate, and demonstrate agile modular modeling approaches for realistic entity modeling to support optimal readiness against rapidly changing missions and training environments [6, 7].

1.3 NSGC Technical Overview

To enable Phase I of NSGC, AFRL provided the following components: a constructive threat environment system, a testbed in which to replay scenarios, a model-to-DIS (m2DIS) bridge, the Performance Evaluation and Tracking System (PETS), and several exemplar scenarios for the Phase I effort. The architecture of NSGC Phase I is shown in Figure 2 [6, 7].

Figure 2. NSGC Phase I Framework

The AFRL-owned Network Integrated Combat Environment (NICE) was used as the constructive threat environment system for Phase I. NICE is a government-owned, real-time, physics-based simulation capable of modeling friendly and adversarial forces for the purposes of DMO and LVC training. In Phase I the scenarios used consisted of only red and blue constructive players; future phases of the project will incorporate live-pilot-based virtual blue forces for additional realism against the red forces. NICE performs physics-based modeling of the environment and platforms as well as weapons and sensor operations. The system is compatible with standard data formats used to communicate between applications, outputting and receiving network data packets in accordance with the DIS standard as documented in IEEE 1278.1-1995 and IEEE 1278.1a-1998 [8, 9].

AFRL supplied the teams with a software bridge, known as m2DIS, to integrate their models into the DIS environment. In order to facilitate semantic interoperability with the NICE system, the m2DIS Application Programming Interface (API) provided common interfacing terminology which was used by the teams to both send and receive information to and from the DIS network; by default this ensured semantic interoperability at a partial level (publish), and made the m2DIS API the only mechanism by which modelers could leverage the NICE physics-based environment [3, 4, 6, 7]. Teams were also allowed to use their own middleware libraries to capture PDUs from the DIS network.
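To illustrate the pattern only (the actual m2DIS API is not reproduced in this paper, so every type and function name below is an illustrative assumption, with stub bodies standing in for the real PDU machinery), a minimal C++ sketch of such a bridge-style interface might look as follows; the point is that models publish commands and subscribe to entity state through one facade rather than crafting raw DIS PDUs themselves:

    #include <iostream>
    #include <vector>

    // Hypothetical sketch only: all names are illustrative assumptions,
    // not the actual m2DIS interface.
    struct EntityState {
        unsigned id = 0;
        double headingDeg = 0.0, speedKts = 0.0, altFt = 0.0;
    };

    class M2DisBridge {
    public:
        // Stub bodies stand in for the real PDU send/receive machinery.
        std::vector<EntityState> receiveEntityStates() {
            return {{2u, 45.0, 450.0, 25000.0}};
        }
        void commandHeading(unsigned playerId, double deg) {
            std::cout << "player " << playerId << " -> heading " << deg << "\n";
        }
    };

    int main() {
        M2DisBridge bridge;
        const unsigned ownId = 1;
        // Toy model step: point own ship at the first other entity's heading.
        for (const EntityState& e : bridge.receiveEntityStates())
            if (e.id != ownId) bridge.commandHeading(ownId, e.headingDeg);
    }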

The distributed nature of DIS, along with the physical separation of the behavior models from the NICE physics-based flight and environment models, allowed the NSGC teams to maintain separation of any proprietary information while remaining responsible for their own theoretical underpinnings, i.e., their own scientific validity and engineering standards, while still following the common protocols for development and output. Separating the behavior models from the physics-based environment limited behavior to what was allowed by NICE's player models. For example, if a model commanded a NICE player to change its heading from 90 to 270 degrees, the player did not "snap" (i.e., instantaneously change) to the new heading; rather, the NICE player attempted to make the heading change as quickly as possible based on the parameters of its aerodynamics and platform models.
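A minimal sketch of this rate-limited response, assuming a hypothetical 15 deg/s turn-rate limit (the real NICE aerodynamics models are far richer than a single turn-rate bound), is:

    #include <cmath>
    #include <iostream>

    // Sketch: a player turns toward a commanded heading at a bounded rate
    // instead of snapping to it. The turn-rate value is illustrative only.
    struct Player {
        double headingDeg;        // current heading
        double maxTurnRateDegSec; // aerodynamic turn-rate limit

        void update(double commandedDeg, double dtSec) {
            // Shortest signed angular difference in [-180, 180).
            double diff = std::fmod(commandedDeg - headingDeg + 540.0, 360.0) - 180.0;
            double maxStep = maxTurnRateDegSec * dtSec;
            if (std::fabs(diff) <= maxStep)
                headingDeg = commandedDeg;                     // finish the turn
            else
                headingDeg += (diff > 0 ? maxStep : -maxStep); // turn toward target
            headingDeg = std::fmod(headingDeg + 360.0, 360.0);
        }
    };

    int main() {
        Player p{90.0, 15.0};              // 15 deg/s is an assumed limit
        for (int t = 0; t < 14; ++t) p.update(270.0, 1.0);
        std::cout << p.headingDeg << "\n"; // reaches 270 only after ~12 s
    }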

The Phase I effort involved the teams creating models to fight against two scenarios: VID-6 and Sweep-2. For this first phase, the focus was on integrating external models into an LVC environment and working through the technical challenges. To facilitate this, scenarios with a low level of difficulty were selected for Phase I. The objective was to present red forces with a blue air picture of low to moderate complexity, something that could be overcome via basic coordination between red forces. As the project moves forward, the difficulty and complexity of the scenarios will be increased, which will impose additional high-level logic requirements on the models in order to control a larger number of red force players with increasingly coordinated movements among the red force groups.

The Visual Identifier 6 (VID-6) scenario, shown in Figure 3, was the first scenario used. This scenario was designed as a baseline integration target, i.e., the "Hello World" scenario. The blue forces were non-reactive in this scenario, meaning blue forces would not execute any tactical maneuvers or attempt to employ weapons on the red forces. The objective of the VID-6 scenario was to have red air perform a specific maneuver called a stern intercept. A stern (or "baseline") target intercept is designed to build lateral offset, exit the target aircraft's radar search volume, and close distance on the target while maintaining lateral and vertical turning room. The objective is to arrive within VID or Eyeball (ID via targeting pod) distance and fire on the target from the beam or stern Weapons Engagement Zone (WEZ). A typical stern intercept will result in the attacking aircraft arriving at the target's 6 o'clock at 6,000-9,000' range. Range, azimuth, and elevation should be considered as discrete components of the separation between red and blue air.

Figure 3. VID-6 scenario

The next scenario, Sweep-2, included reactive blue forces. Red forces initially present a range problem for blue forces; blue would typically attack this presentation by executing a Grind tactic (one element continuing hot to target the leading edge of the red formation while the other blue element turns cold to gain a position in trail). Figure 4 illustrates the Sweep-2 scenario.

Figure 4. Sweep-2 scenario

By executing the mirror Flank maneuver, the red forces reduce closure, decrease the blue force Weapon Engagement Zone (WEZ), and at minimum delay blue shots. If blue forces fire missiles from far enough away, the Flank maneuver may even defeat blue shots. By flanking, the red forces alter the tactical picture from a range problem to an azimuth problem. Red forces would then reduce range to the target and reduce maneuver room for the Vipers.

1.4 Challenges and Lessons Learned

One of the most significant up-front challenges in the NSGC project was interfacing with an unknown simulation and training environment. The DIS standard describes the structure of messages that are communicated on the wire, and it was decided early on that AFRL would develop a "bridge" component that would allow models to issue DIS commands. This component was realized as the model-to-DIS (m2DIS) library, and was successfully integrated and used by all the teams participating in NSGC. AFRL leveraged its internal engineering experience working with DIS to produce a library that generated PDUs in compliance with the DIS 1278.1 standard. The development team took advantage of being co-located with the NICE team and was able to perform in-house unit testing for DIS compliance, as well as for compliance with NICE, before the library was delivered to the NSGC teams. All teams were given a copy of this library for integration with their models. Using a common library such as m2DIS to issue commands to the DIS network eliminated any potential issues with malformed packets being received by NICE, and allowed the industry teams to focus on development of their cognitive, behavior-related models and effectively ignore the details of the underlying simulation architecture [7].

Other challenges included a lack of ready access to a testbed and to the systems being used. While NICE was used as the Phase I constructive entity generator, it was not distributed to the industry teams. This constraint required that the teams travel to the Dayton area and perform on-site integration. Realizing toward the end of the project that integration time was an issue, AFRL hosted a two-day integration workshop prior to the final demonstration. Going forward with future NSGC phases, it will be necessary to provide a robust virtual testbed that allows on-demand access to the NSGC systems and tools, and that continues to maintain separation and protect the individual teams' intellectual investments in the project.

2. Realizing domain knowledge

In order for a model of pilot behavior to invoke realistic behaviors, there needs to be a means for the system to communicate detailed domain knowledge to a model/agent, such as telling a red force player the current status of the air picture.

The realization of fighter combat domain knowledge began with a very straightforward objective: take a complex dynamic environment and translate the events (i.e., sequences of instance-based interactions), or slightly larger sequences of events that occur in real time, into a language or otherwise structured form of information that could be understood outside of the domain, e.g., something palatable to models or agents. The initial conceptual design began by recognizing that what constitutes an event likely correlates with changes in the DIS state space, and so postulated that, after first choosing what constitutes an event, we may want to quantize the event space. This idea was suggested because it had been shown that sequences of instance-based interactions create sequences of event signatures in the form of syntax, suggesting that if information were formatted in such a way that an agent could recognize matches of state space changes within a syntax, then a sequence of events/instances should "mean" something in particular within a domain taxonomy/ontology. A taxonomy is a formal representation of domain knowledge which uses a shared vocabulary to denote the properties of items (context; sequences of events, behaviors) [10].

The fighter combat domain taxonomy required for the development of an observer/recognition agent was developed through a process wherein engineers engaged AFRL's SMEs in a discussion of how to identify and categorize an air combat environment and/or an event. We began by looking at the smallest unit of measure, a single aircraft, then went on to define what it means when multiple aircraft are co-located (i.e., considered to be in a group), eliciting from the SMEs the rules of thumb used to define the conditions under which two aircraft are considered to be a group. At each step of the process, the team consulted with our in-house SMEs to ensure the rule sets being developed were tactically correct and conformed to how a human would think about these concepts while in flight. Having thus defined what a group is, and the criteria for when a player is considered part of a group, the team turned toward identifying and categorizing the various common arrangements of two groups in range and azimuth. The team then looked to define common arrangements of more than two groups, producing a set of definitions for the air picture such as 4+ group wall, 3-group vic, etc. After the team defined what a group was and what to call two or more groups in a particular arrangement, we defined maneuvers. Most of the maneuver definitions were taken from the unclassified F-16 flight manuals. The flight manuals provided definitions of when, among other things, to call a change in heading a check turn, a flank, a beam, or a drag maneuver [14].
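To make such a rule of thumb concrete, the following is a minimal sketch assuming an illustrative criterion (aircraft within some lateral distance and altitude band of one another belong to the same group); the actual SME-derived thresholds are not published here:

    #include <cmath>
    #include <iostream>

    // Minimal sketch of a group-membership rule of thumb. The thresholds
    // are illustrative assumptions, not the SME-derived values in the TOA.
    struct Aircraft { double xNm, yNm, altFt; };

    constexpr double kMaxLateralNm = 3.0;    // assumed grouping distance
    constexpr double kMaxAltDeltaFt = 5000;  // assumed altitude band

    bool sameGroup(const Aircraft& a, const Aircraft& b) {
        double lateral = std::hypot(a.xNm - b.xNm, a.yNm - b.yNm);
        return lateral <= kMaxLateralNm &&
               std::fabs(a.altFt - b.altFt) <= kMaxAltDeltaFt;
    }

    int main() {
        Aircraft lead{0, 0, 25000}, wing{1.5, 0.5, 24000}, bandit{40, 10, 30000};
        std::cout << sameGroup(lead, wing)   // 1: within both limits
                  << sameGroup(lead, bandit) // 0: too far to be one group
                  << "\n";
    }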

After defining the rules and criteria for groups and maneuvers, the team developed a software-based tool that encapsulated the set of rules and definitions into C++ algorithms. The resulting application was named the Tactical Observer Agent (TOA). Figure 5 presents an overview of the TOA architecture and its capabilities.

Figure 5. TOA overview

The TOA was developed to process data being broadcast over the DIS network, track all players on the network and their locations, and run the Group and Maneuver algorithms on all players. When a new player is detected on the network, the TOA processes it as follows:

- Store the EntityState for the player.
- Check whether the player meets group criteria within any existing group. If yes, the player is added to that group; if not, create a new group and add the player to it.
- Begin tracking the player's heading, orientation, and speed; the Maneuver Manager will monitor for changes in these values. If a heading change exceeds a pre-defined threshold, the Maneuver Manager will flag the entity as having started a maneuver.
- The maneuver is considered complete when either (a) heading/altitude/speed is no longer changing, or (b) the change in values is outside the rules of thumb for a particular maneuver.
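The C++ sketch below schematizes that processing loop; the names, the simple distance-based grouping test, and the 5-degree heading threshold are illustrative assumptions, not the TOA source:

    #include <cmath>
    #include <iostream>
    #include <map>

    // Schematic of the TOA new-player pipeline described above. Names,
    // the grouping test, and the threshold are illustrative assumptions
    // (heading wrap-around is ignored for brevity).
    struct EntityState { unsigned id; double xNm, yNm, headingDeg; };

    struct Tracker {
        std::map<unsigned, EntityState> last;  // stored entity states
        std::map<unsigned, int> groupOf;       // player -> group id
        std::map<unsigned, bool> maneuvering;
        int nextGroup = 0;

        void onEntityState(const EntityState& e) {
            auto it = last.find(e.id);
            if (it == last.end()) {            // new player detected
                int g = -1;                    // find a group it fits
                for (auto& [id, other] : last)
                    if (std::hypot(e.xNm - other.xNm, e.yNm - other.yNm) < 3.0)
                        g = groupOf[id];
                groupOf[e.id] = (g >= 0) ? g : nextGroup++;
            } else {
                // Maneuver Manager: flag a maneuver when the heading change
                // exceeds an assumed pre-defined threshold.
                double delta = std::fabs(e.headingDeg - it->second.headingDeg);
                if (delta > 5.0) maneuvering[e.id] = true;
                else if (maneuvering[e.id]) {  // values settled: complete
                    maneuvering[e.id] = false;
                    std::cout << "player " << e.id << " completed a maneuver\n";
                }
            }
            last[e.id] = e;                    // store latest state
        }
    };

    int main() {
        Tracker t;
        t.onEntityState({1, 0, 0, 90});
        t.onEntityState({1, 1, 0, 135});       // large heading change: start
        t.onEntityState({1, 2, 0, 135});       // settled: maneuver complete
    }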

While the TOA itself performs well in terms of providing information about the current mission, there exists a need for an Environment Abstraction (EA) layer within m2DIS (m2DIS-EA) in order to make the information being delivered to a model more realistic and therefore confined to actions and events that are possible in the real world.

3. Environment Abstraction

Models acting on unrealistic detection ranges, or having knowledge of enemy forces beyond what would be available in real-life situations, are destined to be perceived as unrealistic. Therefore, Red Force Agent model behaviors, recognition capabilities, and/or information being used for decision making must be based on the limitations of their hardware platforms and the environment. For example, a red force platform should be prevented from getting the Time, Space, Positional Information (TSPI) of a blue force player outside of the platform's radar contact range, even though this information may be readily available on the DIS network.

One approach being considered to improve the realism of data in future phases is to incorporate a realistic model of the AWACS radar scan pattern within the TOA. Currently, the TOA looks at all data on the DIS network and reports maneuvers as soon as they are detectable; however, this type of information throughput to a model is not representative of real-world pilot or platform capabilities and limitations, nor are the resultant behavior choices an agent may make if those choices are based on exaggerated insight into the current situation. The resultant behavior can be made more realistic by limiting the frequency at which the TOA reports information, based on the scan pattern of AWACS.
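A minimal sketch of that throttling idea follows, assuming a hypothetical 10-second scan revisit interval (a real AWACS scan pattern is considerably more involved):

    #include <iostream>

    // Sketch of rate-limiting TOA reports to an assumed radar revisit
    // interval. The 10-second period and names are illustrative only.
    class ScanGate {
        double periodSec, nextDue = 0.0;
    public:
        explicit ScanGate(double period) : periodSec(period) {}
        // Forward a report only when the simulated scan revisits the track.
        bool allow(double nowSec) {
            if (nowSec < nextDue) return false; // between sweeps: suppress
            nextDue = nowSec + periodSec;
            return true;
        }
    };

    int main() {
        ScanGate gate(10.0);                    // assumed revisit period
        for (double t = 0; t <= 30; t += 2)     // maneuver detected every 2 s
            if (gate.allow(t))
                std::cout << "report maneuver at t=" << t << "s\n"; // 0,10,20,30
    }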

Another option being considered is having m2DIS limit the available TSPI information to what is currently in the radar scan volume of the aircraft. If the aircraft does not have a player within its scan volume, then it will not get the precise location of the opposing player. If a player requests the heading, altitude, or airspeed of an opposing force player and that player is outside of the requestor's radar scope, m2DIS will indicate an 'invalid request' to let the calling application know that its request was out of scope.
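The sketch below illustrates that filter, with an assumed range and azimuth limit and an empty optional standing in for the 'invalid request' reply; all names and values are assumptions:

    #include <cmath>
    #include <iostream>
    #include <optional>

    // Sketch of the proposed m2DIS-EA scan-volume filter. The 15 nm range,
    // +/-60 degree azimuth limit, and all names are illustrative assumptions.
    struct Track { double rangeNm, azimuthDeg, headingDeg; };

    constexpr double kRadarRangeNm = 15.0;
    constexpr double kHalfScanDeg = 60.0;

    // Returns the target heading only if the target is inside the
    // requestor's scan volume; std::nullopt plays the 'invalid request' role.
    std::optional<double> requestHeading(const Track& target) {
        bool inVolume = target.rangeNm <= kRadarRangeNm &&
                        std::fabs(target.azimuthDeg) <= kHalfScanDeg;
        return inVolume ? std::optional<double>(target.headingDeg) : std::nullopt;
    }

    int main() {
        Track inside{10.0, 20.0, 270.0}, outside{40.0, 10.0, 90.0};
        if (auto h = requestHeading(inside)) std::cout << "heading " << *h << "\n";
        if (!requestHeading(outside)) std::cout << "invalid request\n";
    }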

We also need to consider the timeliness with which other elements of the fight, such as weapon detonation events, are reported through the m2DIS interface. In the current iteration, a weapon detonation event is reported through m2DIS at the time it occurs. This does not represent real-world conditions, however, because in a live fight neither the air battle manager (ABM) nor the pilots operating live or simulated aircraft will know about the effects of a weapon's detonation immediately after the weapon detonates, provided the detonation occurs beyond visual range of any operator. In a real-world scenario the datalink track for the targeted aircraft would persist for a short period of time, possibly up to 20-30 seconds, until the radar track on the AWACS scope ages out. At that point, the AWACS will cease to broadcast the datalink track for the now-deceased entity. By implementing this feature in the m2DIS-EA component, we will be providing the model with data that would be present in the real world, at the correct time when the data is known to all observers.
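A sketch of that age-out behavior, assuming an illustrative 25-second persistence value within the 20-30 second window mentioned above:

    #include <iostream>
    #include <map>

    // Sketch of delayed detonation reporting via track age-out. The
    // 25-second persistence value and all names are illustrative assumptions.
    struct TrackStore {
        std::map<unsigned, double> killedAt;  // entity id -> detonation time

        void onDetonation(unsigned id, double tSec) { killedAt[id] = tSec; }

        // The datalink still carries the track until the radar return ages
        // out, so models keep "seeing" the entity briefly after the kill.
        bool trackVisible(unsigned id, double nowSec, double ageOutSec = 25.0) {
            auto it = killedAt.find(id);
            return it == killedAt.end() || nowSec < it->second + ageOutSec;
        }
    };

    int main() {
        TrackStore store;
        store.onDetonation(7, 100.0);             // kill occurs at t=100 s
        std::cout << store.trackVisible(7, 110.0)  // 1: track persists
                  << store.trackVisible(7, 130.0)  // 0: aged out
                  << "\n";
    }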

Another option for tactical believability is to have red force players adhere to (a) a commit range and (b) their primary role and responsibility, the latter being more on the models' side than on m2DIS's. m2DIS would be responsible for enforcing the commit range, although this is something the model should technically be doing.

Maintaining realistic pilot behavior is paramount for a model's success, and with the application of an EA layer this becomes more feasible, because without access to the full set of information available on the DIS network, or "omni-data," a model is required to think more like a real pilot. The upcoming phase(s) of NSGC will include a functional EA capability within the m2DIS library.

Environment Abstraction can be and is realized through methods of observation and knowledge representation, knowledge management (structure as applied and filtered), and knowledge sharing, along with a mutual understanding of what that information means: 'that which exists,' a description of things in the world. Within this architecture, that description equates to the TOA capability. Abstracting further, to understand what it means to be a particular entity in a given domain and situation, roughly equates to the FC-TAC capability described herein. The goal is to achieve an account of reality, a shared understanding of the concepts within a domain, for purposes of developing a knowledge and information exchange architecture that can be used by most if not all non-static model types as well as humans; in other words, to make knowledge of a given domain computationally useful.

Two or more agents/systems can be integrated and made to work together. However, in order to interoperate, they must have a mutual understanding of what any given chunk of data or sequence of events means. That is, a mutual understanding of data at higher levels of abstraction, most often considered at three primary levels: syntactic, semantic, and pragmatic [6, 11]. The three levels of abstraction are represented in Figure 6.

Figure 6. Levels of abstraction

• Syntactic: data structures are exchanged between two systems without any real need to understand the meaning of the data. Integration is realized when data exchange standards are implemented.
• Semantic: integrated systems share a common understanding of the meaning of a chunk of data; data that has been transformed into knowledge (i.e., data with context).
• Pragmatic: integrated systems share a common context and sense of a common global purpose, while varied local goals may be implemented to meet a local or global purpose.

Environment abstraction is a form of activity, conduct, or process that involves representation, including the production of meaning; a process by which something is marked/demarcated. An object turns into a representation or sign when it possesses the capacity for generating an interpretant, i.e., something that makes an impression or leaves an effect of some kind upon the observer. A representation has the capacity to represent an object by virtue of its relationship with that object. An object is a person, place, or thing which affords an agent a match with local or global goals or a goal set. Note that signs/signals/events never stand alone and are always part of a larger sign-system context; hence the need for abstraction [12, 13].

In this case, a sequence of events is defined and qualifies as recognizable by way of the domain-specific syntax inherent in sequences of events that follow domain-specific rules. Syntax is defined as the structural relationship or composition of the events as they are understood by way of their sequential relation to other events. That is, the order of smaller actions, capabilities, and limitations which an interpretant (pilot/agent) either recognizes or communicates to another pilot/agent will take the form of an abstraction of the situation, based on classification and assignment of meaning according to domain-specific rules/heuristics. Note that the more uncertain one is about classifying and assigning meaning to an event, the more information is typically required to ensure understanding of what the situation means. In this case, the process of abstraction and representation of information requires that an agent support the process of making and assigning meaning to the situation. An agent is capable of observing, abstracting, and classifying information from the environment by using rule sets to notice and categorize events based on indication, designation, alikeness, or any other form of representation. That is, an agent who attends to the environment recognizes, classifies, and communicates it as such, in the form of meaningful information about state space changes. This is a set of interwoven processes typically divided into three branches, which in this case equate to observation, classification, and communication, and the use of:

Syntactics: Observing the relations among events in formal structures.

Semantics: Using domain-specific representations to associate and assign meaning to events, to that which they refer; representing domain-specific knowledge by relating the signs and signals inherent in the syntax of events to a particular structured sequence classified as meaningful.

Pragmatics: Providing meaningful, purpose-directed information about any given situation, domain knowledge, gathered intelligence, or any other information considered meaningful to an agent which interprets the representation; information filtered to be representative of real-world capabilities and limitations.

In this context, syntax refers to the rules by which models of pilots combine actions, signs, signals, and sequences of maneuvers to form complex meaning when abstracted through a "good enough match" with the sequence of events observed. Maneuvers and actions at one level become the basis for abstractions at the subsequent level. So, we propose that the linguistic levels of interoperability can both serve as a guideline for an EA capability and be used to guide information exchange and, if need be, the maintenance of a stable mutual understanding across agents. On the other hand, what purpose any given chunk of information affords the receiving agent is another matter entirely.

In the case of syntactic and semantic validation, syntax is considered the rules for specifying the correct action, event, and/or maneuver order, while semantics is considered the set of rules used for assigning meaning to an action/event in context, or to a series of events/maneuvers, by considering the order in which they occurred. If actions or signals that occur during a string of maneuvers are not in the correct order, there would be insufficient syntactic cues for an agent to recognize and match against meaningful information to be shared; this would render the sequence of maneuvers invalid for use by an abstraction process. However, if the sequence of events is rendered valid but later found to be semantically invalid, this could be due to the consideration of any number of incorrect parameters in context during the next level of the abstraction process. Therefore, context that only validates that the syntax of events is correct may not be a sufficient test of semantic validity. That is, it is possible to have the correct actions in the correct order at the local level of goals and interactions, but if, for example, the geo-location information is incorrect on even one entity, we may still not end up with a valid set of maneuvers from which to assign the correct meaning; a meaning that is likely for the purpose of establishing, communicating, and acting on a set of shared global goals, as they may pertain to an overall mission plan.

Case in point: when trying to relate to someone whom you may not know very well over a matter of seemingly mutual interest, we sometimes go to the trouble of comparing notes as they pertain to the who, what, when, where, and why (timeline, sequence of events, various parameters, and/or assumptions) of a situation. All this is done in order to determine whether the two people involved in the conversation can agree on a mutual understanding as it pertains to the facts being compared. Later, one individual may discover that, due to only one or two seemingly minor mismatched points, the conversation, and possibly any later assumptions built upon this earlier process, along with any perceived shared understanding, is invalid, either semantically, syntactically, or on both levels of abstraction. This is why not only an observation capability for tactical events is needed, but also a Tactical Awareness Capability that can assign meaning to observations. First, this capability is necessary to filter information through parameters based on the limitations and capabilities of real-world platforms, pilots, the mission, and/or training maneuvers considered appropriate to enact or meaningful when enacted, and second, to ensure valid information is derived from the abstraction process and communicated to the proper model or agent. In the case of NSGC, at least for now, how that information is then used, acted upon, and validated, and the resulting behaviors, will remain assigned to each of the individual models, currently treated as modularized black-box components.

4. FC-TAC

After Phase I completed, it was necessary to determine the remaining representative data needed by the modeling architectures, i.e., the principal types of data, domain knowledge, heuristics, rules of thumb, etc. Then, to be able to support knowledge capture and manage knowledge transfer at various levels of abstraction for model use, we need to determine how features and tokens are defined and used in various types of modeling approaches. That is, to support many types of model/agent development, integration, and use, we also need to define and create a knowledge-to-model structure to support various types of knowledge representation at various levels of abstraction. A primary concern in this effort is how to design a structure that provides as much coverage as possible across the symbolic/sub-symbolic modeling continuum. For now, we have decided to concentrate on the knowledge level of analysis to address what a knowledge-to-model abstracted structure should look like, and how it should function, for the purpose of transferring domain-specific information to the models [15]. The m2DIS + EA + FC-TAC architecture is shown in Figure 7.

Figure 7. NSGC Phase II+ architecture

What we do know is that, currently, an agent or model used in LVC DMO training must be able to take information that is seemingly non-meaningful (primitive or raw) as it is published via DIS and transform it into meaningful, actionable, higher-level state representations. Then, to take an action in the LVC DMO environment, the agent or model must translate pertinent action-set information back down to a series of primitive, simulation-specific network calls by way of m2DIS. Details of the specific commands being communicated on the simulation network are abstracted out at the m2DIS layer.

There exists a need for a shared domain model that includes descriptions of red and blue force behaviors. Some examples of this shown in the first phase of NSGC include relative angles, relative distances, crossing angles, and formations. For instance, members of a group performing the same action at the same time are said to be maneuvering, either to achieve a specific objective or as part of a larger tactic. Typical tactics within the air combat domain include: prosecuting a local numerical advantage, exploiting opposing force range to bull's-eye and/or a defended point, building formation awareness, testing opposing force radar search, influencing turn direction, and splitting defenses. The opponent's maneuvering can be monitored to determine the intent of their actions. Understanding the objectives of an opponent is key to realistically modeling pilot behavior or responding properly to a pilot's behavior. For example, an opposing force player may be co-located with other players, which in domain terms would be considered a group. If the group then breaks up into two sub-groups, with each taking an approximate but opposing 45-degree heading turn from their original heading, this is known as a flank maneuver. The recognition of a flank maneuver occurring, its intended effects on the adversarial force, and the own-force tactics to counter or delay the intended effects are all components of FC-TAC.
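To ground the flank example, the sketch below checks a hypothetical recognition rule: the two sub-groups' new headings diverge from the original group heading by roughly 45 degrees in opposite directions. The tolerance and names are assumptions, not the TOA's actual rule set:

    #include <cmath>
    #include <iostream>

    // Hypothetical flank-recognition rule: two sub-groups turn roughly
    // 45 degrees in opposite directions off the original group heading.
    // The 10-degree tolerance is an illustrative assumption.
    double signedDelta(double fromDeg, double toDeg) {
        return std::fmod(toDeg - fromDeg + 540.0, 360.0) - 180.0; // [-180, 180)
    }

    bool isFlank(double originalDeg, double subA_Deg, double subB_Deg,
                 double tolDeg = 10.0) {
        double dA = signedDelta(originalDeg, subA_Deg);
        double dB = signedDelta(originalDeg, subB_Deg);
        bool opposite = (dA > 0) != (dB > 0);      // opposing turn directions
        bool about45 = std::fabs(std::fabs(dA) - 45.0) <= tolDeg &&
                       std::fabs(std::fabs(dB) - 45.0) <= tolDeg;
        return opposite && about45;
    }

    int main() {
        // Group originally heading 180; sub-groups roll out on 137 and 224.
        std::cout << isFlank(180.0, 137.0, 224.0)  // 1: recognized as a flank
                  << isFlank(180.0, 180.0, 224.0)  // 0: only one element turned
                  << "\n";
    }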

5. TOA and FC-TAC use cases

While providing situation awareness to models is one of the use cases for FC-TAC, we are also using the information coming from FC-TAC to feed into a first-order Learning Management System (LMS). The exact needs of the LMS differ from those of NSGC; whereas for NSGC the TOA is used to monitor a scenario in real time and provide tactical situation information about the scenario, for the LMS we use the TOA to calculate metadata about a scenario in real time. The TOA capabilities can be applied to a new scenario and, with the addition of a few new algorithms, be used to generate metadata based on groups, maneuvers, and their relative locations within the battle space [16].

Going forward, the TOA capabilities will be used to rapidly index and expand the library of available dynamic scenarios that will serve as the foundation for an Expert Diagnostician module. The Expert Diagnostician is a technology that will provide support to the Instructor Pilot (IP) when choosing among the various scenarios that could be applied to the training simulation next [4, 16]. Simply put, use of the TOA in the LMS is making dynamic scenario generation possible. The TOA generates metadata tags for new scenarios, as well as card-catalog-type information concerning and denoting the who, what, when, and where of event frames within a scenario. This metadata tagging process also serves to index the scenario within the LMS scenario library database. It also provides for the use of taxonomic filters and complexity scoring information, effectively categorizing and defining new or novel scenarios. The latter two capabilities, taxonomic filters and complexity score indexing, are vital to the LMS Expert Diagnostician logic and the algorithm used to recommend scenarios for the Instructor Pilot to use next.
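As a rough illustration of what such a scenario metadata record might contain (the field names and the toy complexity score are hypothetical, not the LMS schema):

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical scenario metadata record for LMS indexing. Field names
    // and the complexity score are illustrative assumptions, not the schema.
    struct ScenarioMetadata {
        std::string name;
        int groupCount = 0;
        std::vector<std::string> maneuvers;  // e.g., tags emitted by the TOA
        // Toy score: more groups and maneuvers imply a harder scenario.
        double complexity() const {
            return groupCount + 0.5 * static_cast<double>(maneuvers.size());
        }
    };

    int main() {
        ScenarioMetadata meta{"Sweep-2", 2, {"flank", "grind"}};
        std::cout << meta.name << " complexity " << meta.complexity() << "\n";
    }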

Within the TOA Fighter Combat-Tactical Awareness Capability (FC-TAC) development process, we proposed a method for producing, understanding, and sharing modularized, quantized domain knowledge, information, and tactical awareness through an abstraction structure that facilitates and provides for the rapid specification of new DMO agents at various levels of abstraction, and that provides a method for reusing agent functions, as well as many of the applied development and testing methods used in this process, across a range of DMO environments/systems.

Finally, formalizing the output of our current approach into a protocol or interface will afford the modeling and simulation community a rapid agent or model development capability that meets the base requisites and promotes the portability and reusability of rapidly developed DMO training agents and models.

6. Looking Forward

The NSGC project is an ongoing effort at RHAS. We have successfully completed an initial first phase involving industry teams and RHAS. We are currently collecting lessons learned and building out additional infrastructure to support future phases of the project.

An Environment Abstraction capability will be included in future phases of NSGC. EA offers a way to limit the amount of information available to a cognitive/behavioral/computational model; taking away a model's ability to access the full set of information on the network is thus warranted, better ensuring that models work with the set of information available to a real pilot and increasing the likelihood that models will behave, or be perceived to behave, in a realistic manner, and in some ways maybe even think like a pilot when it comes to observation, abstraction, the assignment of meaning, and communication with other agents about what may be meaningful.

The TOA will also be included in the next phases of NSGC. AFRL's Tactical Observer Agent provides the key events, situations, and contexts within an air combat training scenario that represent decision points for a pilot. Through a SME-focused knowledge elicitation process, the NSGC team was able to determine the rule sets that defined key situations, events, and contexts, and transform those into algorithms within the TOA. The capabilities of the m2DIS library are also being expanded for Phase II and beyond. Additional APIs will afford models a finer grain of control over NICE models, high-level awareness of the player's current situation, and integration with the TOA to provide m2DIS users with knowledge of the opposing force's current tactics and intent. In addition, we are exploring the integration of m2DIS with other threat environment generators such as BigTac. m2DIS will continue to abstract out the details of how the underlying simulation operates, and at the same time offer a robust, high-level API to facilitate the integration and interoperability of various modeling capabilities for use in the LVC DMO domain.

Given awareness, recognition, and use of context (i.e., the previous set of events), it may be possible to predict the likely next few events that will follow. Through a process of discovery, the AFRL NSGC team also realized that this particular conceptual design may even provide information at the right level of granularity to other systems and/or functions that rely on state space recognition, as well as to models. Then, if afforded a large database of such events, using virtually the same type of observer agent, it may be possible to use data mining techniques such as agent-based models to identify a small set of conditions considered meaningful. It may even be possible to elicit information about expert knowledge, tactics, and learning strategy over a novice-expert continuum. That is to say, something akin to an automated domain-based knowledge elicitation data mining tool, one that could also offer a manual query interface, could be used to inform other agents/models with a well-informed hypothesis about what a player or group of players in the environment may intend to do next.

In the future, a system consisting of real-time adaptations performed using an intelligent agent framework capable of responding and adapting to the warfighter's actions during an LVC DMO training exercise is possible. We envision the development and use of an off-line Tactical Observer Agent executing algorithms to effectively search through the database space of collected, structured, and stored adversary behaviors, exploiting information about the 'adversary' to expose weaknesses in Blue Force warfighter (individual and team) mission-related tactics and learning strategies when training against a warfighter model. Both expert and non-expert warfighter strategies and tactics could be extracted from recordings of simulation events or from the operational environment, i.e., from real live adversaries or live engagements.

7. References

[1]. Doyle, M.J., Portrey, A.M., Mittal, S., Watz, E., & Bennett, W., Jr. (2014). Not So Grand Challenge: Are Current Modeling Architectures Viable for Rapid Behavior Modeling? (AFRL-RH-OH-2014-xxxx). Dayton, OH: Air Force Research Laboratory, Human Effectiveness Directorate, Warfighter Readiness Research Division RHAS.

[2]. Doyle, M.J., & Portrey, A.M. (2011). "Are Current Modeling Architectures Viable for Rapid Human Behavior Modeling?" Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC). Orlando, FL.

[3]. Doyle, M.J., & Portrey, A.M. (2014). "Are Current Modeling Architectures Viable for Rapid Human Behavior Modeling?" Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC). Orlando, FL.

[5]. Mittal, S., Doyle, M.J., & Watz, E. (2013). "Detecting Intelligent Behavior with Environment Abstraction in Complex Air Combat System." IEEE Systems Conference. Orlando, FL.

[6]. Mittal, S., Doyle, M.J., & Portrey, A. (2014). Human-in-the-Loop in System of Systems (SoS) Modeling and Simulation: Applications to Live, Virtual and Constructive (LVC) Distributed Mission Operations (DMO) Training. In A. Tolk (Ed.), Handbook on System of Systems Engineering. John Wiley & Sons.

[7]. Watz, E. (2012). Interface Design Document for the Not So Grand Challenge (NSGC) Project, Revision C V4, Sep. 28. Dayton, OH: Air Force Research Laboratory, Human Effectiveness Directorate, Warfighter Readiness Research Division.

[8]. Institute of Electrical & Electronics Engineers, Inc. (September 1995). IEEE Standard for Distributed Interactive Simulation – Application Protocols [IEEE Std. 1278.1-1995]. New York: IEEE Computer Society.

[9]. Institute of Electrical & Electronics Engineers, Inc. (18 August 1998). IEEE Standard for Distributed Interactive Simulation – Application Protocols [IEEE Std. 1278.1a-1998]. New York: IEEE Computer Society.

[10]. Doyle, M., & Kalish, M. (2004). Stigmergy: Indirect Communication in Multiple Mobile Autonomous Agents. In M. Pechoucek & A. Tate (Eds.), Knowledge Systems for Coalition Operations 2004. Prague, Czech Republic: Czech Technical University Press, October 2004.

[11]. Wang, W.G., Tolk, A., & Wang, W.P. (2009). The Levels of Conceptual Interoperability Model: Applying Systems Engineering Principles to M&S. In Proceedings of the Spring Simulation Multiconference (SpringSim '09). San Diego, CA: Society for Computer Simulation International.

[12]. Chandler, D. (2004). Semiotics for Beginners. Oxford, U.K.: Routledge. Retrieved November 9, 2004, from http://www.aber.ac.uk/~dgc/semiotic.htm

[13]. Peirce, C.S. (1998). "What Is a Sign?" In The Essential Peirce: Selected Philosophical Writings, Vol. 2 (1893-1913), pp. 483-491. Ed. by the Peirce Edition Project, Nathan Houser. Bloomington: Indiana University Press.

[14]. Woodward, B. (1990). Knowledge Engineering at the Front-End: Defining the Domain. Knowledge Acquisition, 2(1), 73-94.

[15]. Doyle, M.J., Watz, E., Harris, J., & Bennett, W., Jr. (2014). Knowledge-Level Abstraction Structure for Rapid Agent Development. Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC), Nov.-Dec. 2014. Orlando, FL.

[16]. Tannehill, B.R., Watz, E.A., & Wade, A.N. (2014). "Developing a Competency-Based Expert Diagnostician Capability for LVC/DMO Training." Paper submitted for presentation at the Fall Simulation Interoperability Workshop, Orlando, FL, September 2014.

[17]. Tannehill, B.R., Watz, E.A., & Portrey, A.M. (2014). "Developing a Learning Management System for LVC and DMO Environments." Paper submitted for presentation at the Fall Simulation Interoperability Workshop, Orlando, FL, September 2014.

Author Biographies

ERIC WATZ is a Software Engineer with Lumir Research Institute supporting the Air Force Research Laboratory, Warfighter Readiness Research Division (711 HPW/RHA) at Wright-Patterson Air Force Base, OH. He holds an M.S. in Computer Information Systems from the University of Phoenix, and a B.S. in Computer Science from Arizona State University. He has extensive experience developing military simulation and training systems for DMO and LVC environments.

MARGERY J. DOYLE is a Research Consultant, Cognitive Systems Research Scientist and Engineer with L-3 Communications Link Simulation and Training supporting the Air Force Research Laboratory 711 HPW/RHA Warfighter Readiness Research Division at Wright-Patterson Air Force Base, OH. Margery leads the Not-So-Grand Challenge to support the integration, validation, and use of cognitive, behavioral, and/or computationally based models/agents within a modular architecture for adaptive LVC DMO training environments. She earned her M.A. in Experimental Psychology with a certificate in Cognitive Psychology from the University of West Florida in 2007. While attending UWF, Margery worked with the Florida Institute for Human and Machine Cognition, making significant sustaining contributions to the field of Augmented Cognition through work on a seminal DARPA challenge entitled AUGCOG. In addition, Margery completed work towards a PhD in Cognitive Science at the University of Louisiana-Lafayette Institute for Cognitive Science. Recently she co-edited a special edition of Cognitive Systems Research focusing on the properties of distributed agency, stigmergy, and emergence in complex adaptive systems.