
Achieving Warfighting Lethality via System and Semantic System-of-Systems Model-Based Engineering





Warfighting capabilities are increasingly dependent on the speed, safety, and maintainability of complex systems integrations involving diverse platforms. System of Systems (SoS) integration is inherently complex and expensive, and it is becoming more so due to the combinatorial nature of the integration effort, a trend further exacerbated by the increasing speed of technology advancement.


ASN RD&A Secretary Geurts has stated that “Velocity is its own competitive advantage.” Unfortunately, with traditional methods of SoS integration, velocity is not readily achievable.


Traditional integration methodologies have been successful in handling the lower levels of interoperability that address transport and syntax of data. The DevSecOps initiative, tooling, and rigorous engineering practices focus on efficiencies related to what the paper refers to as the Build and Deploy aspects of integration, but they defer issues of integration relationships and composability. When an integration by commonality approach is consistently applied, a traditional approach is sufficient. However, when commonality is not suitable, semantic connection and composition of behavior must be explicitly addressed through model-based, machine-actionable documentation.


Beyond Build and Deploy, this paper proposes the interoperability paradigms Relating and Composing. Through these paradigms, the paper examines more advanced interoperability concepts. These approaches facilitate faster, safer, and more scalable integrations through semantic, model-enabled automation. The rigorous engineering, testing, and performance necessary in the defense domain requires more attention to these latter two concepts, and, in fact, future innovations in defense demand it.


Throughout the paper, the three paradigms are correlated to the Levels of Conceptual Interoperability Model (At what level of interoperability is one operating?) and, where applicable, the Interface Documentation Maturity Levels (How does one achieve this level of interoperability?). Through these examinations, the paper explores state-of-the-art methods for achieving integration at velocity.


INTRODUCTION

In order to realize velocity at scale among our ever-advancing warfighting Systems of Systems (SoS), the practice of free-text, prosaic documentation must be replaced by methodologies that enable machine-readable, machine-exploitable documentation. This is neither science fiction nor artificial intelligence: by clearly and unambiguously documenting systems, it is possible to automate the process of integration.


As enemies continue to adapt to the US’s advancing technology, so must the US adapt to their advancements. “Out-spending” other sovereign nations is no longer affordable. The industry must identify and implement an approach to integrate systems faster and more accurately.


When discussing advancements, the difference between warfighting technologies and the tools that enable those technologies might seem obvious, yet the two are often confused. Consider the following statement from the NAVWAR requirements management lead upon completion of NAVWAR’s first Digital Twin on the USS Abraham Lincoln.


“Modeling complex systems in a digital environment is like working on a giant puzzle, where multiple people are working separate sections of the puzzle. It is not until the people begin to communicate that they are able to put their sections together to reveal the final image. These systems are like separate sections of the puzzle, and in the past we have waited until fielding to put the puzzle pieces together, but with digital engineering we are putting the pieces together before fielding, allowing us to address issues before we deliver the system to the warfighter” (Navy, 2019).


The advancement highlighted here is the tooling that enabled the systematic development and sharing of information during the Build and Deploy process. The tooling is automated, improving development and testing processes that enable increased value in the warfighting technology. That said, the warfighting technology itself is not fundamentally changed through this process.


In this paper, tooling advancements are secondary to the advancement of the actual warfighting technologies and the management of those technologies. Automating integration, while achieving higher levels of interoperability, fundamentally changes not only the way in which systems are designed and developed but also how they are deployed and managed.


The benefits of digital transformation are key, as are the related methodology and tooling advancements. These advancements alone, however, are not enough. The rigorous engineering, testing and performance required in this domain necessitates automation, not only for sharing of documentation, but in the actual integration and testing process. Leveraging the successes in the digital transformation strategy, the industry must also push for innovations in the interoperability of the systems developed with these tools.

What is needed is a new way to more rapidly adapt existing systems to meet changing mission requirements and to introduce new capabilities into pertinent systems. Moving current process and design techniques into digital workflows and model-based representations alone will not fully address the need for flexibility. Too often, models codify a specific implementation and a specific system’s design constraints, dramatically limiting their adaptability to future needs.


The ‘new way’ is not some new protocol, communications standard, hardware fabric, functional allocation, or even new modeling standards and representations. These are necessary but not sufficient. The ‘new way’ involves leveraging Model Based Systems Engineering (MBSE) and DevSecOps but requires a very different approach to integration which explicitly defines the semantic relationships and the behavioral composition in the design of the architecture. Ongoing initiatives should be leveraged while using advanced methodologies to ensure that data models are fit for machine logic and processing consumption. Such models will contain the right sets of related and linked information that inform optimization and algorithmic approaches to procedurally generate integration software and infrastructure.


INTEROPERABILITY AND INTERFACE DOCUMENTATION

In order to constructively examine the subject of systems integration in relation to current and future efforts, one must first establish a baseline against which the relevant topics may be referenced. It is particularly helpful to discuss integration in relation to two key concepts: the Levels of Conceptual Interoperability Model (Tolk, 2004), which encompasses five (or six, by some sources) Levels of Interoperability (LOI), and the Interface Documentation Maturity Levels (IDML), which define seven levels and suggest additional levels of maturity beyond the first seven (Hand, Lombardi, Hunt, & Allport, 2018).


Although various other interoperability models exist, the Levels of Conceptual Interoperability Model (Figure 1) was developed with the intention of bridging technical and conceptual design. The greatest benefits of interoperability, including scalability and adaptability, are only achieved at the higher levels of interoperability requiring conceptual design.




Figure 1: Levels of Conceptual Interoperability adapted from “Composable Mission Spaces and M&S Repositories - Applicability of Open Standards” (Tolk, 2004)


The IDML is an additive maturity model progressing towards the most advanced practices in interface documentation (Figure 2). The IDML is based on the premise that, at a fundamental level, it is the rigor and specificity with which interfaces are documented that enables an architecture’s interfaces to be managed, understood, shared, and extended. Since interface documentation is so key to integration, the management and scalability of that documentation provides a litmus test for categorically identifying levels of interoperability.



Figure 2: Interface Documentation Maturity Levels adapted from “IDML: An Introduction” (Hand, Lombardi, Hunt, & Allport, 2018)


While the Levels of Conceptual Interoperability Model provides a context for what level of interoperability is being discussed, the IDML provides a context for how, or through what methods, LOIs 1-4 may be successfully achieved. The IDML does not, however, address through what methods Conceptual Interoperability (LOI 5) may be realized. Such a framework is yet to be developed and could provide significant benefit to the use, extent, and maintenance of future interoperability initiatives.


BUILDING AND DEPLOYING

Today there is significant momentum behind the “Build and Deploy” paradigm. In many DevSecOps initiatives, the focus is centered on improved efficacy and efficiency of hardware and software design, development, testing, and deployment. Many of the programs instituted as a part of DevSecOps are both material and beneficial to any integration project, including the adaptability and efficiency of the DevSecOps Lifecycle model, process and communication automation, and the integration of security throughout the lifecycle (DoD CIO, 2019).


When it comes to integration, however, the DevSecOps Reference Design defines Continuous Integration as going one step further than Continuous Build “…with more automated tests and security scans. Any test or security activities that require human intervention can be managed by separate process flows.” The practice’s goals of reduced mean-time to operation, increased deployment frequency, and updates “at the speed of operation” address integration only through the automated testing of that integration. Neither automated integration itself nor automated maintenance of those integrations is addressed. In fact, the entirety of what content (syntax, semantics, behaviors) to integrate and how to perform those integrations prior to testing is left to the integrator (DoD CIO, 2019).


When hardware and software need to be connected, these connections are facilitated through Hardware and Software Architectures, and the connections become a part of the Build and Deploy process. Although there are a great many initiatives right now to improve Build and Deploy, when it comes to the integration of the relevant deployed systems, traditional methods of integration prevail. As such, only the lower levels of interoperability are reached.


Automating the Build and Deploy aspects of software development reaps a great many benefits; however, it simply does not address integration connections or the composition and orchestration of behavior.


Challenges Addressed

Within Build and Deploy, engineers and integrators tend to focus on the integration of hardware, software, and sub-component systems as a part of the software or systems development lifecycle. Significant focus is placed on how to turn inputs and outputs into messages that can translate needed information from one system to another across the wire. Integrators are very adept at reviewing message requirements and Interface Description Documents (IDDs) and writing code that makes system A talk to system B. Of course, larger systems integrations require more and more integrators, and that only addresses the initial integration, not the testing or maintenance of these mission-critical systems.


One of the more significant challenges encountered in basic integration is signal bridging: connecting the underlying signals systems use for communication. Sometimes these signals are sophisticated protocols that provide reliable delivery of information; other times they are simple serial connections with relatively little bandwidth. However, there is no guarantee that systems will be able to communicate at this level without some sophisticated bridging or adaptation. The challenge of signal bridging and encoding is automatable in current tool chains.


A second challenge addressed by Build and Deploy is that of data representation and encoding. Once the data is identified, there is no guarantee that it will always be encoded on-the-wire using the same rules across all systems. If there are bandwidth constraints, the data might be compressed or otherwise pre-processed. If the system uses a different hardware architecture, the data might naturally appear using a different byte-order.
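For illustration, consider a minimal sketch (in C++, with hypothetical field values) of why byte-order must be part of the interface agreement: a 32-bit field written in the sender's native order can be misread by a receiver with different endianness unless both sides encode to an agreed network order.

// Minimal sketch: why on-the-wire byte order must be agreed upon explicitly.
#include <cstdint>
#include <iostream>

// Encode a 32-bit value into big-endian ("network order") bytes.
void encode_be32(uint32_t value, uint8_t out[4]) {
    out[0] = static_cast<uint8_t>(value >> 24);
    out[1] = static_cast<uint8_t>(value >> 16);
    out[2] = static_cast<uint8_t>(value >> 8);
    out[3] = static_cast<uint8_t>(value);
}

// Decode big-endian bytes back into a 32-bit value, regardless of host order.
uint32_t decode_be32(const uint8_t in[4]) {
    return (static_cast<uint32_t>(in[0]) << 24) |
           (static_cast<uint32_t>(in[1]) << 16) |
           (static_cast<uint32_t>(in[2]) << 8)  |
            static_cast<uint32_t>(in[3]);
}

int main() {
    uint8_t wire[4];
    encode_be32(35000, wire);                           // e.g., an altitude in feet
    std::cout << "decoded: " << decode_be32(wire) << "\n";  // prints 35000 on any host
    return 0;
}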


How is Interoperability Achieved?

In terms of the LCIM, what levels of interoperability are generally involved in Build and Deploy? Most often, only the lower levels are addressed: LOI 1, Technical, and LOI 2, Syntactical. At the Technical level, physical connectivity is established, enabling the exchange of bits and bytes. At the Syntactical level, data may be exchanged leveraging standardized formats. Both levels of interoperability are foundational, as more complex interoperability must rely on these fundamentals for success. In simple, relatively static systems, these basic levels are not only foundational but also sufficient.


Lower LOIs are generally achieved through very traditional integration methods – what would be referred to as “early levels,” or IDML 1-3. IDML 1, source code, is, of course, always a part of any development initiative. At IDML 2, interface documentation, such as IDDs, simply captures the content of the interface without machine-readable syntax or semantics. Such documentation must be read and interpreted by engineers. This method is subject to interpretive bias and human error, and it does not allow for any of the benefits of higher levels of documentation, including integration automation or scalability.


IDML 3 introduces the Electronic Data Definition, which includes many common types of electronic documentation such as interface definition language (IDL), extensible markup language (XML), XML schema definition (XSD), and other syntax-based documentation. IDML 3 adds the rigor of syntax, enabling the interface documentation to be machine-readable. This provides the opportunity to leverage a certain limited level of automated processing. Such documentation may be used for in-memory type representations as well as encapsulation specifications for signal exchange.


As programmatic and technical complexity increases, interface management becomes more challenging. More rigorous and specific documentation of interfaces is required, and it becomes necessary to record revisions and updates and to trace changes. Certain levels of automatability become desirable if not necessary.


It is here that the “Data Model” comes into play. More often than not, a data model is built based on the required message sets. The entities in this type of data model directly mirror the interfaces (IDML 4). In this way, programs can share information where message sets and model components are reusable. The model’s adaptability and reuse, however, are significantly limited. Because the model is designed around the interface patterns, when an interface changes, the model must be rebuilt to continue to mirror its corresponding interface.
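The brittleness of an interface-mirroring model can be illustrated with a notional sketch. The message and entity names below are hypothetical; the point is that the “model” entity is a field-for-field copy of the message, so any change to the message forces a corresponding change to the model.

// Notional sketch of an IDML 4 style data model whose entities mirror the message set.
#include <cstdint>

// The wire message, as defined in an IDD (hypothetical).
struct TrackReportMsg {
    uint32_t track_id;
    double   lat_deg;
    double   lon_deg;
    double   speed_kts;
};

// The "data model" entity is a field-for-field copy of the message.
// If TrackReportMsg is ever split, renamed, or re-unitized, TrackEntity
// must be rebuilt to keep mirroring it; the model has no life of its own.
struct TrackEntity {
    uint32_t track_id;
    double   lat_deg;
    double   lon_deg;
    double   speed_kts;
};

// "Integration" is then a one-to-one copy; reuse outside this message set is limited.
TrackEntity toEntity(const TrackReportMsg& m) {
    return TrackEntity{m.track_id, m.lat_deg, m.lon_deg, m.speed_kts};
}

int main() {
    TrackReportMsg msg{7, 36.85, -75.98, 420.0};
    TrackEntity e = toEntity(msg);
    return e.track_id == msg.track_id ? 0 : 1;
}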


An illustrative interpretation of the alignment among the Integration Paradigms, the Levels of Conceptual Interoperability, and the Interface Documentation Maturity Levels is provided in Figure 3. Today’s Build and Deploy efforts have laid the foundation for the extension of additional, and even novel, warfighting value related to automatable, scalable interoperability. It should be noted that the alignments are approximate and based on predominant methods and applications.





The models common to the Build and Deploy paradigm limit the potential for interoperability. In fact, the interoperability achievable across an integration is bounded not only by the functionality of a given interface but also by the functionality of every other interface interacting with it. In other words, the level of interoperability between any two interfaces is limited by the interface with the lowest IDML.


This does not automatically discount the appropriateness of such models. In small, relatively static environments, such interface documentation often suffices. This documentation meets the needs of the program as management is simple and scalability is not desired.


Technologies continue to progress, driven in part by many of today’s efforts to improve the Build and Deploy process. For instance, the Department of Defense’s Digital Transformation Strategy and movement toward Model Based Systems Engineering (MBSE) are driving advancements in mission value through modeling, process automation and optimization. The last few years have seen a significant shift in focus from a design-build-test methodology to a model-analyze-build methodology. The DevSecOps initiative promotes additional engineering practices that work to unify software development (Dev), security (Sec) and operations (Ops), helping to ensure system reliability and cybersecurity while reducing risk (DoD CIO, 2019). Many advancements in processes and tooling have been made, including the DI2E Cloud, Integrated Combat Systems (ICS) and NAVWAR Battle Management Aids. It is important to note that these and other ongoing improvements are critical for the continued success of defense programs.


Although lower levels of integration can be realized via Build and Deploy, scalability at higher levels of interoperability requires new areas of focus. Beyond Building and Deploying, this paper proposes the two paradigms of Relating and Composing. The rigorous engineering, testing, and performance necessary in the defense domain requires more attention to these latter two concepts, and future defense innovations demand it.


RELATING

The Relate paradigm consists of the documentation and interconnection of semantics. Beyond Hardware and Software Architecture, Data Architecture becomes vital.

DevSecOps initiatives address this paradigm but only implicitly. Integration work in a DevSecOps program, for instance, would be likely to include IDDs and defined APIs (Application Programming Interfaces). Integrators would use specific frameworks, APIs, and messages to develop code that interprets the data from the APIs according to the system requirements. The semantics are implied, but they are not explicitly defined. As such, changes to the message sets, the APIs, or the systems themselves would require human intervention and a new integration.


Within DevSecOps there is good reason for the focus on automation to replace or augment human processes in the software development lifecycle. Such automation can also benefit other elements within the development pipeline. Explicitly documenting and interconnecting semantics, with the goal of automating integration, increases the scalability, testability, and adaptability of that integration in the future. It is only through such rigorous documentation that one may begin to address additional challenges to integration in a scalable manner.


Challenges Addressed

One of those challenges is the interpretation of data. In smaller or homogeneous systems, this may not be difficult; however, as systems become more complex, dynamic, and distributed, it becomes harder to unambiguously document the meaning of data. In order to solve this challenge, one must move from the Build and Deploy mindset into the Relate mindset.


Traditionally, data interpretation has been the responsibility of humans, even when the mechanics of data syntax and transmission are model-driven. For example, suppose one has the following structure in some machine-readable format:


struct PositionMsg {
    id_t   id;
    time_t time;
    pos_t  position;
};


Does this denote one’s current position in a status message? A waypoint along a course? A desired position in a command message? A sensor reading of a target’s position?

Sometimes the intent is clarified by defining several distinct structures or field names (for example, one could rename the field “commanded_position”) or by communicating the data on a named channel or topic (“PositionReport”). These conventions help a human, but not a computer.


This developer-reliant approach has been an essential aspect of integration for decades, and there has been little innovation to mechanize the process. Although it is natural to consider interoperability challenges in terms of abstraction layers, one must take a holistic view. A solution driven from a single perspective may end up being architecturally limiting. For example, for a specific system it is convenient to solve technical interoperability via commonality – just choose one middleware and enforce that solution.


When all endpoints are controlled internally or by a single project, commonality is sufficient. However, as the system ages, its scope of integration tends to increase, perhaps needing to work with a subsystem that chose a different middleware or to use heterogeneous transport fabrics.


When such changes were not originally anticipated, assumptions about the infrastructure tend to leak into the application layer. Thus, to avoid porting business logic to new frameworks, the system fragments into domains of commonality: these five applications use standard X, these eight use standard Y, these three use standard Z… with gateways and bridges and adapters in between. This requires a small army of engineers to read, parse, and implement adapters between individual modules.


As another example of commonality causing unintended consequences, consider the structure of data messages. With the goal of maximizing flexibility, many frameworks use a “least-common denominator” message structure such as flat strings, XML or JSON. While this does address issues around recompilation or language dependencies, the approach often shifts the larger burden of data interpretation to the higher layer: each application must examine the contents, decoding string delimiters or walking XML trees to find and extract its relevant data.
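A notional sketch of this burden follows; the delimited message format and field order are hypothetical. Because the framework carries only a flat string, every consuming application must re-implement parsing and must know, out of band, which token means what.

// Sketch of a "least common denominator" flat-string message and the parsing
// burden it shifts onto every consuming application.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

std::vector<std::string> split(const std::string& msg, char delim) {
    std::vector<std::string> tokens;
    std::stringstream ss(msg);
    std::string tok;
    while (std::getline(ss, tok, delim)) tokens.push_back(tok);
    return tokens;
}

int main() {
    // Hypothetical message: "<id>|<time>|<lat>|<lon>|<alt>".  The meaning and
    // order of fields is documented only in prose, not carried in the data itself.
    std::string wire = "42|1618033988|36.85|-75.98|3500";
    std::vector<std::string> fields = split(wire, '|');
    double altitude = std::stod(fields[4]);   // the consumer "just knows" index 4 is altitude
    std::cout << "altitude: " << altitude << "\n";
    return 0;
}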


The underlying transport is another area where assumptions about a common infrastructure’s behavior make integration harder. For example, if applications assume an HTTP transport, they will likely use a request-response communication style, making it harder to interoperate with a subsystem using a publish-subscribe style. Similarly, if applications assume a TCP transport, they will likely assume data on the same stream is guaranteed to arrive FIFO; this assumption will not hold if the transport is switched to UDP, where messages may arrive out of order.
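The following sketch, with hypothetical message content, illustrates how an undocumented FIFO assumption surfaces: the receiver's ordering check never fires over TCP, but out-of-order arrival over UDP violates the assumption unless it is explicitly documented and handled.

// Sketch of an application-level FIFO assumption exposed by out-of-order delivery.
#include <cstdint>
#include <iostream>
#include <vector>

struct StatusMsg {
    uint32_t sequence;   // monotonically increasing at the sender
    double   fuel_kg;
};

int main() {
    // Arrival order as a UDP receiver might observe it (message 3 overtaken by 4).
    std::vector<StatusMsg> arrivals = {{1, 900.0}, {2, 880.0}, {4, 840.0}, {3, 860.0}};
    uint32_t last = 0;
    for (const StatusMsg& m : arrivals) {
        if (m.sequence < last) {
            std::cout << "out-of-order message " << m.sequence
                      << "; the FIFO assumption no longer holds\n";
            continue;   // a real system must decide: drop, buffer, or reorder
        }
        last = m.sequence;
        std::cout << "applied status " << m.sequence << ", fuel " << m.fuel_kg << " kg\n";
    }
    return 0;
}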


Importantly, there is no single “right” way to do things. All engineering involves tradeoffs, and the assumptions made may be perfectly appropriate for a particular system. The problem is when assumptions are not documented. In the “PositionMsg” example it is the semantics of data that must also be documented. In the transport example it is the behavioral properties that must also be documented.


How is Interoperability Achieved?

To solve these challenges, integration must extend into new Levels of Interoperability. LOI 3, the Semantic level, is characterized by the ability not only to exchange data but also to exchange that data’s context.


Although an interface-centric model (IDML 4) will begin to enable the connections and integration within the Relating paradigm, it is the addition of semantic context that truly demarcates the next step forward.


To combat the limitations of the interface-centric model, the data model must be designed as a true entity model whereby message attributes instead directly project to the entity attributes they represent (IDML 5). In this form, the message set and data model are decoupled, allowing the message set to change without impact on the data model.

In addition, by their very nature, these projections do capture a fundamental semantic of the attribute. For example, such a model might capture the altitude of an aircraft. It is that introduction of semantics to interface documentation that provides context for the existing content and syntax and gives meaning to the message sets.
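As a notional sketch of such a projection (the entity, field names, and unit conversion below are hypothetical and not drawn from any particular standard), a machine-readable mapping can state that a message field projects onto an entity attribute, capturing units and meaning in the mapping itself rather than in the consuming code.

// Sketch of an IDML 5 style projection: the entity model stands on its own, and
// messages are described as projections onto entity attributes, so the message
// set can change without rebuilding the model.
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

// Entity model: defined from the domain, not from any particular message.
struct Aircraft {
    std::string tail_number;
    double      altitude_m = 0.0;
};

// A projection maps one message field onto one entity attribute.
using FieldProjection = std::function<void(Aircraft&, const std::string&)>;

int main() {
    // Machine-readable mapping: "message field name" -> projection onto the entity.
    std::unordered_map<std::string, FieldProjection> projection = {
        {"alt_ft", [](Aircraft& a, const std::string& v) {
            a.altitude_m = std::stod(v) * 0.3048;   // units are captured in the projection
        }},
    };

    Aircraft ac{"N12345"};
    projection.at("alt_ft")(ac, "10000");   // apply an incoming field to the entity
    std::cout << ac.tail_number << " altitude: " << ac.altitude_m << " m\n";
    return 0;
}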


Beyond projections, a well-structured entity model with containment (IDML 6) provides even deeper semantic meaning. Within such a model, attributes are projected through composed entities. The composition, or containment, adds context. For example, a command system receives a message with the volume of a fuel tank. In a model without containment, no context is provided. The volume provided could be for any fuel tank on any craft, anywhere in the world. However, a model with containment provides the additional context that the volume is from a specific fuel tank on a specific air vehicle. This information enables the system (and its users) to better understand the context of the information and therefore the message itself. At this level, the data model also becomes more rigorous, testable and fully reusable.
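A minimal sketch of containment, with hypothetical names, might look as follows: the tank's volume is addressed through the air vehicle that contains it, so a received value is never “some tank, somewhere” but a specific tank on a specific vehicle.

// Sketch of containment (IDML 6): the attribute gains context from the entity
// that contains it.
#include <iostream>
#include <string>
#include <vector>

struct FuelTank {
    std::string name;
    double      volume_l;
};

struct AirVehicle {
    std::string           tail_number;
    std::vector<FuelTank> tanks;     // containment: tanks exist within a vehicle
};

int main() {
    AirVehicle av{"N12345", {{"left_wing", 640.0}, {"right_wing", 655.0}}};

    // The attribute is addressed through its containment path, so a received
    // volume is tied to this tank on this vehicle, not to an anonymous tank.
    for (const FuelTank& tank : av.tanks) {
        std::cout << av.tail_number << "/" << tank.name
                  << " volume: " << tank.volume_l << " L\n";
    }
    return 0;
}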


An Entity Model with Relationships (IDML 7) further extends the semantic expression by adding related context. For example, a system requires a video file generated by a specific camera on a specific air vehicle within a specific fleet. An Entity Model with Containment enables the documentation to cover the “specific camera on (contained in) a specific air vehicle within (contained in) a specific fleet.” The camera, however, is not “composed of” the video file; the camera and the video file have a different relationship. In this case one might choose to say that the video file was “generated” by the camera. In this way the semantic of the relationship is captured and context is added to the message.
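The following notional sketch (hypothetical node and relationship names) renders this as a small graph in which containment and “generated” are distinct, named relationships, so the semantics of each connection are captured explicitly.

// Sketch of an entity model with named relationships (IDML 7).
#include <iostream>
#include <string>
#include <vector>

struct Node {
    std::string type;   // e.g., "Fleet", "AirVehicle", "Camera", "VideoFile"
    std::string id;
};

struct Relationship {
    std::string subject;    // node id
    std::string predicate;  // semantic of the relationship, e.g., "contains", "generated"
    std::string object;     // node id
};

int main() {
    std::vector<Node> nodes = {
        {"Fleet", "fleet-7"}, {"AirVehicle", "av-42"},
        {"Camera", "cam-1"},  {"VideoFile", "vid-2021-04-01"},
    };
    std::vector<Relationship> edges = {
        {"fleet-7", "contains",  "av-42"},
        {"av-42",   "contains",  "cam-1"},
        {"cam-1",   "generated", "vid-2021-04-01"},   // not containment: a distinct semantic
    };

    for (const Node& n : nodes) std::cout << n.type << ": " << n.id << "\n";
    for (const Relationship& e : edges)
        std::cout << e.subject << " --" << e.predicate << "--> " << e.object << "\n";
    return 0;
}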


It is worth noting that the models that result from these efforts must be usable by different classes of users at different times. Too often, model-based approaches support only requirements analysis, functional decomposition, interface data type specifications, or system topology and deployment. In order to achieve greater adaptability, the modeled information must be traced through the architecture, design, implementation, and test phases of a program. As such, the models and modeling tools must enable complex model management and collaboration among multiple classes of users at various points in time. The ultimate result is mergeable, “living” data models that scale over time, rather than the data model instances widely relied upon today.


Initiatives are forming to address the Relate paradigm through graph-based models. For instance, certain architectural standards such as the Modular Open Systems Approach (MOSA), the Future Airborne Capability Environment (FACE™) Technical Standard, and the Defense Advanced Research Projects Agency (DARPA) SoS Technology Integration Tool Chain for Heterogeneous Electronic Systems (STITCHES) program serve to provide common guidelines that enable the addition and management of this new level of rigor. MOSA and FACE are open standards and may be used as the basis for hypergraph-based model solutions in which a message is translated to an entity model and back to a message. STITCHES proposes an incremental standard with near real-time construction of peer-wise (message to message) SoS connectivity (Fortunato, 2016). These standards and guidelines must continue to improve, as only through unambiguous, machine-leverageable documentation will truly flexible, scalable integration be possible.


COMPOSING

This integration paradigm is characterized by the composition and orchestration of behavior and involves the Data, Software, and Functional Architectures to achieve interoperability of systems and sub-systems. It is these behaviors - the interactions and the knowledge of those interactions - as well as the resulting system state that are critical when deploying and maintaining distributed SoS. In other words, knowing when to do something, when to wait for something to be done, and how to address the implications of other systems’ interactions and data exchanges is key. Often, these interchanges and their resulting implications are encoded in the APIs or in data and signal exchange concepts. They are documented in prose in Service Descriptions, or nominal cases are elaborated in sequence diagrams using SysML or UML tools. While these approaches are not incorrect, just as the Relating paradigm moves from implicit to explicit documentation of semantics, one must here formally document the implications and expectations of behavior. The machine-leverageable documentation of behavioral interactions within an architecture becomes particularly important in the modification and extension phases of distributed mission-critical programs such as Command and Control systems, distributed sensors, and Combat Systems.


The ability to deterministically and algorithmically compose or orchestrate both semantics and behavior among many diverse systems enables new and interesting ways to integrate and deploy software capabilities.


Challenges Addressed

There are few standards that address the ‘meaning’ of behavior. This is not to be confused with standards that can be used to document behavior in human-parseable formats and iconography. A certain level of success is realized through various methods, including commonality-based approaches, normalization of behavior interactions within a container framework, and deployment of a common infrastructure managed by a small team. However, when seeking to address the SoS integration challenge, and the fact that there will always be external interfaces and systems over which the integrator has little control, parameterization of behaviors is necessary. For example, what is the mapping between a brokered-queue message distribution mechanism and a publish-subscribe mechanism? How does one document that the very interaction itself means something, e.g., that a regular status message serves as an indication of system state and health?


Before developing standards, the aspects and meaning of behaviors must first be parameterized to include the following (a notional sketch of such parameterization appears after the list):

- The meaning of signal and data dependencies with regard to timing and presentation

- The documentation of implicit dependencies and expectations among systems present in infrastructure APIs and container implementation details

- The internal system states that have dependencies on who does what, when, to whom, and for what purpose
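As a notional sketch of such parameterization (the structure and field names are hypothetical and not drawn from any existing standard), the documentation itself can state that a periodic status message also carries liveness meaning, and how stale it may become before the publisher must be presumed unhealthy.

// Sketch of parameterizing the *meaning* of an interaction: the specification
// states that a periodic status message doubles as a liveness indicator.
#include <chrono>
#include <iostream>
#include <string>

struct InteractionSpec {
    std::string topic;                         // which exchange this describes
    std::chrono::milliseconds period;          // expected publication period
    std::chrono::milliseconds liveness_after;  // silence beyond this implies loss of health
    bool implies_health;                       // the interaction itself carries meaning
};

int main() {
    InteractionSpec spec{"EngineStatus", std::chrono::milliseconds(500),
                         std::chrono::milliseconds(1500), true};

    // A downstream tool (or generated code) can consume this specification and
    // decide, without a human, when the absence of a message changes system state.
    std::chrono::milliseconds silence(2100);   // observed gap since last message
    if (spec.implies_health && silence > spec.liveness_after)
        std::cout << spec.topic << " publisher presumed unhealthy after "
                  << silence.count() << " ms of silence\n";
    return 0;
}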


How is Interoperability Achieved?

Within the Relate paradigm, a program has successfully produced well-defined information exchange definitions (LOI 2) and rigorously documented the data’s use and context within the system (LOI 3). However, even under these circumstances, it is still possible that interoperability will fail. Furthermore, the full benefits of true Compositional integration certainly cannot yet be realized.


The potential for failure is based on the fact that a simulation system is simply an executable form of a model, and that model represents a subset of the real world. During the development of the model, many factors of reality are intentionally, and sometimes unintentionally, omitted. Problems will occur if these excluded factors turn out to be necessary to ensure proper interoperability (Tolk, 2004).


The conceptual model documents which parts of the real world are modeled, under which constraints, and which parts are excluded from the model. The Pragmatic / Dynamic level (LOI 4) is identified by the exchange of both the data’s context and its use and applicability. Such interoperability can only be fully realized through the alignment of conceptual models (LOI 5). The conceptual model requires the participation and engagement of subject matter expertise in the specified domain.


Just as semantic documentation at higher IDML levels does not eliminate the need for syntactically defined data types, the documentation of behaviors and interaction expectations does not eliminate the need for APIs, service descriptions, or sequence diagrams. These higher levels of IDML have yet to be characterized and represent the next significant challenge in scalable system integration and interoperability.


CONCLUSION

As the number of points of connectivity and the complexity of that connectivity increase, traditional integration methods necessitate that ever-increasing numbers of software developers, engineers, and architects be tasked to keep critical systems glued together. With each integration, countless assumptions are made about the concept of operations for that system. As new missions, platforms, or data are introduced, reintegration becomes necessary. Current approaches to integration yield inflexible, brittle solutions.

When an enemy adapts, the US needs the ability to get new tools, new sensors, and new capabilities into the hands of warfighters as quickly as possible. Defense programs do not always have the luxury of time and resources required for a full tech refresh or to bring down a full battle group for an update.


Programs must adopt new methods capable of more rapidly adapting existing systems to meet changing mission requirements and streamlining the introduction of new capabilities into pertinent systems. Transposing traditional methodologies into digital workflows and models, and even automating lifecycle processes such as testing, is critical but insufficient.

In addition to these advances, one must begin to think differently about interoperability and its role in the systems development lifecycle. Integration efforts must be involved early in the lifecycle, working beyond syntax to explicitly define the semantic relationships and the behavioral composition in the design of the architecture. In this manner, integration and even infrastructure become automatable, adaptable, and scalable, increasing warfighting value far into the future.


REFERENCES

DoD CIO. (2019). DoD Enterprise DevSecOps Reference Design. Washington, DC: Department of Defense.

Fortunato, E. (2016). STITCHES: SoS Technology Integration Tool Chain for Heterogeneous Electronic Systems. Apogee Research.

Hand, S., Lombardi, D., Hunt, G., & Allport, C. (2018). Interface Documentation Maturity Levels (IDML): An Introduction. Open Group FACE™ / U.S. Army Technical Interchange Meeting. Huntsville, AL.

Navy, U.S. (2019, October 23). NAVWAR Completes First Digital Model of Systems on USS Abraham Lincoln. Retrieved from United States Navy: https://www.navy.mil/Press-Office/Press-Releases/display-pressreleases/Article/2237404/navwar-completes-first-digital-model-of-systems-on-uss-abraham-lincoln/

Tolk, A. (2004). “Composable Mission Spaces and M&S Repositories - Applicability of Open Standards.” 2004 Spring Simulation Interoperability Workshop. Washington, DC.

AUTHOR BIOS


Gordon Hunt

Gordon Hunt is a co-founder of Skayl. His focus is on system of systems integration and semantic data architectures that enable increased systems flexibility, interoperability, and cyber security. Hunt’s experience in building real systems with current and emerging infrastructure technologies is extensive. His technical expertise spans embedded systems, distributed real-time systems, data modeling and data science, system architecture, and robotics & controls. He is a recognized industry expert on Open Architecture and data-centric systems, and as a regular author and presenter he speaks frequently on modern combat system and command and control architectures. As a CAPT Engineering Duty Officer in the US Navy, he supports combat system and system integration challenges. Hunt earned his BS in Aeronautical Engineering from Purdue University and his MS in Aerospace Engineering & Robotics from Stanford University.


Chris Allport

Chris Allport is a co-founder of Skayl. Allport is a specialist in realizing concepts into advanced prototypes. He has a proven track record with numerous aspects of the aerospace industry: from leading the development of standards to hands-on software development and integration. Throughout his professional career, Allport has been called upon to lend his expertise to myriad interoperability activities. In earlier years, these efforts were met with simple point solutions intended to solve the problem at hand. However, as the years have progressed and the challenges have grown more sophisticated, he has developed increasingly sophisticated solutions to a variety of integration challenges. Allport earned a BS in Electrical Engineering, a BS in Computer Engineering, and an MS in Electrical Engineering from West Virginia University.


Shaun Foley

Shaun Foley is a senior software engineer at Skayl and has provided distributed systems expertise for defense and commercial customers in North America and EMEA. He appreciates the interoperability challenges at all levels of abstraction: the need for a rigorous data architecture, the need to maintain legacy functionality, and the pragmatic constraints of real-time and embedded platforms. He is active in the FACE™ Consortium and leads several Small Business Innovation Research (SBIR) projects at Skayl. Foley earned his BS in Electrical Engineering & Computer Science from MIT.


Sonya Hand

Sonya Hand is the Director of Strategy & Marketing at Skayl. Hand co-authored and released the paper introducing the Interface Documentation Maturity Levels (IDML), which she presented at the 2018 US Army and Future Airborne Capability Environment (FACE™) Consortium Technical Interchange Meeting. Hand has an extensive professional history working in systems and technology integration, ensuring alignment with enterprise mission, business objectives, and program requirements. She has been involved as a member of the FACE™ Consortium, as well as a member of the Leadership Team for the NIST Global Cities Team Challenge (GCTC) Data Governance and Exchange Super Cluster. Hand earned her BS in Information Technology from the University of Virginia and her MBA from Johns Hopkins University.


This paper was published and copyrighted by the American Society of Naval Engineers (ASNE) April 2021.
