The Open Group’s Future Airborne Capability Environment (FACE) Technical Standard is lauded as a great stride forward in Open Architecture and an enabler of the government’s Better Buying Power initiatives. As the FACE Technical Standard continues to proliferate, the distance between the decision to use the standard and the developers who build the systems will increase. There is a tremendous amount of excitement and passion for the standard, as evidenced by the active participation of so many individuals representing so many companies at the FACE Consortium meetings. To maintain this level of enthusiasm, it is helpful for all stakeholders to understand how the standard applies to them in their roles, as well as how and why it applies to others in the cross-functional team.
In addition to educating, this paper aims to push the envelope with regard to the way we tend to think about the data modeling aspects of the FACE Technical Standard. Each section of this paper is intended to capture a generalized view of one represented stakeholder role, in order to convey the spirit of the standard and how it applies to someone in that role without delving into the minutiae. As with previous papers, we offer this content as the starting point for conversation.
The FACE Technical Standard is comprehensive, supported by a sizeable group of volunteers working to build a standard that addresses an entire framework for system-of-systems architecture. This short paper will not be so broad: it focuses specifically on the benefits that can be realized through effective data architecture and data modeling.
A whitepaper with a reading guide? This paper is meant to meet you where you are in your current role. As young engineers, we would likely not have cared what an acquisition professional thought about the standard. Our job was to write code – how does that help me? On the other side, as contract administrators, we might not care how a developer is going to implement this thing, but we do need to know that what they implement can be traced to requirements.
That said, everyone’s job has a LITTLE_BIT (that’s an enumeration measurement with a conceptual type of AmountOfSubstance (1)) of impact and overlap on everyone else’s. As a result, we recommend the following:
Start by reading the section of the paper most appropriate for your perspective. Follow that by reading the other sections to gain an understanding of, and appreciation for, why the standard is important from other stakeholders’ points of view.
And without further ado...
For someone approaching the “Why should I care?” question from a business perspective, there are myriad considerations and objectives unique to this outlook. Maybe you have to figure out how to keep this massively complex system glued together in a fraction of the budget you used to have. Plus, you keep hearing about how tough the competition is getting, so you want to find that competitive edge, maybe by adding additional capabilities. But of course, when you add capabilities, that’s even more work that you have to add to the budget. So, let’s get straight to the point: we will address some of the major business concerns and related questions, and explore the benefits of the FACE Technical Standard from the business angle.
How can we add capabilities to get ahead in this ever more competitive market?
Let’s face it, adding capabilities costs more money. Our proposal estimating systems account for this. We have built systems upon systems for estimating costs so we can minimize risk and maximize profit. A data architecture is not going to change that.
However, it does allow us to make significant gains in other aspects of our work.
It is well documented that the cost of software changes increases exponentially as the product moves through the various stages of development. In short, software is easiest to change during the requirements development phase before any design has been performed or any code has been written. Contrast that to the cost of changing the software after certification, whereby any change in the software drives the need to repeat the test and certification process.
In an appropriate data architecture, the data model can be used to calculate the integration alignment with other similarly documented systems before design begins. This process highlights where the systems are already aligned and where additional effort may need to be focused to achieve required integration. And this is all done before any code is written.
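The alignment calculation described above can be sketched as a simple field-by-field comparison of two documented interfaces. This is a minimal illustration, not an implementation of any particular FACE tooling; the field names, conceptual types, and units are hypothetical.

```python
# Hedged sketch: each interface is described as {field: (conceptual type, unit)}.
# All names and units below are invented for illustration.

def alignment_report(iface_a, iface_b):
    """Compare two documented interfaces before any code is written."""
    shared = set(iface_a) & set(iface_b)
    aligned = {f for f in shared if iface_a[f] == iface_b[f]}
    mismatched = shared - aligned
    return {
        "aligned": sorted(aligned),          # no integration effort needed
        "mismatched": sorted(mismatched),    # mediation (e.g., unit conversion) needed
        "only_in_a": sorted(set(iface_a) - shared),
        "only_in_b": sorted(set(iface_b) - shared),
    }

sensor = {"altitude": ("Length", "feet"), "speed": ("Speed", "knots")}
display = {"altitude": ("Length", "meters"), "speed": ("Speed", "knots")}

report = alignment_report(sensor, display)
# 'speed' is already aligned; 'altitude' needs a feet-to-meters mediation step.
```

Even this toy comparison shows where effort must be focused (the mismatched fields) and where the systems already agree, which is the essence of calculating integration alignment up front.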
It is also possible to leverage the data model to generate data mediation code. Once the process has been validated (i.e., shown to reliably produce the expected code), it eliminates the need to ever again manually develop code that mediates between different representations of data. Portions of the code can be generated automatically, freeing developers from this redundant task and thereby reducing both cost and the opportunity for error.
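Model-driven mediation can be illustrated with a sketch that, given source and target field definitions differing only in units, builds the conversion function automatically instead of having a developer hand-write it. The conversion table and field layout are assumptions for illustration, not part of any real FACE tool.

```python
# Hedged sketch of model-driven mediation code generation.
# The conversion factors and field definitions are illustrative only.

CONVERSIONS = {("feet", "meters"): 0.3048, ("knots", "m/s"): 0.514444}

def generate_mediator(source_fields, target_fields):
    """Build a function mapping a source record to a target record,
    deriving unit conversions from the two field definitions."""
    factors = {}
    for name, (_, src_unit) in source_fields.items():
        _, tgt_unit = target_fields[name]
        factors[name] = 1.0 if src_unit == tgt_unit else CONVERSIONS[(src_unit, tgt_unit)]

    def mediate(record):
        return {name: record[name] * factor for name, factor in factors.items()}
    return mediate

src = {"altitude": ("Length", "feet")}
tgt = {"altitude": ("Length", "meters")}
to_metric = generate_mediator(src, tgt)
```

The point is that once the generator itself is validated, every new pair of documented interfaces gets its mediation code "for free" from the model.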
Furthermore, Skayl is actively developing a solution that allows for a similar extension to automatically bridge different protocols. With this technology, it is possible to gain an even more significant increase in the capability of automatically generated code. Simply document the protocol and behavior in your data model, and the entire data integration between systems can be generated completely and automatically.
By increasing the amount of automation, you are able to eliminate the typical barriers between non-integrated systems and focus the resources you would have previously spent connecting systems on innovation instead.
How do we protect our IP using a data model with open architecture?
Although your data model must document your system interfaces for conformance, you are not required to disclose that documentation. A particular contract may require you to share it, but access to the data model can always be managed.
This means that you can freely document your systems and guarantee your customer that the system interface meets the requirements without having to expose all of your intellectual property. You may wish to share your data model for the sake of sharing the data you collect (or use), but access to and use of your data model are always subject to negotiated terms.
Building data models is expensive and time-consuming. How do we manage?
Designing a good data model that can be leveraged for large-scale integration is difficult. If you have an experienced team specifically familiar with this style of data model, you might be able to budget about 30 minutes to model each attribute; a safer bet for a skilled data modeling team that is not yet experienced with models organized as the FACE Technical Standard specifies would be closer to 60 minutes. Appropriate tooling can cut this back significantly.
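The per-attribute figures above translate directly into a budget estimate. The 500-attribute model size below is a hypothetical example chosen only to make the arithmetic concrete.

```python
# Back-of-the-envelope budgeting from the figures in the text:
# ~30 minutes per attribute for an experienced team,
# ~60 minutes for a skilled team new to FACE-style models.
# The 500-attribute model size is a hypothetical example.

def modeling_effort_hours(num_attributes, minutes_per_attribute):
    """Total modeling effort in person-hours."""
    return num_attributes * minutes_per_attribute / 60.0

experienced = modeling_effort_hours(500, 30)   # experienced team
unfamiliar = modeling_effort_hours(500, 60)    # skilled but FACE-unfamiliar team
```

A 500-attribute model thus swings between roughly 250 and 500 person-hours depending on team familiarity, which is why tooling and licensed starting points matter.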
Additionally, you can license existing data models in specific domains. This alternative provides you with a data model that is supported much like a software development library, and it has the added benefit of automatic integration with others who have licensed the same technology. Furthermore, it opens future opportunities as more domains are added to the data model, allowing more and more types of systems to be documented. The path to adoption and implementation of a data architecture is not to make it easier to build multiple models from scratch. How many times do we need to build and document a particular model entity? Each time we build an avionics-related component? No, we need only one. The only real path to easing the burden on model providers is to get them to the point where they can begin with a 90% solution and integrate the rest into the model with the smallest increment of effort.
Government players have a unique set of goals regarding contracting, costs and concerns around obsolescence. With new technological advancements being made almost daily, there’s an increasing need for flexibility, interoperability and automation.
How do we predict and manage costs and changes over time?
Tell me if you’ve heard this one before: The government issues a contract to a prime. Months (or years) into the project, the government wants to make a change, and the contractor tells them it will cost tens (or hundreds) of millions of dollars and will delay the schedule by five years. Sound familiar?
What if, instead of requiring the contractor to stop and perform an impact analysis, they could simply document the proposed change, run their data model integration, and deterministically calculate the overall system level impact? The ultimate goal is to limit the impact of a change such that it is relative to the size of that change rather than to the size of the system.
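The deterministic impact calculation described above can be sketched by modeling data dependencies between documented components as a graph and walking outward from the changed element. The components and dependency edges here are invented for illustration; a real analysis would be driven by the data model's documented interfaces.

```python
# Hedged sketch of deterministic change-impact analysis over a documented
# system. Component names and dependencies are hypothetical.
from collections import deque

def impacted(dependencies, changed):
    """Return every component reachable from the changed one
    (i.e., every consumer, direct or transitive, of its data)."""
    seen, queue = {changed}, deque([changed])
    while queue:
        node = queue.popleft()
        for consumer in dependencies.get(node, ()):
            if consumer not in seen:
                seen.add(consumer)
                queue.append(consumer)
    return seen - {changed}

# Map each producer to the consumers of its data.
deps = {
    "nav_sensor": ["nav_filter"],
    "nav_filter": ["display", "autopilot"],
    "radio": ["display"],
}
```

Changing `nav_sensor` touches three downstream components, while changing `radio` touches one: the cost of analysis scales with the change's footprint, not with the size of the whole system.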
Instead of a single change rippling out and causing a cascade of changes for which countless additional hours must be billed, imagine that the work could be automatically processed. While this is not yet possible for the physical aspects of a system, it is possible with regard to a system’s software interfaces and integration with other systems.
As seen in Figure 2 – SoS Integration Effort (in IPM) per Interface as a Function of the Number of Major Interfaces, the integration effort necessary during Integration & Testing increases exponentially with the number of interfaces. However, with appropriate tooling in place, it is possible to hold the per-interface effort roughly constant regardless of the number of interfaces.
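One common way to see why point-to-point integration effort grows so quickly, while a shared data architecture keeps per-system work flat, is to count translators: n systems integrated pairwise need on the order of n*(n-1)/2 hand-built translators, but only n adapters to a common data model. This counting argument is a standard illustration, offered here as an assumption about the shape of the curve rather than a reproduction of the figure's data.

```python
# Hedged illustration: translator counts for pairwise integration vs.
# adapters to a shared data model, for n major interfaces.

def pairwise_translators(n):
    """Hand-built point-to-point translators among n systems."""
    return n * (n - 1) // 2

def model_adapters(n):
    """One adapter per system when all map to a common data model."""
    return n

# At 10 interfaces: 45 pairwise translators vs. 10 model adapters.
```

As n grows, the pairwise count grows roughly with the square of the number of interfaces while the adapter count grows linearly, which is the gap the tooling described above is meant to close.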