SYSTEMS ENGINEERING NEWSLETTER

PPI SyEN 80 – 19 August 2019

 


 

brought to you by

Project Performance International (PPI)

systems engineering training for project success

 

3.1 Updates in the Evolution of Systems Engineering

by

Steven H. Dam, Ph.D., ESEP

SPEC Innovations

Email: steven.dam@specinnovations.com

Abstract

In January 2018, the International Council on Systems Engineering (INCOSE) began an initiative called FuSE – the Future of Systems Engineering. We began thinking about this future in 2011, following a presentation by Dr. Michael Ryschkewitsch, who at that time was the NASA Chief Engineer. His presentation, “NASA Systems Engineering Challenges,” provided a set of “uses and challenges” for systems engineering. Many of those challenges mirror the intended outcome for the FuSE initiative as stated by the President of INCOSE, Mr. Garry Roedler: “evolving systems engineering that enables us to leverage the new technologies that drive us fully into a dynamic, non-deterministic, and evolutionary environment.” In response to Dr. Ryschkewitsch’s challenges, the Lifecycle Modeling Language (LML) and the Innoslate® tool were developed. This paper shows how many of the ideas and goals of the FuSE initiative have already been accomplished, so that, in part, the future of systems engineering is already here.

Introduction

On April 15, 2011, I attended a Conference on Systems Engineering Research (CSER) event where Dr. Michael Ryschkewitsch presented his paper entitled “NASA Systems Engineering Challenges.” 1 Three parts of his future vision were: Model-based Artifacts, Seamless Data Flow, and Distributed Teams. He also discussed how the uses and challenges vary throughout the lifecycle. In one slide he posed a number of questions, including: “How to enable modeling that provides the needed fidelity yet can be done quickly and cheaply?”; “How do we develop the standards that allow lossless integration across organization and tool boundaries?”; and “How do we make the full suite of information captured during design and development available to the operators without having prior knowledge of their needs?”

We had begun exploring the development of a desktop tool for systems engineering. Over the previous 25 years I had used several software and systems engineering tools, including a Data Flow Diagramming tool, a State Machine tool, RDD-100, CORE, Cradle, DOORS, and System Architect, but these tools had stagnated, not evolving with new technologies and approaches. The new tools coming into the discipline were focused on UML and, more recently at that time, SysML. These tools required knowing a language and concepts that were unfamiliar to most systems engineers, let alone the broader set of stakeholders with whom we must communicate.

It was clear from Mike R’s (as he liked to be called) presentation that we needed to embrace one of the emerging technologies of the time: cloud computing. Cloud computing enables distributed, collaborative teams, and it provides the speed and computational power needed to perform complex modeling and simulation quickly and cheaply. So we scrapped our desktop software development and began exploring cloud technologies.

But solving the distributed-teams part of the problem did not address the other two parts of Mike R’s vision: Model-based Artifacts and Seamless Data Flow. To get to those, we had to take a close look at the language. SysML was not the answer, so what was? To see why not, it helps to review past language developments.

History of Systems Engineering

Some believe systems engineering can be traced back thousands of years. The wonders of the world, such as the pyramids of Giza, the Ziggurat of Ur, the Treasury of Atreus, and the Great Wall of China 2, could only have been designed and built systematically. Others trace it back to the “Machine Age”: clearly the industrial revolution and its assembly lines required systems thinking. But by the “Space Age,” systems engineering as we know it had clearly been born, for by this time “systems of systems” thinking was required. Many people ascribe the modern age of systems engineering to Ramo and Wooldridge, whose company became TRW and is now part of Northrop Grumman. TRW was the lead contractor for the Atlas program, which resulted in the first Intercontinental Ballistic Missile (ICBM). This project required over 18,000 scientists and engineers. A systematic approach was essential to deal with its size and scope, since it encompassed not only the missile itself, but also the basing system and command and control. Even the ICBM was only part of a larger strategic offensive capability under the doctrine of Mutually Assured Destruction (MAD). A little later, TRW also aided in the development of a strategic defensive system called Project Safeguard. As part of that development, TRW personnel developed the Software Requirements Engineering Methodology (SREM). It consisted of an ontology and executable diagrams called behavior diagrams. SREM evolved over time and served as the basis for a number of government-developed and commercial tools, including RDD-100 and CORE. These were called Computer-Aided Systems Engineering (CASE) tools.

Meanwhile, many software development approaches came and went. In the 1960s we used flow-charting techniques for diagramming software. In the 1970s, Data Flow Diagramming (used by Yourdon-DeMarco and Ward & Mellor) became popular. In the 1980s we had the Integrated Definition (IDEF) diagrams, as well as State Machine modeling. The 1990s brought Object-Oriented Analysis and Design, which led to UML. All these modeling techniques were primarily used for software development, but many of us attempted to apply them to systems engineering as well. We even used the Computer-Aided Software Engineering (CASE) tools for this purpose, with limited success. In each case, the systems engineering community seemed to adopt the software technique just before it became unpopular in the software community.

SysML was developed in the mid-2000s as a profile of UML. Although several SPEC personnel and I had significant experience with the Java object-oriented programming language, we did not envision that approach fitting well with systems engineering. Our programmers, who had received their degrees from one of the top computer science departments in the US, had very little familiarity with UML and had never heard of SysML. When I asked them why they didn’t know UML, they did what all of today’s computer scientists do: they “Googled” it. The graph in Figure 1 was, and still is, the result. Since the timeline in the graph does not start until 2004 and UML was first introduced in late 1997, we do not know when interest peaked, but the graph clearly shows a decreasing trend in interest in UML. A similar query for SysML shows a flat line.

Why has the software world lost interest in UML? One reason may be the same one we found with flow charting and the other earlier modeling techniques: it is a lot faster and easier to write, debug, and execute the code than it is to draw the diagrams. So the diagramming techniques tended to be used more to document the code than to design it. And when diagramming is done first, the resulting models usually lack much of the critical information needed for software development, because they are only an abstraction of the software.

Figure 1: Interest in UML has decreased significantly in the last 15 years.

Modern software tools provide syntax checking, code repositories, and debugging capabilities that far exceed what developers can get out of any of the modeling tools. With Agile software development approaches, developers want only the functional requirements, so why are we in the systems engineering world so focused on providing diagrams the software developers don’t want? How do we move into the future with a technique that can leverage the technologies of today and tomorrow, while producing the products needed by all stakeholders, not just software developers? These and other questions are being posed by the INCOSE FuSE initiative.

INCOSE’s Vision of FuSE

According to the INCOSE Charter 3 for this working group, the purpose of FuSE is to:

  • Position systems engineering to leverage new technologies in collaboration with allied fields.
  • Enhance the systems engineer’s ability to solve the emerging challenges.
  • Promote systems engineering as essential for achieving success and delivering value.

In the briefing presented at the INCOSE International Workshop this year 4, the initiative’s leaders also posed these questions and answers:

Q: What will “good” look like when we have used FuSE to deliver systems?

  • Methods, processes, and techniques for self-learning systems (including process changes and handling V&V) will be provided.
  • Improved simulations to handle dynamic objectives will be available.
  • Architecture techniques for AI-heavy systems will be provided.
  • It will be demonstrable how AI positively improves a system while keeping the “ilities” (e.g., safety and security) within acceptable bounds.

Q: What’s stopping us from doing this now?

  • Data availability/usage (OCI and IP concerns).
  • Knowledge and research.
  • Assurance, trust, and understanding of the technologies.
  • SE is too slow to keep up with AI advancements.

These questions and their answers require a fair amount of analysis to fully understand, but they all seem to be very focused on artificial intelligence, as if that were the only technology of interest. The real underlying problem is that SE is too slow: we take months and years to do what should take only weeks to months. If we resolved this issue, the rest might become much easier.

Why is SE Today Too Slow?

The whole idea of model-based systems engineering (MBSE) was to speed up the process by having designers focus on the models, which would produce not only the documentation but, in the case of software, the code itself. INCOSE and others believed that SysML was the answer to MBSE. As we saw earlier, UML has lost the interest of many in the software development field, so that path to code is not ideal. And of course, that approach misses the other critical parts of the system: people, hardware, test facilities, etc. But the problem is more fundamental. Systems engineering deals with design at a certain level of abstraction: we develop the requirements for systems, not the detailed designs of the components. To push our methodologies and approaches into the design engineering space, those methodologies would have to subsume all the other disciplines. In other words, we would have to be able to accurately describe the details of computer-aided design tools, electrical engineering diagrams, software (in all the possible languages), etc. That is not our job. Our job is to optimize cost, schedule, and performance at the system level. So, can SysML do this, or is there something missing in that approach?

To help answer that question, we need to identify what is needed. We need:

  • Methods to capture and visualize tremendous amounts of information.
  • Massive storage and retrieval of information.
  • The capability to move data around easily, between applications.
  • A language that enables decomposition and abstraction:
    • a systems engineering language, not a software engineering language;
    • a language simple enough that systems engineers can easily use it; and
    • a technical and programmatic language, so we can optimize cost, schedule, and performance for the system.

SysML may be many things, but no one argues that it is simple and easy for all stakeholders to understand. Moreover, a language alone cannot meet the needs in the other bullets above; those relate to the technologies and tools we need in order to implement the language.

Lifecycle Modeling Language (LML)

To address the language problem, several people formed the LML Steering Committee 5 to create LML as an open standard. This group included systems engineering professors and expert systems engineers from both large and small corporations. The Steering Committee is led by Dr. Warren K. Vaneman 6, a retired US Navy Captain who is also a Professor of Practice at the US Naval Postgraduate School (NPS). In 2012, the group published the first LML specification. The second iteration (version 1.1) was released in 2014 and included an ontology for SysML. The language includes technical information classes (Action, Asset, Conduit, Requirement, etc.) and programmatic classes (Artifact, Cost, Risk, Time, etc.). In all, 12 primary classes and 8 subclasses form the base language; another class (Equation) and subclass (Port) were added in version 1.1 for SysML. Almost all the classes are related to each other, and there are several relationships between the classes. Every relationship uses the same verb form for the relationship and its inverse (e.g., decomposed by/decomposes), and attributes are provided for each of the classes and for many of the relationships. In this way, the language is well defined, with nouns (classes), verbs (relationships), adjectives (attributes on the classes), and adverbs (attributes on the relationships), making LML a robust base language. It is advertised as the “80% solution,” in that it is meant only to be a common language for everyone to use. You can easily extend it to other domains or to enhance data capture, but if you use it as the basis of your language, it will be easier for others to use and understand.

LML requires only three diagram types (Action, Asset, and Spider) to represent the functional model, the physical model, and traceability, but it recommends using other common diagrams to visualize the information, such as timelines, hierarchies, risk matrices, etc. Even the Action Diagram has only one truly unique feature: it replaces the logic symbols used in other languages with a special type of Action. This special Action represents decision points and can be traced to the Assets that perform them, enabling us to embed the controls needed for command and information assurance deep into the design.
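To make this concrete, here is a minimal sketch, in Python, of how LML-style entities, paired relationship verbs, and attributes might be represented, with a decision-point Action allocated to the Asset that performs it. Apart from decomposed by/decomposes, which the text above quotes, the class and relationship names are illustrative assumptions rather than quotations from the LML specification.

```python
# A minimal sketch of LML-style entities and paired relationships.
# Names other than "decomposed by"/"decomposes" are illustrative
# assumptions, not the normative LML specification.
from dataclasses import dataclass, field


@dataclass
class Entity:
    """Stands in for an LML class (Action, Asset, Requirement, ...)."""
    name: str
    attributes: dict = field(default_factory=dict)  # the "adjectives"
    relations: list = field(default_factory=list)   # the "verbs"

    def relate(self, verb: str, inverse: str, target: "Entity", **attrs):
        # Every LML relationship pairs a verb with its inverse;
        # attrs play the role of "adverbs" on the relationship.
        self.relations.append((verb, target, attrs))
        target.relations.append((inverse, self, attrs))


track = Entity("Track Target", {"duration": "5 s"})
decide = Entity("Engage?")                 # a decision-point Action
radar = Entity("Radar", {"class": "Asset"})

track.relate("decomposed by", "decomposes", decide)
radar.relate("performs", "performed by", decide)  # allocate to an Asset

for verb, target, _ in decide.relations:
    print(f"{decide.name} --{verb}--> {target.name}")
```

Because the inverse verb is stored automatically, traceability queries (the Spider diagram view) can start from either end of any relationship.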

To read more about LML, you can obtain the specification from the LML website 7 or read Essential LML 8. More books and papers are available or will be available soon.

How Can We Use the Technology Advancements?

So now that we have the “language of the future,” let’s look at the available technologies and tools to help us with the other needs identified above.

Clearly cloud computing technologies will help us deal with the need for “massive storage and retrieval of information.” Public, private, and hybrid clouds are available everywhere now. Amazon Web Services (AWS) and Microsoft Azure appear to be the current leaders in this technology. Google App Engine was the early favorite, but it seems to have fallen behind AWS and Azure, at least in the US Government space.

This technology also addresses the capture part of “methods to capture and visualize tremendous amounts of information.” The visualization is trickier, in that it requires modern diagram types and techniques, such as sunburst diagrams or replacing objects in a diagram with numbers. Most people want to see the typical diagrams, such as the hierarchy, but for a major system that diagram ends up as tiny boxes, even on a large plotter. The spider diagram can have the same effect. So tool developers usually provide ways to limit the information on a diagram by level, relationship, or decomposition. As long as the diagrams are generated from the database and not drawn separately, the visualization “problem” is lessened. Unfortunately, most of the tools available today do not use “concordance” 9 as the means to generate diagrams from the data. Tools with this capability have been around for many years, including RDD-100, CORE, Cradle, and Innoslate. So again, this technology is available today.
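As a small illustration of concordance, the sketch below generates two hierarchy views, at different levels of detail, from the same made-up entity store; because every view is derived from the one data set, the views cannot disagree. The store layout and depth-limiting scheme are illustrative, not any particular tool’s.

```python
# Sketch: all views are generated from one entity store, never drawn
# separately, so every view agrees with the data ("concordance").
# The model content and depth-limiting scheme are illustrative only.
model = {
    "System": ["Subsystem A", "Subsystem B"],
    "Subsystem A": ["Component A1", "Component A2"],
    "Subsystem B": ["Component B1"],
}


def hierarchy(root: str, max_depth: int, depth: int = 0) -> None:
    """Render a hierarchy view, limited by level to stay readable."""
    if depth > max_depth:
        return
    print("  " * depth + root)
    for child in model.get(root, []):
        hierarchy(child, max_depth, depth + 1)


hierarchy("System", max_depth=2)  # detailed view
hierarchy("System", max_depth=1)  # summary view from the same data
```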

The “capability to move data around easily, between applications” comes from having a common language and application programming interfaces (APIs). A common language is needed to translate between different classes of information. LML provides this capability today: it has been mapped to other languages, such as the DoDAF Metamodel 2.0 and SysML 10. APIs also exist today, and most tools implement them as a means of communication. Again, a solution exists with current technologies, if everyone wants to use it. The real problem has been that vendors keep their ontologies proprietary. Innoslate uses LML, which is an open ontology, and the XML generated by Innoslate can be easily understood, since an XSD is also available.
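For example, a consumer of an exported model could check the file against the published XSD before importing it into another application. The sketch below assumes hypothetical file names (lml.xsd, project_export.xml); the real schema is obtained through the LML website.

```python
# Sketch: validate an exported model against the published XSD before
# moving it between tools. File names here are hypothetical placeholders.
from lxml import etree

schema = etree.XMLSchema(etree.parse("lml.xsd"))      # published schema
document = etree.parse("project_export.xml")          # tool export

if schema.validate(document):
    print("Export conforms to the schema; safe to hand to the next tool.")
else:
    for error in schema.error_log:                    # explain what failed
        print(error.line, error.message)
```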

In summary, the basic needs are addressed by current technologies. Next, let’s address the major technology that seems to be driving the concern: artificial intelligence.

Modeling and Applying AI

As part of a presentation on “Shaping Systems Engineering for the Future,” Mr. Garry Roedler showed the chart provided in Figure 2.

The top and bottom items particularly refer to AI as an area of concern. So how do we model AI behavior? It is clearly “dynamic, non-deterministic, and evolutionary,” but so are people: we have all those same features and properties. In systems engineering, we spend a significant amount of time and energy modeling human behavior, and we usually want the system to channel that behavior in a particular direction – positive for the goals of the mission. We model human behavior using probabilistic models that are calibrated by real-world experience and experiments; we can model AI behavior the same way. With LML, we can allocate the decision points to whatever performs that Action, including AI components. The decision points also provide a mechanism to embed cybersecurity and information assurance features into the design at all levels, thus addressing the second item in Figure 2 as well.


Figure 2: Slide from presentation by Mr. Garry Roedler.
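As a sketch of the probabilistic approach just described, the fragment below treats a decision point as a calibrated random draw; the same model applies whether a person or an AI component performs the Action. The branch probability is an illustrative stand-in for a value calibrated by real-world experience and experiments.

```python
# Sketch: a decision-point Action modeled probabilistically. The branch
# probability is an illustrative stand-in for a calibrated value.
import random


def decision_point(p_engage: float) -> str:
    """Return the branch taken, whoever (or whatever) performs the Action."""
    return "Engage" if random.random() < p_engage else "Hold"


# Estimate branch frequencies the way a simulation run would.
trials = 100_000
engaged = sum(decision_point(0.85) == "Engage" for _ in range(trials))
print(f"Engage branch taken {engaged / trials:.1%} of the time")
```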

Concerning the V&V issues in Figure 2’s third item: since we can develop automated tests, those tests can use AI components as well to aid in the testing. For instance, we have used AI technologies (natural language processing and machine learning) to aid in determining requirements quality, modeling quality, and traceability. We use those technologies today in Innoslate. In a sense, we are already conducting early V&V using this technology.
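As a toy illustration of the idea (and only that; this is not Innoslate’s actual NLP/ML implementation), a requirements-quality check might flag ambiguous “weak words” and a missing imperative:

```python
# Sketch: a toy requirements-quality check that flags ambiguous terms.
# Illustrative only; not the NLP/ML implementation used in Innoslate.
import re

WEAK_WORDS = {"adequate", "appropriate", "easy", "flexible",
              "robust", "user-friendly"}


def quality_flags(requirement: str) -> list[str]:
    """Return quality issues found in one requirement statement."""
    words = set(re.findall(r"[a-z-]+", requirement.lower()))
    flags = sorted(words & WEAK_WORDS)
    if "shall" not in words:
        flags.append("missing 'shall'")
    return flags


print(quality_flags("The system shall provide adequate, user-friendly reports."))
# -> ['adequate', 'user-friendly']
```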

Since Innoslate extends LML with a subclass of Action (Test Case), we can also model our test processes at the same time. We can then identify where and how to use AI techniques to aid in conducting the tests themselves. Further exploration of this capability will be conducted by SPEC Innovations, the tool developer, and the many universities that use Innoslate as a research tool. With Innoslate’s APIs, we can push the boundaries of today’s technology in preparation for new technologies. SPEC Innovations provides an academic version of the tool that is limited only by the number of entities in an Innoslate project. Researchers can use, and are using, the APIs to enhance our understanding of systems engineering, and we at SPEC Innovations support such efforts.

MBPM and Model-Based Reviews

The programmatic features of LML also enable model-based program management (MBPM). This capability comes from the recognition that the program manager optimizes cost, schedule, and performance for the program, just as the systems engineer does for the system; both also work to mitigate risk in these areas. Innoslate implements these features for program managers and provides views such as the Risk Matrix and Timeline Diagram. The Action Diagram can also be used to model the program processes. By adding timing and resource estimates, the program manager can derive the overall cost, schedule, and performance from the discrete-event and Monte Carlo simulations. The Monte Carlo distributions also provide a measure of the potential for schedule slippage or cost overruns, and these measures can be translated into risk probabilities for the overall program.
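The Monte Carlo idea can be sketched in a few lines: draw each task duration from a three-point (triangular) distribution and count the fraction of runs that miss the deadline, which approximates the schedule-slippage risk. The tasks, estimates, and deadline below are illustrative, not drawn from any real program.

```python
# Sketch: Monte Carlo schedule risk from three-point task estimates.
# Tasks, durations (weeks), and the deadline are illustrative only.
import random

tasks = {                       # (low, most likely, high)
    "Requirements": (2, 4, 8),
    "Design": (4, 6, 12),
    "Build & Test": (6, 10, 20),
}
deadline = 24
runs = 50_000

overruns = 0
for _ in range(runs):
    total = sum(random.triangular(low, high, mode)
                for low, mode, high in tasks.values())
    if total > deadline:
        overruns += 1

print(f"P(schedule slips past {deadline} weeks) = {overruns / runs:.1%}")
```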

Another area often identified as a future need is the Model-Based Review (MBR). An MBR means never actually printing out the program documentation; instead, reviewers inspect the model. But most reviewers will not be experts in either the modeling or the tool. To accommodate them, Innoslate provides documentation in its Documents View, so reviewers can read the documents within Innoslate and provide comments using the commenting tool. All you have to do is provide the reviewers with: 1) access to the tool; 2) a hyperlink to the document(s) you want them to read; and 3) an explanation of how to use the sidebar to provide comments. Innoslate has been used for over five years to conduct these kinds of reviews, and has even been used in a training course 11 for all the NASA milestone reviews, demonstrating that this technique can be applied throughout the lifecycle.

Summary

These updates in the evolution of systems engineering provide a mechanism to propel the discipline into the future of digital engineering. LML provides a strong ontology and diagramming framework for modeling complex systems, including those that use AI technologies. As a cloud-native tool, Innoslate implements LML and goes beyond the current standard to provide a seamless, integrated, collaborative environment in which program managers, systems engineers, and other stakeholders can work together and produce the necessary legal products required by any lifecycle process.

About the Author

Dr. Steven H. Dam is the President and Founder of Systems and Proposal Engineering Company (SPEC Innovations). Dr. Dam has a BS in Physics from George Mason University and a Ph.D. in Physics from the University of South Carolina. He has been involved with structured analysis, software development, and systems engineering for over 30 years. He participated in the development of the C4ISR Architecture Framework and DoD Architecture Framework (DoDAF), the Defense Airborne Reconnaissance Office (DARO) Vision Architecture, the Business Enterprise Architecture (BEA), and the Net-Centric Enterprise Services (NCES) architecture. He is a long-time member of INCOSE and served as President of the San Diego Chapter before relocating to the Washington Metropolitan Area (WMA). Dr. Dam has presented numerous papers and seminars to the WMA Chapter, of which he is a Past President and the current Programs Chair.

 

Kind regards from the PPI SyEN team:

Robert Halligan, Editor-in-Chief, email: rhalligan@ppi-int.com

Ralph Young, Editor, email: ryoung@ppi-int.com

René King, Managing Editor, email: rking@ppi-int.com

Project Performance International

2 Parkgate Drive, Ringwood, Vic 3134 Australia

Tel: +61 3 9876 7345

Fax: +61 3 9876 2664

Tel Brasil: +55 12 9 9780 3490

Tel UK: +44 20 3608 6754

Tel USA: +1 888 772 5174

Tel China: +86 188 5117 2867

Web: www.ppi-int.com

Email: contact@ppi-int.com

Copyright 2012-2019 Project Performance (Australia) Pty Ltd, trading as
Project Performance International

Tell us what you think of PPI SyEN. Email us at syen@ppi-int.info.

 

  1. Presentation by Dr. Michael Ryschkewitsch and Mr. Stephen Kapurch, NASA Office of Chief Engineer, at the Conference on Systems Engineering Research (CSER), April 15, 2011.
  2. From the SYST 505 Course by Dr. Peggy Brouse, George Mason University.
  3. INCOSE Charter for Future of Systems Engineering, FuSE Charter V1.2, December 3, 2018.
  4. Shortell, “FuSE (Future of Systems Engineering) Town Hall,” INCOSE 2019 International Workshop, Torrance, CA, US, 28 January 2019.
  5. See www.lifecyclemodeling.org for more information concerning LML and the LML Steering Committee.

  6. See the May 2019 issue of the Project Performance International Systems Engineering Newsletter (PPI SyEN) for an article by Dr. Vaneman entitled “Model-based Systems Engineering De-Mystified,” available at https://www.ppi-int.com/ppisyen77/.
  7. See www.lifecyclemodeling.org.
  8. Essential LML: Lifecycle Modeling Language: a Thinking Tool for Capturing, Connecting and Communicating Complex Systems, Warren Vaneman, Ph.D., et al., SPEC Innovations, 2018.
  9. Concordance is defined by Dr. Vaneman as “the ability to represent a single entity such that data in one view, or level of abstraction, matches the data in another view, or level of abstraction, when talking about the exact same thing.” From “Organizational Considerations for Effective Model-Based Systems Engineering Implementation,” presented at the 21st Annual NDIA Systems Engineering Conference, October 22-25, 2018.
  10. Note that although SysML does not have a defined ontology, many of the common words used, such as Activity and Actor, can be easily related to LML classes.
  11. “Practical Lessons Learned from the Application of Model-Based Systems Engineering (MBSE) to Space Project Design Reviews,” by Dr. Jerry Jon Sellers, presented to the NDIA Modeling and Simulation Committee, February 2014.