PPI SyEN 60 A1

 

A Convergence Model for Systems Engineering

by 

Michael Gainford

MA(Eng), FRAeS, CEng, MSc, ESEP, MINCOSE

Abstract

Readers of the Systems Engineering Newsletter (SyEN) are invited to participate in helping to refine concepts concerning a convergence model for systems engineering. There are at least three questions of interest:

  1. Is a convergence model for systems engineering potentially useful?
  2. If so, why and to whom, and how could it be improved?
  3. Is it original? (If not, I apologize to the authors whose work I have not read, and hope that SyEN readers can point me to relevant work so that I can give it credit.)

Several development models are well-known to SyEN readers (including the “V” or “Vee” model, iterative, incremental, spiral [2], and agile [9]). They all have their uses, and some have their problems. Over a number of years, the author has, in a rather unconscious and unresearched way, developed a different way of communicating what goes on during a systems engineering endeavor.

The so-called convergence model “does what it says on the tin”, and is launched on the unsuspecting SyEN readership as a crowd experiment. (The author grew up in the era of analogue computers but likes to give the impression that he can keep up with the times.)

Simply put:

  • There is a Problem (P) to be solved;
  • There is a Solution (S) to be created, deployed, operated, maintained, and retired;
  • We use Evaluation (E) to assess the match between P and S;
  • We systems engineers push for convergence between P and S.

The proposed model potentially offers benefits in terms of communication, cohesive decision-making, and planning for verification and validation.

Copyright © 2017 by Michael Gainford.  All rights reserved. Used with permission.

  1. Introduction

The author has been a practicing systems engineer and project manager for several decades. During that time he has seen a range of development models used, and always felt that there was something lacking about them. The proposed convergence model can potentially address some of those concerns.

  2. Philosophical origins

The convergence model may be attractive because it resonates with how we human beings think about things. This assertion is explained in paragraphs 2.1, 2.2, and 2.3.

An allegorical tale in 2.4 may help to make the philosophy more tangible.

2.1 How we think, in general

The philosopher John Dewey, in How we Think [4], describes a five-step model for how we engage in reflective thinking. Figure 1 aims to represent John Dewey’s text in diagrammatical form.

We feel a difficulty, or tension, and this incites us to think about the situation. The Dewey text deals mostly with finding explanations for things, and believing or not believing the suggested explanations.

2.2 How we think, while solving problems

Arthur Hall, in a relatively early systems engineering textbook [5], developed this into the more specific case of where we desire to find solutions to problems. His representation, based on another work by John Dewey [6], is reproduced in Figure 2.

2.3 How we think, in relation to the convergence model

A static view of the convergence model, which is a further development of Hall’s representation, is shown in Figure 3.

The problem statement corresponds to Dewey’s “felt difficulty” or “tension”. In systems engineering, this incites us to do more than just understand a situation; it incites us to do something about it. In this article we do not need to debate the vocabulary of “problem”, “opportunity”, or “need”, and for simplicity will use “problem” to cover any situation that demands action. We remember that what is good news for one stakeholder may be bad news for another. (A new high-speed train line may be good for business people, but less palatable to the couple who devoted 30 years of their life to developing a beautiful garden at the side of the proposed track.)

In a similar way, to simplify the text and the diagrams, we may use “Solution” to cover definitions of proposed solutions, or the solutions themselves.[1]

2.4 An allegorical tale

“One night as I was trying to go to sleep I heard some scratching noises in the attic. They persisted for about 10 minutes but then stopped. My anxiety about this made it difficult to sleep, but I resolved to buy a few “humane” mouse traps in the morning, and after a while was able to “drift off”.

The next morning I installed the mouse traps, hoping to have solved the problem. During the day, I started to think about the possible outcomes of this experiment, and I looked into my systems engineering toolbox to find one of my favorite analysis tools: the “4-box chart” (see Figure 4).

I then realized that when evaluating the problem, the solution, and the match between problem and solution, I was not going to make much progress based on my impetuous decision-making approach. I also started to think about root-cause analysis; the scratching was only a symptom, but what was the cause? I needed to put forward alternative solutions and do a trade study, but without knowing the problem I was stuck. Maybe the problem was not the noises themselves, but the fact that I was hearing them (solution path: doctor – audiologist – psychiatrist).

By now it was obvious that my confidence in the evaluation would be very low, but how much confidence would I need? That would depend on the impact of failing to deal with the situation. If mice could chew through cables and cause a fire I would need very high confidence. If I wanted to sell the property and was concerned about surveyors finding mice in the attic, a moderate level of confidence might suffice. If I could get used to scratching sounds whilst falling asleep, a low level of confidence would be enough.”

Reflecting on the above, we need to establish our level of confidence in all three of the problem, the evaluation, and the solution. In the author’s experience:

  • It is “business as usual” for most businesses to check their confidence in the solution (even though they may be hazy about understanding the problem);

  • There has been an increasing trend for businesses to appreciate the need to check their confidence in the problem statement, and some businesses do this in a systematic and effective way;

  • Relatively few businesses have a systematic approach to determining and demonstrating the required confidence level in the evaluation. This consideration is vitally important to drawing up an effective plan for verification and validation[2]. Halligan offers a methodology for doing this with the Verification Requirements Specification [7, 8].

  3. Development models: synopsis

We use development models as a communications tool and to govern how we progress through our development activities in an orderly fashion. Selection of the most appropriate development model for your project deserves careful attention in the planning stage. Having selected and tailored it:

  • It needs to be well-communicated;
  • Processes have to be written to align with that choice.

The ubiquitous V-model [3] is strong on showing logical dependencies between work products as we decompose and recompose (the architecture depends logically on requirements; solution elements depend on element specifications; etc.). In a V-diagram, the vertical axis shows levels of decomposition, which is a powerful communications tool. But what does the horizontal axis show? Only if we have a truly sequential project does it show time. Halligan [1] suggests that we can more generally think of it as a time trend.

There seems to be a natural human compulsion to read time into the V-diagram, and lots of variants have been produced to overcome this. However hard we try to explain that time dependency is not necessarily the same as logical dependency, there is a risk that the V-diagram confuses people, losing some of its power as a communications tool.

The spiral model [2] acknowledges that we can explore the Problem and the Solution in parallel, checking in at the end-of-loop gates to see how they match. This has many attractions, but doesn’t quite satisfy the craving to read time into the diagram: we have to imagine unravelling the spiral into a straight line. When presented as a spiral, the radial direction corresponds to a cumulative increase in cost incurred.

  4. Another representation: a convergence model

4.1 The match between problem and solution

Figure 5 illustrates how the convergence model represents the match between problem and solution, at a given point in time, for some selected cases.

Cases 1 and 2 represent two different problem-solution pairs when the confidence in the evaluation is high (a short bar for “E”). Although in each case there may be more work to do to improve the match, we can be confident that we understand where the issues lie, which will lead to better-informed decision-making.

For Case 3, the confidence in the evaluation is low (a long bar for “E”). In this case, the match between problem and solution is unclear; it could be anywhere on the following spectrum:

  • Worst case (no match) – Case 3a;
  • Most likely (some match) – Case 3b;
  • Best case (full match with some gold plating and some waste) – Case 3c.

In the author’s experience, most Gate reviews (which are major decision points in the system life cycle) have only asked about Case 3b.
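The spectrum of Cases 3a–3c can be made concrete with a small sketch. The following Python snippet is purely illustrative and not part of the model as published: it treats P and S as intervals on the effectiveness axis and the E bar as an uncertainty half-width, then reports the worst-case, most-likely, and best-case match.

```python
# Illustrative sketch only: P and S as intervals on the "effectiveness"
# axis; the evaluation uncertainty e (the E bar) widens the range of
# plausible P-S matches, mirroring Cases 3a, 3b, and 3c.

def overlap(a, b):
    """Length of the overlap between two closed intervals (lo, hi)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def match_range(p, s, e):
    """Worst-case, nominal, and best-case match between problem p and
    solution s, given evaluation uncertainty e (half-width of the E bar).
    Match is the fraction of the problem interval covered."""
    nominal = overlap(p, s) / (p[1] - p[0])
    worst = max(0.0, nominal - e)   # Case 3a: possibly no match
    best = min(1.0, nominal + e)    # Case 3c: possibly full match
    return worst, nominal, best

# High confidence in E (short bar) -> narrow match range (Cases 1 and 2):
print(match_range((0.0, 1.0), (0.2, 0.9), e=0.05))
# Low confidence in E (long bar) -> the match could be anywhere from
# "no match" to "full match" (Case 3):
print(match_range((0.0, 1.0), (0.2, 0.9), e=0.5))
```

A gate review that asks only for the nominal figure is, in effect, asking only about Case 3b.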

4.2 How the match changes with time; the convergence model

Figure 6 illustrates the fundamental concepts behind the convergence model:

  • “P” represents the Problem;
  • “S” represents the Solution;
  • “E” represents our Evaluation of the match between P and S;
  • The horizontal axis represents time (yes, really!);
  • The vertical axis represents effectiveness (in an admittedly abstract way);
  • The heights of the bars represent our uncertainty in expressing P, S, and E;
  • As time progresses, we aim to achieve convergence between P and S, whilst reducing our uncertainty in P, S, and E;
  • A number of evaluation points, reviews, and gates are indicated; these feed into the project decision-making processes;
  • When the solution “goes into service” (or whatever the equivalent is), there may still be discrepancies between P and S, but we will be very confident that we understand them (a short bar for E);
  • The bars indicate that there is hopefully overlap between P and S. There can still be areas of P unaddressed, and areas of S that do not strictly serve a purpose, but as our confidence in E improves we will understand these better;
  • There could be some “gold plating” (see Figure 5 Case 2), but this possibility has been omitted from Figure 6;
  • E includes elements of both validation and verification (see Footnote 2);
  • The suggested downward trend in the mid-point of the “P” bar has only been used to give the diagram a symmetrical visual appeal. Of course there are many projects where the stakeholders are persuaded to reduce their aspirations during a project, but it doesn’t have to be that way. Alternatively the mid-point of “P” could be flat, with the solution climbing to meet it, or it could climb as opportunities are found to go beyond the original aspiration.
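The convergence story in the bullets above can be caricatured in a few lines of code. This toy illustration is the author-of-this-sketch's invention, not part of the published model, and the numbers are made up: at each gate the P and S midpoints move toward each other while the uncertainty bars for P, S, and E shrink.

```python
# Toy illustration (invented numbers) of the Figure 6 convergence story:
# at each gate, the P and S midpoints converge and the P/S/E uncertainty
# bars shrink.

def converge(p_mid, s_mid, p_unc, s_unc, e_unc, gates, rate=0.5):
    """Yield (gate, P-S gap, (P, S, E uncertainties)) gate by gate."""
    for gate in range(1, gates + 1):
        yield gate, abs(p_mid - s_mid), (p_unc, s_unc, e_unc)
        # Each gate closes part of the remaining gap and reduces uncertainty.
        midpoint = (p_mid + s_mid) / 2
        p_mid += (midpoint - p_mid) * rate
        s_mid += (midpoint - s_mid) * rate
        p_unc *= (1 - rate)
        s_unc *= (1 - rate)
        e_unc *= (1 - rate)

for gate, gap, (pu, su, eu) in converge(1.0, 0.2, 0.4, 0.5, 0.6, gates=5):
    print(f"Gate {gate}: P-S gap={gap:.3f}  unc P={pu:.2f} S={su:.2f} E={eu:.2f}")
```

The variations in section 4.3 amount to perturbing this picture: a new requirement bumps the P midpoint and the P and E uncertainties back up, a solution-led change bumps S and E, and convergence then resumes.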

 

4.3 Variations on the theme

Many development model stories can be explained using the fundamental concepts in Figure 6. A few examples are provided in the following sections. Readers are invited to try out their own!

4.3.1 S-heartbeat faster than P-heartbeat

Figure 7 depicts a project where significant effort goes into baselining a set of requirements (P) early on. There are subsequently several iterations in S before P gets re-baselined. Each evaluation (E) of the P-S gap is done with respect to the early baseline of requirements.

We are rather trusting that the revision of requirements at Gate 4 is just a tidy-up!

4.3.2 In-service problem-led change

Figure 8 depicts what happens if (when) a new requirement comes along in service. At Gate 3 the solution is the same, but three things instantly increase:

  • The mismatch between P and S
  • Our uncertainty in P
  • Our uncertainty in E (we can’t be sure to understand the mismatch if we don’t understand the problem)

At Gate 4, we have a modified solution that increases uncertainty in S, and potentially in E.

We then use our systems engineering toolkit to achieve convergence by Gate 5.

4.3.3 In-service solution-led change

Figure 9 shows what happens with an in-service Solution-led change. This could typically happen if solution problems emerge in service, or if components become obsolete.

At Gate 3:

  • P is initially unchanged;
  • We know that the mismatch between P and S has increased (because we had a problem);
  • The solution is changed;
  • Our uncertainty in S and E increases.

At Gate 4:

  • The change control board spots an “opportunity” to introduce new requirements or “improve” existing ones (“we are opening the system up anyway”). This could be on the same day as Gate 3, or the day after. Alternatively, the change control board could resist the temptation.

4.3.4 Incremental and iterative approaches

For the purposes of this discussion, the author does not attempt to distinguish between “incremental” and “iterative” approaches (readers may not be unanimous on that issue, but it doesn’t matter for the purposes of this paper).

Figure 10 tells a story that might be described as incremental, or perhaps as something else. Hopefully readers can see how to tailor it to their understanding of such development approaches.

Imagine that at Gate 1, we have an initial problem understanding (P) and an idea of a solution concept (S). We then develop an early version of a solution (or a particular element of it), with reference to the initial P. When we get to Gate 2, we share the early solution with stakeholders. They like some aspects of it, there are bits they don’t like, and often they get excited to see that it can do things they had never imagined. They use an early version of S to help them articulate their needs, so uncertainty in P instantly goes up. Then we realize that some aspects of S aren’t relevant, and some are now missing, so uncertainty in S also increases.

We can imagine this scenario repeating itself a few times before ultimately getting to Gate 4, where everyone agrees there is sufficient match between P and S, given our confidence in E.

4.3.5 Combining development approaches

A system, by definition, comprises a number of elements. The optimum development model choice for the elements may not be the same as the optimum choice for the system. Hence a system development model may need to accommodate a mix of element-level development model approaches that are not necessarily synchronized.

Figure 11 offers a way of telling this story.

As depicted by the red dots in the diagram, the Gate 2 build standard at system level has to consist of valid combinations of element-level configurations. This is not necessarily the latest version of each element “on the day”. Halligan’s “Wedge” model [1] illustrates this in another way, using a 3-dimensional version of the “V”.
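
The "valid combinations" point can be sketched in code. The element names, versions, and compatibility data below are entirely invented for illustration; the point is only that the "latest on the day" mix of element configurations is not automatically a valid system-level build standard.

```python
# Hypothetical sketch: a system-level build standard is a set of
# element-level configurations, and only some combinations are valid.
# All names and versions here are invented.

# Which element versions are known to work together (assumed data):
COMPATIBLE = {
    ("sensor", "v3"): {("processor", "v2"), ("processor", "v3")},
    ("sensor", "v4"): {("processor", "v3")},  # newest sensor needs newest processor
}

def valid_build(elements):
    """True if every applicable compatibility constraint is satisfied."""
    chosen = set(elements.items())
    for key, allowed_partners in COMPATIBLE.items():
        if key in chosen and not (allowed_partners & chosen):
            return False
    return True

print(valid_build({"sensor": "v4", "processor": "v2"}))  # incompatible mix
print(valid_build({"sensor": "v3", "processor": "v2"}))  # a valid combination
```

In practice such constraints are managed through configuration control rather than a lookup table, but the sketch captures why the red dots in Figure 11 cannot simply be placed at the newest version of every element.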

The diagram shows that selection of the optimum development model at system level is not a straightforward task. There is clearly a benefit in being able to align the element-level development models to the extent that this can be done without forcing a burden onto the element suppliers.

The reader is now asked to imagine combining Figure 6 with Figure 11, and to use Figure 6 as an alternative representation of the concepts behind the spiral model. Figure 6 can accommodate a whole range of element-level models as in Figure 11. Suggestion: the spiral model is an over-arching representation that can combine a range of element-level models, and the mix of element-level models can change as we go through a project. For example, an element that started out as incremental may switch to sequential once the risks in P and S are sufficiently low.

  5. Feedback request

The author would be delighted to receive any feedback at Michael.Gainford@es.catapult.org.uk.

Some questions:

  1. Is a convergence model for systems engineering potentially useful?
  2. If so, why and to whom, and how could it be improved?
  3. Is it original? (If not, please point me to relevant work so that I can give it credit.)
  4. Did you have a go at representing your project using the convergence model and if so, what did it look like?

  6. Summary and conclusions

The proposed convergence model may have benefits in how we communicate our development approaches. Readers have been asked to comment, and to provide ideas for its further development.

The main benefits claimed for the convergence model are as follows:

  1. It seems to resonate with “the way we think”;
  2. It integrates the consideration of uncertainty in Problem, Solution, and Evaluation, thus providing a more cohesive input to decision-making;
  3. The consideration of uncertainty in Evaluation should be a driver for effective verification and validation planning;
  4. With further work, there is at least potential for the vertical axis of the model to have some qualitative/quantitative meaning related to effectiveness, and the horizontal axis represents elapsed time. Other well-known development models are perhaps less tangible in terms of the meanings of the axes.

List of acronyms used in this paper

Acronym        Explanation

E              Evaluation

P              Problem

S              Solution

Acknowledgements

My thanks are due to:

  • Philip J. Wilkinson (Rolls-Royce), who showed me Arthur Hall’s model (Figure 2), well over a decade ago;
  • Ian Gallagher (Altran), who reviewed a much earlier version of these ideas, a few years ago;
  • Robert Halligan, Ralph Young, and Alwyn Smit (Project Performance International), who all provided valuable inputs into the current article.

References

[1] Halligan, Robert J. “Beyond the ‘Vee’ Model: The Wedge Model”. Systems Engineering Newsletter (SyEN 57), September 18, 2017. Available at www.ppi-int.com/systems-engineering/SYEN57-a2.php. Accessed October 29, 2017.

[2] Boehm, Barry W. “A Spiral Model of Software Development and Enhancement”, 1988. Available at http://csse.usc.edu/TECHRPTS/1988/usccse88-500/usccse88-500.pdf. Accessed October 26, 2017.

[3] Weilkiens, Tim, Jesko G. Lamm, Stephan Roth, and Markus Walker. “The V-Model.” Model-Based System Architecture, First Edition, 2016, John Wiley & Sons. Available at http://onlinelibrary.wiley.com/doi/10.1002/9781119051930.app2/pdf. Accessed October 27, 2017.

[4] Dewey, John, “How we Think”, 1909, BiblioBazaar, ISBN-10: 1110296037.

[5] Hall, Arthur D., “A Methodology for Systems Engineering”, 1962, D. Van Nostrand, Princeton, New Jersey.

[6] Dewey, John, “Logic, the Theory of Inquiry”, 1938, Henry Holt and Co., New York.

[7] Halligan, Robert J. “The Business Case for Requirements Engineering”. April 2014. Available at http://www.ppi-int.com/systems-engineering/free%20resources/P1343-005258-2%20The%20Business%20Case%20for%20Requirements%20Engineering%20140409.pdf. Accessed October 29, 2017.

[8] Project Performance International. Data Item Description (DID), Verification Requirements Specification. Identification Number PPA-003914-3, 30 May 2012. Available at www.ppi-int.com/systems-engineering/free%20resources/PPA-003914-3%20(VRS)%20120530.pdf. Accessed October 29, 2017.

[9] Taymor, Emmerson. “Agile Handbook”. Available at http://agilehandbook.com/agile-handbook.pdf. Accessed November 15, 2017.

[1] Furthermore, a better word might be “response”, because aspects of the response that do not match the need cannot strictly be called “solution”. Readers are asked to overlook this technicality for now.

[2] SyEN readers may not agree on the distinctions between verification and validation, but for the purposes of this article that is immaterial as long as both are covered!

PPA-006840-1
