Some Future Software Engineering Opportunities and Challenges

Barry Boehm

University of Southern California,
Los Angeles, CA 90089-0781


Abstract. This paper provides an update and extension of a 2006 paper, "Some Future Trends and Implications for Systems and Software Engineering Processes," Systems Engineering, Spring 2006. Some of its challenges and opportunities are similar, such as the need to simultaneously achieve high levels of both agility and assurance. Others have emerged as increasingly important, such as the challenges of dealing with ultralarge volumes of data, with multicore chips, and with software as a service. The paper is organized around eight relatively surprise-free trends and two "wild card" trends whose nature and implications are harder to foresee. The eight surprise-free trends are:
1. Increasing emphasis on rapid development and adaptability;
2. Increasing software criticality and need for assurance;
3. Increased complexity, global systems of systems, and need for scalability and interoperability;
4. Increased needs to accommodate COTS, software services, and legacy systems;
5. Increasingly large volumes of data and ways to learn from them;
6. Increased emphasis on users and end value;
7. Computational plenty and multicore chips;
8. Increasing integration of software and systems engineering.
The two wild-card trends are:
9. Increasing software autonomy; and
10. Combinations of biology and computing.

1. Introduction

Between now and 2025, the ability of organizations and their products, systems, and services to compete, adapt, and survive will depend increasingly on software. As is being seen in current products (automobiles, aircraft, radios) and services (financial, communications, defense), software provides both competitive differentiation and rapid adaptability to competitive change. It facilitates rapid tailoring of products and services to different market sectors, and rapid and flexible supply chain management. The resulting software-intensive systems face ever-increasing demands to provide safe, secure, and reliable systems; to provide competitive discriminators in the marketplace; to support the coordination of multi-cultural global enterprises; to enable rapid adaptation to change; and to help people cope with complex masses of data and information. These demands will cause major differences in the processes currently used to define, design, develop, deploy, and evolve a diverse variety of software-intensive systems.

This paper is an update of one written in late 2005 called "Some Future Trends and Implications for Systems and Software Engineering Processes." One of its predicted trends was an increase in rates of change in technology and the environment. A good way to calibrate this prediction is to identify the currently significant trends that the 2005 paper failed to predict. These include:

·  Use of multicore chips to compensate for the slowdown in Moore’s Law rates of microcircuit speed increase—these chips keep computing operations per second on the Moore’s Law curve, but create formidable problems in converting efficient sequential software programs into efficient parallel programs [Patterson, 2010] (see the sketch following this list);

·  The explosion in sources of electronic data and ways to search and analyze them, such as search engines and recommender systems [Adomavicius-Tuzhilin, 2005];

·  The economic viability and growth in use of cloud computing and software as a service [Cusumano, 2004]; and

·  The ability to scale agile methods up to 100-person Scrums of Scrums, under appropriate conditions [Boehm et al., 2010].
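As a concrete illustration of the multicore challenge in the first bullet, consider the minimal Python sketch below. Its function names and chunking scheme are illustrative assumptions, not drawn from the paper; it shows the sequential-to-parallel transition in the easiest possible case, where the computation decomposes into independent chunks, a property most efficient sequential programs lack.

```python
# Minimal sketch: parallelizing an "embarrassingly parallel" computation.
# Real programs rarely decompose this cleanly, which is the core problem.
from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(numbers):
    """Straightforward sequential version."""
    return sum(n * n for n in numbers)

def chunk_sum(chunk):
    # Helper executed in a worker process; must be picklable (module-level).
    return sum(n * n for n in chunk)

def parallel_sum_of_squares(numbers, workers=4):
    """Parallel version: correct only because the chunks are independent."""
    size = max(1, len(numbers) // workers)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(chunk_sum, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert sum_of_squares(data) == parallel_sum_of_squares(data)
```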

The original paper identified eight relatively surprise-free future trends whose interactions presented significant challenges, plus two wild-card trends whose impact is likely to be large but whose nature and realizations are hard to predict. This paper has revised the eight “surprise-free” trends to reflect the new trends above, but it has left the two wild-card trends in place, as they remain important but no more predictable.

2. Future Software Engineering Opportunities and Challenges

2.1 Increasing emphasis on rapid development and adaptability

The increasingly rapid pace of systems change discussed above translates into an increasing need for rapid development and adaptability in order to keep up with one’s competition. A good example was Hewlett-Packard’s recognition that its commercial product lifetimes averaged 33 months, while its software development times per product averaged 48 months. This led to an investment in product line architectures and reusable software components that reduced software development times by a factor of four, from 48 down to 12 months [Grady, 1997].

Another response to the challenge of rapid development and adaptability has been the emergence of agile methods [Beck, 1999; Highsmith, 2000; Cockburn, 2002; Schwaber-Beedle, 2002]. Our original [Boehm-Turner, 2004] analysis of these methods found them generally not able to scale up to larger products. For example, Kent Beck says in [Beck, 1999], “Size clearly matters. You probably couldn’t run an XP (eXtreme Programming) project with a hundred programmers. Not fifty. Not twenty, probably. Ten is definitely doable.”

However, over the last decade, several organizations have been able to scale up agile methods by using two layers of 10-person Scrum teams. This involves, among other things, having each Scrum team’s daily stand-up meeting followed by a daily stand-up meeting of the Scrum team leaders, along with up-front investments in an evolving system architecture. We have analyzed several of these projects and organizational initiatives in [Boehm et al., 2010]; a successful example and a partial counterexample are provided next.

The successful example is provided by a US medical services company with over 1000 software developers in the US, two European countries, and India. The corporation was on the brink of failure, due largely to its slow, error-prone, and incompatible software applications and processes. A senior internal technical manager, expert in both safety-critical medical applications and agile development, was commissioned by top management to organize a corporate-wide team to transform the company’s software development approach. In particular, the team was to address agility, safety, and Sarbanes-Oxley governance and accountability problems.

Software technology and project management leaders from all of its major sites were brought together to architect a corporate information framework and develop a corporate architected-agile process approach. The resulting Scrum of Scrums approach was successfully used in a collocated pilot project to create the new information framework while maintaining continuity of service in their existing operations.

Based on the success of this pilot project, the team members returned to their sites and led similar transformational efforts. Within three years, they had almost 100 Scrum teams and 1000 software developers using compatible and coordinated architected-agile approaches. Customers and marketers were involved throughout, and expectations were managed via the pilot project. The release management approach included a 2–12 week architecting Sprint Zero, a series of 3–10 one-month development Sprints, a Release Sprint, and 1–6 months of beta testing; the next release’s Sprint Zero overlapped the Release Sprint and beta testing. Their agile Scrum approach involved a tailored mix of eXtreme Programming (XP) and corporate practices, 6–12 person teams with dedicated team rooms, and global teams with wiki and daily virtual meeting support—working as if located next door. Figure 1 shows this example of the Architected Agile approach.

Figure 1. Example of Architected Agile Process

Two of the other success stories had similar approaches. However, circumstances may require different tailorings of the architected-agile approach. Another variant analyzed was an automated maintenance system whose Scrum teams were aligned with different stakeholders whose objectives diverged in ways that could not be reconciled by daily stand-up meetings. The project recognized this and evolved to a more decentralized Scrum-based approach, with centrifugal tendencies monitored and resolved by an empowered Product Framework Group (PFG) consisting of the product owners and technical leads from each development team, plus the project systems engineering, architecting, construction, and test leads. The PFG meets near the end of each iteration to assess progress and problems, and to steer the priorities of the upcoming iteration by writing new backlog items and reprioritizing the product backlog. A few days after the start of the next iteration, the PFG meets again to compare what was planned with what is needed, and to make the necessary adjustments. This approach has been working much more successfully.
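The PFG’s between-iteration mechanics can be pictured with a small sketch. The Python fragment below is a schematic assumption rather than the project’s actual tooling; BacklogItem, its fields, and the value-times-risk priority rule are all illustrative choices in the spirit of risk-driven prioritization.

```python
# Illustrative sketch (assumed names) of PFG-style backlog reprioritization.
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    business_value: int  # renegotiated by the product owners
    risk_exposure: int   # assessed by the technical leads
    done: bool = False

class ProductBacklog:
    """Holds prioritized backlog items across the project's Scrum teams."""
    def __init__(self):
        self.items: list[BacklogItem] = []

    def add(self, item: BacklogItem) -> None:
        self.items.append(item)

    def reprioritize(self) -> None:
        # PFG end-of-iteration step: surface high-value, high-risk items first.
        self.items.sort(key=lambda i: i.business_value * i.risk_exposure,
                        reverse=True)

    def next_iteration(self, capacity: int) -> list[BacklogItem]:
        # Select the top unfinished items for the upcoming iteration.
        return [i for i in self.items if not i.done][:capacity]
```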

2.2 Increasing Criticality and Need for Assurance

A main reason that products and services are becoming more software-intensive is that software can be adapted to change more easily and rapidly than hardware. A representative statistic is that the percentage of functionality on modern aircraft determined by software had increased to 80% by 2000 [Ferguson, 2001]. Although people, systems, and organizations are becoming increasingly dependent on software, the major current challenge in achieving system dependability is that dependability is generally not the top priority for software producers. In the words of the 1999 (U.S.) President’s Information Technology Advisory Council (PITAC) Report, “The IT industry spends the bulk of its resources, both financial and human, on rapidly bringing products to market” [PITAC, 1999].

This situation will likely continue until a major software-induced systems catastrophe similar in impact on world consciousness to the 9/11 World Trade Center catastrophe stimulates action toward establishing accountability for software dependability. Given the high and increasing software vulnerabilities of the world’s current financial, transportation, communications, energy distribution, medical, and emergency services infrastructures, it is highly likely that such a software-induced catastrophe will occur between now and 2025.

Process strategies for highly dependable software-intensive systems, and many of the techniques for addressing their challenges, have been available for quite some time. A landmark 1975 conference on reliable software included papers on formal specification and verification processes; early error elimination; fault tolerance; fault tree and failure modes and effects analysis; testing theory, processes, and tools; independent verification and validation; root cause analysis of empirical data; and use of automated aids for defect detection in software specifications and code [Boehm-Hoare, 1975]. Some of these were adapted from existing systems engineering practices; some were developed for software and adapted for systems engineering.

These have been used to achieve high dependability on smaller systems and some very large self-contained systems such as the AT&T telephone network [Musa, 1999]. Also, new strategies have been emerging to address the people-oriented and value-oriented challenges discussed in Section 2.1. These include the Personal and Team Software Processes [Humphrey, 1997; 2000], value/risk-based processes for achieving dependability objectives [Gerrard, 2002; Huang, 2005], and value-based systems engineering processes such as Lean Development [Womack-Jones, 1996].

Many traditional assurance methods, such as formal methods, are limited by slow execution, reliance on scarce expert developers, and poor adaptability (often requiring correctness proofs to be redone after a requirements change). More recently, some progress has been made in strengthening assurance methods and making them more adaptable. Examples are the use of the ecological concept of “resilience” as a way to achieve both assurance and adaptability [Hollnagel, 2006; Jackson, 2009]; the use of more incremental assurance cases for reasoning about safety, security, and reliability [ISO, 2009]; and the development of more incremental correctness proof techniques [Yin-Knight, 2010].

2.2.1 An Incremental Development Process for Achieving Both Agility and Assurance

Simultaneously achieving high assurance levels and rapid adaptability to change requires new approaches to software engineering processes. Figure 2 shows a single increment of the incremental evolution portion of such a model, as presented in the [Boehm, 2006] paper and subsequently adapted for use in several commercial organizations needing both agility and assurance. It assumes that the organization has developed:

·  A best-effort definition of the system’s steady-state capability;

·  An incremental sequence of prioritized capabilities culminating in the steady-state capability; and

·  A Feasibility Rationale providing sufficient evidence that the system architecture will support the incremental capabilities, that each increment can be developed within its available budget and schedule, and that the series of increments creates a satisfactory return on investment for the organization and mutually satisfactory outcomes for the success-critical stakeholders.

Figure 2. The Incremental Commitment Spiral Process Model: Increment Activities
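To make the assumed artifacts concrete, here is a deliberately minimal Python sketch; the Increment fields and the REQUIRED_EVIDENCE strings are our illustrative assumptions, not the model’s defined notation.

```python
# Schematic sketch (assumed structure) of the artifacts the model presumes:
# an incremental sequence of capabilities, each with feasibility evidence.
from dataclasses import dataclass, field

@dataclass
class Increment:
    capability: str       # one step toward the steady-state capability
    budget: float         # available budget for this increment
    schedule_months: int  # available schedule for this increment
    evidence: list = field(default_factory=list)  # feasibility evidence items

REQUIRED_EVIDENCE = {
    "architecture supports capability",
    "developable within budget and schedule",
    "satisfactory stakeholder outcomes",
}

def feasibility_rationale_holds(increments) -> bool:
    # Crude stand-in for the Feasibility Rationale: every increment must
    # carry all three kinds of evidence listed in the bullets above.
    return all(REQUIRED_EVIDENCE <= set(inc.evidence) for inc in increments)
```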

In Balancing Agility and Discipline [Boehm-Turner, 2004], we found that rapid change comes in two primary forms. One is relatively predictable change that can be handled by the plan-driven Parnas strategy [Parnas, 1979] of encapsulating sources of change within modules, so that the effects of changes are largely confined to individual modules. The other is relatively unpredictable change that may appear simple (such as adding a “cancel” or “undo” capability [Bass-John, 2003]), but often requires a great deal of agile adaptability to rebaseline the architecture and incremental capabilities into a feasible solution set.
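A minimal sketch of the Parnas strategy follows, assuming a hypothetical RecordStore interface (the names are ours, not from [Parnas, 1979]): the module boundary is drawn around the design decision most likely to change, so a change to that decision stays inside one module.

```python
# Information hiding: clients depend on the stable interface, never on the
# storage representation, which is the anticipated source of change.
import json

class RecordStore:
    """Stable interface seen by client code."""
    def save(self, key: str, record: dict) -> None:
        raise NotImplementedError

    def load(self, key: str) -> dict:
        raise NotImplementedError

class JsonFileStore(RecordStore):
    """Encapsulated design decision: JSON-file storage. Replacing this with
    a database-backed implementation would not ripple into client code."""
    def __init__(self, path: str):
        self.path = path
        try:
            with open(path) as f:
                self._data = json.load(f)
        except FileNotFoundError:
            self._data = {}

    def save(self, key: str, record: dict) -> None:
        self._data[key] = record
        with open(self.path, "w") as f:
            json.dump(self._data, f)

    def load(self, key: str) -> dict:
        return self._data[key]
```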

The need to deliver high-assurance incremental capabilities on short fixed schedules means that each increment needs to be kept as stable as possible. This is particularly the case for very large systems of systems with deep supplier hierarchies (often 6 to 12 levels), in which a high level of rebaselining traffic can easily lead to chaos. In keeping with the use of the spiral model as a risk-driven process model generator, the risks of destabilizing the development process make this portion of the project into a waterfall-like build-to-specification subset of the spiral model activities. The need for high assurance of each increment also makes it cost-effective to invest in a team of appropriately skilled personnel to continuously verify and validate the increment as it is being developed.

However, “deferring the change traffic” does not imply deferring its change impact analysis, change negotiation, and rebaselining until the beginning of the next increment. With a single development team and rapid rates of change, this would require a team optimized to develop to stable plans and specifications to spend much of the next increment’s scarce calendar time performing tasks much better suited to agile teams.

The appropriate metaphor for these tasks is not a build-to-specification metaphor or a purchasing-agent metaphor but an adaptive “command-control-intelligence-surveillance-reconnaissance” (C2ISR) metaphor. It involves an agile team performing the first three activities of the C2ISR “Observe, Orient, Decide, Act” (OODA) loop for the next increments, while the plan-driven development team is performing the “Act” activity for the current increment. “Observing” involves monitoring changes in relevant technology and COTS products, in the competitive marketplace, in external interoperating systems, and in the environment; and monitoring progress on the current increment to identify slowdowns and likely scope deferrals. “Orienting” involves performing change impact analyses, risk analyses, and tradeoff analyses to assess candidate rebaselining options for the upcoming increments. “Deciding” involves stakeholder renegotiation of the content of upcoming increments, architecture rebaselining, and the degree of COTS upgrading to be done to prepare for the next increment. It also involves updating the future increments’ Feasibility Rationales to ensure that their renegotiated scopes and solutions can be achieved within their budgets and schedules.
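The division of the OODA loop between the two teams can be summarized in a deliberately schematic Python sketch; all function names and data shapes here are assumptions for illustration, not part of the model’s definition.

```python
# Schematic sketch (assumed names) of the dual-team rhythm: an agile team
# runs Observe-Orient-Decide for increment N+1 while the plan-driven team
# performs "Act" on increment N.
def observe(environment: dict) -> dict:
    # Monitor technology, COTS, marketplace, interoperating systems, and
    # current-increment progress (slowdowns, likely scope deferrals).
    return {"changes": environment.get("changes", []),
            "progress_slips": environment.get("slips", [])}

def orient(observations: dict) -> list:
    # Change-impact, risk, and tradeoff analyses of rebaselining candidates.
    return [{"candidate": c, "impact": "to be analyzed"}
            for c in observations["changes"]]

def decide(candidates: list, stakeholders: list) -> dict:
    # Stakeholder renegotiation of upcoming increment content, architecture
    # rebaselining, and Feasibility Rationale updates.
    return {"rebaselined_backlog": candidates, "agreed_by": stakeholders}

def agile_ooda_cycle(environment: dict, stakeholders: list) -> dict:
    # The fourth step, "Act," belongs to the separate plan-driven team
    # building the current increment to its stabilized specification.
    return decide(orient(observe(environment)), stakeholders)
```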