Legacy Application Integration

Subject Overview

This white paper has been prepared for Management, Marketing, Sales, Services and Development members who wish to gain a high-level understanding of Application Integration. It discusses some market drivers and the evolution of the application integration infrastructure, and loosely positions some relevant core competencies.

John G. Deitz

Consultant

Draft: January 20, 1999

Table of Contents

Preface
Introduction
Core Competencies
Application Integration
Applications in a Distributed Infrastructure
Evolution Beyond Client-Server
Application Servers
Application State
Connection Pooling
Load Balancing and Fail-Over
Multi-Threading
Interoperability
Application Server Monitor/Application Manager
Middleware
Transaction Management
Security Administration, Access Control and Authentication
Conclusion

Preface

Several technology vendors and system integrators plan to provide solutions that aid and facilitate “legacy application integration”. This paper dispels some of the mystery around application integration, introduces some of the key concepts, and discusses why they are important.

I think it is essential to view application integration in the context of a vendor’s core competencies, and more fundamentally, in the context of creating business solutions that may be distributed or interact over the web. In this paper, I wish to make the following points:

·  The world of application integration has striking similarities to the maintenance and enhancement themes that certain vendors (i.e. Y2K service providers) know well,

·  These vendors’ core competencies are relevant and applicable to application integration, and

·  Business solutions (especially electronic commerce, customer relationship and supply chain) will increasingly require application integration capabilities for scalability.

In the context of making these points, an overview of application integration is presented.

Introduction

“Legacy application integration” is a label attached to the activities and projects that connect legacy application functionality and databases to each other, to new distributed application middleware, and to new web-based interfaces.

The subjects of application integration and legacy application integration are difficult to separate. Although legacy application integration is a specialized subset of the general integration market, legacy integration projects are rarely immune to the issues and challenges of deploying distributed systems. A solution to a business problem may touch legacy applications, as well as newer applications; at the conceptual level, the business problem may not care at all where the constituent business solutions reside.

The term “application integration” is a new buzzword. Yet it is essential to understand that application integration is just a logical extension of application development and maintenance. In the application integration world, application development/maintenance is simply much broader in scope: it includes more processing platforms (Windows NT, UNIX and AS/400, in addition to MVS), application servers, ERP packages, middleware and gateways, languages and design architectures. And it typically involves the web. These are all aspects of applications that are distributed in nature: instead of an application running solely on a single MVS machine, parts of it run on MVS, other parts on a web server, and perhaps other parts on various application servers.
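To make the distributed picture concrete, here is a minimal sketch (a hypothetical illustration, not any particular product’s API) of how a single legacy business function, say a COBOL order-inquiry routine on MVS, might be exposed to the rest of a distributed application as a Java RMI interface; the names OrderInquiry and lookupOrder are invented for the example.

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Hypothetical remote interface: the contract a web or application
    // server programs against, while the implementation behind it delegates
    // to the legacy order-inquiry logic on MVS (for example, through a
    // gateway or a message queue).
    public interface OrderInquiry extends Remote {
        // Returns a display-ready summary for one order number.
        String lookupOrder(String orderNumber) throws RemoteException;
    }

A web-tier caller would then locate the implementation (for instance via java.rmi.Naming.lookup) and invoke lookupOrder without knowing, or caring, which platform actually satisfies the request.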

From an engineering perspective, application integration is not an entirely new endeavor. It is very similar to the practices of maintenance and enhancement. Application integration still requires that:

·  Solutions are designed;

·  Application parts are analyzed, modified/changed, tested, and re-deployed;

·  Projects are managed;

·  Business and IT people communicate effectively; and

·  Solutions solve the intended business problem in the most elegant manner.

Recognized in these terms, it is easy to see that the process of application integration maps nicely to the core competencies of several Y2K service vendors. I’ll outline some of these competencies in the next section.

The challenge, generally speaking, is to expand the core competencies of a subject vendor to handle the additional platforms, technology and management issues found in the application integration environment. The resulting solutions must provide unique value that differentiates a given vendor’s offerings from those of other vendors. The overriding objective is to solve serious IT problems that impede customers’ progress in developing, deploying and managing application solutions … even distributed ones.

Core Competencies

Specific core competencies lend themselves to the application integration problem space. Just a few of these are:

Ø  Analytical engines to:

·  Deeply analyze the “internals” of individual legacy application parts

·  Help identify which application parts are germane, inconsequential, or missing in a scope

·  Analyze the interface points between legacy application parts

·  Capture deep relationships, such as synonyms or data flows

Ø  Technology to perform impact analysis (a brief code sketch follows this list),

Ø  Technology to automate/assist program change, along with some change management,

Ø  Technology to re-engineer multi-function programs into more discrete functions,

Ø  Technology to debug programs after changes have been made, and track program test status and coverage,

Ø  Practices and methodologies relevant to object-oriented, component-based development techniques essential for application integration, and the ability to activate those practices,

Ø  Powerful repository technology for sharing information about:

·  IT application parts and interfaces, and some of their value as assets,

·  Models and designs,

·  Software tools, middleware, and transaction processing systems deployed in the environment,

·  Domains, networks, and operating systems deployed in the environment,

·  Terms, standards, conventions, policies and authorities,

·  People, organizations and ownership/accountability,

·  Goals, initiatives and projects.
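As one illustration of how these competencies translate into code-level terms, the sketch below shows the heart of an impact analysis: given a map of which application parts are used by which others, walk the relationships outward from a changed part to collect everything that may need review, change or re-testing. This is a deliberately simplified, hypothetical example; real analytical engines capture far richer relationships (synonyms, data flows, interface points) than a single “used by” map.

    import java.util.*;

    // Simplified impact-analysis sketch (hypothetical): starting from one
    // changed part, follow "is used by" relationships to find every part
    // that may be affected by the change.
    public class ImpactAnalysis {

        public static Set<String> impactedParts(Map<String, List<String>> usedBy, String changedPart) {
            Set<String> impacted = new LinkedHashSet<>();
            Deque<String> toVisit = new ArrayDeque<>();
            toVisit.add(changedPart);
            while (!toVisit.isEmpty()) {
                String part = toVisit.remove();
                for (String dependent : usedBy.getOrDefault(part, Collections.emptyList())) {
                    if (impacted.add(dependent)) {   // only queue parts not seen before
                        toVisit.add(dependent);
                    }
                }
            }
            return impacted;
        }

        public static void main(String[] args) {
            // Invented example: COPYBOOK1 is used by PGM-A, which is called by PGM-B and CICS-TXN1.
            Map<String, List<String>> usedBy = new HashMap<>();
            usedBy.put("COPYBOOK1", Arrays.asList("PGM-A"));
            usedBy.put("PGM-A", Arrays.asList("PGM-B", "CICS-TXN1"));
            System.out.println(impactedParts(usedBy, "COPYBOOK1"));   // [PGM-A, PGM-B, CICS-TXN1]
        }
    }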

Core competencies and strengths must be fully understood and appreciated when exploring any new market opportunity. It is often beneficial to view a vendor’s offerings in terms of the functionality they provide (as shown here) in order to see the breadth of capability provided. This perspective provides a basis for envisioning how the vendor’s current offerings can be leveraged and extended to fulfill roles relating to application integration.

Application Integration

Application integration will eventually become synonymous with solution development – something IT organizations do every day in the course of maintaining their application systems. IT developers:

Ø  Begin with a business problem to be solved

Ø  Envision one or more solutions

Ø  Determine the existing systems involved, and determine the scope of changes to be made to those systems, to serve the solution

Ø  Determine what other technology, infrastructure or process is required to implement the solution. Identify the resources necessary to support the solution

Ø  Apply the best practices of architecture and design to optimize the solution design

Ø  Prototype, or otherwise validate the design assumptions

Ø  Review the solution costs (budget) and gain approvals

Ø  Prepare a development plan, acquisition plan or procurement plan for the project. Allocate resources, and execute the plans

Ø  Test the individual components of the solution, and the aggregate solution

Ø  Prepare a deployment (rollout) plan and execute it

Ø  Verify that the deployed solution solves the intended business problem, and/or measure it against the design objectives.

These steps are the essence of solution development, and of non-trivial application maintenance; they are also the cornerstones of application integration. In MVS-only IT environments, application integration between MVS applications has been going on for years.

Applications in a Distributed Infrastructure

So why is “application integration” a new buzzword that is shaking the industry? And why is it a key initiative for so many of today’s Fortune 1000 companies?

The answer lies in a simple fact: for many companies, the need to develop distributed and web-based applications is quickly becoming a fait accompli. But designing solutions that operate in a distributed or web-based environment is new to most traditional, mainframe-based IT organizations. The learning curve can be steep because of the myriad of new technologies, architectures and platforms that come into play. This lack of experience explains why it is so difficult for IT organizations to determine how, or even whether, legacy applications can be leveraged.

Yet, these same IT organizations will not be given much time to ramp up. Competitive pressures simply won’t allow IT organizations to approach new and advanced solution areas leisurely. As an example, observe the bite that upstart book company Amazon took out of the business of established rival Barnes & Noble; consider the very limited time that Barnes & Noble had to respond with a similar solution in order to protect its market share. Internet-based commerce continues to evolve at an alarming rate; and with it, savvy business managers have the means to crush competitors.

Let’s look at a brief comparison of the traditional MVS integration environment versus the web-based, distributed one. The following table illustrates some of the differences in facilities, and complexity, between the two scenarios.

Table 1: Integration Considerations: MVS-only vs. Distributed Environments

Hardware
·  MVS-only: MVS-based mainframe
·  Web/distributed: MVS-based mainframe; Windows NT; UNIX and variants; AS/400

Server and Network Architecture
·  MVS-only: SNA, APPN, some TCP/IP
·  Web/distributed: SNA, APPN, TCP/IP; application/web servers (IBM, BEA, Sun, other); CGI, HTTP or IIOP; DCOM, CORBA, Java RMI; COM, ASP; application server issues such as load balancing and fail-over

TP Monitors
·  MVS-only: CICS, IMS/DC or IDMS
·  Web/distributed: CICS, IMS/DC or IDMS; BEA Tuxedo and M3; Encina; Component Broker (IBM); MTS (Microsoft)

Source Code Managers
·  MVS-only: Panvalet, Librarian, Endeavor
·  Web/distributed: Panvalet, Librarian, Endeavor (MVS); SourceSafe, PVCS, several others

Middleware
·  MVS-only: Low-level APPC or EHLLAPI
·  Web/distributed: MQSeries (IBM); NEON Integrator; MSMQ (Microsoft); Active Server (Active Software); Enterprise Data Access (Information Builders); message-oriented middleware (various); and more

Databases
·  MVS-only: IMS/DB, DB2, IDMS, VSAM, BDAM
·  Web/distributed: IMS/DB, DB2, IDMS, VSAM, BDAM; SQL Server, Oracle, Sybase; Jasmine, O2, Versant and other object databases; native file systems on NT and UNIX

Languages
·  MVS-only: COBOL, ASM, PL/I; several 4GLs
·  Web/distributed: COBOL, ASM, PL/I; C, C++; Java; Visual Basic; Perl, HTML/SHTML/XML, Tcl; ActiveX; IDL (for DCOM and CORBA); many 4GLs and private languages; other

Security
·  MVS-only: RACF, ACF2, TopSecret; other
·  Web/distributed: RACF, ACF2, TopSecret; other; access control lists (ACLs); Lightweight Directory Access Protocol (LDAP) stores; cookies, certificates and public keys; Internet firewalls

Applications
·  MVS-only: Primarily home-grown
·  Web/distributed: Home-grown; Enterprise Resource Planning (ERP) packages; business process automators (Vitria, Extricity and CrossWorlds)

This table illustrates the myriad of new technologies, architectures and platforms that come into play in the distributed application environment. Application integration is challenged with creating seamless solutions from existing applications, middleware, ERP packages and newly developed components. This can be a daunting task, and one that requires specific skills and extensive knowledge about the application deployment environment.

Evolution Beyond Client-Server

Today’s distributed application infrastructures have evolved to serve a purpose. To understand the issues, it helps to look back at the history of client-server computing and then identify how current business drivers have changed the requirements:

Early Client-Server

In the early years of client-server, companies concentrated on putting value-added front ends (clients) on their server applications. These applications served users inside the company, and volume was typically low. Although poor performance was not well accepted, it was tolerated if the client-side application provided additional value (such as sexier information presentation). Client-server infrastructure was fairly arcane and cumbersome, built on methods like EHLLAPI, APPC and proprietary remote procedure calls (RPCs). Applications were rarely designed with the potential for distributed access, or even application-to-application communication, in mind. Client-side applications pushed the performance limits of desktop computers, and became just as cumbersome to administer as their server-side counterparts.

Web-based Client-Server

The client-server market matured, and TCP/IP emerged as the standard “communication sockets” protocol. The world-wide web began to gain acceptance, and with it came a renewed appreciation for low-cost, thin-client “browser” access to server applications running on larger, more capable machines. This architecture was attractive for replacing the earlier client-server solutions, which still served a fairly small audience and handled insignificant amounts of traffic; the concept of the “intranet” was born.
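To ground the phrase “communication sockets”: at bottom, a thin browser client and a server application simply exchange text over a TCP/IP socket, with HTTP as the conversation format. The fragment below is illustrative only (the host name is hypothetical) and shows the entire exchange a browser performs for a simple page request.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // Illustrative only: an HTTP request is plain text written to a TCP/IP socket.
    public class RawHttpGet {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("intranet.example.com", 80)) {   // hypothetical host
                PrintWriter out = new PrintWriter(socket.getOutputStream());
                out.print("GET /status HTTP/1.0\r\nHost: intranet.example.com\r\n\r\n");
                out.flush();
                BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);   // status line, headers, then the response body
                }
            }
        }
    }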

Companies scrambled to find simple ways to provide web-based access to server applications. Various “screen scraping” solutions flooded the marketplace. But users soon realized that as the volume of interactions increased, these simple solutions could not handle the load. Each web-based client allocated its own resource connections to databases and other important resources, squandering network throughput and connection pools. Still, as long as the volume was low, these solutions could be tolerated.
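The resource problem is easy to see in code. In the naive pattern, every web request opens, and later discards, its own database connection; a small shared pool amortizes that cost across requests. The sketch below is a deliberately simplified, hypothetical pool (the JDBC URL and credentials are placeholders); production-grade pools also handle connection validation, timeouts and growth, which is exactly the kind of plumbing application servers later took over.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Simplified connection pool sketch (hypothetical URL and credentials).
    // Naive per-request code calls DriverManager.getConnection() every time;
    // here a fixed set of connections is opened once and then shared.
    public class SimpleConnectionPool {
        private final BlockingQueue<Connection> idle;

        public SimpleConnectionPool(String url, String user, String password, int size) throws SQLException {
            idle = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) {
                idle.add(DriverManager.getConnection(url, user, password));
            }
        }

        // Borrow a connection, blocking until one is free.
        public Connection acquire() throws InterruptedException {
            return idle.take();
        }

        // Return the connection so the next request can reuse it.
        public void release(Connection connection) {
            idle.add(connection);
        }
    }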

Internet, Supply Chain & Business-to-Business

The rampant adoption of the Internet for electronic commerce and business-to-business collaboration has changed the rules of client-server dramatically. Retail Sales and Customer Relationship solutions drive hundreds of thousands of Internet interactions. E-commerce applications require significant security and authentication. And supply chain (supplier-to-consumer) business system integration is driving the need to share access to legacy information systems between companies.

This radical jump in the volume of client-server interactions has driven a whole new set of requirements for applications: load balancing and fail-over to ensure 24x7 operation; new security and authentication methodologies including certificates, public keys, LDAP, and ACLs (access control lists); abilities to integrate anything with anything - including legacy applications, databases, middleware, ORBs, gateways, ERP packages, EDI and more. The traditional client-server environment simply cannot scale to meet these needs.
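The sketch below gives a flavor of what “load balancing and fail-over” means in practice: requests are spread across a set of equivalent server instances in round-robin order, and an instance that fails a call is skipped in favor of the next one. This is a hypothetical, in-process illustration; real application servers perform this work (and much more) in their dispatching layers.

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.function.Function;

    // Hypothetical round-robin dispatcher with simple fail-over: try servers
    // in rotation, and if a call fails, move on to the next instance until
    // one answers or all have been tried.
    public class RoundRobinDispatcher<T> {
        private final List<Function<String, T>> servers;   // each "server" handles a request string
        private final AtomicInteger next = new AtomicInteger();

        public RoundRobinDispatcher(List<Function<String, T>> servers) {
            this.servers = servers;
        }

        public T dispatch(String request) {
            for (int attempt = 0; attempt < servers.size(); attempt++) {
                Function<String, T> server = servers.get(Math.floorMod(next.getAndIncrement(), servers.size()));
                try {
                    return server.apply(request);          // success: use this instance's response
                } catch (RuntimeException failed) {
                    // fail-over: fall through and try the next instance
                }
            }
            throw new IllegalStateException("All server instances failed");
        }
    }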

In short, the business drivers outstripped the capabilities of traditional client-server architectures, and a new architecture, called the application server, was born.

Application Servers

An application server is an application control mechanism that operates on a “mid-tier” platform. The mid-tier platform is located between the traditional “client” and traditional “server” platforms. The purpose of the application server is to provide new levels of scalability, reliability, security and performance to applications that must service significant interaction loads. Electronic commerce, customer relationship, and business-to-business collaboration (supply chain) applications fall into this category.

The idea of an application server actually evolved from the web server. A web server runs on an independent hardware platform and handles the HTTP message traffic to and from web clients. When demand for high volumes of Internet interactions arrived, the web server seemed like a natural place to provide application scalability. Some web server vendors began expanding their servers to provide load-balancing features, while other new vendors began designing scalable application server solutions from the ground up. It can be hard to tell these solutions apart, but the latter typically offer richer features.
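In concrete terms, the mid-tier code hosted by an application server (or an extended web server) often looks like the sketch below: a small servlet accepts the browser’s HTTP request, delegates the real work to back-end logic that may in turn reach a legacy system, and returns a thin HTML response. This is a minimal sketch using the standard javax.servlet API; the class, parameter and helper names are hypothetical, and the back-end call is only a placeholder for whatever middleware or gateway a site actually uses.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical mid-tier entry point: the application server hosts this
    // servlet and manages its threads, connections and scaling, while the
    // servlet delegates to back-end logic fronting a legacy application.
    public class OrderStatusServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
            String orderNumber = request.getParameter("order");
            String status = lookupStatus(orderNumber);    // e.g., via middleware to a CICS transaction

            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<html><body>Order " + orderNumber + ": " + status + "</body></html>");
        }

        // Placeholder for the back-end call; a real implementation would use
        // the site's chosen middleware, gateway or message queue.
        private String lookupStatus(String orderNumber) {
            return "status unavailable (placeholder)";
        }
    }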