DATA CONVERSION AND IMPLEMENTATION 8

Data Conversion and Implementation

Ian E. Macfarlane

Walden University

Object-Oriented Design

CMIS-3004-1

Dr. Vladimir Gubanov

October 04, 2012


HudsonBanc Case Study

"Two regional banks with similar geographic territories merged to form HudsonBanc" and "both banks had credit card operations and operated billing systems that had been internally developed and upgraded over three decades" (Satzinger, Jackson & Burd, 2012, p. 439). The authors note that "merging the two billing systems was identified as a high-priority cost-saving measure" (Satzinger, Jackson & Burd, 2012, p. 439).

This paper describes the data conversion techniques that were employed and the correction techniques that were implemented within this case study, identifies the mistakes that were made, and proposes alternative solutions by which the problems encountered by HudsonBanc could have been avoided.

Data Conversion Techniques

Within the text's case study, entitled 'HudsonBanc System Upgrade,' the data conversion technique was described as operating the new system in 'parallel' with the old system for a period of two months. Satzinger, Jackson & Burd (2012) explain that "hardware for the new system was installed in early January" and "software was installed the following week, and a random sample of 10 percent of the customer accounts was copied to the new system" (p. 439). Furthermore, "to save costs involved with complete duplication, the new system computed but didn't actually print billing statements," while "payments were entered into both systems and used to update parallel customer account databases" and "duplicate account records were checked manually to ensure that they were the same" (Satzinger, Jackson & Burd, 2012, p. 439).
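The manual checking of duplicate account records described in the case study could, in principle, have been automated. The following sketch is purely illustrative and is not code from either bank's system; the account fields and record layout are assumptions made for the example.

```python
# Hypothetical sketch: compare duplicate customer account records
# between the old and new billing systems during parallel operation.
# Field names are assumed for illustration; the case study does not
# specify a schema.

def reconcile_accounts(old_accounts, new_accounts,
                       fields=("balance", "last_payment", "status")):
    """Return (account_id, detail) pairs for records that differ."""
    mismatches = []
    for account_id, old_record in old_accounts.items():
        new_record = new_accounts.get(account_id)
        if new_record is None:
            mismatches.append((account_id, "missing in new system"))
            continue
        for field in fields:
            if old_record.get(field) != new_record.get(field):
                mismatches.append((account_id, field))
    return mismatches

old = {"A1": {"balance": 100.0, "last_payment": "2012-03-01", "status": "open"}}
new = {"A1": {"balance": 100.0, "last_payment": "2012-03-01", "status": "open"}}
print(reconcile_accounts(old, new))  # an empty list means the samples agree
```

A check of this kind, run over every migrated record rather than a 10 percent sample, could have surfaced the roughly 50,000 erroneous accounts before the cutover date.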

"After the second test billing cycle, the new system was declared ready for operation" and "all customer accounts were migrated to the new system in mid-April" adding that "the old systems were turned off on May 1, and the new system took over operation" (Satzinger, Jackson & Burd, 2012, p. 439). Upon doing this, problems occurred almost immediately.

Correction Techniques

After the new HudsonBanc system went 'live', several problems immediately occurred and were listed as (a) "the system was unable to handle the greatly increased volume of transactions", (b) "data entry and customer Web access slowed to a crawl", (c) "payments were backed up by several weeks", and (d) "the system wasn't handling certain types of transactions correctly (e.g. charge corrections and credits for overpayment)" (Satzinger, Jackson & Burd, 2012, p. 439).

Satzinger, Jackson & Burd (2012) state that "manual inspection of the recently migrated account records showed errors in approximately 50,000 accounts" and that "it took almost six weeks to adjust the incorrect accounts and update functions to handle all transaction types correctly" (p. 439). Upon "attempting to print billing statements for the 50,000 corrected customer accounts (on June 20), the system refused to print any information for transactions more than 30 days old" (Satzinger, Jackson & Burd, 2012, p. 439).

As a result, "clearing the backlog took two months" and "twenty-five people were reassigned from other operational areas, and additional phone lines were added to provide sufficient customer support capacity" as well as reassigning "system development personnel to IS operations for up to three months to assist in clearing the billing backlog" (Satzinger, Jackson & Burd, 2012, p. 439).

After "federal and state regulatory authorities stepped in to investigate the problems, HudsonBanc agreed to allow customers to spread payments for late bills over three months without interest charges" ... "which further aggravated the backlog and staffing problems" in attempting to set up these new payment arrangements (Satzinger, Jackson & Burd, 2012, p. 439).

Deployment Solutions

Clearly, HudsonBanc made some fatal errors and assumptions when deciding upon the chosen parallel route of merging and migrating two databases into one. Having originally identified the merging of the two billing systems as a cost-saving measure, HudsonBanc ultimately paid an avoidable and unnecessary price in its rush to implement the new system. By not fully testing the various dependent modules of the new system, the bank also quickly sacrificed consumer trust in its brand name. The fact that federal and state regulatory authorities stepped in to investigate would also have reached the level of national news, further degrading the company's previously established good standing, corporate reputation, and trust in the eyes of its customers.

As previously mentioned within the HudsonBanc case study, the parallel deployment method was improperly implemented. Satzinger, Jackson & Burd (2012) describe parallel deployment as an approach in which "the old and new systems are operated for an extended period of time (typically weeks or months)" and the "old system continues to operate until the new system has been thoroughly tested and determined to be error-free and ready to operate independently" (p. 429). "The primary advantage of parallel deployment is relatively low operational risk, if both systems are operated completely (i.e., using all data and exercising all functions)" and "any failure in the new system can be mitigated by relying on the old system as a backup" (Satzinger, Jackson & Burd, 2012, p. 429).

Although the HudsonBanc parallel deployment was conducted over a period of two months, it did not fully test all system functions, because no billing statements were actually printed. Furthermore, HudsonBanc had migrated only a random 10 percent sample of accounts into the new system. Satzinger, Jackson & Burd (2012) explain that performing these types of limited processes is defined as a 'partial parallel operation,' which "always entails the risk that significant errors or problems will go undetected" (p. 430). In the HudsonBanc case study, this is exactly what occurred on the 'go live' implementation date. HudsonBanc might have mitigated these problems had it instead deployed a 'full parallel operation.'

However, since the HudsonBanc project involved the merging of two regional banking systems into one, a preferable method would have been a 'phased deployment.' Satzinger, Jackson & Burd (2012) state that "in a phased deployment, the system is deployed in a series of steps or phases, and can be combined with parallel deployment, particularly when the new system will take over the operation of multiple existing systems" (p. 430). The authors note that "the primary advantage of phased deployment is reduced risk because failure of a single phase is less problematic than failure of an entire system" (Satzinger, Jackson & Burd, 2012, p. 431).

Testing

Satzinger, Jackson & Burd (2012) state that "testing is the process of examining a component, subsystem, or system to determine its operational characteristics and whether it contains any defects" (p. 411). Clearly, inadequate testing was performed in the HudsonBanc case study at every process level described.

Types of testing include (a) unit testing, (b) integration testing, (c) usability testing, and (d) system, performance, and stress testing. Unit testing "is the process of testing individual methods, classes, or components (in isolation) before they are integrated with other software" and "errors become much more difficult and expensive to locate and fix after units are combined" (Satzinger, Jackson & Burd, 2012, p. 412).

Burns (2001) states that "the danger of not implementing a unit test on every method is that the coverage may be incomplete" and "the programmer should know that their unit testing is complete when the unit tests cover at the very least the functional requirements of all the code" (para. 10).
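Burns's point about covering every method can be illustrated with a minimal unit test. The `apply_payment` billing method below is a hypothetical stand-in invented for this example, not code from either bank's system; the tests exercise the normal case, the overpayment case (which the new HudsonBanc system mishandled), and an invalid input.

```python
import unittest

def apply_payment(balance, payment):
    """Hypothetical billing method: apply a payment to an account balance."""
    if payment < 0:
        raise ValueError("payment cannot be negative")
    return round(balance - payment, 2)

class ApplyPaymentTest(unittest.TestCase):
    def test_normal_payment(self):
        self.assertEqual(apply_payment(100.00, 40.00), 60.00)

    def test_overpayment_leaves_credit(self):
        # An overpayment should yield a negative balance (a credit owed).
        self.assertEqual(apply_payment(50.00, 75.00), -25.00)

    def test_negative_payment_rejected(self):
        with self.assertRaises(ValueError):
            apply_payment(100.00, -10.00)

# Run the suite programmatically (unittest.main() would also work from
# the command line).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyPaymentTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Testing each method in isolation this way, before integration, is exactly the coverage Burns argues should at minimum span the functional requirements of all the code.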

Integration testing "evaluates the behavior of a group of methods, classes, or components in order to identify errors that weren't or couldn't be detected by unit testing" (Satzinger, Jackson & Burd, 2012, p. 414). This type of testing could have detected the HudsonBanc errors in which the system was unable to handle certain types of transactions, including charge corrections and credits for overpayments, before the new system was fully deployed and went 'live'.
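As an illustration of what such an integration-level check might exercise, the sketch below routes every transaction type, including the charge corrections and overpayment credits the case study says were mishandled, through a transaction processor and an account ledger working together. All class and function names here are assumptions made for the example, not code from the case study.

```python
# Hypothetical integration-style check: a transaction router and an
# account ledger exercised together across every transaction type.

class Ledger:
    def __init__(self, balance=0.0):
        self.balance = balance

    def post(self, amount):
        self.balance = round(self.balance + amount, 2)

def process(ledger, txn_type, amount):
    """Route a transaction to the ledger according to its type."""
    if txn_type == "charge":
        ledger.post(amount)
    elif txn_type == "payment":
        ledger.post(-amount)
    elif txn_type == "charge_correction":
        ledger.post(-amount)          # reverse an erroneous charge
    elif txn_type == "overpayment_credit":
        ledger.post(-amount)          # issue a credit to the customer
    else:
        raise ValueError(f"unknown transaction type: {txn_type}")

# Drive both components together, covering all supported types.
ledger = Ledger()
for txn_type, amount in [("charge", 120.0), ("payment", 100.0),
                         ("charge_correction", 20.0),
                         ("overpayment_credit", 5.0)]:
    process(ledger, txn_type, amount)
print(ledger.balance)  # -5.0: the customer is owed a small credit
```

A unit test of `Ledger` alone would not catch a router that silently drops charge corrections; only exercising the components together, as above, reveals that class of defect.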

Usability testing "evaluates functional requirements and the quality of a user interface," and these tests are ongoing (Satzinger, Jackson & Burd, 2012, p. 416). Lastly, system, performance, and stress testing "determines whether a system or subsystem can meet such time-based performance criteria as response time or throughput" (Satzinger, Jackson & Burd, 2012, p. 416). Once again, the HudsonBanc project failed to anticipate the system's response-time and throughput requirements when merging the two banking database operations into one system. As a consequence, "the system was unable to handle the greatly increased volume of transactions," "data entry and customer Web access slowed to a crawl," and "payments were backed up by several weeks" (Satzinger, Jackson & Burd, 2012, p. 439).

Satzinger, Jackson & Burd (2012) state that "performance tests are complex because they can involve multiple programs, subsystems, computer systems, and network infrastructure" and as a result, "corrective actions may include any combination of (a) application software tuning or reimplementation, (b) hardware, system software, or network reconfiguration, and (c) upgrade or replacement of underperforming components" (p. 416).
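The basic shape of such a performance or stress test can be sketched as follows. The transaction handler and the request volume below are hypothetical stand-ins, not details from the case study; the point is that throughput and worst-case response time are measured and then compared against time-based performance criteria.

```python
# Hypothetical stress-test sketch: measure throughput and worst-case
# response time of a stand-in transaction handler under a burst of
# requests. Handler and volume are illustrative only.
import time

def handle_transaction(txn):
    """Stand-in for posting one billing transaction."""
    return sum(txn) / len(txn)   # trivial work in place of real I/O

def stress_test(handler, n_transactions=10_000):
    latencies = []
    start = time.perf_counter()
    for i in range(n_transactions):
        t0 = time.perf_counter()
        handler([i, i + 1, i + 2])
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "throughput_tps": n_transactions / elapsed,
        "worst_latency_s": max(latencies),
    }

results = stress_test(handle_transaction)
# The measured figures would then be checked against required criteria,
# e.g. a minimum throughput or a maximum acceptable response time.
print(results)
```

Running a test of this kind against the full merged account volume, rather than the 10 percent sample, could have exposed the throughput shortfall before the May 1 cutover.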

Conclusion

As revealed in the 'HudsonBanc billing system upgrade' case study, merging two systems into one new system requires many steps, including the selection of a proper migration and deployment strategy combined with many types of system and component testing. If these steps are not taken, fallout is inevitable, as seen in the case study's many costly and time-consuming corrective actions. The case study also reveals that cutting costs for the sake of expediency is not the best strategy, especially where valuable data assets are concerned; the ripple effect produced by any interruption of processes and services can become a devastating and crippling experience to recover from.

Russom (2006) explains that "data migration suffers from a few myths and misconceptions. For example, it rarely copies data once, one way, from one system to another" adding that "data may exist in old and new systems simultaneously, requiring that the two systems be synchronized for months or years" (p. 2).

Russom (2006) also advises "don’t burn the bridge too soon" and that "in most migrations, the legacy and new platforms must run concurrently for weeks or months as phases of the migration complete and are certified" and "in some cases, the legacy platform may be needed for financial closings or other business processes long after all end users and data have migrated to the new platform" (p. 11).


References

Burns, T. (2001). Effective unit testing. Ubiquity. Retrieved from http://ubiquity.acm.org/article.cfm?id=358976

Russom, P. (2006). Best practices in data migration. Retrieved from http://download.101com.com/pub/TDWI/Files/TDWI_Monograph_BPinDataMigration_April2006.pdf

Satzinger, J., Jackson, R., & Burd, S. (2012). Systems analysis and design in a changing world (6th ed.). Boston, MA: Course Technology, Cengage Learning.