Corroborative and Cooptive Knowledge-Base Constructs

The significance of the Industrial Revolution is not simply that it occasioned a transition from simple to complex tools (from implements to machines), but that it initiated a shift in dependency from men to machines as agents of production. So began the era of automation, with the years since seeing a deepening and broadening of this shift, as marked by the steady increase in capital-labor ratios associated with both primary and secondary industries (and, it might be added, in the military, medical and other sectors as well). Similarly, the significance of what’s popularly referred to as the Information Revolution, marked by the emergence of a fundamentally new class of machines qua computers, is not simply the escalation of automation it excited, or even its extension to cover clerical, calculative and an ever-widening array of data-processing-dependent applications. Rather, the more compelling consequence of the computer is the dramatic shift in the balance of practical power it has set afoot. In more and more areas, and more and more completely, decision authority is being removed from human functionaries and invested in computers qua decision agents.

The instruments that enable this transfer will be designated as Directive Decision Devices. In contrast to decision support systems (whose authors are most often their users, whose missions are merely advisory, and whose employment is volitional), directive decision devices are commissioned by organizational superiors and subsequently imposed on subordinates. This suggests that, in addition to enabling a transfer of decision responsibilities from men to machines, directive decision devices may also enable a greater concentration of decision authority in the hands of those sitting towards the apex of a managerial hierarchy. To the extent that they serve to extend the effective executive reach (or span of control, if you will) of those commissioning them, directive decision devices may be said to put automation in service to centralization!

There is, of course, a practical constraint on the domain over which directive decision devices can exercise authority. As things now stand, the only sorts of decision situations for which computers can reasonably be held responsible will be those that could be characterized as technically tractable, such that a proper (rational, if not optimal) decision choice can be determined either by taking recourse to an orthodox algorithmic formulation, a decision table construct, or something of the like. While this constraint removes some classes of decisions (judgment-driven, axiomatic and axiological, most notably) from the domain over which directive decision systems could be expected to exercise their authority, it leaves the two types of decisions most likely to be encountered in the workaday world of most organizations: Operational (mathematically-addressable or rule-driven) and Tactical (probabilistic or statistically-resolvable).
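The operational/tactical distinction can be made concrete with a minimal sketch. Both functions below are invented illustrations (the reorder point, defect tolerance, and sample data are assumptions, not anything from the text): the first is resolved by a fixed rule, the second by a simple statistical criterion.

```python
import statistics

def reorder_decision(stock_on_hand: int, reorder_point: int = 100) -> bool:
    """Operational: a rule-driven choice requiring no judgment."""
    return stock_on_hand < reorder_point

def accept_batch(sample_defect_counts: list, tolerance: float = 2.0) -> bool:
    """Tactical: a statistically-resolvable choice based on sampled data."""
    return statistics.mean(sample_defect_counts) <= tolerance

print(reorder_decision(80))        # rule fires: reorder
print(accept_batch([1, 2, 3, 1]))  # mean 1.75 <= 2.0: accept the batch
```

In both cases the device can be "held responsible" precisely because the resolution procedure is fully specified in advance.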

As for their general technical underpinnings, directive decision devices would appear as an assemblage of three types of components: (i) information acquisition facilities for gathering decision predicates, (ii) one or more analytical instruments, encoded as computer programs, for realizing and evaluating alternative courses of action, and (iii) whatever actuating apparatus may be required to implement decision choices, or execute the elected course-of-action. More specifically, however, the contents and character of a directive decision device will be determined by which of several different sorts of missions it has been designed to meet:
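The three-component assemblage can be sketched structurally. All class names, the temperature reading, and the 72-degree rule below are hypothetical stand-ins, chosen only to show how acquisition, analysis, and actuation compose into one device:

```python
class Sensor:  # (i) information acquisition facility
    def read(self) -> dict:
        return {"temperature": 74.0}  # stand-in for real instrumentation

class Analyzer:  # (ii) analytical instrument
    def choose(self, predicates: dict) -> str:
        return "cool" if predicates["temperature"] > 72.0 else "hold"

class Actuator:  # (iii) actuating apparatus
    def execute(self, action: str) -> None:
        print(f"executing: {action}")

class DirectiveDecisionDevice:
    """Composes the three components into a closed decision loop."""
    def __init__(self, sensor: Sensor, analyzer: Analyzer, actuator: Actuator):
        self.sensor, self.analyzer, self.actuator = sensor, analyzer, actuator

    def cycle(self) -> str:
        action = self.analyzer.choose(self.sensor.read())
        self.actuator.execute(action)
        return action

device = DirectiveDecisionDevice(Sensor(), Analyzer(), Actuator())
device.cycle()  # reads, decides, and acts with no human in the loop
```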


1. Conventional/Executory constructs are true and complete decision agents, in that they are designed to execute a decision function entirely without human intervention or contribution. Included in the domain of decision functions for which a computer program is sufficient would be those where a decision choice is to be determined by taking recourse to a decision table or something of the like, or where a neat algorithmic resolution is both available and appropriate. More generally, this means decision situations where neither discretion nor contextual sensibilities are required…or desired! Hence the displacement, by commercial banks, of the responsibility for credit-granting decisions from local branch managers to corporate computers, in pursuit of both consistency and objectivity (the latter also being the rationale for the computerization of mortgage lending decisions, particularly by those firms where subjectively-predicated lending decisions led to charges of discriminatory lending, or “redlining”). Hence also the emergence of Programmed Trading (where buy-sell decisions are responsive not to any judgments about the prospects of particular stocks or industries, but are determined by timing/trend-sensitive algorithms concerned only with prospective market movements); the trend among insurance companies towards “automated” vs. agent-generated quotes; and aviation-related developments like hands-off inertial navigation programs (which allowed airlines to remove navigators from their cockpit crews) and broad-purview autopilots, or flight-director programs, that have much diminished those aspects of flying that require any human input or intervention (and which will provide the rationale for eliminating a third operator, or perhaps even a copilot, from next-generation commercial aircraft), etc.
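The credit-granting case can be rendered as a decision-table sketch. The score bands and debt-ratio cutoffs below are purely illustrative assumptions, not any actual bank's rules; the point is only that identical inputs always yield identical outputs, which is the consistency and objectivity the text says banks were pursuing:

```python
# Hypothetical credit-granting decision table: first matching row wins.
CREDIT_TABLE = [
    # (min_score, max_debt_ratio) -> decision
    ((720, 0.35), "approve"),
    ((660, 0.30), "approve"),
    ((620, 0.25), "refer"),  # marginal cases escalated, not left to a branch manager
]

def credit_decision(score: int, debt_ratio: float) -> str:
    """Fully executory: no discretion, no contextual sensibility."""
    for (min_score, max_ratio), decision in CREDIT_TABLE:
        if score >= min_score and debt_ratio <= max_ratio:
            return decision
    return "decline"

print(credit_decision(730, 0.30))  # approve
print(credit_decision(600, 0.40))  # decline
```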

[It could also be argued that what have come to be known as Electronic Performance Support Systems constitute an increasingly well-populated family of conventional/executory devices (though they may also be interpreted as having a compensatory cast to them, and so might best be considered as sitting somewhere in the mid-range between the first two types of directive decision devices).]

2. Compensatory systems would be targeted at areas where: (a) the likelihood of a proper (rational, if not optimal) choice is dependent on the technical skills or sensibilities of the decision-making agent, and (b) there is some reason to suspect (or, alternatively, no reason to expect) that a human functionary will be adequately equipped with such skills or sensibilities. Thus we have the rationale for computer-controlled (anti-lock) braking systems, which guard against the natural but unfortunate tendency of human drivers to try to arrest a skid by jamming the brakes. Similarly, aircraft are now regularly equipped with computer-based stall-recovery programs, this to counter the instinct-driven tendency of inexperienced or distracted pilots to try to correct the situation by raising the nose of the aircraft…this being exactly the wrong thing to do. As for administrative functions, among the most obvious candidates for transfer to a compensatory program would be those requiring quantitative analysis capabilities (which, sadly, are in such scarce supply among the members of the American managerial corps).
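The anti-lock braking example reduces to a simple override pattern, sketched below. The slip threshold and modulation factor are invented for illustration; real ABS controllers are far more elaborate. What matters is the shape of the logic: the device substitutes the proper response for the human's instinctive one.

```python
def compensated_brake_pressure(pedal_pressure: float, wheel_slip: float) -> float:
    """Compensatory override: when incipient wheel lock is detected,
    modulate braking regardless of how hard the driver is pressing."""
    LOCKUP_SLIP = 0.2   # hypothetical slip ratio beyond which wheels lock
    if wheel_slip > LOCKUP_SLIP:
        return pedal_pressure * 0.5  # do what the driver should do, not what he did
    return pedal_pressure

print(compensated_brake_pressure(100.0, 0.3))  # slip detected: pressure halved
print(compensated_brake_pressure(100.0, 0.1))  # normal braking passes through
```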

3. Corroborative devices have a preventative mission (e.g., interdicting fraudulent wire transfers, guarding against ill-considered operator initiatives in process control contexts, alerting superiors to decisions that may have been the product of incomplete or flawed methodological procedures). When a corroborative program is in place, local management decisions or directives would not actually be implemented unless or until they had been ‘corroborated’ by the computer as innocent of any predictably parlous prospects or procedural improprieties. Corroborative constructs are thus a means for intercepting and interdicting prospective endogenous threats: a system’s computers would refuse to execute an operator-ordered initiative until it had been so validated. Ideally, they will be ‘invisible’ to those on whom they are visited.
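A corroborative gate for the wire-transfer case might be sketched as follows. The ceiling, the flagged-destination set, and the field names are all hypothetical checks invented for the example; the structural point is that execution is withheld until every check passes, and the gate stays invisible so long as the order is innocent:

```python
FLAGGED_DESTINATIONS = {"FLAGGED_ACCT"}  # hypothetical watch list

def corroborate(order: dict) -> bool:
    """Return True only if the order passes every hazard check."""
    checks = [
        order.get("amount", 0) <= 50_000,                       # transfer ceiling
        order.get("destination") not in FLAGGED_DESTINATIONS,   # fraud screen
        order.get("authorized_by") is not None,                 # procedural propriety
    ]
    return all(checks)

def execute_if_corroborated(order: dict) -> str:
    # Invisible when the order is innocent; surfaces only on refusal.
    return "executed" if corroborate(order) else "held for review"

order = {"amount": 10_000, "destination": "ACCT_7", "authorized_by": "mgr_12"}
print(execute_if_corroborated(order))  # executed
```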

4. Cooptive devices would be designed to seize the initiative and implement an appropriate course-of-action (contingency planning script) if the human manager has not already done so prior to some point in time. As an illustration, modern commercial aircraft might be equipped with a cooptive construct as an adjunct to collision avoidance systems. If onboard radar notes conditions consistent with a possible collision (by projecting flight vectors), the cooptive construct could be programmed to peremptorily initiate evasive maneuvers once separation passes below some minimal threshold value. Cooptive constructs may make their most crucial contribution in situations where a human decision-maker might naturally be expected to be reluctant to act. That is, cooptive constructs are well-suited to cover true quandaries, i.e., situations where, reminiscent of “Hobson's choice”, all available alternatives are equally unattractive.
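The collision-avoidance illustration reduces to a preemption rule, sketched below. The three-mile separation threshold is an invented value, and the function is a deliberately bare abstraction of the idea: the device defers to the pilot for as long as the pilot acts, and seizes the initiative only when inaction persists past the threshold.

```python
MIN_SEPARATION_NM = 3.0  # hypothetical minimum separation, nautical miles

def cooptive_action(separation_nm: float, pilot_has_acted: bool) -> str:
    """Cooptive logic: preempt the human only when he has not acted in time."""
    if pilot_has_acted:
        return "defer to pilot"
    if separation_nm < MIN_SEPARATION_NM:
        return "initiate evasive maneuver"  # the device seizes the initiative
    return "monitor"

print(cooptive_action(5.0, False))  # still monitoring
print(cooptive_action(2.0, False))  # threshold crossed: device acts
```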