OPC
Interface to the PI System

Version 2.3.4.0 to 2.3.5.0
Rev B

How to Contact Us

OSIsoft, Inc.
777 Davis St., Suite 250
San Leandro, CA 94577 USA
Telephone
(01) 510-297-5800 (main phone)
(01) 510-357-8136 (fax)
(01) 510-297-5828 (support phone)

Houston, TX
Johnson City, TN
Mayfield Heights, OH
Phoenix, AZ
Savannah, GA
Seattle, WA
Yardley, PA

Worldwide Offices
OSIsoft Australia
Perth, Australia
Auckland, New Zealand
OSI Software GmbH
Altenstadt, Germany
OSI Software Asia Pte Ltd.
Singapore
OSIsoft Canada ULC
Montreal, Canada
OSIsoft, Inc. Representative Office
Shanghai, People’s Republic of China
OSIsoft Japan KK
Tokyo, Japan
OSIsoft Mexico S. De R.L. De C.V.
Mexico City, Mexico
Sales Outlets and Distributors
·  Brazil
·  Middle East/North Africa
·  Republic of South Africa
·  Russia/Central Asia
·  South America/Caribbean
·  Southeast Asia
·  South Korea
·  Taiwan
WWW.OSISOFT.COM
OSIsoft, Inc. is the owner of the following trademarks and registered trademarks: PI System, PI ProcessBook, Sequencia, Sigmafine, gRecipe, sRecipe, and RLINK. All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Any trademark that appears in this book that is not owned by OSIsoft, Inc. is the property of its owner and use herein in no way indicates an endorsement, recommendation, or warranty of such party’s products or any affiliation with such party of any kind.
RESTRICTED RIGHTS LEGEND
Use, duplication, or disclosure by the Government is subject to restrictions as set forth in subparagraph I(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013
Unpublished – rights reserved under the copyright laws of the United States.
© 1998-2007 OSIsoft, Inc. PI_OPCInt.doc

Table of Contents

Introduction
Reference Manuals
Supported Features
Configuration Diagrams
Principles of Operation
Overview of OPC Servers and Clients
Connections – Creating, Losing, and Recreating
The OPCEnum Tool
Timestamps
Writing Timestamps to the Device
Plug-in Post-processing DLLs
Polling, Advising and Event Tags
Data Types
Transformations and Scaling
Quality Information
Questionable Qualities – Store the Status or the Value?
Storing Quality Information Directly
Installation Checklist
Interface Installation on Windows
Naming Conventions and Requirements
Interface Directories
PIHOME Directory Tree
Interface Installation Directory
OPCEnum Directory
Plug-ins Directory
Tools Directory
Interface Installation Procedure
Installing Interface as a Windows Service
Installing Interface Service with PI ICU
Installing Interface Service Manually
Upgrading an Installation
DCOM Configuration Details
General Steps for DCOM Configuration
DCOM Configuration for Windows XP (SP1/SP2) and Windows 2003
DCOM Configuration for Windows NT/2000
Notes and Recommendations on DCOM Configuration
DCOM Security Configuration for the Interface
PI OPC Tools
PI OPCClient
PI OPCTool
Digital States
PointSource
PI Point Configuration
Point Attributes
Tag
PointSource
PointType
Location1
Location2
Location3
Location4
Location5
InstrumentTag
ExDesc
SourceTag
TotalCode
SquareRoot
Convers
Userint1
Userint2
Scan
Shutdown
Exception Processing
Output Points
Trigger Method 1 (Recommended)
Trigger Method 2
Sample Tag Configurations
Scan Classes
Polled Tags
Advise Tags
Event Tags
Array Tags
Arrays as Event Tags
Reading Basic Quality as a Digital Tag
Performance Point Configuration
I/O Rate Tag Configuration
Monitoring I/O Rates on the Interface Node
Configuring I/O Rate Tags with PI ICU (Windows)
Configuring I/O Rate Tags Manually
Configuring PI Point on the PI Server
Configuration on the Interface Node
Startup Command File
Configuring the Interface with PI ICU
OPC Interface Tab
Command-line Parameters
Sample OPCInt.bat file
Interface Node Clock
Windows
Security
Starting / Stopping the Interface on Windows
Starting Interface as a Service
Stopping Interface Running as a Service
Buffering
Configuring Buffering with PI ICU (Windows)
Configuring Buffering Manually
Example piclient.ini File
Appendix A: OPC Server Issues
Browsing
Timestamps
Disconnecting
False Values
Access Path
Appendix B: Notes on Some OPC Servers
Honeywell APP Node
DeltaV System
Appendix C: Debugging
Debugging Options
Using the opcresponse.log, opcscan.log, and opcrefresh.log Files
Appendix D: List of Startup Parameters Grouped by Usage
UniInt Parameters (Commonly Used)
DCOM Security
OPC Server
Advanced Options
Data Handling
Miscellaneous
Server-level Failover
Interface-level Failover
UniInt Interface-level Failover
Plug-Ins (Post-processing dlls)
Debugging
Obsolete
Appendix E: Error and Informational Messages
Message Logs
Messages
System Errors and PI Errors
Revision History


Introduction

OPC (originally an acronym for OLE for Process Control, now described as open connectivity via open standards) is a standard established by the OPC Foundation task force to give applications consistent access to process data from the plant floor. Vendors of process devices provide OPC Servers whose communications interfaces comply with the specifications laid out by the task force (the OPC Standard), and any client software that complies with that standard can communicate with any of those servers, regardless of hardware releases or upgrades. The connection between the client and the OPC Server is made either through the Microsoft COM interface or through OLE Automation, and the client either reads data from the data cache maintained by the OPC Server or requests that the server read the device directly.
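The cache-versus-device distinction can be modeled in a short sketch. This is purely conceptual (the class and item names are invented for illustration; the real interface uses COM/DCOM calls, not Python):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OpcValue:
    """One OPC DA reading: a value plus quality and timestamp."""
    value: float
    quality: str          # "Good", "Uncertain", or "Bad"
    timestamp: datetime

class OpcServerModel:
    """Toy model of an OPC DA server: a device with a data cache in front of it."""
    def __init__(self, device_readings):
        self._device = device_readings   # item_id -> current device value
        self._cache = {}                 # last value the server cached per item

    def refresh_cache(self, item_id):
        # The server periodically samples the device into its cache.
        self._cache[item_id] = OpcValue(self._device[item_id], "Good",
                                        datetime.now(timezone.utc))

    def read(self, item_id, source="cache"):
        # Clients normally read the cache; a device read forces a fresh sample.
        if source == "device":
            self.refresh_cache(item_id)
        return self._cache[item_id]

server = OpcServerModel({"FIC101.PV": 42.5})   # hypothetical item ID
server.refresh_cache("FIC101.PV")
cached = server.read("FIC101.PV")              # fast: served from the cache
fresh = server.read("FIC101.PV", source="device")  # forces a device sample
```

A cache read returns whatever the server sampled last, so it is fast but only as current as the server's own update rate; a device read is current but costs a trip to the hardware.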

The PI OPC Interface is an OPC Data Access (DA) client application that communicates with an OPC DA Server and sends data to the PI System. It supports versions 1.0a, 2.0, and 2.05 of the OPC Data Access standard. Multiple instances of the interface can run simultaneously. Each instance maintains a connection to exactly one OPC DA Server, which may be on the same or a different machine/node, and more than one instance may be configured to connect to the same OPC Server. The interface may reside on a PI home node or a PI Interface node.

This interface is designed to run on Windows NT 4.0 with Service Pack 6 or higher, Windows 2000, Windows XP, or Windows 2003. It requires both the PI API and the PI SDK.

Reference Manuals

OSIsoft

·  PI Server manuals

·  PI API Installation manual

·  UniInt Interface User Manual

·  PI OPCClient User’s Guide

·  PI Interface Configuration Utility User Manual

·  OPC Interface Failover Manual

Supported Features

Feature / Support
Part Number / PI-IN-OS-OPC-NTI
* Platforms / Windows (NT 4.0 SP6, 2000, XP, 2003)
OPC Data Access Standard / 1.0a, 2.0, 2.05
APS Connector / Yes
Point Builder Utility / No
ICU Control / Yes
PI Point Types / Float16, Float32, Float64, Int16, Int32, Digital, String
Sub-second Timestamps / Yes
Sub-second Scan Classes / Yes
Automatically Incorporates PI Point Attribute Changes / Yes
Exception Reporting / Done by Interface
Outputs from PI / Yes
Inputs to PI / Scan-based, Unsolicited, Event Tags
Supports Questionable Bit / Yes
Supports Multi-character PointSource / Yes
Maximum Point Count / Unlimited
* Uses PI SDK / Yes
PINet String Support / N/A
* Source of Timestamps / Interface or OPC Server
History Recovery / No
* UniInt-based / Yes
Disconnected Startup / Yes
* SetDeviceStatus / Yes
* Failover / Server-level Failover; Interface-level Failover Using UniInt; Interface-level Failover Using Microsoft Clustering
Vendor Software Required on PI Interface Node / PINet Node / No
* Vendor Software Required on DCS System / Yes
* Vendor Hardware Required / No
* Additional PI Software Included with Interface / Yes
Serial-Based Interface / No

* See paragraphs below for further explanation.

Platforms

The interface is designed to run on the Microsoft Windows operating systems listed above and later versions. Because OPC depends on COM and DCOM, the PI OPC Interface is not supported on non-Windows platforms.

Please contact OSIsoft Technical Support for more information.

Uses PI SDK

The PI SDK and the PI API are bundled together and must be installed on each PI Interface node. This Interface does not specifically make PI SDK calls.

If the PI Server is at version 3.4.370 or higher, and the PI API is at version 1.6 or higher, then the PI SDK is not used even if it is enabled, since UniInt will use the new PI API calls for the long InstrumentTag field and the multiple-character PointSource.

If the PI Server is older than 3.4.370, the new PI API calls cannot be used; in that case, the PI SDK must be enabled to use the long InstrumentTag field and the multiple-character PointSource.

The PI SDK cannot be used if the interface will be set up to use Disconnected Startup, since that feature is based on PI API calls only.

Source of Timestamps

The interface can accept timestamps from the OPC Server or it can provide timestamps from the local node. This is controlled by a command-line parameter.
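The choice between the two timestamp sources can be sketched as a small policy function. This is illustrative only (the function and parameter names are invented; the actual behavior is selected by the interface's startup parameters, described later in this manual):

```python
from datetime import datetime, timezone, timedelta

def choose_timestamp(server_ts, policy="server",
                     clock_skew_limit=timedelta(minutes=10)):
    """Return the timestamp to store in PI under a given policy.

    policy="server": trust the OPC Server's timestamp for the value.
    policy="local":  stamp the value with the interface node's clock.
    """
    now = datetime.now(timezone.utc)
    if policy == "local" or server_ts is None:
        return now
    # A server clock far off from the local clock is a common OPC problem;
    # a real deployment would log and investigate rather than silently correct.
    if abs(server_ts - now) > clock_skew_limit:
        return now
    return server_ts

# A server that supplies no timestamp falls back to the local clock.
ts = choose_timestamp(None, policy="server")
```

Using server timestamps preserves the time the server actually sampled the device, but it makes data quality depend on the OPC Server node's clock being synchronized.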

UniInt-based

UniInt stands for Universal Interface. UniInt is not a separate product or file; it is an OSIsoft-developed template used by developers and is integrated into many interfaces, including this one. The purpose of UniInt is to keep a consistent feature set and behavior across as many of OSIsoft's interfaces as possible. It also allows for the very rapid development of new interfaces. In any UniInt-based interface, the interface uses some UniInt-supplied configuration parameters and some interface-specific parameters. UniInt is constantly being upgraded with new options and features.

The UniInt Interface User Manual is a supplement to this manual.

SetDeviceStatus

The OPC Interface is built with UniInt 4.3.0.15, which adds support for health tags. A health tag with the point attribute ExDesc = [UI_DEVSTAT] represents the status of the source device. The following events can be written to this tag:

a) “Good” – the interface is properly communicating with and reading data from the OPC Server.

b) “2 | Connected/No Data” – the interface is connected to the OPC Server, but no data has been read.

c) “3 | 1 device(s) in error” – the connection to the OPC Server is down.

Please refer to the UniInt Interface User Manual for more information on how to configure health points.
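The mapping from interface state to [UI_DEVSTAT] event can be summarized in a few lines. This is a hypothetical helper for illustration; in practice UniInt itself writes these events, and the function below is not part of the interface:

```python
def devstat_event(connected: bool, data_received: bool,
                  devices_in_error: int = 0) -> str:
    """Return the [UI_DEVSTAT] health-tag event for a given interface state.

    Illustrative only: mirrors the three events described above.
    """
    if connected and data_received and devices_in_error == 0:
        return "Good"                                # reading data normally
    if connected and not data_received:
        return "2 | Connected/No Data"               # connected, nothing read yet
    return f"3 | {devices_in_error} device(s) in error"  # connection is down

print(devstat_event(True, True))       # healthy connection
print(devstat_event(True, False))      # connected, no data yet
print(devstat_event(False, False, 1))  # connection to the OPC Server lost
```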

Failover

·  Server-Level Failover

The interface supports server-level failover, which allows it to collect data from either a primary or a backup OPC Server. This feature is built into the interface and does not require any additional hardware or software. See the OPC Interface Failover Manual for more details.

·  Interface-Level Failover using UniInt

This interface supports Interface-Level Failover using UniInt, as well as Interface-Level Failover using Microsoft Clustering.

UniInt provides support for a hot failover configuration, which results in a no-data-loss solution for bi-directional data transfer between the PI Server and the data source, given a single point of failure in the system architecture. This failover solution requires that two copies of the interface be installed on different interface nodes, collecting data simultaneously from a single data source. Failover operation is automatic and requires no user interaction. Each interface participating in failover can monitor and determine liveliness and failover status. To assist in administering system operations, the failover scheme also supports manually triggering failover to a desired interface. This type of failover does not require special hardware or software (e.g., a Microsoft Cluster).

The failover scheme is described in detail in the UniInt Interface User Manual, which is a supplement to this manual. Details for configuring this Interface to use failover are described in the “UniInt Failover Configuration” section of the OPC Interface Failover Manual.

·  Interface-Level Failover using Microsoft Clustering

This type of failover allows two copies of the interface to run on two clustered machines, with only one copy actually collecting data at any given time. This failover option can be combined with the Server-Level Failover, so that the user can have redundancy for both the OPC server and the interface. Details of configuring the failover are documented in the OPC Interface Failover Manual.
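The server-level failover described above can be sketched conceptually: try the primary server, and fall back to the backup when the primary is unreachable. The names below are invented for illustration and the callables stand in for OPC reads; this is not the interface's actual implementation:

```python
class FailoverReader:
    """Toy sketch of server-level failover between a primary and a backup server."""
    def __init__(self, primary, backup):
        # Each "server" here is just a callable: item_id -> value.
        self.servers = [("primary", primary), ("backup", backup)]
        self.active = None     # which server answered the last read

    def read(self, item_id):
        last_error = None
        for name, server in self.servers:
            try:
                value = server(item_id)
                self.active = name
                return value
            except ConnectionError as exc:
                last_error = exc           # remember the failure, try the next server
        raise last_error                   # both servers failed

def down(_item):
    # Simulated unreachable primary OPC Server.
    raise ConnectionError("primary OPC server unreachable")

reader = FailoverReader(primary=down, backup=lambda item: 17.3)
value = reader.read("TIC200.SP")   # hypothetical item ID; succeeds via the backup
```

The real interface also handles reconnecting to the primary and preserving subscriptions, which this sketch omits.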

Vendor Software Required on DCS System

The OPC Server may run on the same system as the interface, or it may run on another system.

Additional PI Software Included with Interface

The PI OPCClient is an OSIsoft product that ships with the interface and assists in configuring and troubleshooting it.

The OPC Foundation has provided a tool to allow OPC clients to locate servers on remote nodes, without having information about those servers in the local registry. This tool is called OPCEnum and is freely distributed by the OPC Foundation. The PI OPC Interface installation will automatically install OPCEnum as well.

Configuration Diagrams

Configuration 1: Preferred Configuration

This configuration is the simplest, and allows data buffering on the interface node.

Configuration 2: Common Configuration

This configuration allows data buffering on the interface node; it is recommended that all machines be in the same domain.

Configuration 3: Alternate Configuration

This configuration is possible, but not the preferred configuration. Having the interface and the PI Server compete for resources can impair efficiency.

Note: All configurations require DCOM settings, and buffering is recommended even when the interface runs on the PI Server node, because OPC Servers sometimes send data in bursts, with all values arriving within the same millisecond.