OPC
Interface to the PI System

version 2.1.45.0

7/21/2003 3:46 PM

How to Contact Us

Phone: (510) 297-5800 (main number)
       (510) 297-5828 (technical support)
Fax: (510) 357-8136
E-mail:
World Wide Web: http://www.osisoft.com
Mail: OSIsoft
P.O. Box 727
San Leandro, CA 94577-0427
USA

OSI Software GmbH
Hauptstraße 30
D-63674 Altenstadt 1
Deutschland

OSI Software, Ltd
P O Box 8256
Symonds Street
Auckland 1035
New Zealand

OSI Software, Asia Pte Ltd
152 Beach Road
#09-06 Gateway East
Singapore 189721

Unpublished -- rights reserved under the copyright laws of the United States.
RESTRICTED RIGHTS LEGEND
Use, duplication, or disclosure by the Government is subject to restrictions as set forth in subparagraph (c)(1)(ii)
of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013

Trademark statement—PI is a registered trademark of OSI Software, Inc. Microsoft Windows, Microsoft Windows for Workgroups, and Microsoft NT are registered trademarks of Microsoft Corporation. Solaris is a registered trademark of Sun Microsystems. HPUX is a registered trademark of Hewlett Packard Corp. IBM AIX RS/6000 is a registered trademark of the IBM Corporation. DUX, DEC VAX and DEC Alpha are registered trademarks of the Digital Equipment Corporation. OPC and OPC Foundation are trademarks of the OPC Foundation.
PI_opcint.doc

© 2000-2003 OSI Software, Inc. All rights reserved
777 Davis Street, Suite 250, San Leandro, CA 94577

Table of Contents

Introduction
Configurations
Supported Features
Requirements - PI2 and PI3 Servers
Upgrading from Version 1.x or 2.0 to 2.1 or Higher
Overview
Notes on OPC and COM
Timestamps
Writing Timestamps to the Device
Connections - Creating, Losing, and Recreating
The OPCEnum Utility
Plug-In Post-processing DLLs
Polling, Advising and Event Tags
Datatypes
Transformations and Scaling
Quality Information
Questionable Qualities -- Store the Status or the Value?
Storing Quality Information Directly
PI Point Configuration
Point Attributes
PointType
PointSource
InstrumentTag
ExDesc
SourceTag
Scan
Totalcode
SquareRoot
Convers
Location1
Location2
Location3
Location4
Location5
Userint1
Userint2
Scan
Shutdown
Exception processing
Software Configuration
Point Source -- PI 2
Point Source -- PI 3
Digital States -- PI 2
Digital States -- PI 3
IO Rate Points
Step 1 – PI Point configuration on the PI Server
Step 2 – Configuration on the Interface Node
Command-line Parameters
Starting and Finding Servers
Interface-level Failover
Server-Level Failover
Post-processing ("plug-in" dlls)
Controlling Data Collection
Handling Idiosyncrasies for Various OPC Servers
Debugging
Complete Alphabetical List of Parameters
Interface Operation
No More Password File
Installing the Interface as a Service
Installing Debug Symbols
Startup/Shutdown
Editing Tags While the Interface is Running
Logging Messages
Connecting to OPC Server
Using PI-API Buffering
Configuring DCOM: The Basic Steps
Configuring Tags
Scan Classes
Configuring Polled Tags
Configuring Advise Tags
Configuring Event Tags
Configuring Array Tags
Configuring Arrays as Event Tags
Reading Basic Quality as a Digital Tag
OPC Server Issues
Browsing
Timestamps
Disconnecting
False Values
Access Path
Interface Installation
Naming Conventions and Requirements
Interface Directories
The PIHOME Directory Tree
Interface Installation Directory
Plug-Ins Directory
OPCEnum Directory
Tools Directory
Security
Interface Checklist
Upgrading an Installation
Common Problems
Debugging
Using the opcresponse.log, opcscan.log, and opcrefresh.log Files
Error and Informational Messages
PIOPCTool
OPC Interface Failover
DCOM Configuration Details
Appendix A: Notes on Some OPC Servers
Honeywell APP Node
DeltaV System
All Servers Built on the FactorySoft Toolkit
Revision History


Introduction

OPC (OLE for Process Control) is a standard established by the OPC Foundation task force to allow applications to access process data from the plant floor in a consistent manner. Vendors of process devices provide OPC servers whose communications interfaces comply with the specifications laid out by the task force (the OPC Standard); any client software that complies with that standard can communicate with any of those servers, regardless of hardware releases or upgrades. The connection between the client and the OPC server is made either through the Microsoft COM interface or through OLE Automation, and the client either accesses data from the data cache maintained by the OPC server or requests that the server read the device directly.

This interface is an OPC COM custom interface client for the OSI Software Plant Information (PI) System. The interface may reside on a PI home node or on a PI API node.

Each interface will connect with one and only one OPC server, which may be on the same or a different machine. More than one interface may be configured to connect to the same server.

This interface runs only on Intel platforms running Windows NT 4.0 or later with Service Pack 3 or later. It requires both the PI API and the PI SDK.

For interface-level failover, Microsoft Clustering is required. See the Failover section for more information.

Configurations

The Preferred Configuration
[diagram]

Another Possibility
[diagram]

Supported Features

Feature: Support
Part Number: PI-IN-OS-OPC-NTI
Platforms: NT-Intel, W2K, XP
PI Point Types: Float16 / Float32 / Float64 / Int16 / Int32 / Digital / String
Subsecond Timestamps: Yes
Subsecond Scan Classes: Yes
Automatically Incorporates PI Point Attribute Changes: Yes
Exception Reporting: Done by Interface / Done by DCS
* Source of Timestamps: Interface or DCS
Outputs from PI: Yes
History Recovery: No
* Failover: Yes
Inputs to PI: Scan-based / Unsolicited / Event Tags
UniInt-based: Yes
Maximum Point Count: Unlimited
SDK: Yes
Vendor Software Required on PI API / PINet node: No
* Vendor Software Required on DCS System: Yes
Vendor Hardware Required: No
* Additional PI Software Included with Interface: Yes

*See below for further explanation.

Source of Timestamps

The interface can accept timestamps from the OPC server or it can provide timestamps.

Failover

The interface supports server-level failover, where the interface will shift to a backup OPC server if the current server becomes unavailable, and also interface-level failover, where one copy of the interface sits dormant until the primary copy becomes unable to collect data. Interface-level failover requires NT clustering.
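For illustration only (this is not the interface's actual code), server-level failover amounts to keeping a backup server name on hand and shifting to it when the current server stops responding. A minimal sketch in Python, with invented names; a real liveness probe would be an OPC connection attempt:

```python
class ServerFailover:
    """Toy model of server-level failover: track a current OPC server
    and a backup, and swap whenever the current one becomes unavailable."""

    def __init__(self, primary, backup):
        self.servers = [primary, backup]
        self.current = 0  # index of the server we are collecting from

    def connected_server(self, is_available):
        """is_available(name) -> bool stands in for a real liveness check."""
        if not is_available(self.servers[self.current]):
            # shift to the other server and continue collecting data there
            self.current = 1 - self.current
        return self.servers[self.current]
```

Interface-level failover is different in kind: a second, dormant copy of the whole interface takes over, which is why it needs clustering support from the operating system rather than logic like the above.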

Vendor Software Required on DCS System

The OPC server may run on the same system as the interface itself, or it may run on another system. The interface does not come with an OSI-supplied OPC server.

Additional PI Software Included with Interface

The PIOPCTool is an OSI product that assists in installing, configuring, and troubleshooting the interface.

Requirements - PI2 and PI3 Servers

Beginning with version 2.1.0, the OPC interface requires the PI SDK as well as the PI API to be installed and configured on the machine. The interface will not run without the SDK.

We also currently offer a version of the interface that does not use the PI SDK, intended for customers who are still using PI 3.2 servers. With PI 3.3, connection security is configured using PI Trust relationships on the PI Server, so the interface no longer needs an encrypted login password. Older versions of the interface required the user to run the interface interactively to set the password that the PI SDK would send to the PI 3.2 system to validate the connection, and the /Setlogin flag allowed the user to change the login ID and password. That mechanism is no longer supported, so we provide a version of the interface without the PI SDK to ease support for sites that have not yet upgraded to PI 3.3.

Note that it is the PI SDK that provides support for features such as long strings for ExDesc and InstrumentTag, and that enables tools such as the ICU (Interface Configuration Utility) and APS (Auto Point Synchronization). Those features are available only with the full version of the interface.

Upgrading from Version 1.x or 2.0 to 2.1 or Higher

With version 2.1, the OPC interface has been brought into alignment with other OSI interfaces. Location1 is now used to indicate the interface instance number. This means that users who are upgrading from an earlier version will have to make a one-time change to their tag configuration, and set their opcint.bat file accordingly. The change comes in three steps; you can use SMT to update your tags, or use the piconfig commands below.

For all tags with a value of 1 in Location1, set Location3=2.

The piconfig commands would be (change pointsource=O to match your own point source):

>@tabl pipoint
>@pointclass classic
>@mode edit
>@modify Location3=2
>@select tag=*,pointsource=O,Location1=1
>@ends

Then edit your opcint.bat file to make sure that /ID= has a numeric value, such as /ID=4.

Finally, set Location1 for all the OPC tags to be the same as your /ID=# parameter in your opcint.bat file.

>@tabl pipoint
>@pointclass classic
>@mode edit
>@modify Location1=4
>@select tag=*,pointsource=O
>@ends

If you have no output tags (which were signified by Location1=1, and are now signified by Location3=2), you can leave your current tag configuration in place as long as you do not use a numeric value in the /ID= parameter. If /ID= does not specify a number, or specifies 0, the interface will accept all tags with the correct pointsource, regardless of the value in Location1 of the tag. If you specify a numeric value for /ID=, the interface will only accept those tags that have the correct pointsource and that numeric value in Location1.
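The acceptance rule described above can be sketched in a few lines of Python. This is a hypothetical helper, not interface code; the dictionary keys and the function name are invented for the example:

```python
def interface_accepts(tag, interface_pointsource, id_param):
    """Decide whether an interface instance loads a tag.

    id_param is the value of the /ID= command-line parameter as a string.
    If /ID= is non-numeric or 0, only the pointsource is checked;
    otherwise Location1 must also match the numeric /ID value.
    """
    if tag["pointsource"] != interface_pointsource:
        return False
    try:
        instance = int(id_param)
    except (TypeError, ValueError):
        instance = 0  # non-numeric /ID behaves like no instance number
    if instance == 0:
        return True   # accept every tag with the correct pointsource
    return tag["location1"] == instance
```

For example, with /ID=4 a tag with pointsource O and Location1=4 is accepted, while the same tag with Location1=1 is not; with a non-numeric /ID, both are accepted.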


Overview

Notes on OPC and COM

The general idea behind COM is that using a standard method for passing information between programs allows us to ignore where a program runs: as long as we can connect to that program and exchange information, we don’t care if the program is running on our local machine, a machine across the room, or a machine in another country. For that matter, we don’t care what kind of machine it is.

The parts that are necessary for this to work are:

·  The actual program that has the information we need, or wants the information we have.

·  A program that runs on our machine that knows how to talk to the other program.

The actual OPC server may run on the same node as the interface, or on another machine entirely. Frankly, we don’t care which (well, there are some performance considerations). The important thing is that your OPC server will, when you install it, put entries into the NT Registry which will allow the NT system to locate the OPC server when someone wants to talk to it. If your interface and your OPC server are running on different machines, you can use OPCEnum to locate those registry entries on the other machine. As long as a program has a way to find those registry entries, it can use them to ask NT to connect it to the OPC server.

The notion behind OPC is that there’s a process device out there, it has some points on it, the OPC server knows about those points, and it’s willing to let you touch those points. This is all arranged by the client (that’s us, the OPC interface) creating a Group and adding points to it, or connecting to a Public Group that’s already been defined. All reads and writes actually happen to Groups, not individual points, although the client program can choose what points within the group to read or write. The Interface doesn’t define points, only groups: we add points to our groups, but they must be points which the OPC server is willing to recognize as valid points.
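The group/item relationship described above can be modeled in a few lines. This is a conceptual sketch, not the OPC API; the class and method names are invented:

```python
class OpcGroup:
    """Conceptual model: clients read and write groups, not loose points."""

    def __init__(self, name, update_rate_ms, server_items):
        self.name = name
        self.update_rate_ms = update_rate_ms  # requested cache update rate
        self._server_items = server_items     # points the server recognizes
        self.items = []

    def add_item(self, item_id):
        # the server only accepts items it recognizes as valid points
        if item_id not in self._server_items:
            raise ValueError("server rejected item: " + item_id)
        self.items.append(item_id)
```

The point is structural: an item only exists, from the client's view, inside some group, and an add fails if the server does not know the point.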

Another detail is that the OPC server will have a data cache where it keeps the most recent data. A client can specify that data should be read out of the cache, which will be very fast, or read directly from the device, which can be slow and will block all other reads and writes until the device read is finished. Our interface does all reads from the cache, and all writes to the device. When we create our group, we specify how often the cache values for the points in that group should be updated; the requested update rate is the same as the scan rate for those points. This is a requested update rate: the OPC server may not actually agree to update the cache that often. If the server says it can’t update the cache as quickly as we want, that information will be noted in the log file. The defining characteristic of a group is its scan rate.
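Since the defining characteristic of a group is its scan rate, tags sharing a rate naturally land in the same group. A sketch of that bucketing, assuming a hypothetical list of (tag name, scan rate) pairs:

```python
from collections import defaultdict

def groups_by_scan_rate(tags):
    """Bucket (tag_name, scan_rate_ms) pairs into one group per rate,
    mirroring how the requested cache update rate doubles as the scan rate."""
    groups = defaultdict(list)
    for name, rate_ms in tags:
        groups[rate_ms].append(name)
    return dict(groups)
```

So three tags scanned at 1000 ms and 500 ms produce two groups, one per requested update rate.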

Along with the interface, we provide a simple OPC client (PIOPCTool). It has internal help, but it is really very basic: it looks to see what OPC servers are registered on your machine and gives you a list to select from. You choose the one you want and press the Connect button. If the server is on another machine, give PIOPCTool the name of that machine and press Connect; it will ask the other machine what OPC servers it has and add them to its list. (You will need to have OPCEnum installed on both machines; that is covered later.)

If you can’t connect using the PIOPCTool client, then there's no sense even trying the interface until you've solved the problem. See the sections below on troubleshooting connections before you go any further. You can also use PIOPCTool to read and write data. Look at the help inside PIOPCTool for more instructions. If you can use Refresh and Advise to get data into PIOPCTool, the interface will be able to get data. Using SyncRead to get data only shows that the server responds to synchronous calls, it does not show that you will be able to get data into the interface, as the interface uses only asynchronous calls to get data.