PLEXOS International's Data Layer Plugin (DLP) provides application software with the ability to access, combine and organize data in a consistent, transparent manner from a wide variety of data sources. Server information can easily be integrated with desktop data. Data from different servers, even those running different database engines (e.g. SQL Server and Oracle) or operating systems, can be used by the application as if it came from a single source. Data sources are not limited to “traditional” databases – for example, attribute data from ESRI shape files, data from Excel spreadsheets and data from text files can be included. Additional data sources (such as real-time data, device status reports, custom file types, etc.) are accommodated via add-ins to the DLP. These add-ins can be rapidly designed and developed by PLEXOS to meet the needs of the application.
Applications written to use the DLP “see” one or more organized views of the data available from all of the defined data sources. These views are themselves defined within the DLP, and an application can use one or more of them interchangeably.
The DLP is created and managed using the DLPAdmin utility, which provides an easy-to-use, wizard-driven interface for managing the available data sources, views and organized data.
In the past, if an application needed data that was not already in its “native” format, a data conversion process was required to make that data accessible. A few closely integrated software packages (product suites) allowed one program’s data to be used in another, but using data from an unrelated software package was – at best – difficult. Not only were the file formats likely to be incompatible, but differences in how each application stored its data made it very difficult to establish relationships between the data sets, even after export or conversion.
One common situation is legacy data, perhaps on a different database platform than the current production systems. Often, this legacy data is organized differently and contains information that was never ported or converted to the new system. The only way to use the old data is to modify the new system to incorporate it, which is not always possible or cost-effective.
Another situation arises when new data is collected that is not (or not yet) part of an enterprise solution. To use this data, the user must either modify the application to access both data sources or must have the new data added to the enterprise solution. Quite often, neither of these options is available.
Data from non-traditional data sources is another area where data integration is difficult or very expensive. Shape file data from a GIS or CAD package is a common example. Data from real-time sources such as remote sensing units or process monitoring equipment is another.
The Data Layer Plugin (DLP) overcomes these issues. By allowing the user to connect to their data “where it lives”, establish relationships between data sources and define what data is visible to DLP-based applications, the need for data conversion or importing has been all but eliminated. Thus, the DLP is able to present the data to the application and the user as if it all comes from one source.
A DLP-based application does not need to be aware of where the data resides or how it is stored; it simply establishes a connection to the DLP engine, specifies a DLP file to use, and the DLP does all the “dirty work” of connecting and relating the various data sources.
The mapping information required to access cross-platform data and to present it to a DLP-based application is stored within a DLP (.dlp) file. The DLP engine (datalayerplugin.dll) uses this file to determine what data can be accessed, how to access it and how to relate it to the other data defined in the DLP file. The DLP file architecture consists of three core components:
A Data Source contains the connection information required to access data from a specified data source (physical location, security credentials, etc.). Multiple data sources can be defined within any DLP file, such as SQL Server/Oracle/Access databases, Excel/DBF/text files and even ESRI shape files. Once defined, information from any of the data sources can be used transparently within the DLP. It is through this mapping technology that data can be integrated and viewed “live” from applications that call on the DLP for data input.
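The idea of reading heterogeneous sources through one uniform interface can be sketched in a few lines. The sketch below is purely illustrative – `DataSourceRegistry` and its `register`/`fetch` methods are invented stand-ins for the DLP's Data Source mapping, with an in-memory SQLite database standing in for a server database and a CSV string standing in for a desktop file:

```python
import csv
import io
import sqlite3

class DataSourceRegistry:
    """Hypothetical stand-in for the DLP's Data Source mapping."""

    def __init__(self):
        self._readers = {}

    def register(self, name, reader):
        # Associate a source name with a callable yielding rows as dicts.
        self._readers[name] = reader

    def fetch(self, name):
        # Read rows from any registered source through one interface.
        return list(self._readers[name]())

registry = DataSourceRegistry()

# Source 1: an in-memory SQLite database standing in for SQL Server/Oracle.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pipeline (id INTEGER, name TEXT)")
db.execute("INSERT INTO pipeline VALUES (1, 'Line A'), (2, 'Line B')")

def read_db():
    for pid, name in db.execute("SELECT id, name FROM pipeline ORDER BY id"):
        yield {"id": pid, "name": name}

# Source 2: CSV text standing in for a desktop file (Excel, DBF, text).
CSV_DATA = "id,status\n1,active\n2,retired\n"

def read_csv():
    return csv.DictReader(io.StringIO(CSV_DATA))

registry.register("server_db", read_db)
registry.register("desktop_file", read_csv)

# The application sees both sources through the same call.
rows = registry.fetch("server_db") + registry.fetch("desktop_file")
```

The application code never needs to know which rows came from the database and which from the file – the point the Data Source component makes at enterprise scale.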
Schemas are used to present organized hierarchical views of data to an application. The creation of a schema can be broken down into two parts. Firstly, the tables that contain the required hierarchical data must be added to the Schema (these tables can come from any mapped Data Source). After the relationships between these tables have been defined, the individual fields that will be used to construct the levels within the hierarchical view can be added. Once this is done, the DLP is able to construct a live view of the data for presentation to the DLP-based application.
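The two steps of Schema construction – relating tables, then choosing the fields that form the hierarchy levels – can be illustrated with ordinary relational tables. The table and field names below are invented for the example and are not the DLP's actual format:

```python
import sqlite3

# Two related tables, as might come from any mapped Data Source.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE system  (system_id INTEGER, system_name TEXT);
    CREATE TABLE segment (segment_id INTEGER, system_id INTEGER, segment_name TEXT);
    INSERT INTO system  VALUES (1, 'North'), (2, 'South');
    INSERT INTO segment VALUES (10, 1, 'N-1'), (11, 1, 'N-2'), (20, 2, 'S-1');
""")

# Step 1: the relationship between the tables
# (segment.system_id -> system.system_id) is expressed as a join.
# Step 2: the chosen fields (system_name, then segment_name) become the
# levels of the hierarchical view.
hierarchy = {}
query = """
    SELECT s.system_name, g.segment_name
    FROM system s JOIN segment g ON g.system_id = s.system_id
    ORDER BY s.system_name, g.segment_name
"""
for system_name, segment_name in db.execute(query):
    hierarchy.setdefault(system_name, []).append(segment_name)
```

The resulting `hierarchy` maps each system to its segments – a two-level tree of the kind a Schema presents live to a DLP-based application.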
Data Groups, as their name suggests, are used to group related data into a single entity. A Data Group can consist of a single table or multiple tables (from one or more Data Sources). From these tables, any number of fields (variables) can be selected for exposure to a DLP-based application. To complete the setup of a Data Group, a relationship must be defined between the Data Group and each Schema within the DLP file.
Data extraction by a DLP-based application requires two parameters. The first selects the portions of the Data Hierarchy for which data will be extracted. These Data Hierarchy Categories may be selected from one or more Schemas. The second parameter is the Data Group. The DLP can then extract the appropriate data and present it to the application.
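A toy model of this two-parameter extraction may help fix the idea. Everything here – the field names, the `extract` signature, the `"risk_inputs"` group – is hypothetical; it only mirrors the shape of the call described above:

```python
# Sample rows as the DLP might assemble them from its mapped sources.
ROWS = [
    {"system": "North", "segment": "N-1", "length_mi": 12.5, "year": 1978},
    {"system": "North", "segment": "N-2", "length_mi": 8.0,  "year": 1992},
    {"system": "South", "segment": "S-1", "length_mi": 30.2, "year": 2001},
]

# Each Data Group exposes a chosen subset of fields (variables).
DATA_GROUPS = {
    "risk_inputs": ("segment", "length_mi", "year"),
}

def extract(categories, data_group):
    """Parameter 1: the hierarchy categories to include (here, system
    names). Parameter 2: the Data Group naming the exposed fields."""
    fields = DATA_GROUPS[data_group]
    return [{f: row[f] for f in fields}
            for row in ROWS if row["system"] in categories]

result = extract(categories={"North"}, data_group="risk_inputs")
```

Only rows in the selected categories come back, and only the fields the Data Group exposes appear in each row.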
An overview of the DLP
Once a DLP has been constructed, using the DLP engine as a data source for an application is simple. The application requests Schema (Category) and Data Group information from the DLP engine and presents this to the user for selection. Once the user has selected the categories of data and the variables of interest, these two parameters are passed to the DLP and a result set is returned. An overview of the mechanics behind this process follows:
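The request/selection/result-set cycle can be sketched from the application's point of view. `DLPEngine` and its methods below are invented stand-ins – the real engine lives in datalayerplugin.dll and its actual API is not shown here:

```python
class DLPEngine:
    """Hypothetical mock of the DLP engine, for illustration only."""

    def __init__(self, dlp_file):
        self.dlp_file = dlp_file  # mapping file created with DLPAdmin

    def list_categories(self):
        # In the real engine these come from the Schemas in the .dlp file.
        return ["North", "South"]

    def list_data_groups(self):
        # Likewise, these come from the Data Groups in the .dlp file.
        return ["risk_inputs"]

    def get_result_set(self, categories, data_group):
        # The engine joins and filters the mapped sources behind the scenes.
        if "North" in categories:
            return [{"segment": "N-1", "length_mi": 12.5}]
        return []

engine = DLPEngine("pipeline.dlp")

# 1. The application requests Schema (Category) and Data Group information
#    and presents it to the user.
categories = engine.list_categories()
groups = engine.list_data_groups()

# 2. The user's selection (hard-coded here) is passed back to the engine,
#    which returns the result set.
result_set = engine.get_result_set({categories[0]}, groups[0])
```

The application never touches a connection string or a file path – its entire interaction is the selection of categories and a Data Group.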
The first application of the Data Layer Plugin was in the oil and gas pipeline industry. The situation was a common one: a new GIS was being implemented and data was being migrated from several old systems to the new. In the process, the data model was being changed and a variety of additional databases were being combined. During this same time period, Federal regulations required that the firm establish and implement an Integrity Management Plan. This, in turn, required a risk analysis of their pipeline system.
Because the databases were in flux, some data was not yet available from server-based databases and was only available from desktop data sources. The only feasible way to perform the risk analysis was to combine server data with desktop data. DLP Technology allowed the customer to use the data that was available, “where it lived”, to perform the required analyses. The DLP combined data from the new SQL Server database with data from multiple Access databases to provide the risk analysis program (Integrity) a unified view of the data. The customer successfully completed analysis of over 64,000 miles of gas transmission pipeline.
The ease of use and flexibility of DLP-based software provided the user with an unexpected benefit. In addition to performing the risk analysis, the DLP provided a simple means for validating the migrated and desktop data. In the process, many omissions, data errors and database errors were detected and corrected.
PLEXOS’ DLP Technology brings together data from many sources, organizes it and presents it to DLP-based applications in a simple and consistent manner. Details such as the physical location, database platform and logical organization of the data are hidden from the application, so that the application can concentrate on its intended purpose. Data conversions, exports and imports are eliminated – along with the related time, cost and inevitable errors.
DLP-based applications are insulated by the DLP from changes to an organization’s underlying data sources. Changes to a data source – whether to its location or its structure – simply require remapping the affected DLP Data Sources and Data Groups, with no changes to the DLP-based application.