> Anybody out there heard of Tekmon from Teknecron (sp?)? It's a monitoring
> package and I have a SunOS version. I suspect it's a freebie.

Somehow, I don't think Teknekron is giving this away for free.

> 2. Does anyone have any doc on this thing.

Sure.

/**********************************************************************
 * 3R Software, Incorporated.  Specializing in MIS Consulting         *
 * George L. Roman    george@3RSoftware.com    408-736-3540           *
 **********************************************************************/
Vantive   Sybase   C++   Unix

-------------

http://www.tss.com/products/etk/etkwhitepaper.html#HDR19

Enterprise Toolkit
----------------------------------------------------------------------------

In today's rapidly changing business environment, enterprises face several
simultaneous challenges. These include:

* The push towards globalization. Enterprises are increasingly operating
  and competing in a global market and need to provide services and
  products, or react to changes, anywhere in the world.

* Reduced cycle times. Technological and competitive pressures demand that
  cycle times be reduced at every level of an enterprise. This includes
  everything from time-to-market for new products, to the length of
  decision-making cycles within the enterprise, to the time to respond to
  customers. This requires a high degree of flexibility and responsiveness
  within the enterprise.

* The demand for more specialized services and products. Customers are no
  longer satisfied with a one-size-fits-all approach to business. They are
  increasingly demanding---and obtaining---products and services tailored
  to their needs. Enterprises that fail to provide this level of service
  run the risk of being forced out of business.

* The trend towards decentralization and right-sizing. To be successful in
  today's market, enterprises need to be quick and nimble. This has
  accelerated a trend towards right-sizing and decentralization that is
  likely to continue for the next several years.

* The increasingly critical role of information. The role of up-to-date
  information at all levels of an enterprise cannot be over-emphasized. To
  remain competitive and flexible, up-to-date information about any aspect
  of an enterprise should be available from anywhere in the enterprise.
  Individuals requiring information should have it on demand, quickly and
  easily, and without the data overload that often accompanies it.

Today's enterprise, therefore, has to think globally, react quickly and
decisively to changes in the market, and be flexible enough to tailor its
products and services to the changing requirements of its customers. At the
same time, an enterprise needs to preserve its existing capital investment
and infrastructure---it is not economically feasible to re-build an
enterprise from scratch. It should therefore be possible to take an
incremental approach to making the transition from a traditional,
relatively static enterprise to one that is dynamic, flexible and
responsive.

Unfortunately, the computing infrastructure that underlies many enterprises
today is woefully inadequate to meet the challenges described above.
Changes to the infrastructure, or to applications built on top of it, are
notoriously difficult, slow and error-prone. New applications require long
lead times to develop, and existing applications often reflect a work
process whose time has passed. Moreover, the level of integration between
applications is minimal.
Often, it is not possible for an application to inter-operate with another
application running on the same machine, let alone with one across the
globe! Critical information is not always easy to obtain, and when it can
be obtained, it is not very timely. It is clear that an enterprise built
upon such an infrastructure will find it difficult, if not impossible, to
meet the challenges listed above.

The Teknekron Enterprise Toolkit addresses the problem of connecting all
the applications of a business. It consists of a set of software tools that
builds upon existing hardware, software and networking platforms to provide
a computing infrastructure that is truly global, flexible and responsive.
It includes tools to build highly flexible applications, allowing them to
be developed, deployed and modified rapidly and with minimal effort. It
also includes tools that allow applications to inter-operate with new or
existing applications, wherever they may be within the enterprise. Finally,
the Toolkit provides support for integrating legacy applications into the
enterprise infrastructure or for gracefully migrating away from these
applications.

In this document, we first describe the various components that are needed
within an Enterprise Toolkit and then describe the specific products that
form part of the Teknekron Enterprise Toolkit.

The Enterprise Toolkit

[Figure: the divisions of the Enterprise Toolkit and their relationships]

The Enterprise Toolkit is a set of software tools that together provide a
computing infrastructure for building highly flexible and integrated
applications and for sharing information within an enterprise. The figure
above shows the divisions of the Enterprise Toolkit, their relationship to
one another, and the components of the Toolkit that can be used to
implement the desired functionality. The divisions are:

* A foundation of enterprise-wide messaging facilities
* Information modelling tools, which span all divisions of the Toolkit
* Mission Critical Infrastructure
* Service modelling tools
* Application and database integration adapters
* Application development tools / screen painters
* Application management tools

Each of these divisions is described on the following pages.

Foundation: Enterprise-Wide Messaging Facilities

These facilities form the foundation for enterprise-wide computing and
information sharing. They provide the capability to send messages within
the enterprise in a manner that:

* is independent of the network geography (LAN, MAN, WAN)
* is independent of the network technology (Ethernet, ATM, etc.)
* is independent of hardware or operating system platform
* is independent of the locations of senders and receivers of messages
* supports multiple interaction paradigms (point-to-point, request/reply,
  publish/subscribe)
* supports varying levels of fault-tolerance

These facilities insulate applications from the distributed and
heterogeneous nature of an enterprise network. They provide a uniform
mechanism for applications to exchange messages regardless of whether the
applications are on the same machine or are halfway across the globe,
connected through multiple intervening networks. The key advantage is that
applications built using these facilities can be located or re-located
anywhere within the enterprise, possibly on different platforms, without
impacting other applications.

An important aspect of these facilities is that they support multiple
interaction paradigms. The traditional request/reply paradigm is important
for "demand-driven" applications, where the exchange of information occurs
when the consumer of information demands it. The newer publish/subscribe
paradigm is important for "event-driven" applications, where the consumer
of information expresses an interest in (or "subscribes" to) certain kinds
of information, but the actual exchange of information occurs only when
some event occurs (causing the information to be "published"). The sketch
below contrasts the two paradigms.
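To make the contrast concrete, here is a minimal C++ sketch of the two
paradigms against a hypothetical messaging API. The etk_* names are
invented for illustration (the paper does not show the Toolkit's actual
calls), and the functions are only declared, so this is a shape sketch
rather than something that can be linked and run:

    // Request/reply vs. publish/subscribe against an assumed etk_* API
    // (illustrative declarations only; not the actual Toolkit interface).
    #include <cstdio>

    struct EtkBus;                                 // opaque bus handle
    typedef void (*EtkCallback)(const char* subject, const char* data);

    EtkBus* etk_connect(const char* network);
    // Demand-driven: block until the service owning the subject answers.
    int etk_request(EtkBus*, const char* subject, const char* request,
                    char* reply, int reply_len);
    // Event-driven: register interest; the callback fires on each event.
    int etk_subscribe(EtkBus*, const char* subject, EtkCallback cb);
    int etk_publish(EtkBus*, const char* subject, const char* data);

    static void on_quote(const char* subject, const char* data) {
        std::printf("update on %s: %s\n", subject, data);
    }

    void example() {
        EtkBus* bus = etk_connect("corporate-net");

        // Request/reply: the consumer asks and waits for the answer.
        char reply[256];
        etk_request(bus, "equity.IBM.quote", "current?",
                    reply, sizeof reply);

        // Publish/subscribe: subscribe once; data arrives whenever any
        // publisher, anywhere in the enterprise, publishes on the subject.
        etk_subscribe(bus, "equity.IBM.quote", on_quote);
    }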
Finally, by implementing different levels of fault-tolerance within the
messaging facilities, applications are freed from the burden of having to
implement their own schemes to cope with failures. This decreases the time
it takes to implement and deploy new applications.

Information Modelling

Data is not the same as information, and facilities that allow messages to
be exchanged are only a first step in solving the problem of information
sharing within an enterprise. To turn unstructured data into meaningful
information, one needs information modelling tools that can be used to
describe the structure and meaning of data. The information modelling tools
should be:

* Expressive enough to be able to represent all the types of information of
  interest within an enterprise,
* Extensible, so that new types of information can be added to the system
  or existing types can be augmented,
* Dynamic, allowing new information models to be introduced into a running
  system and into existing applications.

Most object-oriented modelling tools provide sufficient expressiveness and
extensibility. The last feature requires some elaboration. In applications
developed using traditional methods, the structure and meaning of data
(that is, the information model) is known to the person developing the
application and is hard-coded into the application itself. If the
information model is changed, the application has to be re-coded, tested
and re-deployed. This is adequate for applications operating in relatively
static environments where the information model rarely changes. However, in
a dynamic and flexible enterprise, the information model may need to be
changed or extended frequently. The traditional form of application
development is inadequate in this setting.

To address this issue, the information modelling tools should allow the
information model to be represented outside of applications, in a form that
can be read and interpreted by running applications, as opposed to being
hard-coded within them. This allows highly flexible applications to be
constructed that either insulate themselves from, or adapt their behavior
to, changes in the information model. Changes in the information model thus
do not require applications to be re-coded or re-deployed. Dynamic
information modelling thus forms the key to building applications for a
flexible enterprise where change is the norm rather than the exception.
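The difference is easy to see in code. In this self-contained C++ sketch
(FieldDef and load_model are invented stand-ins, not Toolkit types), the
display logic iterates over whatever fields the model declares; in a real
system the model would be read from an external repository, so adding a
field would not require recompiling the application:

    #include <iostream>
    #include <string>
    #include <vector>

    struct FieldDef { std::string name, type; };

    // Stand-in for reading the model from an external repository; because
    // the model is data, it can change without recompiling applications.
    std::vector<FieldDef> load_model(const std::string& /*class_name*/) {
        return { {"symbol", "string"}, {"price", "decimal"} };
    }

    // Generic display logic: no field names are hard-coded here.
    void print_record(const std::vector<FieldDef>& model,
                      const std::vector<std::string>& values) {
        for (std::size_t i = 0; i < model.size() && i < values.size(); ++i)
            std::cout << model[i].name << " (" << model[i].type << ") = "
                      << values[i] << '\n';
    }

    int main() {
        print_record(load_model("Trade"), {"IBM", "101.25"});
    }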
Mission Critical Infrastructure

The Teknekron Enterprise Toolkit includes reliable broadcast protocols that
allow data exchange without triggering complex individual transmissions and
acknowledgments. The components of this infrastructure include:

* Common Information Bus Interface (CIBI (tm)), which provides a common
  interface to the various communication service disciplines,
* Super*Cell Router (tm), which provides universal connectivity by seamless
  extension of local application communications into WAN environments, and
* Remote Method Invocation (RMI (tm)), which provides for seamless
  communication within a process, between processes on one machine, or
  between machines.

The Enterprise Toolkit's foundation in the TIB platform enables the
construction of highly scalable, mission-critical architectures.

Service Modelling

Exchanging information is clearly an important aspect of enterprise
computing. A complementary aspect is the implementation of services at
various locations in an enterprise. Like information, it is useful to be
able to model services as objects. However, unlike information objects,
which flow freely from location to location within the enterprise, service
objects are relatively static. A service object is typically implemented at
one or a small number of locations. Users of the service access it through
some form of remote operation invocation.

Service modelling tools enable services to be modelled as objects and
provide a means of formally defining the interfaces and functions provided
by services. In addition, they provide the ability for users and
applications to access a service that may be implemented anywhere in an
enterprise, insulating them from the details of locating or activating the
service. Like information modelling tools, the service modelling tools
should be expressive, extensible, and dynamic enough that new types of
services can be introduced into a running system and accessed by existing
applications.

Application Integration Adapters

The messaging facilities and tools described in this section enable new
applications to be built that are integrated with one another. However, it
should also be possible to integrate legacy or third-party applications
that may not be built using these tools. Thus the Enterprise Toolkit needs
to provide the ability for such applications to inter-operate with one
another and with newer applications. This is achieved by providing support
for building "adapters" to legacy or third-party applications. These
adapters implement an information model for these applications and enable
them to appear as service objects that may be accessed just like any other
service object.

Database Integration Adapters

Most of the information in any enterprise is likely to be stored in one or
more databases. In a large enterprise, these databases may come from
different vendors. Currently, these are most probably relational databases,
although object databases may become more common in the future. However,
even if object databases become the norm, relational databases will still
continue to hold information that will be of interest within the
enterprise. The functions of the database integration tools are to:

* provide applications with a vendor-independent means of accessing
  information in databases,
* insulate applications from differences in underlying database technology
  (relational, object-oriented),
* provide an object-oriented interface to information in databases that is
  consistent with the information modelling tools.

The advantage of the last point becomes clear when one considers that
applications and adapters built with the Enterprise Toolkit use the
information modelling tools to describe their information models.
Application development and maintenance is simplified when the same model
is used when accessing databases, leaving it to the database integration
tools to take care of any translations that may be required (for example,
if the database is relational or uses a different object model).

Application Development Tools

With the shrinking cycle times facing today's enterprise, the traditional
"waterfall" method of designing applications (specify, design, implement,
deploy) is no longer applicable for many applications. Often the
application development time using this method is so long that the
requirements have changed by the time the application is deployed. Instead,
what is required is the capability for "rapid application development,"
whereby a first implementation of an application can be quickly put
together, modified iteratively as needed, and then immediately deployed.
The Enterprise Toolkit therefore contains tools that:

* Enable initial implementations of applications to be rapidly put together
  using existing high-level abstractions,
* Allow these implementations to be changed quickly and with little effort,
* Permit applications so constructed to be turned into productized versions
  with very little effort.

Note the contrast with prototyping: a prototyping tool constructs a
prototype that is later discarded, whereas in a rapid application
development tool the "prototype" is itself the final product, or can be
turned into one without much development effort.

Application Management Tools

Decentralized and distributed applications have several advantages from the
point of view of flexibility and cost-effectiveness. However, they are far
more difficult to manage than centralized applications, at least if one
attempts to manage them manually. Application management tools are
therefore an important part of the Enterprise Toolkit. These include tools
to:

* Manage different versions of information and service models
* Obtain and display status information from applications anywhere in the
  enterprise
* Reconfigure running applications remotely from anywhere in the enterprise
* Define exception conditions and have prescribed actions be taken in
  response
* Run diagnostic or analysis programs automatically or in response to a
  user request
* Maintain a log of significant events anywhere in the enterprise

Toolkit Components

The components that make up the divisions of the Teknekron Enterprise
Toolkit are described in this section.

Foundation Components

Teknekron Information Bus

The Teknekron Information Bus (tm) (TIB®) platform is a body of software
that facilitates the intelligent dissemination, organization, and
integration of data in a distributed environment. The term "distributed"
denotes any environment that runs multiple applications capable of
exchanging data, whether this be several applications running on similar or
different platforms or as part of one or more local area networks. The
distributed design of the TIB platform facilitates reliable,
high-performance exchange of data and services among applications.

As a software platform, the TIB platform implements a high-level system
environment, making it much easier to develop and maintain applications.
The TIB environment hides the boundaries of various systems and services
from programmers, thereby enabling programmers to hide these boundaries
from users. The environment also hides network configuration and
heterogeneity among machine environments.
Programming in the TIB environment enables the building of large software
systems that, though complex, are still easy to develop and easy to use.

The TIB system model is a simple, intuitive model in which applications and
services interact indirectly and anonymously through self-describing data
objects that are labelled with user-meaningful subjects. The services
generating the data objects, and the breakdown of functionality among those
services, are transparent to the applications. The TIB system model is an
information-oriented model, in contrast to the connection-oriented model
supported by most existing systems. Where the latter model is lower-level
and yields rigid, tightly-coupled systems, the TIB model is high-level and
yields systems that are flexible and de-coupled. This is achieved through
several distinct capabilities in the TIB platform:

* Source-Service Decoupling
* Subject-Based Addressing (tm)
* Data Independence
* Configuration Decoupling

Source-Service Decoupling

The highest level of decoupling is source-service decoupling, also known as
source-service independence. Generally speaking, users should not have to
specify from which service or database data is to be retrieved, nor should
developers be forced to hard-code into their software the source of
requested data. Instead, the system administrator should have the
flexibility to change services, combine separate databases into one, spread
a large database across several hosts, replace a consolidated service by
several services, or substitute one service for all or part of another
service. All of this implies that the system should support an "information
model" (rather than a traditional "service model") and should allow the
user or application to request information by "subject" (rather than by
"service").

Subject-Based Addressing (tm)

The TIB platform permits users and application developers to request
information by subject through its patented Subject-Based Addressing
capability. For each requested subject, the TIB software determines the
services that provide data on that subject and sends appropriately
formatted data requests to those services. Any number of services can
provide data on a particular subject; therefore, through a single requested
subject, data from multiple sources may be returned. The Subject-Based
Addressing capability enables users and application developers to fully
exploit the TIB platform's high-level information model. It should be noted
that this capability requires the other two levels of decoupling discussed
below.

Subjects are hierarchically structured, allowing data to be organized in
user-meaningful ways. Subjects are set up and controlled by the system
administrator, and can be customized on a per-site basis.

The TIB platform's data objects support a very powerful data model. The
model allows nested record structures, provides a clean separation of
semantic information from representational and structural information, and
supports an object-oriented model. Application developers can easily
construct new "classes", either programmatically or through the use of the
Teknekron Design Language (TDL (tm)) facility, which is described in detail
in the next section.
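As a rough illustration of nested record structures, the following C++
fragment builds a "Trade" record that contains an "Instrument" record. The
TibClass/TibObject API is an invented approximation (declarations only),
not the actual TIB interface:

    struct TibClass;                 // run-time class descriptor (assumed)
    struct TibObject;                // self-describing instance (assumed)

    TibClass*  tib_class_create(const char* name);
    void       tib_class_add_field(TibClass*, const char* field,
                                   const char* type);
    TibObject* tib_object_create(TibClass*);
    void       tib_object_set(TibObject*, const char* field,
                              const char* value);
    void       tib_object_set_object(TibObject*, const char* field,
                                     TibObject* nested);

    void example() {
        // Classes can be constructed at run time...
        TibClass* instrument = tib_class_create("Instrument");
        tib_class_add_field(instrument, "symbol",   "string");
        tib_class_add_field(instrument, "exchange", "string");

        TibClass* trade = tib_class_create("Trade");
        tib_class_add_field(trade, "what",  "Instrument");  // nested class
        tib_class_add_field(trade, "price", "string");

        // ...and instances nest, unlike flat database records.
        TibObject* ibm = tib_object_create(instrument);
        tib_object_set(ibm, "symbol", "IBM");
        tib_object_set(ibm, "exchange", "NYSE");

        TibObject* t = tib_object_create(trade);
        tib_object_set_object(t, "what", ibm);
        tib_object_set(t, "price", "101.25");
    }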
Data Independence

The next level of decoupling, known as data independence, is provided
through the use of self-describing data objects. Each data object contains
enough descriptive information to allow receiving applications to interpret
the format, organization, and simple semantics of the information inside.
The advantages of a powerful, general-purpose self-describing data
capability are:

* Automatic data conversions are richly supported. Powerful conversion
  utilities can be written that can convert any data class to another
  format. Such utilities already exist for converting to a display and
  network format, and for converting TIB platform data to and from a
  relational database format. These utilities are powerful and general, so
  that they apply not only to existing classes, but also to new classes as
  they are defined. Application programmers can easily add new automatic
  conversion utilities.

* Existing data classes can evolve and change non-programmatically. Class
  definitions can be changed using the TDL facility, and most changes do
  not require application code changes or even recompiling the application
  code.

* Complex data structures can be supported. In contrast to database records
  and most data interchange formats, the TIB model supports nesting and
  lists, so that very complex data structures can be defined and
  transmitted. Highly structured data are easily supported.

The degree of data independence achieved in the TIB platform's
self-describing data objects is greater than that of existing data exchange
standards such as ASN.1. Also, the TIB model permits much more flexible and
complex data structures than relational database systems.
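What self-description buys a receiver can be sketched as a generic routine
that walks a message's own field descriptions, and so keeps working for
classes defined after it was compiled. The accessor names here are
assumptions (declared but not implemented), not the actual TIB API:

    #include <cstdio>

    struct TibObject;
    int         tib_object_field_count(const TibObject*);
    const char* tib_object_field_name(const TibObject*, int index);
    const char* tib_object_field_type(const TibObject*, int index);
    const char* tib_object_field_text(const TibObject*, int index);

    // A generic display utility: because each object describes itself,
    // this works even for classes defined after this code was compiled,
    // which is how conversion utilities can apply to new classes.
    void dump(const TibObject* obj) {
        for (int i = 0; i < tib_object_field_count(obj); ++i)
            std::printf("%s : %s = %s\n",
                        tib_object_field_name(obj, i),
                        tib_object_field_type(obj, i),
                        tib_object_field_text(obj, i));
    }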
Configuration Decoupling

The fourth level of decoupling, configuration decoupling, removes
application dependency on network topology and system configuration. It
also removes location dependency concerning where services or applications
are executed. This is achieved by "location-transparent" naming. All
software components, including internal TIB components, refer to services
and applications in a location-independent fashion. Network addresses and
host names are never "hard-coded." Nor is there a need for a centralized
configuration database that defines the location of all system components.
Instead, system components can be "discovered" by using the communication
facilities of the TIB platform.

Information Modeling Component

Teknekron Design Language (TDL (tm))

The Teknekron Design Language (TDL) facility is an object-oriented way of
representing data and simple procedures in an exportable format. It
provides an object-oriented framework with dynamic (run-time) class
management for the standard C and C++ environment. It was developed to
extend Teknekron's tools for representing configuration data, sequences of
instructions, and self-describing objects. Most importantly, it seeks to
join these needs into a consistent and efficient framework for manipulating
dynamic classes and objects with their associated data and methods.

The difference between the TDL facility and other representation systems is
that it is focused on the design and modeling aspect of the problem,
providing facilities to represent classes and relationships, rather than
focusing on the detailed data representation and packing issues. For
instance, you can use the TDL facility to design and represent the data
model for a set of financial instruments and their relationships. The
resulting information would be stored as TDL objects in memory, which can
have method attachment and inheritance.

Language Framework

The term "language" in the TDL name is somewhat misleading, as the TDL
facility is not a language in which programs are written. Instead, it is a
facility, or library, that programs use; but since the facility is
characterized by mechanisms for describing data and procedures, the need
for a formal language soon becomes apparent. By making the formal language
an integral part of the system, the TDL facility provides a more complete
and consistent basis for modeling, representation, and object management.

The formal language framework for the TDL facility is derived from the
Common LISP language and its object-oriented extension, CLOS. The reasons
for choosing this syntax are primarily its simplicity of parsing for both
procedures and data, and its extensibility (the LISP syntax has been around
for over 30 years, adapting itself to several new concepts during that
time). However, an understanding of LISP is not needed in order to use the
TDL facility for class and object management from a C or C++ program.
Instead, there are a few simple API calls which can be used. The TDL
facility provides run-time support for these operations without burdening
the application program with the large size of a LISP interpreter (TDL is
more than an order of magnitude smaller). A number of applications
currently use the TDL facility as the primary representation or design
element, and have exploited its flexibility to construct a powerful
reconfigurable environment.

Design Goals

The TDL facility was developed to meet the increasingly complex
configuration and design representation needs of applications, such as
defining:

* application layout, such as the sizes, strings, and colors of the various
  objects
* data classes, with methods on those classes
* mapping operations, which describe how a data object is to be shown in a
  view object
* scripting operations, including the specification of scripts with
  conditional flow of control and iteration
* computations

In addition, the TDL facility has the following properties:

* Portability to many platforms, including DOS, Windows, VMS, and the many
  variants of UNIX
* Efficiency
* Extensibility
* Ease of maintenance

The resulting capabilities of the TDL facility form a very effective basic
data management and representation facility for programs. In a server
program such as a feed handler, the TDL facility can provide the run-time
data definition and management tools needed to make the server highly
configurable for different data sources. In a front-end application, the
TDL facility offers a framework for data management as well as application
management, such as the appearance and actions of the application and other
front-end customization choices.

In some applications, the TDL classes and their associated objects have
been used to define a kind of "abstract front-end," such as classes for
windows and menus, which is portable to other window systems. This provides
very valuable flexibility in porting. Note that the TDL facility provides
the class and object mechanism, not the set of window-system related
classes; those are separated into an upper layer. This split frees the TDL
facility itself from dependencies, so that it can run on a range of
platforms.
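The "few simple API calls" usage pattern might look roughly like this from
C++. Both the tdl_* names and the LISP-like definition text are assumptions
made for illustration (declarations only), not the documented TDL syntax:

    struct TdlEnv;
    struct TdlObject;

    TdlEnv*    tdl_init();
    // Evaluate a LISP-like TDL definition; the application passes the
    // text through and needs no knowledge of LISP itself.
    int        tdl_eval(TdlEnv*, const char* definition);
    TdlObject* tdl_make_instance(TdlEnv*, const char* class_name);
    int        tdl_set_slot(TdlObject*, const char* slot,
                            const char* value);

    void example() {
        TdlEnv* env = tdl_init();

        // Because the definition is data, it could equally be read from
        // a configuration file and changed without recompiling.
        tdl_eval(env, "(defclass order () (symbol quantity price))");

        TdlObject* o = tdl_make_instance(env, "order");
        tdl_set_slot(o, "symbol",   "IBM");
        tdl_set_slot(o, "quantity", "100");
    }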
Mission Critical Infrastructure

Common Information Bus Interface (CIBI (tm))

The Common Information Bus Interface (CIBI) facility is built on top of the
TIB platform, and uses the TIB platform's high-speed, highly reliable
communication facilities. It also uses the Subject-Based Addressing
capability, and can be used in combination with self-describing data
packages such as the TDL facility. The Subject-Based Addressing capability
provides a simple rendezvous mechanism for publishers and subscribers.

The CIBI facility is a common interface to various communication service
disciplines. Currently these service disciplines are:

* ESA (tm) - Extended Subject-Addressed,
* GDSS (tm) - Guaranteed Delivery Subject Service,
* TIBQueue (tm) - TIB Queue.

All of the service disciplines support the publish/subscribe communication
paradigm, and deliver messages in the order in which the messages are
published.

The ESA service discipline is a simple discipline with low protocol
overhead. It provides reliable communication between publishers and
subscribers but no application-level fault-tolerance. It uses a
straightforward data dissemination algorithm. The advantage of ESA is that
it is fast, computationally cheap, and imposes no system administration
overhead. It is ideal for non-critical services and end-user applications.

The GDSS service discipline provides guaranteed message delivery. Unlike
the ESA discipline, subscriptions are not implicitly cancelled when a
subscriber process exits; they must be explicitly cancelled. As long as a
subscriber has not cancelled its subscription, it is able to receive all
the messages published since the beginning of its subscription, despite any
crashes it or the publisher may have. The GDSS server provides a third
party to carry GDSS messages from senders to receivers. With the GDSS
server, the sender and receiver need never run concurrently.

The TIBQueue discipline also provides guaranteed message delivery. However,
under TIBQueue, each message published is delivered to one subscriber,
rather than to all subscribers. The TIBQueue discipline therefore
implements distributed queues. One use of this service is in implementing
services using redundant servers.

The following table highlights the similarities and differences of the
services accessible via the CIBI facility.

Table 1: Service Descriptions

  Package     Fault        Number of             Number of
  Name        Tolerance    Potential Receivers   Actual Receivers
  ----------------------------------------------------------------
  ESA         Reliable     Multiple              Multiple
  GDSS        Guaranteed   Multiple              Multiple
  TIBQueue    Guaranteed   Multiple              Single
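From an application's point of view, a common interface over the three
disciplines might reduce to choosing the discipline when a session is
opened, as in this sketch (the cibi_* names and the enumeration are
illustrative assumptions, declared only):

    struct CibiSession;
    enum CibiDiscipline { CIBI_ESA, CIBI_GDSS, CIBI_TIBQUEUE };

    typedef void (*CibiCallback)(const char* subject, const void* msg);

    CibiSession* cibi_open(CibiDiscipline d);
    int cibi_publish(CibiSession*, const char* subject,
                     const void* msg, int len);
    int cibi_subscribe(CibiSession*, const char* subject, CibiCallback cb);

    void example() {
        // Fast, low-overhead delivery for a non-critical end-user display:
        CibiSession* quotes = cibi_open(CIBI_ESA);

        // Guaranteed delivery for messages that must survive crashes; the
        // subscription persists until explicitly cancelled:
        CibiSession* orders = cibi_open(CIBI_GDSS);

        // A distributed queue: each published message goes to exactly one
        // of the redundant subscribing servers:
        CibiSession* work = cibi_open(CIBI_TIBQUEUE);

        (void)quotes; (void)orders; (void)work;
    }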
Super*Cell Router (tm)

The Super*Cell architecture was invented to meet the needs of
enterprise-wide and inter-enterprise information dissemination and
integration. Its innovative concepts represent a quantum leap forward for
building and supporting large-scale systems---systems which can scale from
1 node to 100,000 nodes. Moreover, its broad capability in accommodating a
wide range of network topologies enables it to evolve with an enterprise as
it grows and as new higher-capacity networking technologies are introduced.

The technical objectives of the Super*Cell architecture are:

* Super-scalability across orders of magnitude
* Enterprise-wide and inter-enterprise data exchange
* Support for multiple administration domains with local autonomy
* Support for multiple networking technologies within the same enterprise
* Support for a wide variety of network technologies and topologies
* Graceful evolution to new topologies and technologies
* Simple system and network administration

The Super*Cell architecture provides local autonomy by supporting a
federated information dissemination system. As the term implies, a
"federated architecture" defines a common framework for exchanging
information among semi-autonomous groups. The framework defines how
information can be exchanged among groups, while allowing each local system
administrator to specify exactly what information is exported for exchange.

The architecture is based on the fundamental notion of Cells, SuperCells,
and SuperSuperCells. The Cell represents the smallest unit allowing
independent system administration. It represents a single TIB environment,
including a single subject naming context.

The technical characteristics of the Super*Cell Router package are
summarized in Table 2.

Table 2: Super*Cell Router Technical Characteristics

  Technical Characteristic          Description
  --------------------------------------------------------------------------
  Ordered Message Delivery          Super*Cell routers preserve the sending
                                    order of messages.
  Fault-Tolerant Message Delivery   Reliable delivery ensures message
                                    delivery despite transient network
                                    failures. A guaranteed delivery
                                    protocol can be run on top of a
                                    Super*Cell router.
  Federated Control                 Each cell determines its own subject
                                    naming conventions. Only subjects that
                                    are shared between cells need to be
                                    coordinated. Cell configuration is
                                    independent of other cells.
  Self-Configuring                  New sites can be introduced into a
                                    cell, or new cells into a supercell,
                                    dynamically.
  High-Performance                  Communication between cells is
                                    pipelined.
  Efficient                         Super*Cell routers are designed to
                                    minimize the consumption of limited WAN
                                    bandwidth.
  Subject-Based Addressing          Super*Cell routers use TSS's patented
                                    Subject-Based Addressing protocols,
                                    where data items are identified by
                                    human-readable TIB subjects, making it
                                    easy for independent applications to
                                    exchange data.
  Self-Describing Data              Applications can optionally utilize the
                                    TIB platform's self-describing data
                                    utilities to ensure that messages are
                                    understood by all participants.

The Super*Cell Router uses daemons to interconnect cells. Within its cell,
a daemon acts as a normal TIB application: it publishes and subscribes to
data using the standard TIB communication environment. However, the daemons
can also communicate with each other directly. In this way, one daemon can
receive a message in one cell and have the message re-published in another
cell by another daemon. Each daemon acts on behalf of all applications in
all other cells.
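The daemon's forwarding role can be sketched in a few lines. Everything
here is an invented shape (declarations only), not the router's actual
code: a daemon subscribes inside its cell to the subjects its administrator
exports, forwards matching messages to a peer daemon, and re-publishes
messages received from peers into its own cell:

    struct TibSession;
    struct PeerLink;   // daemon-to-daemon connection across the WAN

    int tib_subscribe(TibSession*, const char* subject,
                      void (*cb)(const char* subject,
                                 const void* msg, int len));
    int tib_publish(TibSession*, const char* subject,
                    const void* msg, int len);
    int peer_send(PeerLink*, const char* subject,
                  const void* msg, int len);

    TibSession* g_cell;   // this daemon's local cell
    PeerLink*   g_peer;   // daemon in the remote cell

    // Local side: forward only subjects this cell's administrator exports.
    void on_exported(const char* subject, const void* msg, int len) {
        peer_send(g_peer, subject, msg, len);
    }

    // Remote side: re-publish into the local cell, so applications there
    // receive the message as ordinary TIB traffic.
    void on_peer_message(const char* subject, const void* msg, int len) {
        tib_publish(g_cell, subject, msg, len);
    }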
RMI (tm)

The TIB RMI package provides remote method invocation for use in a
distributed object-oriented system. It can be used for seamless
communication within a process, between processes on one machine, or
between machines. Object name references are location-independent and
anonymous, so that the application does not need to explicitly locate a
remote object. Despite this flexibility, no central server is required to
deliver messages. Also, hidden optimizations are built into the protocol so
that repeated references to one object are streamlined.

The TIB RMI package is built on top of the TIB platform, utilizing its
high-speed, highly reliable communication facilities. The TIB RMI package
also uses the TIB platform's Subject-Based Addressing capability, and can
optionally use its self-describing data facility. The Subject-Based
Addressing capability provides the mechanism for initially communicating
with objects, and it does not impose any system administration overhead.
The self-describing data facility ensures that messages exchanged across
the TIB platform will be understood by all parties, and it can be used as
part of the larger object model.

The technical characteristics of the TIB RMI package are as follows:

Table 3: Summary of Technical Characteristics

  ordered message delivery   Messages are delivered in the order sent.
                             Messages are never lost or duplicated if the
                             sender and receiver do not crash.
  decentralized control      The TIB RMI package requires no centralized
                             facilities or resources, thereby avoiding
                             single points of failure and potential
                             performance bottlenecks.
  self-administering         The TIB RMI package is self-administering,
                             requiring no manual system set-up or ongoing
                             system administration. This enables
                             applications to create new object servers
                             dynamically, on an as-needed basis.
  high-performance           Through its decentralized design, the TIB RMI
                             package has been optimized to provide high
                             message throughput.
  event-driven               Like all TIB-based utilities, the TIB RMI
                             package supports event-driven programming.
                             Hence, the package can easily be used by
                             event-driven programs, such as GUI-based
                             programs and most "server" programs.
                             Event-driven programming is required for
                             real-time and "near" real-time programs, and
                             it provides better message throughput and
                             decreased communication latency.
  Subject-Based Addressing   Objects are identified by programmer-friendly
                             TIB subjects, making it easy for independent
                             applications to reference the same object.
  self-describing data       Applications can optionally utilize the TIB
                             platform's self-describing data utilities to
                             ensure that messages are understood by all
                             participants.
  location independence      Objects are addressed by name rather than by
                             address, allowing servers to migrate without
                             modifying clients or requiring systems
                             administration.
  asynchronous               Applications do not have to wait for a reply
                             synchronously; therefore, they can do several
                             jobs at once.
  hierarchical               The subject space is divided into several
                             levels for convenient grouping and
                             subdivision. Also, there is a second level of
                             subjects, called domain subjects, that can be
                             used for additional refinement.
  load-balancing             If the same object is served by multiple
                             servers, RMI can automatically route requests
                             to different servers. RMI clients can,
                             optionally, direct requests to a specific
                             server.
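A client-side sketch of these characteristics, with an assumed rmi_* API
(declarations only; not the documented package interface), might look like
this:

    #include <cstdio>

    struct RmiObjectRef;

    // Resolve by name; no host, port, or central server is involved.
    RmiObjectRef* rmi_lookup(const char* object_name);

    // Synchronous invocation: blocks for the reply.
    int rmi_invoke(RmiObjectRef*, const char* method,
                   const char* args, char* reply, int reply_len);

    // Asynchronous invocation: returns immediately; the reply is
    // delivered later, so the caller can do several jobs at once.
    int rmi_invoke_async(RmiObjectRef*, const char* method,
                         const char* args,
                         void (*on_reply)(const char* reply));

    void example() {
        // The pricing server can migrate to another machine without this
        // client changing: the name, not an address, identifies it.
        RmiObjectRef* pricer = rmi_lookup("trading.pricing-service");

        char reply[128];
        rmi_invoke(pricer, "price", "IBM", reply, sizeof reply);

        // Meanwhile, continue working while a slower request is in flight.
        rmi_invoke_async(pricer, "revalue-portfolio", "desk-7",
                         [](const char* r) { std::printf("%s\n", r); });
    }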
Service Modelling Component

Remote Object Framework (ROF (tm))

The Remote Object Framework (ROF) facility brings the advantages of
object-oriented technology to client-server computing. The ROF facility
permits servers to be implemented as remote objects---objects that can be
invoked from a remote location. The interfaces of such servers are defined
in an object-oriented language, which allows inheritance relationships
between interfaces to be captured in the interface itself. Clients invoke
remote objects by invoking methods on local "proxy" objects that are
automatically created by the ROF facility. The ROF facility takes care of
the details of locating servers, packing and unpacking arguments at the
client and server ends (including machine-dependent conversions), and
dispatching invocations to servers. By turning servers into objects, the
ROF facility allows servers to be composed from other servers, extended, or
re-implemented in a manner that is transparent to clients. ROF thus forms
the foundation for building a highly flexible and dynamic distributed
system.

In a distributed system, it is useful to distinguish between information
objects and service objects. Information objects encapsulate data that is
used within an application or passed between applications. Service objects
are applications that implement operations that may be invoked by other
applications, possibly from remote sites. Information objects are described
using classes and inheritance relationships. Service objects have
interfaces that describe the operations they implement and the inputs and
outputs of these operations; these interfaces may have inheritance
relationships. The figure below illustrates these two types of objects.
Note that the inputs and outputs of the operations of service objects are
information objects.

[Figure: information objects versus service objects]

The ROF facility handles service objects. It can be used to describe the
interfaces of service objects, and the stubs generated by the ROF facility
eliminate the need to write code to locate services, pack and unpack
arguments, or dispatch incoming invocations. The ROF facility thus provides
a uniform and convenient way to access services in a distributed system. It
simplifies their implementation by automating many of the functions
required to access distributed services. By separating interfaces from
implementations, it insulates applications from changes in the
implementation of a service, thereby resulting in a more extensible system.
ROF interfaces can be described either in TDL definitions or by using the
CORBA Interface Definition Language (IDL).

A feature that distinguishes the ROF facility from other remote object
systems is its level of dynamism. Interfaces can be defined and evaluated
at run time, thereby allowing new types of services to be introduced into a
running system. It is also possible to generate stubs and register new
application implementation objects at run time. So the ROF facility not
only provides the means for building an extensible distributed system, it
also permits extensions to be made to a running system.

The ROF facility allows a service to be implemented by one or more actual
servers. Clients can choose to be completely unaware of replication, in
which case the ROF facility will automatically choose one of the available
servers. Alternatively, a client can choose to be informed about available
servers and can implement its own selection policy. On the server side, a
server can choose, at run time, whether or not to respond to a particular
client (e.g., based on the load on the server). The Remote Object Framework
thus provides the mechanisms to construct highly robust and flexible
distributed services.
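The client-side programming model might resemble the following sketch. The
QuoteService interface and rof_bind_quote_service function are invented to
suggest what a generated stub and binding call could look like; they are
not actual ROF output:

    #include <string>

    // What a stub generated from a TDL or CORBA IDL interface definition
    // might resemble: an abstract interface the client codes against.
    class QuoteService {
    public:
        virtual ~QuoteService() {}
        virtual double last_price(const std::string& symbol) = 0;
    };

    // The framework would supply a proxy implementing the interface by
    // forwarding each call to whichever server currently offers the
    // service (declaration only; assumed shape).
    QuoteService* rof_bind_quote_service(const char* service_name);

    void example() {
        QuoteService* q = rof_bind_quote_service("market.quotes");
        double p = q->last_price("IBM");  // looks local; runs remotely
        (void)p;
    }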
Application/Database Integration Adapters

Customer/Third-Party Adapters

A key concept of the Enterprise Toolkit is the use of adapters.
Applications and services are connected to the TIB platform with the use of
adapters that allow data to pass freely between the TIB network and the
application. Standard adapters are available for several spreadsheets,
including Applix (tm), Lotus 1-2-3 (tm), and WingZ (tm). Adapters for other
commonly requested applications are available, and custom adapters are
easily written for new or legacy applications. Once an application is
"plugged" onto the TIB platform, it is immediately available to all the
other services and applications on the network. The interface need only be
defined once, after which it may be used by anything on the platform.

Database Integration Service (DBIS)

The Database Integration Service (DBIS) is a vendor-independent database
interface for manipulating data stored in relational database systems. It
supports the implementation of higher-level software components, such as
the ODB interface, which manipulate objects defined through the Teknekron
Design Language. DBIS provides a set of functions that are patterned after
the emerging SQL Call Level Interface (CLI) standard under development by
X/Open Co., Ltd. Table 4 lists the X/Open CLI functions available through
the DBIS interface.

Table 4: DBIS Interface Functions

  DBIS function         Description
  --------------------------------------------------------------------------
  Allocate and Deallocate
    SQLAllocConnect()   Allocate a connection handle.
    SQLAllocEnv()       Allocate an environment handle.
    SQLAllocStmt()      Allocate a statement handle.
    SQLFreeConnect()    Free a connection handle.
    SQLFreeEnv()        Free an environment handle.
    SQLFreeStmt()       Free a statement handle.
  Connection
    SQLConnect()        Open a connection to a server.
    SQLDisconnect()     Close a connection to a server.
  Transaction Control
    SQLTransact()       Commit or roll back a transaction.
  Executing SQL Statements
    SQLBindParam()      Define storage for a parameter in an SQL statement.
    SQLExecDirect()     Execute an SQL statement directly.
    SQLExecute()        Execute a prepared SQL statement.
    SQLGetCursorName()  Get the name of a cursor.
    SQLPrepare()        Prepare a statement for later execution.
    SQLSetCursorName()  Set the name of a cursor.
    SQLSetParamValue()  Set a parameter value.
  Receiving Results
    SQLBindCol()        Define storage for a column in a result set.
    SQLColAttribute()   Describe attributes of a single column.
    SQLDescribeCol()    Describe a column of a result set.
    SQLFetch()          Get the next row of a result set.
    SQLGetCol()         Retrieve one column of a row of the result set.
    SQLNumResultCols()  Get the number of columns in a result set.
    SQLRowCount()       Get the number of rows affected by an SQL statement.
  Error Handling and Miscellaneous
    SQLCancel()         Attempt to cancel execution of an SQL statement.
    SQLError()          Return error information associated with a handle.

DBIS supports query invocations that correspond to the PREPARE/EXECUTE and
EXECUTE IMMEDIATE mechanisms in dynamic SQL. PREPARE/EXECUTE allows the
cost of parsing and optimizing a statement to be amortized over multiple
executions (possibly with different input parameters). EXECUTE IMMEDIATE
sends the SQL statement to the database system for direct execution. By
default, SQL statements are executed in the context of a multi-statement
transaction that is automatically started. Each multi-statement transaction
must be explicitly committed or aborted using SQLTransact().
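A typical DBIS session, using the function names from Table 4, might look
like the sketch below. The paper does not give the C signatures or header
file, so the handle types and argument lists here are assumptions patterned
on the X/Open CLI draft the text mentions:

    // Assumed handle types and signatures, patterned on the X/Open CLI;
    // the real DBIS header would supply the actual declarations.
    typedef void* HENV;  typedef void* HDBC;  typedef void* HSTMT;

    int SQLAllocEnv(HENV*);
    int SQLAllocConnect(HENV, HDBC*);
    int SQLConnect(HDBC, const char* server, const char* user,
                   const char* password);
    int SQLAllocStmt(HDBC, HSTMT*);
    int SQLExecDirect(HSTMT, const char* sql);
    int SQLError(HENV, HDBC, HSTMT, char* message, int message_len);
    int SQLTransact(HENV, HDBC, int completion_type);
    int SQLFreeStmt(HSTMT);
    int SQLDisconnect(HDBC);
    int SQLFreeConnect(HDBC);
    int SQLFreeEnv(HENV);

    void example() {
        HENV henv;  HDBC hdbc;  HSTMT hstmt;

        SQLAllocEnv(&henv);
        SQLAllocConnect(henv, &hdbc);
        SQLConnect(hdbc, "sales_db", "user", "password");
        SQLAllocStmt(hdbc, &hstmt);

        // EXECUTE IMMEDIATE style: parse, optimize, and run in one call.
        if (SQLExecDirect(hstmt, "UPDATE accounts SET region = 'WEST' "
                                 "WHERE region = 'W'") != 0) {
            // A call may produce several diagnostics; drain them one at
            // a time via SQLError, as described below.
            char msg[256];
            while (SQLError(henv, hdbc, hstmt, msg, sizeof msg) == 0) {
                /* log msg */
            }
        }

        // Statements ran inside an implicitly started multi-statement
        // transaction, which must be ended explicitly.
        SQLTransact(henv, hdbc, 0 /* assumed code for COMMIT */);

        SQLFreeStmt(hstmt);
        SQLDisconnect(hdbc);
        SQLFreeConnect(hdbc);
        SQLFreeEnv(henv);
    }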
Each DBIS interface function returns a small integer (as the function's
return code) to indicate basic success, warning, or failure. When the
return code indicates warning or failure, the SQLError() function can be
used to obtain more detailed diagnostic information. In general, multiple
error or informational messages may result from a DBIS interface function
call, and SQLError() can be invoked multiple times in order to retrieve
these messages one at a time. The rationale for this design is to provide a
simple way for applications to handle the basic control flow, while also
allowing applications, at their own discretion, to determine the specific
causes of failure.

DBIS is available as a C library in three packages:

* As a front-end to a regular, non-replicated database
* As a front-end to a synchronously replicated database, for the purposes
  of fault-tolerance and load balancing
* As a front-end to an asynchronously replicated database whose remote
  replicas can be used for decision-support (read-only) purposes

Object Database (ODB (tm))

As the advantages of object-oriented software design techniques become
increasingly apparent, many applications are representing information as
objects. Most corporate information, however, continues to be stored in
commercially available relational database management systems like Oracle
and Sybase. To bridge this gap between applications that manipulate objects
and relational databases that manage relations, the Object Database, or
ODB, provides an object-oriented front end to any relational database.

In the ODB facility, classes and inheritance relationships are described
using the Teknekron Design Language. The ODB facility automatically
converts TDL objects into relations, and vice versa, as illustrated in the
figure below.

[Figure: ODB converting TDL objects to and from relational tables]

The ODB facility insulates applications from the relational model and
allows developers to work in an object-oriented environment. The ODB
facility takes care of the details of decomposing application objects into
SQL statements that insert rows into database tables, and of reconstructing
objects from the information in the rows of the tables. Objects are defined
by class definitions in TDL. Moreover, the ODB facility supports complex
objects (objects that contain other objects, by value or by reference) as
well as varying-length list objects.

The ODB facility is a service that stores, retrieves, updates, and deletes
objects. Applications can compose statements and queries as multi-statement
atomic transactions. This capability takes advantage of a relational
database's forte: a sequence of statements either executes successfully to
completion and its changes are made permanent, or it fails completely and
the changes are undone.

The ODB facility has an interface defined using the ROF capabilities. This
means that any application using the ROF facility can access the ODB
facility. In particular, applications built using the ObjectSheet (tm) tool
can fetch and store complex objects to and from a relational database.
Writing this kind of application is significantly easier than explicitly
writing the SQL statements necessary in conventional database application
builders.

The ODB facility is available both as a library and as a server. An ODB
server appears to a database as a single application, but in fact handles
the requests of multiple clients. Clients communicate with servers using
the TIB platform. Multiple instances of the ODB server can be configured,
providing increased concurrency, load sharing, and robustness.

The ODB capture package provides a link between ODB and the powerful
publish/subscribe paradigm supported by the TIB platform. It automatically
captures objects published on the TIB platform under specific subjects, and
uses ODB to store the captured objects in a database. Because the
publish/subscribe paradigm allows a publisher of objects to be located
anywhere, and even to change its location transparently, the capture
package provides an extremely convenient mechanism for capturing
dynamically changing data.
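In rough outline, an application using ODB might store and fetch a
TDL-defined object like this; the odb_* calls are invented shapes
(declarations only), not the documented ODB interface:

    struct OdbSession;
    struct TdlObject;

    OdbSession* odb_connect(const char* database);
    int  odb_begin(OdbSession*);                 // multi-statement txn
    int  odb_store(OdbSession*, TdlObject* obj); // object -> table rows
    TdlObject* odb_fetch(OdbSession*, const char* class_name,
                         const char* key);       // rows -> object
    int  odb_commit(OdbSession*);                // all-or-nothing

    void example(TdlObject* trade) {
        OdbSession* db = odb_connect("positions_db");

        // The application never writes INSERT/SELECT statements: ODB
        // decomposes the (possibly nested) object into rows and
        // reconstructs it on the way back out.
        odb_begin(db);
        odb_store(db, trade);
        TdlObject* again = odb_fetch(db, "Trade", "trade-42");
        odb_commit(db);
        (void)again;
    }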
Application Development Tools

MarketSheet®

The MarketSheet application is a live information presentation tool that
displays vital real-time data in a host of formats. Besides providing
access to built-in definitions that allow administrators to create screens
familiar to end users, arrangements of objects can be reused to create
customized displays. The MarketSheet tool's pre-built objects include
lists, graphs, tickers, and tables, and users can create collections of
sheets containing any number of objects in any desired configuration. Users
control size, highlighting, color, fonts, and alert notifications.

With MarketScript (tm), the MarketSheet tool's built-in scripting language,
users can extend the functionality of the MarketSheet application,
customizing it for specific requirements. Users can define sequences of
commonly used actions and customize object behaviors. Scripts can carry out
any operation in the end-user environment and can be triggered by data
updates, alerts, and end-user inputs. Scripts are expressed in a syntax
similar to C++ that supports global and local variables, class definitions,
and method definitions.

The MarketSheet application allows users to access information from any
source, including paged and elementized data feeds, internal programs,
ticker plants, and databases.

ObjectSheet (tm)

The ObjectSheet application is a model-driven application-building tool
that has been designed to minimize the effort needed to develop Enterprise
Toolkit-based distributed applications. Although it can be used effectively
to build stand-alone applications, it provides the most leverage when it is
used to build cooperating distributed applications. Such applications can
be publishers, subscribers, clients, or servers, or any combination
thereof.

Like Smalltalk, the ObjectSheet application provides a complete
object-oriented application development environment. However, the
ObjectSheet application is considerably more light-weight and is,
therefore, faster, easier to learn, and easier to interface to external
software. Application builders such as PowerBuilder provide powerful
database access facilities but lack built-in messaging support. The
ObjectSheet application, on the other hand, has full access to both the
Enterprise Toolkit's messaging and vendor-independent database facilities.

Key design objectives and architectural elements of the ObjectSheet
application's design include:

* Rapid application development. A complete application, including a
  graphical user interface and its underlying data model and behavior, can
  be developed without any time-consuming code/compile/debug cycles. A
  WYSIWYG UI editor eliminates the need for window-system coding, while the
  ObjectSheet application's interpreter allows developers to quickly script
  and test application behavior. A TDL debugger facilitates script
  troubleshooting.

* Multi-platform support. The ObjectSheet application's kernel support for
  user interface objects is built using Visix's Galaxy Application
  Environment. This allows one code base to support Sun, DEC, SGI, HP, and
  IBM Unix-based workstations; Microsoft Windows NT and Windows 3.1 PCs;
  Apple Macintosh; DEC VAX/OpenVMS; and IBM OS/2 platforms.
  In conjunction with the TDL facility's platform-independent wire format,
  this flexibility allows a set of cooperating applications to be
  arbitrarily distributed across a heterogeneous network. In addition,
  applications can be developed on one platform for eventual deployment on
  another.

* Model-driven applications. An ObjectSheet application consists of the
  ObjectSheet kernel and a TDL-encoded description of the application. The
  kernel can be thought of as the combination of the TDL interpreter and a
  hierarchy of TDL objects that encapsulate generic user-interface display
  and event management facilities. At runtime, the kernel interprets a
  TDL-encoded application model that expresses the desired application
  appearance, behavior, and data model. This allows the same executable to
  be used for a wide variety of applications and permits applications to be
  enhanced or tailored in the field without recompiling and relinking.

* Visual object/data model integration. ObjectSheet kernel functions
  automatically maintain consistency between the information being
  displayed by the user interface and the associated components of the data
  model. This capability distinguishes the ObjectSheet application from
  other application or user interface builders, which tend to ignore the
  underlying semantics of the application. An ObjectSheet application's
  TDL-based implementation allows for convenient integration with
  Enterprise Toolkit service modelling and database integration adapters by
  eliminating the need for awkward translations between the TDL facility
  and another data format. All CIBI, ROF, and ODB facilities are directly
  available to ObjectSheet applications.

* Interpreter-based control. Low-level, computationally expensive
  operations such as display management are implemented in C and C++ but
  are packaged as TDL functions that are controlled by ObjectSheet's
  interpreter. This allows application behavior to be easily modified
  without sacrificing performance. On platforms that support shared
  libraries, TDL functions that encapsulate company-proprietary or other
  specialized software can be implemented and dynamically loaded into a
  running ObjectSheet application. Once loaded, these functions may be used
  in the same way as any other TDL function.

Application Management Tools

TekMon (tm)

The TekMon application is an administration and support tool for computer
networks using the TIB platform. The key concept of the TekMon application
is to allow an operator to monitor the health and state of a corporate
network from a single console. The corporate network can be as small as a
local area network (LAN) with a dozen nodes or as big as a wide area
network (WAN) spanning continents and comprising thousands of nodes.

The monitoring and maintenance of the network is performed through a
graphical user interface; the point-and-click interface simplifies
sophisticated operations. The TekMon application allows you to monitor a
wide variety of system and application parameters on any node on the
distributed network. Furthermore, the user may add new functionality to the
TekMon application if desired. Operations and functionality to support the
setup, configuration, and maintenance of individual nodes or groups of
nodes can also be provided through the graphical user interface.

The TekMon application consists of three components:

* The TekMon Back-End runs on each node to be monitored.
* The TekMon Logging Back-End runs on one node (or more, if desired for
  availability) and logs events in an ASCII file.
* The TekMon Front-End is a graphical user interface used by the system
  administrator.

The purpose of the TekMon Back-End is to send heartbeat messages
periodically, and to send warning or error messages once a failure or a
problem is detected. The TekMon Back-End does not send status information
while the monitored node is healthy, i.e., free of failures or problems;
during a healthy period only heartbeat messages are sent. All TekMon
messages are sent on the TIB platform.

All nodes that are to be monitored by TekMon need to run the TekMon
Back-End, a set of processes that monitor the node. The TekMon Back-End is
non-intrusive, i.e., it passively monitors the node without changing the
node's behavior or configuration in any way.
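In outline, the Back-End's reporting discipline might look like the loop
below. The subject names, message bodies, period, and tib_* call are all
invented for the sketch (declarations only), not the actual TekMon
implementation:

    #include <cstdio>

    struct TibSession;
    int  tib_publish(TibSession*, const char* subject, const char* msg);
    bool node_healthy();            // stand-in for the real node checks
    void sleep_seconds(int);

    void backend_loop(TibSession* tib, const char* node) {
        char subject[128];
        for (;;) {
            // Always: a periodic heartbeat saying "this node is up".
            std::snprintf(subject, sizeof subject,
                          "tekmon.heartbeat.%s", node);
            tib_publish(tib, subject, "alive");

            // Only on trouble: a warning or error with the details.
            if (!node_healthy()) {
                std::snprintf(subject, sizeof subject,
                              "tekmon.warning.%s", node);
                tib_publish(tib, subject, "disk usage above threshold");
            }
            sleep_seconds(30);      // assumed heartbeat period
        }
    }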
The TekMon Logging Back-End is a single process that listens to all
warnings and errors, as well as heartbeat messages, from all TekMon
Back-End components. All received messages are written to a TekMon log file
in ASCII format. This is particularly useful because it not only gives you
access to a permanent historical record of TekMon events, it also allows
you to read this file in almost any editor. The TekMon Logging Back-End is
an optional component. In most cases a single TekMon Logging Back-End per
corporate network will be sufficient; to increase fault-tolerance, one can
run multiple TekMon Logging Back-Ends on the network.

The TekMon Front-End is a single process that provides a graphical user
interface for all interactions between the system and the operator.
Information such as heartbeats and errors is displayed graphically and
audibly at the TekMon Front-End. Current problems, and past problems that
have since been resolved, are visually indicated through color changes.
This allows the operator to spot problems at a single glance. Once a
problem is spotted, it can be analyzed further, and possibly corrected,
through the TekMon Front-End. State investigation, setup, system
configuration, and maintenance actions can be started on individual TekMon
hosts or on groups of TekMon hosts. While the TekMon Back-End is passive
and non-intrusive, the TekMon Front-End forms the active side of TekMon,
which allows remote system alterations.

Each operator can run the TekMon Front-End on his or her console, i.e., an
arbitrary number of TekMon Front-Ends can be operational at any time on the
corporate network. The TekMon Front-End can be configured to suit a variety
of needs. Differently configured variations of the same TekMon Front-End
can be used by different operators at the same time if desired. Since there
is a clean interface between the TekMon Front-End and the TekMon Back-End,
it is also possible to write new CORTEX applications to perform monitoring.

CORTEX (tm)

The CORTEX facility allows system administrators to remotely and
interactively monitor and reconfigure applications at run time. Through the
CORTEX facility, you can also retrieve a process's trace and debug
settings. You can then change these settings dynamically, without
restarting the application. This is an excellent way to test the effect of
various configurations: if you like a specific configuration, you can then
modify the application's configuration files appropriately.

For an application to be able to interact with the CORTEX facility, it must
have a CORTEX implant embedded in it. These implants are inserted using the
CORTEX API. An implant need not take advantage of all the features
available in this API; however, the degree to which an application can be
managed is determined by how fully its CORTEX implants use them.
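An implant might be embedded along the following lines; the cortex_* API is
an assumed shape (declarations only), not the documented interface:

    struct CortexImplant;

    CortexImplant* cortex_implant_init(const char* application_name);
    // Expose a named integer setting; CORTEX may get or set it remotely.
    int cortex_expose_int(CortexImplant*, const char* name, int* variable);

    int g_trace_level = 0;   // consulted by the application's logging code

    void install_implant() {
        CortexImplant* implant = cortex_implant_init("feed-handler-3");

        // An operator could now raise the trace level from a management
        // console without restarting the process, then lower it again
        // once the problem is diagnosed.
        cortex_expose_int(implant, "trace.level", &g_trace_level);
    }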
The graphical user interface for the CORTEX facility is called the CORTEX
Manager. It is a special MarketSheet-based application designed
specifically for managing applications with CORTEX implants.

----------------------------------------------------------------------------
Copyright © 1994-1995 Teknekron Software Systems, 530 Lytton Ave.,
Palo Alto, CA. All rights reserved.