Saturday, December 12, 2009

Considerations before Making a Selection Based on FOSS

As I mentioned previously, the Free Software Foundation (FSF) and Open Source Initiative (OSI) require, in their definitions, that the source code to FOSS be available. Source code is kept freely available through a license, the most common of which is the GNU General Public License (GPL) developed by the FSF. These licenses stipulate different requirements for how source code availability must be handled, so one cannot generalize much about when or how people can access the source code; one must look at the individual FOSS license a project is using. Lists of common FOSS licenses (like the GPL, Mozilla Public License, Apple Public Source License, or the BSD license) are available at the FSF license page and the OSI license page.

Offerings

Review the services offered by leading companies in the open source operating system space. Novell (SUSE), Red Hat, and Progeny all emphasize the types of implementation, migration, support, and customization services they offer. Novell points out (at length) the processes involved in a successful enterprise Linux migration. Red Hat promotes its support subscription service to keep its customers up to date with patches, recent software updates, and expert support facilities. Progeny customizes and modifies Linux systems to fit its customers' specific needs.

In every case, those companies are capitalizing on, and participating in, the FOSS project communities that produce GNU/Linux systems. Some companies argue that FOSS methodologies work fine for commodity systems but will not work for more specialized enterprise solutions. That doesn't seem to be the case, though; enterprise applications are beginning to come out of the woodwork (a few examples include GNU Enterprise, ComPiere, Anteil, Project/Open, SugarCRM, JetBox CMS, MamboCMS, and the Zope application server).

While researching an open source solution or a related engagement, consider the following items:

What kind of pre-installation and post-installation support does the maintainer or partner offer?

Analysis of current systems and processes (prior to a new implementation or migration), ongoing support, and service level agreements are all important to successful operations.

Does the open source project maintainer (vendor) work through local implementation partners? Vertical industry specialists?

Some open source projects, and some vendors that offer open source software, prefer to let their partners focus on consulting, implementation, and integration services. These vendors support their partners but feel the partners are best able to support clients in their own immediate geographic regions. If that is the case, there are some questions to ask about how the partner interacts with the core project.

* How does the partner contribute to the core project?
* Is there a partner certification program?
* Is there a training program?
* What is the partner's expertise with Free and open source software?
* What kind of geographic presence and industry experience is offered?

Is project tracking available on-line?

As a company chooses to use open source, it is important to gauge the open source project's viability. Users of open source applications need to be reassured about a given project's future. Thus visibility into the project roadmap, user issues, bug fixes, and feature requests is necessary for clients to verify that the project maintainers actively coordinate development around the community's goals. Open source projects that stray from their goals or alienate their developer communities are not likely to instil the trust necessary to continue as successful projects.

Why Would a Company Want to Develop Open Source Software?

A significant point from the perspective of a company with a business based on Free and open source software is that it cannot expect to sell the software based on the same business model as a company selling closed source/proprietary software. Because the source code is freely available, whenever a competitor, customer, or community of developers improves the code, the vendor benefits: it, like everyone else, can adopt the upgrades and the software improvements. However, that also means vendors cannot always compete on something like a feature set alone; everyone could potentially offer the same features.

Companies tend to position themselves as experts in the area of particular open source projects and then work with their clients to provide what the clients need the software to do (sponsored development, customization, migration, and implementation services). Alternatively, a company with expertise in a particular area of software may offer other types of support and services around that project to its customers. IBM, for example, offers hardware and implementation services.

In the case of a company providing hardware and implementation services, it's relatively easy to see how open source makes sense. If you're making money from the labor involved in your engagements or support services, or you're selling a physical good (say, a corporate search appliance rack server), then by participating in the FOSS project's community you leverage a massive amount of high-quality software at low cost.

Another example of how a vendor can benefit from providing open source applications involves vendors that take existing open source projects and develop customized offshoots that address specialized needs. Take a company that would like to provide a specialized CRM system for the health care industry. Such a company can stand on the shoulders of an open source CRM project that has no interest in specializing its project for health care. The company can then customize the application as a solution for its health care clients. The company benefits by not having to build its application from scratch and may even receive additional help in its area from others with similar requirements.

Lastly, even if a vendor does not provide its own software on an open source basis, it can leverage the cost savings, support providers, and reliability of existing FOSS technologies. It can offer clients competitive pricing on its supply chain management solution, for example, because the solution supports open source servers, databases, and so on.

Demand at the Fount of Open Source Part Two: A Primer Based in Demand Trends

For what reasons might an organization be interested in open source software? One of the most frequently touted benefits is cost. The proprietary crowd has put a fair amount of publicity into studies arguing that open source actually results in a higher total cost of ownership. These studies consider a number of factors, such as paying for support, upgrade and maintenance costs, licensing fees, and implementation and migration costs, and these factors often become points of contention. In each of these areas open source presents a likely cost advantage, and the following highlights a number of reasons why, at least in principle, such an advantage is possible.

Licensing

A proprietary software product is often sold in a way that puts certain post-purchase restrictions on its usage, namely when each license is treated like a unit of a good (a box or instance of software). Many proprietary software vendors require the purchaser/user to pay attention to rules on deployment eligibility, on transferring the software to others, and sometimes even on downgrading the software version. Thus, an organization may have to pay for a license every time it needs another instance of the software installed.

Open source products can sometimes be purchased in a similar way, but open source licensing does not necessarily treat the software like a unit of a good that must be acquired over and over. Rather, open source licenses recognize the infinite reproducibility of software and do not prevent a company from making and distributing its own copies. Thus, on a large-scale rollout, a company could realize major cost savings in licensing alone.

Migration and Vendor Lock-In

Migration costs can be an issue when switching to FOSS platforms. It may be a significant expense to migrate from one type of system to another (considering the data, user training, and other changes). A proprietary system, however, still brings with it a range of costs when upgrading from one version to the next (often similar sorts of data migration, user training, and maintenance work). Upgrading open source software does not necessarily entail the vendor lock-in that proprietary solutions do: from the customer's standpoint, there may be multiple sources that can provide patches and updates. Consider operating systems: there are many distributions (varieties) of Linux. Each distribution has its own characteristic strengths and weaknesses but is essentially capable of providing the same functionality. Besides, why does an organization consider migrating to open source in the first place? Reasons often cited include up-time stability (the famous Netcraft uptime surveys, for example, show that the most reliable web host providers are usually based on open source platforms), security, and improved safety from viruses; over time these benefits could ultimately outweigh FOSS migration expenses.

Support

Paying for knowledgeable support personnel might seem like a current point in favor of some proprietary solutions. Considering just server operating systems, professionals with Microsoft certifications probably outnumber those with some type of certified Linux training. But as demand for Linux continues to increase, demand for Linux support ought to increase, and the number of trained professionals will need to grow with it. FOSS companies such as Red Hat and Novell offer specialized training and certifications, and there are independent organizations offering training, such as the Linux Professional Institute.

Maturation of Free and Open Source Software

A growing contingent of enterprise software vendors now supports Free and open source software. Though the concept of Free software may extend (in a variety of ways) further back in time, it formally began when the Free Software Foundation (FSF) was formed in 1985. One of the best-known results is the operating system known as Linux (often referred to as GNU/Linux), which was born in 1991. Another widely used open source operating system is FreeBSD, born in 1993 (or 1979, if one traces its ancestors). Finally, an organization called the Open Source Initiative (OSI) began in 1998. These dates highlight the persistence of several significant organizations and projects, though other high-profile names have been, and continue to be, integral to the development of the FOSS movement.

The point is that the FOSS community has had time to learn from its mistakes and mature its practice. It is no longer a question of whether open source software will be adopted: it is already in widespread use by businesses and government organizations, at least in areas such as server operating systems and databases, and there are now a number of communities creating specialized open source enterprise solutions. Next, let's explore the growth of demand for enterprise products based on open source platforms.

There are at least two shifts fueled by FOSS gaining ground in the software industry. The first is a notion that seems to be a favourite lately among those who like to call attention to FOSS successes. This notion stresses that anyone grading success just by peering into the desktop application market is missing the significant new area of the industry, namely Internet-based applications. Articles on the topic invariably cite Google and Amazon as the stars of "getting it" because these companies' products are rooted in open source software and methodologies, and those roots help them outperform their competitors. The application these companies provide is basically a trustworthy, easy-to-use, and pervasive point of access to an incredibly complex and vast pool of information, goods, and services; and they provide it for an extremely diverse and massive user base: anyone accessing the Internet.

The second shift is in the perspective from which software is developed and distributed. Practitioners and users of FOSS view their software more as the basis for a service-oriented relationship. That is to say, communication between the various roles (users, VARs, consultants, vendors, developers, etc.) is necessary for determining and developing the requirements, flaws, and direction of the software. At first glance this might not appear very different from a closed source software vendor, which certainly must listen to its users' demands in order to be successful. But it is different: in the open source case, features may literally be developed by a party other than the one that originally provided the software, and that development may be incorporated into the original source of the software itself. This means that if, for example, a company needs functionality its open source software does not support, it can develop, or sponsor the development of, that functionality in the product. The functionality can further the growth of the product as a whole. Thus the software's entire user base can benefit, and the primary development team may not have to devote as many resources to creating new functionality on its own. Companies sponsor such development because they have the opportunity to get what they need at a lower cost, via an efficient process. This development process signals a shift in how the software industry does business.

Both shifts are frequent cause for debate. The proprietary religion practiced by many software vendors often seems at odds with the spreading atheism of open source. This is changing, however, and even some of the most faithful (Microsoft) now experiment with bits of open source. Why should the situation change? Even if the pragmatic arguments for open source don't feel convincing (I'll pose some of these for consideration in the second part of the article), look at the demand trends from a global base of enterprise software customers.

Any vendor that does not take into consideration the type of demand fomented at the forges of FOSS communities is missing opportunities.

How do I know? Technology Evaluation Centers (TEC) publishes The TEC "Q" Report, which tracks and reports end user (enterprise software decision maker) demand trends every quarter. Reviewing information from the report in the graphs below, we can gain insight into some open source demand trends.


[Graph: Server operating system demand by quarter, Q3 2003 through Q3 2004. ©2004 Technology Evaluation Centers Inc.]

Based on the server operating system (OS) demand graph above, it is hardly news that the proprietary Microsoft Windows OS is the market leader. Over the course of 2004, demand seems to have decreased very slightly for both Windows and Linux. It is interesting to see, however, that both operating systems saw demand rise year over year, comparing Q3 2004 to Q3 2003. Specifically, Linux demand in Q3 2003 made up 9.5% of TEC's global total, while in Q3 2004 it made up 10.8%. The other OS leader, Unix, visibly decreased in demand, both as a trend through 2004 and in a comparison of Q3 2003 to Q3 2004. If one considers again that Linux was born in 1991, its share of the demand pool, now close to that of the venerable Unix, is impressive.

The server operating system is one important area in which to track demand; another is the database platform. In 2004, database platform demand went through a more interesting change than server operating system demand. In particular, FOSS communities are beginning to present a grave challenge to the proprietary likes of Microsoft and Oracle.


[Graph: Database platform demand by quarter, Q3 2003 through Q3 2004. ©2004 Technology Evaluation Centers Inc.]

The database platform demand graph above shows that the open source MySQL database saw its demand increase in Q3 2004 compared to Q3 2003, and it continued to increase quarter by quarter throughout 2004. The same can be said, though to a lesser degree, of the open source PostgreSQL database. Demand for MySQL made up 2.4% of the global total in Q3 2003 and increased to 8.7% in Q3 2004. Demand for PostgreSQL came in at 1.6% in Q3 2003 and increased to 2.1% in Q3 2004. On the other hand, both Microsoft and Oracle were stuck in a trend of declining demand quarter after quarter for the year, and both even dropped below the demand levels they enjoyed in Q3 2003.

Over 59% of the approximately 5,000 global decision-maker inquiries that TEC analyzed came from companies with annual revenues ranging from under $5 million to $200 million. This means that a majority of these inquiries came from what is commonly known as the small and medium enterprise (SME) market. This market is often targeted (though not exclusively) by open source enterprise software providers. Vendors basing products on open source platforms can often make strong cases to SMEs based on qualities like affordability (something vendors selling on proprietary platforms may not be able to compete against).

Demand at the Fount of Open Source Part One: A Primer Based in Demand Trends

This article provides an overview of Free and open source software (FOSS) concepts for enterprise software clients and vendors who would like to be let in on the buzz resonating from the FOSS-related change in the software industry. I will address FOSS concepts in two parts. The first concerns FOSS's origin and rapid evolution as manifested in global customer demand trends. The second reviews the reasons enterprise clients and government organizations generate this demand, as well as why that demand should push software providers to continue to meet it.

A Basic Background

Let me note a couple of terms used in this article: source code and project. These may seem obvious, but to the majority of people the terms are not exactly commonplace. Source code is the software programmer's art: the programmer writes code, which becomes the software we use. Source code is what people modify to fix problems (bugs) in a program and to create new versions of the program. The second term, project, as used in this article refers to open source projects. Open source software is generally developed from a core project managed by a person or team, and there may be multiple offshoots from that project. For example, a team may start a project to develop an open source content management system but also form a business entity to support users of that software. The business itself is not the core project, but it is based on the project and will likely be a participant in it.

Free software (referred to in this article with a capital F) is software accessible via a license that grants users permission, in perpetuity, to copy, modify, study, and distribute the software's source code. It is a philosophy about the development, distribution, and accessibility of software, namely the freedom involved to that end—Free does not refer to price. Open source is a pragmatic argument favouring the use of Free software. The open source argument promotes the cost savings, security, stability, and efficient development generally associated with Free software.

All that Jazz—Reports and Dashboards

All BI environments need to provide a complete suite of tools to create, publish, distribute, and schedule rich report content. Pentaho's Classic Engine is based on the banded reporting design. Banded layouts divide the report into sections, and the reporting engine traverses the data and fits the data into the predefined bands. In the classic banded engine, the data being sent to the report determines how the report appears. This paradigm has been—and continues to be—used successfully by many reporting tools. However, a relatively new model that is gaining popularity is based on the output rather than the data driving the processing of a report. The Flow Engine, still in development, will work on a report definition built using the Document Object Model (DOM); the final output will be rendered by combining the definition and incoming data. The Report Designer includes support for several data sources, a variety of formatting options, and the ability to render multilingual reports in hypertext markup language (HTML), portable document format (PDF), and Excel (XLS), among other output formats. An AJAX-based, thin-client, ad hoc reporting tool is also available in Pentaho's reporting suite. Reports in the ad hoc tool can be designed using the metadata layer, while the report designer, in addition to the metadata layer, can connect directly to data sources.
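
To make the report-generation flow concrete, here is a minimal sketch (not production code) of driving the classic banded engine from Java. It assumes the Pentaho Reporting engine's Java SDK is on the classpath; the report and output file names are hypothetical:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;

import org.pentaho.reporting.engine.classic.core.ClassicEngineBoot;
import org.pentaho.reporting.engine.classic.core.MasterReport;
import org.pentaho.reporting.engine.classic.core.modules.output.pageable.pdf.PdfReportUtil;
import org.pentaho.reporting.libraries.resourceloader.Resource;
import org.pentaho.reporting.libraries.resourceloader.ResourceManager;

public class BandedReportToPdf {
    public static void main(String[] args) throws Exception {
        // Boot the classic (banded) engine once, before any report is processed.
        ClassicEngineBoot.getInstance().start();

        // Load a report definition created with Report Designer (file name is hypothetical).
        ResourceManager manager = new ResourceManager();
        manager.registerDefaults();
        Resource resource = manager.createDirectly(new File("sales-report.prpt"), MasterReport.class);
        MasterReport report = (MasterReport) resource.getResource();

        // Render the banded report to one of the supported output formats (PDF here).
        try (OutputStream out = new FileOutputStream("sales-report.pdf")) {
            PdfReportUtil.createPdf(report, out);
        }
    }
}
```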

Pentaho Dashboards provide the ability to define metrics that are important to an enterprise and deploy them using a variety of user interface features: spreadsheet-style grids, integration with Google Maps, cross-tab reports, and drilldown to reports or multidimensional analysis. Integration with other web content through frames or AJAX components is also possible. The Community Dashboard Framework (CDF), developed by senior members of Pentaho's community, makes it simpler to develop new dashboards by defining the various components using a fairly straightforward syntax, rather than programming the interface. Forum discussions indicate that Pentaho may integrate the CDF into its product.

The Closing Statement

Although open source BI solutions may not yet have the longevity or maturity of traditional BI solutions, the evolution of open source BI is gaining momentum as its credibility and relevance increase. With minimal risk, organizations can discover whether open source BI will work for them by building application prototypes. Components of open source BI software can also be integrated into existing BI implementations for additional functionality. There is significant transparency in terms of technologies and product roadmaps. Collaborations and partnerships between open source vendors are constantly being established. Committed user communities make it possible to benefit from experience across several companies and platforms.

Analysis with Mondrian

The structure of enterprise business activities is almost always multidimensional. This is because the content of a business is defined in terms of quantifiable or measurable properties (e.g., sales, inventory, or donations) and qualitative attributes (e.g., students, customers, or products). Each business activity can involve a combination of these quantitative and qualitative entities. Although enterprise systems may actually store incoming activities in a relational format, a highly responsive, multidimensional environment is required to analyze and gain insight into the entire business.

Online analytical processing (OLAP), still a growing field in terms of research and development, refers to a manner of storing and querying very large volumes of data across multiple dimensions. The particulars of multidimensional OLAP (MOLAP) versus relational OLAP (ROLAP) still evoke a vigorous debate. But the choice depends entirely on the nature of the data, latency, and resources (both hardware and software). For instance, ROLAP may provide a better solution for data that is dimension-intensive or in situations where latency needs to be very low or close to real time. On the other hand, MOLAP may be better suited for large sets of aggregations and more lenient latency requirements. In either case, adherence to sound design principles is essential for creating a successful OLAP solution.

Pentaho's answer to the question of multidimensional analysis is a ROLAP engine called Mondrian. The most important aspect of OLAP is how and where aggregations are stored. In a ROLAP environment, as with Mondrian, data and aggregations are stored in a relational database. Precomputed aggregates are stored in tables alongside the base fact tables. Such aggregate structures are necessary to avoid calculations over millions of fact records for each query. These tables are not part of the analytical engine; they have to be built using an ETL-style process. Pentaho offers a tool called Aggregation Designer that helps create and maintain aggregate tables. Mondrian includes an in-memory aggregate cache that saves multidimensional result-sets on first access for use in subsequent calculations. The extensive CacheControl application programming interface (API) is included for granular access to Mondrian's cache.
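
As a rough illustration of that ETL-style build (in practice Pentaho's Aggregation Designer or a Kettle job would generate and populate these tables), the following JDBC sketch rolls a hypothetical sales_fact table up to a product/month aggregate. The schema, table names, and connection details are all assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BuildAggregate {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/warehouse", "etl_user", "secret");
             Statement stmt = conn.createStatement()) {

            // Rebuild the aggregate from scratch (a simple full-refresh strategy).
            stmt.executeUpdate("DROP TABLE IF EXISTS agg_sales_by_month");

            // Roll daily facts up to product/month so queries at that grain
            // never have to scan the raw fact rows.
            stmt.executeUpdate(
                "CREATE TABLE agg_sales_by_month AS " +
                "SELECT product_id, " +
                "       DATE_FORMAT(sale_date, '%Y-%m') AS sale_month, " +
                "       SUM(sales_amount) AS sales_amount_sum, " +
                "       COUNT(*) AS fact_count " +
                "FROM sales_fact " +
                "GROUP BY product_id, DATE_FORMAT(sale_date, '%Y-%m')");
        }
    }
}
```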

Organizations can choose from several approaches to provide a client tool for multidimensional analysis. A complementary open source project called JPivot offers a pivot-table client tool, written in Java Server Pages (JSP), to browse cubes created using Mondrian. Mondrian also provides a multidimensional expressions (MDX) interface (note that this is not entirely the same as Microsoft's implementation of MDX). Developers can write in-house applications using olap4j (or, OLAP for Java), an open specification being developed by several open source companies including Pentaho, JasperSoft, and LucidEra.
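
For a flavor of what such an in-house client might look like, here is a minimal sketch that queries Mondrian through its olap4j driver. The connect string, catalog file, and the cube, level, and measure names in the MDX are hypothetical:

```java
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;

import org.olap4j.CellSet;
import org.olap4j.OlapConnection;
import org.olap4j.OlapStatement;
import org.olap4j.layout.RectangularCellSetFormatter;

public class MondrianQuery {
    public static void main(String[] args) throws Exception {
        // Connect to Mondrian through its olap4j driver (connect string assumed).
        Class.forName("mondrian.olap4j.MondrianOlap4jDriver");
        Connection raw = DriverManager.getConnection(
            "jdbc:mondrian:Jdbc=jdbc:mysql://localhost/warehouse;"
            + "Catalog=file:SalesSchema.xml");
        OlapConnection conn = raw.unwrap(OlapConnection.class);

        // Run an MDX query against a hypothetical Sales cube.
        OlapStatement stmt = conn.createStatement();
        CellSet cells = stmt.executeOlapQuery(
            "SELECT {[Measures].[Sales Amount]} ON COLUMNS, "
            + "[Product].[Product Category].Members ON ROWS "
            + "FROM [Sales]");

        // Print the result grid to stdout.
        new RectangularCellSetFormatter(false)
            .format(cells, new PrintWriter(System.out, true));
    }
}
```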

Pentaho: A Case in Point

That open source BI can provide a full-fledged solution to an organization's BI needs can be demonstrated by looking at how Pentaho's platform addresses the principal requirements of BI—data integration, reporting, and analysis.

ETTL with Kettle

Pentaho's BI platform implements the Common Warehouse Metamodel (CWM). The CWM, which has also been implemented by proprietary vendors such as Informatica, is a specification that proposes using XML metadata interchange (XMI) to exchange data warehouse metadata. This means that mappings can be migrated between tools that implement the interface. Pentaho's extract, transform, and load (ETL) system is based on its Kettle project. Kettle stands for "Kettle ETTL Environment," where ETTL is Pentaho's acronym for "extraction, transformation, transportation, and loading" of data. The ETL system supports: a variety of steps (a step represents the smallest unit in a transformation and contains either predefined or custom logic that is applied to each row as it makes its way from the source to the target); slowly changing dimensions (SCDs); connectors for a multitude of data sources (access to proprietary databases such as Microsoft SQL Server and Oracle is via Java Database Connectivity [JDBC]); and the ability to execute and schedule jobs both locally and remotely. Scripting in JavaScript as well as pure Java allows developers to add custom code in any step of the transformation.
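
To give a feel for the Kettle API, a transformation designed in the graphical tool can also be executed from Java along the following lines. Treat this as a sketch: exact class names vary across Kettle versions, and the .ktr file name is hypothetical:

```java
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

public class RunTransformation {
    public static void main(String[] args) throws Exception {
        // Initialize the Kettle runtime (plugin registry, environment variables).
        KettleEnvironment.init();

        // Load a transformation definition saved from the designer (file name is hypothetical).
        TransMeta meta = new TransMeta("load_sales_fact.ktr");

        // Execute all steps and wait for the row pipeline to drain.
        Trans trans = new Trans(meta);
        trans.execute(null); // no command-line arguments
        trans.waitUntilFinished();

        if (trans.getErrors() > 0) {
            throw new IllegalStateException("Transformation finished with errors");
        }
    }
}
```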

Two challenging issues that organizations face are data volume and latency requirements. To support high-data-volume environments, Pentaho has a clustering solution (one that uses more than one node or computing entity to achieve high performance and availability) that works alongside database partitioning; by using slave servers (a group of servers that perform specific tasks using the data sent by the master server) to distribute central processing unit (CPU) and input/output (I/O) load, performance is improved through parallelism. However, change data capture, a data integration technique that triggers data transfer by listening for changes in data sources, is not supported. Changes in data sources are detected by reading transaction logs; with the exception of open source databases, transaction log readers are seldom open source.

Although Pentaho's data integration still lacks a data quality and data cleansing solution, the development of a profiling server (a server dedicated to performing profiling tasks that help discover aberrations in data; see Distilling Data: The Importance of Data Quality in Business Intelligence) seems to be on the list of imminent improvements. In such situations, where the vendor does not support a specific functionality, organizations can look to complementary open source solutions; the DataCleaner project from eobjects.org, for instance, provides functionality to help profile data and monitor data quality. This example also points to a significant advantage of open source applications: the fact that software is developed by the community and for the community makes it much simpler to share innovative solutions quickly and seamlessly.

Open Source Business Intelligence: The Quiet Evolution

In the current economic climate, organizations have to review and rationalize expenses associated with all enterprise software. As a direct consequence, open source business intelligence (BI) is emerging as an important choice for new as well as existing BI implementations. Even though most analyst research indicates that its evolution may have been understated thus far, open source BI is growing rapidly. Open source BI solutions have already been proved to complement and integrate well with traditional BI environments. In their own right, open source BI vendors offer competitive technologies and present the irrefutable advantage of cost savings.

The term "open source software" is often assumed to mean "free access to source code." However, the scope of open source software has widened considerably. The open source software license (often referred to as copyleft) is subject to regulations defined by the Open Source Initiative (OSI)—which dictates that open source software cannot discriminate against groups or technology and must be free to distribute.

Such software can be modified, but redistribution of modified versions may be conditional (for instance, the license may require that changes be distributed as patch files rather than integrated with the original code) to protect the integrity of the original author's work. The best-known open source package is the LAMP solution stack, which comprises the Linux operating system, the Apache hypertext transfer protocol (HTTP) Server, the MySQL database management system, and programming languages including PHP, Perl, and Python.

Open Source and Business Intelligence: The Common Thread

"Open source applications" is the term that describes systems built using open source software in the form of frameworks or libraries. Although copyleft licenses do not permit organizations to resell software developed using open software, mechanisms such as dual-license models have arisen, whereby commercial vendors can deliver their software under a community license that follows the open source license regulations and offers a commercial license with an attached fee. Vendors may charge users for services such as support, training, consulting, and advanced features.

In the past two years, commercial open source vendors have been working actively toward establishing a long-term position in the enterprise applications space. In February 2007, the Open Solutions Alliance (OSA) was formed to bring together commercial open source software businesses; its main purpose is to broaden the horizon of open source applications and, most importantly, foster interoperability between them. JasperSoft, one of the pioneers of open source BI, is among the founding members of this alliance. Pentaho, another open source BI vendor, has set itself apart by leading and sponsoring all of its core projects, implementing open industry standards, and establishing partnerships with vendors of data warehouse technology, such as InfoBright and ParAccel.

BI has some of the most challenging technology problems among all enterprise software applications. These challenges include the design of very large databases; complex data integration between disparate and multiple data sources; the ability to search across a surfeit of information; and some of the most stringent performance and latency requirements. Even with proprietary solutions, organizations need a team of experienced professionals—including database administrators, business analysts, and programmers—to implement and support a data warehouse and BI environment.

Open source BI goes one step further: it encourages organizations to use and modify the software as needed and share advances with the rest of the community. It seems only natural that open source and BI technologies have converged. A crucial factor to consider when adopting an open source BI solution is that underlying technologies are often, if not always, open source themselves; although not mandatory, it is prudent to have technical teams acquire the necessary skills. For instance, most open source BI software is built on the LAMP stack. In order to adopt and maintain the applications, technical teams need to have development and administration skills using the LAMP stack.

Wednesday, December 9, 2009

Replacing Aging Reports

In the past, and to some extent even today, the only A/R management tool available has been the aging report. While the aging report does list overdue invoices, the contact and follow-up process is entirely manual. This may be acceptable for companies with a limited customer list or limited invoice activity, but it is certainly inefficient and ineffective for everyone else.

Improving the invoice-to-payment process must start with the assumption that the aging report, as the primary management tool, must be replaced with a software-supported process. Specifically, the process should mimic the functions found in a contact manager.

While there is no doubt that bringing receivables under control will initially require some effort and expense (mostly time), the goal is getting customers to the point where they are thinking about you when selecting invoices for payment. If you can reach that point with most customers, the future time and cost required to keep receivables in line will be more reasonable.

Ultimately, your approach has to be positive, not negative. "He who screams loudest gets paid first" does not work in the long run. Threatening customers does not contribute to repeat (highly profitable) business. You want them to give you payment priority because they want to, not because they have to.

Competitive Landscape

Although the return on investment (ROI) for A/R management software is extremely high (500 percent and more), very few middle market accounting software vendors offer this application either directly or as a third-party product. Navision (third-party), Great Plains (direct), SouthWare (direct), Solomon (third-party), and Syspro (direct), however, do offer this functionality. Navision's application is by far the most comprehensive. There may be other third-party products that integrate with middle market accounting products, so the best course of action is to ask your vendor or reseller.

Monitoring and Improvement

The circle of credit and A/R management is closed when companies continuously monitor and improve each of the key steps in the process. Goals need to be established. Actual results need to be measured and compared against these goals. Opportunities for improvement need to be identified and changes made. Finally, new goals are established and information gathered to monitor the revised processes.

Please keep in mind that this is both an internal and an external process. The internal component is an analysis of the efficiency and effectiveness of the various credit and A/R management business processes themselves, as well as of individual employees (or work groups). Efficiency is tied to the cost of a business process (or one of its components), while effectiveness is tied to whether the specified goals are being achieved. You could spend great quantities of cash making a business process work, but that would not be cost-effective. Conversely, you could reduce costs but not achieve the required goals. Somewhere between these two extremes is the best-suited path, and that path will depend on your industry and the way you have decided to run your business.

What, then, needs to be measured, and why? The following list is certainly not all-inclusive but is intended to help establish a framework for measuring efficiency and effectiveness.

Payment Terms: Cash flow can be impacted if payment terms are too liberal. While more lenient payment terms may be a reflection of competitive realities, they may also be used as a quick fix to gain business.

Invoicing Delays: If there is a delay between the time an order is shipped or a service is provided and the invoice date, this may be an opportunity for improvement. Users should both track the average delay (general improvement) and investigate billing delays for specific invoices that exceed a specified limit (specific improvement).

Procedural Errors: The failure to meet a customer's business process requirements (purchase order number, invoice formats, pricing, even number of invoice copies) will lead to payment delays. These procedural errors need to be segregated by type, tracked, and continuously improved.

Service Disputes: If a customer does not pay on time due to an identifiable dispute or problem (goods or services are not provided on time, poor quality, or any other reason that can be tracked), cash flow will be negatively impacted. These specific problems need to be resolved as quickly as possible (another measurement standard) and they need to be tracked over time to determine if general improvement is being achieved.

Average Days Late: Average days late (ADL) measures a customer's payment history as it relates to the payment terms offered for each invoice. If a customer's ADL is increasing or is higher than average, this should be a trigger to contact the customer, not about a specific invoice, but about their payment history in general. ADL may also indicate that a specific A/R representative is not quite as effective as they could be (when compared against other A/R reps). Finally, payment history should be one of the factors salespeople consider when discussing future pricing with customers.
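
For clarity, here is one simple formulation of the ADL calculation as a Java sketch (using Java 16+ records). The invoice structure and the convention of counting early payments as zero days late are assumptions; implementations vary:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.List;

public class AdlExample {
    // Minimal invoice representation for this sketch (fields assumed).
    record PaidInvoice(LocalDate dueDate, LocalDate paidDate) {}

    // ADL: average, over paid invoices, of days paid past the due date.
    // Counting early payments as zero is one convention; some implementations
    // let early payments offset late ones instead.
    static double averageDaysLate(List<PaidInvoice> invoices) {
        return invoices.stream()
            .mapToLong(i -> Math.max(0, ChronoUnit.DAYS.between(i.dueDate(), i.paidDate())))
            .average()
            .orElse(0.0);
    }

    public static void main(String[] args) {
        List<PaidInvoice> history = List.of(
            new PaidInvoice(LocalDate.of(2009, 10, 1), LocalDate.of(2009, 10, 11)), // 10 days late
            new PaidInvoice(LocalDate.of(2009, 11, 1), LocalDate.of(2009, 10, 30)), // early -> 0
            new PaidInvoice(LocalDate.of(2009, 12, 1), LocalDate.of(2009, 12, 6))); // 5 days late
        System.out.printf("ADL: %.1f days%n", averageDaysLate(history)); // 5.0
    }
}
```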

In each of the examples above, progress will be made only if identified problems are addressed immediately. Immediate attention isn't the only requirement, though: statistical and historical information also needs to be collected and tracked. For example, when an invoice becomes overdue, A/R reps need to determine both the classification of the problem (maybe this time it is a procedural error) and its specific subcategory (failure to indicate the customer purchase order). By tracking the underlying causes of payment delay, management can determine where to focus its attention and whether these problems are improving over time.

Critical Business Functions: Misunderstood, Underutilized, and Undervalued Part Two: Closing the Circle of Credit and A/R Management

In the past, the idea of credit clung to the philosophy of risk management, measured by Days Sales Outstanding (DSO) and percent of bad debt. However, credit and effectively managed accounts receivable (A/R), when used as a sales tool, can generate huge returns by motivating sales. By looking at the major components of the credit-to-cash function, and giving each component a goal, companies can increase sales and improve cash flow while controlling losses and costs. Using updated reporting techniques, understanding the competitive landscape, and using appropriate A/R management software will help companies realize a notable return.

Billing

Credit/sales approval supports the sales process, but that's just the first step in the sales-to-cash cycle. Profitable sales cannot be achieved unless payment is received. A funny thing about selling on credit: when you don't send an invoice (or send one that makes no sense), most customers won't send you money. The goal of billing is to facilitate payment, and to do so the billing must be timely, accurate, complete (according to the customer), and understandable. Finally, invoices and the original credit terms should reflect a late charge for late payments. If there is no late charge, there is no set term for payment, and no motivation on the part of a customer to pay on time.

At this point, the sale has been motivated by enlightened credit management; an order has been received; goods have been shipped or services provided; and an accurate and timely invoice has been placed in the customer's hands. A profitable sale is almost within grasp and will be achieved once this invoice has been paid. The gross margin, as calculated from the invoice, is now at its highest point, but it will deteriorate slowly as time passes and the customer fails to pay on time. What cannot be stressed enough, however, is that A/R Management is not "collections", the enforcement of payment. Collections is what collection agencies and attorneys do. They deal with "debtors", not customers.

A/R Management is "the completion of the sale". The primary goal is to keep customers current and buying, and crucial to that process is one of the most important underlying goals: the stimulation of repeat sales. The last thing smart companies want to do is create an impediment to lucrative, repeat business.

The secondary goal of A/R Management is the early identification and control of the small percent of credit customers who represent a potential for loss. Every past-due customer will fall into one of three categories.

Type 1: Slow Pays. Some customers practice tight cash management and deliberately pay late. Other customers may be disorganized, lazy or indifferent to paying on time. These are customers whose business is desired and profitable.

Type 2: Problem Accounts. The problem can be something going wrong or the customer not having the ability to pay. System problems, or things going wrong, often make up the largest percentage of all past dues. Financial problems are either short term (temporary) or long term (serious). We want to work with the temporary cases and encourage their continued business, and we want to cut off further credit to the serious cases, which represent a 90 percent-plus risk of failure.

Type 3: Avoiders. This type of customer is constantly trying to beat you out of your money. It is also the smallest percentage of all past dues. Rather than wasting time (your most precious resource) fighting for payment, cut off credit and refer this account to a "real" collector: an enforcer of payment.

Major Components

In most cases the credit-to-cash function can be broken down into four major components:

  • Credit/sales approval
  • Billing
  • A/R management
  • Monitoring and improvement

Each component must have a goal that complements the purpose of the credit-to-cash function itself. Unfortunately, most companies view the first three items as separate and distinct business functions, while the fourth component is ignored altogether. If the credit-to-cash cycle is not viewed as a single process with individual components that contribute, each in its own unique way, to the success of the whole, this critical cycle will consistently fail to achieve its full potential.

Credit/Sales Approval

Despite the investment and the goal of extending credit, the credit approval process has, at times, been described as finding ways to say "no" and the credit department has been referred to as the "sales avoidance department". Moreover, considering that credit people are being told (via performance measurements) that risk avoidance is the goal, it's surprising that any customer gets approved for credit.

Avoiding the Wrong Message

A substantial investment is made in getting customers to the point where they want to buy. It's such a waste to then look for reasons to reject the sale and to alienate the customer to the point where they may reject you in the future.

There are three key factors that must be considered during the credit approval process. They need to be considered not just at the original sale but on all credit sales.

Customer Profile: Who are your customers and, just as important, how do they do business?

Past Performance: Has the customer ever paid anyone in the past? If they haven't, what makes you think you're going to be the first? As the late former US president Ronald Reagan said, "Trust, but verify."

Product Value at Time of Sale: If you are operating below capacity, the only expenses that matter are variable costs. This is true for both service- and product-oriented environments. The actual profit margin for these incremental sales is therefore higher, sometimes substantially higher. In order to fill this excess capacity, you can adopt very aggressive pricing models or relaxed credit terms. The write-off rate may be higher, but the incremental profit more than makes up for the added risk.
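
A toy calculation makes the point; all of the numbers below are invented for illustration:

```java
// Toy numbers: why incremental sales carry a higher real margin when
// operating below capacity (fixed costs are already sunk, so only
// variable costs count against the incremental sale).
public class IncrementalMargin {
    public static void main(String[] args) {
        double price = 100.0;
        double variableCost = 60.0;       // materials, direct labor, shipping
        double allocatedFixedCost = 25.0; // rent, salaries, equipment

        double fullCostMargin = (price - variableCost - allocatedFixedCost) / price;
        double incrementalMargin = (price - variableCost) / price;

        System.out.printf("Full-cost margin:   %.0f%%%n", fullCostMargin * 100);    // 15%
        System.out.printf("Incremental margin: %.0f%%%n", incrementalMargin * 100); // 40%
    }
}
```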

Sending the Right Message

The goal of the credit approval process should be to maximize sales opportunities wherever possible while minimizing risk. Working to find ways of saying "yes" to every possible sale, while remaining confident of payment, better fits the sales support mission.

Most people treat credit approval as a fixed point in time with a fixed set of operating characteristics. Once a prospect has overcome all of the credit approval hurdles that inhibit the desire to place an order, the prospect is then assigned a credit line. However, this, in most instances, acts more as a deterrent to new business from the prospect than as a protection against increased risk.

For example, if a customer likes your products or services and wants to order more than either party had originally anticipated, the credit line places a temporary cap on the ability to expand the relationship and do more business with you. What's wrong with this picture? Good customers with whom you want to do more business should not be impeded in any way, and yet the desire to "protect" revenue has done just that. Good customers should be allowed to place their orders easily, permitting the sales-to-cash cycle to flow evenly into the billing process.

Using Credit as a Sales Support Tool

Credit is extended when a product or service is sold on the basis of payment at a later date. The extension of credit creates additional administrative costs such as information gathering, credit checking, account establishment, billing, and past-due A/R management. There is also the cost of carrying A/R: the time value of money. Finally, the customer may fail to pay, and with that comes the cost of bad debt write-offs.

Some people will tell you that costs are incurred because customers require time to ensure they got what they ordered and more time to process the invoice. Others will tell you that customers need time to add value to the products or services they are buying, and to make sales to their own customers. Finally, some will tell you that they must incur the costs of credit because it's the customary way of selling in their industry, and that if they don't extend credit, their competitors will.

However, the only reason a business should extend credit is to generate a sale that would otherwise be lost, and not just any sale, but a profitable sale. Any fool can make sales and lose money doing so.

Credit and A/R Management as a Sales Function

Credit and A/R management is a sales enhancement function whose true potential has yet to be realized by most businesses. Credit is a lubricant of commerce and allows the expanded movement of products and services. Yet, improperly applied, credit policies can hinder free flowing commerce.

There was a time, not so very long ago, when there were valid reasons for viewing extended credit as a privilege, as a favor to some and not for others. Following World War II, pent-up demand, commercial shortages, and earnings from work in the war effort precipitated unprecedented growth. Outside of North America, much of the industrial world had been badly damaged or destroyed by the war and had to be rebuilt. The lack of readily available financing for reconstruction in Europe created even more demand for credit.

Later, the 1950s became a time of limited competition; there were no big-box stores, cyber competition, or aggressive foreign competitors. Credit was granted only as a last resort, and credit terms were tightly controlled and designed to limit a company's risk. If customers failed to pay on time, they were cut off from further credit and potentially "blackballed" by others.

The natural outgrowth of this economic climate was the view that credit management was aligned with risk management, with DSO and percent of bad debt as the KPIs. This view was appropriate for the times.

However, the current economic climate is radically different from the post-WW2 era. Rather than shortages, there are now more goods and services available than ever before in human history, with more on the way. Quality in products and services is a given expectation, and for businesses to be competitive they must also have quality business processes.

Yet, today many businesses still cling to the old credit philosophy of the past and track DSO and percent of bad debt. But in doing so, they are missing an opportunity to increase sales, improve cash flow, control losses, elevate customer service levels and customer retention, and decrease their cost of doing business—all of which contribute to profit enhancement.

The notion of risk management that ruled in the 1950s was applicable for that time, but this is 2005. We must rethink the role of credit.

Critical Business Functions: Misunderstood, Underutilized, and Undervalued Part One: Credit and A/R Management


What is the best value proposition for the credit function in business? If we consider the most common key performance indicators (KPIs), Days Sales Outstanding (DSO) and percent of bad debt, it would seem that the major role of credit is risk management. However, these measurements, and the thinking behind them, are flawed, out of date, and potentially detrimental in today's business world.

Accounts receivable (A/R) is one of the largest and most liquid of corporate assets, yet credit and A/R management may be the most misunderstood, underutilized, and undervalued business process. For example, the potential cash flow that can be generated when A/R is brought under control is huge: a very modest three-day reduction in A/R for a typical $20 million company can be in excess of $200,000!
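
As a back-of-the-envelope check, the freed cash is roughly daily credit sales multiplied by the number of days cut from A/R. The sketch below assumes roughly 250 business days per year (an assumption; using 365 calendar days yields closer to $165,000):

```java
// Back-of-the-envelope: cash freed by cutting three days from A/R.
public class ArReductionEstimate {
    public static void main(String[] args) {
        double annualCreditSales = 20_000_000.0; // the "$20 million company"
        double daysPerYear = 250.0;              // assumed business days; 365 calendar days gives ~$164k
        double dailyCreditSales = annualCreditSales / daysPerYear; // $80,000 per day
        double cashFreed = dailyCreditSales * 3.0;                 // three-day reduction
        System.out.printf("Cash freed: $%,.0f%n", cashFreed);      // $240,000
    }
}
```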

Yet, most often credit and A/R management are thought to be part of accounting—a cost center and a necessary evil. However, because of the potential of credit and A/R management, they should be an integral part of the sales process, by helping establish a framework for a long term and mutually beneficial relationship between buyer and seller. They can play a critical role in the prospect-to-cash cycle.

Tuesday, December 1, 2009

HOW BUSINESS METRICS FURTHER OPTIMIZE THE OPERATIONAL IMPACT OF ASSET PERFORMANCE

Any success-driven organization with significant physical assets can tap some or all of the preceding sources of profit improvement using Asset Performance Management. Yet as in all things, success depends on setting goals and establishing metrics that can be used to manage and measure results. This is especially true when an organization directs Asset Performance Management toward specific asset-related operating problems. The more calibrated the metrics, the more powerful and measurable the results.

Take the example of a large regional grocery chain with 140 stores in the United States. In the grocery business, perhaps the most critical capital asset is the refrigeration equipment that keeps high-dollar items such as meat, produce, and other refrigerated and frozen foods cold. If that equipment fails and the problem goes undetected, spoilage costs can be enormous. In the past, this organization, like many grocery companies, monitored refrigeration equipment store by store. It lacked visibility into the performance of the equipment company-wide. As a result, it was forced to be reactive: if equipment went down at a given store, the only hope was to discover the problem immediately and fix it as quickly as possible.

Now, with Asset Performance Management, the company is implementing a proactive solution for optimizing performance of the coolers enterprise-wide. The solution features a centralized, unified view of all refrigeration systems across all stores. This enterprise-wide solution gives the company the ability to make decisions that directly improve operational performance, through monitoring and analysis of specific operational metrics such as:

* Temperature vs. energy consumption and cost
* Maintenance cost vs. temperature trend
* Cooler efficiency by store
* Preventive maintenance compliance vs. refrigeration alarms, spoilage incidents, and refrigeration leak rates

These metrics go beyond the standard transactional maintenance metrics of EAM because they depend on data from multiple systems (asset, finance, operating, and others), not just the maintenance system alone. In addition, the modeling and trending tools offered by Datastream 7i Analytics enable a forward view of that data, unlike traditional reporting tools, which simply visualize current transactional data. By analyzing these cross-silo metrics, the company can now determine which coolers run most efficiently, how temperature control affects cooler performance, what the appropriate maintenance cost per cooler should be, and what level and schedule of maintenance is optimal. It can, in short, be completely proactive about optimizing equipment uptime at a lower operating cost.

HOW ASSET PERFORMANCE INFLUENCES ENTERPRISE PROFITS

The relationship between assets and bottom-line profits typically receives scant attention from upper management. This is another reason why the profit potential of Asset Performance Management remains something of a secret from the broader organization. The good news is that because this potential is overlooked, executives and organizations seeking top performance have new profit opportunities. Consider just a few ways that Asset Performance Management positively impacts the bottom line.

Reduced Production Costs and Increased Capacity

Asset downtime disrupts production and drives up both process and per unit operating costs. Executives often lose sight of this because they focus on output, not on the assets used to create it. As one CFO put it, "Companies care about how many cans they make, not the can machine." The irony is that companies can use Asset Performance Management not only to make more cans, but to make each can more profitably. This is because Asset Performance Management can:

* Increase the availability of assets through reduced downtime and improvement in Overall Equipment Efficiency (OEE)
* Reduce capital investment in assets by 15% or more through improved preventive and predictive maintenance and more efficient labor deployment

A specific example comes from a food manufacturer that recently deployed Asset Performance Management with the goal of using longer manufacturing runs and improved uptime to increase manufacturing efficiency from 87% to 92% and create $7M more in sales, with no additional manufacturing costs.

Reduced Revenue and Operating Losses from Asset Failure

As discussed earlier, little matches the punishment inflicted on an organization when critical assets fail. Whether it is a faulty sewer line, a poorly calibrated vaccine machine, or a broken oil rig, asset failure can create significant revenue and operating losses. The stakes grow even higher in regulated industries (such as pharmaceuticals, healthcare, food & beverage, and biotechnology) as well as in the highly regulated public sector. Consequences of noncompliance can include not just lost revenue but fines, plant or location closings, litigation, damage to reputation, and a loss of investor confidence that can pummel a company's stock price. In the public sector, a loss of taxpayer confidence can reverberate through all levels of government.

Asset Performance Management reduces the prospect of revenue and operating losses by ensuring:

* Forward-looking Risk Management. Asset Performance Management allows organizations to monitor, model, and forecast performance against stringent requirements and key performance indicators (KPIs) across an organization, enabling them to detect problems with high-risk equipment before failure occurs.
* Regulatory Compliance. Combining Asset Performance Management principles with Datastream 7i functionality such as in-depth asset profiling, calibration report documentation, electronic signatures, and audit trail tools enables companies to stay compliant with government regulations, including the FDA's 21 CFR Part 11, standards set by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), the Occupational Safety and Health Administration (OSHA), Governmental Accounting Standards Board statements 34 and 35 (GASB 34/35), and the Environmental Protection Agency's Capacity, Management, Operation and Maintenance (CMOM) program.
* Improved Asset Quality and Reliability. Asset Performance Management enables predictive failure analyses and preventive maintenance strategies that keep assets performing longer, better, and more reliably. Predictive modeling allows organizations to forecast likely failure points and causes and proactively take corrective measures. Organizations can pinpoint unreliable assets, suppliers, and processes; predict reliability issues before they happen; and plan for an asset's timely disposal.
* Reduced Inventory and Inventory Carrying Costs. Asset Performance Management can help organizations reduce inventory of heavy machinery and equipment while also reducing spare parts inventory. Premier Manufacturing Support Services used Asset Performance Management to eliminate the need for more than 60 trucks at one of its 42 plants in just eight months without affecting productivity, reducing its equipment inventory by one-third. The same kind of fine-tuned planning and forecasting can reduce spare parts inventory by 20-30% and cut inventory carrying costs (such as storage, insurance, and handling) by an additional 20%.
* Increased Warranty Recoveries. Unclaimed warranties add up to significant sums for asset-intensive organizations, sums that can be added directly back to a company's bottom line. Consider the example of Coast Mountain Bus Co. (CMBC), a Vancouver, Canada-based transit company with 24,000 assets and more than 56,000 parts records. CMBC uses Asset Performance Management to maintain a complete fleet database to define and maximize warranty recoveries for different pieces of equipment. When a work order is opened on equipment with a valid warranty, the system immediately flags it as warranty work so that all labor and associated costs can be captured and a proper claim filed (a minimal sketch of this check follows this list). In CMBC's case, the result has been $950,000 in recovered warranties in six months.
* Reduced Labor Costs. With Asset Performance Management, organizations can model different scenarios to determine optimum maintenance schedules. They can focus labor resources on preventive rather than reactive work, so that more is done in less time with fewer delays and less overtime. In addition, because Asset Performance Management automates functions such as parts ordering, purchasing, and payments, less labor is required.
* Reduced Capital Outlays. Asset Performance Management improves the reliability and extends the useful life of fleet and heavy equipment, to the extent that new equipment purchases can be reduced by 3-5% annually.
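
Here is the minimal warranty-check sketch promised above. The data structures and asset names are hypothetical, not CMBC's actual system; the point is only the flag-on-open logic.

```python
from datetime import date

# Hypothetical warranty register keyed by asset identifier.
warranties = {
    "bus-102": {"expires": date(2026, 3, 31), "vendor": "OEM A"},
    "bus-215": {"expires": date(2024, 1, 15), "vendor": "OEM B"},
}

def open_work_order(asset_id: str, opened: date) -> dict:
    """Create a work order, flagging it for warranty recovery if covered."""
    coverage = warranties.get(asset_id)
    under_warranty = coverage is not None and opened <= coverage["expires"]
    return {
        "asset_id": asset_id,
        "opened": opened,
        "warranty_work": under_warranty,  # capture labor/costs for a claim
        "claim_vendor": coverage["vendor"] if under_warranty else None,
    }

print(open_work_order("bus-102", date(2025, 6, 1)))  # flagged for recovery
```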

THE EVOLUTION OF ASSET PERFORMANCE MANAGEMENT

Asset Performance Management has the power to overcome the maintenance stigma because it has the power to change operational performance. It completes the evolution from maintaining assets to optimizing assets for higher profits and better overall performance on the bottom line.

Figure 1: Evolution of Asset Management


| | CMMS | EAM | Asset Performance Management |
| --- | --- | --- | --- |
| Purpose | Automate procedures | Optimize asset performance | Optimize asset impact on operational performance |
| Scope | Site-by-site | Enterprise-wide | Enterprise-wide |
| Data | Maintenance procedures | Maintenance transactions | Performance data from Maintenance, HR, Finance, Operations, others |
| Answers | When do we turn the wrench? When do we order replacements? | How do we extend asset life? How do we reduce asset downtime? | How do we use assets to reduce operational costs? How does the asset affect bottom-line performance? |

Understanding how and why this is true requires just a brief description of how Asset Performance Management has progressed from earlier asset solutions. First-generation solutions, known as Computerized Maintenance Management Systems (CMMS), are essentially a way to automate a maintenance to-do list: when to turn the wrench on a certain piece of equipment, when to order replacement parts, and so on. The next generation, Enterprise Asset Management (EAM), focused on how to track and manage an asset's performance: how to extend the asset's life and reduce its downtime. For example, if an organization has 200 forklifts across 10 sites, and 150 are productive and 50 are not, the company can analyze the maintenance transactions associated with the better-performing lifts (which manufacturer made them, when and how often they are maintained, what comprises preventive maintenance, and so on) and then use those findings to improve performance.
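
A sketch of that forklift analysis might look like the following. The data is invented and the aggregation is deliberately simple; any EAM reporting tool could produce the same comparison.

```python
import pandas as pd

# Invented maintenance-transaction summary for a small fleet of lifts.
lifts = pd.DataFrame({
    "lift_id": range(8),
    "manufacturer": ["A", "A", "B", "B", "A", "B", "A", "B"],
    "pm_per_year": [12, 12, 4, 6, 11, 5, 12, 4],   # preventive visits/year
    "productive": [True, True, False, False, True, False, True, False],
})

# Group by productivity to surface what distinguishes the better performers:
# who built them and how often they receive preventive maintenance.
profile = lifts.groupby("productive").agg(
    avg_pm_per_year=("pm_per_year", "mean"),
    top_manufacturer=("manufacturer", lambda s: s.mode().iloc[0]),
)
print(profile)
```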

Asset Performance Management builds upon and extends EAM by combining asset management, maintenance, and tracking with the ability to use this data to improve operational decision-making. EAM answers the question, "How do we get the most out of an asset?" Asset Performance Management answers the question, "How does the asset affect operational performance?" APM supplements enterprise-wide maintenance transaction data with data from other silos in the organization (such as finance, human resources, inventory, and production) and provides the advanced analytics necessary to identify correlations and trends, thereby improving operational decisions and results.

Consider a chief operations officer with the goal of increasing production output enterprise-wide by 10% through a more efficient production process. Using Asset Performance Management, the COO might look across the organization and notice that where production lags, a high number of reactive work orders are being issued, and that where this is true, there is also a spike in the use of contracted labor. This leads to the conclusion that increased use of contract labor directly decreases both equipment (asset) efficiency and production output. Information drawn from HR, production and maintenance data enables the COO to isolate a problem, predict the impact, and decide what steps to take for improvement.
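
The COO's reasoning amounts to a cross-silo correlation. Here is a minimal sketch, with invented per-site figures drawn from maintenance, HR, and production data:

```python
import pandas as pd

# Invented per-site figures: reactive work orders (maintenance system),
# contract labor hours (HR system), and output (production system).
sites = pd.DataFrame({
    "reactive_work_orders": [12, 45, 8, 50, 30, 10],
    "contract_labor_hours": [200, 900, 150, 1100, 600, 180],
    "output_units": [9800, 7200, 10100, 6900, 8100, 9900],
})

# A correlation matrix makes the pattern visible: sites with more reactive
# work and more contract labor produce less, pointing at the root cause.
print(sites.corr().round(2))
```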

WHAT IS ASSET PERFORMANCE MANAGEMENT?

Top performers optimize their assets. It's true in any field: sports, the arts, and business. Yet in business, optimizing the performance of capital assets often plays distant runner-up to the more glamorous pursuit of topline growth. That's not surprising, given the importance of increasing sales revenue and growing a customer base. But it can be costly. Under-performing capital assets leave tremendous profit potential on the table: potential estimated at as much as a 10 percent improvement to the bottom line annually, with up to 30 percent reductions in operating costs.1

Insufficient attention to optimizing assets can also lead to operating surprises that are even more punishing. Consider the prominent pharmaceutical firm that, due to production processes deemed unsafe by the Food and Drug Administration (FDA) and its UK counterpart, the Medicines and Healthcare products Regulatory Agency (MHRA), was barred from providing more than 45 million doses of flu vaccine to the United States. It lost more than $100 million in revenue, took a charge of 36 cents per share, and saw its stock price plummet more than 30 percent,2 all because critical capital assets and processes in its manufacturing operations were managed inappropriately.

Or consider the public water utility whose quality-control problems in 1993 with storm run-off systems and purification processes led to almost $200 million in medical and facilities expenses and, more tragically, to more than 100 deaths.3 The disaster could have been avoided had the utility been able to model and predict asset performance more effectively for circumstances like those that led to it.

The solution to achieving hidden profit potential and avoiding costly operating surprises is Asset Performance Management. Asset Performance Management combines best-of-breed enterprise asset management (EAM) software with the power of cross-functional data analysis and advanced analytics. This enables organizations to make decisions that optimize not just their assets, but their operational and financial results.
WHY IS THE ASSET PERFORMANCE OPPORTUNITY A SECRET?

ARC Advisory Group may have said it best: "Investments in capital assets are staggering: billions of dollars invested in hundreds of plants worldwide. While the nature of the assets may differ, acquiring, maintaining, and disposing of these assets is a very serious business. A one percent improvement in performance can be worth millions annually."4

How this magnitude of profit potential could remain a secret is a mystery, particularly in today's tough, cost-constrained operating environment. But it has a lot to do with the realities of organizational culture and the traditional role of asset management. Asset solutions have long had a visibility problem. They have traditionally been marketed to and run by an organization's maintenance personnel. Unfairly, maintenance carries a stigma, even though it is critical, accounting for as much as 50% of some organizations' operational expenses. Because the work is unglamorous, asset solutions have stayed hidden from executive-level attention. The benefits of asset solutions are no secret to maintenance managers worldwide, but those benefits have yet to percolate to the top of the executive suite.

1 Datastream
2 Dow Jones Equity News
3 Milwaukee Journal Sentinel
4 ARC Advisory Group

To Be Certified, or Not to Be Certified: The Benefits of TEC Certification

Visit the TEC Vendor Showcase site and search for Pronto. Beside each of its eight solution sets, you'll see a black, white, and red "TEC Certified" symbol (for example, on Pronto's page in the ERP Distribution section). This means Pronto has asked TEC to certify its products: an impartial process in which TEC verifies that PRONTO-Xi does what Pronto says it does.

Hunkering over a TEC boardroom table during an interview in March 2009, Leister, his Blundstone-clad feet firmly planted on the floor, confirms that Pronto's status as TEC Certified is beneficial. "Being TEC Certified helps boost the credibility of our product, and our brand, in a market where we are not well known. Clients and prospects know that we can live up to our claims, because TEC has independently certified that what we say about our product's features and functionalities is true."

There are other benefits of going through the TEC certification process for vendors such as Pronto:

* Efficiency: Pronto can save time when responding to client requests for information (RFIs) because the TEC-certified data gives Pronto a head start on successive client projects.
* Increased exposure: products like PRONTO-Xi are more likely to be mentioned in select TEC publications written by TEC analysts, as the analysts are assured that the product data is accurate and reliable.
* Positive perception: the "TEC Certified" label prompts top-of-mind presence for the decision makers and consultants who regularly use TEC's services for evaluating and selecting enterprise software.

Case Study: Pronto Software, ERP Vendor

Solutions:

* financials
* engineer-to-order (ETO), discrete, process, make-to-order, assemble-to-order, and configure-to-order manufacturing
* computerized maintenance management system (CMMS) and enterprise asset management (EAM)
* point of sale (POS) and Web commerce
* retail and customer relationship management (CRM)
* supply chain management (SCM), with warehouse and inventory management, sales orders, purchase orders, electronic data interchange (EDI), advanced forecasting, distribution, and other functionalities
* mining


Offices:


* Head office: Melbourne (Australia)
* Value-added resellers (VARs): in Canada, Mauritius, Malaysia, Sri Lanka, India, Singapore, Vietnam, Australia, New Zealand, and the United States (US)


Company Goals:


* To continue its steady and profitable growth by consolidating its position in established markets and selectively expanding into new markets.
* To increase its number of VARs in North America, and to expand into Western Europe.
* To leverage its lean, value-based engagement model to grow against the trend during the global economic downturn.


Benefits of Using TEC:


* "TEC Certified" status helps Pronto create a credible brand in the minds of both VARs and end users.
* TEC helps Pronto acquire additional VARs around the world, but particularly in the North American market.
* TEC increases awareness of Pronto's brand and products via presence on TEC's Web sites, as well as in weekly and daily newsletters.
* Pronto solution data in ebestmatch™ allows prospects to see how well its feature/function set compares with other solutions in the market; ebestmatch also helps decision makers cement their decisions, or sell those decisions internally.


Leading the ERP Pack "Down Under"

Pronto's Web site describes the company as "Australia's most successful domestic enterprise resource planning (ERP) software vendor." In business for over 30 years and headquartered in Melbourne (Australia), Pronto offers solutions for manufacturing and distribution, as well as support for business intelligence (BI), customer relationship management (CRM), and more. Its flagship product, PRONTO-Xi, serves clients in a wide range of verticals, including automotive and aerospace, high tech and electronics, retail, building and industrial supply manufacturing, food and beverage, and professional services.

With sales of $48,000,000 (AUD) last year, multiple Dun & Bradstreet "Business of the Year" awards, and continued annual revenue growth of 15 percent, the description "most successful" seems plausible. Although it may be hard to determine the precise reasons for a company's success, Terry Leister, Vice President North America, believes two main elements have contributed greatly. The first is a dedication to partnering with customers to solve real business problems. The other is the ability to create partnerships with value-added resellers (VARs), which generate and provide valuable leads. In both instances, the key is developing relationships: people like dealing with people with whom they have an affinity.

TEC has not only helped Pronto acquire VARs, but also works with those VARs in creating business opportunities (details below). But Pronto benefits from exposure on TEC's Web sites in several other ways too.

Achieving World Class Performance

Not only has the maintenance system provided a means for addressing important safety issues, but it has also helped St. Marys progress toward world-class maintenance standards. At the heart of St. Marys' production are three major, critical production assets: its paper machines. For these machines, a percentage of maintenance downtime is budgeted against machine availability. World-class standards indicate that machines of this type should carry between 2.5% and 4.5% budgeted downtime, depending on equipment type. With the disciplined maintenance process provided by CHAMPS, each machine beat the world-class standard by better than one percentage point of actual downtime. The resulting downtime reduction made a positive impact on production goals.
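
For concreteness, the sketch below checks per-machine downtime against budgets drawn from that 2.5%-4.5% world-class band. The band comes from the text above; the per-machine budget and actual figures are invented, not actual mill data.

```python
# Each machine carries its own budgeted downtime standard within the
# world-class band of 2.5%-4.5%; figures below are illustrative only.
machines = {
    "Paper Machine 1": {"budget": 0.025, "actual": 0.014},
    "Paper Machine 2": {"budget": 0.045, "actual": 0.033},
    "Paper Machine 3": {"budget": 0.030, "actual": 0.019},
}

for name, m in machines.items():
    margin_pp = (m["budget"] - m["actual"]) * 100  # percentage points ahead
    print(f"{name}: budgeted {m['budget']:.1%}, actual {m['actual']:.1%}, "
          f"ahead by {margin_pp:.1f} pp")
```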

World-class performance standards are not new to St. Marys. The mill has consistently performed ahead of world-class standards through a proactive approach: finding potential problems before they occur through preventive maintenance, ensuring that spare parts are available, and improving planning for downtime work, thus reducing completion times. Before CHAMPS was in place, downtime always exceeded the budgeted allocation in both time and cost. Not only did planned work take longer to complete, but equipment broke down more frequently from lack of preventive maintenance. CHAMPS became a welcome tool for change and helped set new standards in operational efficiency.

Many would agree that success with respect to maintenance is measured in beating budgeted downtime and budgeted maintenance costs. Success with respect to a CMMS can be measured in terms of its acceptance among users. Maintenance systems in many installations are used exclusively by their maintenance departments and have trouble gaining a foothold elsewhere. At St. Marys, CHAMPS is accepted and in use to some degree by all departments. St. Marys considers that success.
About CHAMPS

"Since 1976, CHAMPS maintenance management software solutions have enabled large enterprises to optimize the life cycles of their capital assets with functionality specifically for preventive maintenance, fleet and facilities management."

CHAMPS delivers quality asset management, improved productivity and efficient operations that result in substantial reductions in maintenance costs.
Value Distinctions

At CHAMPS, we:

* Personalize customer relationships that have lasted multiple decades
* Optimize critical plant and facility assets including work force, equipment, facilities, vehicles, tools and spare parts
* Deliver quality asset management, improved productivity and efficient operations that result in substantial maintenance cost reductions
* Provide an accelerated implementation methodology that is a turn-key, team-based and cost-efficient approach
* Deliver scalable and flexible maintenance solutions designed to manage work, enhance reliability, optimize parts inventory and regulatory compliance, reduce maintenance costs and increase operational efficiencies

Systematic Work Process Approach

From a maintenance perspective, CHAMPS has helped St. Marys become more systematic in the way they approach work. Rather than shooting from the hip, personnel are guided by the system to think through a solution. As a result, workers have come to understand the importance of, and the reasoning behind, improved spare parts control, maintenance cost control, and work order planning.

For users, the maintenance system has become a daily tool for improving work processes. It is the primary application used by the purchasing, stores, and accounts payable departments. All workers at the mill use it as their spare parts inventory catalogue. Managers and superintendents use it to view up-to-the-minute committed maintenance costs, approve (or deny) purchase requisitions, and review work orders (particularly safety-related work orders). Production supervisors enter and review work orders during the course of their shifts, and safety stewards use it to enter work orders related to issues they discover during their safety audits.

Emphasis on predictive and preventive maintenance has played a critical role in improving maintenance efficiency. The CMMS is integral to these improvements in that its preventive maintenance module automatically generates work orders at time-based intervals. These work orders carry user-configured attachments, such as checklists and CAD drawings, which print along with the work order steps.
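
A minimal sketch of that time-based generation logic follows, with invented PM definitions and attachment names (not CHAMPS's actual schema):

```python
from datetime import date, timedelta

# Hypothetical PM definitions: each carries an interval and the attachments
# that should print with the generated work order.
pm_definitions = [
    {"asset": "Paper Machine 1", "interval_days": 30,
     "last_done": date(2025, 5, 1),
     "attachments": ["lube_checklist.pdf", "bearing_cad.dwg"]},
    {"asset": "Debarker", "interval_days": 90,
     "last_done": date(2025, 3, 15), "attachments": ["belt_checklist.pdf"]},
]

def generate_pm_orders(today: date) -> list[dict]:
    """Emit a work order for every PM whose interval has elapsed."""
    orders = []
    for pm in pm_definitions:
        due = pm["last_done"] + timedelta(days=pm["interval_days"])
        if today >= due:
            orders.append({"asset": pm["asset"], "due": due,
                           "print_with_order": pm["attachments"]})
    return orders

print(generate_pm_orders(date(2025, 6, 20)))
```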

Repair day planning and execution is another critical function of the maintenance department. The PM module is used extensively for generating the work orders associated with these repair days. Prior to scheduled repair days, all work orders tagged as requiring machine downtime, whether auto-generated or manually entered, are extracted from the CMMS into Microsoft Project, where they can be further prioritized and reviewed for potential resource conflicts. Repair day schedule compliance has improved markedly since this structured approach was implemented.
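
The extract step could be as simple as the following sketch: filter the work orders tagged as requiring downtime and write them to a CSV that a planning tool such as Microsoft Project can import. Field names are invented.

```python
import csv

# Hypothetical work order records; "needs_downtime" is the repair-day tag.
work_orders = [
    {"id": "WO-101", "asset": "PM1", "needs_downtime": True,  "hours": 6},
    {"id": "WO-102", "asset": "PM1", "needs_downtime": False, "hours": 2},
    {"id": "WO-103", "asset": "PM2", "needs_downtime": True,  "hours": 12},
]

# Keep only the work orders that must wait for a scheduled repair day.
repair_day = [wo for wo in work_orders if wo["needs_downtime"]]

# Export to CSV for prioritization and resource leveling in a planning tool.
with open("repair_day.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "asset", "hours"])
    writer.writeheader()
    for wo in repair_day:
        writer.writerow({k: wo[k] for k in ("id", "asset", "hours")})
```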

For materials management, the maintenance application has given St. Marys the opportunity to close its storerooms, allowing much tighter control of inventoried items. Spare parts costs have also been addressed through reports that analyze patterns of stores issues, identify possible waste, and allocate costs to the appropriate departments. Stores issues are incorporated into St. Marys' primary maintenance cost reports, which are available online and deliver the status of maintenance costs versus budget on an up-to-the-minute basis for management review.

Additionally, stores issues reports have been created for all parts that have not been issued in the last five years. The resulting report was broken down by department and forwarded to the appropriate department planners, who have used it as a tool to remove unnecessary parts from stores.
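
A minimal sketch of that stale-stores report, with invented part records; the five-year cutoff comes from the text above:

```python
from datetime import date, timedelta

# Hypothetical stores records with each part's last issue date.
parts = [
    {"part": "bearing-88", "dept": "Paper Machines", "last_issue": date(2018, 2, 9)},
    {"part": "belt-12",    "dept": "Woodroom",       "last_issue": date(2024, 11, 3)},
    {"part": "seal-07",    "dept": "Paper Machines", "last_issue": date(2017, 6, 21)},
]

# Flag parts not issued in the last five years, grouped by department.
cutoff = date.today() - timedelta(days=5 * 365)
stale_by_dept: dict[str, list[str]] = {}
for p in parts:
    if p["last_issue"] < cutoff:
        stale_by_dept.setdefault(p["dept"], []).append(p["part"])

for dept, stale in stale_by_dept.items():
    print(f"{dept}: candidates for removal -> {stale}")
```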

St. Marys Paper Ltd.: Customer Profile

St. Marys Paper Ltd., located along the banks of the St. Marys River, at the hub of the Great Lakes, produces over 200,000 tons of supercalendered paper per year. The product, a high grade, uncoated paper, is most commonly used in advertising inserts, catalogues and magazines.

Construction of the original mill, driven by an American entrepreneur, Francis H. Clergue, was completed in 1896. The facility began as the Sault Ste. Marie Pulp and Sulphite Company, the first mill in North America to produce dried pulp. Today, it is privately owned, partially by the employees, making it unusual in the paper industry in that every employee has a personal stake in the success of the company. Workers pride themselves on their commitment to quality and service, and their efforts have paid off with several supplier awards from major corporations.
Maintaining the Assets

As competitive pressures increased and technology continued to advance, St. Marys found themselves behind the technology curve. The company had no automated methods to maintain their critical production assets, which totaled over $500 million.

Up until 1989, the approach to maintenance at St. Marys was completely manual. High-value equipment assets such as grinders, slashers, debarkers, paper machines, screens, refiners, and supercalenders required the highest levels of efficiency, output, and uptime. St. Marys recognized that a Computerized Maintenance Management System (CMMS) was necessary for planned maintenance and reduction of downtime.

With the goal in mind of finding a CMMS to help reduce costs and improve efficiencies, St. Marys began their search. Initially, the search required a system that could interface with the company's financial system, which at that time was VAX-based. Of the vendors evaluated, CHAMPS was determined to be the best fit. After a successful implementation, St. Marys had a system in place to help reduce downtime and standardize maintenance processes.
Technology-Driven Maintenance

In this age of continuous change, however, St. Marys was faced with further maintenance requirements. Maintenance technology advanced as well, with the advent of client/server (C/S) applications offering graphical interfaces and greater flexibility than previously possible. Having updated their financial system to C/S, it was time for St. Marys to do the same for their maintenance department.

In 1998, St. Marys decided to investigate potential vendors for the maintenance system upgrade initiative. After a review of several vendors, the mill once again turned to CHAMPS. "We felt they brought a lot to the table, including a solid reputation, extensive experience, and an attractive data migration plan that fit our needs," stated Brian Delvecchio, Information Systems Superintendent for St. Marys Paper. "The ability to access the system remotely was another factor in our decision process. And we already knew the company and support staff on a first-name basis, so familiarity certainly influenced our decision."

Another factor influencing the decision was the maintenance system's ability to interface with St. Marys' financial application, EmpowerFinancials. To address this requirement, a cost-effective interface was mapped out between the maintenance and financial systems, to the mill's satisfaction.

Prior to implementation of the upgraded version of the maintenance system, St. Marys dedicated a core group to developing mill-specific user manuals and procedures based on job functions. The core group delivered the training to all users just prior to implementation. This same approach has been used to address the training needs of new hires and of personnel requiring refresher courses.

The remote access capability of the maintenance system has been a tremendous benefit for planners and supervisors. These users can connect from anywhere over the Internet and run the application from a virtual desktop provided by a Citrix MetaFrame server. This connectivity enables them to prepare for the upcoming day by planning and scheduling their work orders from home the previous evening if they choose. It also enables St. Marys to address support issues efficiently: rather than struggling to describe a particular issue, St. Marys can have the maintenance system support desk shadow the administrator's session, so that both St. Marys and the vendor view the same user session and know exactly what needs to be done.

How and Where Are GPS Navigation Systems Used by Enterprises

There's no such thing as the "best GPS navigation system"—even the cheapest GPS receiver may suit your needs. That's why it's vital to conduct a thorough analysis of your needs and to determine which criteria are really relevant to your organization's GPS requirements.

Software features. The key software features allow users to define routing needs, track multiple destinations, and base their trajectories on multiple customizable parameters. The more robust the feature set, the more routing capabilities are available. (A minimal weighted-scoring sketch for comparing units against these criteria follows the list below.)

* Routing capability. Routing capabilities allow multiple direction types to be used, based on requirements such as time, distance, and hazard avoidance.
o General features. General GPS features include built-in address books, pre-loaded maps, the ability to mark and name locations on maps, and speed-based volume adjustment.
o Tracking technology. Key features of any GPS system include accuracy, tracking ability, and responsiveness.
* Hardware features. Hardware features determine the capacity, connection, and expansion options available to a GPS unit.
* Display. Consider the types of conditions under which you'll be using the GPS navigation device. Criteria include brightness, color, resolution, and display screen size.
* Interface. The interface determines how you can interact with your GPS system. Features to consider include voice output, input methods, and display options.
* Portability. If the system needs to be transported between multiple vehicles, you should precisely determine the extent and nature of your portability requirements.
* Additional non-tracking features. Non-tracking features allow the GPS unit to be used for purposes other than simple navigation.
* Manufacturer and support. These parameters are used to evaluate the viability of the GPS unit's manufacturer and its technical support capabilities.
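
Here is the weighted-scoring sketch mentioned above. The weights and scores are invented; the point is only that making the weights explicit forces the needs analysis the text recommends.

```python
# Hypothetical criteria weights (summing to 1.0) and per-unit scores out
# of 10; tune both to reflect your organization's actual GPS requirements.
criteria_weights = {
    "routing": 0.30, "hardware": 0.15, "display": 0.10,
    "interface": 0.15, "portability": 0.10, "support": 0.20,
}
units = {
    "Unit A": {"routing": 9, "hardware": 7, "display": 8,
               "interface": 6, "portability": 9, "support": 7},
    "Unit B": {"routing": 7, "hardware": 8, "display": 9,
               "interface": 8, "portability": 5, "support": 8},
}

# Weighted sum per unit: higher means a better fit for the stated needs.
for name, scores in units.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{name}: weighted score {total:.2f} / 10")
```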

GPS Navigation System Vendors and Products

* Garmin GPS Systems (including nuvi GPS, eTrex GPS, eTrex Vista GPS, and eTrex Venture GPS)
* Magellan GPS (including eXplorist GPS)
* TomTom GPS
* NAVIGON GPS
* Lowrance GPS
* Navman GPS
* amAze GPS
* Leadtek GPS
* Zoombak GPS
* Aviton GPS

Constant reviews and measurement against project objectives and timeframes

This stage is better stated as "Planning for Success": the project does not end until the benefits are realized and the "new" maintenance focus is both implemented and embedded within the corporate culture.

For this reason, the usual raft of project controls is needed during the project to mark critical stages and points of control. But post-project control points also need to be planned. At these control points we can run self-audits of the system, of the project, of the amount of change that has occurred, and perhaps of future possibilities for dramatic change.
With today's advances in technology, it has become obvious that maintenance management theory and practice need to catch up with the advances made in business management theory and practice generally. By focusing on the everyday processes we use to implement and control reliability initiatives, we will begin to realize great leaps in company performance and in the cost-effectiveness of maintenance, as well as the vast array of benefits available from implementing reliability growth initiatives and applying new technologies.

The current state of CMMS technology is very advanced, in many cases far more so than our ability to apply it. The tool has strong and provable results, yet a great number of CMMS projects end in failure or in cost and time overruns. The proliferation of vendors in this market, instead of driving costs down, has been driving them up. Combined with the common occurrence of CMMS "failures," this makes it obvious that the market should be buyer driven. Vendors need to be challenged and compared rigorously on pricing, after-sales service, and contractual guarantees (possibly even to the point of the shared-risk model recently adopted by some).

By using a template approach we are able to realize immediate benefits in the implementation process as well as in the delivery of the maintenance function. We may even be able to create the changes required without purchasing a new CMMS, by better utilizing what we have today.

Define the training matrix and plan for delivery

As a consequence of the previous steps, we should have arrived at a point where the roles of the people who need to interact with the system are defined, and, via the Work Order Life Cycle, we should also have determined when and what the interaction with both the system and the process needs to be.

From here we are able to determine the training matrix for the implementation process. This needs to be considered very carefully, as it all too often determines the success or failure of the implementation effort. A crucial part of the training matrix is the development and delivery of training focussed on the processes defined in the previous step; neglecting this is without a doubt one of the key causes of CMMS failure.
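
One way to make the training matrix concrete is to derive it mechanically from the role definitions and the Work Order Life Cycle interactions established earlier. A minimal sketch, with hypothetical roles, tasks, and module names:

```python
# Hypothetical roles and the lifecycle tasks each performs in the system.
role_interactions = {
    "planner":    ["create_wo", "schedule_wo", "close_wo"],
    "supervisor": ["approve_wo", "review_costs"],
    "storeman":   ["issue_parts", "receive_parts"],
}
# Hypothetical mapping from each task to the system module that supports it.
module_for_task = {
    "create_wo": "Work Orders", "schedule_wo": "Planning",
    "close_wo": "Work Orders", "approve_wo": "Approvals",
    "review_costs": "Cost Reports", "issue_parts": "Inventory",
    "receive_parts": "Inventory",
}

# The training matrix: which modules each role must be trained on.
training_matrix = {
    role: sorted({module_for_task[t] for t in tasks})
    for role, tasks in role_interactions.items()
}
for role, modules in training_matrix.items():
    print(f"{role}: {modules}")
```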

Once the system is bought, training tends to focus on the functionality of the system, the recommendations of consultants, and half-serious attempts to adapt existing processes to the new system. I have seen many times (and I am sure I am not the only one) processes created only after an implementation. Thus we often end up adapting the organization to the requirements of the system instead of the other way around.
There are basic rules to an implementation project; however, most of these circle around the themes of involvement and empowerment of the implementation team. Depending on the size and scope of the project, the design and interrelations of the required team will change markedly.

It needs to be accepted that eliminating the software vendor's consultancy services is not an aim of the template process. However, there is a need for greater industry-wide understanding of how to manage these types of efforts, so that potential clients can be empowered with greater control throughout the entire exercise. At this point we should have a pinpoint-accurate requirements document including (among other points):

* Interfaces / integration points with other systems

* Migratory data has been recognised, with a plan created for managing it

* Processes have been re-defined to ensure the implementation of best-practice CMMS management processes, with the functionality requirements falling out of this (graded in the typical "Critical / Important / Nice to have" style of rating)

* Training requirements have been defined and planned out in terms of what training is required for which roles

* Key implementation information: the size of the project, identified team members, and other critical details

So at this point we are able both to create the requirements document and to draft a general implementation plan for a rapid and successful project, executed under the control of the client organization.

Define the steps required to achieve the tactical goals

The diagram on this page illustrates the six-step progression required to proceed along the path to realizing the tactical goals that have been created. In this particular case, the goal would be the achievement of a planned state of maintenance management (steps to the planned state of maintenance management). However, all of these steps, with the exception of the fifth, are essential elements in the creation of any business process involving modern-day CMMS systems. This is especially true for the areas of maintenance and engineering.

For example, in a process of inventory optimization, the equivalent of the fifth step might be the creation of spares management policies.

This part of the process is, without doubt, the most laborious and difficult area of any implementation template. It covers a vast number of areas under the one heading, and it is also where the majority of waste in the implementation process occurs. Many of the same arguments are repeated time and time again in each project.

This paper includes a brief overview of the terms business rules and work processes; however, a detailed approach would need to cover all of the steps in this model. As always, there are a number of common themes and principles that can be applied in each of the steps.

Business rules refer to a series of standards within the organization governing how maintenance will be managed in a generic and homogeneous manner.

Some example areas would include:

* Definitions of technical change management qualification procedures

* Rules on the prioritization of work, and how these are to be applied to the efficient management and programming of resources

* The maintenance indicators that will be applied

* How maintenance and work types are to be classified

There is a range of business rules associated with the implementation process. As a general guide, however, we need to look at those things that can guide maintenance efforts in a generic sense across the entire operation, and these need to be created to suit the corporate goals of the organization.

Work processes refer to all of the processes within the maintenance organization. These may include:

* Maintenance Planning and Scheduling

* Backlog management

* Technical Change Management

* Shutdown planning and execution systems

* Inventory management and dispatching systems

Some of the expected outcomes of this phase should be the development of role descriptions and role interactions, as well as the Work Order Life Cycle.

From here we are able to determine the business rule and process requirements of our system, as well as all of the roles that need to interact with the system and the points at which their intervention is required. For systems with complex authorization functions, this step should also provide the basis for easily determining the authorities required by each role.
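
A minimal sketch of that derivation follows, with hypothetical roles, lifecycle steps, and permission names: each role's authorities fall straight out of the steps it performs.

```python
# Hypothetical Work Order Life Cycle: each step names the roles that perform
# it and the system permission that performing it requires.
lifecycle_steps = {
    "raise":   {"roles": ["operator", "planner"], "permission": "wo.create"},
    "plan":    {"roles": ["planner"],             "permission": "wo.plan"},
    "approve": {"roles": ["supervisor"],          "permission": "wo.approve"},
    "execute": {"roles": ["tradesperson"],        "permission": "wo.complete"},
}

# Accumulate the authorities each role needs across every step it touches.
authorities: dict[str, set[str]] = {}
for step in lifecycle_steps.values():
    for role in step["roles"]:
        authorities.setdefault(role, set()).add(step["permission"])

for role, perms in authorities.items():
    print(f"{role}: {sorted(perms)}")
```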

This step can be used, in its complete form, for redefinition of the maintenance hierarchy if this is deemed appropriate.