Best Practices for Successful TMS Implementations – Data Cleansing

In the past month, the Collaborator has heard from several readers asking us for articles on steps that may be taken to ensure successful TMS implementations.  We’re only too happy to oblige!  This is the first in what will be a series of posts highlighting crucial steps that must be taken if a TMS implementation is going to succeed.  Today, we’re discussing the importance of Data Cleansing in TMS implementation.

So it’s decidedly not a “sexy” topic, but data cleansing is one of those elements that, if not dealt with properly, will land the TMS user in a world of hurt long after the provider’s deployment team has departed.  This is particularly true for shippers transitioning from manual processes (spreadsheets, faxes, emails) to an automation tool like a TMS or an optimizer because, unlike their counterparts replacing an incumbent TMS solution, a program driven by manual processes has never had to focus on ensuring data quality.  Why does it matter?

________________________________________________________________________________________________________

Simply put, “garbage in, garbage out.”  There is no technology in the world powerful enough to convert damaged, incomplete, contaminated or otherwise corrupted data into clear, actionable intelligence.  This is why it is so important that, early in the implementation process, sufficient time be allocated to cleansing existing data on carriers, equipment, drivers, terminals, origin and destination points, etc., before any of it is loaded into the TMS or optimizer.  Failure to do so will almost certainly torpedo the efficacy of the system.

_________________________________________________________________________________________________________


The proverbial “stitch in time” is best applied as aggressive data cleansing at the beginning of a TMS implementation.  Here are some examples of the types of information that should be gathered from whichever silos it currently resides in:

Tractor and trailer information including:

  • Tractor ID numbers
  • Carrier ID numbers
  • Domicile ID numbers
  • Fleet ID numbers
  • Weight info

Carrier locations including:

  • ERP location identifiers
  • SCAC codes
  • Addresses

Rate information from existing routing guides including:

  • Carrier ID numbers
  • Origin-plant
  • Origin-state
  • Destination-region
  • Destination-state
  • Rates by lane
  • Rate expiration dates

For those running private fleets, information on drivers including:

  • ERP location
  • Driver numbers
  • Name/contact info
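
To make the idea concrete, here is a minimal sketch (in Python, with invented field names; no real TMS template is implied) of the kind of sanity check that can be run over gathered carrier records before anything is loaded:

```python
import re

# Hypothetical field names for illustration only; real TMS templates differ.
REQUIRED_CARRIER_FIELDS = {"carrier_id", "scac", "address"}
SCAC_PATTERN = re.compile(r"^[A-Z]{2,4}$")  # SCACs are 2-4 uppercase letters

def validate_carrier(record: dict) -> list[str]:
    """Return a list of problems found in one carrier record."""
    problems = []
    # Fields that are absent or blank both count as missing.
    missing = REQUIRED_CARRIER_FIELDS - {k for k, v in record.items() if v}
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    scac = (record.get("scac") or "").strip().upper()
    if scac and not SCAC_PATTERN.match(scac):
        problems.append(f"malformed SCAC: {record['scac']!r}")
    return problems

dirty = {"carrier_id": "C-104", "scac": " knig ", "address": ""}
print(validate_carrier(dirty))  # ["missing fields: ['address']"]
```

Running checks like this over every silo's extract, before load, surfaces the blanks and malformed codes while they are still cheap to fix.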

Of course, there are numerous other data points that need to be collected, and the contours of the data will vary widely because each shipper’s network and operations are different.  Yet the point is the same.  Typically, in the absence of an end-to-end solution, these kinds of data points are stored in an ad-hoc fashion.  Even the smallest discrepancies – say, a difference in how customer delivery constraints are listed across data sets – can blow up routing and scheduling plans, rendering entire optimizations useless.
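
As an illustration of how such a discrepancy gets neutralized, here is a sketch that canonicalizes delivery-window strings recorded in different formats across data sets (the formats themselves are invented for the example):

```python
import re

def canonical_window(raw: str) -> str:
    """Normalize '8AM-5PM', '08:00-17:00', '8:00 am - 5:00 pm' to 'HH:MM-HH:MM'."""
    def to_24h(part: str) -> str:
        part = part.strip().lower()
        m = re.match(r"(\d{1,2})(?::(\d{2}))?\s*(am|pm)?$", part)
        if not m:
            raise ValueError(f"unrecognized time: {part!r}")
        hour, minute, ampm = int(m.group(1)), m.group(2) or "00", m.group(3)
        if ampm == "pm" and hour != 12:
            hour += 12
        if ampm == "am" and hour == 12:
            hour = 0
        return f"{hour:02d}:{minute}"

    start, end = raw.split("-")
    return f"{to_24h(start)}-{to_24h(end)}"

# Two data sets listing the same constraint two different ways now agree.
assert canonical_window("8AM-5PM") == canonical_window("08:00-17:00")
```

Once every data set speaks the same dialect, the optimizer is comparing like with like instead of silently treating one window as two.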

A best practice when working with a trusted TMS provider to implement a new solution is to map the existing data to fields (using standardized terms) according to direction from the provider’s implementation teams.  A well-organized solution provider supplies data templates to new customers, making it easy to complete the knowledge transfer and the data cleanse/capture according to specific parameters.  It may be tempting to gloss over this process in the push to get the new TMS into a live, steady state.  But it’s critical to success to be patient and to do the inglorious legwork associated with data cleansing.
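
A sketch of what that mapping step can look like in practice; the legacy column names and template fields below are hypothetical, not taken from any provider’s actual template:

```python
# Hypothetical mapping of legacy spreadsheet columns to template fields.
FIELD_MAP = {
    "Carrier #": "carrier_id",
    "SCAC Code": "scac",
    "Orig Plant": "origin_plant",
    "Dest State": "destination_state",
    "Lane Rate ($)": "rate_per_lane",
}

def map_row(legacy_row: dict) -> dict:
    """Rename one legacy row's columns to the provider's template fields."""
    unmapped = set(legacy_row) - set(FIELD_MAP)
    if unmapped:
        # Surface columns the template doesn't cover instead of dropping them.
        raise KeyError(f"no template field for: {sorted(unmapped)}")
    return {FIELD_MAP[k]: v for k, v in legacy_row.items()}

row = {"Carrier #": "C-104", "SCAC Code": "KNIG", "Lane Rate ($)": "1425.00"}
print(map_row(row))
```

Failing loudly on unmapped columns, rather than discarding them, is the point: every stray column is a question for the implementation team, not data to lose.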
