You have taken the first step towards data accuracy by recognizing that data anomalies are impacting your business procedures (see the previous post). Now you want to discover those anomalies and fix them. With current applications, however, this is a complicated and disorganized process: you need to log in separately to multiple source-system applications, open various forms, develop and run specific reports, and even write code to mine your source systems' data, all just to determine data accuracy and the impact of data anomalies on it. The source systems may be on-premises, in the cloud, or both, complicating matters even further.
Dealing with the impact of data anomalies only periodically might be too little, too late
The IT department has a heavy workload and tends to be passive when dealing with data anomalies, unaware of the impact this may have. Daily pressure and tasks that need immediate attention push the anomalies issue back, so that it is faced only at periodic events such as a contract renewal with a specific vendor, a quarterly closure, or an annual report to management. By then, business procedures and decisions have already been impacted by those data anomalies, and dealing with and fixing them requires extra work.
By being proactive, you may reduce or even prevent the impact of data anomalies.
Check out TripleCheck to learn about data quality checking solutions.
For example, when a purchase order is created with an abnormal currency conversion rate, the mistake may then be pulled into the invoices and cause problems with payments, affecting the company books: it can distort profit figures.
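As a minimal sketch of how such an entry could be caught proactively, an entered conversion rate can be compared against a reference rate with a tolerance check. The function name, reference rate, and tolerance below are illustrative assumptions, not part of any specific ERP system:

```python
def rate_is_abnormal(entered_rate: float, reference_rate: float,
                     tolerance: float = 0.05) -> bool:
    """Flag a conversion rate that deviates from the reference rate by more
    than the given relative tolerance (5% by default).

    The reference rate and tolerance are illustrative assumptions.
    """
    if reference_rate <= 0:
        raise ValueError("reference rate must be positive")
    return abs(entered_rate - reference_rate) / reference_rate > tolerance

# A EUR/USD rate mistyped as 11.0 instead of 1.10 is flagged immediately:
print(rate_is_abnormal(11.0, 1.10))   # abnormal entry
print(rate_is_abnormal(1.08, 1.10))   # within normal fluctuation
```

A check like this at entry time prevents the bad rate from ever reaching the invoices.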
Sometimes the anomaly will not be detected along the way, which means that when it does surface, someone will need to go back to the beginning, making corrections and updates to various documents and entities, if that is even possible. A mistake can become entangled in various forms and may affect many different parts of the database. The longer a mistake is allowed to exist, the more it compounds, affecting an ever-increasing number of documents in your system. It is much easier to correct if it is discovered early.
Mistakes in data compound and become more difficult to correct as time passes.
Not having the ability to review historical data anomalies and compare historical results prevents business improvement. Data anomalies directly impact management decisions, affecting the ability to stay flexible and respond quickly to changes in a rapidly evolving business environment.
Understanding the impact of your data anomalies is the first step to managing it
Companies need to be able to review potential data anomalies, to know where they might occur, and to decide which anomalies are problematic and which are not. Business users need to be able to review and approve data with confidence, eliminating the need to repeatedly review the same information. This reduces the need for customized reports, making the data management system less complex and less prone to recurring mistakes.
Here is an example: the ERP system is set up so that a purchase order is required before an invoice can be created, but some vendors do not require a PO. When the controller reviews the data, the system will alert him or her to the “missing” POs. To prevent this unnecessary warning, the controller has two choices. The first is to remember to manually exclude these vendors each time the report is run, which is of course not a reliable course of action. The second is to ask IT to create a customized report that excludes the specific vendors. This may entail substantial development work and is a low priority for IT; it also makes the ERP system more cumbersome, with the potential to demand further IT resources in the future (e.g. when the vendor list is updated).
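The exclusion logic described above can be sketched as a thin filter layered over the report data, so the source system itself is never customized. The vendor names and record fields here are hypothetical:

```python
# Vendors approved as not requiring a purchase order (hypothetical list,
# maintained by a business user rather than hard-coded into a report).
NO_PO_VENDORS = {"Acme Utilities", "City Water Co"}

def missing_po_alerts(invoices):
    """Return invoices that lack a PO, skipping vendors approved to
    work without one. Each invoice is a dict with hypothetical keys
    'vendor' and 'po_number'.
    """
    return [
        inv for inv in invoices
        if not inv.get("po_number") and inv["vendor"] not in NO_PO_VENDORS
    ]

invoices = [
    {"vendor": "Acme Utilities", "po_number": None},    # exempt: no alert
    {"vendor": "Gadget Corp", "po_number": None},       # genuine anomaly
    {"vendor": "Gadget Corp", "po_number": "PO-1001"},  # has a PO: no alert
]
print(missing_po_alerts(invoices))
```

Keeping the exemption list outside the ERP report means updating it is a data change, not a development task for IT.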
With TripleCheck, a business user can take action and approve a set of rules that allow certain data anomalies to be regarded as normal (and other scenarios to always be regarded as abnormal), thus preventing unnecessary warnings and alerts. TripleCheck does NOT affect the source system (or its set of rules); it is an additional layer. (After a testing phase, the new rule can also be approved in the source system.) Within TripleCheck’s monitoring platform, management will be able to review who approved the records, when, why, and until what date each rule remains valid.
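The audit metadata such a rule carries (who approved it, when, why, and how long it stays valid) could look roughly like the record below. This is a generic sketch, not TripleCheck's actual data model; all field names, dates, and values are assumptions:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AnomalyRule:
    """An approved exception rule with audit metadata (illustrative fields)."""
    description: str
    approved_by: str
    approved_on: date
    valid_until: date
    reason: str

    def is_active(self, on: date) -> bool:
        """A rule applies only between its approval date and its expiry date."""
        return self.approved_on <= on <= self.valid_until

# Hypothetical rule: a utility vendor may be invoiced without a PO for one year.
rule = AnomalyRule(
    description="Vendor 'Acme Utilities' may be invoiced without a PO",
    approved_by="controller@example.com",
    approved_on=date(2024, 1, 1),
    valid_until=date(2024, 12, 31),
    reason="Utility vendor billed on a recurring contract",
)
print(rule.is_active(date(2024, 6, 1)))   # active while valid
print(rule.is_active(date(2025, 2, 1)))   # expired
```

An expiry date forces each exception to be re-approved periodically instead of silently living forever.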
To understand how you can manage the impact of data anomalies and be PRO-ACTIVE in reducing its scope, take a look at the next post. It’s about the TripleCheck 3C Monitoring platform and what it can do for you.