Move from loss recovery to loss prevention: dealing with duplicates

Large organisations often engage contingency recovery firms to claw money back; these firms are well aware of the high cost of duplicate payments and the sums they can recover. Companies with revenues below about $800 million often don’t see this as an issue, or are oblivious to their duplicate payment rates. Perhaps the widely cited average duplicate payment rate of 0.5-3% of total transactions does not seem believable or material enough to prompt detection, or better still, prevention.

“Offshoring, cutbacks and system automation have slashed accounts payable payrolls but also have cost companies millions of dollars in overpayments” (CFO.com, Bloomberg, Wall Street Journal). 

Below is what this means in real dollars.

Using a conservative metric of 0.95% for duplicate payments, you can easily estimate the potential value. This is much lower than other quoted benchmarks of 1.07% to 3.12%. To estimate annual duplicate payments, most companies simply multiply this rate by the value of their annual Accounts Payable purchases – easy. But it isn’t that simple. You need to consider the following:

Payment Volume vs. Invoice Volume 

Some businesses do not have an accurate figure for accounts payable. The total often includes payments for monthly expense items like utilities and rent, which aren’t really subject to overpayment, yet you might pay these 13 times in a 12-month period. Note also that the ratios above (0.95%, 1.07%) are based on transaction volumes, not dollar spend.

Many transactions include employee reimbursements and other payments that may look like duplicates but are in fact acceptable. You would also need to exclude any invoice raised and reversed, and any credits raised but not applied. Adjusting for all of this, the true duplicate ratio may reduce to approximately 0.2575%. So, with $800 million of Accounts Payable value per annum, you could anticipate that approximately $2.06 million slips through the payment process as duplicates.
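The adjustment above can be sketched as a back-of-the-envelope calculation. The 0.2575% adjusted rate and the $800 million spend are the illustrative figures from the text, not fixed benchmarks:

```python
# Back-of-the-envelope estimate of annual duplicate-payment exposure.
# Both inputs are illustrative figures from the article, not benchmarks.

def duplicate_exposure(annual_ap_spend, adjusted_duplicate_rate=0.002575):
    """Estimated dollar value of duplicates slipping through per year."""
    return annual_ap_spend * adjusted_duplicate_rate

exposure = duplicate_exposure(800_000_000)
print(f"Estimated annual duplicate exposure: ${exposure:,.0f}")
```

Running this with your own AP spend and an adjusted rate gives a quick sense of whether the exposure is material for your organisation.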

Missing Duplicates

Most companies run a duplicate report monthly, or believe someone is focussed on identifying these potential errors. The reality is that a spend of $800 million, with an average transaction value of $4,500, equates to 14,815 transactions per month, or 673 transactions per day – far more than any monthly report or individual reviewer can realistically cover.
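The volume arithmetic is simple but worth checking with your own numbers. The 22 working days per month used to derive the daily figure is an assumption:

```python
# Illustrative volume check: at $800M annual spend and an average
# invoice of $4,500, how many transactions must a reviewer cope with?

annual_spend = 800_000_000
avg_invoice = 4_500
working_days_per_month = 22  # assumption used to derive the daily figure

transactions_per_year = annual_spend / avg_invoice
transactions_per_month = transactions_per_year / 12
transactions_per_day = transactions_per_month / working_days_per_month

print(round(transactions_per_month), round(transactions_per_day))
```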

Many organisations also have 3-way matching, outsourcing or complex processes designed (among other controls) to resolve this problem, yet they still have duplicate payments – why? Because duplicate invoices and payments are caused by a myriad of reasons:

  • The reports we look at often require exact matches before flagging a duplicate.
  • The same invoice number, date and amount can appear under different vendors – almost certainly a duplicate, yet it can go undetected.
  • Then there are all the variations that can occur:

         – Scanning issues where an “S” turns into a “5”
         – Date formatting 05/04/19 = 04/05/19
         – Invoice numbers being truncated, e.g. 00173 = 173

  • This may also extend to the vendor master file, where duplicate vendors exist for various
    reasons.
  • Typos or miskeying errors, e.g. $25,000 = $250.00 – in a typical duplicate report this kind of error would not be detected.
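The variations above are usually handled by normalising invoice numbers before comparing them. The following is a minimal sketch; the OCR confusion set and the leading-zero rule are illustrative assumptions, and a production matcher would use a much broader set:

```python
# Sketch: normalise invoice numbers before duplicate matching, so that
# scanning errors and truncation (per the list above) compare equal.
# The confusion set below is an illustrative assumption.

OCR_CONFUSIONS = str.maketrans({"S": "5", "O": "0", "I": "1", "B": "8"})

def normalise_invoice_number(raw: str) -> str:
    """Upper-case, drop punctuation, apply common OCR confusions, and
    strip leading zeros so '00173' and '173' compare equal."""
    cleaned = "".join(ch for ch in raw.upper() if ch.isalnum())
    cleaned = cleaned.translate(OCR_CONFUSIONS)
    return cleaned.lstrip("0") or "0"

print(normalise_invoice_number("00173"))  # "173"
print(normalise_invoice_number("S173"))   # "5173"
```

Comparing normalised values rather than raw strings is what lets a test catch "00173" paid once and "173" paid again.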

In addition, there may be deliberate attempts to create duplicate or suspicious invoices through fraud. Often these invoices will not be detected. And even when duplicate invoices are stopped before payment – assuming there is a sufficient gap between raising the invoice and paying it – what about the internal cost?

For example, it is estimated to cost anywhere from $4.50 to $35 in internal costs to raise and process an invoice. If the invoice then needs to be credited, that adds another $12. If a payment needs to be recovered, that can cost anywhere from $35 to $1,000 of internal time – on top of the payment itself, which hopefully can be recovered, and any fees if a recovery agency becomes involved.
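Those ranges can be combined into a rough cost model for a single duplicate. The unit costs are the estimates quoted above, not measured figures:

```python
# Rough internal-cost model for one duplicate invoice, using the
# estimated ranges quoted in the text (not measured figures).

def duplicate_handling_cost(processing, credit=12, recovery=0):
    """Internal cost of one duplicate, excluding the payment amount
    itself and any recovery-agency fees."""
    return processing + credit + recovery

best_case = duplicate_handling_cost(processing=4.50)                # caught and credited before payment
worst_case = duplicate_handling_cost(processing=35, recovery=1000)  # paid, then recovered
print(best_case, worst_case)
```

Even the best case shows why prevention beats recovery: every duplicate that reaches the ledger costs real internal time, paid or not.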

Solving the problem 

The problem cannot be solved by an annual internal audit or sample testing alone. You need to be checking on a continuous, daily basis. The Accounts Payable and Vendor Master processes can be complex, especially in large organisations. An annual audit gives a superficial insight but may not help solve, change or improve the process. Many of the issues are time-bound: what is the value in identifying a duplicate payment that is now six months old, when it has become too hard to trace or change the cause?

It is economically better to implement prevention rather than detection – prevention is better than cure. If you fall back to detection, additional work and costs are required.

An ideal world of automated continuous monitoring 

Let’s explore how this could work with automated continuous monitoring to prevent and detect issues within Accounts Payable and Vendor Master, specifically relating to duplicate invoices.

Firstly, separate out historical issues. Implement automated continuous monitoring to prevent any further duplicates, and focus on resolving anything that occurred from yesterday onwards.

Deal with historical errors from last week, month or year through a separate process.

Also, a tip: start with fewer tests or reports (we call these tests for consistency). Why? Surely 30 best-practice tests would be better? True – but you will not be able to handle the volume of data exceptions they produce. Over time you can build up to the full set, but there is no value in creating output that no one is looking into.

Start with the tests that return the greatest likelihood of genuine issues – the HPEs, or High Probability Exceptions. Examples are exact invoice detail matches and the same invoice under different vendors: these are highly likely to be duplicates and warrant immediate follow-up.
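Both HPE tests amount to grouping invoices on key fields and flagging groups of two or more. This is a minimal sketch; the field names and sample records are illustrative:

```python
# Sketch of the two HPE tests described above: exact invoice-detail
# matches, and the same invoice number/date/amount under different
# vendors. Field names and sample data are illustrative.
from collections import defaultdict

def find_hpes(invoices):
    """invoices: dicts with vendor, invoice_no, date, amount.
    Returns groups of 2+ invoices sharing invoice_no, date and amount."""
    groups = defaultdict(list)
    for inv in invoices:
        groups[(inv["invoice_no"], inv["date"], inv["amount"])].append(inv)
    return [g for g in groups.values() if len(g) > 1]

sample = [
    {"vendor": "A", "invoice_no": "INV-9", "date": "2019-04-05", "amount": 4500.0},
    {"vendor": "A", "invoice_no": "INV-9", "date": "2019-04-05", "amount": 4500.0},
    {"vendor": "B", "invoice_no": "INV-9", "date": "2019-04-05", "amount": 4500.0},
]
for group in find_hpes(sample):
    vendors = {inv["vendor"] for inv in group}
    label = "same invoice, different vendors" if len(vendors) > 1 else "exact match"
    print(label, len(group))
```

Because these matches require no fuzziness, nearly every hit is a real duplicate, which is what makes them the right place to start.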

And that is where the difference and essence of continuous monitoring vs reporting comes in. Continuous monitoring is all about what you did with the exception. Was it followed up? Who is handling it? What was the reason, the outcome, the resolution? By capturing all this knowledge, you can then see trends and understand root causes. In an ideal world there would be no exceptions, and that is the goal of a continuous monitoring process: to improve the business and reduce the issues to zero.

As the exceptions produced by these tests are closed out and become manageable in volume, you can add more tests or “loosen” the filters. For example, to reduce volume and focus on value, a test might have been constrained to duplicates over $1,000. Now that the process is automated and under control – with less time spent on recovery because the invoice is reversed before it is paid – you can reduce the filter to, say, $100.

Now might be the time to add a variation duplicate test, where the results look like a duplicate but the probability is lower. These types of exceptions are easily “eyeballed” for validity. An example is a $454,996 invoice matched with $454,699 or $4,549.96: they could be valid entries, but might also be due to a miskey.
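A variation test of this kind can be sketched as two amount checks: a decimal shift (a misplaced decimal point) and a same-digits match (a possible transposition). The thresholds and the helper names are illustrative assumptions:

```python
# Sketch of a "variation duplicate" amount test: flag pairs that match
# after a decimal shift ($4,549.96 vs $454,996) or that reuse the same
# digits ($454,996 vs $454,699). Thresholds are illustrative.

def looks_like_decimal_shift(a: float, b: float) -> bool:
    """True if one amount is the other multiplied by a power of ten."""
    lo, hi = sorted((a, b))
    if lo <= 0:
        return False
    return any(abs(lo * shift - hi) < 0.01 for shift in (10, 100, 1000))

def same_digits(a: float, b: float) -> bool:
    """True if the two amounts are written with the same digits
    (a possible transposition miskey)."""
    digits = lambda x: sorted(f"{x:.2f}".replace(".", ""))
    return digits(a) == digits(b)

print(looks_like_decimal_shift(4549.96, 454996.00))  # True
print(same_digits(454996.00, 454699.00))             # True
```

These matches are lower probability than the HPEs, so they belong in the queue that gets eyeballed rather than the one that triggers automatic holds.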

The challenge is that if an item is identified as a duplicate payment (as opposed to a duplicate invoice), extra work is required to recover the money, and it can take weeks – getting acknowledgement from the vendor and finally the funds back – especially if it is not a frequent vendor.

So, importantly, you also need a workflow exception management process to track the status of these exceptions and ensure they are all managed to closure.

That is the essence of continuous monitoring: all exceptions are managed and tracked to their correct closure. Because you are also collecting feedback on why each exception occurred, the action taken and the outcome, amazing insights into the process can be gained. One notable insight is when things revert to old behaviour. If duplicate payments are at zero and then start to increase, it will be obvious there has been a change in data or processes, and it can be quickly rectified. Often the cause is a change in systems or in the team, so awareness training may be all that is needed to fix the change in duplicates.

Over time you can then start adding other tests beyond duplicates to help identify other types of financial and non-financial issues. For example, tests might relate to monthly payments where there should only be 12 in a year, vendors charging GST that are not registered for GST (a suspicious behaviour alert), or strange invoice sequences.
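The first of those examples, a monthly payment that appears 13 times in a year, is simple to express as a counting test. The vendor names and the 12-per-year threshold are illustrative assumptions:

```python
# Sketch of a non-duplicate test from the examples above: vendors
# billed monthly (rent, utilities) should appear at most 12 times per
# year. Vendor names and the threshold are illustrative.
from collections import Counter

def excess_monthly_payments(payments, max_per_year=12):
    """payments: (vendor, year) tuples for monthly-billed vendors.
    Returns vendors paid more than max_per_year times in any year."""
    counts = Counter(payments)
    return {key: n for key, n in counts.items() if n > max_per_year}

payments = [("City Power", 2019)] * 13 + [("Landlord Pty", 2019)] * 12
print(excess_monthly_payments(payments))  # {('City Power', 2019): 13}
```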

Now that you have a process in place to handle new exceptions, these will be managed and brought into the process with an owner and validated outcomes.

Making what you don’t know, known

It is said that the truth is in the transaction, not the control, and that we should trust but verify. By implementing daily automated continuous monitoring, you can help ensure that no exception slips through the cracks and, most importantly, that you have insight, reassurance and a clear picture that your processes and controls are working effectively, efficiently and as designed – continuously.

We would suggest that before you invest in outsourced recovery, you consider an investment in continuous control monitoring first. You can learn all about continuous control monitoring with Satori CCM and find out everything you need to know.