The term ‘shallowfake’, along with the better-known ‘deepfake’, is becoming familiar as we learn more about fake news and the processes that sit behind the production of false information.
Both shallowfake and deepfake refer to deception through the manipulation of media. However, while deepfakes require advanced artificial intelligence (AI) software, shallowfakes can be created with basic editing software such as Photoshop. For this reason, shallowfakes present a more immediate fraud threat than their more complex relative.
Coverage of fake news has created new opportunities for fraudsters to explore. It has also taught us to question what we’re told and shown. These two opposing factors have seemingly combined, with shallowfakes being created and identified in increasing volumes.
The insurance industry has noted an increase in shallowfakes used to support the inception of fraudulent policies and pursue fraudulent claims. But the threat extends far beyond insurance fraud, and public service organisations involved in financial transactions and settlements are equally at risk.
Shallowfakes can be video, document or image based. A fraudster would use a shallowfake to secure an appropriate level of insurance cover and/or to gain financially through a fraudulent claim, transaction or acquisition of property. Such fraud may be an isolated attempt by an individual, or a more sophisticated, repeated fraud perpetrated by networks of organised criminals.
Shallowfakes usually fall into one of two categories:
- Proof of identity or address – any evidence produced by a person to prove who they are. This could be a photo ID document such as a driving licence or passport, or a national insurance card, utility bill or bank statement.
- Supporting evidence – any evidence produced to support a claim or transaction. This could include invoices for services rendered, contracts and agreements, expert reports, accident damage and injury photos.
Any disclosure has the potential to be a shallowfake, but the crucial differentiator between a shallowfake and any other false document is the manipulation of genuine pre-existing media.
It is their basis in reality that makes the shallowfake hard to identify, especially by employees who have large volumes of documents and images to process and who are under pressure to make quick decisions on liability positions, payment plans, settlement offers and property ownership.
While some shallowfakes can be easily spotted through font misalignments, inconsistent backgrounds, or colour variations, often the alterations are too subtle to be obvious visually. Equally, such inconsistencies could be the result of poor but legitimate reformatting.
The true clues are often in the finer details. One recent example is an individual presenting a bank statement to prove a transaction. The overall statement balance did not add up from that transaction onwards, despite it calculating perfectly beforehand. Another example is a scanned colour copy of an individual’s driving licence, complete with a licence number containing one too many digits and a format that did not conform.
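The bank-statement example above lends itself to a simple automated cross-check. As a minimal sketch (my illustration, not any insurer's actual system), assume each statement row has been parsed into a `(amount, stated_balance)` pair; walking the running balance forward exposes the first row where the arithmetic stops adding up:

```python
def find_balance_breaks(opening_balance, rows):
    """Return indices of rows whose stated balance does not match the
    running balance implied by the transaction amounts.

    rows: list of (amount, stated_balance) tuples, where a positive
    amount is a credit and a negative amount is a debit.
    """
    breaks = []
    running = opening_balance
    for i, (amount, stated) in enumerate(rows):
        running = round(running + amount, 2)  # round to pence to avoid float drift
        if running != round(stated, 2):
            breaks.append(i)
            running = round(stated, 2)  # resync, so later rows are judged on their own
    return breaks

# A statement whose third row has been doctored: the stated balance
# no longer follows from the opening balance and the amounts.
rows = [(-25.00, 975.00), (50.00, 1025.00), (-100.00, 825.00), (-25.00, 800.00)]
print(find_balance_breaks(1000.00, rows))  # → [2]
```

In the recent case described above, exactly this kind of check would have shown the statement calculating perfectly up to the disputed transaction and failing from that point onwards.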
Some private organisations have purchased, or are building their own, tamper-detection technology to help identify and defend against manipulated media. Detection is typically achieved through basic metadata validation or pixel variance analysis.
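A toy version of the variance analysis mentioned above can be sketched in a few lines. This is a simplified illustration of the general idea, not any vendor's algorithm: split a grayscale image into tiles and flag tiles whose pixel variance is wildly out of line with the rest, which can indicate a pasted-in region.

```python
from statistics import pvariance

def flag_anomalous_blocks(image, block=4, ratio=10.0):
    """Split a grayscale image (2D list of 0-255 values) into
    block x block tiles and flag tiles whose pixel variance exceeds
    `ratio` times the median tile variance.
    Returns a list of (tile_row, tile_col) coordinates."""
    h, w = len(image), len(image[0])
    tiles = {}
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            pixels = [image[r + i][c + j]
                      for i in range(block) for j in range(block)]
            tiles[(r // block, c // block)] = pvariance(pixels)
    variances = sorted(tiles.values())
    median = variances[len(variances) // 2]
    return [pos for pos, v in tiles.items()
            if median > 0 and v > ratio * median]

# A smooth gradient image with one noisy block pasted into the
# bottom-right corner: only that tile stands out.
img = [[r + c for c in range(8)] for r in range(8)]
for i in range(4):
    for j in range(4):
        img[4 + i][4 + j] = 255 if (i + j) % 2 else 0
print(flag_anomalous_blocks(img))  # → [(1, 1)]
```

Real tools work on compression artefacts and noise patterns at far finer granularity, but the principle is the same: genuine media has statistically consistent texture, and a spliced-in region often does not.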
Unfortunately, technology is less effective at spotting shallowfakes than the more complex deepfakes. Deepfakes carry the distinctive statistical fingerprints of AI generation, which detection tools can learn to recognise; shallowfakes, being edits of genuine media, leave far fewer traces. Furthermore, even the most advanced detection software is easily circumvented through simple file-format changes, such as resaving a manipulated image as a PDF, or printing a hard copy of the shallowfake and photographing it.
A photo would appear legitimate to any detection software and would bypass any built-in fraud triggers. The result is that human intuition and robust validation processes remain the most important guardians against shallowfakes, so staff training and awareness are vital.
Investigating suspected shallowfakes involves a comparison against what ‘genuine’ looks like, cross-checking against other evidence, and where appropriate, contacting the source and requesting confirmation that the document and details provided match those held on official record.
This thorough human checking process proved effective in a recent claim where an energy bill presented as proof of address raised suspicion. In that instance the fraudster hadn’t considered that the account number could be checked through a data protection request to the utility provider.
As the digital age develops and fraud intensifies, the tendency is to rely on technology to help us detect criminals. However, technology can’t yet fully protect us against manipulated media. It follows that more private and public service organisations will fall victim to the unscrupulous individuals and businesses intent on deceit.
Organisations and individuals must be vigilant if we are to win this fraud battle. We rely on human instinct, attention to detail, awareness and knowledge to check the validity of what is being processed.
In our online worlds we may question the validity of the videos appearing to show famous people in compromising positions, or the accuracy of social media ‘news’, but do we apply that logic when reviewing the disclosure documents hitting our desks each day?