Audit Logs

Can anyone tell me the source of this data? Should we trust it?

If organisations are to successfully adopt AI, it's clear that transparency and explainability will be critical foundations for building digital trust...


Have you ever questioned the source of the data you consume in your personal and business life? Ten years ago, the thought of questioning the data and digital information presented to us was almost unthinkable; our trust in digital content was at a high. Yes, we knew the social media post about the neighbour’s lavish vacation was not entirely truthful, but when it came to business data we wouldn’t dare question its source or legitimacy for fear of offending our employer.

Roll the clock forward to 2024 and has our trust in digital information and data changed? Yes, you say, having just read about the latest deepfakes and misinformation in your news feed. These events are highlighted in the media daily, and our awareness that not all data or digital information comes from legitimate sources is ever growing. Despite this, digital trust in business hasn’t changed: many employees still never question the origins of the data they use to make critical business decisions.

Why don’t we question the origins of business data?

In a business world that is still largely in denial about historically poor data quality, data lineage, and data management, it’s no wonder we avoid asking the difficult questions. We thought that moving data and applications to the cloud would provide an opportunity and incentive for businesses to clean up their data and get their houses in order. Unfortunately, in many cases the complexity and cost of classifying and cleansing data meant that organisations defaulted to migrating their data to the cloud without any real housekeeping. Now, with the arrival of AI and the eagerness to adopt it, the same challenge confronts the same organisations who thought they had navigated around their data problems. This time, however, the challenge is even greater than the move to the cloud, with an even greater need for clean, classified, and permissioned data with known origins and lineage.

So, will AI prioritise the conversation about the origins of data, given its appetite for high-quality, permissioned data? Or will business leaders choose to ignore the underlying data lineage issues, gambling their reputations by continuing to use unclassified “grey” data in AI applications to gain efficiencies?

With consumers and employees becoming ever more distrustful of data and digital information, businesses looking to adopt AI will first need to improve the management and traceability of business data to meet digital distrust head-on. Second, they will need to be positioned to provide defensible transparency when questioned about the data sources their AI technology uses. Verbal reassurance, or a self-curated document produced in response to the question, will most likely not be adequate as we move into an AI-immersed business world.

Will AI drive digital trust?

AI’s potential to revolutionise society is undeniable, yet its impact on digital trust is complex and multifaceted. Already we have seen AI technology amplify digital distrust in the following ways:

  • Deepfakes and misinformation: AI can create highly convincing fake content, causing many to question what they see in the media and eroding trust in digital information.
  • Algorithmic bias: AI systems trained on poor or biased data perpetuate and amplify existing societal biases, leading to distrust in AI-driven decisions.
  • Privacy concerns: The extensive data collection that feeds AI development raises concerns about privacy and data misuse by AI technology providers.

Despite the potential for AI to amplify digital distrust, there are equally ways AI can address these challenges and promote trust if the correct controls and frameworks are adopted from inception. Some examples of how AI can support digital trust include:

  • Trustworthy AI: By developing AI systems with transparency, accountability, and fairness built-in, it's possible to increase trust in AI.
  • Fact-checking and misinformation detection: AI can be used to identify and combat fake news and misinformation, helping to restore trust in information.
  • Personalised experiences: AI can be used to create highly personalised experiences that build trust between businesses and consumers.

Whatever your viewpoint on whether AI promotes digital trust or erodes it, it’s clear that AI technology is here to stay and it’s our collective responsibility to use it ethically. Organisations developing AI applications must prioritise balancing rapid innovation with robust trust-building measures, necessitating transparency about AI applications, capabilities, and limitations, as well as a strong ethical foundation to mitigate risks and foster confidence.

What does transparency in AI applications look like?

To establish the correct foundations for an AI application, organisations first need to focus on transparency and explainability, both of which are crucial to establishing trust. When users understand how an AI system works and the rationale behind its decisions, they are more likely to accept its outputs and rely on it.

Transparency involves disclosing information about an AI system’s development, data, and operations, and for many organisations this kind of disclosure has historically been suppressed in order to retain competitive advantage. Sharing the AI model architecture and underlying algorithms, together with the limitations of the AI application, is certainly not common practice, so this is a significant shift for business leaders to grapple with.

Even more important to transparency is the data the AI application has been built on: where it came from, how it was collected, and what processing and transformation it underwent. Don’t be fooled into thinking this is just about collecting logs as evidence of actions and events. This goes far beyond traditional audit logging. It is about proof and defensibility: the logs are part of the solution, but it is the storage and presentation of that information to the user that is critical if digital trust is to be established.
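To make that distinction concrete, here is a minimal sketch of a tamper-evident, hash-chained audit log in Python. The class name, record fields, and data sources are hypothetical; a production system would add cryptographic signing, durable storage, and access controls on top of this idea.

```python
import hashlib
import json
import time

def _digest(body: dict) -> str:
    # Canonical JSON (sorted keys) keeps the hash stable across runs.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

class HashChainedLog:
    """Append-only log where each record commits to its predecessor,
    so editing any historic record breaks every later hash."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, event: str, data_source: str) -> dict:
        body = {
            "timestamp": time.time(),
            "event": event,
            "data_source": data_source,
            "prev_hash": self.entries[-1]["hash"] if self.entries else self.GENESIS,
        }
        record = {**body, "hash": _digest(body)}
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        prev_hash = self.GENESIS
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            if record["prev_hash"] != prev_hash or _digest(body) != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True

# Example: record the provenance of data feeding an AI application.
log = HashChainedLog()
log.append("ingest", "crm_export_2024_06.csv")          # hypothetical source
log.append("transform", "deduplication + PII redaction")
assert log.verify()
```

Because every record commits to the hash of its predecessor, altering any historic entry invalidates every subsequent hash, which is what turns a log from a record into evidence.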

This is where technologies such as blockchain can add a tamper-proof layer to AI applications, capturing all data transactions and interactions. Blockchain also allows the captured AI interactions to be shared securely with users, supporting and promoting trust and transparency in the same way the technology is used to record cryptocurrency transactions, or to track materials through a supply chain and give consumers transparency over environmental credentials.
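On the consumer side, anchoring the head of such a chain to a public ledger lets anyone verify that the log they are shown has not been rewritten. Below is a hedged sketch of that check, assuming the provider publishes the head digest of a hash-chained log like the one above and shares the entries with the user; the entry fields are illustrative.

```python
import hashlib
import json

GENESIS = "0" * 64

def verify_published_log(entries: list[dict], published_head: str) -> bool:
    """Recompute a provider's hash chain and compare the final digest
    with the value they anchored externally (e.g. on a blockchain)."""
    prev_hash = GENESIS
    for record in entries:
        body = {k: v for k, v in record.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev_hash"] != prev_hash or digest != record["hash"]:
            return False  # a record was altered or reordered
        prev_hash = record["hash"]
    return prev_hash == published_head
```

If the recomputed head matches the anchored digest, the user knows the provider could not have quietly edited the history after the fact.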

The second foundational requirement is explainability. Explainability in AI aims to render complex decision-making processes comprehensible to humans. This involves identifying key factors influencing decisions (feature importance), providing clear rules or logic underlying the outcome (rule-based explanations), presenting similar instances where the AI reached the same conclusion (example-based explanations), and demonstrating how altering input data would impact the result (counterfactual explanations).
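As a simple illustration of two of these techniques, the sketch below uses scikit-learn to compute permutation-based feature importance and then perturbs a single input as a naive counterfactual check. The dataset and model are stand-ins, and permutation importance is just one of several ways to estimate feature influence; dedicated explainability tooling would go much further.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy dataset and model standing in for a real business decision system.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Feature importance: how much does shuffling each input hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")

# Naive counterfactual check: perturb one feature of a single case and
# report whether the model's decision flips.
case = X[:1].copy()
before = model.predict(case)[0]
case[0, 0] += 2.0  # hypothetical intervention on feature_0
print(f"prediction: {before} -> {model.predict(case)[0]}")
```

Even this toy output, a ranked list of influential features plus a demonstration of which change flips a decision, is the kind of material a non-technical consumer can reason about.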

The perfect digital trust recipe for AI

Getting the balance of transparency and explainability right will mean the difference between keeping loyal customers and a sound business reputation, and losing both; getting it wrong has the potential to destroy an organisation overnight. If consumers of the AI application trust the data, they are more likely to continue using the service, resulting in long-term business growth and loyal customers.

Other benefits of transparency and explainability include improved accountability, making it easier to identify and rectify errors or biases, and enhanced collaboration, fostering open communication between developers, users, and regulators.

Lastly, regulatory compliance around AI systems already requires transparency and explainability, so embracing these requirements now keeps an organisation agile as new legislative frameworks arrive. For example, the recent publication of the EU Artificial Intelligence Act in the Official Journal of the EU marks a pivotal moment for the global AI industry. Entering into force on August 1, 2024, this landmark legislation, coupled with the introduction of the ISO/IEC 42001:2023 AI management systems standard last year, signals a new era of stringent AI regulation. Businesses must now prioritise understanding and adapting to these regulatory frameworks.

Final thoughts…

The road ahead for business leaders is going to involve tough decisions and challenging conversations about data: its use, its origins, and its quality. There will no doubt need to be significant investment in cleansing and managing data, both historical and current, before it can be used for AI systems and applications.

To meet transparency requirements, organisations will need to balance protecting intellectual property that might reveal business secrets against disclosing enough information to establish trust.

Explainability will need to address audiences at different technical levels, presenting information in a way that makes a complex AI application understandable to a non-technical consumer of its outputs.

As with any new technology there will be errors and oversights, some with legal ramifications, so organisations will need a robust and defensible auditing solution to respond to regulatory breaches or litigation. Solutions such as LogLocker offer this type of protection, integrating with AI applications so that legal challenges can be answered with tamper-proof evidence, safeguarding digital trust and avoiding reputational and financial ruin.

Ultimately, the impact of AI on digital distrust depends on how it is developed and deployed. By prioritising transparency, accountability, and ethics, it is possible to harness the potential of AI to rebuild trust in the digital world.
