Building Trust in AI: The Key to Unlocking the Full Potential of the Accounting Industry

Trust in AI – How to build reliable AI tools for customer success

By Aaron Harris, CTO at Sage

AI is revolutionizing the way the accounting industry works. It is elevating human performance by taking on the burden of repetitive but necessary tasks, and by accelerating analysis. But accounting is built on trust, so companies must be confident that AI works accurately before they can use it to its full potential.

Even the most innovative AI tool is only useful if it is used in the right way. And people are only comfortable handing work over to technology if they trust it will do the job safely and competently. A recent KPMG study, for instance, found that 61% of people are wary about trusting AI systems.

So, how do you instill trust in the AI designed to support the role of accountants? It’s about taking a responsible, humble approach to AI development while working in collaboration with customers throughout the process, ensuring they have faith not only in the technology, but also in the company behind it.

Humility and accountability

Mark Zuckerberg’s mantra, “Move fast and break things”, doesn’t apply to AI. AI can be an enormous force for good, but its speed and scale of operation mean it has an equal capacity to do harm. And, irrespective of which process is automated or how accurately, accountability for results, a core element of accounting, will always reside with humans. It’s about finding a balance in which AI blends seamlessly into workflows, enhancing them, while human guidance and contributions remain essential to the process.

It is OK to admit that nobody knows all the answers. When I began my journey with AI seven years ago, I didn’t fully grasp the extensive impact that even the most harmless-seeming applications could have. That’s why developers must take a step back and explore all potential outcomes before building a product to solve a business challenge.

For instance, Sage could build an AI tool that quickly generates credit scores for small businesses. We have chosen not to, because there is a risk the AI would be biased and could unfairly give lower scores to businesses owned by women or minorities. At Sage, we’re cautious about using AI in ways that could unintentionally harm certain groups, because we want to ensure fairness for all businesses.
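Sage hasn’t described how it would vet such a model, but one widely used pre-launch sanity check is the disparate-impact ratio: compare approval (or high-score) rates across groups and flag large gaps. A minimal sketch in Python, with hypothetical group labels and data:

```python
# A minimal illustration (not Sage's method) of the disparate-impact
# ratio, a common pre-launch fairness check for scoring models.
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: list of (group, approved) pairs, approved is a bool."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    # Ratio of lowest to highest approval rate; a common rule of thumb
    # treats anything below 0.8 as evidence of disparate impact.
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical decisions from a candidate scoring model:
ratio, rates = disparate_impact([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
print(rates, ratio)  # here ratio = 0.5, well below 0.8: investigate
```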

Ethical considerations

Getting to a point where AI can provide these insights starts with being deliberate about what you allow AI to do, and with articulating clear principles that define what you will and will not do with it. This builds customer trust, because customers understand what types of problems you will use AI to address.

Instilling clear principles from the outset helps mitigate potential bias in AI. It guides developers in choosing problems that AI can solve without societal risk, which in turn helps avoid unwanted consequences creeping into AI insights from training data and data sources. Using AI auditors to identify particular areas to address in your development process can also combat possible biases, as can recruiting diverse talent into development teams to build in diversity of thought and lived experience.

Diversity remains an issue in the AI industry: a 2022 McKinsey study found that fewer than 25% of AI employees identify as a racial or ethnic minority. Meanwhile, women accounted for just 26% of workers in data and AI globally in 2021, and it is safe to say the balance had not evened out by 2023.

At Sage, we have established advisory councils so we can test prospective AI solutions and innovations with a diverse audience. We run a customer advisory council that allows us to showcase products in development to a cross-section of the SMBs we serve.

A customer-centric, trusted approach

It only takes one error to lose an accountant’s trust, especially in small businesses, where even tiny mistakes can cause big problems. That’s why it is essential that when accountants start using technology to automate jobs that people used to do, they have confidence it will work correctly.

At Sage, we’re creating AI that works as well as a human does. Involving the people who’ll actually use this AI in the development process helps us to make sure we’re on the right track. By working closely with our customers, we can really understand what they need, where they’re finding our AI helpful, and use their feedback to make the AI more fit for purpose. For accountants, this frees up more time for strategic leadership – letting AI handle the routine stuff as well as crunching the numbers to give useful insights.

Take Sage’s General Ledger Outlier Detection. Accountants don’t have time to look through thousands of transactions, so this product identifies anomalous ones for review. At the outset, we thought it would be valuable to flag every single problem immediately, but this interrupted employee workflows. So, guided by our customers, we redesigned it to present grouped outliers as potential problems. Even if there is only one real problem in a list of 100 the AI provides, that delivers value, because it might otherwise have been missed. And, by reacting to actual customer needs, we show customers that we understand how they work and make decisions, with the aim of boosting their efficiency – two key elements in building trust.
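Sage hasn’t published how Outlier Detection works internally, but the batching pattern described above is easy to illustrate. A minimal sketch, using a simple per-account z-score test as a stand-in for the real anomaly detector:

```python
import statistics
from collections import defaultdict

def grouped_outliers(transactions, z_threshold=3.0):
    """transactions: list of (account, amount) pairs. Returns outliers
    grouped by account for one batched review, instead of raising a
    separate alert for every hit."""
    by_account = defaultdict(list)
    for account, amount in transactions:
        by_account[account].append(amount)

    review_queue = {}
    for account, amounts in by_account.items():
        if len(amounts) < 30:
            continue  # too little history to judge what is "normal"
        mean = statistics.mean(amounts)
        stdev = statistics.stdev(amounts)
        if stdev == 0:
            continue  # all amounts identical, nothing stands out
        flagged = [a for a in amounts if abs(a - mean) / stdev > z_threshold]
        if flagged:
            review_queue[account] = flagged
    return review_queue  # one grouped list per account
```

The detector itself is a placeholder; the design point is the delivery: one grouped review per account rather than an interruption per transaction.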

Users must feel in control

Developing AI is about more than understanding how humans make a decision or perform a task; it’s about considering human emotions too. For example, AI could pay vendors with no human review, but that takes a leap of faith most users are not willing to make. It is human intuition to question whether technology is doing what it claims. By understanding the emotional element, you can design a solution with the right degree of human oversight and control. In the case of vendor payments, we would need a significantly higher degree of confidence in our predictions and would be far more conservative about which decisions are automated. Simply put, the more anxious an employee feels about getting something wrong, the more control they should have over the process.
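Sage hasn’t published any thresholds, but the principle translates directly into code: gate each automated action on model confidence, and raise the bar as the stakes rise. A minimal sketch with purely illustrative numbers:

```python
# Confidence-gated automation: higher-stakes actions demand higher
# confidence before the AI may act without a human. These action
# names and thresholds are illustrative, not Sage's actual values.
THRESHOLDS = {
    "categorize_transaction": 0.90,  # low stakes: easy to correct later
    "draft_vendor_payment":   0.97,  # medium stakes: staged, not sent
    "send_vendor_payment":    1.01,  # above 1.0, i.e. never automated
}

def route(action: str, confidence: float) -> str:
    """Automate only if the model clears the bar for this action;
    otherwise hand the decision to a human reviewer."""
    return "automate" if confidence >= THRESHOLDS[action] else "human_review"

print(route("categorize_transaction", 0.95))  # automate
print(route("send_vendor_payment", 0.99))     # human_review, by design
```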

Take Sage Inbox, for instance. The tool uses AI to sort through all of your emails and determine what each one is about – is this an invoice, or someone asking for legal information? It then sets up the steps to handle the email as required, and even drafts a suggested reply, though the accountant has the option to change the email or do something else entirely. The accountant is still in charge; the AI just helps them get to an answer quicker.
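The workflow this describes, classify, propose, and leave the decision to the human, is a standard human-in-the-loop pattern. A minimal sketch (the categories and the keyword classifier are placeholders, not Sage Inbox’s actual logic):

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    category: str         # e.g. "invoice" or "legal_request"
    next_steps: list[str]
    draft_reply: str

def triage(email_text: str) -> Suggestion:
    """Classify an email and propose, but never execute, a response.
    The keyword check stands in for a real classification model."""
    if "invoice" in email_text.lower():
        return Suggestion(
            category="invoice",
            next_steps=["match to purchase order", "queue for approval"],
            draft_reply="Thanks, we've received your invoice and will process it shortly.",
        )
    return Suggestion(
        category="general",
        next_steps=["route to inbox owner"],
        draft_reply="Thanks for your message; we'll get back to you soon.",
    )

suggestion = triage("Please find attached invoice #1042.")
# The accountant stays in charge: accept, edit, or discard the draft.
print(suggestion.category, suggestion.draft_reply)
```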

Building trust at every stage

AI is already propelling the accounting industry forward, but as adoption increases and interactions with AI become more visible, trust in the technology will be even more pivotal.

This trust comes from a few things. First, the AI must be accurate, so people can rely on it to produce the right results. It means being honest about AI’s limitations and managing expectations. Companies need to feel sure that the developers behind the AI have thoroughly tested it and carefully considered its impact on society. It means being transparent, self-policing, and complying with regulations as they evolve. And, importantly, it means understanding how people and AI fit together. Now more than ever, the people developing AI and the people using it need to work hand in hand to make sure AI acts the right way.

The chance for AI to empower small and medium businesses is huge. Trust is the differentiator that can help unlock that potential.
