Investors are pouring billions into artificial intelligence. It's time for a commensurate investment in AI governance



Amid the AI innovation boom, organizations are pouring tens of billions of dollars into AI development. Yet for all the money invested in capabilities, there has been no commensurate investment in AI governance.

Some companies may take the position that the right time to place AI programs within a governance structure, one capable of addressing complex topics such as privacy, transparency, accountability, and fairness, will be when the world's governments roll out AI regulations. Until then, the thinking goes, businesses can focus solely on AI productivity.

The regulatory wheels are already in motion. But regulations move at the speed of bureaucracy, while AI innovation only accelerates. AI has already been deployed at scale, and we are fast approaching the point where AI capabilities outpace effective rulemaking, placing the responsibility for self-regulation squarely in the hands of business leaders.

The solution to this puzzle is for organizations to strike a balance between compliance with existing rules and self-regulation. Some companies are already tackling the responsible AI challenge: Microsoft has an Office of Responsible AI, Walmart a Digital Citizenship team, and Salesforce an Office of Ethical and Humane Use of Technology. But many more organizations must quickly embrace this new era of AI self-regulation.

The business value of self-regulation

Government agencies cannot look inside every enterprise, understand at a technical level which AI programs are emerging, predict the problems that may arise, and then quickly write rules to prevent those problems before they occur. That is an unattainable regulatory scenario, and one no business would want anyway. Each enterprise, by contrast, has a close view of its own AI efforts, putting it in the best position to address AI issues as they are identified.

While government regulations are enforced with fines and litigation, the consequences of a lack of self-regulation are potentially far more impactful.

Imagine an AI tool deployed in a retail store that uses video surveillance feeds, customer data, real-time behavioral analytics, and other inputs to predict what a shopper is most likely to buy if an employee applies a particular sales technique. The AI also builds customer personas, which are stored and updated for targeted advertising campaigns. The tool itself was purchased from a third-party vendor and is one of dozens of AI systems embedded in the retailer's operations.

Emerging regulations may dictate how customer data is stored and transferred, whether consent is required before the data is collected, and whether the tool is provably fair in its predictions. These considerations are valid but not exhaustive from a business perspective. For example, have the AI vendor and its tools been vetted for security gaps that could compromise the enterprise's related technologies? Do staff have the training and documented responsibilities required to use the tool properly? Do customers know that AI is being used to build a detailed profile of them that is stored elsewhere? Should they?

The answers to these types of questions can significantly impact an enterprise in terms of security, efficiency, return on technology investment, and brand reputation, among other things. This hypothetical case reveals how the failure to self-regulate AI programs exposes an organization to a myriad of potential problems, many of which likely fall outside the government's regulatory purview. The best way forward with AI is shaped by governance.

Managing trust in AI

No two companies or AI use cases are the same, and in the age of self-regulation, each enterprise is called upon to assess whether the tools it uses can be deployed safely, ethically, in line with company values, and in compliance with existing or adjacent rules. In short, businesses need to know whether their AI can be trusted.

Trust as a governance lens covers more than the often-cited concerns about AI, such as the potential for discrimination and threats to the security of personal data. As I discuss in my book, Trustworthy AI, trust also extends to qualities like reliability over time, transparency for all stakeholders, and accountability built into the entire AI lifecycle.

Not all of these factors are relevant to every organization. An AI system that automates trade reconciliation probably poses little discrimination risk, but the security of the model and its underlying data is critical. Conversely, data security is somewhat less of a concern for a model that predicts food and housing insecurity, while fairness and discrimination are priority considerations for a tool that relies on historical data potentially rife with latent biases.

Effective self-regulation of AI requires a lifecycle approach in which attention to trust, ethics, and outcomes is embedded at every stage of a project. Processes must be amended to define clear decision points. Employees must be educated and trained to contribute to the governance of AI, with a solid understanding of the tools, their impact, and their own responsibilities throughout the lifecycle. And the technology ecosystem of edge devices, cloud platforms, sensors, and other tools must be aligned to foster the trust qualities that matter most in a given deployment.

Self-regulation bridges the gap between innovation and government-made rules. Not only does it put the enterprise on the path to complying with any regulations that emerge in the future, but it also provides significant value to the enterprise by maximizing investment and minimizing negative outcomes.

For all we have spent building AI capabilities, we must also invest in how we manage and use these tools so they reach their full potential in a trustworthy way, rather than waiting for governments to tell us how.

Beena Ammanath is the executive director of the Global Deloitte AI Institute.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
