Race to Artificial Intelligence FOMO
Delante Lee Bess
It is no secret that Artificial Intelligence (AI) systems provide revolutionary capabilities that are transforming how we work and how organizations operate. Organizations of all sizes are investing heavily this year, either to add AI to services sold to customers or to improve their internal business processes. In a March 2024 report by Logicalis, 85% of organizations surveyed were investing specifically in AI. Despite the race to implement these systems and the “Fear of Missing Out” (FOMO), organizations must ensure that they address the myriad cybersecurity risks associated with their use.
To effectively address AI risks, organizations must implement a risk management process that tracks organizational risk and provides visibility into it. The process must help organizational leaders define the level of risk the organization is willing to accept, which focuses resources on addressing risks above that level. Tracking and transparency of risks enable effective decision-making by organizational leadership and employees and ensure that risks do not fester or combine with others to damaging effect. This capability is especially important because, according to a recent survey by QBE, only 48% of business executives are aware of their organization’s cybersecurity risk. Well-informed decision-making is not possible without all available information.
With risk processes in place, organizations must ensure that effective governance processes covering the use of AI are implemented. These processes must consist of policies that are published to all organizational employees, contractors, and vendors. In the previously mentioned Logicalis report, 86% of organizations had already implemented AI policies. Beyond AI-specific policies, organizations must also ensure that cybersecurity and privacy policies are implemented and aligned with the use of AI systems and services. A recent report by RiverSafe stated that 20% of respondents had already experienced sensitive data loss as a result of Generative AI usage. Cybersecurity and data privacy policies and processes, such as Data Handling and Data Loss Prevention, still apply to AI usage.
Organizations should also ensure that AI system usage is incorporated into their incident response processes. In a Vanson Bourne survey, 60% of respondents reported experiencing a serious security incident, and AI systems are not immune. In 2023, Samsung banned the use of external Generative AI systems after discovering that employee usage had exposed confidential company information. Earlier this year, ChatGPT exposed sensitive data, including usernames, passwords, documents, and other information. An organization’s ability to react to such security incidents reduces their impact, including potentially significant monetary costs.
While AI systems will continue to transform organizational operations, organizations must manage the risks associated with their use. Risk management processes provide visibility into all risks to the organization. Proper governance through policies and other guidance ensures that all members of the organization know the “rules of the road” for AI usage. Effective incident response processes reduce organizational impact when things go wrong. Through these efforts, organizations can successfully realize the revolutionary capabilities that AI systems provide.
Confluunt Advisors assists organizations with a variety of challenges, including the risks associated with the use and implementation of AI systems. Our advisors can assist with the design and implementation of AI systems, and they have vast experience building risk management processes, developing cybersecurity policies, and developing and testing incident response plans.

Delante Lee Bess
Innovation & Artificial Intelligence