As we navigate an increasingly digital landscape, one trend has emerged as a game-changer: the adoption of Artificial Intelligence (AI) within the enterprise. It's not just a passing phase; it's an essential evolution that is rapidly reshaping how businesses operate and compete.
The 2024 Annual Work Trend Index from Microsoft and LinkedIn offers some striking statistics. Consider this: a staggering 76% of professionals believe that AI skills are crucial for their career growth, and they are prepared to acquire these skills by any means necessary. This isn't surprising when you realize that 90% of those who use AI report significant time savings. Indeed, it's a testament to the transformative power of AI that 75% of knowledge workers globally are already leveraging generative AI in their day-to-day tasks.
Yet, there's a burgeoning trend that we, as leaders, need to address: the phenomenon of "Bring Your Own AI" (BYOAI). With 78% of AI users bringing their own AI tools to work, we're witnessing a double-edged sword. On one hand, it underscores the unyielding demand and enthusiasm for AI capabilities. On the other, it introduces substantial compliance and security risks that cannot be ignored.

When users bring their own AI tools to work, they may inadvertently expose sensitive or proprietary information to external parties. For instance, if a user inputs confidential data into a generative AI tool hosted on a third-party platform, they may unknowingly grant the platform owner or other entities access to that data. That data may then be used to train future versions of the AI models, compromising the security and privacy of the original data owner. Furthermore, if the generative AI tool does not comply with the relevant regulations or ethical standards, it may generate content that is inaccurate, misleading, biased, or harmful, damaging the reputation and credibility of both the user and the organization. It is therefore imperative that users understand the potential consequences of using unvetted AI tools and adhere to the policies and guidelines set by their organization.
In this blog series, we'll focus on a strategy to safely deliver AI to the enterprise. We'll explore the advantages of AI, the potential pitfalls of unmanaged AI use, and best practices for integrating AI in a manner that enhances productivity while safeguarding your organization. Join me as we unpack the critical role of AI in driving future success and how we can harness its potential responsibly.
Step 1: Getting Started – Plugging the Hole with AI for All
There are many aspects to AI, such as natural language processing, computer vision, speech recognition, and machine learning. However, the chances are that your knowledge workers are already using generative AI: models that create new content, such as text, images, audio, or video, based on patterns learned from existing data. For example, M365 Copilot and ChatGPT are both generative AI tools that can help you write faster, better, and more creatively by suggesting relevant content based on your input.
Given the statistics above, the chances are that knowledge workers in your organization are already using generative AI tools in their day-to-day work. It is therefore essential that this potential data leak be plugged as quickly as possible.
That's why we highly recommend Spyglass AI GENIE, our solution for delivering AI safely to everyone in the enterprise. AI GENIE (Generative Expert Natural language Interactive Engine) not only simplifies AI access for your entire organization, but also prioritizes security and compliance. With AI GENIE, you can deploy an enterprise-grade, secure AI Landing Zone in Azure, ensuring that your data and operations are protected. AI GENIE provides a web-based chat experience that lets users work with multiple large language models (LLMs), including OpenAI's GPT-3.5, GPT-4, and GPT-4o (Omni), as well as Llama 2 and the Mistral models. Knowledge workers can then use it instead of public services such as ChatGPT. Deploying AI GENIE in conjunction with endpoint management changes that block access to those public services can plug the potential data leaks.
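To make the landing-zone pattern concrete, here is a minimal sketch of what a chat request against a private Azure OpenAI deployment in your own tenant looks like. This is an illustration only: the endpoint, key handling, and deployment name below are hypothetical placeholders, and AI GENIE wraps this kind of plumbing behind its chat experience.

```python
# pip install openai
# A minimal sketch: chatting with a *private* Azure OpenAI deployment
# instead of the public ChatGPT service. Endpoint, key, and deployment
# names are hypothetical placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://contoso-landing-zone.openai.azure.com",  # private endpoint (assumption)
    api_key="<retrieved-from-key-vault>",  # in practice, pull this from Azure Key Vault
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4o",  # the name of *your* deployment, not the public model
    messages=[
        {"role": "system", "content": "You are an internal assistant; data stays in our tenant."},
        {"role": "user", "content": "Summarize the Q3 project status notes."},
    ],
)

print(response.choices[0].message.content)
```

Because the deployment lives inside your own Azure subscription, prompts and completions stay within your tenant's boundary rather than feeding a public service, which is exactly the leak the BYOAI pattern creates.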
Another AI tool is Microsoft 365 Copilot, which integrates seamlessly with your existing M365 environment, leveraging content stored in SharePoint, Teams, and OneDrive. It uses the content that each end user has access to, and while it is "security trimmed" to respect access controls, the nature of LLMs can sometimes introduce unexpected results. M365 Copilot not only helps you create content; it also provides a more powerful search experience, letting you use natural language to find the content you are looking for. It's important to be aware of some nuances and potential drawbacks, however. For example, a user might search using the term "paid," intending to find information related to recent invoices. Because of the expansive and associative nature of LLMs, the search could return additional, potentially sensitive information such as "total compensation," "stock grant," or other remuneration details. This can happen because the AI is trained to identify and link related concepts and data points, sometimes surfacing content that the user might not have been fully aware they had access to.
However, the potential for these unexpected results underscores the importance of managing permissions creep and over-permissioning in SharePoint and Teams. Permissions creep happens when users gradually accumulate more access rights than they need, often due to role changes or project-based access that isn't revoked in a timely manner. Over-permissioning, on the other hand, occurs when users are granted more access than necessary from the outset, which can lead to significant security risks. Both scenarios can result in sensitive information being accessible to individuals who do not require it, thereby increasing the risk of data leaks and security breaches.
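Before a Copilot rollout, it helps to see where broad access already exists. The sketch below is a simplified, unofficial example using the Microsoft Graph API: it walks a document library and flags items shared through organization-wide or anonymous links. It assumes you have already acquired an app-only access token with Files.Read.All; the token and drive ID are placeholders here.

```python
# pip install requests
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<app-only-token>"  # assumption: acquired elsewhere with Files.Read.All
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def list_children(drive_id, item_id="root"):
    """Yield every item in a folder, following @odata.nextLink paging."""
    url = f"{GRAPH}/drives/{drive_id}/items/{item_id}/children"
    while url:
        page = requests.get(url, headers=HEADERS).json()
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")

def broad_links(drive_id, item_id):
    """Return sharing permissions scoped wider than specific, named people."""
    url = f"{GRAPH}/drives/{drive_id}/items/{item_id}/permissions"
    perms = requests.get(url, headers=HEADERS).json().get("value", [])
    return [p for p in perms
            if p.get("link", {}).get("scope") in ("organization", "anonymous")]

drive_id = "<document-library-drive-id>"  # placeholder for the library to audit
for item in list_children(drive_id):
    for perm in broad_links(drive_id, item["id"]):
        print(f'{item["name"]}: {perm["link"]["scope"]} link, roles={perm.get("roles")}')
```

A one-off review like this pairs naturally with the Purview policies discussed next, which turn spot checks into an ongoing process.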
Implementing robust data governance and compliance frameworks can mitigate these risks, and this is where tools like Microsoft Purview come into play. Microsoft Purview offers comprehensive solutions for data governance, risk management, and compliance across your Microsoft 365 environment. By using Purview, you can set up policies to regularly audit and review user permissions, ensuring that access is aligned with current roles and responsibilities. This helps in preventing both permissions creep and over-permissioning.
We recommend completing at least a baseline data governance review, and remediating any oversharing it uncovers, before deploying M365 Copilot.
In the next installment, I'll go into operationalizing AI. Stay tuned, and contact us today to discuss further!