Ensuring Data Privacy and Security in the Age of ChatGPT

What’s the Status of the Unofficial GPT Pilot Running in Your Company?

With the buzz and media coverage surrounding AI and generative technologies, it’s crucial for your company to proactively discuss and define what approved usage of these technologies looks like.

According to an Increditools article, over 100 million people worldwide are using ChatGPT, generating around 10 million daily requests. Monthly traffic is estimated at 96 million visitors. These statistics suggest a high likelihood that someone in your company is already interacting with AI—whether during work hours or on company devices. While “shadow IT” is a well-known concern, we’re now entering the realm of “shadow AI.”

Although generative AI is exciting, it comes with risks. Some tasks should never be delegated to an AI, as the consequences of errors could be too costly. For example, using AI to respond to an HR complaint might seem obviously inappropriate, but not everyone may recognize the risks. Different tools have varied data policies, so it’s important to understand how using a particular AI tool could impact company data or intellectual property (IP).

I recently read about employees at a tech company who inadvertently exposed sensitive data by sharing source code with ChatGPT. It made me wonder: What company data or IP might others be unintentionally disclosing?

If this raises any concerns, here are four steps you can take to reduce your risk—and put your mind at ease:

  1. Create a GPT Use Policy. When developing your policy, consider these questions:
    • Can AI tools be accessed via company hardware?
    • How should AI tools be used?
    • What should NEVER be done with AI?
    • What constitutes “common sense” when it comes to AI usage?
  2. Communicate Approved and Forbidden Tools/Practices. Make sure to clearly define:
    • Which tools should NEVER be used.
    • How much trust should be placed in AI-generated responses, depending on the situation.
    • Which settings need to be configured to prevent unwanted data sharing (see the sketch after this list).
  3. Review Data Agreements for AI Tools. For the AI tools your company uses, examine the data agreements. Understand who has access to the data you feed into the system and how that data will be handled.
  4. Establish a Review Process. Set up regular reviews of these policies, ensuring that the rules and practices are up-to-date and effective.
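
To make the data-sharing bullet in step 2 a little more concrete, here is a minimal, hypothetical sketch in Python of a pre-flight check that flags likely-sensitive strings before an employee shares text with an external AI tool. Everything in it is an assumption for illustration: the patterns, the internal domain example-corp.com, and the function name are placeholders, not real data-loss-prevention tooling, so adapt them to whatever actually counts as sensitive in your environment.

```python
import re

# Hypothetical patterns for illustration only. The internal domain
# "example-corp.com" and the rules below are assumptions; replace them
# with the secrets, domains, and identifiers that matter to your company.
SENSITIVE_PATTERNS = {
    "AWS-style access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Internal email address": re.compile(r"\b[\w.+-]+@example-corp\.com\b"),
    "Possible credential assignment": re.compile(r"(?i)(password|passwd|secret)\s*[:=]\s*\S+"),
}


def scan_for_sensitive_data(text: str) -> list[str]:
    """Return a finding for each pattern matched in text about to be shared."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append(f"{label}: {match.group(0)[:30]}")
    return findings


if __name__ == "__main__":
    snippet = 'db_password = "hunter2"  # copied from an internal config file'
    for finding in scan_for_sensitive_data(snippet):
        print("Do not share -", finding)
```

A simple check like this will never catch everything, which is exactly why it belongs alongside, not instead of, the policy, communication, and review steps above.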

This is a starting point rather than a final solution, but if you weren’t sure where to begin, it’s a good foundation. I’d love to hear your thoughts or learn about any early steps you’ve taken to keep AI usage secure and controlled.
