Business
March 19, 2026

5 Steps to Build Secure AI That Won’t Create Compliance Risk

10 min to read

Many organizations are rushing into AI without understanding the security risks or clearly defining the expected benefit. The result is expensive projects, compliance exposure, and tools nobody uses.

Future in Tech’s Board Member Dr. Galina Datskovsky and CISO Ed Moyle have spent years working with, teaching, and training organizations on effective, secure AI. Their guidance points to five concrete steps any organization — eager early adopter or cautious skeptic — can follow to build an internal AI model that delivers real value while staying secure and compliant.

Step 1: Define the Business Outcome

Many AI projects produce something you can’t monetize or that doesn’t deliver tangible value. To avoid money pits, answer four essential questions before writing a single line of code:

  • Who will use this AI, and for what purpose?
  • What does success look like, and how will you measure it?
  • Where do the savings or new capabilities come from?
  • What data sources, security levels, and governance policies apply?

Answering these questions requires upfront investment — but skipping them is what dooms AI projects. As Dr. Datskovsky puts it: “people are sort of throwing it out there and say[ing], ‘Here it is. Use it.’ And really maybe they haven’t identified the business purpose.” Without a clear use case, there’s no way to answer the most important question of all: did it succeed?

Step 2: Audit and Clean Your Data

A secure AI model depends on clean, well-governed data, which requires knowing where all your data lives, eliminating ROT (redundant, obsolete, and trivial data), and cleaning up structured data. Dr. Datskovsky notes that organizations often assume structured data is reliable and therefore ready to use as-is. Yet across most organizations, the same concept carries different labels in different departments. Sales and marketing may both track lead sources but record them in completely different formats. "Zip code" and "postal code" may be used interchangeably, producing mismatched records.

Before feeding structured data into a training set, align naming conventions and inputs across your organization. Well-governed data won’t make you immediately AI-ready, but it puts you further ahead than most.
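To make the idea concrete, here is a minimal sketch in Python of that kind of alignment: department-specific field names are renamed to one canonical schema before records are pooled. The mappings and sample records are hypothetical, not a prescribed schema.

```python
# Minimal sketch: normalize inconsistent field names before pooling records.
# The canonical mapping and sample rows below are illustrative assumptions.

CANONICAL_FIELDS = {
    "zip": "postal_code",
    "zip_code": "postal_code",
    "lead_src": "lead_source",
    "source_of_lead": "lead_source",
}

def normalize_record(record: dict) -> dict:
    """Rename known variant field names to their canonical equivalents."""
    return {CANONICAL_FIELDS.get(key, key): value for key, value in record.items()}

sales_row = {"zip": "30305", "lead_src": "webinar"}
marketing_row = {"postal_code": "30305", "source_of_lead": "Webinar Q3"}

print(normalize_record(sales_row))      # {'postal_code': '30305', 'lead_source': 'webinar'}
print(normalize_record(marketing_row))  # {'postal_code': '30305', 'lead_source': 'Webinar Q3'}
```

Note that renaming fields only solves the labeling mismatch; value formats (such as "webinar" versus "Webinar Q3") still need their own normalization pass.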

Step 3: Prepare Access to Trusted Data Sources

Defining the right sources — and monitoring them — separates a secure model from a vulnerable one. Moyle recommends evaluating three categories:

Training data. Know where your data originated and how it was processed. Training data must be well-controlled, labeled, and access-restricted. This is where information governance matters most.

RAG and other context sources. Retrieval-Augmented Generation (RAG) lets an AI pull from external sources at query time, useful for industry-specific knowledge (for example, a legal firm’s AI drawing on its full case archive). Choose sources that are heavily monitored and regularly updated. Poorly maintained sources open the door to data poisoning.

Interaction logs. Every prompt a user submits and every response the model returns constitutes a record with potential legal relevance. Rather than waiting for regulation to catch up, build logging into your system from day one and establish a retention policy. Moyle and Dr. Datskovsky both note this is a cornerstone of defensible AI.
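A minimal sketch of that kind of logging, in Python, might look like the following. The file path, field names, and seven-year retention window are illustrative assumptions, not a recommended policy.

```python
# Minimal sketch: append every prompt/response pair to an audit log with
# retention metadata. Path, fields, and retention period are illustrative.
import json
import uuid
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365 * 7)  # example retention period; set per your own policy

def log_interaction(user_id: str, prompt: str, response: str, path: str = "ai_audit.jsonl"):
    """Write one interaction record to an append-only JSON-lines audit log."""
    now = datetime.now(timezone.utc)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": now.isoformat(),
        "user_id": user_id,
        "prompt": prompt,
        "response": response,
        "retain_until": (now + RETENTION).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_interaction("analyst_01", "Summarize the client onboarding policy", "The policy requires ...")
```

Capturing the retention date alongside each record makes it straightforward to apply whatever retention policy you settle on later, rather than retrofitting one.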

Step 4: Implement Access Controls

Secure AI controls not just what data it ingests, but what information it surfaces, and to whom. Because LLMs deliver answers as natural language responses, the model must know which answers are appropriate for which users.

Consider an internal LLM at a law firm loaded with all current and past client files. Due to a conflict of interest, Attorney A cannot work with Client XYZ. If that attorney queries the AI for Client XYZ’s files, should the model return “access denied” — or simply “no files found”? The technical implementation differs significantly, and the right answer depends on what compliance requires in that specific context.
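A rough sketch of how the two behaviors diverge appears below, in Python. The client names, conflict list, and response wording are hypothetical, and a production system would enforce this in the retrieval layer rather than in application code.

```python
# Minimal sketch: permission-aware retrieval. The conflict list, documents,
# and response wording are illustrative assumptions.

CONFLICTS = {"attorney_a": {"client_xyz"}}  # clients each user may not access

DOCUMENTS = [
    {"client": "client_xyz", "title": "XYZ merger memo"},
    {"client": "client_abc", "title": "ABC engagement letter"},
]

def query_files(user: str, client: str, reveal_denial: bool):
    """Return matching titles, or a policy-driven message if access is restricted."""
    if client in CONFLICTS.get(user, set()):
        # Policy choice: acknowledge the restriction, or pretend nothing exists.
        return "Access denied." if reveal_denial else "No files found."
    matches = [d["title"] for d in DOCUMENTS if d["client"] == client]
    return matches or "No files found."

print(query_files("attorney_a", "client_xyz", reveal_denial=True))   # Access denied.
print(query_files("attorney_a", "client_xyz", reveal_denial=False))  # No files found.
```

Which branch is correct is not a technical question — it depends on what your compliance obligations require, which is why the decision needs sign-off before launch.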

As Dr. Datskovsky notes, organizations must understand “what compliance means [for their organization]” before the model goes live. These decisions require human sign-off, and they need to happen before employees start using the tool.

Step 5: Log and Govern Interactions

Governance must be built into the foundation. Tracking outputs manually isn’t scalable due to the volume and speed of LLM interactions. What organizations can govern, however, are the inputs: data quality, data freshness, and access permissions.
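As a rough illustration, the Python sketch below sweeps a hypothetical source catalog for entries that are past their retention date or overdue for review and flags them before they can feed the model. The field names and 180-day review threshold are illustrative assumptions.

```python
# Minimal sketch: flag catalog entries that are expired or overdue for review
# so they can be retired before reaching the model. Fields and thresholds are
# illustrative assumptions.
from datetime import date

REVIEW_INTERVAL_DAYS = 180  # example threshold; set per your governance policy

catalog = [
    {"name": "contracts_2019",   "retain_until": date(2025, 12, 31), "last_reviewed": date(2025, 1, 10)},
    {"name": "policies_current", "retain_until": date(2030, 1, 1),   "last_reviewed": date(2026, 2, 1)},
]

def needs_attention(entry: dict, as_of: date) -> bool:
    """True if the entry is past retention or hasn't been reviewed recently."""
    expired = as_of > entry["retain_until"]
    stale = (as_of - entry["last_reviewed"]).days > REVIEW_INTERVAL_DAYS
    return expired or stale

as_of = date(2026, 3, 19)
flagged = [e["name"] for e in catalog if needs_attention(e, as_of)]
print(flagged)  # ['contracts_2019'] -- expired and overdue for review
```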

Tools like FiT Governance give teams control over data quality and lifecycle, flagging data that should be retired and controlling who has access, so your model draws only from sources you trust.

AI governance doesn’t need to wait for legislation. Moyle notes that “many people are still trying to figure out what an LLM is, let alone how they’ll secure it.” Building auditability in now — before a compliance incident forces the issue — ensures future scalability and security. 

The Foundation Determines the Outcome

Organizations that treat AI governance as an afterthought will struggle to scale AI safely. Those that build security into the foundation — defining outcomes, cleaning data, selecting trusted sources, enforcing access controls, and logging interactions — can unlock real enterprise value.

If your organization is pursuing an AI initiative, talk with our team about how FiT’s information governance software supports every step of the process. Book a demo today.

