A comprehensive AI security checklist gives small businesses a clear, repeatable process for keeping their AI tools safe. This 20-point checklist covers four categories — access controls, data protection, vendor assessment, and ongoing monitoring — and every item can be completed without a dedicated IT team. According to Verizon’s 2024 Data Breach Investigations Report, 68% of breaches involved a human element, which means the most effective security measures are the ones that shape how your team interacts with AI tools daily.
Key Takeaways
- This checklist covers 20 items across four categories that form a complete AI security foundation
- Every item can be implemented by a non-technical business owner; the foundational items take about a week
- Access controls and data protection are your highest-impact starting points
- Vendor assessment should happen before you sign up, not after
- Ongoing monitoring prevents the gradual security drift that catches most businesses off guard
How to Use This Checklist
Work through each category in order. The first two categories (Access Controls and Data Protection) should be completed within your first week. Vendor Assessment applies each time you evaluate a new AI tool. Ongoing Monitoring is a recurring cadence you maintain once the foundation is set. Print this page or save it as a reference you revisit quarterly.
Access Controls (5 Items)
1. Inventory Every AI Tool in Your Business
Create a spreadsheet listing every AI tool your team uses — including ones employees signed up for on their own. For each tool, record: the tool name, who has access, what business data it touches, and whether it’s on a free or paid plan. You can’t secure what you don’t know about. Many businesses discover 3-5 AI tools they didn’t know their team was using when they do this for the first time.
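If you prefer a script to a spreadsheet app, the inventory can be generated as a CSV and scanned for risky entries in a few lines. This is a minimal sketch: the tool names, columns, and the free-plan flag rule are illustrative placeholders, not recommendations.

```python
# A sketch of the AI tool inventory as a CSV file.
# All tool names and entries below are illustrative examples.
import csv

COLUMNS = ["tool", "users_with_access", "data_it_touches", "plan"]

inventory = [
    {"tool": "ChatGPT Team", "users_with_access": "Alex; Priya",
     "data_it_touches": "marketing drafts", "plan": "paid"},
    {"tool": "Otter.ai", "users_with_access": "Sam",
     "data_it_touches": "meeting recordings", "plan": "free"},
]

with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(inventory)

# Flag free-plan tools for review, since they often lack business-tier protections.
for row in inventory:
    if row["plan"] == "free":
        print(f"Review: {row['tool']} is on a free plan")
```

Rerun the script each quarter with the updated list; the flagged rows become your upgrade-or-replace shortlist.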
2. Assign Individual Accounts (No Shared Logins)
Every person who uses an AI tool should have their own account. Shared logins make it impossible to track who did what, and they make it impossible to revoke access for one person without disrupting everyone else. If your ChatGPT team plan has three seats, three people get individual logins — not one login passed around.
3. Apply Least-Privilege Access
Give each team member the minimum level of access they need. Not everyone needs admin rights. Most AI tools offer role-based access — admin, editor, viewer. The receptionist who uses AI for scheduling doesn’t need the same permissions as the owner who manages billing and integrations.
4. Enable Two-Factor Authentication on Every AI Account
Microsoft has reported that multi-factor authentication blocks over 99.9% of automated account-compromise attacks. Enable 2FA on every AI tool that supports it, and on every account that connects to an AI tool (email, CRM, cloud storage). Use an authenticator app (Google Authenticator, Authy) rather than SMS codes, which can be intercepted.
5. Set Up an Offboarding Process for AI Tools
When an employee leaves, their AI access should be revoked the same day. Add AI tool access to your standard offboarding checklist alongside email, building access, and other systems. One former employee with active AI access who can view customer conversations is one too many.
Data Protection (5 Items)
6. Classify Your Business Data Into Three Tiers
Not all data needs the same protection. Create three categories: Public (marketing content, product descriptions — any AI tool is fine), Internal (financial data, strategic plans — business-tier AI tools only), Restricted (customer PII, health records, payment data — requires specific approved tools and processes). Post this classification where your team can reference it.
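The three tiers above can also be encoded as a simple lookup that your internal scripts or docs can share. A sketch, using the example data types from the tiers above; defaulting unknown data to Restricted is an assumption on the safe side.

```python
# A sketch of the three-tier data classification as a lookup table.
# Tier names match the checklist; example data types are illustrative.
TIERS = {
    "public":     {"examples": ["marketing content", "product descriptions"],
                   "allowed_tools": "any approved AI tool"},
    "internal":   {"examples": ["financial data", "strategic plans"],
                   "allowed_tools": "business-tier AI tools only"},
    "restricted": {"examples": ["customer PII", "health records", "payment data"],
                   "allowed_tools": "specific approved tools and processes"},
}

def tier_for(data_type: str) -> str:
    """Return the tier for a known data type, defaulting to the strictest."""
    for tier, info in TIERS.items():
        if data_type in info["examples"]:
            return tier
    return "restricted"  # unknown data defaults to the most protective tier

print(tier_for("strategic plans"))  # internal
```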
7. Write a One-Page AI Usage Policy
Document what data employees can and cannot enter into AI tools. Be specific: “Do not paste customer email addresses, phone numbers, Social Security numbers, or credit card numbers into any AI tool” is clearer than “Be careful with sensitive data.” Include a list of approved AI tools and who to contact with questions. Our AI security guide covers the full framework for writing this policy.
8. Use Business-Tier Plans for Any Tool That Touches Customer Data
Free AI plans typically have weaker data protections and may use your inputs for model training. Business and enterprise plans come with data processing agreements, no-training commitments, and better access controls. The $20-30/month per user cost is negligible compared to the liability of a data incident.
9. Anonymize Customer Data Before AI Processing
When you need AI to analyze patterns in customer data — support ticket themes, purchase trends, feedback sentiment — strip identifying information first. Replace names with “Customer A,” remove email addresses, and redact account numbers. The business insight is the same; the risk is significantly lower.
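The strip-identifiers step can be sketched as a small pre-processing script. The regex patterns below cover only common US-style formats and are purely illustrative; a production workflow should use a vetted redaction library rather than this sketch.

```python
# A minimal sketch of redacting common identifiers before sending text
# to an AI tool. Patterns are illustrative, not a complete PII scrubber.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

ticket = "Jane (jane@example.com, 555-867-5309) asked about her refund."
print(redact(ticket))
```

Running the example prints the ticket text with the email and phone number replaced by placeholders, which preserves the support-theme signal while dropping the identity.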
10. Enable Data Export and Backup for AI Platforms
Verify that you can export your data from every AI platform you use. Run an actual export test — don’t just trust the vendor’s promise. Store backups separately from the AI platform. If a vendor goes down, changes terms, or gets breached, your data shouldn’t be trapped inside their system.
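Once you have downloaded an export, part of the test can be automated. This sketch assumes a JSON export; the file name and the "conversations" key are placeholder assumptions, since real export formats vary by vendor.

```python
# A sketch of backup verification for a downloaded vendor export.
# "vendor_export.json" and the "conversations" key are placeholders;
# check your vendor's actual export format.
import json
import pathlib

def verify_export(path: str, required_keys=("conversations",)) -> bool:
    """Confirm the export exists, is non-empty, parses, and has expected sections."""
    p = pathlib.Path(path)
    if not p.exists() or p.stat().st_size == 0:
        return False
    try:
        data = json.loads(p.read_text())
    except json.JSONDecodeError:
        return False
    return all(key in data for key in required_keys)

# Simulate a downloaded export so the check has something to run against.
pathlib.Path("vendor_export.json").write_text(json.dumps({"conversations": []}))
print(verify_export("vendor_export.json"))  # True
```

A check like this turns "we have a backup" from an assumption into something you can confirm on a schedule.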
Vendor Assessment (5 Items)
11. Check the Vendor’s Training Data Policy
Before signing up for any AI tool, find a clear answer to: “Does this vendor use my data to train their models?” The answer should be in their terms of service, privacy policy, or data processing agreement — not just in a blog post or FAQ. If you can’t find a clear answer, ask directly. Vague answers are a red flag.
12. Verify SOC 2 Type II Certification
SOC 2 Type II means the vendor has been independently audited for security controls over a sustained period (not just at a point in time). Ask to see the report. A badge on their website isn’t proof. This certification doesn’t guarantee security, but its absence means the vendor hasn’t submitted to independent scrutiny.
13. Review the Data Processing Agreement
Request and read the vendor’s DPA before entering any customer data. Key items to verify: data retention periods, sub-processors, breach notification timeline (should be 72 hours or less), deletion procedures, and audit rights. Most major vendors have standard DPAs available — you just need to ask and sign. Our data privacy guide has the full DPA checklist.
14. Test Data Deletion Procedures
Before committing to a vendor, test their data deletion process. Create a test account, add some sample data, request deletion, and verify it’s actually gone. Some vendors say they support deletion but take 90+ days or only soft-delete (hide data rather than remove it). You need to know the reality before your customer data is in the system.
15. Evaluate Business Continuity Risk
What happens to your data and workflows if this AI vendor shuts down, gets acquired, or has a prolonged outage? Check: data portability (can you export everything in a usable format?), contract terms for vendor discontinuation, and whether your workflows can be rebuilt on an alternative platform. Document your backup plan for each critical AI tool.
Ongoing Monitoring (5 Items)
16. Review AI Tool Access Quarterly
Every three months, audit who has access to each AI tool and what level of access they have. Remove accounts that are no longer needed. Downgrade access levels that are higher than necessary. People change roles, leave the company, or stop using tools — access should reflect current reality, not historical permissions.
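The core of the quarterly audit is a set difference between your current staff list and each tool's account list. A sketch with placeholder names and tools:

```python
# A sketch of the quarterly access audit: flag accounts on each AI tool
# that don't belong to current staff. Names and tools are placeholders.
def stale_accounts(tool_accounts: dict, current_staff: set) -> dict:
    """Map each tool to the accounts that should be revoked."""
    return {
        tool: sorted(accounts - current_staff)
        for tool, accounts in tool_accounts.items()
        if accounts - current_staff
    }

current_staff = {"alex", "priya", "sam"}
tool_accounts = {
    "ChatGPT Team": {"alex", "priya", "jordan"},  # jordan left last month
    "Otter.ai": {"sam"},
}

for tool, names in stale_accounts(tool_accounts, current_staff).items():
    print(f"{tool}: revoke access for {', '.join(names)}")
```

Feeding this from your AI tool inventory (item 1) and your HR roster keeps the review to a few minutes per quarter.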
17. Check Vendor Terms of Service for Changes
AI vendors update their terms frequently — sometimes with material changes to data handling. Set a quarterly reminder to review the terms of your key AI vendors. Look specifically for changes to: data retention, training data usage, pricing, sub-processors, and breach notification procedures. A side-by-side comparison of AI tools can help you evaluate alternatives if terms change unfavorably.
18. Monitor for Shadow AI Usage
Shadow AI is when employees use unapproved AI tools for work tasks. It’s the AI equivalent of shadow IT. Check periodically: are team members using personal ChatGPT accounts for work? Are they uploading files to AI tools not on the approved list? A brief anonymous survey or a review of browser extension lists can surface this without being intrusive.
19. Test Your Incident Response Plan
If an AI tool is breached or an employee leaks data through an AI tool, what happens next? Document the steps: who gets notified, how customers are informed, which accounts get locked, and who contacts the vendor. Run through this scenario once a year as a tabletop exercise. The worst time to write your incident response plan is during an actual incident.
20. Update Your AI Inventory and Policies Annually
At minimum once a year, revisit your AI tool inventory, usage policy, data classifications, and vendor assessments. AI moves fast — the tools you’re using, the risks they present, and the regulatory landscape all change. An annual review ensures your security posture keeps pace. Automated workflows can help you track tool usage and flag policy deviations without manual effort.
Implementation Timeline
You don’t need to do all 20 items at once. Here’s a practical timeline:
- Week 1: Items 1-5 (Access Controls) — Inventory your tools, fix logins, enable 2FA
- Week 2: Items 6-10 (Data Protection) — Classify data, write your AI policy, upgrade to business tiers
- Ongoing (per new tool): Items 11-15 (Vendor Assessment) — Run this checklist before every new AI tool adoption
- Quarterly: Items 16-18 (Monitoring) — Access reviews, terms checks, shadow AI sweeps
- Annually: Items 19-20 (Full review) — Incident response test, complete policy and inventory refresh
If your business handles regulated data or wants expert help implementing this checklist from day one, schedule a call with our team. We configure AI deployments with security built in so you don’t have to figure it out as you go.
Frequently Asked Questions
How long does it take to complete this entire checklist?
Most small businesses can complete the foundational items (Access Controls and Data Protection) in about a week. Vendor Assessment is an ongoing practice applied per tool. Monitoring is a quarterly cadence. The full initial setup typically takes 2-3 weeks of part-time effort.
Do I need an IT background to implement these items?
No. Every item on this checklist is written for non-technical business owners. The most technical step is enabling two-factor authentication, which most AI tools walk you through in their settings. If you can manage a social media account, you can implement this checklist.
What’s the most important single item on this list?
Item 4 — enabling two-factor authentication on every AI account. It’s the highest-impact, lowest-effort security measure available. Microsoft has reported that it blocks over 99.9% of automated account-compromise attacks, and it takes about 5 minutes per account to set up.
How often should I revisit this checklist?
The quarterly monitoring items (16-18) should be done every three months; the incident response test and full refresh (items 19-20) are annual. The full checklist should also be reviewed whenever you add a significant new AI tool to your business. If regulations change in your industry, do an immediate review of the relevant sections.
What if I find shadow AI tools my employees are using?
Don’t panic or punish. Employees usually adopt shadow AI tools because they genuinely help with their work. The productive response is to evaluate those tools against your Vendor Assessment checklist (items 11-15), approve the ones that meet your standards, provide alternatives for the ones that don’t, and update your approved tools list.