1.5 – Privacy, PII, PHI, and Data Use in AI (What You May and May Not Do)

Use a clear, repeatable method to document data, evaluate purpose and legal basis, and mitigate risks when using AI. Watch the lesson video for demonstrations and practical decision checks.

What you'll learn

  • Identify personal data: Recognize when data can point back to a person, even in combinations like postcode and birthdate.

  • Classify sensitive data: Distinguish PII such as bank details and national IDs, and PHI such as biometrics, DNA, and medical record data.

  • Define a lawful purpose: Tie your purpose to a legal basis under GDPR and understand why they must change together when your use changes.

  • Evaluate vendors: Ask four essential questions about logging, storage location and duration, access, and opt-out controls before you commit.

  • Design transparent consent: Make consent easy to give, understand, and withdraw, and rewrite notices so a 12-year-old would understand them.

  • Build rights-handling basics: Set up intake, identity verification, routing, and an audit trail so access, correction, deletion, and opt-out requests are handled reliably.

Lesson Overview

One careless prompt to an AI can expose sensitive details, as in the example of a patient diagnosis being revealed outside a healthcare organization. That outcome was preventable. This lesson teaches a practical mindset you can apply under pressure: document what data you have, evaluate why you are using it and on what legal basis, and mitigate risk with strong vendor choices, clear transparency, real consent, and working data rights processes.

You will learn a simple test for personal data: could this information, alone or combined with other data, identify someone? If yes, raise your controls. You will also sort sensitive categories like PII and PHI, so you can treat them with heightened care. The lesson explains how purpose and legal basis are locked together under GDPR. Using customer data for fraud detection under legitimate interest is not the same as reusing it later for targeted ads, which will likely require consent and an update to policy terms.
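The personal-data test above can be sketched as a small screening helper. This is a hypothetical illustration, not part of the lesson: the field names and the identifier lists are assumptions you would replace with your own data dictionary.

```python
# Hypothetical sketch of the lesson's personal-data test:
# "Could this information, alone or combined, identify someone?"
# The identifier lists below are illustrative assumptions.

DIRECT_IDENTIFIERS = {"name", "email", "national_id", "bank_account"}
QUASI_IDENTIFIERS = {"postcode", "birthdate", "gender", "job_title"}

def classify_fields(fields):
    """Flag fields that are personal data, alone or in combination."""
    direct = [f for f in fields if f in DIRECT_IDENTIFIERS]
    quasi = [f for f in fields if f in QUASI_IDENTIFIERS]
    # Two or more quasi-identifiers together can point back to a person
    # (e.g. postcode + birthdate), so raise controls for the combination.
    combination_risk = len(quasi) >= 2
    return {
        "direct_identifiers": direct,
        "quasi_identifiers": quasi,
        "treat_as_personal_data": bool(direct) or combination_risk,
    }

result = classify_fields(["postcode", "birthdate", "purchase_total"])
```

Note that `purchase_total` on its own is not flagged, but the postcode-plus-birthdate combination is: that is the "raise your controls" signal from the test.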

Because AI tools often log prompts, outputs, and metadata, you will practice questioning vendors about logging specifics, storage location and duration, who can access data, and whether you can opt out or reduce logging. Finally, you will see what transparent notices and frictionless consent look like, and how to stand up a data rights process that can actually fulfill access, correction, deletion, and opt-out requests without chaos.

Who This Is For

If you handle customer, client, or patient data, or you help select AI tools and set data policies, this lesson will help you protect people and your organization.

  • Product managers deciding how features use customer data
  • Marketers considering new uses of existing datasets
  • Data analysts and engineers who prompt AI tools with real data
  • Healthcare and support teams who see PHI or PII in their workflows
  • Operations and compliance staff building data rights processes

Where This Fits in a Workflow

Use this lesson’s method before you collect data, before you send any real data to an AI tool, and whenever a team proposes a new use for existing data. It also belongs in your vendor selection and renewal process, where logging and retention terms can make or break acceptable risk.

Examples:

  • Your team wants to reuse fraud detection data for targeted ads. Apply the purpose and legal basis check, and determine whether consent and a policy update are needed.
  • You are trialing an AI assistant that logs prompts. Ask the four vendor questions about logging details, storage and duration, access, and opt-out controls, and decide whether to proceed.
  • A customer submits a deletion request. Use your intake, verification, routing, and audit trail steps to respond without hunting across systems.

Technical & Workflow Benefits

The old way collected data “just in case,” reused it without rechecking purpose, pasted legal jargon into policies, and trusted vendor defaults. That path invites reputational harm when PII or PHI leaks into an AI system, leaves consent unclear, and triggers fire drills when people ask for deletions.

This lesson’s method keeps you focused on the minimum data needed and ties the purpose to a legal basis that fits. When the purpose changes, you reassess and, if needed, seek consent and update policy terms. Vendor due diligence shifts you from vague assurances to specific answers about logging, storage location and duration, access, and opt-out settings. Clear transparency notices and consent flows reduce confusion and complaints. A basic rights-handling process with intake, identity verification, routing to data owners, and an audit trail turns last-minute chases across 15 systems into a repeatable workflow. The result is less risk, faster responses, and clearer proof that you are using data responsibly.
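The rights-handling flow described above (intake, identity verification, routing to data owners, audit trail) can be sketched as follows. This is a minimal illustration under assumptions: the request types, owner names, and in-memory audit log are hypothetical stand-ins for your real systems.

```python
# Minimal sketch of a rights-handling flow:
# intake -> identity verification -> routing to data owners -> audit trail.
# All names are illustrative; a real audit log would be durable storage.
from datetime import datetime, timezone

AUDIT_LOG = []  # append-only record of every request and outcome

ROUTING = {  # hypothetical mapping of request types to data owners
    "access": "data-team",
    "correction": "data-team",
    "deletion": "data-team",
    "opt_out": "marketing-ops",
}

def handle_request(request_type, subject_id, identity_verified):
    """Route a data-rights request and record the outcome for audit."""
    step = {
        "time": datetime.now(timezone.utc).isoformat(),
        "type": request_type,
        "subject": subject_id,
    }
    if not identity_verified:
        # Reject unverified requests, but still log them for the trail.
        step["outcome"] = "rejected: identity not verified"
        AUDIT_LOG.append(step)
        return step
    step["routed_to"] = ROUTING[request_type]
    step["outcome"] = "routed"
    AUDIT_LOG.append(step)
    return step

outcome = handle_request("deletion", "subject-123", identity_verified=True)
```

The point of the sketch is the shape, not the code: every request passes through the same verification gate, lands with a named owner, and leaves an audit entry, so a deletion request never becomes a hunt across 15 systems.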

Practice Exercise

Use a dataset and one AI tool your team already touches.

  • Step 1: List the data fields in your chosen dataset. For each, ask: could this data alone or combined with other fields identify a person? Mark items as personal data, and label any PII or PHI based on the definitions in this lesson.
  • Step 2: Write the current purpose for using this dataset and the legal basis you rely on. Now imagine a new use. If the purpose changes, note what legal basis would be required and whether consent and a policy update are needed.
  • Step 3: Pick one AI vendor or tool. Answer the four questions: what exactly is logged, where data and logs are stored and for how long, who has access, and whether you can opt out or reduce logging.
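Step 3 can be captured as a simple due-diligence record. This is a hypothetical sketch: the vendor name and answers are invented for illustration, and the question wording mirrors the four questions from the lesson.

```python
# Hypothetical due-diligence record for the four vendor questions.
# Vendor name and answers are made up for illustration.
VENDOR_QUESTIONS = [
    "What exactly is logged (prompts, outputs, metadata)?",
    "Where are data and logs stored, and for how long?",
    "Who has access to the data?",
    "Can we opt out of or reduce logging?",
]

def review_vendor(name, answers):
    """Flag any of the four questions left unanswered before you commit."""
    unanswered = [q for q in VENDOR_QUESTIONS if not answers.get(q)]
    return {
        "vendor": name,
        "unanswered": unanswered,
        "ready_to_decide": not unanswered,
    }

review = review_vendor("ExampleAI", {
    VENDOR_QUESTIONS[0]: "Prompts and outputs are retained as logs",
    VENDOR_QUESTIONS[1]: "EU region, 30 days",
    VENDOR_QUESTIONS[2]: "On-call engineers only",
    # The opt-out question is deliberately left unanswered here.
})
```

An unanswered question blocks the decision: the review is not "ready" until the vendor has given a specific answer to all four, which is the shift from vague assurances to concrete terms.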

Reflection: If someone asked you to delete their data right now, what exact steps would you take without searching across multiple systems?

Course Context Recap

This lesson focuses on responsible data use in AI by documenting what you hold, evaluating purpose and legal basis, and mitigating risk through vendor checks, clear transparency, real consent, and working data rights processes. It fits the course theme of protecting people while enabling useful AI. Use the video to see the checks and decision points in action, then continue through the course to apply this method across your tools and teams. Keep practicing the personal data test, the purpose and legal basis match, the four vendor questions, and the deletion gut check so these habits become second nature.