The Problem
AI coding assistants like Claude Code, Cursor, and GitHub Copilot have become indispensable tools for many developers. Their memory features provide continuity across sessions, learning your codebase and preferences over time.
But this creates a fundamental tension: How do you work on sensitive projects while using memory-enabled AI tools?
Consider these scenarios:
- Pre-patent research that could lose protection if disclosed
- Trade secrets that require "reasonable measures" to maintain secrecy under law
- Proprietary algorithms that provide competitive advantage
- Client work under confidentiality agreements
Today most developers face a binary choice: use memory for everything, or disable it entirely. Neither is ideal: you give up either the security or the productivity benefits.
Our Solution: Two Complementary Frameworks
Today we're releasing two open-source frameworks that work together to solve this problem:
Knowledge Tier Framework for AI Agents
A four-tier classification system for categorizing projects by sensitivity. It defines the structure and rules for each level of information protection.
Epistemic Guardrails for AI Agents
Cross-platform hooks that enforce the tier boundaries. They work with Claude Code, Cursor, and GitHub Copilot to block access when memory settings conflict with a project's sensitivity.
The Four Tiers
Not all information requires the same level of protection. The Knowledge Tier Framework provides a structured way to classify projects:
| Tier | Name | Memory | Use Cases |
|---|---|---|---|
| 1 | Restricted | OFF | Trade secrets, pre-patent, proprietary research |
| 2 | Confidential | Enabled | Competitive advantage, internal R&D |
| 3 | Internal | Enabled | Client work, business operations |
| 4 | Public | Enabled | Open source, documentation, tutorials |
The key insight: Tier 1 (Restricted) projects require memory to be OFF. If you try to access a Restricted project with memory enabled, Epistemic Guardrails blocks the access and warns you.
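To make the tiers concrete, here is a minimal sketch of how a project might declare its classification. The `.tier` marker file and the fail-closed default for unmarked projects are assumptions for illustration, not the framework's documented on-disk format:

```python
from enum import IntEnum
from pathlib import Path

class Tier(IntEnum):
    RESTRICTED = 1    # memory must be OFF
    CONFIDENTIAL = 2  # memory enabled
    INTERNAL = 3      # memory enabled
    PUBLIC = 4        # memory enabled

def project_tier(project_dir: Path) -> Tier:
    """Read a project's tier from a hypothetical .tier marker file.

    Fails closed: an unclassified project is treated as Restricted
    until someone explicitly assigns it a tier.
    """
    marker = project_dir / ".tier"
    if marker.is_file():
        return Tier(int(marker.read_text().strip()))
    return Tier.RESTRICTED
```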
How It Works
Epistemic Guardrails uses a layered approach:
- **Session-Start Guard** warns the AI about sensitive directories when memory is enabled
- **PreToolUse Hook** blocks file access to sensitive directories during active sessions
- **Path + Keyword Detection** identifies sensitive projects by directory path and naming patterns
When the system detects a conflict (Restricted project + memory enabled), it blocks access before any information can be retained:
```
$ claude
[Epistemic Guardrails] WARNING: Memory is enabled but you are in a Restricted project.
Access blocked. Disable memory before working on this project.
```
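For concreteness, here is a minimal sketch of the conflict check a PreToolUse hook might perform. Everything specific in it is an assumption for illustration: the `AI_MEMORY_ENABLED` environment variable, the keyword list, and the exit-code convention are hypothetical, not the framework's actual interface.

```python
import os
import sys
from pathlib import Path

# Hypothetical naming patterns; the framework's real detection rules differ.
SENSITIVE_KEYWORDS = ("restricted", "pre-patent", "trade-secret")

def is_restricted(path: Path) -> bool:
    """Path + keyword detection: flag any path whose components match
    a sensitive naming pattern."""
    return any(kw in part.lower()
               for part in path.parts
               for kw in SENSITIVE_KEYWORDS)

def pre_tool_use(target_file: str) -> None:
    """Conflict check: block file access when memory is enabled and the
    target lives in a Restricted project."""
    # Assumption: the host exposes memory state via an environment variable.
    memory_enabled = os.environ.get("AI_MEMORY_ENABLED") == "1"
    if memory_enabled and is_restricted(Path(target_file).resolve()):
        print("[Epistemic Guardrails] Access blocked: memory is enabled "
              "inside a Restricted project.", file=sys.stderr)
        sys.exit(1)  # assumption: a non-zero exit aborts the tool call
```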
Cross-Platform by Design
Different AI assistants use different hook formats. Epistemic Guardrails abstracts these differences, providing:
- One configuration for all platforms
- Consistent behavior across Claude Code, Cursor, and GitHub Copilot
- Easy maintenance: update the core logic once
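As a sketch of what "one configuration" could look like, assume a single shared rule set that thin per-platform adapters translate into each assistant's native hook format. Every key and payload below is illustrative, not the shipped schema:

```python
# Hypothetical unified configuration; the actual format may differ.
GUARDRAILS_CONFIG = {
    "restricted_paths": ["~/projects/pre-patent-*", "~/clients/*-restricted"],
    "keywords": ["restricted", "trade-secret"],
    "on_conflict": "block",
}

def emit_platform_hooks(config: dict, platform: str) -> dict:
    """Translate the shared config into a platform-specific hook payload.

    Sketch only: each adapter would map the same rules onto whatever
    hook format Claude Code, Cursor, or GitHub Copilot expects, so the
    core logic is defined exactly once.
    """
    adapters = {
        "claude-code": lambda c: {"PreToolUse": c["on_conflict"]},
        "cursor": lambda c: {"blocked_paths": c["restricted_paths"]},
        "copilot": lambda c: {"exclusions": c["restricted_paths"]},
    }
    return adapters[platform](config)
```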
Why This Matters
Under U.S. trade secret law (the Defend Trade Secrets Act of 2016), information loses protection if the owner fails to take "reasonable measures" to maintain secrecy. Letting an AI assistant's memory retain proprietary information could be argued to fall short of that standard.
Beyond legal concerns, there's a practical reality: the more powerful AI tools become, the more important it is to control what they can access.
These frameworks give you that control without forcing you to abandon the productivity benefits of AI assistance.
Get Started
Both frameworks are open source under the MIT license.
Transparency Note
This work is conducted by Theios Research Institute, Inc., a 501(c)(3) nonprofit research organization. The project is currently authored by a single investigator and is released openly to invite external scrutiny, replication, and critique.
An AI coding assistant was used to help implement memory partitioning and security logic under human-specified constraints. All architectural decisions, threat models, and epistemic guardrails were designed and validated by the author.
Questions or feedback? Reach out at [email protected].