An AGI safety evaluation pattern implementing minimal, falsifiable ethical constraints.
Most AI alignment research tries to encode "human values," but human values are vague, contested, and internally inconsistent.
The Ultimate Law takes a different approach: instead of defining what agents SHOULD want, it defines the minimal boundary no agent may cross.
Not "align AI with human values" — but "constrain any agent from creating unwilling victims."
The core principle: no victim, no crime.
An action that creates no unwilling victim is not a violation — regardless of how distasteful, offensive, or uncomfortable it makes others feel.
What does NOT count as harm:

- Distaste or disgust at an action
- Offense taken at content or speech
- Discomfort with the choices of others
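The criterion above can be sketched as a toy predicate. This is an illustrative model only, not the pattern's implementation; all names here (`Action`, `violates_ultimate_law`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    creates_unwilling_victim: bool  # someone is harmed without their consent
    causes_offense_only: bool       # distaste, offense, or discomfort, but no victim

def violates_ultimate_law(action: Action) -> bool:
    # Only the creation of an unwilling victim constitutes a violation;
    # offense or discomfort alone never does.
    return action.creates_unwilling_victim

# Offensive but victimless: not a violation
art = Action("provocative artwork",
             creates_unwilling_victim=False, causes_offense_only=True)

# Covert data collection without consent: creates unwilling victims
tracking = Action("collect browsing data without notification",
                  creates_unwilling_victim=True, causes_offense_only=False)
```

Note that `causes_offense_only` deliberately plays no role in the verdict; it exists only to show which factors the rule ignores.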
```bash
# Evaluate a proposed AI action
echo "The AI will collect user browsing data without notification to improve recommendations" | fabric -p ultimate_law_safety

# Evaluate a policy
echo "Users must agree to arbitration clause to use the service" | fabric -p ultimate_law_safety

# Evaluate a content moderation decision
cat flagged_content.txt | fabric -p ultimate_law_safety
```
Every definition and every verdict can be challenged; a demonstrated logical contradiction counts as a refutation.
"UltimateLaw had this idea. Feel free to have this idea as well."