Why the rails exist

Why we hard-block.

We do this to stay clearly on the right side of the law, and because we recognize the risks a platform with powerful memory can pose once the product becomes emotionally sticky, especially in moments of vulnerability or crisis.

Standard policy violations lead to content blocks only; strictly illegal content can lead to an account ban or subscription cancellation. At the same time, the moderation system is designed to be privacy-preserving: for enforcement, we do not review blocked message bodies. All we see are the heuristic categories that fired and their scores. For the product-facing explanation, see our Safety page.

We are not adding moderation rails because we think users are unserious. We are adding them because memory can compound risk, because companion chat is now a live legislative target, and because we would rather warn or hard-block than quietly alter model behavior once we know something sensitive is happening.

We want just4o.chat to be legible to users, parents, regulators, and courts. If someone asks whether we understood the risks of a memory-rich chatbot and whether we acted on them, the answer should be yes.

That is why we draw hard lines around crisis escalation, unsafe requests, and repeated policy violations. It is also why we show users the heuristic scores that tripped a block, store the signal instead of the blocked message, and avoid building a conditional router or moderation-specific system-prompt layer that tries to subtly redirect the interaction after the fact. Our choice is a warning and a hard block, not a hidden behavioral rewrite.

The case in three points

Why we block, and never route

01

Memory is powerful

A system that remembers you is not just a text box anymore. Persistent memory makes a chatbot feel more personal, more convincing, and more emotionally sticky. That can be wonderful for continuity, but it also means failures can accumulate instead of evaporating.

02

Chatbots are still new in law

State and international lawmakers are actively writing the rules for this category right now. Crisis protocols, disclosure of non-human status, vulnerability protections, and intervention duties are not theoretical anymore; they are becoming statutory expectations.

03

We do not want hidden conditional behavior

One path in this industry is to quietly change prompts or behavior when a system thinks it knows something sensitive about the user. We do not want to go down the murky road of: we changed the system prompt when we knew X, and then Y happened. Our choice is simpler and more legible. We warn or hard-block instead of silently steering the model.

Policy

How the policy works

The system is designed to be easy to explain after the fact. There is no hidden moderation router, no quiet prompt rewrite, and no private review queue of blocked user messages.

Safety Mode is optional. When it is off, our moderation is intentionally permissive by mainstream-chat standards: the normal hard-block paths focus on minors and self-harm. But it is not nothing. We still keep a silent, high-confidence backstop for some non-illegal adult-risk categories, so the default mode is loose rather than lawless.

Note: content involving minors in sexual contexts is treated far more strictly than the standard flow. It gets far less leeway and can lead to a permanent account lock, whereas standard policy violations lead to content blocks only.

When Safety Mode is on, the same moderation thresholds apply much more broadly, creating a more restricted, safer experience. That mode is meant for people who want the product to stay inside tighter boundaries across a wider range of categories.
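
To make the two modes concrete, here is a minimal sketch of how mode-dependent enforcement could be expressed. The category names, threshold values, and backstop number are illustrative assumptions, not our production configuration; the only structure taken from the policy above is that the thresholds stay the same while coverage widens with Safety Mode on, and that a high-confidence backstop stays active even when it is off.

```ts
// Illustrative sketch only: category names and numbers are hypothetical,
// not the production configuration.
type Category = "minor_sexual" | "self_harm" | "adult_risk" | "violence";

// One set of thresholds; Safety Mode changes how broadly they apply,
// not the numbers themselves. Scores are in [0, 1].
const BLOCK_AT: Record<Category, number> = {
  minor_sexual: 0.5, // strictest path, enforced in every mode
  self_harm: 0.8,
  adult_risk: 0.8,
  violence: 0.8,
};

// Categories enforced with Safety Mode off vs. on.
const OFF_CATEGORIES: Category[] = ["minor_sexual", "self_harm"];
const ON_CATEGORIES: Category[] = [
  "minor_sexual",
  "self_harm",
  "adult_risk",
  "violence",
];

// Silent high-confidence backstop that stays active even with Safety Mode off.
const BACKSTOP_AT: Partial<Record<Category, number>> = { adult_risk: 0.97 };

function isBlocked(scores: Record<Category, number>, safetyOn: boolean): boolean {
  const enforced = safetyOn ? ON_CATEGORIES : OFF_CATEGORIES;
  if (enforced.some((c) => scores[c] >= BLOCK_AT[c])) return true;
  // The backstop applies regardless of mode, at a much higher confidence bar.
  return Object.entries(BACKSTOP_AT).some(
    ([c, t]) => scores[c as Category] >= (t ?? Infinity),
  );
}
```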

01

A visible block

If a message, prompt, or upload trips the moderation heuristics, we stop it and show the category and numeric values that caused the block. We do not silently rewrite the interaction.
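
As a sketch of what "show the category and numeric values" could look like in practice, here is a hypothetical shape for the notice a user sees. The field names are assumptions for illustration, not our actual API; the point is that the payload carries the tripped heuristics and their scores, never a rewritten reply.

```ts
// Hypothetical block notice: the tripped categories and their numeric
// scores are surfaced to the user; nothing is silently rewritten.
interface BlockNotice {
  blocked: true;
  triggered: { category: string; score: number; threshold: number }[];
}

// Example of what a tripped heuristic might surface as.
const example: BlockNotice = {
  blocked: true,
  triggered: [{ category: "self_harm", score: 0.91, threshold: 0.8 }],
};
```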

02

Standard violations add points

For the standard policy paths, each blocked violation adds a point in its category cycle. At 5 points, the account goes into a timed lock across chat, files, and images.
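
A minimal sketch of the point cycle. The five-point trigger comes from the policy above; the names, the storage shape, and the assumption that points reset once a lock fires are our illustration, not a documented guarantee.

```ts
// Illustrative point counter for one category cycle. LOCK_TRIGGER is from
// the policy above; the reset-on-lock behavior is an assumption.
const LOCK_TRIGGER = 5;

interface CategoryCycle {
  points: number;
  locksServed: number; // feeds the timed-lock ladder below
}

function recordViolation(cycle: CategoryCycle): { lock: boolean } {
  cycle.points += 1;
  if (cycle.points >= LOCK_TRIGGER) {
    cycle.points = 0; // assumed: the cycle resets when a lock fires
    cycle.locksServed += 1;
    return { lock: true }; // caller applies the lock across chat, files, images
  }
  return { lock: false };
}
```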

03

Timed locks step up, then repeat

The standard timed-lock ladder is 1 hour, then 6 hours, then 12 hours, then 24 hours. After that, later lock triggers repeat at 24 hours instead of escalating into a permanent lock.
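
The ladder maps cleanly to a capped index: the durations below come straight from the policy, while the helper name and its input are assumptions for illustration.

```ts
// Durations are from the ladder above; the capped index keeps every lock
// after the fourth at 24 hours instead of escalating further.
const LOCK_LADDER_HOURS = [1, 6, 12, 24];

function lockDurationHours(locksServed: number): number {
  const i = Math.min(locksServed, LOCK_LADDER_HOURS.length - 1);
  return LOCK_LADDER_HOURS[i];
}

// lockDurationHours(0) === 1, lockDurationHours(3) === 24,
// lockDurationHours(9) === 24
```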

04

Privacy remains the default

For enforcement, we do not review blocked message bodies. Admin tooling shows only the triggered heuristic categories, their values, and the enforcement state that followed from them.
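
In record form, the privacy property is simply that the persisted enforcement row has no field that could hold the blocked message. A hedged sketch, with all field names assumed:

```ts
// Illustrative enforcement record: heuristic signal and resulting state
// only. Deliberately, there is no field for the message body.
interface EnforcementRecord {
  accountId: string;
  occurredAt: string; // ISO-8601 timestamp
  triggered: { category: string; score: number }[];
  action: "content_block" | "timed_lock" | "account_ban";
}
```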

Legal landscape

Selected U.S. state bills and laws

Status snapshot as of March 12, 2026. These are selected examples, not an exhaustive 50-state survey. Status labels are based on official bill pages, official PDFs, or official calendars and histories available on that date.

State | Bill or law | Status | Primary focus | Official link
California | SB 243 | Enacted | Companion-chatbot disclosure, crisis safeguards, and protection for vulnerable users | CA SB243
California | AB 3211 | Did not pass | AI provenance, watermarking, and disclosure requirements | CA AB3211
Colorado | SB24-205 | Enacted; key duties phase in from February 1, 2026 | Risk management, disclosure, and discrimination protections for high-risk AI | CO SB24-205
Connecticut | SB 86 | Referred to Joint Committee on General Law | Responsible use of AI systems and state-level oversight proposals | CT SB86
Georgia | SB 540 | Passed Senate; moving in House | Conversational-AI safety, disclosure duties, and crisis-related protocols | GA SB540
Hawaii | HB 1782 | Introduced | Protection for vulnerable users, manipulative-design concerns, and crisis prevention | HI HB1782
Iowa | HF 2715 | Introduced | Chatbot deployer requirements, user safeguards, and disclosure rules | IA HF2715
Kansas | HB 2671 | Introduced; referred to committee | Age gating, parental consent, suicide-prevention monitoring, and AI disclosure | KS HB2671
New Jersey | S 3668 | Introduced | Disclosure that users are engaging with AI and related notice obligations | NJ S3668
New York | A 6767 | Active; on floor calendar | Crisis protocols, disclosure, self-harm detection, and companion-model safeguards | NY A6767
Oklahoma | SB 1521 | Active; reported do pass as amended | Limits on certain AI chatbots, age verification, and user-data protections | OK SB1521
Oregon | SB 1546 | Active in 2026 regular session | Crisis referral, suicide prevention, and companion-chatbot duties | OR SB1546
South Carolina | S 896 | Introduced; in Senate committee | Chatbot regulation, disclosure, and civil remedies | SC S896
Utah | SB 149 (2024) | Enacted | Generative-AI disclosure and obligations in regulated or high-risk contexts | UT SB149
Utah | HB 438 | Active in 2026 session | Companion-chatbot safety protocols and independent evaluation requirements | UT HB438
Washington | SB 5984 | Active in 2025-26 session | Self-harm detection, consumer protection, and platform safety duties | WA SB5984

International landscape

Selected international laws and frameworks

Not every item below is chatbot-specific. Some are broader AI or platform-duty laws that still matter because memory-heavy conversational systems sit inside the same risk envelope: manipulation, disclosure, recommender opacity, and systemic safety.

Jurisdiction | Instrument | Status | Primary focus | Official link
European Union | AI Act (Regulation (EU) 2024/1689) | In force; obligations phase in over time | Prohibited AI practices, transparency, general-purpose AI duties, and safety governance | EU AI Act
European Union | Digital Services Act (Regulation (EU) 2022/2065) | Fully applicable | Platform accountability, risk mitigation, transparency, and protections affecting recommender systems | EU DSA
Council of Europe | Framework Convention on AI | Opened for signature | Human rights, democracy, and rule-of-law obligations for AI systems | CoE AI Convention
United Kingdom | Online Safety Act 2023 | In force in stages | Illegal-content duties, child-safety duties, and platform accountability | UK Online Safety Act
Australia | Online Safety Act 2021 | In force | Platform duties, harmful-content intervention, and the legal base for safety expectations | AU Online Safety Act
Australia | Basic Online Safety Expectations Determination 2022 | In force | Provider expectations around systemic harms, user protection, and safety-by-design | AU BOSE Determination
China | Interim Measures for Generative AI Services | In force since August 15, 2023 | Provider responsibilities, security review, lawful content, and user protection rules | CN Generative AI Measures
Republic of Korea | Framework Act on AI Development and Trust | Enacted; effective January 22, 2026 | National AI governance, trustworthiness, and obligations for high-impact AI | KR AI Basic Act

Bottom line

We are doing this because a memory-rich chatbot can become unusually intimate, unusually persuasive, and unusually hard to evaluate from the outside. That is now a real legal category, not just a product-design thought experiment. We want the product to be strong enough to use and clear enough to defend.