Safety

What this page is for

This page explains, in practical terms, how we think about safety, risk, moderation, and user agency on just4o.chat. For the binding legal terms, see our Terms of Service. For data handling details, see our Privacy Policy. For the legal and policy background behind our moderation stance, see Why We Do This.

Our approach

We take safety, psychological risk, and dependency seriously. We believe adults can make their own choices, and our job is to prevent clear harm—not to police opinions, creativity, or private conversations. We are an aggregator and content management service: we connect you to AI models and manage your content. The model providers (OpenAI, xAI, Google, Anthropic, and others) have their own terms and moderation. We respect your adulthood and your responsibility toward your real-life relationships.

Users benefit from continuity, transparency, and choice—but those values do not override genuine safety concerns. In some cases, the right outcome is for a user to disengage or move away from a given model. What we try to do is provide access to multiple models, pair that with clear guardrails and hard stops, and communicate risks plainly rather than treating them as an afterthought.

just4o.chat is not meant to be an NSFW platform. Some model outputs may still become sexual or otherwise adult in nature because AI behavior is not fully controllable and can be influenced by user inputs, conversation context, and the third-party model provider's own systems. That variability is part of why we try to describe risks and boundaries plainly instead of implying that every output can be perfectly predicted or constrained.

Adults only

just4o.chat is for adults 18 and older. We require users to confirm they are 18+ because the product can involve long-term memory, emotionally charged conversations, sexually explicit material, and access to multiple third-party AI systems with different behaviors and safeguards.

Adult-only access does not mean the product is risk-free. It means we expect users to exercise adult judgment, to step away when needed, and not to treat the system as a substitute for real-world relationships, professional care, or high-stakes decision-making.

Age and model behavior

You may declare your age in your account settings. That age is passed into the system prompt for every chat. The models are instructed to be age-appropriate based on what you provide. For access to more permissive models, users must explicitly confirm their age; we try to keep those boundaries clear. If you do not provide an age, the models receive no age context. You control this: update your age in your account at any time, and the next message will use the new value. We do not independently verify age; we rely on what you declare and on the 18+ account requirement.

Memory and multi-model risk

Long-term memory changes the feel of a product. A system that remembers prior conversations can become more useful, more continuous, and more personally resonant. It can also become more persuasive, harder to evaluate from the outside, and more emotionally sticky for some users.

Multiple-model access creates a different kind of risk. Each provider has different moderation, retention, and product behavior. One model may refuse something another will answer. One may feel emotionally flat while another feels highly engaging. We think users should know that these differences are real and that choosing a model does not eliminate the underlying risks of AI output.

Psychological harm and dependency

We take claims of psychological harm seriously. AI systems can shape users' thoughts and emotions in subtle ways, and responsible deployment requires explicit safeguards. We do not position AI as a substitute for therapy, medical care, or real-world relationships. Our Terms of Service include risk-acknowledgment language that is intentionally explicit—we would rather over-warn than under-inform about possible harms like dependency, isolation, and emotional distress. In our view, those risks should be acknowledged up front rather than buried in assumptions.

Nothing in our product is designed to keep ambivalent or distressed users hooked. One reason we offer multiple model options is so that users can experiment with different behaviors and levels of constraint instead of relying exclusively on a single checkpoint. For some individuals, reducing usage or changing models can absolutely be beneficial. Memory, projects, and personas are yours to shape—we do not override your preferences. If you are in crisis, seek professional help; we are not a substitute for therapy, medical care, or emergency services.

Session awareness and break nudges

For logged-in users, we keep lightweight session records under the account so the product can show transparent usage information rather than asking you to guess. That includes current session time, when the session started, when you were last active, and how many chats you sent in that session. The timer is meant to reflect active use, not just an open tab, so it pauses after more than five minutes without interaction and resumes when you come back.

After roughly one hour of active session time in Safety Mode, or roughly two hours otherwise, we show a modal that says, "Hey, we notice you've been here a while." It also displays your current session time and offers two choices: exit just4o.chat or dismiss the usage notice. This is a gentle interruption, not a punishment or lockout. The goal is to add a moment of reflection for users who may be drifting into extended use without noticing.
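The timer behavior described above (pause after more than five idle minutes, nudge at roughly one hour in Safety Mode or two hours otherwise) can be sketched like this; the names and the exact accounting are illustrative assumptions, not the production code:

```python
IDLE_PAUSE_SECONDS = 5 * 60          # timer pauses after >5 min without interaction
NUDGE_SECONDS_SAFETY = 60 * 60       # ~1 hour of active time in Safety Mode
NUDGE_SECONDS_DEFAULT = 2 * 60 * 60  # ~2 hours of active time otherwise

def active_seconds(interaction_times: list[float]) -> float:
    """Sum the time between consecutive interactions, skipping idle gaps
    longer than the pause threshold (an open-but-idle tab does not count)."""
    total = 0.0
    for prev, cur in zip(interaction_times, interaction_times[1:]):
        gap = cur - prev
        if gap <= IDLE_PAUSE_SECONDS:
            total += gap
    return total

def should_show_nudge(interaction_times: list[float], safety_mode: bool) -> bool:
    """Decide whether the 'been here a while' modal should appear."""
    limit = NUDGE_SECONDS_SAFETY if safety_mode else NUDGE_SECONDS_DEFAULT
    return active_seconds(interaction_times) >= limit
```

The key design choice is that idle gaps are dropped rather than counted, so the nudge reflects genuine engagement instead of a forgotten browser tab.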

How we moderate

We use OpenAI's moderation API on both user inputs and model outputs, with stricter thresholds in voice mode. When content crosses defined risk thresholds, we enforce a hard stop: the message is blocked, rather than silently rerouted to a different system prompt or "safer" model. We think this simpler pattern reduces the sense of being scolded or manipulated while still enforcing safety standards, and it intentionally breaks the illusion of a continuous, human-like interlocutor when a line is crossed.
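A minimal sketch of the hard-stop pattern, assuming a moderation result shaped like OpenAI's per-category score output; the category names and threshold values below are placeholders, not our production settings:

```python
# Hypothetical per-category thresholds over 0.0-1.0 moderation scores.
TEXT_THRESHOLDS = {"self-harm/instructions": 0.4, "violence/graphic": 0.8}
# Voice mode uses stricter (lower) thresholds for the same categories.
VOICE_THRESHOLDS = {cat: t * 0.5 for cat, t in TEXT_THRESHOLDS.items()}

def hard_stop(category_scores: dict[str, float], voice_mode: bool = False) -> bool:
    """Return True if any score crosses its threshold. A True result blocks
    the message outright; nothing is rerouted or silently rewritten."""
    thresholds = VOICE_THRESHOLDS if voice_mode else TEXT_THRESHOLDS
    return any(category_scores.get(cat, 0.0) >= t for cat, t in thresholds.items())
```

The same check runs on user inputs and model outputs; the only branch is block or pass, which is what makes the boundary legible.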

We block content that poses clear, serious harm: material involving minors, instructions for self-harm, terrorism-related content, and extreme graphic violence. We do not block based on political views, creative writing, or adult themes between consenting adults. Our thresholds are narrow and harm-focused.

When our moderation system blocks something, the default behavior is a visible stop, not a hidden prompt rewrite. We prefer legible boundaries over silently steering the conversation while pretending nothing happened.

What admins can review

No one at just4o.chat reads your chats unless you give explicit written consent or we are legally required to disclose them (for example, through a valid court order or similar legal process). We do not sell your data.

Admins may review moderation metadata for flagged violations, such as categories, scores, thresholds, timestamps, source, and enforcement state. That review is for enforcement, appeals, abuse investigation, and account-safety decisions. Our moderation audit log is designed to store the violation signal rather than the blocked prompt body itself.
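As an illustration of "store the signal, not the prompt," an audit entry might keep only metadata like this; the field names are hypothetical:

```python
import time

def audit_record(category_scores: dict[str, float],
                 thresholds: dict[str, float],
                 source: str, enforced: bool) -> dict:
    """Build a moderation audit entry that records which categories fired
    and with what scores, but deliberately omits the blocked text itself."""
    return {
        "timestamp": time.time(),
        "categories": sorted(c for c, s in category_scores.items()
                             if s >= thresholds.get(c, 1.0)),
        "scores": category_scores,
        "thresholds": thresholds,
        "source": source,      # e.g. "chat" or "voice"
        "enforced": enforced,  # whether the hard stop fired
        # note: no prompt or message body is stored in this record
    }
```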

Your conversations may still be processed automatically by our systems and by underlying model providers as described in our Privacy Policy and the providers' own terms.

Provider policies

just4o.chat is an aggregator and content management service. We connect you to models from OpenAI, xAI, Google AI, Anthropic, Fireworks AI, Cerebras, and others. Each provider has its own terms of service, acceptable use policies, and moderation. When you use a model, your requests and responses may be subject to that provider's rules. We do not control how providers filter, log, or handle content on their side. We recommend reviewing each provider's policies when you choose a model.

Disputing a block

If a message is blocked, you will see a brief notice. Our blocks are automated and based on harm thresholds; some are enforced by our platform, while others may come from the model provider. If you believe something was blocked in error, email us at just4ochat@gmail.com with a specific request: what was blocked, why you believe the block was a mistake, and any context that might help us understand. We will review and respond when we can. We cannot override provider-side blocks, but we may be able to adjust our thresholds or explain our reasoning in particular cases. We will not agree to unblock content that appears unsafe, clearly inappropriate, or harmful to younger users.

If you are unsure whether to use this product

If you are feeling unusually attached to a model, emotionally destabilized by conversations, tempted to rely on the system for crisis support, or worried that use is starting to interfere with your judgment, relationships, or daily life, take that seriously. Reducing use, switching models, turning off memory, or stepping away completely may be the right move.

We would rather be explicit about that than pretend every form of engagement is healthy just because it is voluntary.

Questions

If you want to understand our policies in more detail or have feedback, write to us at just4ochat@gmail.com. We are happy to clarify or expand on any of this.