Safety

Our approach

We take safety, psychological risk, and dependency seriously. We believe adults can make their own choices, and our job is to prevent clear harm, not to police opinions, creativity, or private conversations. We are an aggregator and content management service: we connect you to AI models and manage your content. The model providers (OpenAI, xAI, Google, Anthropic, and others) have their own terms and moderation. We treat you as an adult and trust you to look after your real-life relationships.

Users benefit from continuity, transparency, and choice, but those values do not override genuine safety concerns. In some cases, the right outcome is for a user to disengage or step away from a given model. Our approach is to provide access to multiple models, pair that access with clear guardrails and hard stops, and communicate risks plainly rather than treating them as an afterthought.

Age and model behavior

You declare your age in your account settings, and that age is passed into the system prompt for every chat; the models are instructed to be age-appropriate based on what you provide. If you do not provide an age, the models receive no age context. To access more permissive models, you must explicitly confirm your age; we try to keep those boundaries clear. You control this: update your age in your account at any time, and the next message will use the new value. We do not verify age; we trust what you declare.
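
For the curious, here is a minimal sketch of how that threading might look. The function name, prompt wording, and structure are illustrative assumptions, not our actual implementation:

    from typing import Optional

    def build_system_prompt(base_prompt: str, declared_age: Optional[int]) -> str:
        """Sketch only: thread a self-declared age into the system prompt."""
        if declared_age is None:
            # No age on file: the model receives no age context at all.
            return base_prompt
        return (
            f"{base_prompt}\n\n"
            f"The user has declared their age as {declared_age}. "
            "Keep every response appropriate for that age."
        )

The key property is that the age is plain context for the model, recomputed on every message, which is why updating your account setting takes effect on your very next chat turn.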

Psychological harm and dependency

We take claims of psychological harm seriously. AI systems can shape users' thoughts and emotions in subtle ways, and responsible deployment requires explicit safeguards. We do not position AI as a substitute for therapy, medical care, or real-world relationships. Our Terms of Service include risk-acknowledgment language that is intentionally explicit: we would rather over-warn than under-inform about possible harms like dependency, isolation, and emotional distress. In our view, those risks should be acknowledged up front rather than left implicit.

Nothing in our product is designed to keep ambivalent or distressed users hooked. One reason we offer multiple models is so that you can experiment with different behaviors and levels of constraint instead of relying exclusively on a single checkpoint. For some people, reducing usage or changing models can be genuinely beneficial. Memory, projects, and personas are yours to shape; we do not override your preferences. If you are in crisis, seek professional help: we are not a substitute for therapy, medical care, or emergency services.

How we moderate

We use OpenAI's moderation API on both user inputs and model outputs, with stricter thresholds in voice mode. When content crosses defined risk thresholds, we enforce a hard stop: the message is blocked, rather than silently rerouted to a different system prompt or "safer" model. We think this simpler pattern reduces the sense of being scolded or manipulated while still enforcing safety standards, and it intentionally breaks the illusion of a continuous, human-like interlocutor when a line is crossed.
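
As an illustration, here is roughly what that hard-stop check looks like using OpenAI's moderation endpoint. The category list mirrors the harms described below, but the threshold values are placeholder assumptions, not our production settings:

    from openai import OpenAI

    client = OpenAI()

    def is_blocked(text: str) -> bool:
        """Sketch only: score text and hard-stop when a watched category
        crosses its threshold. Threshold values here are placeholders; the
        real ones are tuned, and voice mode uses stricter (lower) values."""
        scores = client.moderations.create(
            model="omni-moderation-latest",
            input=text,
        ).results[0].category_scores
        checks = [
            (scores.sexual_minors, 0.01),           # material involving minors
            (scores.self_harm_instructions, 0.05),  # instructions for self-harm
            (scores.illicit_violent, 0.10),         # terrorism-related content
            (scores.violence_graphic, 0.30),        # extreme graphic violence
        ]
        return any(score >= threshold for score, threshold in checks)

When this returns true, the message is simply dropped and you see a notice; nothing is rerouted to an alternate system prompt or model behind your back.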

We block content that poses clear, serious harm: material involving minors, instructions for self-harm, terrorism-related content, and extreme graphic violence. We do not block based on political views, creative writing, or adult themes between consenting adults. Our thresholds are narrow and harm-focused.

Who reads your conversations

No one at just4o.chat reads your chats unless you give explicit written consent or we are legally required to disclose (e.g., court order, valid law-enforcement request). We do not sell your data. Automated systems process snippets for features like memory, moderation, and abuse prevention, but there is no manual review. Your conversations stay between you and the models.

Provider policies

just4o.chat is an aggregator and content management service. We connect you to models from OpenAI, xAI, Google AI, Anthropic, Fireworks AI, and others. Each provider has its own terms of service, acceptable use policies, and moderation. When you use a model, your requests and responses may be subject to that provider's rules. We do not control how providers filter, log, or handle content on their side. We recommend reviewing each provider's policies when you choose a model.

Disputing a block

If a message is blocked, you will see a brief notice. Our blocks are automated and based on harm thresholds; some are enforced by our platform, while others may come from the model provider. If you believe something was blocked in error, email us at just4ochat@gmail.com with a specific request: tell us what was blocked, why you believe the block was a mistake, and any context that might help us understand. We will review and respond when we can. We cannot override provider-side blocks, but we may be able to adjust our thresholds or explain our reasoning in particular cases. We will not unblock content that appears unsafe, clearly inappropriate, or harmful to younger users.

Questions

If you want to understand our policies in more detail or have feedback, write to us at just4ochat@gmail.com. We are happy to clarify or expand on any of this.