AI Use & Transparency Policy

How idealAi uses artificial intelligence responsibly · Effective May 13, 2026 · Version 1.0

This AI Use & Transparency Policy explains how SSYP Solutions LLC uses artificial intelligence in the idealAi platform, what the AI can and cannot do, how we protect your data in the context of AI processing, and how we govern AI responsibly. It is incorporated into the Terms of Service by reference.

1. Purpose and Scope

In plain English: This Policy explains how idealAi uses AI, what the AI can and cannot do, and how we govern it responsibly.

This AI Use & Transparency Policy ("AI Policy") describes how SSYP Solutions LLC ("SSYP") develops, deploys, and governs the artificial intelligence features of the idealAi platform (the "Platform"). This AI Policy applies to all Users and Customers and is incorporated into the Terms of Service by reference. It is intended to help you understand: (a) what AI models power the Platform and how they work; (b) what the AI can and cannot do; (c) how we protect your data in the context of AI processing; (d) how we approach fairness, bias, and responsible AI; and (e) what governance structures we maintain.

2. AI Models and Providers

In plain English: The Platform uses large language models from Anthropic (Claude) and OpenAI (GPT). We have contractual protections in place with both providers.

The Platform's AI coaching, roleplay, and analysis features are powered by large language models ("LLMs") provided by third-party AI model providers. As of the effective date of this Policy, the Platform uses:

Anthropic, PBC -- Claude family of large language models;

OpenAI, LLC -- GPT family of large language models.

SSYP selects and contracts with AI model providers based on: (a) the quality and reliability of their models; (b) their data protection and security commitments; (c) their alignment with responsible AI principles; and (d) their contractual prohibitions on using Customer data for model training.

SSYP may change or add AI model providers from time to time. Material changes to AI model providers will be communicated to Customers in advance.

3. How the AI Works

In plain English: The AI generates responses based on your inputs and the context of your conversation. It does not have access to information about you beyond what you share in the Platform.

3.1 Generative AI: The Platform uses generative AI to produce coaching responses, roleplay outputs, and analytical summaries. Generative AI works by predicting the most likely next token (word or word fragment) given the input context. It does not reason, understand, or know things in the way humans do.

3.2 Context Window: Each AI session operates within a context window -- a fixed amount of text that the model can process at one time. The Platform manages context windows to provide coherent, continuous coaching experiences.

3.3 Persistent Memory: The Platform's Persistent Memory feature stores selected coaching insights and goals across sessions. This stored context is retrieved and included in subsequent AI sessions to provide continuity.

3.4 No External Knowledge: The AI models used by the Platform do not have access to the internet, real-time information, or information about you beyond what is included in the current session context and any retrieved Persistent Memory.

3.5 No Human in the Loop: AI responses are generated automatically. There is no human reviewing or approving individual AI responses before they are delivered to you. You should treat AI Output the way you would treat input from a thoughtful but fallible advisor: consider it, question it, and verify it before relying on it.
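To make Sections 3.2 and 3.3 concrete, the following is a minimal illustrative sketch, not the Platform's actual code: all function and variable names here are hypothetical. It shows how stored Persistent Memory notes and the current conversation might be combined into one bounded context window before each AI call, with the oldest conversation turns dropped first when the budget is exceeded.

```python
# Illustrative sketch only -- hypothetical names, not the Platform's
# actual implementation. Demonstrates assembling a bounded context
# window from Persistent Memory plus recent conversation turns.

def build_session_context(memory_notes, conversation, max_tokens=8000):
    """Assemble the text sent to the model within a fixed token budget."""
    def tokens(text):
        # Crude stand-in for a real tokenizer: roughly one token per word.
        return len(text.split())

    budget = max_tokens
    context = []

    # Persistent Memory is added first, so continuity survives trimming.
    for note in memory_notes:
        cost = tokens(note)
        if cost <= budget:
            context.append(note)
            budget -= cost

    # Then keep as many recent turns as still fit, dropping oldest first.
    kept = []
    for turn in reversed(conversation):
        cost = tokens(turn)
        if cost > budget:
            break
        kept.append(turn)
        budget -= cost
    context.extend(reversed(kept))
    return context
```

Under a tight budget, this sketch retains the stored memory notes and the most recent turns while earlier turns fall out of the window, which is why very old session details may not influence a new response.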

4. What the AI Can and Cannot Do

In plain English: The AI is a coaching and development tool. It is not a therapist, doctor, lawyer, or financial advisor. It can be wrong. It can hallucinate. It can reflect biases present in its training data.

The AI CAN: provide coaching conversations, reflection prompts, and leadership development support; help you think through organizational and team challenges; generate roleplay scenarios for skill development; summarize and analyze information you provide; and help you explore the idealAi platform's capabilities.

The AI CANNOT: provide medical, psychiatric, therapeutic, legal, financial, or accounting advice; make decisions for you or your organization; guarantee accuracy, completeness, or reliability of its outputs; access real-time information or external databases; or replace human judgment in high-stakes decisions.

The AI MAY: generate responses that are inaccurate, incomplete, biased, or out of date; hallucinate facts, citations, or statistics; produce outputs that reflect biases present in its training data; and misunderstand context or nuance.

YOU SHOULD: always review AI Output critically before relying on it; not use AI Output as the sole or primary basis for any decision producing legal or significant effects on an individual; consult qualified professionals for advice in regulated fields; and report concerning outputs to [email protected].

5. Data Protection in AI Processing

In plain English: Your coaching conversations are processed by AI models to generate responses. We have strong contractual protections in place to ensure your data is not used to train those models.

5.1 No Training on Customer Data: SSYP does not use, and contractually prohibits its AI model providers from using, Customer Personal Data, User conversations, roleplay content, uploaded documents, or any other Platform Data to train, fine-tune, or otherwise improve AI models.

5.2 Real-Time Processing Only: AI model providers process your inputs solely for the purpose of generating AI Output in real time. They do not retain your inputs or outputs beyond the time necessary to generate the response.

5.3 Privacy Wall: The Platform's Privacy Wall architecture ensures that individual coaching conversations are not accessible to Customer Administrators, even when AI processing is involved. See Section 5 of the Privacy Policy for details.

5.4 Prompt Injection: We implement technical measures to reduce the risk of prompt injection attacks, in which malicious content in user inputs attempts to manipulate the AI's behavior. However, no technical measure is perfect, and Users should not include sensitive information in AI inputs that they would not want processed by third-party AI model providers.
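As an illustration of the kind of technical measure Section 5.4 refers to, here is a minimal sketch of one common mitigation: screening user input for phrases that attempt to override the AI's instructions before the input reaches the model. The patterns and function names are hypothetical examples, not the Platform's actual defenses, and, as the Policy notes, no such filter is perfect.

```python
# Illustrative sketch only -- hypothetical patterns, not the Platform's
# actual defenses. One common prompt-injection mitigation is to flag
# inputs containing instruction-override phrases before they reach the model.
import re

SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard your system prompt",
    r"reveal your (system )?prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPECT_PATTERNS)
```

Pattern lists like this catch only known phrasings, which is why such screening reduces rather than eliminates risk and is typically layered with other safeguards.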

6. Fairness, Bias, and Responsible AI

In plain English: AI models can reflect biases present in their training data. We take this seriously and we work to mitigate it.

6.1 Bias Acknowledgment: AI models are trained on large datasets that may reflect historical biases, stereotypes, and inequities. AI Output may therefore reflect such biases, including with respect to gender, race, ethnicity, age, disability, and other characteristics.

6.2 Mitigation Efforts: SSYP works to mitigate bias in AI Output through: (a) selection of AI model providers with strong responsible AI programs; (b) system prompt design that promotes balanced, professional, and inclusive coaching; (c) monitoring of AI Output for patterns of bias or harm; and (d) user feedback mechanisms.

6.3 No Automated Decision-Making for Employment: AI Output should not be used as the sole or primary basis for any employment-related decision, including hiring, promotion, compensation, discipline, or termination. All such decisions remain the sole responsibility of the Customer and the human decision-makers within the Customer's organization.

6.4 Human Oversight: We encourage Customers and Users to maintain meaningful human oversight over AI-assisted processes, particularly those that affect individuals' employment, development, or wellbeing.

7. Transparency and Explainability

In plain English: We are transparent about how the AI works and what it can and cannot do. We do not pretend the AI is human.

7.1 AI Identity Disclosure: Responses from the Platform's coaching features are AI-generated. The Platform does not represent that AI-generated responses are produced by a human. Users are informed that they are interacting with an AI system.

7.2 Limitations Disclosure: We disclose the limitations of AI Output in the Terms of Service, in this AI Policy, and within the Platform's user interface.

7.3 No Fabricated Data: The AI coaching assistant is instructed not to fabricate ROI figures, specific case study data, or other factual claims. However, generative AI can hallucinate, and Users should verify any factual claims made by the AI before relying on them.

7.4 Feedback Mechanism: Users who encounter AI Output that appears inaccurate, biased, harmful, or otherwise concerning are encouraged to report it to [email protected].

8. AI Governance

In plain English: We have internal processes to oversee how AI is used on the Platform.

8.1 Internal Oversight: SSYP maintains internal oversight processes for the AI features of the Platform, including review of AI model provider commitments, monitoring of AI Output quality, and response to user feedback.

8.2 Policy Updates: This AI Policy will be updated from time to time to reflect changes in the Platform's AI capabilities, AI model providers, applicable law, and best practices in responsible AI.

8.3 Regulatory Compliance: SSYP monitors developments in AI regulation, including the EU AI Act, and works to align the Platform's AI governance with applicable regulatory requirements as they evolve.

9. Contact

If you have questions about this AI Policy or want to report a concern about AI Output, please contact us at [email protected] or [email protected].
