1. Introduction
MoEngage’s Merlin AI suite represents the future of productivity, content creation, and strategic optimization for marketing teams. Security is not merely a feature of our product but an inherent part of our identity. It informs every decision we make—from the data we collect to the entities with whom we share it.
Our commitment is to provide an environment that ensures high availability and tight security for all your AI-driven marketing needs. We strive to maintain absolute transparency about our Security-by-Design approach. Merlin AI guides marketing strategy towards safety, security, and growth by utilizing the world’s leading AI models alongside our proprietary algorithms.
This policy is intended to be a one-stop source of truth for how MoEngage uses AI across:
- AI features embedded in the MoEngage platform (for example, predictive models, recommendation and decisioning systems, and generative AI features in Merlin AI).
- AI-assisted workforce tools used by MoEngage employees to support operations such as support, debugging, and knowledge retrieval (for example, “Support Assist” and similar tools described in Section 3.2).
This document is written as a security and privacy disclosure and operating policy.
It explains the following:
(a) what AI systems we use,
(b) how those systems interact with data,
(c) what security and privacy controls we apply, and
(d) what responsibilities remain with the user or customer.
1.1 Standards and regulatory alignment
This policy aligns with the GDPR; India’s Digital Personal Data Protection Act (DPDP Act) and the Digital Personal Data Protection Rules, 2025; ISO/IEC 27701; ISO/IEC 42001; and other applicable privacy laws and regulations. (Press Information Bureau)
ISO/IEC 42001 is a management system standard for AI. It defines requirements and guidance for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS), covering topics such as transparency, accountability, risk management, and lifecycle oversight. (ISO)
ISO/IEC 27701 is a privacy information management standard designed to complement ISO/IEC 27001 and ISO/IEC 27002 by adding privacy-specific requirements and controls for managing personally identifiable information (PII). (ISO has also published an updated edition in 2025.) (ISO)
1.2 Key terms used in this policy
The following terms are used consistently across the policy:
- AI system / AI feature: Any product feature or operational tool that uses machine learning, statistical models, large language models (LLMs), or image generation models.
- Core AI Models: Proprietary models and statistical systems developed and hosted within the MoEngage ecosystem for optimization and prediction.
- Generative AI models: Third-party foundation models (text, image, or code) accessed via enterprise-grade cloud environments (Azure, Google Cloud, AWS Bedrock) and used for content generation, coding assistance, and workflow automation.
- Tenant / isolated environment: A customer’s logical environment within MoEngage, designed to isolate customer data and configurations from other customers.
- Customer data: Data processed by MoEngage on behalf of a customer, such as event streams, attributes, segments, campaign performance metrics, and other telemetry required for product functionality.
- Prompts and completions: User inputs provided to a generative AI model (“prompts”) and model outputs (“completions”).
- Embeddings: Vector representations of text or other content used for retrieval, semantic search, and related applications.
- PII and sensitive data: Data that can identify an individual (directly or indirectly) and other sensitive categories as defined by applicable law, contracts, and MoEngage data classification.
1.3 How to use this document
This policy is written to explain MoEngage’s design and operating commitments for AI features and AI-assisted tools, with an emphasis on security, privacy, and responsible use. It is not legal advice.
If there is any conflict between this document and the applicable MoEngage agreement(s) with a customer (including the Data Processing Addendum), the agreement(s) govern. This document may be updated over time as Merlin AI evolves, partner offerings change, or regulations and standards are revised. The version history at the beginning of this document should be used to track material updates.
2. Security Architecture & Access Control
We strictly adhere to MoEngage AI usage standards designed to safeguard customer rights without limiting innovation.
2.1 Security framework and governance
Security Framework: We adhere to a comprehensive security framework aligned with industry standards, including ISO 27001 and ISO 27701. This involves systematic audit processes, key personnel interviews, and regular evaluations of our Information Security Management System (ISMS) and Privacy Information Management System (PIMS). (ISO)
To operationalise ISO/IEC 42001 principles for AI, MoEngage maintains an AI governance program that defines:
- accountable owners for AI features and datasets,
- review gates for high-impact changes,
- risk assessment and mitigation requirements,
- monitoring, incident response, and continuous improvement processes. (ISO)
2.2 Data separation and tenant isolation
Data Separation: Our architecture is designed to ensure customer data is kept separate from our model providers. For Merlin AI features that use third-party models, MoEngage minimizes data shared with model endpoints and is designed to avoid sending customer event/attribute values or PII unless the feature explicitly requires it and the customer has enabled it.
Tenant Isolation: Merlin AI and MoEngage data systems are designed to process data within a customer’s isolated environment. Logical separation controls are applied across storage, compute, access policies, and audit trails.
2.3 Input validation and sanitization
Input Validation & Sanitization: We employ multi-layered input validation, including sanitization processes designed to prevent SQL and script injection attacks.
For features that involve tool-use (for example, generation of segmentation logic, templates, or campaign flows), MoEngage applies additional guardrails designed to reduce prompt injection risk, including strict schema validation, allowlisted tool calls, and output sanitization before any generated artifact can be executed.
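The allowlisting and schema-validation guardrails described above can be illustrated with a short sketch. This is not MoEngage’s actual implementation; the tool names and schema shapes here are hypothetical, and a production system would typically use full JSON Schema validation:

```python
# Illustrative sketch only (tool names and schemas are hypothetical).
ALLOWED_TOOLS = {
    "create_segment": {"required": {"name", "filter"}, "optional": set()},
    "create_template": {"required": {"name", "body"}, "optional": {"locale"}},
}

def validate_tool_call(call: dict) -> dict:
    """Reject any model-generated tool call that is not allowlisted or
    that carries missing or unexpected arguments."""
    tool = call.get("tool")
    if tool not in ALLOWED_TOOLS:
        raise ValueError(f"tool not allowlisted: {tool!r}")
    spec = ALLOWED_TOOLS[tool]
    args = set(call.get("args", {}))
    if not spec["required"] <= args:
        raise ValueError(f"missing required args: {spec['required'] - args}")
    extra = args - spec["required"] - spec["optional"]
    if extra:
        raise ValueError(f"unexpected args: {extra}")
    return call
```

Anything the model generates that falls outside the allowlist, or that smuggles in extra fields, is rejected before it can reach an execution path.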
2.4 Content safety, filtering, and moderation
Content Filtering: For Generative AI models, we leverage the native content filtering and moderation systems provided by our foundational model partners to screen prompts and outputs for malicious or inappropriate content. This follows a layered security mitigation strategy aligned with responsible AI standards.
As an example of partner capabilities:
- Microsoft Foundry / Azure OpenAI provides configurable content filtering for prompts and completions across harm categories and severity levels. (Microsoft Learn)
- Azure AI Content Safety provides detection of harmful user-generated and AI-generated content, supporting text and image scenarios. (Microsoft Learn)
MoEngage uses a defense-in-depth posture:
- validation and sanitization of inputs before model invocation,
- partner content filters/guardrails at the model boundary,
- MoEngage output validation and policy checks before outputs are shown or saved,
- role-based access controls and mandatory review requirements before outputs are deployed in customer campaigns.
2.5 Access control
Access Control: While Merlin AI is accessible to authorized users for content generation and brainstorming, the actual usage and deployment of AI-generated content in live marketing campaigns is restricted. Only users with explicit Campaign Creation permissions are authorized to incorporate AI-generated content into active campaigns. This ensures that every piece of AI content used in a public-facing campaign has been reviewed and deployed by a qualified marketer.
Access to Merlin AI is secured through account login and is restricted to users authorized for campaign creation. Access permissions are granted strictly based on job requirements, preventing unauthorized misuse.
For MoEngage workforce tools, access is controlled through role-based entitlements, authentication (SSO where applicable), and audit logging. MoEngage employees are expected to comply with the “Employee Responsibilities” section in Section 9, including restrictions on entering sensitive data into prompts.
2.6 Customization and customer controls
Customization: The architecture supports custom authentication protocols and full control over access to global knowledge, along with account-level customization of generation limits, credits, and usage policies.
Where a feature supports retrieval over customer knowledge bases (for example, knowledge documents, catalogs, or templates), access is constrained to the authorized user and the relevant tenant context. Retrieval indices are designed to respect tenant boundaries and access policies.
2.7 Native security
Native Security: Advanced security and privacy measures are built in, assuring data confidentiality without requiring additional customer-side configuration.
3. How it Works: The Hybrid System
3.1 Core AI Models (Proprietary)
These models are hosted entirely within the MoEngage ecosystem and do not share data with third-party LLM providers. Our Core AI features are powered by a central engine called Merlin AI, which optimizes marketing automation using the following algorithms:
- Predictive Segments: Uses Gradient Boosting Classifiers (Supervised Machine Learning) to detect behavioral patterns and assign probability scores for future user actions (e.g., Churn or Purchase).
- Proactive Assistant: Employs Advanced Clustering using K-Means to provide actionable insights.
- A/B Testing for Intelligent Content Optimization (ICO), Intelligent Delay Optimization (IDO), and Intelligent Path Optimization (IPO): Powered by Bayesian Multi-Armed Bandit algorithms, which use sequential experimentation to continuously explore and exploit winning campaign variations in real-time.
- Best Time to Send (BTS), Most Preferred Channel (MPC), and Next Best Action: Utilizes proprietary and advanced Statistical Models based on weighted frequency and recency aggregation analysis to determine the optimal timing and channel for user engagement.
- RFM Analysis: Supported by aggregation algorithms to automatically segment users based on behavior.
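As an illustration of the weighted frequency and recency aggregation idea behind Best Time to Send, consider this minimal sketch. It is hypothetical and not the production algorithm: recent opens are weighted more heavily via exponential half-life decay, and the highest-scoring hour of day wins.

```python
import math
from collections import defaultdict

def best_send_hour(open_events, half_life_days=30.0):
    """Score each hour-of-day by recency-weighted open frequency;
    open_events is a list of (hour_opened, days_since_event) pairs."""
    scores = defaultdict(float)
    for hour, age_days in open_events:
        # Half-life decay: an open from `half_life_days` ago counts half as much.
        scores[hour] += math.exp(-math.log(2) * age_days / half_life_days)
    return max(scores, key=scores.get)
```

A user with two recent opens at 9 a.m. and one stale open at 9 p.m. would score highest at 9 a.m., since the old event has decayed to near zero.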
3.2 Generative AI Models (Merlin AI Suite)
MoEngage utilizes a hybrid architecture combining third-party foundational models for content generation with proprietary in-house models for optimization and prediction. We leverage a multi-model approach, accessing state-of-the-art models via enterprise-grade, secure environments on Microsoft Azure, Google Cloud Platform, and AWS Bedrock. The partner environments used for model access include well-defined privacy and data handling commitments, such as restrictions on cross-customer data access and restrictions on using customer prompts/completions to train foundation models without permission. (Microsoft Learn)
MoEngage’s Merlin AI suite includes:
- Merlin AI Copywriter: Utilizes GPT-4o (via Microsoft Azure) to generate high-quality marketing content and subject lines. It leverages Keyword Impact Quotient (KIQ) (built on proprietary algorithms) to optimize content based on historical performance.
- Merlin AI Designer: Powered by Gemini 2.5 Flash and Imagen 4 (via Google), this feature creates unique marketing banners and visual assets from text prompts. (Google AI for Developers)
- Segmentation AI: Uses a fine-tuned GPT-3.5 class model paired with a Retrieval-Augmented Generation (RAG) system. This translates natural language queries into database segmentation logic with controls designed so that the model is not required to access specific user values.
- Jinja AI: Built on Gemini 2.5 Pro, this model generates code-based personalization templates (HTML/CSS/Jinja) to enable complex dynamic content. (Google AI for Developers)
- Flow Assist: Uses Gemini 2.5 Flash to convert natural language requests into production-ready campaign flows, utilizing a multi-agent architecture for validation. (Google AI for Developers)
- In-App HTML Generator: Leverages a composite of models, including Gemini 2.5 Flash/Pro, Imagen 4, and Claude Sonnet 4.5 (via AWS Bedrock) to generate code and assets for in-app campaigns. (Google AI for Developers)
- Support & Workforce Tools: Assistive tools (such as Moe Support Assist and OpenWeb UI) utilize GPT-4o-mini, Gemini 2.5 Flash, and models hosted on AWS Bedrock and Azure OpenAI to assist employees with support, debugging, and knowledge retrieval.
4. Functionality & Data Interaction
Our models interact with data in a strictly controlled manner to ensure privacy and isolation.
4.1 Data handling protocol
Our Core AI models analyze user behavioral data and preferences only within your isolated environment. Learnings from one customer never train algorithms for others; in-house proprietary models train strictly on the preferences and behavioral data of your users.
In addition, for features that invoke third-party generative models, MoEngage follows a data minimization approach:
- Only the minimum necessary context is included in model prompts,
- prompts are structured to avoid including PII unless explicitly required by the user workflow,
- outputs are post-processed and validated before being made available for use.
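A minimal sketch of the prompt-redaction idea follows. It is illustrative only: production redaction covers many more identifier types and uses partner redaction features where supported.

```python
import re

# Minimal patterns for illustration; real redaction covers many more types.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text is
    included in a model prompt."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

The goal is that free-text context is scrubbed of direct identifiers before it ever crosses the model boundary.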
4.2 Core Merlin AI decisioning and optimization
- Sherpa AI (Core): Performs real-time A/B testing optimization by directing traffic to winning variations based on live campaign interactions.
For bandit-driven optimization (ICO/IDO/IPO), the system maintains guardrails to prevent unsafe or undesirable allocations (for example, minimum exploration, pacing constraints, and campaign-level caps). Optimization decisions are designed to be reversible and are monitored using online performance signals (for example, CTR, conversion, fatigue/complaints, and safety-related guardrails).
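The explore/exploit behavior with a minimum-exploration guardrail can be sketched as follows. This is a generic Thompson-sampling illustration, not MoEngage’s production bandit, and the statistics shapes are hypothetical:

```python
import random

def choose_variant(stats, epsilon_floor=0.05):
    """Thompson sampling over Beta posteriors, with a minimum-exploration
    floor so no variant's traffic share can drop to zero."""
    if random.random() < epsilon_floor * len(stats):
        return random.choice(list(stats))
    # Sample a plausible conversion rate for each variant and pick the best.
    samples = {v: random.betavariate(s["success"] + 1, s["failure"] + 1)
               for v, s in stats.items()}
    return max(samples, key=samples.get)
```

Over many allocations, the clearly better variant receives most of the traffic, while the exploration floor keeps collecting evidence on the others so the decision remains reversible.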
4.3 Segmentation, analytics, and predictive systems
- AI-Based Segmentation: Enables dynamic user segmentation based on semantic relations within user data points (events/attributes) on MoEngage servers.
- Predictive Segmentation: Forecasts Customer Lifetime Value (CLTV) and behavior by analyzing patterns of similar users who converted in the past.
For segmentation and predictive systems, MoEngage maintains:
- training and inference pipelines that operate within tenant isolation boundaries,
- access controls and audit logs for feature access,
- monitoring for drift and performance regression (see Section 7).
4.4 Content, asset, and template generation
Generative AI features are designed to be assistive. They generate text content, banners, code templates, and workflow drafts. To reduce the risk of unsafe or inaccurate content, MoEngage applies:
- partner content filtering and safety configurations,
- policy checks for disallowed content categories,
- mandatory human review requirements before use in customer campaigns (see Section 9).
4.5 Support and knowledge retrieval workflows
MoEngage uses AI-assisted tools to help employees operate more efficiently (for example, summarizing support tickets, suggesting troubleshooting steps, drafting knowledge base content, or assisting with debugging). These tools are governed by:
- strict employee data entry rules (do not paste sensitive customer personal data),
- role-based access controls,
- logging and auditability for usage,
- retrieval boundaries that restrict access to authorized knowledge sources.
5. Data Privacy & Model Training Standards
We adhere to strict data boundaries. Regarding your inputs (prompts), completions (outputs), embeddings, and training data, MoEngage guarantees the following:
5.1 Core models
- Core Models: Hosted entirely within the MoEngage ecosystem. Only your end-user data is used to train these models, and this data is never shared with third-party model providers.
5.2 Generative AI processing boundaries
- Generative AI (Merlin): Utilizes Azure, Google Cloud, and AWS models to generate content based on marketer inputs. Customer event and attribute data used for core optimization is not provided to these models for training purposes.
5.3 Strict isolation and non-training commitments
Strict Isolation: Your prompts (inputs), completions (outputs), embeddings, and training data are:
- NOT made available to other customers or sold to third parties,
- NOT used to train or improve foundation models provided by partners, and
- NOT used to train or improve any third-party products or services.
These commitments are aligned with major partner contractual and technical postures for enterprise model hosting. For example:
- Microsoft states that prompts and completions processed by Azure Direct Models are not available to other customers, not available to OpenAI, and not used to train foundation models without permission or instruction. (Microsoft Learn)
- Google Cloud’s Service Specific Terms include a training restriction clause stating Google will not use customer data to train or fine-tune AI/ML models without prior permission or instruction. (Google Cloud)
- AWS states that Amazon Bedrock does not store or log prompts and completions, does not use them to train AWS models, and does not distribute them to third parties. (AWS Documentation)
- Stability AI provides user controls for opting out of using content to “Improve the Model for Everyone,” and describes safety filtering in its hosted APIs. (Stability AI)
5.4 Partner data handling nuances (retention, abuse monitoring, and configuration)
Enterprise model hosting typically includes safeguards such as tenant isolation and non-training commitments, but some platforms may retain limited data for purposes like abuse monitoring or debugging unless configured otherwise.
- Microsoft documents that Azure Direct Models store and process data to provide the service and to monitor for uses that violate product terms, and describes an “abuse monitoring data store” used for review of prompts and completions flagged as potentially abusive. Microsoft also documents a “modified abuse monitoring” process for eligible customers who do not want prompts/completions stored for human review. (Microsoft Learn)
- Google documents that, while it won’t use customer data to train or fine-tune AI/ML models without permission or instruction, it may log prompts for abuse monitoring for Google models; and for certain “Grounding with Google Search” features, it stores prompts/context and outputs for 30 days for grounded results and search suggestions. Google also documents a “zero data retention” option for Vertex AI, subject to eligibility and configuration. (Google Cloud Documentation)
- AWS documents that Bedrock is designed not to store or log prompts and completions, and explains its Model Deployment Account architecture, where model providers have no access to the deployment accounts. (AWS Documentation)
MoEngage’s policy is to select configurations consistent with enterprise privacy commitments and to apply additional MoEngage controls (such as data minimization, redaction where supported, and access restrictions) to reduce risk.
5.5 Retrieval-augmented generation and grounding standards
For features that use retrieval (RAG):
- Retrieval sources are restricted to authorized knowledge bases and tenant context.
- Sensitive sources (for example, documents containing personal data) must be governed by access controls, and only the minimum excerpts needed for generation should be retrieved.
- Where a customer enables RAG over their content, MoEngage treats the retrieved text as customer data and applies the same tenant isolation and access controls described elsewhere in this policy.
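Tenant- and ACL-scoped retrieval can be sketched as a pre-filter applied before similarity ranking. This is illustrative only: the field names are hypothetical, and a production system would also enforce these constraints inside the index itself rather than in application code alone.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve(index, query_embedding, tenant_id, user_roles, k=5):
    """Drop every document outside the caller's tenant or ACL before
    ranking by similarity, so unauthorized content is never a candidate."""
    candidates = [d for d in index
                  if d["tenant_id"] == tenant_id and d["acl"] & user_roles]
    candidates.sort(key=lambda d: dot(query_embedding, d["embedding"]),
                    reverse=True)
    return candidates[:k]
```

Because filtering happens before ranking, a document from another tenant can never appear in the result set, no matter how similar its embedding is to the query.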
5.6 Data subject rights and privacy requests
MoEngage supports customer obligations to respond to privacy requests (for example, access, deletion, and correction) in alignment with applicable laws such as GDPR and the DPDP Act. The DPDP Rules, 2025 operationalize obligations and rights under India’s DPDP Act, including requirements around lawful processing and protection of digital personal data. (Press Information Bureau)
5.7 Data residency and processing location
MoEngage supports region and residency choices for customer data storage and processing, subject to product availability and contractual terms. Where generative AI partner services are used, MoEngage selects deployment options intended to keep data processing aligned with the customer-designated geography whenever feasible.
Microsoft documents that, for Azure Direct Models, data stored at rest (including uploaded data and the abuse monitoring data store) is stored in the customer-designated geography for both Global and DataZone deployment types, while the location of processing can vary depending on the deployment type chosen. (Microsoft Learn)
Customers with strict residency requirements should discuss deployment options and configurations with MoEngage so that we can align the configuration of AI features (including any partner service options) with the customer’s contractual and regulatory obligations.
6. Explainability & Bias Mitigation
We ensure our models are transparent and fair through proactive and reactive measures.
6.1 Explainability
Transparent Decision-Making: We maintain clear documentation of AI model decisions.
UI Visibility: Inputs and outputs are exposed on the dashboard, ensuring the marketer understands why a decision was recommended.
MoEngage applies explainability standards appropriate to the system type:
- For predictive scores (for example, churn likelihood), we expose score definitions, update cadence, and key drivers at an aggregate level where feasible.
- For recommendations and ranking, we disclose the objective being optimized and the main signals used.
- For generative features, we disclose that outputs are generated by models, may contain errors, and require human review before use (see Section 9).
6.2 Bias mitigation
Proactive: Core models are trained on recent user behavior, avoiding stale data that typically introduces historical bias.
Reactive: We continuously track performance; if anomalies or biases are detected, immediate corrective actions are taken.
Bias mitigation practices include:
- data quality checks and representativeness monitoring,
- periodic review of training data windows and feature definitions,
- drift detection and performance monitoring across key cohorts,
- escalation and remediation procedures when bias signals are detected.
7. Performance Monitoring & Human Oversight
We employ rigorous metrics to evaluate model accuracy and safety.
7.1 Data science metrics
- Predictions: Evaluated using the ROC AUC curve (Receiver Operating Characteristic Area Under the Curve) to measure classification performance.
- Recommendations: Monitored via Mean Reciprocal Rank (MRR) and Mean Average Precision (MAP) to ensure relevance and ranking accuracy.
- Generative AI (Merlin Suite): Assessed using automated metrics where appropriate (for example, text similarity checks such as BLEU for regression suites where a reference exists). Additionally, we have implemented a comprehensive Evals System across almost all Generative AI products. This system rigorously tests model outputs against defined baselines to ensure quality, relevance, and alignment with user intent before deployment.
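For reference, the Mean Reciprocal Rank metric named above can be computed over a batch of ranked result lists as in this short sketch:

```python
def mean_reciprocal_rank(rankings):
    """rankings: one list per query of booleans, ordered by model rank,
    marking whether each returned item was relevant."""
    total = 0.0
    for ranking in rankings:
        for position, relevant in enumerate(ranking, start=1):
            if relevant:
                total += 1.0 / position  # reciprocal of first relevant hit
                break
    return total / len(rankings)
```

A query whose first relevant item appears at rank 2 contributes 0.5; one whose first result is relevant contributes 1.0.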
MoEngage’s Evals System is designed to include:
- curated prompt suites that represent typical and edge-case user intents,
- rubric-based evaluation for quality, policy compliance, and factuality where applicable,
- safety tests (prompt injection attempts, disallowed content tests),
- regression testing to detect degradation during model upgrades or prompt changes.
7.2 Automated monitoring
Continuous tracking of accuracy, precision, and recall.
Automated anomaly detection systems flag unexpected behaviors.
For generative features and workforce tools, monitoring also includes:
- abuse and misuse detection signals,
- rate limiting and throttling controls where applicable,
- alerting for abnormal spikes in harmful content classifications.
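Spike alerting of the kind described can be sketched as a simple z-score check against a rolling history of, say, hourly harmful-content classification rates. This is illustrative only; the threshold and windowing are hypothetical:

```python
def spike_alert(history, current, threshold=3.0):
    """Flag when the current rate sits more than `threshold` standard
    deviations above the historical mean."""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    std = var ** 0.5 or 1e-9  # guard against zero variance
    return (current - mean) / std > threshold
```

Normal hour-to-hour variation stays below the threshold, while an abnormal spike in flagged content trips the alert for human review.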
7.3 Human-in-the-loop
Development Phase: All features undergo a human assessment round to detect erroneous output before deployment.
Execution Phase: The “Final Decision” always rests with the marketer. AI outputs are suggestions; they must be validated and approved by a human before being used in live campaigns.
For employee-facing assistive tools, MoEngage applies similar principles:
- The employee remains accountable for the final action taken,
- outputs must be verified before applying changes to production systems or customer-facing communications,
- high-risk actions require peer review or established approvals.
8. Third-Party AI Partners & Infrastructure
We partner with industry leaders to provide best-in-class privacy controls.
MoEngage uses enterprise-grade hosting environments and contractual controls for third-party models, with a focus on tenant isolation, data minimization, and restrictions on training use. The following sections summarize key partner commitments relevant to Merlin AI.
8.1 Google Cloud Platform (Vertex AI)
Deployment: We leverage Google’s Vertex AI platform to access advanced models like Gemini and Imagen within a secure, enterprise-grade environment.
Data Privacy: Google’s Service Specific Terms include a “Training Restriction” clause: Google will not use customer data to train or fine-tune AI/ML models without the customer’s prior permission or instruction. (Google Cloud)
Retention and abuse monitoring: Google documents that it may log prompts for abuse monitoring for Google models, and for certain grounding features it stores prompts/context and outputs for 30 days. Google also documents a “zero data retention” option for Vertex AI, subject to eligibility and configuration. (Google Cloud Documentation)
Safety: Vertex AI includes safety capabilities for generative AI, including controls intended to reduce harmful outputs and misuse.
8.2 Microsoft Azure (OpenAI Service)
Deployment: We utilize the Azure OpenAI Service through Microsoft’s Azure AI Foundry / Azure Direct Models environment.
Isolation: Microsoft states that Azure Direct Models are hosted in Microsoft’s Azure environment and do not interact with services operated by model providers (for example, OpenAI-operated services such as ChatGPT or the OpenAI API). (Microsoft Learn)
Data privacy: Microsoft states that prompts, completions, embeddings, and training data processed by Azure Direct Models are not available to other customers, not available to OpenAI or other providers, and are not used to train foundation models without permission or instruction. (Microsoft Learn)
Abuse monitoring and content filtering: Microsoft documents real-time harmful content evaluation and filtering, and describes an abuse monitoring process that may store and review flagged prompts/completions, with a “modified abuse monitoring” option for eligible customers. (Microsoft Learn)
Microsoft’s broader obligations for processing and security of customer data in Azure services are described in the Microsoft Products and Services Data Protection Addendum (DPA). (microsoft.com)
8.3 Amazon Web Services (AWS)
Environment: AWS provides a secure environment with network isolation.
Models: We utilize AWS Bedrock to access high-performing models securely.
Data Handling: AWS states that Amazon Bedrock does not store or log prompts and completions, does not use them to train AWS models, and does not distribute them to third parties. AWS also describes a Model Deployment Account architecture where model providers have no access to deployment accounts. (AWS Documentation)
8.4 Stability AI
Transparency: Stability AI publishes a Privacy Center that includes controls for opting out of using content to “Improve the Model for Everyone.” (Stability AI)
Filtering and authenticity: Stability AI states that it applies robust filters on prompts and outputs in its hosted applications and APIs, and describes the implementation of content authenticity standards and watermarking to help identify AI-assisted content. (Stability AI)
Stability AI also reports that it has implemented prompt filters and NSFW classifiers to block disallowed content in its hosted services. (Stability AI)
9. Customer Responsibilities & Human Oversight
While MoEngage and our partners implement considerable security controls, the ultimate responsibility for the appropriate use of generated content resides with the user or customer.
9.1 MoEngage responsibilities
MoEngage is responsible for:
- implementing privacy and security controls described in this policy,
- maintaining tenant isolation and access control,
- monitoring model performance and safety signals,
- maintaining documentation, evaluation protocols, and change management,
- responding to security and privacy incidents in accordance with our incident response procedures.
9.2 Customer responsibilities
Assistive Tool: Generative AI should be leveraged as an assistive tool, not a replacement for human judgment.
Mandatory Review: Content generated by AI should never be published without human evaluation for quality, accuracy, and brand alignment.
Commercial Use: You hold the right to use the generated output for commercial purposes (e.g., marketing campaigns). However, manual review is strongly suggested before commercial deployment.
Customers are also responsible for:
- ensuring that only authorized users access AI features (through permissions and role management),
- avoiding entry of personal or sensitive data into prompts unless explicitly needed and permitted by law and policy,
- verifying claims, product offers, pricing, and regulatory statements in AI-generated content before publishing,
- using AI-generated code/templates in accordance with security best practices (for example, review for unsafe scripts and injection risks).
9.3 Employee responsibilities
MoEngage employees must:
- use approved AI tools and follow MoEngage data handling rules,
- not enter sensitive customer personal data or secrets into AI prompts,
- treat AI outputs as suggestions and validate before acting,
- follow access controls and least privilege,
- report suspected misuse, unsafe outputs, or potential data leaks immediately through established channels.
9.4 Prohibited use and misuse prevention
MoEngage AI features and workforce tools must not be used to:
- generate or facilitate illegal content,
- generate disallowed content such as hate speech, harassment, or sexual content involving minors,
- attempt to bypass safety controls, exfiltrate data, or conduct prompt injection attacks,
- impersonate individuals or misrepresent AI output as a verified fact without validation,
- process data in violation of applicable law, contract, or MoEngage policy.
MoEngage and its partners may enforce restrictions, throttling, or suspension where abuse is detected in accordance with applicable terms and policies. (Microsoft Learn)
10. Intellectual Property & Output Usage
10.1 Ownership and licensing of inputs/outputs (by service terms)
MoEngage’s baseline position:
- Customers retain rights to their inputs (Customer Inputs) and control how outputs are used, subject to applicable law and partner terms.
Partner terms that inform this:
- Microsoft Product Terms: Output Content is Customer Data; Microsoft does not own Customer’s Output Content. (Microsoft)
- Google service terms (archived 2023): Generated Output is Customer Data; Google does not assert ownership rights in new IP in Generated Output. (Google Cloud)
- Anthropic (Bedrock third-party model terms): Customer retains rights to Inputs and owns Outputs; Anthropic may not train on Customer Content from services. (Amazon Web Services, Inc.)
10.2 Similarity of outputs and non-uniqueness considerations
Generative AI can produce similar outputs for different prompts and different customers. We explicitly note that multiple customers may have or claim rights in content that is the same or substantially similar, and we do not determine whether output is copyright-protected or enforceable.
Therefore:
- Customers should not assume exclusivity of generated text or images.
- Customers should conduct an appropriate review for brand differentiation and legal risk.
10.3 Copyright and trademark guidance
MoEngage cannot guarantee that outputs are non-infringing and disclaims liability for any resulting intellectual property conflicts. To mitigate risk, Customers should:
- avoid prompting models to imitate specific copyrighted works or protected brand assets,
- review outputs for trademark and brand conflicts,
- use licensed source materials when providing inputs.
10.4 Use of third-party content and attribution expectations
Outputs may contain elements that resemble third-party content. Customers are responsible for:
- verifying rights for commercial use,
- providing attribution where required,
- complying with platform and advertising rules.
For provenance, MoEngage tracks standards such as C2PA (Coalition for Content Provenance and Authenticity) to help provide authenticity signals in media workflows where feasible. (C2PA)
11. Data Retention & Commitment
As part of ongoing enhancements, MoEngage may adjust the retention window for data retained by Merlin AI to improve service quality. Regardless of retention periods, however, MoEngage’s design goal is that end-user PII and raw campaign interaction data are not transmitted to third-party foundation model providers as part of core optimization and prediction workloads. For generative AI features, the content processed by model endpoints is derived from marketer prompts and selected context; customers and employees should avoid including personal or sensitive data in prompts unless it is required for the intended workflow and permitted by policy and law.
MoEngage retains and processes data according to:
- contractual requirements,
- legal obligations,
- operational requirements (for example, security monitoring and troubleshooting),
- customer-configured settings where applicable.
11.1 Retention categories
MoEngage applies retention controls by data category, including:
- customer behavioral data used for analytics and core model training,
- prompts and outputs associated with generative AI features,
- embeddings and retrieval indices (where enabled),
- logs and audit trails for security and compliance,
- evaluation artifacts used to validate model quality and safety.
11.2 Partner retention considerations
MoEngage uses enterprise model hosting environments that may include limited retention for abuse monitoring or debugging unless configured otherwise.
- Microsoft describes an abuse monitoring data store for prompts and completions selected for review, and a modified abuse monitoring process that disables storage and human review for eligible customers. (Microsoft Learn)
- Google documents prompt logging for abuse monitoring for Google models and a zero data retention option for Vertex AI (subject to eligibility), and documents 30-day storage for certain grounding features. (Google Cloud Documentation)
- AWS documents that Bedrock does not store or log prompts and completions. (AWS Documentation)
- Stability AI provides opt-out controls for training and describes safety filtering and watermarking initiatives. (Stability AI)
11.3 Commitment statement
Ensuring the security and privacy of customer information is vital to our mission. While MoEngage provides robust security controls and upholds responsible AI standards, the appropriate use of generated content ultimately resides with the customer.
Your data belongs to you. It will never be sold or shared irresponsibly. We remain steadfast in our commitment to safeguarding your information throughout the entire lifecycle.
12. Appendices
12.1 Merlin AI feature catalog (summary)
The Merlin AI suite includes, but is not limited to:
- Predictions (Predictive Segments, CLTV, and propensity signals where applicable)
- Multivariate testing and bandit-driven optimization (ICO/IDO/IPO)
- Best Time to Send (BTS)
- Most Preferred Channel (MPC)
- Next Best Action and recommendation signals
- Generative features (Copywriter, Designer, Segmentation AI, Jinja AI, Flow Assist, In-App HTML Generator)
- Assistive workforce tools (support, debugging, and knowledge retrieval assistants)
12.2 AI model partner summary (illustrative)
| Provider | Service | Key privacy posture (high level) |
|---|---|---|
| Microsoft | Azure OpenAI / Azure Direct Models | Prompts/completions not shared with other customers or OpenAI; not used to train foundation models without permission; hosted in Microsoft Azure; configurable content filters and abuse monitoring options. (Microsoft Learn) |
| Google | Vertex AI (Gemini/Imagen) | Training restriction (no training/fine-tuning on customer data without permission); documented prompt logging for abuse monitoring; zero data retention option; special handling for “Grounding with Google Search.” (Google Cloud) |
| AWS | Bedrock | Does not store/log prompts/completions; does not use them to train AWS models; model providers have no access to deployment accounts. (AWS Documentation) |
| Stability AI | Hosted APIs | User controls for opting out of training; prompt/output filtering; watermarking initiatives and transparency reporting. (Stability AI) |
| MoEngage | Core AI Models / AI Decisioning (Merlin AI) | Hosted entirely within the MoEngage ecosystem. Only your end-user data is used to train these models, and this data is never shared with third-party model providers. Training and inference pipelines operate strictly within tenant isolation boundaries. |
13. References
- ISO/IEC 42001: AI management systems overview. (ISO)
- ISO/IEC 27701: privacy information management. (ISO)
- DPDP Rules, 2025 notification and overview (Government of India). (Press Information Bureau)
- Microsoft Azure Direct Models (Azure OpenAI) data privacy and abuse monitoring documentation. (Microsoft Learn)
- Google Cloud Vertex AI training restriction and zero data retention documentation. (Google Cloud Documentation)
- AWS Bedrock data protection documentation and “Model Deployment Account” architecture. (AWS Documentation)
- Stability AI privacy, safety, and transparency reporting. (Stability AI)
- Gemini 2.5 Flash and Gemini 2.5 Pro model documentation. (Google AI for Developers)
- Imagen 4 model documentation (Vertex AI). (Google Cloud Documentation)
- Claude Sonnet 4.5 availability in Amazon Bedrock (AWS announcement/blog). (Amazon Web Services, Inc.)