
AI Tooling & Ethics.

Sprout builds AI systems that operate in regulated industries, handle personal data, and make decisions that affect real people. That makes AI ethics an engineering discipline, not a PR statement. OJK's April 2025 AI Governance Guidance is the Indonesian regulatory baseline; ISO/IEC 42001 is the international AI management system standard; BSSN's emerging AI governance posture, the EU AI Act, and the NIST AI RMF set the broader frame. This page describes what we publicly commit to, how we build to those commitments, and what "responsible AI" means in practice, not in slogans.

OJK April 2025 · ISO/IEC 42001 · BSSN Aligned · Audit-Ready

AI ethics as an engineering discipline

Every AI engagement at Sprout operates under a defined governance structure: model validation, bias testing, drift monitoring, human oversight on high-risk decisions, and documentation that audit teams can defend. The OJK April 2025 AI Governance Guidance defines six role profiles (AI Owner, Model Owner, Data Steward, Model Validator, Auditor, Compliance Lead) across the AI lifecycle; we staff those roles on regulated engagements. ISO/IEC 42001 is the emerging international standard for AI management systems; our practice is aligned and our certification path is documented. The specific commitments below are the ones we'll defend in writing.

April 2025 — OJK AI Governance Guidance effective; Sprout practice aligned.
ISO/IEC 42001 — International AI management system standard; alignment path active (status TBD, Arno to confirm current certification state).
6 role profiles — OJK-defined lifecycle roles (AI Owner, Model Owner, Data Steward, Model Validator, Auditor, Compliance Lead), staffed on regulated-sector AI engagements.
International aware — EU AI Act, NIST AI RMF, and BSSN AI governance, tracked and applied where engagements require cross-border alignment.

Signature Visual

AI governance lifecycle

A horizontal lifecycle flow through six phases (design, build, validate, deploy, monitor, audit) with governance gates at each and regulator chips (OJK, UU PDP, ISO 42001, BSSN) attached to the appropriate sign-offs. A footer strip lists Sprout's published AI principles: transparency, fairness, human oversight, data stewardship, continuous improvement, audit readiness. Governance-documentation aesthetic. Coming soon.

How we run AI ethics as a discipline

Four principles: specific, published, testable.

01

Governance-structured, not slogan-based

Every regulated-sector AI engagement operates under OJK's six-role governance structure. Role responsibilities are staffed and documented. “Responsible AI” without named role holders is theater. We staff accordingly.

02

Evaluation before deployment

Every production AI deployment passes an evaluation harness: accuracy against ground truth, bias testing across customer segments, adversarial testing where applicable, human-oversight trigger testing. The validation report is part of deployment paperwork.
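As an illustration, the segment-level piece of such an evaluation harness, overall accuracy plus the worst-case accuracy gap across customer segments, can be sketched in a few lines. Function names and thresholds here are hypothetical, not Sprout's actual harness:

```python
from dataclasses import dataclass

@dataclass
class EvalReport:
    accuracy: float        # overall accuracy against ground truth
    max_segment_gap: float # worst-case accuracy gap across segments
    passed: bool           # deployment gate result

def evaluate(preds, labels, segments, acc_floor=0.90, gap_ceiling=0.05):
    """Pre-deployment gate: fail if overall accuracy is below the floor
    or any two customer segments diverge beyond the gap ceiling
    (illustrative thresholds)."""
    correct = [p == y for p, y in zip(preds, labels)]
    accuracy = sum(correct) / len(correct)
    by_segment = {}
    for ok, seg in zip(correct, segments):
        by_segment.setdefault(seg, []).append(ok)
    seg_acc = {s: sum(v) / len(v) for s, v in by_segment.items()}
    gap = max(seg_acc.values()) - min(seg_acc.values())
    return EvalReport(accuracy, gap, accuracy >= acc_floor and gap <= gap_ceiling)
```

The returned report object is the kind of artifact that can be serialized straight into the deployment paperwork the paragraph above describes.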

03

Continuous monitoring, not a launch-day checkbox

Drift, bias, and accuracy decay are ongoing realities in production AI. We wire continuous monitoring from first deploy, not after the first incident. Drift reviews are scheduled, not opportunistic.
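One common drift signal such monitoring can track is the population stability index (PSI) between a training-time and a production feature distribution. A minimal sketch, using the conventional "PSI above 0.2 signals material drift" rule of thumb as an assumed alert threshold:

```python
import math

def population_stability_index(baseline, current, eps=1e-6):
    """PSI between a baseline (training-time) and current (production)
    bin distribution, each given as bin fractions summing to 1.
    Rule of thumb: > 0.2 suggests material drift worth a scheduled review."""
    score = 0.0
    for b, c in zip(baseline, current):
        b, c = max(b, eps), max(c, eps)  # guard against empty bins
        score += (c - b) * math.log(c / b)
    return score
```

Wired into a scheduled job from the first deploy, a check like this turns "drift reviews are scheduled" into a concrete, alertable metric.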

04

Audit-ready by default

OJK, BSSN, internal audit, client audit, regulator audit: the evidence pack, documentation trail, and governance records are produced once and kept current. An audit request should be a two-day fulfillment, not a two-month scramble.

The commitments

Four specific commitments we make publicly and build to operationally.

OJK April 2025 Alignment

Six-role governance structure staffed on OJK-supervised engagements. Model validation documentation, bias testing, drift monitoring, human-oversight trigger design. Evidence packs produced for OJK audit.

6-Role Structure · Model Validation · Bias Testing · OJK Audit-Ready

ISO/IEC 42001 Alignment

International AI Management System standard alignment. Policy, risk assessment, operational controls. Certification path documented; current status published transparently.

AIMS · Policy + Risk Assessment · Operational Controls · Certification Path

Transparency Commitments

Model provenance documentation. Data source and licensing disclosure. Training-data lineage where applicable. Where Sprout uses frontier models (Claude, OpenAI, Google), the vendor and model version are documented in engagement artifacts.

Model Provenance · Data Lineage · Vendor Transparency · Engagement Documentation

Human Oversight Discipline

High-risk decisions (credit, claims denial, medical advice, fraud flagging, regulatory decisioning) do not operate without human-in-the-loop. Oversight triggers defined, escalation paths documented, and override logging comprehensive.

High-Risk Classification · In-Loop Triggers · Escalation Paths · Override Logging
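The trigger-and-log pattern behind this commitment can be sketched simply. The decision classes, event schema, and function names below are illustrative assumptions, not Sprout's production code:

```python
import json
import time

# Illustrative high-risk decision classes; real classifications come
# from the engagement's risk assessment, not a hard-coded set.
HIGH_RISK = {"credit_decision", "claims_denial", "fraud_flag"}

def route(decision_type, model_output, audit_log):
    """Route one model decision: high-risk types always go to a human
    reviewer, and every routing event is appended to the audit trail
    (hypothetical log schema)."""
    destination = "human_review" if decision_type in HIGH_RISK else "auto_apply"
    audit_log.append(json.dumps({
        "ts": time.time(),
        "decision_type": decision_type,
        "model_output": model_output,
        "routed_to": destination,
    }))
    return destination
```

The point of the pattern is that the model output alone never reaches the customer for a high-risk class, and the log line exists whether or not a human later overrides the model.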

AI governance in action

Sprout's AI governance operates under a regulatory reality that's still in active definition.

REGULATORY SIGNAL

OJK AI Governance defined six lifecycle roles. Sprout staffs them.

OJK's April 2025 AI Governance Guidance defines six role profiles across the AI lifecycle: AI Owner, Model Owner, Data Steward, Model Validator, Auditor, and Compliance Lead. For Indonesian AI deployments in OJK-supervised contexts, these roles are not optional. Sprout's practice staffs these roles on regulated-sector engagements.

6 roles — OJK-defined lifecycle governance roles
REGULATORY SIGNAL

The international AI governance stack is converging

The EU AI Act (most obligations apply from August 2026), the NIST AI RMF, and ISO/IEC 42001 are converging on a shared governance vocabulary: risk classification, model validation, continuous monitoring, human oversight. For Indonesian AI deployments serving international audiences or cross-border data flows, alignment to these frameworks complements OJK compliance rather than duplicating it.

EU AI Act + NIST + ISO 42001 — the international frameworks Sprout's practice is aligned against
REGULATORY SIGNAL

EU AI Act penalties set the global compliance-seriousness bar

The EU AI Act's penalty structure (€35M or 7% of global turnover for prohibited AI practices; €15M or 3% for most other violations; €7.5M or 1% for supplying misleading information) sets the global bar for how seriously AI governance compliance must be taken. Even firms operating exclusively in Southeast Asia face indirect exposure through clients or vendors with an EU nexus.

€35M / 7% — maximum EU AI Act penalty, for prohibited AI practices

Need to deploy AI into a regulated environment, and pass audit?

Tell us the engagement: the AI use case, the regulated surface (OJK / BI / BPJPH / SATUSEHAT / cross-border), and the risk classification. We'll scope it with the OJK role structure, validation harness, and human-oversight design as requirements, not add-ons. Audit-ready is the default, not the upgrade tier.

Start a project