Built by the co-creators of OpenLineage and Marquez

Observability reinvented for Spark & Iceberg

Skip the dashboards. We build a reusable telemetry context layer where Data Reliability Engineers and agents can solve data incidents together on day one.

No credit card required
brew install OleanderHQ/tap/oleander-cli

End-to-end data operations with full context

We unify compute, storage, lineage, and observability so your team can move faster with shared context at every stage. Use any or all of our tools.

Core
[01] Compute

Spark compute with context

Build. Deploy. Debug. Let us deploy your Spark jobs and handle production monitoring, running investigations as soon as things go wrong.

process_pending_invoices.py
[02] Storage

Lake storage & query engine

Work on your own data with an integrated Iceberg storage and query layer, while keeping lineage and observability attached to every query. You can also bring your own Iceberg catalog along for the ride.

flower_power.sql
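As a rough configuration sketch, attaching a bring-your-own Iceberg catalog to a Spark session looks something like the following. The catalog name (`lake`), REST endpoint, and table names are illustrative placeholders, not part of the product:

```python
from pyspark.sql import SparkSession

# Illustrative only: catalog name, URI, and table are hypothetical.
spark = (
    SparkSession.builder
    .appName("flower_power")
    # Enable Iceberg's SQL extensions (MERGE INTO, time travel, etc.)
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # Register your own Iceberg REST catalog under the name "lake"
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "rest")
    .config("spark.sql.catalog.lake.uri", "https://catalog.example.com")
    .getOrCreate()
)

# Queries against the attached catalog carry lineage and observability along.
spark.sql("SELECT * FROM lake.db.flowers LIMIT 10").show()
```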
Observability
[03] Incidents

Automated incident investigations

Anomaly investigation starts the moment alerts fire. Pull deep context from your telemetry lake to pinpoint root causes before they become downstream impact.

Incidents/ spark job rows written dropped 15x
Investigations/ #381
    [04] Root cause

    Full context-aware root cause analysis

    Skip dashboard hopping. Debug production issues using an independent metadata context layer and trace root causes across your data infrastructure on day zero.

    Terminal
    [05] Telemetry

    Query telemetry data with SQL

    Correlate metrics, logs, traces, and lineage metadata instantly so you can understand the intent behind every deployed pipeline with zero context switching.

    pending_invoices.sql
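To make the idea concrete, here is a minimal sketch of the kind of correlation query this enables. The telemetry-lake schema (`run_metrics`, `lineage`) and the run IDs are hypothetical, and SQLite stands in for the real query engine purely for illustration:

```python
import sqlite3

# Hypothetical telemetry-lake tables: per-run metrics and lineage edges.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE run_metrics (run_id TEXT, job TEXT, rows_written INTEGER);
CREATE TABLE lineage (run_id TEXT, output_dataset TEXT);
INSERT INTO run_metrics VALUES
  ('i7k2n', 'process_pending_invoices', 1200),
  ('a1b2c', 'process_pending_invoices', 18000);
INSERT INTO lineage VALUES
  ('i7k2n', 'finance.billing.pending_invoices'),
  ('a1b2c', 'finance.billing.pending_invoices');
""")

# Correlate metrics with lineage: find low-volume runs and what they wrote.
rows = con.execute("""
SELECT m.run_id, m.rows_written, l.output_dataset
FROM run_metrics m
JOIN lineage l ON l.run_id = m.run_id
WHERE m.rows_written < 0.5 * (SELECT AVG(rows_written) FROM run_metrics)
""").fetchall()
print(rows)  # → [('i7k2n', 1200, 'finance.billing.pending_invoices')]
```

One SQL statement joins a run-level metric to the dataset it produced, which is the correlation that would otherwise require hopping between a metrics dashboard and a lineage UI.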
    [06] Alerting

    Quick, smart alerting with incident triage

    Reduce on-call burden. Every alert is paired with a detailed knowledge graph of your data infrastructure so you understand how everything fits together, giving your team shared context to solve incidents faster.

    @alerting Investigate new production alerts and generate triage context with downstream blast radius and likely remediation steps.

    Done · Thought for 5s

    Active Alert

    severity: P1 high-impact anomaly
    pipeline: finance.billing.process_pending_invoices
    signal: rows_written down 94% vs 7-day avg
    started_at: 2025-01-28 10:03:12 UTC

    Triage Context

    blast_radius: 4 downstream models + 2 dashboards
    linked_runs: 3 upstream runs in last 30 minutes
    ownership: Billing Data Platform
    recommended_next_step: rollback run i7k2n + replay window
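A blast radius like the one above is a graph walk over lineage edges. The sketch below shows the idea on a hypothetical lineage graph; the dataset names and edges are invented for illustration and do not reflect any real deployment:

```python
from collections import deque

# Hypothetical lineage edges: dataset -> its direct downstream consumers.
DOWNSTREAM = {
    "finance.billing.pending_invoices": ["finance.billing.invoice_summary",
                                         "dash.revenue_overview"],
    "finance.billing.invoice_summary": ["finance.reporting.monthly_close",
                                        "dash.ar_aging"],
}

def blast_radius(dataset: str) -> set[str]:
    """Breadth-first walk over lineage edges to collect everything downstream."""
    seen, queue = set(), deque([dataset])
    while queue:
        node = queue.popleft()
        for child in DOWNSTREAM.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

impacted = blast_radius("finance.billing.pending_invoices")
print(sorted(impacted))  # every model and dashboard reachable from the alert
```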
    [07] Insights

    The context graph for your data infrastructure

    Every run, commit, deployment and dataset is connected. Search it. Trace it. Understand it. Share it.

    @insights Analyze the last 30 days of finance.billing.process_pending_invoices. Investigate the 15x drop in output volume and map the downstream impact.

    Done · Thought for 5s

    Deployment & Execution

    activity: 28 days active across 45 runs
    outages: 3 major outages (#381, #394, #412)
    data: 12.5TB in; 2.1TB out with drift
    uptime: 99.2% uptime vs. 15x volume drop

    Code & Schema Evolution

    commits: 18 commits linked to active runs
    schema_changes: 4 schema migrations
    files_touched: 12 files touched in spark/jobs/finance/
    investigations: 7 automated investigations triggered
    Compatibility
    [08] Spark

    Superb Spark compatibility

    Databricks and Amazon EMR are supported today. Built on open standards, our integrations keep pace with your continuously evolving data stack.

    Platform | Status
    Databricks | supported
    Amazon EMR | supported
    Amazon SageMaker AI | supported
    AWS Glue | supported
    Google Cloud Dataproc | supported
    Jupyter Notebook | supported
    MLflow | coming soon
    Integrations

    Production intelligence, built on open standards. OpenTelemetry (OTel) captures execution traces for your Spark jobs, while OpenLineage maps your high-level data flow context.


    Subscribe to our newsletter