
📣 Tuesday, June 28: After months of hard work, Soda Core + SodaCL are now generally available! 🎉 🎉 Check out the Release notes for details.


Soda Core

Data reliability testing for SQL- and Spark-accessible data.

License: Apache 2.0 · Slack


✔ An open-source CLI tool and Python library for data reliability
✔ Compatible with Soda Checks Language (SodaCL) and Soda Cloud
✔ Enables data quality testing both in and out of your pipeline, for data observability, and for data monitoring
✔ Integrated to allow a Soda scan in a data pipeline, or programmatic scans on a time-based schedule

Soda Core is a free, open-source, command-line tool that enables you to use the Soda Checks Language to turn user-defined input into aggregated SQL queries.

When it runs a scan on a dataset, Soda Core executes the checks to find invalid, missing, or unexpected data. When your Soda Checks fail, they surface the data that you defined as “bad”.
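
Because Soda Core is also a Python library, the same scans can be triggered programmatically, for example from an orchestrator on a time-based schedule. The snippet below is a minimal sketch following the programmatic-scan pattern in the Soda Core documentation; the data source name and file paths are placeholders (they refer to the configuration.yml and checks.yml files described under Get started), and the method names should be verified against the version you install.

from soda.scan import Scan

# Point the scan at a data source defined in configuration.yml
scan = Scan()
scan.set_data_source_name("your_datasource")  # placeholder data source name
scan.add_configuration_yaml_file(file_path="configuration.yml")
scan.add_sodacl_yaml_files("checks.yml")

# Execute the checks; the return value is a CLI-style exit code
exit_code = scan.execute()
print(scan.get_logs_text())

# Raise an error if any check failed
scan.assert_no_checks_fail()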

Get started

Soda Core currently supports Amazon Athena, Amazon Redshift, GCP BigQuery, PostgreSQL, Snowflake, and Spark.

Requirements

  • Python 3.8 or greater
  • Pip 21.0 or greater
  1. To get started, use the install command, replacing soda-core-postgres with the install package from the list below that matches your data source. See Install Soda Core.
    pip install soda-core-postgres
  • soda-core-athena
  • soda-core-bigquery
  • soda-core-db2
  • soda-core-postgres
  • soda-core-redshift
  • soda-core-snowflake
  • soda-core-spark-df
  • soda-core-sqlserver
  • soda-core-mysql
  • soda-core-trino
  2. Prepare a configuration.yml file to connect to your data source, then write data quality checks in a checks.yml file. See Configure Soda Core. (A sketch of a configuration.yml follows these steps.)
  3. Run a scan to review checks that passed, failed, or warned during the scan. See Run a Soda Core scan.
    soda scan -d your_datasource -c configuration.yml checks.yml
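
For reference, a minimal configuration.yml for a PostgreSQL data source might look like the sketch below. The data source name, host, and credentials are placeholder assumptions, and the exact keys vary by data source and Soda Core version, so treat this as an illustration and follow Configure Soda Core for your setup.

data_source your_datasource:
  type: postgres
  connection:
    host: localhost
    port: "5432"
    username: your_username   # placeholder
    password: your_password   # placeholder
  database: postgres
  schema: public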

Example checks

# Checks for basic validations
checks for dim_customer:
  - row_count between 10 and 1000
  - missing_count(birth_date) = 0
  - invalid_percent(phone) < 1 %:
      valid format: phone number
  - invalid_count(number_cars_owned) = 0:
      valid min: 1
      valid max: 6
  - duplicate_count(phone) = 0

# Checks for schema changes
checks for dim_product:
  - schema:
      name: Find forbidden, missing, or wrong type
      warn:
        when required column missing: [dealer_price, list_price]
        when forbidden column present: [credit_card]
        when wrong column type:
          standard_cost: money
      fail:
        when forbidden column present: [pii*]
        when wrong column index:
          model_name: 22
# Check for freshness 
  - freshness(start_date) < 1d

# Check for referential integrity
checks for dim_department_group:
  - values in (department_group_name) must exist in dim_employee (department_name)

Documentation

See the Soda documentation for full details on installing Soda Core, configuring data source connections, writing SodaCL checks, and running scans.
