Data Engineer

Building reliable data pipelines.

I design and operate scalable data infrastructure — ETL/ELT pipelines, warehouses, and orchestration — so teams can trust their data.

About

Who I am

Replace this with your background — years of experience, domains you've worked in (finance, healthcare, SaaS), and what drives your work in data engineering.

Highlight certifications, education, or notable achievements. Keep it concise and focused on outcomes: latency reduced, cost saved, pipelines migrated, teams enabled.

Skills

Toolbox

Languages

  • Python
  • SQL
  • Scala

Data Platforms

  • Snowflake
  • BigQuery
  • Databricks

Orchestration & IaC

  • Airflow
  • dbt
  • Terraform

Portfolio

Data Engineering Projects

Production-style pipelines, warehouses, and tooling. Each card links to the GitHub repository — update titles, descriptions, tech tags, and URLs below.

ETL Pipeline — Project Name

Brief description: e.g. batch ingestion from APIs into a raw layer, dbt transformations, and scheduled Airflow DAGs with data quality checks.

  • Python
  • Airflow
  • PostgreSQL
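To make the "data quality checks" concrete, here is a minimal sketch of the kind of check a final Airflow task might run after the dbt transformations. Everything here (field names, thresholds, the sample batch) is illustrative, not taken from the actual repository:

```python
# Illustrative post-load data quality check, as might be called from an
# Airflow task after dbt runs. Field names and thresholds are hypothetical.

def check_quality(rows, required_fields, min_rows=1):
    """Return a list of human-readable failures; an empty list means the batch passes."""
    failures = []
    if len(rows) < min_rows:
        failures.append(f"expected at least {min_rows} rows, got {len(rows)}")
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                failures.append(f"row {i}: missing required field '{field}'")
    return failures

# Example: one bad row out of two fails the null check
batch = [
    {"order_id": 1, "amount": 9.99},
    {"order_id": 2, "amount": None},
]
print(check_quality(batch, required_fields=["order_id", "amount"]))
# → ["row 1: missing required field 'amount'"]
```

In a real DAG, a non-empty failure list would typically raise an exception so the task (and downstream tasks) fail loudly rather than publishing bad data.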

Lakehouse — Another Project

Brief description: e.g. Delta Lake on S3, Spark jobs, and medallion architecture with CI/CD for infrastructure and dbt models.

  • Spark
  • dbt
  • AWS
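The medallion idea (bronze → silver → gold) can be sketched in plain Python; a real lakehouse would do this with Spark DataFrames over Delta tables on S3, and all record and field names below are hypothetical:

```python
# Medallion layering sketched in plain Python. In the actual project this
# would be Spark jobs over Delta tables; names here are illustrative only.

raw_events = [  # bronze: data exactly as landed, including a malformed record
    {"user": "a", "amount": "10.5"},
    {"user": "b", "amount": "not-a-number"},
    {"user": "a", "amount": "4.5"},
]

def to_silver(events):
    """Silver: cleaned, typed records; malformed rows are dropped."""
    silver = []
    for e in events:
        try:
            silver.append({"user": e["user"], "amount": float(e["amount"])})
        except (KeyError, ValueError):
            continue  # quarantine/drop rows that fail parsing
    return silver

def to_gold(silver):
    """Gold: business-level aggregate, e.g. total amount per user."""
    totals = {}
    for e in silver:
        totals[e["user"]] = totals.get(e["user"], 0.0) + e["amount"]
    return totals

print(to_gold(to_silver(raw_events)))  # → {'a': 15.0}
```

The point of the layering is that each table is rebuildable from the one below it, so a bug in a transformation never forces re-ingesting the raw data.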

More repos on GitHub

Contact

Let's connect

Open to data engineering roles, consulting, and collaboration on pipeline and platform work.