Data Engineer
Job Description
• Design, develop, and optimize ETL/ELT pipelines in Databricks
• Build real-time & batch processing solutions using Apache Spark & Delta Lake
• Develop scalable data transformation workflows using PySpark/Scala
• Ensure data quality, tune performance, and optimize costs
• Implement CI/CD pipelines (Terraform, GitHub Actions, Azure DevOps)
• Monitor and optimize Databricks clusters and workflows
• Collaborate with cross-functional teams to deliver impactful data solutions
What We’re Looking For:
• Strong hands-on experience with Databricks (AWS), Delta Lake & Medallion architecture
• Proficiency in PySpark and SQL
• Experience with Kafka, APIs, and streaming data pipelines
• Knowledge of DevOps & CI/CD practices
• Understanding of data governance, security, and access control