DevOps Data Engineer
Job Description
Hiring: DevOps Data Engineer (Kubernetes | Spark | Python) – Raleigh/Durham, NC (Local Only) 🚨
We are looking for a strong DevOps Data Engineer / Platform Engineer to join our team and work on real-time data processing & Data Lakehouse infrastructure. This is a high-impact role spanning architecture, scalability, and modern cloud-native technologies.
📍 Location: Raleigh–Durham, NC (Local candidates only)
📅 Experience: 8–10 Years
🟢 Visa: GC / USC (C2C & W2)
🔧 Key Responsibilities
Design and build real-time data pipelines & Data Lakehouse platforms
Develop Python-based data processing pipelines
Manage and deploy applications using Kubernetes (EKS), Docker, and Helm
Maintain infrastructure using Ansible / IaC
Work with Spark & Hadoop clusters for large-scale data processing
Monitor, troubleshoot, and optimize containerized applications & clusters
Implement observability, capacity planning, and performance tuning
Drive containerization and modernization initiatives
Collaborate with cross-functional teams and lead technical decisions
✅ Required Skills
Strong experience with Python + Data Pipelines
Hands-on expertise in Kubernetes (K8s), Docker, Helm
Experience with Apache Spark / Hadoop ecosystem
Proficiency in Ansible / Infrastructure as Code (IaC)
Solid understanding of AWS (EKS, ECS, S3)
Experience with monitoring tools (Dynatrace / Prometheus / Grafana)
Knowledge of data lake / lakehouse architecture
⭐ Nice to Have
Experience with Dremio
Exposure to Disaster Recovery (DR) for distributed systems
Strong understanding of cluster management & scaling strategies