AWS Data Engineer
Job Description
Location: Franklin, TN (Day 1 onsite)
Client: Nissan
First priority is local, independent visa candidates; H1B and H4-EAD are also fine.
Looking for 14+ profiles.
Key Responsibilities:
Design, build, and maintain scalable data pipelines on AWS using services such as S3, Glue, Lambda, and Step Functions.
Develop and optimize ETL/ELT workflows, ensuring high data quality, consistency, and availability across analytical and operational systems.
Work extensively with AWS ECS/ECR to containerize, orchestrate, and deploy data processing and ML/GenAI workloads.
Build secure and scalable APIs using API Gateway, Lambda, and container-based services for consumption by internal and external applications.
Architect and manage NoSQL workloads on DynamoDB, including partitioning strategies, performance tuning, and data modeling.
Implement secure authentication and authorization mechanisms using AWS Cognito.
Configure and manage Route 53, DNS routing, SSL certificates, and integrations for multi-service distributed systems.
Utilize CloudWatch Logs and Metrics for monitoring, troubleshooting, performance tuning, and building automated alerts.
Implement AWS Secrets Manager for secure credential and key management across the data ecosystem.
Design, manage, and optimize AWS OpenSearch clusters for indexing and querying structured and unstructured data.
Deploy and manage static websites using S3, CloudFront, Route 53, and related AWS services.
Integrate Generative AI capabilities into data workflows using Amazon Bedrock, AgentCore, custom LLMs, and embeddings.
Build RAG (Retrieval-Augmented Generation) pipelines leveraging vector databases (e.g., OpenSearch vector engine) to enhance AI-driven search and automation.
Develop data-driven solutions powering chatbots, summarization tools, document processing pipelines, and enterprise AI applications.
Collaborate closely with data scientists, ML engineers, and application developers to bring AI/ML and GenAI features into production.
Ensure compliance with data governance, security, CI/CD, and MLOps best practices across all environments.
Required Skills & Experience
7–12 years of total IT experience, including strong hands-on experience as an AWS Data Engineer.
Expert-level proficiency with AWS services:
ECS, ECR
S3, Glue, Lambda, Step Functions
API Gateway, CloudFront
DynamoDB, Cognito
Route 53, CloudWatch
Secrets Manager
AWS OpenSearch (including vector search capabilities)
Solid experience designing and operating containerized workloads using Docker + ECS/Fargate.
Strong Python skills for building data pipelines, automation, and backend integrations.
Experience integrating and implementing Generative AI solutions using:
Amazon Bedrock, Bedrock AgentCore
LLM orchestration tools (LangChain, LlamaIndex)
RAG architectures
Embeddings and vector search
Deep understanding of data modeling, distributed systems, partitioning, performance optimization, and streaming/batch data processing.
Prior experience deploying and managing static web applications on AWS (S3 + CloudFront + Route 53).
Familiarity with CI/CD pipelines using CodePipeline, GitHub Actions, or Jenkins.
Strong understanding of IAM policies, roles, cross-account access, and cloud security best practices.
Excellent communication and cross-team collaboration skills.
Preferred Qualifications
Bachelor’s degree in Computer Science, Engineering, or related field (Required)
Master’s degree in Data Engineering, Cloud Computing, or AI (Preferred)
AWS Certifications such as:
AWS Certified Data Engineer – Associate
AWS Certified Solutions Architect
AWS Certified Machine Learning – Specialty
Experience with AI/ML model deployment using SageMaker or Bedrock Agents