Mid-Level Data Engineer With Snowflake
Job Description
We need a mid-level Data Engineer with strong Snowflake experience.
Location: Arden Hills, MN (on-site). Local candidates only.
Work authorization: H1B/H4 EAD only.
As a Mid-Level Data Engineer (4-6 years overall experience), you will play a crucial role in developing data solutions in collaboration with business analysts, guided by lead data engineers. Your responsibilities include ensuring the delivery of consistent, reliable, and sustainable outcomes while contributing to data management practices that enhance the accuracy and trustworthiness of data assets throughout the organization.
Competencies-Skills (Required):
• Experience: 4-5 years of experience in SQL, data engineering, and data modeling techniques.
• Data Warehouse and Data Lake (Snowflake): Minimum 1 year of experience with Snowflake.
• Project Leadership: Experience leading full-lifecycle, large, complex reporting or data engineering efforts.
• Data Pipeline Development: Experience working with heterogeneous datasets and building and optimizing data pipelines, pipeline architectures, and integrated datasets using various data integration technologies (ETL/ELT, data replication/CDC, message-oriented data movement, API design, etc.).
• DevOps and CI/CD: Experience with DevOps, CI/CD pipelines, and automated testing.
• Scripting Languages: Experience with any scripting languages, preferably Python.
• Self-Motivated: Must be self-motivated and able to work with minimal supervision.
• Analytical Skills: Possess strong analytical, problem-solving, and root cause analysis skills.
• Communication: Strong communication and coordination skills.
• Data Security: Working knowledge of data security practices, including encryption, anonymization, and masking.
• Modern Data Architectures: Demonstrated familiarity with modern data architectures, including the data lakehouse, data warehouse, and data lake.
• Data Flow Management: Proficient in data pipeline orchestration techniques for efficient data flow management.
• Metadata Management: Experience in metadata management and employing metadata-driven design principles to enhance data usability and governance.
• Version Control: Strong understanding of version control systems for maintaining code integrity and collaboration.
• Infrastructure Scaling: Knowledge of infrastructure considerations for scaling data pipelines within distributed computing environments.
Competencies-Skills (Preferred):
• Snowflake, Databricks, and Qlik: Experience working with Snowflake (streams and tasks, dynamic tables), Databricks (or Spark), and Qlik technologies (Replicate and Compose). At least 1 year of hands-on Snowflake repository experience.
• Agile Methodologies: Experience working with Agile methodologies.
• Industry Experience: Experience in the manufacturing or agriculture industry.
• Data Vault Method: Knowledge of the data vault method, model, and architecture.
• Innovative Thinking: Ability to think innovatively to implement creative solutions to problems.