Job Title: AWS Data Engineer
Duration: 7 months (Possibility of extension)
Location: Hybrid (Newark, NJ)
Job Summary:
We are seeking an experienced AWS Data Engineer to join our Data Engineering team. As a technical leader, you will be responsible for architecting, implementing, and managing scalable data solutions on AWS. You will provide technical guidance to a team of data engineers, collaborate with cross-functional partners, and ensure best practices in cloud data engineering.
Key Responsibilities:
- Lead the design, development, and optimization of large-scale, reliable, and secure data pipelines and data lake architecture on AWS.
- Architect and implement end-to-end data solutions, including data ingestion, storage, transformation, and analytics using AWS services (Glue, Redshift, S3, Lambda, EMR, Kinesis, Athena, RDS, etc.).
- Mentor and guide a team of data engineers, conducting code reviews and fostering best practices in data engineering and cloud architecture.
- Collaborate with data scientists, analysts, and business stakeholders to translate requirements into scalable and maintainable solutions.
- Oversee migration of data from legacy systems to AWS-based data lakes and data warehouses.
- Develop and enforce standards for data quality, security, and governance.
- Drive the adoption of DevOps, CI/CD, and infrastructure-as-code practices within the data engineering team.
- Ensure solutions are cost-effective, performant, and aligned with enterprise data strategy.
- Stay current with advancements in AWS technologies and data engineering trends, and evaluate new tools and frameworks for potential adoption.
- Troubleshoot complex data issues and provide technical leadership in problem resolution.
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in data engineering, with at least 3 years in a technical leadership or lead engineer role.
- Extensive hands-on experience with AWS data services (Glue, Redshift, S3, Lambda, EMR/Spark, Kinesis, Athena, RDS, API Gateway, etc.).
- Proficiency in Python and SQL; experience with shell scripting and Scala is a plus.
- Strong experience designing, implementing, and managing data lakes, data warehouses, and data ingestion pipelines on AWS.
- Proven experience with ETL/ELT processes, data modeling, and big data frameworks.
- Demonstrated ability to lead, mentor, and coach engineers in a collaborative team environment.
- Experience with DevOps practices, CI/CD pipelines, and infrastructure-as-code tools (e.g., CloudFormation, Terraform).
- Excellent problem-solving, communication, and organizational skills.
Preferred Qualifications:
- AWS Solutions Architect or AWS Data Engineer certification.
- Experience with real-time streaming technologies.
- Knowledge of data governance, compliance, and security best practices.
- Familiarity with Lakehouse architecture and modern data platforms.