Senior Associate Data Engineering

Mid / Senior | In Office


Publicis Sapient is looking for a Senior Data Engineer to be part of our team of top-notch technologists. You will lead and deliver technical solutions for large-scale digital transformation projects. Working with the latest data technologies in the industry, you will be instrumental in helping our clients evolve for a more digital future.


Key Responsibilities:

  • Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business
  • Lead, design, develop and deliver large-scale data systems, data processing and data transformation projects that deliver business value for clients
  • Automate data platform operations and manage the post-production system and processes
  • Conduct technical feasibility assessments and provide project estimates for the design and development of the solution
  • Provide technical inputs to agile processes, such as epic, story, and task definition to resolve issues and remove barriers throughout the lifecycle of client engagements
  • Create and maintain infrastructure-as-code for cloud, on-prem, and hybrid environments using tools such as Terraform, CloudFormation, Azure Resource Manager, Helm, and Google Cloud Deployment Manager
  • Mentor and grow junior team members

Desired Profile:

  • Demonstrable experience with data platforms, including implementation of end-to-end data pipelines
  • Good communication skills and willingness to work as part of a team
  • Hands-on experience with at least one of the leading public cloud data platforms (Amazon Web Services, Azure or Google Cloud)
  • Implementation experience with column-oriented database technologies (e.g., BigQuery, Redshift, Vertica), NoSQL database technologies (e.g., DynamoDB, Bigtable, Cosmos DB) and traditional database systems (e.g., SQL Server, Oracle, MySQL)
  • Experience implementing data pipelines for both streaming and batch integrations using tools/frameworks such as Glue ETL, Lambda, Google Cloud Dataflow, Azure Data Factory, Spark, and Spark Streaming
  • Ability to handle module- or track-level responsibilities and contribute to tasks hands-on
  • Experience in data modeling, warehouse design and fact/dimension implementations
  • Experience working with code repositories and continuous integration
  • Developer certification in one or more of the major cloud platforms (AWS, Google Cloud, or Azure)
  • Understanding of development and project methodologies
  • Willingness to travel
