Senior Associate Data Engineering

Mid / Senior | In Office


Publicis Sapient is looking for a Senior Data Engineer to be part of our team of top-notch technologists. You will lead and deliver technical solutions for large-scale digital transformation projects. Working with the latest data technologies in the industry, you will be instrumental in helping our clients evolve for a more digital future.


Key Responsibilities:

  • Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform their business
  • Lead, design, develop, and deliver large-scale data systems, data processing, and data transformation projects that deliver business value for clients
  • Automate data platform operations and manage post-production systems and processes
  • Conduct technical feasibility assessments and provide project estimates for the design and development of the solution
  • Provide technical inputs to agile processes, such as epic, story, and task definition, to resolve issues and remove barriers throughout the lifecycle of client engagements
  • Create and maintain infrastructure-as-code for cloud, on-prem, and hybrid environments using tools such as Terraform, CloudFormation, Azure Resource Manager, Helm, and Google Cloud Deployment Manager (a stand-in sketch follows this list)
  • Mentor, support, and grow junior team members
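
The infrastructure-as-code tools named above use their own templating languages rather than Python, so as a stand-in, here is a minimal sketch using the AWS CDK for Python instead; the stack, bucket name, and settings are illustrative assumptions, not this role's actual tooling.

```python
# Minimal AWS CDK (v2, Python) sketch: one stack that provisions a versioned
# S3 bucket for a raw data landing zone. All names are hypothetical.
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataLakeStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Raw zone bucket; retained on stack deletion so data is not lost.
        s3.Bucket(
            self,
            "RawZone",
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,
        )

app = App()
DataLakeStack(app, "data-lake-dev")
app.synth()  # Emits a CloudFormation template under cdk.out/
```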

Desired Profile:

  • Demonstrable experience with data platforms, including implementation of end-to-end data pipelines
  • Hands-on experience with at least one of the leading public cloud data platforms (Amazon Web Services, Azure, or Google Cloud)
  • Implementation experience with column-oriented database technologies (e.g., BigQuery, Redshift, Vertica), NoSQL database technologies (e.g., DynamoDB, Bigtable, Cosmos DB), and traditional database systems (e.g., SQL Server, Oracle, MySQL)
  • Experience implementing data pipelines for both streaming and batch integrations using tools/frameworks such as Glue ETL, Lambda, Google Cloud Dataflow, Azure Data Factory, Spark, and Spark Streaming (see the first sketch after this list)
  • Ability to handle module- or track-level responsibilities and contribute to tasks “hands-on”
  • Experience in data modeling, warehouse design, and fact/dimension implementations (see the second sketch after this list)
  • Experience working with code repositories and continuous integration
  • Data modeling, querying, and optimization for relational, NoSQL, time-series, and graph databases, as well as data warehouses and data lakes
  • Data processing programming using SQL, dbt, Python, and similar tools
  • Logical programming in Python, Spark, PySpark, Java, JavaScript, and/or Scala
  • Data ingestion, validation, and enrichment pipeline design and implementation
  • Cloud-native data platform design with a focus on streaming and event-driven architectures
  • Test programming using automated testing frameworks, data validation and quality frameworks, and data lineage frameworks (see the third sketch after this list)
  • Metadata definition and management via data catalogs, service catalogs, and stewardship tools such as OpenMetadata, DataHub, Alation, AWS Glue Data Catalog, Google Data Catalog, and similar
  • Code review and mentorship
  • Bachelor’s degree in Computer Science, Engineering, or a related field
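
To make the streaming-integration bullet concrete, here is a minimal PySpark Structured Streaming sketch; the Kafka broker, topic, and event schema are hypothetical, and the same transform logic could run as a batch job by swapping readStream/writeStream for read/write.

```python
# Minimal PySpark Structured Streaming sketch: consume JSON order events from
# Kafka, aggregate revenue per minute, and print to the console. Broker, topic,
# and schema are invented; the Kafka source also requires the
# spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "orders")
       .load())

# Kafka delivers bytes; parse the JSON payload into typed columns.
orders = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
          .select("o.*"))

revenue = (orders
           .withWatermark("event_time", "10 minutes")  # bound state for late events
           .groupBy(F.window("event_time", "1 minute"))
           .agg(F.sum("amount").alias("revenue")))

query = (revenue.writeStream
         .outputMode("update")
         .format("console")
         .start())
query.awaitTermination()
```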
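As a small illustration of the fact/dimension bullet, this sketch builds a toy star schema in an in-memory SQLite database; the tables, columns, and rows are invented for the example.

```python
# Toy star schema: one dimension (dim_customer) and one fact (fact_order),
# joined for a per-segment revenue rollup. Uses the stdlib sqlite3 module;
# all names and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,   -- surrogate key
        customer_id  TEXT NOT NULL,         -- natural/business key
        segment      TEXT NOT NULL
    );
    CREATE TABLE fact_order (
        order_key    INTEGER PRIMARY KEY,
        customer_key INTEGER NOT NULL REFERENCES dim_customer(customer_key),
        order_date   TEXT NOT NULL,
        amount       REAL NOT NULL
    );
""")
conn.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                 [(1, "C-100", "enterprise"), (2, "C-200", "smb")])
conn.executemany("INSERT INTO fact_order VALUES (?, ?, ?, ?)",
                 [(1, 1, "2023-01-05", 900.0), (2, 2, "2023-01-06", 120.0)])

# Typical dimensional query: roll the fact grain up to a dimension attribute.
for segment, revenue in conn.execute("""
        SELECT c.segment, SUM(f.amount)
        FROM fact_order f JOIN dim_customer c USING (customer_key)
        GROUP BY c.segment"""):
    print(segment, revenue)
```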
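For the testing and data-quality bullet, here is a minimal pytest-style sketch; the sample DataFrame and rules are assumptions standing in for a dedicated quality framework such as Great Expectations or dbt tests.

```python
# Minimal data-quality checks written as pytest tests over a pandas DataFrame.
# The frame and rules below are invented stand-ins for a real pipeline.
import pandas as pd

def load_orders() -> pd.DataFrame:
    # In a real pipeline this would read from the warehouse or lake.
    return pd.DataFrame({
        "order_id": ["o1", "o2", "o3"],
        "amount": [10.0, 25.5, 7.25],
        "status": ["paid", "paid", "refunded"],
    })

def test_order_id_unique_and_not_null():
    df = load_orders()
    assert df["order_id"].notna().all()
    assert df["order_id"].is_unique

def test_amount_non_negative():
    assert (load_orders()["amount"] >= 0).all()

def test_status_in_allowed_set():
    allowed = {"paid", "refunded", "cancelled"}
    assert set(load_orders()["status"]).issubset(allowed)
```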

Compensation Range: $108k-$210k
