Join our diverse and inclusive team, where you will feel valued and motivated to contribute your unique skills and experience. Exavalu offers a permanent remote working model, as we believe in going where the right talent is.
Key Responsibilities:
Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools: Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue, Snowflake.
Analyze, re-architect, and re-platform on-premises data warehouses to data platforms on the AWS cloud using AWS or third-party services.
Design and build production data pipelines from ingestion to consumption within a big data architecture, using Java, Python, or Scala.
Design and implement data engineering, ingestion and curation functions on AWS cloud using AWS native or custom programming.
Perform detailed assessments of current-state data platforms and create an appropriate transition path to the AWS cloud.
Hands-on AWS data experience: Redshift, Aurora, Glue, Lambda, etc.
Our ideal candidate has strong AWS data architecture and hands-on technical experience, preferably gained at another systems integrator (SI).
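To illustrate the ingestion-to-curation pipeline work described above, here is a minimal, hypothetical sketch of the kind of record-curation step a Glue or Lambda job might apply before loading to Redshift. All field names and rules here are invented for illustration; they are not taken from the posting.

```python
import json

def curate_record(raw: dict) -> dict:
    """Normalize one raw event into the warehouse schema.
    (Hypothetical curation rule, for illustration only.)"""
    return {
        "customer_id": str(raw["customer_id"]).strip(),
        "amount_usd": round(float(raw["amount"]), 2),
        "event_date": raw["timestamp"][:10],  # keep YYYY-MM-DD
    }

def curate_batch(lines):
    """Ingestion -> curation: parse JSON lines, dropping malformed rows.
    In a real job, dropped rows would be routed to a quarantine location."""
    curated = []
    for line in lines:
        try:
            curated.append(curate_record(json.loads(line)))
        except (ValueError, KeyError):
            continue
    return curated
```

In a production pipeline, a function like `curate_batch` would sit between an ingestion source (e.g., Kinesis or S3) and a consumption layer (e.g., Redshift or Athena), with the surrounding orchestration handled by Glue or Lambda.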
Desired Profile:
Bachelor’s degree in Computer Science, Information Technology, or another relevant field
Experience with any of the following: AWS Athena, Glue, PySpark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Snowflake
Proficient in AWS Redshift, S3, Glue, Athena, DynamoDB
10+ years of total experience in data warehousing/analytics/reporting, including a minimum of 4-5 years with AWS