Big Data Engineer (Azure) - Chicago
Associate / Junior | Hybrid
EXCLUSIVELY ON MEYTIER
You're in luck: this opportunity is available exclusively through Meytier.
About Meytier Premier Employers
Premier Employers are industry leaders that have forged exclusive partnerships with Meytier to forward our shared mission of offsetting bias in hiring. Their openings are visible only to members of the Meytier community.
The Big Data Azure Engineer will be responsible for architecting, designing, and implementing advanced analytics capabilities. These capabilities include batch and streaming analytics, machine learning models, natural language generation, and other emerging technologies in the field of advanced analytics.
Requirements
- Bachelor's degree in Computer Science or a similar field
- 4+ years of experience with traditional and modern big data technologies (HDFS, Hadoop, Hive, Pig, Sqoop, Kafka, Apache Spark, HBase, Oozie, NoSQL databases, PostgreSQL, Git, Python, REST APIs, Snowflake, etc.)
- Experience in Java, Python, or Scala
- Experience extracting, querying, and joining large data sets at scale
- Experience building data platforms on the Azure stack
- Experience building data ingestion pipelines with Azure Data Factory for structured and unstructured data
- Strong knowledge of Azure Storage, including Data Lake Storage Gen1 and Gen2
- Experience harmonizing raw data into a consumer-friendly format with Azure Databricks (see the sketch below)
- Knowledge of Azure networking, security, key vaults, etc.
- Experience in data wrangling, advanced analytic modeling, and AI/ML capabilities is preferred
- Experience using Snowflake to build data marts over data residing in Azure storage is a plus
- Knowledge of SAS, Teradata, Oracle, or other databases is a plus
- Exposure to R and ML technologies is a plus
- Strong communication and organizational skills
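For illustration, a minimal PySpark sketch of the kind of Databricks harmonization work described above, assuming a hypothetical ADLS Gen2 storage account, container paths, and column names (none of these names come from the posting itself):

# Minimal sketch for a Databricks notebook, where `spark` is predefined.
# Reads raw CSV files from an assumed ADLS Gen2 "raw" container and writes
# a harmonized Delta table to a "curated" container.
from pyspark.sql import functions as F

raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/"
curated_path = "abfss://curated@examplestorage.dfs.core.windows.net/sales_harmonized/"

raw_df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

harmonized_df = (
    raw_df
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))  # normalize date strings
    .withColumn("amount", F.col("amount").cast("double"))             # enforce a numeric type
    .dropDuplicates(["order_id"])                                     # de-duplicate on a business key
)

(
    harmonized_df.write
    .format("delta")
    .mode("overwrite")
    .save(curated_path)
)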
{"group":"Organization","title":"Big Data Engineer (Azure) - Chicago","zohoId":"557706000009890106","endDate":"2022-11-15T00:00:00.000Z","jobType":"Full Time","job_url":"232-tiger-analytics-big-data-engineer-azure-chicago","location":[{"lat":41.88194444,"lon":-87.62777778,"city":"Chicago","text":"Chicago, IL 60601, USA","state":"Illinois","country":"","zipCode":"60601","is_state":false,"is_country":false,"state_code":"IL","countryCode":"","isLocationSet":true,"loc_h3_hex_res4":"842664dffffffff","isLocationResolved":false,"is_address_available_from_parser":true}],"maxSalary":null,"minSalary":null,"startDate":"2022-09-20T00:00:00.000Z","onBehalfOf":"59","description":"<span id=\"spandesc\"><div style=\"margin: 0px 0px 40px; padding: 0px; border: 0px; font-stretch: inherit; line-height: inherit; vertical-align: baseline; orphans: 2; text-indent: 0px; widows: 2\" class=\"job-preview-styles__description--2BkR3\"><div style=\"margin: 0px 0px 1em; padding: 0px; border: 0px; font-stretch: inherit; line-height: 1.5; vertical-align: baseline\"><p style=\"margin: 0px 0px 1em; padding: 0px; border: 0px; font-stretch: inherit; line-height: 1.5; vertical-align: baseline; text-align: justify\" class=\"align-justify\"><span class=\"colour\" style=\"\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">The Big Data Azure Engineer will be responsible for architecting, designing, and implementing advanced analytics capabilities. These capabilities include batch and streaming analytics, machine learning models, natural language generation, and other emerging technologies in the field of advanced analytics.</span></span></span><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></p></div></div><div style=\"margin: 0px 0px 40px; padding: 0px; border: 0px; font-stretch: inherit; line-height: inherit; vertical-align: baseline; orphans: 2; text-indent: 0px; widows: 2\" class=\"job-preview-styles__requirements--2kg4_\"><h4 style=\"margin: 0px 0px 8px; padding: 0px; border: 0px; font-stretch: inherit; line-height: normal; vertical-align: baseline; text-align: justify\" class=\"align-justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Requirements</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></h4><div style=\"margin: 0px 0px 1em; padding: 0px; border: 0px; font-stretch: inherit; line-height: 1.5; vertical-align: baseline\"><ul><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Bachelor’s degree in Computer Science or similar field</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">4+ years of experience in traditional and modern Big Data technologies (HDFS, Hadoop, Hive, Pig, Sqoop, Kafka, Apache Spark, hBase, Oozie, No SQL databases, PostgreSQL, GIT, Python, REST API, Snowflake, etc.)</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, 
sans-serif\">Experience in Java/Python/Scala</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Experience extracting/querying/joining large data sets at scale</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Experience building data platforms using Azure stack</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Experience building data ingestion pipelines using Azure Data Factory to ingest structured and unstructured data</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Strong knowledge on Azure Storage schematics such as Gen1 and Gen2</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Experience in harmonizing raw data into a consumer-friendly format using Azure Databricks</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Knowledge of Azure networking, security, key vaults, etc.</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Experience in data wrangling, advanced analytic modeling, and AI/ML capabilities is preferred</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Experience utilizing Snowflake to build data marts with the data residing in Azure storage is a plus</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Knowledge of SAS, Teradata, Oracle, or other databases a plus</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Exposure with R and ML technologies a plus</span><span class=\"font\" style=\"font-family: verdana, sans-serif\"><br /></span></span></li><li class=\"align-justify\" 
style=\"text-align: justify\"><span class=\"size\" style=\"font-size: 13px\"><span class=\"font\" style=\"font-family: verdana, sans-serif\">Strong communication and organizational skills</span></span><br /></li></ul></div></div><div class=\"align-justify\" style=\"text-align: justify\"><br /></div></span><br />","isHybridJob":true,"isRemoteJob":false,"noOfOpenings":1,"maxExperience":"7","minExperience":"4","isOnPremiseJob":false,"onBehalfOfName":"Tiger Analytics","otherlocations":[],"zohoCurrencyIn":"USD","experienceLevel":"Associate / Junior","zohoNoOfOpenings":{"noOfOpenings":1,"fulfillmentIds":["557706000009890106-1"]},"otherJobReference":null,"sharpenedJobTitle":"Big Data Engineer (Azure) - Chicago","portalLocationDisplay":"Chicago IL","expertise_coreskill_or_product":["Other"],"job_id":"232"}