
Data Engineer – Snowflake

Expired on: Apr 30, 2024

 

We are seeking a dynamic and experienced Data Engineer with over 5 years of experience in SDLC and data analysis. The ideal candidate will have expertise in RDBMS (Oracle, DB2), NoSQL databases (HBase, Cassandra, MongoDB, Amazon DynamoDB), Snowflake, and Amazon Redshift. The candidate should have experience implementing data pipelines, ETL processes, and data pipeline automation using AWS services such as S3, Glue, Athena, and EMR Serverless. Experience building business intelligence solutions with Tableau, Power BI, and TIBCO Jaspersoft is preferred.

 

Responsibilities:

 

– Design, develop, and implement data pipelines from various sources into Snowflake and Amazon Redshift

– Create ETL pipelines using Snowpipe to automate loading from Amazon S3 into Snowflake (see the illustrative sketch after this list)

– Implement data pipelines using AWS services such as S3, Glue, Athena, and EMR Serverless

– Design and develop business intelligence solutions using Tableau, Power BI, and TIBCO Jaspersoft

– Manage CI/CD pipelines through Jenkins and automate manual tasks using shell scripting and AWS CodeCommit

– Build ETL processes using Informatica mappings, sessions, and workflows

– Implement business logic in the healthcare domain and protect/de-identify PHI data using the Data Masking transformation in the Informatica ETL tool

– Use Apache Spark, via PySpark, for advanced processing such as text analytics (see the PySpark sketch after this list)

– Work through all stages of the SDLC and the Agile development model, from requirements gathering to deployment and production support

– Utilize Hadoop ecosystem tools including Pig, Hive, MapReduce, Sqoop, Spark, and Oozie, along with Python

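For illustration only, here is a minimal sketch of the Snowpipe responsibility above: a Python script that issues the stage and pipe DDL through the snowflake-connector-python package. All names here (the S3 bucket, the my_s3_int storage integration, the raw_events target table, and the connection parameters) are hypothetical placeholders, and this shows the general auto-ingest pattern rather than a definitive implementation.

import snowflake.connector

# Connection parameters are placeholders; in practice, pull credentials
# from a secrets manager rather than hard-coding them.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

ddl_statements = [
    # External stage pointing at the (hypothetical) S3 landing prefix.
    """
    CREATE OR REPLACE STAGE raw_stage
      URL = 's3://my-bucket/raw/'
      STORAGE_INTEGRATION = my_s3_int
      FILE_FORMAT = (TYPE = 'JSON')
    """,
    # Auto-ingest pipe: files arriving on the stage are copied into the
    # target table as S3 event notifications come in.
    """
    CREATE OR REPLACE PIPE raw_pipe AUTO_INGEST = TRUE AS
      COPY INTO raw_events FROM @raw_stage
    """,
]

cur = conn.cursor()
try:
    for stmt in ddl_statements:
        cur.execute(stmt)
    # SHOW PIPES exposes the notification_channel (an SQS ARN) that the
    # S3 bucket's event notifications must be pointed at.
    cur.execute("SHOW PIPES LIKE 'raw_pipe'")
    for row in cur.fetchall():
        print(row)
finally:
    cur.close()
    conn.close()
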
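Similarly, a minimal PySpark sketch of the text-processing responsibility: tokenizing free-text records and counting term frequencies, a toy stand-in for text analytics. The input path is again a hypothetical placeholder.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("text-analytics-sketch").getOrCreate()

# Read newline-delimited text files from a (hypothetical) S3 prefix;
# each input line becomes one row in the 'value' column.
lines = spark.read.text("s3a://my-bucket/notes/")

# Lowercase, split on non-word characters, explode into one row per
# token, then count occurrences of each token.
word_counts = (
    lines.select(F.explode(F.split(F.lower(F.col("value")), r"\W+")).alias("word"))
    .where(F.col("word") != "")
    .groupBy("word")
    .count()
    .orderBy(F.desc("count"))
)

word_counts.show(20, truncate=False)
spark.stop()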

 

Requirements:

 

– Master's degree in Computer Science (or a similar field)

– 8 years of experience in SDLC and data analysis

– Experience with AWS cloud technologies, including VPC, EC2, ELB, ASG, S3, the AWS CLI, and DynamoDB

– Strong understanding of Hadoop architecture, Cassandra, and MongoDB

– Proficiency with complex SQL queries, Oracle PL/SQL, and data modeling using ER/Studio

– Experience with Snowflake (mandatory) and Amazon Redshift

 

We offer a full-time position with compensation of $65 per hour, as well as opportunities for growth and professional development.

 

Apply Now!

Job Type: Full Time
Job Location: Remote