Data Engineer

Data Engineering

The person in this role will work with project teams developing a modern data platform in a cloud environment for global brands. They will work in an international environment, specialize in BI & Big Data cloud architecture and the latest technologies in that area, and promote the principles of the DataOps manifesto.

 

Responsibilities: 

  • Building ETL/ELT pipelines of data from various sources using SQL/Python/Spark  

  • Ensuring that data is modelled and processed according to the architecture and to both functional and non-functional requirements 

  • Consistently understanding and implementing required development guidelines, design standards, and best practices 

  • Working cross-functionally with enterprise architects, data management teams, cloud integration architects, information security teams, and platform teams 

  • Delivering the right solution architecture, automation, and technology choices, starting from the experimentation and proof-of-concept phases of new analytical models that generate insights and answer business questions 

  • Suggesting and implementing architecture improvements

 

Requirements: 

  • Very good knowledge of a SQL dialect (T-SQL, PL/SQL, PostgreSQL, etc.) 

  • Knowledge of a programming language (Python, Scala) 

  • Knowledge of Data Warehouse, Data Lake, Big Data, ETL/ELT issues supported by a minimum of 2 years of professional work experience 

  • Ability to create and orchestrate ETL/ELT processes (ADF, Airflow, Step Functions, Databricks Workflows, dbt, Glue, Snowpipe) 

  • Ability to model data (Star schema, Lakehouse, Data Vault, Data Mesh) 

  • Practical knowledge of various relational as well as non-relational database engines in the cloud (Amazon Aurora, Amazon Redshift, Azure Synapse, Azure Cosmos DB, Databricks, Snowflake, etc.) 

  • Analytical approach to problem solving - the ability to break a business requirement down into smaller elements (feature -> user story -> task) along with an accurate estimate of the time needed for each task 

  • Very good knowledge of English (minimum B2) 

  • Independence, efficiency, and a sense of responsibility for assigned tasks 

 

Nice to have: 

  • Hands-on experience with data services offered by AWS or Azure cloud 

  • Knowledge of Apache Spark - work experience with PySpark (e.g., AWS Glue, AWS EMR, Databricks, Azure Synapse Spark Pools) 

  • Knowledge of Hive Metastore and data catalog tools (e.g., AWS Glue Data Catalog, Databricks Unity Catalog, Apache NiFi, Presto, Apache Atlas, Hortonworks DataPlane, Cloudera Navigator) 

  • Knowledge of Git repositories and CI/CD services 

  • Knowledge of ticket management tools (JIRA, ClickUp) and knowledge bases (Confluence, SharePoint) 

  • Specialized certifications in the area of data management (e.g., from AWS, Microsoft, Databricks, Snowflake) 


We offer: 

  • Global projects in multiple clouds - we work with clients from all over the world using modern cloud technologies 

  • Certification reimbursement - we fund exams and certifications from Microsoft, AWS, Databricks, and Snowflake 

  • Time to learn - 60 paid learning hours per year 

  • Flexible approach - you can work from home or meet at our offices 

  • Personalized benefits - medical care, subsidized sports packages, language tuition, a new-employee referral bonus (up to PLN 15,000), as well as annual and media bonuses 
