Job Description
Data Engineer
This role is part of the Digital Solutions Data Backbone team. The team, including the successful candidate, will create useful data products for use in operations, data science modeling, business intelligence, and reporting across Halliburton Product Service Lines (PSLs).
Overview:
The successful candidate will possess programming skills relevant to the Extract, Transform, and Load (ETL) process in a cloud environment, along with strong problem-solving skills and great attention to detail to ensure we are building a reliable product. The data engineer will also need strong communication skills, with the ability to understand and describe cloud environments effectively.
Responsibilities:
• Use Python or SQL for engineering and scientific calculations
• Understand the different parts of a modern cloud ETL pipeline, specifically in Azure
• Draw conclusions by effectively plotting and analyzing data
• Use Business Intelligence tools such as Power BI to create and modify reports for customers
• Research new cloud technologies and give recommendations
Qualifications:
• Completion of an undergraduate degree in STEM and 2+ years of related experience are required
• Coding knowledge and experience with Python, SQL, Dash/Plotly, Power BI, Databricks, and DAX
• Experience creating and optimizing data pipelines with Azure Data Factory
• Demonstrable experience creating and executing workflows with Azure Logic Apps and Function Apps
• Familiarity with all components of a modern Azure environment
• Proficiency in PySpark or Spark SQL is preferred
• Knowledge of cloud solutions and architecture patterns