Job Responsibilities:
1. Participate in requirements research and analysis for data projects and deliver results on time.
2. Participate in the design and development of data solutions, such as large-scale batch processing and near-real-time data processing.
3. Develop and debug data-processing logic using SQL, Python, Scala, etc.
4. Use big data technologies (e.g., Spark, Hive, Flink, HBase, Elasticsearch) to solve technical challenges in the enterprise data space, including high throughput, low latency, and high availability and stability.
5. Ensure the stability of big data components within data projects, including stable operation under extreme conditions such as high throughput or high concurrency.
6. Provide data technology solutions and perform technical feasibility tests to support the early stages of big data projects.
Qualifications:
1. Bachelor’s degree or higher in computer science or related majors.
2. At least 3 years of data engineering and related work experience.
3. Sound logical thinking and excellent coding skills.
4. Proficient in Spark SQL or HiveQL development; Python data processing experience is required.
5. Familiar with mainstream databases such as Oracle, DB2, SQL Server, etc.
6. Familiar with big data development and job scheduling processes.
7. Familiar with data modelling, stored procedure development, query performance analysis, and SQL tuning.
8. Experience in large-scale data platform or data warehouse design; terabyte-level data processing experience is preferred.
9. Experience in Microsoft Azure data-related component services (such as ADF, Azure Databricks, Azure Synapse Analytics, etc.) is preferred.
10. Strong analytical skills, plan execution, and teamwork skills.
11. Passionate about technology and patient in solving a wide range of data-related technical problems.