Description

About the Role


As a Data Engineer at Neuralix.ai, you will play a crucial role in designing and implementing high-performance, scalable data architectures that power our AI-driven solutions. You will work with cutting-edge technologies, optimize real-time and batch data processing, and collaborate with cross-functional teams to ensure seamless data flow.


This role is perfect for those who thrive in a fast-paced, innovative environment and are eager to push the boundaries of data engineering.

What You’ll Do

  • Architect and optimize high-performance data pipelines for structured and unstructured data.
  • Design scalable ETL workflows that fuel our AI/ML models and business intelligence systems.
  • Engineer cloud-native data infrastructure on AWS, GCP, or Azure to handle massive datasets.
  • Build and maintain data lakes, warehouses, and real-time data streaming solutions.
  • Optimize query performance and database architectures for lightning-fast insights.
  • Automate workflows using orchestration tools like Airflow, Luigi, or Prefect.
  • Collaborate with data scientists, analysts, and engineers to unlock the full potential of data.
  • Implement best practices in data security, governance, and compliance.


What Makes You a Great Fit?


• 3-7 years of hands-on experience in data engineering or a related field.

• Strong expertise in Python, SQL, and distributed data frameworks (Spark, Hadoop, Kafka).

• Deep understanding of data modeling, warehousing (Snowflake, Redshift, BigQuery), and schema design.

• Experience with both relational and NoSQL databases (PostgreSQL, MySQL, MongoDB, Cassandra).

• Solid knowledge of real-time and batch data processing (Kafka, Flink, Spark Streaming).

• Passion for automation, CI/CD, and infrastructure-as-code (Terraform, Docker, Kubernetes).

• Ability to troubleshoot and optimize complex data workflows.

• A problem-solving mindset with a love for tackling large-scale data challenges.

Education

Any Graduate