Employee Records
AI Data Engineer
Harri - Palestine
Full Time
Hybrid remote
3 Years Experience
Salary: To be discussed
Skills
Fluent in English
Description

Harri is the frontline employee experience platform built for companies that have service at the heart of their business. The solution is built on the notion that the customer experience will never exceed the employee experience. The Harri suite of talent attraction, workforce management, and employee engagement technologies enables organizations to attract, manage, engage, and retain the best talent for their business.

Hospitality is in our DNA: most of our global team has frontline and restaurant management experience, and we are changing the landscape of our industry and of frontline worker technology. We need the very best and brightest to join us on this mission to disrupt the market as it stands today.

Based in NYC, Harri has global offices in the UK, Palestine, and India, and has been awarded Top 50 Startup by LinkedIn, Best Enterprise Solution for HR/Workforce at the HR Tech Awards, and NYC Best Tech Startup at the Tech in Motion Timmy Awards.

About the Role

We’re seeking a Mid-Level Data Engineer to join our dynamic, agile team of about 10 members. In this role, you’ll play a key part in developing the data infrastructure and pipelines that power our AI initiatives — spanning both Generative AI and Classical Machine Learning — within our Workforce Management (WFM) SaaS platform.

This position offers a unique blend of data engineering and AI/MLOps, with a strong emphasis on building robust data pipelines and scalable APIs to support innovation and growth.

Working within our Scaled Agile Framework (SAFe), you’ll collaborate closely with data scientists, AI engineers, and fellow data engineers to ensure our models are powered by high-quality, real-time data. You’ll also contribute across the entire AI model lifecycle — from training through to deployment — with your work directly influencing the development and delivery of our AI-driven features.


What You'll Do?

  • Build Data Pipelines: Design, develop, and maintain robust, scalable data pipelines using ETL techniques, storing data in our data warehouse (DWH) and other data stores. You will ingest, transform, and prepare diverse datasets for our AI models, ensuring data quality, consistency, and security throughout the pipeline lifecycle.

  • Develop Production APIs: Write clean, efficient, and well-documented production code in Python or Java, and build and maintain REST APIs that serve our AI models to other product teams and external clients.

  • Manage the AWS Data Stack: Leverage your expertise with our core data technologies on AWS, mainly SageMaker.

  • Work with Snowflake, Redshift, RDS, MySQL, PostgreSQL, data streams, and ORC/Parquet files to optimize data storage and retrieval.

  • Support MLOps: Focus on the data side of MLOps, building automated data workflows and monitoring systems to ensure continuous data flow to our models, minimizing data drift and ensuring model reliability.

  • Share Knowledge and Build Culture: Because we work in a team structure, a spirit of knowledge sharing is essential.

  • Keep aligned with Harri's team coding and design standards.

  • Be an Effective Team Player: communicate and work well with others.

  • Deliver on time while keeping the quality of your work high.

  • Be eager to learn, with the self-motivation to quickly become an expert in our tech stack, specifically the systems relating to your role, requesting training when required.

  • Check in with your line manager and update them on your progress.

  • Live and breathe the Harri company values.


What We're Looking For?

  • 3-5 years of experience as a Data Engineer or in a similar role.

  • Strong proficiency in Python, Java, and advanced SQL.

  • Solid experience with data modeling, ETL/ELT processes, data streaming techniques, relational databases such as MySQL and PostgreSQL, and NoSQL databases.

  • Proven hands-on experience with data warehouses, mainly Snowflake.

  • Proven working experience building and scaling APIs (mainly REST).

  • Proven experience with AWS and its data services; SageMaker is a strong plus.

  • Hands-on experience with MLOps and its principles is a strong plus.

  • Familiarity with machine learning frameworks like PyTorch and TensorFlow is a strong plus, as you'll be working closely with the teams that use them.

  • Strong analytical and problem-solving abilities.

  • Excellent communication and collaboration skills, with the ability to work effectively within an agile team and communicate technical concepts to non-technical stakeholders.

  • Excellent verbal and written English communication skills.

  • High attention to detail, curiosity, and willingness to experiment and learn.
