Getro Community

Data Solutions Architect (Principal/Staff Engineer)

SymphonyAI


IT
Bengaluru, Karnataka, India
Posted on Mar 31, 2026


Location: IN-KA-Bengaluru
ID: 2026-2786
Category: Software Engineering
Position Type: Full-Time


Job Description

Role Overview

We are looking for an Architect with prior experience operating data-platform infrastructure and products to architect, implement, and maintain data pipelines for the different datasets supported by IRIS Foundry.

This role works closely with product and customer-engagement teams to deliver fully functional data pipelines with 99.9% uptime, and with the engineering team to incorporate the latest product features into pipeline templates, raise alerts, and troubleshoot functional issues while avoiding data-ingestion downtime.

This role is expected to work independently and lead a small team of senior, mid-level, or junior engineers to realise the customer delivery roadmap.

Key Responsibilities

1. Collect data requirements from Professional Services and negotiate the data-acquisition strategy with customers to bring in different datasets (time series, tabular, document stores).

2. Understand the capabilities of the Foundry Data Platform and update the set of supported data-pipeline templates based on newly added features.

3. Communicate with the customer-engagement team to understand upcoming customer deliverables and PoCs, and prepare delivery timelines.

4. Attend customer meetings and understand the architecture of the customer's ecosystem to present the available pipeline options, ask the right questions, and collect the information needed to choose the right pipeline template.

5. Understand each dataset completely and incorporate custom clean-up and preprocessing steps into the pipeline.

6. Propose monitoring and alerting rules for all data pipelines, implement custom rules based on the use cases, and update them as environments change. Coordinate with engineering (backend/DevOps) teams for L2 support.

7. Drive the conversation when changes are expected on the customer side, and plan the downstream modifications needed to avoid data disruptions.

8. Establish best practices for writing and automating component tests for data pipelines.

Expected Outcomes

1. Improve the reliability of data ingestion to 99.9% uptime.

2. Reduce data disruptions by improving monitoring and alerting for data pipelines.

3. Configure custom dashboards per tenant in multi-tenant cloud environments.

4. Proactively identify bottlenecks and suggest items for the product roadmap.

Required Qualifications

1. Experience working with ETL/ELT data pipelines on open-source or enterprise platforms.

2. Understanding of cloud platforms such as Azure, AWS, or GCP is required.

3. Understanding of SQL databases such as Postgres, NoSQL databases such as MongoDB and Elasticsearch, and messaging systems such as Azure Queues, Event Hubs, etc.

4. Hands-on coding and debugging skills in Python, Groovy, or Java are a must.

5. Prior experience with runtime containerization using Docker and Kubernetes, and with CI/CD pipelines.

6. Experience with Apache NiFi, Apache Airflow, or comparable workflow-orchestration tools.

7. Knowledge of Prometheus, Grafana, and Kibana is preferred.

8. Comfort using GenAI tools such as Cursor as an effective assistant to rapidly build, test, and automate the deployment of data pipelines.

9. Knowledge of data historians such as OSI PI, AspenTech IP.21, Wonderware, etc. is a huge plus.

About Us

What We Offer

· Competitive salary and benefits package.

· Flexible hybrid working model.

· Opportunities for professional growth and development.

· Collaborative and inclusive work environment.

· Access to the latest technologies and tools.
