Job Description
Position: Enterprise Architect (18+ Years)
Company: Vaiticka Solution
Job Brief
Position: Enterprise Architect (18+ Years)
Location: Remote
Employment: Contract
Experience
18+ years of experience; ongoing professional development required
Job Detail
Architectural Design
- Define and implement an Open Lakehouse Architecture leveraging technologies such as Apache Iceberg and Delta Lake
- Design Event-Driven Architectures using streaming platforms such as Apache Kafka, Google Pub/Sub, and Apache Flink
Cloud & Platform Expertise
- Architect solutions on Google Cloud Platform using services such as Dataproc, Dataflow, and Cloud Storage
- Integrate Starburst for federated query capabilities across heterogeneous data sources
Data Strategy & Governance
- Establish enterprise-wide data standards, metadata management, and data quality frameworks
- Ensure compliance with security, privacy, and regulatory requirements
Integration & Scalability
- Develop strategies for integrating structured, semi-structured, and unstructured data from multiple sources
- Optimize for performance, cost, and elastic scalability in cloud and hybrid environments
Collaboration & Leadership
- Partner with business stakeholders, data engineers, and application architects to align architecture with business goals
- Mentor teams on modern data architecture patterns and best practices
Technical Expertise
- Strong knowledge of Open Lakehouse frameworks (Iceberg, Delta Lake, Hudi)
- Proficiency in streaming technologies (Kafka, Flink, Spark Structured Streaming, Google Pub/Sub)
- Hands-on experience with Google Cloud Platform (BigQuery, Dataflow, Dataproc)
- Experience with Starburst or similar query federation tools
- Expertise in data modeling, partitioning strategies, and schema evolution
- Familiarity with object storage (GCS) and query engines (Presto, Trino)
Mandatory Skills
- Cloud Platform: Google Cloud (BigQuery, Dataflow, Dataproc)
- Lakehouse Frameworks: Apache Iceberg, Delta Lake, Hudi
- Streaming & Event-Driven Architecture: Apache Kafka, Google Pub/Sub, Apache Flink
- Query Federation: Starburst, Presto, Trino
- Data Modeling: Advanced modeling techniques, schema evolution
- Governance & Security: Metadata management and data quality frameworks
- Architecture Principles: Event sourcing, CQRS, Data Mesh
Good-to-Have Skills
- Experience with Databricks Lakehouse
- Understanding of Data Catalog and Data Observability tools such as Atlan and Monte Carlo
- Healthcare Knowledge: understanding of and exposure to healthcare interoperability standards such as HL7, CCDA, and FHIR
- Experience implementing Clinical and Claims Data Analytics in a large Payor ecosystem