
Site Reliability Engineer - Data Platform

Kraken
Europe, United States, Canada
Full time
$110,000 - $176,000 per year
Remote

Overview

Department: IT
Job type: Full time
Compensation: $110,000 - $176,000 per year
Location: Europe, United States, Canada
Company size: Mature (50+ employees)


Join Kraken's Data Infrastructure team to ensure the reliability, scalability, and efficiency of its data platform. Collaborate with cross-functional teams to design and maintain data infrastructure.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
  • Proven experience (5+ years) working as a Site Reliability Engineer, Infrastructure Engineer, or similar roles, with a focus on data infrastructure and security.
  • Experience with real-time data processing technologies such as Kafka and Debezium.
  • Working experience managing hybrid (on-premises and AWS) systems; HashiCorp tooling is a nice to have.
  • Experience with Infrastructure as Code tools such as Terraform, Terragrunt, and Atlantis.
  • Experience with containerization and orchestration tools, particularly Kubernetes and Docker.
  • Solid understanding of bash/shell scripting and proficiency in at least one programming language (preferably Python or Rust).
  • Familiarity with CI/CD deployment pipelines and related tools.
  • Strong problem-solving skills and the ability to troubleshoot complex systems.
  • Experience with data-related technologies (databases, data lakes, Airflow, Spark) is a plus.
Responsibilities

  • Design the data governance mechanisms that ensure our lakehouse is easy to interact with, secure and in compliance with all applicable regulations.
  • Implement the infrastructure we use to ingest our data, store it, catalog it with the right metadata and capture its lineage.
  • Provide a state-of-the-art suite of BI tools for multiple teams within the company.
  • Guarantee the availability, high performance, scalability and cost efficiency of our data platform.
  • Implement self-service data infrastructure solutions that support the needs of 10+ business units and over 100 engineers and data analysts.
  • Utilize Infrastructure as Code (IaC) principles to design, provision, and manage both on-premises and cloud (AWS) infrastructure components using tools such as Terraform.
  • Develop and maintain bash/shell scripts to automate operational tasks and deployments.
  • Enhance and manage CI/CD pipelines to facilitate consistent software deployments across the data infrastructure.
  • Implement robust data monitoring and alerting solutions to proactively detect anomalies and performance issues.
  • Manage and implement role-based access control (RBAC) and permissions for a multitude of user groups and machine workflows across different environments.
  • Manage and maintain real-time streaming data architecture using technologies like Kafka and Debezium Change Data Capture (CDC).
  • Ensure the timely and accurate processing of streaming data, enabling data analysts and engineers to gain insights from up-to-date information.
  • Utilize Kubernetes to manage containerized applications within the data infrastructure, ensuring efficient deployment, scaling, and orchestration.
  • Implement effective incident response procedures and participate in on-call rotations.
  • Collaborate with data analysts, engineers, and cross-functional teams to understand requirements and implement appropriate solutions.
  • Document architecture, processes, and best practices to enable knowledge sharing and support continuous improvement.
  • Support AI/ML teams with their infrastructure requests.