Senior Database Reliability Engineer (DBRE) & Architect (worldwide remote) at CloudLinux

Job Description

CloudLinux is transforming the Linux infrastructure market by ensuring security and stability for over 500,000 servers worldwide. Our products - CloudLinux OS, TuxCare, and Imunify360 - are the de facto standard in the hosting industry and the enterprise segment.

We are seeking a visionary engineer to lead the evolution of our data platform. In 2025, we are shifting from classic database administration to an Internal Database-as-a-Service (DBaaS) model. We need a specialist who doesn’t just "configure backups," but designs resilient distributed systems, writes code to automate infrastructure, and transforms databases into a reliable service for product teams.

If you are tired of endless tickets and want to build platforms capable of processing petabytes of data, this role is for you.

Your Challenges & Responsibilities:

  • DBaaS Architecture: Design and implement a self-service platform based on Terraform and Ansible, enabling the deployment of HA clusters (PostgreSQL, ClickHouse, MongoDB, Redis) in a heterogeneous environment (Bare Metal + OpenNebula + Kubernetes + Public Clouds). You will turn infrastructure into a product.
  • Scaling ClickHouse: Manage exponentially growing analytics clusters (12+ clusters, tens of terabytes of data). You will tackle sharding, table engine optimization (ReplicatedMergeTree), and building reliable S3 backup pipelines under high load.
  • Data Platform & Analytics Support: Maintain and scale the infrastructure for Apache Airflow and Redash. You will ensure the reliability of ETL pipelines and visualization tools, bridging the gap between raw infrastructure and the data analytics team.
  • Reliability as Code: Implement SRE practices in data management. Replace manual incident response with automated self-healing mechanisms. Define and implement SLOs and SLIs for all databases (see the sketch after this list).
  • Stack Modernization: Lead the migration process from legacy solutions to modern cloud patterns. Participate in decision-making regarding the implementation of Kubernetes operators for stateful workloads.
  • Expertise & Mentorship: Serve as the technical authority for product teams, helping them optimize data schemas and SQL queries for high-load systems.

Our Tech Stack:

  • Databases: PostgreSQL 15+ (Patroni, PgBouncer), ClickHouse (Sharded/Replicated), MongoDB, Redis, Kafka.
  • Data & Analytics: Apache Airflow, Redash (Infrastructure & Integration).
  • Infrastructure: Our own colocation across 3+ data centers (OpenNebula, Kubernetes, Bare Metal), plus AWS, Google Cloud, Azure, and DigitalOcean (DO) in a hybrid cloud setup.
  • Automation & IaC: Terraform, Ansible, Python/Go, GitLab, Jenkins, Gerrit.
  • Observability: VictoriaMetrics, Grafana, Loki.

Why CloudLinux?

  • Culture: A remote-first company with an "Employees First" principle. We value results, not hours in the office.
  • Impact: Your architectural decisions will determine the stability of services used by thousands of companies around the world.
  • Growth: We support professional development and pay for training and conferences.

Requirements

What We Expect From You:

  • Deep PostgreSQL Expertise (5+ years): You know MVCC internals, understand locking mechanics, can configure Patroni and PgBouncer "with your eyes closed," and have experience with seamless major version upgrades under load.
  • ClickHouse Mastery: Experience operating large clusters, understanding ZooKeeper/ClickHouse Keeper, sharding, and replication internals, and the ability to diagnose performance issues at the data-part level (see the sketch after this list).
  • Engineering Mindset (SRE/DevOps): You hate doing the same task twice by hand. Experience writing complex Terraform modules and Ansible roles is mandatory. Programming skills in Python or Go for automation are a huge plus.
  • Hybrid Environment Experience: You understand the differences between running DBs on Bare Metal vs. Kubernetes vs. Cloud and know how to optimize TCO and disk subsystem performance (NVMe, Network Storage).
  • Systems Approach: You see the big picture, from the network packet to the application's business logic. You understand the importance of security (FIPS, audit logs) and Disaster Recovery.
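
As a concrete illustration of the ClickHouse expectations above (sharding, ReplicatedMergeTree, data-part-level diagnostics), here is a minimal Python sketch using the clickhouse-driver package. The host, cluster name, table schema, and the 60-second delay threshold are illustrative assumptions, not details of CloudLinux's actual clusters.

```python
# Minimal sketch, assuming the clickhouse-driver package, a cluster named
# "analytics", and standard {shard}/{replica} macros configured on each node.
# Host, table, and column names are illustrative assumptions.
from clickhouse_driver import Client

client = Client(host="clickhouse-01.internal")  # hypothetical host

# Replicated, sharded fact table: one ReplicatedMergeTree per shard,
# with replicas coordinated through ClickHouse Keeper / ZooKeeper.
client.execute("""
    CREATE TABLE IF NOT EXISTS events ON CLUSTER analytics
    (
        event_date Date,
        tenant_id  UInt64,
        payload    String
    )
    ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events', '{replica}')
    PARTITION BY toYYYYMM(event_date)
    ORDER BY (tenant_id, event_date)
""")

# Data-part-level health checks against the standard system tables:
# replicas that have fallen behind, and tables accumulating many active parts.
lagging = client.execute(
    "SELECT database, table, absolute_delay "
    "FROM system.replicas WHERE absolute_delay > 60"
)
part_counts = client.execute(
    "SELECT database, table, count() AS active_parts "
    "FROM system.parts WHERE active GROUP BY database, table "
    "ORDER BY active_parts DESC LIMIT 10"
)
print("Lagging replicas:", lagging)
print("Tables with the most active parts:", part_counts)
```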

Nice to Have:

  • Experience building an Internal Developer Platform (IDP).
  • Experience operating databases in Kubernetes (CloudNativePG, Altinity Operator).
  • Experience working in Cloud and Hosting providers on similar services.

Benefits

What's in it for you?

  • A focus on professional development.
  • Interesting and challenging projects.
  • Fully remote work with flexible working hours, which allows you to schedule your day and work from any location worldwide.
  • 24 days of paid vacation per year, 10 days of national holidays, and unlimited sick leave.
  • Compensation for private medical insurance.
  • Co-working and gym/sports reimbursement.
  • Budget for education.
  • The opportunity to receive a reward for the most innovative idea that the company can patent.

By applying for this position, you agree to the CloudLinux Privacy Policy and consent to the maintenance and processing of your personal data for this purpose. Please read our Privacy Policy for more information.

CloudLinux

Linux-oriented infrastructure tools for hosting, focusing on stability and security.

About the job

Posted on: Nov 19, 2025
Apply before: Dec 19, 2025
Job type: Full-Time
Location: Worldwide
