
Biotechnology • Artificial Intelligence • SaaS
Saphetor develops the VarSome Suite, an AI-driven set of bioinformatics software tools for large-scale human genome (NGS) data analysis and interpretation in research and clinical settings. Its offerings include VarSome.com (a community-driven variant knowledge base and search engine), VarSome Premium (a subscription with enhanced annotations and analytics), VarSome Clinical (a CE-IVDR-certified, HIPAA-compliant clinical platform with automated variant classification), and the VarSome API for integration. Saphetor supports clinicians and researchers with automated, standards-based variant interpretation, extensive aggregated genomic resources, and scalable SaaS delivery.
51-200 employees
Founded 2014
🧬 Biotechnology
🤖 Artificial Intelligence
☁️ SaaS
November 9
Ansible
Cloud
Consul
DNS
Docker
Elasticsearch
Firewalls
Google Cloud Platform
Grafana
Linux
Packer
Python
TCP/IP
Terraform
Unix

• Design, build and maintain our cloud and on-premises infrastructure using IaC with Terraform and configuration management with Ansible.
• Work closely with software engineers to design, deploy and manage applications running on Linux servers in GCP, OCI and on-premises environments.
• Design, implement and manage secure, scalable network architectures, including VPCs, subnets, firewall rules and load balancing in our cloud environments.
• Develop and maintain our CI/CD pipelines using GitHub Workflows or Cloud Build for seamless application delivery.
• Manage code repositories and collaboration through GitHub.
• Proactively troubleshoot production issues, perform root cause analysis and implement remediation fixes to ensure business continuity with minimal downtime.
• Contribute to project planning, including task estimation and the creation of comprehensive technical documentation.
• Continuously investigate and suggest improvements to enhance system performance, scalability and cost-effectiveness.
• Collaborate with peers and architects to ensure compliance with company standards and security best practices.
• Design, implement and manage scalable logging, monitoring and alerting systems.
• Use Python and Bash scripting to automate operational tasks and improve efficiency.
• Configure and manage our API ecosystem using APISIX Gateway.
• Provide on-call support according to the scheduled rotation.
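To illustrate the kind of Python-based operational automation this role involves, here is a minimal sketch of a log-triage helper. The function name, input format (JSON-lines with a `level` field) and sample data are all hypothetical, chosen only to show the flavor of the work, not anything specific to Saphetor's stack.

```python
import json
from collections import Counter


def summarize_log_levels(lines):
    """Count log entries per severity level from JSON-lines input.

    Each line is expected to be a JSON object with a "level" key;
    lines that fail to parse are counted under "unparseable".
    """
    counts = Counter()
    for line in lines:
        try:
            entry = json.loads(line)
            counts[entry.get("level", "unknown")] += 1
        except json.JSONDecodeError:
            counts["unparseable"] += 1
    return dict(counts)


if __name__ == "__main__":
    sample = [
        '{"level": "info", "msg": "deploy started"}',
        '{"level": "error", "msg": "healthcheck failed"}',
        "not json at all",
    ]
    # Prints a severity breakdown for the sample lines
    print(summarize_log_levels(sample))
```

In practice a script like this might feed counts into a monitoring system (e.g., as Grafana-visualized metrics) rather than printing them.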
• University degree in Computer Science or a related field
• 4+ years of work experience as a DevOps Engineer or in a similar role
• Minimum 4 years of work experience with UNIX/Linux systems, including configuration, troubleshooting and scripting with Python and/or Bash
• Strong hands-on experience with:
  - Agile methodologies, including frameworks like Scrumban
  - Terraform and Ansible
  - Docker containerization
  - The HashiCorp stack, including Nomad for orchestration, Consul for service discovery and Packer for image building
  - Version control systems, particularly Git and GitHub
  - Networking principles (TCP/IP, DNS, HTTP/S) and cloud networking concepts (VPCs, firewalls, load balancers)
• Good knowledge of:
  - API management with gateways like APISIX
  - Building and managing CI/CD pipelines, preferably with GitHub Workflows or Cloud Build
  - Logging and monitoring tools (e.g., Elasticsearch, Kibana, Grafana or cloud-native solutions)
• Fluency in English (written and spoken)
• Strong communication and collaboration skills, with experience working in Agile environments using Jira and Confluence
• GCP or OCI certifications are nice to have
• Proactive team player with demonstrated ability in self-organization, task prioritization, planning and estimation
• Proactive mindset with a passion for continuous improvement and learning new technologies
• A competitive compensation package
• Remote work with occasional meetings
• Endless learning opportunities
November 7
DevOps Engineer seeking to scale infrastructure solutions across AWS and Alibaba Cloud. Join a global team in building automated systems that power data products and services.
November 6
Senior Site Reliability Engineer ensuring smooth operation of critical infrastructure and applications for clients. Working with Jenkins, Kubernetes, AWS, and Terraform in a fully remote role.
November 4
DevOps Engineer automating CI/CD pipelines for Creatio's global AI-native platform. Collaborating with developers and teams to implement best practices in a remote-first environment.
🇵🇱 Poland – Remote
💰 Private Equity Round on 2024-06
⏰ Full Time
🟡 Mid-level
🟠 Senior
⛑ DevOps & Site Reliability Engineer (SRE)
November 1
Kubernetes DevOps Engineer driving custom integration across k0rdent-ai platform. Collaborating with engineering teams to design scalable AI infrastructure on Kubernetes environments.
October 30
Senior SRE specializing in automation and efficiencies for customer-facing applications and infrastructure at Akamai. Responsible for driving operational excellence and creative problem-solving.
🇵🇱 Poland – Remote
💰 Post-IPO Equity on 2001-07
⏰ Full Time
🟠 Senior
⛑ DevOps & Site Reliability Engineer (SRE)