Data Engineer


Posted 15 hours ago


Ciklum

Artificial Intelligence • B2B • Enterprise

Ciklum is a global digital engineering and AI-enabled product and platform services company that helps enterprises design, build, and scale AI-infused software, cloud, data, and automation solutions. It combines UX and product design with engineering, DevOps, data engineering, responsible AI, and edge/IoT capabilities to move pilots into production and deliver enterprise-ready outcomes across industries such as banking, retail, healthcare, hi-tech, automotive, and travel. Ciklum emphasizes platform-agnostic, scalable solutions—covering AI incubators, conversational AI, agentic automation, cloud and edge services, XR/AR/VR, and digital assurance—focused on transforming workflows and customer experiences for B2B enterprise clients.

📋 Description

• Build, deploy, and maintain mission-critical analytics solutions that process data quickly at big-data scale
• Contribute design, code, configuration, and documentation for components that manage data ingestion, real-time streaming, batch processing, and ETL across multiple data stores
• Own one or more key components of the infrastructure and continually improve them, identifying gaps and enhancing the platform's quality, robustness, maintainability, and speed
• Cross-train other team members on the technologies being developed, while continuously learning new technologies from them
• Interact with engineering teams to ensure solutions meet customer requirements for functionality, performance, availability, scalability, and reliability
• Perform development, QA, and DevOps roles as needed to take end-to-end responsibility for solutions
• Work directly with business analysts and data scientists to understand and support their use cases
• Contribute to the unit's activities and community building, participate in conferences, and share best practices
• Support sales activities, customer meetings, and digital services

🎯 Requirements

• 5+ years of experience coding in SQL, C#, and Python, with solid CS fundamentals including data structures and algorithm design
• 3+ years contributing to production deployments of large backend data processing and analysis systems as a team lead
• 2+ years of hands-on implementation experience with a combination of the following technologies: Hadoop, MapReduce, Pig, Hive, Impala, Spark, Kafka, Storm, and SQL and NoSQL data stores such as HBase and Cassandra
• 3+ years of experience with cloud data platforms (Azure, AWS, GCP)
• Knowledge of SQL and MPP databases (Vertica, Netezza, Greenplum, Aster Data)
• Knowledge of data warehouse design, implementation, and optimization
• Knowledge of data quality testing, automation, and results visualization
• Experience with Databricks and Snowflake
• Knowledge of BI report and dashboard design and implementation (Power BI, Tableau)
• Knowledge of the development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
• Experience working in an Agile (Scrum) software development team
• Experience designing, documenting, and defending designs for key components of large distributed computing systems
• A consistent track record of delivering exceptionally high-quality software on large, complex, cross-functional projects
• Demonstrated ability to learn new technologies quickly and independently
• Ability to handle multiple competing priorities in a fast-paced environment
• Undergraduate degree in Computer Science or Engineering from a top CS program required; Master's preferred
• Experience supporting data scientists and complex statistical use cases highly desirable

🏖️ Benefits

• Strong community: Work alongside top professionals in a friendly, open-door environment
• Growth focus: Take on large-scale projects with a global impact and expand your expertise
• Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications
• Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies
• Flexibility: Enjoy radical flexibility – work remotely or from an office, your choice
• Care: We've got you covered with company-paid medical insurance, mental health support, and financial & legal consultations


Similar Jobs

Yesterday

Senior Data Engineer maintaining and enhancing data processes in the AdTech domain at Sigma Software. Focusing on Python services, data flow management, and architectural alignment.

Airflow • AWS • Python • SQL

November 19

Data Engineer at Ruby Labs developing robust and trustworthy data pipelines and monitoring systems. Collaborating to enhance data reliability and support data-driven decisions.

Airflow • BigQuery • Cloud • ETL • Python • SQL

November 19

Data Warehouse Administrator managing the Data Warehouse (DWH) infrastructure for Kyivstar.Tech. Optimizing performance, ensuring security, and collaborating with ETL developers for reliable operation.

ETL • Greenplum • Informatica • Linux • MS SQL Server • Oracle • Postgres • Python • Shell Scripting • SQL • TCP/IP • Unix

November 4

Data Engineer role at Intetics Inc. focusing on designing Apache Airflow pipelines and optimizing large-scale ETL workflows. Collaborating with cross-functional teams on data processes and solutions.

Airflow • Apache • ElasticSearch • ETL • Flask • Linux • Oracle • Postgres • Python • Unix

October 22

Data Engineer leading a small team to modernize a data platform in a cloud-based analytics project at Sigma Software.

Azure • ETL • Python • RDBMS • Spark • SQL
