
Recruitment • IT Services • SaaS
aKUBE is a recruitment firm that specializes in connecting top-tier IT professionals with innovative companies. They emphasize a quality-focused approach to recruiting, ensuring a strong match between candidates and company values. Leveraging advanced technology and a refined process, aKUBE delivers staffing solutions aligned with client timelines, including contract staffing, direct-hire placements, and payroll management. Their commitment to performance and client partnerships makes aKUBE a trusted partner in achieving business success through tailored staffing solutions.
September 12

• Design, build, and optimize large-scale data pipelines and warehousing solutions
• Develop ETL workflows in Big Data environments across cloud, on-prem, or hybrid setups
• Collaborate with Data Product Managers, Architects, and Engineers to deliver scalable and reliable data solutions
• Define data models and frameworks for data warehouses and marts supporting analytics and audience engagement
• Maintain strong documentation practices for data governance and quality standards
• Ensure solutions meet SLAs, operational efficiency, and support analytics/data science teams
• Contribute to Agile/Scrum processes and continuously drive team improvements
• 6+ years of experience in data engineering with large, distributed data systems
• Strong SQL expertise and experience with MPP databases (Snowflake, Redshift, or BigQuery)
• Expertise in Big Data engineering pipelines
• Hands-on experience with Apache Spark (PySpark, Scala) and Hadoop ecosystem (HDFS, Hive, Presto) (see the sketch after this listing)
• Proficiency in Python, Scala, or Java
• Experience with cloud environments (AWS – S3, EMR, EC2)
• Experience with orchestration/ETL tools such as Airflow
• Data warehousing and data modeling knowledge
• Familiarity with Agile methodologies
• Bachelor’s degree in STEM required
• Work authorization: Green Card, US Citizen, or valid EADs (except OPT, CPT; H1B not accepted)
• No C2C, 1099, or subcontractors; W2 only
• Remote work
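The responsibilities and requirements above center on Spark-based ETL feeding an MPP warehouse such as Snowflake, Redshift, or BigQuery. Purely as an illustration of that kind of work (not part of the listing), a minimal PySpark batch job might look like the sketch below; the S3 paths and column names are hypothetical placeholders, and in practice a job like this would typically be scheduled by an orchestrator such as Airflow and run on EMR or a similar cluster.

```python
# Minimal PySpark ETL sketch: raw S3 JSON events -> cleaned, partitioned Parquet.
# Bucket names, paths, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("events_daily_etl")
    .getOrCreate()
)

# Read one day of raw JSON events from S3 (s3:// paths work on EMR via EMRFS).
raw = spark.read.json("s3://example-raw-bucket/events/dt=2024-01-01/")

# Basic cleanup: drop rows without an id, normalize the timestamp,
# and derive a date column to partition on.
cleaned = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write curated, partitioned Parquet for downstream warehouse loads
# (e.g. a Snowflake or Redshift COPY from the curated bucket).
(
    cleaned.write
           .mode("overwrite")
           .partitionBy("event_date")
           .parquet("s3://example-curated-bucket/events/")
)

spark.stop()
```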
August 29
AEP Data Architect leading Adobe Experience Platform deployments | Designs models, integrations, and data activation for enterprise clients
ETL
NoSQL
SQL
August 28
201 - 500 employees
🏢 Enterprise
☁️ SaaS
🤖 Artificial Intelligence
Design and operate Azure Data Factory and Databricks pipelines for a cloud data platform. Integrate Oracle and PostgreSQL sources, and implement data quality and monitoring.
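As an illustrative aside (not part of the listing), integrating a PostgreSQL source into Databricks with a simple data-quality gate might look roughly like the sketch below. The host, credentials, and table names are hypothetical, and it assumes a PostgreSQL JDBC driver is available on the cluster; an equivalent Oracle ingest would differ mainly in the JDBC URL and driver.

```python
# Illustrative sketch only: load a PostgreSQL table into Databricks via JDBC,
# apply two basic data-quality checks, and save the result as a Delta table.
# Connection details, table names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/sales")
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "********")  # in practice, read from a secret scope / key vault
    .load()
)

# Simple data-quality gates: the load must be non-empty and have no null primary keys.
row_count = orders.count()
null_keys = orders.filter(F.col("order_id").isNull()).count()
if row_count == 0 or null_keys > 0:
    raise ValueError(f"DQ check failed: rows={row_count}, null order_id={null_keys}")

# Land the table in a curated schema; Delta is the default table format on Databricks.
spark.sql("CREATE SCHEMA IF NOT EXISTS curated")
orders.write.format("delta").mode("overwrite").saveAsTable("curated.orders")
```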
Azure
Cloud
ETL
Oracle
Postgres
PySpark
Scala
Spark
SQL
August 26
Lead federated AWS UDP architecture to modernize healthcare data access and governance. Align platform governance, virtualization, and semantic unification.
Amazon Redshift
AWS
Cloud
Postgres
PySpark
Tableau
August 26
Snowflake Data Engineer building and optimizing Snowflake data pipelines and integrations for a fintech platform engineering team
AWS
Cloud
EC2
ETL
Java
Kafka
Python
SQL
Unix
Go
August 26
Remote Data Engineer for a fintech digital transformation; build streaming, NoSQL, cloud, and data-warehousing solutions within the USA.
Amazon Redshift
AWS
Azure
Cassandra
Cloud
Hadoop
Java
Kafka
Linux
MapReduce
MongoDB
MySQL
NoSQL
Python
Scala
Shell Scripting
Spark
SQL
Unix