
B2B • Recruitment • SaaS
RemoteStar is a global recruitment service that specializes in hiring top-quality tech talent. By assembling diverse teams with vetted developers from various regions, RemoteStar ensures high-quality staffing while maximizing cost efficiency for companies. The service includes a rigorous vetting process, technical matching, and full onboarding support, allowing businesses to focus on their core operations while RemoteStar handles the administrative aspects of recruitment and team management.
February 19
Amazon Redshift
Apache
AWS
Docker
EC2
Elasticsearch
ETL
Grafana
Hadoop
HBase
HDFS
Kafka
Kubernetes
Prometheus
Scala
SDLC
Spark
YARN

Responsibilities
• Contribute to the design and development of real-time data processing applications.
• Develop and maintain real-time data processing applications using frameworks and libraries such as Spark Streaming, Spark Structured Streaming, Kafka Streams, and Kafka Connect.
• Manipulate streaming data: ingestion, transformation, and aggregation (a short sketch follows this list).
• Keep up to date with research and development of new technologies and techniques to enhance our applications.
• Collaborate closely with Data DevOps, data-oriented, and other multidisciplinary teams.
• Work comfortably in an Agile environment across the SDLC.
• Follow the change and release management process.
• Apply an investigative mindset to troubleshoot problems and manage incidents.
• Take full ownership of assigned projects and tasks while working effectively within a team.
• Document processes clearly and run knowledge-sharing sessions with the rest of the team.
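To give a flavor of the ingest-transform-aggregate loop mentioned above, here is a minimal Scala sketch using Spark Structured Streaming over Kafka. It assumes the spark-sql-kafka connector is on the classpath; the broker address, the "events" topic, and the comma-separated (userId, action) record format are illustrative placeholders, not part of the role description.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClickstreamAggregator {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ClickstreamAggregator")
      .master("local[*]") // local run for illustration only
      .getOrCreate()
    import spark.implicits._

    // Ingestion: read a stream of events from a Kafka topic.
    // Broker address and topic name are placeholders.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()

    // Transformation: Kafka delivers binary key/value pairs; cast the
    // value to a string and split it into (userId, action) columns.
    // The comma-separated record format is an assumption for this sketch.
    val events = raw
      .selectExpr("CAST(value AS STRING) AS line", "timestamp")
      .select(
        split($"line", ",").getItem(0).as("userId"),
        split($"line", ",").getItem(1).as("action"),
        $"timestamp"
      )

    // Aggregation: count actions per user over 1-minute tumbling windows,
    // tolerating events up to 30 seconds late via a watermark.
    val counts = events
      .withWatermark("timestamp", "30 seconds")
      .groupBy(window($"timestamp", "1 minute"), $"userId")
      .count()

    // Emit updated aggregates to the console for inspection.
    counts.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```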
Requirements
• Strong knowledge of Scala.
• Knowledge of or familiarity with distributed computing frameworks such as Spark, Kafka Streams, and Kafka Connect, and streaming platforms such as Kafka (see the sketch after this list).
• Knowledge of monolithic versus microservice architecture concepts for building large-scale applications.
• Familiarity with the Apache ecosystem, including Hadoop modules such as HDFS, YARN, HBase, Hive, and Spark, as well as Apache NiFi.
• Familiarity with containerization and orchestration technologies such as Docker and Kubernetes.
• Familiarity with time-series or analytics databases such as Elasticsearch.
• Experience with Amazon Web Services, including S3, EC2, EMR, and Redshift.
• Familiarity with data monitoring and visualisation tools such as Prometheus and Grafana.
• Familiarity with software versioning tools such as Git.
• Comfortable working in an Agile environment across the SDLC.
• A solid understanding of data warehouse and ETL concepts; familiarity with Snowflake is preferred.
• Strong analytical and problem-solving skills.
• A good learning mindset.
• Able to prioritize and handle multiple tasks and projects effectively.
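For the Kafka Streams side of the stack, a comparable minimal sketch using the kafka-streams-scala DSL (import paths as in recent Kafka releases) is shown below. The "actions" and "action-counts" topic names, the broker address, and the grouping key are hypothetical placeholders chosen for illustration.

```scala
import java.util.Properties
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.serialization.Serdes._

object ActionCounter {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "action-counter")
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker

    val builder = new StreamsBuilder()

    // Read (userId, action) records from the input topic, count
    // occurrences per action, and write the running totals back out.
    builder.stream[String, String]("actions")
      .groupBy((_, action) => action)
      .count()
      .toStream
      .mapValues(_.toString)
      .to("action-counts")

    val streams = new KafkaStreams(builder.build(), props)
    streams.start()
    sys.addShutdownHook(streams.close())
  }
}
```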
Apply Now