Data Engineer

January 4

Particle41

SaaS • Artificial Intelligence • Enterprise

Particle41 specializes in technology development, data science, and DevOps. The company offers CTO advisory services, acting as a partner to strengthen tech strategies and deliver robust software solutions tailored to clients' needs. Particle41 emphasizes modernizing operations with cloud architecture, integrating software systems, and leveraging artificial intelligence to deliver innovative digital products. It works closely with businesses to ensure on-time project delivery, scalability, and data security, with a particular focus on helping clients sharpen their competitive edge through strategic tech solutions and ongoing support.

📋 Description

• Particle41 is seeking a talented and versatile Data Engineer to join our innovative team.
• As a Data Engineer, you will play a key role in designing, building, and maintaining robust data pipelines and infrastructure to support our clients' data needs.
• You will work on end-to-end data solutions, collaborating with cross-functional teams to ensure high-quality, scalable, and efficient data delivery.
• This is an exciting opportunity to contribute to impactful projects, solve complex data challenges, and grow your skills in a supportive and dynamic environment.

🎯 Requirements

• Bachelor’s degree in Computer Science, Engineering, or a related field.
• At least 3 years of proven experience as a Data Engineer.
• Proficiency in Python (see the illustrative sketch after this list).
• Experience with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB).
• Strong grasp of common libraries, frameworks, and technologies: Flask and other API frameworks, database and ORM tooling, data analysis, Databricks, pandas, Spark/PySpark, machine learning, OpenCV, scikit-learn.
• Utilities and tools: logging, requests, subprocess, regex, pytest.
• Experience with the ELK stack, Redis, and distributed task queues.
• Strong understanding of data warehouse/lakehouse principles and concurrent/parallel processing concepts.
• Familiarity with at least one cloud data engineering stack (Azure, AWS, or GCP) and the ability to quickly learn and adapt to new ETL/ELT tools across cloud providers.
• Familiarity with version control systems like Git and collaborative development workflows.
• Competence working on Linux and writing shell scripts.
• Solid understanding of software engineering principles, design patterns, and best practices.
• Excellent problem-solving and analytical skills, with keen attention to detail.
• Effective written and verbal communication skills and the ability to collaborate in a team environment.
• Adaptability and willingness to learn new technologies and tools as needed.
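
For illustration, here is a minimal sketch of the extract/transform/load pattern this role centers on, in Python with pandas and SQLite. The file names, table name, and columns (orders.csv, warehouse.db, order_id, quantity, unit_price, order_date) are hypothetical examples, not details taken from the listing.

# Minimal ETL sketch: extract from a CSV export, transform with pandas,
# load into a SQLite table. All file, table, and column names here are
# hypothetical examples, not part of the job listing.
import sqlite3

import pandas as pd


def run_pipeline(csv_path: str, db_path: str) -> int:
    # Extract: read raw records, parsing the date column up front.
    df = pd.read_csv(csv_path, parse_dates=["order_date"])

    # Transform: drop incomplete rows and derive a revenue column.
    df = df.dropna(subset=["order_id", "quantity", "unit_price"])
    df["revenue"] = df["quantity"] * df["unit_price"]

    # Load: append the cleaned rows into a warehouse-style table.
    with sqlite3.connect(db_path) as conn:
        df.to_sql("orders", conn, if_exists="append", index=False)
    return len(df)


if __name__ == "__main__":
    print(f"Loaded {run_pipeline('orders.csv', 'warehouse.db')} rows")

The same extract/transform/load shape carries over to PySpark or a cloud warehouse stack; typically only the I/O layer and execution engine change.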

Similar Jobs

December 27, 2024

Join Scalepex as an AWS Data Engineer to build scalable data pipelines for utilities. Work with premium brands while leveraging your AWS expertise.

Airflow • Amazon Redshift • AWS • Distributed Systems • DynamoDB • ETL • Pandas • PySpark • Python

December 19, 2024

Join Kalshi as a Data Engineer to build and operate the data stack, focusing on insights.

AWS • Cloud • Numpy • Pandas • Python • Spark • SQL

December 13, 2024

Data Migration Specialist at Thrive, focused on importing client data into their software.

Bash • ETL • MySQL • Open Source • Postgres • Python • Ruby • SQL
