Data Engineer

February 10

Apply Now

Truelogic Software

SaaS • B2B • Enterprise

Truelogic Software is a nearshore software development company specializing in agile staff augmentation services. They focus on providing custom outsourced software development with a team of highly skilled engineers from Latin America. Truelogic Software partners with both startups and Fortune 500 companies, offering solutions that align with their clients' time zones and ensuring high-quality outcomes through collaboration and responsiveness. With a presence in over 25 countries, Truelogic emphasizes remote work for better quality of life, and their engineers are experienced in various industries, delivering a wide range of successful projects globally.

501 - 1000 employees

Founded 2004

☁️ SaaS

🤝 B2B

🏢 Enterprise

📋 Description

About Truelogic: At Truelogic we are a leading provider of nearshore staff augmentation services headquartered in New York. For over two decades, we've been delivering top-tier technology solutions to companies of all sizes. Our team of 600+ highly skilled tech professionals, based in Latin America, drives digital disruption by partnering with U.S. companies on their most impactful projects. By applying for this position, you're taking the first step in joining a dynamic team that values your expertise and aspirations.

🎯 Requirements

• 5+ years of professional experience in the role.
• Coding:
  • Python for data processing solutions and related data technologies such as Pandas and PySpark.
  • Consuming data from different sources, such as REST APIs.
• Data stores:
  • Relational and non-relational data stores (e.g., HBase, Cassandra, or MongoDB; S3, blobs), covering both consumption and design.
  • Data streams (Kafka, Kinesis, Flume) and message queuing (SQS, SNS, RabbitMQ, etc.).
  • Ensuring the data model scales and enables high performance.
  • Distributed data stores.
• Processing:
  • Data/stream processing (Spark, Flink, Hadoop).
  • Data pipelines, data ingestion pipelines, and scalable streaming data pipeline processing.
  • ETL using solutions such as Talend, Informatica, or SQL Server Integration Services (SSIS).
• Reporting & Analytics:
  • Data warehouses (Snowflake, Redshift, Hive).
  • Implementing data warehouse solutions that provide near real-time data to a variety of client systems.
  • Using SQL databases to construct data storage.
  • Design, implementation, and enhancement of reporting/BI tools is a plus (Looker, Power BI, Tableau).
• Experience designing and implementing data applications and services on a public cloud (AWS, GCP, or Azure) using PaaS platforms.
• Familiarity with data privacy regulations and best practices.

🏖️ Benefits

• 100% Remote Work: Enjoy the freedom to work from the location that helps you thrive. All it takes is a laptop and a reliable internet connection.
• Highly Competitive USD Pay: Earn excellent, market-leading compensation in USD that goes beyond typical market offerings.
• Paid Time Off: We value your well-being. Our paid time off policies ensure you have the chance to unwind and recharge when needed.
• Work with Autonomy: Enjoy the freedom to manage your time as long as the work gets done. Focus on results, not the clock.
• Work with Top American Companies: Grow your expertise working on innovative, high-impact projects with industry-leading U.S. companies.


Similar Jobs

February 7

Team Lead position for Data Engineers to design data pipelines in a fully remote setting.

Apache

BigQuery

Cloud

Hadoop

Kafka

Node.js

Spark

February 4

As a data engineering leader at xcelForce, optimize performance in big data environments while mentoring teams.

AWS

EC2

ETL

Node.js

RDBMS

January 12

Data Engineer developing and refining dashboards in Power BI for educational projects. Collaborating on data cleaning, modeling, and API integrations in a flexible remote setting.

🗣️🇭🇺 Hungarian Required

ETL

Python

SQL

January 7

Advancio seeks a Data Scientist to analyze datasets and develop machine learning models.

AWS

Azure

Cloud

Google Cloud Platform

Hadoop

Node.js

Numpy

Pandas

Python

PyTorch

Scikit-Learn

Spark

SQL

Tableau

Tensorflow

January 6

Develop low-latency data products to enhance customer experiences at Netflix. Work with data scientists and engineers to deliver critical data insights.

Distributed Systems

GRPC

Java

Scala

Spark

Spring

SQL
