Data Engineer

Job not on LinkedIn

November 17

🗣️🇪🇸 Spanish Required

Apply Now

Technosylva

Artificial Intelligence • SaaS • B2B

Technosylva is a provider of AI-driven wildfire and extreme weather risk mitigation software that delivers real-time forecasting, predictive simulations, and incident management tools for electric utilities, fire agencies, and insurers. Their cloud-based products (Wildfire Analyst, Tactical Analyst, fiResponse) offer situational awareness, operational decision support, and risk quantification to help customers plan, operate, and respond to wildfire and severe weather events.

📋 Description

• As a Data Engineer on our team, your work will focus on evolving our data infrastructure. A large part of our current work involves migrating existing data pipelines (many based on Windows services) to a new, modern, and scalable platform. Your day-to-day work will involve:
• Designing, building, and maintaining robust data pipelines on our new platform.
• Orchestrating complex workflows that process massive volumes of data, primarily in batch, but with some pseudo-real-time needs.
• Handling a significant and fascinating geospatial data component, including its specific file formats and processing challenges.
• Collaborating closely with our Science teams to adapt their calculation models (which may come in Python, R, or .NET) so they can be validated, monitored, and scaled effectively within our production pipelines.
• Contributing to our DevOps culture by working closely with the Platform team, including managing infrastructure as code (Terraform) and building and maintaining our CI/CD pipelines in GitLab.
• Helping our organization on its journey to democratize data access for everyone.

🎯 Requirements

• A strong foundation in Python as a primary language for data processing and backend development.
• Solid experience in data engineering: you have built and maintained data pipelines before and understand the fundamentals of data orchestration, validation, and processing.
• A collaborative, service-oriented mindset: you enjoy helping others and understand the value of building platforms that enable other teams (that "Team Topologies" spirit).
• A genuine interest in DevOps and infrastructure: you are comfortable working close to the metal and believe that teams should own their services, from code to deployment (CI/CD, IaC).
• A pragmatic approach to technology: you understand that we must support existing codebases (like .NET or R) while building the future in Python.
• Professional fluency: you must be fluent in Spanish for daily team communication and professionally proficient in English for documentation and company-wide discussions.

🏖️ Benefits

• Competitive annual salary.
• An annual bonus based on individual and company performance.
• Flexible working hours.
• Possibility of remote work.


Similar Jobs

November 14

Senior Data Engineer designing and optimizing scalable data pipelines for machine learning solutions at Cotiviti. Collaborating across teams to deliver high-quality data products.

🗣️🇪🇸 Spanish Required

Airflow • AWS • Cloud • ETL • Hadoop • MySQL • Oracle • Spark • SQL

November 7

Nielsen

10,000+ employees

📱 Media

Design and automate essential data pipelines ensuring seamless integration for Nielsen's analytics. Collaborate with teams and uphold data integrity throughout its lifecycle.

Airflow • AWS • Cloud • EC2 • PySpark • Python • SDLC • SQL • Tableau

November 6

GCP Data Engineer in remote role for a leading data solutions company. Responsible for implementing data architectures and ETL processes with a focus on Google Cloud Platform.

🗣️🇪🇸 Spanish Required

Apache • BigQuery • Cloud • ETL • Google Cloud Platform • PySpark • Python • Spark • SQL

November 4

Data Engineer at Inetum designing and operating large-scale petabyte data systems. Building real-time data pipelines and collaborating in agile environments to deliver scalable solutions.

🗣️🇪🇸 Spanish Required

Airflow • AWS • Azure • Cassandra • Google Cloud Platform • Hadoop • HBase • Java • Kafka • Oracle • Python • Spark • SQL

November 4

Data Engineer developing and maintaining data pipelines for a global agile consultancy. Utilizing Modern Data Stack with expertise in Snowflake and Azure Data Factory.

🇲🇽 Mexico – Remote

💵 $50k - $65k / month

💰 Post-IPO Equity on 2007-03

⏰ Full Time • 🟡 Mid-level • 🟠 Senior • 🚰 Data Engineer • 🗣️🇪🇸 Spanish Required

Azure • ETL • ITSM • Python • SDLC • ServiceNow • SQL
