
Artificial Intelligence • SaaS • B2B
Technosylva is a provider of AI-driven wildfire and extreme weather risk mitigation software that delivers real-time forecasting, predictive simulations, and incident management tools for electric utilities, fire agencies, and insurers. Their cloud-based products (Wildfire Analyst, Tactical Analyst, fiResponse) offer situational awareness, operational decision support, and risk quantification to help customers plan, operate, and respond to wildfire and severe weather events.
51 - 200 employees
Founded 1997
🤖 Artificial Intelligence
☁️ SaaS
🤝 B2B
November 17
🗣️🇪🇸 Spanish Required

• As a Data Engineer on our team, your work will focus on evolving our data infrastructure.
• A large part of our current work involves migrating existing data pipelines (many based on Windows services) to this new, modern, and scalable platform.
• Your day-to-day work will involve:
• Designing, building, and maintaining robust data pipelines on our new platform.
• Orchestrating complex workflows that process massive volumes of data, primarily in batch, with some pseudo-real-time needs.
• Handling a significant and fascinating geospatial data component, including its specific file formats and processing challenges.
• Collaborating closely with our Science teams to adapt their calculation models (which may come in Python, R, or .NET) so they can be validated, monitored, and scaled effectively within our production pipelines.
• Contributing to our DevOps culture by working closely with the Platform team, including managing infrastructure as code (Terraform) and building and maintaining our CI/CD pipelines in GitLab.
• Helping our organization on its journey to democratize data access for everyone.
• A strong foundation in Python as a primary language for data processing and backend development.
• Solid experience in data engineering: you have built and maintained data pipelines before and understand the fundamentals of data orchestration, validation, and processing.
• A collaborative, service-oriented mindset: you enjoy helping others and understand the value of building platforms that enable other teams (that "Team Topologies" spirit).
• A genuine interest in DevOps and infrastructure: you are comfortable working close to the metal and believe that teams should own their services, from code to deployment (CI/CD, IaC).
• A pragmatic approach to technology: you understand that we must support existing codebases (such as .NET or R) while building the future in Python.
• Professional fluency: you must be fluent in Spanish for daily team communication and professionally proficient in English for documentation and company-wide discussions.
• Competitive annual salary.
• An annual bonus based on individual and company performance.
• Flexible working hours.
• Possibility of remote work.
November 14
Senior Data Engineer designing and optimizing scalable data pipelines for machine learning solutions at Cotiviti. Collaborating across teams to deliver high-quality data products.
🗣️🇪🇸 Spanish Required
November 7
Design and automate essential data pipelines ensuring seamless integration for Nielsen's analytics. Collaborate with teams and uphold data integrity throughout its lifecycle.
November 6
GCP Data Engineer in remote role for a leading data solutions company. Responsible for implementing data architectures and ETL processes with a focus on Google Cloud Platform.
🗣️🇪🇸 Spanish Required
November 4
Data Engineer at Inetum designing and operating large-scale petabyte data systems. Building real-time data pipelines and collaborating in agile environments to deliver scalable solutions.
🗣️🇪🇸 Spanish Required
November 4
Data Engineer developing and maintaining data pipelines for a global agile consultancy. Utilizing Modern Data Stack with expertise in Snowflake and Azure Data Factory.
🇲🇽 Mexico – Remote
💵 $50k - $65k / month
💰 Post-IPO Equity on 2007-03
⏰ Full Time
🟡 Mid-level
🟠 Senior
🚰 Data Engineer
🗣️🇪🇸 Spanish Required