
Cybersecurity • Security • Enterprise
WatchGuard Technologies is a cybersecurity company specializing in comprehensive security solutions for businesses and managed service providers (MSPs). Its offerings include network security products such as firewalls and secure Wi-Fi, endpoint security with detection and response capabilities, and identity security featuring multi-factor authentication. WatchGuard's Unified Security Platform integrates these services to deliver efficient, scalable, and automated security management. The company focuses on simplifying cybersecurity with AI-driven technologies and threat intelligence, making it accessible and effective across a range of industries.
May 13
Airflow
Amazon Redshift
Apache
AWS
Azure
Cloud
ETL
Google Cloud Platform
Hadoop
Kafka
Python
Spark
SQL
Tableau
Terraform

We are looking for an experienced and passionate Senior Data Engineer to join our growing data team. In this role, you will design, develop, and maintain scalable data pipelines and systems that support a wide range of analytics and business intelligence solutions. You will work closely with cross-functional teams, including data scientists, analysts, and engineers, to provide data solutions that drive key business decisions. The ideal candidate has strong experience in data architecture, ETL/ELT processes, cloud technologies, and data warehousing.

• Design, develop, and optimize data pipelines to extract, transform, and load (ETL/ELT) data from a variety of sources.
• Build and manage data models and data warehouses that support business intelligence, reporting, and analytics needs.
• Leverage cloud platforms such as AWS, Azure, or Google Cloud Platform to build scalable, reliable, and efficient data solutions.
• Develop and maintain automated data workflows using tools like Airflow, AWS Glue, Azure Data Factory, or similar technologies.
• Work with large datasets and complex data structures, ensuring data quality, integrity, and performance.
• Write and optimize SQL queries for complex data extraction, aggregation, and transformation tasks.
• Integrate APIs to connect data sources, extract information, and facilitate real-time data processing.
• Collaborate with business intelligence and data science teams to define data requirements and ensure the availability of clean, accurate data for analysis and decision-making.
• Implement CI/CD pipelines for automated deployment of data pipelines and models.
• Monitor the performance of data systems, ensuring the reliability, availability, and scalability of data architectures.
• Create and maintain comprehensive documentation for data pipelines, systems, and processes.
• Stay up to date with emerging trends and technologies in data engineering and continuously improve data systems.
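The ETL/ELT work described above can be sketched as a minimal pipeline in plain Python. This is a hypothetical illustration — the function names and sample data are invented, and a production pipeline would typically run each step as a task under an orchestrator such as Airflow:

```python
# Minimal ETL sketch: extract records, transform them, load into a target store.
# All names and data here are illustrative, not from any real system.

def extract() -> list[dict]:
    # In practice this step would read from an API, database, or file source.
    return [
        {"user_id": 1, "amount": "19.90", "currency": "USD"},
        {"user_id": 2, "amount": "5.00", "currency": "USD"},
        {"user_id": 1, "amount": "7.50", "currency": "USD"},
    ]

def transform(rows: list[dict]) -> list[dict]:
    # Cast string amounts to floats and aggregate spend per user —
    # a typical transformation/aggregation step.
    totals: dict[int, float] = {}
    for row in rows:
        totals[row["user_id"]] = totals.get(row["user_id"], 0.0) + float(row["amount"])
    return [
        {"user_id": uid, "total_spend": round(total, 2)}
        for uid, total in sorted(totals.items())
    ]

def load(rows: list[dict], target: list) -> None:
    # Stand-in for writing to a warehouse table (e.g. Redshift or Snowflake).
    target.extend(rows)

warehouse: list[dict] = []
load(transform(extract()), warehouse)
print(warehouse)
# [{'user_id': 1, 'total_spend': 27.4}, {'user_id': 2, 'total_spend': 5.0}]
```

In a real deployment each function would become its own scheduled, retryable task so that failures can be isolated and re-run independently.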
• Solid experience as a Data Engineer or in a similar role focused on data architecture and pipeline development.
• Strong experience with cloud platforms such as AWS, Azure, or Google Cloud.
• Advanced knowledge of ETL/ELT processes, data modeling, and data warehousing (e.g., Snowflake, Redshift).
• Proficiency in SQL for complex data transformation and querying.
• Hands-on experience with data pipeline orchestration tools like Azure Data Factory, Apache Airflow, AWS Glue, or similar.
• Strong programming skills in Python for automation, data processing, and integration tasks.
• Experience with big data technologies such as Hadoop, Spark, or Kafka is a plus.
• Familiarity with GitHub for version control and CI/CD pipelines for deployment automation.
• Strong understanding of data security, governance, and compliance best practices.
• Experience with business intelligence tools such as Tableau, Power BI, or similar for reporting and data visualization.
• Ability to work in an agile, fast-paced environment and manage multiple tasks simultaneously.
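As a sketch of the kind of SQL transformation work the requirements call for, the snippet below runs a windowed aggregation against an in-memory SQLite database. The table and column names are invented for the example; the same query shape (a `SUM(...) OVER (PARTITION BY ...)` window) carries over to warehouses such as Redshift or Snowflake, assuming an SQLite build with window-function support (3.25+):

```python
import sqlite3

# Hypothetical orders table for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'acme',   100.0),
        (2, 'acme',   250.0),
        (3, 'globex',  75.0);
""")

# Running total per customer: a common aggregation/transformation pattern
# expressed with a window function instead of a self-join.
rows = conn.execute("""
    SELECT customer,
           order_id,
           SUM(amount) OVER (
               PARTITION BY customer ORDER BY order_id
           ) AS running_total
    FROM orders
    ORDER BY customer, order_id
""").fetchall()

for row in rows:
    print(row)
# ('acme', 1, 100.0)
# ('acme', 2, 350.0)
# ('globex', 3, 75.0)
```

Window functions keep the per-row detail while layering the aggregate on top, which is why they show up so often in reporting and BI transformation work.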
• Competitive salary and comprehensive benefits package.
• Opportunities for career growth and professional development.
May 13
Join Spassu as a Data Engineer. Work remotely on innovative technology projects.
🗣️🇧🇷🇵🇹 Portuguese Required
Apache
AWS
Azure
Caffe
Cassandra
Cloud
Docker
DynamoDB
ETL
Google Cloud Platform
Grafana
Hadoop
HBase
HDFS
Java
Jenkins
Keras
MapReduce
Maven
MongoDB
MS SQL Server
Node.js
NoSQL
OpenShift
Oracle
Postgres
Python
Scala
Spark
SQL
Subversion
Tableau
TensorFlow
Terraform
Unix
Yarn
May 8
Join Spassu as a Senior Data Engineer. Work remotely on a major technology project.
🗣️🇧🇷🇵🇹 Portuguese Required
Airflow
Azure
ETL
Node.js
Python
Spark
SQL
April 27
Design, build, and optimize data pipelines for AI solutions at Distrito.
🗣️🇧🇷🇵🇹 Portuguese Required
Airflow
Apache
AWS
Azure
Cassandra
Cloud
Docker
ETL
Google Cloud Platform
Kafka
Kubernetes
MongoDB
MySQL
NoSQL
Pandas
Postgres
PySpark
Python
RabbitMQ
Redis
Spark
SQL
Terraform
April 26
Join inQuesti as a Data Engineer to work on data solutions and integrations.
🗣️🇧🇷🇵🇹 Portuguese Required
Airflow
BigQuery
Cloud
Docker
ETL
Google Cloud Platform
Python
SQL
April 26
Global company is looking for a Data Engineer/Developer focusing on data solutions and automation.
🗣️🇧🇷🇵🇹 Portuguese Required
Airflow
Apache
Azure
Django
Docker
ETL
Flask
Google Cloud Platform
Linux
NoSQL
Python
Spark
SQL
Vault