
SaaS • B2B • Enterprise
Thaloz is a tech consulting and outsourcing company focused on cloud-native product development and engineering team outsourcing. They offer tailored software development solutions, leveraging top-tier talent from Latin America. Thaloz provides a variety of services such as custom product development, staff augmentation, and enterprise solutions. Their approach emphasizes people-focused consulting and remote work culture, ensuring seamless integration of skilled IT professionals into their clients' projects. With expertise in diverse technologies, Thaloz offers end-to-end support from strategy to launch, fostering smooth communication and collaboration.
51 - 200 employees
☁️ SaaS
🤝 B2B
🏢 Enterprise
August 22

• Senior Data Engineer responsible for designing, building, and maintaining scalable and reliable data pipelines and ETL processes for the Credit Platform Data team.
• Architect, develop, and maintain scalable data pipelines and ETL workflows that support the ingestion, transformation, and storage of large datasets from diverse sources.
• Implement automated data quality checks and validation processes to ensure the accuracy, consistency, and reliability of data across systems.
• Work closely with product managers, data analysts, and business stakeholders to gather and understand data requirements, translating them into technical specifications and actionable engineering tasks.
• Continuously monitor and optimize data systems for performance, scalability, and cost-efficiency, ensuring that data infrastructure meets evolving business needs.
• Diagnose and resolve data-related issues promptly, providing root cause analysis and implementing preventive measures.
• Maintain comprehensive documentation of data pipelines, ETL processes, and system architecture. Participate in design and code reviews to uphold high engineering standards.
• Stay abreast of emerging data engineering technologies, tools, and best practices to drive innovation and continuous improvement within the team.
• Provide guidance and mentorship to junior data engineers, fostering a culture of knowledge sharing and technical excellence.
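The automated data-quality responsibility above can be sketched in plain Python. This is an illustrative assumption, not the team's actual implementation: the field names (`loan_id`, `amount`) and the two checks (completeness and key uniqueness) are invented for the example.

```python
# Hypothetical sketch of automated data-quality checks of the kind the
# posting describes; record fields and check names are illustrative.

def run_quality_checks(rows, required_fields, unique_key):
    """Return a dict mapping check name -> indices of offending rows."""
    issues = {"missing_field": [], "duplicate_key": []}
    seen_keys = set()
    for i, row in enumerate(rows):
        # Completeness: every required field must be present and non-null.
        if any(row.get(f) is None for f in required_fields):
            issues["missing_field"].append(i)
        # Uniqueness: the business key must not repeat across rows.
        key = row.get(unique_key)
        if key in seen_keys:
            issues["duplicate_key"].append(i)
        seen_keys.add(key)
    return issues

records = [
    {"loan_id": 1, "amount": 1000.0},
    {"loan_id": 2, "amount": None},   # fails the completeness check
    {"loan_id": 1, "amount": 500.0},  # fails the uniqueness check
]
report = run_quality_checks(records, ["loan_id", "amount"], "loan_id")
```

In a production pipeline the same pattern would typically run inside the orchestrator after each load step, failing the job or quarantining rows when a check trips.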
• Bachelor's degree in Computer Science, Engineering, or a related field.
• SQL: Expert-level proficiency in SQL for querying, manipulating, and optimizing relational databases. Ability to write complex queries, optimize performance, and work with large datasets efficiently.
• Python: Strong programming skills in Python, including experience with data processing libraries such as Pandas. Ability to develop robust, maintainable, and scalable data processing scripts and automation tools.
• PySpark: Proficient in using PySpark for distributed data processing on large-scale datasets. Experience with Spark's DataFrame API, RDDs, and performance tuning in a big data environment.
• ETL (Extract, Transform, Load): Deep understanding of ETL concepts and hands-on experience designing and implementing ETL pipelines that ensure data integrity and efficiency.
• Data Modeling: Expertise in data modeling techniques to design logical and physical data models that support efficient querying and reporting. Familiarity with normalization, denormalization, and schema design best practices.
• Relational Databases: Experience working with relational database management systems (RDBMS) such as Oracle, MySQL, or similar platforms. Knowledge of database design, indexing, and query optimization.
• Data Warehousing: Solid understanding of data warehousing concepts, architectures, and best practices. Experience building and maintaining data warehouses that support business intelligence and analytics.
• Unix/Linux: Proficiency in Unix/Linux operating systems for managing data workflows, scripting, and system monitoring.
• Shell Scripting: Ability to write shell scripts to automate routine tasks, manage data pipelines, and integrate with other system components.
• Automation Testing: Experience implementing automated testing frameworks for data pipelines and ETL processes to ensure data quality and system reliability.
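The SQL and ETL skills in the list above boil down to the extract-transform-load pattern. A minimal sketch using SQLite from the Python standard library (the table and column names here are invented, and a real pipeline would target an RDBMS such as Oracle or MySQL as the posting notes):

```python
# Minimal ETL sketch: extract raw rows, transform them in SQL, and load
# the result into a reporting table. All schema names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE raw_payments (id INTEGER, amount_cents INTEGER, status TEXT)"
)
conn.executemany(
    "INSERT INTO raw_payments VALUES (?, ?, ?)",
    [(1, 1250, "settled"), (2, 800, "failed"), (3, 4300, "settled")],
)

# Extract + transform: keep only settled payments, convert cents to dollars.
rows = conn.execute(
    "SELECT id, amount_cents / 100.0 AS amount_usd "
    "FROM raw_payments WHERE status = 'settled'"
).fetchall()

# Load: write the cleaned rows into the reporting table.
conn.execute("CREATE TABLE fact_payments (id INTEGER, amount_usd REAL)")
conn.executemany("INSERT INTO fact_payments VALUES (?, ?)", rows)

total = conn.execute("SELECT SUM(amount_usd) FROM fact_payments").fetchone()[0]
```

At the scale the role describes, the same transform would run as a PySpark DataFrame job rather than a single-node query, but the extract/transform/load shape is identical.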
• Professional Experience: At least 3 years of proven experience as a Data Engineer or in a similar role, with a strong background in database development, ETL processes, and software development.
August 22
Data Engineer mapping business processes and implementing PowerApps/SharePoint for GFT. Documenting rules, supporting data governance and process automation.
🗣️🇧🇷🇵🇹 Portuguese Required
August 21
Senior Data Engineer, remote; works with Databricks/Delta Lake, Unity Catalog, and data pipelines, driving Lakehouse data solutions.
🗣️🇧🇷🇵🇹 Portuguese Required
August 21
RunTalent Senior Data Engineer; designs scalable data pipelines with Databricks, Delta Lake and Unity Catalog, enabling data-driven decisions.
🗣️🇧🇷🇵🇹 Portuguese Required
August 21
Senior Data Engineer, remote; works with Databricks, PySpark, and Delta Lake; involves data governance and pipelines.
🗣️🇧🇷🇵🇹 Portuguese Required
August 21
Senior Data Engineer at Wellhub (CARE), remote in Brazil; builds data models and pipelines for AI-powered user experiences.
🇧🇷 Brazil, Remote
💰 $5.4M Venture Round on 2021-12
⏰ Full Time
Senior
Data Engineer