
B2B • Artificial Intelligence • Fintech
Newfire Global Partners is an American IT services and advisory firm (founded in 2016 in Boston) that provides talent augmentation, software engineering, data & analytics, and AI/ML advisory to enterprise and investor clients. The company offers multidisciplinary engineering, product, and marketing teams, technical due diligence for VC/PE, data platform optimization, and an internal ML tool (Novel Heat) to improve code quality and Scrum velocity. Newfire operates across the Americas, Europe, and APAC with a 24x5 follow-the-sun delivery model and focuses on sectors such as digital healthcare, fintech, cybersecurity, and education technology.
501 - 1000 employees
Founded 2016
🤝 B2B
🤖 Artificial Intelligence
💳 Fintech
October 23

• Design and implement robust ETL/ELT pipelines for ingesting and processing diverse data formats.
• Develop dbt-based transformations to evolve data from raw to curated layers, ensuring performance, consistency, and schema compliance.
• Build monitoring and alerting solutions to improve visibility into data quality, pipeline health, and operational metrics.
• Optimize data workloads for scalability, cost-efficiency, and reliability.
• Collaborate with cross-functional teams to validate and integrate new data sources.
• Manage batch and streaming pipelines across multiple formats (e.g., CSV, JSON, HL7, FHIR).
• Contribute to data architecture decisions involving schema design, data validation, and metadata management.
• Ensure best practices around version control, testing, and production releases.
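As a rough illustration of the raw-to-curated and data-quality work described above, here is a minimal Python sketch of a batch validation step. All paths, field names, and the JSON-lines format are hypothetical; in practice this logic would live in dbt models or an Airflow/Dagster task, not a standalone script.

```python
import json
from pathlib import Path

# Hypothetical required schema for a curated healthcare record.
REQUIRED_FIELDS = {"patient_id", "observation", "recorded_at"}

def validate(record: dict) -> bool:
    """Schema-compliance check: every required field must be present."""
    return REQUIRED_FIELDS.issubset(record)

def curate(raw_path: str) -> tuple[list[dict], int]:
    """Promote raw JSON-lines records to the curated layer.

    Returns the valid records plus a reject count, which a real
    pipeline would emit as a data-quality metric for alerting.
    """
    curated, rejected = [], 0
    for line in Path(raw_path).read_text().splitlines():
        record = json.loads(line)
        if validate(record):
            curated.append(record)
        else:
            rejected += 1  # surfaced to monitoring, not silently dropped
    return curated, rejected
```

The reject counter stands in for the pipeline-observability requirement: rather than failing the whole batch on one bad record, invalid rows are counted and reported so operators can see data-quality drift.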
• 5+ years of professional experience designing and maintaining ETL/data pipeline systems in modern cloud environments.
• Strong proficiency in SQL and Python, with hands-on experience in tools such as dbt, Airflow, Dagster, or similar orchestration frameworks.
• Deep familiarity with Databricks or other cloud data warehouse technologies.
• Experience working with structured and semi-structured data, including healthcare-related formats (HL7, FHIR, JSON, CSV).
• Working knowledge of Spark, Kafka, or similar distributed data processing systems.
• Strong understanding of data modeling, data governance, and pipeline observability principles.
• Ability to take ownership of production workflows, including monitoring, troubleshooting, and stakeholder communication.
• Excellent communication skills and a collaborative mindset, comfortable working in a remote-first, Agile environment.
September 17
Lead development of scalable PySpark ETL and AWS data lake architecture to power 8020REI's AI real estate analytics and ML models.
🗣️🇪🇸 Spanish Required
August 26
Data Engineer at Softgic building scalable streaming platforms, data warehouses, and ETL pipelines. Work with Databricks, Spark, Python/Scala to process large-scale datasets.
🗣️🇪🇸 Spanish Required
August 14
Senior Data Pipeline Developer at Ansira builds and optimizes real-time data workflows in cloud-native environments. Works with Spring Cloud Data Flow, Kafka, CockroachDB, and Kubernetes.
August 9
Data Architect with a consultative mindset for a leading consulting firm in financial solutions. Design, model, and visualize data; fully remote work with international clients.
🗣️🇪🇸 Spanish Required
August 1
Topsort is seeking a Senior Data Engineer to build data pipelines and optimize data solutions.