Senior Data Warehouse Engineer, Finance Data Mart

October 29

🗣️🇨🇳 Chinese Required


Binance

Crypto • Fintech

Binance is the world's leading cryptocurrency exchange, serving over 235 million registered users across more than 180 countries. The platform offers a wide array of services, including the trading of over 350 cryptocurrencies in Spot, Margin, and Futures markets. Users can also buy and sell crypto via Binance P2P, earn interest through Binance Earn, and engage in NFT trading on the Binance NFT marketplace. Binance provides low transaction fees and diverse payment options, making it a preferred choice for cryptocurrency enthusiasts worldwide.

1001 - 5000 employees

Founded 2017

₿ Crypto

💳 Fintech

💰 Initial Coin Offering in December 2020

📋 Description

• Design and build a scalable and flexible data warehouse system in line with the company’s standards and business requirements, enabling rapid support for analytical needs and reducing redundant development efforts.

• Responsible for data model design, development, testing, deployment, and real-time monitoring of data jobs, with strong capability in troubleshooting complex issues, optimizing calculation logic, and enhancing performance.

• Contribute to data governance initiatives, including the development of the company’s metadata management and data quality monitoring systems.

• Support technical team development through knowledge sharing, continuous learning, and skill enhancement, fostering the team’s overall growth and expertise.

🎯 Requirements

• 5+ years of hands-on experience in data engineering, data lake, or data warehouse development.

• Deep expertise in data warehouse modeling and governance, including dimensional modeling, information factory (data vault) methodologies, and “one data” principles.

• Proficiency in at least one of Java, Scala, or Python, plus strong Hive and Spark SQL programming skills.

• Practical experience with OLAP engines (e.g., Apache Kylin, Impala, Presto, Druid) and real-time serving systems.

• Proven track record building both high-throughput batch pipelines and low-latency streaming pipelines (e.g., Flink, Kafka), with production SLAs for stability, availability, and sub-second freshness.

• Familiarity with core Big Data technologies (Hadoop, Hive, Spark, Flink, Delta Lake, Hudi, Presto, HBase, Kafka, Zookeeper, Airflow, Elasticsearch, Redis).

• Experience in Web3 data domains (on-chain/off-chain data, token/transaction/holder analytics) and the ability to design data services powering online applications.

• AWS Big Data service experience is a plus.

• Strong analytical and system-design capabilities, with the ability to translate business requirements into scalable, high-quality data architecture.

• Collaborative mindset, skilled at building partnerships across teams and stakeholders.

• Preferred: experience managing petabyte-scale data in Internet environments and resolving critical real-time production incidents.

• Bilingual English/Mandarin is required in order to coordinate with overseas partners and stakeholders.

🏖️ Benefits

• Competitive salary and company benefits

• Work-from-home arrangement (the arrangement may vary depending on the work nature of the business team)

