Next-Gen Fintech Automation
Digital Transformation • Cloud Enablement • Enterprise Integration • Digital Customer Experience • Data-Powered Insights
51 - 200
💰 Venture Round on 2016-12
Amazon Redshift
AWS
Azure
Cloud
Docker
ETL
Hadoop
Java
Kafka
Kubernetes
MySQL
Python
PyTorch
Scala
Spark
SQL
TensorFlow
• Design, develop, and maintain scalable, robust data pipelines to efficiently collect, process, and store data from various sources.
• Develop and maintain data models and schemas to support analytics and reporting requirements. Ensure data integrity, consistency, and optimization for performance.
• Implement Extract, Transform, Load (ETL) processes to cleanse, transform, and enrich raw data into usable formats for analysis and consumption.
• Integrate data from multiple sources such as databases, APIs, and streaming platforms. Ensure seamless data flow across systems while adhering to data governance and security standards.
• Identify opportunities to optimize data pipelines and processes for improved performance, scalability, and efficiency. Monitor system performance and implement optimizations as needed.
• Develop and implement data quality checks and validation processes to ensure accuracy, completeness, and consistency of data.
• Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs. Communicate effectively with cross-functional teams to gather requirements and provide updates on project status.
• Create and maintain technical documentation, including data pipeline workflows, data dictionaries, and system architecture diagrams. Ensure documentation is up to date and accessible to relevant stakeholders.
• Stay current on emerging trends, technologies, and best practices in data engineering and related fields. Continuously enhance skills and knowledge to drive innovation and efficiency in data management.
• Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
• 8+ years of experience in data engineering or related roles.
• Proficiency in programming languages such as Python, Java, or Scala.
• Experience with big data technologies such as Hadoop, Spark, or Kafka.
• Strong understanding of SQL and database systems (e.g., MySQL, PostgreSQL).
• Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and related services (e.g., S3, Redshift, BigQuery).
• Familiarity with data warehousing concepts and tools (e.g., Snowflake, Amazon Redshift).
• Experience with version control systems (e.g., Git) and CI/CD pipelines.
• Strong analytical and problem-solving skills with attention to detail.
• Excellent communication and collaboration skills.
• Ability to work independently and as part of a team in a fast-paced environment.
• Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
• Knowledge of machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).
• Competitive salary
• Comprehensive health and dental benefits
• Flexible work hours and work-from-home options
• Opportunities for professional growth and development
Apply Now