Big Data Engineer

Job not on LinkedIn

July 6


Orion Innovation

Enterprise • SaaS • Telecommunications

Orion Innovation is a digital transformation firm that partners with clients to drive innovation. With over 30 years of experience, Orion offers services in cloud and infrastructure management, cybersecurity, data and analytics, and enterprise platform solutions. The company works across sectors including telecommunications, media, professional services, financial services, industrial products, medtech, healthcare, and education, helping clients improve operations and enhance customer experiences. Its mission is to inspire and accelerate digital transformation and innovation.

5001 - 10000 employees

Founded 1992

🏢 Enterprise

☁️ SaaS

📡 Telecommunications

💰 Funding round: January 2015

📋 Description

• Perform requirement analysis and gathering, design, business impact analysis, gap analysis, estimation, development, coding, code review, unit testing, and deployment of the application.
• Design, build, install, configure, and support Hadoop (Big Data) environments.
• Design and implement big data pipelines to ingest and process data in real time, and monitor for missing or erroneous data in flight to ensure that data reaches the end system.
• Measure the performance of various APIs and optimize slow services to improve system responsiveness.
• Deliver functional enhancements to existing big data applications.
• Provide input on solution architecture based on evaluation and understanding of solution alternatives, frameworks, and products.
• Develop Hadoop batch jobs to extract data from multiple unstructured and structured sources and populate various repositories (Hadoop, Neo4j, MongoDB, Apache Solr).
• Interact with clients to elicit architectural and non-functional requirements such as performance, scalability, reliability, availability, and maintainability.
• Participate in designing the data model for migrating structured data from RDBMS into graph database and NoSQL database solutions.
• Develop near real-time data processing solutions using Kafka and Spark Streaming (see the sketch after this list).
• Participate in designing Spark architecture with Databricks and Structured Streaming.
• Set up Databricks on Microsoft Azure and configure Databricks workspaces for business analytics.
• Contribute to automating the build process with Jenkins and Ansible to achieve continuous integration and continuous deployment.
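As a rough illustration of the Kafka and Spark Structured Streaming work described above, the following minimal PySpark sketch reads a Kafka topic, parses JSON events, and writes the parsed stream to a sink. The broker address, topic name, event schema, and checkpoint path are illustrative assumptions rather than details from the posting, and the spark-sql-kafka connector package must be available on the Spark classpath.

```python
# Minimal sketch of a near real-time pipeline with Kafka + Spark Structured Streaming.
# Broker address, topic name, event schema, and checkpoint path are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = (SparkSession.builder
         .appName("kafka-ingest-sketch")
         .getOrCreate())

# Assumed schema for the incoming JSON messages.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
    StructField("event_time", TimestampType()),
])

# Read a Kafka topic as a streaming DataFrame.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker address
       .option("subscribe", "events")                      # assumed topic name
       .load())

# Kafka delivers the message value as bytes; cast it to a string and parse the JSON.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*"))

# Write the parsed stream to a sink; the console sink is used here only for illustration.
query = (events.writeStream
         .outputMode("append")
         .format("console")
         .option("checkpointLocation", "/tmp/checkpoints/events")  # assumed path
         .start())

query.awaitTermination()
```

In a real deployment the console sink would be replaced with a durable target such as Delta tables on Databricks or HDFS, in line with the Databricks and Hadoop items above.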

🎯 Requirements

• Master’s degree in Computer Science/Applications, Information Technology/Systems, or Electronics/Electrical Engineering, plus a minimum of 1 year of experience as a Big Data Engineer, Data Engineer, Data Engineering Specialist, Data Warehousing Analyst, or a related occupation.
• Must be willing to travel/relocate anywhere in the US.
• Work location: Edison, NJ and unanticipated locations throughout the US.


Similar Jobs

June 27

KIS Solutions

51 - 200

☁️ SaaS

🤖 Artificial Intelligence

A remote Junior Data Engineer role with a global leader in IT Services optimizing cloud infrastructure.

🇺🇸 United States – Remote

⏰ Full Time

🟢 Junior

🚰 Data Engineer

May 26

Givzey

2 - 10

💸 Finance

🤝 Non-profit

☁️ SaaS

Join Givzey as a Data Engineer to architect data pipelines for AI-driven fundraising solutions.

🇺🇸 United States – Remote

⏰ Full Time

🟢 Junior

🟡 Mid-level

🚰 Data Engineer

April 28

Meetsta

2 - 10

🏢 Enterprise

⚡ Productivity

🤝 B2B

Join Meetsta to design and develop next-gen mobile applications and improve user engagement.

🇺🇸 United States – Remote

💵 $1 - $2 / year

⏰ Full Time

🟢 Junior

🚰 Data Engineer

February 11

LigaData

51 - 200

🤖 Artificial Intelligence

📡 Telecommunications

🤝 B2B

Join the engineering team to implement data solutions and troubleshoot production data issues. Ideal for entry-level candidates seeking growth in data engineering.

🇺🇸 United States – Remote

⏰ Full Time

🟢 Junior

🚰 Data Engineer
