Experience: 4–10 Years
Location: Remote / On-site / Hybrid
Employment Type: Full-time / Contract
Job Description
We are looking for an experienced Big Data Engineer to design, build, and maintain scalable data pipelines and big data platforms. The role focuses on processing large volumes of structured and unstructured data to support analytics, reporting, and data science initiatives.
Key Responsibilities
- Design, develop, and maintain large-scale data pipelines
- Build and optimize ETL/ELT processes for high-volume data ingestion
- Build and tune data processing workloads on distributed big data frameworks such as Hadoop and Spark
- Ensure data quality, reliability, and performance
- Collaborate with data scientists, analysts, and software engineers
- Implement data security, governance, and compliance standards
- Monitor and troubleshoot data pipelines and workflows
- Optimize data storage and processing costs
- Stay current with evolving big data technologies and best practices
Required Skills & Qualifications
- Strong experience with big data technologies (Hadoop, Spark, Hive, Kafka)
- Proficiency in programming languages such as Python, Scala, or Java
- Experience with SQL and NoSQL databases
- Hands-on experience with data streaming and real-time processing
- Experience with cloud platforms (AWS, Azure, GCP)
- Knowledge of data warehousing and data lakes
- Familiarity with workflow orchestration tools (Airflow, Oozie)
- Strong problem-solving and analytical skills
Preferred Qualifications
- Experience with cloud-native big data services (EMR, Databricks, BigQuery, Synapse)
- Knowledge of Delta Lake, Iceberg, or Hudi
- Experience with containerization and Kubernetes
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field
What We Offer
- Opportunity to work on large-scale, data-intensive systems
- Exposure to modern big data and cloud technologies
- Competitive compensation and benefits
- High-impact role with ownership and growth opportunities
- Collaborative and data-driven work environment