
Data Engineer (AWS / Azure / GCP | Python | PySpark | ETL Pipelines)

at Tech Scalerz

📍 Location: Remote (US)
💰 Salary: $60 – $70 USD / year
Type: Contract
Experience Level: Mid
Years of Experience: Not specified
Education: Not specified
Category: Data & Analytics
Posted: April 1, 2026

Data Engineer responsible for designing and maintaining scalable data pipelines and ETL processes across cloud platforms. The role emphasizes Python/PySpark development, cloud-native services, and data quality governance.

  • Data Pipeline Development
  • Cloud & Data Architecture
  • Data Modeling & Querying
  • Data Quality & Operations
  • Collaboration & DevOps

Backend data engineering role focusing on Python, PySpark, and cloud data platforms (AWS, Azure, GCP). Key activities include building ETL pipelines, data modeling in cloud warehouses, and implementing CI/CD for data workflows.

The ideal candidate is a data engineer with 3+ years of Python and PySpark experience, proficient in building scalable ETL pipelines across cloud platforms (AWS, Azure, or GCP). They should be comfortable working in a remote setup and optimizing data modeling and queries in cloud data warehouses.
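As a rough illustration of the batch ETL work this role centers on, a minimal PySpark pipeline might look like the sketch below. The bucket, paths, columns, and schema are illustrative assumptions, not details taken from this posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw JSON events from cloud object storage
# (S3 shown; ADLS and GCS are analogous).
raw = spark.read.json("s3a://example-bucket/raw/orders/")

# Transform: enforce types, drop invalid rows, derive a partition column.
orders = (raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .where(F.col("order_id").isNotNull() & (F.col("amount") > 0))
    .withColumn("order_date", F.to_date("order_ts")))

# Load: partitioned Parquet that a warehouse (Redshift Spectrum, BigQuery
# external tables, Synapse) can query or ingest downstream.
(orders.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://example-bucket/curated/orders/"))
```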

Requirements:
  • Strong programming experience in Python and PySpark
  • Hands-on experience with at least one cloud platform: AWS, Azure, or GCP
  • Experience with cloud-native data services (AWS, Azure, GCP)
  • Strong SQL skills and experience with data modeling and query optimization
  • Experience building and maintaining ETL pipelines and data workflows
  • Familiarity with data quality frameworks, monitoring, and alerting (see the sketch after this list)
  • Experience with Git and CI/CD workflows
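For illustration, the data quality requirement could take the shape of a lightweight validation gate run at the end of a pipeline. This is a hypothetical sketch, not a named framework from the posting; the columns and thresholds are assumptions:

```python
from pyspark.sql import DataFrame, functions as F

def null_rate(df: DataFrame, column: str) -> float:
    """Return the fraction of rows where `column` is null."""
    total = df.count()
    if total == 0:
        return 0.0
    return df.where(F.col(column).isNull()).count() / total

def run_quality_checks(df: DataFrame) -> None:
    # Hypothetical per-column null-rate thresholds; tune per dataset.
    checks = {"order_id": 0.0, "amount": 0.01}
    for column, max_rate in checks.items():
        rate = null_rate(df, column)
        if rate > max_rate:
            # Failing the job lets the scheduler's alerting surface the issue.
            raise ValueError(
                f"Quality check failed: {column} null rate {rate:.2%} "
                f"exceeds threshold {max_rate:.2%}"
            )
```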
Nice to have:
  • Experience across multiple cloud platforms (AWS, Azure, GCP)
  • Data lake architectures (S3, ADLS, GCS)
  • Spark optimization and distributed systems
  • Streaming data pipelines (sketched below)
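The streaming item above might, purely as a sketch, use Spark Structured Streaming. The Kafka topic, broker address, schema, and storage paths here are hypothetical, and the Kafka source requires the spark-sql-kafka connector on the classpath:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Assumed event schema for the illustration.
schema = (StructType()
    .add("order_id", StringType())
    .add("amount", DoubleType())
    .add("order_ts", TimestampType()))

# Read JSON events from a (hypothetical) Kafka topic.
events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*"))

# Sink to object storage; the checkpoint gives restartable file output.
query = (events.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/stream/orders/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/orders/")
    .trigger(processingTime="1 minute")
    .start())
query.awaitTermination()
```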
Tools & Platforms: AWS, Amazon Redshift, Azure Data Factory, Azure Synapse Analytics, Google Cloud Platform, BigQuery, Pub/Sub, Dataflow, S3, ADLS, GCS, Git, CI/CD
Key Skills: Python, PySpark, ETL pipelines, SQL, data modeling, Redshift (AWS), BigQuery (GCP), Synapse Analytics (Azure), Data Factory, Dataflow, Pub/Sub, cloud-native data services, Git, CI/CD
Soft Skills: Collaboration, Problem-solving, Communication, Teamwork
Industry: SaaS
Job Function: Develop and maintain cloud-based data pipelines and ETL systems
Role Subtype: Data Engineer
Tech Domains: Amazon Web Services, Python, SQL / PostgreSQL, Google Cloud Platform, Microsoft 365
Tags: data engineer, aws, amazon web services, azure, gcp, pyspark, python, etl developer, data pipelines, redshift, bigquery, synapse, data factory, dataflow, pub/sub, cloud data engineer, spark, data lake, serverless, git, etl

Disqualifiers: No Python or PySpark experience; no cloud platform experience (AWS, Azure, or GCP); no ETL or data pipeline experience
