Job Description
Data Engineer
Job Location:  Taguig City
Location Flexibility:  Primary Location Only
Req Id:  6127
Posting Start Date:  3/3/26

At Fujitsu, we've been driven to create a sustainable world through innovation since 1935. Today, we are a global leader in digital transformation, with 130,000 employees across 50+ countries. We empower our diverse community to achieve greatness through career development and growth opportunities. Explore our internal positions and join us in shaping a brighter future. Thank you for being a part of Fujitsu. We look forward to growing together.

 

Data Engineer
Location: BGC, Taguig
Employment Type: Full-time
Work Setup: Hybrid
Department: Data & Analytics / IT

About the Role
We are seeking a highly motivated Data Engineer to support the design, development, and enhancement of our data platform and analytics environment. The ideal candidate will have strong SQL skills, experience working with modern data engineering tools, and familiarity with Azure and Databricks-based ecosystems. This role involves building scalable data pipelines, transforming datasets, and contributing to data initiatives across the organization.


Key Responsibilities

  • Design, build, and maintain ETL/ELT data pipelines to support analytics, reporting, and business operations.
  • Develop and optimize data models including fact and dimension tables in alignment with data warehouse best practices.
  • Create and maintain Databricks notebooks using PySpark and Spark SQL for complex transformations.
  • Write and optimize intermediate to complex SQL queries for large-scale datasets.
  • Collaborate with agile development teams (Azure DevOps/Jira) to deliver data engineering solutions.
  • Implement and support Azure-based data components including Azure SQL, Synapse, Databricks, Storage Accounts, Key Vault, and other cloud services.
  • Participate in sprint-based team development, contributing to enhancements beyond day-to-day support work.
  • (Preferred) Develop simple dashboards using Power BI, Tableau, or Qlik as needed.

Minimum Qualifications

  • Knowledge of data warehousing concepts, including ETL processes, fact tables, and dimension tables.
  • Experience writing intermediate SQL, including window functions, aggregations, and subqueries.
  • Hands-on experience with Python or PySpark, including Spark DataFrames or pandas.
  • Basic understanding of data engineering concepts and the benefits of notebook-based development.
  • Familiarity with core Azure services such as storage accounts and PaaS resources (e.g., Azure SQL).
  • Experience with at least one ETL tool: SSIS, Talend, or DataStage.
  • Experience collaborating with a team on enhancements to a data platform or data warehouse solution.

Preferred Qualifications

  • Experience in intermediate data warehouse environments (5+ fact tables, 5+ dimensions).
  • Knowledge of Data Lakehouse concepts.
  • Ability to write complex SQL and apply performance tuning; understanding of massively parallel processing (MPP).
  • Strong hands-on experience with Databricks (PySpark notebooks, Spark SQL notebooks).
  • Experience with Azure Data & AI services: Synapse, Databricks, Storage Accounts, Key Vault, Resource Groups.
  • Experience developing Azure Data Factory pipelines.
  • Experience building dashboards and writing calculations (e.g., Power BI DAX).
  • Experience working in agile/scrum development teams using Azure DevOps or Jira.
  • Experience contributing to greenfield projects.
Relocation Supported:  No
Visa Sponsorship Approved:  No

At Fujitsu, we are committed to an inclusive recruitment process that values the diverse backgrounds and experiences of all applicants. We believe that hiring people from a wide variety of backgrounds makes us stronger, not only because it's the right thing to do, but also because it allows us to draw on a wider range of perspectives and life experiences.