09 Dec Senior Microsoft Fabric Data Engineer
We are looking for a Senior Microsoft Fabric Data Engineer to join our data & analytics consulting team.
📍 Location: Hybrid (Cairo, Egypt) 🕒 Experience: 5–7 years
This role is designed for an experienced data professional who can architect, build, and optimize complex data pipelines and end-to-end workflows in Microsoft Fabric, with a strong foundation in Azure Synapse, Azure Data Factory, and enterprise data warehousing principles.
Key Responsibilities
– Design and Architect End-to-End Fabric Solutions: Lead architecture and implementation across Fabric components – Data Pipelines, Lakehouses, Dataflows Gen2, Warehouses, and Semantic Models.
– Develop and Optimize Data Pipelines: Design, orchestrate, and monitor complex workflows in Microsoft Fabric, Azure Data Factory (ADF), and Azure Synapse Analytics.
– Data Modeling & Transformation: Apply advanced SQL and data modeling techniques (star schema, snowflake, normalization, SCD, CDC, etc.) to build scalable, maintainable data models.
– Implement best practices in ELT/ETL, data ingestion, transformation, partitioning, and performance tuning.
– Leverage Delta/Parquet formats, incremental loading, and metadata-driven ingestion patterns.
– Work closely with Data Architects, Business Analysts, and BI Developers to align technical design with business goals.
Required Skills & Experience
Microsoft Fabric: Deep hands-on experience with Data Pipelines, Lakehouses, Warehouses, Dataflows Gen2, and OneLake integration.
Azure Data Factory: Mastery in building, orchestrating, and monitoring large-scale pipelines.
Azure Synapse Analytics: Proficiency in dedicated SQL pools, serverless queries, partitioning, and performance optimization.
SQL Mastery: Advanced SQL development (CTEs, window functions, dynamic SQL, tuning, indexing).
Data Lake Architecture: Delta Lake, Parquet, partitioning strategies, and medallion (Bronze/Silver/Gold) design.
Data Engineering Concepts: ELT vs. ETL, incremental and full-load design, orchestration and monitoring, data quality and validation frameworks, parallelism and pipeline optimization.
Data Warehousing Concepts: Star/Snowflake schema design, SCDs, data modeling, data governance, and security layers (Row-Level Security, sensitivity labels).
Performance Optimization: Analyze workloads and optimize pipelines, partitioning, caching, and cost-performance balance.
Preferred Qualifications
Experience: 5–7 years in data engineering and BI solutions.
Education: Degree in Computer Science, Information Systems, or a related field.
Programming Skills: Python, PySpark, or DAX is a plus.
Tools & Ecosystem: Exposure to Power BI, Databricks, or Azure Logic Apps is a plus.
Oracle Source Systems: Familiarity with Oracle EBS/Fusion or similar on-prem sources is a plus.
Certifications: Microsoft Fabric or Azure Data Engineer Associate (DP-203) preferred.
If you are interested, kindly send your updated CV to hr@out-sourcy.com.
Job Features
| Job Category | Project / Program Management |