Global AI Excellence Model (GAIEM)

Strategic Lens: Responsible AI governance and system-level capability building

What It Is:

A cross-sector, cross-country AI readiness model co-developed with the Global Centre for AI Excellence (GCAIE) in the United Kingdom, aligning with global standards including the EU AI Act, NIST RMF, and ISO 42001.

My Role & Impact:

  • Advisory Board Member, contributing to readiness standards, sector frameworks, and evaluation criteria.
  • Supporting future AI upskilling strategies for the Dubai Government and governmental entities in Jordan.

Why It Matters: 

Helps governments and organisations shift from pilots to scalable, value-creating AI transformation.

Methods & Evidence: 

Multi-pillar structure (Leadership, Execution, Enablers, Value Creation), benchmarked across global governance models.

AI Readiness Framework for Higher Education (UK–Qatar Collaboration)

Status: Work-in-Progress | Sponsor: UK Quality Assurance Agency (QAA)

Strategic Focus:

Establishing clear, scalable pathways for institutional AI adoption aligned with quality standards. Focus on inclusive, responsible, and ethical AI use across teaching and learning environments.

Core Deliverable:

  • Structured, scalable framework co-developed with Edge Hill, Liverpool, and Nottingham universities
  • Maps institutional capabilities → identifies gaps → creates upskilling pathways
  • Integrates ethical considerations and behaviorally informed readiness profiles 

My Role & Impact:

  • Lead developer, working with Edge Hill, Liverpool, and Nottingham universities.
  • Developing scalable pathways for staff and student upskilling. 

Why It Matters:

Addresses a critical global gap: universities lack structured, evidence-based AI literacy models linked to academic workflows. 

Methods & Evidence:

Behavioural readiness profiles, curricular alignment, ethics-by-design principles.

AI-Ready Clinical Workforce Toolkit (UK–Qatar Strategy Fund Proposal)

Status: Proposed Project | Funder: UK-Qatar Strategy Fund 

Strategic Focus:

Operationalizing safe, confident AI adoption in regulated healthcare environments. Supporting nurses and allied health professionals in clinical AI integration aligned with governance standards.

Core Features:

  • Four persona-specific learning pathways (Fearful/Cautious, Enthusiastic Beginners, etc.) 
  • Adaptive micro-modules (8–12 minutes) with AI-enabled clinical scenarios 
  • Assessment rubrics aligned with nursing competency standards

Behavioral & Gamification Design:

  • Octalysis-inspired mechanics: competency badges, XP, leaderboards 
  • Reinforces safe AI behaviors through immediate feedback and social proof 
  • Motivation design grounded in behavioral psychology insights 

Governance Alignment:

Qatar Law No. 13/2016 compliance, NHS standards integration, clinical safety protocols embedded throughout.

Expected Outcomes:

Staff confidence increase, safe tool adoption rates, clinical error reduction, regulatory compliance, measurable competency progression.

Consulting Application: 

Organizations implementing large-scale upskilling, regulated industries requiring competency demonstration, government agencies needing scalable capability building, change management leaders engaging diverse personas. 

AI Workshop Series (Executives, Educators, Students)

Strategic Lens: Workforce-wide AI capability building

What It Is:

A structured series covering AI in marketing, AI for personalisation, responsible AI, prompt engineering, and scenario-based practice for industry clients in Qatar.

My Role & Impact:

  • Delivered across universities, governmental teams, and industry. 
  • Designed rapid-learning frameworks combining theory, ethics, and hands-on execution.

Why It Matters:

Organisations require practical, context-specific AI fluency, not abstract theory.

Methods & Evidence:

Perplexity AI for market analysis, NotebookLM for synthesis, strategic prompt frameworks.

AI-Assisted Financial & Strategic Analysis

Status: Delivered | Client: Consulting Haus Qatar | Impact: Embedded in Organizational AI Literacy Program

Strategic Focus:

Building critical analysis and verification skills into AI-driven consulting workflows; moving beyond tool training to governance-embedded, responsible AI practice. 

My Role & Impact:

Curriculum architect; framework developer integrating ROSES prompt engineering with verification methodologies; delivery lead; tool orchestration expert. 

Core Content:

  • AI Fundamentals for Consultants: How LLMs work, limitations, hallucination patterns, reliability concerns 
  • ROSES Framework: Role, Objective, Scenario, Expected Solution, Steps for structured, auditable prompt engineering 
  • Verification Layer: Cross-checking outputs, identifying inconsistencies, validating sources, detecting unsupported claims 
  • Multi-platform Orchestration: Perplexity (research), NotebookLM (synthesis), Claude (analysis) 
  • Risk Identification: Auditing AI outputs for bias, incomplete reasoning, missed considerations
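
As an illustrative sketch only, the ROSES structure above could be encoded so that every prompt component stays explicit and auditable (the helper function and example content below are assumptions for illustration, not the delivered curriculum):

```python
def roses_prompt(role, objective, scenario, expected_solution, steps):
    """Assemble a ROSES-structured prompt. Labelling each section keeps
    the prompt auditable: reviewers can check every component in isolation."""
    sections = [
        ("Role", role),
        ("Objective", objective),
        ("Scenario", scenario),
        ("Expected Solution", expected_solution),
        ("Steps", "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))),
    ]
    return "\n\n".join(f"{label}:\n{text}" for label, text in sections)

# Hypothetical consulting task, for illustration only
prompt = roses_prompt(
    role="Senior financial analyst",
    objective="Assess the liquidity position of the client",
    scenario="Quarterly statements for FY2024 are provided",
    expected_solution="A ranked list of liquidity risks with supporting evidence",
    steps=["Compute current and quick ratios",
           "Flag ratios outside industry norms",
           "Cite the statement line items used"],
)
print(prompt.splitlines()[0])  # → Role:
```

Because each section is named, the verification layer can review prompts component by component rather than as opaque free text.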

Business Impact:

  • Shifts mindset from “AI replaces expertise” to “AI augments expertise”
  • Builds internal capability for responsible AI use across engagements 
  • Creates efficiency gains while maintaining analytical rigor 
  • Demonstrates how responsible AI creates competitive advantage

Sector-Specific AI Literacy Frameworks

Strategic Lens: Targeted capability building aligned with Qatar’s National Development Strategy (NDS3)

What It Is:

Tailored literacy pathways for finance teams, marketing students, and clinicians.

My Role & Impact:

  • Designed workflows, verification tasks, and confidence-level segmentation. 
  • Integrated industry tools (SimilarWeb, Talkwalker, Synthesia).

Why It Matters:

Different sectors require different forms of safe-AI literacy; general training is insufficient.

Methods & Evidence:

Task-based competency mapping, sector benchmarks.

AI/ML for Executives, Entrepreneurs & Startup Owners

Status: Delivered | Client: Qatar Development Bank (QDB)

Strategic Focus:

Building technical foundations and strategic implementation capabilities for business leaders. Technical-to-strategic translation: from “what is AI” to “how to implement AI” to “how to measure AI business value.” 

Workshop Architecture:

  • AI/ML Foundations: Supervised learning, object recognition, predictive modeling with real-world applications (fraud detection, credit scoring, autonomous systems) 
  • Implementation Strategies: Team structure design (Data Engineers, Data Scientists, MLOps), deployment patterns (batch, real-time, edge computing)
  • ROI Measurement: Hard ROI vs. Soft ROI, NPV calculation, common pitfalls in measurement
  • IoT & Smart Systems: Sensor deployment, API integrations, strategic pillars (interoperability, scalability, security) 
  • Responsible AI & Governance: Ethical frameworks, data privacy, zero-trust architecture aligned with Qatar National AI Strategy 
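
The NPV component of the ROI module can be made concrete with a short worked sketch (all figures below are hypothetical, not workshop data):

```python
def npv(rate, cash_flows):
    """Net present value: discount each cash flow back to t=0.
    cash_flows[0] is the upfront (usually negative) investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical AI project: 500k upfront, 200k annual benefit for 4 years,
# discounted at a 10% cost of capital
flows = [-500_000, 200_000, 200_000, 200_000, 200_000]
print(round(npv(0.10, flows)))  # → 133973
```

A positive NPV at the chosen discount rate is the go/no-go signal the workshop frames for executives; the common pitfall is comparing undiscounted benefits against the upfront cost.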

Pedagogical Innovation:

  • Hands-on exercises: participants apply frameworks to real scenarios (hospitality AI, fraud detection ROI, IoT fleet optimization) 
  • Case study methodology: PayPal fraud detection ($2B impact), Siemens predictive maintenance (30% cost reduction), Walmart IoT (millions in spoilage prevention)
  • Technical depth appropriate for executives: balances conceptual understanding with practical implementation knowledge 

Core Deliverables:

  • Comprehensive executive-level training covering AI fundamentals through implementation ROI 
  • Practical frameworks: ROSES prompt engineering, Accenture’s 4 Pillars (Responsible AI), IoT deployment patterns 
  • Understanding of team structure requirements, deployment strategies, and technology orchestration 

Measurable Outcomes:

Executive confidence in AI strategy development, ability to evaluate AI vendor proposals, framework for calculating AI business cases, understanding of implementation requirements and risk mitigation strategies. 

Entrepreneurial Ecosystem Navigator (Mosaed)

Status: Google Partnership Launch Phase | Market: Qatar | Model: Scalable to Additional Geographies

My Role:

Product architect; algorithm designer; UX strategist; platform strategist.

Core Innovation:

Transparent Mentor Matching: 

  • Algorithm transparency: Users see why mentors are recommended (explicit weighting, not black-box) 
  • Stage alignment: Prioritizes mentors experienced at entrepreneur’s current stage (ideation, launch, growth, scale) 
  • Industry context: Considers mentor’s industry experience relative to entrepreneur’s sector 
  • Functional expertise: Evaluates specific functional strengths (marketing, fundraising, operations, technical) 
  • Explainability: Provides natural language explanation of recommendation rationale

System Architecture:

  • Mentor Database (Airtable): Structured mentor profiles (stage experience, industry, expertise, availability) 
  • Research Integration (Perplexity): Real-time research enriching entrepreneur context 
  • Matching Engine: Custom algorithm implementing weighted decision tree prioritizing stage fit
  • Communication Layer: Conversational AI translating outputs into supportive guidance
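
The transparent, explainable weighting described above could be sketched as follows (the weights, field names, and scoring rules here are assumptions for illustration, not the production Mosaed algorithm):

```python
# Stage fit is prioritised, per the design above; exact weights are assumed.
WEIGHTS = {"stage": 0.5, "industry": 0.3, "function": 0.2}

def score_mentor(mentor, founder):
    """Score one mentor for one founder and explain why.
    Returns (weighted score, natural-language rationale)."""
    parts = {
        "stage": 1.0 if founder["stage"] in mentor["stages"] else 0.0,
        "industry": 1.0 if founder["industry"] == mentor["industry"] else 0.0,
        "function": len(set(founder["needs"]) & set(mentor["skills"]))
                    / max(len(founder["needs"]), 1),
    }
    total = sum(WEIGHTS[k] * v for k, v in parts.items())
    reasons = [k for k, v in parts.items() if v > 0]
    explanation = f"Recommended for matching on: {', '.join(reasons) or 'no criteria'}"
    return total, explanation

# Hypothetical profiles
mentor = {"stages": ["ideation", "launch"], "industry": "fintech",
          "skills": ["fundraising", "marketing"]}
founder = {"stage": "launch", "industry": "fintech", "needs": ["fundraising"]}
score, why = score_mentor(mentor, founder)
print(why)
```

Because the score is a visible weighted sum rather than a black-box ranking, the same structure that computes the match also generates the plain-language rationale shown to the entrepreneur.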

User Experience:

  • “Entrepreneur’s Wingman” Persona: Supportive, encouraging tone avoiding corporate language 
  • Customized communication: Reflects entrepreneurial psychology (aspirational, practical, empowering) 
  • Guided workflows: Leading entrepreneurs through regulatory, business development, growth phases
  • Progressive revelation: Information released as entrepreneur progresses

Business Model:

  • White-label opportunity: Framework adaptable to other underserved populations 
  • Ecosystem scaling: Expand to additional geographies with locally appropriate networks 
  • Revenue models: Mentor subscription, entrepreneur premium tiers, enterprise licensing 


Consulting Application:

Development finance institutions (World Bank, regional development banks), government economic development agencies, corporate social responsibility programs, organizations in emerging markets.

Self-AI Integration Framework & Taxonomy

Strategic Lens: Identity, trust, and long-term adoption behaviour 

What It Is:

A conceptual and empirical model explaining how people integrate AI tools into their self-concept, resulting in four relationship types (Functional, Aspiring, Committed, Replacement). 

My Role & Impact:

Why It Matters:

Adoption is psychological, not only technical or operational.

Methods & Evidence:

Mixed-method programme (qualitative + experimental).

Consulting Application:

Informs persona development, AI communication strategy, change management during transformation, risk mitigation for over-trust/unhealthy dependency. 

Trust Calibration, Reactance & Manipulation Models

Strategic Lens: Trust, transparency, and user protection 

What It Is:

Traditional AI voice assistants, like Alexa or Google Assistant, excel at functional tasks (e.g., setting reminders or finding information) but often lack the emotional depth needed for meaningful user connections, especially in emotionally driven settings like shopping for experiential products (e.g., scented candles) or customer service interactions.

To address this gap, researchers from the University of Zurich (UZH), the University of Doha for Science and Technology, and ETH Zurich conducted a study as part of the AI Empathy Research Initiative, exploring how empathic AI could shape consumer decision-making by enhancing user satisfaction and emotional well-being.

The research team integrated Hume’s Empathic Voice Interface (EVI) with the eBay catalog to function as a shopping assistant with varying empathy levels:

  1. Utility-Focused EVI: Designed to provide straightforward, task-oriented assistance (e.g., finding products, answering questions).
  2. Empathic EVI: Adapted its tone and responses based on user emotions to provide a more thoughtful and engaging shopping experience.

The study involved simulated shopping scenarios for both functional products (e.g., batteries) and experiential products (e.g., scented candles), allowing researchers to measure how empathy influenced user preferences and decision-making in these different consumer contexts.

My Role & Impact:

  • Designed multi-experiment studies using socially intelligent agents. 
  • Identified specific triggers of reactance and trust drops.

Why It Matters:

Essential for designing AI systems users trust without feeling manipulated.

Methods & Evidence:

2×2 experimental designs; moderated mediation models. 

Research Findings (Illustrative):

  • High empathy + collaborative: highest trust AND highest user agency (not over-trust) 
  • High empathy + directive: highest compliance but reactance signals in some populations 
  • Low empathy + collaborative: lower trust, also lower over-reliance
  • Low empathy + directive: compliance but lower satisfaction, potential reactance

Consulting Application: 

AI communication guidelines, change management communication, voice assistant design, risk mitigation for preventing reactance or over-trust.

Empathic Voice Assistants (EVA) Research

Strategic Lens: Emotion regulation • Sustainable behaviour • Human-AI interaction

What It Is:

A research stream exploring how empathic voice agents shape sustainable choices, emotion regulation, and perceived manipulation. 

My Role & Impact:

  • Lead investigator on conceptual and empirical studies.
  • Applied behavioural theories to conversational AI design.

Why It Matters:

Voice interfaces represent the next frontier of AI adoption; trust and emotion determine long-term engagement.

Methods & Evidence:

Experiments using empathic vs non-empathic VA scripts; moderated mediation analysis.

Consulting Application:

Valuable for organizations implementing voice-based AI (customer service, employee support, healthcare advice); change management during high-stakes AI adoption; ethical AI governance; user experience design.