
    AI & Agentic AI

    Supporting companies developing cutting-edge AI technologies, from machine learning models to autonomous agent systems.

    AI development qualifies as R&D when it involves creating new knowledge through novel architectures, algorithms, or methodologies that go beyond routine software development.

    What Qualifies as R&D

    Understanding what qualifies as R&D is crucial for maximising your tax credits. In AI & Agentic AI, innovation takes many forms, from breakthrough algorithms to novel system architectures. Here's what the UAE tax authorities recognise as eligible R&D activities:

    Model Development & Training: Designing new neural network architectures, developing novel training algorithms, or creating custom loss functions that address previously unsolved problems

    LLM Fine-tuning for Novel Applications: Experimenting with fine-tuning techniques for domain-specific tasks where optimal approaches are unknown

    Reinforcement Learning Systems: Developing RL algorithms for complex decision-making where the reward structure or environment dynamics present technical uncertainty

    Inference Optimisation: Research into novel compression techniques, quantisation methods, or edge deployment strategies that push beyond existing capabilities

    Agent Orchestration: Building multi-agent systems with emergent behaviours, novel coordination mechanisms, or autonomous decision-making frameworks

    Synthetic Data Generation: Creating new methods for generating training data that preserve statistical properties or privacy constraints

    The Five Core Criteria

    Your work must satisfy all five criteria established by the Frascati Manual—the international standard for R&D classification. Here's how these apply to your industry:

    Novel (Frascati 2.14)

    You're advancing beyond existing model architectures, creating new approaches to common problems (e.g., developing a transformer variant that reduces computational complexity), or discovering new applications of AI in unexplored domains

    Creative (Frascati 2.17)

    The work requires human expertise to devise new solutions, not just the application of existing frameworks (for example, running a standard GPT model out of the box). Examples include designing custom attention mechanisms or inventing new evaluation metrics

    Uncertain (Frascati 2.18)

    You cannot predict whether your model will converge, what hyperparameters will work, or how long training will take. This distinguishes R&D prototyping (testing technical concepts with high failure risk) from deploying proven models

    Systematic (Frascati 2.19)

    Experiments are documented with hypotheses, methodologies, results, and resource allocation. You maintain experiment logs, track model versions, and record both successful and failed approaches

    Transferable & Reproducible (Frascati 2.20)

    Results can be shared through technical documentation, papers, or internal knowledge bases, allowing other teams to build on your findings

    Common Misconceptions

    Not every development activity qualifies as R&D. It's important to understand the boundaries. The following activities, while valuable to your business, don't meet the criteria for R&D tax credits:

    Routine model deployment using established frameworks without modification

    Standard API integration of third-party AI services

    Quality control testing of production models

    Regular model retraining with new data using known methods

    Customer-specific customisation as part of ongoing business (unless involving technical uncertainty)

    The Documentation Challenge

    Even when your work clearly qualifies, inadequate documentation can cost you thousands in lost credits. We've seen brilliant innovations go unclaimed simply because teams didn't capture the right evidence. Here's what we've learned from working with hundreds of AI & Agentic AI companies:

    Common Pain Points

    Missing experiment logs showing iterations and failures

    Unclear staff time allocation between R&D and production work

    Undocumented technical decisions and alternative approaches tested

    Lack of evidence showing uncertainty and systematic approach

    Best Practices That Work

    Maintain detailed experiment logs with hypotheses, parameters, and outcomes

    Record failed experiments (these demonstrate uncertainty)

    Track time spent by researchers vs. support staff

    Document technical decisions and why certain approaches were chosen over others

    Keep version control history showing iterative development
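    As an illustration of the practices above, one way to structure an experiment log entry is sketched below in Python. This is a minimal sketch, not an RDvault format: the ExperimentRecord class and its field names are hypothetical assumptions chosen to show that each entry captures a hypothesis, the parameters tested, the outcome (including failures), and the R&D time spent.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ExperimentRecord:
    """Hypothetical structure for one R&D experiment log entry."""
    experiment_id: str
    run_date: str
    hypothesis: str        # what you expected and why (shows novelty and uncertainty)
    parameters: dict       # hyperparameters and configuration tested
    outcome: str           # "success" or "failure" -- failed runs are evidence too
    notes: str = ""        # why this approach was chosen over alternatives
    researcher_hours: float = 0.0  # R&D time, kept separate from production work

# Record a failed run: failures demonstrate genuine technical uncertainty.
record = ExperimentRecord(
    experiment_id="exp-042",
    run_date=date(2024, 3, 14).isoformat(),
    hypothesis="Lowering the learning rate will stabilise convergence",
    parameters={"lr": 1e-5, "batch_size": 32, "epochs": 10},
    outcome="failure",
    notes="Loss plateaued; next iteration will try a warmup schedule",
    researcher_hours=6.5,
)

# Serialise to JSON so the log can live in version control alongside the code.
print(json.dumps(asdict(record), indent=2))
```

    Keeping entries like this in the repository itself means the version-control history and the experiment trail stay linked automatically.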

    How We Make It Easy

    RDvault was built by engineers who understand the unique challenges of documenting technical work. We automate the tedious parts so you can focus on innovation.

    Automatic evidence capture from GitHub commits, Notion docs, and Jira tickets

    Model version tracking with timestamp and parameter logging

    Experiment logs automatically formatted for claim submissions

    Integration with ML platforms (MLflow, Weights & Biases) to capture training metrics
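    To illustrate the idea behind automatic evidence capture from commits (a conceptual sketch, not RDvault's actual implementation), the snippet below groups commit messages under experiment IDs. The "[exp-NNN]" tagging convention and the helper function are hypothetical assumptions for this example.

```python
import re
from collections import defaultdict

# Hypothetical convention: commit messages tag an experiment like "[exp-042] ...".
EXPERIMENT_TAG = re.compile(r"\[(exp-\d+)\]")

def group_commits_by_experiment(commits):
    """Group (sha, message) pairs under the experiment ID each message tags.

    Untagged commits are collected under "untagged" so gaps in the
    evidence trail stay visible rather than being silently dropped.
    """
    evidence = defaultdict(list)
    for sha, message in commits:
        match = EXPERIMENT_TAG.search(message)
        key = match.group(1) if match else "untagged"
        evidence[key].append(sha)
    return dict(evidence)

# Example commit log (fabricated for illustration).
commits = [
    ("a1b2c3d", "[exp-042] try lower learning rate"),
    ("d4e5f6a", "[exp-042] add warmup schedule after plateau"),
    ("0f9e8d7", "fix typo in README"),
]

print(group_commits_by_experiment(commits))
# -> {'exp-042': ['a1b2c3d', 'd4e5f6a'], 'untagged': ['0f9e8d7']}
```

    Grouping evidence this way ties each code change back to a documented experiment, which is exactly the iteration trail a claim reviewer looks for.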

    Does Your Project Qualify?

    Ask yourself these five questions. If you answer yes to most of them, you're likely sitting on unclaimed R&D credits:

    Are you building models from scratch or modifying existing architectures significantly?

    Are you testing new algorithms or approaches where success is uncertain?

    Does your work require specialised AI/ML expertise beyond standard implementation?

    Are you documenting experiments, iterations, and technical decisions?

    Could other researchers reproduce or build upon your work?

    Ready to Claim What You've Earned?

    Join forward-thinking AI & Agentic AI companies already maximising their R&D credits with RDvault. Get your personalised eligibility assessment in minutes.