Roadmap: Phase 3 - Run
Full-Stack Autonomy & Market Leadership (Q1 2029 - Q4 2030)
Phase Overview: If Phase 1 built the AI agents and Phase 2 taught them to work together, Phase 3: Run is about achieving a state of Human-AI Symbiosis. The AI pipeline becomes a proactive, learning partner in the chip design process, moving beyond simple automation to intelligent optimization and strategic contribution. This is the stage where we transition from using AI as a tool to integrating AI as a core pillar of the company's growth, innovation, and competitive advantage.
Executive Overview: Project Chimera Phase 3
Phase 3 of Project Chimera marks the transition from developing AI capabilities to achieving full-scale Human-AI Symbiosis in chip design. This "Run" phase is not merely about using AI as a tool but about integrating it as a core pillar of our innovation, growth, and competitive strategy. The objective is to create a proactive, learning partnership between our engineers and a sophisticated AI system that automates and optimizes the entire design workflow.
The strategic foundation of Phase 3 rests on five key pillars:
- Seamless Workflow Orchestration: We will integrate the entire design pipeline, from specification to final layout, into a single, automated workflow. This will be managed by a Hierarchical Supervisor that orchestrates our suite of commercial EDA tools, eliminating the costly "integration tax" and manual handoffs that currently slow down development. Success will be measured by achieving over 99% automation in the end-to-end design flow and reducing total design runtime from months to under two weeks.
- Generative IP Creation: The system will move beyond optimization to active creation. Advanced AI agents will be capable of generating novel architectural suggestions and producing complex, verified silicon IP blocks (like RISC-V cores) from high-level specifications in under 24 hours. This transforms IP from a potential bottleneck into a rapidly customizable asset.
- The "Self-Hosting" Flywheel: A key milestone of this phase is to use the Project Chimera system to design its own next-generation AI accelerator chips. This creates a powerful, compounding feedback loop where our AI builds the specialized hardware it will run on in the future, accelerating its own evolution and creating an unmatched competitive advantage. The primary KPI is to tape out a functional AI accelerator that outperforms its predecessor by over 30% on key performance-per-watt benchmarks.
- Continuous Learning from Silicon: The AI pipeline will be designed to learn from every project. By creating a post-silicon feedback loop, the system will analyze real-world performance data from manufactured chips to continuously refine its internal models, ensuring it becomes more accurate and effective with each design cycle.
- Evolving the Engineering Role and Business Model: This phase will fundamentally reshape our workforce and business strategy.
- Talent: We will focus on hiring and training a new class of AI-Hybrid Engineers who possess both deep chip design fundamentals and AI fluency. This is critical for managing, debugging, and providing the essential human judgment needed to guide the AI system.
- Business Model: As AI drastically reduces engineering hours, we will transition from a time-based billing model to outcome-based pricing. This aligns our revenue with the tangible value we deliver to clients—such as superior PPA, accelerated time-to-market, and reduced NRE costs from eliminating respins.
- Future Commercialization: The ultimate goal is to productize our innovations. The generative IP capabilities will be developed into an IP-as-a-Service (IPaaS) offering, while the entire Chimera orchestration system has the potential to be licensed as a Platform-as-a-Service (PaaS), creating new, high-margin revenue streams.
In summary, Phase 3 is designed to deliver transformative ROI by drastically reducing design time, improving chip performance beyond human capability, and slashing non-recurring engineering costs. By successfully executing this plan, we will not only optimize our internal design processes but also position the company as a leader in AI-driven design and create new, scalable lines of business.
Part 1: Strategic Clarification: We Are the Architect, Not the Toolsmith
To be perfectly clear: Project Chimera's goal is not to build new EDA tools. Our strategy is to create a proprietary AI system that sits on top of the industry-standard EDA toolchain.
Think of it this way: The world's best Formula 1 driver doesn't build their own car from scratch; they master the use of a highly advanced machine to win races. Similarly, our AI will be the "expert driver" for the best EDA tools available, orchestrating them in a holistic, end-to-end flow to design world-beating chips faster and more efficiently than any competitor. Our intellectual property is the AI, not the underlying tools.
Part 2: Full Stack Autonomy & The Human in the Loop
This is not about replacing human engineers; it's about elevating them. "Full stack autonomy" means the AI can handle the entire chip design workflow, from high-level specification to the final GDSII file ready for manufacturing, by intelligently driving our suite of commercial EDA tools at every stage.
How it Works with a Human in the Loop:
- The Engineer as a Supervisor: Instead of writing RTL code line-by-line, the engineer acts as a high-level architect and project manager. They define the "what" (e.g., "design a low-power neural processing unit with these specific performance targets"), and our AI orchestrates the EDA tools to handle the "how."
- Strategic Intervention: The human's role shifts to strategic decision-making. The AI might present three optimized design options, each with different trade-offs in power, performance, and area (PPA). The engineer uses their expertise to select the best option based on market needs, customer requirements, and long-term product strategy.
- Creative Exploration: Engineers are freed from tedious, repetitive tasks, allowing them to focus on innovation. They can use the AI to rapidly prototype and test novel architectures that would have been too time-consuming to explore manually.
Part 3: Detailed Step-by-Step Plan for Phase 3
This section outlines the execution plan for Phase 3, breaking down the high-level goals into concrete steps with defined methodologies and measurable Key Performance Indicators (KPIs). The plan is designed to be iterative, with each step building upon the last to create a compounding capability advantage.
Step 1. Seamless Workflow Integration & Orchestration
Description:
Integrate all stages of the AI-Powered Design Pipeline into a single, seamless workflow. This workflow is orchestrated by a Hierarchical Supervisor architecture that interacts with a central Model Context Protocol (MCP) for data management. This ensures a fluid, automated handoff from one design stage to the next, eliminating the "integration tax" of multi-vendor toolchains.
Methodology:
- Hierarchical Supervisor: Implement a central AI orchestrator to manage task-specific agents, plan sequences, and make decisions based on tool outputs. This supervisor retrieves design data and IP information from the MCP.
- Model Context Protocol (MCP): Establish the MCP as the centralized, hierarchical data management system, acting as the single source of truth (SSoT) for the entire IP portfolio and design data.
- Event-Driven Backbone: Build the system on an event-driven architecture (e.g., using Kafka or RabbitMQ) to decouple agents and tools. This allows for flexible, asynchronous communication and makes the system resilient to changes in individual tools. (We avoid abbreviating this as "EDA," which refers throughout this document to Electronic Design Automation.)
- Standardized API Contracts: Define and enforce strict API contracts for all agent-tool interactions, using standards like AsyncAPI to ensure reliable data exchange and governance.
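To make the decoupling concrete, here is a minimal sketch of the orchestration pattern, using Python's in-process `queue` as a stand-in for a real broker such as Kafka. The event names and stage handlers are hypothetical placeholders, not the actual Chimera agents.

```python
import json
import queue

# In-process stand-in for a message broker topic (e.g. Kafka).
event_bus = queue.Queue()

def publish(event_type, payload):
    """Emit an event onto the bus as a JSON message."""
    event_bus.put(json.dumps({"type": event_type, "payload": payload}))

# Hypothetical per-stage handlers, each subscribed to one event type.
# Each consumes a completed stage and publishes the next, so stages
# stay decoupled: swapping one tool only changes one handler.
HANDLERS = {
    "spec.approved": lambda p: publish("rtl.generated", {"block": p["block"]}),
    "rtl.generated": lambda p: publish("verify.passed", {"block": p["block"]}),
    "verify.passed": lambda p: publish("layout.done", {"block": p["block"]}),
    "layout.done":   lambda p: None,  # terminal stage: layout ready
}

def run_supervisor():
    """Drain the bus, dispatching each event to its stage handler."""
    log = []
    while not event_bus.empty():
        event = json.loads(event_bus.get())
        log.append(event["type"])
        HANDLERS[event["type"]](event["payload"])
    return log

publish("spec.approved", {"block": "npu_core"})
stages = run_supervisor()
```

In a production deployment each handler would be a separate agent service consuming from the broker, with its message schema pinned down by an AsyncAPI contract rather than an informal dict.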
Key Objectives & KPIs:
Objective: Achieve >99% automated flow for a complete design from spec to layout, with verifiable and repeatable results.
KPIs:
- Automation Rate: Percentage of the end-to-end design flow executed without manual intervention. Target: >99%.
- Manual Intervention Points: Number of required manual handoffs in the workflow. Target: Reduction of >95% from baseline.
- Total Autonomous Runtime: End-to-end wall-clock time for a full design. Target: Reduce from months to < 2 weeks.
- Workflow Reliability: Successful completion rate of autonomous runs. Target: >98%.
Step 2. Emergent Architectural Suggestion
Description:
Empower the system to autonomously explore the design space and suggest novel architectures and micro-architectural improvements. Based on its holistic view and historical data, the AI will identify new opportunities for optimization that humans might miss.
Methodology:
- Generative AI for Architecture: Utilize Large Language Models (LLMs) and Diffusion Models, fine-tuned on architectural data, to generate novel design concepts from high-level requirements.
- Automated Design Space Exploration (DSE): Implement a DSE framework that allows the AI to systematically explore trade-offs between different architectural choices.
- Evolutionary Algorithms & Novelty Search: Employ evolutionary algorithms to guide the search for high-performing architectures. Integrate a novelty metric (e.g., k-nearest neighbor distance in a defined behavior space) to reward the discovery of non-intuitive and structurally different solutions.
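The novelty metric mentioned above can be sketched in a few lines: score a candidate by its mean distance to the k nearest previously seen behaviour vectors, so structurally different designs score higher. The 3-D behaviour descriptors below are illustrative numbers only.

```python
import math

def novelty_score(candidate, archive, k=3):
    """Mean Euclidean distance from `candidate` to its k nearest
    neighbours in an archive of previously seen behaviour vectors.
    Higher scores reward structurally different architectures."""
    dists = sorted(math.dist(candidate, prior) for prior in archive)
    return sum(dists[:k]) / min(k, len(dists))

# Hypothetical 3-D behaviour descriptors (e.g. normalised power,
# latency, and area features of candidate architectures).
archive = [(0.2, 0.5, 0.1), (0.25, 0.45, 0.15), (0.8, 0.1, 0.3)]

incremental = novelty_score((0.22, 0.48, 0.12), archive)  # near known designs
divergent = novelty_score((0.9, 0.9, 0.9), archive)       # far from all of them
```

The evolutionary search would then blend this score with fitness (simulated PPA) so the population is pushed toward designs that are both good and new.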
Key Objectives & KPIs:
Objective: Generate at least one novel, high-performance architectural component that is reviewed, validated, and adopted into a production design.
KPIs:
- Architect Acceptance Rate: Percentage of AI-suggested architectural improvements accepted by human architects for further evaluation. Target: >20%.
- Novelty Score: A computational metric assessing the structural difference of generated architectures from the training data and known designs. Target: Achieve a portfolio of architectures with novelty scores in the top decile.
- Simulated PPA Improvement: PPA metrics of AI-suggested architectures must show a >10% improvement on the PPA frontier compared to the human-designed baseline for a given benchmark project.
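"Improvement on the PPA frontier" here means Pareto dominance over power, performance, and area. A minimal sketch of extracting that frontier from a set of candidates, with all metrics expressed so that lower is better and purely illustrative numbers:

```python
def dominates(a, b):
    """True if design `a` is at least as good as `b` on every metric
    and strictly better on at least one. Metrics are (power in mW,
    delay in ns, area in mm^2), all minimised."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def ppa_frontier(designs):
    """Return the Pareto-optimal subset of (power, delay, area) tuples."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

# Illustrative candidates: the third is dominated by the first
# (worse on all three metrics), so it falls off the frontier.
candidates = [(10.0, 1.2, 2.0), (14.0, 0.9, 1.8), (12.0, 1.3, 2.5)]
frontier = ppa_frontier(candidates)
```

An AI-suggested architecture "improves the frontier" when it dominates at least one incumbent design, or sits in a region of the trade-off space no incumbent covers.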
Step 3. Generative IP Creation
Description:
Develop advanced AI-driven IP generation tools. This allows the system to rapidly create and customize complex, verified silicon IP blocks (e.g., memory controllers, RISC-V cores) based on high-level specifications, drastically reducing design time and ensuring IP is an asset, not a bottleneck.
Methodology:
- AI-Assisted RTL Generation: Use LLMs fine-tuned on Verilog/VHDL to generate synthesizable RTL code from natural language specifications or Python models.
- Automated Verification Suite Generation: The AI will generate corresponding UVM testbenches, formal verification assertions, and coverage models to ensure the correctness of the generated IP.
- Automated IP Management: Use AI tools to manage the IP lifecycle, including documentation, versioning, and prior art analysis to ensure freedom to operate.
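The generate-then-verify loop implied by these steps can be sketched as follows. Both helpers are hypothetical placeholders: `generate_rtl` would wrap a call to the fine-tuned LLM, and `lint_rtl` would drive a real verification suite; here they only simulate one failing first pass so the feedback loop is visible.

```python
def generate_rtl(spec, feedback=None):
    """Placeholder for an LLM call that emits Verilog for `spec`.
    Simulates a first-pass bug that is fixed once feedback arrives."""
    rtl = f"module {spec['name']}(); endmodule"
    if feedback is None:
        rtl = rtl.replace("endmodule", "")  # simulate a truncated first draft
    return rtl

def lint_rtl(rtl):
    """Placeholder structural check standing in for real verification."""
    return ["missing endmodule"] if "endmodule" not in rtl else []

def generate_verified_ip(spec, max_iters=5):
    """Iterate generation until verification passes or the budget runs out,
    feeding each failure back into the next generation attempt."""
    feedback = None
    for attempt in range(1, max_iters + 1):
        rtl = generate_rtl(spec, feedback)
        issues = lint_rtl(rtl)
        if not issues:
            return rtl, attempt
        feedback = issues
    raise RuntimeError(f"verification failed after {max_iters} attempts")

rtl, attempts = generate_verified_ip({"name": "fifo_ctrl"})
```

The First-Pass Verification Rate KPI is then simply the fraction of blocks for which this loop terminates with `attempts == 1`.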
Key Objectives & KPIs:
Objective: Create a library of 10+ core IP blocks (e.g., memory controllers, RISC-V cores, interconnect fabrics) generated and maintained by the AI system.
KPIs:
- Time-to-Generate & Verify Custom IP: Time from high-level specification to a fully verified, synthesizable IP block. Target: < 24 hours.
- First-Pass Verification Rate: Percentage of generated IP blocks that pass verification without manual code modification. Target: >90%.
- IP Quality (Bug Density): Number of functional bugs discovered per KLOC post-generation. Target: <0.1 bugs/KLOC, outperforming human-coded equivalents.
- Resource Utilization: PPA metrics of the generated IP must be within 5% of hand-optimized equivalents.
Step 4. "Self-Hosting": AI Designing AI Chips
Description:
Initiate the first projects to design novel AI accelerator chips using the Project Chimera system itself. This closes the symbiotic AI-silicon loop, using our AI to build the specialized hardware it will run on in the future, creating a powerful flywheel effect.
Methodology:
- Capstone Integration Project: This step leverages all previously developed capabilities (workflow orchestration, architectural suggestion, IP generation) in a single, end-to-end design project.
- Full Stack Autonomous Run: The human team will provide the high-level specifications for a next-generation AI accelerator. The Chimera system will execute the entire design flow, from architectural exploration to GDSII tape-out, with human engineers acting as supervisors and strategic decision-makers.
Key Objectives & KPIs:
Objective: Tape out a functional AI accelerator chip designed by Chimera that outperforms its predecessor by >30% on key benchmarks.
KPIs:
- Performance-per-Watt (TOPS/Watt): The final silicon must demonstrate a >30% improvement in energy efficiency on target AI workloads compared to the previous generation.
- Time-to-Tape-Out: Total project duration from specification to tape-out. Target: < 6 months, a >50% reduction from traditional cycles.
- First-Silicon Success Rate: The AI-designed chip must be fully functional with the first batch of silicon, eliminating the need for costly respins. Target: 100%.
- NRE Cost Reduction: Quantifiable reduction in non-recurring engineering costs due to automation and avoidance of respins. Target: >40%.
Step 5. Continuous Learning & Post-Silicon Feedback
Description:
The AI pipeline will learn from every project. It will analyze the performance of its own designs post-silicon and use that real-world data to refine its internal models for driving the EDA tools, ensuring it gets smarter and more accurate over time.
Methodology:
- Data Ingestion Pipeline: Build an automated pipeline to collect and process post-silicon data, including wafer-level electrical test results, characterization data across PVT corners, and in-field performance telemetry.
- AI Model Retraining: Use the curated post-silicon data to continuously retrain and fine-tune the AI agents, particularly the PPA Optimization Agent. This closes the loop between simulation and silicon reality.
- Knowledge Transfer: Learnings from one project (e.g., optimal tool settings for a 3nm process) are automatically incorporated and applied to subsequent projects.
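The Model Predictive Accuracy KPI below tracks how well pre-silicon predictions correlate with silicon reality. A minimal, self-contained sketch of that measurement, with illustrative numbers for predicted versus measured block power:

```python
import math

def pearson_r(predicted, measured):
    """Pearson correlation between pre-silicon PPA predictions and
    post-silicon measurements; the closer to 1.0, the better the
    AI's internal models track real silicon."""
    n = len(predicted)
    mp, mm = sum(predicted) / n, sum(measured) / n
    cov = sum((p - mp) * (m - mm) for p, m in zip(predicted, measured))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    sm = math.sqrt(sum((m - mm) ** 2 for m in measured))
    return cov / (sp * sm)

# Illustrative data: predicted vs silicon-measured power (mW) for
# five blocks of a hypothetical design.
predicted = [120.0, 95.0, 210.0, 150.0, 80.0]
measured = [118.0, 101.0, 205.0, 158.0, 84.0]
r = pearson_r(predicted, measured)
```

Tracking this coefficient per project cycle, per metric (power, timing, area), gives a concrete scoreboard for whether the retraining loop is actually closing the simulation-to-silicon gap.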
Key Objectives & KPIs:
Objective: Demonstrate a measurable improvement in the AI's predictive accuracy and design optimization capability with each completed project.
KPIs:
- Model Predictive Accuracy: Correlation between the AI's pre-silicon PPA predictions and actual post-silicon measurements. Target: Improve correlation coefficient by >15% per project cycle.
- PPA Optimization Improvement: For a fixed benchmark design, the retrained AI should achieve a >5% improvement on the PPA frontier compared to its pre-retraining state.
- Bug Escape Rate: Rate of functional bugs discovered in post-silicon validation that were missed by pre-silicon verification. Target: Decrease by >25% per project.
- Time-to-Insight: Time required for the AI to analyze post-silicon data and generate actionable recommendations for the next design cycle. Target: < 48 hours.
Part 4: Company Growth, Cost, and Performance
The investment in Phase 3 is substantial but foundational to our long-term strategy. The costs are front-loaded investments in compounding assets (talent and infrastructure) that will yield disproportionate returns in speed, efficiency, and product quality.
Phase 3 Cost Estimation (Annual Estimate):
Talent (The AI Team):
- Cost Breakdown: 1 Lead AI Architect, 4 Senior ML/AI Engineers, 2 Data/Infrastructure Engineers, 1 Product Manager (AI Interfaces).
- Estimated Annual Cost: £1.2M - £1.8M.
- Notes: This team is focused exclusively on developing and refining the Chimera AI agents and the Model Context Protocol (MCP). This is our core IP investment. Salaries are based on competitive rates for top-tier AI talent in the UK/EU.
Cloud Compute & Infrastructure:
- Cost Breakdown: GPU-intensive training clusters (e.g., AWS P4d/P5 instances), CPU-intensive simulation/EDA farm, High-throughput storage & networking.
- Estimated Annual Cost: £800k - £1.5M.
- Notes: This is the most variable cost and will be highest during intensive training periods for new AI models. It will scale with the number of parallel chip design projects being executed by the AI.
EDA Tool Licensing:
- Cost Breakdown: Comprehensive licenses from major vendors (Synopsys, Cadence, Siemens/Mentor), Seats for both human engineers and AI-driven processes.
- Estimated Annual Cost: £1.5M - £2.5M.
- Notes: While we already have EDA licenses, running a fully autonomous AI pipeline requires a significant increase in license seats to allow the AI to run thousands of jobs in parallel without being bottlenecked. This is a crucial enabler for the AI's speed.
Total Estimated Annual Cost:
- Total: £3.5M - £5.8M.
- Notes: This represents the operational cost to run at the full "Run" phase. The return on this investment is realized through transformative improvements in design efficiency and final product quality, leading to significant financial gains and a strong competitive advantage.
Projected Performance Gains and Return on Investment (ROI):
The investment in Phase 3 is justified by tangible and compounding returns from dramatic improvements in engineering efficiency and superior final chip characteristics, which together drive significant financial ROI.
Transformative Time-to-Market (TTM) Reduction:
The primary value driver is a radical acceleration of the design cycle by automating repetitive and error-prone processes.
- Industry benchmarks show AI-driven automation can reduce chip design time by up to 50% and cut debugging time by as much as 70%.
- The goal is to compress design timelines from months to weeks, mirroring the order-of-magnitude (10x+) gains reported by industry leaders.
- This speed allows for greater agility, faster response to market demands, and more projects with the same engineering team.
Superior Power, Performance, and Area (PPA):
The system is designed to achieve PPA optimizations beyond human capability by exploring a vast design space to find a superior balance of competing objectives.
- Case studies show AI tools can achieve significant PPA gains, including up to 15-20% power reduction, 10-20% area reduction, and substantial performance boosts.
- A superior PPA profile results in a more competitive product that is cheaper to manufacture and more efficient for the end-user.
Compounding Financial ROI:
- Increased Revenue and Market Share: Faster TTM allows for capturing early-adopter markets and responding to competitive pressures more effectively.
- Reduced Non-Recurring Engineering (NRE) Costs: The system aims to virtually eliminate the multi-million-dollar cost of silicon respins by improving verification and achieving first-pass success.
- Higher Profit Margins: Superior PPA leads to direct cost savings (smaller die area lowers cost per wafer) and can justify premium pricing (better power efficiency). The combination of lower NRE, reduced manufacturing cost, and accelerated revenue creates a powerful cycle of compounding ROI.
Part 5: Deployed Agents, New Hires, and the Future
Types of Agents Deployed in Phase 3:
- Specification & Architecture Agent: Works with human architects to translate product requirements into a formal, machine-readable specification.
- RTL Generation & Refinement Agent: The core design agent from Phases 1 and 2, now highly optimized.
- PPA Optimization Agent: Uses reinforcement learning to fine-tune the design for power, performance, and area by intelligently manipulating EDA tool constraints.
- Verification & Formal Analysis Agent: Exhaustively checks the design for logical correctness by driving verification tools and analyzing their output.
- Physical Design & Layout Agent: Takes the final logic and generates the physical layout for manufacturing by intelligently driving commercial place-and-route tools.
- Hierarchical Supervisor: The primary orchestrator that manages all other agents and the overall workflow, making API calls to and interpreting results from our EDA tool suite.
- Model Context Protocol (MCP): While not an active agent, the MCP is the critical underlying data management system. It serves as the single source of truth for all design data, IP, and project configurations, which it provides to the Hierarchical Supervisor to enable the workflow.
Integrating New Hires:
Onboarding will be fundamentally different. A new hire's first "mentor" will be the AI itself.
- AI-Guided Training: New engineers will learn the company's design methodology by working on small-scale projects through the Human-AI Interface, with the AI providing real-time feedback and guidance.
- Focus on "Why," Not "How": Training will focus on high-level architectural thinking and strategic decision-making, as the AI will handle the low-level implementation. The most valued skill will be the ability to ask the AI the right questions.
Strategic Hiring: The AI-Hybrid Engineer:
Our hiring strategy must fundamentally shift away from traditional engineers who rely solely on manual expertise, as they will struggle to be productive in our highly automated environment. We will seek a new breed of hybrid talent that combines deep domain expertise with AI fluency. Ideal candidates will possess a strong foundation in the entire chip design pipeline, from architecture to verification and physical design, coupled with proficiency in how agentic AI systems operate, including machine learning principles and data modeling. This dual expertise is non-negotiable.
When the AI system encounters a novel problem or produces an unexpected result, it is this hybrid engineer who will have the foundational knowledge to debug the issue, interpret the AI's reasoning, and implement a robust solution. They are the essential "human-in-the-loop" who provides the critical judgment that a purely automated system lacks.
The Future After Phase 3:
Phase 3 establishes a foundation for increasingly general AI capability in hardware design. The next steps will be even more ambitious:
- Generative Architecture: An AI that can invent entirely new, non-human-intuitive computer architectures based on a desired outcome.
- IP-as-a-Service (IPaaS): Evolve the Generative IP Creation capability into a full-fledged commercial offering. The Chimera system can be used to generate and license highly specialized, verified IP blocks (e.g., advanced security cores, custom AI accelerators, novel interconnect fabrics) to the broader market, creating a new high-margin revenue stream.
- Platform-as-a-Service (PaaS): Package and commercialize the entire Project Chimera orchestration system as a licensable platform. Other fabless semiconductor companies could license this platform to accelerate their own design flows, positioning our core AI technology as a product in itself and empowering them to iterate faster and de-risk their own designs.
Part 6: Learning from the EDA Industry & Building Our Advantage
Our strategy is validated by the direction the EDA industry itself is heading. While we are not building these tools, understanding them is key to our success.
- Google's AI for Chip Floorplanning: Proved a reinforcement learning agent could master a complex design task (floorplanning) within the design environment. This validates our approach for individual agents.
- Synopsys's DSO.ai & Cadence's Cerebrus: These products show that the EDA vendors are successfully adding AI to their individual point tools. This is good for us—it makes the tools we use even more powerful.
Our Unique Advantage:
The critical difference is that these are "walled garden" solutions. Synopsys AI optimizes the Synopsys flow, and Cadence AI optimizes the Cadence flow. Project Chimera builds the holistic intelligence above all of them. Our Hierarchical Supervisor can make strategic decisions across the entire toolchain, potentially using a Synopsys tool for synthesis, a Cadence tool for place-and-route, and a Siemens EDA tool for verification, optimizing for the best global outcome that no single tool vendor can achieve on its own. That integration and orchestration capability is our core competitive moat.