Key UVM Component Concepts: Phase
Introduction
The Universal Verification Methodology (UVM) stands as a cornerstone in the field of VLSI design verification, providing a standardized and robust framework for the rigorous testing of complex System-on-Chips (SoCs) and intellectual property (IP) blocks. Built upon SystemVerilog, UVM offers a comprehensive set of base classes and utilities designed to organize testbench components and their intricate interactions.[1, 2] The primary objective of UVM is to manage the inherent complexity of contemporary verification environments, thereby ensuring synchronization, promoting reusability, and enhancing maintainability across diverse projects.[1, 3, 4] This methodology formalizes the testbench architecture, moving beyond ad-hoc SystemVerilog coding practices to establish a structured and predictable verification flow.
Central to the UVM framework are its phases, which are predefined virtual methods (either functions or tasks) encapsulated within the uvm_component class. These phases dictate a structured and predictable execution order for all testbench components.[1, 5] Their importance cannot be overstated, as they are fundamental to coordinating activities, preventing race conditions, and simplifying the debugging process within large and complex testbenches.[3, 4] The phased approach ensures that components operate in a synchronized manner, advancing from one stage to the next only when all participating components have successfully completed their current phase's responsibilities.[5] The very existence of UVM phases, and their strict ordering, directly reflects the profound complexity and inherent interdependencies present in hardware verification. In this domain, precise setup, configuration, stimulus application, and result collection are paramount. Without a standardized, enforced order, verification engineers would face immense challenges in manually managing these intricate dependencies, leading to ad-hoc, brittle, and error-prone testbenches. UVM phases effectively abstract away this manual synchronization burden, providing a robust, predictable flow that inherently addresses common verification challenges such as race conditions and ensures that all components function cohesively as a unified system.[4, 5] This structured approach is a primary reason UVM has become the industry standard for functional verification.
Core UVM Phase Concepts
UVM phases primarily function as critical synchronization points throughout the testbench lifecycle. They ensure that all components complete their current phase's activities before the entire testbench collectively progresses to the subsequent phase.[1, 5] For enhanced clarity and organizational structure, UVM phases are broadly categorized into three principal groups [1, 3]:
- Build Phases: These phases are dedicated to configuring and constructing the testbench hierarchy and subsequently establishing connections between components. Examples include build_phase, connect_phase, and end_of_elaboration_phase.
- Runtime Phases: This category encompasses the phases in which the actual test executes. Of these, only run_phase consumes simulation time and interacts directly with the Device Under Test (DUT); start_of_simulation_phase is a zero-time function that runs immediately before it. Key examples are start_of_simulation_phase and run_phase.
- Clean-up Phases: These phases are responsible for the collection, analysis, and reporting of simulation results. This group includes extract_phase, check_phase, report_phase, and final_phase.
Understanding Parent-Child Relationship and Execution Flow
The execution order of UVM phases is intricately linked to the hierarchical structure of uvm_components, which are organized in well-defined parent-child relationships.[6, 7] This hierarchy fundamentally dictates how phases traverse the testbench environment.
Top-Down Execution
In a top-down execution flow, a phase commences its execution at the highest-level parent component, such as uvm_test_top. It then systematically proceeds to its immediate children and recursively descends through the hierarchy to the lowest-level components.[5, 6, 7, 8] This execution order is characteristic of phases where a higher-level component must orchestrate or configure its subordinates before they can fully initialize themselves. The build_phase and final_phase are prime examples of phases that exhibit this top-down execution behavior.[4, 5, 9]
Bottom-Up Execution
Conversely, in a bottom-up execution flow, a phase initiates its activities from the lowest-level child components. It then progresses upwards to their respective parents, ultimately reaching the top-level parent component.[5, 7] This order is typical for phases where lower-level components must complete their specific tasks, such as establishing connections or processing data, before their results can be aggregated or utilized by higher-level components. The majority of other function-based phases, including connect_phase, end_of_elaboration_phase, start_of_simulation_phase, extract_phase, check_phase, and report_phase, adhere to this bottom-up execution paradigm.[4, 5, 7]
Functions vs. Tasks: Time-Consuming vs. Non-Blocking
A fundamental distinction within UVM phases lies in whether they are implemented as SystemVerilog functions or tasks.[1, 3, 4, 7] This choice directly determines whether the phase consumes simulation time.
- Functions: These phases execute in zero simulation time, meaning they are non-blocking. All build phases (build, connect, end_of_elaboration), all clean-up phases (extract, check, report, final), and start_of_simulation are implemented as functions.[1, 3, 4] Consequently, these phases are prohibited from containing time-consuming operations such as #delays or wait statements.
- Tasks: These phases are designed to consume simulation time, making them blocking. The run_phase and its various sub-phases are the only task-based phases in UVM.[1, 3, 4] This is the designated domain where all time-consuming activities, including stimulus generation, DUT interaction, and waiting for responses, must occur.
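As a minimal sketch of this contrast (the class name demo_component is hypothetical, and the objection calls are explained later in this document):

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical component illustrating the function/task split.
class demo_component extends uvm_component;
  `uvm_component_utils(demo_component)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // Function phase: must complete in zero simulation time.
  // A #delay or @(posedge ...) here would not even compile,
  // because SystemVerilog functions cannot block.
  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
  endfunction

  // Task phase: the only place where simulation time may advance.
  task run_phase(uvm_phase phase);
    phase.raise_objection(this);
    #100ns;  // legal only in a task-based phase
    phase.drop_objection(this);
  endtask
endclass
```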
The strict distinction between function and task types for UVM phases represents a deliberate design decision that leverages SystemVerilog's capabilities to enforce simulation time discipline. This ensures that testbench setup and teardown operations are instantaneous, and that simulation time is focused exclusively on the DUT's dynamic operation. If a build_phase (a function) were permitted to consume simulation time, the entire testbench construction process would be delayed, potentially creating unpredictable race conditions with static initial blocks or other time-zero activities. By implementing build and cleanup phases as functions, UVM guarantees that the testbench hierarchy is fully elaborated, configured, and connected before any simulation time advances. This architectural separation ensures that the "setup" and "teardown" portions of the testbench are purely structural and logical, while the "execution" portion (the run_phase) is where all actual time-based interactions with the DUT occur. This fundamental design decision contributes significantly to predictable and efficient simulation performance.
Concurrent Execution of Tasks (Specifically the run_phase)
The run_phase occupies a unique position within the UVM phasing scheme, as it executes concurrently across all active components in the testbench.[1, 3, 4, 7, 8, 10, 11] This concurrency allows, for instance, a driver component to actively generate stimulus while a monitor component simultaneously observes DUT behavior, and a scoreboard concurrently performs result checking.
While the run_phase itself defines a broad concurrent domain, it also encapsulates a predefined, sequential schedule of 12 distinct sub-phases. These include pre_reset, reset, post_reset, pre_configure, configure, post_configure, pre_main, main, post_main, pre_shutdown, shutdown, and post_shutdown.[3, 4, 10, 11] These sub-phases execute in a strict sequence, with the testbench only advancing to the next sub-phase once all participating components have completed their work in the current sub-phase, typically by dropping their objections.[11]
The combination of global concurrency for the run_phase and sequential sub-phases within it represents a sophisticated design pattern for managing complex, time-consuming verification tasks. This approach effectively balances efficiency through parallelism with precise synchronization for critical operations. A purely parallel execution model, such as a simple fork/join_none, would make it exceedingly difficult to synchronize critical testbench activities, like ensuring all components are reset before configuration commences. Conversely, a purely sequential execution model would be inefficient, as many testbench elements, such as passive monitors, can operate independently without needing to wait for others. The UVM designers recognized the need for both approaches. The run_phase provides a broad, concurrent environment where independent activities can proceed simultaneously, maximizing simulation throughput. The sub-phases within run_phase then serve as essential checkpoints or barriers, enabling the entire testbench to collectively transition through critical, interdependent stages of DUT operation, such as power-up, reset, configuration, main stimulus application, and shutdown. This layered approach effectively balances the need for efficient concurrent execution with the necessity of precise, synchronized control for interdependent operations, accurately reflecting the real-world operational flow of a complex DUT.
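A component opts into this layered schedule simply by overriding the relevant sub-phase tasks. The following sketch assumes hypothetical names (my_driver, my_item, my_if, and the vif handle):

```systemverilog
// Hypothetical driver overriding two run-time sub-phases.
class my_driver extends uvm_driver #(my_item);
  `uvm_component_utils(my_driver)
  virtual my_if vif;  // hypothetical DUT interface

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // Runs before main_phase in the fixed sub-phase schedule.
  task reset_phase(uvm_phase phase);
    phase.raise_objection(this);
    vif.rst_n <= 1'b0;
    repeat (5) @(posedge vif.clk);
    vif.rst_n <= 1'b1;
    phase.drop_objection(this);
  endtask

  // Starts only after every component in the testbench has dropped
  // its objections for reset_phase and the configure-group sub-phases.
  task main_phase(uvm_phase phase);
    phase.raise_objection(this);
    // ... drive primary stimulus here ...
    phase.drop_objection(this);
  endtask
endclass
```

Each sub-phase thus acts as a barrier: this driver's main_phase cannot begin until all components, not just this one, have finished their reset work.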
UVM Phase Summary Table
The following table provides a concise summary of the primary UVM phases, outlining their categorization, method type, execution order, and primary purpose.
| Phase Name | Category | Method Type | Execution Order | Primary Purpose / Key Activity |
|---|---|---|---|---|
| build | Build | Function | Top-Down | Hierarchical construction and instantiation of components. |
| connect | Build | Function | Bottom-Up | Establishing TLM connections and assigning resource handles. |
| end_of_elaboration | Build | Function | Bottom-Up | Final adjustments and displaying testbench topology. |
| start_of_simulation | Runtime | Function | Bottom-Up | Initial runtime configuration and displaying information. |
| run | Runtime | Task | Concurrent | Actual test execution, stimulus generation, and DUT interaction. |
| extract | Cleanup | Function | Bottom-Up | Retrieving and processing data from scoreboards/coverage. |
| check | Cleanup | Function | Bottom-Up | Verifying DUT correctness by comparing data. |
| report | Cleanup | Function | Bottom-Up | Generating and displaying final test results and messages. |
| final | Cleanup | Function | Top-Down | Completing any remaining outstanding actions (often empty). |
Detailed Breakdown of UVM Phases
1. Build Phases
build_phase
The build_phase is the foundational phase, executing first in the UVM testbench lifecycle. Its sole responsibility is the hierarchical construction and instantiation of all testbench components.[1, 3, 5, 12] This includes the creation of environments, agents (which typically contain drivers, monitors, and sequencers), scoreboards, and other essential verification components. This phase is implemented as a function, ensuring it executes in zero simulation time.[3, 4] This characteristic guarantees that the entire testbench structure is built instantaneously before any simulation time advances.
The execution order of the build_phase is strictly top-down.[3, 4, 5, 7, 8] The uvm_root component, often implicitly managed by the run_test() function, initiates its build_phase. This, in turn, creates its children, such as uvm_test_top. Subsequently, the build_phase of the newly created child is invoked, and this process recursively descends through the entire component hierarchy in a depth-first traversal.[5, 6] This top-down order is critical because parent components are responsible for creating their child components. It allows a parent to configure its children, for instance, by setting values in the uvm_config_db, before the child's build_phase executes and the child attempts to retrieve those configurations.[4, 8, 13] This flow ensures a predictable and controllable setup, where the overall testbench structure and initial configurations are defined from the highest level down. Key activities within this phase involve component instantiation, typically performed using the UVM factory's ::type_id::create() method.[6, 12]
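The set-before-create pattern described above can be sketched as follows, assuming hypothetical types my_test, my_env, and my_agent:

```systemverilog
// Hypothetical test/agent pair showing the top-down build pattern.
class my_test extends uvm_test;
  `uvm_component_utils(my_test)
  my_env env;  // hypothetical environment class

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Parent publishes configuration BEFORE the child is built...
    uvm_config_db#(int)::set(this, "env.agent", "is_active", UVM_ACTIVE);
    // ...then creates the child via the factory, not new().
    env = my_env::type_id::create("env", this);
  endfunction
endclass

class my_agent extends uvm_agent;
  `uvm_component_utils(my_agent)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    int active;
    super.build_phase(phase);
    // The child can rely on this value being present, because its
    // ancestors' build_phase already ran in the same top-down pass.
    if (!uvm_config_db#(int)::get(this, "", "is_active", active))
      `uvm_fatal("CFG", "is_active was never set for this agent")
  endfunction
endclass
```

Because the test's build_phase runs before the agent's, the uvm_config_db::set is guaranteed to land before the matching get, which is precisely what the top-down ordering buys.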
connect_phase
Following the instantiation of components, the connect_phase is dedicated to establishing the necessary connections between the various testbench components.[1, 3, 5, 12] This primarily involves linking TLM (Transaction-Level Modeling) ports and exports, connecting analysis ports to scoreboards or functional coverage collectors, and assigning handles to shared testbench resources. Like the build_phase, the connect_phase is implemented as a function and therefore executes in zero simulation time.[3, 4]
The execution order for the connect_phase is bottom-up.[1, 3, 4, 5, 7] This order ensures that lower-level components, such as agents, establish their internal connections and expose their interfaces (ports, exports) before higher-level components, like environments, attempt to connect to them.[4, 7] This provides a robust connection scheme where the "leaf" components are fully ready, and connections are then reliably made upwards through the hierarchy, ensuring correct implementation throughout the design hierarchy.[4]
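A typical connect_phase, sketched with hypothetical component types (my_env, my_agent with an internal monitor exposing an analysis port ap, and my_scoreboard):

```systemverilog
// Hypothetical environment wiring its children during connect_phase.
class my_env extends uvm_env;
  `uvm_component_utils(my_env)
  my_agent      agent;  // hypothetical agent containing a monitor
  my_scoreboard sb;     // hypothetical scoreboard

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void connect_phase(uvm_phase phase);
    super.connect_phase(phase);
    // Monitor analysis port -> scoreboard analysis export.
    // Safe here: the agent's own (lower-level) connect_phase has
    // already run bottom-up and wired driver <-> sequencer internally.
    agent.monitor.ap.connect(sb.analysis_export);
  endfunction
endclass
```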
end_of_elaboration_phase
The end_of_elaboration_phase is utilized for making any final adjustments to the testbench's structure, configuration, or connectivity just before the simulation begins.[1, 3, 5, 12] It also serves as a common and recommended location to display the final UVM topology, providing a comprehensive overview of the instantiated and connected testbench.[1, 3, 12] This phase is also a function and executes in zero simulation time.[3]
Its execution order is bottom-up.[1, 3, 5, 12] While the build_phase handles primary construction and the connect_phase establishes links, the end_of_elaboration_phase provides a crucial finalization step, enabling actions that depend on the entire hierarchy being fully built and connected. This phase executes after both build and connect phases have completed. Some actions, such as printing the complete and final testbench topology, or performing global sanity checks on the interconnected graph, can only be accurately performed once both construction and connection are entirely finalized across the whole hierarchy. This phase provides that specific, guaranteed window, ensuring the testbench is fully established and structurally sound before any simulation time advances or dynamic behavior commences. It is a critical point for verifying the structural integrity and completeness of the testbench.
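A common use, sketched inside a hypothetical test class:

```systemverilog
// Inside a hypothetical test class:
function void end_of_elaboration_phase(uvm_phase phase);
  super.end_of_elaboration_phase(phase);
  // The component tree is final only now that build and connect
  // have completed across the entire hierarchy.
  uvm_top.print_topology();
endfunction
```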
2. Runtime Phases
start_of_simulation_phase
The start_of_simulation_phase performs initial runtime configuration and is primarily used for displaying informational banners, the final testbench topology, or configuration details immediately before the time-consuming run_phase begins.[1, 3, 5, 12] It is sometimes colloquially referred to as a "marketing purposes" phase due to its focus on displaying information.[5] This phase is implemented as a function and thus executes in zero simulation time.[3, 12]
Its execution order is bottom-up.[3, 5, 12] The explicit recommendation to avoid driving signals until start_of_simulation or later [14] underscores this phase's role as the critical transition point from static testbench setup to dynamic, time-consuming simulation, ensuring a stable initial state for the DUT. This phase marks the definitive boundary between the elaboration (zero-time setup) and simulation (time-consuming execution) stages. The preceding phases (build, connect, end_of_elaboration) are entirely dedicated to establishing the static structure of the testbench, without any advancement in simulation time. The start_of_simulation_phase is the final opportunity for any zero-time, static setup before the simulation clock begins ticking. By advising against driving signals before this phase, UVM ensures that both the DUT and the testbench are in a fully stable, completely configured state at time 0, ready for the very first clock edge or stimulus. This prevents potential race conditions between static initialization activities and the earliest dynamic interactions with the DUT, leading to more predictable and reliable simulation starts.
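A minimal sketch of the informational-banner usage, inside a hypothetical test class:

```systemverilog
// Inside a hypothetical test class: the last zero-time hook before
// simulation time begins advancing in run_phase.
function void start_of_simulation_phase(uvm_phase phase);
  super.start_of_simulation_phase(phase);
  `uvm_info("BANNER",
            $sformatf("Starting %s at time %0t", get_type_name(), $time),
            UVM_LOW)
endfunction
```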
run_phase
The run_phase is the most critical and time-consuming phase, where the actual test execution takes place.[1, 3, 5] During this phase, stimulus is generated and applied to the DUT, sequences and sequence items are executed, and drivers generate the necessary signals to interact with the DUT.[3] Concurrently, monitors passively observe DUT behavior, and scoreboards perform real-time checking. This phase is implemented as a task, making it the only phase where simulation time can advance.[3, 4]
The execution order of the run_phase is concurrent across all components.[1, 3, 4, 7, 8, 10, 11] This allows different components, such as a driver, monitor, and scoreboard, to operate in parallel, reflecting the concurrent nature of hardware.
The run_phase itself contains a predefined, sequential schedule of 12 distinct sub-phases.[3, 4, 10, 11] These are: pre_reset, reset, post_reset, pre_configure, configure, post_configure, pre_main, main, post_main, pre_shutdown, shutdown, and post_shutdown. Each sub-phase has specific entry/exit criteria and typical uses, providing fine-grained control over the test's timeline.[3, 10]
Run-Time Sub-Phases Overview Table
| Sub-Phase Name | Typical Uses | Key Entry/Exit Criteria |
|---|---|---|
| pre_reset_phase | Waiting for power good, initializing outputs to X/Z, initializing clock signals, assigning reset to X, waiting for reset assertion. | Entry: Power applied, no active clock edges. Exit: Reset signal ready to be asserted. |
| reset_phase | Asserting/de-asserting reset, driving outputs to idle, initializing state variables, starting clock generation. | Entry: Hardware reset signal ready to be asserted. Exit: Reset de-asserted, main clock stable, at least one active clock edge. |
| post_reset_phase | Components begin behavior for inactive reset (e.g., idle transactions, interface training). | Entry: DUT reset signal de-asserted. Exit: Testbench/DUT in known, active state. |
| pre_configure_phase | Modifying DUT configuration, waiting for components for configuration to complete training. | Entry: DUT completed reset, ready for configuration. Exit: DUT configuration information defined. |
| configure_phase | Components execute transactions for configuration, programming DUT/memories. | Entry: DUT ready to be configured. Exit: DUT configured, ready to operate normally. |
| post_configure_phase | Waiting for configuration to propagate/take effect, enabling DUT, sampling configuration coverage. | Entry: Configuration fully uploaded. Exit: DUT fully configured, enabled, and ready to operate normally. |
| pre_main_phase | Waiting for components to complete training and rate negotiation. | Entry: DUT fully configured. Exit: All components ready to generate/observe normal stimulus. |
| main_phase | Generating primary test stimulus, starting data sequences, waiting for timeout/completion. | Entry: Stimulus for test objectives ready to be applied. Exit: Sufficient stimulus applied to meet primary objective. |
| post_main_phase | Included for symmetry; handles any finalization of main_phase. | Entry: Primary stimulus objective met. Exit: None. |
| pre_shutdown_phase | Included for symmetry. | Entry: None. Exit: None. |
| shutdown_phase | Waiting for all data to drain from DUT, extracting buffered data. | Entry: None. Exit: All data drained/extracted, interfaces idle. |
| post_shutdown_phase | Performing final checks requiring run-time access to DUT. | Entry: No more "data" stimulus applied. Exit: All run-time checks satisfied, uvm_run_phase ready to end. |
The UVM Objection Mechanism
The duration of the run_phase is controlled by the UVM objection mechanism.[3, 12, 15] This mechanism provides a robust method to synchronize and coordinate components, preventing the simulation from prematurely terminating while active processes, such as sequences driving stimulus or monitors waiting for responses, are still running.[15]
The mechanism operates as follows: Components raise_objection() when they initiate a time-consuming activity and drop_objection() when that activity completes.[15, 16] The run_phase for the entire testbench will only conclude when the total objection count across all components (including their hierarchical descendants) reaches zero.[11, 15, 16] After the last objection is dropped, a configurable drain_time allows for a brief period of final activity, such as waiting for the last transactions to complete, before the phase truly ends and propagates the "all dropped" status up the hierarchy.[15, 16] This is crucial for ensuring all pending transactions or responses are processed before transitioning to the cleanup phases.
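The raise/drop pattern with a drain time can be sketched as follows; the names my_test, my_sequence, and the env.agent.sequencer path are hypothetical, and the phase.phase_done handle assumed here is the UVM 1.2 way of reaching the phase's objection object:

```systemverilog
// Hypothetical test showing objection-controlled run_phase length.
class my_test extends uvm_test;
  `uvm_component_utils(my_test)
  // (new() and build_phase omitted for brevity)

  task run_phase(uvm_phase phase);
    my_sequence seq;  // hypothetical sequence type

    // Keep the phase alive while stimulus is in flight.
    phase.raise_objection(this, "starting main stimulus");

    // After the LAST objection drops, wait a further 100 ns so
    // trailing transactions can drain before the phase truly ends.
    phase.phase_done.set_drain_time(this, 100ns);

    seq = my_sequence::type_id::create("seq");
    seq.start(env.agent.sequencer);  // blocks until the sequence ends

    phase.drop_objection(this, "main stimulus complete");
  endtask
endclass
```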
The objection mechanism is a critical abstraction that decouples the end of simulation from explicit time-based waits, allowing for dynamic, data-dependent test termination. This makes testbenches more robust to changes in stimulus duration and supports complex asynchronous activities, which are common in hardware verification. Hardcoding simulation termination with $finish at a fixed time is inflexible; if stimulus generation takes longer or shorter than anticipated, the test might either cut off prematurely, losing coverage, or waste valuable simulation cycles. The uvm_objection mechanism allows components to dynamically signal their active status. This means the simulation runs precisely long enough for all meaningful activity to complete, irrespective of the actual time taken. This mechanism is a cornerstone of UVM's flexibility and efficiency. It enables dynamic test durations, supports complex asynchronous activities where components might finish at different, unpredictable times, and simplifies the management of concurrent processes. It shifts the burden of deciding "when to end" from the verification engineer's manual timing to the components themselves, based on their actual workload. This is a powerful abstraction for managing the non-deterministic and highly parallel nature of complex hardware behavior.
3. Cleanup Phases
extract_phase
The extract_phase is used to retrieve and process information from data collection components such as scoreboards and functional coverage monitors.[1, 3, 5] This is typically where raw collected data is analyzed and prepared for subsequent verification. This phase is implemented as a function and executes in zero simulation time.[3] Its execution order is bottom-up.[1, 5]
check_phase
In the check_phase, the correctness of the DUT's behavior is verified by comparing predicted data from the reference model with the actual data collected from the DUT.[1, 3, 5] This phase is crucial for identifying any mismatches or errors that may have occurred during the simulation. It is also a function and executes in zero simulation time.[3] Its execution order is bottom-up.[1, 5]
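The extract-then-check pipeline is often implemented inside a single scoreboard; this sketch uses hypothetical names (my_scoreboard, my_item, and an internal expected_q queue of predicted-but-unmatched transactions):

```systemverilog
// Hypothetical scoreboard splitting end-of-test work between
// extract_phase and check_phase.
class my_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(my_scoreboard)
  my_item expected_q[$];     // hypothetical transaction type
  int unsigned n_unmatched;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // Bottom-up: gather raw results first.
  function void extract_phase(uvm_phase phase);
    super.extract_phase(phase);
    n_unmatched = expected_q.size();
  endfunction

  // Then judge correctness from the extracted data.
  function void check_phase(uvm_phase phase);
    super.check_phase(phase);
    if (n_unmatched != 0)
      `uvm_error("SB", $sformatf("%0d expected transactions never observed",
                                 n_unmatched))
  endfunction
endclass
```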
report_phase
The report_phase is the primary and most commonly utilized phase for generating and displaying the final test results.[1, 3, 5, 17, 18] Activities within this phase include reporting errors, warnings, fatal messages, coverage summaries, and the overall pass/fail status of the test. This phase is implemented as a function and executes in zero simulation time.[3] Its execution order is bottom-up.[1, 5]
It is important to note that if the simulation terminates prematurely due to a UVM_FATAL error or if the UVM_MAX_QUIT_COUNT is reached, the report_phase might not execute.[17] For essential information that must be reported even in such abrupt termination scenarios, the pre_abort() callback, available in uvm_component, should be utilized.[17]
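A sketch of both hooks inside a hypothetical test class, using the UVM report server's severity counts to derive a pass/fail verdict:

```systemverilog
// Inside a hypothetical test class: pass/fail summary, plus a
// pre_abort hook that still fires on UVM_FATAL or max quit count.
function void report_phase(uvm_phase phase);
  uvm_report_server svr = uvm_report_server::get_server();
  super.report_phase(phase);
  if (svr.get_severity_count(UVM_ERROR) == 0 &&
      svr.get_severity_count(UVM_FATAL) == 0)
    `uvm_info("RESULT", "*** TEST PASSED ***", UVM_NONE)
  else
    `uvm_info("RESULT", "*** TEST FAILED ***", UVM_NONE)
endfunction

virtual function void pre_abort();
  // Runs even when abrupt termination skips report_phase.
  `uvm_info("ABORT", "Aborting -- dumping essential state", UVM_NONE)
endfunction
```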
The extract, check, and report phases being bottom-up ensures a logical flow of data processing from raw collection to final presentation. This guarantees that lower-level components complete their analysis before higher-level components aggregate and report the comprehensive results. The verification process naturally follows a data pipeline: monitors collect raw data, scoreboards process this data to check for correctness (during extract and check), and then the test or environment aggregates these individual component results to generate a comprehensive final report (during report). For this logical chain to function correctly and produce accurate results, the lowest-level data processing (extraction, checking) must complete before the higher-level aggregation and reporting can occur. If these phases were top-down, a parent component might attempt to generate a report before its child components had even finished processing their respective data, leading to incomplete or incorrect results. This bottom-up flow ensures data integrity and completeness throughout the cleanup process.
final_phase
The final_phase is the absolute last phase in the UVM testbench execution flow, intended to complete any remaining outstanding actions that the testbench has not yet finished.[1, 3, 5] It is implemented as a function and executes in zero simulation time.[3]
Its execution order is top-down.[5, 9] For normal UVM flows, this phase is often left empty, as end-of-test reporting and most cleanup activities are typically handled in the report_phase().[9] Its primary existence is for advanced scenarios involving multiple loops of run, extract, check, and report phases, where a top-down reset or re-initialization might be needed before jumping back to run() for a new iteration of concatenated tests.[9] The final_phase's top-down execution and its niche use case for multi-loop test scenarios indicate UVM's underlying extensibility for highly specialized, non-standard verification flows, even if it is not part of the common methodology. If the final_phase is used to prepare the testbench for another iteration of the test, for example, by re-initializing a global state, resetting a top-level component, or cleaning up global resources before a new test starts, a top-down approach allows the test-level component to orchestrate these global reconfigurations. This is a control-oriented cleanup, rather than solely a data aggregation one. The fact that its support is "partially defined" and it is "not required for normal UVM flows" underscores its advanced, specialized nature. This demonstrates the framework's power to accommodate highly complex test automation strategies, which can be crucial for optimizing simulation farm utilization by reducing simulator startup overhead between tests.
Visualizing UVM Phase Execution
The provided diagrams offer valuable visual aids for understanding the UVM phase execution flow.
Interpreting the Hierarchical Block Diagram
This diagram clearly represents the uvm_component hierarchy, illustrating the nesting of components such as simv as the simulation entry point, containing uvm_top, which calls run_test(), leading to uvm_test_top, env, and comp. The vertical arrangement effectively emphasizes the parent-child relationships, which are fundamental to understanding how phases traverse the testbench.[6, 7, 8, 13] This visual representation reinforces the concept of a structured, nested testbench.
Interpreting the Phase Execution Flow Diagram
This horizontal flow chart provides a dynamic and sequential view of how the UVM phases execute across different components, including run_test(), uvm_test_top, env, and comp.
The diagram correctly depicts the build phase executing from top-to-bottom (parent to child). This visually reinforces the concept that parent components create and configure their children before the children's build_phase executes.[5, 6, 7, 8, 13]
For other phases, specifically connect, end_of_elaboration, start_of_simulation, extract, check, report, and final, the diagram illustrates execution lines moving from bottom-to-top (child to parent), demonstrating the general bottom-up execution concept.[1, 3, 4, 5, 7] However, it is important to note a discrepancy: while the diagram depicts the final_phase as executing bottom-up, multiple authoritative research sources explicitly state that the final_phase is top-down.[5, 9] This highlights that while diagrams are excellent for visual understanding, the UVM specification and detailed textual resources should be prioritized for precise technical accuracy. When faced with such a contradiction, the UVM specification and well-regarded technical documentation, such as that from Verification Academy, are generally more authoritative than a simplified diagram. Diagrams often abstract or simplify details for visual clarity, and this might be one such instance. For technical accuracy, the UVM standard defines final_phase as top-down, aligning it with build_phase for specific global cleanup or re-initialization scenarios.
The diagram correctly highlights the run phase as a separate, concurrent thread. This visually represents its unique nature as a time-consuming task that runs in parallel across different components, distinct from the sequential, zero-time function phases.[1, 3, 4, 7, 8, 10, 11]
Conclusion
UVM phases form the fundamental backbone of a robust, efficient, and scalable verification environment. By providing a predefined structure and synchronization points, they are instrumental in managing the inherent complexity of modern VLSI testbenches. The phased approach enables modularity, promotes reusability of verification IP, and significantly simplifies the debugging process, ultimately leading to higher quality verification.[2, 3]
Effective utilization of UVM phases demands a thorough understanding of their individual purposes, execution order, and the critical distinction between function-based (zero-time) and task-based (time-consuming) phases. Correctly implementing and managing the objection mechanism is paramount for ensuring the proper and timely termination of the run_phase, preventing premature simulation exits or hangs.[15] Adhering to UVM phase guidelines, leveraging built-in UVM features like the factory and the uvm_config_db (understanding its phase-dependent "parent wins" versus "last write wins" behavior), and continuously refining the verification methodology are key to achieving optimal testbench performance, reliability, and coverage.[2]
The comprehensive and sometimes intricate rules governing UVM phases are not arbitrary; rather, they are a direct reflection of deep engineering principles designed to manage the profound complexity and concurrency inherent in verifying modern hardware. Each rule, such as the top-down nature of build, the bottom-up flow of connect, the distinction between function and task phases, and the sophisticated objection mechanism, is a carefully considered and engineered solution to a specific, recurring, and often subtle problem in hardware verification. These problems include dependency management, resource allocation, precise synchronization across concurrent processes, and dynamic test termination. The "best practices" associated with UVM phases are, in essence, the practical application of these underlying design principles. A deep understanding of why UVM is structured the way it is, and how to apply it with true expertise, elevates a verification engineer from a mere user to a master of the methodology.
References
1. https://vlsiquest.com/uvm-phases-best-prctice/
2. https://www.numberanalytics.com/blog/uvm-vlsi-best-practices
3. https://www.emtechsa.com/post/uvm-phases
4. https://up824.gitbooks.io/verification/uvm.html
5. https://www.scribd.com/document/669204191/What-are-UVM-Phases
6. https://stackoverflow.com/questions/19353096/uvm-phase-query
7. https://m.youtube.com/shorts/uVznI-fmDTo
8. https://verificationacademy.com/forums/t/top-down-bottom-up-build-connect-phase/32916
9. https://verificationacademy.com/forums/t/why-final-phase-is-top-to-bottom/50584
10. https://verificationacademy.com/verification-methodology-reference/uvm/docs_1.2/html/files/base/uvm_runtime_phases-svh.html
11. https://forums.accellera.org/topic/8138-uvm-phase-jump-from-run-phase-to-final-phase-is-not-happening/
12. http://vlsikt.blogspot.com/2017/09/uvmphases-and-flow.html
13. https://verificationacademy.com/forums/t/build-phase-execution/31604
14. https://stackoverflow.com/questions/52368523/in-which-phase-initial-blocks-are-executed
15. https://www.scribd.com/document/835277507/Uvm-Objection
16. https://verificationacademy.com/verification-methodology-reference/uvm/docs_1.2/html/files/base/uvm_objection-svh.html
17. https://verificationacademy.com/forums/t/report-phase/34898
18. https://www.youtube.com/watch?v=zGzUrc1dDaU
19. https://www.edaplayground.com/x/5MST