The fifth part was published, titled Wilson Study Comparison:
Part 5 of this series compares our GitHub-based research on the verification frameworks used for VHDL designs with the findings of the Wilson Research Group functional verification study. Our analysis shows that the GitHub-derived data confirms the Wilson study results for UVM, OSVVM, and UVVM, but it also shows that the Wilson study misses a large part of the overall picture by not including all commonly used frameworks.
The figure below shows the verification landscape when combining the data from GitHub with that of the Wilson study. The confidence intervals (indicated by the arrows) are narrower for UVM, OSVVM, and UVVM because of the larger sample sizes reached when combining data from two studies. The data for VUnit and cocotb builds solely on GitHub, which results in wider confidence intervals. This doesn't change the fact that these two frameworks play a significant role in contemporary verification practices.
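The exact interval method behind the figure isn't specified here, but the effect of sample size on interval width can be sketched with a standard Wilson score interval for a binomial proportion (the usage counts below are hypothetical, chosen only to illustrate the point):

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical counts: a framework observed in 30 of 100 sampled projects,
# versus the same 30% proportion in a combined sample of 400 projects.
lo_small, hi_small = wilson_interval(30, 100)
lo_large, hi_large = wilson_interval(120, 400)

# The larger combined sample yields a narrower interval at the same proportion.
assert (hi_large - lo_large) < (hi_small - lo_small)
```

Quadrupling the sample size roughly halves the interval width, which matches the intuition that combining the two studies tightens the estimates for the frameworks covered by both.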
Before accepting this result, it is important to consider the biases involved and alternative explanations for the data we see. That and more can be found in the Wilson study comparison section of our study.
The next post will conclude this series, present our conclusions, and discuss the future of open source verification tools.
The code used to derive these facts is part of an open science project. Everything can be reviewed and the results can be reproduced. We encourage contributions and suggestions for other interesting facts that we should derive.