Visualizing the Behavior of Logic Synthesis Algorithms

One of the biggest sources of frustration for users of synthesis tools is that the tools often seem to behave unpredictably. A small change to an RTL design may produce a large change in the resulting gate-level netlist. Asking for a faster result may produce a slower one. Setting a particular synthesis flag may give better results at one time, and then later give worse results. There seems to be a certain amount of "luck" involved.

This problem is perhaps most acute during benchmarking. We would like to have precise measures of whether one setting of synthesis flags is better than another, whether one RTL architecture gives a faster netlist than another, and so on. But if there is a large random component in the results, and we can't quantify it, then we can't be certain that any of our conclusions are true, or even estimate the probability that they are true.
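To illustrate what quantifying that random component could look like, here is a hedged sketch: suppose a tool could be re-run several times with randomized starting conditions under each of two flag settings; a standard significance test would then estimate how likely the observed difference is to be real. The area numbers and the use of scipy below are purely our own illustration, not data or methodology from this paper:

```python
from scipy import stats

# Hypothetical cell-area results from five randomized re-runs of the same
# design under two flag settings; the run-to-run spread is the "random
# component" discussed above.  All numbers are invented for illustration.
area_flag_a = [10120, 10340, 9980, 10450, 10210]
area_flag_b = [10050, 10290, 10400, 9900, 10180]

t_stat, p_value = stats.ttest_ind(area_flag_a, area_flag_b)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
# For these invented numbers the p-value is large, i.e. the apparent
# difference between the two settings is well within the run-to-run noise.
```

Without some way to generate such repeated samples, this p-value cannot be computed at all, which is exactly the predicament described above.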

Many people try to limit this problem by using a large number of different test cases. This can help: if the per-case errors are roughly independent, the expected total error grows only as the square root of the number of test cases, so the average error per case shrinks as more test cases are added. However, there are still problems with this approach. To cut the average error in half, you need to run four times as many test cases, and so on (the square root works against you). More importantly, most people don't even have any idea of what their error is! They're hoping that they've run enough test cases to make it "negligible", but they don't actually try to measure it.
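To make the square-root relationship concrete, here is a minimal simulation sketch, assuming independent, zero-mean, identically distributed per-case errors (an idealization; the paper does not commit to a specific error model):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 10.0  # assumed per-case error standard deviation (arbitrary units)

for n_cases in [25, 100, 400, 1600]:
    # Simulate many benchmark suites, each with n_cases independent errors.
    errors = rng.normal(0.0, sigma, size=(100_000, n_cases))
    total_error = errors.sum(axis=1)        # total error over the suite
    average_error = total_error / n_cases   # average error per test case
    print(f"N={n_cases:5d}  "
          f"RMS total error={total_error.std():7.1f}  "
          f"RMS average error={average_error.std():6.3f}")
```

Each quadrupling of the test-case count doubles the RMS total error but only halves the RMS average error per case, which is exactly why cutting the average error in half costs four times as many test cases.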

This paper takes a different approach. Rather than simply taking more test cases, we investigate how to understand a single test case in great detail, with a reasonable expenditure of synthesis resources.
