The rising complexity of systems-on-chip, combined with time-to-market pressure, has rendered virtual prototyping almost indispensable for early design space exploration, verification and parallel HW/SW co-design. At the core of processor models, the instruction decoder is responsible for classifying instructions as defined by the instruction set architecture. Beyond system models, instruction decoders are also intrinsic to binary toolchains such as assemblers and debuggers. In all of these use cases, decoding speed is fundamental to the performance of the simulation platform or tool at large. The wide range of applications and the relevance to performance together make it highly rewarding to invest in an optimized instruction decoder model that can be repeatedly deployed, modified or extended.
As the trend is towards larger and more complex instruction sets, the manual implementation of decoders has arguably become one of the most time-consuming, complex and error-prone modeling tasks, and one that seldom yields acceptable performance. Furthermore, many recent instruction sets include irregularities such as non-uniform opcodes, logic propositions on bit fields, and multiple or nested instruction specializations. Only a few tools are available that attempt to automatically generate decoders from instruction set definitions. These either cannot be applied to irregular instruction sets, require infeasible time or resources during the generation process or, worse, output functionally erroneous decoders.
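To make the notion of irregularity concrete, the following sketch shows a hypothetical decoder fragment in which some encodings can only be distinguished by a logic proposition on bit fields, not by a plain opcode mask alone. The toy 16-bit instruction set, field positions, and mnemonics are illustrative assumptions, not taken from any real architecture.

```python
# Hypothetical toy ISA: a pattern matches when (word & mask) == value
# AND an extra predicate on bit fields holds. The predicate expresses
# a logic proposition that a mask/value pair alone cannot capture.

def bits(word, hi, lo):
    """Extract the bit field word[hi:lo], both bounds inclusive."""
    return (word >> lo) & ((1 << (hi - lo + 1)) - 1)

# Each entry: (mask, value, predicate, mnemonic). All encodings are
# invented for illustration.
PATTERNS = [
    # "MOV rd, rs" is only valid when rd != rs; the same opcode with
    # rd == rs is defined as "NOP" (a nested specialization).
    (0xF000, 0x1000, lambda w: bits(w, 11, 8) != bits(w, 7, 4), "MOV"),
    (0xF000, 0x1000, lambda w: bits(w, 11, 8) == bits(w, 7, 4), "NOP"),
    # "ADDI rd, imm" requires a nonzero immediate; imm == 0 is reserved.
    (0xF000, 0x2000, lambda w: bits(w, 7, 0) != 0, "ADDI"),
]

def decode(word):
    """Linear-scan reference decoder; an optimized decoder would
    compile PATTERNS into a decision tree instead."""
    for mask, value, pred, mnemonic in PATTERNS:
        if (word & mask) == value and pred(word):
            return mnemonic
    return "ILLEGAL"
```

A naive generator that only considers mask/value pairs cannot separate MOV from NOP, which is one reason such tools either reject irregular instruction sets or emit erroneous decoders.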
Our work is concerned with algorithms for generating decoder decision trees. We investigate different approaches to decision tree generation that can handle irregularities in the instruction set, specifically logic propositions on bit fields. We draw on cross-domain results from system modeling, information theory, coding theory, decision trees and propositional logic. We perform both experimental and theoretical cost analyses of the resulting decoders according to a newly developed cost model.
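As a minimal illustration of the decision-tree idea, the sketch below builds a tree over mask/value patterns by branching on a bit position that is a care bit in every remaining pattern, then walks the tree to decode a word. The construction heuristic and the toy pattern set are assumptions for illustration only; they stand in for, and do not reproduce, the generation algorithms studied in this work.

```python
# Minimal decision-tree decoder generation over (mask, value, name)
# patterns. A node is either a mnemonic string (leaf) or a tuple
# (bit, zero_subtree, one_subtree). Toy patterns are illustrative.

def build_tree(patterns, nbits):
    """Recursively split patterns on a bit that is fixed (a care bit)
    in all of them and actually discriminates between them."""
    if len(patterns) == 1:
        return patterns[0][2]  # unique instruction: emit a leaf
    for b in range(nbits):
        if all((m >> b) & 1 for m, _, _ in patterns):
            zeros = [p for p in patterns if not ((p[1] >> b) & 1)]
            ones = [p for p in patterns if (p[1] >> b) & 1]
            if zeros and ones:  # the bit separates the two subsets
                return (b, build_tree(zeros, nbits),
                        build_tree(ones, nbits))
    raise ValueError("patterns not separable by single care bits")

def decode(tree, word):
    """Follow one branch per tested bit until a leaf is reached."""
    while not isinstance(tree, str):
        bit, zero, one = tree
        tree = one if (word >> bit) & 1 else zero
    return tree

# Four toy 4-bit instructions discriminated by bits 3 and 2.
PATTERNS = [(0b1100, 0b0000, "ADD"), (0b1100, 0b0100, "SUB"),
            (0b1100, 0b1000, "LD"), (0b1100, 0b1100, "ST")]
TREE = build_tree(PATTERNS, 4)
```

The cost of such a decoder is roughly proportional to the depth of the tree, which is what makes the choice of split bits, and hence the generation algorithm, worth analyzing under an explicit cost model.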