TY - THES
T1 - Exploiting error resilience for hardware efficiency
T2 - targeting iterative and accumulation based algorithms
AU - Gillani, Ghayoor
PY - 2020/7/3
Y1 - 2020/7/3
N2 - Computing devices have constantly been challenged by resource-hungry applications such as scientific computing. These applications demand high hardware efficiency, posing a challenge to reduce the energy/power consumption, latency, and chip area needed to process a given task. An increase in hardware efficiency is therefore one of the major goals in innovating computing devices. Historically, improvements in process technology have played an important role in tackling these challenges by increasing the performance and transistor density of integrated circuits while keeping their power density constant. Over the last couple of decades, however, the efficiency gains from process technology improvements have been approaching the fundamental limits of computing. For instance, power density no longer scales as well as transistor density, which poses a further challenge in controlling the power and thermal budgets of integrated circuits. Given that many applications/algorithms are error-resilient, emerging paradigms like approximate computing come to the rescue by offering promising efficiency gains, especially in terms of power efficiency. An application/algorithm can be regarded as error-resilient or error-tolerant when it produces an outcome with the required accuracy while utilizing processing components that do not always compute accurately. There can be multiple reasons why an algorithm tolerates errors; for instance, it may have noisy or redundant inputs and/or a range of acceptable outcomes. Examples of such applications are machine learning, scientific computing, and search engines. Approximate computing techniques exploit the intrinsic error tolerance of such applications to optimize computing systems at the software, architecture, and circuit levels to achieve efficiency gains. However, state-of-the-art approximate computing methodologies do not sufficiently address accelerator designs for iterative and accumulation-based algorithms. Considering the wide range of such algorithms in digital signal processing, this thesis investigates approximation methodologies to achieve high-efficiency accelerator architectures for iterative and accumulation-based algorithms. As a case study, we apply our proposed approximate computing methodologies to radio astronomy calibration processing, which results in a more effective quality-efficiency trade-off compared to state-of-the-art approximate computing methodologies.
AB - Computing devices have constantly been challenged by resource-hungry applications such as scientific computing. These applications demand high hardware efficiency, posing a challenge to reduce the energy/power consumption, latency, and chip area needed to process a given task. An increase in hardware efficiency is therefore one of the major goals in innovating computing devices. Historically, improvements in process technology have played an important role in tackling these challenges by increasing the performance and transistor density of integrated circuits while keeping their power density constant. Over the last couple of decades, however, the efficiency gains from process technology improvements have been approaching the fundamental limits of computing. For instance, power density no longer scales as well as transistor density, which poses a further challenge in controlling the power and thermal budgets of integrated circuits. Given that many applications/algorithms are error-resilient, emerging paradigms like approximate computing come to the rescue by offering promising efficiency gains, especially in terms of power efficiency. An application/algorithm can be regarded as error-resilient or error-tolerant when it produces an outcome with the required accuracy while utilizing processing components that do not always compute accurately. There can be multiple reasons why an algorithm tolerates errors; for instance, it may have noisy or redundant inputs and/or a range of acceptable outcomes. Examples of such applications are machine learning, scientific computing, and search engines. Approximate computing techniques exploit the intrinsic error tolerance of such applications to optimize computing systems at the software, architecture, and circuit levels to achieve efficiency gains. However, state-of-the-art approximate computing methodologies do not sufficiently address accelerator designs for iterative and accumulation-based algorithms. Considering the wide range of such algorithms in digital signal processing, this thesis investigates approximation methodologies to achieve high-efficiency accelerator architectures for iterative and accumulation-based algorithms. As a case study, we apply our proposed approximate computing methodologies to radio astronomy calibration processing, which results in a more effective quality-efficiency trade-off compared to state-of-the-art approximate computing methodologies.
U2 - 10.3990/1.9789036550116
DO - 10.3990/1.9789036550116
M3 - PhD Thesis - Research UT, graduation UT
SN - 978-90-365-5011-6
T3 - DSI Ph.D. Thesis Series
PB - University of Twente
CY - Enschede
ER -