TY - JOUR
T1 - Energy-efficient In-Memory Address Calculation
AU - Yousefzadeh, Amirreza
AU - Stuijt, Jan
AU - Hijdra, Martijn
AU - Liu, Hsiao Hsuan
AU - Gebregiorgis, Anteneh
AU - Singh, Abhairaj
AU - Hamdioui, Said
AU - Catthoor, Francky
N1 - Publisher Copyright:
© 2022 Copyright held by the owner/author(s).
PY - 2022/9/16
Y1 - 2022/9/16
N2 - Computation-in-Memory (CIM) is an emerging computing paradigm that addresses the memory bottleneck in computer architecture. A CIM unit cannot fully replace a general-purpose processor, but it significantly reduces the amount of data transferred between a traditional memory unit and the processor by enriching the transferred information. Data transactions between processor and memory consist of memory access addresses and values. While the main focus in the field of in-memory computing is applying computations to the content of the memory (values), the importance of CPU-CIM address transactions and of calculating the sequence of access addresses for data-dominated applications is generally overlooked. However, the share of transferred bits used for "address" can easily exceed half of the total in many applications. In this article, we propose an in-memory Address Calculation Accelerator circuit. Our simulation results show that calculating address sequences inside the memory (instead of the CPU) significantly reduces CPU-CIM address transactions and therefore yields considerable savings in energy, latency, and bus traffic. For a chosen application of guided image filtering, in-memory address calculation reduces address transactions over the memory bus by almost two orders of magnitude.
AB - Computation-in-Memory (CIM) is an emerging computing paradigm that addresses the memory bottleneck in computer architecture. A CIM unit cannot fully replace a general-purpose processor, but it significantly reduces the amount of data transferred between a traditional memory unit and the processor by enriching the transferred information. Data transactions between processor and memory consist of memory access addresses and values. While the main focus in the field of in-memory computing is applying computations to the content of the memory (values), the importance of CPU-CIM address transactions and of calculating the sequence of access addresses for data-dominated applications is generally overlooked. However, the share of transferred bits used for "address" can easily exceed half of the total in many applications. In this article, we propose an in-memory Address Calculation Accelerator circuit. Our simulation results show that calculating address sequences inside the memory (instead of the CPU) significantly reduces CPU-CIM address transactions and therefore yields considerable savings in energy, latency, and bus traffic. For a chosen application of guided image filtering, in-memory address calculation reduces address transactions over the memory bus by almost two orders of magnitude.
KW - Address calculation unit
KW - Energy optimization
KW - In-memory processing
UR - http://www.scopus.com/inward/record.url?scp=85146422935&partnerID=8YFLogxK
U2 - 10.1145/3546071
DO - 10.1145/3546071
M3 - Article
SN - 1544-3566
VL - 19
JO - ACM Transactions on Architecture and Code Optimization
JF - ACM Transactions on Architecture and Code Optimization
IS - 4
M1 - 3546071
ER -