Accelerator

Accelerator Stream

The Accelerator stream will develop and demonstrate fully European processor IPs based on the RISC-V Instruction Set Architecture, providing power-efficient, high-throughput accelerator tiles within the GPP chip. Using RISC-V allows leveraging open-source resources at both the hardware-architecture and software levels, and ensures independence from non-European patented computing technologies.

The EPAC basic building block is a tile containing up to 8 vector processors and specialised units. The processors are coherent and share L2 cache banks through a Network-on-Chip, with each bank having its associated Home Node agent. The processors will support RISC-V vector instructions and will also control the specialised units dedicated to Stencil and Deep Learning acceleration. The vector and stencil capabilities will address HPC workloads, while the Deep Learning units will target AI applications.

The vector processor architecture will be based on these guiding principles:

  • Holistic throughput-oriented vision based on long vectors and task-based models
  • Hierarchical concurrency and locality exploitation
  • Communication between programming levels
  • A programming model that stays very close to classical sequential programming, to ensure productivity

The dedicated unit architecture, on the other hand, will be geared towards a few specific applications. This specificity will be leveraged to explicitly manage data placement and transfers to and from local scratchpad memories, targeting high energy efficiency.
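
To make the explicit data management concrete, the following sketch (generic C, with hypothetical dma_copy_in/dma_copy_out helpers that are not an EPI API) illustrates the typical pattern: tiles are staged into a small scratchpad buffer, processed locally, and written back, instead of relying on a hardware-managed cache.

```c
#include <stddef.h>

#define TILE 256  /* tile size chosen purely for illustration */

/* Hypothetical helpers standing in for DMA transfers to and from a local
 * scratchpad memory; they are placeholders, not part of any EPI API. */
void dma_copy_in(float *scratch, const float *src, size_t n);
void dma_copy_out(float *dst, const float *scratch, size_t n);

/* Process a large array tile by tile through a small scratchpad buffer,
 * with data placement and transfers managed explicitly by software. */
void process(float *data, size_t n, float scratch[TILE])
{
    for (size_t base = 0; base < n; base += TILE) {
        size_t len = (n - base < TILE) ? (n - base) : TILE;
        dma_copy_in(scratch, data + base, len);   /* stage the tile in */
        for (size_t i = 0; i < len; ++i)
            scratch[i] *= 2.0f;                   /* compute on local data */
        dma_copy_out(data + base, scratch, len);  /* write the results back */
    }
}
```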

The EPAC tile will be integrated both as a node in the GPP mesh, and as a stand-alone Test Chip for demonstration and software debugging purposes.

Compiler Explorer – Overview

The compiler team at the Barcelona Supercomputing Center working on EPI has set up a Compiler Explorer instance for an LLVM-based compiler that targets RISC-V and the (still draft) V-extension.

Compiler Explorer is an open-source web application for interactively inspecting the code generated by a compiler.

We want to use this tool to ease the analysis and study of compiler code generation when targeting the RISC-V V-extension. This gives us valuable co-design information, as it can quickly expose pain points in the code generation. These pain points may suggest changes in the V-extension architecture or require new code-generation strategies or optimizations in the compiler.

Compiler Explorer is intended for small programs or snippets, not for large applications.
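
For instance, a small kernel like the plain-C saxpy below (shown purely as an illustrative snippet) is the kind of input the tool is meant for; pasting it into Compiler Explorer lets us check whether the compiler emits V-extension instructions for the loop.

```c
/* Illustrative snippet for Compiler Explorer: a plain-C saxpy loop.
 * When compiled for RISC-V with the (draft) V-extension enabled, the
 * generated assembly shows whether the loop has been vectorised. */
void saxpy(long n, float a, const float *x, float *y)
{
    for (long i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```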

We have also integrated our own user-space functional emulator `vehave`. This emulator traps the vector-extension instructions emitted by the compiler and emulates them with scalar instructions. This way we can execute vector applications and check their correctness under a RISC-V Linux environment. Either real hardware, such as the HiFive Unleashed, or `qemu-user` can be used; our Compiler Explorer uses `qemu-user`.

The Compiler Explorer website is https://repo.hca.bsc.es/epic/

The compiler user guide is available here: user-guide-compiler-explorer

VaRiable Precision Processor (VRP)

The VaRiable Precision Unit enables efficient computation in scientific domains that make extensive use of iterative linear algebra kernels, such as physics and chemistry. Augmenting accuracy inside the kernel reduces rounding errors and therefore improves the stability of the computation. The usual solutions to this problem (e.g. performing the intermediate calculations in higher precision) have a very high impact on memory usage and computation time.

The hardware support of a variable-precision, byte-aligned data format for intermediate data optimizes both memory usage and computing efficiency. When the standard-precision unit cannot reach the expected accuracy with standard precision (i.e. double), the variable-precision unit takes over and continues with gradually increasing precision until the error tolerance constraint is met. Offloading from the host processor (GPP) to the VRP unit is performed with a zero-copy handover, thanks to IO-coherency between EPAC and GPP.

The VRP is embedded as a functional unit in a 64-bit RISC-V processor pipeline. The unit extends the standard RISC-V instruction set with hardwired basic arithmetic operations on variable-precision scalars: add, subtract, multiply and type conversions. It implements additional specific instructions for comparisons, type conversion and transfers to cache. The unit features a dedicated register file storing up to 32 scalars with up to 256 bits of mantissa precision. Its architecture is pipelined for performance and has an internal parallelism of 64 bits; internal operations at higher precisions (multiples of 64 bits) are thus executed by iterating on the existing hardware.
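
As a rough illustration of this iteration principle (a conceptual sketch only, not the actual VRP micro-architecture), a 256-bit mantissa addition decomposes into four 64-bit limb additions with carry propagation:

```c
#include <stdint.h>

/* Conceptual sketch: adding two 256-bit mantissas stored as four 64-bit
 * limbs (least-significant limb first) by iterating on 64-bit additions
 * and propagating the carry, roughly the kind of iteration the VRP
 * performs in hardware for precisions that are multiples of 64 bits. */
uint64_t add256(uint64_t r[4], const uint64_t a[4], const uint64_t b[4])
{
    uint64_t carry = 0;
    for (int i = 0; i < 4; ++i) {
        uint64_t t  = a[i] + carry;
        uint64_t c1 = (t < carry);      /* overflow from adding the carry */
        r[i] = t + b[i];
        uint64_t c2 = (r[i] < t);       /* overflow from adding b[i] */
        carry = c1 | c2;
    }
    return carry;                       /* carry out of the most significant limb */
}
```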

The VRP programming model is meant for smooth integration with legacy scientific libraries such as BLAS, MAGMA and linear-solver libraries. The integration in the host memory hierarchy is transparent, avoiding the need for data copies, and the accelerator offers standard support for C programs. The libraries are organized so as to expose the variable-precision kernels as compatible replacements of their usual counterparts in the BLAS and solver libraries. The complexity of the arithmetic operations is confined as much as possible within the lower-level library routines (BLAS); consistently, the explicit control of precision is handled exclusively at the solver level.
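
A hypothetical usage sketch of this layering is shown below; every identifier prefixed with vrp_ is invented for illustration and is not the actual EPI/VRP interface. The point is only that precision control appears at the solver level, while the BLAS-like kernels act as drop-in replacements.

```c
#include <stddef.h>

/* Placeholder prototypes standing in for variable-precision kernels that
 * would be exposed as compatible replacements of their BLAS counterparts. */
void   vrp_set_precision(unsigned mantissa_bits);   /* solver-level control only */
void   vrp_spmv(size_t n, const double *A, const double *x, double *y);
double vrp_residual_norm(size_t n, const double *A, const double *x, const double *b);

/* Solve A x = b, gradually augmenting the internal precision until the
 * error tolerance is met (or the 256-bit mantissa limit is reached). */
void solve_with_refinement(size_t n, const double *A, const double *b,
                           double *x, double *work, double tol)
{
    for (unsigned prec = 64; prec <= 256; prec += 64) {
        vrp_set_precision(prec);        /* the only place precision is handled */
        vrp_spmv(n, A, x, work);        /* inner kernels are BLAS-style drop-ins */
        /* ... the remaining solver iterations would go here ... */
        if (vrp_residual_norm(n, A, x, b) <= tol)
            break;                      /* tolerance met: stop augmenting */
    }
}
```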

Stencil/tensor accelerator (STX)

From the beginning, EPI explicitly considered “specialised blocks for stencil and deep learning (DL) acceleration. The vector and stencil capabilities will address workloads in HPC centres, while the DL block will target learning acceleration” as part of the accelerator stream, motivated by “optimised performance and energy efficiency” for “specialised computations”. In the initial DoA, two different domain-specific accelerators (NTX for machine learning, and a stencil accelerator) were suggested. During the first few months of the project, researchers from the Fraunhofer Institute, ETH Zürich and the University of Bologna were able to merge the functionality of both units into a very efficient computation engine that has been named STX (stencil/tensor accelerator).

Such “domain-specific accelerators” are now a major trend in industry, as can be seen from the many new announcements at the 2019 Hot Chips symposium and AI Summit, where industry heavyweights as well as a multitude of startups presented acceleration engines based on specialised datapaths rather than general-purpose vector units. This confirms the significant architectural differentiation needed to achieve top efficiency and performance in the machine-learning domain.

The main goal of STX is to achieve significantly higher (at least 5x-10x) energy efficiency than general-purpose/vector units. Energy efficiency (TFLOPS/W) tells us how many computations can be performed per unit of energy; the early target for the STX unit was to achieve at least 5x better energy efficiency than the vector unit on deep-learning applications. In the first few months of the project, it became clear that these estimates are rather conservative and that the effective efficiency within EPI chips will be significantly higher. For applications that only require inference using quantized networks, the efficiency will be another 10x higher.

STX has been designed as a modular building block with several parametrization options. Each STX accelerator consists of several clusters of computing units; a typical instance would have four such clusters. Each cluster in turn consists of specialised computing engines as well as up to two RISC-V cores that are used to control the computing engines and perform additional operations. All these units access a local scratchpad memory, which is filled using a centralized DMA unit. This configuration allows for 64 GFLOPS (single-precision FP), and multiple STX instances can be instantiated in an EPAC tile.

STX is programmed using OpenMP. There are solutions that allow regular operations to be offloaded to the STX unit from an Arm system (in the GPP) or the 64-bit RISC-V core (in the EPAC tile), using both GCC- and LLVM-based flows that will be further refined as part of the project.
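
As a hedged sketch of what such offloading looks like at the source level, the construct involved is a standard OpenMP target region; the assumption that the target device maps to an STX unit follows from the description above rather than from a documented EPI API.

```c
#include <stddef.h>

/* Generic OpenMP target-offload sketch. Whether the default target device
 * is an STX unit depends on the EPI toolchain configuration, which is
 * assumed here rather than documented. */
void scale_add(size_t n, float a, const float *x, float *y)
{
    #pragma omp target teams distribute parallel for \
            map(to: x[0:n]) map(tofrom: y[0:n])
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```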
