1st Workshop on Democratizing Domain-Specific Accelerators (WDDSA 2022)

In conjunction with the 55th IEEE/ACM International Symposium on Microarchitecture (MICRO 55), October 2, 2022, Chicago


The evolution of computer architecture often begins with domain-specific designs that are later turned into general-purpose ones. For example, two decades ago GPUs were merely accelerators for computer graphics, but today they are the most widely used general-purpose vector processors. Recent trends in computer architecture have again turned toward domain-specific accelerators (DSAs). At the same time, research projects have successfully applied emerging accelerators to applications beyond their original target domains. Inspired by these projects and the story of GPGPU (general-purpose computing on GPUs), this workshop brings together experts from academia and industry to share their efforts in democratizing domain-specific accelerators. Through the presented work, WDDSA aims to explore the potential to lead a renaissance of general-purpose computing on emerging DSAs.

While we are especially interested in work that supports general-purpose computing on recent DSAs, we also encourage submissions on DSAs and their infrastructure in general. Topics of interest include, but are not limited to:

  1. Novel use cases of accelerators for applications outside their original target domains
  2. Systems, programming, and software support for democratizing domain-specific accelerators
  3. Architectural support for democratizing domain-specific accelerators
  4. Performance, power, and energy evaluation and analysis of democratized domain-specific accelerators
  5. Implications for future “democratized” accelerator designs

This workshop invites three types of presentations.

  1. Applications and demonstrations of projects with artifacts available for the community to use and extend. Submissions may be based on already published work (published within 12 months of the submission deadline). The presentation should consider including a live demo.
  2. Research papers on work-in-progress projects with preliminary results.
  3. Position papers on directions for research and development.


1:00p-1:15p Opening Remarks: Why WDDSA?
1:15p-2:06p First Session: “What are the potential accelerators?”
Chair: Yufei Ding

FlowGNN: A Dataflow Architecture for Real-Time Workload-Agnostic Graph Neural Network Inference
Kobold: Simplified Cache Coherence for Cache-Attached Accelerators
Hardware Abstractions and Hardware Mechanisms to Support Multi-Task Execution on Coarse-Grained Reconfigurable Arrays
2:15p-3:05p Second Session: “How to easily make an accelerated processing unit”
Chair: Chris Torng

GNNBuilder: An Automated Framework for Generic Graph Neural Network Accelerator Generation, Simulation, and Optimization
A Full-Stack Infrastructure for Automating Spatial Architecture Research
Canal: A Flexible Interconnect Generator for Coarse-Grained Reconfigurable Arrays
3:30p-4:20p Third Session: “Where are we in DDSA?”
Chair: Po-An Tsai

RTNN: Accelerating Neighbor Search Using Hardware Ray Tracing
Accelerating Applications using Edge Tensor Processing Units
Accelerating Small Matrix Processing for General-Purpose Linear Algebra Using DNN Accelerator Matrix Engines
4:20p Panel: “When: is it the right time for DDSA?”
Chair: Hung-Wei Tseng
Derek Chiou (UT Austin/Microsoft)
Po-An Tsai (NVIDIA)
Ren Wang (Intel)

Submission Guidelines

  1. Papers must be submitted in printable PDF format.
  2. Papers must be in the standard two-column conference format. There are no strict formatting requirements, but please use reasonable font sizes and margins.
  3. Papers should be 4 to 6 pages, not including references.

Submission Website


Important Dates

  • Abstract submission deadline: August 31, 2022, 23:59 PST (Extended)
  • Full paper submission deadline: August 31, 2022, 23:59 PST
  • Author notification: September 5, 2022
  • Camera-ready due: September 28, 2022
  • Workshop date: October 2, 2022

Organizers

  • Yufei Ding (University of California, Santa Barbara)
  • Christopher Torng (Stanford University)
  • Po-An Tsai (NVIDIA Research)
  • Hung-Wei Tseng (University of California, Riverside)