SiFive’s Krste Asanovic: New RISC-V Designs and AI Innovation

In this TechVoices conversation, Krste Asanovic, Co-Founder and Chief Architect at SiFive, outlines the company’s newest AI-focused RISC-V designs, including ultra-small X160 and X180 cores for pairing with custom accelerators, a new scalar co-processor interface (SSCI), deeper memory-latency tolerance, and specialized instructions for modern activation functions.

Asanovic contrasts SiFive’s IP-licensing model with proprietary approaches, details how open standards reduce fragmentation, and highlights real-world deployments of SiFive IP for processor cores—from electric vehicle ADAS to NASA’s spaceflight computers—while forecasting a continued shift toward general-purpose architectures.

Core Takeaways

Why it matters: Open standards and right-sized AI cores are enabling faster innovation across diverse SoC (System on a Chip) designs and deployment environments.

  • Second-gen Intelligence launch: SiFive’s new release builds on 40+ first-gen design wins and introduces ultra-compact X160 and X180 cores to sit beside customer accelerators—especially for power-sensitive edge AI.
  • New SSCI + memory advances: A scalar co-processor interface (alongside the existing vector interface), more efficient multi-core memory hierarchies, deeper latency tolerance, and new instructions target today’s AI workloads.
  • Open RISC-V reduces fragmentation: Profiles such as RVA23 standardize the software target while letting vendors tailor extensions; using RISC-V across IP blocks creates a more uniform SoC toolchain and developer experience.

Key Quotes

In his words: Four extended quotes from SiFive’s Krste Asanovic that capture the company’s strategy, technology, and market impact.

New product release: Second-gen Intelligence and the new tiny cores

“So first thing to say, this is the second generation of what we call our Intelligence line. The new launch builds upon the experience we’ve had over the last few years with our first generation products, where we had over 40 design wins. And so listening to our customers, figuring out the features we need to add to improve things. We’ve done a lot of development improvement based on that experience.”

“I think the most important new feature here is a new member of the family. Speaking to our customers, one of the common use cases was using our core next to their own custom acceleration logic. And one of the requests was: Can we have a much smaller core? And so we’ve attacked that with the new X160 and X180 processors, which are very small variants of our existing family. So improvements for all the family, but also this new smaller member of the family is probably the highlight of this next launch.”

SSCI, memory hierarchy, and AI-centric instructions

“Of course one of the key things is a new, what we call SSCI, SiFive Scalar Co-Processor Interface. We already had a vector co-processor interface that allowed people to connect their accelerator directly to the vector register file. So very high data throughput, but folks also wanted a scalar co-processor interface that allows scalar register values to be communicated along with custom instructions to the accelerator for control functions. So this SSCI feature is now available across the family.”

“We’ve also made a bunch of improvements to the memory system, which is very critical for AI applications. So we’ve made the configurations, the multi-core configurations more efficient in terms of how the memory hierarchy is organized. And we’ve also added even deeper latency tolerance, so we can handle full bandwidth out to memory that may be a hundred or 200 cycles away from the processor. And finally, we’ve added a lot of new instructions to help accelerate key pieces of AI algorithms. One example is, we’ve added a fully pipelined exponential functional unit that can provide very high throughput for exponential functions, which are a key component of many of the new activation functions being used in AI applications.”
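To see why exponential throughput matters, consider softmax, one of the most common exp-heavy operations in AI workloads: it evaluates `exp()` once per element, so a pipelined exponential unit directly raises activation throughput. The sketch below is illustrative only; it shows the math, not SiFive’s hardware implementation.

```python
import math

def softmax(logits):
    """Softmax over a list of scores.

    Each element requires one exp() call, so the throughput of the
    exponential unit bounds the throughput of this whole function.
    """
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
```

Other modern activations, such as GELU and sigmoid, lean on the same primitive, which is why a dedicated, fully pipelined exp unit pays off across many models.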

Open standards vs. fragmentation—profiles and a unified toolchain

“The open standard allows customers to shop among competitors. So in a given kind of socket, like a given kind of processor, there’ll be competition. So you can get to pick the best one. I think the other important aspect is there can be vendors who supply a range of different versions of RISC-V for different applications. You have a much broader set of offerings coming from different vendors attacking different use cases. SiFive, we have one of the broadest portfolios, everything from small microcontrollers all the way up to high performance application processors.”

“At RISC-V, the community has got together and built these things we call profiles, the ISA profiles. So RVA23 is the one we just ratified last October. And this is becoming very important in the RISC-V community as this is a standard all the different vendors of application processors have agreed to support. And so all the software vendors have now agreed to target this one… Now the whole SoC has a very uniform software environment… Better debuggers, better tracing facilities, better compilers, better everything, because of unified support. And SiFive… is giving the customer a standard RISC-V front end to which they can attach their custom engines. That helps support removing this SoC Balkanization that existed already.”

Edge strategy and real-world deployments

“In our Intelligence family, we provide a range of different sizes of the core, like different performance levels. Like I said, we just introduced this new very low-end member of the Intelligence family that has quite significant AI capability even though it’s a small core. But you can scale this up to our very largest XM member of the family, which is a large matrix acceleration unit and which can be arrayed, you can have many of these on a die… The second way we help is providing our cores as companion cores… then have our standard core on the front of it providing a standard RISC-V software environment for the rest of the SoC.”

“Some of the use cases we are seeing are in automobiles, so EVs. So in fact our intelligence cores will be on the road in a production EV next year, helping with the ADAS functions inside that EV… Another highlight application is NASA selected the first generation of our intelligence cores to form the basis of the high-performance space computer… This will now be the standard high performance computing platform for all space missions from NASA and that’s based on our first generation intelligence cores. So that’s really out on the edge, the far edge, right?”

James Maguire

An award-winning journalist, James has held top editorial roles in several leading technology publications, covering enterprise tech trends in cloud computing, AI, data analytics, cybersecurity and more. He regularly communicates with industry analysts and experts and has interviewed hundreds of technology executives. James is the Executive Director of TechVoices.