Hardware Acceleration for Serverless Computing!

Give your serverless deployments the power of hardware accelerators with vAccel

Learn More

About Us

We're Engineers

We explore the systems software stack, from the ground up, hacking VMMs and boot code up to applications and unikernels.

We're Professionals

Backed by a young SME, we offer professional services to entities around the world, tackling complex research & engineering problems.

We have vision

Facilitating the deployment of workloads is what makes us tick.

Sign-up now for our early beta!

Be one of the first few to try out vAccel on a real Serverless deployment! Leave us a note about your project and we’ll be in touch with more details on how to try it out yourself!

Leave us a note, to let us know you're interested!

Key Features

Flexible execution

vAccel offers flexible execution by employing software techniques to map workloads that can be accelerated onto the relevant hardware functions.

Modular Design

vAccel encompasses a large number of backend implementations, allowing for integration with any acceleration framework available using simple, intuitive glue plugins.


Secure execution

vAccel shifts security concerns to the hardware, ensuring that consecutive runs on the same accelerator will not leak sensitive data.


Negligible overhead

vAccel’s overhead is negligible. For instance, the maximum overhead measured for an ML image inference workload, compared to native execution, is 5%.

Seamless integration

Any application can benefit from vAccel without much hassle! Exposing an acceleration function is super easy: just link against the runtime system and provide the function prototype to the user.


Predictable performance

vAccel exhibits predictable performance by shifting the heavy lifting of execution to the host system.

Design choices

We’ve built vAccel from the ground up.

We based vAccel’s design on what users in the Cloud and at the Edge need from a hardware acceleration framework. Users running their software stack on shared infrastructure care about flexibility, interoperability, security, and performance. Infrastructure providers ensure secure execution through virtualization techniques, but still lag behind in flexibility and interoperability. True device sharing becomes important in serverless setups, where spawned functions are short-lived and response latency and boot times have to be minimal. Additionally, having to program the hardware themselves prevents users from moving to a different provider or vendor. Finally, vAccel removes software stack duplication to achieve resource efficiency and meet hardware constraints at the Edge.

  • 01- Simplicity

  • 02- Flexibility / interoperability

  • 03- Security

  • 04- Performance

  • 05- Resource Efficiency

How it works


  • (0): A user calls a function prototype:
     image_classify(input) 
  • (a): vAccelRT determines the available backends and chooses the most sensible option (use the virtio backend, use a physical device, etc.).
  • (b): If virtio-accel is chosen, the call is forwarded to the virtio-accel backend.
  • (c): virtio-accel calls the host-side vAccelRT, which, in turn, determines the relevant acceleration framework.
  • (d): vAccelRT issues the call to the respective framework. The latter returns results to vAccelRT, which forwards the output to the caller.

Browse our blog!

Browse through our blog to find interesting tutorials about how to try out vAccel on AWS Firecracker locally, or on a public cloud. Check out the ARMv8 section too! Secure and isolated ML inference at the Edge sounds really intriguing!

CloudKernels BLOG

Contact Us

Contact Details

Feel free to drop us a note, and sign-up for our newsletter!

West One Peak, 15 Cavendish St., Sheffield S3 7SR, UK
Phone: +44 7428318494
Email: vaccel@nubificus.co.uk