Hardware Acceleration for Serverless Computing!

Give your serverless deployments the power of hardware accelerators with vAccel


Key Features

Flexible execution

vAccel offers flexible execution by using software techniques to map acceleratable workloads to the relevant hardware functions.

Modular Design

vAccel includes a large number of backend implementations, allowing integration with any available acceleration framework through simple, intuitive glue plugins.

Security

vAccel shifts security concerns to the hardware, ensuring that consecutive runs on the same accelerator will not leak sensitive data.

Performance

vAccel’s overhead is negligible. For instance, the maximum overhead measured for an ML image inference workload compared to native execution is 5%.

Seamless integration

Any application can benefit from vAccel without much hassle! Exposing an acceleration function is easy: just link against the runtime system and provide the function prototype to the user.
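
To make this concrete, here is a minimal sketch of the pattern from the application's point of view. The function name, prototype, and library name below are illustrative assumptions, not the actual vAccel API; the runtime's documentation lists the real operations and their signatures.

    /* app.c -- hypothetical consumer of an acceleration function.
     * The application only sees the prototype; the implementation is
     * provided by the runtime library it links against. */
    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical acceleration function prototype handed to the user.
     * At run time, the runtime maps the call to an available backend. */
    int classify_image(const void *img, size_t img_len,
                       char *label, size_t label_len);

    int main(void)
    {
        unsigned char img[4] = { 0 };   /* placeholder for real image bytes */
        char label[128];

        if (classify_image(img, sizeof(img), label, sizeof(label)) != 0) {
            fprintf(stderr, "classification failed\n");
            return 1;
        }

        printf("label: %s\n", label);
        return 0;
    }

Building then amounts to compiling and linking against the runtime, for example cc app.c -o app -lvaccel (the library name here is assumed); the runtime picks a suitable plugin and hardware backend when the function is called.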

Predictability

vAccel delivers predictable performance by shifting the bulk of the execution work to the host system.

Design choices

We’ve built vAccel from the ground up.

We based vAccel’s design on what users in the Cloud and at the Edge need from a hardware acceleration framework. Users running their software stacks on shared infrastructure care about flexibility, interoperability, security, and performance. Infrastructure providers ensure secure execution through virtualization techniques, but still lag in flexibility and interoperability. True device sharing becomes essential in serverless setups, where spawned functions are short-lived and response latency and boot times must be minimal. Additionally, having to program the hardware themselves keeps users from moving to a different provider or vendor. Finally, vAccel removes software stack duplication to achieve resource efficiency and meet hardware constraints at the Edge.

  • 01- Simplicity

  • 02- Flexibility / interoperability

  • 03- Security

  • 04- Performance

  • 05- Resource Efficiency

How it works

We have gathered the necessary documentation to walk you through building, testing, and running vAccel-enabled workloads in a wide range of deployments (standalone, Docker, Kubernetes, etc.). Follow the link below to access the docs and start accelerating your ML workloads!

Browse our blog!

Browse through our blog for tutorials on trying out vAccel with AWS Firecracker, either locally or on a public cloud. Check out the ARMv8 section too: secure, isolated ML inference at the Edge!

CloudKernels BLOG

Contact Us

Contact Details

Feel free to drop us a note and sign up for our newsletter!

West One Peak, 15 Cavendish Street, Sheffield S3 7SR, UK
Phone: +44 7428318494
Email: vaccel@nubificus.co.uk