VoltStack is a SaaS-based offering to deploy, secure, and operate a fleet of applications across distributed infrastructure in multi-cloud or edge environments. It scales to a large number of clusters and locations with centralized orchestration, observability, and operations, reducing the complexity of managing a fleet of distributed clusters.
Using a distributed control plane running in our global infrastructure, VoltStack delivers a logically centralized cloud that can be managed using industry-standard Kubernetes APIs. This control plane removes the overhead of many individually-managed Kubernetes clusters and allows the customer to automate application deployment, scaling, security, and operations across the entire deployment as a “unified cloud”. A large and arbitrary number of managed clusters can be logically grouped into a virtual site with a single Kubernetes management interface.
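The virtual-site idea can be pictured with a short sketch. The following Python snippet (all names are illustrative, not Volterra's actual API) shows how label selectors, the same convention Kubernetes uses, could group an arbitrary set of managed clusters into one virtual site that operations then fan out to:

```python
# Hypothetical sketch: grouping managed clusters into a "virtual site"
# by label selectors, mirroring the Kubernetes label/selector convention.
# All names here are illustrative, not Volterra's actual API.

def matches(labels: dict, selector: dict) -> bool:
    """A cluster matches when every selector key/value appears in its labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def virtual_site(clusters: list[dict], selector: dict) -> list[str]:
    """Return the names of clusters that belong to the virtual site."""
    return [c["name"] for c in clusters if matches(c["labels"], selector)]

clusters = [
    {"name": "aws-us-east-1",  "labels": {"provider": "aws",   "tier": "prod"}},
    {"name": "azure-west-eu",  "labels": {"provider": "azure", "tier": "prod"}},
    {"name": "edge-store-042", "labels": {"provider": "edge",  "tier": "dev"}},
]

# One selector defines the group; management operations then apply to
# every member through a single interface.
print(virtual_site(clusters, {"tier": "prod"}))
# → ['aws-us-east-1', 'azure-west-eu']
```

The key point is that the grouping is declarative: adding a cluster with matching labels automatically brings it under the virtual site's single Kubernetes management interface.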
Our SaaS-based service also reduces the complexity of managing and operating VoltStack services deployed within a single cloud, across multiple cloud sites, or at edge sites, as customers do not have to perform lifecycle management of clusters and their individual control and management planes. Since identity, access management, policy, and configuration are centralized, any change is reflected across the entire deployment. All logging and metrics are centrally available for observability, with API-based integrations to external tools like Datadog or Splunk from our centralized SaaS portal.
There are two modes of consuming VoltStack services:
Customer sites (cloud or edge) - Volterra nodes can be deployed at the edge (on commodity hardware or our purpose-built hardware) or in any virtual machine in public or private cloud locations to run containerized or virtualized workloads. Multiple nodes automatically cluster to scale out the delivery of compute, storage, network, and security services within a single site. Multiple sites become part of the “unified cloud”.
Volterra global infrastructure - the VoltStack application runtime is available within our global network at every point of presence. In this case, our global infrastructure is used to deploy the application workload, and you can select all or a subset of our points of presence where the workload should be deployed. In addition, by configuring network and security services accordingly, you can expose the application across the entire global network while deploying the workload to a smaller number of sites, then scale the deployment as needed.
VoltMesh functionality is integrated with VoltStack to deliver all the connectivity and security services for workloads within the cluster and to connect these clusters using our global network backbone. VoltStack is designed to make it extremely easy for anyone to deploy, scale, secure, and operate their application workloads in the cloud, network, or edge without worrying about scalability and operations of a modern and hybrid environment.
There are three reasons why we believe that you should consider using VoltStack for your next deployment in cloud or edge:
Fleet management to simplify operations - Managing multiple clusters across heterogeneous environments is a burden on IT and DevOps teams, as they have to deal with the complexity of resource management, service consistency, change management, and API integrations. They would prefer a more modern SaaS-based platform that centralizes orchestration, policy, security, and lifecycle management of application workloads and infrastructure across a distributed fleet of edge and cloud sites. Using centralized configuration management you get:
Zero-touch deployment, automated clustering, upgrades, and patches for infrastructure nodes
Single source of truth for configuration, policy-based control, and lifecycle management
Simplified deployment, scaling, and rollback of workloads across a group of clusters using virtual site concept
Unified policy and configuration model for applying changes across a group of clusters
Centralized and consolidated logging and monitoring across all the clusters with APIs to integrate with external tools
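One way to picture the upgrade-and-patch workflow above: a single desired configuration version (the source of truth) rolled out to the fleet in small batches, so a bad change can be halted before it reaches every site. This is an illustrative sketch, not Volterra's implementation; the version string and site names are assumptions.

```python
# Illustrative sketch (not Volterra's implementation): one desired config
# version is pushed to a fleet of sites in small, ordered batches.

DESIRED_VERSION = "v1.4.2"  # hypothetical config/software version

def batches(sites: list[str], batch_size: int = 2):
    """Split the fleet into ordered upgrade batches."""
    for i in range(0, len(sites), batch_size):
        yield sites[i:i + batch_size]

fleet = ["edge-01", "edge-02", "edge-03", "edge-04", "edge-05"]
for batch in batches(fleet):
    # In a real rollout each batch would be health-checked before
    # the next batch proceeds, enabling an early halt or rollback.
    print("upgrading", batch, "to", DESIRED_VERSION)
```

Batching is what makes a fleet-wide operation safe: only a fraction of sites are ever mid-upgrade at once.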
Applications in the network or edge - As the internet evolves from primarily downstream consumption of content to upstream generation of data because of highly interactive applications and machine-to-machine traffic, there is a growing need to improve latency and performance by not only using techniques like TLS termination but also moving critical portions of the applications and API processing very close to the source of data. Our solution gives you the option to run applications directly at the edge location or closer to the edge in the network.
Cluster scalability - Our purpose-built, distributed control plane provides the ability to scale to a very large number of distributed sites with multiple application services and dynamic policies. This is significantly different from other solutions that simply build a management layer on top of the existing Kubernetes control plane. Our distributed control plane with fleet management and true multi-tenancy removes the need to deploy and operate many Kubernetes control planes while preserving the use of the Kubernetes API.
In addition, with VoltMesh functionality integrated with VoltStack, you get a truly distributed and multi-cluster service mesh with all the networking and security features.
VoltStack delivers a complete range of services to automate infrastructure and application deployment, scaling, security, and lifecycle management across a large number of distributed sites. A combination of services can be centrally deployed and operated using the Volterra Console and be seamlessly enabled across your cloud or edge site using Volterra Nodes or within our global infrastructure.
The capabilities of VoltStack are grouped into two categories - Infrastructure Services and Application Services. The goal of infrastructure services is to create a homogeneous, abstracted layer across different types of infrastructure so that application services are not exposed to the variances of the underlying infrastructure. Since we have already covered VoltMesh capabilities, please refer to key VoltMesh services for details on those services.
Optimized Operating System - A consistent and efficient operating system is required to securely run Volterra system microservices as well as customer workloads. This OS can be deployed in the cloud or at the edge and, as a result, needs to support devices with a low memory footprint. This capability is the underpinning of the Volterra node, which may be deployed as a VM in the cloud or on physical hardware at the edge.
Clustering - seamlessly clustering multiple Volterra nodes within a single site provides the ability to scale compute and storage resources and request them on demand. Application workloads or Volterra services (e.g. VoltMesh and VoltStack) can be easily auto-scaled as soon as a new node is added to the cluster. Using the underlying auto-scaling capabilities provided by the cloud provider, we can scale the number of Volterra nodes within a site depending on demand and configuration constraints.
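The scaling decision described above can be reduced to a few lines. This is a minimal sketch under assumed parameter names (CPU demand, CPU per node, and min/max constraints are illustrative, not Volterra's actual configuration model):

```python
# Minimal sketch of a demand-driven scaling decision, clamped by
# configured constraints. Parameter names are assumptions for illustration.
import math

def desired_nodes(cpu_demand: float, cpu_per_node: float,
                  min_nodes: int, max_nodes: int) -> int:
    """Pick a node count that covers demand, within min/max constraints."""
    needed = math.ceil(cpu_demand / cpu_per_node)
    return max(min_nodes, min(max_nodes, needed))

# 37 cores of demand on 8-core nodes → 5 nodes, within the [3, 10] bounds.
print(desired_nodes(cpu_demand=37.0, cpu_per_node=8.0, min_nodes=3, max_nodes=10))  # → 5
```

The clamp is the important part: configuration constraints bound what demand alone would dictate.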
Managed Kubernetes - all our infrastructure services are built using Kubernetes with enhancements for multi-tenancy, security, and the ability to run virtual machines alongside Docker containers. Our changes also allow us to mix critical and best-effort workloads on this platform, with the ability to progressively roll out changes and upgrade infrastructure services with minimal disruption to customer workloads. Any Volterra site consists of one or more Volterra nodes that always run, at a very minimum, three services - the optimized operating system, clustering, and managed Kubernetes.
Distributed Storage - container-native, software-defined storage, or the capability to attach cloud provider storage solutions like EBS using Kubernetes PVCs, gives the ability to scale storage across Volterra nodes within the cluster. This allows you to run stateful applications without worrying about managing distributed storage services. Additional capabilities on the roadmap (snapshots, scheduled backups to a cloud-based object store, and storage encryption) provide services that are typically required for enterprise production deployments and for securing distributed data.
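Because storage is attached through standard Kubernetes PVCs, a stateful workload requests it the same way it would on any cluster. The sketch below builds a standard PersistentVolumeClaim manifest as a plain Python dict; the claim name, size, and storage class name are illustrative assumptions:

```python
# Sketch: a standard Kubernetes PersistentVolumeClaim manifest, built as a
# plain dict. Name, size, and storage class are illustrative assumptions.

def pvc_manifest(name: str, size: str, storage_class: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,  # e.g. an EBS-backed class
            "resources": {"requests": {"storage": size}},
        },
    }

claim = pvc_manifest("orders-db", "20Gi", "gp2")
print(claim["spec"]["resources"]["requests"]["storage"])  # → 20Gi
```

Whether the platform satisfies the claim from container-native storage or a cloud provider volume is hidden from the application.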
Distributed Infrastructure Management - This gives you the capability to manage infrastructure services deployed across many locations as a fleet. You can group all or a subset of individual locations as a fleet and then perform operations on this fleet object - zero-touch deployment, upgrading and patching the operating system or infrastructure software, applying configuration changes, and deploying new services across the entire fleet or a subset of it. This significantly simplifies policy and configuration management for a large fleet of infrastructure components.
Continuous Deployment and Verification - deployment of applications in a cluster is typically done using continuous deployment tools like Spinnaker, Harness, etc. Using a logical grouping of distributed sites with a single Kubernetes interface across these clusters, the same continuous deployment tools can continue to be used while reaping the benefits of rolling out upgrades and changes to many locations. In addition, the system collects logs and metrics from all the clusters and continuously performs anomaly detection on them. This information can be used as input to continuous deployment systems for roll-backs, creating alerts, or integrating with external continuous verification tools, as they need a rich data source to generate meaningful insights.
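A hedged illustration of that verification loop: a simple z-score check over a site's recent error-rate samples flags an anomaly, which a CD system could consume as a rollback signal. The statistic, threshold, and sample values are illustrative, not Volterra's actual detector:

```python
# Illustrative anomaly check (not Volterra's actual detector): flag a
# sample that deviates more than `threshold` standard deviations from
# the recent baseline.
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """True if `latest` is an outlier relative to `history`."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

error_rates = [0.010, 0.012, 0.011, 0.009, 0.010]  # healthy per-site baseline
print(is_anomalous(error_rates, 0.011))  # steady state → False
print(is_anomalous(error_rates, 0.250))  # post-deploy spike → True
```

In practice the detector runs centrally over every cluster's metrics, so one signal format feeds rollbacks, alerts, and external verification tools alike.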
Identity and Secrets Management - Uniform identity is essential to authentication and authorization in a distributed system. This becomes challenging when different systems are used to create, assign, and manage identity across different providers. Without an identity that is accepted across different systems, authorization and policy controls cannot be implemented reliably. As a result, VoltStack gives every app instance its own PKI identity that is issued and maintained through the entire lifecycle of the application. This identity is used not only for RBAC and network policies but also for granting access to secrets and keys. In addition, our novel and cryptographically secure double-blinding system stores customer secrets without risk of losing valuable information in a breach, as they are never stored in the clear. It is also possible for customers to integrate this solution with existing enterprise products like HashiCorp Vault or CyberArk.
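Volterra's double-blinding scheme is proprietary and not reproduced here. As a generic illustration of the underlying principle, that a stored secret should be useless on its own, the classic two-share XOR split below produces shares that individually reveal nothing; only combining both recovers the secret:

```python
# Generic illustration only: classic two-share XOR secret splitting.
# This is NOT Volterra's double-blinding scheme; it just demonstrates the
# principle that no single stored artifact contains the secret in the clear.
import secrets as rnd

def split(secret: bytes) -> tuple[bytes, bytes]:
    share1 = rnd.token_bytes(len(secret))                  # uniformly random mask
    share2 = bytes(a ^ b for a, b in zip(secret, share1))  # secret XOR mask
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = split(b"db-password")
# Either share alone is indistinguishable from random noise;
# together they reconstruct the original secret exactly.
assert combine(s1, s2) == b"db-password"
```

An attacker who compromises one store learns nothing, which is the property the text above describes for breached secret storage.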
Container Security - isolation and protection of services against malicious and/or erroneous conditions need to be handled by any application management system. We allow customers to maintain their own registries that periodically perform vulnerability scans to ensure that application software is compliant with their requirements. In addition, the shared host needs to be protected from container vulnerabilities by using a VM-like isolation boundary; we are working on providing this capability as part of our roadmap.
Distributed Application Management - This gives the capability to manage application services deployed across many locations as a logical group. You can group all or a subset of locations as a virtual site and then perform operations and policy changes on these virtual sites. This includes operations like workload deployment, scaling changes, application upgrades, rollbacks, and connectivity and security policy changes across all the locations within the virtual site. This significantly simplifies configuration and policy management for large-scale application deployment and operations.
Observability - detailed metrics, logs, requests, and notifications are centrally collected from every site to provide rich observability across application, infrastructure, network, and security services throughout the entire system. These metrics provide a holistic view of application health, service connectivity, API requests, and infrastructure resource consumption. This makes it easy to debug and trace issues across the system, while the centralized SaaS-based service can be used to integrate logs and metrics with external performance management systems like Datadog, Splunk, etc.
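The central collection step can be sketched as a simple roll-up of per-site samples into one fleet-wide view; this consolidated form is what gets exported to tools like Datadog or Splunk. Metric and site names below are illustrative assumptions:

```python
# Illustrative sketch: per-site metric samples rolled up into one
# fleet-wide view. Metric and site names are assumptions.
from collections import defaultdict

def rollup(samples: list[dict]) -> dict:
    """Average each metric across all reporting sites."""
    by_metric = defaultdict(list)
    for s in samples:
        by_metric[s["metric"]].append(s["value"])
    return {m: sum(v) / len(v) for m, v in by_metric.items()}

samples = [
    {"site": "edge-01", "metric": "cpu_util", "value": 0.5},
    {"site": "edge-02", "metric": "cpu_util", "value": 0.7},
    {"site": "edge-01", "metric": "rps",      "value": 1200},
]
print(rollup(samples))
```

A real pipeline would keep per-site breakdowns and time windows as well, but the principle is the same: one query surface over data from every cluster.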
Volterra services in private, public, or edge cloud sites are consumed by deploying one or more “Volterra Nodes”. Depending on the services configured (e.g. VoltMesh or VoltStack), the appropriate software capabilities are enabled on the Volterra Node. There is no need for the customer to deploy Volterra Nodes within our global infrastructure, as all the VoltMesh and VoltStack services are already available in our multi-tenant infrastructure.
These nodes are software appliances that can be deployed in a VM or on bare metal in the cloud or at the edge, or come pre-integrated in our Volterra hardware for the edge. Volterra Nodes are always under the management of our SaaS service and can be deployed in the cloud by downloading directly from a cloud marketplace, downloading the software image from our portal, or using our portal to deploy automatically in the cloud.
Multiple nodes can cluster (within a site/location) to provide additional processing capability, and zero-touch provisioning securely onboards each Volterra Node. At a minimum, the Volterra Node comes bundled with the following features from VoltStack Infrastructure services - Optimized Operating System, Clustering, and Distributed Infrastructure Management.
VoltStack has been built in such a way that it can be deployed in many different ways to solve different use-cases:
Edge Application Management - for distributed deployment across multiple edge sites. Use Volterra SaaS (or Volterra hardware) to deploy Volterra nodes within each site; they will automatically and redundantly connect to the global backbone to create a “logical cloud”. Enable VoltStack features for a fully functional and distributed cloud that can be managed using our distributed application management service, which provides Kubernetes APIs with additional capabilities like enterprise-grade security, centralized observability, uniform identity, distributed secrets and key management, and a globally distributed service mesh across these sites and the back-end running in public or private cloud. As you transition from development to test and production, different teams can easily access the Volterra console to add security controls and networking policies to ensure compliance without affecting developer and DevOps workflows.
Multi-Cloud Application Management - for deployment of application clusters across one or more cloud regions and cloud providers. Use Volterra SaaS to deploy one or more Volterra nodes (cluster) within each location and enable VoltStack features on each of these clusters for a fully functional and distributed cloud. These clusters can be managed using our distributed application management service that provides Kubernetes APIs with additional capabilities like enterprise-grade security, centralized observability, uniform identity across cloud providers, unified secrets + key management, and rich networking services. In addition, VoltMesh will provide a globally distributed service mesh across these clusters for cross-cluster routing, VPNs, service discovery, health checks, API routing, application security, unified policy, and observability.