KServe Description
KServe is a standards-based model inference platform built on Kubernetes, designed for highly scalable use cases and trusted AI. It provides a performant, standardized inference protocol that works across ML frameworks, and supports modern serverless inference workloads with autoscaling, including scale-to-zero on GPU. It delivers simple, pluggable production ML serving, covering prediction, pre/post-processing, monitoring, and explainability, along with advanced deployments such as canary rollouts, experiments, ensembles, and transformers. For high scalability, density packing, and intelligent routing, KServe includes ModelMesh, which is designed for high-scale, high-density, and frequently changing model use cases. ModelMesh intelligently loads and unloads AI models to and from memory to strike a trade-off between responsiveness to users and computational footprint.
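Deployments are declared through KServe's InferenceService custom resource. The sketch below shows a minimal manifest for serving a scikit-learn model; the service name and storage URI are illustrative and would be replaced with your own model location (e.g. an S3, GCS, or PVC path).

```yaml
# Minimal KServe InferenceService (illustrative example).
# The modelFormat tells KServe which serving runtime to select;
# storageUri points at the serialized model artifacts.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      storageUri: gs://kfserving-examples/models/sklearn/1.0/model
```

Applying this manifest with `kubectl apply -f` creates a versioned inference endpoint; the same resource shape is where canary traffic splits, transformers, and autoscaling settings are attached.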