LM-Kit.NET
LM-Kit.NET is an enterprise-grade toolkit designed for seamlessly integrating generative AI into your .NET applications, fully supporting Windows, Linux, and macOS. Empower your C# and VB.NET projects with a flexible platform that simplifies the creation and orchestration of dynamic AI agents.
Leverage efficient Small Language Models for on‑device inference, reducing computational load, minimizing latency, and enhancing security by processing data locally. Experience the power of Retrieval‑Augmented Generation (RAG) to boost accuracy and relevance, while advanced AI agents simplify complex workflows and accelerate development.
Native SDKs ensure smooth integration and high performance across diverse platforms. With robust support for custom AI agent development and multi‑agent orchestration, LM‑Kit.NET streamlines prototyping, deployment, and scaling, enabling you to build smarter, faster, and more secure solutions trusted by professionals worldwide.
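To make the Retrieval‑Augmented Generation pattern mentioned above concrete, here is a minimal, language‑agnostic sketch in Python. It is not LM‑Kit.NET's API; the `embed` function and the sample documents are purely illustrative stand‑ins for whatever local embedding model and corpus you would actually use.

```python
import numpy as np

# Hypothetical embedding function: in a real system this would call a local
# embedding or Small Language Model; here it only illustrates the retrieval step.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

documents = [
    "On-device inference keeps data local and reduces latency.",
    "Retrieval-Augmented Generation grounds answers in your own documents.",
    "Multi-agent orchestration coordinates several specialized agents.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    # Cosine similarity between the query and every document vector.
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "How does RAG improve accuracy?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# The assembled prompt would then be passed to a locally hosted model for generation.
```

The design point is simply that retrieval narrows the model's input to relevant, locally stored text before generation, which is what gives RAG its accuracy and privacy benefits.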
Learn more
Vertex AI
Fully managed machine-learning (ML) tools let you build, deploy, and scale ML models quickly, for any use case.
Vertex AI Workbench is natively integrated with BigQuery, Dataproc, and Spark. You can create and execute machine-learning models in BigQuery using standard SQL queries and spreadsheets, or export datasets directly from BigQuery into Vertex AI Workbench and run your models there. Vertex Data Labeling can be used to create highly accurate labels for your data.
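As a rough illustration of the SQL-driven path, the sketch below creates and queries a BigQuery ML model from Python using the `google-cloud-bigquery` client. The project, dataset, table, and column names are placeholders, and the logistic-regression task is just an example of what BigQuery ML supports.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Train a simple logistic-regression model directly in BigQuery with SQL.
# `my_project.my_dataset.*` and the column names are placeholders.
create_model_sql = """
CREATE OR REPLACE MODEL `my_project.my_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT tenure_months, monthly_spend, churned
FROM `my_project.my_dataset.customers`
"""
client.query(create_model_sql).result()  # blocks until training completes

# Run batch predictions with ML.PREDICT and pull the results into Python.
predict_sql = """
SELECT customer_id, predicted_churned
FROM ML.PREDICT(MODEL `my_project.my_dataset.churn_model`,
                (SELECT * FROM `my_project.my_dataset.customers`))
"""
for row in client.query(predict_sql).result():
    print(row.customer_id, row.predicted_churned)
```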
Vertex AI Agent Builder empowers developers to design and deploy advanced generative AI applications for enterprise use. It supports both no-code and code-driven development, enabling users to create AI agents through natural language prompts or by integrating with frameworks like LangChain and LlamaIndex.
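For the code-driven route, a minimal sketch with the general Vertex AI Python SDK looks like the following. This is not the Agent Builder no-code flow itself; the project ID, region, and model name are placeholders, and any generative model available in your Vertex AI project could be substituted.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Project, location, and model name are placeholders for illustration only.
vertexai.init(project="my-project", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Summarize our Q3 support tickets and list the three most common issues."
)
print(response.text)
```

Frameworks such as LangChain and LlamaIndex wrap the same underlying endpoints, so an agent built with them can call these models as tools within a larger workflow.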
Learn more
RedPajama
Foundation models, including GPT-4, have significantly accelerated advancements in artificial intelligence, yet the most advanced models remain either proprietary or only partially accessible. In response to this challenge, the RedPajama initiative aims to develop a collection of top-tier, fully open-source models. We are thrilled to announce that we have successfully completed the initial phase of this endeavor: recreating the LLaMA training dataset, which contains over 1.2 trillion tokens.
Currently, many of the leading foundation models are locked behind commercial APIs, restricting opportunities for research, customization, and application with sensitive information. The development of fully open-source models represents a potential solution to these limitations, provided that the open-source community can bridge the gap in quality between open and closed models. Recent advancements have shown promising progress in this area, suggesting that the AI field is experiencing a transformative period akin to the emergence of Linux. The success of Stable Diffusion serves as a testament to the fact that open-source alternatives can not only match the quality of commercial products like DALL-E but also inspire remarkable creativity through the collaborative efforts of diverse communities. By fostering an open-source ecosystem, we can unlock new possibilities for innovation and ensure broader access to cutting-edge AI technology.
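Because the recreated 1.2-trillion-token dataset is openly released, it can be pulled down like any other public corpus. The sketch below uses the Hugging Face `datasets` library; the dataset identifier and field names are our assumption about how the release is published on the Hub, and streaming is used because the full corpus is far too large to download casually.

```python
from datasets import load_dataset

# Dataset ID assumed from the public RedPajama release on the Hugging Face Hub;
# streaming avoids downloading the full corpus.
dataset = load_dataset(
    "togethercomputer/RedPajama-Data-1T-Sample",
    split="train",
    streaming=True,
)

for i, example in enumerate(dataset):
    # Field name "text" is assumed; records also carry source metadata.
    print(example["text"][:200])
    if i == 2:
        break
```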
Learn more
MPT-7B
We are excited to present MPT-7B, the newest addition to the MosaicML Foundation Series. This transformer model has been meticulously trained from the ground up using 1 trillion tokens of diverse text and code. It is open-source and ready for commercial applications, delivering performance on par with LLaMA-7B. The training process took 9.5 days on the MosaicML platform, requiring no human input and incurring an approximate cost of $200,000.
With MPT-7B, you can now train, fine-tune, and launch your own customized MPT models, whether you choose to begin with one of our provided checkpoints or start anew. To provide additional options, we are also introducing three fine-tuned variants alongside the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+, the latter boasting an impressive context length of 65,000 tokens, allowing for extensive content generation. These advancements open up new possibilities for developers and researchers looking to leverage the power of transformer models in their projects.
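For a sense of how the released checkpoints are used, here is a minimal sketch that loads one of the fine-tuned variants with Hugging Face Transformers. The model ID is assumed from MosaicML's public releases on the Hub, and `trust_remote_code=True` is needed because MPT ships a custom architecture.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model ID assumed from the public MosaicML release; swap in mpt-7b,
# mpt-7b-chat, or mpt-7b-storywriter as needed.
model_id = "mosaicml/mpt-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Explain retrieval-augmented generation in two sentences."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```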
Learn more