Ango Hub
Ango Hub is an all-in-one, quality-oriented data annotation platform for AI teams, available both on-premise and in the cloud. It allows AI teams and their data annotation workforces to annotate their data quickly and efficiently without compromising quality.
Ango Hub is the only data annotation platform built around quality. It offers features that enhance the quality of your annotations, including a centralized labeling system, a real-time issue system, review workflows, sample label libraries, and consensus among up to 30 annotators on the same asset.
Ango Hub is versatile as well. It supports all the data types your team might require, including image, audio, text, and native PDF, and offers nearly twenty labeling tools for annotating data. Some of these tools are unique to Ango Hub, such as rotated bounding boxes, unlimited conditional questions, label relations, and table-based labels for more complicated labeling tasks.
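Ango Hub's consensus mechanism is proprietary, but the underlying idea, aggregating labels from multiple annotators on the same asset, can be sketched as a simple majority vote with an agreement score. The function below is purely illustrative and is not part of any Ango Hub API:

```python
from collections import Counter

def consensus_label(annotations):
    """Return the majority label and its agreement ratio among
    the per-annotator labels collected for one asset."""
    if not annotations:
        raise ValueError("no annotations for this asset")
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(annotations)

# Example: five annotators label the same asset.
label, agreement = consensus_label(["cat", "cat", "dog", "cat", "cat"])
print(label, agreement)  # → cat 0.8
```

In practice a platform would flag assets whose agreement falls below a threshold for review, which is the role the review workflows and issue system play here.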
Learn more
Vertex AI
Vertex AI's fully managed ML tools let you build, deploy, and scale machine-learning (ML) models quickly, for any use case.
Vertex AI Workbench is natively integrated with BigQuery, Dataproc, and Spark. You can use BigQuery ML to create and execute machine-learning models in BigQuery using standard SQL queries, or export datasets directly from BigQuery into Vertex AI Workbench to run your models there. Vertex Data Labeling can be used to create highly accurate labels for your data collection.
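To make the BigQuery ML workflow concrete, here is a minimal sketch of training a model with a standard SQL CREATE MODEL statement. The dataset, table, and column names are placeholders, not real resources:

```python
# BigQuery ML trains models with standard SQL. The resource names below
# (my_dataset.churn_model, my_dataset.customer_features) are placeholders.
create_model_sql = """
CREATE OR REPLACE MODEL `my_dataset.churn_model`
OPTIONS (model_type = 'LOGISTIC_REG', input_label_cols = ['churned']) AS
SELECT plan_type, monthly_spend, support_tickets, churned
FROM `my_dataset.customer_features`
"""

# With Google Cloud credentials configured, the statement runs through
# the official client library:
# from google.cloud import bigquery
# client = bigquery.Client()
# client.query(create_model_sql).result()  # blocks until training finishes
print(create_model_sql.strip().splitlines()[0])
```

Once trained, the model can be queried in place with ML.PREDICT, or the underlying dataset can be exported to Vertex AI Workbench as the paragraph above describes.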
Vertex AI Agent Builder empowers developers to design and deploy advanced generative AI applications for enterprise use. It supports both no-code and code-driven development, enabling users to create AI agents through natural language prompts or by integrating with frameworks like LangChain and LlamaIndex.
Learn more
Molmo
Molmo is a family of state-of-the-art multimodal AI models developed by the Allen Institute for AI (Ai2). The models are designed to close the gap between open-source and proprietary systems, performing competitively across numerous academic benchmarks and human evaluations. Unlike many existing multimodal systems that depend on synthetic data distilled from proprietary models, Molmo is trained exclusively on openly available data, which promotes transparency and reproducibility in AI research. A key ingredient in Molmo's development is PixMo, a dataset of intricately detailed image captions gathered from human annotators using speech-based descriptions, along with 2D pointing data that enables the models to respond to inquiries with both natural language and non-verbal signals. This allows Molmo to engage with its surroundings in a more grounded manner, for example by pointing to specific objects within images, broadening its potential applications in fields such as robotics, augmented reality, and interactive user interfaces. These advances also set a benchmark for future multimodal research and application development.
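To show what consuming 2D pointing output might look like, here is a small sketch that converts point annotations in a model's text answer into pixel coordinates. The XML-like tag format and percent-based coordinates below are assumptions for illustration, so check Molmo's own documentation for the exact output format:

```python
import re

# Assumed answer format: <point x="25.0" y="50.0" alt="mug">mug</point>,
# with x and y given as percentages of the image dimensions.
POINT_RE = re.compile(r'<point\s+x="([\d.]+)"\s+y="([\d.]+)"[^>]*>(.*?)</point>')

def parse_points(text, width, height):
    """Extract (label, x_px, y_px) tuples from point tags in model output."""
    points = []
    for x, y, label in POINT_RE.findall(text):
        points.append((label, float(x) / 100 * width, float(y) / 100 * height))
    return points

answer = 'The mug is here: <point x="25.0" y="50.0" alt="mug">mug</point>'
print(parse_points(answer, width=640, height=480))
# → [('mug', 160.0, 240.0)]
```

A robotics or AR application would then pass these pixel coordinates to a grasp planner or overlay renderer, which is the kind of non-verbal grounding the paragraph above describes.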
Learn more
Tinker
Tinker is a training API for researchers and developers that provides full control over model fine-tuning while abstracting away the complexities of infrastructure management. It exposes essential primitives that let users build bespoke training loops, supervision techniques, and reinforcement learning workflows. It currently supports LoRA fine-tuning on open-weight models from the Llama and Qwen families, covering a range of model sizes from smaller variants to large mixture-of-experts configurations. Users write Python scripts to manage data, loss functions, and algorithmic logic, while Tinker autonomously handles scheduling, resource allocation, distributed training, and recovery from failures. Users can download model weights at various checkpoints without the burden of managing the computational environment. Delivered as a managed service, Tinker runs training jobs on Thinking Machines' proprietary GPU infrastructure, freeing users from cluster orchestration so they can focus on building and optimizing their models.
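The division of labor Tinker describes, where the user owns the data, loss, and update rule while the service owns scheduling and compute, can be illustrated with a toy stdlib-only training loop. None of the names below are Tinker primitives; they are hypothetical stand-ins for the parts a user would write themselves:

```python
# Toy one-parameter model y = w * x, trained with a user-defined loss,
# gradient, and update rule. In Tinker's model, everything here would be
# user code, while the service handled distribution and fault recovery.

def grad(w, batch):
    # Gradient of mean squared error over one batch of (x, y) pairs.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def train(batches, w=0.0, lr=0.1):
    checkpoints = []
    for batch in batches:
        w -= lr * grad(w, batch)   # user-defined update rule
        checkpoints.append(w)      # analogous to downloadable checkpoints
    return w, checkpoints

data = [[(1.0, 2.0), (2.0, 4.0)]] * 50   # samples from y = 2x
w, checkpoints = train(data)
print(round(w, 3))  # → 2.0, the loop recovers the true slope
```

The checkpoint list mirrors Tinker's ability to download weights mid-run; in the real service those checkpoints would be full LoRA adapter weights rather than a single scalar.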
Learn more