Amazon Unveils Elastic Inference, FSx for Windows File Server, Inferentia, Self-driving Racing League DeepRacer, SageMaker Ground Truth, and Outposts
Amazon Web Services announced a slew of new or updated offerings at its cloud-computing conference in Las Vegas, seeking to maintain its lead in the market for internet-based computing. Following is a rundown.
Amazon Elastic Inference is a new service that lets customers attach GPU-powered inference acceleration to any Amazon EC2 instance and reduces deep learning costs by up to 75 percent. From a report: "What we see typically is that the average utilization of these P3 instances' GPUs is about 10 to 30 percent, which is pretty wasteful. With Elastic Inference, you don't have to waste all that cost and all that GPU," AWS chief executive Andy Jassy said onstage at the AWS re:Invent conference earlier today. "[Amazon Elastic Inference] is a pretty significant game changer in being able to run inference much more cost-effectively."
While the majority of workloads in the cloud are Linux-based, Jassy said he is well aware that Windows is still significant, and as a result his company launched Amazon FSx for Windows File Server, a new fully managed Windows file system built on native Windows file servers. From a report: "What we were hoping to do was make this Windows file system work as part of EFS -- it would have been much easier for us to layer on another file system ... because it's much easier if you're trying to build a business at scale," he explained. However, he said customers wanted a native Windows file system and they "weren't being flexible." "So we changed our approach," he continued.
Inferentia is the company's own dedicated machine learning chip. From a report: "Inferentia will be a very high-throughput, low-latency, sustained-performance, very cost-effective processor," Jassy explained during the announcement. Holger Mueller, an analyst with Constellation Research, says that while Amazon is far behind, this is a good step for the company as businesses try to differentiate their machine learning approaches in the future. Inferentia supports popular data types such as INT8 and FP16, along with mixed precision. What's more, it supports multiple machine learning frameworks, including TensorFlow, Caffe2 and ONNX.
TechCrunch writes about SageMaker Ground Truth: You can't build a good machine learning model without good training data. But building those training sets is hard, often manual work that involves labeling thousands and thousands of images, for example. With SageMaker, AWS has been working on a service that makes building machine learning models a lot easier. Until today, though, that labeling task was still up to the user. Now the company is launching SageMaker Ground Truth, a training-set labeling service. Using Ground Truth, developers can point the service at the storage buckets that hold the data and let the service label it automatically. What's nifty here is that you can either set a confidence level for the fully automatic service or send the data to human labelers (a minimal API sketch follows the DeepRacer item below).
GeekWire writes about the self-driving racing league and DeepRacer: Amazon Web Services chief and big sports fan Andy Jassy on Wednesday in Las Vegas unveiled a first-of-its-kind global autonomous racing league, the AWS DeepRacer League. The league is built around AWS DeepRacer, a 1/18th-scale radio-controlled, self-driving four-wheel race car designed to help developers learn about reinforcement learning, a type of machine learning supported in Amazon SageMaker. The car features an Intel Atom processor, a 4-megapixel camera with 1080p resolution, multiple USB ports and a two-hour battery.
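The Ground Truth workflow described above (point the service at a storage bucket, let it label automatically, fall back to human workers) corresponds to a single SageMaker API call. Below is a minimal sketch using Python and boto3; every bucket, role, workteam and Lambda/algorithm ARN is a placeholder invented for illustration, not a value from the article, and the real AWS-provided ARNs are region-specific.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_labeling_job(
    LabelingJobName="demo-image-labels",
    LabelAttributeName="label",
    RoleArn="arn:aws:iam::111122223333:role/GroundTruthDemoRole",  # placeholder role
    InputConfig={
        "DataSource": {
            "S3DataSource": {
                # Manifest listing the objects in the bucket that holds the raw data
                "ManifestS3Uri": "s3://example-training-data/manifests/input.manifest"
            }
        }
    },
    OutputConfig={"S3OutputPath": "s3://example-training-data/labels/"},
    LabelCategoryConfigS3Uri="s3://example-training-data/labels/categories.json",
    # Optional block that turns on automated data labeling, so only items the
    # model is unsure about get routed to human workers (placeholder ARN).
    LabelingJobAlgorithmsConfig={
        "LabelingJobAlgorithmSpecificationArn": (
            "arn:aws:sagemaker:us-east-1:111122223333:"
            "labeling-job-algorithm-specification/image-classification"
        )
    },
    # Human fallback: who labels, with what UI, and how answers are merged.
    HumanTaskConfig={
        "WorkteamArn": "arn:aws:sagemaker:us-east-1:111122223333:workteam/private-crowd/demo",
        "UiConfig": {"UiTemplateS3Uri": "s3://example-training-data/templates/ui.liquid.html"},
        "PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-1:111122223333:function:PreLabel",
        "TaskTitle": "Classify images",
        "TaskDescription": "Pick the category that best describes each image",
        "NumberOfHumanWorkersPerDataObject": 3,
        "TaskTimeLimitInSeconds": 300,
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn": "arn:aws:lambda:us-east-1:111122223333:function:Consolidate"
        },
    },
)
```

The automatic-versus-human split the article mentions maps onto the LabelingJobAlgorithmsConfig and HumanTaskConfig blocks respectively; as far as the public API goes, the confidence handling is managed by the service rather than exposed as a single numeric parameter in this call.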
And Outposts: Starting next year, AWS will allow customers to order the same hardware that it uses to power its cloud services and run it in their own data centers, through a service called AWS Outposts. Building on its partnership with VMware, AWS Outposts will give customers a consistent set of hardware, software and services across their own servers and cloud servers, said Jassy. Customers will have two options: they can run VMware Cloud on AWS on AWS Outposts, or they can run something called "AWS native" to enable this hybrid cloud setup. AWS will "deliver racks, install them, and then we'll do all the maintenance and repair on them," Jassy said.
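Circling back to the Elastic Inference item at the top of the rundown: the accelerator is attached when an EC2 instance is launched. Here is a minimal sketch with Python and boto3; the AMI, key pair and subnet IDs are placeholders, and the sketch skips the extra networking setup (such as a VPC endpoint for the Elastic Inference runtime) that a real deployment also needs.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder deep learning AMI
    InstanceType="c5.xlarge",             # plain CPU instance sized for the non-inference work...
    MinCount=1,
    MaxCount=1,
    KeyName="demo-key-pair",              # placeholder key pair
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
    # ...with a GPU-powered inference accelerator attached on the side;
    # the sizes announced at launch were eia1.medium, eia1.large and eia1.xlarge.
    ElasticInferenceAccelerators=[{"Type": "eia1.medium"}],
)

print(response["Instances"][0]["InstanceId"])
```

The utilization point Jassy makes falls out of this shape: instead of paying for a P3 GPU instance that sits mostly idle, you size the instance for CPU work and rent only as much accelerator as the inference portion needs.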
Wow (Score:1)
Slashvertisement at its finest. Scrolled back up to check on a hunch and yup, Ms. Mash.
Better than the alternative... (Score:2)
What, you'd prefer yet another environmental scare article?
I guess he maxes out at 10,000 a day and wanted to find something else.
I actually found this summary at least useful...
Re: (Score:2)
It could have been much worse, it could have been posted as 6 distinct articles.