Probably different projects though. Not sure what projects the US has but in the UK you have a number of types of warehouse:
- Fulfilment centres: Where all your normal, run-of-the-mill Amazon orders are picked and packed.
- Distribution centres: Centralised locations that orders are routed through; i.e. you might have 20 fulfilment centres sending packages to a distribution centre, which then amalgamates them onto individual trucks (or planes) destined for locations further away. So imagine 20 trucks bringing packages in from the closest fulfilment centres; those packages are then reorganised such that maybe 4 trucks' worth are amalgamated to go down to London, 4 up to Scotland, 2 to the South West and so on, whilst the rest go to local delivery stations because they're for local parcels.
- Delivery stations: These are the last mile stations, where parcels are offloaded from trucks onto local delivery vans.
- AMXL Warehouses: This is Amazon's extra large project for oversized goods. If you buy something like a washing machine or fridge on Amazon, it's picked and dispatched through an AMXL centre. They're kitted out with equipment for shifting heavy goods.
- Prime Now Warehouses: These are local all-in-one centres that hold a smaller selection of goods people want quickly (typically groceries, the latest video games, batteries and the like), which are picked, packed, and dispatched all from one place for same-day delivery within 2 hours to people who live close enough for Amazon to offer that.
- Returns centres: These handle returns unsurprisingly.
I'm sure there are other types I'm not aware of.
It's not uncommon for Amazon to build clusters, i.e. AMXL, Distribution, and Delivery all next to each other; some are even interconnected, so parcels destined for deliveries local to a Distribution Centre, for example, might have conveyors straight from the DC into the Delivery station so they can be routed straight through without packing and unpacking trucks.
So there is method to the madness of them building warehouses next to each other. On the outside they all look the same; on the inside they're all doing completely different things.
I can kind of understand the use case for this. The problem is that for serverless code execution, cloud providers are currently typically using containers to deliver FaaS, and even in the best case you can still hit cold start times that are sometimes unacceptable if no instance of the execution environment is cached and available to serve the request.
This means the promise of cloud-based hyper-scalability through FaaS for web apps has some real problems, right across the scale:
- At the bottom end, with low rates of concurrent execution, FaaS suffers from the cold start problem. You can't realistically serve backend requests for a front-end site using FaaS in this scenario, because there's rarely an instance ready to serve the request; each time a user visits the site and a call is made to your FaaS function, the cold start means you can see response times on the order of seconds, which is too long for user interaction (there's a short sketch of observing this after the list).
- At the middle range the cold start problem goes away to some degree, because requests arrive frequently enough that there's usually a warm instance ready, so most of your users don't hit a cold start. Some still do, however, as demand ebbs and flows; your functions still have to scale up and down appropriately, and it becomes a headache making sure you don't under- or over-provision (note you can have this problem at the low end too, even using things like provisioned concurrency for AWS Lambda or the Azure Functions Premium plan).
- At the high end you have different issues: the promise of FaaS, serverless, and infinite scalability vanishes. AWS only allows 3,000 concurrent Lambda executions in even its largest regions, for example. That's a reasonable load, but I've worked on services that need to handle, say, 50,000 concurrent requests, and AWS Lambda just can't do it out of the box; your users get sporadic HTTP 429 errors as it throttles your requests. AWS can raise this limit for you, but at that point you're really fudging a solution into a cloud architecture that's straining to cope. Amazon's limits exist because, for all the hype, even Amazon having to fire up 3,000 containers for a number of customers at once can strain their capacity; forcing people to explicitly ask for a higher cap means they can better do capacity planning for customers in any given region.
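To make the cold start and warm/cold churn points a bit more concrete, here's a minimal sketch (my own illustration, not anything from the article) of a Node/TypeScript Lambda-style handler that reports whether it's serving from a freshly initialised execution environment. The module-scope flag is a common way to observe this; the commented-out setup call is hypothetical:

```typescript
// Module scope runs once per execution environment, so anything here only
// re-runs on a cold start, never on subsequent (warm) invocations.
let coldStart = true;
const initialisedAt = Date.now();

// Hypothetical expensive setup (DB clients, SDK init, config fetch, etc.)
// that contributes to the cold start penalty described above:
// const db = await createDbClient();

export const handler = async (_event: unknown) => {
  const wasColdStart = coldStart;
  coldStart = false; // every later invocation of this environment is "warm"

  // On a cold start the caller's latency also includes runtime/container
  // spin-up; on a warm invocation it's just the handler work itself.
  return {
    statusCode: 200,
    body: JSON.stringify({
      coldStart: wasColdStart,
      environmentAgeMs: Date.now() - initialisedAt,
    }),
  };
};
```

Watching that flag in your logs over a day of traffic shows the middle-range problem nicely: cold starts almost disappear at steady load, then reappear every time demand ebbs and the provider reaps idle environments.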
So it's not entirely surprising, therefore, that there's a push for a more granular type of container: one that's faster to spin up, whilst still being isolated from other execution environments, and has less overhead than even Linux containers. Such a thing is needed to get us closer to that goal of a cloud environment that can support both small and large web app back ends alike, because right now the issues above mean FaaS is often limited to other types of workload, like background order processing and that kind of thing.
I imagine, therefore, that this is what this solution is oriented towards. The problem is that it's not language agnostic, meaning you'd need a similar solution for every other supported language. Ideally you need an execution environment that can guarantee it'll spin up, process, and respond to a request in mere milliseconds, whilst remaining isolated from other tenants' code.
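As a very rough illustration of the "lighter-weight isolation" idea (my sketch only; Node's built-in `vm` module is explicitly not a security boundary the way microVMs or the V8 isolates real FaaS platforms use are), you can see how cheap it is to spin up a fresh JS context inside an already-running process compared to booting an entire container:

```typescript
import { Script, createContext } from "node:vm";

// Compile the "function" once, then run it in a fresh context per request.
// Creating a context takes on the order of microseconds to milliseconds,
// versus seconds to pull and boot a container image on a cold start.
const script = new Script(`
  result = { message: "hello from an isolated context", doubled: input * 2 };
`);

function invoke(input: number) {
  const context = createContext({ input, result: undefined });
  script.runInContext(context, { timeout: 50 }); // cap runaway executions
  return context.result;
}

const start = process.hrtime.bigint();
console.log(invoke(21));
console.log(`context spin-up + run took ${(process.hrtime.bigint() - start) / 1000n}µs`);
```

That's broadly the direction platforms built on V8 isolates or Firecracker-style microVMs go in, just with actual isolation guarantees behind it.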
Disclaimer: I 100% agree with you about the usage of JS. I use it professionally day in, day out at a FAANG-scale company right now, but have worked with C, C++, Java, C#, PHP, and Python professionally in the past. Companies shouldn't be using JS like they are; it's genuinely leading to poorer quality code, but employers are being baited in by the hordes of cheap young code camp students getting taught JS. We also use TS, but unless you understand OO and types properly these devs just end up treating TS like JS: the second they encounter the need for a complex type they resort to the any type and go back to the JS way of doing things, and it rapidly becomes a clusterfuck once more. By the time you've taught them to code properly you might as well have just used a more appropriate language like Java, C#, Go, Rust, C++ or similar and trained people from the ground up yourself through an apprenticeship programme, so it's really a false economy using JS because of the "ease of use" or "cheap labour".
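To illustrate the "resort to any" pattern I mean, here's a contrived TypeScript example of my own (the payment types are made up purely for illustration):

```typescript
// The escape hatch: the moment the shape gets awkward, type it as `any`
// and you're effectively back to plain JS with none of the compiler's help.
function handlePaymentAny(event: any): string {
  return event.card.number; // compiles fine, explodes at runtime for bank transfers
}

// What the type system is actually for: model the cases explicitly and the
// compiler forces every call site to handle them.
type PaymentEvent =
  | { kind: "card"; card: { number: string; expiry: string } }
  | { kind: "bankTransfer"; iban: string };

function handlePayment(event: PaymentEvent): string {
  switch (event.kind) {
    case "card":
      return event.card.number;
    case "bankTransfer":
      return event.iban;
  }
}
```

The first version is what "treating TS like JS" looks like in practice; the second is only a few lines more, but it's exactly the bit these devs skip.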
Probably:
https://earthquaketrack.com/r/...
"North Atlantic Ocean has had: (M1.5 or greater)
6 earthquakes in the past 24 hours
51 earthquakes in the past 7 days
190 earthquakes in the past 30 days
3,109 earthquakes in the past 365 days"
Interestingly that page shows there was a magnitude 5.3 in the last 24 hours alone along the mid-Atlantic ridge. Unfortunately the page is a bit shit to navigate so I can't find an easy way of seeing the data for the 15th.
I think the problem, though, is that these things happen all the time; whether they cause any surface water movement depends entirely on the particular circumstances of each one. You could have a magnitude 6 that no one notices because it had no real impact, but a magnitude 3 that just happens to trigger a massive underwater landslide, resulting in a tsunami.
Having shore-dived the Caribbean regularly, sometimes doing 5 dives a day, I know the waves can vary from fuck all to a metre or so high in the space of just a few hours if the winds pick up, separate from standard tidal movements. I'd wager there are plenty of 10cm tsunamis, but the vast majority will simply be lost in the natural, weather-driven variation of the waves. This one was probably only noticed because someone was looking at tidal variation at the time of a big, internationally well-publicised tsunami and has, as a result, got a bit overexcited theorising without really thinking it through.
Eruption happened at 04:14 UTC; the tsunami hit Japan at 14:14 UTC. Three hours before that would be 11:14 UTC, but the shockwave was travelling at the speed of sound, which is around 761mph at sea level. The Caribbean is at least 7,000 miles away, so at the speed of sound the shockwave would take over 9 hours to get there, arriving around 13:26 UTC at the absolute earliest (rough calculation below) - that's less than 50 minutes before the Japanese tsunami, not 3 hours before. Furthermore, if the shockwave itself was causing tsunamis, this doesn't explain why the first Japanese tsunami happened after the first Caribbean one; you'd have expected shockwave-driven tsunamis to appear in Japan before the Caribbean ones regardless, even if the main tsunami, driven by the displacement of water at the site of the eruption itself, took longer to arrive.
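For anyone who wants to check that arithmetic, here's the back-of-the-envelope version using the same figures quoted above (761mph, the 7,000 mile lower bound, and the 04:14/14:14 UTC times on 15 January 2022):

```typescript
// Back-of-the-envelope check on the shockwave timing argument above.
const speedOfSoundMph = 761;  // approx. speed of sound at sea level
const distanceMiles = 7_000;  // lower bound on the Tonga -> Caribbean distance
const travelHours = distanceMiles / speedOfSoundMph;

const eruption = Date.UTC(2022, 0, 15, 4, 14);      // 04:14 UTC, 15 Jan 2022
const earliestArrival = eruption + travelHours * 3_600_000;
const japanTsunami = Date.UTC(2022, 0, 15, 14, 14); // 14:14 UTC

console.log(travelHours.toFixed(1));                                  // ~9.2 hours in transit
console.log(new Date(earliestArrival).toISOString());                 // ~13:26 UTC
console.log(((japanTsunami - earliestArrival) / 60_000).toFixed(0));  // ~48 minute gap
```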
I appreciate it's possible the suggestion is that the shockwave passed through the core of the planet or similar, rather than around the surface, but I'm not convinced they're not simply confusing correlation with causation here. I suspect what more likely happened is that the movement within the earth's core or crust that triggered the Tonga eruption also triggered a minor eruption or shifting of plates, causing an underwater landslide somewhere in the Atlantic at around the same time, resulting in the minor tsunami seen in the Caribbean. That would be a far more plausible explanation, for starters because it's actually physically possible in the timeframes given.
Radioactive cats have 18 half-lives.